Classification: Basic Concepts and Decision Trees

Classification: Definition
• Given a collection of records (training set)
  – Each record contains a set of attributes; one of the attributes is the class.
• Find a model for the class attribute as a function of the values of the other attributes.
• Goal: previously unseen records should be assigned a class as accurately as possible.
  – A test set is used to determine the accuracy of the model. Usually the given data set is divided into training and test sets: the training set is used to build the model and the test set is used to validate it.

Illustrating Classification Task
Training set:
  Tid  Attrib1  Attrib2  Attrib3  Class
  1    Yes      Large    125K     No
  2    No       Medium   100K     No
  3    No       Small    70K      No
  4    Yes      Medium   120K     No
  5    No       Large    95K      Yes
  6    No       Medium   60K      No
  7    Yes      Large    220K     No
  8    No       Small    85K      Yes
  9    No       Medium   75K      No
  10   No       Small    90K      Yes

Test set:
  Tid  Attrib1  Attrib2  Attrib3  Class
  11   No       Small    55K      ?
  12   Yes      Medium   80K      ?
  13   Yes      Large    110K     ?
  14   No       Small    95K      ?
  15   No       Large    67K      ?

The learning algorithm performs induction on the training set to learn a model; the model is then applied to the test set (deduction) to assign class labels.

Examples of Classification Task
• Predicting tumor cells as benign or malignant
• Classifying credit card transactions as legitimate or fraudulent
• Classifying secondary structures of protein as alpha-helix, beta-sheet, or random coil
• Categorizing news stories as finance, weather, entertainment, sports, etc.

Classification Using Distance
• Place items in the class to which they are "closest".
• Must determine the distance between an item and a class.
• Classes represented by
  – Centroid: central value
  – Medoid: representative point
  – Individual points
• Algorithm: KNN

Classification Techniques
• Decision tree based methods
• Rule-based methods
• Memory based reasoning
• Neural networks
• Naïve Bayes and Bayesian belief networks
• Support vector machines

A first example
Database of 20,000 images of handwritten digits, each labeled by a human.
[28 x 28 greyscale; pixel values 0-255; labels 0-9]
Use these to learn a classifier which will label digit-images automatically...

The learning problem
Input space X = {0,1,...,255}^784
Output space Y = {0,1,...,9}
Training set (x1, y1), ..., (xm, ym), with m = 20,000.
The learning algorithm outputs a classifier f: X → Y.
To measure how good f is: use a test set. [Our test set: 100 instances of each digit.]

A possible strategy
Input space X = {0,1,...,255}^784, output space Y = {0,1,...,9}.
Treat each image as a point in 784-dimensional Euclidean space.
To classify a new image: find its nearest neighbor in the database (training set) and return that label.
f = entire training set + search engine

K Nearest Neighbor (KNN)
• The training set includes the class labels.
• Examine the K items nearest to the item to be classified.
• The new item is placed in the class with the largest number of these close items.
• O(q) for each tuple to be classified. (Here q is the size of the training set.)

Nearest neighbor on the digits data
Overall: error rate = 6% (on the test set).
Question: what is the error rate for random guessing?

What does it get wrong?
Who knows... but here's a hypothesis: each digit corresponds to some connected region of R^784. Some of the regions come close to each other, and problems occur at these boundaries (e.g., a random point in a ball straddling the boundary between regions R1 and R2 may have only a 70% chance of being in R2).

Nearest neighbor: pros and cons
Pros
• Simple
• Flexible
• Excellent performance on a wide range of tasks
Cons
• Algorithmic: time consuming – with n training points in R^d, the time to label a new point is O(nd).
• Statistical: memorization, not learning! No insight into the domain; we would prefer a compact classifier.
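The nearest-neighbor strategy above is short enough to state directly in code. The following is a minimal sketch, assuming the digit images are already loaded as NumPy arrays (`train_X` of shape (20000, 784), `train_y` of shape (20000,)); the array names and data loading are illustrative assumptions, not part of the slides.

```python
import numpy as np

def nearest_neighbor_label(x, train_X, train_y):
    """Classify x by returning the label of its closest training point
    (1-NN under Euclidean distance)."""
    # Squared Euclidean distance from x to every training image.
    dists = np.sum((train_X - x) ** 2, axis=1)
    return train_y[np.argmin(dists)]

def knn_label(x, train_X, train_y, k=3):
    """k-NN: take the k closest training points and return the majority label."""
    dists = np.sum((train_X - x) ** 2, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]
```

Labeling one test point this way costs O(nd) (here n = 20,000 and d = 784), which is exactly the "algorithmic" drawback listed above.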
Prototype selection
A possible fix: instead of keeping the entire training set, keep only a "representative sample" of it (a set of prototypes). The prototypes induce Voronoi cells, whose borders form the decision boundary.

How to pick prototypes?
They needn't be actual data points.
Idea 2: one prototype per class, namely the mean of that class's training points.
Example (digits data): error = 23%.

Postscript: learning models
• Batch learning: the learning algorithm sees all the training data at once, outputs a classifier f, and the classifier is then tested.
• On-line learning: repeatedly see a new point x, predict its label, then see the true label y and update the classifier.

Example of a Decision Tree
Training data:
  Tid  Refund  Marital Status  Taxable Income  Cheat
  1    Yes     Single          125K            No
  2    No      Married         100K            No
  3    No      Single          70K             No
  4    Yes     Married         120K            No
  5    No      Divorced        95K             Yes
  6    No      Married         60K             No
  7    Yes     Divorced        220K            No
  8    No      Single          90K             Yes
  9    No      Married         75K             No
  10   No      Single          90K             Yes

Model: decision tree with splitting attributes Refund, MarSt, TaxInc:
  Refund = Yes → NO
  Refund = No:
    MarSt = Married → NO
    MarSt = Single or Divorced:
      TaxInc < 80K → NO
      TaxInc > 80K → YES

Another Example of Decision Tree
For the same training data, a different tree also fits:
  MarSt = Married → NO
  MarSt = Single or Divorced:
    Refund = Yes → NO
    Refund = No:
      TaxInc < 80K → NO
      TaxInc > 80K → YES
There could be more than one tree that fits the same data!

Decision Tree Classification Task
Same framework as before: a tree induction algorithm learns the model from the training set (induction); the model is then applied to the test set (deduction).

Apply Model to Test Data
Test record: Refund = No, Marital Status = Married, Taxable Income = 80K, Cheat = ?
Start from the root of the tree and follow the branches that match the record: Refund = No → go to the MarSt node; MarSt = Married → reach the leaf NO.
Assign Cheat = "No" to the test record.
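The walk from root to leaf above is just nested conditionals. A minimal sketch of the example tree as code; the dictionary-based record format is an assumption for illustration, not something defined in the slides.

```python
def classify_cheat(record):
    """Apply the example decision tree (Refund -> MarSt -> TaxInc) to one record.
    `record` is assumed to be a dict such as
    {"Refund": "No", "MaritalStatus": "Married", "TaxableIncome": 80}."""
    if record["Refund"] == "Yes":
        return "No"
    if record["MaritalStatus"] == "Married":
        return "No"
    # Single or Divorced: split on taxable income (in thousands).
    return "No" if record["TaxableIncome"] < 80 else "Yes"

# The test record from the slide ends in the leaf NO.
print(classify_cheat({"Refund": "No", "MaritalStatus": "Married", "TaxableIncome": 80}))
```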
Decision Tree Induction
• Many algorithms:
  – Hunt's Algorithm (one of the earliest)
  – CART
  – ID3, C4.5
  – SLIQ, SPRINT

General Structure of Hunt's Algorithm
• Let Dt be the set of training records that reach a node t.
• General procedure:
  – If Dt contains records that all belong to the same class yt, then t is a leaf node labeled as yt.
  – If Dt is an empty set, then t is a leaf node labeled with the default class yd.
  – If Dt contains records that belong to more than one class, use an attribute test to split the data into smaller subsets, and recursively apply the procedure to each subset.
(Illustrated on the Refund / Marital Status / Taxable Income / Cheat training data above.)

Hunt's Algorithm on the example data
• Start with a single node predicting the majority class: Don't Cheat.
• Split on Refund: Refund = Yes → Don't Cheat; Refund = No → still mixed.
• Split the Refund = No branch on Marital Status: Married → Don't Cheat; Single or Divorced → still mixed.
• Split the Single/Divorced branch on Taxable Income: < 80K → Don't Cheat; >= 80K → Cheat.

Tree Induction
• Greedy strategy: split the records based on an attribute test that optimizes a certain criterion.
• Issues
  – Determine how to split the records
    • How to specify the attribute test condition?
    • How to determine the best split?
  – Determine when to stop splitting

How to Specify the Test Condition?
• Depends on attribute type
  – Nominal
  – Ordinal
  – Continuous
• Depends on the number of ways to split
  – 2-way split
  – Multi-way split

Splitting Based on Nominal Attributes
• Multi-way split: use as many partitions as distinct values, e.g. CarType → {Family}, {Sports}, {Luxury}.
• Binary split: divide the values into two subsets; need to find the optimal partitioning, e.g. {Sports, Luxury} vs. {Family}, or {Family, Luxury} vs. {Sports}.

Splitting Based on Ordinal Attributes
• Multi-way split: use as many partitions as distinct values, e.g. Size → {Small}, {Medium}, {Large}.
• Binary split: divide the values into two subsets that respect the order; need to find the optimal partitioning, e.g. {Small, Medium} vs. {Large}, or {Small} vs. {Medium, Large}.
• What about the split {Small, Large} vs. {Medium}? It does not respect the ordering of the attribute values.
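Hunt's procedure maps directly onto a short recursion. Below is a minimal sketch under some assumptions of mine: records are dicts with a "class" key, and a `choose_best_split(records)` helper (hypothetical, not specified in the slides) returns an attribute test together with the subsets it induces.

```python
from collections import Counter

def hunt(records, default_class=None):
    """Grow a decision tree following Hunt's algorithm (sketch only)."""
    if not records:                       # D_t empty: leaf labeled with the default class
        return {"leaf": default_class}
    classes = {r["class"] for r in records}
    if len(classes) == 1:                 # all records in the same class: leaf
        return {"leaf": classes.pop()}
    majority = Counter(r["class"] for r in records).most_common(1)[0][0]
    # choose_best_split is a hypothetical helper: it picks an attribute test
    # (e.g. by Gini or information gain, as discussed later) and partitions D_t.
    test, partitions = choose_best_split(records)
    return {"test": test,
            "children": {value: hunt(subset, default_class=majority)
                         for value, subset in partitions.items()}}
```

A full implementation would also stop when all remaining records have identical attribute values, one of the stopping criteria listed later in these slides.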
Splitting Based on Continuous Attributes
• Different ways of handling
  – Discretization to form an ordinal categorical attribute
    • Static – discretize once at the beginning
    • Dynamic – ranges can be found by equal-interval bucketing, equal-frequency bucketing (percentiles), or clustering
  – Binary decision: (A < v) or (A >= v)
    • consider all possible splits and find the best cut
    • can be more compute intensive
• Example on Taxable Income: (i) binary split "Taxable Income > 80K?" (Yes/No); (ii) multi-way split into ranges such as < 10K, [10K, 25K), [25K, 50K), [50K, 80K), > 80K.

Tree Induction (recap)
• Greedy strategy: split the records based on an attribute test that optimizes a certain criterion.
• Issues: how to specify the attribute test condition, how to determine the best split, and when to stop splitting.

How to Determine the Best Split
Before splitting: 10 records of class C0 and 10 records of class C1.
• Split on "Own Car?" (Yes/No): child class counts (C0: 6, C1: 4) and (C0: 4, C1: 6).
• Split on "Car Type?" (Family/Sports/Luxury): child counts (C0: 1, C1: 3), (C0: 8, C1: 0), (C0: 1, C1: 7).
• Split on "Student ID?" (c1 ... c20): each child contains a single record, either (C0: 1, C1: 0) or (C0: 0, C1: 1).
Which test condition is the best?

How to Determine the Best Split
• Greedy approach: nodes with a homogeneous class distribution are preferred.
• Need a measure of node impurity:
  – (C0: 5, C1: 5) – non-homogeneous, high degree of impurity
  – (C0: 9, C1: 1) – homogeneous, low degree of impurity

Measures of Node Impurity
• Gini index
• Entropy
• Misclassification error

How to Find the Best Split
Before splitting, the node has class counts (N00, N01) with impurity M0. Splitting on attribute A yields children N1 and N2 with impurities M1 and M2, combining to a weighted impurity M12; splitting on attribute B yields children N3 and N4 with weighted impurity M34. Compare Gain = M0 – M12 vs. M0 – M34 and choose the attribute with the larger gain.

Measure of Impurity: GINI
• Gini index for a given node t:
    GINI(t) = 1 − Σ_j [p(j|t)]²
  (NOTE: p(j|t) is the relative frequency of class j at node t.)
  – Maximum (1 − 1/nc, where nc is the number of classes) when records are equally distributed among all classes, implying the least interesting information.
  – Minimum (0.0) when all records belong to one class, implying the most interesting information.
• Examples (class counts C1/C2 out of 6 records): (0, 6) → Gini = 0.000; (1, 5) → Gini = 0.278; (2, 4) → Gini = 0.444; (3, 3) → Gini = 0.500.

Examples for Computing GINI
GINI(t) = 1 − Σ_j [p(j|t)]²
• C1 = 0, C2 = 6: P(C1) = 0, P(C2) = 1; Gini = 1 − 0² − 1² = 0.
• C1 = 1, C2 = 5: P(C1) = 1/6, P(C2) = 5/6; Gini = 1 − (1/6)² − (5/6)² = 0.278.
• C1 = 2, C2 = 4: P(C1) = 2/6, P(C2) = 4/6; Gini = 1 − (2/6)² − (4/6)² = 0.444.

Splitting Based on GINI
• Used in CART, SLIQ, SPRINT.
• When a node p is split into k partitions (children), the quality of the split is computed as
    GINI_split = Σ_{i=1..k} (n_i / n) · GINI(i)
  where n_i is the number of records at child i and n is the number of records at node p.

Binary Attributes: Computing the GINI Index
• Splits into two partitions.
• Effect of weighting the partitions: larger and purer partitions are sought.
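A minimal sketch of the two formulas above (node Gini and weighted split Gini); the function and variable names are mine, not from the slides.

```python
def gini(counts):
    """GINI(t) = 1 - sum_j p(j|t)^2, given the class counts at a node."""
    n = sum(counts)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in counts)

def gini_split(children_counts):
    """Weighted Gini of a split: sum_i (n_i / n) * GINI(child i)."""
    n = sum(sum(c) for c in children_counts)
    return sum(sum(c) / n * gini(c) for c in children_counts)

print(gini([3, 3]))                    # 0.5, the maximum for two classes
print(gini([1, 5]))                    # ~0.278, as in the worked example
print(gini_split([[6, 4], [4, 6]]))    # the "Own Car?" split from the slide above
```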
• Example: parent node with class counts C1 = 6, C2 = 6, Gini = 0.500. A binary split on attribute B yields node N1 with counts (C1 = 5, C2 = 2) and node N2 with counts (C1 = 1, C2 = 4):
    Gini(N1) = 1 − (5/7)² − (2/7)² = 0.408
    Gini(N2) = 1 − (1/5)² − (4/5)² = 0.320
    Gini(Children) = 7/12 × 0.408 + 5/12 × 0.320 = 0.371
  The split is worthwhile because 0.371 < 0.500.

Categorical Attributes: Computing the Gini Index
• For each distinct value, gather the counts for each class in the dataset.
• Use the count matrix to make decisions.
• CarType example (class counts C1/C2):
  – Multi-way split: Family (1/4), Sports (2/1), Luxury (1/1) → Gini = 0.393
  – Two-way split {Sports, Luxury} (3/2) vs. {Family} (1/4) → Gini = 0.400
  – Two-way split {Sports} (2/1) vs. {Family, Luxury} (2/5) → Gini = 0.419

Continuous Attributes: Computing the Gini Index
• Use binary decisions based on one value.
• Several choices for the splitting value
  – Number of possible splitting values = number of distinct values
• Each splitting value v has a count matrix associated with it
  – Class counts in each of the two partitions, A < v and A >= v
• Simple method to choose the best v
  – For each v, scan the database to gather the count matrix and compute its Gini index
  – Computationally inefficient! Repetition of work.
(Example: the split "Taxable Income > 80K?" on the Refund / Marital Status / Taxable Income / Cheat data.)

Continuous Attributes: Computing the Gini Index...
• For efficient computation, for each attribute:
  – Sort the attribute on its values.
  – Linearly scan these values, each time updating the count matrix and computing the Gini index.
  – Choose the split position that has the least Gini index.
• Taxable Income example (sorted values 60, 70, 75, 85, 90, 95, 100, 120, 125, 220; Cheat labels No, No, No, Yes, Yes, Yes, No, No, No, No):

  Split position v   55     65     72     80     87     92     97     110    122    172    230
  Yes (<=v / >v)     0/3    0/3    0/3    0/3    1/2    2/1    3/0    3/0    3/0    3/0    3/0
  No  (<=v / >v)     0/7    1/6    2/5    3/4    3/4    3/4    3/4    4/3    5/2    6/1    7/0
  Gini               0.420  0.400  0.375  0.343  0.417  0.400  0.300  0.343  0.375  0.400  0.420

  The best split position is v = 97 (Gini = 0.300).

Alternative Splitting Criteria Based on INFO
• Entropy at a given node t:
    Entropy(t) = − Σ_j p(j|t) log₂ p(j|t)
  (NOTE: p(j|t) is the relative frequency of class j at node t.)
  – Measures the homogeneity of a node.
    • Maximum (log₂ nc) when records are equally distributed among all classes, implying the least information.
    • Minimum (0.0) when all records belong to one class, implying the most information.
  – Entropy-based computations are similar to the GINI index computations.

Examples for Computing Entropy
Entropy(t) = − Σ_j p(j|t) log₂ p(j|t)
• C1 = 0, C2 = 6: P(C1) = 0, P(C2) = 1; Entropy = − 0 log 0 − 1 log 1 = 0.
• C1 = 1, C2 = 5: P(C1) = 1/6, P(C2) = 5/6; Entropy = − (1/6) log₂(1/6) − (5/6) log₂(5/6) = 0.65.
• C1 = 2, C2 = 4: P(C1) = 2/6, P(C2) = 4/6; Entropy = − (2/6) log₂(2/6) − (4/6) log₂(4/6) = 0.92.

Splitting Based on INFO...
• Information Gain:
    GAIN_split = Entropy(p) − Σ_{i=1..k} (n_i / n) Entropy(i)
  where parent node p is split into k partitions and n_i is the number of records in partition i.
  – Measures the reduction in entropy achieved because of the split. Choose the split that achieves the most reduction (maximizes GAIN).
  – Used in ID3 and C4.5.
  – Disadvantage: tends to prefer splits that result in a large number of partitions, each being small but pure.
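A minimal sketch of entropy and information gain as defined above (names are mine); it reproduces the numbers from the entropy examples.

```python
import math

def entropy(counts):
    """Entropy(t) = -sum_j p(j|t) * log2 p(j|t), given the class counts at a node."""
    n = sum(counts)
    ent = 0.0
    for c in counts:
        if c > 0:
            p = c / n
            ent -= p * math.log2(p)
    return ent

def information_gain(parent_counts, children_counts):
    """GAIN_split = Entropy(parent) - sum_i (n_i / n) * Entropy(child i)."""
    n = sum(parent_counts)
    weighted = sum(sum(c) / n * entropy(c) for c in children_counts)
    return entropy(parent_counts) - weighted

print(round(entropy([1, 5]), 2))                      # 0.65
print(round(entropy([2, 4]), 2))                      # 0.92
print(information_gain([10, 10], [[6, 4], [4, 6]]))   # gain of the "Own Car?" split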
Splitting Based on INFO...
• Gain Ratio:
    GainRATIO_split = GAIN_split / SplitINFO,   where   SplitINFO = − Σ_{i=1..k} (n_i / n) log₂(n_i / n)
  Parent node p is split into k partitions; n_i is the number of records in partition i.
  – Adjusts the information gain by the entropy of the partitioning (SplitINFO). Higher-entropy partitioning (a large number of small partitions) is penalized!
  – Used in C4.5.
  – Designed to overcome the disadvantage of information gain.

Splitting Criteria Based on Classification Error
• Classification error at a node t:
    Error(t) = 1 − max_i P(i|t)
• Measures the misclassification error made by a node.
  – Maximum (1 − 1/nc) when records are equally distributed among all classes, implying the least interesting information.
  – Minimum (0.0) when all records belong to one class, implying the most interesting information.

Examples for Computing Error
Error(t) = 1 − max_i P(i|t)
• C1 = 0, C2 = 6: Error = 1 − max(0, 1) = 1 − 1 = 0.
• C1 = 1, C2 = 5: Error = 1 − max(1/6, 5/6) = 1 − 5/6 = 1/6.
• C1 = 2, C2 = 4: Error = 1 − max(2/6, 4/6) = 1 − 4/6 = 1/3.

Comparison Among Splitting Criteria
For a 2-class problem: misclassification error vs. Gini.
• Parent node: C1 = 7, C2 = 3, Gini = 0.42.
• Split on A: node N1 with counts (3, 0), node N2 with counts (4, 3).
    Gini(N1) = 1 − (3/3)² − (0/3)² = 0
    Gini(N2) = 1 − (4/7)² − (3/7)² = 0.489
    Gini(Children) = 3/10 × 0 + 7/10 × 0.489 = 0.342
  Gini improves (0.342 < 0.42), whereas the misclassification error is 3/10 both before and after the split, so it does not reward this split.

Tree Induction (recap)
• Greedy strategy: split the records based on an attribute test that optimizes a certain criterion.
• Issues: how to specify the attribute test condition, how to determine the best split, and when to stop splitting.

Stopping Criteria for Tree Induction
• Stop expanding a node when all the records belong to the same class.
• Stop expanding a node when all the records have similar attribute values.
• Early termination (to be discussed later).

Decision Tree Based Classification
• Advantages:
  – Inexpensive to construct
  – Extremely fast at classifying unknown records
  – Easy to interpret for small-sized trees
  – Accuracy is comparable to other classification techniques for many simple data sets

Example: C4.5
• Simple depth-first construction.
• Uses information gain (gain ratio).
• Sorts continuous attributes at each node.
• Needs the entire data set to fit in memory; unsuitable for large datasets (would need out-of-core sorting).
• You can download the software from: http://www.cse.unsw.edu.au/~quinlan/c4.5r8.tar.gz

Practical Issues of Classification
• Underfitting and overfitting
• Missing values
• Costs of classification
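To round out the three splitting criteria, here is a minimal sketch of split information, gain ratio, and classification error; the names are mine, and `information_gain()` refers to the helper sketched after the information-gain slide above.

```python
import math

def split_info(children_counts):
    """SplitINFO = -sum_i (n_i / n) * log2(n_i / n)."""
    n = sum(sum(c) for c in children_counts)
    info = 0.0
    for c in children_counts:
        w = sum(c) / n
        if w > 0:
            info -= w * math.log2(w)
    return info

def gain_ratio(parent_counts, children_counts):
    """GainRATIO_split = GAIN_split / SplitINFO (uses information_gain from above)."""
    return information_gain(parent_counts, children_counts) / split_info(children_counts)

def classification_error(counts):
    """Error(t) = 1 - max_i P(i|t)."""
    n = sum(counts)
    return 1.0 - max(counts) / n if n else 0.0

print(classification_error([1, 5]))   # 1/6, as in the example above
print(classification_error([2, 4]))   # 1/3
```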
Underfitting and Overfitting (Example)
500 circular and 500 triangular data points.
Circular points: 0.5 <= sqrt(x1² + x2²) <= 1.
Triangular points: sqrt(x1² + x2²) > 1 or sqrt(x1² + x2²) < 0.5.

Underfitting and Overfitting
• Underfitting: when the model is too simple, both training and test errors are large.
• Overfitting: as the model becomes too complex, the training error keeps decreasing while the test error starts to increase.

Overfitting Due to Noise
The decision boundary is distorted by a noise point.

Overfitting Due to Insufficient Examples
Lack of data points in the lower half of the diagram makes it difficult to predict the class labels of that region correctly.
– An insufficient number of training records in the region causes the decision tree to predict the test examples using other training records that are irrelevant to the classification task.

Notes on Overfitting
• Overfitting results in decision trees that are more complex than necessary.
• Training error no longer provides a good estimate of how well the tree will perform on previously unseen records.
• Need new ways of estimating errors.

Estimating Generalization Errors
• Re-substitution errors: error on training data, e(t).
• Generalization errors: error on test data, e'(t).
• Methods for estimating generalization errors:
  – Optimistic approach: e'(t) = e(t)
  – Pessimistic approach:
    • For each leaf node: e'(t) = e(t) + 0.5
    • Total errors: e'(T) = e(T) + N × 0.5   (N: number of leaf nodes)
    • For a tree with 30 leaf nodes and 10 errors on training data (out of 1000 instances):
      training error = 10/1000 = 1%; generalization error = (10 + 30 × 0.5)/1000 = 2.5%.
  – Reduced error pruning (REP):
    • uses a validation data set to estimate the generalization error.

Occam's Razor
• Given two models with similar generalization errors, one should prefer the simpler model over the more complex one.
• For complex models, there is a greater chance that the model was fitted accidentally to errors in the data.
• Therefore, one should include model complexity when evaluating a model.

Minimum Description Length (MDL)
(Figure: a sender A, who knows both the attribute values X1, ..., Xn and the labels y of the records, must transmit the labels to a receiver B who has only the attributes; A sends a decision tree model plus the exceptions.)
• Cost(Model, Data) = Cost(Data | Model) + Cost(Model)
  – Cost is the number of bits needed for encoding.
  – Search for the least costly model.
• Cost(Data | Model) encodes the misclassification errors.
• Cost(Model) uses node encoding (number of children) plus splitting condition encoding.

How to Address Overfitting
• Pre-pruning (early stopping rule)
  – Stop the algorithm before it becomes a fully grown tree.
  – Typical stopping conditions for a node:
    • Stop if all instances belong to the same class.
    • Stop if all the attribute values are the same.
  – More restrictive conditions:
    • Stop if the number of instances is less than some user-specified threshold.
    • Stop if the class distribution of the instances is independent of the available features (e.g., using a χ² test).
    • Stop if expanding the current node does not improve impurity measures (e.g., Gini or information gain).

How to Address Overfitting...
• Post-pruning
  – Grow the decision tree to its entirety.
  – Trim the nodes of the decision tree in a bottom-up fashion.
  – If the generalization error improves after trimming, replace the sub-tree by a leaf node.
  – The class label of the leaf node is determined from the majority class of instances in the sub-tree.
  – MDL can be used for post-pruning.

Example of Post-Pruning
• Before splitting: class = Yes: 20, class = No: 10.
  Training error (before splitting) = 10/30; pessimistic error = (10 + 0.5)/30 = 10.5/30.
• Splitting on A produces four children with class counts (Yes/No) of (8/4), (3/4), (4/1), and (5/1).
  Training error (after splitting) = 9/30; pessimistic error (after splitting) = (9 + 4 × 0.5)/30 = 11/30.
• Since 11/30 > 10.5/30: PRUNE the sub-tree!
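The pruning decision above compares pessimistic error estimates before and after a split. A minimal sketch of that comparison, using the 0.5-per-leaf penalty from the pessimistic approach described earlier; the function names are mine.

```python
def pessimistic_error(training_errors, num_leaves, num_records):
    """e'(T) = (e(T) + 0.5 * N) / number of records."""
    return (training_errors + 0.5 * num_leaves) / num_records

def should_prune(parent_counts, children_counts):
    """Prune the split if the pessimistic error of the children is no better
    than that of the parent treated as a single leaf."""
    n = sum(parent_counts)
    before = pessimistic_error(n - max(parent_counts), 1, n)
    errors_after = sum(sum(c) - max(c) for c in children_counts)
    after = pessimistic_error(errors_after, len(children_counts), n)
    return after >= before

# Example from the slide: parent (Yes=20, No=10), children (8,4), (3,4), (4,1), (5,1).
print(should_prune([20, 10], [[8, 4], [3, 4], [4, 1], [5, 1]]))   # True -> prune
```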
Examples of Post-Pruning
• Case 1: a sub-tree whose two leaves have class counts (C0: 11, C1: 3) and (C0: 2, C1: 4).
• Case 2: a sub-tree whose two leaves have class counts (C0: 14, C1: 3) and (C0: 2, C1: 2).
  – Optimistic error? Don't prune in either case.
  – Pessimistic error? Don't prune case 1, prune case 2.
  – Reduced error pruning? Depends on the validation set.

Handling Missing Attribute Values
• Missing values affect decision tree construction in three different ways:
  – How impurity measures are computed
  – How to distribute an instance with a missing value to child nodes
  – How a test instance with a missing value is classified

Computing the Impurity Measure
(Refund / Marital Status / Taxable Income / Class data, where record Tid 10 has a missing Refund value.)
• Before splitting: Entropy(Parent) = −0.3 log(0.3) − 0.7 log(0.7) = 0.8813.
• Split on Refund (class counts):
    Refund = Yes: Class = Yes: 0, Class = No: 3
    Refund = No:  Class = Yes: 2, Class = No: 4
    Refund = ?:   Class = Yes: 1, Class = No: 0
  Entropy(Refund = Yes) = 0
  Entropy(Refund = No) = −(2/6) log(2/6) − (4/6) log(4/6) = 0.9183
  Entropy(Children) = 0.3 × 0 + 0.6 × 0.9183 = 0.551
  Gain = 0.9 × (0.8813 − 0.551) = 0.3303
  (The factor 0.9 is the fraction of records with a known Refund value.)

Distribute Instances
• The instance with the missing Refund value (Tid 10, Class = Yes) is sent to both children with fractional weights: to the Refund = Yes child with weight 3/9 and to the Refund = No child with weight 6/9, because among the nine records with a known Refund value, 3 have Refund = Yes and 6 have Refund = No.
• Resulting counts: Refund = Yes child: Class = Yes: 0 + 3/9, Class = No: 3; Refund = No child: Class = Yes: 2 + 6/9, Class = No: 4.

Classify Instances
• New record (Tid 11): Refund = No, Marital Status = ?, Taxable Income = 85K, Class = ?
• Class counts at the Marital Status node (including fractional counts):

  Marital Status   Married   Single   Divorced   Total
  Class = No       3         1        0          4
  Class = Yes      6/9       1        1          2.67
  Total            3.67      2        1          6.67

• Probability that Marital Status = Married is 3.67/6.67; probability that Marital Status ∈ {Single, Divorced} is 3/6.67. The record is propagated down both branches with these weights and classified by the tree.

Scalable Decision Tree Induction Methods
• SLIQ (EDBT'96 — Mehta et al.): builds an index for each attribute; only the class list and the current attribute list reside in memory.
• SPRINT (VLDB'96 — J. Shafer et al.): constructs an attribute list data structure.
• PUBLIC (VLDB'98 — Rastogi & Shim): integrates tree splitting and tree pruning; stops growing the tree earlier.
• RainForest (VLDB'98 — Gehrke, Ramakrishnan & Ganti): builds an AVC-list (attribute, value, class label).
• BOAT (PODS'99 — Gehrke, Ganti, Ramakrishnan & Loh): uses bootstrapping to create several small samples.

Postscript: learning models
• Batch learning: the learning algorithm sees all the training data, then outputs a classifier f that is tested.
• On-line learning: repeatedly see a new point x, predict its label, see the true label y, and update the classifier.

Generative and Discriminative Classifiers

Generative vs. Discriminative Classifiers
Training a classifier involves estimating f: X → Y, or P(Y|X).
Discriminative classifiers:
1. Assume some functional form for P(Y|X).
2. Estimate the parameters of P(Y|X) directly from the training data.
Generative classifiers (also called "informative" by Rubinstein & Hastie):
1. Assume some functional form for P(X|Y) and P(Y).
2. Estimate the parameters of P(X|Y) and P(Y) directly from the training data.
3. Use Bayes rule to calculate P(Y|X = xi).
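Step 3 above is a one-line application of Bayes rule. A minimal sketch for a single discrete feature; the class names, feature values, and numbers are placeholders of mine, not from the slides.

```python
def posterior(x, prior, likelihood):
    """Generative classification: P(Y=c | X=x) is proportional to
    P(X=x | Y=c) * P(Y=c), normalized over all classes."""
    unnorm = {c: likelihood[c][x] * prior[c] for c in prior}
    z = sum(unnorm.values())          # P(X=x), the evidence
    return {c: p / z for c, p in unnorm.items()}

# Placeholder model: two classes, one discrete feature with values "a" / "b".
prior = {"pos": 0.3, "neg": 0.7}
likelihood = {"pos": {"a": 0.8, "b": 0.2},
              "neg": {"a": 0.1, "b": 0.9}}
print(posterior("a", prior, likelihood))   # the MAP class is the argmax of this dict
```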
Bayes Formula and the Generative Model
• A generative model describes each class in terms of its features (e.g., color, size, texture, weight, ...).

Discriminative Model
• A discriminative model (e.g., logistic regression) maps the features (color, size, texture, weight, ...) directly to the class.

Comparison
• Generative models
  – Assume some functional form for P(X|Y) and P(Y).
  – Estimate the parameters of P(X|Y) and P(Y) directly from the training data.
  – Use Bayes rule to calculate P(Y|X = x).
• Discriminative models
  – Directly assume some functional form for P(Y|X).
  – Estimate the parameters of P(Y|X) directly from the training data.

Probability Basics
• Prior, conditional and joint probability for random variables
  – Prior probability: P(X)
  – Conditional probability: P(X1 | X2), P(X2 | X1)
  – Joint probability: X = (X1, X2), P(X) = P(X1, X2)
  – Relationship: P(X1, X2) = P(X2 | X1) P(X1) = P(X1 | X2) P(X2)
  – Independence: P(X2 | X1) = P(X2), P(X1 | X2) = P(X1), P(X1, X2) = P(X1) P(X2)

Bayesian Rule
    P(C | X) = P(X | C) P(C) / P(X)
    Posterior = Likelihood × Prior / Evidence

Probability Basics – Quiz
• We have two six-sided dice. When they are rolled, the following events may occur: (A) die 1 lands on side "3", (B) die 2 lands on side "1", and (C) the two dice sum to eight. Answer the following questions:
  1) P(A) = ?   2) P(B) = ?   3) P(C) = ?   4) P(A | B) = ?
  5) P(C | A) = ?   6) P(A, B) = ?   7) P(A, C) = ?   8) Does P(A, C) equal P(A) P(C)?

Probabilistic Classification
• Establishing a probabilistic model for classification
  – Discriminative model: model P(C | X), with C ∈ {c1, ..., cL} and X = (X1, ..., Xn); a discriminative probabilistic classifier takes x = (x1, ..., xn) and outputs P(c1 | x), P(c2 | x), ..., P(cL | x).
  – Generative model: model P(X | C), with C ∈ {c1, ..., cL} and X = (X1, ..., Xn); one generative probabilistic model per class outputs P(x | c1), P(x | c2), ..., P(x | cL).

Probabilistic Classification (cont.)
• MAP classification rule
  – MAP: Maximum A Posteriori.
  – Assign x to c* if P(C = c* | X = x) > P(C = c | X = x) for all c ≠ c*, c ∈ {c1, ..., cL}.
• Generative classification with the MAP rule
  – Apply Bayes rule to convert the generative model into posterior probabilities:
      P(C = ci | X = x) = P(X = x | C = ci) P(C = ci) / P(X = x) ∝ P(X = x | C = ci) P(C = ci),   for i = 1, 2, ..., L
  – Then apply the MAP rule.

Naïve Bayes
• Bayes classification:
    P(C | X) ∝ P(X | C) P(C) = P(X1, ..., Xn | C) P(C)
  Difficulty: learning the joint probability P(X1, ..., Xn | C).
• Naïve Bayes classification
  – Assumption: all input attributes are conditionally independent given the class!
      P(X1, X2, ..., Xn | C) = P(X1 | X2, ..., Xn; C) P(X2, ..., Xn | C)
                             = P(X1 | C) P(X2, ..., Xn | C)
                             = P(X1 | C) P(X2 | C) ... P(Xn | C)
  – MAP classification rule: for x = (x1, x2, ..., xn), assign class c* if
      [P(x1 | c*) ... P(xn | c*)] P(c*) > [P(x1 | c) ... P(xn | c)] P(c)   for all c ≠ c*, c ∈ {c1, ..., cL}.
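The two-dice quiz a few slides back can be checked by brute-force enumeration of the 36 equally likely outcomes. The following short sketch is mine (not part of the slides); it computes each requested probability directly from the definitions of the events A, B, and C.

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))   # all 36 (die1, die2) pairs

def prob(event):
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

A = lambda o: o[0] == 3           # die 1 shows 3
B = lambda o: o[1] == 1           # die 2 shows 1
C = lambda o: o[0] + o[1] == 8    # the dice sum to 8

p_A, p_B, p_C = prob(A), prob(B), prob(C)
p_AB = prob(lambda o: A(o) and B(o))
p_AC = prob(lambda o: A(o) and C(o))
print(p_A, p_B, p_C)              # 1/6, 1/6, 5/36
print(p_AB / p_B, p_AC / p_A)     # P(A|B), P(C|A)
print(p_AB, p_AC)                 # P(A,B), P(A,C)
print(p_AC == p_A * p_C)          # False: A and C are not independent
```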
Naïve Bayes Algorithm (for discrete input attributes)
• Learning phase: given a training set S,
  – For each target value ci (ci = c1, ..., cL): estimate P̂(C = ci) from the examples in S.
  – For every attribute value xjk of each attribute Xj (j = 1, ..., n; k = 1, ..., Nj): estimate P̂(Xj = xjk | C = ci) from the examples in S.
  – Output: conditional probability tables; for each attribute Xj, a table with Nj × L elements.
• Test phase: given an unknown instance X' = (a1, ..., an), look up the tables and assign the label c* to X' if
    [P̂(a1 | c*) ... P̂(an | c*)] P̂(c*) > [P̂(a1 | c) ... P̂(an | c)] P̂(c)   for all c ≠ c*, c ∈ {c1, ..., cL}.

Example: Play Tennis
(The standard 14-example Play Tennis data set, with attributes Outlook, Temperature, Humidity, Wind and class Play.)

Example: Learning Phase
  Outlook     Play=Yes  Play=No      Temperature  Play=Yes  Play=No
  Sunny       2/9       3/5          Hot          2/9       2/5
  Overcast    4/9       0/5          Mild         4/9       2/5
  Rain        3/9       2/5          Cool         3/9       1/5

  Humidity    Play=Yes  Play=No      Wind         Play=Yes  Play=No
  High        3/9       4/5          Strong       3/9       3/5
  Normal      6/9       1/5          Weak         6/9       2/5

  P(Play=Yes) = 9/14,  P(Play=No) = 5/14

Example: Test Phase
• Given a new instance x' = (Outlook = Sunny, Temperature = Cool, Humidity = High, Wind = Strong).
• Look up the tables:
    P(Outlook=Sunny | Play=Yes) = 2/9        P(Outlook=Sunny | Play=No) = 3/5
    P(Temperature=Cool | Play=Yes) = 3/9     P(Temperature=Cool | Play=No) = 1/5
    P(Humidity=High | Play=Yes) = 3/9        P(Humidity=High | Play=No) = 4/5
    P(Wind=Strong | Play=Yes) = 3/9          P(Wind=Strong | Play=No) = 3/5
    P(Play=Yes) = 9/14                       P(Play=No) = 5/14
• MAP rule:
    P(Yes | x') ∝ [P(Sunny|Yes) P(Cool|Yes) P(High|Yes) P(Strong|Yes)] P(Play=Yes) = 0.0053
    P(No | x')  ∝ [P(Sunny|No) P(Cool|No) P(High|No) P(Strong|No)] P(Play=No) = 0.0206
  Since P(Yes | x') < P(No | x'), we label x' as "No".

Relevant Issues
• Violation of the independence assumption
  – For many real-world tasks, P(X1, ..., Xn | C) ≠ P(X1 | C) ... P(Xn | C).
  – Nevertheless, naïve Bayes works surprisingly well anyway!
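A minimal sketch reproducing the test-phase arithmetic above; the dictionaries simply transcribe the relevant entries of the conditional probability tables, and any structure beyond that (variable names, dict layout) is mine.

```python
prior = {"Yes": 9/14, "No": 5/14}

cpt = {  # P(attribute value | Play), taken from the learning-phase tables
    "Outlook":     {"Sunny":  {"Yes": 2/9, "No": 3/5}},
    "Temperature": {"Cool":   {"Yes": 3/9, "No": 1/5}},
    "Humidity":    {"High":   {"Yes": 3/9, "No": 4/5}},
    "Wind":        {"Strong": {"Yes": 3/9, "No": 3/5}},
}

x = {"Outlook": "Sunny", "Temperature": "Cool", "Humidity": "High", "Wind": "Strong"}

score = {}
for c in prior:
    s = prior[c]
    for attr, value in x.items():
        s *= cpt[attr][value][c]          # multiply the class-conditional probabilities
    score[c] = s

print(score)                              # {'Yes': ~0.0053, 'No': ~0.0206}
print(max(score, key=score.get))          # MAP label: 'No'
```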
Relevant Issues (cont.)
• Zero conditional probability problem
  – If no training example contains the attribute value Xj = ajk for class ci, then P̂(Xj = ajk | C = ci) = 0.
  – In this circumstance, the whole product P̂(x1 | ci) ... P̂(ajk | ci) ... P̂(xn | ci) = 0 during the test phase.
  – As a remedy, estimate the conditional probabilities with the m-estimate:
      P̂(Xj = ajk | C = ci) = (nc + m·p) / (n + m)
    where
      nc: number of training examples for which Xj = ajk and C = ci
      n:  number of training examples for which C = ci
      p:  prior estimate (usually p = 1/t for t possible values of Xj)
      m:  weight given to the prior (number of "virtual" examples, m >= 1)

Relevant Issues (cont.)
• Continuous-valued input attributes
  – There are innumerable possible values for such an attribute.
  – The conditional probability is modeled with the normal distribution:
      P̂(Xj | C = ci) = 1 / (√(2π) σji) · exp( −(Xj − μji)² / (2 σji²) )
      μji: mean of the attribute values Xj over the examples for which C = ci
      σji: standard deviation of the attribute values Xj over the examples for which C = ci
  – Learning phase: for X = (X1, ..., Xn) and C = c1, ..., cL, output n × L normal distributions and P(C = ci), i = 1, ..., L.
  – Test phase: for X' = (X1, ..., Xn),
    • calculate the conditional probabilities with all the normal distributions;
    • apply the MAP rule to make a decision.

Conclusions
• Naïve Bayes is based on the independence assumption
  – Training is very easy and fast: it just requires considering each attribute in each class separately.
  – Testing is straightforward: just look up tables or calculate conditional probabilities with the normal distributions.
• A popular generative model
  – Performance is competitive with most state-of-the-art classifiers, even when the independence assumption is violated.
  – Many successful applications, e.g., spam mail filtering.
  – A good candidate as a base learner in ensemble learning.
  – Apart from classification, naïve Bayes can do more...
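A minimal sketch of the two remedies above (m-estimate smoothing and the Gaussian model for continuous attributes); the function names are mine and the numbers in the usage lines are placeholders.

```python
import math

def m_estimate(n_c, n, t, m=1.0):
    """P̂(Xj=ajk | C=ci) = (n_c + m*p) / (n + m), with prior p = 1/t."""
    p = 1.0 / t
    return (n_c + m * p) / (n + m)

def gaussian_likelihood(x, mu, sigma):
    """P̂(Xj=x | C=ci) under a normal distribution with per-class mean and std."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

# Placeholder usage: an attribute value never seen for this class (n_c = 0)
# still gets a small non-zero probability instead of zeroing out the product.
print(m_estimate(n_c=0, n=9, t=3))          # ~0.033 rather than 0
print(gaussian_likelihood(85, mu=90, sigma=10))
```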