Machine Learning: Ensemble Methods

Learning Ensembles
• Learn multiple alternative definitions of a concept using different training data or different learning algorithms.
• Combine the decisions of the multiple definitions, e.g. using weighted voting.
• [Figure: the training data is turned into Data1, Data2, …, Data m; these are fed to Learner1, Learner2, …, Learner m, producing Model1, Model2, …, Model m, which a Model Combiner merges into the Final Model.]

Why does it work?
• Suppose there are 25 base classifiers
– Each classifier has error rate ε = 0.35
– Assume the classifiers are independent
– The ensemble classifier makes a wrong prediction only if at least 13 out of the 25 classifiers are wrong, which has probability
  \sum_{i=13}^{25} \binom{25}{i} \varepsilon^{i} (1-\varepsilon)^{25-i} \approx 0.06

Value of Ensembles
• When combining multiple independent and diverse decisions, each of which is at least more accurate than random guessing, random errors cancel each other out and correct decisions are reinforced.
• Human ensembles are demonstrably better
– How many jelly beans in the jar? Individual estimates vs. the group average.

Homogeneous Ensembles
• Use a single, arbitrary learning algorithm but manipulate the training data so that it learns multiple models.
– Data1 ≠ Data2 ≠ … ≠ Data m
– Learner1 = Learner2 = … = Learner m
• Different methods for changing the training data:
– Bagging: resample the training data
– Boosting: reweight the training data
– DECORATE: add additional artificial training data
• In WEKA, these are called meta-learners: they take a learning algorithm as an argument (the base learner) and create a new learning algorithm.

Bagging
• Create m samples of size n by sampling the training data with replacement:

  Original Data:      1   2   3   4   5   6   7   8   9   10
  Bagging (Round 1):  7   8   10  8   2   5   10  10  5   9
  Bagging (Round 2):  1   4   9   1   2   3   2   7   3   2
  Bagging (Round 3):  1   8   5   10  5   5   9   6   3   7

• Build a classifier on each bootstrap sample
• Each instance has probability (1 − 1/n)^n of never being selected, and can then serve as test data
• The training data therefore contains a fraction 1 − (1 − 1/n)^n of the original instances

The 0.632 bootstrap
• This method is also called the 0.632 bootstrap
– A particular training instance has probability 1 − 1/n of not being picked in a single draw
– Thus its probability of ending up in the test data (not being selected in any of the n draws) is
  (1 - \tfrac{1}{n})^{n} \approx e^{-1} \approx 0.368
– This means the training data will contain approximately 63.2% of the instances

Example of Bagging
• Assume the training data are points x with labels: +1 for x ≤ 0.3, −1 for 0.4 ≤ x ≤ 0.7, and +1 for x ≥ 0.8.
• Goal: find a collection of 10 simple thresholding classifiers that collectively can classify the data correctly.
• E.g. each classifier learns a threshold t such that: if x < t then C else not(C).
• [Figure: bagging applied to this training data; each of the ten bootstrap rounds yields one threshold classifier along the x axis. Accuracy of the ensemble classifier: 100%.]

N simple classifiers work like a complex classifier
• Note: the initial data could not be correctly separated by a single simple threshold classifier.
• With bagging, we obtain a perfect classifier!

Bagging – Summary
• Works well if the base classifiers are unstable (they complement each other)
• Increases accuracy because it reduces the variance of the individual classifiers
• Does not focus on any particular instance of the training data
– Therefore, it is less susceptible to model overfitting when applied to noisy data
• What if we want to focus on particular instances of the training data?
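To make the bagging procedure concrete, here is a minimal Python sketch (not from the original slides) that applies it to the toy thresholding example above: it draws bootstrap samples with replacement, fits a simple threshold ("decision stump") classifier on each, and combines them by unweighted majority vote. The helper names (train_stump, bagging, predict) and the candidate-threshold grid are illustrative choices, not part of any standard library.

import random
from collections import Counter

# Toy 1-D training data from the bagging example: +1 for x <= 0.3 and x >= 0.8, -1 in between
X = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
y = [1, 1, 1, -1, -1, -1, -1, 1, 1, 1]

def train_stump(xs, ys):
    # Pick the threshold/sign pair with the lowest error on one bootstrap sample
    best = None
    for t in [0.05 + 0.1 * k for k in range(11)]:        # candidate thresholds 0.05, 0.15, ..., 1.05
        for sign in (1, -1):                             # predict `sign` when x <= t, -sign otherwise
            err = sum((sign if x <= t else -sign) != label for x, label in zip(xs, ys))
            if best is None or err < best[0]:
                best = (err, t, sign)
    _, t, sign = best
    return lambda x, t=t, sign=sign: sign if x <= t else -sign

def bagging(X, y, m=10, seed=1):
    # Train m stumps, each on a bootstrap sample of size n drawn with replacement
    rng = random.Random(seed)
    n = len(X)
    models = []
    for _ in range(m):
        idx = [rng.randrange(n) for _ in range(n)]
        models.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))
    return models

def predict(models, x):
    # Unweighted majority vote over all base classifiers
    return Counter(model(x) for model in models).most_common(1)[0][0]

models = bagging(X, y, m=10)
print([predict(models, x) for x in X])   # with diverse bootstrap samples this can reach 100% accuracy

As in the slide example, no single stump can separate this data, but the vote over several stumps trained on different bootstrap samples can.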
Boosting
• An iterative procedure that adaptively changes the distribution of the training data by focusing more on previously misclassified records
– Initially, all N records are assigned equal weights
– Unlike bagging, the weights may change at the end of each boosting round
– A higher weight on a record increases its probability of being selected in the next round

Boosting
• Records that are wrongly classified will have their weights increased
• Records that are classified correctly will have their weights decreased

  Original Data:       1   2   3   4   5   6   7   8   9   10
  Boosting (Round 1):  7   3   2   8   7   9   4   10  6   3
  Boosting (Round 2):  5   4   9   4   2   5   1   7   4   2
  Boosting (Round 3):  4   4   8   10  4   5   4   6   3   4

• Example 4 is hard to classify
• Its weight is increased, therefore it is more likely to be chosen again in subsequent rounds

Boosting
• Equal weights are assigned to each training instance (1/N for round 1)
• After a classifier Ci is learned, the weights are adjusted to allow the subsequent classifier Ci+1 to "pay more attention" to the data that were misclassified by Ci.
• The final boosted classifier C* combines the votes of the individual classifiers
– The weight of each classifier's vote is a function of its accuracy
• AdaBoost – a popular boosting algorithm

AdaBoost (Adaptive Boosting)
• Input:
– Training set D containing N instances
– T rounds
– A classification learning scheme
• Output:
– A composite model

AdaBoost: Training Phase
• The training data D contain N labelled instances (X1, y1), (X2, y2), (X3, y3), …, (XN, yN)
• Initially assign an equal weight 1/N to each instance
• To generate T base classifiers, we need T rounds (iterations)
• In round i, data from D are sampled with replacement to form Di (of size N)
• Each instance's chance of being selected in the next round depends on its weight
– Each time, the new sample is generated directly from the training data D with sampling probabilities given by the weights

AdaBoost: Training Phase
• The base classifier Ci is derived from its training data Di
• The error of Ci is estimated using Di
• The weights of the training data are adjusted depending on how they were classified
– Correctly classified: decrease weight
– Incorrectly classified: increase weight
• The weight of an instance indicates how hard it is to classify (the harder, the higher the weight)

AdaBoost: Testing Phase
• The lower a classifier's error rate εi, the more accurate it is, and therefore the higher its weight for voting should be
• The weight of classifier Ci's vote is
  \alpha_i = \frac{1}{2}\ln\frac{1-\varepsilon_i}{\varepsilon_i}
• Testing (unseen data):
– For each class y, sum the weights of the classifiers that assigned class y to xtest. The class with the highest sum is the WINNER!
  C^*(x_{test}) = \arg\max_{y} \sum_{i=1}^{T} \alpha_i\, \delta(C_i(x_{test}) = y), where \delta(\cdot) = 1 if its argument is true and 0 otherwise
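As a minimal Python sketch (not part of the slides) of the training and testing phases just described, the code below reuses the toy data X, y and the train_stump() helper from the bagging sketch above. It samples each round's training set according to the current weights, computes the weighted error and classifier weight α_i = ½ ln((1 − ε_i)/ε_i), and applies the exponential weight update and weighted vote spelled out on the "Example: AdaBoost" slides that follow; the 1/N factor in the error follows those slides.

import math
import random

def adaboost(X, y, T=10, seed=1):
    rng = random.Random(seed)
    N = len(X)
    w = [1.0 / N] * N                                    # equal initial weights
    classifiers, alphas = [], []
    for _ in range(T):
        # Round i: sample D_i of size N with replacement, with probabilities given by the weights
        idx = rng.choices(range(N), weights=w, k=N)
        C_i = train_stump([X[j] for j in idx], [y[j] for j in idx])
        # Weighted error on D (the 1/N factor follows the slides' formula)
        eps = sum(w[j] for j in range(N) if C_i(X[j]) != y[j]) / N
        eps = max(eps, 1e-10)                            # guard against a zero error (cannot happen on this toy set)
        alpha = 0.5 * math.log((1 - eps) / eps)          # importance of the classifier
        # Decrease the weights of correctly classified data, increase the others, then normalise by Z_i
        w = [w[j] * math.exp(-alpha if C_i(X[j]) == y[j] else alpha) for j in range(N)]
        Z = sum(w)
        w = [wj / Z for wj in w]
        classifiers.append(C_i)
        alphas.append(alpha)
    return classifiers, alphas

def boosted_predict(classifiers, alphas, x):
    # C*(x) = argmax_y sum_i alpha_i * delta(C_i(x) = y); with y in {-1, +1} this is a weighted sign
    score = sum(a * c(x) for c, a in zip(classifiers, alphas))
    return 1 if score >= 0 else -1

classifiers, alphas = adaboost(X, y, T=10)
print([boosted_predict(classifiers, alphas, x) for x in X])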
Example: AdaBoost
• Base classifiers: C1, C2, …, CT
• Error rate (i = index of the classifier, j = index of the instance, yj = the correct class of xj):
  \varepsilon_i = \frac{1}{N}\sum_{j=1}^{N} w_j\, \delta(C_i(x_j) \neq y_j)
• Importance of a classifier:
  \alpha_i = \frac{1}{2}\ln\frac{1-\varepsilon_i}{\varepsilon_i}

Example: AdaBoost
• Assume: N training instances in D, T rounds; (xj, yj) are the training instances; Ci and αi are the classifier and classifier weight of the i-th round, respectively.
• Weight update on all training instances in D:
  w_j^{(i+1)} = \frac{w_j^{(i)}}{Z_i} \times \begin{cases} e^{-\alpha_i} & \text{if } C_i(x_j) = y_j \\ e^{\alpha_i} & \text{if } C_i(x_j) \neq y_j \end{cases}
  where Zi is the normalization factor
• C^*(x_{test}) = \arg\max_{y} \sum_{i=1}^{T} \alpha_i\, \delta(C_i(x_{test}) = y)

Illustrating AdaBoost
• [Figure: one-dimensional data points, each with initial weight 0.1. In boosting round 1, classifier B1 misclassifies two samples; their weights rise to 0.4623 while the others drop to 0.0094, and α1 = 1.9459. Round 2 (B2) gives weights 0.3037, 0.0009 and 0.0422 with α2 = 2.9323; round 3 (B3) gives weights 0.0276, 0.1819 and 0.0038 with α3 = 3.8744. The overall weighted vote of B1, B2 and B3 classifies all points correctly.]

Random Forests
• An ensemble method specifically designed for decision tree classifiers
• Random Forests grow many trees
– An ensemble of unpruned decision trees
– Each base classifier classifies a "new" vector of attributes derived from the original data
– Final result when classifying a new instance: voting. The forest chooses the class having the most votes (over all the trees in the forest)

Random Forests
• Introduce two sources of randomness: "bagging" and "random input vectors"
– Bagging method: each tree is grown using a bootstrap sample of the training data
– Random vector method: at each node, the best split is chosen from a random sample of m attributes instead of all attributes

Random forest algorithm
• Let the number of training cases be N, and the number of variables in the classifier be M.
• We are told the number m of input variables to be used to determine the decision at a node of the tree; m should be much less than M.
• Choose a training set for this tree by choosing N times with replacement from all N available training cases (i.e. take a bootstrap sample). Use the rest of the cases to estimate the error of the tree, by predicting their classes.
• For each node of the tree, randomly choose m variables on which to base the decision at that node. Calculate the best split based on these m variables in the training set.
• Each tree is fully grown and not pruned (as may be done when constructing a normal tree classifier).

DECORATE (Melville & Mooney, 2003)
• Changes the training data by adding new artificial training examples that encourage diversity in the resulting ensemble.
• Improves accuracy when the training set is small, and therefore resampling and reweighting the training set would have limited ability to generate diverse alternative hypotheses.

Overview of DECORATE
• [Figure: in each iteration, artificial examples are generated and added to the training examples; the base learner is trained on this augmented set to produce a new classifier (first C1, then C2, then C3, …), which is added to the current ensemble.]

Creating artificial examples
• Create a set of new examples which are maximally diverse from the training set. Let x be an example in this new set.
• To classify x, proceed as follows:
– Each base classifier Ci in the ensemble C* provides probabilities for the class membership of the sample x, i.e. \hat{P}_{C_i,y}(x) = \hat{P}_{C_i}(C_i(x) = y)
– The class label for x is selected according to:
  C^*(x) = \arg\max_{y \in Y} \frac{\sum_{C_i \in C^*} \hat{P}_{C_i,y}(x)}{|C^*|}
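As a small illustration (not from the slides) of this prediction rule, the following Python sketch averages the class-membership probabilities of the base classifiers and returns the class with the highest mean. The two hard-coded base classifiers and their probability outputs are hypothetical placeholders for classifiers that expose class probabilities.

def ensemble_predict(ensemble, x, classes):
    # C*(x) = argmax_{y in Y} (1/|C*|) * sum_{C_i in C*} P_hat_{C_i, y}(x)
    avg = {y: sum(clf(x)[y] for clf in ensemble) / len(ensemble) for y in classes}
    return max(avg, key=avg.get)

# Hypothetical base classifiers that map an example to class-membership probabilities
c1 = lambda x: {"+": 0.7, "-": 0.3}
c2 = lambda x: {"+": 0.4, "-": 0.6}

print(ensemble_predict([c1, c2], x=None, classes=["+", "-"]))   # -> "+" (mean 0.55 vs 0.45)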
Issues in Ensembles
• Parallelism in ensembles: bagging is easily parallelized, boosting is not.
• Variants of boosting to handle noisy data.
• How "weak" should a base learner for boosting be?
• What is the theoretical explanation of boosting's ability to improve generalization?
• Exactly how does the diversity of ensembles affect their generalization performance?
• Combining boosting and bagging.