Machine Learning:
Ensemble Methods
1
Learning Ensembles
• Learn multiple alternative definitions of a concept using
different training data or different learning algorithms.
• Combine decisions of multiple definitions, e.g. using
weighted voting.
[Diagram: Training Data → Data1, Data2, …, Data m → Learner1, Learner2, …, Learner m → Model1, Model2, …, Model m → Model Combiner → Final Model]
2
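As a minimal sketch of the combiner step, the snippet below implements weighted voting in plain Python; the model objects and their weights (e.g. validation accuracies) are assumptions, only the combination rule comes from the slide.

```python
from collections import defaultdict

def weighted_vote(models, weights, x):
    """Combine the class predictions of several models by weighted voting.

    models  - trained models, each assumed to expose a .predict(x) method
    weights - one non-negative weight per model, e.g. its validation accuracy
    """
    scores = defaultdict(float)
    for model, w in zip(models, weights):
        scores[model.predict(x)] += w       # add this model's weight to its predicted class
    return max(scores, key=scores.get)      # class with the largest total weight wins
```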
Why does it work?
• Suppose there are 25 base classifiers
– Each classifier has error rate ε = 0.35
– Assume classifiers are independent
– Probability that the ensemble classifier
makes a wrong prediction (it is wrong if at
least 13 out of 25 make the wrong
prediction):
$$\sum_{i=13}^{25} \binom{25}{i}\, \varepsilon^{i} (1-\varepsilon)^{25-i} = 0.06$$
3
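This number is easy to verify; a quick sketch using only the Python standard library, assuming 25 independent base classifiers with error rate 0.35:

```python
from math import comb

eps, n = 0.35, 25
# Probability that at least 13 of the 25 independent classifiers are wrong,
# i.e. that the majority vote of the ensemble is wrong.
p_wrong = sum(comb(n, i) * eps**i * (1 - eps)**(n - i) for i in range(13, n + 1))
print(round(p_wrong, 3))   # ~0.06
```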
Value of Ensembles
• When combining multiple independent and
diverse decisions, each of which is at least
more accurate than random guessing,
random errors cancel each other out and
correct decisions are reinforced.
• Human ensembles are demonstrably better
– How many jelly beans in the jar?: Individual
estimates vs. group average.
4
Homogeneous Ensembles
• Use a single, arbitrary learning algorithm but
manipulate training data to make it learn
multiple models.
– Data1 ≠ Data2 ≠ … ≠ Data m
– Learner1 = Learner2 = … = Learner m
• Different methods for changing training data:
– Bagging: Resample training data
– Boosting: Reweight training data
– DECORATE: Add additional artificial training data
• In WEKA, these are called meta-learners; they
take a learning algorithm as an argument (the base
learner) and create a new learning algorithm.
5
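The same pattern exists outside WEKA; for instance, scikit-learn's meta-estimators also take a base learner as an argument. A small sketch (scikit-learn is an assumption here, the slides themselves use WEKA):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=0)

# A meta-learner: takes a base learning algorithm (here a decision tree)
# and turns it into a new, ensemble learning algorithm.
bagged_trees = BaggingClassifier(DecisionTreeClassifier(), n_estimators=10, random_state=0)
bagged_trees.fit(X, y)
print(bagged_trees.score(X, y))
```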
Bagging (BAGGING is short for
Bootstrap AGGregatING)
6
Bagging (BAGGING is short for
Bootstrap AGGregatING)
• Create m samples of n data points with replacement
(meaning the same item can be sampled more than once)
Data ID             1   2   3   4   5   6   7   8   9   10
Original Data       1   2   3   4   5   6   7   8   9   10
Bagging (Round 1)   7   8   10  8   2   5   10  10  5   9
Bagging (Round 2)   1   4   9   1   2   3   2   7   3   2
Bagging (Round 3)   1   8   5   10  5   5   9   6   3   7
• Build a classifier using these m samples
• Each data point has probability (1 – 1/n)^n of being
selected as test data (i.e. of never appearing in the bootstrap sample)
• Training data = 1 – (1 – 1/n)^n of the original
data
• Why?
7
Example
[Slides 8–14: worked-example figures]
The 0.632 bootstrap
• Each example in the original dataset has a
selection probability of 1/n
• On average, 36.8% of the datapoints are left
unselected.
• This is calculated by considering the limit of
(1 – 1/n)^n as n → ∞
15
The 0.632 bootstrap
• This method is also called the 0.632
bootstrap
– A particular training instance has a
probability 1/n of being picked and 1-1/n of
not being picked
– Thus its probability of ending up in the test
data (=not being selected n times) is:
$$\left(1 - \frac{1}{n}\right)^{n} \approx e^{-1} = 0.368$$
– This means the training data will contain
approximately 63.2%
of the instances
16
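The 63.2% figure can also be checked empirically; a small sketch with NumPy (assumed available):

```python
import numpy as np

rng = np.random.default_rng(0)
n, runs = 1000, 200

# Fraction of distinct original items that appear in a bootstrap sample of size n.
fractions = [len(np.unique(rng.integers(0, n, size=n))) / n for _ in range(runs)]
print(np.mean(fractions))   # ~0.632, i.e. 1 - (1 - 1/n)^n ≈ 1 - e^(-1)
```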
Example of Bagging
Assume that the training data are points x on the real line, labeled:
x ≤ 0.3 : +1      0.4 ≤ x ≤ 0.7 : -1      x ≥ 0.8 : +1
The data are not linearly separable; a single classifier for these data would have to be an interval, e.g.
IF t1 <= x <= t2 THEN C ELSE not(C)
Goal: find a collection of 10 simple thresholding classifiers (each learning a single
threshold) that collectively classify correctly.
E.g. each classifier learns a threshold t such that:
IF x < t THEN C ELSE not(C)
17
Note: each classifier is inconsistent! E.g. classifier 1 is wrong on the last
two items of its “sampled” learning set
18
Bagging (applied to training data)
If $\mathrm{sign}\big(\sum_i C_i(x)\big) = 1$ then $C(x) = 1$
Accuracy of ensemble classifier: 100%
19
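A sketch of this example in NumPy: each bootstrap round fits the single-threshold classifier with the fewest errors on its sample, and the ensemble takes the sign of the summed votes. The exact stump-fitting procedure below is an assumption, only the data labeling and the sign-of-sum rule come from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten 1-D points: +1 for x <= 0.3, -1 for 0.4 <= x <= 0.7, +1 for x >= 0.8 (as on the slide)
X = np.arange(1, 11) / 10.0
y = np.where((X <= 0.35) | (X >= 0.75), 1, -1)

def fit_stump(X, y):
    """Return (threshold, direction) minimizing training error for: predict `direction` when x <= t."""
    xs = np.unique(X)
    thresholds = np.concatenate(([xs[0] - 0.05], (xs[:-1] + xs[1:]) / 2, [xs[-1] + 0.05]))
    best = None
    for t in thresholds:
        for direction in (1, -1):
            err = np.mean(np.where(X <= t, direction, -direction) != y)
            if best is None or err < best[0]:
                best = (err, t, direction)
    return best[1], best[2]

def predict_stump(stump, X):
    t, direction = stump
    return np.where(X <= t, direction, -direction)

# Bagging: one stump per bootstrap sample, combined by the sign of the summed votes
stumps = [fit_stump(X[idx], y[idx])
          for idx in (rng.integers(0, len(X), size=len(X)) for _ in range(10))]
votes = np.sum([predict_stump(s, X) for s in stumps], axis=0)
print("ensemble accuracy:", np.mean(np.where(votes >= 0, 1, -1) == y))
```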
Example 2: non-linear classifier out of many linear classifiers
[Slides 20–25: figures building a non-linear decision boundary from many linear classifiers]
N simple classifiers work like a complex
classifier
• Note: initial data could not be correctly
separated by a simple threshold/linear
classifier
• With boosting, we obtain a perfect
classifier!
26
Bagging - Summary
• Works well if the base classifiers are
unstable (they complement each other)
• Increases accuracy because it reduces
the variance of the individual classifiers
• Does not focus on any particular
instance of the training data
• What if we want to focus on particular
instances of the training data?
• E.g. some instances can be more difficult
to classify than others (and on these
instances most classifiers may err)
27
Boosting
• An iterative procedure to adaptively
change distribution of training data by
focusing more on previously
misclassified examples
Each new member of the ensemble focuses on the instances that the
previous ones got wrong!
28
Boosting
• Instances that are wrongly classified will
have their weights increased
• Those that are classified correctly will
have their weights decreased
Original Data        1   2   3   4   5   6   7   8   9   10
Boosting (Round 1)   7   3   2   8   7   9   4   10  6   3
Boosting (Round 2)   5   4   9   4   2   5   1   7   4   2
Boosting (Round 3)   4   4   8   10  4   5   4   6   3   4
• Example 4 is hard to classify
• Its weight is increased, therefore it is more likely to be
chosen again in subsequent rounds
29
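A minimal sketch of this weight-driven resampling (NumPy assumed; the weight values below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights after a round in which example 4 was misclassified:
# its weight has been increased, the others decreased and renormalized.
weights = np.array([0.08, 0.08, 0.08, 0.28, 0.08, 0.08, 0.08, 0.08, 0.08, 0.08])

# Sampling with replacement according to the weights: example ID 4 now
# shows up far more often than the others in the next round's sample.
sample_ids = rng.choice(np.arange(1, 11), size=10, replace=True, p=weights)
print(sample_ids)
```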
Boosting flow diagram
30
Boosting
• At first, equal weights are assigned to each training
instance (1/n for round 1)
• After a classifier Ci is learned, the weights are adjusted
to allow the subsequent classifier Ci+1 to “pay more
attention” to data that were misclassified by Ci. The higher
an instance's weight, the higher its probability of being sampled
• Final boosted classifier C* combines the votes of each
individual classifier
– The weight of each classifier's vote is a function of its
accuracy
• AdaBoost – the most popular boosting algorithm
31
AdaBoost (Adaptive Boosting)
• Input:
– Training set D containing n instances
– T rounds
– A classification learning scheme
• Output:
– A composite model
32
AdaBoost: Training Phase
• Training data D contains n labeled examples (X1,y1),
(X2,y2), (X3,y3), …, (Xn,yn)
• Initially assign equal weight 1/n to each example
• To generate T base classifiers, we need T
rounds or iterations
• In round i, data from D are sampled with
replacement to form Di (of size n)
• Each example's chance of being selected in the
next rounds depends on its weight
– Each time, the new sample is generated directly from
the training data D, with sampling probabilities
given by the current weights
33
Testing phase in AdaBoost
• Base classifiers: C1, C2, …, CT
• Error rate (i = index of classifier, j = index of instance,
yj = correct class for xj):

$$\varepsilon_i = \frac{1}{n}\sum_{j=1}^{n} w_j\,\delta\big(C_i(x_j) \neq y_j\big)$$

• Importance of a classifier:

$$\alpha_i = \frac{1}{2}\ln\!\left(\frac{1-\varepsilon_i}{\varepsilon_i}\right)$$

34
Testing Phase (2)
• The lower a classifier Ci's error rate εi, the more accurate
it is, and therefore the higher its weight for voting
should be
• Weight of classifier Ci's vote:

$$\alpha_i = \frac{1}{2}\ln\!\left(\frac{1-\varepsilon_i}{\varepsilon_i}\right)$$

• Testing (unseen data):
– For each class y, sum the weights of each classifier that assigned
class y to instance xtest. The class with the highest sum is the
WINNER!

$$C^*(x_{test}) = \arg\max_{y} \sum_{i=1}^{T} \alpha_i\,\delta\big(C_i(x_{test}) = y\big), \qquad \delta(x) = 1 \text{ if } x \text{ is true, else } 0$$

35
Training Phase i depends on
previous testing phase (i-1)
• Base classifier Ci, is derived from training
data of set Di
• Error of Ci is tested using Di
• Weights of training data are adjusted
depending on how they were classified
– Correctly classified: Decrease weight
– Incorrectly classified: Increase weight
• The weight of a data point indicates how hard it is to
classify (the harder, the higher the weight)
36
Example: AdaBoost
• Weight update rule on all training data in D:

$$w_j^{(i+1)} = \frac{w_j^{(i)}}{Z_i} \times \begin{cases} e^{-\alpha_i} & \text{if } C_i(x_j) = y_j \\ e^{\alpha_i} & \text{if } C_i(x_j) \neq y_j \end{cases}$$

where Zi is the normalization factor.
If the classification is correct, decrease the weight (divide by e^{αi});
else increase it (multiply by e^{αi}).

$$C^*(x_{test}) = \arg\max_{y} \sum_{i=1}^{T} \alpha_i\,\delta\big(C_i(x_{test}) = y\big)$$

37
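Putting the formulas together, here is a minimal AdaBoost training/prediction sketch in NumPy. It assumes ±1 labels, takes the base learner as a pair of functions (e.g. the hypothetical fit_stump/predict_stump from the bagging sketch earlier), and uses the common convention of measuring the weighted error on the full training set; it illustrates the idea rather than reproducing the slides exactly.

```python
import numpy as np

def adaboost_train(X, y, base_fit, base_predict, T=10, seed=0):
    """y must be +1/-1. base_fit(X, y) -> model; base_predict(model, X) -> +1/-1 predictions."""
    rng = np.random.default_rng(seed)
    n = len(X)
    w = np.full(n, 1.0 / n)                           # round 1: equal weights 1/n
    models, alphas = [], []
    for _ in range(T):
        idx = rng.choice(n, size=n, replace=True, p=w)  # sample D_i according to the weights
        model = base_fit(X[idx], y[idx])
        pred = base_predict(model, X)
        err = np.sum(w * (pred != y))                 # weighted error of C_i
        if err == 0 or err >= 0.5:                    # degenerate round: skip it (common safeguard)
            continue
        alpha = 0.5 * np.log((1 - err) / err)         # importance of this classifier
        w *= np.exp(-alpha * y * pred)                # decrease weights of correct, increase wrong
        w /= w.sum()                                  # Z_i: renormalize
        models.append(model)
        alphas.append(alpha)
    return models, alphas

def adaboost_predict(models, alphas, base_predict, X):
    # weighted vote: sign of sum_i alpha_i * C_i(x)
    votes = sum(a * base_predict(m, X) for m, a in zip(models, alphas))
    return np.where(votes >= 0, 1, -1)

# Example usage with the stumps from the earlier bagging sketch:
# models, alphas = adaboost_train(X, y, fit_stump, predict_stump)
# print(adaboost_predict(models, alphas, predict_stump, X))
```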
Illustrating AdaBoost
[Figure: each data point starts with weight 0.1; after Boosting Round 1 the boundary B1 is learned with α = 1.9459, correctly classified points get weight 0.0094 and the two points C1 gets wrong get weight 0.4623]
C1 is wrong on these 2 samples, hence their weight is increased
38
Illustrating AdaBoost
[Figure: Round 1 (boundary B1): weights 0.0094 / 0.4623, α = 1.9459; Round 2 (boundary B2): weights 0.3037 / 0.0009 / 0.0422, α = 2.9323; Round 3 (boundary B3): weights 0.0276 / 0.1819 / 0.0038, α = 3.8744; "Overall" shows the combined classifier]
39
Random Forests
• Ensemble method specifically designed for
decision tree classifiers
• Random Forests grow many trees
– Ensemble of unpruned decision trees
– Each base classifier classifies a “new” vector of
attributes from the original data
– Final result on classifying a new instance: voting.
Forest chooses the classification result having the
most votes (over all the trees in the forest)
40
Random Forests
• Introduce two sources of randomness:
“Bagging” and “Random input vectors”
– Bagging method: each tree is grown using a
bootstrap sample of training data
– Random vector method: At each node, best
split is chosen from a random sample of m
attributes instead of all attributes
41
Random forest algorithm
• Let the number of training cases be N, and the number of
features in the classifier be M.
• We are told the number m of input features to be used to
determine the decision at a node of the tree; m should be much
less than M (at each node, the best split is chosen among m
randomly selected features)
• Choose a training set for this tree by choosing n times with
replacement from all N available training cases (i.e. take a
bootstrap sample). Use the rest of the cases to estimate the error
of the tree, by predicting their classes.
• For each node of the tree, randomly choose m features on
which to base the decision at that node. Calculate the best split
based on these m variables in the training set.
• Each tree is fully grown and not pruned (as may be done in
constructing a normal tree classifier).
42
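In practice these choices map directly onto library parameters; a sketch with scikit-learn (assumed installed), where n_estimators is the number of trees and max_features plays the role of m:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Bootstrap sample per tree, m random features considered at each split,
# trees grown fully (no pruning) by default.
forest = RandomForestClassifier(n_estimators=100, max_features=4,
                                bootstrap=True, oob_score=True, random_state=0)
forest.fit(X, y)
print("out-of-bag score:", forest.oob_score_)   # error estimated on the unselected cases
```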
Random Forests
43
DECORATE
(Melville & Mooney, 2003)
• Change training data by adding new
artificial training examples that encourage
diversity in the resulting ensemble.
• Improves accuracy when the training set is
small, and therefore resampling and
reweighting the training set would have
limited ability to generate diverse
alternative hypotheses.
44
Overview of DECORATE
[Slides 45–47: diagrams of the DECORATE loop — the Base Learner is trained on the Training Examples plus Artificial Examples, and each new classifier C1, C2, C3, … is added to the Current Ensemble]
Creating artificial examples
• Create a set of new examples which are
maximally diverse from the training set. Let x
be an example in this new set.
• To classify x, proceed as follows:
– Each base classifier Ci in the ensemble C* provides
probabilities for the class membership of a sample x, i.e.

$$\hat{P}_{C_i,y}(x) = \hat{P}_{C_i}\big(C_i(x) = y\big)$$

– The category label for x is selected according to:

$$C^*(x) = \arg\max_{y \in Y} \frac{\sum_{C_i \in C^*} \hat{P}_{C_i,y}(x)}{|C^*|}$$

48
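A small sketch of this combination rule (NumPy assumed; the probability values stand in for the P̂ estimates each ensemble member would return, e.g. from a predict_proba-style method):

```python
import numpy as np

# Rows: base classifiers C_i; columns: classes y. Each row holds P_hat_{C_i, y}(x) for one x.
proba_per_classifier = np.array([
    [0.7, 0.2, 0.1],    # C1's class-membership probabilities for x
    [0.4, 0.5, 0.1],    # C2
    [0.6, 0.3, 0.1],    # C3
])

# C*(x) = argmax_y (sum_i P_hat_{C_i, y}(x)) / |C*|
mean_proba = proba_per_classifier.mean(axis=0)
print("predicted class index:", int(np.argmax(mean_proba)))
```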
Issues in Ensembles
• Parallelism in Ensembles: Bagging is easily
parallelized, Boosting is not.
• Variants of Boosting to handle noisy data.
• How “weak” should a base-learner for Boosting
be?
• Exactly how does the diversity of ensembles affect
their generalization performance?
• Combining Boosting and Bagging.
49