BMC-Ensembles

Ensembles, Model Combination and Bayesian Combination


COD – Classifier Output Distance
• How functionally similar are different learning models?
• Independent of accuracy
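The slides do not spell out how COD is computed. A minimal sketch, assuming COD is simply the fraction of instances on which two trained models give different outputs (so two models can both be inaccurate yet have COD = 0 if they err identically):

    import numpy as np

    def cod(preds_a, preds_b):
        """Classifier Output Distance: fraction of instances on which two
        models disagree, regardless of which (if either) is correct."""
        preds_a, preds_b = np.asarray(preds_a), np.asarray(preds_b)
        return np.mean(preds_a != preds_b)

    # Toy example: two classifiers' predictions on the same 8 instances.
    model_1 = [0, 1, 1, 0, 1, 0, 0, 1]
    model_2 = [0, 1, 0, 0, 1, 1, 0, 1]
    print(cod(model_1, model_2))   # 0.25 -> the models differ on 2 of 8 instances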


Mapping Learning Algorithm Space
• Based on 30 Irvine Data Sets

Mapping Learning Algorithm Space – 2-dimensional rendition

Mapping Task Space – How similarly do different tasks react to different learning algorithms (COD-based similarity)

Mapping Task Space – How similarly do different tasks react to different learning algorithms (COD-based similarity)
Bayesian Learning


• P(h|D) - Posterior probability of h; this is what we usually want to know in machine learning
• P(h) - Prior probability of the hypothesis, independent of D - do we usually know it?
  – Could assign equal probabilities
  – Could assign probability based on inductive bias (e.g. simple hypotheses have higher probability)
• P(D) - Prior probability of the data
• P(D|h) - Probability "likelihood" of the data given the hypothesis
  – Approximated by the accuracy of h on the data set
• P(h|D) = P(D|h)P(h)/P(D)  (Bayes Rule)
• P(h|D) increases with P(D|h) and P(h). In learning, when seeking to discover the best h given a particular D, P(D) is the same in all cases and thus is dropped.
• Good approach when P(D|h)P(h) is more appropriate to calculate than P(h|D)
  – If we do have knowledge about the prior P(h), then that is useful info
  – P(D|h) can be easy to compute in many cases (generative models)
Bayesian Learning






• Maximum a posteriori (MAP) hypothesis:
  hMAP = argmax_{h ∈ H} P(h|D) = argmax_{h ∈ H} P(D|h)P(h)/P(D) ∝ argmax_{h ∈ H} P(D|h)P(h)
• Maximum Likelihood (ML) hypothesis: hML = argmax_{h ∈ H} P(D|h)
• MAP = ML if all priors P(h) are equally likely
• Note that the prior can act like an inductive bias (i.e. simpler hypotheses are more probable)
• For machine learning, P(D|h) is usually measured using the accuracy of the hypothesis on the data
  – If the hypothesis is very accurate on the data, that implies that the data is more likely given that the hypothesis is true (correct)
• Example (assume only 3 possible hypotheses) with different priors
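A minimal numeric sketch of the 3-hypothesis example the slide refers to. The specific likelihoods and priors below are made up for illustration; the point is only that the prior can change which hypothesis wins:

    # Toy MAP vs. ML selection over three hypotheses.
    # Likelihoods P(D|h) stand in for accuracy-based estimates; priors P(h)
    # encode an inductive bias (here: simpler hypotheses get more prior mass).
    likelihood = {"h1": 0.90, "h2": 0.85, "h3": 0.80}   # P(D|h), illustrative values
    prior      = {"h1": 0.10, "h2": 0.30, "h3": 0.60}   # P(h),   illustrative values

    h_ml  = max(likelihood, key=likelihood.get)                 # argmax P(D|h)
    h_map = max(prior, key=lambda h: likelihood[h] * prior[h])  # argmax P(D|h)P(h)

    print("ML hypothesis: ", h_ml)    # h1 (highest likelihood)
    print("MAP hypothesis:", h_map)   # h3 (the prior shifts the choice)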
Bayesian Learning (cont)



• Brute force approach is to test each h ∈ H to see which maximizes P(h|D)
• Note that the argmax is not the real probability since P(D) is unknown, but it is not needed if we're just trying to find the best hypothesis
• Can still get the real probability (if desired) by normalization if there is a limited number of hypotheses
  – Assume only two possible hypotheses h1 and h2
  – The true posterior probability of h1 would be
    P(h1|D) = P(D|h1)P(h1) / [P(D|h1)P(h1) + P(D|h2)P(h2)]
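A quick worked version of this normalization with hypothetical numbers:

    # Recovering true posteriors by normalization when only two hypotheses exist.
    # All values are hypothetical.
    p_d_given = {"h1": 0.8, "h2": 0.4}   # P(D|h)
    p_h       = {"h1": 0.3, "h2": 0.7}   # P(h)

    unnorm = {h: p_d_given[h] * p_h[h] for h in p_h}   # P(D|h)P(h)
    total  = sum(unnorm.values())                      # plays the role of P(D)
    posterior = {h: v / total for h, v in unnorm.items()}
    print(posterior)   # {'h1': 0.4615..., 'h2': 0.5384...}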
Bayesian Learning

• The Bayesian view is that we measure uncertainty, which we can do even if there are not a lot of examples
  – What is the probability that your team will win the championship this year?
    • Cannot do this with a frequentist approach
  – What is the probability that a particular coin will come up heads?
  – Without much data we put our initial belief in the prior
• But as more data becomes available we transfer more of our belief to the data (likelihood)
• With infinite data, we do not consider the prior at all
Bayesian Example





• Assume that we want to learn the mean μ of a random variable x where the variance σ² is known and we have not yet seen any data
• P(μ|D,σ²) = P(D|μ,σ²)P(μ)/P(D) ∝ P(D|μ,σ²)P(μ)
• A Bayesian would want to represent the prior μ0 and the likelihood μ as parameterized distributions (e.g. Gaussian, Multinomial, Uniform, etc.)
• Let's assume a Gaussian
• Since the prior is a Gaussian, we would like to multiply it by whatever the distribution of the likelihood is in order to get a posterior which is also a parameterized distribution
Conjugate Priors




• P(μ|D,σ0²) = P(D|μ)P(μ)/P(D) ∝ P(D|μ)P(μ)
• If the posterior is the same distribution as the prior after multiplication, then we say the prior and posterior are conjugate distributions and the prior is a conjugate prior for the likelihood
• In the case of a known variance and a Gaussian prior, we can use a Gaussian likelihood and the product (posterior) will also be a Gaussian
• If the likelihood is multinomial, then we would need to use a Dirichlet prior and the posterior would be a Dirichlet
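For reference (a standard closed-form result, not taken from these slides): with prior N(μ|μ0, σ0²), known data variance σ², and N observations with sample mean x̄, the Gaussian-times-Gaussian product gives a Gaussian posterior N(μ|μN, σN²) with

    μN = (σ²·μ0 + N·σ0²·x̄) / (σ² + N·σ0²)
    σN² = 1 / (1/σ0² + N/σ²)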
Some Discrete Conjugate Distributions
Some Continuous Conjugate Distributions
More Continuous Conjugate Distributions
Bayesian Example



• Prior(μ) = P(μ) = N(μ|μ0, σ0²)
• Posterior(μ) = P(μ|D) = N(μ|μN, σN²)
• Note how belief transfers from the prior to the data as more data is seen
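A minimal sketch of this belief transfer, using the update formulas above. The prior parameters, known variance, and "true" mean below are made up purely to simulate data:

    import numpy as np

    rng = np.random.default_rng(0)
    mu_0, sigma2_0 = 0.0, 1.0      # prior N(mu_0, sigma2_0) over the unknown mean
    sigma2 = 4.0                   # known data variance
    true_mu = 3.0                  # used only to simulate observations

    for n in (1, 10, 100, 1000):
        x = rng.normal(true_mu, np.sqrt(sigma2), n)
        sigma2_n = 1.0 / (1.0 / sigma2_0 + n / sigma2)               # posterior variance
        mu_n = sigma2_n * (mu_0 / sigma2_0 + n * x.mean() / sigma2)  # posterior mean
        print(f"N={n:5d}  mu_N={mu_n:5.2f}  sigma2_N={sigma2_n:.4f}")
    # mu_N moves from near the prior mean toward ~3.0, and sigma2_N shrinks,
    # i.e. belief shifts from the prior to the data as N grows.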
Bayesian Example
Bayesian Example

• If for this problem the mean had been known and the variance was the unknown, then the conjugate prior would need to be the inverse gamma distribution
  – If we use precision (the inverse of variance), then we use a gamma distribution
• If both the mean and variance were unknown (the typical case), then the conjugate prior distribution is a combination of a Normal (Gaussian) and an inverse gamma and is called a normal-inverse gamma distribution
• For the typical multivariate case this would be the multivariate analogue, combining a multivariate Normal with an inverse Wishart, known as the normal-inverse Wishart distribution
Bayesian Inference

• A Bayesian would frown on using an MLP/Decision Tree/Nearest Neighbor model, etc. as the maximum likelihood part of the equation
  – Why?
Bayesian Inference

• A Bayesian would frown on using an MLP/Decision Tree/SVM/Nearest Neighbor model, etc. as the maximum likelihood part of the equation
  – Why? These models are not standard parameterized distributions, and there is no direct way to multiply the model with a prior distribution to get a posterior distribution
  – Can do things to make MLP, Decision Tree, etc. outputs be probabilities and even add variance, but these are not really exact probabilities/distributions
    • Softmax, ad hoc approaches, etc.
• A distribution would be nice, but usually the most important goal is best overall accuracy
  – If we can have an accurate model that is a distribution, then that is advantageous; otherwise…
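A minimal sketch of the softmax option mentioned above: it maps raw model outputs to values that look like a distribution, but, as the slide notes, these are not calibrated posteriors:

    import numpy as np

    def softmax(logits):
        """Map raw MLP output scores to values in (0,1) that sum to 1."""
        z = np.asarray(logits, dtype=float)
        z = z - z.max()              # subtract the max for numerical stability
        e = np.exp(z)
        return e / e.sum()

    print(softmax([2.0, 1.0, 0.1]))  # ~[0.66, 0.24, 0.10] -- sums to 1, but is
                                     # not an exact probability distribution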
Bayes Optimal Classifiers


• The best question is: what is the most probable classification for a given instance, rather than what is the most probable hypothesis for a data set
• Let all possible hypotheses vote for the instance in question, weighted by their posterior (an ensemble approach) - usually better than the single best MAP hypothesis
  P(vj | D, H) = Σ_{hi ∈ H} P(vj | hi) P(hi | D) = Σ_{hi ∈ H} P(vj | hi) P(D | hi)P(hi)/P(D)
• Bayes Optimal Classification:
  argmax_{vj ∈ V} Σ_{hi ∈ H} P(vj | hi) P(hi | D) = argmax_{vj ∈ V} Σ_{hi ∈ H} P(vj | hi) P(D | hi) P(hi)
• Example: 3 hypotheses with different priors and posteriors; show results for ML, MAP, Bag, and Bayes optimal
  – Discrete and probabilistic outputs
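A toy version of the slide's example (the posteriors P(h|D) and per-hypothesis class probabilities below are hypothetical). It shows how the Bayes optimal vote can disagree with the single MAP hypothesis:

    # 3 hypotheses, 2 classes. Each hypothesis gives P(v|h) for the instance
    # in question; the posteriors P(h|D) are hypothetical.
    posterior = {"h1": 0.4, "h2": 0.3, "h3": 0.3}          # P(h|D)
    class_prob = {                                          # P(v|h) for classes A, B
        "h1": {"A": 1.0, "B": 0.0},
        "h2": {"A": 0.0, "B": 1.0},
        "h3": {"A": 0.0, "B": 1.0},
    }

    # The MAP hypothesis alone predicts A.
    h_map = max(posterior, key=posterior.get)
    print("MAP prediction:", max(class_prob[h_map], key=class_prob[h_map].get))

    # Bayes optimal: weight every hypothesis's vote by its posterior.
    votes = {v: sum(posterior[h] * class_prob[h][v] for h in posterior)
             for v in ("A", "B")}
    print("Bayes optimal votes:", votes)                    # {'A': 0.4, 'B': 0.6}
    print("Bayes optimal prediction:", max(votes, key=votes.get))   # B

    # A plain uniform vote (bagging-style) also predicts B on this toy instance.
    uniform = {v: sum(class_prob[h][v] for h in class_prob) / len(class_prob)
               for v in ("A", "B")}
    print("Uniform vote:", uniform)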
Bayes Optimal Classifiers (Cont)



• No other classification method using the same hypothesis space can outperform a Bayes optimal classifier on average, given the available data and prior probabilities over the hypotheses
• Large or infinite hypothesis spaces make this impractical in general
• Also, it is only as accurate as our knowledge of the priors (background knowledge) for the hypotheses, which we often do not know
  – But if we do have some insights, priors can really help
  – For example, it would automatically handle overfit, with no need for a validation set, early stopping, etc.
• If our priors are bad, then Bayes optimal will not be optimal. For example, if we just assumed uniform priors, then we might have a situation where the many lower-posterior hypotheses could dominate the fewer high-posterior ones.
• However, this is an important theoretical concept, and it leads to many practical algorithms which are simplifications based on the concepts of full Bayes optimality
Bayesian Model Averaging

• The most common Bayesian approach to "model combining"
  – A Bayesian would not call BMA a model combining approach, and that really isn't its goal
• Assumes that the correct h is in the hypothesis space H and that the data was generated by this correct h (with possible noise)
• The Bayes equation simply expresses the uncertainty that the correct h has been chosen
  P(vj | xj, D) = Σ_{i=1..M} P(vj | xj, hi, D) P(hi | D)
• Looks like model combination, but as more data is given, the P(h|D) for the highest likelihood model dominates
  – A problem with practical Bayes optimal: the MAP hypothesis will eventually dominate.
Bayesian Model Averaging
P(vj | xj, D) = Σ_{i=1..M} P(vj | xj, hi, D) P(hi | D)
• Even if the top 3 models have accuracy 90.1%, 90%, and 90%, only the top model will be significantly considered as the data increases
  – All posteriors must sum to 1, and as data increases the variance decreases and the probability mass converges to the MAP hypothesis
• This is overfit for typical ML, but exactly what BMA seeks
  – And in fact, empirically, BMA is usually less accurate than even simple model combining techniques (bagging, etc.)
• How to select the M models:
  – Heuristic - keep models with a combination of simplicity and highest accuracy
  – Gibbs - randomly sample models based on their probability
  – MCMC - start at model Mi, sample, then probabilistically transition to itself or a neighbor model
• Gibbs and MCMC require the ability to generate many arbitrary models and possibly many samples before convergence
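A toy illustration of this collapse onto the MAP model, not from the slides: it crudely treats each model's data likelihood as accuracy^N (as if the model assigned probability equal to its accuracy to every observed label), which is enough to show the qualitative effect:

    import numpy as np

    # Three models with accuracies 0.901, 0.900, 0.900 and equal priors.
    acc = np.array([0.901, 0.900, 0.900])
    prior = np.array([1/3, 1/3, 1/3])

    for n in (10, 1000, 100000):
        log_post = n * np.log(acc) + np.log(prior)     # work in log space
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        print(f"N={n:6d}  posterior weights = {np.round(post, 3)}")
    # With enough data nearly all weight moves to the 0.901 model, even though
    # it is barely better than the others.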
Model Combination - Ensembles





• One of the significant potential advantages of model combination is an enrichment of the original hypothesis space H, or an easier ability to arrive at accurate members of H
• There are three members of H in the figure to the right
• BMA would give almost all weight to the top sphere
• The optimal solution is a uniform vote between the 3 spheres (all h's)
• This optimal solution h' is not in the original H, but is part of the larger H' created when we combine models
Bayesian Model Combination

• Could do Bayesian model combination, where we still have priors but they are over combinations of models
• E is the space of model combinations using hypotheses from H:
  P(vj | xj, D) = Σ_{hi ∈ H} P(vj | xj, hi, D) P(hi | D)
  P(vj | xj, D, E) = Σ_{ei ∈ E} P(vj | xj, ei, D) P(ei | D)
• This would move confidence over time to one particular combination of models
• Ensembles, on the other hand, are typically ad hoc but still often lead empirically to more accurate overall solutions
  – BMC would actually be the more fair comparison between ensembles and Bayes optimal, since in that case Bayes would be trying to find exactly one ensemble, where usually it tries to find one h
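A toy sketch of the BMC idea under heavy simplifying assumptions: E is taken to be a coarse grid of weight vectors over three base models, the data and per-model predictions are simulated, and each combination's validation likelihood stands in for P(e|D). None of this comes from the slides; it only shows "a posterior over combinations" mechanically:

    import numpy as np

    rng = np.random.default_rng(1)
    n_val = 200
    y = rng.integers(0, 2, n_val)                       # validation labels
    # Per-model predicted probability of class 1 on each validation instance
    # (simulated; the third model is the weakest).
    model_probs = np.clip(np.stack([y + rng.normal(0, s, n_val)
                                    for s in (0.3, 0.35, 0.6)]), 0.01, 0.99)

    # Candidate combinations E: all weight vectors on a coarse simplex grid.
    grid = [np.array([a, b, 10 - a - b]) / 10
            for a in range(11) for b in range(11 - a)]

    def log_likelihood(w):
        p = w @ model_probs                             # combined P(class 1)
        return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    log_post = np.array([log_likelihood(w) for w in grid])   # uniform prior over E
    post = np.exp(log_post - log_post.max())
    post /= post.sum()

    best = grid[int(np.argmax(post))]                        # MAP combination
    bmc_weights = sum(p * w for p, w in zip(post, grid))     # posterior-averaged weights
    print("MAP combination:", best)
    print("BMC-averaged weights:", np.round(bmc_weights, 2))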