Enhanced EM (EEM)
Algorithm
G. R. Xuan1, Y. Q. Shi2, P. Chai1,
P. Sutthiwan2
1Tongji University, Shanghai, China
2NJIT, New Jersey, USA
ICPR2012
Conventional EM algorithm
• The EM algorithm is a powerful tool for unsupervised learning of Gaussian mixture models (GMM) [Dempster et al. 77].
• Its convergence has been mathematically proved.
• However, it may converge to a local maximum.
• It may suffer from occasional singularity.
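The conventional EM iteration alternates an E-step (compute responsibilities) and an M-step (re-estimate parameters). A minimal sketch for a 1-D GMM is given below; the function name, interface, and iteration count are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def em_gmm_1d(x, init_means, n_iter=200):
    """Sketch of conventional EM for a 1-D Gaussian mixture
    (illustrative only; not the paper's implementation)."""
    x = np.asarray(x, dtype=float)
    means = np.asarray(init_means, dtype=float)
    k = means.size
    weights = np.full(k, 1.0 / k)
    variances = np.full(k, x.var())

    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = P(component j | x_i)
        diff = x[:, None] - means[None, :]
        pdf = np.exp(-0.5 * diff ** 2 / variances) / np.sqrt(2.0 * np.pi * variances)
        num = weights * pdf
        r = num / num.sum(axis=1, keepdims=True)

        # M-step: re-estimate weights, means, and variances
        nk = r.sum(axis=0)
        weights = nk / x.size
        means = (r * x[:, None]).sum(axis=0) / nk
        variances = (r * (x[:, None] - means) ** 2).sum(axis=0) / nk

    return weights, means, variances
```

As the slide notes, the result of this iteration depends on `init_means`: a poor initial condition can land in a local maximum.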
First novelty
The uniform distribution (hence maximum entropy) is proposed as the initial condition.
– In our extensive experimental work, the EEM algorithm achieves global optimality.
– If a particular uniform distribution is used as the initial condition, the solution is globally optimal and repeatable.
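One way to realize a fixed, maximum-entropy-style initial condition is equal mixing weights, evenly spaced means, and one broad shared variance. The slide does not spell out the exact construction, so every detail below is an assumption for illustration:

```python
import numpy as np

def uniform_init(x, k):
    """Hedged sketch of a fixed 'uniform distribution' initial
    condition (the paper's exact construction is not given here)."""
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(), x.max()
    weights = np.full(k, 1.0 / k)             # flat (uniform) mixing weights
    means = np.linspace(lo, hi, k + 2)[1:-1]  # evenly spaced interior points
    variances = np.full(k, (hi - lo) ** 2)    # broad, near-flat components
    return weights, means, variances
```

Because nothing here is random, running the initializer twice on the same data gives identical parameters, which matches the slide's claim that a fixed uniform initial condition makes the solution repeatable.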
Second novelty
• Singularity avoidance by perturbation
– That is, for each possible source of singularity, e.g., 1/x, log(x), or C⁻¹, we propose to add a small positive value, ε, to avoid the singularity.
– Normally ε = 10⁻²⁰
– For example, we use:
• 1/(x + ε)
• log(x + ε)
• (C + εI)⁻¹
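The three perturbed forms on this slide can be written directly as small helpers. Note that ε = 10⁻²⁰ guards against exact zeros and exactly singular matrices; a nearly singular covariance in floating point may still need a larger ε in practice:

```python
import numpy as np

EPS = 1e-20  # the small positive value ε from the slide

def safe_reciprocal(x):
    # 1/(x + ε): finite even when x == 0
    return 1.0 / (x + EPS)

def safe_log(x):
    # log(x + ε): finite even when x == 0
    return np.log(x + EPS)

def safe_inverse(C):
    # (C + εI)^(-1): perturb a (possibly singular) covariance matrix
    C = np.asarray(C, dtype=float)
    return np.linalg.inv(C + EPS * np.eye(C.shape[0]))
```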
Others
• Histogram is used as input.
– G. Xuan et al., "EM algorithm of Gaussian mixture model and hidden Markov model," ICIP 2001.
• Performance on GMM is good.
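Using a histogram as input means bin counts act as weights in place of raw samples. The sketch below shows the idea for the sufficient statistics only; names are illustrative, and the full weighted EM is described in the cited ICIP 2001 paper:

```python
import numpy as np

def weighted_stats(centers, counts):
    """Sketch: compute mean and variance from a histogram, with bin
    counts as weights (illustrative, not the paper's full method)."""
    centers = np.asarray(centers, dtype=float)
    counts = np.asarray(counts, dtype=float)
    p = counts / counts.sum()                 # normalize counts to a pmf
    mean = np.sum(p * centers)
    var = np.sum(p * (centers - mean) ** 2)
    return mean, var
```

This makes each EM iteration scale with the number of bins rather than the number of samples.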
Note
• With a fixed sharp Gaussian distribution initialization, five different solutions are obtained.
• With a non-fixed (stochastically selected) uniform distribution initialization, multiple solutions occasionally result.
• With a fixed uniform distribution as initialization, the solution is optimal and repeatable.