Hidden Markov Model Based 2D Shape Classification
Ninad Thakoor (1) and Jean Gao (2)
(1) Electrical Engineering, University of Texas at Arlington, TX-76013, USA
(2) Computer Science and Engineering, University of Texas at Arlington, TX-76013, USA
Introduction
• Problem of object recognition
  • Shape recognition
  • Shape classification
• Shape classification techniques
  • Dynamic programming based
  • Hidden Markov Model (HMM) based
• Advantages of HMM
  • Time warping capability
  • Robustness
  • Probabilistic framework
Introduction (cont.)
• Limitations of HMM
  • Unable to distinguish between similar shapes
  • No mechanism to select important parts of the shape
  • Does not guarantee minimum classification error
• The proposed method addresses these limitations by designing a weighted likelihood discriminant function and formulating a minimum-error training algorithm for it.
Terminology
• S, the set of HMM states. The state of the HMM at instance t is denoted by qt.
• A, the state transition probability distribution. A = {aij}, where aij denotes the probability of changing the state from Si to Sj.
• B, the observation symbol probability distribution. B = {bj(o)}, where bj(o) gives the probability of observing the symbol o in state Sj at instance t.
• π, the initial state distribution. π = {πi}, where πi gives the probability of the HMM being in state Si at instance t = 1.
• Cj is the jth shape class, where j = 1, 2, …, M. The HMM for Cj can be denoted compactly as λj = (Aj, Bj, πj).
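As an illustrative sketch (not from the original slides), the terminology above maps onto a small container of NumPy arrays; the names and shapes below are assumptions chosen to match the definitions of A, B, and π.

import numpy as np
from typing import NamedTuple

class ShapeHMM(NamedTuple):
    """Compact parameters lambda_j = (A_j, B_j, pi_j) for one shape class C_j."""
    A: np.ndarray      # (N, N) state transition probabilities a_ij; rows sum to 1
    means: np.ndarray  # (N,) Gaussian means of b_j(o): constant curvature per state
    stds: np.ndarray   # (N,) Gaussian standard deviations of b_j(o)
    pi: np.ndarray     # (N,) initial state distribution

def log_b(model: ShapeHMM, o: float) -> np.ndarray:
    """Log-probability of observing turn angle o in each state, i.e. log b_j(o)."""
    return (-0.5 * ((o - model.means) / model.stds) ** 2
            - np.log(model.stds * np.sqrt(2.0 * np.pi)))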
Shape description with HMM
• The shape is assumed to be formed by multiple constant curvature segments. These segments are the hidden states of the HMM.
• Each state is assumed to have a Gaussian observation distribution whose mean is the constant curvature of the segment.
• Noise and fine details of the shape are absorbed by the standard deviation of the state distribution.
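For intuition, here is a minimal generative sketch (an illustration of the modeling assumption, not the authors' code): turn angles of a shape built from constant-curvature segments are sampled as Gaussian noise around each segment's curvature.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: three constant-curvature segments (hidden states),
# each emitting noisy turn angles around its mean curvature.
segment_curvatures = [0.30, 0.00, -0.15]   # per-state Gaussian means
segment_lengths = [40, 25, 35]             # samples spent in each state
noise_std = 0.05                           # per-state standard deviation

turn_angles = np.concatenate([
    rng.normal(c, noise_std, n)
    for c, n in zip(segment_curvatures, segment_lengths)
])
# turn_angles plays the role of the observation sequence O for the HMM.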
HMM construction
• Preprocessing
  • Filter the shape
  • Normalize the shape length to T
  • Calculate the discrete curvature (i.e., turn angles), which is treated as the observation sequence for the HMM (see the sketch after this list)
• Initialization
  • A Gaussian mixture model with N clusters is built from the unrolled example sequences (see the sketch after this list)
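A minimal sketch of these two steps, assuming a closed contour given as (x, y) points; the function names, the Gaussian smoothing choice, and the use of scikit-learn's GaussianMixture are my assumptions, not the original implementation.

import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.mixture import GaussianMixture

def turn_angles(contour: np.ndarray, T: int = 200, sigma: float = 2.0) -> np.ndarray:
    """Filter and resample a closed contour to T points; return discrete curvature."""
    # Smooth x and y coordinates (closed boundary assumed, hence wrap mode).
    xy = gaussian_filter1d(contour.astype(float), sigma, axis=0, mode='wrap')
    # Resample to T points, uniformly spaced by arc length.
    seg = np.linalg.norm(np.diff(xy, axis=0, append=xy[:1]), axis=1)
    s = np.concatenate(([0.0], np.cumsum(seg)))[:-1]
    perimeter = s[-1] + seg[-1]
    t = np.linspace(0.0, perimeter, T, endpoint=False)
    xy = np.column_stack([np.interp(t, s, xy[:, 0], period=perimeter),
                          np.interp(t, s, xy[:, 1], period=perimeter)])
    # Turn angle = change in tangent direction between consecutive edges,
    # wrapped into (-pi, pi].
    d = np.diff(xy, axis=0, append=xy[:1])
    theta = np.arctan2(d[:, 1], d[:, 0])
    return np.angle(np.exp(1j * np.diff(theta, append=theta[:1])))

def init_gmm(sequences, N: int) -> GaussianMixture:
    """Pool (unroll) all training sequences of a class and cluster into N components."""
    pooled = np.concatenate(sequences).reshape(-1, 1)
    return GaussianMixture(n_components=N, covariance_type='diag').fit(pooled)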
HMM construction (cont.)
• Training
  • Individual HMMs are trained with the Baum-Welch algorithm for a varying number of states N
  • Model selection (i.e., the optimum N) is carried out with the Bayesian Information Criterion (BIC)
  • N is selected to maximize the BIC (see the sketch after this list)
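A minimal training-and-selection sketch. The use of hmmlearn and the exact parameter count in the BIC are assumptions; the slides only state that Baum-Welch training is run for several N and the N maximizing the BIC (log-likelihood penalized by model size) is kept.

import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_with_bic(sequences, candidate_N=range(2, 9)):
    """Train a GaussianHMM per candidate N and keep the one maximizing the BIC."""
    X = np.concatenate(sequences).reshape(-1, 1)
    lengths = [len(s) for s in sequences]
    best = None
    for N in candidate_N:
        hmm = GaussianHMM(n_components=N, covariance_type='diag', n_iter=100)
        hmm.fit(X, lengths)
        log_l = hmm.score(X, lengths)
        # Free parameters: pi (N-1) + A (N*(N-1)) + means (N) + variances (N).
        k = (N - 1) + N * (N - 1) + N + N
        bic = log_l - 0.5 * k * np.log(len(X))   # form to be maximized
        if best is None or bic > best[0]:
            best = (bic, hmm)
    return best[1]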
Weighted likelihood (WtL) discriminant
• Motivation
  • Similar objects can be discriminated by comparing only parts of the shapes
  • No point-wise comparison is required for shape classification
  • The maximum likelihood criterion gives equal importance to all shape points
• The WtL function weights the likelihoods of individual observations so that the ones important for classification are weighted more heavily.
WtL discriminant (cont.)
• The log-likelihood of the optimal state path Q*, followed for an observation sequence O, decomposes into a sum of per-observation terms.
• A simple weighted likelihood discriminant can be defined by weighting these per-observation terms (see the sketch below).
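The slide equations are not preserved in the text; the following is a standard reconstruction (my assumption) consistent with the definitions above, where L_j(t) denotes the per-observation log-likelihood term for class Cj:

\log P(O, Q^{*} \mid \lambda_j) = \sum_{t=1}^{T} L_j(t), \qquad L_j(t) = \log a_{q^{*}_{t-1} q^{*}_{t}} + \log b_{q^{*}_{t}}(o_t),

with the t = 1 transition term read as \log \pi_{q^{*}_{1}}. A simple weighted likelihood discriminant is then

g_j(O) = \sum_{t=1}^{T} w_j(t)\, L_j(t).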
WtL discriminant (cont.)
• We use the following weighting function, which is a sum of S Gaussian windows (see the sketch below).
• For the ith window of the jth class, the parameter pi,j governs the height, a center parameter controls the position, and si,j determines the spread.
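The weighting function itself appeared as an equation image; a sum-of-Gaussian-windows form consistent with the description is sketched below, with c_{i,j} used as an assumed symbol for the window center:

w_j(t) = \sum_{i=1}^{S} p_{i,j}\, \exp\!\left(-\frac{(t - c_{i,j})^{2}}{2\, s_{i,j}^{2}}\right)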
GPD algorithm
• Misclassification measure
• Cost function
• Re-estimation rule (see the sketch below)
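The three quantities were slide equations. A standard GPD/MCE formulation (Juang–Katagiri style), which I assume the slide followed, for class Cj with the WtL discriminant g_j:

Misclassification measure:
d_j(O) = -g_j(O) + \log\!\left[\frac{1}{M-1}\sum_{k \neq j} e^{\eta\, g_k(O)}\right]^{1/\eta}

Sigmoidal cost function:
\ell_j(O) = \frac{1}{1 + e^{-\gamma\, d_j(O)}}

Re-estimation rule (gradient descent on the cost with step size \epsilon_n):
\Lambda_{n+1} = \Lambda_n - \epsilon_n\, \nabla_{\Lambda}\, \ell_j(O_n),

where \Lambda collects the window parameters p_{i,j}, c_{i,j}, s_{i,j}.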
Experimental results
• Plane shapes
• Classification accuracies (in %)
Experimental results (cont.)
• Discriminant function comparison: HMM ML vs. HMM WtL
Questions?
• Please email your questions to ninad.thakoor@uta.edu or ninad.thakoor@ieee.org
• A copy of the presentation is available at http://visionlab.uta.edu/~ninad/acivs2005/
THANK YOU!!!!!