Partial Shape Matching

Outline:
• Motivation
• Sum of Squared Distances
Representation Theory
Motivation
We have seen a large number of different shape descriptors:
– Shape Distributions
– Extended Gaussian Images
– Shape Histograms
– Gaussian EDT
– Wavelets
– Spherical Parameterizations
– Spherical Extent Functions
– Light Field Descriptors
– Shock Graphs
– Reeb Graphs
Challenge
Partial shape matching problem:
Given a part of a model S and a whole model M, determine whether the part is a subset of the whole, S ⊆ M.
[Figure: the part S and the whole model M]

Difficulty
For whole-object matching, we would associate a shape descriptor v_M to every model M and would define the measure of similarity between models M and N as the distance between their descriptors:
D(M, N) = ||v_M - v_N||
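As a minimal sketch of this whole-object scheme (the descriptor values below are made up for illustration), the dissimilarity is just a vector norm:

```python
import numpy as np

# Hypothetical shape descriptors for two whole models M and N
v_M = np.array([0.2, 0.5, 0.1, 0.7])
v_N = np.array([0.3, 0.4, 0.2, 0.6])

# D(M, N) = ||v_M - v_N||: small values mean similar shapes,
# and D(M, M) = 0 for identical descriptors
D = np.linalg.norm(v_M - v_N)
```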
Difficulty
For partial object matching, we cannot use the same approach:
– Vector differences are symmetric but subset matching is not:
||v_M - v_N|| = ||v_N - v_M||
but S ⊆ M does not imply M ⊆ S.
– If S ⊆ M, then we would like to have v_S ≠ v_M and yet:
D(S, M) = 0
which means that we cannot use difference norms to measure similarity.
Motivation
We have seen a number of different ways of addressing the alignment problem:
– Center of Mass Normalization
– Scale Normalization
– PCA Alignment
– Translation Invariance
– Rotation Invariance
Motivation
Most of these methods will give very different descriptors if only part of the model is given.
– The center of mass, variance, and principal axes of a part of the model will not be the same as those of the whole.
Motivation
Most of these methods will give very different descriptors if only part of the model is given.
– Changing the values of a function will change the (non-constant) frequency distribution in non-trivial ways.
Outline:
• Motivation
• Sum of Squared Distances
Goal
Design a new paradigm for shape matching that associates a simple structure to each shape, M → v_M, such that if S ⊆ M, then:
– v_S ≠ v_M (unless S = M)
– but D(S, M) = 0
That is, we would like to define a measure of similarity that answers the question:
“How close is S to being a subset of M?”
Key Idea
Instead of using the norm of the difference, use the dot product:
D(S, M) = ⟨ṽ_S, v_M⟩
Then S is a subset of M if ṽ_S is orthogonal to v_M.
To do this, we have to define different descriptors for a model depending on whether it is the target or the query.
Implementation
For a model M, represent the model by two different 3D functions:
ṽ_M = Raster_M(p) = { 1 if p ∈ M; 0 otherwise }
v_M = EDT_M(p) = min_{q ∈ M} ||p - q||²
[Figure: the model M, its rasterization Raster_M, and its squared Euclidean distance transform EDT_M]
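A small sketch of these two representations on a voxel grid (pure NumPy, brute force; the function names are ours, not from the slides). Here Raster_M is taken as the occupancy mask and EDT_M stores the squared distance to the nearest occupied voxel:

```python
import numpy as np

def raster(mask):
    # Raster_M(p): 1 where the voxel belongs to the model, 0 elsewhere
    return mask.astype(float)

def squared_edt(mask):
    # EDT_M(p) = min over q in M of ||p - q||^2, by brute force over M's voxels
    model_pts = np.argwhere(mask)                           # voxels of M
    grid = np.indices(mask.shape).reshape(mask.ndim, -1).T  # all voxels p
    d2 = ((grid[:, None, :] - model_pts[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).reshape(mask.shape).astype(float)

# Toy 2D example: a single occupied voxel at (1, 1)
M = np.zeros((4, 4), dtype=bool)
M[1, 1] = True
edt2 = squared_edt(M)   # edt2[3, 3] == 8.0: two steps along each axis
```

Note that `raster(M) * edt2` vanishes everywhere, which is exactly the orthogonality that the inner-product distance exploits.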
Implementation
Then Raster_M is non-zero only on the boundary points of the model, and EDT_M is non-zero everywhere else. Consequently, we have:
Raster_M(p) · EDT_M(p) = 0 for all p
and hence:
⟨ṽ_M, v_M⟩ = ∫ Raster_M(p) · EDT_M(p) dp = 0.
[Figure: M, Raster_M, EDT_M]
Implementation
Moreover, if S ⊆ M, then we still have:
Raster_S(p) · EDT_M(p) = 0 for all p
so that:
⟨ṽ_S, v_M⟩ = ∫ Raster_S(p) · EDT_M(p) dp = 0.
[Figure: the part S (Raster_S) inside the whole model M (EDT_M)]
What is the value of D(S,M)?
D(S, M) = ⟨ṽ_S, v_M⟩
        = ∫ Raster_S(p) · EDT_M(p) dp
Since Raster_S is equal to 1 for points that lie on S and equal to 0 everywhere else:
D(S, M) = ∫_{p ∈ S} EDT_M(p) dp
What is the value of D(S,M)?
D(S, M) = ∫_{p ∈ S} EDT_M(p) dp
So the distance between S and M is equal to the sum of squared distances from points on S to the nearest points on M.
[Figure: the part S and the whole model M]
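This interpretation can be sketched directly on voxel sets: sum, over the voxels of S, the squared distance to the nearest voxel of M (pure NumPy brute force; the names are illustrative):

```python
import numpy as np

def partial_match(mask_S, mask_M):
    # D(S, M) = sum over p in S of min_{q in M} ||p - q||^2
    S_pts = np.argwhere(mask_S)
    M_pts = np.argwhere(mask_M)
    d2 = ((S_pts[:, None, :] - M_pts[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).sum()

# If S is a subset of M the score is exactly 0; it grows as S leaves M
M = np.zeros((5, 5), dtype=bool); M[1:4, 1:4] = True
S_in = np.zeros_like(M);  S_in[2, 2] = True    # voxel inside M
S_out = np.zeros_like(M); S_out[4, 4] = True   # voxel outside M
```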
What is the value of D(S,M)?
D(S, M) = ∫_{p ∈ S} EDT_M(p) dp
Note that if we rasterize the models into an n×n×n voxel grid, then a brute-force computation would compute the sum of the distances for each of the O(n²) points on the query by testing against each of the O(n²) points on the target for the minimum distance, giving a total running time of O(n⁴). By pre-computing the EDT, we reduce the computation to O(n²) operations.
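The pre-computation might look like the following sketch, which uses SciPy's `distance_transform_edt` to build EDT_M once and then sums it over the query's voxels (the function name and grid setup are ours):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def partial_match_fast(mask_S, mask_M):
    # Pre-compute EDT_M: squared distance from every voxel to the nearest
    # voxel of M. distance_transform_edt measures distance to the nearest
    # zero of its input, so we feed it the complement of M's mask.
    edt2 = distance_transform_edt(~mask_M) ** 2
    # D(S, M): one lookup-and-add per query voxel instead of a scan of M
    return edt2[mask_S].sum()

M = np.zeros((5, 5), dtype=bool); M[1:4, 1:4] = True
S = np.zeros_like(M); S[4, 4] = True
```

Building the distance transform touches each voxel a bounded number of times, so the per-query cost drops from scanning all of M to a single table lookup per voxel of S.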
Advantages
Model similarity is defined in terms of the dot product:
– We can still use SVD for efficiency and compression (since rotations do not change the dot product).
– We can still use fast correlation methods (translation, rotation, axial flip), but now we want to find the transformation minimizing the correlation.
Advantages
We can use a symmetric version of this for whole-object matching.
[Chart: retrieval performance (0–100%) comparing Sum of Squared Distances (3D), Gaussian EDT (3D), Shape Histograms (3D), Spherical Extent Function (2D), Extended Gaussian Image (2D), and D2 (1D)]
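One way to sketch such a symmetric score is to sum the one-sided measure in both directions, D(A,B) + D(B,A); this particular symmetrization and the names below are our assumption, not spelled out in the slides:

```python
import numpy as np

def one_sided(mask_S, mask_M):
    # sum over p in S of the squared distance to the nearest voxel of M
    s = np.argwhere(mask_S); m = np.argwhere(mask_M)
    return ((s[:, None, :] - m[None, :, :]) ** 2).sum(-1).min(axis=1).sum()

def symmetric_match(mask_A, mask_B):
    # zero exactly when A and B occupy the same voxels
    return one_sided(mask_A, mask_B) + one_sided(mask_B, mask_A)
```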
Advantages
We can perform importance matching by assigning a value larger than 1 to sub-regions of the rasterization.
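Importance matching can be sketched by replacing the 0/1 raster of the query with a nonnegative weight per voxel, so heavily weighted regions pay a larger penalty for lying far from M (the weights and names here are illustrative):

```python
import numpy as np

def weighted_partial_match(weights_S, mask_M):
    # D(S, M) = sum over p of w_S(p) * EDT_M(p); w_S(p) > 1 marks voxels
    # whose placement relative to M matters more than the rest
    M_pts = np.argwhere(mask_M)
    S_pts = np.argwhere(weights_S > 0)
    w = weights_S[weights_S > 0]
    d2 = ((S_pts[:, None, :] - M_pts[None, :, :]) ** 2).sum(-1).min(axis=1)
    return (w * d2).sum()

M = np.zeros((4, 4), dtype=bool); M[0, 0] = True
W = np.zeros((4, 4)); W[0, 1] = 3.0   # squared distance 1, weight 3
```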
Disadvantage
Aside from using fast Fourier / Spherical-Harmonic / Wigner-D transforms, we still have no good way to address the alignment problem.