Application to Image Registration

• Probability density function (pdf) estimation using isocontours/isosurfaces
• Application to Image Registration
• Application to Image Filtering
• Circular/spherical density estimation in Euclidean space
Existing estimators: histograms, kernel density estimates, mixture models. All are sample-based methods:
• They do not treat a signal as a signal.
• They require parameter selection: bin-width / bandwidth / number of components.
• They face a bias/variance tradeoff (large bandwidth: high bias; low bandwidth: high variance).
Key idea: build a continuous image representation using some interpolant, then trace out isocontours of the intensity function I(x,y) at several intensity values.
P (Level
 I Level
Curves
 Curves

 atarea
of
at )Intensity
Intensity
 brown
and 
region

• Assume a uniform density on (x, y).
• Apply a random-variable transformation from (x, y) to (I, u), where u is the direction along the level set (a dummy variable).
• Integrate out u to get the density of intensity α.

$$p_I(\alpha) = \frac{1}{|\Omega|}\,\frac{d}{d\alpha}\iint_{I(x,y)\le\alpha} dx\,dy = \frac{1}{|\Omega|}\oint_{I(x,y)=\alpha}\frac{du}{\sqrt{I_x^2 + I_y^2}}$$
Every point in the image
domain contributes to
the density.
Published in CVPR 2006, PAMI 2009.
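As an illustration of this estimator, here is a minimal numerical sketch: level curves are traced with scikit-image's marching squares, and arc length divided by gradient magnitude is summed along them. The synthetic test image and the rounding of gradient lookups to segment midpoints are assumptions for the sketch, not the published implementation.

```python
# Minimal sketch of p_I(alpha) = (1/|Omega|) * sum over level-curve
# segments of (segment length) / |grad I|, using marching squares.
import numpy as np
from skimage import measure

def isocontour_density(image, alpha, eps=1e-8):
    gy, gx = np.gradient(image.astype(float))      # d/d(row), d/d(col)
    grad_mag = np.sqrt(gx**2 + gy**2)
    total = 0.0
    for contour in measure.find_contours(image, alpha):
        seg = np.diff(contour, axis=0)             # contour: (N, 2) points
        seg_len = np.sqrt((seg**2).sum(axis=1))
        mid = 0.5 * (contour[:-1] + contour[1:])   # segment midpoints
        r = np.clip(np.round(mid[:, 0]).astype(int), 0, image.shape[0] - 1)
        c = np.clip(np.round(mid[:, 1]).astype(int), 0, image.shape[1] - 1)
        total += float(np.sum(seg_len / (grad_mag[r, c] + eps)))
    return total / image.size                      # divide by |Omega|

# Example on a smooth synthetic image:
x, y = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
img = 0.5 + 0.4 * np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)
print(isocontour_density(img, 0.5))
```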
[Figure: level curves at intensity α₁ in I₁ and at intensity α₂ in I₂; P(α₁ ≤ I₁ ≤ α₁ + Δ₁, α₂ ≤ I₂ ≤ α₂ + Δ₂) = area of the (black) intersection region.]

$$p_{I_1,I_2}(\alpha_1, \alpha_2) = \frac{1}{|\Omega|} \sum_{(x,y)\in C} \frac{1}{|\nabla I_1(x,y)|\,|\nabla I_2(x,y)|\,|\sin\theta(x,y)|}$$

where C = {(x, y) : I₁(x, y) = α₁, I₂(x, y) = α₂} and θ(x, y) is the angle between the gradients at (x, y).
Relationships between
geometric and probabilistic
entities.
• A similar density estimator was developed by Kadir and Brady (BMVC 2005), independently of us.
• Similar idea, but with several differences in implementation, motivation, derivation of results, and applications.
$$p_I(\alpha) = \frac{1}{|\Omega|}\oint_{I(x,y)=\alpha}\frac{du}{|\nabla I(x,y)|}$$

$$p_{I_1,I_2}(\alpha_1, \alpha_2) = \frac{1}{|\Omega|} \sum_{(x,y)\in C} \frac{1}{|\nabla I_1(x,y)|\,|\nabla I_2(x,y)|\,|\sin\theta(x,y)|}, \qquad C = \{(x,y) : I_1(x,y) = \alpha_1,\ I_2(x,y) = \alpha_2\}$$

Densities (derivatives of the cumulative) do not exist where image gradients are zero, or where the image gradients run parallel. Instead, compute cumulative interval measures P(α ≤ I ≤ α + Δ) (a numerical sketch follows).
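A quick sketch of such an interval measure: the area between the level curves at α and α + Δ is approximated here by densely sampling a bilinear interpolant, whereas the exact method computes polygon areas. The upsampling factor is an assumption of the sketch.

```python
# Sketch: P(alpha <= I < alpha + delta) as the area fraction between the
# level curves at alpha and alpha + delta, via bilinear upsampling.
import numpy as np
from scipy.ndimage import zoom

def interval_measure(image, alpha, delta, factor=8):
    fine = zoom(image.astype(float), factor, order=1)  # bilinear interpolant
    return float(np.mean((fine >= alpha) & (fine < alpha + delta)))
```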
[Figure: density estimates with 32, 64, 128, 256, 512, and 1024 bins; standard histograms vs. the isocontour method.]
• A randomized/digital approximation to the area calculation.
• A strict lower bound on the accuracy of the isocontour method, for a fixed interpolant.
• Computationally more expensive than the isocontour method.
[Figure: joint density estimates with 128 × 128 bins.]
• Simplest one: linear interpolant in each half-pixel (level curves are line segments).
• Low-order polynomial interpolants: high bias, low variance.
• High-order polynomial interpolants: low bias, high variance.
Polynomial interpolants make assumptions on the signal: the better the interpolant, the more the accuracy of the estimated density improves as the signal is sampled at finer resolution.
For a bandlimited analog signal with a Nyquist-sampled digital signal: accurate reconstruction by the sinc interpolant (Whittaker-Shannon sampling theorem).
• Probability density function (pdf) estimation using isocontours
• Application to Image Registration
• Application to Image Filtering
• Circular/spherical density estimation in Euclidean space
• Given two images of an object, find the geometric transformation that "best" aligns one with the other, w.r.t. some image similarity measure.
• Mutual Information: a well-known similarity measure; Viola and Wells (IJCV 1995) and Maes et al. (TMI 1997).
• Insensitive to illumination changes: useful in multimodality image registration.
$$MI(I_1, I_2) = H(I_1) - H(I_1 \mid I_2) = H(I_1) + H(I_2) - H(I_1, I_2)$$

Marginal entropies, from the marginal probabilities p₁(α₁) and p₂(α₂):
$$H(I_1) = -\sum_{\alpha_1} p_1(\alpha_1)\log p_1(\alpha_1), \qquad H(I_2) = -\sum_{\alpha_2} p_2(\alpha_2)\log p_2(\alpha_2)$$

Joint entropy, from the joint probability p₁₂(α₁, α₂):
$$H(I_1, I_2) = -\sum_{\alpha_1}\sum_{\alpha_2} p_{12}(\alpha_1, \alpha_2)\log p_{12}(\alpha_1, \alpha_2)$$
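For concreteness, here is a minimal sketch of MI computed from a joint intensity histogram, i.e. the standard-histogram baseline that the isocontour method improves on; the bin count and the use of np.histogram2d are assumptions of the sketch.

```python
# Sketch: mutual information from a joint intensity histogram.
# MI = sum over bins of p12 * log(p12 / (p1 * p2)), equal to H1 + H2 - H12.
import numpy as np

def mutual_information(im1, im2, bins=32):
    p12, _, _ = np.histogram2d(im1.ravel(), im2.ravel(), bins=bins)
    p12 /= p12.sum()                # joint pmf p12(a1, a2)
    p1 = p12.sum(axis=1)            # marginal pmf of I1
    p2 = p12.sum(axis=0)            # marginal pmf of I2
    nz = p12 > 0                    # avoid log(0)
    return float(np.sum(p12[nz] * np.log(p12[nz] / np.outer(p1, p2)[nz])))
```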
Hypothesis: if the alignment between the images I₁ and I₂ is optimal, then mutual information is maximized. The joint probability p₁₂(i, j), the joint entropy H(I₁, I₂), and MI(I₁, I₂) are all functions of the geometric transformation.
[Figure: joint density estimates at noise levels σ = 0.05, 0.2, and 0.7, with 32 and 128 bins. PVI = partial volume interpolation (Maes et al., TMI 1997).]
[Figure: PD slice, T2 slice, warped T2 slice, and warped and noisy T2 slice.]

Brute-force search for the maximum of MI (sketched below).
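A sketch of the brute-force search over one transformation parameter (rotation angle); the angle grid is an assumption, and mutual_information is the helper sketched earlier.

```python
# Sketch: brute-force search for the rotation angle maximizing MI.
import numpy as np
from scipy.ndimage import rotate

def best_rotation(fixed, moving, angles=np.arange(-45.0, 46.0, 1.0)):
    scores = []
    for a in angles:
        warped = rotate(moving, a, reshape=False, order=1)  # bilinear warp
        scores.append(mutual_information(fixed, warped))
    return angles[int(np.argmax(scores))]
```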
[Figure: MI as a function of the transformation parameters, with standard histograms vs. our method, at σ = 0.7. Parameters of the affine transformation: θ = 30, s = t = 0.3, and 0 for the remaining parameter.]
Results with 32 bins:

Method                  Error in θ (avg., var.)   Error in s (avg., var.)   Error in t (avg., var.)
Histograms (bilinear)   3.7, 18.1                 0.7, 0                    0.43, 0.08
Isocontours             0, 0.06                   0, 0                      0, 0
PVI                     1.9, 8.5                  0.56, 0.08                0.49, 0.1
Histograms (cubic)      0.3, 49.4                 0.7, 0                    0.2, 0
2DPointProb             0.3, 0.22                 0, 0                      0, 0
• Probability density function (pdf) estimation using isocontours
• Application to Image Registration
• Application to Image Filtering
• Circular/spherical density estimation in Euclidean space
Anisotropic neighborhood filters (kernel density based filters) for grayscale images:

$$\hat{I}(a,b) = \frac{\sum_{(x,y)\in N(a,b)} K(I(x,y) - I(a,b); \sigma)\, I(x,y)}{\sum_{(x,y)\in N(a,b)} K(I(x,y) - I(a,b); \sigma)}$$

Here (a, b) is the central pixel and N(a, b) is a neighborhood around it. K is a decreasing function (typically a Gaussian); the parameter σ controls the degree of anisotropicity of the smoothing.
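A minimal sketch of this discrete filter; the Gaussian kernel and the default parameters are illustrative assumptions.

```python
# Sketch: discrete anisotropic neighborhood filter. Each pixel becomes a
# K-weighted average of neighboring intensities, with weights driven by
# intensity differences (Gaussian K assumed; image in [0, 1]).
import numpy as np

def neighborhood_filter(img, radius=3, sigma=0.1):
    out = np.empty_like(img, dtype=float)
    H, W = img.shape
    for a in range(H):
        for b in range(W):
            y0, y1 = max(0, a - radius), min(H, a + radius + 1)
            x0, x1 = max(0, b - radius), min(W, b + radius + 1)
            nbhd = img[y0:y1, x0:x1].astype(float)
            w = np.exp(-(nbhd - img[a, b])**2 / (2 * sigma**2))
            out[a, b] = (w * nbhd).sum() / w.sum()
    return out
```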
Anisotropic neighborhood filters: problems
• Sensitivity to the parameter σ.
• Sensitivity to the size of the neighborhood.
• Do not account for gradient information.
• Treat pixels as independent samples.
Continuous image representation: interpolate in between the pixel values, and replace the sums with integrals:

$$\hat{I}(a,b) = \frac{\int_{N(a,b)} I(x,y)\, K(I(x,y) - I(a,b); \sigma)\, dx\, dy}{\int_{N(a,b)} K(I(x,y) - I(a,b); \sigma)\, dx\, dy}$$
Continuous image representation: areas between isocontours at intensity α and α + Δ (divided by the area of the neighborhood) = Pr(α < intensity < α + Δ | N(a, b)).
$$\hat{I}(a,b) = \frac{\int_{N(a,b)} I(x,y)\, K(I(x,y) - I(a,b); \sigma)\, dx\, dy}{\int_{N(a,b)} K(I(x,y) - I(a,b); \sigma)\, dx\, dy} = \lim_{\Delta\to 0} \frac{\sum_{\alpha} \Pr(\alpha \le I \le \alpha + \Delta \mid N(a,b))\, \alpha\, K(\alpha - I(a,b); \sigma)}{\sum_{\alpha} \Pr(\alpha \le I \le \alpha + \Delta \mid N(a,b))\, K(\alpha - I(a,b); \sigma)}$$

Areas between isocontours contribute to the weights for averaging. Published in EMMCVPR 2009.
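A sketch of this area-weighted update for one pixel. The exact method computes facet areas from the tessellation; here the interval measures Pr(α ≤ I < α + Δ | N(a, b)) are approximated by densely sampling a bilinear interpolant, and the bin count and upsampling factor are assumptions.

```python
# Sketch: area-weighted (isocontour-style) neighborhood filtering of one
# pixel, with interval measures approximated by bilinear upsampling.
import numpy as np
from scipy.ndimage import zoom

def isocontour_filter_pixel(img, a, b, radius=3, sigma=0.1, factor=8, nbins=64):
    y0, y1 = max(0, a - radius), min(img.shape[0], a + radius + 1)
    x0, x1 = max(0, b - radius), min(img.shape[1], b + radius + 1)
    fine = zoom(img[y0:y1, x0:x1].astype(float), factor, order=1)
    counts, edges = np.histogram(fine, bins=nbins)   # interval measures
    pr = counts / counts.sum()
    alpha = 0.5 * (edges[:-1] + edges[1:])           # bin centers
    w = pr * np.exp(-(alpha - img[a, b])**2 / (2 * sigma**2))
    return float((w * alpha).sum() / w.sum())
```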
Extension to RGB images:

$$(\hat{R}(a,b), \hat{G}(a,b), \hat{B}(a,b)) = \frac{\sum_{\vec\alpha} \Pr(\vec\alpha \le (R,G,B) \le \vec\alpha + \vec\Delta)\; \vec\alpha\; K(\vec\alpha - (R,G,B)(a,b); \sigma)}{\sum_{\vec\alpha} \Pr(\vec\alpha \le (R,G,B) \le \vec\alpha + \vec\Delta)\; K(\vec\alpha - (R,G,B)(a,b); \sigma)}$$

Joint probability of (R, G, B) = area of overlap of isocontour pairs from the R, G, and B images.
Mean-shift framework
• A clustering method developed by Fukunaga & Hostetler (IEEE Trans. Inf. Theory, 1975).
• Applied to image filtering by Comaniciu and Meer (PAMI 2002).
• Involves independent update of each pixel by maximization of a local estimate of the probability density of the joint spatial and intensity parameters.
Mean-shift framework: one step of the mean-shift update around (a, b, c), where c = I(a, b) (sketched below):

1. $(\hat a, \hat b, \hat c) = \dfrac{\sum_{(x,y)\in N(a,b)} (x,\, y,\, I(x,y))\; w(x,y)}{\sum_{(x,y)\in N(a,b)} w(x,y)}$

   where $(\hat a, \hat b)$ = updated center of the neighborhood, $\hat c$ = updated intensity value, and

   $w(x,y) = \exp\left(-\dfrac{(x-a)^2}{2\sigma_s^2} - \dfrac{(y-b)^2}{2\sigma_s^2} - \dfrac{(I(x,y)-c)^2}{2\sigma_r^2}\right)$

2. $(a, b, c) \leftarrow (\hat a, \hat b, \hat c)$
3. Repeat steps (1) and (2) till (a, b, c) stops changing.
4. Set $I(a, b) = \hat c$.
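A sketch of steps 1-4 for one pixel; the window handling, iteration cap, and convergence threshold are assumptions, and σ_s / σ_r follow the slide's spatial and range bandwidths.

```python
# Sketch: discrete mean-shift update for one pixel (a, b) of a grayscale
# image in [0, 1], following steps 1-4 above with Gaussian weights.
import numpy as np

def mean_shift_pixel(img, a, b, sigma_s=3.0, sigma_r=0.1, radius=3, n_iter=20):
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    ca, cb, c = float(a), float(b), float(img[a, b])
    for _ in range(n_iter):
        w = np.exp(-((ys - ca)**2 + (xs - cb)**2) / (2 * sigma_s**2)
                   - (img - c)**2 / (2 * sigma_r**2))
        # restrict to the neighborhood window around the current center
        w = w * ((np.abs(ys - ca) <= radius) & (np.abs(xs - cb) <= radius))
        ca_new = (w * ys).sum() / w.sum()
        cb_new = (w * xs).sum() / w.sum()
        c_new = (w * img).sum() / w.sum()
        if np.hypot(ca_new - ca, cb_new - cb) + abs(c_new - c) < 1e-4:
            break                       # (a, b, c) stopped changing
        ca, cb, c = ca_new, cb_new, c_new
    return c                            # filtered intensity for pixel (a, b)
```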
Our method in the mean-shift setting: define the coordinate functions X(x, y) = x and Y(x, y) = y alongside the intensity I(x, y).
Our method in the mean-shift setting: the isocontours and the pixel grid induce a tessellation of the neighborhood into facets. Let $(\bar X_k, \bar Y_k)$ = centroid of facet #k, $\bar I_k$ = intensity (from the interpolated image) at $(\bar X_k, \bar Y_k)$, and $A_k$ = area of facet #k. Then

$$(\hat X(a,b), \hat Y(a,b), \hat I(a,b)) = \frac{\sum_{k\in N(a,b)} (\bar X_k, \bar Y_k, \bar I_k)\, A_k\, K(\|(\bar X_k, \bar Y_k, \bar I_k) - (X(a,b), Y(a,b), I(a,b))\|; \sigma)}{\sum_{k\in N(a,b)} A_k\, K(\|(\bar X_k, \bar Y_k, \bar I_k) - (X(a,b), Y(a,b), I(a,b))\|; \sigma)}$$
Experimental setup: grayscale images
• Piecewise-linear interpolation used for our method in all experiments.
• For our method, kernel K = pillbox kernel (written out below), i.e. K(z; σ) = 1 if |z| ≤ σ, and K(z; σ) = 0 if |z| > σ.
• For discrete mean-shift, kernel K = Gaussian.
• Parameters used: neighborhood radius ρ = 3, σ = 3.
• Noise model: Gaussian noise of variance 0.003 (on a scale of 0 to 1).
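The stated pillbox (top-hat) kernel as a one-line sketch, vectorized over numpy arrays:

```python
import numpy as np

def pillbox(z, sigma):
    # K(z; sigma) = 1 if |z| <= sigma, else 0
    return (np.abs(z) <= sigma).astype(float)
```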
[Figure: original image, noisy image, denoised (isocontour mean shift), denoised (Gaussian kernel mean shift).]

MSE:
Noisy image                    181.27
Isocontour (ρ=3, σ=3)          110.95
Std. mean shift (ρ=3, σ=3)     175.27
Std. mean shift (ρ=5, σ=5)     151.27
[Figure: original image, noisy image, denoised (isocontour mean shift), denoised (std. mean shift).]

MSE:
Noisy image                          190
Isocontour mean shift (ρ=3, σ=3)     113.8
Std. mean shift (ρ=3, σ=3)           184.77
Std. mean shift (ρ=5, σ=3)           153.5
Experiments on color images
• Pillbox kernels for our method.
• Gaussian kernels for discrete mean shift.
• Parameters used: neighborhood radius ρ = 6, σ = 6.
• Noise model: independent Gaussian noise on each channel with variance 0.003 (on a scale of 0 to 1).
• Independent piecewise-linear interpolation on the R, G, B channels in our method.
• Smoothing of the R, G, B values done by coupled updates using joint probabilities.
[Figure: original image, noisy image, denoised (isocontour mean shift), denoised (Gaussian kernel mean shift).]

MSE:
Noisy image                    572.24
Isocontour (ρ=3, σ=3)          319.88
Std. mean shift (ρ=3, σ=3)     547.96
Std. mean shift (ρ=5, σ=5)     496.7
[Figure: original image, noisy image, denoised (isocontour mean shift), denoised (Gaussian kernel mean shift).]

MSE:
Noisy image                          547.9
Isocontour mean shift (ρ=3, σ=3)     306.14
Std. mean shift (ρ=3, σ=3)           526.8
Std. mean shift (ρ=5, σ=5)           477.25
Observations
• Discrete kernel mean shift performs poorly with small neighborhoods and small values of σ.
• Why? Small sample-size problem for kernel density estimation.
• The isocontour-based method performs well even in this scenario (number of isocontours/facets >> number of pixels).
• Large σ or large neighborhoods are not always necessary for smoothing.
• Superior behavior also observed when comparing isocontour-based neighborhood filters with standard neighborhood filters for the same parameter set and the same number of iterations.
• Probability density function (pdf) estimation using isocontours
• Application to Image Registration
• Application to Image Filtering
• Circular/spherical density estimation in Euclidean space
Examples of unit vector data:
1. Chromaticity vectors of color values:
$$(r, g, b) = \frac{(R, G, B)}{\sqrt{R^2 + G^2 + B^2}}$$
2. Hue (from the HSI color scheme) obtained from the RGB values (both sketched below):
$$\theta = \arctan\left(\frac{\sqrt{3}\,(G - B)}{2R - G - B}\right)$$
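A sketch of both conversions; the (N, 3) array layout and the use of arctan2 (the standard quadrant-aware form of the arctangent above) are assumptions.

```python
# Sketch: RGB pixels -> chromaticity unit vectors and hue angles.
import numpy as np

def chromaticity(rgb):                       # rgb: (N, 3) array
    norm = np.sqrt((rgb**2).sum(axis=1, keepdims=True))
    return rgb / np.maximum(norm, 1e-12)     # unit vectors (r, g, b)

def hue(rgb):
    R, G, B = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.arctan2(np.sqrt(3.0) * (G - B), 2*R - G - B)  # radians
```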
Convert RGB values to unit vectors:
$$v_i = (r_i, g_i, b_i) = \frac{(R_i, G_i, B_i)}{\sqrt{R_i^2 + G_i^2 + B_i^2}}$$

Estimate the density of the unit vectors:
$$p(v) = \frac{1}{N}\sum_{i=1}^{N} K(v; \kappa, v_i), \qquad K(v; \kappa, u) = C_p(\kappa)\, e^{\kappa v^T u}$$

K = von Mises-Fisher kernel, κ = concentration parameter, C_p(κ) = normalization constant. Other popular kernels: Watson, cosine. vMF mixture models: Banerjee et al. (JMLR 2005).
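A minimal sketch of this vMF kernel density estimate on the unit sphere (p = 3), where the normalizer is C₃(κ) = κ / (4π sinh κ); the default κ is an illustrative assumption.

```python
# Sketch: von Mises-Fisher KDE on the unit sphere (p = 3).
import numpy as np

def vmf_kde(v, samples, kappa=20.0):
    """Density at unit vector v, given (N, 3) unit-vector samples."""
    c3 = kappa / (4.0 * np.pi * np.sinh(kappa))   # vMF normalizer in 3-D
    return float(np.mean(c3 * np.exp(kappa * samples @ v)))
```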
Estimate the density of RGB using KDE/mixture models:
$$p(R, G, B) = \frac{1}{N}\sum_{i=1}^{N} \exp\left(-\frac{(R - R_i)^2 + (G - G_i)^2 + (B - B_i)^2}{2\sigma_i^2}\right)$$
(normalization constants omitted).

Density of (magnitude, chromaticity) by random-variable transformation:
$$p(m, v) = m^2\, p(R, G, B), \qquad m^2 = R^2 + G^2 + B^2$$

Density of chromaticity (integrate out the magnitude):
$$p(v) = \int_{m=0}^{\infty} p(m, v)\, dm$$
Projected normal estimator: Watson, "Statistics on spheres", 1983; Small, "The statistical theory of shape", 1995.

Density of chromaticity (condition on m = 1):
$$p(v) = \frac{1}{N}\sum_{i=1}^{N} w_i\, \frac{\exp(\kappa_i\, v^T v_i)}{2\pi I_0(\kappa_i)}$$
where the concentration κᵢ and weight wᵢ are determined by mᵢ² = Rᵢ² + Gᵢ² + Bᵢ² and σᵢ. A variable-bandwidth vMF KDE: Bishop, "Neural networks for pattern recognition", 2006.

What's new? The notion that all estimation can proceed in Euclidean space.
• Estimate the density of RGB using KDE/mixture models.
• Use a random-variable transformation to get the density of HSI (hue, saturation, intensity).
• Integrate out S and I to get the density of hue (see the sketch below).
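A Monte Carlo sketch of this pipeline: draw samples from a Euclidean Gaussian KDE on the RGB data, convert each sample to hue, and histogram the angles, which marginalizes out saturation and intensity numerically. The bandwidth, sample count, and bin count are illustrative assumptions.

```python
# Sketch: density of hue by sampling the Euclidean RGB KDE and
# histogramming the hue of each sample (marginalizes out S and I).
import numpy as np

def hue_density(rgb_data, sigma=0.05, n_samples=100_000, nbins=64):
    idx = np.random.randint(len(rgb_data), size=n_samples)
    samples = rgb_data[idx] + sigma * np.random.randn(n_samples, 3)
    R, G, B = samples[:, 0], samples[:, 1], samples[:, 2]
    theta = np.arctan2(np.sqrt(3.0) * (G - B), 2*R - G - B)
    counts, edges = np.histogram(theta, bins=nbins,
                                 range=(-np.pi, np.pi), density=True)
    return counts, edges
```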
• Consistency between densities of Euclidean and unit vector data (in terms of random-variable transformation/conditioning).
• Potential to use the large body of literature available for statistics of Euclidean data (example: Fast Gauss Transform, Greengard et al., SIAM Sci. Computing 1991; Duraiswami et al., IJCV 2003).
• Model selection can be done in Euclidean space.