K-Space Edge Detection

Octavian Biris ’09 | EN250 | Spring 2009
The detection of tissue borders is of great importance in several MRI applications. Edge detection is typically performed as a post-processing step, using magnitude images reconstructed from fully-sampled k-space data. In dynamic imaging (e.g. of human speech, ventricular function, and joint kinematics), tissue borders often comprise the primary information of interest.
A fast and accurate method to detect tissue borders therefore motivates performing the edge detection prior to the MRI reconstruction step, which can add unwanted image artifacts.
 Find the gradient of the image
 Take its magnitude
 Apply a threshold
 ‘Thin’ it by applying non-maximal suppression along the direction of the gradient
 Find its X and Y components
 Approximate the derivatives at a point (x,y) in the image as
• dI/dx = [I(x+1,y) − I(x−1,y)]/2
• dI/dy = [I(x,y+1) − I(x,y−1)]/2
 This translates to convolving the image with [-1/2, 0, 1/2] and [-1/2, 0, 1/2]’ (its transpose), respectively
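As a quick check that the filter form matches the point-wise formula (a sketch; the quadratic test signal is my own choice — note that convolution flips the kernel before sliding, so the convolution kernel for dI/dx = [I(x+1) − I(x−1)]/2 is written [1/2, 0, −1/2]):

```python
import numpy as np

x = np.arange(10, dtype=float) ** 2        # x[n] = n^2, exact derivative 2n

# Central difference written out directly: (x[n+1] - x[n-1]) / 2
central = (x[2:] - x[:-2]) / 2.0

# The same thing as a convolution; the kernel appears flipped because
# convolution reverses it before sliding
kernel = np.array([0.5, 0.0, -0.5])
conv = np.convolve(x, kernel, mode='valid')

# central == conv == 2n at the interior points n = 1..8
# (the central difference is exact for quadratics)
```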
 Computationally cost-effective approximation of how the image varies in intensity
 Crude approximation in regions of high-frequency variation
 Need to low-pass filter before applying
 Take a Gaussian kernel G(r) and the differential operator del(r). Apply each one to the image I(r):
• J(r) = del(r) º G(r) º I(r), where º stands for convolution
 Convolution is associative, so combine del(r) º G(r) into a single kernel
 Use this kernel to compute the gradient more accurately, since the high frequencies are attenuated. The kernel is also the derivative of the Gaussian (DoG)
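The associativity argument can be verified numerically (a 1-D sketch; the σ = 1 Gaussian, the kernel radius, and the sinusoidal test signal are illustrative choices — boundary samples are excluded from the comparison because 'same'-mode convolution truncates there):

```python
import numpy as np

def gaussian1d(sigma, radius):
    t = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-t**2 / (2 * sigma**2))
    return g / g.sum()                      # normalized Gaussian kernel

deriv = np.array([0.5, 0.0, -0.5])          # central-difference kernel
g = gaussian1d(1.0, 4)

# Associativity: (deriv * g) * x == deriv * (g * x), with * as convolution
dog = np.convolve(deriv, g)                 # Derivative-of-Gaussian kernel

x = np.sin(np.linspace(0, 2 * np.pi, 64))
one_pass = np.convolve(x, dog, mode='same')
two_pass = np.convolve(np.convolve(x, g, mode='same'), deriv, mode='same')
# one_pass and two_pass agree away from the boundaries
```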

I have taken eight total approaches to computing the gradient of the image. Three of them are standard spatial-domain approaches, while five are frequency-domain methods.
1. Convolve the image with the simple filter [-0.5, 0, 0.5] and its transpose in the spatial domain and obtain the two derivatives.
2. Convolve the image with the 3x3 Sobel operator filter and its transpose in the spatial domain and obtain the two derivatives.
   Sx = [ -1 0 1 ; -2 0 2 ; -1 0 1 ]
3. Convolve the image with the Derivative of the Gaussian operator. I chose a size of 5x5 and a standard deviation of 0.5 in each direction.
4. Find the derivatives in each direction by using the differentiation property of the Continuous-Time Fourier Transform.
5. Because method 4 assumes a continuous function and we are working in a discretized space, we need to use the discrete version of the continuous derivative, which can be approximated by the first-order difference. To find the equivalent of the first-order difference in the Fourier domain, the time-shift property of the Fourier transform was used.
6. A step above the previous method: this method uses the central-difference equivalent in the Fourier domain to compute the derivative.
7. A smoother approximation of the central difference, involving two neighboring terms. Based on its equivalent in the Fourier domain, this “smooth central difference” was computed.
8. The combination of the first method and the fourth method. Essentially, the convolution of the derivative filter with a Gaussian and then with the image translates, in the Fourier domain, to multiplying the k-space by the Fourier transform of the derivative of the Gaussian; the latter quantity is the Fourier transform of the Gaussian multiplied by jω. It was suggested by the instructor NOT to apply the Hamming window. However, I did it both ways: 8a without the Hamming window and 8b with the Hamming window.
del[ I(x,y) º G ] = I(x,y) º (del G)
I(x,y) º G  —Fourier→  K(ωx, ωy) · F(G)
I(x,y) º del G  —Fourier→  K(ωx, ωy) · jω · F(G)
Since most of the energy of the k-space is contained in certain samples, one can partially sample the k-space, keeping only the high-energy samples that contain edge information.
 This will reduce the computational cost for large amounts of data.
 Also, since the derivative is just a measure of image intensity change, most of the image information does not contribute to its outcome.
 Thus I used a 2D Hamming window to take only the high-energy data.

[Surface plots of the 2D Hamming and Gaussian windows]

Hamm2d(x,y) = [0.54 + 0.46·cos(2πx/N)] · [0.54 + 0.46·cos(2πy/M)]

Gauss2d(x,y) = 1/(2π·σx·σy) · exp( −[ (x−x0)²/(2σx²) + (y−y0)²/(2σy²) ] )
I(x,y) = Σ_{ωx,ωy} K(ωx, ωy) · e^{ j(ωx·x + ωy·y) }

∂I(x,y)/∂x = Σ_{ωx,ωy} jωx · K(ωx, ωy) · e^{ j(ωx·x + ωy·y) }

∂I(x,y)/∂x = F⁻¹[ jωx · K(ωx, ωy) ]
∂I(x,y)/∂y = F⁻¹[ jωy · K(ωx, ωy) ]
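Method 4 can be sketched as spectral differentiation of the DFT (the 32×32 sinusoidal test image is my own choice; on such a band-limited input the spectral derivative is exact):

```python
import numpy as np

def fourier_gradient(img):
    """Differentiate via the Fourier differentiation property:
    d/dx corresponds to multiplying the spectrum by j*omega_x."""
    Ny, Nx = img.shape
    K = np.fft.fft2(img)
    wx = 2 * np.pi * np.fft.fftfreq(Nx)     # angular frequency grid, x
    wy = 2 * np.pi * np.fft.fftfreq(Ny)     # angular frequency grid, y
    dIdx = np.real(np.fft.ifft2(K * (1j * wx)[None, :]))
    dIdy = np.real(np.fft.ifft2(K * (1j * wy)[:, None]))
    return dIdx, dIdy

# Usage: I(x,y) = sin(2*pi*x/Nx) is band-limited, so the spectral
# derivative reproduces dI/dx = (2*pi/Nx)*cos(2*pi*x/Nx) exactly
Nx = Ny = 32
xs = np.arange(Nx)
img = np.tile(np.sin(2 * np.pi * xs / Nx), (Ny, 1))
dIdx, dIdy = fourier_gradient(img)
```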
Method 5 (first-order difference), using the shift property of the DFT:

F{ x[m,n] − x[m−1,n] } = Σ_{m,n} x[m,n]·e^{ −j(2πmr/M + 2πnk/N) } − Σ_{m,n} x[m−1,n]·e^{ −j(2πmr/M + 2πnk/N) }
                       = X[r,k] − e^{ −j(2πr/M) } · Σ_{m,n} x[m,n]·e^{ −j(2πmr/M + 2πnk/N) }

so   x[m,n] − x[m−1,n]  ↔  X[r,k]·(1 − W_M^r)
Similarly,   x[m,n] − x[m,n−1]  ↔  X[r,k]·(1 − W_N^k) ,  where  W_M^r = e^{ −j2πr/M }

Method 6 (central difference):

x[m+1,n] − x[m−1,n]  ↔  X[r,k]·(W_M^{−r} − W_M^{r}) = X[r,k] · 2j·sin(2πr/M)
Similarly,   x[m,n+1] − x[m,n−1]  ↔  X[r,k] · 2j·sin(2πk/N)

Method 7 (smooth central difference):

x[m+2,n] + x[m+1,n] − x[m−1,n] − x[m−2,n]
    ↔  X[r,k]·(W_M^{−2r} + W_M^{−r} − W_M^{r} − W_M^{2r})
    =  X[r,k] · 2j·[ sin(4πr/M) + sin(2πr/M) ]
Similarly,
x[m,n+2] + x[m,n+1] − x[m,n−1] − x[m,n−2]  ↔  X[r,k] · 2j·[ sin(4πk/N) + sin(2πk/N) ]

where  W_M^r = e^{ −j2πr/M }
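The three Fourier-domain multipliers can be checked against direct (circular) differences, which is exactly what the DFT shift property describes (a sketch; the random 16×16 test array is my own choice, and only the row-direction versions are shown):

```python
import numpy as np

M, N = 16, 16
rng = np.random.default_rng(0)
x = rng.standard_normal((M, N))
X = np.fft.fft2(x)

r = np.arange(M)[:, None]                  # row-frequency index
W = np.exp(-2j * np.pi * r / M)            # W_M^r

# Method 5: x[m,n] - x[m-1,n]  <->  X[r,k] * (1 - W_M^r)
d1 = np.real(np.fft.ifft2(X * (1 - W)))

# Method 6: x[m+1,n] - x[m-1,n]  <->  X[r,k] * 2j*sin(2*pi*r/M)
d2 = np.real(np.fft.ifft2(X * 2j * np.sin(2 * np.pi * r / M)))

# Method 7: x[m+2,n] + x[m+1,n] - x[m-1,n] - x[m-2,n]
#           <->  X[r,k] * 2j*(sin(4*pi*r/M) + sin(2*pi*r/M))
d3 = np.real(np.fft.ifft2(
    X * 2j * (np.sin(4 * np.pi * r / M) + np.sin(2 * np.pi * r / M))))
```

The column-direction versions are identical with k, N, and the multipliers applied along axis 1.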
[Figures: detected edge maps for Method 4, Method 5, Method 6, Method 7, and Method 8b (two result sets)]
[Figure: edge maps comparing Method 2, Method 3, and Method 7]

Find the approximation of the Laplacian of the image. The Laplacian changes sign wherever the first derivative switches monotonicity; usually that occurs at a local maximum.
 Find the Laplacian of the image:

I'' ≈ [ I'(x+dx) − I'(x) ] / dx = [ I'(x+1) − I'(x) ] / 1 = I(x+2) − 2·I(x+1) + I(x)
 Iterate through every point and mark whether there are zero crossings in its neighborhood (e.g. a 5x5 window). If so, keep the point; if not, set it to zero.
x[ m  2 , n ] 2 x[ m 1, n ] x[ m , n ]
4r
2r
X [r , k ]  2 j[sin(
)  2 sin(
)  1]
M
M
sim ilarly,
x[ m , n  2 ] 2 x[ m , n 1] x[ m , n ]
4k
2k
X [r , k ]  2 j[sin(
)  2 sin(
)  1]
N
N
 Prior to calculating the Laplacian, smooth the image with a Gaussian kernel.
 This translates to convolving the image with the Laplacian of the Gaussian, just like in the gradient method, due to the associativity of convolution.
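A NumPy-only sketch of the Laplacian-of-Gaussian kernel and the zero-crossing test (the kernel is the standard closed-form Laplacian of a Gaussian; the σ, radius, the 3×3 sign-change neighborhood, and the synthetic ramp in the usage example are my own assumptions, and np.roll wraps circularly at the borders):

```python
import numpy as np

def log_kernel(sigma, radius):
    """Laplacian-of-Gaussian kernel: (r^2/sigma^4 - 2/sigma^2) * Gaussian."""
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    r2 = xx**2 + yy**2
    g = np.exp(-r2 / (2 * sigma**2))
    return (r2 / sigma**4 - 2 / sigma**2) * g / g.sum()

def zero_crossings(L):
    """Mark pixels where the filtered response changes sign between neighbors."""
    edges = np.zeros(L.shape, dtype=bool)
    for dy, dx in [(0, 1), (1, 0), (1, 1), (1, -1)]:
        shifted = np.roll(np.roll(L, dy, axis=0), dx, axis=1)
        edges |= (L * shifted) < 0      # opposite signs -> zero crossing
    return edges

# Usage on a synthetic response that flips sign between rows 7 and 8
L = np.tile(np.arange(16, dtype=float)[:, None] - 7.5, (1, 16))
e = zero_crossings(L)                   # row 8 is marked
```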
[Figure: Laplacian zero-crossing edge maps]






 Method 7 provided the best results of all the methods; Method 6 is close behind.
 The Sobel operator is much better than the DoG (hence its popularity in widely used edge-detection applications).
 The reason behind the failure of the spatial-domain methods (DoG + Sobel) is that the smoothing provided by the Gaussian is futile, since most of the energy of the slice I used is concentrated around the center. The Hamming window is required since it attenuates the Gibbs effect, which is very pronounced when no window (equivalent to the rectangular window) is used.
 Method 7 is a better approximation of the discrete-time derivative of a signal than Method 5 or 6.
 Method 4 is faulty to begin with, since it assumes the data is continuous.
 Methods 4 and 7 would give the same results provided the images were very large (equivalent to mimicking a continuous 2D space).
 Perform the thinning directly on the k-space
 A. Oppenheim & R. Schafer.
 http://www.cs.berkeley.edu/~jfc/