
A Model Based Contour Searching Method
Yingjie Tang, Lei He, Xun Wang and William G. Wee
Department of Electrical & Computer Engineering and Computer Science
University of Cincinnati
Email: William.Wee@uc.edu
Abstract
A two-step model based approach is developed to provide
a solution to the more challenging contour extraction
problems of biomedical images. A biomedical contour image is initially
processed by a deformable contour method to obtain a first
order approximation of the contour. The two-step model
includes a linked contour model and a posteriori
probability model. Initially, the output contour from the
deformable contour method is matched against the linked
contour model for both model detection and corresponding
landmark contour points identification. Segments obtained
from these landmarks are matched for errors. Larger error
segments are then passed on to a regionalized a posteriori
probability model for further fine tuning to obtain a final
result. Experiments on MR brain images are most
encouraging.
1. Introduction
For challenging contour extraction problems, deformable
contour methods [20] have had some success. Difficulties
remain due to gaps, very blurred contour segments,
contour-within-contour structures, and inhomogeneous
contour region brightness distributions, just to name a few.
While a divide and conquer approach [21] has been
employed with success to divide the marching closed
contour into a set of linked marching contour segments,
other far more challenging contour extraction images may
be handled better by using a segmented model approach to
search for the challenging contour segments.
The purpose of this paper is to provide
further improvements on resulting contours obtained from
a deformable contour method for far more challenging
and difficult biomedical images. The proposed approach
is a two-step model based approach including a linked
contour model and a posteriori probability model. The
output contour of the deformable contour method is
initially matched against the linked contour model for
both model detection and corresponding landmark contour
points identification. Segments obtained from these
landmarks are matched for errors. Segments with larger
error are then passed on to a regionalized a posteriori
probability model for further fine tuning to obtain a final
result. The rest of this paper is organized as follows:
Section 2 briefly reviews the recent literature, including
contour representation/modeling and deformable contour
methods for contour extraction; Section 3 presents and
validates the proposed approach in detail; implementation
and results on MR brain images are given in Section 4.
2. Background
The contour representation methods play an important
role in the model based contour extraction applications. In
the area of contour representation, we have considered
chain codes [4], Fourier descriptors [9], B-splines, and 1-D
wavelet descriptors [6]. Chain codes outperform the other
methods in bit rate but are inherently not robust. Fourier
descriptors can be made insensitive to translation, scaling
and rotation, and a variety of them have been proposed for
shape representation and object recognition applications.
Elliptic Fourier descriptors [9] have proved to be among
the best of the Fourier descriptors. Spline representations
of curves are easy to compute and flexible to control; the
smooth interpolation of splines is highly valued in
computer graphics, e.g. automobile design. However,
splines suffer either information loss or low efficiency
when fitting real-world biomedical image shapes, which
are not smooth in nature. The wavelet transform has an
ideal "zooming" property that the Fourier transform lacks,
since wavelets are localized in both the space (time) and
frequency domains. This localization makes wavelets well
suited to approximating data with discontinuities.
However, the good translation (modulus) invariance of the
Fourier transform does not hold for the wavelet transform,
which means that the starting point on the contour matters;
we address this problem later. In addition, the curvature
function [8] is a commonly used representation because it
is invariant under rotation and translation. We also choose
the curvature function because its scale-dependent nature
enables a multi-resolution analysis under the wavelet
transform.
Many deformable contour methods, such as snakes
[11], [12], level sets [13], [14] and non-deterministic
methods [15], [16], have been applied extensively to
automatic biomedical image contour extraction problems
with limited success. A large number of biomedical
images have contours that are difficult to extract. A divide
and conquer approach has been proposed [21] to provide a
solution to some of these difficult problems. An initial
inside closed contour is divided into segments, and these
segments are allowed to deform separately preserving
segments’ connectivity. A deformable contour method is
adapted to each contour segment's movement. Van
Leemput et al. [17] proposed a model based method for
automated bias correction of MR brain images, in which
the image is modeled as a realization of a random process
with a parametric probability distribution corrupted by a
smooth polynomial inhomogeneity or bias field. The
method can deal with blurred shapes and gaps, but cannot
handle complex shapes; without edge information from
the input image, it cannot provide the exact location of the
desired boundary, and it requires a long computation time.
Wang and Staib [18] constructed discrete models, or
statistical point models, for boundary finding and for
determining the correspondence of a subset of boundary
points to a model. This method performs well at both
boundary finding and determining correspondence, and it
is also relatively insensitive to noise; however, it cannot
handle more complex shapes. Staib and Duncan [19] used elliptic
Fourier descriptors as model parameters to represent open
and closed boundaries. The prior information available is
a flexible bias toward more likely shapes. A generic
parameterization model with probability distributions
defined on the parameters represents a stronger use of
prior information than methods that use only simple shape
characteristics. The prior knowledge comes from
experience with a sample set of images of the object being
delineated when such a sample is available. When prior
information is not available, uniform distributions are
used for the prior probabilities of the parameters.
3. The proposed approach
This is a two-step model based approach. Step 1 is a
linked contour matching operation and Step 2 is a contour
segment search using the a posteriori probability model
[19].
Step 1 - Linked contour matching
  i) Extraction of curvature function
  ii) Wavelet transform
  iii) Model detection
  iv) Landmark extraction and matching
Step 2 - Contour segment searching
  i) Contour segment extraction and length equalization
  ii) Segment alignment and error calculation
  iii) Segment model based searching
Details are given below:
Step 1: Linked contour matching
i) Extraction of curvature function
This step has the following operations:
1. Obtain the output of a deformable contour method: a closed contour in the form (x_i, y_i), i = 1, 2, ..., N.
2. Normalize N to 128 or 256.
3. Compute the curvature value at each point and obtain the discrete curvature function sequence f using the K-slope method [10].
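For concreteness, a minimal sketch of a K-slope style curvature estimate on the normalized closed contour is given below; the window size K and the exact angle convention are illustrative assumptions, not the precise formulation of [10].

import numpy as np

def k_slope_curvature(x, y, K=5):
    """Discrete curvature of a closed contour (x_i, y_i): the turning angle
    between the backward and forward K-point chords at each point."""
    N = len(x)
    curv = np.empty(N)
    for i in range(N):
        # backward and forward chords over K points (indices wrap on a closed contour)
        bx, by = x[i] - x[(i - K) % N], y[i] - y[(i - K) % N]
        fx, fy = x[(i + K) % N] - x[i], y[(i + K) % N] - y[i]
        # signed turning angle between the two chord directions, wrapped into (-pi, pi]
        ang = np.arctan2(fy, fx) - np.arctan2(by, bx)
        curv[i] = (ang + np.pi) % (2 * np.pi) - np.pi
    return curv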
ii) Wavelet transform
The cubic spline function is used as the mother
wavelet. In our applications, a 2-level or 3-level
decomposition is used, giving a descriptor number P
= N/4 or N/8, where N is the total contour point number.
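As an illustrative sketch only (not the authors' implementation), such a decomposition of the curvature sequence can be written with PyWavelets; the biorthogonal spline wavelet 'bior3.3' is used here merely as a readily available stand-in for the cubic spline mother wavelet.

import numpy as np
import pywt  # PyWavelets

def curvature_descriptors(f, levels=2):
    """Wavelet descriptors of the curvature sequence f (length N).
    The coarse approximation coefficients (length about N/4 for 2 levels,
    N/8 for 3 levels) serve as the descriptor vector in this sketch."""
    coeffs = pywt.wavedec(np.asarray(f, dtype=float), 'bior3.3',
                          mode='periodization', level=levels)
    return coeffs[0]  # approximation coefficients at the coarsest level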
iii) Model detection
Compute the maximum circular cross-correlation
value between a set of model contours and an input
contour both in wavelet descriptors:
If

  C_{j,k} = \max_j \left[ \max_k \left( \sum_i FD(i) \, FM_j(i+k) \right) \right],

then we decide that the j-th model best matches the input contour, with its starting point at k. In the equation, j is the model index (j = 1, 2, ..., T, where T is the total number of models), k is the position index (k = 1, 2, ..., P), and FD and FM_j are the wavelet descriptors of the input data and the j-th model, respectively, obtained from ii).
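A direct (non-FFT) sketch of this maximization is given below, assuming all descriptor vectors have the same length P; the function name and interface are illustrative only.

import numpy as np

def best_model_match(FD, FM_list):
    """Circular cross-correlation based model detection (a sketch).
    FD: wavelet descriptor vector of the input contour (length P).
    FM_list: list of model descriptor vectors, each of length P.
    Returns (best model index j, best circular shift k, correlation value)."""
    best = (-1, -1, -np.inf)
    for j, FM in enumerate(FM_list):
        for k in range(len(FD)):
            # sum_i FD(i) * FM(i + k) with circular indexing
            c = float(np.dot(FD, np.roll(FM, -k)))
            if c > best[2]:
                best = (j, k, c)
    return best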
iv) Landmark extraction and matching
Before the landmark extraction and matching
procedure is presented, a discussion of the contour
matching problem is in order. Duncan [2] and Cohen [3]
proposed a model that minimizes a total energy function
consisting of two components: the local bending energy
(curvature matching) and the variation of the displacement
vector (smooth mapping) of the contour. There have been
many modifications since, and the following two
conditions have been widely accepted:
1. In a neighborhood where the curvature value is
large, the curvature term is dominant, and as a
result points of equal curvature are matched.
Such points are called "landmarks" in this paper.
2. In a low-curvature neighborhood, smoothness is
the main factor to be considered, and the mapping
between two landmarks is linear.
The two conditions can be illustrated by the rectangle
matching example shown in Figure 1-1, where the four
pairs of corners are matched with each other and the edge
points in between are mapped in a linear manner.
Figure 1-1. Rectangles matching

Figure 1-2. Matching and mismatching

Since the above energy function is a global function, its minimization forces a matching between the two contours with a single total error indication. It does not carry information about which contour segments are matched or mismatched. Figure 1-2 is an illustration: the error of the energy function will force point C to be matched with either C' or C''. Therefore, with our goal of identifying matched and mismatched segments, we have to seek an alternate approach. Since the starting point problem (the k value from Step 1-iii), or orientation problem, as well as the size problem, has already been solved, a simple registration approach is adopted. Position, peak/valley mode, curvature value and the size of the neighborhood of the landmarks are the factors considered. Once a potential matching is found, indicated by a reasonably high credibility value from the above factors, the landmarks are cross-checked in both the data and the model contour to make sure the matching is exclusive (one to one). The matching steps include (a sketch of the peak/valley detection of step 1 is given after Figure 2):
1. Landmark extraction from wavelet descriptors—landmarks are the peaks and valleys of the wavelet descriptor series. They are detected according to absolute and relative thresholds; the relative threshold values are determined by the adjacent peak and valley values.
2. Landmark registration—for each landmark in the model, an index over the neighboring landmarks near the corresponding position in the data contour is calculated for decision making. These indices depend on position, peak/valley mode, descriptor value and landmark neighborhood size.
3. Contour registration—a matching operation for the peak and valley landmarks of both contours is carried out. Extra peak and valley landmarks are eliminated with a crossing-check matrix.
Experiments of landmark matching and model detection on artificial shapes have been conducted to show that our method is effective. The following are two examples. Example 1 illustrates the matching of four shapes shown in Figure 2, where points A, B, C and D are landmarks found and matched.

Figure 2. Shape figures matching
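To make step 1 of the matching procedure concrete, the following is a minimal sketch of peak/valley landmark detection on a descriptor series; the particular absolute and relative threshold rules are assumptions, not the exact rules used in the paper.

import numpy as np

def extract_landmarks(desc, abs_thresh=0.1, rel_thresh=0.3):
    """Peak/valley landmark extraction from a wavelet descriptor series.
    A point is a landmark if it is a local extremum, its magnitude exceeds an
    absolute threshold, and it stands out from its neighbors by a relative margin."""
    d = np.asarray(desc, dtype=float)
    P = len(d)
    landmarks = []  # list of (index, 'peak' or 'valley')
    for i in range(P):
        prev, nxt = d[(i - 1) % P], d[(i + 1) % P]   # circular neighborhood
        if d[i] > prev and d[i] > nxt and abs(d[i]) > abs_thresh:
            if d[i] - max(prev, nxt) > rel_thresh * abs(d[i]):
                landmarks.append((i, 'peak'))
        elif d[i] < prev and d[i] < nxt and abs(d[i]) > abs_thresh:
            if min(prev, nxt) - d[i] > rel_thresh * abs(d[i]):
                landmarks.append((i, 'valley'))
    return landmarks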
The maximum circular cross-correlation function is used for model detection. The second example illustrates the selectivity of circular cross-correlation on wavelet descriptors using the six dolphin shapes shown in Figure 3. These six dolphins have only subtle visual differences in shape, but circular cross-correlation can detect these differences. Figure 4 shows the five circular cross-correlation curves of the other dolphins against Dolphin 2 of Figure 3-2. According to the maximum value, Dolphin 1 of Figure 3-1 is the best fit and Dolphin 3 of Figure 3-3 is the second best. These results coincide with our visual inspection.

Figure 3-1. Dolphin 1   Figure 3-2. Dolphin 2   Figure 3-3. Dolphin 3
Figure 3-4. Dolphin 4   Figure 3-5. Dolphin 5   Figure 3-6. Dolphin 6

Step 2: Contour segment searching
i) Contour segment extraction and length equalization
1. Identify the corresponding contour segments between the input contour and the model contour.
2. Equalize each model contour segment length to that of its corresponding input contour segment.
The input from Step 1 is a sequence of corresponding
landmarks and corresponding control contour points.
Corresponding segments are easily identified between the
input contour and the model contour, with each segment
identified by its two end points. The equalization operation
ensures that each extracted input image segment has the
same number of points as the corresponding model
segment. If the model segment has more points than the
input contour segment, model points are removed at equal
intervals; otherwise, linearly interpolated model points are
inserted.
Figure 4. Five circular cross-correlation curves of the other dolphins against Dolphin 2 of Figure 3
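The following is a simplified sketch of the equalization operation, which resamples the model segment uniformly along its arc length to the required number of points; this covers both the reduction and insertion cases, though not with exactly the equal-interval scheme described above.

import numpy as np

def equalize_segment(model_xy, n_points):
    """Resample a model contour segment to n_points points (a sketch).
    model_xy: array of shape (M, 2) holding the model segment points."""
    model_xy = np.asarray(model_xy, dtype=float)
    # cumulative arc length of the model segment, normalized to [0, 1]
    d = np.r_[0.0, np.cumsum(np.hypot(np.diff(model_xy[:, 0]),
                                      np.diff(model_xy[:, 1])))]
    t = d / d[-1]
    t_new = np.linspace(0.0, 1.0, n_points)
    x_new = np.interp(t_new, t, model_xy[:, 0])
    y_new = np.interp(t_new, t, model_xy[:, 1])
    return np.stack([x_new, y_new], axis=1)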
ii) Segment alignment and error calculation
1. Transform the model contour segment coordinates into the input contour segment coordinate system by

  \begin{pmatrix} x_{edi} \\ y_{edi} \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x_{emi} \\ y_{emi} \end{pmatrix} + \begin{pmatrix} e \\ f \end{pmatrix},

  where (x_{edi}, y_{edi}) is an end point of the input segment, (x_{emi}, y_{emi}) is the corresponding end point of the equalized model segment, i = 1, 2, and b = c = 0.

2. Compute the error function E(M, D) as

  E(M, D) = \frac{1}{n} \sum_{k=1}^{n} \left[ (x_{dk} - x_{mk})^2 + (y_{dk} - y_{mk})^2 \right],

  where (x_{dk}, y_{dk}) is a point of the input segment, (x_{mk}, y_{mk}) is the corresponding point of the equalized model segment, k = 1, 2, ..., n, and n is the total number of points in the segment.
In our application, no rotation is needed and therefore,
b=c=0. The values of a, d, e and f are then determined
from the two corresponding end points. If rotation
parameters b and c are needed, two additional
corresponding contour points are used. All segment points
are then translated to the image coordinate system for
error computations. For a segment whose error is larger
than a preset threshold, Step 2-iii) is undertaken;
otherwise, we proceed to the next segment.
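A minimal sketch of the alignment (with b = c = 0) and of the error E(M, D) is given below; degenerate cases, such as end points sharing a coordinate, are ignored for brevity, and the function name is illustrative only.

import numpy as np

def align_and_error(input_seg, model_seg):
    """Align an equalized model segment to an input segment and compute E(M, D).
    Both segments are (n, 2) arrays with corresponding end points; with b = c = 0
    the scale factors a, d and offsets e, f follow from the two end-point pairs."""
    D = np.asarray(input_seg, dtype=float)
    M = np.asarray(model_seg, dtype=float)
    # solve a*x_em + e = x_ed and d*y_em + f = y_ed from the two end points
    a = (D[-1, 0] - D[0, 0]) / (M[-1, 0] - M[0, 0])
    d = (D[-1, 1] - D[0, 1]) / (M[-1, 1] - M[0, 1])
    e = D[0, 0] - a * M[0, 0]
    f = D[0, 1] - d * M[0, 1]
    M_aligned = np.stack([a * M[:, 0] + e, d * M[:, 1] + f], axis=1)
    # mean squared point-to-point distance E(M, D)
    error = float(np.mean(np.sum((D - M_aligned) ** 2, axis=1)))
    return M_aligned, error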
iii) Segment model based searching
Here the segment model based matching method is
identical to the procedures used in [19]. As in [19], we
use Fourier parameters to represent our contour segments.
The contour segments are all open curves, so the parameter
vector has the form p = (a_0, c_0, a_1, c_1, ..., a_M, c_M).
We adapt the contour searching optimization of [19] to a
per-segment searching optimization. The input segment
c(x, y) corresponds to a regionalized segment model
t_p(x, y), with p belonging to a finite set of potential
solution parameter vectors. Using Bayes' rule, we have:
  Pr(t_{map} | c) = \max_p Pr(t_p | c) = \max_p \frac{Pr(c | t_p) \, Pr(t_p)}{Pr(c)},

where t_{map} is the maximum a posteriori solution, Pr(t_p) is the prior probability of t_p, and Pr(c | t_p) is the conditional probability, or likelihood, of the contour segment given the model. This function can be simplified to the following objective to be maximized (as in [19]):
  M(c, p) = \sum_{i=1}^{M} \left[ \ln\!\left( \frac{1}{\sqrt{2\pi}\,\sigma_i} \right) - \frac{(p_i - m_i)^2}{2\sigma_i^2} \right] + \frac{1}{\sigma_n^2} \sum_{n=1}^{N} c\big( x(p, n), \, y(p, n) \big),
where mi and i2 are the mean and the variance of pi, n2
is the noise variance of the input image part including the
segment, N is the number of points on that segment. Our
linked contour model and a posteriori probability model
derive from a hand-drawn contour, which is in turn
derived from a set of model images. Two 1-D Fourier
parameters are extracted to represent the hand-drawn
contour. The a posteriori probability model is derived
from having a two dimensional normal distribution of
equal variance  and zero mean vector at the two 1-D
Fourier parameters.
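A minimal sketch of evaluating this objective for one candidate parameter vector p is given below; it assumes the boundary-strength values c(x(p, n), y(p, n)) along the candidate segment have already been sampled, and the helper name is hypothetical.

import numpy as np

def map_objective(p, m, sigma, boundary_strength, sigma_n=1.0):
    """Evaluate the MAP objective M(c, p) for one candidate parameter vector,
    following the form of the equation above.
    p, m, sigma: candidate Fourier parameters, their prior means and std devs.
    boundary_strength: values c(x(p, n), y(p, n)) sampled at the N segment points."""
    p = np.asarray(p, dtype=float)
    m = np.asarray(m, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    prior_term = np.sum(np.log(1.0 / (np.sqrt(2.0 * np.pi) * sigma))
                        - (p - m) ** 2 / (2.0 * sigma ** 2))
    data_term = np.sum(boundary_strength) / sigma_n ** 2
    return float(prior_term + data_term)

# the search then simply keeps the candidate p that maximizes map_objective(...)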
4. Implementation and results
Experiments are conducted on biomedical images. For
illustration, the segmentation of the corpus callosum in an
MR brain image is presented here. Figure 6 shows the
contour initially provided by a deformable contour
method [21] for the original MR image in Figure 5.
Figure 7 shows the wavelet transforms of both the input
contour (solid line) and the linked contour model (dashed
line). As shown in Figure 7, the corresponding matched
landmarks are A-a, B-b, C-c and D-d. Some contour points
are interpolated to increase the total number of segments
for more precise calculation. The landmarks, together with
these added contour points of the input image, A, B, ..., H,
are marked on the enlarged input contour in Figure 8.
Table 1 shows the distance errors of segments before and
after searching. Only segment DE is selected for a
posteriori probability model searching, due to its high
average distance error of 17.07. The final contour is shown
in Figure 9, and Figure 10 is an enlargement showing the
refined D'E' segment.
Figure 5. An original MR brain image
Figure 6. Input contour (from a deformable contour method [21])
Figure 7. Wavelet descriptors
Table 1. Distance errors of segments before and after searching

             Before Searching (Figure 8)      After Searching (Figure 10)
Segment      Distance Error   Points          Distance Error   Points
DE / D'E'    17.07            51              2.21             43
EA / E'A'    1.54             33              1.54             33
AF / A'F'    0.81             24              0.81             24
FB / F'B'    1.64             10              1.64             10
BG / B'G'    0.4              43              0.4              43
GC / G'C'    1.19             13              1.19             13
CH / C'H'    0.63             8               0.63             8
HD / H'D'    0.14             14              0.14             14
Figure 8. Landmarks on an enlarged version of Figure 6
Figure 9. Final extracted contour of Figure 5
Figure 10. Final extracted contour: enlarged view

5. Conclusion
Our algorithm addresses challenging and difficult
contour extraction problems in biomedical images that
current deformable contour methods may not handle well.
A linked contour model and a posteriori probability model
make up a two-step model based approach for locating and
identifying mismatched contour segments between the
input contour (from a deformable contour method) and the
linked contour model, and for refining the contour
segments with larger distance errors using the a posteriori
probability model. Experimental results on MR brain
images are rather good.
References
[1] Q. M. Tieng, "Wavelet-based invariant representation: a tool
for recognizing planar objects in 3D space", IEEE Trans. on
Pattern Analysis and Machine Intelligence, Vol. 19, No. 8,
pp. 846-857, Aug. 1997.
[2] J. S. Duncan, R. Owen and P. Anandan, "Shape based tracking
of left ventricular wall motion", Computers in Cardiology,
IEEE Computer Society, Sept. 1990, pp. 23-26.
[3] I. Cohen, N. Ayache and P. Sulger, "Tracking points on
deformable curves", Proc. Second European Conf. Computer
Vision, May 1992.
[4] Z. Cai, “Restoration of binary images using contour
direction chain codes description”, Computer Vision, Graphics,
and Image Processing, Vol. 41, pp. 101-106, 1988
[5] Donald Hearn, M. Pauline Baker, Computer Graphics, pp.
317-327, 1994
[6] Patrick Wunsch and Andrew F. Laine, "Wavelet descriptors
for multiresolution recognition of handprinted characters",
Pattern Recognition, Vol. 28, No. 8, pp. 1237-1249, Oct. 1999.
[7] S. Mallat, “A theory for multiresolution signal
decomposition: the wavelet representation”, IEEE Trans. on
Pattern Analysis and Machine Intelligence, Vol.11, pp.674-693,
July 1989
[8] Yu. P. Wang and S.L.Lee, “Multiscale curvature-based shape
representation using B-spline wavelets”, IEEE Trans. on Image
Processing, Vol.8 No.11, pp.1586-1593, November 1999
[9] F. P. Kuhl and C. R. Giardina, "Elliptic Fourier features of a
closed contour", Computer Graphics and Image Processing,
Vol. 18, pp. 236-258, 1982.
[10] A. Rosenfeld and A. C. Kak, Digital Picture Processing, 2nd
Edition, Vol. 2, pp. 257-260.
[11] M. Kass, A. Witkin and D. Terzopoulos, “Snakes: active
contour models,” International Journal of Computer Vision,
Vol. 1, No.4, pp. 321-331, 1988.
[12] L. Cohen, “On active contour models and balloons,”
CVGIP: Image Understanding, Vol. 52, No.2, pp. 211-218,
March, 1991.
[13] R. Malladi, J. Sethian and B. Vemuri, “Shape Modeling
with Front propagation”, IEEE Trans on PAMI, Vol. 17, No.2,
pp. 158-171, Feb. 1995.
[14] J. Sethian, “A Fast Marching Level Set Method for
Monotonically Advancing Fronts,” Proc. Nat. Acad. Sci., Vol.
93, No. 4, 1996.
[15] G. Storvik, "A Bayesian approach to dynamic contours
through stochastic sampling and simulated annealing," IEEE
Trans. on PAMI, Vol. 16, No. 10, pp. 976-986, Oct. 1994.
[16] A. Lundervold and G. Storvik, "Segmentation of brain
parenchyma and cerebrospinal fluid in multispectral magnetic
resonance images," IEEE Trans. on Medical Imaging, Vol. 14,
No. 2, pp. 339-349, June 1995.
[17] Koen Van Leemput, Frederik Maes, Dirk Vandermeulen,
and Paul Suetens, “Automated model-based bias field correction
of MR images of the brain,” IEEE Trans. on Medical Imaging,
Vol. 18, No.10, pp. 885-896, Oct. 1999.
[18] Yongmei Wang and Lawrence H. Staib, “Boundary finding
with correspondence using statistical shape models”, Proc. IEEE
Conf. Computer Vision and Pattern Recognition, pp. 338-345,
1998
[19] Lawrence H. Staib and James S. Duncan, “Boundary
finding with parametrically deformable models”,
IEEE Trans on PAMI, Vol. 14, No.11, pp. 1061-1075, Nov.
1992.
[20] X. Wang and W. Wee, “On a new deformable contour
method”, IEEE International Conference ICIAP, pp. 430-435,
September, 1999.
[21] X. Wang, and W. Wee, “A deformable contour method:
divide and conquer approach”, in this conference proceedings.