Component Analysis Approach to Estimation of Tissue Intensity Distributions of 3D Images
Arridhana Ciptadi, Cheng Chen and Vitali Zagorodnov
Department of Computer Engineering
Nanyang Technological University, Singapore
carridhana@ntu.edu.sg ccheng@pmail.ntu.edu.sg zvitali@ntu.edu.sg
Abstract
Many segmentation problems in medical imaging rely
on accurate modeling and estimation of tissue intensity
probability density functions. Gaussian mixture modeling, currently the most common approach, has several
drawbacks, such as reliance on a specific model and
iterative optimization. It also does not take advantage of the substantially larger amount of data provided by 3D acquisitions, which are becoming standard in clinical environments. We propose a novel, completely non-parametric
algorithm to estimate the tissue intensity probabilities in
3D images. Instead of relying on the traditional framework of iterating between classification and estimation, we pose
the problem as an instance of a blind source separation
problem, where the unknown distributions are treated as
sources and histograms of image subvolumes as mixtures.
The new approach performed well on synthetic data and
real magnetic resonance (MR) scans, robustly capturing
intensity distributions of even small image structures and
partial volume voxels.
I. Introduction
Many segmentation problems in medical imaging rely
on accurate modeling and estimation of tissue intensity
probability density functions (pdfs) [20], [24], [30], [14],
[21], usually in the context of statistical region-based
segmentation. Commonly, tissue intensity probabilities are
represented using the finite mixture (FM) model [20],
[7], [29], and its special case the finite Gaussian mixture
(FGM) [27], [1]. In these models the intensity pdf of each
tissue class is represented by a parametric (e.g. Gaussian
in the case of FGM) function called the component density
while the intensity pdf of the whole image is modeled by
a weighted sum of the tissue component densities. The
fitting is usually done using the Expectation Maximization (EM) algorithm [4], [16], [24], [21], [8], which iterates
between soft classification and parameter estimation until
a stable state is reached.
The main deficiency of FGM models is that the tissue
intensity distributions do not always have a Gaussian form.
The noise in magnetic resonance (MR) images is known
to be Rician rather than Gaussian [9]. Partial volume (PV)
voxels represent a mixture of ‘pure’ classes and have a non-Gaussian distribution even when the pure classes are Gaussian [12], [14], [26], [24], [28].
Another problem generally associated with the FM+EM framework is the local convergence of the iterative EM algorithm, which requires a sufficiently close parameter initialization [7], especially for the distribution means [6]. The
convergence of the EM algorithm to a more meaningful
optimum can be improved by including prior information
in the classification step, such as pixel correlations [25],
MRF priors [15], [28], [30], [18] or probabilistic atlas [15],
[18], [21]. However, probabilistic atlases are not available for some applications, as in the case of segmentation of brain lesions [22] or localization of fMRI activity [23].
Moreover, reliance on prior information can cause bias in
estimation [25].
Finally, the FM+EM approach often fails to take advantage of the substantially larger amount of data present in 3D images, which are becoming more and more common due to the increasing use of MR and CT scanning techniques.
We propose a novel non-parametric algorithm to estimate tissue intensity probabilities in 3D images that completely departs from the traditional classification-estimation framework. To illustrate the main idea behind our approach,
consider the following introductory example.
Shown in Figure 1 are the histograms of a 3D T1-weighted MR image and two of its 2D slices. The variability in the shape of the 2D histograms is due to varying tissue proportions across the slices.
Fig. 1. Histograms of a 3D brain image and several of its slices: (a) the 3D image and its histogram, (b) transverse slice 128 and its histogram, (c) transverse slice 152 and its histogram. (Horizontal axis: pixel intensity; vertical axis: voxel count.)
While this variability can potentially provide useful information for mixture estimation, it is traditionally discarded by performing estimation directly on the 3D histogram. Instead, the histogram of each 2D slice can be treated as a mixture realization of the component densities, with the number of realizations potentially larger than the number of components. This allows stating the unmixing problem as a blind source separation (BSS) problem [3], [5].
To solve the problem we use a framework that is
similar to that of independent component analysis (ICA)
[11], but without relying on the independence assumption.
Instead we use the fact that the underlying components must
be valid probability distributions with different means,
which results in a simple convex optimization problem that
guarantees convergence to a global optimum.
II. Problem Statement
Let V be a 3D image volume partitioned into a set
of N subvolumes V1 , V2 , . . . , VN . We assume the voxel
intensities of V can take L distinct values and are
drawn from K unknown probability mass functions (pmf)
f1 , f2 , . . . , fK ∈ RL . For example, a brain volume can be
assumed to have 3 main tissues, white matter (WM), gray
matter (GM) and cerebrospinal fluid (CSF), so K = 3. For
an 8-bit acquisition, L = 256. Subvolumes can be chosen arbitrarily, for example as coronal, sagittal or transverse slices of the 3D volume.

Let hi ∈ R^L be the L-bin histogram of Vi, normalized to sum to 1. Then

hi = Σ_{j=1}^{K} λij fj + ei,   1 ≤ i ≤ N    (1)

where λij is the j-th tissue proportion in the i-th subvolume, Σ_j λij = 1, and ei is the noise term that reflects the difference between the actual probability distribution and its finite approximation by a histogram.

Let H = [h1 h2 . . . hN]^T and Λ = {λij}. Rewriting (1) in matrix form yields

H = [h1 h2 . . . hN]^T = Λ [f1 f2 . . . fK]^T + [e1 e2 . . . eN]^T    (2)
which is identical to the Blind Source Separation (BSS) formulation, with subvolume histograms as mixtures and
unknown tissue pmf’s as sources. Our goal is to estimate
f1 , f2 , . . . , fK as well as their mixing proportions Λ given
the mixture matrix H.
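To make the construction of H concrete, the following Python/NumPy sketch builds the mixture matrix from subvolume histograms, using transverse slices as the subvolumes and an 8-bit intensity range. This is an illustrative sketch under our own naming and parameter choices, not the authors' implementation.

```python
import numpy as np

def histogram_mixture_matrix(volume, n_bins=256, axis=0):
    """Stack normalized subvolume histograms h_i into the N x L matrix H.

    Each slice of `volume` along `axis` plays the role of a subvolume V_i;
    its histogram is normalized to sum to 1, as required for a pmf.
    """
    subvolumes = np.moveaxis(volume, axis, 0)
    H = np.empty((subvolumes.shape[0], n_bins))
    for i, sub in enumerate(subvolumes):
        counts, _ = np.histogram(sub, bins=n_bins, range=(0, n_bins))
        H[i] = counts / counts.sum()
    return H

# Example: an 8-bit volume partitioned into transverse slices
volume = np.random.randint(0, 256, size=(64, 64, 64))
H = histogram_mixture_matrix(volume)   # shape (64, 256)
```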
Our solution requires several assumptions, most of
which are general to a BSS problem: N ≥ K, L > K, and
sufficient variability of mixing proportions Λ. These can be
easily satisfied with proper choice of partitioning and histogram dimensionality. We also assume that distributions
f1 , f2 , . . . , fK have different means and are sufficiently
separated from each other, where the meaning of sufficient
separation is detailed in Section III. These assumptions are
not very restrictive and are generally satisfied for medical
images [27], [19].
III. Proposed Solution
The BSS problem has been studied extensively in recent
years, with several solutions proposed for selected special cases, e.g. factor analysis (FA) [13] for Gaussian
sources and independent factor analysis (IFA) [2] or independent component analysis (ICA) [11] for independent
non-Gaussian sources. These cannot be extended to our
case because our source components are neither Gaussian
nor independent but instead are constrained to represent
valid probability distribution functions of voxel intensities
within approximately uniform image regions.
As with the ICA, the first step is to determine the
original subspace spanned by tissue pmf’s by applying
principal component analysis (PCA) to the mixture matrix
H. Assuming that PCA was successful, there will be a linear relationship between f1, f2, . . . , fK and the principal components p1, p2, . . . , pK:

F = TP    (3)

where F = [f1, f2, . . . , fK]^T and P = [p1, p2, . . . , pK]^T.
Estimation of T for K = 2 is the topic of Section III-A.
Case K > 2 is treated in Section III-B.
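As an illustration of this subspace step, a basis p1, . . . , pK can be obtained directly from the mixture matrix H. The sketch below uses a truncated SVD of H as a stand-in for the PCA step (the authors report using Matlab's pcacov), so the exact basis may differ while spanning a comparable subspace.

```python
import numpy as np

def estimate_subspace(H, K):
    """Return a K x L matrix P whose rows p_1..p_K span (approximately)
    the subspace containing the tissue pmf's f_1..f_K.

    Illustrative stand-in for the PCA step: rows of H are (noisy) convex
    combinations of the f_j, so the top-K right singular vectors of H
    span approximately the same subspace.
    """
    _, _, Vt = np.linalg.svd(H, full_matrices=False)
    return Vt[:K]

P = estimate_subspace(H, K=3)   # e.g. K = 3 for WM, GM and CSF
```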
A. Estimating Two Unknown Components

For K = 2, let p1, p2 be the estimated principal components and μ1, μ2 be the means of f1 and f2, respectively. Without loss of generality, we assume μ1 < μ2. Then, according to the following lemma, the mean μ of any valid pmf that is a linear combination of p1 and p2 must satisfy μ1 − ε1 ≤ μ ≤ μ2 + ε2, where ε1 and ε2 are usually small and can be ignored.

Lemma 1: Let K = 2 and μ1 < μ2 be the means of the underlying probability distributions. Let g = τ1 p1 + τ2 p2 satisfy Σ_{j=1}^{L} g(j) = 1 and g(j) > 0 for 1 ≤ j ≤ L. Then

μ1 − ε1(μ2 − μ1) ≤ Σ_{j=1}^{L} j g(j) ≤ μ2 + ε2(μ2 − μ1)    (4)

where ε1 = 1 / (max_i(f2(i)/f1(i)) − 1) and ε2 = 1 / (max_i(f1(i)/f2(i)) − 1). The equalities are achieved when g = f1 − ε1(f2 − f1) and g = f2 + ε2(f2 − f1).
Proof: From g = τ1 p1 + τ2 p2 and equation (3) it follows that g is also a linear combination of the fi's:

g = ζ1 f1 + ζ2 f2

Then

Σ_{j=1}^{L} g(j) = 1  →  Σ_{j=1}^{L} [ζ1 f1(j) + ζ2 f2(j)] = ζ1 + ζ2 = 1    (5)

and

μ = Σ_{j=1}^{L} j g(j) = Σ_{j=1}^{L} j [ζ1 f1(j) + ζ2 f2(j)] = ζ1 Σ_{j=1}^{L} j f1(j) + ζ2 Σ_{j=1}^{L} j f2(j) = ζ1 μ1 + ζ2 μ2

From (5) we can use the following parametrization: ζ1 = 1 + α and ζ2 = −α. Then μ = μ1 − α(μ2 − μ1), and to minimize μ we should make α as large as possible. However, the largest possible α is controlled by the non-negativity of g:

(1 + α) f1(i) − α f2(i) ≥ 0
f1(i) ≥ α (f2(i) − f1(i))
α ≤ f1(i) / (f2(i) − f1(i))
α ≤ 1 / (max_i(f2(i)/f1(i)) − 1)    (6)

This leads to the left side of inequality (4). Similarly, using the parametrization ζ1 = −α and ζ2 = 1 + α, we obtain μ = μ2 + α(μ2 − μ1), and hence to maximize μ we need to choose α as large as possible, which results in

α ≤ 1 / (max_i(f1(i)/f2(i)) − 1)    (7)

which leads to the right side of inequality (4).

In practice, the εi's can be assumed small or even zero. For example, if the fi components are Gaussian on an unbounded domain, it can be straightforwardly shown that max_i (f2(i)/f1(i)) = max_i (f1(i)/f2(i)) = ∞ and hence ε1 = ε2 = 0.

If we ignore ε1 and ε2, the equalities in (4) are achieved when g = f1 or g = f2. A similar statement can be made in the case of more than two components, see the following corollary.

Corollary 1: Let K > 2 and μ1 < μ2 < . . . < μK be the means of the underlying probability distributions. Let g = Σ_{i=1}^{K} τi pi satisfy Σ_{j=1}^{L} g(j) = 1 and g(j) > 0 for 1 ≤ j ≤ L. Then

μ1 ≤ Σ_{j=1}^{L} j g(j) ≤ μK

The equalities hold when g = f1 or g = fK.

An important implication of Lemma 1 and its corollary is that one can unmix two components by minimizing or maximizing the mean of a linear combination of principal components, subject to constraints that this linear combination represents a valid pmf. In other words, the coefficients τ1i of f1 = Σ_{i=1}^{K} τ1i pi and τKi of fK = Σ_{i=1}^{K} τKi pi can be estimated by solving the following Linear Programming optimization problems:

minimize: Σ_{i=1}^{K} τ1i Σ_{j=1}^{L} j pi(j)
subject to: Σ_{i=1}^{K} τ1i pi(j) ≥ 0, 1 ≤ j ≤ L
            Σ_{i=1}^{K} τ1i Σ_{j=1}^{L} pi(j) = 1    (8)

and

minimize: −Σ_{i=1}^{K} τKi Σ_{j=1}^{L} j pi(j)
subject to: Σ_{i=1}^{K} τKi pi(j) ≥ 0, 1 ≤ j ≤ L
            Σ_{i=1}^{K} τKi Σ_{j=1}^{L} pi(j) = 1    (9)

When the number of components is 2, (8) and (9) provide the complete solution. When the number of components is larger than two, (8) and (9) produce the components with the smallest and largest means. The next section discusses how the remaining components can be estimated.

B. Estimating More Than Two Components

First, let's assume that K = 3, and that f1 and f3 have been estimated using (8) and (9). Then the remaining component f2 can be estimated by minimizing its overlap with the first two components, which can be solved using another linear programming problem, as shown in the following lemma. Here the notation ⟨·, ·⟩ stands for the inner product between two vectors.

Lemma 2: Let min(‖f1‖², ‖f3‖²) > ⟨f1, f2⟩ + ⟨f2, f3⟩. If f1, f3 are known, then the τi coefficients of f2 = Σ_{i=1}^{K} τi pi are the solution of the following linear programming problem:

minimize: ⟨Σ_{i=1}^{3} τi pi, (f1 + f3)⟩
subject to: Σ_{i=1}^{3} τi pi(j) ≥ 0, 1 ≤ j ≤ L
            Σ_{i=1}^{3} τi Σ_{j=1}^{L} pi(j) = 1    (10)
Proof: The sum of overlaps between the unknown component and f1, f3 is

⟨g, (f1 + f3)⟩ = ⟨Σ_{i=1}^{3} ζi fi, (f1 + f3)⟩ = Σ_{i=1}^{3} ζi ⟨fi, (f1 + f3)⟩
              = ζ1 ‖f1‖² + (ζ1 + ζ3) ⟨f1, f3⟩ + ζ2 (⟨f1, f2⟩ + ⟨f2, f3⟩) + ζ3 ‖f3‖²

Let the function w(ζi) = ⟨g, (f1 + f3)⟩. Then

w_ζ1(ζi) = ‖f1‖² + ⟨f1, f3⟩
w_ζ3(ζi) = ‖f3‖² + ⟨f1, f3⟩
w_ζ2(ζi) = ⟨f1, f2⟩ + ⟨f2, f3⟩

Since w(ζi) is a linear function of the ζi's, 0 ≤ ζi ≤ 1 and ζ1 + ζ2 + ζ3 = 1, its minimum occurs at ζk = 1, ζj = 0, j ≠ k, where k is the coordinate along which w has the smallest slope, i.e.

k = arg min_i w_ζi

According to the lemma conditions, min(‖f1‖², ‖f3‖²) > ⟨f1, f2⟩ + ⟨f2, f3⟩, hence min(w_ζ1, w_ζ3) > w_ζ2. Therefore, the minimum value of w(ζi) is achieved when ζ1 = 0, ζ3 = 0 and ζ2 = 1, and g = Σ_{i=1}^{3} ζi fi = f2.

The necessary condition in Lemma 2, min(‖f1‖², ‖f3‖²) > ⟨f1, f2⟩ + ⟨f2, f3⟩, can be interpreted as sufficient separation between the underlying components, since the underlying functions are non-negative. More specifically, it requires that the sum of the overlaps between f1 and f2 and between f2 and f3 is smaller than the norm of f1 or f3. This translates into a minimum SNR of 1.655 in the case of Gaussian components. This requirement can be easily satisfied for most medical images.

In the case of more than three components, starting from the first two estimated components, all other components can be estimated one by one by minimizing their overlap with all previously estimated components, as shown in the following lemma.

Lemma 3: Let f1, . . . , fK be the underlying components, of which the first n are already known. Let

min_{i ≤ n} Σ_{j=1}^{n} ⟨fi, fj⟩ > max_{i > n} Σ_{j=1}^{n} ⟨fi, fj⟩    (11)

Then g = Σ_{i=1}^{K} τi pi, where the τi are the solutions of the following linear programming problem, will coincide with one of the remaining unknown components:

minimize: ⟨Σ_{i=1}^{K} τi pi, Σ_{a=1}^{n} fa⟩
subject to: Σ_{i=1}^{K} τi pi(j) ≥ 0, 1 ≤ j ≤ L
            Σ_{i=1}^{K} τi Σ_{j=1}^{L} pi(j) = 1    (12)

Proof: Let the function w(ζ1, . . . , ζK) = ⟨g, Σ_{j=1}^{n} fj⟩. Then w_ζi = Σ_{j=1}^{n} ⟨fi, fj⟩. As mentioned in Lemma 2, the minimum of w occurs when ζk = 1, ζj = 0, j ≠ k, where k is the coordinate along which w has the smallest slope. According to (11), the minimum must correspond to one of the unknown components, i.e. k > n.

Note that, on the side of inequality (11) taken over the known components, one of the terms in each sum is equal to the norm of a component, while all other terms on both sides correspond to overlaps between components. Hence condition (11) can again be interpreted as sufficient separation between the underlying components.

IV. Experimental Results

A. Implementation
Our algorithm was implemented fully in Matlab, using
built-in functions pcacov and linprog to estimate PCA
components and perform linear programming optimization.
The volume partition was limited to cuboids of size
5 × 5 × 5, which was chosen empirically and used for
all subsequent experiments.
During initial testing on simulated data we discovered
that the non-negative constraint imposed on estimated
components fi was too strict. The histogram noise and
errors in estimating the subspace can lead to an infeasible
optimization problem or a very narrow search space. To
overcome this we relaxed the non-negativity constraint fi ≥ 0 to fi ≥ −1/(2L), where L is the number of histogram bins. This negative bound was small enough not to cause any visible estimation problems in our experiments.
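For illustration, the minimum-mean linear program (8), with the relaxed lower bound of −1/(2L) just described, can be written down directly. The sketch below uses scipy.optimize.linprog in place of the authors' Matlab linprog; variable names and defaults are our own assumptions. The maximum-mean component of (9) follows by negating the objective.

```python
import numpy as np
from scipy.optimize import linprog

def extract_min_mean_component(P, relaxed=True):
    """Solve LP (8): find tau such that f = tau @ P is a (nearly) valid pmf
    with the smallest possible mean, where the rows of P are p_1..p_K.
    """
    K, L = P.shape
    bins = np.arange(L)
    c = P @ bins                          # mean of tau @ P equals c . tau
    lower = -1.0 / (2 * L) if relaxed else 0.0
    A_ub = -P.T                           # f(j) >= lower  <=>  -f(j) <= -lower
    b_ub = np.full(L, -lower)
    A_eq = P.sum(axis=1).reshape(1, K)    # sum_j f(j) = 1
    b_eq = np.array([1.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(None, None), method="highs")
    return res.x @ P                      # estimated component with smallest mean

f_low = extract_min_mean_component(P)     # P from the PCA step above
```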
B. Estimating intensity distributions from structural MR data
We applied our algorithm to T1-weighted brain images from two publicly available data sets, BrainWeb (http://www.bic.mni.mcgill.ca/brainweb/) and IBSR (http://www.cma.mgh.harvard.edu/ibsr/). The BrainWeb
data set contains realistic synthesized brain volumes with
varying degrees of noise and intensity nonuniformity, and
1 × 1 × 1mm3 resolution. The IBSR data set contains real
MR acquisitions made on a 1.5 T scanner with resolution
1 × 1 × 1.5mm3 . Both data sets contained ground truth
for GM and WM. In addition, the BrainWeb data set also
contained ground truth for CSF. We further augmented
the ground truth to include mix classes of partial volume
voxels, namely the GM-WM (for both data sets) and
the CSF-GM (for BrainWeb data set only). The partial
volume voxels were defined as the voxels located near
the boundary between two tissues. Practically, these were
identified by performing a one voxel erosion of each
tissue with standard 6-neighbor connectivity. All non-brain
tissues were removed prior to processing.
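The partial volume ground truth described above can be reproduced with simple morphological operations. The sketch below is one plausible reading of the procedure (one-voxel erosion with 6-neighbor connectivity), with function and variable names of our own, rather than the authors' exact code.

```python
import numpy as np
from scipy import ndimage

def boundary_voxels(mask):
    """Voxels of a binary tissue mask that are removed by a one-voxel
    erosion with standard 6-neighbor connectivity."""
    structure = ndimage.generate_binary_structure(3, 1)   # 6-connectivity
    return mask & ~ndimage.binary_erosion(mask, structure=structure)

def gm_wm_mix(gm, wm):
    """One plausible definition of the GM-WM partial volume class:
    boundary voxels of either tissue that touch the other tissue."""
    return (boundary_voxels(gm) & ndimage.binary_dilation(wm)) | \
           (boundary_voxels(wm) & ndimage.binary_dilation(gm))
```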
Our algorithm provided excellent estimates of the peak position, shape and proportion of each distribution on the BrainWeb data set when only the main classes were estimated (Fig. 2). Inclusion of mix classes slightly reduced
the quality of estimation, especially for large noise levels. However, it is remarkable that our algorithm was capable of capturing the two-peak shape of the CSF-GM mix class distribution [Fig. 2 a) and b)], which would not be possible with a Gaussian mixture model. To compare our algorithm with several other approaches on the volume with noise=7, compare Fig. 2 c) with Fig. 6 in reference [6].

Fig. 2. Estimating 3 classes [CSF GM WM] (left) and 3 pure classes [CSF GM WM] + 2 mix classes [CSF-GM GM-WM] (right) on BrainWeb data; panels (a), (b) and (c) correspond to noise=3, noise=5 and noise=7. Ground truth distributions are shown using dotted lines. (Horizontal axis: pixel intensity; vertical axis: probability.)

TABLE I. Correlation and proportion error between estimated and true distributions, averaged over 18 IBSR volumes

                      Correlation - mean [range]    Prop. error - mean [range]
  [GM WM]             0.947 [0.758-0.995]           0.072 [0.002-0.172]
  [GM WM GM-WM]       0.911 [0.704-0.997]           0.099 [0.005-0.241]

Fig. 3. Estimating 2 classes [GM WM] (left) and 2 pure classes [GM WM] + 1 mix class [GM-WM] (right) on IBSR volume 8. Ground truth distributions are shown using dotted lines. (Horizontal axis: pixel intensity; vertical axis: probability.)
Our algorithm also performed well on the IBSR data set (Fig. 3, Table I). Here by proportion error we mean the mean absolute difference between estimated and true proportions for each class. To assess the quality of distribution shape estimation we used the average correlation between estimated and true distributions.
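For reference, the two summary statistics of Table I can be computed as follows (a minimal sketch; argument names are illustrative):

```python
import numpy as np

def distribution_correlation(f_est, f_true):
    """Pearson correlation between an estimated and a true distribution."""
    return np.corrcoef(f_est, f_true)[0, 1]

def proportion_error(lam_est, lam_true):
    """Mean absolute difference between estimated and true class proportions."""
    return float(np.mean(np.abs(np.asarray(lam_est) - np.asarray(lam_true))))
```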
C. Estimating distribution of activated voxels from simulated functional MR data

Activated regions in functional MRI experiments are typically detected using significance threshold testing [17]. This allows controlling for Type I error but not for Type II error. While knowledge of the “activated” distribution would be helpful in determining a more appropriate threshold [10], the small size of the activated class makes it challenging to estimate its distribution.
To simulate functional MRI data we created a set of synthetic 200 × 200 × 200 resolution images, where activated
regions were modeled as uniform intensity cubes of size
3×3×3 voxels on a uniform background. The images were
corrupted by Gaussian noise, thus creating two Gaussian
distributions for non-activated and activated classes. We
then varied the difference between the means of the two distributions and the proportion of the activated (smaller) class to obtain different samples for our experiments.
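A minimal sketch of this data generation, under our own assumptions about cube placement and intensity scale (the paper specifies only 3 × 3 × 3 activated cubes on a uniform background corrupted by Gaussian noise):

```python
import numpy as np

def simulate_fmri_volume(shape=(200, 200, 200), n_cubes=500,
                         mean_diff=2.0, sigma=1.0, seed=0):
    """Uniform background with 3x3x3 'activated' cubes plus Gaussian noise.

    mean_diff / sigma plays the role of the SNR; n_cubes controls the
    proportion of the activated (smaller) class.
    """
    rng = np.random.default_rng(seed)
    vol = np.zeros(shape)
    activated = np.zeros(shape, dtype=bool)
    for _ in range(n_cubes):
        x, y, z = (int(rng.integers(0, s - 3)) for s in shape)
        vol[x:x+3, y:y+3, z:z+3] = mean_diff
        activated[x:x+3, y:y+3, z:z+3] = True
    vol += rng.normal(0.0, sigma, size=shape)
    return vol, activated
```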
To provide quantitative performance assessment we
used parameter estimates to determine the optimal threshold that minimizes misclassification error (the sum of Type
I and II errors). We then recorded the percentage increase
in misclassification error when using the found threshold
vs. the optimal one, derived from the true distribution
parameters, comparing the results with those obtained by
the EM algorithm (Figure 4). The EM algorithm was
initialized with true parameter values, corresponding to
ideal performance that can rarely be achieved in practice,
as the parameter values are never known precisely.
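As an illustration of the threshold-selection step, the threshold minimizing a sum of Type I and Type II error rates for two Gaussian classes can be found numerically; whether and how the rates are weighted by class proportions is not fully specified in the text, so the weights are exposed as parameters in this sketch.

```python
import numpy as np
from scipy.stats import norm

def optimal_threshold(mu0, sigma0, mu1, sigma1, w0=1.0, w1=1.0):
    """Threshold t minimizing w0 * P(x > t | class 0) + w1 * P(x < t | class 1),
    i.e. a weighted sum of Type I and Type II error rates (grid search;
    assumes mu0 < mu1, with class 0 the non-activated class)."""
    ts = np.linspace(mu0 - 3 * sigma0, mu1 + 3 * sigma1, 20001)
    err = w0 * norm.sf(ts, loc=mu0, scale=sigma0) + \
          w1 * norm.cdf(ts, loc=mu1, scale=sigma1)
    return float(ts[np.argmin(err)])
```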
The performance of our algorithm was not affected across the SNR (ratio of the difference between the means to the standard deviation) range of 2 to 6. In each case, the estimated threshold was practically as good as the optimal threshold as long as the proportion of the smaller class was larger than 0.68-1%. The performance of EM-based estimation was significantly worse than that of our approach for SNR=2-4, and comparable (or slightly better) for SNR=6. However, considering that an imperfect initialization would likely reduce the EM algorithm's performance, our approach offers a superior alternative to the EM algorithm in this application.
Fig. 4. Our approach vs. EM-based estimation: percentage increase in misclassification error as a function of the smaller class proportion, for (a) SNR=2, (b) SNR=4, (c) SNR=6.

V. Conclusions

We developed a novel, completely non-parametric algorithm to estimate the tissue intensity probability distributions in 3D images by treating the problem as an instance of a blind source separation problem.
The new approach performed well on several sets of synthetic data and real magnetic resonance (MR) scans, robustly capturing intensity distributions of even small image structures and partial volume voxels. The new approach presents a promising alternative to traditional EM-based estimation.
VI. Acknowledgements
This work was supported by SBIC grant C-012/2006 provided by A*STAR, Singapore (Agency for Science, Technology and Research).
References
[1] J. Ashburner and K. J. Friston. Unified segmentation. Neuroimage,
26(3):839–51, 2005.
[2] H. Attias. Independent factor analysis. Neural Computation, 11(4),
1999.
[3] A. Belouchrani, K. Abed-Meraim, J.-F. Cardoso, and E. Moulines.
A blind source separation technique using second-order statistics.
IEEE Trans. on Signal Processing, 45(2):434–444, 1997.
[4] C. A. Bouman and M. Shapiro. A multiscale random field model
for bayesian image segmentation. IEEE Trans. Image Processing,
3(2):162–77, 1994.
[5] J.-F. Cardoso. Infomax and maximum likelihood for blind source
separation. IEEE Signal Processing Lett., 4(4):112–114, 1997.
[6] M. B. Cuadra, L. Cammoun, T. Butz, O. Cuisenaire, and J.-P. Thiran.
Comparison and validation of tissue modelization and statistical
classification methods in T1-weighted MR brain images. IEEE
Trans. Med. Imag., 24(12), 2005.
[7] M. A. Figueiredo and A. K. Jain. Unsupervised learning of
finite mixture models. IEEE Trans. Pattern Anal. Machine Intell.,
24(3):381–396, 2002.
[8] H. Greenspan, A. Ruf, and J. Goldberger. Constrained Gaussian
mixture model framework for automatic segmentation of MR brain
images. IEEE Trans. Med. Imag., 25(9):1233–45, 2006.
[9] H. Gudbjartsson and S. Patz. The rician distribution of noisy mri
data. Magn Reson Med, 34(6):910–4, 1995.
[10] N. V. Hartvig and J. L. Jensen. Spatial mixture modeling of fMRI
data. Hum Brain Mapp, 11(4):233–48, 2000.
[11] A. Hyvarinen, J. Karhunen, and E. Oja. Independent Component
Analysis. Wiley-Interscience, 2001.
[12] D. Laidlaw, K. W. Fleischer, and A. H. Barr. Partial-volume
bayesian classification of material mixtures in mr volume data using
voxel histograms. IEEE Trans. Med. Imag., 17(1):74–86, 1998.
[13] D. Lawley and A. Maxwell. Factor analysis as a statistical method.
The Statistician, 12(3):209–229, 1962.
[14] K. V. Leemput, F. Maes, D. Vandermeulen, and P. Suetens. A unifying framework
for partial volume segmentation of brain MR images. IEEE Trans.
Med. Imag., 22(1), Jan 2003.
[15] K. V. Leemput, F. Maes, D. Vandermeulen, and P. Suetens. Automated model-based tissue classification of MR images of the brain.
IEEE Trans. Med. Imag., 18(10):897–908, 1999.
[16] T. Lei and W. Sewchand. Statistical approach to X-ray CT imaging
and its applications in image analysis. IEEE Trans. Med. Imag.,
11(1):62–9, 1992.
[17] B. R. Logan and D. B. Rowe. An evaluation of thresholding
techniques in fmri analysis. Neuroimage, 22(1):95–108, 2004.
[18] J. L. Marroquin, B. C. Vemuri, S. Botello, F. Calderon, and
A. Fernandez-Bouzas. An accurate and efficient bayesian method
for automatic segmentation of brain MRI. IEEE Trans. Med. Imag.,
21(8):934–45, 2002.
[19] R. Nagarajan and C. A. Peterson. Identifying spots in microarray
images. IEEE Trans. Nanobiosci., 1(2), 2002.
[20] F. O’Sullivan. Imaging radiotracer model parameters in PET: a
mixture analysis approach. IEEE Trans. Med. Imag., 12(3):399–
412, 1993.
[21] H. Park, P. H. Bland, and C. R. Meyer. Construction of an
abdominal probabilistic atlas and its application in segmentation.
IEEE Trans. Med. Imag., 22(4):483–92, 2003.
[22] M. Prastawa.
Automatic brain tumor segmentation by subject specific modification of atlas priors. Academic Radiology,
10(12):1341–1348, 2003.
[23] J. C. Rajapakse, F. Kruggel, J. M. Maisog, and D. Y. von Cramon.
Modeling hemodynamic response for analysis of functional MRI
time-series. Hum Brain Mapp, 6(4):283–300, 1998.
[24] S. Ruan, C. Jaggi, J. Xue, J. Fadili, and D. Bloyet. Brain tissue
classification of magnetic resonance images using partial volume
modeling. IEEE Trans. Med. Imag., 19(12):1179–87, 2000.
[25] S. Sanjay-Gopal and T. J. Hebert. Bayesian pixel classification using
spatially variant finite mixtures and the generalized EM algorithm.
IEEE Trans. Image Processing, 7(7):1014–28, 1998.
[26] P. Santago and H. D. Gage. Quantification of mr brain images by
mixture density and partial volume modeling. IEEE Trans. Med.
Imag., 12(3):566–74, 1993.
[27] P. Schroeter, J. M. Vesin, T. Langenberger, and R. Meuli. Robust
parameter estimation of intensity distributions for brain magnetic
resonance images. IEEE Trans. Med. Imag., 17(2):172–86, 1998.
[28] D. W. Shattuck, S. R. Sandor-Leahy, K. A. Schaper, D. A. Rottenberg, and R. M. Leahy. Magnetic resonance image tissue
classification using a partial volume model. Neuroimage, 13(5):856–
76, 2001.
[29] J. Tohka, E. Krestyannikov, I. D. Dinov, A. M. Graham, D. W.
Shattuck, U. Ruotsalainen, and A. W. Toga. Genetic algorithms for
finite mixture model based voxel classification in neuroimaging.
IEEE Trans. Med. Imag., 26(5):696–711, 2007.
[30] Y. Zhang, M. Brady, and S. Smith. Segmentation of brain mr images
through a hidden markov random field model and the expectationmaximization algorithm. IEEE Trans. Med. Imag., 20(1):45–57,
2001.