Anatomically Guided Registration for Multimodal Images

Manasi Datar, Girish Gopalakrishnan, Sohan Ranjan, Rakesh Mullick
Imaging Technologies, GE Global Research, Bangalore, India
manasi.datar@ge.com
Abstract
With an increase in full-body scans and longitudinal acquisitions to track disease progression, it becomes important to establish correspondence between multiple images. One example is monitoring the size/location of tumors in PET images during chemotherapy to assess treatment progression. While there is a need to go beyond a single parametric transform to recover misalignments, pure deformable solutions can be complex, time-consuming and at times unnecessary. A simple anatomically guided approach to whole-body image registration offers enhanced alignment of large-coverage inter-scan studies. In this work, we apply anatomy-specific transformations to capture the independent motions of body regions. The solution is characterized by an automatic segmentation of regions in the image, followed by custom registration and volume stitching. We have tested this algorithm on phantom images as well as clinical longitudinal datasets, and show that decoupling the transformations improves overall registration quality.
Keywords: Constraint-based registration, non-rigid transformation, piece-wise custom registration, anatomically-guided, multi-modality, radiation therapy, longitudinal studies, optical flow, mutual information
1. Introduction

In the field of medical imaging, we observe a steady increase in the number of scans a patient undergoes for different purposes (diagnosis, therapy planning, intra-procedural guidance and follow-up). These images, whether acquired over time or from different modalities, have increasing coverage. Current scanning methods (CT/MR) can acquire full-body scans in the order of minutes. In a full-body scan, there exist several regions capable of local movement independent of the global body motion. Each such movement is restricted by the degrees of freedom of the corresponding joint. One such instance is evident in full-body oncology imaging, as depicted in Fig. 1. An initial diagnostic CT scan helps localize the pathology. A targeted PET scan to confirm the presence of a specific tumor/lesion may follow. The treatment planning procedure may involve a simulated-CT acquisition in the surgery position. Treatment efficacy and disease progression may be monitored by follow-up scans in the same sequence.

Fig. 1 Timeline showing the various scans acquired over a cancer treatment cycle*.
*Datasets depicted are only representative images and do not belong to the same patient.

As the head and the body are both acquired, registration errors often arise due to neck and shoulder motion. Another example is observed when multiple scans of the patient are acquired over a period of time to study pathological changes. These factors have played a significant role in the emergence of a registration problem that must account for relative motion between different parts of the body.
There is a clear need to go beyond a global rigid solution in these cases. In the past, this problem has been addressed using two broad methods: piece-wise rigid and pure deformable. Prior algorithms perform piece-wise registrations [1, 2] in which regions of interest within a volume are selected based on structure/features or intensity. Other algorithms perform a non-rigid registration to obtain a dense deformation field [3]. These algorithms are very slow (running into several hours) and fail to recover large deformations. Finite element-based registration for local internal body registration has been loosely recommended in the literature but has not been implemented or demonstrated to work. These solutions are not suited to the application at hand and, if applied regardless, may result in unnecessary computation and inaccuracies.
Since the need for registering volumes with greater coverage is becoming more significant, we address this issue by using independent anatomy-driven registrations that are finally integrated to compose the registered volume. Our approach leverages an understanding of the underlying (skeletal) anatomy and its motion as constrained by joints. This knowledge is used to separate body regions based on gross body kinematics.
2. Method

In our experiment with the phantom, we used the IEC/NEMA Image Quality Phantom. Two phantoms, each containing six spheres (ID: 10, 13, 17, 22, 28, 37 mm) and a lung insert, were filled with F-18 water (sphere-to-background ratio of 5:1) and placed back to back in the FOV of a GE DRX PET/CT scanner. A CT scan was performed covering both phantoms (120 kVp, 150 mA, 0.5 s rotation, pitch of 1.375, 16*1.25). A PET scan followed the CT scan. Prior to the PET scan, one of the phantoms was rotated by 9 degrees clockwise along its edge. PET data was acquired in 2D mode (duration: 5 min; OSEM-based reconstruction, 2 iterations, 35 subsets). This phantom set-up was an attempt to simulate the head-torso arrangement in real data.

We then tested our approach on 8 CT-CTAC datasets adding up to 21 time points. CT-PET whole-body and neurological images were also used in this investigation. We split the whole-body image along the axial plane into parts that are capable of independent motion. Perceptible anatomy such as the neck, arms, knees and pelvis can be segmented using a manual z-plane selection or by automatic schemes such as the profiling algorithms used by Suryanarayanan et al. [4] and Shen et al. [5]. We have separated the body along two planes: one at the neck and the other around the pelvis; a split-and-stitch sketch follows below.
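As an illustration of this bookkeeping, the following is a minimal Python sketch using SimpleITK-style axial slicing. The split indices z_pelvis and z_neck are hypothetical placeholders standing in for the manually or automatically selected planes, and the feet-to-head slice ordering is an assumption.

    import SimpleITK as sitk

    def split_axial(image, z_pelvis, z_neck):
        # Split a whole-body volume into three regions along two axial planes.
        # Assumes the slice index increases from feet to head.
        lower = image[:, :, :z_pelvis]          # region below the pelvic split
        torso = image[:, :, z_pelvis:z_neck]    # region between pelvis and neck
        head = image[:, :, z_neck:]             # region above the neck split
        return lower, torso, head

    def stitch_axial(lower, torso, head):
        # Recompose the registered sub-volumes into a single volume along z.
        tiler = sitk.TileImageFilter()
        tiler.SetLayout([1, 1, 0])              # 1x1 tiles in-plane, stack in z
        return tiler.Execute([lower, torso, head])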
The three resultant regions were put through custom registrations based on prior knowledge of suitable algorithms for a given anatomy-modality pair [Fig. 2]. Post-segmentation, we gather more information about the two images (such as the nature of the objects, their elasticity, the type of camera/scanner used for acquisition, etc.) to select an appropriate method of registration for every partition. An example customization for two CT images is as follows. For the head data from each image, we start by centering the two volumes and minimize the mean square error by applying structured versor (rigid) transforms, using a regular step gradient descent optimizer. For the second pair, delineating the thoracic area, we apply a pure deformable optical flow based [3] transformation [Appendix] to minimize the sum of squared differences; we pre-process these images using histogram matching and perform a global regularization with a Gaussian kernel after the flow calculation step. For the third region, below the split at the pelvic bone, we maximize mutual information [6] [Appendix], selecting samples to represent the volumes; sampling the volume improves the speed of the algorithm. Here we use a 12-parameter affine transform to map the moving image during the matching process. A code sketch of these three choices is given below.
Fig. 2 Flowchart highlighting the important steps in the algorithm.
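To make the per-region customization concrete, here is a minimal sketch of the three registrations using SimpleITK, a Python wrapper around ITK [7]. All parameter values (iteration counts, step sizes, histogram bins, sampling percentage) are illustrative placeholders, not the tuned settings used in our experiments.

    import SimpleITK as sitk

    def register_head(fixed, moving):
        # Rigid (versor) registration minimizing mean squares, as for the head.
        fixed = sitk.Cast(fixed, sitk.sitkFloat32)
        moving = sitk.Cast(moving, sitk.sitkFloat32)
        init = sitk.CenteredTransformInitializer(
            fixed, moving, sitk.VersorRigid3DTransform(),
            sitk.CenteredTransformInitializerFilter.GEOMETRY)
        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMeanSquares()
        reg.SetOptimizerAsRegularStepGradientDescent(
            learningRate=1.0, minStep=1e-4, numberOfIterations=200)
        reg.SetInitialTransform(init)
        reg.SetInterpolator(sitk.sitkLinear)
        return reg.Execute(fixed, moving)

    def register_thorax(fixed, moving):
        # Demons (optical-flow) registration with histogram matching, as for the thorax.
        fixed = sitk.Cast(fixed, sitk.sitkFloat32)
        moving = sitk.Cast(moving, sitk.sitkFloat32)
        moving = sitk.HistogramMatching(moving, fixed)
        demons = sitk.DemonsRegistrationFilter()
        demons.SetNumberOfIterations(50)
        demons.SetStandardDeviations(1.0)       # Gaussian regularization of the field
        field = demons.Execute(fixed, moving)
        return sitk.DisplacementFieldTransform(field)

    def register_pelvis(fixed, moving):
        # 12-parameter affine registration maximizing Mattes mutual information.
        fixed = sitk.Cast(fixed, sitk.sitkFloat32)
        moving = sitk.Cast(moving, sitk.sitkFloat32)
        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
        reg.SetMetricSamplingStrategy(reg.RANDOM)
        reg.SetMetricSamplingPercentage(0.05)   # sample the volume for speed
        reg.SetOptimizerAsRegularStepGradientDescent(
            learningRate=1.0, minStep=1e-4, numberOfIterations=200)
        reg.SetInitialTransform(sitk.AffineTransform(3), inPlace=False)
        reg.SetInterpolator(sitk.sitkLinear)
        return reg.Execute(fixed, moving)

Each function returns a transform that can be applied with sitk.Resample to the corresponding moving sub-volume before the pieces are stitched back together.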
3. Results
Fig. 3 shows the results obtained on the IEC/NEMA
phantom. As the simulated head (top) and the torso
(bottom) move independently, a single transform
obtained by rigid registration is not sufficient to obtain
correct alignment. This is shown in the top row of Fig.
3. The bottom row shows the alignment after the
motion of the head and torso are decoupled and custom
registration is carried out. The improvement in the
results is appreciable. Results on clinical data are
shown in Fig. 4 and Fig. 5. In Fig. 5, the proposed approach is able to decouple and recover the motion in the neck and pelvis regions independently. In both
cases, we can see a qualitative improvement in the
alignment after anatomically guided registration.
Fig. 3 Top: Result of rigid registration on the phantom {torso (left) and head (right)}. Bottom: Result of our approach (split indicated as a pink line) on the phantom.
Fig. 4 Left: CT-PET alignment following rigid registration. Right: CT-PET alignment following splitting and custom registration. Cropping of the hands in this case improved the registration.
Fig. 5 Top: CT-CT alignment after rigid registration. Bottom: CT-CT alignment after custom registration using splits.
4. Conclusion and future directions
We have tested our algorithm on phantom images,
CT-CTAC (Diagnostic CT with CT scans from a
PET/CT scanner used for attenuation correction),
whole body CT-PET and neuro CT-PET images. In all
the above cases, we were successful in decoupling
transformations that occur above and below split
locations, thus improving the overall registration
quality. All images showed significant qualitative
improvement in their registered state when compared
to rigid-only. A clinical assessment is presently being
defined to gauge the potential applicability in routine
clinical practice. A comparison of computation time and accuracy between our method and a non-rigid approach is yet to be performed. The main challenge foreseen today is the relative ease/difficulty of segregating (patient-specific) moving/non-moving regions, e.g. hands-up versus hands-down positioning and the bend of the vertebral column. We have successfully tested this approach on both mono- and multi-modality images. Integrating segmentation and registration visually improved image quantification, diagnosis and planning in oncology. While performing deformable registration, especially on CT images, the rigidity of regions such as the bones needs to be preserved. This requires a classification scheme that will aid in consistently defining regions based on their rigidity. An exhaustive clinical evaluation has been structured.
5. Acknowledgements
The authors wish to thank Dr. Mawlawi from MD
Anderson Cancer Center, Houston, TX for providing
data and coordinating experiments using the phantom
and Andre Van Nuffel for providing real data and case
studies.
6. References
[1] G. Gopalakrishnan, S. V. Bharath Kumar, A. Narayanan and R. Mullick, “A fast piece-wise deformable method for multi-modality image registration,” Proceedings of the 34th Applied Imagery Pattern Recognition Workshop (AIPR'05), 2005, pp. 114-119

[2] A. Pitiot, G. Malandain, E. Bardinet and P. Thompson, “Piecewise Affine Registration of Biological Images,” Second International Workshop on Biomedical Image Registration (WBIR'03), 2003, pp. 91-101
[3] J-P. Thirion, "Image matching as a diffusion process: An
analogy with Maxwell’s demons," Medical Image Analysis,
vol. 2, no. 3, 1998, pp. 243-260
[4] S. Suryanarayanan, R. Mullick, Y. Mallya, V. Kamath
and N. Nagaraj, “Automatic partitioning of head CTA for
enabling segmentation”, Medical Imaging 2004: Image
Processing. Proceedings of the SPIE, 2004, vol. 5370, pp.
410-419
[5] H. Shen and E. Bartsch, “Intelligent data splitting for volume data,” Medical Imaging 2006: Image Processing. Proceedings of the SPIE, 2006, vol. 6144, pp. 1419-1425
[6] D. Mattes, D. R. Haynor, H. Vesselle, T. K. Lewellen and W. Eubank, “Non-rigid multi-modality registration,” Medical Imaging 2001: Image Processing. Proceedings of the SPIE, 2001, vol. 4322, pp. 1609-1620
[7] L. Ibanez, W. Schroeder, L. Ng and J. Cates, “The ITK Software Guide,” Kitware, Inc., 2005
7. Appendix
Mutual information [6] between two discrete random variables X and Y is defined as

$$I(X;Y) = H(X) - H(X \mid Y) = H(Y) - H(Y \mid X)$$

which can also be written as

$$I(X;Y) = H(X) + H(Y) - H(X,Y)$$ …(1)
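As a concrete reading of Eq. (1), the following minimal NumPy sketch estimates the entropies from a joint histogram of two intensity arrays; the bin count is an arbitrary choice here.

    import numpy as np

    def mutual_information(x, y, bins=32):
        # Estimate I(X;Y) = H(X) + H(Y) - H(X,Y) from a joint histogram.
        joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
        pxy = joint / joint.sum()                 # joint probability p(x, y)
        px = pxy.sum(axis=1)                      # marginal p(x)
        py = pxy.sum(axis=0)                      # marginal p(y)
        nz = pxy > 0                              # avoid log(0)
        h_xy = -np.sum(pxy[nz] * np.log(pxy[nz]))
        h_x = -np.sum(px[px > 0] * np.log(px[px > 0]))
        h_y = -np.sum(py[py > 0] * np.log(py[py > 0]))
        return h_x + h_y - h_xy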
The optical flow [3] or displacement D(i) between images X(i) and Y(i) is calculated using the equation:

$$D(i) = -\frac{\left(X(i) - Y(i)\right)\,\nabla X(i)}{\left\|\nabla X(i)\right\|^{2} + \left(X(i) - Y(i)\right)^{2}/K}$$ …(2)
where K is a normalization factor that accounts for the imbalance in units between intensities and gradients. This factor is computed as the mean squared value of the pixel spacings. The inclusion of K makes the force computation invariant to the pixel scaling of the images [7].
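A direct NumPy transcription of Eq. (2) might look as follows; the sketch simply evaluates the formula pointwise for two 3-D arrays, one displacement component per axis.

    import numpy as np

    def demons_displacement(x, y, spacing=(1.0, 1.0, 1.0)):
        # Pointwise evaluation of Eq. (2) for 3-D arrays X and Y.
        grad = np.array(np.gradient(x, *spacing))      # components of the gradient of X
        diff = x - y                                   # intensity mismatch X(i) - Y(i)
        k = np.mean(np.square(spacing))                # K: mean squared pixel spacing
        denom = np.sum(grad ** 2, axis=0) + diff ** 2 / k
        denom = np.where(denom == 0.0, np.inf, denom)  # guard against division by zero
        return -diff * grad / denom                    # displacement field D(i)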