From: AAAI Technical Report SS-94-05. Compilation copyright © 1994, AAAI (www.aaai.org). All rights reserved.

Retrospective Fusion of CT and MR Brain Images Using Mathematical Operators

Petra A. van den Elsen
Radiological Sciences Laboratory, Dept. of Radiology, Stanford University School of Medicine,
Stanford, CA 94305-5488, USA
Computer Vision Research Group, Dept. of Radiology and Nuclear Medicine,
University Hospital Utrecht, The Netherlands
email: elsen@s-word.stanford.edu
Abstract

An automated method to register three-dimensional Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) brain images is presented. In a first step, mathematically well founded differential operators in scale space are applied to 2D or 3D CT and MRI images, resulting in feature images depicting "ridgeness". In a second step, a fully automatic hierarchical correlation scheme is applied to match the feature images. 2D and 3D matching results are shown.

Introduction

Clinical diagnosis, as well as therapy planning and evaluation, are increasingly supported by images generated by more than one imaging device. There are many instances in which integration of the information obtained by various imaging devices is desirable [1]. For example, combining the information obtained from CT and MRI brain images may provide a neurosurgeon with a better understanding of the three dimensional relation between space occupying lesions and neighboring anatomy. Radiotherapy planning may also benefit from CT/MRI matching: a CT scan is needed for dose distribution calculations, while the contours of the target lesion are often best outlined on MRI.

Matching algorithms may be classified according to various distinguishing criteria. Van den Elsen et al. propose and elaborate a classification scheme for registration algorithms [1]. The main division of matching algorithms is into methods that use some kind of fiducial marker system attached to the patient and methods that use patient related image properties to determine the transformation relating two images. Examples of fiducial marker systems are stereotactic frames attached to the skull by fixation screws [2, 3, 4], head moulds [5], dental frames [6], and skin markers [7, 8]. Patient related image properties can be, for example, anatomical landmark points [9, 10, 11], objects (e.g., skull, brain, ventricles) [12, 13], or geometrical image features [14, 15, 16]. Since no fiducials are needed, methods using patient related properties are optimal in patient-friendliness and can be used even with images that were acquired before the need for matching was recognized. The latter is called retrospective matching. However, the extraction of corresponding features from images of different modalities is not trivial. Many methods need human interaction to select the corresponding features or to guide the matching procedure, and may consequently suffer from user subjectivity. This paper presents an automated matching method using only patient related image features.

Methods

In this work, mathematically well founded, geometrically invariant feature operators, combining differential geometry with Gaussian blurring [17], are used to automatically extract similar "ridgeness" features from brain images. The purpose of blurring is to explicitly reveal coarse scale image structures. At the appropriate blurring level, our "ridge" operator, in 2D defined as the second order derivative of the image intensity in the direction normal to the (local) gradient, produces very similar feature images from CT and MR images, primarily depicting the center curve of the cranium [18, 15]. In 3D our measure of ridgeness is defined as the second derivative of the image intensity, minimized (or maximized for inverse ridges or troughs) over all directions perpendicular to the local gradient [18]. Since the interindividual differences in skull thickness are relatively small, a fixed scale can be selected at which skull features for the matching procedure are extracted. Figure 1 shows two corresponding slices taken from a matched pair of high resolution CT and MRI data sets registered by fiducial markers placed on the patient's skin [8], together with the corresponding feature image slices taken from the feature images produced by the 3D ridge operator.

In general, grey value correlation is unsuited to register images made with different imaging devices, because of the different physical realities involved. The CT and
MRI "ridgeness" feature images, however, show such similarity that fully automated correlation techniques can be used for registration purposes. The correlation is performed iteratively. For each coordinate transformation T of image g, the correlation measure C_T of images f and g is calculated using the formula

C_T = \sum_{x=0}^{p-1} \sum_{y=0}^{q-1} \sum_{z=0}^{r-1} f(x,y,z) \cdot g(T(x,y,z)),

where the parameters p, q, and r are the x-, y-, and z-dimensions of f, while f(x,y,z) denotes the intensity of the voxel with coordinates (x,y,z) in image f. In order to decrease the computational load, we create a multiresolution pyramid for each feature image, in which each level contains a lower resolution version of the image in the level below. In each level of the hierarchy, the matching transformation is determined by an exhaustive search of a small number of regions in parameter space. These regions are largest at the level of lowest resolution, and are reduced at higher resolutions. All promising extrema at each level are used as starting points around which the parameter space in the next level is scanned. At the ground level, the extremum with the best correlation value is taken to be the global extremum, and is used to determine the transformation matrix. If in one image a ridge is present in an area in which the other image is flat, then the high ridgeness of the ridge voxels will be multiplied by near-zero values, and will therefore hardly influence the value of C_T. This means that, as long as there are sufficient similar structures in the feature images, the matching algorithm performs well, even with some dissimilarities present, or if part of the patient's anatomy is present in only one of the scanned volumes.
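The correlation measure and the coarse-to-fine search described above can be illustrated with a small sketch. This is not the paper's implementation: it restricts T to integer 2D translations, builds the pyramid by 2x2 block averaging, and keeps only a single starting point per level, whereas the actual scheme searches a full transformation parameter space and retains all promising extrema. All function names here are our own.

```python
import numpy as np

def correlation(f, g):
    """C_T for one candidate alignment: the sum of voxelwise products."""
    return float((f * g).sum())

def shift(g, dy, dx):
    """Translate g by (dy, dx) with zero fill -- a translation-only stand-in for T."""
    out = np.zeros_like(g)
    h, w = g.shape
    out[max(dy, 0):min(h, h + dy), max(dx, 0):min(w, w + dx)] = \
        g[max(-dy, 0):min(h, h - dy), max(-dx, 0):min(w, w - dx)]
    return out

def downsample(img):
    """One pyramid level: 2x2 block averaging."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def coarse_to_fine_match(f, g, levels=2, radius=2):
    """Exhaustive search at the coarsest level; at each finer level the search
    region shrinks to a small neighborhood around the propagated estimate."""
    pyr = [(f, g)]
    for _ in range(levels - 1):
        pyr.append((downsample(pyr[-1][0]), downsample(pyr[-1][1])))
    best = (0, 0)
    for lvl in reversed(range(levels)):
        fl, gl = pyr[lvl]
        if lvl == levels - 1:
            cy, cx = 0, 0
            r = max(fl.shape) // 4          # wide search at the coarsest level
        else:
            cy, cx = best[0] * 2, best[1] * 2   # propagate estimate to finer level
            r = radius                       # narrow refinement region
        candidates = [(dy, dx) for dy in range(cy - r, cy + r + 1)
                               for dx in range(cx - r, cx + r + 1)]
        best = max(candidates, key=lambda t: correlation(fl, shift(gl, *t)))
    return best
```

For a feature image whose content has been translated by (4, 2), the search recovers the inverse translation (-4, -2); the zero fill in `shift` also mimics the robustness argument above, since non-overlapping ridges are multiplied by zeros.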
Results

High resolution CT and MRI data sets were obtained of two patients, who each carried three skin markers at the time of acquisition.

The MRI data set of the first patient contains 200 contiguous slices, obtained on a 1.5 Tesla Philips Gyroscan ACS. A transverse T1 weighted FFE sequence was used with one acquisition (TR/TE 30/9 msec). Slice thickness is … mm, and pixel size is approximately 1 mm. The CT data set acquired from the same patient was obtained on a Philips Tomoscan 350 at tube settings of 120 kV and … mA. The set contains 100 contiguous slices with slice thickness of 1.5 mm and pixel size of approximately 0.9 mm.

The MRI data set of the second patient contains 100 contiguous slices, obtained on a 0.5 Tesla Philips Gyroscan TS-II. A transverse T1 weighted FFE sequence was used with one acquisition (TR/TE 30/13 msec). Slice thickness is 1.5 mm, and pixel size is approximately 0.9 mm. The CT data set acquired from the same patient was obtained on a Philips Tomoscan LX, at tube settings of 120 kV and 125 mA. The set contains 128 contiguous slices with slice thickness of 1.5 mm, and pixel size of approximately 0.7 mm.
Figure 1: A matching pair of CT (a) and MRI (b) slices, and the same slices taken from the CT (c) and MRI (d) feature volumes obtained with the 3D "ridge" operator. The center curve of the cranium shows up dark in the CT feature image and light in the MR feature image.
Both 2D and 3D matching experiments were performed. For the 2D experiments, we used several pairs of corresponding image slices that had been matched using the skin markers [8]. One image of each pair was artificially rotated and translated. 2D ridge detection, followed by 2D correlation, resulted in a very good approximation of the inverse of the artificially applied transformation. The difference was typically up to 0.75 pixels for the translational parameters, and within 0.75 degrees for rotation [19]. To probe the robustness of our method in cases where there are dissimilarities between the scanned volumes, different parts of the image data were replaced by zeros. The matching accuracy did not change.
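The 2D ridge measure used in these experiments — the second order derivative of the blurred intensity in the direction v perpendicular to the local gradient, at a fixed Gaussian scale — can be sketched with Gaussian derivative filters. The Cartesian expression for L_vv below is the standard scale-space formula; the guard against vanishing gradients is our own assumption:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ridgeness_2d(image, sigma):
    """L_vv: second derivative of the Gaussian-blurred intensity in the
    direction v perpendicular to the local gradient, at scale sigma."""
    # Gaussian derivative convolutions; order=(d/dy, d/dx) per array axis
    Lx  = gaussian_filter(image, sigma, order=(0, 1))
    Ly  = gaussian_filter(image, sigma, order=(1, 0))
    Lxx = gaussian_filter(image, sigma, order=(0, 2))
    Lxy = gaussian_filter(image, sigma, order=(1, 1))
    Lyy = gaussian_filter(image, sigma, order=(2, 0))
    grad2 = Lx ** 2 + Ly ** 2
    # L_vv = (Ly^2 Lxx - 2 Lx Ly Lxy + Lx^2 Lyy) / (Lx^2 + Ly^2),
    # set to 0 where the gradient (and hence the direction v) vanishes
    num = Ly ** 2 * Lxx - 2.0 * Lx * Ly * Lxy + Lx ** 2 * Lyy
    return np.where(grad2 > 1e-12, num / np.maximum(grad2, 1e-12), 0.0)
```

On a radially symmetric bright structure, v is the tangential direction, so the measure becomes negative around the structure while staying zero on a constant image — the behavior the correlation step relies on.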
In our 3D experiments, a 3D generalization of our "ridge" detector was invoked on the original high resolution CT and MRI data sets. After 3D correlation of the feature images, we compared the results obtained with our feature correlation scheme to the results obtained by matching using the skin markers. The transformation determined by our feature-correlation scheme differed somewhat from the matrix calculated using the skin marker positions. By visual comparison of the results it became evident that, although both methods had produced good results, the feature correlation scheme had achieved the highest accuracy in both pairs of images [19]. 3D matching results of the first pair of image data are shown in figures 2, 3, and 4.
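One way to realize the 3D measure defined in the Methods section — the second derivative extremized over all directions perpendicular to the local gradient — is to restrict the Hessian of the blurred volume to the plane orthogonal to the gradient and take the smallest eigenvalue. The projector-based construction below is our sketch under that reading, not necessarily the paper's implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ridgeness_3d(vol, sigma):
    """Second derivative of the blurred volume, minimized over directions
    perpendicular to the local gradient (most negative on bright ridges)."""
    # Gradient: first-order Gaussian derivative along each axis
    grads = np.stack(
        [gaussian_filter(vol, sigma, order=tuple(int(i == a) for i in range(3)))
         for a in range(3)], axis=-1)
    # Hessian of the Gaussian-blurred volume (symmetric 3x3 per voxel)
    H = np.empty(vol.shape + (3, 3))
    for a in range(3):
        for b in range(3):
            order = [0, 0, 0]
            order[a] += 1
            order[b] += 1
            H[..., a, b] = gaussian_filter(vol, sigma, order=tuple(order))
    # Unit gradient direction, guarded where the gradient vanishes
    n = grads / np.maximum(np.linalg.norm(grads, axis=-1, keepdims=True), 1e-12)
    # Projector onto the plane perpendicular to the gradient: P = I - n n^T
    P = np.eye(3) - n[..., :, None] * n[..., None, :]
    # Eigenvalues of P H P are the second derivatives in the perpendicular
    # plane, plus one (approximately) zero eigenvalue along the gradient
    M = P @ H @ P
    return np.linalg.eigvalsh(M)[..., 0]
```

For a bright tube-like structure the gradient is radial, the perpendicular plane contains the tube axis and the tangential direction, and the smallest eigenvalue picks out the negative tangential second derivative; note that the spurious zero eigenvalue caps the returned value at zero in flat regions.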
Figure 2: Results obtained with the 3D feature-correlation-matching scheme for the first pair of image data. In the center frame a transversal MRI slice is shown, with skull contours obtained from the matching CT slices overlayed. The upper and lower frames show magnified parts of that slice.

Figure 3: A sagittally reformatted MRI slice of the first image pair is shown in the middle. Skull contours obtained from the matching CT slices are overlayed. The upper and lower frames show magnified parts of that slice.
Figure 4: A coronally reformatted MRI slice of the first image pair is shown in the middle. Skull contours obtained from the matching CT slices are overlayed. The upper and lower frames show magnified parts of that slice.

Acknowledgements

This research was supported in part by the Netherlands ministries of Education & Science and Economic Affairs through a SPIN grant, and by the industrial companies Philips Medical Systems, KEMA, and Shell Research. Partial support was obtained from the Netherlands Organization for Scientific Research (NWO) through a Fellowship.

We thank our colleagues dr. G. Wilts (Hospital "Medisch Spectrum" in Enschede), and drs. L. M. Ramos, P. F. G. M. van Waes, and F. W. Zonneveld (University Hospital Utrecht) for their efforts to supply the CT and MR images.

References

[1] P. A. Van den Elsen, E. J. D. Pol, and M. A. Viergever, "Medical image matching--a review with classification," IEEE Engng Med Biol, vol. 12, no. 1, pp. 26-39, 1993.
[2] T. M. Peters, J. A. Clark, A. Olivier, E. P. Marchand, G. Mawko, M. Dieumegarde, L. V. Muresan, and R. Ethier, "Integrated stereotaxic imaging with CT, MR imaging, and digital subtraction angiography," Radiol, vol. 161, no. 3, pp. 821-826, 1986.
[3] D. Vandermeulen, P. Suetens, J. Gybels, A. Oosterlinck, and G. Marchal, "A prototype medical workstation for computer assisted stereotactic neurosurgery," in Computer Assisted Radiology (H. U. Lemke, M. L. Rhodes, C. C. Jaffe, and R. Felix, eds.), pp. 386-389, Springer-Verlag, Berlin, Germany, 1990.
[4] J. Zhang, M. F. Levesque, C. L. Wilson, R. M. Harper, J. Engel, Jr., R. Lufkin, and E. J. Behnke, "Multimodality imaging of brain structures for stereotactic surgery," Radiol, vol. 175, no. 2, pp. 435-441, 1990.
[5] D. P. E. Kingsley, M. Bergström, and B.-M. Berggren, "A critical evaluation of two methods of head fixation," Neuroradiology, vol. 19, pp. 7-12, 1980.
[6] D. J. Hawkes, D. L. G. Hill, and E. C. M. L. Bracey, "Multi-modal data fusion to combine anatomical and physiological information in the head and heart," in Cardiovascular Nuclear Medicine and MRI (J. H. C. Reiber and E. E. van der Wall, eds.), pp. 113-130, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1992.
[7] D. J. Hawkes, D. L. G. Hill, E. D. Lehmann, G. P. Robinson, M. N. Maisey, and A. C. F. Colchester, "Preliminary work on the interpretation of SPECT images with the aid of registered MR images and an MR derived 3D neuro-anatomical atlas," in 3D Imaging in Medicine, NATO ASI Series, Vol. F60 (K. H. Höhne, H. Fuchs, and S. M. Pizer, eds.), pp. 241-251, Springer-Verlag, Berlin, Germany, 1990.
[8] P. A. Van den Elsen and M. A. Viergever, "Marker guided multimodality matching of the brain," Eur Radiol, in press.
[9] A. C. Evans, S. Marrett, L. Collins, and T. M. Peters, "Anatomical-functional correlative analysis of the human brain using three dimensional imaging systems," in Proc SPIE Vol 1092, Med Im III: Image Processing (R. H. Schneider, S. J. Dwyer III, and R. Gilbert Jost, eds.), pp. 264-274, SPIE Press, Bellingham, WA, 1989.
[10] D. L. G. Hill, D. J. Hawkes, J. E. Crossman, M. J. Gleeson, T. C. S. Cox, E. C. M. L. Bracey, A. J. Strong, and P. Graves, "Registration of MR and CT images for skull base surgery using point-like anatomical features," Brit J Radiol, vol. 64, no. 767, pp. 1030-1035, 1991.
[11] G. Q. Maguire Jr., M. Noz, H. Rusinek, J. Jaeger, E. L. Kramer, J. J. Sanger, and G. Smith, "Graphics applied to medical image registration," IEEE Comp Graph Appl, vol. 11, no. 2, pp. 20-28, 1991.
[12] C. A. Pelizzari, G. T. Y. Chen, D. R. Spelbring, R. R. Weichselbaum, and C.-T. Chen, "Accurate three-dimensional registration of CT, PET, and/or MR images of the brain," J Comput Assist Tomogr, vol. 13, no. 1, pp. 20-26, 1989.
[13] H. Jiang, K. Holton, and R. Robb, "Image registration of multimodality 3-D medical images by chamfer matching," in Proc SPIE Vol 1660 Biomedical Image Processing and Three-Dimensional Microscopy, pp. 356-366, SPIE Press, Bellingham, WA, 1992.
[14] D. L. Collins, T. M. Peters, W. Dai, and A. C. Evans, "Model based segmentation of individual brain structures from MRI data," in Proc SPIE Vol 1808 Visualization in Biomedical Computing (R. A. Robb, ed.), pp. 10-23, SPIE Press, Bellingham, WA, 1992.
[15] P. A. Van den Elsen, J. B. A. Maintz, and M. A. Viergever, "Geometry driven multimodality image matching," Brain Topogr, vol. 5, no. 2, pp. 153-158, 1992.
[16] A. Guéziec and N. Ayache, "Smoothing and matching of 3-D space curves," in Proc SPIE Vol 1808 Visualization in Biomedical Computing (R. A. Robb, ed.), pp. 259-273, SPIE Press, Bellingham, WA, 1992.
[17] L. M. J. Florack, B. M. ter Haar Romeny, J. J. Koenderink, and M. A. Viergever, "Scale and the differential structure of images," Image and Vision Computing, vol. 10, no. 6, pp. 376-388, 1992.
[18] P. A. Van den Elsen, J. B. A. Maintz, E. J. D. Pol, and M. A. Viergever, "Image fusion using geometrical features," in Proc SPIE Vol 1808 Visualization in Biomedical Computing (R. A. Robb, ed.), pp. 172-186, SPIE Press, Bellingham, WA, 1992.
[19] P. A. Van den Elsen, Multimodality matching of brain images. PhD thesis, Utrecht University, the Netherlands, June 1993.