
An Investigation into the Design of an Image
Reconstruction Algorithm for Continuous Wave
Tomosynthesis
by
Pierre-Guy Felix Douyon
B.S., Massachusetts Institute of Technology (2012)
Submitted to the Department of Electrical Engineering and Computer
Science
in Partial Fulfillment of the Requirements for the Degree of Master of
Engineering in Electrical Engineering and Computer Science
at the Massachusetts Institute of Technology
June, 2013
© 2013 Massachusetts Institute of Technology.
All rights reserved.
Author:
Department of Electrical Engineering and Computer Science
May 28, 2013
Certified by:
Cardinal Warde, Professor, Thesis Supervisor
May 28, 2013
Certified by:
Louis Poulo, Analogic Fellow, Thesis Co-Supervisor
May 28, 2013
Accepted by:
Dennis M. Freeman, Chairman, Master of Engineering Thesis Committee
An Investigation into the Design of an Image Reconstruction
Algorithm for Continuous Wave Tomosynthesis
by
Pierre-Guy Felix Douyon
Submitted to the Department of Electrical Engineering and Computer Science
May 28, 2013
In Partial Fulfillment of the Requirements for the Degree of
Master of Engineering in Electrical Engineering and Computer Science
Abstract
Continuous wave (CW) tomosynthesis provides many theoretical advantages over traditional breast imaging techniques, like mammography. These theoretical advantages
include improved spatial resolution, better information weighting, and the ability
to resolve structures hidden by tissue overlap. However, unlike mammography, tomosynthesis is a three-dimensional imaging modality and it will require a complex
reconstruction algorithm to process its projection data and recreate the object planes.
This paper details a reconstruction algorithm proposed for continuous wave tomosynthesis which is based on the filtered backprojection reconstruction algorithm used in
computed tomography. The algorithm for tomosynthesis requires modifications to the
filtered backprojection algorithm in order to account for the incomplete data generated by a tomosynthesis scan. In order to demonstrate the potential viability of a CW tomosynthesis reconstruction algorithm, the modified reconstruction
algorithm is applied to several simulated phantom images and analyzed for reconstructed image quality.
Acknowledgements
I would like to express sincere appreciation for all of the help which has been given
to me throughout my investigation.
Firstly, I would like to thank all of the employees at Analogic Corporation who
took part in my investigation and guided me through the project. In particular, I
would like to thank Louis Poulo and Guy Besson who mentored me and passed down
to me more knowledge, intuition, and insight than I had found during my four years
at MIT. I am extremely grateful for their incredible guidance.
Secondly, I would like to thank my family and friends for their continual support
during my time at MIT. Even when I was at my most overwhelmed, they constantly
strengthened me and pushed me to keep going.
Finally, I would like to thank the staff at MIT for all their assistance during this investigation, especially Professor Cardinal Warde, for serving as the supervisor of this thesis and providing excellent guidance in the direction of both the project and the writing of this document.
Contents

1 Introduction
  1.1 Overview of Mammography
  1.2 Shortcomings of Mammography
  1.3 An Alternative Modality
2 Overview of Pulsed Wave Tomosynthesis
  2.1 Shortcomings of PW Tomosynthesis
    2.1.1 Z-Axis Spatial Resolution
    2.1.2 In-plane Spatial Resolution
    2.1.3 Disproportionate Weighting
3 Overview of Continuous Wave Tomosynthesis
  3.1 Advantages of CW Tomosynthesis
    3.1.1 Z-Axis Spatial Resolution
    3.1.2 In-plane Spatial Resolution
    3.1.3 Disproportionate Weighting
4 Design Approach
5 Filtered Backprojection
  5.1 Fourier Slice Theorem
  5.2 Filtered Backprojection Theory
  5.3 Parallel Beam Implementation
  5.4 Fan Beam Implementation
6 Obstacles to FBP Implementation
  6.1 Spatial Domain Ringing
  6.2 Parallel Beam Approximation Error
  6.3 Erroneous Normalization
  6.4 Non-constant Slice Thickness
7 Tomosynthetic Filter Design
  7.1 Identifying High Frequency Content
  7.2 Parker Weights
  7.3 Corrections to Normalization
8 Results and Discussion
  8.1 Conclusion
A Reconstruction Program Code
  A.1 Main.c
  A.2 Filters.c
  A.3 SpatialWeight.c
  A.4 SpectralFilter.c
  A.5 Backprojection.c
List of Figures

2-1 Diagram of PW tomosynthesis geometry
5-1 Diagram of parallel beam geometry
5-2 Diagram of fan beam geometry
6-1 Comparison of 360° scan and 100° scan with no filtering
6-2 Diagram of the parallel beam approximation data loss
8-1 Comparison of 360° scan and 100° scan with tomosynthetic filtering
8-2 Comparison of 100° scans with varying shelf sizes
Chapter 1
Introduction
Breast cancer is one of the most common forms of cancer worldwide and causes
hundreds of thousands of deaths each year. There exist many procedures that can
aid in the early detection and subsequent treatment of breast cancer, the most notable
among these procedures being the mammography screening. Patients have a much
greater chance of survival if these cancers can be detected early on and with a high
degree of accuracy. While mammography has so far been suitable for the task of early
detection, the process, unfortunately, has many shortcomings which hinder its ability
to effectively detect cancers.
1.1
Overview of Mammography
Mammography, the principal technique for breast imaging, is similar to other medical
imaging modalities in that it uses a dose of ionizing x-ray radiation to penetrate
outer layers of tissue and image the inside of the breast. However, unlike computed
tomography and other imaging procedures, mammography does not perform a three-dimensional reconstruction. Instead, a mammography screening only consists of two
two-dimensional projections taken from two different angles. These projections are
then analyzed in an attempt to locate cancers and other abnormalities within the
breast. Mammography is advantageous because the procedure can be performed in a
very short amount of time and is able to produce high resolution images using a low
radiation dosage.
1.2
Shortcomings of Mammography
However, the diagnostic quality of images produced by mammography is limited by
the two-dimensional nature of the projections. The reduction of a three-dimensional
object down to a two-dimensional image means that a significant loss of information
occurs in the process. The reason for this loss is that all of the information encoded in
the axis normal to the image plane (the z-axis) is superimposed onto one plane. The
overlapping of all of the object planes on one another makes it very easy for an object
of interest (such as microcalcification clusters) to be obscured by the dense tissues
surrounding it [4, 8].
Superimposition makes it much more difficult to accurately
identify abnormalities in the breast using mammography projections. If a system
could be designed which would allow for three-dimensional breast imaging, eliminating
superimposition, it would greatly enhance the diagnostic quality of the reconstructed
image and increase the radiologist's ability to accurately diagnose the patient [4,8].
1.3
An Alternative Modality
Given these shortcomings of mammography, a scanning procedure called tomosynthesis has been proposed as an alternative imaging modality on the basis that it
provides a more complete image of the breast, making it easier to identify tumors.
However, current tomosynthesis scanning geometries also have many shortcomings,
such as poor image resolution, that may cancel out the many advantages the procedure has over conventional mammography. To resolve these shortcomings, a modified
version of tomosynthesis using photon counting technology, called continuous wave
(CW) tomosynthesis, has been under investigation to determine the feasibility and
practicality of its use in medical imaging applications. Improved in-plane spatial resolution, z-axis spatial resolution, and information weighting are among the theoretical
improvements expected from CW tomosynthesis.
This thesis will analyze the benefits of this modified version of tomosynthesis and
investigate a suitable reconstruction algorithm to be used in the new procedure. The
ensuing sections will begin with an overview of pulsed wave (PW) tomosynthesis
and its shortcomings in the field of breast imaging followed by an overview of CW
tomosynthesis and its advantages in the field of breast imaging.
Following these
sections will be a derivation and explanation of the base reconstruction algorithm,
known as filtered backprojection, from which the CW reconstruction algorithm will be
derived. Then, this thesis will dive into specific changes that need to be made to the
filtered backprojection algorithm due to complications caused by the tomosynthesis
architecture.
Finally, the document will conclude with an analysis and discussion
of the differences in reconstructed image quality obtained by utilizing the modified
filtered backprojection algorithm.
Chapter 2
Overview of Pulsed Wave
Tomosynthesis
Tomosynthesis, on the other hand, is a medical imaging modality that scans an object
over a small angle and uses the information collected to perform a three-dimensional
reconstruction. A typical tomosynthesis system, much like a mammography system,
consists of a source which emits pulses of x-rays and a charge integrating (CI) detector which absorbs x-rays coming from the source [4,8]. More specifically, the CI
detector uses the incident photon energy to record attenuation information of the
breast being imaged. Unlike mammography, however, in these pulsed wave (PW)
tomosynthesis systems the x-ray source traverses a small arc while taking a series of
low-dose projections at different angles to the breast. Data from these projections
is then fed to a reconstruction algorithm which computes images of breast planes
along the z-axis. This new ability to resolve objects along the z-axis and image multiple object planes helps to significantly reduce the superimposition effects mentioned
above and makes tomosynthesis a promising candidate as a replacement modality for
mammography [8].
Figure 2-1: Diagram of the PW tomosynthesis geometry as implemented by Hologic's Selenia Dimensions 2D/3D system [8]. The source at the top will send out x-ray pulses as it moves in an arc over the object.
2.1
Shortcomings of PW Tomosynthesis
In order for tomosynthesis to be able to serve as a suitable alternative to mammography there are several crucial requirements that must be met. Firstly, that the overall
dosage and duration of the tomosynthesis scan is comparable to the dosage and duration of mammography. And secondly, that there is an overall increase in spatial
resolution along the z-axis while maintaining a spatial resolution that is comparable
to mammography in the imaging plane. Unfortunately, these requirements inherently
conflict with each other in a tomosynthesis scanning geometry and it is very difficult
to meet all of the criteria using the current pulsed source/charge integrating detector
system.
2.1.1
Z-Axis Spatial Resolution
Producing an image with a high spatial resolution along the z-axis is accomplished by
increasing the total number of projections taken and by increasing the total scan angle.
Increasing the number of projections is equivalent to increasing the spatial sampling
frequency of the detector [5]. It is well known from the Nyquist theorem, that the
highest spatial frequency that will be resolvable in the image is equal to half of the
spatial sampling frequency. So by increasing the number of projections it is possible
to detect even higher spatial frequencies and resolve even smaller features. It is also
possible to improve z-resolution by increasing the angle of the imaging arc. A wider
arc angle creates better noise immunity for the lower z-spatial frequencies because
it allows for better separation between the planes for a given number of projections
[5].
But by increasing the arc angle for a given number of projections, there is an
accompanying increase in the angle between contiguous projections and, as a result, an
increase in the number of reconstruction artifacts [8]. It is important to constrain the
angular spacing between two contiguous projections for PW tomosynthesis systems to
avoid unacceptable artifacts in the reconstructed image [5]. Therefore, to maximize
resolution of the spectrum of spatial frequencies, both high and low, along the z-axis
it is crucial to maximize both the number of projections and the total scan angle.
Unfortunately, the number of projections of any given scan is limited by two factors: the dosage limit stated earlier and the electronic noise of the detector [4,8]. Since
the overall radiation dose of the scan must be apportioned among all projections, increasing the number of projections will necessarily decrease the dose per projection.
This results in less signal content per projection and a decrease in the signal-to-noise
ratio (SNR). Readout noise of the detector, which is inherent in the charge integrating architecture, further complicates this matter because it affects all projections
independent of the projection dose [4].
Increasing the number of projections only
increases the z-resolution to a point before the diagnostic quality deteriorates due
to poor SNR. It should be noted that it is possible to increase the signal strength
(and thus the SNR) by increasing the average x-ray photon energy. However, this
would-be solution simply masks the current conflict with a new one as an increase in
x-ray energy is accompanied by a decrease in contrast, which again causes a deterioration in diagnostic quality of the image. These trade-offs restrict the total number
of projections that current PW tomosynthesis apparatuses are able to acquire and in
turn restricts the total scan angle which is dependent on the angular separation of
the projections.
2.1.2
In-plane Spatial Resolution
In-plane spatial resolution is determined by the resolution of the focal spot and is
extremely difficult to optimize due to the scanning geometry of tomosynthesis. The
ability to resolve objects along the z-axis is the inherent advantage that tomosynthesis has over mammography, but this advantage will quickly disappear if tomosynthesis proves unable to effectively resolve in-plane micro calcification clusters. For
acceptable detection of these clusters, the in-plane resolution must be at least 70-100
microns [8]. This is feasible in mammography but the geometry of PW tomosynthesis
scans complicates the matter. The main problem is that the tomosynthesis scanner
must, while traversing the arc, send out pulses of radiation to be integrated by the
detector. Because the source is moving while transmitting these x-ray pulses, tangential smearing of the focal spot occurs [8]. It would be possible to decrease tangential
smearing by sending out pulses of a shorter duration. However, if the duration is reduced but the dose is held constant, this means that the x-ray source power must be
increased which results in an increase in the focal spot size. Therefore, any attempts
at reducing the tangential smearing of the focal spot will be offset by an increased
smearing along the perpendicular axis of the focal spot. Due to the PW scanning
modality, some in-plane spatial resolution is necessarily lost in both dimensions of
the plane due to both the tangential and perpendicular smearing of the focal spot.
2.1.3
Disproportionate Weighting
Finally, there is another limitation inherent to the charge integrating detectors themselves in that they naturally place a higher weight on the photons which carry less
useful information [3,4]. The reasoning behind this phenomenon is that charge integrating detectors work by storing the energy imparted by an incident photon; as a result, the energy stored is directly proportional to the x-ray photon energy. Unfortunately,
this is in direct contrast with the fact that the x-ray photons arriving at the detector
contain information about the object that is inversely proportional to their energy [3].
The reason that photons of higher energy carry less information is because, for most
materials, high energy photons see a much lower attenuation value and are extremely
likely to penetrate the material as compared to low energy photons. Since high energy photons penetrate most materials so easily, their arrival at the detector is a poor
indicator of whether or not that photon traveled through a dense material, such as
bone, or a less dense material, such as water. However, since these high energy photons deposit more energy in the detector their contribution to the measured signal is
greater than that of the low energy photons, even though they actually contribute less
information. It is thus a natural property of charge integrating detectors that the most important information about the object is somewhat overshadowed by the less important information carried by the high energy photons [4]. Because the
reconstruction process receives less useful information, the reconstruction produced
will necessarily contain less useful information as a result.
Chapter 3
Overview of Continuous Wave
Tomosynthesis
The above conflicts may potentially all be mitigated to some degree by replacing
the traditional tomosynthesis system of a pulsed source/charge integrating detector
with a new tomosynthesis system consisting of a continuous source/photon counting
detector. Continuous wave (CW) tomosynthesis would use a photon counting detector
to count and record photon arrival events within a given time window (known as a
view time) while the source emits a continuous stream of x-rays as it traverses the arc.
Measurements collected over each individual view time would then constitute a single
projection in this new tomosynthesis system. With these changes CW tomosynthesis
would not only retain the advantages that PW tomosynthesis has over mammography,
but it has the potential to also improve upon the shortcomings inherent to the PW
system.
3.1
Advantages of CW Tomosynthesis
In essence, the difference between PW and CW tomosynthesis is that instead of
sampling the x-ray signal at the source and integrating photon energy, the new CW
system will sample the x-ray signal at the detector and sum photon events.
This
significant change alleviates many of the difficulties which hindered the full realization
of tomosynthesis' potential. Specifically, CW tomosynthesis should see a theoretical
increase in both z-axis and in-plane spatial resolution in addition to improving upon
the disproportionate weighting which is fundamental to charge integrating detectors.
There now exists the very realistic possibility that a tomosynthesis system utilizing a
continuous source/photon counting detector could develop into a superior alternative
to both traditional mammography and PW tomosynthesis.
3.1.1
Z-Axis Spatial Resolution
As stated above, increasing the number of projections and the total scan angle are
the two methods that can be used to increase z-axis resolution in a tomosynthesis
system. However, it has been shown that, in PW tomosynthesis, there is a trade-off
between increasing the number of projections and decreasing the SNR. It is important
to recall that there were two reasons why the SNR would decrease with an increasing
number of projections.
The first was due to the decreasing dose, and thus signal
content, apportioned to each projection. This limitation is shared by both the pulsed
source and continuous source tomosynthesis systems and will still result in a decrease
in SNR. The second reason for decreasing SNR, however, was due to the electronic
readout noise inherent in charge integrating detectors. Photon counting detectors do
not share this limitation, at least not to a first order approximation [4].
However,
there is an increase in photon noise as the duration of the view time decreases because
there are fewer photons that can be detected during that window. Fortunately, photon
counters can record events at relatively high rates (in the megahertz range) and so
it is possible to record a large number of photons in a couple of milliseconds. This
allows for a mitigation of the photon noise while also maintaining short view times
relative to PW tomosynthesis (for example, Hologic's breast tomosynthesis system
collects 15 views over a 4 second scan [8]). Rather than have the noise be limited by
total x-ray dose, as in the PW case, in CW tomosynthesis the noise is limited by the
duration of the view time [4].
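As a rough illustration (the numbers here are chosen only for concreteness and are not taken from a particular system): a detector element counting at a rate of \(10^6\) events per second over a 2 ms view records on the order of \(N \approx 2000\) photons, giving a relative Poisson fluctuation of \(1/\sqrt{N} \approx 2\%\) for that view.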
This dependency of the photon noise on view time, coupled with high photon
counting rates, means that for a given number of projections CW tomosynthesis
should exhibit negligible photon noise, as compared to the electronic noise of the
charge integrating detector, yielding a significantly higher SNR than PW tomosynthesis [4]. This means that CW tomosynthesis can afford to obtain many more projections than pulsed tomosynthesis before the diagnostic quality of the image is compromised. Additionally, the increase in the number of projections allows for an increase
in the angle of the imaging arc, as noted earlier, which results in better noise suppression of lower z-spatial frequencies.
As a result, CW tomosynthesis can obtain
better spatial resolution than PW tomosynthesis in the z-axis due to the increased
number of projections and the increased scan angle.
It would seem then that the
use of a photon counting detector can potentially afford the opportunity to take full
advantage of the three-dimensional capabilities of tomosynthesis.
3.1.2
In-plane Spatial Resolution
With regards to the in-plane spatial resolution, the continuous system is able to
reduce the effects of tangential smearing and focal spot elongation by sampling at the
detector rather than pulsing the source. Tangential smearing is merely a function of
the amount of focal spot movement in a given projection. In a pulsed tomosynthesis
system, this is accomplished by reducing the pulse duration (smaller sampling time).
For a photon counting detector, on the other hand, this reduction in sampling time
can be accomplished without increasing the source power. By reducing the amount
of time the detector uses to sample the signal, rather than the source, it is possible to
decrease tangential smearing without elongating the focal spot. In fact, because this
new modality radiates x-rays continuously, with an overall dosage that is comparable
to pulsed tomosynthesis and mammography, it is possible to use a source with less
instantaneous power than the other two modalities resulting in a reduced focal spot
size and an in-plane resolution comparable to that of mammography.
3.1.3
Disproportionate Weighting
The final limitation discussed above concerned the problem of disproportionate weighting in charge integrating detectors. Fortunately, using a photon counting detector potentially alleviates this complication because a photon counter does not take energy
into account when recording a photon event (provided the photons are all above the
minimum detectable energy). Instead, because the photon counting detector records
photon arrival events, each photon is given the same weight, regardless of its energy [4].
This inherent unity weighting can potentially be a significant improvement over the
poor weighting scheme built into the charge integrating detector. Furthermore, given
this increase of useful information, it may even be possible to lower the dosage of the
scan without sacrificing image quality [3]. This is because each photon transmitted
contributes more information, so fewer of them may be needed.
A reduced dosage
would allow the system to operate at less power still and reduce the size of the focal
spot while also being beneficial to the patient's health [3].
Chapter 4
Design Approach
While a tomosynthesis system utilizing a continuous source/photon counting detector
seems to hold a lot of promise as an improved breast imaging modality, there is still
much investigation that needs to be done to completely specify such a system. A
critical component of the investigation will involve designing a suitable reconstruction
algorithm for CW tomosynthesis. With an architecture as unique as this one, there
are sure to be numerous challenges involved in the design and implementation of this
algorithm. It will be important then, to follow in the footsteps of the work done on
other three-dimensional reconstruction algorithms in PW tomosynthesis as well as
computed tomography and to use these algorithms as a guiding light and a basis for
the development of a reconstruction algorithm for continuous wave tomosynthesis.
Therefore, the purpose of this investigation, conducted in close collaboration with Analogic Corporation, will be to design and implement an image reconstruction algorithm for continuous wave tomosynthesis by researching, evaluating, and modifying current reconstruction algorithms in use for PW tomosynthesis and computed tomography.
Chapter 5
Filtered Backprojection
If CW tomosynthesis is to be a suitable replacement for mammography in diagnostic
screenings then its reconstruction algorithm will need to be designed so that it is fast
enough to allow for high patient throughputs [8].
With this requirement in mind,
there are two main classes of reconstruction algorithms from which the foundation of
the CW reconstruction algorithm can be chosen: filtered backprojection and iterative. Filtered backprojection algorithms rely on the Fourier Slice Theorem to restore
the imaged object using a high pass filter applied to the Fourier transform of the
projection data [9]. Iterative algorithms, on the other hand, recursively reconstruct
the imaged object until the object model converges to a solution which optimizes
some mathematical criteria [9]. It is important to note that there is another class of
reconstruction algorithms, simple backprojection, that was not chosen for the purposes of this investigation due to the strong inter-plane artifacts characteristic of this
reconstruction algorithm [9].
Filtered backprojection (FBP) was chosen as the base algorithm for the purposes
of this investigation because:
1. The FBP class requires much less computational time than the iterative/algebraic
class [1, 9]
2. It is possible to obtain a reconstruction of comparable image quality to that of
the iterative/algebraic class through appropriate filter design [1,9]
3. FBP is the class of reconstruction algorithms used by commercial PW tomosynthesis devices like the Hologic Selenia Dimensions 3D [8]
In addition, this investigation will base its FBP algorithm on one commonly used in CT reconstructions, due to the fact that both tomosynthesis and CT are three-dimensional imaging modalities and that a tomosynthesis scan can be approximated
as a partial CT scan.
In order to understand how to design and optimize a reconstruction algorithm for
CW tomosynthesis, it was first necessary to fully understand both the theory behind
FBP and its implementation. As a result, this investigation begins with an in-depth
look at the FBP algorithm which is described by Kak and Slaney [7] and detailed in
the following sections. It is simpler to first analyze the FBP algorithm assuming the
projection data was obtained in a parallel beam geometry (as opposed to the fan or
cone beam geometries that are typically used in practice) and to then derive the fan
beam solution from the parallel beam solution.
5.1
Fourier Slice Theorem
The Fourier Slice Theorem, which is the backbone of filtered backprojection theory,
states that:
The Fourier transform of a parallel projection of an image f(x, y) taken at angle \theta gives a slice of the two-dimensional transform [of the object], F(u, v), subtending an angle \theta with the u-axis [7].
From [7] the derivation for this theorem is shown by first defining a new coordinate system (t, s) where
\[
\begin{pmatrix} t \\ s \end{pmatrix} =
\begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
\tag{5.1}
\]
In this coordinate system, a projection, P_\theta(t), and its Fourier transform, S_\theta(\nu), of some object function f taken at angle \theta can be represented as
\[ P_\theta(t) = \int_{-\infty}^{\infty} f(t, s)\, ds \tag{5.2} \]
\[ S_\theta(\nu) = \int_{-\infty}^{\infty} P_\theta(t)\, e^{-j2\pi\nu t}\, dt \tag{5.3} \]
By substituting the definition of the projection integral, P_\theta(t), and changing our coordinate system according to (5.1), the equation for the Fourier transform of a projection becomes
\[ S_\theta(\nu) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(t, s)\, e^{-j2\pi\nu t}\, ds\, dt \tag{5.4} \]
\[ S_\theta(\nu) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x, y)\, e^{-j2\pi\nu(x\cos\theta + y\sin\theta)}\, dx\, dy \tag{5.5} \]
The second equation is equivalent to the definition of the spatial Fourier transform, F(u, v), of f(x, y) along the line denoted by the coordinates (u = \nu\cos\theta, v = \nu\sin\theta) [7].
Figure 5-1 from [7] shows a detailed representation of the parallel beam scanning
geometry and its relevant parameters.
This theorem suggests that every projection obtained will completely determine
all of the values along a single radial line in F(u, v). Therefore, if an infinite number of
projections are obtained from \theta = 0 to \theta = 2\pi, then it should be possible to completely
determine F(u,v) from which it would be possible to completely reconstruct the
object function f(x, y).
Unfortunately, in practice, reconstruction algorithms must work with a finite data
set and a finite number of projections which means that F(u, v) is known only along
a correspondingly finite number of radial lines. As a result, it is necessary for these
algorithms to interpolate values from points on one radial line to another. Typical
algorithms will use some kind of nearest neighbor or linear interpolation method in
order to determine the intermediate values. However, the interpolation error gets
larger for values of F(u, v) that are farther from the center as the distance between
radial lines increases. This causes large errors when reconstructing the high frequency
29
projection
P,
(t)
Y
7
f(x, y)
t.=xcos9+y sinS
t, -xcosS+y
sin0
Figure 5-1: Diagram of the parallel beam geometry showing the relations between
the various parameters [7].
components of an image which then manifests itself as an overall increase in noise in
the reconstruction. It is apparent then, that the Fourier Slice Theorem will not be
sufficient for performing accurate reconstructions [7].
5.2
Filtered Backprojection Theory
Ideally, when attempting to reconstruct the imaged object f(x, y) from a set of finite projections taken at angles \theta, each of these projections would contain all of the information in a wedge-shaped portion of the transform F(u, v). By obtaining wedge-shaped sections of F(u, v), as opposed to radial lines of F(u, v), it would be possible to sum a finite number of wedges from \theta = 0 to \theta = 2\pi and recover the complete transform F(u, v) [7].
In FBP, the attempt to reconcile this disparity constitutes
the filtering process where each projection is weighted in the frequency domain with
the goal of approximating a pie-shaped wedge of F(u,v). These filtered projection
transform "wedges" can then be summed together and inverse Fourier transformed to
30
f (x, y),
completely recover the object
this calculation constitutes the backprojection
step of FBP.
Kak and Slaney propose that the simplest way of performing the approximation would be to multiply the transform of each projection, S_\theta(\nu), by the width of the wedge at that frequency in order to approximate the "mass" of a wedge [7]. At a given frequency \nu a wedge will have width 2\pi|\nu|/K, where K is the number of projections obtained over 180° [7].
In fact, the factor of \nu in the wedge's width is actually the Jacobian obtained when changing from a rectangular coordinate system to a polar coordinate system. This can be shown by starting with the formula for the inverse Fourier transform of F(u, v)
\[ f(x, y) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} F(u, v)\, e^{j2\pi(ux + vy)}\, du\, dv \tag{5.6} \]
By transforming to polar coordinates (u = \nu\cos\theta, v = \nu\sin\theta, and du\, dv = \nu\, d\nu\, d\theta) the equation becomes
\[ f(x, y) = \int_{0}^{2\pi}\int_{0}^{\infty} F(\nu, \theta)\, e^{j2\pi\nu(x\cos\theta + y\sin\theta)}\, \nu\, d\nu\, d\theta \tag{5.7} \]
It is possible to simplify the integral by splitting it into two parts: one from \theta = 0 to \theta = \pi and another from \theta = \pi to \theta = 2\pi
\[ f(x, y) = \int_{0}^{\pi}\int_{0}^{\infty} F(\nu, \theta)\, e^{j2\pi\nu(x\cos\theta + y\sin\theta)}\, \nu\, d\nu\, d\theta \tag{5.8} \]
\[ \qquad\qquad + \int_{0}^{\pi}\int_{0}^{\infty} F(\nu, \theta + \pi)\, e^{j2\pi\nu(x\cos(\theta + \pi) + y\sin(\theta + \pi))}\, \nu\, d\nu\, d\theta \tag{5.9} \]
By using the property F(\nu, \theta + \pi) = F(-\nu, \theta), the integral can be expressed in terms of S_\theta(\nu)
\[ f(x, y) = \int_{0}^{\pi}\int_{-\infty}^{\infty} F(\nu, \theta)\, |\nu|\, e^{j2\pi\nu t}\, d\nu\, d\theta \tag{5.10} \]
\[ \phantom{f(x, y)} = \int_{0}^{\pi}\int_{-\infty}^{\infty} S_\theta(\nu)\, |\nu|\, e^{j2\pi\nu t}\, d\nu\, d\theta \tag{5.11} \]
where t = x\cos\theta + y\sin\theta as in (5.1). This definition may also be expressed in terms of the filtered projection Q_\theta(t), where
\[ Q_\theta(t) = \int_{-\infty}^{\infty} S_\theta(\nu)\, |\nu|\, e^{j2\pi\nu t}\, d\nu \tag{5.12} \]
\[ f(x, y) = \int_{0}^{\pi} Q_\theta(x\cos\theta + y\sin\theta)\, d\theta \tag{5.13} \]
In practice, apodizing functions, such as Hamming windows, are also typically used
in the filtering step to reduce noise and improve reconstructed image quality [7].
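One standard example of such an apodized filter (a common choice in CT reconstruction, given here only as an illustration and not taken from this thesis) multiplies the ramp by a Hamming window, with W denoting the cutoff frequency of the filter:
\[ H(\nu) = |\nu|\left(0.54 + 0.46\cos\frac{\pi\nu}{W}\right), \qquad |\nu| < W, \]
and H(\nu) = 0 otherwise; the window tapers the response at the highest frequencies, where the ramp would otherwise amplify noise the most.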
5.3
Parallel Beam Implementation
As of right now, the theory for FBP has only been developed in the continuous time
and frequency domains and any practical implementation of the algorithm must take
place on a finite and discretized data set. Unfortunately, there are two very serious obstacles that would make a direct translation from the continuous definition to a discrete one inadequate.
1. The first obstacle occurs when the aperiodic convolution of (5.13) must be discretized and implemented as a periodic convolution which will cause interperiod
interference artifacts [7].
2. The second obstacle arises because the discretization of time and frequency
results in the lumping of several frequencies into the zero index bin (k = 0).
In the continuous definition, only the frequency v = 0 is zeroed out but in
the discrete definition this frequency and all others lumped into the zero index
bin are zeroed out resulting in a significant loss of intensity information in the
reconstructed image [7].
Fortunately, there are techniques to reduce or even eliminate these artifacts introduced by the discrete implementation.
Artifact contributions due to interperiod interference can be completely eliminated by sufficiently zero-padding the data in the spatial domain, but zero-padding can never completely eliminate the artifacts caused by the zeroing out of the zero index frequencies [7]. To eliminate those artifacts, it is necessary to use a slightly different derivation of the
high pass filter proposed earlier. Assuming that the projection data are sampled with a sampling period of \tau, then the projections do not have any frequency content above the cutoff frequency W = 1/(2\tau). As a result, (5.13) becomes
\[ Q_\theta(t) = \int_{-W}^{W} S_\theta(\nu)\, H(\nu)\, e^{j2\pi\nu t}\, d\nu \tag{5.14} \]
where
\[ H(\nu) = \begin{cases} |\nu|, & |\nu| < W \\ 0, & \text{otherwise} \end{cases} \tag{5.15} \]
Since the projection data values are only known at integer multiples of \tau, the inverse Fourier transform of the filter function, H(\nu), only needs to be specified at those values
\[ h(n\tau) = \begin{cases} \dfrac{1}{4\tau^2}, & n = 0 \\[1ex] 0, & n \text{ even} \\[1ex] -\dfrac{1}{n^2\pi^2\tau^2}, & n \text{ odd} \end{cases} \tag{5.16} \]
The filtered projection, Q_\theta(t), can then be obtained by convolving the projection data with the impulse response of the filter function (equivalent to multiplying the Fourier transform of the projection data by the filter function). The convolution is derived by discretizing (5.14)
\[ Q_\theta(n\tau) = \tau \sum_{k=-\infty}^{\infty} h(n\tau - k\tau)\, P_\theta(k\tau) \tag{5.17} \]
However, as mentioned above, in practice each projection can only have a finite number of values and will have no content outside of some range k = 0, 1, \ldots, N-1
\[ Q_\theta(n\tau) = \tau \sum_{k=0}^{N-1} h(n\tau - k\tau)\, P_\theta(k\tau), \qquad n = 0, 1, \ldots, N-1 \tag{5.18} \]
\[ Q_\theta(n\tau) = \tau \sum_{k=-(N-1)}^{N-1} h(k\tau)\, P_\theta(n\tau - k\tau), \qquad n = 0, 1, \ldots, N-1 \tag{5.19} \]
Performing the windowing as a convolution in the spatial domain, as opposed to
a multiplication in the frequency domain, enables the retention of frequencies in the
zero index bin. The reason for this is that convolving the projection data with an
impulse response function of finite extent (as dictated by (5.19)) is convolving the
projection data with a windowed impulse response function. Windowing h(n\tau) in the spatial domain causes a convolution of the discrete transfer function H(m) with a sinc function, which adds a non-zero amplitude to the filter at the index m = 0 [7].
These two techniques, zero-padding the data and windowing the impulse response of the filter function, allow for a computer implementation of the FBP algorithm using a finite, discretized data set. The procedure involves convolving the measured projection data with the impulse response of the filter function (5.16) to obtain the filtered projection Q_\theta(n\tau), and then performing the backprojection step by summing the filtered projections over all projection angles \theta (5.13); a short sketch of this procedure in C is given below. The next section will detail the derivation of the FBP equations for a fan beam geometry from the FBP equations for a parallel beam geometry.
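The sketch below is illustrative only and is not the reconstruction program of Appendix A; the sampling period tau, the array layout, and the nearest-neighbor interpolation used in the backprojection are simplifying assumptions made here for brevity.

#include <math.h>

#define PI 3.14159265358979323846

/* Ramp filter impulse response h(n*tau) from (5.16). */
static double ramp_h(int n, double tau)
{
    if (n == 0)     return 1.0 / (4.0 * tau * tau);
    if (n % 2 == 0) return 0.0;
    return -1.0 / ((double)n * (double)n * PI * PI * tau * tau);
}

/* Filtered projection Q[n] = tau * sum_k h((n - k)*tau) * P[k], as in (5.18). */
void filter_projection(const double *P, double *Q, int N, double tau)
{
    for (int n = 0; n < N; n++) {
        double acc = 0.0;
        for (int k = 0; k < N; k++)
            acc += ramp_h(n - k, tau) * P[k];
        Q[n] = tau * acc;
    }
}

/* Backprojection of one filtered view at angle theta onto an nx-by-ny image
 * centered on the origin, sampling Q along t = x cos(theta) + y sin(theta)
 * with nearest-neighbor interpolation, as in (5.13). */
void backproject_view(double *image, int nx, int ny, double pixel_size,
                      const double *Q, int N, double tau,
                      double theta, double dtheta)
{
    double c = cos(theta), s = sin(theta);
    for (int iy = 0; iy < ny; iy++) {
        for (int ix = 0; ix < nx; ix++) {
            double x = (ix - nx / 2) * pixel_size;
            double y = (iy - ny / 2) * pixel_size;
            double t = x * c + y * s;
            int n = (int)floor(t / tau + 0.5) + N / 2;  /* detector bin index */
            if (n >= 0 && n < N)
                image[iy * nx + ix] += dtheta * Q[n];
        }
    }
}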
5.4
Fan Beam Implementation
While analyzing FBP from the perspective of a parallel beam geometry simplifies the
analysis of the algorithm, in practice it is much less efficient to use a parallel beam
scanning geometry. This is because in order to collect parallel beam projection data,
a source will typically have to first scan the length of a projection and then rotate
to the next sampling angle and scan the length of the next projection resulting in
very long scan times [7].
Projection data can instead be generated much faster by
using a point source of radiation which scans the object using a fan-shaped beam of
equiangular rays. It is possible to derive the FBP equations for a fan-beam geometry
from the FBP equations for a parallel beam geometry.
In order to derive the fan beam equations, it will first be necessary to define a new coordinate system in \beta and \gamma, where \beta is the angle the source makes with a reference axis and \gamma is the relative angle of a specific beam within the fan. The conversion relations between (\theta, t)-space and (\beta, \gamma)-space are given by the following equations
\[ \theta = \beta + \gamma \tag{5.20} \]
\[ t = D\sin\gamma \tag{5.21} \]
\[ x = r\cos\phi \tag{5.22} \]
\[ y = r\sin\phi \tag{5.23} \]
where D is the distance from the source to the origin, r is the distance from the origin to a point (x, y), and \phi is the angle the point (x, y) makes with the x-axis. Figure 5-2 from [7] shows a detailed representation of the fan beam scanning geometry.
Figure 5-2: Diagram of the fan beam geometry showing the relations between the various parameters [7].
Combining (5.13) with the continuous form of (5.17) yields the equations
\[ f(x, y) = \int_{0}^{\pi} \int_{-\infty}^{\infty} P_\theta(t)\, h(x\cos\theta + y\sin\theta - t)\, dt\, d\theta \tag{5.24} \]
\[ f(x, y) = \frac{1}{2} \int_{0}^{2\pi} \int_{-t_m}^{t_m} P_\theta(t)\, h(x\cos\theta + y\sin\theta - t)\, dt\, d\theta \tag{5.25} \]
where P_\theta(t) = 0 for |t| > t_m, and the second equation is equivalent to the first equation using projections generated over 360° as opposed to 180°.
Using the coordinate transformation relationships above, (5.25) can be rewritten as
\[ f(r, \phi) = \frac{1}{2} \int_{0}^{2\pi} \int_{-\arcsin(t_m/D)}^{\arcsin(t_m/D)} P_{\beta+\gamma}(D\sin\gamma)\, h(r\cos(\beta + \gamma - \phi) - D\sin\gamma)\, D\cos\gamma\, d\gamma\, d\beta \tag{5.26} \]
Replacing P_{\beta+\gamma}(D\sin\gamma) with R_\beta(\gamma) and \arcsin(t_m/D) with \gamma_m, (5.26) can be simplified to
\[ f(r, \phi) = \frac{1}{2} \int_{0}^{2\pi} \int_{-\gamma_m}^{\gamma_m} R_\beta(\gamma)\, h(r\cos(\beta + \gamma - \phi) - D\sin\gamma)\, D\cos\gamma\, d\gamma\, d\beta \tag{5.27} \]
Through a series of complex derivations demonstrated in [7], (5.27) can be represented by the following system of equations
\[ f(r, \phi) = \int_{0}^{2\pi} \frac{1}{L^2(r, \phi, \beta)} \int_{-\gamma_m}^{\gamma_m} R'_\beta(\gamma)\, g(\gamma' - \gamma)\, d\gamma\, d\beta \tag{5.28} \]
\[ L(r, \phi, \beta) = \sqrt{[D + r\sin(\beta - \phi)]^2 + [r\cos(\beta - \phi)]^2} \tag{5.29} \]
\[ R'_\beta(\gamma) = R_\beta(\gamma)\, D\cos\gamma \tag{5.30} \]
\[ g(\gamma) = \frac{1}{2}\left(\frac{\gamma}{\sin\gamma}\right)^2 h(\gamma) \tag{5.31} \]
where \gamma' is the angle of the fan beam ray that passes through the point (r, \phi). For the discrete implementation, the equations for a fan beam reconstruction become
\[ f(x, y) = \Delta\beta \sum_{i=1}^{M} \frac{1}{L^2(x, y, \beta_i)} \left[ R'_{\beta_i}(n\alpha) * g(n\alpha) \right] \tag{5.32} \]
where M is the number of projections and \alpha is the angular spacing between rays within the fan. With the above system of equations it is possible to compute the FBP of a finite, discretized fan beam data set by first calculating the modified projection R'_\beta(\gamma) according to (5.30). Then each modified projection must be convolved with the filter impulse response g(\gamma), while making sure to sufficiently zero pad the modified projection to avoid the interperiod interference artifacts discussed earlier. As before, superior results can be obtained by utilizing a smoothing filter to reduce noise in the reconstructed image. The final step of a fan beam reconstruction involves a weighted backprojection of all of the filtered projections. In this case, the weighting factor is 1/L^2(r, \phi, \beta), where L is the distance from the source to the point (r, \phi) [7].
It is important to note that it was necessary to derive these fan beam equations
from the parallel beam equations due to the fact that the Fourier Slice Theorem is
only defined for parallel beam data. The fan beam data does not have a well defined
Fourier transform that can be used to derive the FBP equations as was done in the
parallel beam case. This inability to derive filter equations directly from the Fourier
domain using fan beam geometry will later prove troublesome during the attempt to
design the appropriate filters needed for CW tomosynthesis.
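As an illustration of how these fan beam equations translate into code, the following C sketch weights and backprojects a single view. It assumes an equiangular detector with bin spacing alpha and \gamma = 0 at the central ray; the convolution with g(\gamma) is analogous to (5.18) and is omitted; the names and layout are illustrative and are not those of the Appendix A program.

#include <math.h>

/* Step 1: modified projection R'_beta(n*alpha) = R_beta(n*alpha) * D * cos(gamma),
 * as in (5.30). */
void weight_fan_projection(double *R, int N, double alpha, double D)
{
    for (int n = 0; n < N; n++) {
        double gamma = (n - N / 2) * alpha;
        R[n] *= D * cos(gamma);
    }
}

/* Step 3: accumulate one filtered view Q (R' convolved with g) into the image,
 * applying the 1/L^2 weight of (5.28)/(5.32). */
void fan_backproject_view(double *image, int nx, int ny, double pixel_size,
                          const double *Q, int N, double alpha,
                          double beta, double dbeta, double D)
{
    double cb = cos(beta), sb = sin(beta);
    for (int iy = 0; iy < ny; iy++) {
        for (int ix = 0; ix < nx; ix++) {
            double x = (ix - nx / 2) * pixel_size;
            double y = (iy - ny / 2) * pixel_size;
            double num = x * cb + y * sb;        /* r cos(beta - phi)      */
            double den = D + x * sb - y * cb;    /* D + r sin(beta - phi)  */
            double gp  = atan2(num, den);        /* gamma' through (x, y)  */
            double L2  = num * num + den * den;  /* L^2 from (5.29)        */
            int n = (int)floor(gp / alpha + 0.5) + N / 2;  /* detector bin */
            if (n >= 0 && n < N)
                image[iy * nx + ix] += dbeta * Q[n] / L2;
        }
    }
}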
Chapter 6
Obstacles to FBP Implementation
Unfortunately, the FBP equations and theory detailed in the previous chapter were
designed to reconstruct a complete set of projection data. That is to say, a set of
measured projections whose collective Fourier transforms would specify the entire
Fourier transform of the imaged object. Since it was shown in the previous section
that each projection taken at angle \theta can only determine values of F(u, v) along a line at angle \theta, this means that projections must be collected over 360° in order to generate a complete set of projection data (in fact, the true minimum scan angle needed in order to obtain a complete set of projection data is 180° + \Gamma, where \Gamma is the fan angle of the x-ray source [7]).
A tomosynthetic data set does not meet these requirements for a complete data
set because it only contains projections collected over a limited arc angle. In fact, as
defined above, tomosynthesis and FBP are fundamentally incompatible and attempting to apply the FBP theory and techniques to a tomosynthetic data set results in a
severely degraded reconstructed image quality, as seen in Figure 6-1. This section will
explore the incompatibilities between tomosynthesis and FBP and several proposed
resolutions which would ultimately prove to be insufficient.
Figure 6-1: Comparison of reconstructions on a noisy water ellipse over 360° (Figure 6-1a) and over 100° (Figure 6-1b). Figure 6-1c shows the difference between the two pictures; the high contrast present in the difference demonstrates how poorly the 100° scan approximates the true reconstructed image.
6.1
Spatial Domain Ringing
The principal cause of distortion in the reconstructed image is ringing in the spatial
domain which arises from the FBP equations treating the incomplete tomosynthetic
data set as a complete one. Because the FBP equations expect a completely specified
Fourier transform of the object, the tomosynthetic data set can be seen as projections that were actually collected over 360° but whose Fourier transforms were then
windowed to only include the tomosynthetic projections and their transforms. The
multiplication of the object's Fourier transform with the tomosynthetic window in
the frequency domain results in a convolution between the object's image and a sinc
function (inverse Fourier transform of the tomosynthetic window) in the spatial domain. This convolution produces the ringing in the spatial domain which distorts the
reconstructed image so severely.
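Stated compactly (using W as an illustrative symbol for the angular window, equal to one over the tomosynthetic arc and zero elsewhere; this notation is introduced here only for exposition),
\[ \hat{F}(\nu, \theta) = F(\nu, \theta)\, W(\theta) \quad\Longrightarrow\quad \hat{f}(x, y) = f(x, y) * w(x, y), \]
where w(x, y), the inverse Fourier transform of the window, has sinc-like oscillating sidelobes, and it is the convolution of the object with those sidelobes that appears as ringing in the reconstructed image.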
There have been many investigations into methods of optimizing FBP for tomosynthesis in order to reduce this ringing distortion in the spatial domain. Some
of these studies suggest that an apodizing filter be devised in the frequency domain
by making use of a parallel beam approximation to the collected fan beam data [6].
This approximation is necessary because, as was stated in the previous section, the
Fourier Slice theorem is not well defined for fan beam data and as such the fan beam
equations (and any corresponding filters) can only be obtained through the parallel
beam equations.
Unfortunately, using a parallel beam approximation is not a practical approach to
a tomosynthetic filter design, because a parallel beam approximation discards a large
fraction of the projection data collected. Given that this projection data is obtained
by bombarding the patient with ionizing radiation, discarded data constitutes an
unnecessary radiation exposure to the patient.
6.2
Parallel Beam Approximation Error
It is useful to calculate the percentage of data discarded from a fan beam data set when moving to a parallel beam approximation. Figure 6-2 shows the difference between the data obtained in a fan beam scan and the data obtained in a parallel
beam approximation. The two diagonal black lines represent the two most extreme projections in the fan beam geometry, and the area in between them is then the entirety of the projection data collected. The top diagonal black line represents the equation
\[ \theta_{max} = \gamma + \beta_{max} = \arcsin\!\left(\frac{t}{D}\right) + \beta_{max} \tag{6.1} \]
and the lower diagonal black line represents the equation
\[ \theta_{min} = \gamma + \beta_{min} = \arcsin\!\left(\frac{t}{D}\right) + \beta_{min} \tag{6.2} \]
The two lighter vertical lines represent the maximum and minimum values for t with a fan angle of 30°. They are represented by the equations t_max = D sin 15° and t_min = D sin(-15°). And lastly, the two lighter horizontal lines represent the maximum and minimum values of \theta for which a complete parallel projection can be obtained for all values of t. These values are
\[ \theta_{\parallel,max} = \gamma_{min} + \beta_{max} \tag{6.3} \]
\[ \phantom{\theta_{\parallel,max}} = -15^\circ + \beta_{max} \tag{6.4} \]
\[ \theta_{\parallel,min} = \gamma_{max} + \beta_{min} \tag{6.5} \]
\[ \phantom{\theta_{\parallel,min}} = 15^\circ + \beta_{min} \tag{6.6} \]
Using these equations it is possible to approximate the total fraction of data lost by moving from fan beam data to a parallel beam approximation. The total area contained by the fan beam data can be calculated as
\[ A_{fan} = b \cdot h = B \cdot \Delta t \tag{6.7} \]
where B is the total scan angle (\beta_{max} - \beta_{min}). And the total area contained by the parallel beam approximation is
\[ A_{\parallel} = b \cdot h = (B - \Delta\gamma) \cdot \Delta t \tag{6.8} \]
where \Delta\gamma represents the total fan angle (\gamma_{max} - \gamma_{min}). The ratio between parallel beam data and fan beam data is then
\[ \frac{A_{\parallel}}{A_{fan}} = \frac{B - \Delta\gamma}{B} \tag{6.9} \]
Given a typical fan angle of 30° and a scan angle of 100°, which is high compared to commercial tomosynthesis devices on the market today [8], (6.9) indicates that approximately 70% of the projection data will be retained and 30% will be discarded. Unfortunately, according to (6.9), in order for a parallel beam approximation to retain just 90% of the data collected, it would be necessary to scan at least 300°. In fact, the parallel beam approximation does such a poor job at maximizing the amount of data retained that a complete fan beam projection data set would be obtained
before 90% of the measured data could be approximated as parallel beam data (recall from earlier that the minimum scan angle needed to obtain a complete projection data set is 180° + \Gamma, which in this example is 210°). The amount of data discarded due to the parallel beam approximation makes it an impractical tool to use in the design of a tomosynthetic filter for FBP.

Figure 6-2: This diagram shows the amount of data obtained in a fan beam scan, and the amount of data lost when moving to a parallel beam approximation. The area in between the two heavy black diagonal lines represents the amount of data collected from a fan beam scan. The rectangle enclosed by the four lighter lines represents the amount of data left after moving to a parallel beam approximation.
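A quick numerical check of (6.9) in C, using the example values from this section (the function name is illustrative):

#include <stdio.h>

/* Fraction of fan beam data retained by a parallel beam approximation, (6.9). */
static double retained_fraction(double scan_angle_deg, double fan_angle_deg)
{
    return (scan_angle_deg - fan_angle_deg) / scan_angle_deg;
}

int main(void)
{
    /* 30 degree fan, 100 degree scan -> 0.70 retained, 30% discarded. */
    printf("retained = %.2f\n", retained_fraction(100.0, 30.0));
    /* Scan angle needed to retain 90%: B = 30 / (1 - 0.9) = 300 degrees. */
    printf("scan for 90%% retention = %.0f deg\n", 30.0 / (1.0 - 0.9));
    return 0;
}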
6.3
Erroneous Normalization
A second cause of distortion in the reconstruction, which is overshadowed by the ringing distortion, is the erroneous normalization of the reconstructed values. The FBP
equations normalize the filtered projections by scaling them by a factor equal to the
angular separation between projections; without this normalization, each projection
contributes too much information to the individual pixels and the reconstruction obtains the wrong attenuation values for the imaged object. Because a tomosynthetic
scan only uses a fraction of the projections of a complete scan, each filtered projection needs to be renormalized to contribute more of its information to the individual
pixels. Additionally, any tomosynthetic filter that is designed to mitigate the ringing
in the spatial domain will need to be accounted for in the calculation of the new
normalization factors.
6.4
Non-constant Slice Thickness
The last cause of distortion is beyond the scope of this investigation, but is just
as central to the incompatibility between tomosynthesis and FBP as the ringing in
the spatial domain, so it merits mentioning. Due to the incomplete sampling of tomosynthesis, there is not enough information about the v, frequencies to hold the
slice thickness constant [6]. In tomosynthetic reconstructions, the slice thickness increases as the v. frequencies decrease. Since the scope of this investigation is limited
to the 2D in-plane reconstruction, this investigation will not be considering this limitation of tomosynthesis, yet it will be necessary for a complete three-dimensional
reconstruction algorithm to also incorporate solutions suggested in [6].
Chapter 7
Tomosynthetic Filter Design
Considering the many obstacles to a tomosynthetic implementation of the FBP algorithm enumerated in the previous section, and given the fact that the parallel beam
approximation has been shown to be unsuitable for a practical implementation, the
options available for a tomosynthetic filter design are very limited. After considering
all of the constraints detailed above, for the purposes of this investigation, it was
determined that the best method of filter design would be to design an apodizing
function in the spatial domain in place of a frequency dependent filter. The goal
of this apodizing function would be to reduce the contribution of the most extreme
projections which contain the most high frequency content.
7.1
Identifying High Frequency Content
Even though the fan beam projection data does not have a well defined Fourier
transform, some aspects of its behavior in the Fourier domain are still known and
well defined. The tomosynthetic filter derivation will take advantage of the property
that the most extreme projections, those closest to \beta_{max} and \beta_{min}, contain the high frequencies for the axis corresponding to \beta = (\beta_{max} + \beta_{min})/2. For simplicity this axis will be defined as the y-axis and will function as the point of reference for the scanning coordinate system, and in keeping with convention the x-axis will be the axis orthogonal to the y-axis in the plane of the fan beam.
This property can be intuitively grasped by considering the frequencies contained in the single projection taken at \beta = (\beta_{max} + \beta_{min})/2. In this projection, all of the frequencies of the x-axis, \nu_x, are known because the fan beam rays can "see" the entire x-axis, being nearly perpendicular to it, but the y-axis cannot be "seen" at all and its frequencies are all aliased to the \nu_y = 0 frequency. This can also be seen using the system of equations in (5.1), where t = x\cos\theta + y\sin\theta; since \theta = 0 this means that t = x, and the Fourier transform of this projection will only contain \nu_x frequencies and will not contain any \nu_y frequencies. As the source moves further away from the y-axis, however, more of the y-axis can be "seen" and measured, and higher \nu_y frequencies can be detected. Again this can be seen from (5.1) because as \theta increases, so does the y contribution to t, and thus the contribution of the \nu_y frequencies in the projection transform.
As discussed in Chapter 6, the tomosynthetic window cuts off frequency content
at the extent of the tomosynthesis scan. In the frequency domain, this extent must
occur at the highest frequencies present in the tomosynthetic data by virtue of being
a windowing operation. Therefore, while it is not possible to design a filter in the
frequency domain which will smooth the sharp discontinuity caused by the windowing,
it is possible to, in the space domain, apodize the projections corresponding to those
frequencies at which the sharp discontinuities occur. The proposed filter, then, will be
a raised shelf, which keeps as much of the projection data as possible, and apodizes
the data corresponding to the most extreme projections in order to minimize their
contributions to the sharp discontinuity in the frequency domain.
7.2
Parker Weights
The last specification missing from the design of the tomosynthetic filter is the apodizing function that will be used on the projections furthest from the central axis. The
apodizing function chosen must meet certain criteria in order to smooth the projection data out and create a continuous transition from a scale factor of 1 down to a
scale factor of 0. At the transition between normal scaling and apodization, the filter
must be continuous, meaning the limit of both scaling functions as they approach the
transition point must be equal and the limit of the slopes of both scaling functions
as they approach the transition point must be equal. These are the only properties
required of the chosen apodizing function. For this investigation, the decision was
made to use the smoothing function specified in [2] due to its effectiveness in dealing
with partial scan data.
The formula for the smoothing function is
\[ f(x) = 3x^2 - 2x^3 \tag{7.1} \]
where x is the weight smoothing transformation. Keeping in mind that the goal of the filter is to apodize the most extreme projections, the weight smoothing transformation, x, will then be directly proportional to the projection angle, \beta. Incorporating the projection angle, and seeking to minimize the projections which are close to both \beta_{max} and \beta_{min}, the smoothing function becomes
\[
\begin{cases}
3\left(\dfrac{\beta - \beta_{max}}{\mathrm{shelf}\cdot\beta_{max} - \beta_{max}}\right)^2 - 2\left(\dfrac{\beta - \beta_{max}}{\mathrm{shelf}\cdot\beta_{max} - \beta_{max}}\right)^3, & \text{if } \beta > \mathrm{shelf}\cdot\beta_{max} \\[2ex]
3\left(\dfrac{\beta - \beta_{min}}{\mathrm{shelf}\cdot\beta_{min} - \beta_{min}}\right)^2 - 2\left(\dfrac{\beta - \beta_{min}}{\mathrm{shelf}\cdot\beta_{min} - \beta_{min}}\right)^3, & \text{if } \beta < \mathrm{shelf}\cdot\beta_{min} \\[2ex]
1, & \text{otherwise}
\end{cases}
\tag{7.2}
\]
where shelf is the parameter which specifies the value of the cutoff projection angle (how wide the shelf of the filter is).
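A small C sketch of this apodizing weight follows. It assumes \beta_{min} < 0 < \beta_{max}, that is, a scan that straddles the central axis defined in Section 7.1; the function names are illustrative and this is not the Appendix A implementation.

/* f(x) = 3x^2 - 2x^3, the smoothing function of (7.1). */
static double smoothstep(double x)
{
    return 3.0 * x * x - 2.0 * x * x * x;
}

/* Shelf weight of (7.2) applied to the projection taken at source angle beta. */
double shelf_weight(double beta, double beta_max, double beta_min, double shelf)
{
    if (beta > shelf * beta_max) {
        double x = (beta - beta_max) / (shelf * beta_max - beta_max);
        return smoothstep(x);   /* rolls off smoothly from 1 to 0 at beta_max */
    }
    if (beta < shelf * beta_min) {
        double x = (beta - beta_min) / (shelf * beta_min - beta_min);
        return smoothstep(x);   /* rolls off smoothly from 1 to 0 at beta_min */
    }
    return 1.0;                 /* flat shelf: full contribution */
}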
7.3
Corrections to Normalization
Erroneous normalization was the second cause of distortion in the reconstructed tomosynthetic image; this form of distortion affects the reconstructed attenuation values
themselves. Now that a filter has been specified it is necessary to further modify the
FBP equations in order to account for:
1. The shorter scan angle causing an underweighting of reconstructed attenuation
values
2. The apodization values applied to select projections to minimize ringing. This
minimizes the contributions of these projections and must be taken into account
These normalization corrections must be made during the backprojection step of the FBP algorithm, to ensure that the information in the filtered projections is appropriately apportioned among the pixels of the reconstructed image.
To rectify the problem of underweighting of reconstructed attenuation values caused by the reduced scan angle, the original normalization factor (Δβ) must be scaled to account for the change in total scan angle. The modified normalization factor is then

(2π / β_scan) · Δβ     (7.3)

where β_scan is the total tomosynthetic scan angle.
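For the 100° scans used in the simulations of Chapter 8, for example, this scale factor works out to 2π/β_scan = 360°/100° = 3.6, so each backprojected contribution is weighted 3.6 times more heavily than it would be in a full 360° scan.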
To account for the change in contribution per projection due to the apodization of the projection data, it will be necessary to sum the contributions each ray makes to an individual pixel (the contribution values are obtained using (7.2)). This modified contribution sum is then used to normalize the effective total number of rays contributing to that individual pixel using the following scale factor

numRays / normalizeSum     (7.4)
This filter equation will be applied to the projection data in the spatial domain
before it is convolved with the FBP impulse response function (5.31).
Then, after
frequency domain processing is complete, the results will be backprojected along
the image plane using the modified normalization factors to restore the attenuation
information.
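A minimal sketch of this modified normalization, applied to a single pixel during the backprojection step, is shown below. It assumes the backprojection loop has already accumulated backprojSum (the interpolated, filtered projection values for that pixel, each divided by the squared source-to-pixel distance), normalizeSum (the sum of the apodization weights (7.2) of the contributing rays), and numRays (the count of contributing rays); these names are illustrative, and the Appendix A.5 listing follows the same pattern with its own variable names.

#define PI 3.14159265358979f  /* the project's parameters.h provides this constant */

/* Apply the modified FBP normalization of Section 7.3 to one reconstructed pixel.
 * backprojSum  - accumulated, distance-weighted filtered projection values
 * normalizeSum - accumulated apodization weights w(beta) of the contributing rays, per (7.2)
 * numRays      - number of rays that contributed to this pixel
 * deltaBeta    - angular spacing between projections (radians)
 * scanAngle    - total tomosynthetic scan angle (radians)                                  */
static float normalizePixel(float backprojSum, float normalizeSum, int numRays,
                            float deltaBeta, float scanAngle)
{
    /* (7.3): scale the usual deltaBeta weighting by 2*pi/scanAngle to undo the
     * underweighting caused by the shortened scan */
    float scanScale = deltaBeta * (2.0f * PI / scanAngle);

    /* (7.4): rescale by the effective number of rays, numRays/normalizeSum, so the
     * apodized projections do not dilute the reconstructed attenuation value */
    float rayScale = (normalizeSum > 0.0f) ? ((float)numRays / normalizeSum) : 0.0f;

    return backprojSum * scanScale * rayScale;
}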
Chapter 8
Results and Discussion
Using the tomosynthetic filter derived in the previous chapter, several reconstructions were performed on simulated phantom objects to ascertain the effectiveness of the proposed filter at reconstructing tomosynthetic images. To approximate a CW tomosynthesis data set, the phantoms were generated with a very dense set of projections, reflecting the expected increase in the number of projections obtainable from a CW tomosynthesis scan. The filter modifications showed a definite improvement in reconstructed image quality, allowing for sharper
resolution of the imaged object.
Unfortunately, the attenuation values of the reconstructed image were not fully
restored. Instead, the reconstructed tomosynthetic attenuation values were approximately 37% smaller than the true values for the imaged object. Additionally, much of
the original intensity of the object image was lost during the reconstruction process,
leading to an overall lightening of the reconstructed images.
Figure 8-1 shows a comparison between the original noisy water ellipse from Chapter 6 and a reconstructed tomosynthetic image using a shelf size of 0.7 and tomosynthetic filtering. The pictures show a marked improvement in the approximation to the
original image by applying the tomosynthetic filters and modified normalization values. Comparing the two tomosynthetic reconstructions (with and without filtering),
Figure 6-1b exhibits streaking artifacts and attenuation values even farther from the
true values than the reconstruction with filtering. By contrast, Figure 8-1b displays
(a) 360° scan  (b) 100° scan  (c) Difference between Figure 8-1a and Figure 8-1b
Figure 8-1: Comparison of reconstructions on a noisy water ellipse over 360° (Figure 8-1a) and over 100° (Figure 8-1b) with tomosynthetic filtering and a shelf size of 0.7. Figure 8-1c shows the difference between the two pictures; the dark areas represent regions of little to no difference and show that tomosynthetic filtering yields a much better approximation to the original image.
virtually no streaking artifacts and attenuation values which are much closer to the
true values.
Figure 8-2 shows a comparison between various shelf sizes used in the apodization
function. From the images it seems that a shelf size of 0.5 performs the best, with
increasing shelf sizes introducing streaking artifacts into the reconstruction.
It is
worth noting that a shelf size of 0.5 is approximately a raised cosine, and does not
have a true shelf. As a result, a shelf size smaller than 0.5 is also not possible and so
a shelf of 0.5 produces the best reconstruction results obtainable with the proposed
(a) 100° scan with 0.5 shelf  (b) 100° scan with 0.7 shelf  (c) 100° scan with 0.9 shelf
Figure 8-2: Comparison of reconstructions on a noisy water ellipse with varying shelf sizes.
reconstruction algorithm.
8.1
Conclusion
There are very clear improvements in reconstructed image quality when a spatially
derived apodizing function is applied to the projection data. Additionally, by deriving
the function in the spatial domain, and opting out of a parallel beam approximation,
this reconstruction algorithm makes full use of the data collected from the tomosynthesis scan. Overall, the reconstruction produced fewer artifacts and attenuation values which were closer to the true values for the imaged object.
However, there are still many areas for improvement in the design of a reconstruction algorithm for continuous wave tomosynthesis.
A suitable reconstruction
algorithm will have to take into account methods to restore the object intensity that
is lost during the reconstruction process.
The drop in intensity is due to a lower
density of low spatial frequencies collected in a tomosynthesis scan. Mertelmeier et
al. propose a possible design for one such filter in [6].
It would also be worthwhile to investigate the effect of using smoothing functions in the apodizing filter other than the one used in this investigation.
Great insight
could be gained into how to improve the reconstruction process by comparing the
different images produced by different smoothing filters. However, if the theoretical
advantages of CW tomosynthesis are as promising as they seem, then there could
very well be many more exciting discoveries and a complete transformation of the
field of breast imaging.
Appendix A
Reconstruction Program Code
A.1
Main.c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <string.h>
#include "fileio.h"
#include "spectralfilter.h"
#include "spatialweight.h"
#include "backprojection.h"
#include "parameters.h"

/**************************************************************************/
/** Function Declarations                                                  */
/**************************************************************************/
void initializeParams(int, int, int, FileParams*, ScanParams*, ReconParams*);
int roundToPowerOfTwo(int);
float** readProjectionData(FileParams);
void cleanup(float**, int, float**, int);

/**************************************************************************/
/** Function Definitions                                                   */
/**************************************************************************/
int main()
{
    float **p_rowPointers;
    float **p_rconRowPointers;
    FileParams fParams;
    ScanParams sParams;
    ReconParams rParams;

    char* localPathName = malloc(sizeof(*localPathName) * 100);
    strcpy(localPathName, "/home/pgdouyon/Documents/thesis/recon-files/");

    int caseNum = 2;       /* which phantom sinogram to reconstruct (see initializeParams) */
    int numImages = 1;     /* number of reconstructions to run */
    int scanAngle = 100;   /* total scan angle in degrees */
    int filter_tomo = 1;   /* 1 = apply the tomosynthetic apodization weights */
    int bp_tomo = 1;       /* 1 = apply the modified backprojection normalization */

    for (int i = 0; i < numImages; i++)
    {
        initializeParams(caseNum, i, scanAngle, &fParams, &sParams, &rParams);
        p_rowPointers = readProjectionData(fParams);
        if (!p_rowPointers) { return 1; }

        /* Reconstruction pipeline: spatial weighting -> spectral filtering -> backprojection */
        spatialWeighting(p_rowPointers, &fParams, &sParams, filter_tomo);
        performSpectralFiltering(p_rowPointers, &fParams, &sParams);
        p_rconRowPointers = backprojection(p_rowPointers, &fParams, &sParams, &rParams, bp_tomo);
        if (!p_rconRowPointers) { return 1; }

        char reconFileName[40] = "";
        char iter[15];
        snprintf(iter, 15, "%d_%d_%d_%dagain", caseNum, scanAngle, filter_tomo, bp_tomo);
        strcat(reconFileName, rParams.reconFileName);
        strcat(reconFileName, iter);
        write2DFloatArray(localPathName, reconFileName, p_rconRowPointers,
                          sizeof(**p_rconRowPointers), rParams.imageDim, rParams.imageDim);
        cleanup(p_rowPointers, fParams.numRows, p_rconRowPointers, rParams.imageDim);
    }
    free(localPathName);
    return 0;
}
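/* As configured above, main() reconstructs case 2 (the noisy water ellipse) from a 100
 * degree scan with both the tomosynthetic apodization (filter_tomo = 1) and the modified
 * backprojection normalization (bp_tomo = 1) enabled; setting either flag to 0 reverts
 * that stage to the unmodified fan-beam FBP behavior. */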
/**************************************************************************/
void initializeParams(int caseNum, int numCalls, int scanAngle, FileParams* p_fileParams,
                      ScanParams* p_scanParams, ReconParams* p_reconParams)
{
    // static int numCalls = -1;
    // numCalls++;
    switch (caseNum)
    {
        case 1:
            p_fileParams->samplesPerRow = 896;
            p_fileParams->numRows = 960;
            p_fileParams->headerOffset = 24096;
            p_fileParams->zeroPaddedRowSize = roundToPowerOfTwo(p_fileParams->samplesPerRow);
            p_fileParams->fileName = "/home/pgdouyon/Documents/thesis/phantoms/VCT1x1p25WB20cm.sino";
            p_scanParams->centerRay = 439.25;
            p_scanParams->deltaGamma = 0.001013862;
            p_scanParams->deltaBeta = (2*PI/p_fileParams->numRows);
            p_scanParams->originDist = 57.3;
            p_scanParams->scanAngle = scanAngle*PI/180;
            p_scanParams->fanAngle = p_scanParams->deltaGamma * p_fileParams->samplesPerRow;
            p_scanParams->betaMax = (int) (p_scanParams->scanAngle/p_scanParams->deltaBeta);
            p_scanParams->betaInc = 1;
            p_scanParams->betaOffset = (int) ((PI*numCalls/20.0)/p_scanParams->deltaBeta);
            //sParams.betaMax = 10000;
            p_reconParams->imageDim = 512;
            p_reconParams->objectDim = 22;
            p_reconParams->reconFileName = "water_disk_angular_sample_";
            break;
        case 2:
            p_fileParams->samplesPerRow = 896;
            p_fileParams->numRows = 960;
            p_fileParams->headerOffset = 32752;
            p_fileParams->zeroPaddedRowSize = roundToPowerOfTwo(p_fileParams->samplesPerRow);
            p_fileParams->fileName = "/home/pgdouyon/Documents/thesis/phantoms/VCT1x1p25WE48cm2Discsn.sino";
            p_scanParams->centerRay = 439.25;
            p_scanParams->deltaGamma = 0.001013862;
            p_scanParams->deltaBeta = (2*PI/p_fileParams->numRows);
            p_scanParams->originDist = 57.3;
            p_scanParams->scanAngle = scanAngle*PI/180;
            p_scanParams->fanAngle = p_scanParams->deltaGamma * p_fileParams->samplesPerRow;
            p_scanParams->betaMax = (int) (p_scanParams->scanAngle/p_scanParams->deltaBeta);
            p_scanParams->betaInc = 1;
            p_scanParams->betaOffset = (int) ((PI*numCalls/20.0)/p_scanParams->deltaBeta);
            //sParams.betaMax = 10000;
            p_reconParams->imageDim = 512;
            p_reconParams->objectDim = 50;
            p_reconParams->reconFileName = "noisy_water_ellipse_sample_";
            break;
        case 3:
            p_fileParams->samplesPerRow = 4096;
            p_fileParams->numRows = 10000;
            p_fileParams->headerOffset = 32752;
            p_fileParams->zeroPaddedRowSize = roundToPowerOfTwo(p_fileParams->samplesPerRow);
            p_fileParams->fileName = "/home/pgdouyon/Documents/thesis/phantoms/VCT1x1p25WE48cm2Discsn10Ka";
            p_scanParams->centerRay = 2047.25;
            p_scanParams->deltaGamma = 0.000253466;
            p_scanParams->deltaBeta = (2*PI/p_fileParams->numRows);
            p_scanParams->originDist = 57.3;
            p_scanParams->scanAngle = scanAngle*PI/180;
            p_scanParams->fanAngle = p_scanParams->deltaGamma * p_fileParams->samplesPerRow;
            p_scanParams->betaMax = (int) (p_scanParams->scanAngle/p_scanParams->deltaBeta);
            p_scanParams->betaInc = 1;
            p_scanParams->betaOffset = (int) ((PI*numCalls/20.0)/p_scanParams->deltaBeta);
            //sParams.betaMax = 10000;
            p_reconParams->imageDim = 512;
            p_reconParams->objectDim = 50;
            p_reconParams->reconFileName = "water_ellipse_sample_";
            break;
        case 4:
            p_fileParams->samplesPerRow = 4096;
            p_fileParams->numRows = 10000;
            p_fileParams->headerOffset = 97672;
            p_fileParams->zeroPaddedRowSize = roundToPowerOfTwo(p_fileParams->samplesPerRow);
            p_fileParams->fileName = "/home/pgdouyon/Documents/thesis/phantoms/VCT1x1p25WB48cmPinsNoKv4";
            p_scanParams->centerRay = 2047.25;
            p_scanParams->deltaGamma = 0.000253466;
            p_scanParams->deltaBeta = (2*PI/p_fileParams->numRows);
            p_scanParams->originDist = 57.3;
            p_scanParams->scanAngle = scanAngle*PI/180;
            p_scanParams->fanAngle = p_scanParams->deltaGamma * p_fileParams->samplesPerRow;
            p_scanParams->betaMax = (int) (p_scanParams->scanAngle/p_scanParams->deltaBeta);
            p_scanParams->betaInc = 1;
            p_scanParams->betaOffset = (int) ((PI*numCalls/20.0)/p_scanParams->deltaBeta);
            //sParams.betaMax = 10000;
            p_reconParams->imageDim = 512;
            p_reconParams->objectDim = 50;
            p_reconParams->reconFileName = "pin_phantom_1.25cm_angular_sample_";
            break;
        case 5:
            p_fileParams->samplesPerRow = 4096;
            p_fileParams->numRows = 10000;
            p_fileParams->headerOffset = 97672;
            p_fileParams->zeroPaddedRowSize = roundToPowerOfTwo(p_fileParams->samplesPerRow);
            p_fileParams->fileName = "/home/pgdouyon/Documents/thesis/phantoms/VCT1x1p25WB48cmPins3cni1o";
            p_scanParams->centerRay = 2047.25;
            p_scanParams->deltaGamma = 0.000253466;
            p_scanParams->deltaBeta = (2*PI/p_fileParams->numRows);
            p_scanParams->originDist = 57.3;
            p_scanParams->scanAngle = scanAngle*PI/180;
            p_scanParams->fanAngle = p_scanParams->deltaGamma * p_fileParams->samplesPerRow;
            p_scanParams->betaMax = (int) (p_scanParams->scanAngle/p_scanParams->deltaBeta);
            p_scanParams->betaInc = 1;
            p_scanParams->betaOffset = (int) ((PI*numCalls/20.0)/p_scanParams->deltaBeta);
            //sParams.betaMax = 10000;
            p_reconParams->imageDim = 512;
            p_reconParams->objectDim = 50;
            p_reconParams->reconFileName = "pin_phantom_3.00cm_angular_sample_";
            break;
        default:
            break;
    }
}
/**************************************************************************/
/**
 * Finds the lowest integer power of two greater than twice the argument passed and
 * returns the value of two raised to that power. We use 2*sPR - 1 because according to
 * signal processing theory we need to use values of the filter function from -(sPR-1)
 * to (sPR-1), so we need to zero pad to at least that length.
 *
 * @param samplesPerRow - Number of nonzero projection data samples, used to find the
 *        length of the zero padded array
 *
 * @author Pierre-Guy Douyon (pgdouyon@alum.mit.edu)
 */
int roundToPowerOfTwo(int samplesPerRow)
{
    int powerOfTwo = 1;
    while (1)
    {
        powerOfTwo = powerOfTwo << 1;
        if (powerOfTwo > (2 * samplesPerRow - 1)) { return powerOfTwo; }
    }
}
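/* Example: for the 896-sample projection rows of cases 1 and 2, 2*896 - 1 = 1791, so
 * roundToPowerOfTwo returns 2048 and each row is zero padded to 2048 samples before
 * the filtering step. */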
/**
 * Read Projection Data extracts the projection samples from a .sino file and
 * creates a two-dimensional array with the data. This function returns a pointer
 * to that array which will be NULL if there were any errors in file processing.
 *
 * The function first opens the file and then skips over the header. It then creates
 * an array of pointers to rows in the array. Then the function reads values into the
 * array one row at a time and zero pads after all non-zero samples of a row have been read.
 *
 * @param samplesPerRow - The number of samples in one row of the projection data matrix
 * @param numRows - The total number of rows in the projection data matrix
 * @param headerOffset - The size of the header file in bytes
 *
 * @author Pierre-Guy Douyon (pgdouyon@alum.mit.edu)
 */
float** readProjectionData(FileParams fParams)
{
    FILE *p_sinogramFile;
    float **p_rowPointers;
    size_t sampleSize, elementsRead;
    int seekFailure;
    int samplesPerRow = fParams.samplesPerRow;
    int numRows = fParams.numRows;
    int offset = fParams.headerOffset;
    int rowSize = fParams.zeroPaddedRowSize;

    p_rowPointers = 0;
    p_sinogramFile = fopen(fParams.fileName, "rb");
    if (!p_sinogramFile)
    {
        printf("File failed to open!\n");
        return p_rowPointers;
    }
    seekFailure = fseek(p_sinogramFile, offset, SEEK_SET);
    if (seekFailure)
    {
        printf("Failed to move file cursor! Terminating!\n");
        return p_rowPointers;
    }
    p_rowPointers = malloc((sizeof(*p_rowPointers))*numRows);
    if (!p_rowPointers)
    {
        printf("No available memory for pointer array! Terminating!\n");
        return p_rowPointers;
    }
    sampleSize = sizeof(**p_rowPointers);
    for (int i = 0; i < numRows; i++)
    {
        p_rowPointers[i] = malloc(sizeof(**p_rowPointers) * rowSize);
        if (!p_rowPointers[i])
        {
            printf("No available memory for float array! Terminating!\n");
            p_rowPointers = 0;
            return p_rowPointers;
        }
        elementsRead = fread(p_rowPointers[i], sampleSize, samplesPerRow, p_sinogramFile);
        if (elementsRead < samplesPerRow)
        {
            printf("Error: read less than the expected number of bytes!\n");
            p_rowPointers = 0;
            return p_rowPointers;
        }
        /* zero pad the remainder of the row beyond the measured samples */
        for (int j = samplesPerRow; j < rowSize; j++)
        {
            (p_rowPointers[i])[j] = 0.0;
        }
    }
    fclose(p_sinogramFile);
    return p_rowPointers;
}
/**
 * Cleanup frees all of the allocated memory
 *
 * @param p_rowPointers - This is the pointer to the array of row pointers. Both the pointer
 *        and every pointer in the array are holding on to memory in the heap
 * @param numRows - The number of rows in the projection data matrix
 *
 * @author Pierre-Guy Douyon (pgdouyon@alum.mit.edu)
 */
void cleanup(float** p_rowPointers, int numRows, float** p_rconRowPointers, int rconDim)
{
    for (int i = 0; i < numRows; i++)
    {
        free(p_rowPointers[i]);
    }
    for (int i = 0; i < rconDim; i++)
    {
        free(p_rconRowPointers[i]);
    }
    free(p_rconRowPointers);
    free(p_rowPointers);
    return;
}
A.2
Filters.c
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "filters.h"
#include "parameters.h"

/**************************************************************************/
/** Function Declarations                                                  */
/**************************************************************************/
float getModifiedWeight(int, ScanParams*);
float getAppodizedWeight(int, int, ScanParams*);
float getSpectralFilterVal(int, ScanParams*);

/**************************************************************************/
/** Getter Functions                                                       */
/**************************************************************************/
float getModifiedWeight(int gammaIndex, ScanParams* p_scanParams)
{
    return p_scanParams->originDist
           * cosf((gammaIndex - p_scanParams->centerRay) * p_scanParams->deltaGamma);
}

float getAppodizedWeight(int gammaIndex, int betaIndex, ScanParams* p_scanParams)
{
    /* An earlier version (commented out in the original listing) apodized on the ray
     * angle theta = fmodf(beta + gamma, 2*PI) against shelf-scaled bounds of
     * +/- 0.5*(scanAngle - fanAngle), using the same 3x^2 - 2x^3 smoothing. The
     * active version below weights on the projection angle beta alone. */

    // Center betaMax to determine limits of parameter betaIndex
    int upperBound, lowerBound;
    float shelfSize, x, filterVal;

    upperBound = (p_scanParams->betaMax)/2;   // not inclusive
    lowerBound = -1*upperBound;               // inclusive
    shelfSize = 0.9;  // size of shelf of filter (area where filter is 1.0) as a fraction

    if (betaIndex > upperBound)
    {
        betaIndex -= 2.0*PI/p_scanParams->deltaBeta;
    }
    if (betaIndex < shelfSize*lowerBound)
    {
        x = (betaIndex - lowerBound)/(lowerBound*(shelfSize - 1));
        filterVal = 3*x*x - 2*x*x*x;
    }
    else if (betaIndex > shelfSize*upperBound)
    {
        x = (betaIndex - shelfSize*upperBound)/(upperBound*(1 - shelfSize));
        filterVal = 1 - (3*x*x - 2*x*x*x);
    }
    else
    {
        filterVal = 1.0;
    }
    if (filterVal < 0)
    {
        printf("FilterVal < 0\n");
        printf("gammaIndex: %d\n", gammaIndex);
        exit(1);
    }
    if (filterVal > 1)
    {
        printf("FilterVal > 1\n");
        printf("gammaIndex: %d\n", gammaIndex);
        exit(1);
    }
    return filterVal;
}
float getSpectralFilterVal(int index, ScanParams* p_scanParams)
{
    if (index == 0)
    {
        return 1.0/(8.0*p_scanParams->deltaGamma*p_scanParams->deltaGamma);
    }
    else
    {
        /* Remaining samples of the equiangular fan-beam FBP impulse response (5.31):
         * even-indexed samples are zero, odd-indexed samples are
         * -1 / (2 * (pi * sin(n*deltaGamma))^2). */
        return (abs(index) % 2 == 0) ? 0.0
             : -1.0/(2.0*PI*PI*sinf(index*p_scanParams->deltaGamma)*sinf(index*p_scanParams->deltaGamma));
    }
}
A.3
SpatialWeight.c
#include <stdlib.h>
#include <stdio.h>
#include "spatialweight.h"
#include "parameters.h"
#include "filters.h"
#include "fileio.h"

/**************************************************************************/
/** Function Declarations                                                  */
/**************************************************************************/
void spatialWeighting(float**, FileParams*, ScanParams*, int);

/**************************************************************************/
/** Function Definitions                                                   */
/**************************************************************************/
void spatialWeighting(float** p_rowPointers, FileParams* p_fileParams, ScanParams* p_scanParams,
                      int tomo)
{
    int k;
    float modifiedWeight, appodizedWeight;
    //float* filter;
    //filter = malloc(sizeof(float)*p_scanParams->betaMax);
    for (int j = 0; j < p_scanParams->betaMax; j += p_scanParams->betaInc)
    {
        /* map the loop index onto the sinogram row for this projection angle */
        k = j - (p_scanParams->betaMax/2) + p_scanParams->betaOffset;
        if (k < 0) { k += p_fileParams->numRows; }
        //filter[j] = getAppodizedWeight(350, k, p_scanParams);
        for (int i = 0; i < p_fileParams->samplesPerRow; i++)
        {
            modifiedWeight = getModifiedWeight(i, p_scanParams);
            if (tomo == 1)
            {
                appodizedWeight = getAppodizedWeight(i, k, p_scanParams);
            }
            else
            {
                appodizedWeight = 1.0;
            }
            p_rowPointers[k][i] *= (modifiedWeight * appodizedWeight);
        }
    }
}

A.4
SpectralFilter.c
#include <stdlib.h>
#include <stdio.h>
#include "spatialweight.h"
#include "parameters.h"
#include "filters.h"
#include "fileio.h"

/**************************************************************************/
/** Function Declarations                                                  */
/**************************************************************************/
void spatialWeighting(float**, FileParams*, ScanParams*, int);

/**************************************************************************/
/** Function Definitions                                                   */
/**************************************************************************/
void spatialWeighting(float** p_rowPointers, FileParams* p_fileParams, ScanParams* p_scanParams,
                      int tomo)
{
    int k;
    float modifiedWeight, appodizedWeight;
    //float* filter;
    //filter = malloc(sizeof(float)*p_scanParams->betaMax);
    for (int j = 0; j < p_scanParams->betaMax; j += p_scanParams->betaInc)
    {
        k = j - (p_scanParams->betaMax/2) + p_scanParams->betaOffset;
        if (k < 0) { k += p_fileParams->numRows; }
        //filter[j] = getAppodizedWeight(350, k, p_scanParams);
        for (int i = 0; i < p_fileParams->samplesPerRow; i++)
        {
            modifiedWeight = getModifiedWeight(i, p_scanParams);
            if (tomo == 1)
            {
                appodizedWeight = getAppodizedWeight(i, k, p_scanParams);
            }
            else
            {
                appodizedWeight = 1.0;
            }
            p_rowPointers[k][i] *= (modifiedWeight * appodizedWeight);
        }
    }
    //writeArray("C:\\Users\\pdouyon\\Documents\\Simulation Files\\", "SpatialFilter", filter,
    //           sizeof(float), p_scanParams->betaMax);
}
A.5
Backprojection.c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include "backprojection.h"
#include "fourier.h"
#include "filters.h"

/**************************************************************************/
/** Function Declarations                                                  */
/**************************************************************************/
float** backprojection(float**, FileParams*, ScanParams*, ReconParams*, int);

/**************************************************************************/
/** Function Definitions                                                   */
/**************************************************************************/
/**
 * Takes a weighted projection array and backprojects into an array of a given dimension,
 * then returns a pointer to an array of pointers. Each pointer in that array points to a
 * row of the backprojection. Returns a NULL pointer on error.
 *
 * First calculates the x and y coordinates in centimeters corresponding to the x and y
 * coordinates of the backprojection array. Then for each x and y the function computes
 * the corresponding values of r and phi in polar coordinates. It then calculates the
 * angle, gammaPrime, of the ray passing through that point as well as the distance from
 * the source to that point. Next finds the two nearest indices to gammaPrime in the
 * weighted projection array and uses those values to compute the backprojection
 * contribution from the weighted projection at that point, interpolating if necessary.
 *
 * @param p_rowPointers - A pointer to the two-dimensional array of weighted projection
 *        data, which is an array of rowPointers
 * @param rconDimension - Dimensions of the backprojection array. Both x and y share this
 *        dimension, giving an array of rconDimension x rconDimension
 * @param numRows - Number of rows in the array pointed to by p_rowPointers. Because each
 *        row represents projection values for a given Beta, this number also represents
 *        the number of Beta values as well as the amount Beta increments by (times 2*PI)
 * @param samplesPerRow - Effective size of the weighted projection array. More values may
 *        have been introduced during convolution, but this is the number of values that
 *        represents known, physical data and the only values which will be considered
 * @param originDist - Distance in centimeters between the source and the origin
 * @param deltaGamma - Difference between successive values of gamma
 * @param gammaOneIndex - The offset value of the weighted projection array. Most likely
 *        the first value of the array will not be at gamma=0 but at gamma=gammaOne; this
 *        parameter accounts for that offset when extracting values from the weighted
 *        projection array.
 *
 * @author Pierre-Guy Douyon (pgdouyon@alum.mit.edu)
 */
float** backprojection(float** p_rowPointers, FileParams* p_fileParams, ScanParams* p_scanParams,
                       ReconParams* p_reconParams, int tomo)
{
    float** p_rconRowPointers;
    float xCoord, yCoord, r, phi, beta, sourceDistX, sourceDistY, gammaPrime, gammaPrimeIndex,
          coordDistSquared, gammaPrimeMinus;
    float slope, interpolatedValue, normalizeFactor, nslope, ninterpolatedValue;
    int betaIndex, gPrimeIndexPlus, gPrimeIndexMinus, numRays;

    //File Parameters
    int samplesPerRow = p_fileParams->samplesPerRow;
    int betaMax = p_scanParams->betaMax;
    //Scan Parameters
    float originDist = p_scanParams->originDist;
    float deltaGamma = p_scanParams->deltaGamma;
    float centerRay = p_scanParams->centerRay;
    //Recon Parameters
    int imageDim = p_reconParams->imageDim;
    int objectDim = p_reconParams->objectDim;

    p_rconRowPointers = malloc(sizeof(*p_rconRowPointers) * imageDim);
    if (!p_rconRowPointers)
    {
        printf("Not enough memory for reconstruction array! Terminating!");
        return p_rconRowPointers;
    }
    for (int y = 0; y < imageDim; y++)
    {
        p_rconRowPointers[y] = malloc(sizeof(**p_rconRowPointers) * imageDim);
        if (!p_rconRowPointers[y])
        {
            printf("Not enough memory for reconstruction array! Terminating!");
            return (p_rconRowPointers = 0);
        }
        for (int x = 0; x < imageDim; x++)
        {
            /// Coordinate transformation to change indices from [0...imageDim] to coordinates
            /// in space from [0.5*objectDim ... -0.5*objectDim] with an offset of
            /// objectDim/(2*imageDim)
            xCoord = (0.5 * objectDim) - ((1.0*objectDim * x) / imageDim) - objectDim/(2.0*imageDim);
            yCoord = (0.5 * objectDim) - ((1.0*objectDim * y) / imageDim) - objectDim/(2.0*imageDim);
            r = sqrtf(xCoord*xCoord + yCoord*yCoord);
            phi = atan2f(yCoord, xCoord);
            p_rconRowPointers[y][x] = 0.0;
            normalizeFactor = 0.0;
            numRays = 0;
            for (int j = 0; j < betaMax; j += p_scanParams->betaInc)
            {
                betaIndex = j - (betaMax/2) + p_scanParams->betaOffset;
                if (betaIndex < 0) { betaIndex += p_fileParams->numRows; }
                beta = betaIndex * p_scanParams->deltaBeta;
                /// These values are the distance from the point (x, y) to the source split
                /// up into two components
                sourceDistX = r*cosf(beta-phi);
                sourceDistY = originDist + r*sinf(beta-phi);
                gammaPrime = atan2f(sourceDistX, sourceDistY);
                gammaPrimeIndex = (gammaPrime / deltaGamma) + centerRay;
                coordDistSquared = (sourceDistX*sourceDistX) + (sourceDistY*sourceDistY);
                /// These are the indices containing gammaPrime, the index just above and
                /// just below the gammaPrime index
                gPrimeIndexMinus = (int) gammaPrimeIndex;
                gPrimeIndexPlus = gPrimeIndexMinus + 1;
                /// This is the angle of the index just below the gammaPrimeIndex
                gammaPrimeMinus = deltaGamma * (gPrimeIndexMinus - centerRay);
                if (gPrimeIndexMinus < (samplesPerRow-1) && gPrimeIndexPlus > 0)
                {
                    //use both indices and interpolate
                    slope = (p_rowPointers[betaIndex][gPrimeIndexPlus]
                             - p_rowPointers[betaIndex][gPrimeIndexMinus]) / deltaGamma;
                    interpolatedValue = slope*(gammaPrime - gammaPrimeMinus)
                                        + p_rowPointers[betaIndex][gPrimeIndexMinus];
                    nslope = (getAppodizedWeight(gPrimeIndexPlus, betaIndex, p_scanParams)
                              - getAppodizedWeight(gPrimeIndexMinus, betaIndex, p_scanParams)) / deltaGamma;
                    ninterpolatedValue = nslope*(gammaPrime - gammaPrimeMinus)
                                         + getAppodizedWeight(gPrimeIndexMinus, betaIndex, p_scanParams);
                    normalizeFactor += ninterpolatedValue;
                    numRays++;
                    p_rconRowPointers[y][x] += (interpolatedValue / coordDistSquared);
                }
            }
            //if (normalizeFactor - numRays > 0)
            //{
            //    printf("NormalizeFactor too large!");
            //    exit(1);
            //}
            if (tomo == 1)
            {
                /* Modified normalization of Section 7.3: deltaBeta*(2*PI/scanAngle) per (7.3)
                 * and numRays/normalizeFactor per (7.4). */
                p_rconRowPointers[y][x] *= (p_scanParams->deltaBeta*p_scanParams->betaInc)
                                           * (2.0*PI/p_scanParams->scanAngle)
                                           * (1.0*numRays/normalizeFactor);
            }
            else
            {
                p_rconRowPointers[y][x] *= p_scanParams->deltaBeta;
            }
            //p_rconRowPointers[y][x] *= (p_scanParams->deltaBeta*p_scanParams->betaInc)*(2.0*PI/p_scanParams->scanAngle);
            //p_rconRowPointers[y][x] *= p_scanParams->deltaBeta;
        }
    }
    return p_rconRowPointers;
}
Bibliography
[1] Bernhard E. H. Claus, Jeffrey W. Eberhard, Andrea Schmitz, Paul Carson, Mitchell Goodsitt, and Heang-Ping Chan. Generalized filtered back-projection reconstruction in breast tomosynthesis. Number 4046 in Lecture Notes in Computer Science. Springer Berlin Heidelberg, Manchester, June 2006.
[2] Guy M. Besson and Pei-hsun Ma. Partial scan weighting algorithms with applications to arbitrary pitch selection in multislice CT and cardiac CT sector reconstruction. Nuclear Science Symposium Conference Record, 2000 IEEE, 2:15/94–15/100, 2000.
[3] Jürgen Giersch, Daniel Niederlöhner, and Gisela Anton. The influence of energy weighting on X-ray imaging quality. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 531(1), 2004.
[4] Amir H. Goldan, Karim S. Karim, and John A. Rowlands. Photon counting pixels in CMOS technology for medical X-ray imaging applications. Canadian Conference on Electrical and Computer Engineering, pages 370–373, 2005.
[5] David G. Grant. Tomosynthesis: A three-dimensional radiographic imaging technique. IEEE Transactions on Biomedical Engineering, (1):20–28, January 1972.
[6] Thomas Mertelmeier, Jasmina Orman, and Wolfgang Haerer. Adaptation of image quality using various filter setups in the filtered backprojection approach for digital breast tomosynthesis. (4046), June 2006.
[7] Avinash C. Kak and Malcolm Slaney. Principles of Computerized Tomographic Imaging. IEEE Press, 1988.
[8] Andrew Smith. Design considerations in optimizing a breast tomosynthesis system. Technical report, Imaging Science - Hologic.
[9] Tao Wu, Richard H. Moore, Elizabeth A. Rafferty, and Daniel B. Kopans. A comparison of reconstruction algorithms for breast tomosynthesis. Medical Physics, 31(9):2636–2647, September 2004.