Objectives

To understand:
1. Sources of resolution loss
2. Point spread function, line spread function, edge response function
3. Convolutions and convolution theorem
4. Use of the modulation transfer function to describe the spatial
frequency dependent resolution of imaging systems.
5. Calculation of the modulation transfer function
6. Magnification radiography
7. Image sampling and its effects on resolution
The spatial resolution of a system is a measure of the system’s
ability to see details of various sizes. In this section we will describe
a method for quantitatively describing and measuring the spatial
resolution of a system.
Resolution can be limited by any of the elements in the imaging
chain or, as in conventional photography, by motion of the subject.
Let’s look at a few examples of effects which can limit resolution.
Figure 1 illustrates the effect of x-ray focal spot size on spatial
resolution.
Due to the finite size of the focal spot, the image of a single point in
the object can be no smaller than the image of the focal spot itself.
The focal spot image is called the focal spot point spread function.
As magnification, defined as (d1 + d2) /d1, is increased, focal spot
blurring increases.
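As a minimal sketch of this geometry, the blur width in the detector plane for a point object is a(m − 1), where a is the focal spot size; this formula appears later in the discussion of the focal spot MTF. The distances used below are hypothetical examples.

```python
# Sketch: geometric magnification and focal-spot blur width in the
# detector plane.  The blur of a point object is a*(m - 1), where a is
# the focal spot size and m = (d1 + d2)/d1 is the magnification.

def magnification(d1, d2):
    """Geometric magnification for source-object distance d1 and
    object-detector distance d2."""
    return (d1 + d2) / d1

def focal_spot_blur(a, m):
    """Width of the focal-spot point spread function in the detector
    plane for focal spot size a and magnification m."""
    return a * (m - 1)

m = magnification(d1=100.0, d2=50.0)   # hypothetical distances
print(m)                                # 1.5
print(focal_spot_blur(a=1.0, m=m))      # 0.5 mm blur for a 1 mm focal spot
```

As the code shows, blur vanishes at contact (m = 1) and grows linearly with magnification.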
Figure 2 illustrates the degradation of spatial resolution due to the
finite thickness of a radiographic intensifying screen.
[Figure 2: x-ray film and intensifying screen of thickness t]
When an x-ray is absorbed in the screen it produces visible light.
This light then diverges before reaching the film, producing a point
spread function due to the screen. This point spread function
increases with screen thickness leading to a tradeoff between screen
detection efficiency and spatial resolution.
Another source of blurring is patient motion, shown below. In
this case motion during the exposure leads to another point spread
function (Figure 3A) equivalent to that produced in the case of a
stationary object and a finite focal spot (Figure 3B).
[Figure 3: (A) an object moving with velocity v produces a blurred image at the detector; (B) a stationary object imaged with an equivalent focal spot produces the same blurred image]
A further example of resolution loss occurs when an image is
digitized and represented by a picture element (pixel) matrix. This
is illustrated below by images of a chest phantom represented by
pixel matrices of various sizes.
[Figure 4: chest phantom images at matrix sizes of 32 x 32, 64 x 64, 128 x 128, 256 x 256, 512 x 512, and 1024 x 1024]
Studies have shown that clinical chest films must be digitized at
4096 x 4096 to maintain diagnostic accuracy for the most
demanding tasks. However, most diagnostic tasks are done
acceptably well with 2048 x 2048 matrices. The smaller matrix is
more convenient in terms of data
storage and retrieval. There is also evidence that digital
enhancement of contrast can compensate to some extent for slight
losses in resolution.
In order to quantitatively describe the effects of various system
elements on the overall resolution of an imaging system, the concept
of the modulation transfer function has been developed. In this
description of the resolution properties of an imaging system,
components are characterized by their ability to produce images
of sinusoidally varying test objects of various spatial frequencies.
The concept of spatial frequency is illustrated in Figure 5, which
shows the variation in x-ray transmission through an object with
a sinusoidally varying transmission in the x-direction of the form

N(x) = N0 + ΔN sin(kx)   (1)

where k is the angular spatial frequency, related to the spatial
frequency fx by

k = 2π fx   (2)

The contrast associated with the sinusoidal waveform is given by

C = ΔN / N0   (3)
Note that many discussions of MTF use ω for the angular spatial
frequency. We will use k to be consistent with our later discussions
of magnetic resonance imaging, where k is used for spatial angular
frequency and ω is used for temporal angular frequency. Spatial
frequency fx is usually given in units of line pairs per mm, where
one line pair or one cycle refers to a bright and dark band in the
sinusoidal object transmission.
Suppose the waveform of Figure 5 is sent into an arbitrary imaging
system as shown in Figure 6.
[Figure 6: an input fluence Nin enters the imaging system, which produces the output Nout]
The effect of the imaging system will in general be to multiply the
signal variation by a complex system transfer function M(k), which
reduces contrast and in general produces a phase shift φ, giving

Nout(x) = N0 + MTF(k) ΔN sin(kx + φ)   (4)
The factor MTF(k) modifying the magnitude of the output signal
variation is called the modulation transfer function and is given by
the ratio of the output to input contrast at spatial frequency k:

MTF(k) = Cout(k) / Cin   (5)

We will refer to both fx and k as spatial frequency. The distinction
between linear and angular frequency should be clear from the
context of the discussion.
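The contrast-ratio definition of the MTF can be demonstrated numerically. This is a sketch, not a model of any particular device: a sinusoidal fluence is blurred by a hypothetical Gaussian line spread function, and the measured output/input contrast ratio is compared with the known MTF of a Gaussian.

```python
import numpy as np

# Sketch: measure the MTF at one spatial frequency as the ratio of
# output to input contrast for a sinusoidal fluence pattern.  The
# Gaussian "system" is a hypothetical stand-in for a real element.

x = np.linspace(0, 10, 4096)            # position in mm
fx = 1.0                                 # lp/mm
k = 2 * np.pi * fx                       # angular spatial frequency
n_in = 100.0 + 20.0 * np.sin(k * x)      # input fluence, contrast 0.2

sigma = 0.15                             # assumed Gaussian LSF width (mm)
dx = x[1] - x[0]
u = np.arange(-5 * sigma, 5 * sigma, dx)
lsf = np.exp(-u**2 / (2 * sigma**2))
lsf /= lsf.sum()                         # unit-area kernel preserves the mean
n_out = np.convolve(n_in, lsf, mode='same')

def contrast(n):
    return (n.max() - n.min()) / (n.max() + n.min())

core = slice(500, -500)                  # avoid convolution edge effects
mtf_measured = contrast(n_out[core]) / contrast(n_in[core])
mtf_analytic = np.exp(-0.5 * (k * sigma)**2)   # FT of a Gaussian LSF
print(round(mtf_measured, 3), round(mtf_analytic, 3))
```

The measured ratio agrees with the analytic transform of the Gaussian, illustrating that contrast reduction at each frequency is exactly what MTF(k) encodes.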
Physical objects can be represented as a weighted sum of spatial
frequency components. Because M(k) varies with k, the imaging
system will generally generate a distorted version of the actual
x-ray transmission, typically failing to represent the higher spatial
frequency content of the transmission.
Fourier Series Representation of the Transmitted Image
Let’s consider the case of a one dimensional transmitted fluence
N(x) representing a slit of width L as shown in Figure 7a. The
image profile is shown in Figure 7b.
[Figure 7: (a) a slit imaged onto a detector; (b) the image profile N(x), equal to unity for -L/2 < x < L/2]
Since the transmitted image is symmetric about x = 0, it can be
represented as a cosine series (in general sines and cosines are
required) of the form

N(x) = a0/2 + Σn an cos(kn x),  n = 1, 2, 3, …   (6)

where, taking the repeat interval to be (-L, L),

kn = nπ/L   (7)

and, for any n,

an = (1/L) ∫[-L,L] N(x) cos(kn x) dx   (8)

To obtain equation 8, multiply both sides of equation 6 by cos(kn' x)
and integrate from -L to L:

∫[-L,L] N(x) cos(kn' x) dx = (a0/2) ∫[-L,L] cos(kn' x) dx + Σn an ∫[-L,L] cos(kn x) cos(kn' x) dx

By orthogonality, the cosine-product integrals on the right side equal
L when kn = kn' and zero otherwise, which leads to equation 8.
Assuming that N(x) has a value of unity in some appropriate unit
and doing the integrals, we obtain

a0 = 1   (9)

and

an = sin(kn L/2) / (kn L/2)   (10)

giving

N(x) = 1/2 + Σn [sin(kn L/2) / (kn L/2)] cos(kn x)

The graph of an, which will be compared later to the Fourier
transform of the transmitted image, is shown in Figure 8.
Note that the points go through zero at k = 2π/L, or fx = 1/L (see
equations 2 and 7).
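The coefficients can be checked numerically. This sketch takes the repeat interval to be (-L, L), so that kn = nπ/L, and compares direct integration against the closed form sin(kn L/2)/(kn L/2); the grid size is an arbitrary choice.

```python
import numpy as np

# Sketch: cosine-series coefficients for a unit slit of width L on the
# repeat interval (-L, L), where k_n = n*pi/L.  Coefficients computed
# by numerical integration are compared with the closed form
# sin(k_n L/2) / (k_n L/2).

L = 1.0
x = np.linspace(-L, L, 200001)
dx = x[1] - x[0]
N = np.where(np.abs(x) <= L / 2, 1.0, 0.0)   # unit slit of width L

for n in range(1, 7):
    kn = n * np.pi / L
    an_numeric = np.sum(N * np.cos(kn * x)) * dx / L
    an_closed = np.sin(kn * L / 2) / (kn * L / 2)
    print(n, round(an_numeric, 4), round(an_closed, 4))
# The coefficients vanish at even n, i.e. at k = 2*pi/L (fx = 1/L),
# 4*pi/L, and so on.
```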
See Bracewell-Chapter 2
Usually, it is more convenient to express the image as an integral
over the various spatial frequency components required to represent
the image. Any distribution in x can be written as a sum of integrals
of the form

N(x) = (1/2π) ∫[0,∞] [Ñ+(k) e^(ikx) + Ñ-(k) e^(-ikx)] dk   (11)

Since

e^(ikx) = cos(kx) + i sin(kx)

and

e^(-ikx) = cos(kx) - i sin(kx),  so that  cos(kx) = [e^(ikx) + e^(-ikx)]/2   (12)
This is just a sum over spatial frequencies similar to the discrete
example above, but now with a continuous distribution of
frequency components. The quantities Ñ+ and Ñ- are in general
complex numbers and are basically the weighting coefficients for
the various frequencies analogous to the discrete points in Figure
8.
For convenience, equation 11 is usually written in compact form as

N(x) = (1/2π) ∫[-∞,∞] Ñ(k) e^(ikx) dk   (13)

The weighting function for the various sinusoidal frequencies, Ñ(k),
is called the Fourier transform of N(x), or FT(N(x)).
It should be realized that the appearance of negative spatial
frequencies in equation 13 is a reflection of the fact that there
must, in general, be terms of the form e-ikx. These terms are of
course associated with positive, physically realizable spatial
frequencies. The artificial concept of negative spatial frequencies
just arises in association with writing the integral in compact form.
The Fourier transform, or expansion coefficient, of the image is
related to the image through the Fourier transform relationship

Ñ(k) = ∫[-∞,∞] N(x) e^(-ikx) dx   (14)
Equation (13) expressing N(x) in terms of Ñ(k) is called the inverse
Fourier transform. N(x) and Ñ(k) are called a Fourier transform pair.
It is interesting to mention at this point that in magnetic resonance
imaging, data is obtained in the form of Ñ(k) in "k-space". The
image is then obtained through a two-dimensional transform
equation analogous to equation 13.
Following a brief introduction to the Dirac delta function, which
will be presented below, the path from equation 13 to 14 may
become more clear. However, at this point let's continue our
discussion of the single slit experiment of Figure 7.
The Fourier transform of the detected image is given by equation 14 as

Ñ(k) = ∫[-L/2,L/2] e^(-ikx) dx = L sinc(kL/2)   (15)

where we have used the definition of the sinc function, namely

sinc(u) = sin(u)/u   (16)
This sinc function is shown in Figure 9 and, aside from the
normalization, has the same shape as the expansion coefficient
distribution shown in Figure 8. Note that in the integral
representation the weighting of the complex exponentials is
symmetric about fx = 0, which is necessary to reconstruct the cosine
behavior corresponding to the expansion of equation 6. You can
think of a point at each positive frequency having an equally
weighted point at negative frequency in order to construct a cosine
function in accordance with the last of equations 12.
Note that the weighting function (Fourier transform) goes to zero at
fx = 1/L and 2/L as in the discrete case.
The frequency space representation of images will be helpful in the
understanding of spatial resolution as described by the modulation
transfer function.
A physical realization of this representation is to note that one
could, in the laboratory, construct any one dimensional image on
film by making x-ray exposures through a series of carefully
positioned sinusoidal bar patterns of appropriately selected
transmissions and spatial frequencies.
In fact the idea can be generalized to two dimensions by using bar
patterns rotated by ninety degrees. In this way it would be possible,
in principle, to make an image of the Mona Lisa on film, at least a
black and white version.
[Portrait: Jean Baptiste Joseph Fourier, 1768-1830]
[Figures: the Mona Lisa in k-space (axes kx, ky); a low-frequency Mona and a high-frequency Mona reconstructed from parts of k-space; Spock in k-space]
The frequency content of a given image is related to its size and
shape. In general, for example, the Fourier transform of a broad slit
will contain mostly low spatial frequencies, while a narrow slit will
require much higher frequencies.
This comparison is shown in Figure 10 for slits of width L and L/4.
The curve for the slit of width L corresponds to the sinc function of
Figure 9. The first zero of the Fourier transform of the slit of width
L/4 occurs at fx = 1/(L/4) = 4/L.
Figure 10
Note regarding areas of integrals in x-space and k-space: since

Ñ(0) = ∫[-∞,∞] N(x) dx

the signal at the origin of k-space is equal to the integral over all of
the image-space signal. It can also be seen that

N(0) = (1/2π) ∫[-∞,∞] Ñ(k) dk

i.e. the value at the center of x-space is the integral of the k-space
signal.
In Figure 10 it is assumed that the intensity has been kept the same
as the slit width has been decreased. The Fourier transform goes
from L sinc(kL/2) to (L/4) sinc(kL/8). Since N(0) is the same in
each case (fixed intensity through the slit), the integrals in k-space
are the same.
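Both area relations can be checked for the unit slit of width L, whose transform is L sinc(kL/2). This is a numerical sketch; the truncation of the k integral at a finite range is an arbitrary choice, so the second result only approaches N(0) = 1.

```python
import numpy as np

# Sketch: verify the two "area" relations for the slit example.
# FT value at k = 0 equals the integral of N(x) over x, and N(0)
# equals (1/2*pi) times the integral of the transform over k.

L = 1.0
x = np.linspace(-4 * L, 4 * L, 80001)
dx = x[1] - x[0]
N = np.where(np.abs(x) <= L / 2, 1.0, 0.0)

k = np.linspace(-400.0, 400.0, 400001)
dk = k[1] - k[0]
Nk = L * np.sinc(k * L / (2 * np.pi))    # np.sinc(t) = sin(pi t)/(pi t)

area_x = N.sum() * dx                     # integral of N(x): slit area = L
nk_integral = Nk.sum() * dk / (2 * np.pi) # approaches N(0) = 1
print(round(area_x, 3), round(nk_integral, 3))
```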
One further example will serve to introduce a mathematical
function which will be useful in several subsequent sections.
Figure 11 shows the Dirac delta function and the magnitude of its
Fourier transform.
[Figure 11: the delta function δ(x - a) at x = a, and the magnitude of its Fourier transform, which is constant for all k]
The delta function, located for example at x = a, is an infinitely narrow,
infinitely intense signal distribution defined by the following properties:

δ(x - a) = 0 for x ≠ a,   ∫[-∞,∞] f(x) δ(x - a) dx = f(a)   (17)
The integral property of the delta function, whereby it selects out
the value of the integrand at the point where the argument of the
delta function is zero, is very convenient in a number of
applications. For example, in calculating the Fourier transform of
δ(x - a) we obtain

FT(δ(x - a)) = ∫[-∞,∞] δ(x - a) e^(-ikx) dx = e^(-ika)   (18)

For points away from x = 0, the complex Fourier transform is
modulated by the phase factor e^(-ika), which shuffles signal between
the real and imaginary parts. However, the magnitude of the
Fourier transform is constant at all frequencies.
The MTF of each imaging element multiplies the expansion
coefficient (Fourier transform) of the image entering that element
at each spatial frequency. For example, consider the fluence N(x)
incident on an imaging system element which records or further
transmits a modified representation of the fluence, NS(x), as shown
in Figure 12.
If we represent the input fluence by equation 13,

N(x) = (1/2π) ∫[-∞,∞] Ñ(k) e^(ikx) dk

then the system element will represent the image as

NS(x) = (1/2π) ∫[-∞,∞] MS(k) Ñ(k) e^(ikx) dk   (19)

The system has degraded the information at each spatial frequency
by the value of the system transfer function at that frequency.
The MTF of a given system element can be measured if the Fourier
transform of the signal incident on that element is known. For
example, by using a thin slit which simulates a delta function input
distribution, the MTF of a detector can be found.
The geometry for this experiment is shown in Figure 13.
Figure 13
Because of the narrow slit and near unit magnification, the rays
from the finite focal spot do not diverge before hitting the detector,
and the input intensity distribution may be considered to be a delta
function at x = 0, δ(x). The signal recorded by the detector, ND(x),
will be given by

ND(x) = (1/2π) ∫[-∞,∞] MD(k) FT(δ(x)) e^(ikx) dk   (20)

where MD(k) is the system transfer function associated with the
detector. Since, from equation 18, for a delta function at x = 0,

FT(δ(x)) = 1

we can write

ND(x) = (1/2π) ∫[-∞,∞] MD(k) e^(ikx) dk   (21)

Since the right side is in the form of the inverse Fourier transform
of MD(k) (equation 13), we can solve for MD(k) by taking the
forward transform (equation 14), giving

MD(k) = ∫[-∞,∞] ND(x) e^(-ikx) dx   (22)
In general, the response of a system element, in this case ND(x), to a
line image input is called the Line Spread Function (LSF) for that
element. A general formula for a normalized system transfer
function may be obtained by normalizing to the zero-frequency
value:

M(k) = ∫[-∞,∞] LSF(x) e^(-ikx) dx / ∫[-∞,∞] LSF(x) dx   (23)

The MTF is usually defined as the magnitude of this transfer
function as a function of positive spatial frequencies. This is a
general recipe for finding the MTF of a system element, provided
that element's LSF is known.
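The recipe of equation 23 can be sketched numerically. The Gaussian LSF used here is a hypothetical detector response chosen because its transform is known exactly, so the normalized transform can be checked against exp(-(kσ)²/2).

```python
import numpy as np

# Sketch of equation 23: the transfer function is the Fourier
# transform of the LSF normalized to its zero-frequency value, and
# the MTF is its magnitude.  A Gaussian LSF is an assumed example.

sigma = 0.1                               # mm, assumed LSF width
x = np.linspace(-2.0, 2.0, 8001)
dx = x[1] - x[0]
lsf = np.exp(-x**2 / (2 * sigma**2))

def mtf(k):
    num = (lsf * np.exp(-1j * k * x)).sum() * dx   # FT of the LSF
    den = lsf.sum() * dx                            # zero-frequency value
    return abs(num / den)

for fx in (0.0, 1.0, 2.0, 4.0):           # lp/mm
    k = 2 * np.pi * fx
    print(fx, round(mtf(k), 3))            # analytic: exp(-(k*sigma)**2 / 2)
```

Note that the normalization guarantees MTF(0) = 1 regardless of the LSF shape or amplitude.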
[Figure: three images, I, II, and III, made with different MTF curves. Pick the one you think has the best MTF.]
Students usually select the image with the highest
MTF values at low spatial frequencies.
Although the curve with a higher spatial frequency
cutoff displays objects of finer detail, the contrast at
low spatial frequencies is inferior and presents a less
appealing image.
Which transfer function is more suitable depends on
whether the imaging task is to find large or small
objects.
As a further example, we will now consider the measurement of
the MTF associated with the focal spot. In this case the LSF of the
focal spot is found by imaging a narrow slit as before but this time
with arbitrary magnification m= (d1+d2)/d1 as shown below.
If we use non-screen film as a detector, we may assume that the
MTF of the film is unity for all spatial frequencies where the focal
spot MTF is non-zero. The LSF is then a magnified version of the
focal spot intensity distribution F(y),

LSF(x) = F(x / (m - 1))   (24)

Let us calculate the focal spot MTF assuming that the focal spot
distribution is a rectangle function defined by

F(y) = 1 for |y| < a/2, 0 otherwise   (25)

Then

LSF(x) = 1 for |x| < a(m - 1)/2, 0 otherwise   (26)

Using equation 23 we can calculate the focal spot transfer function
Mf as

Mf(k) = sinc(k a(m - 1)/2)   (27)

The MTF associated with this function is shown in Figure 15. The
MTF has zeros at detector plane frequencies of fx = n / [a(m - 1)].
For a 1 mm focal spot, Table 1 shows the spatial frequency at the
first zero of the MTF for various magnifications. Clearly, the effect
of focal spot blurring increases quickly with magnification.
Table 1

Magnification    Spatial frequency at first MTF zero
1.1              10 lp/mm
1.5              2 lp/mm
2.0              1 lp/mm
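The tabulated values follow from the first-zero condition fx = 1/[a(m − 1)] with a = 1 mm, and can be reproduced directly:

```python
# Sketch: detector-plane spatial frequency of the first MTF zero,
# fx = 1/[a(m - 1)], for a 1 mm focal spot, reproducing Table 1.

a = 1.0                                   # focal spot size in mm

def first_zero(m, a=a):
    """First zero of the focal spot MTF in the detector plane (lp/mm)."""
    return 1.0 / (a * (m - 1))

for m in (1.1, 1.5, 2.0):
    print(m, round(first_zero(m), 1), "lp/mm")
# -> 1.1: 10.0, 1.5: 2.0, 2.0: 1.0 lp/mm
```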
An extension of this example allows us to calculate the MTF
associated with uniform motion. Figure 16 shows the equivalent
focal spot as seen from the moving object.
[Figure 16: a focal spot, a moving object at distances d1 and d2, and the detector; motion vΔt during the exposure acts like an equivalent focal spot as seen from the object]
In this case the focal spot LSF has a width of mvΔt, and the
equivalent focal spot width am is given by

am (m - 1) = m v Δt

or

am = m v Δt / (m - 1)

If we go back to equation 27,

Mf(k) = sinc(k a(m - 1)/2)   (Eq. 27)

we obtain the transfer function by substituting the equivalent
focal spot width am for a:

M(k) = sinc(k m v Δt / 2)

As v increases, the first zero of the transfer function moves to
lower and lower spatial frequencies.
We have already seen that due to various factors such as finite focal
spot size, motion or limitations in the detector, the image of a line
object such as a slit will instead be recorded as a line spread
function. Consider a one dimensional imaging system which is
otherwise perfect except for the detector as shown in Figure 17.
[Figure 17: an input fluence N(x') passes through a detector, producing the recorded signal NR(x)]
The correct signal N(x’) at each point in x’ will be spread to
remote points x in the detector in accordance with the line spread
function LSF(x-x’) which indicates how much of the signal aimed
at x’ will show up at x as the recorded signal NR(x). This is shown
for a delta function N(x’) distribution in Figure 18.
[Figure 18: a delta function input at x' is recorded as NR(x) = LSF(x - x'), spread about x']
In the case of a more general image distribution N(x'), the
recorded signal will be given by the sum of all of the signal
contributions spread from all points within the image:

NR(x) = ∫ N(x') LSF(x - x') dx'   (28)

The integral in equation 28 is called the CONVOLUTION of
N(x) and LSF(x) and is indicated by the ⊗ sign.
Such a representation of the blurred image assumes that the
system is STATIONARY meaning that the LSF is the same at all
points and that the system is LINEAR meaning that
contributions from all coordinates sum at a distant coordinate
linearly.
For a two dimensional system, equation 28 can be generalized in
terms of a POINT SPREAD FUNCTION PSF(x-x’, y-y’).
See Bracewell - Chapter 3
The convolution C(x) of two functions A(x) and B(x) is given by the
integral

C(x) = ∫ A(x') B(x - x') dx'   (29)
where the limits in general go to + or - infinity but more often are
determined by the range of the functions involved. Some of the
treatments of convolution are confusing in terms of visualizing what
is going on. Perhaps the easiest way is to pretend that one of the
functions is made up of a continuous distribution of delta functions.
Let's try to illustrate this.
Suppose we want to convolve the two functions in Figure 19.
[Figure 19: A(x), a delta function located at x = X, and an arbitrary function B(x)]
Substituting into equation 29, the convolution is given by

C(x) = ∫ δ(x' - X) B(x - x') dx'   (30)

The delta function picks out the value of B at the point x' = X,
giving

C(x) = B(x - X)   (31)

That is, a shifted version of B(x) with its new origin at the location
of the delta function, as shown in Figure 20.
[Figure 20: C(x) = B(x - X), a shifted copy of B(x) centered at the delta function location]
Now let us consider a quantity analogous to the line spread
function, namely the edge response function, ERF.
The edge response function is the convolution of the edge with the
system line spread function which describes how the signal from
each of the continuous set of delta functions making up the edge
distributes its signal to distant locations.
We will see that the edge response function is related to the line
spread function and provides an alternate means of measuring
system MTF.
Suppose we convolve the image of a sharp edge with the line
spread function of the imaging element. The functions to be
convolved are shown in Figure 20.
Figure 20
We have represented the edge as a continuum of delta functions,
in this case all equally weighted. The convolution can then be
represented as a set of displaced line spread functions centered at
each of the delta functions making up the edge function.
A few of these are shown in Figure 21. It can be seen that addition of
the contributions of all of these displaced line spread functions will
produce an edge response function which is gradually rolled off at the
edge and which comes to a constant value at large distances from the
edge.
Figure 21
This way of thinking about the convolution process works for more
general functions. For example you should convince yourself that
the convolution of two rectangle functions produces a triangle
function.
Once again, represent one of the rectangle functions as a continuum
of delta functions and add up the displaced versions of the other
rectangle function.
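The rectangle-convolved-with-rectangle exercise is easy to check discretely. This sketch uses equal-width rectangles of n samples each; the result rises linearly to a peak and falls linearly back, a triangle of base twice the rectangle width.

```python
import numpy as np

# Sketch: discrete check that the convolution of two identical
# rectangle functions is a triangle function.

n = 200                                    # samples across each rectangle
rect = np.ones(n)
tri = np.convolve(rect, rect)              # length 2n - 1

print(tri[0], tri[n - 1], tri[-1])         # 1.0 at the ends, n at the peak
```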
The edge response function provides a more convenient way of
measuring MTF than the line spread function, simply because it is
easier to make an edge than a slit.
The mathematical relationship between the two is illustrated
below. The ERF is the convolution of the edge E(x) with the LSF.
Assuming E(x) = 0 for x < 0 and 1 otherwise,

ERF(x) = ∫ E(x') LSF(x - x') dx' = ∫[0,∞] LSF(x - x') dx'   (32)

If we make the substitution u = x - x' and du = -dx', we get

ERF(x) = ∫[-∞,x] LSF(u) du   (33)
To understand the meaning of this relationship it is helpful to draw
the integration over the LSF as shown in Figure 22.
[Figure 22: the ERF at x is the area under the LSF(u) curve up to u = x]
The incremental increase in the ERF integral is given by

d(ERF(x)) = LSF(x) dx   (34)
In other words, by the fundamental theorem of calculus, we can
obtain the line spread function as the derivative of the edge response
function.
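This derivative relationship can be sketched numerically. A Gaussian LSF is assumed here for illustration: the ERF is built as the cumulative integral of equation 33 and then differentiated to recover the LSF.

```python
import numpy as np

# Sketch: recover the LSF as the derivative of the edge response
# function, using an assumed Gaussian LSF.

sigma = 0.2                                # mm, assumed LSF width
x = np.linspace(-2.0, 2.0, 4001)
dx = x[1] - x[0]
lsf = np.exp(-x**2 / (2 * sigma**2))
lsf /= lsf.sum() * dx                      # normalize to unit area

erf = np.cumsum(lsf) * dx                  # ERF(x): integral of LSF up to x
lsf_recovered = np.gradient(erf, dx)       # fundamental theorem of calculus

print(round(erf[-1], 3))                   # total area of the LSF -> 1.0
print(round(abs(lsf_recovered - lsf).max(), 3))
```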
The recipe of equation 23,

M(k) = ∫[-∞,∞] LSF(x) e^(-ikx) dx / ∫[-∞,∞] LSF(x) dx   (23)

can then be used to calculate the system transfer function and
MTF. The edge response method is commonly used to evaluate
radiographic detector resolution.
Suppose that NR(x) is the convolution of N(x) and LSF(x), i.e.,

NR(x) = N(x) ⊗ LSF(x)   (35)

Then, in the Fourier transform convention we have chosen
(equations 13 and 14), the k-space transforms are related by (see
Figure 28 in the Math Appendix)

ÑR(k) = Ñ(k) · FT(LSF(x))   (36)

This can be proved by writing out the explicit integral forms of
these relationships, as shown in the appendix to this section.
This result can be extended to a series of imaging elements, each
of which degrades the system resolution by imposing an
additional convolution of its input image with its LSF.
For example, in a chain with two imaging elements and an input
image N(x), we would have

NR(x) = N(x) ⊗ LSF1(x) ⊗ LSF2(x)

Applying the convolution theorem in two steps we obtain

ÑR(k) = Ñ(k) · FT(LSF1(x)) · FT(LSF2(x))

In other words, recalling that the MTF is basically the FT of the
LSF normalized to 1 at zero spatial frequency, the transform of the
eventually recorded image is the original image transform
multiplied by the MTFs of all of the serial imaging elements,
leading to an overall system MTFS for a series of N imaging
elements of the form

MTFS(k) = Π(i = 1 to N) MTFi(k)   (37)
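The multiplicative cascade rule can be sketched with two hypothetical Gaussian elements, for which the product of MTFs has a known closed form (one Gaussian whose variance is the sum of the individual variances).

```python
import numpy as np

# Sketch of equation 37: the overall MTF of a chain of imaging
# elements is the product of the individual element MTFs.  Two
# hypothetical Gaussian elements are cascaded.

fx = np.linspace(0.0, 5.0, 501)           # lp/mm
k = 2 * np.pi * fx
sigma1, sigma2 = 0.10, 0.15                # mm, assumed element LSF widths

mtf1 = np.exp(-0.5 * (k * sigma1)**2)      # MTF of a Gaussian LSF
mtf2 = np.exp(-0.5 * (k * sigma2)**2)
mtf_system = mtf1 * mtf2

# Cascading Gaussians is equivalent to a single Gaussian LSF with
# sigma^2 = sigma1^2 + sigma2^2:
sigma_eq = np.hypot(sigma1, sigma2)
print(np.allclose(mtf_system, np.exp(-0.5 * (k * sigma_eq)**2)))   # True
```

Because every factor is at most 1, the chain is never better than its worst element at any frequency.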
An illustration of equation 37 occurs in the case of magnification
radiography.
When radiographic magnification is used, the overall system MTF
is the product of the focal spot MTF and the MTF of the detector.
The geometry is shown in Figure 23.
From equation 27, the MTF of the focal spot is given by

Mfs(fx) = sinc(π fx a(m - 1))

where fx is the spatial frequency in the detector plane. The
relationship between the detector frequency fx and the patient
plane frequency fpatient is

fpatient = m fx   (38)

If we model the detector line spread function as a rectangle
function of width d, the detector MTF is given by

MDet(fx) = sinc(π fx d)

The product MTF, Mfs·MDet, is plotted in Figure 24 for
magnifications of 1.0, 1.33, and 1.6 for the case of a = 0.75 mm
and d = 0.25 mm.
At small magnification, the detector resolution is the dominant
degrading factor. When a(m - 1)/m = d/m, which in this case
occurs at m = 1.33, the line spread functions of the focal spot and
detector referred to the patient plane are matched and the
resolution is optimal. At this magnification, the component MTF
values each go through zero at a spatial frequency of 5.3 lp/mm in
the patient plane. At larger magnifications the degradation due to
the focal spot overcomes any further gain due to magnification of
the image relative to the detector resolution element.
Optimal magnification obviously will vary with the actual focal spot
size and detector resolution. In general, smaller focal spots permit
greater magnification. Better detectors require less magnification.
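The optimum can be found numerically by scanning magnification and tracking where the first zero of the product MTF falls in the patient plane, using the a = 0.75 mm and d = 0.25 mm values of this example. The scan grid is an arbitrary choice.

```python
import numpy as np

# Sketch: patient-plane frequency of the first system MTF zero versus
# magnification, for a 0.75 mm focal spot and a 0.25 mm detector
# element.  The first zero of the product MTF is set by whichever
# sinc factor cuts off first.

a, d = 0.75, 0.25                          # mm

def first_zero_patient(m):
    focal = np.inf if m == 1.0 else 1.0 / (a * (m - 1))   # detector plane
    det = 1.0 / d
    return m * min(focal, det)             # refer to the patient plane

ms = np.arange(1.0, 2.01, 0.01)
zeros = [first_zero_patient(m) for m in ms]
best = ms[int(np.argmax(zeros))]
print(round(best, 2), round(max(zeros), 1))   # optimum near m = 1 + d/a
```

The scan peaks near m = 1.33 at about 5.3 lp/mm, matching the matched-blur condition a(m − 1) = d.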
See Bracewell - Chapter 10, Hasegawa - Chapter 6
We have seen in a previous chapter how digitization of the image
has noise consequences. Sampling an image for the purpose of
eventually representing it as a pixel matrix can also affect the
fidelity of the representation of the image if certain criteria are not
met.
For example, if the sample spacing is not sufficiently small, high
spatial frequencies can be interpreted as lower spatial frequencies.
This phenomenon, illustrated in Figure 25, is called aliasing. The
sample points, which in this case are too far apart to adequately
characterize the spatial frequency shown, represent it instead as a
low spatial frequency.
We will describe the basic Sampling Theorem, which states that if
a function (e.g. a one dimensional image) to be sampled has a
maximum frequency (is band limited), then there is a maximum
sampling distance which will faithfully represent the function.
Figure 25
The maximum frequency is called the Nyquist frequency and is
given by

fN = 1 / (2Δx)   (39)

where Δx is the distance between samples, the eventual pixel
size.
Figure 26 illustrates the basic considerations involved in
sampling.
Figure 26 A shows the image to be sampled. B and C indicate
sampling at two different spacings, Δx1 and Δx2.
The sampling is accomplished by a sequence of delta functions
collectively called the shah function, represented by III(x).
The Fourier transform of a shah function III(x/Δx) (sometimes
called a comb function) with spacing Δx is another shah function,
Δx III(fx Δx), with spacing in fx space of 1/Δx.
The Fourier transforms of B and C are shown in I and J.
Figure 26
Sampling A with B or C is just a multiplication by the shah
function resulting in the sampled images shown in D and E.
The Fourier transform of the original image shown in H has been
band-limited to the Nyquist frequency, usually by means of an
analog or digital filter.
According to the convolution theorem, the frequency space
representation of this image is given by the convolution of the
Fourier transform of the image (H) with the Fourier transforms of
the shah functions I and J, resulting in the frequency space
representations of the image shown in F and G.
In the case of F, the sampling spacing Δx1 has been chosen such
that there are two samples per cycle at the Nyquist frequency.
In this case the replicated Fourier transforms of the original image
do not overlap and there is no image distortion.
In G the sampling has been too sparse, resulting in overlap of the
replicated frequency space images. In this way, large negative
spatial frequencies from the first replicated distribution are aliased
back into the positive spatial frequencies of the primary image
leading to distortions and artifacts.
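Aliasing can be demonstrated in a few lines. In this sketch the signal frequency and sample spacing are arbitrary choices, with the input placed above the Nyquist frequency so that a sinusoid at fN + Δf reappears at fN − Δf in the sampled spectrum.

```python
import numpy as np

# Sketch: a sinusoid above the Nyquist frequency is aliased to a
# lower frequency.  With sample spacing dx the Nyquist frequency is
# fN = 1/(2*dx); a signal at fN + df shows up at fN - df.

dx = 0.5                                   # mm sample spacing
fN = 1.0 / (2 * dx)                        # 1 lp/mm
f_true = 1.375                             # lp/mm, above Nyquist
f_alias = 2 * fN - f_true                  # expected alias: 0.625 lp/mm

n = 64
x = np.arange(n) * dx
samples = np.cos(2 * np.pi * f_true * x)

spectrum = np.abs(np.fft.rfft(samples))
freqs = np.fft.rfftfreq(n, d=dx)
print(freqs[int(np.argmax(spectrum))])     # peak at 0.625, not 1.375
```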
One of the most striking manifestations of aliasing of this sort
occurs in magnetic resonance imaging where there is a direct
relationship between spatial position and temporal frequency.
Inability to band limit in one of the image directions leads to the
appearance of objects at temporal frequencies above the Nyquist
frequency, wrapping from the right side of the image to a lower
frequency at the left side of the image. This leads to the often
seen “nose in the back of the head” artifact.
When the properly sampled image represented by Figure 26 D
and F is used to form a pixel representation of the image, Figure
26 D is basically convolved with a rectangle function Rect(x/Δx1),
sometimes represented by II(x/Δx1), which is a rectangle of width
Δx1. The digital image is then given by

Ndigital(x) = [N(x) III(x/Δx1)] ⊗ Rect(x/Δx1)   (40)

This is shown in Figure 27A. The Fourier transform of this, by the
convolution theorem, is given by

Ñdigital(fx) = [Ñ(fx) ⊗ Δx1 III(fx Δx1)] · Δx1 sinc(π fx Δx1)   (41)
and is shown in Figure 27B.
Note that the replicated frequency spectrum shown in Figure 26F
is now modulated by the pixel frequency spectrum. Note that the
pixel frequency spectrum has its first zero at twice the Nyquist
frequency and therefore contributes only minor degradation
within the frequency range up to the Nyquist frequency.
Integral Form of the delta function
The delta function may be represented in the form of an integral.
This representation has the properties stated above, but is a
convenient form to recognize whenever it occurs in the course of
manipulating integrals. The integral form of the delta function is
given by

δ(x - x') = (1/2π) ∫[-∞,∞] e^(ik(x - x')) dk   (42)
This form is useful in showing the relationship between a
function and its Fourier transform as expressed by equations 13
and 14. From equation 13, for example, we have

N(x) = (1/2π) ∫[-∞,∞] Ñ(k) e^(ikx) dk

To solve for the Fourier transform we can multiply by e^(-ik'x) and
integrate over x, giving

∫ N(x) e^(-ik'x) dx = (1/2π) ∫∫ Ñ(k) e^(i(k - k')x) dk dx

Reversing the order of integration, this equals

∫ Ñ(k) [(1/2π) ∫ e^(i(k - k')x) dx] dk = ∫ Ñ(k) δ(k - k') dk = Ñ(k')

since the term in brackets is just δ(k - k'). Therefore, we obtain
equation 14,

Ñ(k) = ∫[-∞,∞] N(x) e^(-ikx) dx
Suppose that NR(x) is the convolution of N(x) and LSF(x), i.e.,

NR(x) = ∫ N(x') LSF(x - x') dx'

Taking the Fourier transform, we obtain

ÑR(k) = ∫∫ N(x') LSF(x - x') e^(-ikx) dx dx'

Performing the x integral first and letting u = x - x', du = dx, we get

ÑR(k) = ∫ N(x') e^(-ikx') dx' ∫ LSF(u) e^(-iku) du = Ñ(k) · FT(LSF(x))
This convolution theorem is illustrated in Figure 28 for the case
of the two rectangle functions by an example taken from "The
Fast Fourier Transform" by E. Oran Brigham, Prentice-Hall, Inc.
In this example the Fourier transform of each of the rectangle
functions is a sinc function. The convolution of the two
rectangles is a triangle function as stated above. When the two
sinc functions are multiplied in frequency space they form a
sinc^2 function, which is the Fourier transform of the triangle, as
stated by the convolution theorem.
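The discrete analog of this rectangle example can be checked directly with the FFT: transforming two rectangle sequences, multiplying, and inverse transforming reproduces their (circular) convolution, which equals the triangle from the direct convolution as long as the support fits in the window. The sequence lengths are arbitrary choices.

```python
import numpy as np

# Sketch of the convolution theorem: the inverse DFT of the product
# of the DFTs of two rectangle sequences equals their convolution
# (the discrete analog of sinc * sinc = sinc^2 for the triangle).

n = 256
rect = np.zeros(n)
rect[:16] = 1.0

tri = np.real(np.fft.ifft(np.fft.fft(rect) * np.fft.fft(rect)))   # circular
direct = np.convolve(rect, rect)[:n]       # linear; no wrap-around here

print(np.allclose(tri, direct))            # True: support fits in n samples
```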
Tables of Fourier transform pairs may be found in Bracewell,
page 100 or Hasegawa, Table 5.1 and Figure 5.1. Note that
Bracewell’s s corresponds to our k, while Hasegawa’s u
corresponds to fx.