Chapter QQQ: Autocorrelation Function

AUTOCORRELATION FUNCTION. The autocorrelation function is a measure of
similarity between a data set and a translated copy of the data. Its uses include finding
periodic patterns in noisy data, characterizing similarity patterns for use in data
compression, and measurement of spatial resolution of an image receptor with uniform
white noise as the input. For medical imaging the major use reported is for the
measurement of film and screen spatial resolution.
The definition of the autocorrelation function is similar to that of the autocovariance function. The autocorrelation function can be defined as E{I(x) I(x+τ)} and the autocovariance function as E{[I(x) − <I(x)>][I(x+τ) − <I(x)>]}. If I(x) is a zero-mean function, then <I(x)> = 0 and the two definitions give the same result. In fact, for most medical imaging applications I(x) is transformed to a zero-mean function before the correlation function is calculated, so for our purposes autocorrelation = autocovariance.
The autocorrelation function can be designated as

Cx(τ) = E{I(x) I(x+τ)}   (Eq. 1)
Example 1. A simple periodic function.
Following is an example of how to calculate the autocorrelation function for a simple set of data that repeats every three samples. First let's take a look at Cx(τ) with τ = 0:

I(x)        = …, +1, –1, 0, +1, –1, 0, +1, –1, 0, …
I(x+0)      = …, +1, –1, 0, +1, –1, 0, +1, –1, 0, …
I(x) I(x+0) = …, +1, +1, 0, +1, +1, 0, +1, +1, 0, …

Cx(0) = (1/N) Σ_{x=1}^{N} I(x) I(x+0) = (1/9) Σ_{x=1}^{9} I(x) I(x+0) = 6/9 = 2/3   (Eq. 2)
Note that the average was taken over three periods (N = 9) in this case, but in general the extent will be much larger. Also, note that as the numerator increases by 2 for each period the denominator increases by 3, keeping Cx(0) = 2/3. For a large number of samples (e.g., N = 1000), averaging over an exact number of periods is not as important, since 999 points will cover 333 periods exactly and only 1 point in 1000 will be contributing incorrectly to the autocorrelation function.
The next step is to calculate Cx(τ) with τ = 1:

I(x)        = …, +1, –1, 0, +1, –1, 0, +1, –1, 0, …
I(x+1)      = …, –1, 0, +1, –1, 0, +1, –1, 0, +1, …
I(x) I(x+1) = …, –1, 0, 0, –1, 0, 0, –1, 0, 0, …

Cx(1) = (1/N) Σ_{x=1}^{N} I(x) I(x+1) = (1/9) Σ_{x=1}^{9} I(x) I(x+1) = –3/9 = –1/3   (Eq. 3)
Note that as the numerator decreases by 1 for each period the denominator increases by 3, keeping Cx(1) = –1/3. If the calculations are continued for various spacings you will see that

τ     = …, –3, –2, –1, 0, +1, +2, +3, …
Cx(τ) = …, 2/3, –1/3, –1/3, 2/3, –1/3, –1/3, 2/3, …   (Eq. 4)
Inspection of this result leads to the following:
• The autocorrelation function is positive (2/3) at τ = 0. Further, it can be shown that Cx(0) ≥ Cx(τ) for all τ. For most nonperiodic functions Cx(0) > Cx(τ), i.e., Cx(τ) is a maximum at τ = 0. This will be the case when we investigate detector spatial resolution using white noise.
• The autocorrelation function repeats with a period = 3.
• The autocorrelation function is –1/3 for τ = ±1, ±2. In other words the autocorrelation function is identical for these displacements. Also, it is of opposite sense and 1/2 the magnitude of that at τ = 0.
Since I(x) is a zero-mean function in this example, Eq. 2 is just the variance of I(x), or stated mathematically Cx(0) = σ². With this identity in mind we can interpret Cx(τ) as the covariance between a function I(x) and its shifted version I(x+τ). A normalized measure of autocorrelation is Rx(τ) = Cx(τ)/Cx(0). Therefore Rx(0) = 1 and Rx(τ) ≤ 1 for all other values of τ, providing a useful measure of fractional similarity as a function of displacement.
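The numbers in Example 1 are easy to verify numerically. Below is a minimal sketch in Python/NumPy (the helper name `circular_autocorr` is ours, not from the text) that averages I(x) I(x+τ) over N = 9 samples with periodic wrap-around:

```python
import numpy as np

def circular_autocorr(I, tau):
    """Estimate Cx(tau) = E{I(x) I(x+tau)} by averaging with periodic wrap-around."""
    return np.mean(I * np.roll(I, -tau))   # np.roll(I, -tau) gives I(x + tau)

# Three periods (N = 9) of the repeating sequence +1, -1, 0, ...
I = np.array([+1.0, -1.0, 0.0] * 3)

C0 = circular_autocorr(I, 0)   # 6/9  = 2/3   (Eq. 2)
C1 = circular_autocorr(I, 1)   # -3/9 = -1/3  (Eq. 3)
C3 = circular_autocorr(I, 3)   # repeats with period 3, so 2/3 again

R1 = C1 / C0                   # normalized autocorrelation Rx(1) = -1/2
```

Running the sketch reproduces the values of Eq. 4, including the period-3 repetition.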
Example 2. Autocorrelation function of sinusoids.
Since cosine is a periodic function we must integrate over n periods to ensure a proper calculation of its autocorrelation function. The basic equation is

C(τ) = (1/nT) ∫_{–nT/2}^{+nT/2} cos((2π/T)x) cos((2π/T)(x+τ)) dx   (Eq. 5)
From trigonometry cosA cosB = ½cos(A–B) + ½cos(A+B), and setting A = (2π/T)x and B = (2π/T)x + (2π/T)τ and using cos(θ) = cos(–θ), we get

C(τ) = (1/nT) ∫_{–nT/2}^{+nT/2} ½cos((2π/T)τ) dx + (1/nT) ∫_{–nT/2}^{+nT/2} ½cos((2π/T)(2x+τ)) dx   (Eq. 6)

The second integral vanishes because it covers a whole number of periods, leaving

C(τ) = ½ cos((2π/T)τ)   (Eq. 7)
An identical answer is found for a sine function. Note that C(0) = 1/2 and that C(τ) has the same period (T) as the original sinusoid.
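Eq. 7 can be checked numerically. Here is a minimal sketch in Python/NumPy that approximates Eq. 5 with a Riemann sum over n full periods (the values of T, n, and the sample count are arbitrary choices for illustration):

```python
import numpy as np

T = 1.0                         # period of the cosine (arbitrary)
n = 4                           # number of full periods to integrate over
x = np.linspace(-n*T/2, n*T/2, 40000, endpoint=False)
dx = x[1] - x[0]

def C(tau):
    """Riemann-sum approximation of Eq. 5."""
    return np.sum(np.cos(2*np.pi*x/T) * np.cos(2*np.pi*(x + tau)/T)) * dx / (n*T)

# Agrees with the closed form C(tau) = (1/2) cos(2*pi*tau/T) of Eq. 7
for tau in (0.0, 0.1, 0.25, 0.5):
    assert abs(C(tau) - 0.5*np.cos(2*np.pi*tau/T)) < 1e-6
```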
In general the following are true for all autocorrelation functions:
• C(0) = σ²            at τ = 0 the autocorrelation function is the variance
• C(–τ) = C(τ)         autocorrelation functions are symmetric
• C(0) ≥ C(τ)          maximum at zero displacement (τ = 0)
• C(τ) = σ²(τ)         autocorrelation function = variance as a function of τ
• R(τ) = C(τ)/C(0)     normalized autocorrelation function
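These properties are easy to demonstrate on sampled data. The sketch below uses a zero-mean random sequence and a circular (wrap-around) estimate of C(τ), which is one convenient discrete stand-in for the expectation:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.normal(0.0, 1.0, 4096)
f -= f.mean()                        # zero-mean, as assumed throughout

def C(tau):
    """Circular estimate of the autocorrelation at lag tau."""
    return np.mean(f * np.roll(f, -tau))

lags = range(1, 50)
assert all(abs(C(-t) - C(t)) < 1e-10 for t in lags)   # C(-tau) = C(tau)
assert all(C(0) >= C(t) for t in lags)                # maximum at tau = 0
assert abs(C(0) - np.var(f)) < 1e-10                  # C(0) = variance
R = [C(t) / C(0) for t in lags]                       # normalized form
```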
Autocorrelation and Autoconvolution. There is a natural similarity between the autocorrelation function and convolving a function with itself (autoconvolution). Autoconvolution is as follows

f⊗f(x) = ∫ f(y) f(x–y) dy   (Eq. 8)

whereas for autocorrelation there is no reflection in the second term, leading to

C(x) = ∫ f(y) f(y+x) dy = ∫ f(y) f(y–x) dy   (Eq. 9)

where 'y' is a dummy 'x' variable for integration and f(x) is a real (i.e., not complex) function. The x in these equations can be τ since both measure distance from x = 0. Graphing the two forms in Eq. 9 will reveal why they are identical on integration.
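The relationships in Eqs. 8 and 9 can be seen with discrete sequences: correlating f with itself equals convolving f with its reflection, while plain autoconvolution is different. A short sketch with NumPy (the sample values are arbitrary):

```python
import numpy as np

f = np.array([1.0, 3.0, 2.0, -1.0, 0.5])    # arbitrary real sequence

corr = np.correlate(f, f, mode="full")       # autocorrelation: no reflection (Eq. 9)
conv = np.convolve(f, f[::-1], mode="full")  # convolution with the reflected f

assert np.allclose(corr, conv)               # the two forms of Eq. 9 agree
assert np.allclose(corr, corr[::-1])         # autocorrelation is symmetric

# Autoconvolution (Eq. 8) is generally different from autocorrelation
assert not np.allclose(corr, np.convolve(f, f, mode="full"))
```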
The Fourier transform of C(τ) also has a simple form based on F(u), the Fourier transform of f(x),

Φ(u) = ℱ{C(τ)} = F(u) F*(u) = |F(u)|²   (Eq. 10)

and Φ(u) is called the power spectral density (PSD) of f(x). The power spectral density and the autocorrelation function form a Fourier transform pair

Φ(u,v) = ∫∫ C(τx, τy) e^(–2πi(uτx + vτy)) dτx dτy   (Eq. 11)

and

C(τx, τy) = ∫∫ Φ(u,v) e^(+2πi(uτx + vτy)) du dv   (Eq. 12)
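Eq. 10 is the Wiener–Khinchin relation, and its discrete form can be verified directly with the FFT: the DFT of the circular autocorrelation of f equals |F(u)|². A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.normal(size=256)

F = np.fft.fft(f)
psd = np.abs(F)**2                   # Phi(u) = F(u) F*(u) = |F(u)|^2 (Eq. 10)

# Circular autocorrelation C(tau) = sum over x of f(x) f(x+tau)
C = np.array([np.sum(f * np.roll(f, -tau)) for tau in range(len(f))])

# The Fourier transform of C(tau) is the power spectral density
assert np.allclose(np.fft.fft(C), psd)
```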
In general the following are true for the power spectral density function:
• C(0,0) = σ² = ∫∫ Φ(u,v) du dv     integral of PSD = variance
• Φ(u,v) is real                    PSD is real
• Φ(u,v) ≥ 0                        PSD is non-negative
• Φ(u,v) = Φ(–u,–v)                 PSD is symmetric
A summary of autocorrelation and power spectral density functions for various input functions is helpful in understanding their use:

Table 1. General Form of Autocorrelation and Power Spectral Density Functions.

input function         C(τ)              Φ(u)
cosine                 sinusoid (μ=0)    delta functions
Gaussian               Gaussian          Gaussian
delta function         delta function    constant
random noise (μ=0)     delta function    constant
The power spectral density of random noise is called the Wiener Spectrum and is usually written as W(u) rather than Φ(u) to make this explicit.
C(τ) and W(u) can be used to measure the spatial resolution of film/screen systems from uniform images of noise. Figure 1 is a random noise image i(x,y) where the mean μ = 0 and the standard deviation σ = 1. Figure 2 is its Wiener spectrum |I(u,v)|², where |I(u,v)| is the magnitude of the Fourier transform of i(x,y). The origin u = 0, v = 0 is at the center of Figure 2.

Figure 1. 2-D image i(x,y) of random values (μ = 0, σ = 1).
Figure 2. |I(u,v)|², the power or Wiener spectrum of i(x,y).
The integral of |I(u,v)|² is equal to 1 as predicted, since it should be equal to σ² for i(x,y). Figure 3 is a graph of the Wiener spectrum as a function of the radial frequency [W(ρ)], expressed as the distance ρ from the origin. Note that the highest frequency is ~128 cycles or line pairs, as expected for a 256x256 image. Note also that there appears to be a single mean response about which the data fluctuate. This value, estimated as the average of |I(u,v)|², should be σ²/256² = 1.53x10⁻⁵.
If the Wiener spectrum in Figure 3 were fitted with a straight line, it would have zero slope and magnitude equal to the above mean value. The constant magnitude shows that random noise, if unmodified by the system transfer function, is made up of equal amplitudes at all frequencies. Figures 1-3 illustrate the calculation of the Wiener spectrum in an ideal imaging system, i.e., one with a constant magnitude of its frequency response, |H(ρ)| = constant. The Wiener spectrum is therefore just the square of the |H(ρ)| of the imaging system, or W(ρ) = |H(ρ)|² for this ideal system. This characteristic is just what is needed to test the resolution capabilities of an imaging system.

Figure 3. Radial frequency plot W(ρ) of the Wiener spectrum from Figure 2.
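The white-noise calculation behind Figures 1-3 can be reproduced in a few lines. In this sketch the Wiener spectrum is normalized (by 1/N² inside the transform) so that its sum equals the variance of i(x,y), matching the numbers quoted above; that normalization convention is our assumption:

```python
import numpy as np

N = 256
rng = np.random.default_rng(2)
i = rng.normal(0.0, 1.0, (N, N))      # random image: mean 0, sigma = 1 (like Figure 1)
i -= i.mean()                         # enforce zero mean exactly

I = np.fft.fft2(i) / N**2             # normalized so the PSD sums to the variance
W = np.abs(I)**2                      # Wiener spectrum (like Figure 2)

assert abs(W.sum() - i.var()) < 1e-9  # integral of PSD = sigma^2
print(W.mean())                       # fluctuates about sigma^2/256^2 = 1.53e-5
```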
Figures 4-6 provide some insight into how a power spectrum relates to system resolution. Let's assume that the imaging system alters the 2-D random image i(x,y) of Figure 1 by some blurring process. Figure 4 illustrates this with a 9x9 Gaussian filter applied to the i(x,y) from Figure 1 to simulate imaging system blurring. This blurring is described in the frequency domain as Is(u,v) = I(u,v)H(u,v), where I(u,v) is the Fourier transform of i(x,y) and H(u,v) is the simulated system transfer function.
Figure 4. Random noise image of system modeled with a 9x9 Gaussian point spread function.
Figure 5. Wiener spectrum of system modeled with a 9x9 Gaussian point spread function.
The 2-D Wiener spectrum of Figure 4 is given in Figure 5 and as Ws(ρ) in Figure 6. It follows that Ws(ρ) = |Hs(ρ)|², or |Hs(ρ)| = [Ws(ρ)]^(1/2). This latter form states that the square root of the Wiener spectrum for a system is the magnitude of its frequency response. |Hs(ρ)| is similar to the MTF. The system MTF is estimated as [Ws(ρ)/W(ρ)]^(1/2).

Figure 6. Ws(ρ) for Figure 5.

While the Wiener spectrum can be calculated without using the autocorrelation function, it is instructive to analyze system spatial blurring by direct analysis of the autocorrelation function of noise. The rationale is that the autocorrelation function for random noise input to an ideal imaging system [PSF = δ(x,y)] is a delta function. Broadening of this delta function is due to imaging system blurring, i.e., PSF(x,y) not a delta function. In the spatial domain this is

Cs(x,y) = C(x,y) ⊗ [PSF(–x,–y) ⊗ PSF(x,y)]   (Eq. 13)

where the smoothed autocorrelation function Cs(x,y) differs from the ideal autocorrelation function due to convolution with the PSF(x,y) twice. An example of this is provided in Figures 8 & 9.
Figure 8. Autocorrelation of random noise image (actually Figure 4) that had been smoothed with a 9x9 Gaussian filter.
Figure 9. Profile along PSF² in Figure 8.
The similarity of the autocorrelation analysis of a random noise image and the direct application of system smoothing is seen in Figures 10 & 11.
Figure 10. Point that was smoothed twice with a 9x9 Gaussian filter.
Figure 11. Profile along PSF in Figure 10.
Note the similarity of Figures 8-9 and 10-11. Taking the square root of the profile data in Figures 9 & 11 gives similar estimates of the system PSF.
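In the frequency domain the same estimate follows from the ratio of Wiener spectra, |Hs| = [Ws/W]^(1/2). The sketch below uses a hypothetical Gaussian transfer function and the same noise realization for input and output, which makes the ratio exact; real measurements would use separately averaged spectra:

```python
import numpy as np

N = 256
rng = np.random.default_rng(4)
noise = rng.normal(size=(N, N))

# Hypothetical system transfer function |H|: a Gaussian in frequency
u = np.fft.fftfreq(N)
Habs = np.exp(-50.0 * (u[:, None]**2 + u[None, :]**2))

blurred = np.real(np.fft.ifft2(np.fft.fft2(noise) * Habs))

W_in = np.abs(np.fft.fft2(noise))**2      # Wiener spectrum of the input noise
W_out = np.abs(np.fft.fft2(blurred))**2   # Wiener spectrum of the blurred noise

H_est = np.sqrt(W_out / W_in)             # |Hs| = [Ws / W]^(1/2)
assert np.allclose(H_est, Habs, rtol=1e-4, atol=1e-8)
```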
Autocorrelation Function
Page 7
Measuring Film/Screen Resolution. The autocorrelation function and Wiener spectrum are both sensitive to low-frequency background variations, and this limits their use in Nuclear Medicine. However, both can be used in the evaluation of the effects of film/screen on the system resolution in x-ray imaging. A uniform random image can be simulated by exposing a small region of a film/screen at a large distance from the x-ray tube. The film is generally scanned with a small-aperture microdensitometer and the recorded film density converted to relative exposure using calibration data for the film/screen combination. The film/screen MTF is calculated as the square root of W(ρ) for these data.
The noise presented to the film/screen system is quantum noise, but the noise in the image is due to random processes in both the screen and the film, and since these are assumed to be independent the random noise contributions are additive. This leads to an overall MTF that looks something like the following:

Figure 12. MTF(ρ) from a film/screen system, showing the ideal MTF together with the separate screen MTF and film MTF curves.
Also, the small aperture used in the scanning process must be accounted for by modeling its MTF and factoring it out of the measured MTF.
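That correction is a division in the frequency domain. Here is a sketch that models the scanning aperture's MTF as a sinc (the magnitude of the Fourier transform of a rectangular aperture); the aperture width and the stand-in "measured" MTF are hypothetical numbers for illustration only:

```python
import numpy as np

a = 0.01                                   # hypothetical aperture width, mm
u = np.linspace(0.0, 20.0, 201)            # spatial frequency, cycles/mm

mtf_aperture = np.abs(np.sinc(a * u))      # sinc MTF of a rectangular aperture

# Stand-in for a measured MTF = (true system MTF) x (aperture MTF)
mtf_true = np.exp(-0.1 * u)                # hypothetical system MTF
mtf_measured = mtf_true * mtf_aperture

mtf_system = mtf_measured / mtf_aperture   # factor the aperture MTF out
assert np.allclose(mtf_system, mtf_true)
```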