A COMPUTER ALGORITHM TO IMPLEMENT LINEAR STRUCTURED
ILLUMINATION IMAGING
Zhongchao Liao
B.E., Wuhan University of Technology, 2007
PROJECT
Submitted in partial satisfaction of
the requirements for the degree of
MASTER OF SCIENCE
in
ELECTRICAL AND ELECTRONIC ENGINEERING
at
CALIFORNIA STATE UNIVERSITY, SACRAMENTO
FALL
2009
A COMPUTER ALGORITHM TO IMPLEMENT LINEAR STRUCTURED
ILLUMINATION IMAGING
A Project
by
Zhongchao Liao
Approved by:
__________________________________, Committee Chair
Warren D. Smith
__________________________________, Second Reader
Stephen M. Lane
____________________________
Date
Student: Zhongchao Liao
I certify that this student has met the requirements for format contained in the University
format manual, and that this project is suitable for shelving in the Library and credit is to
be awarded for the Project.
__________________________, Graduate Coordinator
B. Preetham Kumar
Department of Electrical and Electronic Engineering
________________
Date
Abstract
of
A COMPUTER ALGORITHM TO IMPLEMENT LINEAR STRUCTURED
ILLUMINATION IMAGING
by
Zhongchao Liao
The conventional diffraction limit defines a finite range of spatial frequencies that
can be transmitted through a microscope. To reveal more information about the objects
that are observed by a microscope, techniques that go beyond this limit need to be
developed. Structured illumination microscopy (SIM), one such method, uses patterns
of excitation light to encode otherwise unobservable information into the observed image.
Although the method has been well developed, the procedure of this technique is
complicated. During the procedure, after encoding the unobservable information into the
observed image, the superresolution information components need to be separated,
shifted, and reassembled. These procedures have never been clearly explained.
In this project, a computer algorithm for the linear structured illumination
microscopy technique is developed. To implement this algorithm, multiple images of an
object are taken with different phases and orientations of sinusoidally patterned
illumination. Superresolution information components then can be extracted from these
images. The procedures of separation, shifting, and reassembly of the superresolution
information components are presented, explained, and verified. A block diagram of the
whole procedure of the structured illumination method is presented. The results of the
conventional microscope and the structured illumination algorithm are generated and
compared.
When applied to test objects, the performance of the algorithm is found to be in
agreement with theoretical predictions, thus verifying the theory and the implementation
algorithm. The block diagram of the whole structured illumination procedure and
the explanation of the separation, shifting, and reassembly of the
superresolution information components can serve as instructions for
implementing this method. This project report is intended to serve as a useful reference for
researchers seeking to understand this method.
_______________________, Committee Chair
Warren D. Smith
_______________________
Date
ACKNOWLEDGMENTS
I would like to thank my advisor, Dr. Warren D. Smith, for giving me the
opportunity to work in a very interesting area, and for his support and guidance
throughout my graduate studies at California State University, Sacramento.
I also wish to thank Dr. Stephen M. Lane, the Chief Scientific Officer of the NSF
Center for Biophotonics Science and Technology at the University of California, Davis,
for his direction, assistance, and guidance. His recommendations and suggestions have
been invaluable for the project.
I thank Dr. Preetham Kumar, the Graduate Coordinator of the Department of
Electrical and Electronic Engineering, for his support and encouragement throughout my
graduate studies.
Special thanks should be given to my parents who love and support me at all
times. Finally, words alone cannot express the thanks I owe to Qing Gu, my wife, for her
encouragement and assistance.
TABLE OF CONTENTS
Page
Acknowledgments ....................................................................................................... vi
List of Figures ............................................................................................................. ix
Chapter
1. INTRODUCTION ............................................................................................ 1
   1.1 Overview .................................................................................................... 1
   1.2 Purpose of Study ........................................................................................ 2
   1.3 Organization of Project Report .................................................................. 3
2. BACKGROUND .............................................................................................. 4
   2.1 Structured Illumination Imaging Theory ................................................... 4
   2.2 Information Components Shifting ............................................................. 6
3. METHODOLOGY ........................................................................................... 8
   3.1 Object ......................................................................................................... 8
   3.2 Optical Transfer Function and Point Spread Function .............................. 9
       3.2.1 Optical Transfer Function ................................................................ 9
       3.2.2 Point Spread Function ................................................................... 10
   3.3 Conventional Image ................................................................................. 11
   3.4 Illumination Patterns ................................................................................ 15
   3.5 Shifted Components ................................................................................ 15
   3.6 Information Components Separation ....................................................... 19
   3.7 Information Components Analysis .......................................................... 23
   3.8 Information Components Reconstruction ................................................ 25
   3.9 Apodization ............................................................................................. 28
   3.10 Methodology Summary ......................................................................... 30
4. RESULTS ....................................................................................................... 32
   4.1 Real Space Comparison .......................................................................... 32
   4.2 Reciprocal Space Comparison ................................................................ 36
5. DISCUSSION ................................................................................................ 39
6. SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS ................... 40
   6.1 Summary .................................................................................................. 40
   6.2 Conclusions ............................................................................................. 40
   6.3 Recommendations ................................................................................... 41
Appendix Matlab Simulation Code .......................................................................... 42
References .................................................................................................................. 52
LIST OF FIGURES
Page
1. Figure 3.1. Object image, D(r) ....................................................................... 8
2. Figure 3.2. OTF spectrum magnitude plot ................................................... 10
3. Figure 3.3. OTF spectrum image .................................................................. 11
4. Figure 3.4. Fourier transform of object image, D(r), in reciprocal space, Dbar(k) ... 12
5. Figure 3.5. OTF support region in reciprocal space, DHbar(k) ................... 14
6. Figure 3.6. Conventional image, DP(r) ........................................................ 14
7. Figure 3.7. Illumination pattern, I(r), with φ = 240°, orientation = 120°, in real space ... 16
8. Figure 3.8. Illuminated image in real space, DI(r) ....................................... 16
9. Figure 3.9. Illuminated object in reciprocal space, DIbar(k) ....................... 17
10. Figure 3.10. Magnitude plot of illuminated object, DIbar(k) ..................... 17
11. Figure 3.11. Magnitude plot of reconstructed object, rc(k) ........................ 18
12. Figure 3.12. Illumination pattern, I(r), in three phases and orientations .... 19
13. Figure 3.13. Components for three different phases and orientations ........ 22
14. Figure 3.14. The moved components, replc_ci(k) ....................................... 26
15. Figure 3.15. The Fourier transform of reconstructed structured illumination image, drr(k) ... 28
16. Figure 3.16. Magnitude plot of triangular function in reciprocal space, bhs(k) ... 29
17. Figure 3.17. Reconstruction of SI image in real space, fimage(r) .............. 30
18. Figure 3.18. Block diagram of the methodology ......................................... 31
19. Figure 4.1. Magnitude plot of column 65 of object image in real space ..... 33
20. Figure 4.2. Magnitude plot of column 65 of SI image in real space ........... 34
21. Figure 4.3. Magnitude plot of column 65 of conventional image in real space ... 35
22. Figure 4.4. Comparison of column 65 of conventional image and SI image in real space ... 35
23. Figure 4.5. Magnitude plot of column 65 of object image in reciprocal space ... 36
24. Figure 4.6. Magnitude plot of column 65 of SI image in reciprocal space ... 37
25. Figure 4.7. Magnitude plot of column 65 of conventional image in reciprocal space ... 37
26. Figure 4.8. Comparison of column 65 of conventional image and SI image in reciprocal space ... 38
Chapter 1
INTRODUCTION
1.1. Overview
Optical or light microscopy involves passing visible light, transmitted through or
reflected from the sample, through a single lens or multiple lenses to allow a magnified
view of the sample [1]. The resulting image can be captured digitally, imaged on a
photographic plate, or observed directly by eye.
According to E. K. Abbe's theory [2], the conventional diffraction limit defines a
finite range of spatial frequencies that can be transmitted through a microscope. This
theory has been well understood for more than a century. Recently, a few techniques
have been shown to go beyond this limit. Structured illumination microscopy
(SIM), one such method, uses patterns of excitation light to encode otherwise
unobservable information into the observed image. This method, developed by M. G. L.
Gustafsson and R. Heintzmann, has been used for resolution enhancement in both the
axial and the lateral directions.
This method resolves an object’s spatial frequencies that are normally outside the
passband of an imaging system. The basic idea is based on the well-known moiré effect
[3]. The moiré effect is a visual perception that occurs when viewing a pattern that is
superimposed on another pattern, where the patterns differ in relative size, angle, or
spacing. In this project, one pattern is purposely structured excitation light with a
2
sinusoidal illumination pattern, and the other pattern is the unknown sample object [4].
The observed image is the product of the two patterns, where the amount of light emitted
from a point is proportional to the product of the unknown object and the sinusoidally
patterned illumination [5]. Such an observed image also will contain moiré fringes. This
generated moiré pattern combines the high spatial frequencies of the object with the
spatial frequency of the sinusoidal illumination. Since it is much coarser than either the
sinusoidal pattern or the sample object, the moiré pattern easily can be observed in the
microscope, even if the object is too fine to resolve. Multiple images of the object can be
obtained by shifting the phase of the sinusoidal pattern and rotating the orientation of the
sinusoidal pattern. These images then are processed to extract the high spatial
frequencies in order to obtain a superresolved image.
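This frequency-mixing effect can be demonstrated numerically. The following sketch is illustrative only and uses Python/NumPy rather than the Matlab of the report's appendix; the grid size and the two frequencies are assumed values chosen for the demonstration:

```python
import numpy as np

# Multiplying an object frequency f_obj by a sinusoidal illumination of
# frequency f_ill creates a moire (beat) component at |f_obj - f_ill|.
n = 256
x = np.arange(n) / n
f_obj, f_ill = 40, 36                 # cycles per field of view (assumed)
obj = np.cos(2 * np.pi * f_obj * x)
ill = 0.5 * (1 + np.cos(2 * np.pi * f_ill * x))

spec = np.abs(np.fft.rfft(obj * ill))
peaks = np.flatnonzero(spec > 1e-6)   # frequencies present in the product
print(sorted(peaks.tolist()))
```

The product contains the original frequency plus sum and difference terms; the difference term at 4 cycles is far coarser than either input, which is what lets a microscope pass the moiré fringes even when f_obj itself lies outside the passband.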
1.2. Purpose of Study
The theory of structured illumination imaging has been well developed. In this
project, a computer algorithm is developed to implement linear structured illumination
imaging theory. The primary purposes of developing this computer algorithm are to
verify the robustness of the theory and to help people understand this method clearly.
During the procedure, after encoding the unobservable information into the observed
image, the superresolution information components need to be separated, shifted, and
reassembled. Since these procedures have never been explained clearly, this project
discusses these steps thoroughly.
1.3. Organization of Project Report
The project is organized as follows: Chapter 2 provides background knowledge
on linear structured illumination microscopy and illustrates the method of shifting the
superresolution information components. Chapter 3 shows the computer algorithm to
implement the linear structured illumination technique. It illustrates the methods of
separating, shifting, and reassembling the superresolution information components.
Chapter 4 shows the results after applying the linear structured illumination technique to
the object image and compares the reconstructed image with the conventional
microscope image. Chapter 5 is a discussion of the results of the project. Chapter 6 is
the summary, conclusions, and recommendations of this project.
Chapter 2
BACKGROUND
2.1. Structured Illumination Imaging Theory
The classical resolution limit specifies a maximum spatial frequency, k_obs, that
can be observed through the light microscope. The region within a circle of radius k_obs
in reciprocal space is called the observable region [6]. It is also known as the OTF support
region. It is defined as the spatial frequencies for which the optical transfer function
(OTF) of the conventional microscope is non-zero [7]. In terms of the definition of the
OTF support region, the information that lies inside this region can be observed through
the conventional microscope, while information that resides outside the region is not
observable. The structured illumination technique is developed to extend the resolution
beyond this limit by shifting high spatial frequencies from outside the observable region
into the observable region in the form of moiré fringes.
Object image D(r) and observed image E(r) are related by

E(r) = D(r) × I(r),   (1)

where I(r) is the structured illumination pattern, and r is the spatial vector.

The Fourier transform of this relation is the convolution

E(k) = D(k) ⊗ I(k),   (2)

where k is the spatial frequency vector in reciprocal space and ⊗ denotes convolution.
This convolution mixes information from outside the observable region into the
observable region in reciprocal space [8]. Thus, the observed patterned image contains
previously unobservable information. If the structured illumination pattern is chosen
properly, the unobservable information in moiré fringe form can be decoded and restored.
A reconstruction can be created with the previously unavailable superresolution
information to get the superresolved image.
Because the resolution extension is based on the structured illumination pattern's
frequency, I(r) should be as fine as possible to get maximal resolution [9]. The
structured illumination used in this project is a sinusoidal pattern of parallel stripes:

I(r) = (1/2)[1 + cos(2π p · r + φ)],   (3)

where p is the frequency of the illumination pattern, and φ is the phase of the
illumination pattern in real space.
The Fourier transform of that pattern consists of three delta functions:

I(k) = (1/2)[δ(k) + (1/2)δ(k − p)e^(iφ) + (1/2)δ(k + p)e^(−iφ)],   (4)

so that convolution integral (2) becomes a sum of three components [5]. The phase
factor, e^(iφ), represents the phase φ of the illumination pattern in reciprocal space [10].
The observed image, E(k), at each point k in reciprocal space depends on only three
information components:

E(k) = (1/2)[D(k) + (1/2)D(k − p)e^(iφ) + (1/2)D(k + p)e^(−iφ)].   (5)
Three independent linear combinations of D(k), D(k − p), and D(k + p) then
can be measured by repeating this procedure several times with the pattern shifted by
different phases. This process can be repeated with the pattern at different orientations,
resulting in an image of the object at double the normal resolution.
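Equation (4) can be checked numerically. A minimal 1-D sketch in Python/NumPy (the report's code is Matlab; the grid size, pattern frequency p = 9, and phase are assumed values):

```python
import numpy as np

# The pattern I(r) = (1/2)[1 + cos(2*pi*p*r + phi)] of Eq. (3) has a
# discrete spectrum with exactly three nonzero components, as in Eq. (4):
# weight 1/2 at k = 0 and weight (1/4)e^{+/- i*phi} at k = +/- p.
n = 128
r = np.arange(n) / n
p, phi = 9, 2 * np.pi / 3
I = 0.5 * (1 + np.cos(2 * np.pi * p * r + phi))

Ik = np.fft.fft(I) / n                     # normalized Fourier coefficients
nonzero = np.flatnonzero(np.abs(Ik) > 1e-9)
print(nonzero.tolist())                    # bins 0, p, and n - p (i.e. -p)
print(abs(Ik[p]), np.angle(Ik[p]))         # magnitude 1/4, phase phi
```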
2.2. Information Components Shifting
Equation (5) has three components: the unshifted object Fourier transform, D(k),
and two shifted copies of the object Fourier transform, D(k − p) and D(k + p). The
shifted components contain part of the object's information that is unobservable in a
conventional imaging system. The structured illumination process makes the previously
unobservable information accessible by shifting these components into the OTF support
region of the conventional microscope. To obtain the superresolved image, the three
information terms need to be separated and moved back to their proper positions. The
unshifted component D(k) does not need to be moved, but the shifted components
should be moved from the (k − p) and (k + p) coordinates back to the (k)
coordinates [9]. Then, a reconstruction is generated to restore all components and obtain
a superresolved image.
Chapter 3
METHODOLOGY
3.1. Object
The numerical simulations are performed on a grid of 128 × 128 pixels. The
object image, consisting of two-dimensional rods with random length and orientation, is
shown in Figure 3.1.
Figure 3.1. Object image, D(r).
3.2. Optical Transfer Function and Point Spread Function
3.2.1. Optical Transfer Function
The optical transfer function (OTF) describes the magnitude of each spatial
frequency observed by the microscope. The simulations and numerical calculations in
this project used an analytical wide field OTF for a diffraction-limited optical microscope
in the scalar, paraxial approximation [8]. This OTF is
OTF(k) = {2b(k) − sin[2b(k)]}/π,   (6)

where b(k) = cos⁻¹(k/k0). Figure 3.2 shows this OTF.
Here, k0 is the radius of the normally observable region in reciprocal space. The
normally observable region is shown in Figure 3.3. This simple expression is chosen
because the particulars of the OTF are unimportant for the general question. The highest
spatial frequency for the OTF, f_c, is set to a radius of 20 frequency indices, where
frequency index 65 represents zero spatial frequency in reciprocal space. The interval from one
frequency index to the next corresponds to a spatial frequency interval of (1/128) per pixel.
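On the simulation grid, the OTF of (6) can be evaluated directly. A Python/NumPy sketch (the report's appendix is Matlab; the function name and the centering at frequency index 65, i.e. zero-based array index 64, are bookkeeping assumptions):

```python
import numpy as np

# Scalar, paraxial wide-field OTF of Eq. (6):
#   OTF(k) = {2 b(k) - sin[2 b(k)]} / pi,  b(k) = arccos(k / k0),
# set to zero outside the support |k| <= k0 (cutoff radius 20 indices).
def otf(n=128, k0=20.0):
    ky, kx = np.meshgrid(np.arange(n) - n // 2, np.arange(n) - n // 2,
                         indexing="ij")
    k = np.hypot(kx, ky)                      # radial frequency index
    b = np.arccos(np.clip(k / k0, 0.0, 1.0))
    h = (2 * b - np.sin(2 * b)) / np.pi
    return np.where(k <= k0, h, 0.0)

H = otf()
print(H[64, 64])       # at k = 0, b = pi/2 and the OTF equals 1
print(H[64, 84])       # at the cutoff k = k0, b = 0 and the OTF is 0
```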
3.2.2. Point Spread Function
The point spread function (PSF) describes the response of an imaging system to a
point source or point object. A more general term for the PSF is a system's impulse
response, with the PSF being the impulse response of a focused optical system.
Figure 3.2. OTF spectrum magnitude plot. It is the plot of (6) in reciprocal space. The
frequency index 65 represents zero spatial frequency in reciprocal space. The interval from
one frequency index to the next corresponds to a spatial frequency interval of (1/128)
per pixel.
When the object is divided into discrete point objects of varying intensity, the
image is computed as a sum of the PSF of each point. As the PSF typically is determined
entirely by the imaging system, the entire image can be described by specifying the
optical properties of the system. This process usually is formulated by a convolution
equation [11].
Figure 3.3. OTF spectrum image. It is the image of (6) in reciprocal space. The
frequency index 65 represents zero spatial frequency in reciprocal space. The interval from
one frequency index to the next corresponds to a spatial frequency interval of (1/128)
per pixel.
3.3. Conventional Image
The OTF is the Fourier transform of the PSF. According to the property of
convolution, convolving the object with the PSF in real space is equivalent to multiplying
the Fourier transform of the object with the OTF in reciprocal space. The product of the
Fourier transform of the object and the OTF then is transformed back to real space,
which avoids performing the convolution directly. The result in real space is the
normally observable, or conventional, image.
The Fourier transform of the object image, D(r), to reciprocal space is

Dbar(k) = F[D(r)],   (7)

where F[·] represents the Fourier transform. This Fourier transform is shown in Figure 3.4.
Figure 3.4. Fourier transform of object image D(r) in reciprocal space, Dbar(k).
Multiplying Dbar(k) by the OTF results in

DHbar(k) = OTF(k) × Dbar(k),   (8)

the OTF support region of the object image, D(k), in reciprocal space, shown in Figure
3.5. Then, transforming back to real space results in

DP(r) = F⁻¹[DHbar(k)],   (9)

where F⁻¹[·] represents the inverse Fourier transform and DP(r) is the inverse Fourier
transform of DHbar(k) in real space, shown in Figure 3.6.
Comparing Figure 3.1 with Figure 3.6, it can be seen that, after applying the PSF,
the object image, D(r), which consists of two-dimensional rods with random length and
orientation, is changed into a blurred image, DP(r), which simulates a conventionally
observed image. The goal of the project is to improve conventionally observed images
by using the structured illumination technique.
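The whole pipeline of this section (transform, multiply by the OTF, transform back) can be sketched as follows. Python/NumPy is used instead of the report's Matlab, and a random point object stands in for the rod object of Figure 3.1:

```python
import numpy as np

n, k0 = 128, 20.0
ky, kx = np.meshgrid(np.arange(n) - n // 2, np.arange(n) - n // 2, indexing="ij")
k = np.hypot(kx, ky)
b = np.arccos(np.clip(k / k0, 0.0, 1.0))
OTF = np.where(k <= k0, (2 * b - np.sin(2 * b)) / np.pi, 0.0)  # Eq. (6)

rng = np.random.default_rng(0)
D = (rng.random((n, n)) > 0.99).astype(float)    # stand-in object (assumed)

Dbar = np.fft.fftshift(np.fft.fft2(D))           # Eq. (7)
DHbar = OTF * Dbar                               # Eq. (8)
DP = np.fft.ifft2(np.fft.ifftshift(DHbar)).real  # Eq. (9): blurred image

# Everything beyond the OTF support is gone from the conventional image.
residual = np.abs(np.fft.fftshift(np.fft.fft2(DP)))[k > k0]
print(residual.max())                            # zero up to rounding error
```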
Figure 3.5. OTF support region in reciprocal space, DHbar(k).
Figure 3.6. Conventional image, DP(r). This is the normally observed image, which is
the image that can be observed through a conventional microscope.
3.4. Illumination Patterns
As mentioned before, a sinusoidal pattern of parallel stripes is used in this project
to generate the illumination pattern, I(r). The illumination pattern is shown in Figure
3.7 for an orientation of 120°, where orientation is measured clockwise from the
horizontal.
In real space, the product of the illumination pattern, I(r), and the object image,
D(r), is the illumination-patterned object image, DI(r), shown in Figure 3.8. It is then
transformed to reciprocal space, DIbar(k), shown in Figure 3.9. Then, it is multiplied
by the OTF, H(k), to get the conventionally observable patterned image, DIbars(k):

DIbars(k) = H(k) × DIbar(k).   (10)

In (10), the patterned object image is limited by the OTF. However, there is some
superresolved information in the shifted components.
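That the observed spectrum really consists of an unshifted copy plus two weighted, shifted copies, as in Eq. (5), can be verified numerically. A 1-D Python/NumPy sketch with an assumed random object (the integer pattern frequency makes the circular shift exact):

```python
import numpy as np

n, p, phi = 128, 9, 2 * np.pi / 3
r = np.arange(n) / n
rng = np.random.default_rng(1)
D = rng.random(n)                                  # assumed object
I = 0.5 * (1 + np.cos(2 * np.pi * p * r + phi))    # Eq. (3) pattern

Dk = np.fft.fft(D)
Ek = np.fft.fft(D * I)                             # spectrum of patterned object

# Eq. (5): (1/2) D(k) + (1/4) e^{i phi} D(k-p) + (1/4) e^{-i phi} D(k+p).
pred = (0.5 * Dk
        + 0.25 * np.exp(1j * phi) * np.roll(Dk, p)
        + 0.25 * np.exp(-1j * phi) * np.roll(Dk, -p))
print(np.max(np.abs(Ek - pred)))                   # agreement to rounding error
```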
3.5. Shifted Components
As shown in (4), the Fourier transform of a sinusoidal pattern consists of three
impulses. The Fourier transform of an object illuminated by this pattern contains three
replicas of the object spectrum [12]. The three components can be visualized in Figure
3.10 [13]. Figure 3.10 is a slice through Figure 3.9 at orientation = 120°. All three
components are combined appropriately to obtain a superresolved image, rc(k), as
shown in Figure 3.11.

Figure 3.7. Illumination pattern, I(r), with φ = 240°, orientation = 120°, in real space.
Figure 3.8. Illuminated image in real space, DI(r). It is illuminated by the illumination
pattern shown in Figure 3.7.
Figure 3.9. Illuminated object in reciprocal space, DIbar(k). It is illuminated by the
illumination pattern shown in Figure 3.7.
Figure 3.10. Magnitude plot of illuminated object, DIbar(k). This is the plot of (5). It
is a slice through Figure 3.9 at orientation = 120°.
Figure 3.11. Magnitude plot of reconstructed object, rc(k). The detectable region is the
normal OTF support region, and the plot is the reconstruction of Figure 3.10, after moving
the shifted components back to their proper positions.

In order to solve for the three unknown components, three or more images are
needed. The traditional technique uses three images with phase shifts of 0°, 120°, and 240° in
the sinusoidal illumination [4]. Figure 3.12 shows sinusoidal patterns with three phases
for three different orientations. For each orientation, there are three unknown
components.
[Nine panels: orientations 0°, 60°, and 120°, each at phases φ = 0°, 120°, and 240°.]
Figure 3.12. Illumination pattern, I(r), in three phases and orientations. They are
printed on 128 × 128 pixel grids with the same scales.
3.6. Information Components Separation
Solution for these individual components by solving linear equations has been
discussed extensively in the literature [4], [5]. However, few papers
discuss the details of inverting the matrix and solving for the three unknown components.
Therefore, one of the purposes of this project is to show such details. Following
Shroff's paper [9], let H1(k) and H2(k) be the optical transfer functions (OTFs) of the
illumination and imaging paths, respectively. Recall that Dbar(k) is the Fourier
transform of the object intensity, and DIbars(k) is the Fourier transform of the OTF-supported
patterned object. The resulting matrix equation, C = A B, is

[ DIbars1(k) ]   [ 1  e^(iφ1)  e^(−iφ1) ] [ (1/2) H1(0) H2(k) Dbar(k)     ]
[ DIbars2(k) ] = [ 1  e^(iφ2)  e^(−iφ2) ] [ (1/4) H1(p) H2(k) Dbar(k − p) ] .   (11)
[ DIbars3(k) ]   [ 1  e^(iφ3)  e^(−iφ3) ] [ (1/4) H1(p) H2(k) Dbar(k + p) ]
       C                    A                            B

Since the Fourier transforms of the patterned object and the three phases, φ1 = 0°,
φ2 = 120°, φ3 = 240°, are already known, the Fourier transforms of the shifted object
can be solved for by inverting matrix A, which is the shifting-factor matrix. For this project,
the equation is solved numerically.
First,

[ (1/2) H1(0) H2(k) Dbar(k)     ]   [ 1  e^(iφ1)  e^(−iφ1) ]⁻¹ [ DIbars1(k) ]
[ (1/4) H1(p) H2(k) Dbar(k − p) ] = [ 1  e^(iφ2)  e^(−iφ2) ]   [ DIbars2(k) ] .   (12)
[ (1/4) H1(p) H2(k) Dbar(k + p) ]   [ 1  e^(iφ3)  e^(−iφ3) ]   [ DIbars3(k) ]
               B                              A                       C

Substituting φ1 = 0°, φ2 = 120°, and φ3 = 240° in (12) results in

[ (1/2) H1(0) H2(k) Dbar(k)     ]   [ 1       1               1        ]⁻¹ [ DIbars1(k) ]
[ (1/4) H1(p) H2(k) Dbar(k − p) ] = [ 1  −0.5 + 0.866i  −0.5 − 0.866i ]   [ DIbars2(k) ] .   (13)
[ (1/4) H1(p) H2(k) Dbar(k + p) ]   [ 1  −0.5 − 0.866i  −0.5 + 0.866i ]   [ DIbars3(k) ]
               B                                A                              C

The three components, (1/2) H1(0) H2(k) Dbar(k), (1/4) H1(p) H2(k) Dbar(k − p),
and (1/4) H1(p) H2(k) Dbar(k + p), thus are obtained. Figure 3.13 shows the three components
for each orientation.
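The separation step can be sketched end to end. The following 1-D Python/NumPy toy (the report's implementation is Matlab; the object and pattern frequency are assumed values) builds the three observed spectra of Eq. (5) at phases 0°, 120°, and 240°, then recovers the three components by inverting the phase matrix A of Eq. (13):

```python
import numpy as np

n, p = 128, 9
rng = np.random.default_rng(2)
Dk = np.fft.fft(rng.random(n))                    # object spectrum (assumed)
phases = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])

# True components (vector B): unshifted and the two shifted copies.
comps = np.stack([0.5 * Dk,
                  0.25 * np.roll(Dk, p),
                  0.25 * np.roll(Dk, -p)])

# Phase matrix A: row i is [1, e^{i phi_i}, e^{-i phi_i}].
A = np.stack([np.ones(3), np.exp(1j * phases), np.exp(-1j * phases)], axis=1)
obs = A @ comps                                   # vector C: the three raw spectra

sep = np.linalg.inv(A) @ obs                      # Eq. (12): B = A^-1 C
print(np.max(np.abs(sep - comps)))                # components recovered
```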
[Nine panels: orientations 0°, 60°, and 120°, each at phases φ = 0°, 120°, and 240°.]
Figure 3.13. Components for three different phases and orientations. They are the
results of the information components separation after applying the respective
illumination patterns shown in Figure 3.12. All the images are printed on 128 × 128 pixel
grids with the same scales.
3.7. Information Components Analysis
The separated terms in matrix B of (13), which are the Fourier transforms of
shifted objects in reciprocal space, are now analyzed [9]. The term

sp_c1(k) = (1/2) H1(0) H2(k) Dbar(k)   (14)

is the unshifted component image for the 0° orientation. It has an OTF given by

otf1(k) = (1/2) H1(0) H2(k).   (15)

The second separated term, (1/4) H1(p) H2(k) Dbar(k − p), is the shifted component
image containing the superresolution information from the conventionally unobservable
region. A shifting factor, Ir(k), is introduced to sub-pixel shift the components. By
using the shifting factor, the second separated term can be shifted from the (k − p)
coordinates back to the (k) coordinates to obtain

sp_c2(k) = (1/4) H1(p) H2(k + p) Dbar(k).   (16)

This procedure is repeated for the third separated term to obtain

sp_c3(k) = (1/4) H1(p) H2(k − p) Dbar(k).

The OTFs for sp_c2(k) and sp_c3(k) are

otf2(k) = (1/4) H1(p) H2(k + p),   (17)

otf3(k) = (1/4) H1(p) H2(k − p).   (18)

The derivation shown above follows that of Shroff's paper [9]. This process is
repeated for the 60° and 120° orientations of the sinusoidal illumination pattern. Thus, six
more component images can be obtained. There are four component images having
superresolution along their respective rotations in Fourier space, given as sp_c5(k) and
sp_c6(k) for the orientation of 60° and sp_c8(k) and sp_c9(k) for the orientation of 120°. They
have their own OTFs, otf5(k), otf6(k), otf8(k), and otf9(k). There are two other
components, given as sp_c4(k) and sp_c7(k). They are the unshifted versions for the 60° and
120° orientations, having OTFs similar to otf1(k), given as otf4(k) and otf7(k). These
nine components need to be reconstructed with their OTFs to get an image having
superresolution in all directions in reciprocal space.

In this project, the shifting factor, Ir(k), given by

Ir(k) = exp{i[cos(θi) kx + sin(θi) ky]},   (19)

where θi takes the different values π/3, 2π/3, and 4π/3, is applied to move the
separated components back to their proper positions. The shifting factor, Ir(k), shifts the
different image components, along with the superresolution information, back to the center
of the observable region. The moved component images, replc_ci(k), are shown in Figure
3.14. They can be reconstructed as a superresolved image by adding them together.
Once all the components are combined, a deconvolution is needed to eliminate the OTF.
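The shifting step itself can be sketched in isolation. This 1-D Python/NumPy toy (an assumed integer shift, for checkability) moves a component measured on the (k − p) coordinates back to the (k) coordinates by applying a complex phase ramp in the conjugate domain; the same mechanism works for non-integer, sub-pixel p:

```python
import numpy as np

n, p = 128, 9
rng = np.random.default_rng(3)
Dk = np.fft.fft(rng.random(n))          # spectrum on the (k) coordinates
measured = np.roll(Dk, p)               # component as measured: D(k - p)

# Multiplying by exp(-i 2 pi p m / n) in the conjugate domain shifts the
# spectrum by -p, undoing the displacement introduced by the illumination.
ramp = np.exp(-2j * np.pi * p * np.arange(n) / n)
moved = np.fft.fft(np.fft.ifft(measured) * ramp)
print(np.max(np.abs(moved - Dk)))       # back on the (k) coordinates
```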
3.8. Information Components Reconstruction
After obtaining the moved component images, replc_ci(k), one estimate, dr(k), of
the object information, Dbar(k), in reciprocal space for each phase and each pattern
orientation is obtained as

dr(k) = 4 replc_ci(k) / otf_i(k),   (20)

where otf_i(k) represents the proper OTF for the moved component image, replc_ci(k).

Each such estimate is valid in the circular region |k| < k0, where otf_i(k) ≠ 0, and
k0 is the radius of the normally observable region of reciprocal space. Many of these
regions overlap, so there is more than one estimate of Dbar(k) at the same point k.
[Nine panels: orientations 0°, 60°, and 120°, each at phases φ = 0°, 120°, and 240°.]
Figure 3.14. The moved components, replc_ci(k). They are the results of moving the
respective images shown in Figure 3.13 by the shifting factor, Ir(k), shown in (19).
The noise-optimal way to combine such independent measurements of the same
unknown is through a weighted average, in which each measurement is given a weight
inversely proportional to its noise variance [9]. The noise variance of Dbar(k) is
inversely proportional to |otf_i(k)|^2, and the noise-optimal weighted average becomes

dr_optimal-average(k) = { Σ_i [4 replc_ci(k) / otf_i(k)] |otf_i(k)|^2 } / Σ_i |otf_i(k)|^2
                      = [ Σ_i 4 otf_i*(k) replc_ci(k) ] / Σ_i |otf_i(k)|^2 ,   (21)

where the sums are taken over all pattern orientations and otf_i*(k) denotes the complex
conjugate of otf_i(k).

The weighted average in (21), a direct linear inverse filter without regularization,
is highly unstable in regions where the denominator approaches zero [8]. To regularize
the estimate, (21) can be turned into a generalized Wiener filter by introducing a Wiener
parameter α^2 in the denominator:

drr(k) = [ Σ_i 4 otf_i*(k) replc_ci(k) ] / [ Σ_i |otf_i(k)|^2 + α^2 ] ,   (22)

where drr(k) is the regularized estimate of the object image information, Dbar(k),
shown in Figure 3.15. An estimate of the object in real space then can be obtained by
an inverse Fourier transform of drr(k), after appropriate apodization.
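The behavior of the generalized Wiener combination can be illustrated with toy data. In this hedged sketch (the OTFs and component spectra are synthetic placeholders, not the project's), each component spectrum is multiplied by its conjugate OTF, the results are summed, and the sum is divided by Σ|otf_i|² plus a small Wiener parameter, which keeps the estimate finite where all OTFs vanish:

```python
import numpy as np

n = 64
k = np.fft.fftfreq(n)
KX, KY = np.meshgrid(k, k)
kr = np.sqrt(KX**2 + KY**2)                # radial spatial frequency

# Three identical toy low-pass OTFs with cutoff 0.3 (placeholders for otf_i(k)).
otfs = [np.clip(1.0 - kr / 0.3, 0.0, None) for _ in range(3)]

truth = np.zeros((n, n))
truth[0, 3] = 1.0                           # object spectrum: one line inside the passband
components = [otf * truth for otf in otfs]  # "measured" component spectra

w2 = 1e-3                                   # Wiener parameter
num = sum(np.conj(otf) * c for otf, c in zip(otfs, components))
den = sum(np.abs(otf)**2 for otf in otfs) + w2
drr = num / den                             # regularized estimate, finite everywhere
```

Inside the passband the estimate recovers the object spectrum almost exactly; outside every OTF support, where a plain inverse filter would divide by zero, the Wiener parameter forces the estimate to zero instead of blowing up.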
Figure 3.15. The Fourier transform of the reconstructed structured illumination image,
drr(k). It is an estimate of the object image information, Dbar(k).
3.9. Apodization
Apodization is used in telescope optics in order to improve the dynamic range of
the image [14]. Generally, apodization reduces the resolution of an optical image;
however, because it reduces diffraction edge effects, it can actually enhance certain small
details [15]. In this project, the reassembled information components are apodized with a
triangular window function, bhs (k ) , shown in Figure 3.16.
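A radial apodization window of this kind is straightforward to construct. The sketch below (array sizes and the cutoff value are illustrative; the cosine profile mirrors the appendix code's bhs, with a linear "triangular" profile shown as a commented alternative) builds a window that is unity at the spectrum center and falls to zero at a cutoff k_max:

```python
import numpy as np

n = 128
k = np.arange(n) - n // 2                  # centered frequency coordinates
KX, KY = np.meshgrid(k, k)
kr = np.sqrt(KX**2 + KY**2)                # radial distance from the spectrum center

k_max = 40.0                               # apodization cutoff
bhs = np.cos(np.pi * kr / (2 * k_max))     # cosine taper: 1 at the center, 0 at k_max
# bhs = 1.0 - kr / k_max                   # linear ("triangular") alternative profile
bhs[kr > k_max] = 0.0                      # no weight outside the cutoff
# The apodized spectrum would then be drr * bhs before the inverse FFT.
```

The taper suppresses the sharp spectral edge at the support boundary, trading a small amount of resolution for reduced ringing in the real-space reconstruction.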
Figure 3.16. Magnitude plot of triangular function in reciprocal space, bhs (k ) .
Finally, the apodized reassembled information components are inverse Fourier
transformed back to real space to obtain a high-resolution reconstruction of the object,
fimage(r), which is the reconstructed structured illumination (SI) image in real space,
shown in Figure 3.17. The cutoff frequency of the apodization function is set to 90% of
the theoretical resolution limit to account for the non-circular shape of the support
region of the effective OTF.
Figure 3.17. Reconstruction of the SI image in real space, fimage(r). It is the improved
image obtained by the structured illumination technique, via an inverse Fourier transform
of drr(k) shown in Figure 3.15.
3.10. Methodology Summary
In order to help people to understand the linear structured illumination
microscopy, the structure of this method is shown in Figure 3.18. It can be taken as an
instruction of how to implement this method.
Figure 3.18. Block diagram of the methodology. It serves as a guide to implementing
linear structured illumination microscopy.
Chapter 4
RESULTS
The structured illumination image in real space shown in Figure 3.17 is better
resolved than its conventional counterpart shown in Figure 3.6. In this chapter, the
results of the conventional image and the structured illumination image are compared
both in real space and in reciprocal space.
4.1. Real Space Comparison
Since the object image, the conventional image, and the structured illumination
image are two-dimensional, each consists of many columns. For ease of comparison,
column 65 of each of the three images is examined.
Figure 4.1 is the magnitude plot of column 65 of object image D (r ) . Figure 4.2
and Figure 4.3 are the magnitude plots of column 65 of structured illumination image
fimage(r ) and conventional image DP (r ) respectively.
Figure 4.1. Magnitude plot of column 65 of object image in real space. It is the plot of
column 65 of the image shown in Figure 3.1.
Structured illumination (SI) image, fimage(r ) , and conventional image, DP (r ) ,
are shown together in Figure 4.4. The plots are shown to the same scale in order to make
comparison easy.
Figure 4.2. Magnitude plot of column 65 of SI image in real space. It is the plot of
column 65 of the SI image shown in Figure 3.17.
Figure 4.4 shows that the peaks of the plot of column 65 of the SI image near 45
pixels, 85 pixels, and 100 pixels are about half as wide as the corresponding peaks of
the plot of column 65 of the conventional image. Thus, the SI image has about twice the
resolution of the conventional image.
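The width comparison can be made quantitative by measuring the full width at half maximum (FWHM) of a peak. This sketch uses synthetic Gaussian profiles as stand-ins for column 65 of the conventional and SI images (the widths are illustrative, chosen so the SI peak has half the standard deviation):

```python
import numpy as np

def fwhm(profile):
    """Number of samples at or above half the profile's maximum."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    return above[-1] - above[0] + 1

x = np.arange(200)
conventional = np.exp(-0.5 * ((x - 100) / 8.0) ** 2)   # broader peak (sigma = 8)
si = np.exp(-0.5 * ((x - 100) / 4.0) ** 2)             # narrower peak (sigma = 4)

ratio = fwhm(conventional) / fwhm(si)                   # close to 2 for halved sigma
```

Applying such a measurement to the actual column plots would give an objective estimate of the resolution improvement factor.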
Figure 4.3. Magnitude plot of column 65 of conventional image in real space. It is the
plot of column 65 of the conventional image, DP(r), shown in Figure 3.6.
Figure 4.4. Comparison of column 65 of the conventional image and the SI image in real
space; the two panels show the plot of column 65 of the conventional image and the plot
of column 65 of the SI image.
4.2. Reciprocal Space Comparison
The plots of the spectrum magnitude of column 65 of the Fourier transforms of the
object image, structured illumination image, and conventional image are shown in Figure
4.5 (Dbar(k)), Figure 4.6 (drr(k)), and Figure 4.7 (DHbar(k)), respectively. Once
again, the structured illumination result and the conventional result are plotted to the
same scale to make the comparison easier; they are shown together in Figure 4.8.
Figure 4.5. Magnitude plot of column 65 of object image in reciprocal space. It is the
plot of the object information image, Dbar (k ) , shown in Figure 3.4.
Figure 4.6. Magnitude plot of column 65 of SI image in reciprocal space.
Figure 4.7. Magnitude plot of column 65 of conventional image in reciprocal space.
Figure 4.8. Comparison of column 65 of the conventional image and the SI image in
reciprocal space; the two panels show the plot of column 65 of the conventional image
in reciprocal space and the plot of column 65 of the SI image in reciprocal space.
The comparison in Figure 4.8 shows that the spectrum magnitude of column 65 of
the structured illumination (SI) image is about twice as broad at the base as that of the
conventional image. The structured illumination image contains superresolution
information residing outside the conventional OTF support region.
Chapter 5
DISCUSSION
The results shown in this project demonstrate the improvement of the resolution
of the observed image using a linear structured illumination technique. The improvement
is shown in both real space and reciprocal space.
In this project, a computer algorithm for the linear structured illumination microscopy
technique is developed. This technique allows the conventional diffraction limit to be
extended by an amount equal to the spatial frequency of the illumination pattern.
Because the illumination pattern itself is formed through the same diffraction-limited
optics, its spatial frequency can be at most the conventional cutoff frequency.
Therefore, the resolution of the structured illumination image can at most be
improved by a factor of two [10].
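The factor-of-two bound can be stated compactly. Writing k_0 for the conventional cutoff frequency and k_ill for the illumination pattern frequency (symbols chosen here for illustration, not taken from the project's notation), the extended cutoff is

```latex
k_{\max} \;=\; k_0 + k_{\mathrm{ill}} \;\le\; k_0 + k_0 \;=\; 2k_0 ,
```

since the pattern is projected through the same diffraction-limited optics and so k_ill ≤ k_0.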
This improvement of the conventional microscope can help reveal more
information about the structure and function of objects studied in areas such as cellular
biology, materials science, and semiconductor metrology. The resolution improvements
presented here are not related to constrained deconvolution methods; the enhancements
are due to physically measuring normally inaccessible information about the object [14].
Chapter 6
SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS
6.1. Summary
In this project, a computer algorithm of the linear structured illumination
microscopy theory is developed in order to test this theory and help researchers
understand it. To implement this algorithm, multiple images of an object are taken with
different phases and orientations of sinusoidally patterned illumination. Superresolution
information components then can be extracted from these images. The procedures of
separation, shifting, and reassembly of the superresolution information components are
presented and explained. A block diagram of the whole procedure of the structured
illumination method is presented. The results of the conventional microscope and the
structured illumination algorithm are generated and compared. The algorithm is verified
on test objects, and its performance is in agreement with theoretical predictions.
6.2. Conclusions
The algorithm developed in this project successfully implements the linear
structured illumination theory, and the results produced by the algorithm validate the
theory. The algorithm is captured in the form of a block diagram that is intended to
serve as a useful reference for researchers seeking to understand this method.
6.3. Recommendations
A much higher resolution can be achieved by using nonlinear structured
illumination microscopy [16]. That theory could be verified by developing a computer
algorithm similar to the one developed in this project, along with a clear presentation
and explanation of its more complicated procedures.
Definitive values of phase shifts are required to ensure the accuracy of the
reconstruction of images in this project. A method has been developed that can estimate
randomly chosen phase shifts in each image to permit the use of inexpensive actuation
equipment with no calibration [17]. This method may need to be validated by developing
a computer algorithm with illumination patterns having random phase shifts.
Noise is not considered in this project, but real measurements always contain
noise, so its effect on the algorithm needs to be investigated. Noise may reduce the
improvement factor of the structured illumination result, and a method to suppress its
effect may be needed to obtain better image resolution.
APPENDIX
Matlab Simulation Code
The m-files: wuline.m, OTF.m, and rotxy.m are derived from E. A. Ingerman and
M. G. L. Gustafsson's simulation codes [8].
1. linSI.m
%
% Clear previous variables and graphics windows
%
clear all;
close all;
%
% Initialize random number generators
%
rand('state',0);
randn('state',0);
%
% Set parameters
%
nphases = 3;        % number of phases
nangles = 3;        % number of angles
fc = 20;            % maximum spatial frequency
na = 1.4;           % numerical aperture (1.4 for oil immersion lens)
lamda = 500;        % wavelength in nanometers
n = 128;            % number of x and y pixels
%
% Pixel size in nanometers (using fc = 2 NA/lambda)
%
pixelsize = lamda/(2*na*fc);
ki = 2*pi*(fc)/n;
%
% Pattern (theta) and phase angles (phi)
%
theta = linspace(0,pi,nangles+1);
theta = theta(1:nangles);
phi = linspace(0,2*pi,nphases+1);
phi = phi(1:nphases);
%
% Generate object consisting of 2D rods of random length and orientation
%
xcrd = (-n/2:1:n/2-1);
ycrd = (-n/2:1:n/2-1);
xcrd = repmat( xcrd', 1, n );
ycrd = repmat( ycrd, n, 1 );
alpha = rand(n/2,1)*pi;
x1 = 3+(n-3)*rand(n/2,1);
y1 = 3+(n-3)*rand(n/2,1);
l = n/20+n/10*rand(n/2,1);
x2 = max(min(x1 + cos(alpha).*l,(n-3)*ones(size(x1))),2*ones(size(x1)));
y2 = max(min(y1 + sin(alpha).*l,(n-3)*ones(size(y1))),2*ones(size(y1)));
D = wuline( n, n, fix(x1), fix(y1), fix(x2), fix(y2) );
rad = sqrt( xcrd.^2 + ycrd.^2 );
rmax = max( max( rad ) );
ind = find( rad >= 50 );
D(ind) = 0;
%
% figure; colormap('gray'); imagesc(D); axis('square'); title('Object');
% figure; colormap('gray'); imagesc(abs(fftshift(fft2(D)))); axis('square'); title('Fourier Transform of the Object');
%
% Generate OTF
%
H = OTF(n,n,0,0,fc);
figure; colormap('gray'); imagesc(H); axis('square'); title('OTF')
figure; plot(H(n/2+1,:)); axis('square'); title('OTF')
%
% Transform object to reciprocal space, multiply by the OTF,
% and transform result back to real space
%
Dbar = fftshift(fft2(ifftshift(D)));
DHbar = H.*Dbar;
DP = fftshift(ifft2(ifftshift(DHbar)));
figure; colormap('gray'); imagesc(DP); axis('square'); title('Applied PSF to The Object')
figure; colormap('gray'); imagesc(abs(DHbar)); axis('square'); title('OTF Support Reciprocal Region')
%
% Generate arrays X1 and Y1 with n rows and n columns. For array X1 (Y1),
% rows (columns) are identical and columns (rows) have values ranging from 1 to n.
%
% Example for n=5:
%
% X1=[1 2 3 4 5; 1 2 3 4 5; 1 2 3 4 5; 1 2 3 4 5; 1 2 3 4 5]
% Y1=[1 1 1 1 1; 2 2 2 2 2; 3 3 3 3 3; 4 4 4 4 4; 5 5 5 5 5]
%
[X1,Y1] = meshgrid(1:n,1:n);
%
% Information components separation
%
% Use this to store the values of DIbar, which is the B matrix ( AX = B )
DIbars = zeros(n,n,nphases);
% The A matrix of the equations ( AX = B ); it depends on the shift phases phi
phase_matrix = [1 1 1; 1 exp(1i*2*pi/3) exp(-1i*2*pi/3); 1 exp(1i*4*pi/3) exp(-1i*4*pi/3)];
% Inverse of the A matrix
inv_phase_matrix = inv(phase_matrix);
% Sum of the (OTF)^2
hs = zeros(n,n);
x = 1:n;
y = (1:n)';
% Reconstruction of the superresolved image
rc = zeros(n,n,nphases^2);
sp = zeros(n,n,nphases*nphases);
I = zeros(n,n,nphases*nphases);
rimage = zeros(n,n,3);
hv = zeros(n,n);
replc = zeros(n,n);
dr = zeros(n,n);
%
% Phase angles
%
delta = linspace(0,2*pi,nphases+1);
delta = delta(1:nphases);
for itheta = 1:nphases
    for iphi = 1:nphases
        k = rotxy(theta(itheta))*[ki; 0];
        kx = k(1); ky = k(2);
        disp(['angle= ',num2str(theta(itheta)*180/pi),...
              ' phase= ',num2str(phi(iphi)*180/pi),...
              ' kx= ',num2str(kx),...
              ' ky= ',num2str(ky)]);
        %
        % Real space and reciprocal space images of illumination pattern
        %
        I(:,:,(itheta-1)*3+iphi) = (1-cos(kx*X1+ky*Y1+phi(iphi)))/2;
        figure; colormap('gray');
        imagesc(abs(I(:,:,(itheta-1)*3+iphi))); axis('square');
        Ibar = fftshift(fft2(ifftshift(I)));
        %
        % DI - illumination intensity times object
        %
        DI = D.*I(:,:,(itheta-1)*3+iphi);
        figure; colormap('gray'); imagesc(DI); axis('square');
        realstg = ['Illumination Patterned Object in Real Space',...
                   ' Angle= ',num2str(theta(itheta)*180/pi),...
                   ' Phase= ',num2str(phi(iphi)*180/pi)];
        title(realstg);
        % DIbar is the image in reciprocal space. We shift the image so that
        % the "lowest" Fourier modes are in the center of the picture.
        DIbar = fftshift(fft2(ifftshift(DI)));
        % Multiply the image in reciprocal space by the OTF.
        DIbars(:,:,iphi) = H.*DIbar;
        fd(:,:) = H.*DIbar;
        figure; colormap('gray'); imagesc(abs(DIbar)); axis('square');
        figure; colormap('gray');
        imagesc(abs(DIbars(:,:,iphi))); axis('square');
        recipstg = ['Illumination Patterned Object in Reciprocal Space',...
                    ' Angle= ',num2str(theta(itheta)*180/pi),...
                    ' Phase= ',num2str(phi(iphi)*180/pi)];
        title(recipstg);
        % Get back the image in real space.
        ifd = fftshift(ifft2(ifftshift(fd)));
        rimage(:,:,iphi) = ifd;
    end
    %
    spimage(:,:,1:nphases) = ifft(rimage,[],3);
    nx = kx*n/(2*pi); ny = ky*n/(2*pi);
    ind = [(0:(nphases-1)/2) -(nphases-1)/2:1:-1];
    xx = repmat(x,n,1);
    yy = repmat(y,1,n);
    %
    % This part solves the equations to separate the components
    %
    for j = 1:nphases
        temp_separated = zeros(n,n,nphases);
        Ir(:,:) = exp(-1i*(kx*ind(j)*xx+ky*ind(j)*yy));
        for k = 1:nphases
            temp_separated(:,:,k) = inv_phase_matrix(j,k).*DIbars(:,:,k);
            sp(:,:,(itheta-1)*3+j) = sp(:,:,(itheta-1)*3+j)+temp_separated(:,:,k);
        end
        figure; colormap('gray');
        imagesc(abs(sp(:,:,(itheta-1)*3+j))); axis('square');
        hv(:,:) = OTF(n,n,-nx*ind(j),-ny*ind(j),fc);
        %
        % Shift the OTF by taking inverse Fourier transform and exponential factors
        %
        replc(:,:) = fft2(ifftshift(ifft2(sp(:,:,(itheta-1)*3+j)).*Ir(:,:)));
        scalefactor = abs(cos((j-1)*pi/3));
        figure; colormap('gray'); imagesc(abs(replc(:,:))); axis('square');
        rc(:,:,(itheta-1)*3+j) = replc(:,:).*(scalefactor*conj(hv(:,:)));
        hs = hs + abs(scalefactor*hv(:,:)).^2;
    end
end
%
% Deconvolution and reconstruction with a Wiener filter
%
for t = 1:nphases*nphases
    dr = dr + rc(:,:,t)./( hs + .005*length(itheta)*(.0000001)^2 );
end
figure; colormap('gray'); imagesc(abs(dr(:,:)));
axis('square'); title('Reconstruction of The Object in Reciprocal Space')
%
% Triangular function
%
[k_x, k_y] = meshgrid(-n/2+1:n/2, -n/2+1:n/2);
k_r = sqrt(k_x.^2+k_y.^2);
k_max = .9*.9*fc*((nphases-1)/2+1);
bhs = cos(pi*k_r/(2*k_max));
indi = find( k_r > k_max );
bhs(indi) = 0;
figure; colormap('gray'); imagesc(abs(dr.*bhs));
axis('square'); title('Apodization of The Object in Reciprocal Space')
drr = dr.*bhs;
fimage = ifft2(ifftshift(drr));
figure; colormap('gray'); imagesc(abs(fimage));
axis('square'); title('Reconstruction of The Object in Real Space')
2. wuline.m
function y = wuline(m,n,x1,y1,x2,y2)
%y = zeros(m,n);
xd = x2 - x1;
yd = y2 - y1;
indh = find( abs(xd) > abs(yd) );
indv = find( abs(xd) <= abs(yd) );
yh = zeros(m,n); yv = zeros(n,m);
if ~isempty(indh)
    yh = drawline( m, n, x1(indh), y1(indh), x2(indh), y2(indh) );
end
if ~isempty(indv)
    yv = drawline( n, m, y1(indv), x1(indv), y2(indv), x2(indv) );
end
y = yh + yv';

function y = drawline( m, n, x1, y1, x2, y2 )
y = zeros( m, n );
ind = find( x1 > x2 );
if ~isempty( ind )
    t = x2(ind);
    x2(ind) = x1(ind);
    x1(ind) = t;
    t = y2(ind);
    y2(ind) = y1(ind);
    y1(ind) = t;
end
xd = x2 - x1;
yd = y2 - y1;
grad = yd./xd;
% end point 1
xend = fix( x1 + .5 );
yend = y1 + grad .* ( xend - x1 );
xgap = invfrac( x1 + .5 );
ix1 = round( xend );
iy1 = round( yend );
y( sub2ind(size(y), ix1, iy1) ) = y( sub2ind(size(y), ix1, iy1) ) + invfrac( yend ).*xgap;
y( sub2ind(size(y), ix1, iy1+1) ) = y( sub2ind(size(y), ix1, iy1+1) ) + frac( yend ).*xgap;
yf = yend + grad;
% end point 2
xend = fix( x2 + .5 );
yend = y2 + grad .* ( xend - x2 );
xgap = invfrac( x2 - .5 );
ix2 = round( xend );
iy2 = round( yend );
y( sub2ind(size(y), ix2, iy2) ) = y( sub2ind(size(y), ix2, iy2) ) + invfrac( yend ).*xgap;
y( sub2ind(size(y), ix2, iy2+1) ) = y( sub2ind(size(y), ix2, iy2+1) ) + frac( yend ).*xgap;
for jj = 1:length(ix1)
    for k = ix1(jj)+1:ix2(jj)-1
        y(k,fix(yf(jj))) = y(k,fix(yf(jj))) + invfrac(yf(jj));
        y(k,fix(yf(jj))+1) = y(k,fix(yf(jj))+1) + frac(yf(jj));
        yf(jj) = yf(jj) + grad(jj);
    end
end

function y = frac(x)
y = x - fix(x);

function y = invfrac(x)
y = ones(size(x)) - frac( x );
3. OTF.m
function h = OTF( m, n, off_x, off_y, fc )
h = zeros(m,n);
for k = 1:m
    for l = 1:n
        q = sqrt((k-m/2-1-off_y)^2+(l-n/2-1-off_x)^2);
        if q > fc
            h(k,l) = 0;
        else
            b = acos( q/fc );
            h(k,l) = (2*b-sin(2*b))/pi;
            %h(k,l) = 1;
        end
    end
end
return
4. rotxy.m
function a=rotxy(theta)
%a=rotxy(theta)
%
% rotation matrix
a=[cos(theta) -sin(theta);
sin(theta) cos(theta) ];
REFERENCES
[1] M. Abramowitz, M. W. Davidson, (2007). "Introduction to Microscopy," Molecular
Expressions [Online]. Available:
http://micro.magnet.fsu.edu/primer/anatomy/introduction.html, Date Accessed: 2009,
Oct. 25.
[2] E. K. Abbe, "Beiträge zur Theorie des Mikroskops und der mikroskopischen
Wahrnehmung," Archiv für Mikroskopische Anatomie, vol. 9, no. 1, Dec. 1873, pp. 413-418.
[3] "Moiré Effect" [Online]. Available:
http://whatis.techtarget.com/definition/0,,sid9_gci768750,00.html, Date Accessed: 2009,
Oct. 25.
[4] M. G. L. Gustafsson, "Surpassing the Lateral Resolution Limit by a Factor of Two
Using Structured Illumination Microscopy," Journal of Microscopy, vol. 198, pt. 2, May
2000, pp. 82-87.
[5] M. G. L. Gustafsson, D. A. Agard, J. W. Sedat, "Doubling the Lateral Resolution of
Wide-Field Fluorescence Microscopy Using Structured Illumination," Proceedings of
SPIE, vol. 3919, 2000, pp. 141-150.
[6] M. G. L. Gustafsson, "Nonlinear Structured-Illumination Microscopy: Wide-Field
Fluorescence Imaging with Theoretically Unlimited Resolution," Proceedings of the
National Academy of Sciences of the United States of America, vol. 102, no. 37, Sep. 13,
2005, pp. 13081-13086.
[7] J. W. Goodman, Introduction to Fourier Optics, 3rd Edition, Roberts and Company
Publishers, Englewood, Colorado, 2004.
[8] E. A. Ingerman, R. A. London, M. G. L. Gustafsson, "Signal, Noise, and Resolution
in Linear and Nonlinear Structured-Illumination Microscopy,” unpublished.
[9] S. A. Shroff, J. R. Fienup, D. R. Williams, "OTF Compensation in Structured
Illumination Superresolution Images," Proceedings of SPIE 7094, presented at the
Unconventional Imaging IV, San Diego, CA, Aug. 2008, pp. 709402-1-11.
[10] R. Heintzmann, C. Cremer, "Laterally Modulated Excitation Microscopy:
Improvement of Resolution by Using a Diffraction Grating," Proceedings of SPIE, vol.
3568, 1999, pp. 185-196.
[11] Wikipedia, "Point Spread Function" [Online]. Available:
http://en.wikipedia.org/wiki/Point_spread_function, Date Accessed: 2009, Oct. 25.
[12] D. Williams, "Structured Illumination Imaging of the Retina," David Williams' Lab
[Online]. Available: http://www.cvs.rochester.edu/williamslab/r_imageretina.html, Date
Accessed: 2009, Oct. 25.
[13] R. Heintzmann, "High Resolution Microscopy: Structured Illumination and
Pointillism," presented at the ARC Centre of Excellence for Coherent X-ray Science
Workshop, Melbourne, Australia, Sep. 2008.
[14] M. G. L. Gustafsson, "Extended-Resolution Reconstruction of Structured
Illumination Microscopy Data," Adaptive Optics: Analysis and Methods / Computational
Optical Sensing and Imaging / Information Photonics / Signal Recovery and Synthesis
Topical Meetings on CD-ROM, Technical Digest, Optical Society of America, 2005,
paper JMA2.
[15] Wikipedia, "Apodization" [Online]. Available:
http://en.wikipedia.org/wiki/Apodization, Date Accessed: 2009, Oct. 25.
[16] R. Heintzmann, T. M. Jovin, C. Cremer, "Saturated Patterned Excitation
Microscopy-A Concept for Optical Resolution Improvement," Journal of the Optical
Society of America, vol. 19, no. 8, 2002, pp. 1599–1609.
[17] S. A. Shroff, J. R. Fienup, D. R. Williams, "Estimation of Phase Shifts in Structured
Illumination for High Resolution Imaging," presented at the Frontiers in Optics, 2007
Optical Society of America Annual Meeting, paper FMH4.