BEAD-BASED REGISTRATION OF MICROSCOPY IMAGES
A Project
Presented to the faculty of the Department of Electrical and Electronic Engineering
California State University, Sacramento
Submitted in partial satisfaction of
the requirements for the degree of
MASTER OF SCIENCE
in
Electrical and Electronic Engineering
by
Shuifen Li
SPRING
2013
BEAD-BASED REGISTRATION OF MICROSCOPY IMAGES
A Project
by
Shuifen Li
Approved by:
__________________________________, Committee Chair
Warren D. Smith, Ph.D.
__________________________________, Second Reader
Preetham Kumar, Ph.D.
____________________________
Date
Student: Shuifen Li
I certify that this student has met the requirements for format contained in the University
format manual, and that this project is suitable for shelving in the Library and credit is to
be awarded for the project.
__________________________, Graduate Coordinator
Preetham Kumar, Ph.D.
Department of Electrical and Electronic Engineering
___________________
Date
Abstract
of
BEAD-BASED REGISTRATION OF MICROSCOPY IMAGES
by
Shuifen Li
In a microscope system with multiple channels (cameras), each channel has a different optical path, thus introducing misalignment when imaging the same objects.
This project focuses on correcting this misalignment to get a high-resolution image. To
achieve efficient and sample-independent alignment, fluorescent beads are used as
objects in the images.
Images from two channels are considered in this project. Several registration algorithms are discussed, applied, and evaluated, including Gaussian fitting, phase correlation, and point pattern matching. For testing, this project uses six three-dimensional image files taken by a two-channel microscope system.
After establishing the corresponding bead pairs, a nonlinear approach is applied to align the two images. Phase correlation helps to improve the performance.
A pair of images also may be rotated relative to each other. The correction of such a pair is not considered in this project; further studies should include this case.
_______________________, Committee Chair
Warren D. Smith, Ph.D.
_______________________
Date
ACKNOWLEDGEMENTS
I would like to thank Warren D. Smith, Ph.D., for guiding me through this project, and indeed through my entire two-year graduate program. I also want to thank TieQiao (Tim) Zhang, Ph.D., from the Center for Biophotonics Science and Technology (CBST) of the University of California, Davis, for providing me with the basic ideas, having discussions with me, and correcting many details.
TABLE OF CONTENTS
Page
Acknowledgements ........................................................................................................ vi
List of Tables ................................................................................................................. ix
List of Figures ..................................................................................................................x
Chapter
1. INTRODUCTION .....................................................................................................1
2. BACKGROUND .......................................................................................................3
2.1. Feature detection .........................................................................................4
2.2. Feature matching .........................................................................................7
2.2.1. Point set matching ............................................................................8
2.2.2. Phase correlation ............................................................................11
2.3. Transform model estimation .....................................................................12
3. METHODOLOGY ..................................................................................................13
4. EXPERIMENTAL RESULTS AND ANALYSIS ..................................................15
4.1. Open DV files ...........................................................................................15
4.2. Select images with the highest contrast ....................................................17
4.3. Pre-processing ...........................................................................................20
4.4. Register the target image to the reference image ......................................22
4.4.1. Detect features ...............................................................................22
4.4.2. Match features ................................................................................25
4.4.3. Calculate the parameters of the mapping function ........................29
4.5. Practical experiment..................................................................................31
5. DISCUSSION ..........................................................................................................34
6. SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS...........................35
Appendix A. User Instructions ......................................................................................37
A.1. Select a file for registration ......................................................................37
A.2. Calibration or correction ..........................................................................37
A.3. Target channel and reference channel ......................................................38
Appendix B. Code Listing .............................................................................................40
References ......................................................................................................................54
LIST OF TABLES
Table
Page
1. Table 4.1. The percentage of correct correspondences for the six
test files used in this project ....................................................................................29
2. Table 4.2. Quantitative analysis of the performance of the mapping
function ...................................................................................................................31
LIST OF FIGURES
Figure
Page
1. Figure 2.1. An image with distinctive objects ..........................................................4
2. Figure 2.2. An ultrasound image without clear boundaries ......................................5
3. Figure 2.3. A bead image taken by a microscope .....................................................5
4. Figure 2.4. The slice profile of a bead in Figure 2.3 ..................................................6
5. Figure 2.5. Distortion observed in the images ...........................................................8
6. Figure 2.6. An example of the failure of the shortest distance algorithm ..................9
7. Figure 4.1. The flow chart for the image registration software tool ........................16
8. Figure 4.2. The flow chart for finding the parameters of the mapping
function ...................................................................................................................17
9. Figure 4.3. 3D view of a DV file .............................................................................17
10. Figure 4.4. A mesh plot of a 2D image with higher contrast ...................................18
11. Figure 4.5. A mesh plot of a 2D image with lower contrast ....................................19
12. Figure 4.6. Standard deviations of images in the reference channel .......................20
13. Figure 4.7. Standard deviations of images in the target channel .............................21
14. Figure 4.8. The result of Gaussian fitting ................................................................24
15. Figure 4.9. The result of feature matching in the spatial domain ............................27
16. Figure 4.10. The corrected feature matching by phase correlation..........................28
17. Figure 4.11. The result of applying the mapping function on matched
pairs in Figure 4.10 .................................................................................................30
18. Figure 4.12. Overlaid target image and reference image before registration ..........32
19. Figure 4.13. Overlaid target image and reference image after registration .............33
20. Figure A.1. A pop-up window for users to select a DV file ....................................37
21. Figure A.2. The pop-up window for choosing correcting or calibrating .................38
22. Figure A.3. The pop-up window when reading the file ...........................................38
23. Figure A.4. Selecting the target channel and reference channel ..............................39
CHAPTER 1
INTRODUCTION
This project focuses on the development of an image registration software tool for
conventional and super-resolution fluorescence microscopes. In a microscope imaging
system with multiple channels (cameras), each camera has a different optical path, thus
introducing a relative misalignment. Mechanical positioning of cameras brings in yet
another misalignment, which cannot be completely removed by mechanical adjustment.
Total misalignment is a combination of these external factors and the intrinsic optical aberrations.
To properly overlay two images, image registration involves estimating the
transformation parameters (translation, rotation, and scaling) between the two images,
which is a time-consuming task,[1] especially for high-resolution images. However, in a
fixed microscope system, a single optimal transformation function can be used for all sets
of images captured by this system, which makes real-time correction possible.
A medical specimen in an image usually has an arbitrary shape, and often many
areas in this kind of image appear blurred. This vague information can negatively affect the calibration procedure. Fluorescent beads, which are easy to locate and are widely used for calibrating microscope systems,[2] are used as a replacement for a live specimen in this project.
This report describes the development of the image registration algorithms. A
brief background of image registration is presented in Chapter 2. Chapter 3 provides the
methodology used to develop and test an image registration software tool. Chapter 4 presents and analyzes the experimental results, including detecting beads, finding the corresponding pairs, and establishing the transformation function and calculating the parameters of the mapping function. After a discussion of this project in Chapter 5, Chapter 6 concludes this research and offers recommendations for future work.
CHAPTER 2
BACKGROUND
Image registration is the process of analyzing and integrating two or more images of the same scene taken at different times, from different viewpoints, and/or by different sensors.[3] It
is widely used in computer vision, medical imaging, military target recognition,
astrophotography, etc.
This project focuses on registering one image (target image) to another image
(reference image). Image registration algorithms which perform operations on a target
image to make it align with the reference image can be classified into intensity-based and
feature-based.[4]
Intensity-based techniques find matches based on the intensities of the target
image and the reference image. Approaches that compare the intensities of entire images
are sensitive to intensity changes introduced, for instance, by noise, varying illumination,
and/or by using different sensor types.[3] Another disadvantage of intensity-based
methods occurs if windows are used to register sub-images. The same size and shape
window may not cover the same parts of the scene in the target and reference images.
Besides, the content in the window may not be useful for registration (it may not contain distinctive parts of the image).
Feature-based methods often are applied on images that contain distinctive and
easily detectable objects, such as the buildings in the image in Figure 2.1. However,
medical images usually are vague in details. For example, the thyroid structures in the
medical ultrasound image in Figure 2.2 do not possess clear boundaries.
Figure 2.1. An image with distinctive objects.
To provide distinct objects, this project introduces fluorescent beads as extrinsic
features in the microscope images.[5] Figure 2.3 shows a high-resolution picture of
fluorescent beads. The distinctive beads are suitable for feature-based registration
methods. Three major registration steps are employed in this project: feature detection,
feature matching, and transform model estimation.
2.1. Feature detection
Feature detection detects, either manually or automatically, salient and distinctive
objects (edges, closed-boundary regions, etc.). These objects can be represented by
distinctive points (line intersections, corners, centers of objects). The distinctive points
also are called control points (CP) and are used for feature matching.
Figure 2.2. An ultrasound image without clear boundaries.
Figure 2.3. A bead image taken by a microscope. The beads look bright and are easy to
detect.
The task of feature detection in this project is to determine the CPs, chosen to be the centers of the beads. A typical slice profile of a bead in Figure 2.3 is plotted in Figure
2.4 (with normalized amplitude) and is close to a Gaussian shape.[6]
Figure 2.4. The slice profile of a bead in Figure 2.3. It is close to a Gaussian shape.
The CP of a bead may not be the brightest point or the geometric center of the bead. Since the profiles of the beads are close to Gaussian in shape, Gaussian fitting can
be used to find a bead’s center. Gaussian fitting calculates the center of a given set of data
by fitting these data using a Gaussian shape. For a one-dimensional Gaussian distribution
function with mean μ and variance σ², that is,
f(x) = a e^{-(x-\mu)^2/(2\sigma^2)} ,
x = μ is its center. For a two-dimensional (2D) Gaussian distribution with no correlation, that is,
f(x, y) = A e^{-\left( (x-x_0)^2/(2\sigma_x^2) + (y-y_0)^2/(2\sigma_y^2) \right)} ,
(x₀, y₀) is its center.
Through Gaussian fitting, beads in the target image and reference image can be
represented by points. The next step is to find points in the reference image that
correspond to the points in the target image.
2.2. Feature matching
Feature matching algorithms can operate in both the spatial and frequency
domains. Phase correlation, which aligns images by a single translational movement, is
one of the algorithms that is performed in the frequency domain and requires relatively
little computation. However, this algorithm is not satisfactory on its own.
In a microscope image, there may be different distortions in different areas, as
shown in Figure 2.5. In the figure, the centers of the beads are shown by dots in the
reference image and by crosses in the target image. Corresponding to the reference beads,
the movement of the target beads on the left side is different from the movement of the
target beads on the right side. Due to distortion, it is impossible to correct the images by a
single translational movement.
Figure 2.5. Distortion observed in the images. Dots represent the centers of the beads in a
reference image, and crosses are the centers of beads in a target image. Relative to the
reference beads, the movement of the target beads on the left side is different from the
movement of the target beads on the right side.
Another technique is to process portions of the image individually. However, this
window technique cannot succeed in all cases, since the distribution of the beads may
vary greatly from region to region. Another difficulty is that, because most of the image
intensity stems from the beads, once two beads of a pair are incorrectly included in two
different windows, the estimated movement would not be correct.
2.2.1. Point set matching
Instead of operating in the frequency domain, a matching of point sets can be
performed by the shortest distance algorithm that associates each point in one image with
its nearest neighbor in the other. However, this method may not find the correct
correspondence pairs. For example, in Figure 2.6, the middle two points of the four points in the ellipse are the nearest correspondence pair. However, based on the whole image, one can tell that the left two points form one correct correspondence pair and the right two points another correct pair.
Figure 2.6. An example of the failure of the shortest distance algorithm. Dots represent the centers of the beads in a reference image, and crosses are the centers of beads in a target image. An example of failure of the shortest distance algorithm for finding correspondence pairs is illustrated in the ellipse.
Another feature matching algorithm, from Scott and Longuet-Higgins, decides correspondence pairs by an orthogonal pairing matrix P computed from a matrix of Gaussian-weighted distances, G.[7] Let d_ij denote the Euclidean distance between a point R_i (i = 1, 2, ..., m) in the reference image and a point T_j (j = 1, 2, ..., n) in the target image. The element of the rectangular m × n matrix G is
G_{ij} = \exp\left( -d_{ij}^2 / (2\sigma^2) \right) ,
where σ is an appropriate unit of distance. The singular value decomposition of G gives an m × n rectangular diagonal matrix S with nonnegative real numbers on the diagonal,
G = U S V' .
Here, P is obtained by replacing S in the above equation by an identity matrix E, resulting in
P = U E V' .
The element P_ij of matrix P indicates the extent of pairing between points R_i and T_j. If P_ij is both the greatest element in its row and the greatest element in its column,[7] then the correspondence point of R_i in the target image is T_j.
Unlike some other algorithms,[8][9] this algorithm works well in the case of inexact matching, i.e., m ≠ n. However, it still introduces some incorrectly matched pairs in this project. To exclude the badly matched pairs, this project uses phase correlation to perform a second screening, as discussed in the next section.
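To make the pairing rule concrete, the following MATLAB sketch (not the project's code; the project's implementation is in Appendix B) applies it to two point sets. It assumes R (m x 2) and T (n x 2) hold the reference and target bead centers, that sigma is an appropriate unit of distance, and that the MATLAB version supports implicit expansion; the function and variable names are illustrative.
function pairs = slh_pairs(R, T, sigma)
% Gaussian-weighted distance matrix G; rows index reference points R_i,
% columns index target points T_j (requires implicit expansion, R2016b+).
D2 = (R(:,1) - T(:,1)').^2 + (R(:,2) - T(:,2)').^2;
G  = exp(-D2 / (2*sigma^2));
[U, ~, V] = svd(G);                        % G = U*S*V'
E = eye(size(G,1), size(G,2));             % replace the singular values by ones
P = U * E * V';                            % pairing matrix
pairs = zeros(0, 2);                       % accepted (i reference, j target) pairs
for i = 1:size(P, 1)
    [~, j]  = max(P(i, :));                % greatest element in row i
    [~, i2] = max(P(:, j));                % greatest element in column j
    if i2 == i                             % dominant in both its row and its column
        pairs(end+1, :) = [i j];           %#ok<AGROW>
    end
end
end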
2.2.2. Phase correlation
The phase correlation method is based on the Fourier shift theorem.[10] Assume replicas of the same image, f₁ and f₂, such that f₂ has a horizontal shift x₀ and a vertical shift y₀ compared with f₁. That is, the relationship of these two images is
f_2(x, y) = f_1(x - x_0, y - y_0) .
Then, their Fourier transforms, F₁ and F₂, are related by
F_2(u, v) = F_1(u, v)\, e^{-j 2\pi (u x_0 + v y_0)} .
The translational movement (x₀, y₀) can be estimated by the phase shift in the Fourier domain.
The phase correlation technique uses the fast Fourier transform to compute the
cross-correlation between two images. It is faster than other matching methods.[1]
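As a minimal sketch of this computation (variable names are illustrative and not taken from the project code), the shift between two same-size grayscale images f1 and f2 could be estimated in MATLAB as follows; estimated shifts larger than half the image size wrap around and would need unwrapping.
F1 = fft2(double(f1));
F2 = fft2(double(f2));
CPS = F2 .* conj(F1);                % cross-power spectrum
CPS = CPS ./ max(abs(CPS), eps);     % keep only the phase term
r = real(ifft2(CPS));                % impulse near the translational offset
[~, idx] = max(r(:));
[row, col] = ind2sub(size(r), idx);
x0 = col - 1;                        % zero-based shift estimates; values beyond
y0 = row - 1;                        % half the image size wrap to negative shifts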
This project proposes a matching method that simultaneously uses both intensity-based and feature-based approaches. The phase correlation method assists in screening out the mismatched point pairs that are produced by the point set matching algorithm.
After establishing the matched pairs, the next step is to construct a mapping
function that transforms CPs in the target image to overlay with their corresponding CPs
in the reference image. Since the CPs are supposed to be the most distinctive
characteristics of these two images, once CPs are mapped onto their counterparts, the two
images can be registered by the mapping function.
2.3. Transform model estimation
Transform model estimation consists of choosing the type of mapping function
and estimating its parameters. This project employs a global linear model to map the
matched points.
A point (T_x, T_y) in the target image can be deformed to (T'_x, T'_y) by a transform that consists of rotation, translation, and scaling. That is,
T'_x = a_0 + a_1 T_x + a_2 T_y
T'_y = b_0 + b_1 T_x + b_2 T_y ,
where a₀, a₁, a₂, b₀, b₁, and b₂ are unknown parameters of the mapping function.
Usually, all CPs are used to calculate the parameters of the mapping function. The
number of CPs is often higher than the minimum number required for determining the
parameters. To take all CPs into account, the parameters of the mapping functions are
computed by means of a least-squares fit. That is,
SE = \sum_{i=1}^{k} \left( (T'_{ix} - R_{ix})^2 + (T'_{iy} - R_{iy})^2 \right)
is minimized, where the counterpart of (T_x, T_y) in the reference image is (R_x, R_y). The set of parameters that minimizes SE results in the optimal mapping function.
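For illustration only (the project itself searches for its parameters iteratively; see Chapter 4 and Appendix B), this least-squares fit has a direct solution in MATLAB. The sketch below assumes Tm and Rm are k x 2 arrays of matched target and reference control points; the names are illustrative.
k = size(Tm, 1);
M = [ones(k,1), Tm(:,1), Tm(:,2)];   % design matrix with columns [1, Tx, Ty]
abc = M \ Rm(:,1);                   % [a0; a1; a2] for the x mapping
bcd = M \ Rm(:,2);                   % [b0; b1; b2] for the y mapping
Tmapped = [M*abc, M*bcd];            % mapped target control points
SE = sum(sum((Tmapped - Rm).^2));    % the squared error being minimized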
13
CHAPTER 3
METHODOLOGY
To implement image registration following the three steps described in Chapter 2, the properties of the features in the images of interest should be considered first. This project
uses ImageJ (National Institutes of Health (NIH), Bethesda, Maryland) to analyze the
image files.
ImageJ supports a wide number of standard image file formats, such as TIFF,
GIF, JPEG, and BMP. It allows easy access to image and meta-data. It supports standard
image processing functions such as contrast manipulation, sharpening, smoothing, edge
detection, and median filtering. It does geometric transformations such as scaling,
rotation, and flips.[11] This project uses ImageJ to study the area and pixel value statistics
of selected areas and create density histograms and line profile plots.
ImageJ helps to determine the algorithms for feature detection. To implement the
algorithms, this project employs MATLAB (MathWorks, Natick, Massachusetts). In
addition to the general functions, this project uses functions from the MATLAB Image
Processing Toolbox. The Image Processing Toolbox provides a comprehensive set of
reference-standard functions for image processing and analysis.[12] The images used in this project are 1024 x 1024 pixels. MATLAB functions for the basic matrix operations, Fourier analysis, noise
reduction, and morphological operations on binary images are used in this project. The
functions for 2D spatial filters and image display are used to simulate the 2D Gaussian
beads.
The microscope used in this project is a Delta Vision (DV) OMX (Optical
Microscope eXperimental) Imaging System (Applied Precision, Issaquah, Washington).
The Delta Vision OMX offers super-resolution imaging using 3D structured illumination
(3D-SIM) and/or localization microscopy techniques. The OMX microscope provides multi-wavelength optical sectioning of biological samples with subdiffraction resolution, allowing new insights into biological structures.[13] Six DV files collected at the Center for Biophotonics Science and Technology (CBST), administered by the University of
California, Davis, are used as test files for this project. They are named
“tsbeads_0_SIR.dv”, “tsbeads_2_SIR.dv”, “tsbeads_3_SIR.dv”,
“Tbeads_091812_01_SIR.dv”, “Tbeads_091812_02_SIR.dv”, and
“Tbeads_091812_03_SIR.dv”. These test files cover images with various bead sizes and
bead distributions.
Although the quality of the registration can be determined visually, this project
also uses quantitative measures of the performance of the registration algorithms. Both
qualitative and quantitative measurements of the accuracy of the methods are provided in
this project. MATLAB provides many easy-to-use functions for evaluation. Functions in
the statistics toolbox are used to evaluate algorithm performance.
CHAPTER 4
EXPERIMENTAL RESULTS AND ANALYSIS
The steps developed in this project to register two images stored in a DV file are
shown in Figure 4.1. Appendix A describes the user interface of the image registration
software tool. Appendix B shows the MATLAB code for the tool. A library for reading
DV files is loaded first. Each layer (2D data) of the three-dimensional (3D) file can be
read out separately. It is easier to register images with clearly distinguishable features.
Images with the highest contrast from the target and reference channels, respectively, are
selected as the target and reference images for registration. After that, these two images
are pre-processed to reduce noise that introduces error in registration.
Finding optimal parameters for the mapping function is the most time-consuming
step in this project. However, once the mapping parameters for a system are calculated,
these parameters can be saved in a file and used to register other images taken by this
system. This project provides an interface so that users can choose whether to calculate
the parameters or not. The major steps of finding optimal parameters for the mapping
function are shown in Figure 4.2.
4.1. Open DV files
The microscope imaging system records the beads from different channels (e.g.,
reference channel and target channel) and saves these 3D data to a DV file. This DV
image file represents a 3D image by many 2D images.
A third party provides a library to open and read DV files. Once the library is
loaded, the functions in the library can be called to get the 3D image data.
[Flow chart: Load library → Open DV file → Select images with the highest contrast as reference and target images → Pre-process → Register based on previous result? If no, find the parameters of the mapping function and save them; if yes, load the parameters of the mapping function → Apply the mapping function on the target image.]
Figure 4.1. The flow chart for the image registration software tool.
[Flow chart: Detect the beads → Find the corresponding bead pairs → Calculate the parameters of the mapping function.]
Figure 4.2. The flow chart for finding the parameters of the mapping function.
4.2. Select images with the highest contrast
Figure 4.3 shows 3D images from two channels in a DV file. Before calibration, these two channels do not align with each other.
Figure 4.3. 3D view of a DV file. There are two channels in this DV file. The upper channel looks fuzzier. The bottom channel looks brighter. Along the z-axis there are 25 2D images for each channel.
Depth perception in the 3D image is provided by many 2D images. Each channel
in Figure 4.3 has 25 2D images. Finding the 2D images that hold the richest information for the target and reference channels can facilitate feature (bead) detection and improve the accuracy of the registration. For each channel, the best 2D image should have the highest contrast among
all 2D images in that channel. Figure 4.4 and Figure 4.5 illustrate two 2D images from
the target channel. The one in Figure 4.4 has a higher contrast than the one in Figure 4.5.
It is easier to detect beads in Figure 4.4 than in Figure 4.5.
Figure 4.4. A mesh plot of a 2D image with higher contrast.
Figure 4.5. A mesh plot of a 2D image with lower contrast.
The standard deviation (SD) is a measure of how far a signal fluctuates from the
mean. The standard deviation of image intensity (brightness) can be used to measure the
contrast of the images here. A larger SD corresponds to higher contrast. The SD of an n-pixel image with intensity f and mean m_f is
SD = \sqrt{ \frac{1}{n} \sum_{k,l} \left( f[k,l] - m_f \right)^2 } .
Figure 4.6 and Figure 4.7 plot the SDs of images in the reference channel and
target channel, respectively. These two figures indicate that the 14th image in the
reference channel and the 17th image in the target channel have higher contrast than other
images. These two images are used as the reference image and the target image of this
DV file.
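A minimal sketch of this selection step, assuming stack is an ny-by-nx-by-nz array holding the 2D images of one channel; the project's actual routine (getfocusimage in Appendix B) uses a more elaborate block-wise contrast measure, so this is illustrative only.
nz = size(stack, 3);
sd = zeros(nz, 1);
for k = 1:nz
    slice = double(stack(:,:,k));
    sd(k) = std(slice(:));          % standard deviation of the pixel intensities
end
[~, best] = max(sd);                % index of the highest-contrast 2D image
focus = stack(:,:,best);            % selected image for registration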
Figure 4.6. Standard deviations of images in the reference channel.
4.3. Pre-processing
The bead images, as shown in Figure 4.3, are grayscale images. The aim of the pre-processing part of the algorithm is to reduce the noise and keep the bead information.
It is implemented based on a statistical thresholding method[14] in which pixels in an
image are dichotomized into two classes, the object and the background, by a threshold.
Figure 4.7. Standard deviations of images in the target channel.
An optimal threshold is determined by minimizing an objective function,
OBJ = \sigma_1^2 + \sigma_2^2 - \sigma_1 \sigma_2 ,
where σ₁ and σ₂ are the standard deviations of the two classes. To get the optimal threshold, this project measures and compares OBJ values for 256 thresholds.
Another kind of noise that needs to be removed is small objects consisting of only a few pixels (fewer than 16 pixels). The MATLAB function bwareaopen is used to remove this kind of noise in this project.
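A minimal sketch of this pre-processing step, assuming Img is a grayscale bead image; the threshold selection mirrors the preprocessing routine in Appendix B (which keeps the threshold giving the best objective value), and the names are illustrative.
data = double(Img(:));
lo = min(data);  hi = max(data);
step = (hi - lo) / 256;                    % candidate threshold spacing
best = -inf;  thrd = lo;
for i = 1:255
    t = lo + i*step;
    c1 = data(data <= t);                  % background class
    c2 = data(data >  t);                  % object (bead) class
    if isempty(c1) || isempty(c2), continue; end
    s1 = std(c1);  s2 = std(c2);
    obj = s1^2 + s2^2 - s1*s2;             % the objective OBJ defined above
    if obj > best                          % keep the best threshold (as in Appendix B)
        best = obj;  thrd = t;
    end
end
mask = Img > thrd;                         % keep the bright bead pixels
mask = bwareaopen(mask, 16, 4);            % drop objects smaller than 16 pixels
Dots = double(Img) .* mask;                % noise-reduced bead image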
4.4. Register the target image to the reference image
The next step is registering the target image to the reference image. The centers of
the beads are determined by feature detection. Feature matching decides the
correspondence pairs of beads in the target and reference images. The parameters of the
mapping function are calculated based on the matched pairs.
4.4.1. Detect features
The center of a Gaussian distribution is where the distribution reaches its peak
value. Even with noise, the center of a set of data with a Gaussian distribution can be
assumed to be around the location of the maximum value. To find the centers of the beads, this project first uses the MATLAB functions imregionalmax and find to identify the locations of the regional maxima in the images and then performs the Gaussian fitting on 30 x 30 pixel regions around the maxima. The beads used in this project are usually about 16 x 16 pixels in size. Two beads that are very close to each other would be considered as one bead.
The one-dimensional Gaussian shape fitted in this project is
f(x) = p_4 + p_3 e^{-0.5 (x - p_1)^2 / p_2^2} ,
where p₄ is the amplitude shift, p₃ is the peak intensity, p₁ is the peak position, and p₂ is the width. The 2D Gaussian shape used in this project is
f(x, y) = p_6 + p_5 e^{-0.5 (x - p_1)^2 / p_3^2 - 0.5 (y - p_2)^2 / p_4^2} ,
where (p₁, p₂) is the peak position, i.e., the center of the bead here, p₃ and p₄ are the x and y spreads of the shape, p₆ is the amplitude shift, and p₅ is the peak intensity.
To fit a Gaussian distribution to a 30 x 30 pixel region, this project applies the MATLAB function fminsearch to minimize the objective function
serr = \sum_{x,y} \left( f(x, y) - I(x, y) \right)^2 ,
where I(x, y) is the pixel value of the image. A good initial guess of the parameters P = {p₁, p₂, p₃, ...} can speed up the fitting process. Although the location of a pixel with a regional maximum value is close to the center, a more accurate estimate is obtained by considering all pixels' values, which gives the intensity-weighted centroid.[15][16] It is defined as
c_x = \sum_i w_i x_i / \sum_i w_i ,
c_y = \sum_i v_i y_i / \sum_i v_i .
The weights w_i of c_x are the column sums of the pixel values, and the weights v_i of c_y are the row sums of the pixel values. A 1D Gaussian fit along the row and the column of the intensity-weighted centroid provides the initial guess for (p₁, p₂). The initial guess for (p₃, p₄) is the intensity-weighted standard deviation, defined as
s_x = \sqrt{ \sum_i w_i (x_i - c_x)^2 / \sum_i w_i } ,
s_y = \sqrt{ \sum_i v_i (y_i - c_y)^2 / \sum_i v_i } .
The initial guess for p₆ is the mean of all pixel values in the 30 x 30 pixel region. The difference between the maximum pixel value in this region and p₆ is the initial guess for p₅.
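A minimal sketch of this initial-guess computation, assuming m is the 30 x 30 patch of double-precision pixel values around a regional maximum (the project's full version is centerofmass in Appendix B); variable names are illustrative.
[ny, nx] = size(m);
w = sum(m, 1);                         % column sums: weights for the x centroid
v = sum(m, 2)';                        % row sums: weights for the y centroid
x = 1:nx;  y = 1:ny;
cx = sum(w .* x) / sum(w);             % intensity-weighted centroid
cy = sum(v .* y) / sum(v);
sx = sqrt(sum(w .* (x - cx).^2) / sum(w));   % intensity-weighted spreads
sy = sqrt(sum(v .* (y - cy).^2) / sum(v));
p6 = mean(m(:));                       % initial guess for the amplitude shift
p5 = max(m(:)) - p6;                   % initial guess for the peak intensity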
To measure the performance of the Gaussian fitting algorithm, this project uses a simulated 30 x 30 pixel noisy 2D Gaussian image as the test file. Figure 4.8 shows a Gaussian-shaped 2D image with additive Gaussian noise at a 20 dB signal-to-noise ratio. The cross (15.0, 15.0) is the center of the image, i.e., the center of the bead if it were a bead. The dot (15.5, 15.7) is the estimated center of the bead by Gaussian fitting. The distance between the cross and the dot is 0.86 pixels, which shows that the Gaussian fitting method used in this project works well.
Figure 4.8. The result of Gaussian fitting. This image is a 2D Gaussian shape with
additive Gaussian noise. The signal-to-noise ratio is 20 dB. The cross (15.0, 15.0) is the
center of the image, i.e., the center of the bead if it was a bead. The dot (15.5, 15.7) is the
estimated center of the bead by Gaussian fitting. The distance between the cross and the
dot is 0.86 pixels.
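A minimal sketch of how such a test image can be generated, assuming a 30 x 30 bead centered at (15, 15) and an approximately 20 dB signal-to-noise ratio; the parameter values are illustrative, not the exact ones behind Figure 4.8.
[X, Y] = meshgrid(1:30, 1:30);
cx = 15;  cy = 15;  sx = 3;  sy = 3;  A = 1;          % illustrative bead parameters
bead = A * exp(-0.5*(X - cx).^2/sx^2 - 0.5*(Y - cy).^2/sy^2);
snr_db = 20;                                           % target signal-to-noise ratio
noise_var = mean(bead(:).^2) / 10^(snr_db/10);         % noise power for that SNR
noisy = bead + sqrt(noise_var) * randn(size(bead));    % input image for the fit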
4.4.2. Match features
The images from different channels are only slightly misaligned with each other in this project, which means the distances between the corresponding pairs in the target image and the reference image are short relative to the average distance between beads within the target image or within the reference image. As discussed in Chapter 2, the bead in the target image that corresponds to a bead in the reference image may not be the closest one, although it is in most cases. One of the tasks of the feature matching part in this project is to reject mismatched pairs of beads that are the closest to each other but are not a correct pair.
Another issue is that the number of detected beads in the target image may not
equal the number of beads in the reference image. Feature matching should be able to
exclude the beads that do not have their counterparts.
To implement feature matching, suppose the number of beads in the target image is m, and the number of beads in the reference image is n. Then, an m × n matrix M_d of the distances between beads in the target image and beads in the reference image is established. Each element of the matrix is the distance between a bead in the target image and a bead in the reference image. If an element M_d(i, j) of the matrix M_d is both the smallest one in its row and the smallest one in its column, then the ith bead in the target image and the jth bead in the reference image are the closest pair. As mentioned before, the shortest distance does not ensure a correct pair. To screen out the mismatched pairs, this project adopts phase correlation.
Suppose there are two images, f₁ from the target image and f₂ from the reference image. For both images, the pixel values are 1 at the centers of the detected beads and 0 elsewhere. According to Chapter 2, the translational movement (x₀, y₀) of f₁ relative to f₂ can be estimated in the Fourier domain. The Fourier transforms, F₁ and F₂, of the images have the following relation:
F_2(u, v) / F_1(u, v) = e^{-j 2\pi (u x_0 + v y_0)} .
The inverse Fourier transform of the right side of the above equation is an impulse function. The translational movement (x₀, y₀) is the location of the impulse, i.e., the location of the maximum value of the inverse Fourier transform.
The translational parameter (x₀, y₀) gives the direction and magnitude of the movement of the target image relative to the reference image. The angle of (x₀, y₀) is used as a criterion to determine whether a pair is matched or mismatched. If the absolute difference between the angle of a pair and the angle of (x₀, y₀) is beyond π/8, the pair is labeled as a mismatched pair and is excluded from the following calculations.
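A minimal sketch of this matching and screening step, assuming Tc (m x 2) and Rc (n x 2) hold the detected bead centers in the target and reference images and (x0, y0) is the global shift from phase correlation; names are illustrative (the project's version is findcorrespondence in Appendix B), the direction convention must match how (x0, y0) was defined, and angle differences near ±π would need wrapping.
Md = sqrt((Tc(:,1) - Rc(:,1)').^2 + (Tc(:,2) - Rc(:,2)').^2);  % m x n distance matrix
ang0 = atan2(y0, x0);                      % expected direction of a correct pair
pairs = zeros(0, 2);
for i = 1:size(Tc, 1)
    [~, j]  = min(Md(i, :));               % nearest reference bead for target bead i
    [~, i2] = min(Md(:, j));               % nearest target bead for reference bead j
    if i2 == i                             % mutually nearest: candidate pair
        ang = atan2(Rc(j,2) - Tc(i,2), Rc(j,1) - Tc(i,1));
        if abs(ang - ang0) < pi/8          % keep pairs moving the expected way
            pairs(end+1, :) = [i j];       %#ok<AGROW>
        end
    end
end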
Figure 4.9 shows part of the result of feature matching by the point set matching
method of the test file “tsbeads_0_SIR.dv”. Stars correspond to the beads in the target
image, and crosses correspond to the beads in the reference image. Stars and crosses
connected by lines are successfully matched pairs. Stars and crosses that are not
connected are beads that do not find their counterparts by the algorithm.
Figure 4.9. The result of feature matching in the spatial domain. Stars correspond to the
beads in the target image, and crosses correspond to the beads in the reference image.
The incorrectly matched pairs are circled.
Figure 4.10 shows the result of screening the incorrect pairs in Figure 4.9 by phase correlation. The movement of the two pairs circled in Figure 4.10 is different from the average movement of most pairs in this picture. Phase correlation determines that these pairs are not correct pairs.
Table 4.1 shows the performance of the feature matching algorithm. The number
of total pairs is the smaller of the number of beads in the target image and the number of
beads in the reference image. It is an approximate number since some pairs do not have
counterparts. All matched pairs are correct based on the visual inspection. Among the six
test files, the average percentage of correct correspondences is 51.6%. One major reason is that the translational movement of the target image relative to the reference image obtained by phase correlation cannot represent the translational movement of different parts of the images. Tracking local movement could provide a solution. However, applying the same window to both the target image and the reference image may split some pairs apart, and this could introduce a wrong movement estimate.
Figure 4.10. The corrected feature matching by phase correlation. Stars correspond to the
beads in the target image, and crosses correspond to the beads in the reference image.
The incorrectly matched pairs, shown circled, are excluded.
Table 4.1. The percentage of correct correspondences for the six test files used in this
project. The number of total pairs is the smaller of the number of beads in the target
image and the number of beads in the reference image. It is an approximate number since
some pairs do not have counterparts. All matched pairs are correct based on visual
inspection.
Test file name                Total pairs (approx.)    Matched pairs    Matched percent (%)
Tbeads_091812_01_SIR.dv       87                       59               67.8
Tbeads_091812_02_SIR.dv       124                      73               58.9
Tbeads_091812_03_SIR.dv       141                      72               51.1
tsbeads_0_SIR.dv              108                      48               44.4
tsbeads_2_SIR.dv              63                       32               50.7
tsbeads_3_SIR.dv              57                       21               36.8
Average                       -                        -                51.6
4.4.3. Calculate the parameters of the mapping function
The mapping function applies a transformation on the target image to move the
beads in the target image as close as possible to their counterparts in the reference image.
The parameters of the mapping function are computed by means of the MATLAB function fminsearch. First, the mapping function is applied to the locations of the beads in the target image to estimate the locations of the beads' counterparts in the reference image. The optimal parameters are the ones that minimize the least-squares error between the estimated locations of the counterparts and the actual locations of the beads in the reference image.
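A minimal sketch of this step for the six-parameter model of Chapter 2, assuming tvect and rvect are k x 2 arrays of matched target and reference centers; the project's own routine (fittingprocess in Appendix B) uses an eight-parameter model and a constrained search, so this is illustrative only.
map = @(p, P) [p(1) + p(2)*P(:,1) + p(3)*P(:,2), ...   % a0 + a1*Tx + a2*Ty
               p(4) + p(5)*P(:,1) + p(6)*P(:,2)];      % b0 + b1*Tx + b2*Ty
sse = @(p) sum(sum((map(p, tvect) - rvect).^2));        % squared mapping error
p0  = [0 1 0 0 0 1];                                    % start from the identity map
p   = fminsearch(sse, p0, optimset('Display', 'off'));
mapped = map(p, tvect);                                 % estimated counterpart locations
residuals = sqrt(sum((mapped - rvect).^2, 2));          % per-bead error in pixels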
Figure 4.11 illustrates the result of applying the mapping function on the matched
pairs in Figure 4.10. Dots correspond to the beads in the target image, and circles
correspond to the beads in the reference image.
Figure 4.11. The result of applying the mapping function on matched pairs in Figure 4.10.
Dots correspond to the beads in the target image, and circles correspond to the beads in
the reference image.
A good mapping function makes beads in the target image overlay the beads in
the reference image. To measure the performance of the mapping function, this project
uses the mean and the standard deviation of the relative error between the mapped beads
in the target image and the beads in the reference image. Table 4.2 provides the result of
this quantitative analysis. The average of the mean error is 0.7 pixels. The average of the
standard deviation is 1.4 pixels. These numbers show the mapping function works well.
Table 4.2. Quantitative analysis of the performance of the mapping function.
Test file name                Mean of the error (pixels)    Standard deviation of the error (pixels)
Tbeads_091812_01_SIR.dv       1.7719                        3.2586
Tbeads_091812_02_SIR.dv       0.4257                        0.6584
Tbeads_091812_03_SIR.dv       0.6486                        2.1695
tsbeads_0_SIR.dv              0.3428                        0.3627
tsbeads_2_SIR.dv              0.8701                        1.5628
tsbeads_3_SIR.dv              0.2981                        0.1498
4.5. Practical experiment
Figure 4.12 and Figure 4.13 show the target image and the reference image in
“tsbeads_0_SIR.dv” overlaid before and after registration, respectively. Figure 4.13
shows that after registration, the target image aligns with the reference image. A detailed comparison can be seen in the ellipses in Figure 4.12 and Figure 4.13.
Figure 4.12. Overlaid target image and reference image before registration.
Figure 4.13. Overlaid target image and reference image after registration.
CHAPTER 5
DISCUSSION
Although various sizes and patterns of beads are used in this project, real medical
images have not been used as test files. The feature detection algorithm is tested and
measured by simulated Gaussian shape beads, not the real beads.
All the matched beads are correct based on observation of the whole image, providing good input for the mapping function. However, due to distortion, only the pairs that represent the movement of the whole image are included, and almost half of the bead pairs are not included.
This project applies a single mapping function to the whole target image.
However, the movement of the beads in the target image is not uniform toward the beads
in the reference image. To reduce the mean relative error and the standard deviation, the
use of several rounds of registration, which means registering the new mapped target
image to the reference image several times, was tried but resulted in insignificant
improvement. Considering the extra time required, it is not included in this project.
The sizes of the beads in the target image are different from their counterparts in
the reference image, so it is not meaningful to measure the residuals based on the whole
image. Thus, this project measures the mapping function based on the detected beads.
CHAPTER 6
SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS
Image registration improves the resolution of microscopy images. This project
registers a target image to a reference image, both of which are from 3D DV image files.
Three major steps of image registration, that is, feature detection, feature matching, and
creating and applying the mapping function, are discussed in detail. The method of
selecting the highest contrast image from a 3D image file, the Gaussian fitting algorithm
used in feature detection, the point matching algorithm, which combines both the point
set matching method and the Fourier method, and the mapping function are presented.
A graphical user interface lets users choose DV files from their hard drives and select the target and reference channels. Image registration of high-resolution images is time
consuming. Instead of calculating the mapping function’s parameters for every image,
this project lets users apply the same mapping function on a set of images obtained from
a defined system.
Although the algorithms described in this project perform well on all six test files,
more tests with real medical images are needed to make sure the algorithms work
consistently. A few suggestions may help to achieve better accuracy.
Firstly, determination of corresponding CPs in the images is a very important step
in the registration process. The accuracy of feature detection impacts both the accuracy of
feature matching and the mapping function. This project uses simulated data to evaluate
the algorithms of feature detection. A study is needed to determine how close the
simulation is to an actual bead. It may be that a better simulation is needed.
Secondly, in the feature matching part, although it is good to combine the point
matching method and the Fourier method, an advanced point matching[9] method is worth
trying to see if more CP pairs are included.
Thirdly, geometric distortions are not considered in this project. The least-squares
technique used in this project to determine the parameters of the polynomial mapping
function averages local geometric distortion equally all over the images. Developing a
mapping function that uses local information to register local areas of the images would
improve the performance of the registration.
Appendix A
User Instructions
A.1. Select a file for registration
When the software starts, a pop-up window is created for users to choose a DV file to register. It is shown in Figure A.1.
Figure A.1. A pop-up window for users to select a DV file.
A.2. Calibration or correction
After a DV file is selected, users need to decide whether to correct this file based
on a previous calibration or to use this file as a new calibration standard (Figure A.2).
After a decision is made, the program starts to read the DV file. Figure A.3 shows the
progress bar when reading a file.
Figure A.2. The pop-up window for choosing correcting or calibrating.
Figure A.3. The pop-up window when reading a file.
A.3. Target channel and reference channel
A DV file may contain many channels. The project supports registering two channels. Users need to tell the software which channel to use as the target channel and which one as the reference channel. The number 525 in Figure A.4 means
525 nm, which is the emission wavelength of light with a green color. The green light
channel usually is chosen as the target channel. The red light channel with emission
wavelength of 615 nm is chosen as the reference channel.
Figure A.4. Selecting the target channel and reference channel.
Appendix B
Code Listing
function main
clear all;
close all;
[filename pathname filterindex]=uigetfile('*.dv', 'Choose your DV file');
if filename==0
errordlg('File not found','File Error');
quit;
end
correction = questdlg('Do you want to correct the file or use it as a calibration standard? ','Job selection','Correct','Calibrate','Correct');
[RawData DVFileInfo ImageInfo ExtInfo]=ReadDVfile([pathname filename]);
[tarchannel ook]=listdlg('Name','Target channel selection','PromptString','Select the target channel','SelectionMode','single','ListString',num2str(DVFileInfo.ChannelArray(:,1)));
[refchannel ook]=listdlg('Name','Reference channel selection','PromptString','Select the reference channel','SelectionMode','single','ListString',num2str(DVFileInfo.ChannelArray(:,1)));
tarfilename=num2str(DVFileInfo.ChannelArray(tarchannel,1),3);
reffilename=num2str(DVFileInfo.ChannelArray(refchannel,1),3);
[originaltar, zb]=getfocusimage(RawData, tarchannel, DVFileInfo);
[originalref zt]=getfocusimage(RawData, refchannel, DVFileInfo);
tar=preprocessing(originaltar);
ref=preprocessing(originalref);
dims=size(tar);
blank=zeros(dims);
% reference and target images before registration
comparison1=cat(3,(tar-min(min(tar)))/(max(max(tar))-min(min(tar))), (ref-min(min(ref)))/(max(max(ref))-min(min(ref))), blank);
figure,
image(1-comparison1);
if strcmp(correction,'Calibrate')
%initial parameters of mapping function
%first two parameters are center x & y; 3rd & 4th are x and y
%scaling factors (linear); 5th & 6th are cross-coupling terms; 7th & 8th are x and y shifts
initpar = [floor(dims(1)/2) floor(dims(2)/2) 1.0, 1.0 0.0 0.0 0.0 0.0];
%registration
[fp error_residue] = fittingprocess(tar,ref,initpar);
else
%correction
fid = fopen(['dewarp from ' tarfilename ' to ' reffilename ' Created by Calibration3566r.txt'], 'r');
fp=fscanf(fid, 'Center x and y: %f %f (um)\r\n Scaling factor: %f %f (rotation angle)\r\n Shift offset x and y: %f %f \r\n');
zdiff=fscanf(fid, 'z offset %f ');
fclose(fid);
fp=fp';
fp(1)=fp(1)/DVFileInfo.pixX;
fp(2)=fp(2)/DVFileInfo.pixY;
fp(5)=fp(5)/DVFileInfo.pixX;
fp(6)=fp(6)/DVFileInfo.pixY;
fid = fopen(['dewarp from ' tarfilename ' to ' reffilename ' Created by Calibration3566r.bin'], 'r');
fp=fread(fid,20,'single');
fp=fp';
end
% apply mapping function
b=bianhuan(originaltar,fp);
result=b.*single(b>0);
% reference and target images after registration
comparison2=cat(3,(result-min(min(result)))/(max(max(result))-min(min(result))), (originalref-min(min(originalref)))/(max(max(originalref))-min(min(originalref))), blank);
figure,
image(1-comparison2);
% errorresidue1=errcal(originalref,result,1)
if strcmp(correction,'Calibrate') %save parameters only when the input is for calibration
fid = fopen(['dewarp from ' tarfilename ' to ' reffilename ' Created by Calibration3566r.txt'], 'wt');
fprintf(fid, 'Center x and y: %6.2f %6.2f (um)\r\n Scaling factor: %10.8f %10.8f (rotation angle)\r\n Shift offset x and y: %10.8f %10.8f \r\n', fp(1)*DVFileInfo.pixX, fp(2)*DVFileInfo.pixY, fp(3), fp(4), fp(5)*DVFileInfo.pixX, fp(6)*DVFileInfo.pixY);
fprintf(fid, 'z offset %6.2f \r\n', zt-zb);
fclose(fid);
fid = fopen(['dewarp from ' tarfilename ' to ' reffilename ' Created by Calibration3566r.bin'], 'wt');
fwrite(fid,fp,'single');
fclose(fid);
end
end
%parameter finder for mapping function
%returns: sum of squared distances between mapped target points and reference points
function z = parafinder(p,star,ref)
T.tdata=p;
b=change(star,T);
diff=b-ref;
[theta, rho] = cart2pol(diff(:,1),diff(:,2));
z=sum(rho.^2);
end
% apply mapping function
function newimage=bianhuan(a, paras)
dims=size(a);
Tform=maketform('custom', 2, 2, [], @change, paras);
newimage=imtransform(a,Tform,'bicubic','XData',[1 dims(1)],'YData',[1 dims(2)],'FillValues',0);
end
% the mapping function
function U=change(X,T)
cx=T.tdata(1);
cy=T.tdata(2);
xcoef=T.tdata(3);
ycoef=T.tdata(4);
xycoef=T.tdata(5);
yxcoef=T.tdata(6);
xshift=T.tdata(7);
yshift=T.tdata(8);
U(:,1)=cx+xcoef*(X(:,1)-cx)+xycoef*(X(:,2)-cy)+xshift;
U(:,2)=cy+ycoef*(X(:,2)-cy)+yxcoef*(X(:,1)-cx)+yshift;
end
%feature matching
function correspondence=findcorrespondence(imagein,ref)
results=[];
dims=size(ref);
im3=zeros(dims);
sp = 30;
im3(sp:dims(1)-sp,sp:dims(2)-sp)=ref(sp:dims(1)-sp,sp:dims(2)-sp);
im1 = im3;
bw=imregionalmax(im1);
[rowy colx]=find(bw);
tim4=zeros(dims);
for i=1:size(rowy),
if(bw(rowy(i), colx(i)) == 1)
%feature detection
[txcol(i) tyrow(i)]=fitpoint(ref, colx(i),rowy(i),0.95);
tim4(round(tyrow(i)), round(txcol(i))) = 1;
end
end
sp = 30;
im3(sp:dims(1)-sp,sp:dims(2)-sp)=imagein(sp:dims(1)-sp,sp:dims(2)-sp);
im1 = im3;
bw=imregionalmax(im1);
[browy bcolx]=find(bw);
bim4=zeros(dims);
for i=1:size(browy),
if(bw(browy(i), bcolx(i)) == 1)
[bxcol(i) byrow(i)]=fitpoint(imagein, bcolx(i),browy(i),0.95);
bim4(round(byrow(i)), round(bxcol(i))) = 1;%ones(20,20);
end
end
%detected beads
figure,
hold on;
plot(round(bxcol), round(byrow), 'k.', round(txcol), round(tyrow), 'kx');
xlim([0 1024]);
ylim([0 1024]);
%phase correction
[offx offy] = phasecorr(tim4, bim4, dims(1)/2);
b2tangle = atan2(-offy, -offx);
b2tdist = sqrt((offx)^2+(offy)^2);
dmatrix = gaussianm(txcol, tyrow, bxcol, byrow);
pmatrix = abs(dmatrix);
dimp = size(pmatrix)
%shortest distance pairs
[cp1, ip1] = min(pmatrix);
mcp1 = mean(cp1);
[cp2, ip2] = min(pmatrix');
mcp2 = mean(cp2);
j = 1;
if(dimp(1) > dimp(2)),
for i = 1:dimp(2),
if(ip2(ip1(i)) == i),
angd = abs(angle(dmatrix(ip1(i), i)) - b2tangle);
disd = abs(abs(dmatrix(ip1(i), i)) - b2tdist);
if(angd < pi/8)% & disd < b2tdist & abs(dmatrix(ip1(i), i)) > b2tdist/2)
tx(j) = txcol(ip1(i));
ty(j) = tyrow(ip1(i));
bx(j) = bxcol(i);
by(j) = byrow(i);
j = j+1;
else
j = j;
end
end
end
else
for i = 1:dimp(1),
if(ip1(ip2(i)) == i), % i row
angd = abs(angle(dmatrix(i, ip2(i))) - b2tangle);
disd = abs(abs(dmatrix(i, ip2(i))) - b2tdist);
if(angd < pi/8)% & disd < b2tdist & abs(dmatrix(i, ip2(i))) > b2tdist/2)
tx(j) = txcol(i);
ty(j) = tyrow(i);
bx(j) = bxcol(ip2(i));%ref
by(j) = byrow(ip2(i));
j = j+1;
else
j = j;
end
end
end
end
dimby = size(by)
%matched points
for i = 1: dimby(2),
plot([bx(i) tx(i)], [by(i) ty(i)], 'k');
end
plot(bx, by, 'k.', tx, ty, 'kx');
hold off;
correspondence=cat(2, by', bx', ty', tx');
end
function dmatrix = gaussianm(x1, y1, x2, y2)
dim1 = length(x1);
dim2 = length(x2);
for i = 1:dim1,
for j = 1:dim2,
dmatrix(i,j) = (x1(i)-x2(j)) + (y1(i)-y2(j))*sqrt(-1);
%
dmatrix(i,j) = (((x1(i)-x2(j))^2+(y1(i)-y2(j))^2));
end
end
end
%Gaussian fitting
function [cx,cy]=fitpoint(AA, x, y,threshold)
%AA: input image
% x, y: estimated peak position, usu the coordinates of the maximum
% threshold: satification factor.
x=round(x);
y=round(y);
%find the local maximum within the square x+-15;y+-15
A1=AA(y-15:y+14,x-15:x+14);
[vall, jj] = max(A1);
[vall, kk] = max(vall);
max_x = kk-1+x-15;
max_y= jj(kk)-1+y-15;
ps=[max_x-9,max_y-9,max_x+9,max_y+9];
try
[cx,cy] = Gaussian2D(AA,ps,threshold);
catch err
ps
x
y
max_x
max_y
end
end
% To fit a 2-D gaussian
% m1 = Image
% tol = fitting tolerance
% a function to fit a thermal cloud 2-D
function [cx,cy] = Gaussian2D(m1,dims,tol)
options = optimset('Display','off','TolFun',tol,'LargeScale','off');
m=double(m1(dims(2):dims(4),dims(1):dims(3)));
[sizey sizex] = size(m);
[cx,cy,sx,sy,mean] = centerofmass(m);
pOD = max(max(m))-mean;
mx = m(round(cy),:);
x1D = 1:sizex;
ip1D = [cx,sx,pOD,mean];
fp1D = fminsearch(@fitgaussian1D,ip1D,options,mx,x1D);
if round(fp1D(1))<=sizex && round(fp1D(1))>0
cx = fp1D(1);
end
%for perfect beads, FWHM is about 3~7 pixels, fp1D2 should really be
%smaller than 3.
if fp1D(2)<=10 && fp1D(2)>0
sx = fp1D(2);
end
PeakOD = fp1D(3);
my = m(:,round(cx))';
y1D = 1:sizey;
ip1D = [cy,sy,pOD,mean];
fp1D = fminsearch(@fitgaussian1D,ip1D,options,my,y1D);
if fp1D(1)<=sizey && fp1D(1)>0
cy = fp1D(1);
end
if fp1D(2)<=10 && fp1D(2)>0
sy = fp1D(2);
end
PeakOD = fp1D(3);%(PeakOD+fp1D(3))/2;
[X,Y] = meshgrid(1:sizex,1:sizey);
comorcen=0;
initpar = [cx,cy,sx,sy,PeakOD,mean];
fp = fminsearch(@fitgaussian2D,initpar,options,m,X,Y);
if fp(1)<=sizex && fp(1)>0
cx = fp(1);
else
comorcen=1;%debuginfo% display('error in Gaussian2D x fitting'); display(cx);
end
cx=cx+dims(1)-1;
if fp(2)<=sizey && fp(2)>0
cy = fp(2);
else
comorcen=2;
end
cy=cy+dims(2)-1;
end
% PURPOSE: find c of m of distribution
function [cx,cy,sx,sy,mean] = centerofmass(m)
[sizey sizex] = size(m);
vx = sum(m);
vy = sum(m');
vx = vx.*(vx>0);
vy = vy.*(vy>0);
x = [1:sizex];
y = [1:sizey];
if sum(vx)>0
cx = sum(vx.*x)/sum(vx);
sx = sqrt(sum(vx.*(abs(x-cx).^2))/sum(vx));
else
cx = sizex/2;
sx = sizex;
end
if sum(vy)>0
cy = sum(vy.*y)/sum(vy);
sy = sqrt(sum(vy.*(abs(y-cy).^2))/sum(vy));
else
cy = sizey/2;
sy = sizey;
end
mean=sum(sum(m))/sizex/sizey;
end
function [z] = fitgaussian1D(p,v,x)
zx = p(4)+ p(3)*exp(-0.5*(x-p(1)).^2./(p(2)^2)) - v;
z = sum(zx.^2);
end
function [z] = fitgaussian2D(p,m,X,Y)
ztmp = p(6)+p(5)*(exp(-0.5*(X-p(1)).^2./(p(3)^2)-0.5*(Y-p(2)).^2./(p(4)^2))) - m;
z = sum(sum(ztmp.^2));
end
function Dots=preprocessing(Img)
len = size(Img,1)*size(Img,2);
datab=reshape(Img,len,1);
datab=sort(datab);
step = (max(datab)-min(datab))/2048;%2048 gray levels
stm = 0;
for i = 1:255,
cutd = datab > i*step;
[val idx] = max(cutd);
std1 = std(datab(1:idx));
std2 = std(datab(idx+1:len));
st = std1.^2+std2.^2-std1.*std2;%objective function
if(st > stm)
stm = st;
thrd = i*step;
end
end
ThreshedImg = (Img > thrd);
Dotstotal=bwareaopen(ThreshedImg,16,4);
Dots = Img.*Dotstotal;
end
%variance and mean error
function errresidue=errcal(rvect,reg_tvect,range)
diff=rvect-reg_tvect;
diff=sqrt(sum(diff.^2,2));
errresidue = mean(diff);
vari = std(diff);
end
%registration: find the mapping parameters from the matched bead pairs
function [fp errorresidue]=fittingprocess(inputtar, ref, inputparas)
results=findcorrespondence(inputtar,ref);
%%%%now use found correspondences to get fitting parameters
options = optimset('Display','off','TolFun',0.99,'LargeScale','off');
%because x is the column and y is the row in the transformation, the columns become the first
%part of the vector
rvect=[results(:,4) results(:,3)];
%here tvect is already in result space of the last round of
%optimization
tvect=[results(:,2) results(:,1)];
assignin('tar','rvect',rvect);
assignin('tar','tvect',tvect);
T.tdata=inputparas;
otvect=change(tvect, T);
dims=size(inputtar);
diff3=rvect-otvect;
diff=tvect-otvect;
if (inputparas(3)~=1)&&(inputparas(4)~=1)
[theta3 rho3]=cart2pol(diff3(:,1), diff3(:,2));
[theta rho]=cart2pol(diff(:,1), diff(:,2));
thetadiff=theta3-theta;
thetadiff=abs(thetadiff);
largedevi=(thetadiff>0.25)&(thetadiff<(2*pi-0.25));
otvect(largedevi,:)=[];
rvect(largedevi,:)=[];
end
%options = optimset('UseParallel','always','Algorithm','activeset','Display','off','TolFun',0.99,'LargeScale','off');
options = optimset('Display','off','TolFun',0.99,'LargeScale','off');
lb=[%512 512 0.85 -0.20 -50.0 -50.0];
0.4*dims(1) 0.4*dims(2) 0.85 0.85 -0.20 -0.20 -50.0 -50.0];
ub=[%512 512 1.2 0.20 50.0 50.0];
0.6*dims(1) 0.6*dims(2) 1.2 1.2 0.20 0.20 50.0 50.0];
f=@(x)parafinder(x,rvect,otvect);
fp = fmincon(f,inputparas,[], [], [], [], lb, ub, [],options);
T.tdata=[fp(1)+fp(7) fp(2)+fp(8) fp(4)/(fp(4)*fp(3)-fp(5)*fp(6)) fp(3)/(fp(4)*fp(3)-fp(5)*fp(6)) -fp(5)/(fp(4)*fp(3)-fp(5)*fp(6)) -fp(6)/(fp(4)*fp(3)-fp(5)*fp(6)) -fp(7) -fp(8)];
reg_tvect=change(otvect, T);
errorresidue=errcal(rvect,reg_tvect,1)
figure,
plot(reg_tvect(:,1),reg_tvect(:,2),'k.', rvect(:,1), rvect(:,2), 'kx');
xlim([0 1024]);
ylim([0 1024]);
title('after correction');
end
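% Simplified sketch of the parameter-fitting idea used by fittingprocess
% (illustrative only, not part of the original project code): known bead
% correspondences are aligned by minimizing the mean squared residual over a
% scale/shear/offset parameter vector with fmincon. An inline objective is
% used here instead of the project's parafinder/change helpers, and the
% transform values are arbitrary assumptions.
function demo_parameterfit()
rng(0);
rvect = 1024*rand(40,2); %reference bead positions
true_p = [1.03 0.98 0.015 -0.01 6 -4]; %[sx sy shx shy dx dy], assumed
tvect = [true_p(1)*rvect(:,1)+true_p(3)*rvect(:,2)+true_p(5), ...
         true_p(4)*rvect(:,1)+true_p(2)*rvect(:,2)+true_p(6)];
model = @(p) [p(1)*rvect(:,1)+p(3)*rvect(:,2)+p(5), ...
              p(4)*rvect(:,1)+p(2)*rvect(:,2)+p(6)];
f = @(p) mean(sum((model(p)-tvect).^2,2));
opts = optimset('Display','off');
lb = [0.85 0.85 -0.20 -0.20 -50 -50];
ub = [1.2 1.2 0.20 0.20 50 50];
fp = fmincon(f,[1 1 0 0 0 0],[],[],[],[],lb,ub,[],opts);
disp(fp); %should be close to true_p
end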
function [focus zindex]=getfocusimage(InputImg, ch, DVFileInfo)
chImages=InputImg(:,:,(ch-1)*DVFileInfo.nz+1:ch*DVFileInfo.nz);
peakval=max(max(max(chImages)));
[M N]=size(chImages(:,:,1));
for row=1:4
for col=1:4
section=chImages((row-1)*M/4+1:row*M/4,(col-1)*N/4+1:col*N/4,:);
for i=1:DVFileInfo.nz
slice=section(:,:,i);
threshed=regionprops((slice>0.4*peakval),slice,'EquivDiameter','MeanIntensity');
[n1, n2]=size(threshed);
intens=zeros(n1,1);
for j=1:n1
intens(j)=threshed(j).MeanIntensity;
end
intensit(i)=mean(intens);
if isnan(intensit(i)), intensit(i)=0; end %guard against slices with no bright regions
[counts,x]=imhist((slice-min(min(slice)))/(max(max(slice))-min(min(slice))),100);
lowmean=sum(counts(1:5).*x(1:5))/sum(counts(1:5));
himean=sum(counts(95:100).*x(95:100))/sum(counts(95:100));
contrast(i)=(himean-lowmean)*(max(max(slice))-min(min(slice)));
end
[maxval zconindex]=max(contrast);
[maxval zindex]=max(intensit);
zcons(row,col)=zconindex;
zs(row,col)=zindex;
end
end
zs=[zs(1,:) zs(2,:) zs(3,:) zs(4,:)];
zcons=[zcons(1,:) zcons(2,:) zcons(3,:) zcons(4,:)];
zs=sort(zs);
zcons=sort(zcons);
zintenindex=round(mean(zs(2:15)));
zindex=round(mean(zcons(2:15)));
focus=chImages(:,:,zindex);
end
function [offx offy] = phasecorr(targ, tar, blocks)
B_prev=fft2(targ);
B_curr=fft2(tar);
%mul=B_prev.*conj(B_curr);
mul=B_curr.*conj(B_prev);
mag=abs(mul);
mag(mag==0)=1e-31;
C=mul./mag;
c=fftshift(abs(ifft2(C)));
[tempy1 tempx1]=find(c==max(max(c)));
tempx = tempx1(1);%(tempx1(1) + tempx2(1))/2;
tempy = tempy1(1);%(tempy1(1) + tempy2(1))/2;
offx = tempx-blocks-1;
offy = tempy-blocks-1;
end
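% Usage sketch (illustrative only, not part of the original project code):
% circularly shifts a random test block by a known amount and checks that
% phasecorr recovers the offset. The block size (half the image width, as in
% the offset convention above) and the shift values are arbitrary assumptions.
function demo_phasecorr()
rng(0);
targ = rand(64); %reference block
tar = circshift(targ,[5 -3]); %shift 5 rows down and 3 columns left
[offx, offy] = phasecorr(targ,tar,32); %blocks = 64/2
fprintf('recovered offset x = %d, y = %d (expected -3 and 5)\n',offx,offy);
end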
function [RawData DVFileInfo ImageInfo ExtInfo]=ReadDVfile(InFileName)
h = waitbar(0,'Reading file. Please wait...');
cd DVImgFunctions;
DVImgLibOpen(0);
DVImgOpen(1,InFileName,'ro'); %open existing DV image
% Read some parameters we'll use
DVFileInfo.nx = DVImgGetNumCols(1);
DVFileInfo.ny = DVImgGetNumRows(1);
DVFileInfo.nz = DVImgGetNumZ(1);
DVFileInfo.nw = DVImgGetNumW(1);
DVFileInfo.nt = DVImgGetNumT(1);
DVFileInfo.pixX = DVImgGetPixelSizeX(1);
DVFileInfo.pixY = DVImgGetPixelSizeY(1);
DVFileInfo.pixZ = DVImgGetPixelSizeZ(1);
DVFileInfo.DataType = DVImgGetDataType(1);
DVFileInfo.OriginX = DVImgGetOriginX(1);
DVFileInfo.OriginY = DVImgGetOriginY(1);
DVFileInfo.OriginZ = DVImgGetOriginZ(1);
DVFileInfo.LensID = DVImgGetLensID(1);
DVFileInfo.TimeStamp = DVImgGetTimeStamp(1);
DVFileInfo.ImgSequence = DVImgGetImageSequence(1);
DVFileInfo.ChannelArray = zeros(DVFileInfo.nw,7);
for Channel=0:DVFileInfo.nw-1
index = Channel+1;
DVFileInfo.ChannelArray(index,1) = DVImgGetWavelength(1,Channel);
DVFileInfo.ChannelArray(index,2) = DVImgGetIntenMin(1,Channel);
DVFileInfo.ChannelArray(index,3) = DVImgGetIntenMax(1,Channel);
DVFileInfo.ChannelArray(index,4) = DVImgGetIntenMean(1,Channel);
DVFileInfo.ChannelArray(index,5) = DVImgGetDisplayMin(1,Channel);
DVFileInfo.ChannelArray(index,6) = DVImgGetDisplayMax(1,Channel);
DVFileInfo.ChannelArray(index,7) = DVImgGetDisplayExp(1,Channel);
end
RawData=zeros(DVFileInfo.ny, DVFileInfo.nx, ...
    DVFileInfo.nz*DVFileInfo.nt*DVFileInfo.nw,'single');
ImageInfo(1:DVFileInfo.nt*DVFileInfo.nw*DVFileInfo.nz)=struct('PosX',[],'PosY',[], ...
    'PosZ',[],'Time',[],'PhotoVal',[],'Min',[],'Max',[],'Mean',[]);
i=0;
%read the per-section extended-header fields and image data
for t=0:DVFileInfo.nt-1
for w=0:DVFileInfo.nw-1
for z=0:DVFileInfo.nz-1
i=i+1;
ImageInfo(i).PosX = DVImgGetPosX(1,z,w,t);
ImageInfo(i).PosY = DVImgGetPosY(1,z,w,t);
ImageInfo(i).PosZ = DVImgGetPosZ(1,z,w,t);
ImageInfo(i).Time = DVImgGetTime(1,z,w,t);
ImageInfo(i).PhotoVal = DVImgGetPhotoVal(1,z,w,t);
ImageInfo(i).Min = DVImgGetMin(1,z,w,t);
ImageInfo(i).Max = DVImgGetMax(1,z,w,t);
ImageInfo(i).Mean = DVImgGetMean(1,z,w,t);
RawData(:,:,i) = DVImgRead(1,z,w,t); % Read section
waitbar(i/(DVFileInfo.nt*DVFileInfo.nw*DVFileInfo.nz));
end
end
end
%sync any metadata stored in the extended header
ExtInfo.Fields = DVImgGetExtInfo(1);
FieldNames = fieldnames(ExtInfo.Fields);
i=1;
for Field=FieldNames'
ExtInfo.Value(i) = DVImgGetExtHdrField(1,char(Field));
ExtInfo.Type(i) = char(ExtInfo.Fields.(char(Field))(1));
i=i+1;
end
close(h);
DVImgClose(1);
DVImgLibClose();
% for i=1:DVFileInfo.nt*DVFileInfo.nw*DVFileInfo.nz
% imwrite(uint16(RawData(:,:,i)), 'rewrite.tif','tif','Compression','none','WriteMode','append');
% end
end
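% Usage sketch (illustrative only, not part of the original project code):
% reads a DeltaVision stack and extracts the in-focus plane of each channel.
% 'beads.dv' is a hypothetical file name, and the DVImgFunctions library must
% be present in the current directory for ReadDVfile to run.
function demo_readdvfile()
[RawData, DVFileInfo] = ReadDVfile('beads.dv');
[focus1, z1] = getfocusimage(RawData, 1, DVFileInfo);
[focus2, z2] = getfocusimage(RawData, 2, DVFileInfo);
fprintf('in-focus z-planes: channel 1 = %d, channel 2 = %d\n',z1,z2);
figure, subplot(1,2,1), imagesc(focus1), axis image, title('channel 1');
subplot(1,2,2), imagesc(focus2), axis image, title('channel 2');
end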
REFERENCES
[1] B. S. Reddy and B. N. Chatterji, “An FFT-based technique for translation, rotation,
and scale-invariant image registration,” IEEE Trans. on Image Process., vol. 5, pp. 1266
- 1271, 1996.
[2] “Fluorescence microscopy accessories and reference standards—section 23.1,”
[online]. Available:
http://www.invitrogen.com/site/us/en/home/References/Molecular-Probes-The-Handbook/Tools-for-Fluorescence-Applications-Including-Reference-Standards-and-Optical-Filters/Fluorescence-Microscopy-Reference-Standards-and-Antifade-Reagents.html. [accessed: Apr. 2013].
[3] B. Zitová and J. Flusser, “Image registration methods: a survey,” Image and Vision
Computing, vol. 21, pp. 977 - 1000, 2003.
[4] “Image registration,” [online]. Available:
http://en.wikipedia.org/wiki/Image_registration. [accessed: Apr. 2013].
[5] S. Preibisch, S. Saalfeld, T. Rohlfing, and P. Tomancak, “Bead-based mosaicing of
single plane illumination microscopy images using geometric local descriptor matching,”
Proc. of SPIE, vol. 7259, pp. 1 - 10, 2009.
[6] B. Zhang, J. Zerubia, and J. Olivo-Marin, “Gaussian approximations of fluorescence
microscope point-spread function models,” Applied Optics, vol. 46, issue 10, pp. 1819 - 1829, 2007.
[7] G. Scott and C. Longuet-Higgins, “An algorithm for associating the features of two
images,” Proc. R. Soc., pp. 21 - 26, 1991.
[8] L. Shapiro and J. Brady, “Feature-based correspondence: an eigenvector approach,”
Image Vision Comput., vol. 10, pp. 283 - 288, 1992.
[9] T. S. Caetano, T. Caelli, D. Schuurmans, and D. A. C. Barone, “Graphical models and
point pattern matching,” IEEE Trans. PAMI, vol. 28, pp. 1646 - 1663, 2006.
[10] A. A. Goshtasby, “Registration of images with geometric distortions,” IEEE Trans.
Geoscience and Remote Sensing, vol. 26, pp. 60 - 64, 1988.
[11] “ImageJ user guide,” [online]. Available: http://rsbweb.nih.gov/ij/docs/user-guide.pdf. [accessed: Apr. 2013].
[12] “Image Processing Toolbox,” [online]. Available:
http://www.mathworks.com/products/image/. [accessed: Apr. 2013].
[13] “Delta Vision OMX,” [online]. Available: http://www.api.com/deltavision-omx.asp.
[accessed: Apr. 2013].
[14] Z. Li and C. Liu, “An image thresholding method based on standard deviation,” in
Proc. 2009 International Joint Conference on Computational Sciences and Optimization
(CSO 2009), vol. 1, pp. 835 - 838, 2009.
[15] J. M. Fitzpatrick, D. L. G. Hill, and C. R. Maurer, “Image registration,” [online].
Available:
http://tango.andrew.cmu.edu/~gustavor/42431-intro-bioimaging/readings/ch8.pdf.
[accessed: Apr. 2013].
[16] S. Eddins, “Intensity weighted centroids,” [online]. Available:
http://blogs.mathworks.com/steve/2007/08/31/intensity-weighted-centroids/. [accessed:
Apr. 2013].
[17] T. J. Collins, “ImageJ for microscopy,” BioTechniques, vol. 43, pp. 25 - 30, 2007.