IMAGE FUSION SUPERRESOLUTION IN STRUCTURED ILLUMINATION
MICROSCOPY
A Thesis
Presented to the faculty of the Department of Electrical and Electronic Engineering
California State University, Sacramento
Submitted in partial satisfaction of
the requirements for the degree of
MASTER OF SCIENCE
in
Electrical and Electronic Engineering
by
Kamal Abdullah Alharbi
SUMMER
2013
© 2013
Kamal Abdullah Alharbi
ALL RIGHTS RESERVED
IMAGE FUSION SUPERRESOLUTION IN STRUCTURED ILLUMINATION
MICROSCOPY
A Thesis
by
Kamal Abdullah Alharbi
Approved by:
__________________________________, Committee Chair
Warren D. Smith, Ph.D.
__________________________________, Second Reader
Preetham B. Kumar, Ph.D.
__________________________________, Third Reader
Fethi Belkhouche, Ph.D.
____________________________
Date
Student: Kamal Abdullah Alharbi
I certify that this student has met the requirements for format contained in the University
format manual, and that this thesis is suitable for shelving in the Library and credit is to
be awarded for the thesis.
__________________________, Graduate Coordinator
Preetham B. Kumar
Department of Electrical and Electronic Engineering
___________________
Date
Abstract
of
IMAGE FUSION SUPERRESOLUTION IN STRUCTURED ILLUMINATION
MICROSCOPY
by
Kamal Abdullah Alharbi
The limited resolution of a microscope is due to diffraction at the aperture of the
optical lens. Superresolution (SR) methods improve resolution beyond the
diffraction limit. Structured illumination (SI) is an SR method in which several
non-redundant low-resolution (LR) images of the same object are acquired and fused to
produce a high-resolution (HR) image.
In this thesis, an alternative method is developed and evaluated for fusing LR
images obtained using SI to produce HR images. The method advocates the use of the
L1 norm with total variation regularization to address shortcomings of the existing
image reconstruction based on Wiener-like deconvolution. The method is applicable for
reconstruction of grayscale images. The work also justifies some practical assumptions
that greatly reduce the computational complexity and memory requirements of the
proposed methods.
The work introduces Peak Signal to Standard Error of the Estimate Ratio
(PSSEER) as a quantitative method of measuring image quality. Subjective and
objective methods are consistent in showing that L1/TV optimization resolves more
details than Wiener-like deconvolution reconstruction. The proposed method performs
better both in the absence of noise and in the presence of either Gaussian or Poisson noise.
_______________________, Committee Chair
Warren D. Smith, Ph.D.
_______________________
Date
DEDICATION
To my parents Abdullah and Salehah, my wife Nada, and my kids Jana, Lama, Abdullah,
Yara, and the new baby Sarah
ACKNOWLEDGEMENTS
This work is the result of close collaboration with a unique team of scientists and
friends. It was their sincere assistance and support that helped me reach this milestone.
First, I would like to thank my advisor Professor Warren Smith, my role model
of an exceptional scientist and teacher. It was a great privilege and honor to work and
study under his guidance. I would also like to thank him for his friendship, empathy, and
patience.
I would like to thank the team at the Center for Biophotonics Science and
Technology, CBST, at the University of California, Davis, for offering me the chance to
be a member of the team. I am grateful to Dr. Stephen Lane and Dr. Kaiqin Chu for their
invaluable guidance and sincere assistance throughout this work.
Thanks to Dr. Preetham Kumar for serving on my committee, reviewing this
thesis, and providing all the help and assistance as graduate coordinator during my time
as a graduate student. Also, I thank Dr. Fethi Belkhouche for serving on my committee,
introducing me to the topic of image processing, and for sharing his time answering my
questions. My thanks also go to all faculty of the Electrical and Electronic
Engineering Department at California State University, Sacramento (CSUS).
Finally, special thanks to the Saudi Arabian National Guard for funding my
study at CSUS. Their encouragement was immense. I also thank the Saudi Arabian
Cultural Mission for their assistance and the support they provided during my study.
TABLE OF CONTENTS
Page
Dedication .................................................................................................................. vii
Acknowledgements ................................................................................................... viii
List of Tables............................................................................................................... xi
List of Figures ............................................................................................................ xii
Chapter
1. INTRODUCTION .................................................................................................... 1
1.1. Why Super-resolution? .................................................................................. 1
1.2. Purpose of the Study ..................................................................................... 2
1.3. Organization of the Thesis ............................................................................. 2
2. BACKGROUND ..................................................................................................... 3
2.1. Point Spread Function (PSF) and Diffraction Limit ................................. 3
2.1.1. Illumination Pattern.................................................................... 7
2.1.2. Extracting and Shifting Linear Components ............................ 10
2.2. Image Fusion ............................................................................................ 12
2.2.1. Deterministic Approach ........................................................... 13
2.2.2. Choosing Cost Function and Penalty Function ......................... 15
3. METHODOLOGY ............................................................................................... 17
3.1. Reasons for the Study ............................................................................. 17
3.2. Simulation Process ................................................................................... 17
3.3. Evaluation of the Reconstructed High Resolution Images ...................... 19
4. RESULTS: ALGORITHM DEVELOPMENT AND TESTING ........................ 20
4.1. Selected Original Image .......................................................................... 20
4.2. The Blurry Image ..................................................................................... 21
4.3. Applying Structured Illumination ........................................................... 23
4.4. Reconstruction Process ............................................................................ 25
4.4.1. Wiener-Like Deconvolution..................................................... 26
4.4.2. L1 Norm with Total Variation (L1/TV) Optimization............. 28
4.4.2.1. Alternating Minimization Algorithm ........................ 28
4.4.2.2. Iteration Stopping Criterion ........................................ 33
4.5. Peak Signal to Standard Error of the Estimate (PSSEER) ...................... 33
4.6. Comparing Methods in the Absence of Noise ......................................... 34
4.7. Comparing Methods in the Presence of Gaussian Noise ......................... 41
4.8. Comparing Methods in the Presence of Poisson Noise ........................... 49
5. DISCUSSION ...................................................................................................... 57
6. SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS ....................... 59
Appendix A. MATLAB Code ................................................................................. 61
References ................................................................................................................. 79
LIST OF TABLES
Table                                                                                Page
1. Table 4.1. A comparison of the PSSEER of the reconstructed images from a blurry image (PSSEER = 26 dB) in the absence of noise ....38
2. Table 4.2. Processing time to reconstruct images in the absence of noise ....41
3. Table 4.3. A comparison of the PSSEER of the reconstructed images from a blurry image in the presence of Gaussian noise with zero mean and standard deviation of 20 (PSSEER = 25.2 dB) ....46
4. Table 4.4. Processing time to reconstruct images in the presence of Gaussian noise (zero mean and standard deviation of 20) ....49
5. Table 4.5. A comparison of the PSSEER of the reconstructed images from a blurry image contaminated with Poisson noise (PSSEER = 25 dB) ....53
6. Table 4.6. Processing time to reconstruct images in the presence of Poisson noise ....56
LIST OF FIGURES
Figure                                                                               Page
1. Figure 2.1. One-dimensional representation of the effect of the PSF in the spatial domain ....4
2. Figure 2.2. One-dimensional representation of the effect of the OTF ....5
3. Figure 2.3. The concept of SIM as described in [3] ....7
4. Figure 2.4. The illumination pattern in the frequency domain ....8
5. Figure 2.5. Frequency components introduced by the illumination pattern ....11
6. Figure 2.6. Separated components ....11
7. Figure 2.7. A comparison of original signal f(x), conventional outcome of LTI system g(x), and constructed signal fw(x) using Wiener-like deconvolution ....12
8. Figure 4.1. Lena 512 × 512 gray scale image ....20
9. Figure 4.2. Two-dimensional representation of the OTF ....21
10. Figure 4.3. The blurry image, g(x, y) ....22
11. Figure 4.4. Illumination patterns for different phases and orientations ....23
12. Figure 4.5. The application of the illumination pattern on g(x,y) ....24
13. Figure 4.6. The three overlapped components of the illumination pattern in the frequency space with θ = 45 degrees and phase φ = 0 ....24
14. Figure 4.7. Comparison of the pixel intensities of the original and the blurred images ....36
15. Figure 4.8. Comparison of the pixel intensities of the Wiener-like reconstruction using three LR images and the original image ....36
16. Figure 4.9. Comparison of the pixel intensities of the Wiener-like reconstruction using nine LR images and the original image ....37
17. Figure 4.10. Comparison of the pixel intensities of the L1/TV reconstruction using three LR images and the original image ....37
18. Figure 4.11. Comparison of the pixel intensities of the L1/TV reconstruction using nine LR images and the original image ....38
19. Figure 4.12. Wiener-like deconvolution reconstructed image using three LR images ....39
20. Figure 4.13. Wiener-like deconvolution reconstructed image using nine LR images ....40
21. Figure 4.14. Reconstruction obtained by L1/TV optimization using three LR images ....40
22. Figure 4.15. Reconstruction obtained by L1/TV optimization using nine LR images ....41
23. Figure 4.16. The blurred image contaminated with Gaussian noise (zero mean and a standard deviation of 20) ....43
24. Figure 4.17. Comparison of the pixel intensities of the blurred with Gaussian noise contamination and the original images ....43
25. Figure 4.18. Comparison of the pixel intensities of the Wiener-like reconstruction using three LR images in the presence of Gaussian noise and the original image ....44
26. Figure 4.19. Comparison of the pixel intensities of the Wiener-like reconstruction using nine LR images in the presence of Gaussian noise and the original image ....44
27. Figure 4.20. Comparison of the pixel intensities of the L1/TV reconstruction image using three LR images in the presence of Gaussian noise and the original image ....45
28. Figure 4.21. Comparison of the pixel intensities of the L1/TV reconstruction image using nine LR images in the presence of Gaussian noise and the original image ....45
29. Figure 4.22. Wiener-like deconvolution reconstruction using three LR images in the presence of Gaussian noise (zero mean and standard deviation 20) ....47
30. Figure 4.23. Wiener-like deconvolution reconstruction using nine LR images in the presence of Gaussian noise (zero mean and standard deviation 20) ....47
31. Figure 4.24. Reconstruction obtained by L1/TV optimization using three LR images in the presence of Gaussian noise (zero mean and 20 standard deviation) ....48
32. Figure 4.25. Reconstruction obtained by L1/TV optimization using nine LR images in the presence of Gaussian noise (zero mean and 20 standard deviation) ....48
33. Figure 4.26. The blurred image contaminated with Poisson noise ....50
34. Figure 4.27. Comparison of the pixel intensities of the blurred contaminated with Poisson noise and the original images ....51
35. Figure 4.28. Comparison of the pixel intensities of the Wiener-like reconstruction using three LR images in the presence of Poisson noise and the original image ....51
36. Figure 4.29. Comparison of the pixel intensities of the Wiener-like reconstruction using nine LR images in the presence of Poisson noise and the original image ....52
37. Figure 4.30. Comparison of the pixel intensities of the L1/TV reconstruction image using three LR images in the presence of Poisson noise and the original image ....52
38. Figure 4.31. Comparison of the pixel intensities of the L1/TV reconstruction image using nine LR images in the presence of Poisson noise and the original image ....53
39. Figure 4.32. Wiener-like deconvolution reconstruction using three LR images in the presence of Poisson noise ....54
40. Figure 4.33. Wiener-like deconvolution reconstruction using nine LR images in the presence of Poisson noise ....55
41. Figure 4.34. Reconstruction obtained by L1/TV optimization using three LR images in the presence of Poisson noise ....55
42. Figure 4.35. Reconstruction obtained by L1/TV optimization using nine LR images in the presence of Poisson noise ....56
Chapter 1
Introduction
1.1.
Why Super-resolution?
High-resolution (HR) images are desired and often required for better image
processing and analysis [1]. Higher resolution means that more details can be seen. For
example, HR medical images are very helpful for doctors to make a correct diagnosis.
Signal processing techniques can be used to obtain an HR image from either single or
multiple low-resolution (LR) images. This process commonly is referred to as
superresolution (SR) in the literature [1]. In this thesis, SR refers to signal processing
methods used to reconstruct HR images from multiple LR images.
In light microscopy applications, one SR approach used to reconstruct HR
images is structured illumination microscopy (SIM), developed by M. G. Gustafsson
and R. Heintzmann [2]. This method uses Moiré patterns to see spatial frequencies
beyond the Abbe theory [3]. Abbe theory indicates that the lateral resolution of the
optical microscope is fundamentally limited because of the finite wavelength of light.
Moiré patterns are a visual effect that occurs when viewing superimposed patterns that
differ in angle, spacing, or size. In SIM, multiple frames of LR images with lateral
shifting and rotation of an illumination pattern are collected and processed to form one
HR image [2], [4], [5].
Image fusion, on the other hand, is a field that has been developing in the past
couple of decades to enhance image resolution in general. Image fusion uses LR images
from different channels to build one HR image. Many techniques have been introduced
in the field of image fusion, each of which is designed to serve a specific application [1],
[6].
1.2.
Purpose of the Study
Structured illumination microscopy and image fusion share the same goal of
achieving HR images by using multiple LR images. This work investigates image fusion
optimization methods suggested in the literature and compares them with the existing
SIM reconstruction method, while at the same time comparing the computational cost of
each method.
1.3.
Organization of the Thesis
The thesis is organized as follows: Chapter 2 provides background information
on SIM and image fusion optimization techniques. Chapter 3 shows the methodology
followed. Chapter 4 describes the results of algorithm development and testing. Chapter
5 discusses the results of the thesis. Chapter 6 presents a summary, conclusions, and
recommendations.
Chapter 2
Background
2.1.
Point Spread Function (PSF) and Diffraction Limit
In imaging, for any Linear Translation Invariant (LTI) system, the information
that can be observed, g(x), is the information allowed by the Point Spread Function,
PSF; that is,
𝑔(π‘₯) = 𝑃𝑆𝐹 ∗ 𝑓(π‘₯) + 𝑛(π‘₯),
(2.1)
where * is convolution, f(x) is the original image, and n(x) is noise. This principle is
often better described in the frequency domain as
G(f) = OTF · F(f) + N(f),
(2.2)
where G(f), F(f), and N(f) are the Fourier transforms of g(x), f(x), and n(x), respectively,
and the Optical Transfer Function (OTF) is the Fourier transform of the PSF. Figure 2.1
shows the effect of a PSF on input f(x) to produce output g(x). The PSF widened and
blurred some of the details of f(x). Figure 2.2 shows that the non-zero OTF frequency
components dictate the frequencies of G(f) that can be seen at the output of the system.
[Figure 2.1 image: two panels, (A) and (B), plotting magnitude versus x (mm)]
Figure 2.1. One-dimensional representation of the effect of the PSF in the spatial
domain. (A) Input signal f(x). (B) Output signal g(x). The output of the LTI system, g(x),
has a different peak value and has lost some of the details that existed in the input, f(x).
[Figure 2.2 image: two panels, (A) F(f) and (B) G(f), plotting magnitude versus frequency (cycles/mm)]
Figure 2.2. One-dimensional representation of the effect of the OTF. (A) Input signal
Fourier transform, F(f). (B) Output signal Fourier transform, G(f). The LTI system
caused output, G(f), to lose frequency components beyond the cutoff frequency of the
OTF (fc = 15 cycles/mm) that existed in the input, F(f).
In 1873, Abbe introduced his theory [3], which shows that diffraction limits define
a finite range of spatial frequencies that can be transmitted through a microscope. In this
sense, a PSF could be designed with a shape that represents the effect of the
microscope’s lens. Simplifying a microscope system mathematically to be represented
by (2.1) and (2.2) allowed investigators to find a way to overcome the diffraction limit
[4], [5]. The result, Structured Illumination Microscopy (SIM), achieves a resolution
beyond the diffraction limit [2], [5].
Structured Illumination Microscopy uses the principle that Moiré patterns can
alias higher frequency components down to the lower frequencies. Structured
illumination microscopy uses a sinusoidal illumination to heterodyne the high
frequencies of the image into the passband of the imaging system. With lateral shift and
rotation of the illumination pattern, multiple frames of LR images are collected and
processed to form one HR image. Figure 2.3 outlines the concept of SIM. In (a) Moiré
fringes are observed from overlapping patterns. In (b), the set of low-resolution
information is represented by a circular “observable region” in the frequency domain.
The sinusoidal illumination pattern in the frequency domain is shown in (c). The offset
regions caused by the illumination pattern in the frequency domain are shown in (d). In
(e), the images recovered from different orientations and phases of the pattern in the
frequency domain are shown. These images are used in methods such as Wiener
deconvolution, as described in this chapter, to reconstruct an HR image.
In order to describe SIM in a clear way, and for simplicity, consider applying the
method to the one-dimensional example in Figure 2.1 in order to enhance g(x). The
same idea can be generalized to two-dimensional images.
Figure 2.3. The concept of SIM as described in [3]. (a) Moiré fringes are observed from
overlapping patterns. In (b), the set of low-resolution information in the frequency
domain is represented by a circular “observable region.” The sinusoidal illumination
pattern in the frequency domain is shown in (c). The offset regions caused by the
illumination pattern in the frequency domain are shown in (d). In (e), the images
recovered from different orientations and phases of the pattern are shown in the
frequency domain.
2.1.1. Illumination Pattern
In the one-dimensional case, the illumination pattern is described as
i = 1 + cos(2πf0x + φ),
(2.3)
where f0 is the frequency, and φ is the phase of the pattern. (Throughout this thesis,
continuous Fourier transform notation is used to present theory, but the discrete Fourier
transform is used for computer implementation.) Therefore, the illumination pattern in
the frequency domain is defined as
I(f) = δ(f) + (1/2)δ(f + f0)e^(-iφ) + (1/2)δ(f − f0)e^(iφ).
(2.4)
Figure 2.4 shows the discrete Fourier transform of the illumination pattern. For
the illumination pattern to be effective, allowing higher frequency components to be
seen and extracted, f0 needs to be less than the cutoff frequency, fc, of the
OTF [4], [5]. In this example, f0 = 12 cycles/mm, fc = 15 cycles/mm, and φ = 0.
[Figure 2.4 image: magnitude versus frequency index]
Figure 2.4. The illumination pattern in the frequency domain. f0 = 12 cycles/mm, and φ = 0.
The application of the illumination pattern to signal g(x) results in
𝑔𝑖 (π‘₯) = 𝑔(π‘₯) βˆ™ (1 + cos(2πœ‹π‘“π‘œ π‘₯ + πœ‘)),
(2.5)
which has Fourier transform
Gi(f) = G(f) + (1/2)G(f + f0)e^(-iφ) + (1/2)G(f − f0)e^(iφ).
(2.6)
The passband allowed by the OTF has overlapping components of higher
frequencies and lower frequencies. Figure 2.5 shows Gi(f) with the illumination pattern
aliasing some higher frequency components down into the passband. The diamond-symbol
line represents the higher frequency components of G(f) shifted by the
cos(2πf0x + φ) term of the illumination pattern into the passband of the OTF. In reality,
this line is added to the x-symbol line, and together they make the square-symbol line,
which represents Gi(f). In the next section, the mechanism for breaking down the
square-symbol line into the three components that represent the x-symbol line and
diamond-symbol line is explained. The three components eventually are used to
reconstruct the HR f̂(x).
2.1.2. Extracting and Shifting Linear Components
The observed Gi(f) is a sum of three contributions, and it is not possible to
separate them using a single Gi. In order to obtain the three components in (2.6), the sum of
which can be seen as the square-symbol line in Figure 2.5, the illumination pattern
should be applied with at least three phase values. In the case of two-dimensional
images, more than three phases and different orientations are required [5].
After solving for the three independent linear components, one is unshifted and
two are shifted objects [5]. The shifted components carry frequencies that were not
accessible through the passband of the OTF. The unshifted version may be retained as is, but
the shifted versions must be moved in Fourier space so as to bring the spatial
frequencies of these components from being centered at f − f0 and f + f0 to being
centered at 0 [2], [4], [5]. Then all three components may be combined appropriately to
obtain a superresolved image. Figure 2.6 shows the three components separated.
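The separation step can be sketched as a per-frequency 3 × 3 linear system. Using three phases φ = 0, 2π/3, and 4π/3 in (2.6) gives three equations in the three unknowns G(f), (1/2)G(f + f0), and (1/2)G(f − f0). The following hypothetical Python/NumPy sketch (using a random stand-in spectrum; the thesis's own implementation is MATLAB) simulates the three observations and recovers the components:

```python
import numpy as np

n = 128
f0 = 10
rng = np.random.default_rng(0)
G = rng.normal(size=n) + 1j * rng.normal(size=n)   # stand-in spectrum G(f)

phases = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])  # three pattern phases

# Simulate the three observations of (2.6):
#   G_i = G(f) + (1/2) G(f + f0) e^(-i phi) + (1/2) G(f - f0) e^(+i phi)
Gp = np.roll(G, -f0)   # G(f + f0)
Gm = np.roll(G, +f0)   # G(f - f0)
obs = np.stack([G + 0.5 * Gp * np.exp(-1j * p) + 0.5 * Gm * np.exp(1j * p)
                for p in phases])

# Solve the 3x3 linear system, one row per phase, for the three components
M = np.stack([np.ones(3),
              np.exp(-1j * phases),
              np.exp(1j * phases)], axis=1)
comps = np.linalg.solve(M, obs)                    # shape (3, n)

assert np.allclose(comps[0], G)          # unshifted component
assert np.allclose(comps[1], 0.5 * Gp)   # to be shifted back down by f0
assert np.allclose(comps[2], 0.5 * Gm)   # to be shifted back up by f0
```

The matrix M is invertible because the three phase factors are distinct, which is why at least three phases are required.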
References [3] and [5] suggested the use of Wiener-like deconvolution to
increase image resolution by a factor of 2. The result of the reconstruction using Wiener-like
deconvolution, fw(x), is shown in Figure 2.7. In this result, fw(x) is much closer to
f(x) than is g(x). This example did not incorporate any noise. As this thesis discusses,
Wiener-like deconvolution performance degrades in the presence of noise.
Therefore, this thesis investigates another method to reconstruct an HR image from
structured illumination LR images.
[Figure 2.5 image: magnitude versus frequency (cycles/mm) for G(f), (1/2)(G(f−f0)+G(f+f0)), and their sum Gi(f)]
Figure 2.5. Frequency components introduced by the illumination pattern. The
line with x-shapes represents G(f), the line with diamonds represents
(1/2)G(f + f0)e^(-iφ) + (1/2)G(f − f0)e^(iφ), and the line with squares represents Gi(f).
[Figure 2.6 image: magnitude versus frequency (cycles/mm) for G(f), (1/2)G(f−f0), and (1/2)G(f+f0)]
Figure 2.6. Separated components. These components are used to reconstruct the
HR output.
[Figure 2.7 image: magnitude versus x (mm) for f(x), g(x), and fw(x)]
Figure 2.7. A comparison of original signal f(x), conventional outcome of LTI
system g(x), and constructed signal fw(x) using Wiener-like deconvolution.
2.2.
Image Fusion
Image fusion has been used in many applications, including but not limited to
medical imaging, astronomy, security, and surveillance [1], [6]. Image fusion uses
mathematical techniques in order to create a single composite HR image that is more
comprehensive and thus more useful for the human operator or computer vision task [6].
The basic idea behind SR is the fusion of a sequence of LR noisy blurred images to
produce an HR image. The resulting image has less noise and blur effects and thus more
HR content than any of the LR input images. Fusing multiple LR images to reconstruct
an HR image is an inverse problem, and it is a computationally complex and
numerically ill-posed problem. To be able to solve such a problem, many optimization
techniques have been proposed in both the spatial and the frequency domains [1], each
of which is designed to serve a specific application.
Image fusion is a broad topic. This thesis focuses on techniques that help
increase the spatial resolution. As described in [1], approaches such as frequency
domain, stochastic, deterministic, projection onto convex sets (POCS), ML-POCS
hybrid reconstruction, and many others have been studied in the literature for increasing
the spatial resolution. This thesis investigates the deterministic approach, since this
method incorporates prior knowledge [1], [7]. This approach allows the use of what is
known in SIM and helps solve the otherwise ill-posed problem.
2.2.1. Deterministic Approach
Since SIM has been well studied, the Signal-to-Noise Ratio (SNR), types of noise,
and the approximate shape of the PSF are well understood [3], [4]. The deterministic
approach of image fusion can use this information to enhance the result of SIM. Going
back to (2.1), the convolution can be expressed as a matrix multiplication. If the PSF
is a matrix with size [m, n], then
PSF ∗ x = K × x,
(2.6)
where × is matrix multiplication, and K is a circulant matrix with size [m², n²].
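This equivalence can be checked on a small example. The sketch below (hypothetical Python/NumPy, one-dimensional with circular boundary conditions assumed; the thesis's implementation is MATLAB) builds the circulant matrix K from a PSF and verifies that K × x reproduces the circular convolution PSF ∗ x:

```python
import numpy as np

n = 8
psf = np.array([0.5, 0.25, 0.0, 0.0, 0.0, 0.0, 0.0, 0.25])  # small assumed 1-D PSF
x = np.arange(n, dtype=float)

# Circulant matrix K: each column is a circular shift of the PSF
K = np.column_stack([np.roll(psf, j) for j in range(n)])

# Circular convolution computed directly from its definition
t = np.arange(n)
conv = np.array([np.sum(psf * x[(i - t) % n]) for i in range(n)])

assert np.allclose(K @ x, conv)   # K x reproduces PSF * x
```

For an m × n image flattened into a vector, the same construction yields the [m², n²]-sized K mentioned above, which is why forming K explicitly is avoided in practice.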
A straightforward but naive method to solve (2.1) is by using the normal equation,
𝑓(π‘₯) = (𝐾 𝑇 𝐾)−1 𝐾 𝑇 βˆ™ 𝑔(π‘₯).
(2.7)
However, due to the size of K (in the case of a 512 × 512 image, K is
512² × 512²), this solution is impractical. Therefore, developers of SR methods usually
explicitly or implicitly define a cost function to estimate f(x) in an iterative fashion [7].
This type of cost function assures a certain fidelity or closeness of the final solution to
the measured data. Cost functions are founded on either algebraic or statistical
reasoning. Perhaps the cost function most common to both perspectives is the
least-squares (LS) cost function, which minimizes the L2 norm of the residual vector,
resulting in
f̂(x) = Argmin[|g(x) − K × f(x)|²].
(2.8)
When n(x) is white Gaussian noise with zero mean, this f̂(x) is the maximum
likelihood estimate of f(x) [7]. However, finding the minimizer of (2.8) amplifies the
random noise n(x) in the direction of the singular vectors (in the SR case, these are the
high spatial frequencies), making the solution highly sensitive to measurement noise [7].
Some form of regularization must be included in the cost function to stabilize the
problem or constrain the space of solutions [1], [6], [7].
The choice of regularization plays an important role in the performance of any
optimization algorithm. Like the cost function, regularization also is described from both
algebraic and statistical perspectives. In both cases, regularization takes the form of soft
constraints on the space of possible solutions, often independent of the measured data, as
Μ‚ = π΄π‘Ÿπ‘”π‘€π‘–π‘›[πœ‡ βˆ™ |𝑔(π‘₯) − 𝐾 × π‘“(π‘₯)|2 + 𝛢(𝑓(π‘₯))].
𝑓(π‘₯)
(2.9)
In (2.9), the function 𝛢(f(x)) places a penalty on the unknown f(x) to direct it to
a better-formed solution. The coefficient μ indicates the weight or the strength with
which this penalty is enforced. In general, μ can be chosen manually, using
visual inspection, or automatically, using methods such as Generalized Cross-Validation
[8], [9], the L-curve [10], and other techniques.
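The stabilizing effect of the penalty term in (2.9) can be demonstrated with the simplest quadratic choice, 𝛢(f) = |f|² (a Tikhonov penalty, used here only for illustration; the thesis adopts TV instead). A hypothetical Python/NumPy sketch on a small one-dimensional deblurring problem:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32
t = np.arange(n)

# Assumed circulant blur matrix K built from a wide 1-D Gaussian PSF
psf = np.exp(-0.5 * (np.minimum(t, n - t) / 1.5) ** 2)
psf /= psf.sum()
K = np.column_stack([np.roll(psf, j) for j in range(n)])

f_true = ((t > 10) & (t < 20)).astype(float)    # simple piecewise-constant signal
g = K @ f_true + 0.01 * rng.normal(size=n)      # blurred, noisy measurement

# Unregularized normal-equation solution, as in (2.7): noise is amplified
f_naive = np.linalg.solve(K.T @ K, K.T @ g)

# Regularized cost of (2.9) with the quadratic penalty |f|^2:
#   minimize mu*|g - K f|^2 + |f|^2  =>  (mu*K^T K + I) f = mu*K^T g
mu = 100.0
f_reg = np.linalg.solve(mu * (K.T @ K) + np.eye(n), mu * (K.T @ g))

err_naive = np.linalg.norm(f_naive - f_true)
err_reg = np.linalg.norm(f_reg - f_true)
assert err_reg < err_naive   # the penalty stabilizes the ill-posed inversion
```

The unregularized solution blows up along the small singular values of K, exactly the amplification of noise described above, while the penalized solution stays close to the true signal.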
2.2.2. Choosing Cost Function and Penalty Function
In recent years, the L1 norm has become an interesting topic for optimization.
The L1 norm was ignored in the past due to the difficulty of obtaining a solution (it is
difficult to differentiate). However, with advancements in computational power, many
algorithms have been suggested to solve this type of optimization [11], [12].
Minimizing the L1 norm leads to the sparsest solution [7], [11]. In [11] and [12], the
ability of the L1 norm to reduce impulsive noise was confirmed. Therefore, the L1
norm characteristics, along with new algorithms to solve the L1 norm problem, justify
the use of the cost function described by
Μ‚ = π΄π‘Ÿπ‘”π‘€π‘–π‘›[πœ‡ βˆ™ |𝑔(π‘₯) − 𝐾 × π‘“(π‘₯)| + 𝛢(𝑓(π‘₯))].
𝑓(π‘₯)
(2.10)
Total variation (TV), which was first introduced in [13], has been one of the
most successful regularization methods for denoising and deblurring [7], [14]. Total
variation is well known for preserving discontinuities in recovered images, and TV-based algorithms have proven effective for reducing noise and blur without smearing
sharp edges for grayscale images [7], [11], [13], [15]. Total variation is defined as
𝑇𝑉(𝑓(π‘₯)) = ∫|∇ 𝑓(π‘₯)| 𝑑π‘₯,
(2.11)
where ∇𝑓(π‘₯) is the gradient of f(x). By making 𝛢=TV, (2.11) becomes
Μ‚ = π΄π‘Ÿπ‘”π‘€π‘–π‘›[ πœ‡ βˆ™ |𝑔(π‘₯) − 𝐾 × π‘“(π‘₯)| + 𝑇𝑉(𝑓(π‘₯))].
𝑓(π‘₯)
(2.12)
Chapter 3
Methodology
3.1. Reasons for the Study
The purpose of this study is to determine whether the L1/TV reconstruction
technique for images obtained using SI results in better performance than the Wiener-like method suggested in [2], [4], [5]. Performance is investigated on blurred images with
and without noise (Gaussian or Poisson). The study also investigates whether the
number of multiple orientations can be reduced for the L1/TV method compared with
the Wiener-like method.
3.2. Simulation Process
In this thesis, MATLAB code (Appendix A) is used to generate LR images and
to reconstruct HR images. First, an image (original image) is selected of size 512 x 512.
Then, this image is blurred to simulate the effect of the diffraction limit caused by the
microscope lens by using an OTF defined as
2 +(𝜎 πœ‹(𝑓 −255))2 )
𝑦
𝑦
𝑂𝑇𝐹 = 𝑒 −2((𝜎π‘₯ πœ‹(𝑓π‘₯ −255))
,
(3.1)
where 𝜎π‘₯ = πœŽπ‘¦ = 3, and 𝑓π‘₯ π‘Žπ‘›π‘‘ 𝑓𝑦 are frequency indexes of the x-axis and the y-axis,
respectively, 0 ≤ 𝑓π‘₯ , 𝑓𝑦 < 512. The code performs SI on the blurry image to obtain
multiple LR images. Structured illumination takes the shape of
𝑖 = 1 + cos(2πœ‹ (𝑓0π‘₯ π‘₯ + 𝑓0𝑦 𝑦) + πœ‘),
where πœ‘ = 0,
2πœ‹
3
, π‘Žπ‘›π‘‘
4πœ‹
1
(3.2)
1
, 𝑓0π‘₯ = 2 𝜎 cos(πœƒ) , and 𝑓0𝑦 = 2 𝜎 sin(πœƒ) for πœƒ =
3
π‘₯
𝑦
45π‘œ , 105π‘œ , and 165π‘œ . The values for 𝑓0π‘₯ and 𝑓0𝑦 are chosen to make sure the
illumination pattern is located within the radius of the passband dictated by the cutoff
frequency of the OTF so that higher-frequency components can be seen and processed [4]-[6]. Finally, the code performs different trials to assess the performance of the
reconstruction using Wiener-like deconvolution and L1/TV optimization.
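As a concrete illustration of (3.1) and (3.2), the following NumPy sketch generates the Gaussian OTF and one illumination pattern. The authoritative implementation is the MATLAB code of Appendix A; this Python version is an assumption, in particular the normalization of the frequency offset $(f_x - 255)$ by the image size, without which the OTF of (3.1) would be nonzero at essentially a single pixel.

```python
import numpy as np

N = 512
sigma_x = sigma_y = 3.0  # blur parameters from (3.1)

# Gaussian OTF of (3.1), peak at frequency index (255, 255); the division by N
# is an assumed normalization of the frequency offset
fx, fy = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
otf = np.exp(-2.0 * ((sigma_x * np.pi * (fx - 255) / N) ** 2
                     + (sigma_y * np.pi * (fy - 255) / N) ** 2))

def illumination(theta_deg, phi):
    """Sinusoidal SI pattern of (3.2) for one orientation and phase."""
    theta = np.deg2rad(theta_deg)
    f0x = np.cos(theta) / (2.0 * sigma_x)   # cycles per pixel
    f0y = np.sin(theta) / (2.0 * sigma_y)
    x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return 1.0 + np.cos(2.0 * np.pi * (f0x * x + f0y * y) + phi)

pattern = illumination(45, 0.0)   # stripes as in Figure 4.5
```

The pattern ranges over [0, 2], so multiplying it into the blurry image modulates, but never inverts, the pixel intensities.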
The first trial is to reconstruct HR images using the Wiener-like deconvolution
and L1/TV methods with no noise added to the blurry image. The maximum pixel
intensity of the original image is set to 1000. This trial performs reconstruction using
three LR images and nine LR images as suggested in [4]-[5].
The second trial is performed for the case of Gaussian noise added to the blurry
image. The maximum pixel intensity of the original image is set to 1000. The Gaussian
noise added to the blurry image has zero mean and a standard deviation of 20. The code
performs the reconstruction using Wiener-like deconvolution and the L1/TV methods
using three LR images and nine LR images.
The final trial of the study is for only Poisson noise added to the blurry image.
The maximum pixel intensity of the original image is set to 1000. Poisson noise is pixel
dependent (every pixel is affected by Poisson noise that has a different standard
deviation). For this study, the Poisson noise of the average pixel of the image is set to
have an approximate standard deviation of 20. The code reconstructs HR images using
Wiener-like deconvolution and the L1/TV methods using three LR images and nine LR
images.
3.3. Evaluation of the Reconstructed High Resolution Images
The subjective method of visual inspection is used to evaluate the quality of the
reconstructed HR images. Also, Peak Signal to Standard Error of the Estimate Ratio
(PSSEER) is used to evaluate the reconstructed image quantitatively. The consistency of
the subjective and objective methods of assessing performance is examined.
Although processing time is not critical in some applications (e.g., biology),
this study compares the processing time for Wiener-like deconvolution and L1/TV
reconstruction. The comparison shows the trade-off between processing time and quality
of the reconstructed images. Processing times were measured for MATLAB 2011a run on
an Apple MacBook Pro with a 2.3-GHz Intel Core i5 processor and 4 GB of memory.
Chapter 4
Results: Algorithm Development and Testing
4.1. Selected Original Image
The original image selected is a 512 × 512 grid of pixels that is commonly
known in the image processing community as Lena. Lena, as shown in Figure 4.1, has
been used as a standard test image [7], [11]. Lena contains edges, smooth areas, and
contrast, which make it possible to test reconstruction methods and to generalize the
study to a broader context.
Figure 4.1. Lena 512 × 512 gray scale image. Lena contains edges, smooth areas, and
contrast. Details such as feathers are crisp.
4.2. The Blurry Image
Figure 4.2 shows the log10-scaled image of the Gaussian-shaped OTF used
to approximate the effect of the lens as defined in (3.1). This two-dimensional
representation of the OTF is used throughout the study. In the black region, the OTF has
values equal to or close to zero. The peak of the OTF is located at frequency index (255,
255).
Figure 4.2. Two-dimensional representation of the OTF. In the black region, values are
equal to or close to zero. Both axes are frequency index.
Let f(x, y) be the image in Figure 4.1, and let $F(f_x, f_y)$ be the discrete Fourier
transform of f(x, y) in the frequency space. Multiplying $F(f_x, f_y)$ by the OTF results in

$$G(f_x, f_y) = F(f_x, f_y) \cdot OTF, \qquad (4.1)$$

where $(\cdot)$ is element-by-element multiplication. Transforming $G(f_x, f_y)$ back to the
spatial domain results in the blurry image, g(x, y), shown in Figure 4.3. As expected, in
Figure 4.3, the higher frequency components (i.e., edges) are now blurred and hard to
recognize.
Figure 4.3. The blurry image, 𝑔(π‘₯, 𝑦). The effect of the PSF caused feathers to be
blurred and hard to recognize.
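The blurring of (4.1) amounts to an element-by-element product in the centered frequency space. A minimal NumPy sketch, assuming the OTF grid is centered on index (255, 255) as in Figure 4.2 (so `fftshift` is needed; the function name `blur` is illustrative):

```python
import numpy as np

def blur(f, otf):
    """Apply (4.1): multiply the centered DFT of f by the OTF, transform back."""
    F = np.fft.fftshift(np.fft.fft2(f))   # centered spectrum F(fx, fy)
    G = F * otf                           # element-by-element product (4.1)
    return np.real(np.fft.ifft2(np.fft.ifftshift(G)))

rng = np.random.default_rng(0)
f = rng.random((512, 512))
otf_flat = np.ones((512, 512))            # identity OTF: blur() returns f unchanged
g = blur(f, otf_flat)
```

With the Gaussian OTF of (3.1) in place of the identity, `g` is the blurry image of Figure 4.3.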
4.3. Applying Structured Illumination
The shapes of the illumination pattern are shown in Figure 4.4. Consider phase
$\varphi = 0$ and orientation $\theta = 45^\circ$. This illumination pattern causes a pattern of parallel
stripes on g(x, y), as seen in Figure 4.5. The resulting three overlapped components seen
within the passband dictated by the cutoff frequency of the OTF are shown in Figure
4.6. One component, the centralband, carries the exact information found in the blurred
image g(x, y). The other components, sideband 1 and sideband 2, carry the higher
frequencies that were not accessible in g(x, y).
πœƒ = 450
πœƒ = 1050
πœƒ = 1650
πœ‘=0
πœ‘=
2πœ‹
3
πœ‘=
4πœ‹
3
Figure 4.4. Illumination patterns for different phases and orientations.
2πœ‹
4πœ‹
The phases values are πœ‘ = 0, 3 , and 3 , and the orientation values
are πœƒ = 45π‘œ , 105π‘œ , and 165π‘œ .
Figure 4.5. The application of the illumination pattern on g(x, y). The
parallel stripes resulted from an illumination pattern with phase φ = 0
and θ = 45 degrees.
Figure 4.6. The three overlapped components of the illumination pattern in the
frequency space with $\theta = 45^\circ$ and phase $\varphi = 0$. The arrows point to the
central frequencies of the components: sideband 1 at $(f_x + f_{0x}, f_y - f_{0y})$, the
centralband at $(f_x, f_y)$, and sideband 2 at $(f_x - f_{0x}, f_y + f_{0y})$. Both axes are
frequency index.
The three different phase values $\varphi = 0$, $\frac{2\pi}{3}$, and $\frac{4\pi}{3}$ and orientation $\theta = 45^\circ$
result in the following set of equations to solve:
𝐹(𝑓π‘₯ , 𝑓𝑦 ) (𝐼𝑃 π‘€π‘–π‘‘β„Ž πœƒ = 45 π‘Žπ‘›π‘‘ πœ‘ = 0) βˆ™ 𝑂𝑇𝐹
𝑒 𝑗0 𝑒 𝑗0 𝑒 −𝑗0
π‘π‘’π‘›π‘‘π‘Ÿπ‘Žπ‘™π‘π‘Žπ‘›π‘‘1
2πœ‹
−𝑗
) βˆ™ 𝑂𝑇𝐹 = [𝑒 𝑗0 𝑒 𝑗2πœ‹
π‘ π‘–π‘‘π‘’π‘π‘Žπ‘›π‘‘11 ],
]
×
[
3
3
𝑒
3
4πœ‹
4πœ‹
4πœ‹
π‘ π‘–π‘‘π‘’π‘π‘Žπ‘›π‘‘21
𝑗
−𝑗
𝑒 𝑗0 𝑒 3 𝑒 3
[𝐹(𝑓π‘₯ , 𝑓𝑦 ) (𝐼𝑃 π‘€π‘–π‘‘β„Ž πœƒ = 45 π‘Žπ‘›π‘‘ πœ‘ = ) βˆ™ 𝑂𝑇𝐹 ]
𝐹(𝑓π‘₯ , 𝑓𝑦 ) (𝐼𝑃 π‘€π‘–π‘‘β„Ž πœƒ = 45 π‘Žπ‘›π‘‘ πœ‘ =
2πœ‹
(4.2)
3
where IP stands for illumination pattern. Equation (4.2) can be rearranged as
$$\begin{bmatrix} centralband_1 \\ sideband1_1 \\ sideband2_1 \end{bmatrix} = \begin{bmatrix} e^{j0} & e^{j0} & e^{-j0} \\ e^{j0} & e^{j\frac{2\pi}{3}} & e^{-j\frac{2\pi}{3}} \\ e^{j0} & e^{j\frac{4\pi}{3}} & e^{-j\frac{4\pi}{3}} \end{bmatrix}^{-1} \times \begin{bmatrix} F(f_x, f_y)\,(\mathrm{IP\ with\ } \theta = 45 \mathrm{\ and\ } \varphi = 0) \cdot OTF \\ F(f_x, f_y)\,(\mathrm{IP\ with\ } \theta = 45 \mathrm{\ and\ } \varphi = \tfrac{2\pi}{3}) \cdot OTF \\ F(f_x, f_y)\,(\mathrm{IP\ with\ } \theta = 45 \mathrm{\ and\ } \varphi = \tfrac{4\pi}{3}) \cdot OTF \end{bmatrix}. \qquad (4.3)$$
Each of the three separated components has different frequency details that
can be used to reconstruct an HR image. The same procedure is performed for
orientations $\theta = 105^\circ$ and $165^\circ$.
4.4. Reconstruction Process
To be able to use these components and the components for the other two
orientations for SR, each component is considered a separate LR image. Each LR image
would need a different OTF. The centralbands are left as is, and the sidebands of these
components are shifted back to the central frequency. The components are represented
as
π‘π‘’π‘›π‘‘π‘Ÿπ‘Žπ‘™π‘π‘Žπ‘›π‘‘π‘– (𝑓π‘₯ , 𝑓𝑦 ) = 𝑂𝑇𝐹 × πΉ(𝑓π‘₯ , 𝑓𝑦 ),
π‘ π‘–π‘‘π‘’π‘π‘Žπ‘›π‘‘1𝑖 (𝑓π‘₯ , 𝑓𝑦 ) = 𝑂𝑇𝐹 × πΉ (𝑓π‘₯ − π‘“π‘œπ‘₯ 𝑖 , 𝑓𝑦 + π‘“π‘œπ‘¦ 𝑖 ) ,
π‘ π‘–π‘‘π‘’π‘π‘Žπ‘›π‘‘2𝑖 (𝑓π‘₯ , 𝑓𝑦 ) = 𝑂𝑇𝐹 × πΉ (𝑓π‘₯ + π‘“π‘œπ‘₯ 𝑖 , 𝑓𝑦 − π‘“π‘œπ‘¦ ) ,
𝑖
(4.4)
}
where i = 1 to 3. The LR image equations in (4.4) also can be represented as
π‘π‘’π‘›π‘‘π‘Ÿπ‘Žπ‘™π‘π‘Žπ‘›π‘‘π‘– (𝑓π‘₯ , 𝑓𝑦 ) = 𝑂𝑇𝐹 × πΉ(𝑓π‘₯ , 𝑓𝑦 ),
π‘ π‘–π‘‘π‘’π‘π‘Žπ‘›π‘‘1𝑖 (𝑓π‘₯ , 𝑓𝑦 ) = 𝑂𝑇𝐹 (𝑓π‘₯ + π‘“π‘œπ‘₯ 𝑖 , 𝑓𝑦 − π‘“π‘œπ‘¦ 𝑖 ) × πΉ(𝑓π‘₯ , 𝑓𝑦 ),
π‘ π‘–π‘‘π‘’π‘π‘Žπ‘›π‘‘2𝑖 (𝑓π‘₯ , 𝑓𝑦 ) = 𝑂𝑇𝐹 (𝑓π‘₯ − π‘“π‘œπ‘₯ 𝑖 , 𝑓𝑦 + π‘“π‘œπ‘¦ 𝑖 ) × πΉ(𝑓π‘₯ , 𝑓𝑦 ),
(4.5)
}
where $i = 1$ to 3. For the bands shown in Figure 4.6, and using the OTF defined in
(3.1), the shifted versions of the OTF corresponding to the components are as follows:
for the centralband, $OTF_1 = OTF(f_x + 0, f_y + 0)$; for sideband 1, $OTF_2 = OTF(f_x + f_{0x_1}, f_y - f_{0y_1})$; and for sideband 2, $OTF_3 = OTF(f_x - f_{0x_1}, f_y + f_{0y_1})$.
4.4.1. Wiener-Like Deconvolution
Each LR image contains additional information that can be used to improve
image quality. The question is how to best use these components to enhance the
resolution of the blurry image in Figure 4.3. Keep in mind that these components represent
different images of the same object. Wiener-Helstrom-like deconvolution was introduced in [14] to
deconvolve different images acquired from different sensors (here, different OTFs). So,
define the shifted version of the components in Figure 4.6 as
𝑆1 = π‘π‘’π‘›π‘‘π‘Ÿπ‘Žπ‘™π‘π‘Žπ‘›π‘‘1 (𝑓π‘₯ + 0, 𝑓𝑦 + 0),
𝑆2 = π‘ π‘–π‘‘π‘’π‘π‘Žπ‘›π‘‘11 (𝑓π‘₯ − π‘“π‘œπ‘₯ 1 , 𝑓𝑦 + π‘“π‘œπ‘¦ 1 ) , π‘Žπ‘›π‘‘
(4.6)
𝑆3 = π‘ π‘–π‘‘π‘’π‘π‘Žπ‘›π‘‘21 (𝑓π‘₯ + π‘“π‘œπ‘₯ 1 , 𝑓𝑦 − π‘“π‘œπ‘¦ 1 ).
These components are used to reconstruct the image as proposed in [5] and [14] using
𝐹𝑀(𝑓π‘₯, 𝑓𝑦) =
∑3𝑖=1 𝑂𝑇𝐹𝑖∗ βˆ™π‘†π‘–
πœ€+∑3𝑖=1|𝑂𝑇𝐹𝑖 |2
,
(4.7)
where $*$ represents the conjugate of $OTF_i$, and $\varepsilon$ is the noise-to-signal ratio. From (4.7), a
lower value of $\varepsilon$ may be used for increased visual contrast at the expense of increased
noise in the image. In this experiment, $\varepsilon = 0.09$. For the case of nine LR images, $i = 1$ to 9.
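A minimal NumPy sketch of the fusion rule (4.7); the function name `wiener_fuse` and the toy inputs are illustrative assumptions, and $\varepsilon = 0$ is used below only so that the toy recovery is exact (the thesis uses $\varepsilon = 0.09$):

```python
import numpy as np

def wiener_fuse(S, OTFs, eps=0.09):
    """Wiener-like fusion of (4.7): conjugate-weighted sum over components."""
    num = sum(np.conj(H) * s for H, s in zip(OTFs, S))
    den = eps + sum(np.abs(H) ** 2 for H in OTFs)
    return num / den

rng = np.random.default_rng(2)
F_true = rng.random((8, 8))                    # toy "true" spectrum
OTFs = [rng.random((8, 8)) for _ in range(3)]  # three toy (real) OTFs
S = [H * F_true for H in OTFs]                 # each component = OTF_i * F, as in (4.5)
F_w = wiener_fuse(S, OTFs, eps=0.0)            # eps = 0: exact recovery in this noiseless toy
```

With noise present, a positive $\varepsilon$ damps frequencies where all three OTFs are small, trading contrast for noise suppression exactly as the text describes.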
4.4.2. L1 Norm with Total Variation (L1/TV) Optimization
For the two-dimensional case, the optimization problem in (2.12) is defined as

$$\hat{f}(x, y) = \operatorname{Argmin}\left[ \, \mu \cdot |g(x, y) - K \times f(x, y)| + TV(f(x, y)) \, \right], \qquad (4.8)$$

where $\mu$ is a positive number indicating the weight or the strength with which this
fidelity is enforced, and [m, n] indicates the size of the image. Now, to solve (4.8), the
Alternating Minimization algorithm is used as suggested in [15].
4.4.2.1. Alternating Minimization Algorithm
To be able to solve the cost function defined in (4.8), first the regularization part
of the equation is defined. Recalling (2.11), the discrete representation for the two-dimensional case is

$$TV(f(x, y)) = \sum_{i=1}^{m \times n} |D_i f(x, y)|, \qquad (4.9)$$
where, for each i, $D_i f(x, y)$ represents the first-order finite difference of f(x, y) at pixel i
in both horizontal and vertical directions. The absolute value $|D_i f(x, y)|$
is the variation of f(x, y) at pixel i. Then (4.8) is equivalent to
̂𝑦) = π΄π‘Ÿπ‘”π‘€π‘–π‘› [∑π‘š×𝑛
𝑓(π‘₯,
𝑖=1 (πœ‡ βˆ™ |𝑔(π‘₯, 𝑦) − 𝐾 × π‘“(π‘₯, 𝑦)| + |𝐷𝑖 𝑓(π‘₯, 𝑦)|)].
(4.10)
Let z and w be auxiliary variables that approximate $[g(x, y) - K \times f(x, y)]$
and $D_i f(x, y)$ in the non-differentiable norms in (4.10), respectively. Then, by adding
quadratic terms to penalize the difference between every pair of original and auxiliary
variables, the approximate problem to (4.10) is
̂𝑦) = π΄π‘Ÿπ‘”π‘€π‘–π‘› [∑
𝑓(π‘₯,
π‘š×𝑛
𝑖=1
(𝑀𝑖 +
𝛽1
|𝑀 − 𝐷𝑖 𝑓(π‘₯, 𝑦) |2 ) + πœ‡(|𝑧|
2 𝑖
𝛽2
+ |𝑧 − (𝑔(π‘₯, 𝑦) − 𝐾 × π‘“(π‘₯, 𝑦))|2 )].
2
(4.11)
The augmented Lagrangian function of (4.11) is

$$\begin{aligned} L_A(x, w, z, \lambda) = {} & \sum_i \left( |w_i| - \lambda_1^T (w_i - D_i f(x, y)) \right) + \frac{\beta_1}{2} \sum_i |w_i - D_i f(x, y)|^2 \\ & + \mu |z| - \lambda_2^T \left( z - (K \times f(x, y) - g(x, y)) \right) + \frac{\beta_2}{2} \left| z - (K \times f(x, y) - g(x, y)) \right|^2, \end{aligned}$$
where $\beta_1, \beta_2 > 0$ are penalty parameters, and $\lambda = (\lambda_1, \lambda_2)$ is the Lagrangian multiplier.
Now, according to the scheme suggested by the Alternating Direction Method (ADM)
[5], [16], for a given $f(x, y)^k$ and $\lambda^k$, the next iteration to find $f(x, y)^{k+1}$, $w^{k+1}$, $z^{k+1}$,
and $\lambda^{k+1}$ is generated as follows:
1. Fixing $f(x, y) = f(x, y)^k$ and $\lambda = \lambda^k$ and minimizing $L_A$ with respect to w and z
to obtain $w^{k+1}$ and $z^{k+1}$. The minimizers are given explicitly by
πœ†π‘˜
πœ†π‘˜
1
𝑀 π‘˜+1 = π‘šπ‘Žπ‘₯ {|𝐷𝑖 𝑓(π‘₯, 𝑦)π‘˜ + 𝛽1 | − 𝛽 , 0} 𝑠𝑔𝑛 (𝐷𝑖 𝑓(π‘₯, 𝑦)π‘˜ + 𝛽1 ),
1
𝑧
π‘˜+1
1
1
(4.12)
πœ†π‘˜2
πœ‡
= π‘šπ‘Žπ‘₯ {|𝐾 × π‘“(π‘₯, 𝑦) − 𝑔(π‘₯, 𝑦) + | − , 0}
𝛽2
𝛽2
π‘˜
πœ†π‘˜
𝑠𝑔𝑛 (𝐾 × π‘“(π‘₯, 𝑦)π‘˜ − 𝑔(π‘₯, 𝑦) + 𝛽2 ),
(4.13)
1
where sgn represents the signum function.
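Both (4.12) and (4.13) are instances of the same shrinkage (soft-thresholding) operator, $\max\{|v| - t, 0\} \cdot \operatorname{sgn}(v)$, applied elementwise with thresholds $1/\beta_1$ and $\mu/\beta_2$, respectively. A NumPy sketch (the helper name `shrink` is an assumption):

```python
import numpy as np

def shrink(v, t):
    """Soft-thresholding: closed-form minimizer of the w- and z-subproblems."""
    return np.maximum(np.abs(v) - t, 0.0) * np.sign(v)

# Elements with magnitude below the threshold are set to zero;
# the rest are pulled toward zero by the threshold amount.
v = np.array([-2.0, -0.3, 0.0, 0.3, 2.0])
w = shrink(v, 0.5)   # e.g., t = 1/beta1 for the w-update of (4.12)
```

This zeroing of small entries is what makes the subproblem solutions sparse and lets the iteration handle the non-differentiable L1 terms without explicit subgradients.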
2. Computing $f(x, y)^{k+1}$ via solving the normal equations

$$\left( D^T D + \frac{\beta_2}{\beta_1} K^T K \right) f(x, y) = D^T \left( w^{k+1} - \frac{\lambda_1^k}{\beta_1} \right) + K^T \left( \frac{\beta_2 z^{k+1} - \lambda_2^k}{\beta_1} \right) + \frac{\beta_2}{\beta_1} K^T \times g(x, y). \qquad (4.14)$$
3. Updating πœ† by
πœ†1π‘˜+1 = πœ†1π‘˜ − 𝛽1 (𝑀 π‘˜+1 − 𝐷𝑖 𝑓(π‘₯, 𝑦)π‘˜+1 ),
(4.15)
πœ†π‘˜+1
= πœ†π‘˜2 − 𝛽2 (𝑧 π‘˜+1 − (𝐾 × π‘“(π‘₯, 𝑦)π‘˜+1 − 𝑔(π‘₯, 𝑦))).
2
(4.16)
The aforementioned is a solution for (4.10) designed for reconstructing an HR
image from only one LR image. In order to take advantage of the SI method, (4.10) is
extended to accommodate all LR images acquired by SI. In the case of N images,
̂𝑦) = π΄π‘Ÿπ‘”π‘€π‘–π‘› [∑π‘š×𝑛(πœ‡ ∑𝑁 |𝑔(π‘₯, 𝑦)𝑗 − 𝐾𝑗 × π‘“(π‘₯, 𝑦)| + |𝐷𝑖 𝑓(π‘₯, 𝑦) |)].
𝑓(π‘₯,
𝑖=1
𝑗=1
The augmented Lagrangian function of (4.23) become
𝐿𝐴 (π‘₯, 𝑀, 𝑧, πœ†) = ∑ (|𝑀𝑖 | − πœ†1 𝑇 (𝑀𝑖 − 𝐷𝑖 𝑓(π‘₯, 𝑦)))
𝑖
+
𝛽1
∑(|𝑀𝑖 − 𝐷𝑖 𝑓(π‘₯, 𝑦)|2 )
2
𝑖
𝑁
+ πœ‡( ∑|𝑧𝑗 | − πœ†π‘— 𝑇 (𝑧𝑗 − (𝐾𝑗 × π‘“(π‘₯, 𝑦) − 𝑔𝑗 (π‘₯, 𝑦))
𝑗=1
+
𝛽𝑗
2
|𝑧𝑗 − (𝐾𝑗 × π‘“(π‘₯, 𝑦) − 𝑔𝑗 (π‘₯, 𝑦))| ).
2
(4.17)
The iteration scheme is
1. Obtaining $w^{k+1}$ and $z_j^{k+1}$ as

$$w^{k+1} = \max\left\{ \left| D_i f(x, y)^k + \frac{\lambda_0^k}{\beta_0} \right| - \frac{1}{\beta_0},\ 0 \right\} \operatorname{sgn}\left( D_i f(x, y)^k + \frac{\lambda_0^k}{\beta_0} \right), \qquad (4.18)$$

$$z_j^{k+1} = \max\left\{ \left| K_j \times f(x, y)^k - g(x, y)_j + \frac{\lambda_j^k}{\beta_j} \right| - \frac{\mu}{\beta_j},\ 0 \right\} \operatorname{sgn}\left( K_j \times f(x, y)^k - g(x, y)_j + \frac{\lambda_j^k}{\beta_j} \right). \qquad (4.19)$$
2. Computing $f(x, y)^{k+1}$ via solving the normal equations

$$\left( D^T D + \sum_{j=1}^{N} \frac{\beta_j}{\beta_0} K_j^T K_j \right) f(x, y) = D^T \left( w^{k+1} - \frac{\lambda_0^k}{\beta_0} \right) + \sum_{j=1}^{N} \left( K_j^T \left( \frac{\beta_j z_j^{k+1} - \lambda_j^k}{\beta_0} \right) + \frac{\beta_j}{\beta_0} K_j^T g(x, y)_j \right). \qquad (4.20)$$
3. Updating πœ†
πœ†π‘˜+1
= πœ†π‘˜0 − 𝛽0 (𝑀 π‘˜+1 − 𝐷𝑖 𝑓(π‘₯, 𝑦) π‘˜+1 ),
0
(4.21)
πœ†π‘—π‘˜+1 = πœ†π‘—π‘˜ − 𝛽𝑗 (𝑧𝑗 π‘˜+1 − (𝐾𝑗 × π‘“(π‘₯, 𝑦)π‘˜+1 − 𝑔(π‘₯, 𝑦)𝑗 )).
(4.22)
4.4.2.2. Iteration Stopping Criterion
A stopping criterion needs to be defined for the iteration method. This algorithm
uses the same stopping criterion and the threshold suggested in [5] and [16]. The
iteration stops if the following criterion is met:
̂𝑦)π‘˜+1 − 𝑓(π‘₯,
̂𝑦)π‘˜ |
|𝑓(π‘₯,
≤ 𝛾,
̂𝑦)π‘˜+1 |
|𝑓(π‘₯,
where $|\cdot|$ is the Frobenius norm, and $\gamma$ is the threshold. The value of $\gamma$ controls how fast
the iteration process converges. The smaller $\gamma$ is, the longer the convergence takes, but
the better the result. In this algorithm, $\gamma$ is set to $10^{-3}$.
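The stopping test can be sketched directly from the relative-change criterion (a NumPy sketch; the helper name `converged` is an assumption):

```python
import numpy as np

def converged(f_new, f_old, gamma=1e-3):
    """Relative change between iterates, in the Frobenius norm, vs. gamma."""
    return np.linalg.norm(f_new - f_old) / np.linalg.norm(f_new) <= gamma

f_old = np.ones((4, 4))
f_new = f_old + 1e-5   # tiny update: relative change ~1e-5, below gamma = 1e-3
```

Normalizing by the current iterate makes the test independent of the overall intensity scale of the image, so the same $\gamma$ works whether the maximum intensity is 1 or 1000.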
Keep in mind that this method was designed originally to solve for real images.
However, SI actually produces complex images in the spatial domain because of the
shifting in the frequency domain. To be able to perform this method on SI images, the
images are approximated by taking the absolute value.
4.5. Peak Signal to Standard Error of the Estimate Ratio (PSSEER)
An objective image quality metric can be used to dynamically monitor and adjust
image quality [17], [18]. Here, the Peak Signal to Standard Error of the Estimate Ratio,
PSSEER, is a useful metric for comparing the restored images with the original image.
The quantitative measure, PSSEER, takes into account offset and scaling caused by the
approximation during the reconstruction process. This metric is simple to calculate, has
clear physical meaning, and is mathematically convenient in the context of optimization.
Equation (4.23) defines PSSEER as

$$PSSEER = 20 \log_{10} \frac{\max\left( \overline{f(x, y)} \right)}{\sqrt{\left| \overline{f(x, y)} - \hat{f}(x, y) \right|^2}}, \qquad (4.23)$$
where $\overline{f(x, y)}$ is calculated by linear regression with the pixel intensity of the original
image f(x, y) as the independent variable and the pixel intensity of the reconstructed
image $\hat{f}(x, y)$ as the dependent variable. The value of $\max(\overline{f(x, y)})$ is the linear
regression value at the maximum value of the original image f(x, y).
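A NumPy sketch of PSSEER as described above; it assumes the standard error of the estimate is the root-mean-square regression residual, which is an interpretation of (4.23) rather than the thesis code, and the function name `psseer` is illustrative:

```python
import numpy as np

def psseer(original, reconstructed):
    """PSSEER of (4.23): peak of the regression line over the RMS residual, in dB."""
    f = original.ravel()
    fh = reconstructed.ravel()
    slope, intercept = np.polyfit(f, fh, 1)    # regression of reconstructed on original
    f_bar = slope * f + intercept              # regression-line values f_bar(x, y)
    peak = slope * f.max() + intercept         # line value at max of the original image
    se = np.sqrt(np.mean((f_bar - fh) ** 2))   # standard error of the estimate
    return 20.0 * np.log10(peak / se)

rng = np.random.default_rng(3)
orig = rng.random((64, 64)) * 1000
recon = 0.8 * orig + 5 + rng.normal(0, 10, orig.shape)  # scaled, offset, noisy copy
value = psseer(orig, recon)
```

Because the metric compares against the regression line rather than against f(x, y) directly, a reconstruction that is merely scaled and offset (as in Figures 4.8-4.11) is not penalized for that scaling.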
4.6. Comparing Methods in the Absence of Noise
The methods first are compared for reconstructing images that are blurred by a
PSF. A comparison between the pixel intensities of the blurred image g(x,y) and the
original image f(x,y) is shown in Figure 4.7. Figure 4.7 shows that the blurring effect
causes changes in pixel intensity level and therefore causes the scattering that represents
the degradation of image quality. The straight line is obtained by linear regression. The
PSSEER of the blurry image is 26 dB.
Comparisons between pixel intensities of the images reconstructed by Wiener-like deconvolution using three LR images and nine LR images and the original image
are shown in Figure 4.8 and Figure 4.9, respectively. The comparisons indicate that the
slopes of the linear regression lines are smaller than the one in Figure 4.7, indicating that
scaling is introduced during the SR process. Table 4.1 shows that the PSSEER values of
the reconstructed images are higher than for the blurry image in Figure 4.7. The
reconstructions using three and nine LR images in Figure 4.8 and Figure 4.9 appear to
be close.
The comparison between pixel intensities of the L1/TV reconstruction using
three and nine LR images and the original image are shown in Figure 4.10 and Figure
4.11, respectively. Table 4.1 shows comparisons of the PSSEER values for Wiener-like
deconvolution and L1/TV optimization. In Table 4.1, the results show that L1/TV
optimization is able to increase PSSEER values and, quantitatively, enhance the
goodness of the blurry image. Within each method, using three or nine LR images gives
similar PSSEER values. The PSSEER of L1/TV optimization, however, is more than two
dB greater than the best performance of Wiener-like deconvolution.
Figure 4.7. Comparison of the pixel intensities of the original and
the blurred images. The x-axis is the intensity of the original image;
the y-axis is the intensity of the blurred image. The straight line is
obtained by linear regression.
Figure 4.8. Comparison of the pixel intensities of the Wiener-like reconstruction
using three LR images and the original image. The x-axis is the intensity of the
original image; the y-axis is the intensity of the reconstructed image. The
straight line is obtained by linear regression. Scaling caused by reconstruction is
represented in a smaller slope of the straight line compared with that of Figure 4.7.
Figure 4.9. Comparison of the pixel intensities of the Wiener-like reconstruction
using nine LR images and the original image. The x-axis is the intensity of the
original image; the y-axis is the intensity of the reconstructed image. The
straight line is obtained by linear regression. Scaling caused by reconstruction is
represented in a smaller slope of the straight line compared with that of Figure
4.7.
Figure 4.10. Comparison of the pixel intensities of the L1/TV reconstruction using
three LR images and the original image. The x-axis is the intensity of the original
image; the y-axis is the intensity of the reconstructed image. The straight line is
obtained by linear regression. Scaling caused by reconstruction is represented in a
smaller slope of the straight line compared with that of Figure 4.7.
Figure 4.11. Comparison of the pixel intensities of the L1/TV reconstruction using
nine LR images and the original image. The x-axis is the intensity of the original
image; the y-axis is the intensity of the reconstructed image. The straight line is
obtained by linear regression. Scaling caused by reconstruction is represented in a
smaller slope of the straight line compared with that of Figure 4.7.
Table 4.1. A comparison of the PSSEER of the reconstructed images from a blurry
image (PSSEER = 26 dB) in the absence of noise. The L1/TV reconstruction using three
SI LR images is better than Wiener-like reconstruction using nine SI LR images.
PSSEER (dB)

Method of Reconstruction   Three SI LR images   Nine SI LR images
Wiener-like                27.2                 27.1
L1/TV                      29                   29.4
Figure 4.12 and Figure 4.13 show the reconstructed images of Wiener-like
deconvolution using three LR images and nine LR images, respectively. Figure 4.14 and
Figure 4.15 show the reconstructed images of L1/TV optimization using three LR
images and nine LR images, respectively. These results show that using more LR
images produced better performance. Also, they show that L1/TV optimization achieved
a better reconstruction using the three SI components than Wiener-like deconvolution
using nine SI components. Table 4.2 shows the processing times for the reconstruction
of the HR images. The L1/TV method needs more time to reconstruct HR images.
Figure 4.12. Wiener-like deconvolution reconstructed image using
three LR images. The image has better resolution than the blurry
image in Figure 4.3.
Figure 4.13. Wiener-like deconvolution reconstructed image using nine LR
images. The image has better resolution than the blurry image in Figure 4.3 and
the reconstructed image using three components in Figure 4.12. Ringing
artifacts are visible near edges and borders.
Figure 4.14. Reconstruction obtained by L1/TV optimization using
three LR images. Details are recovered (feathers) better than by the
Wiener-like method.
Figure 4.15. Reconstruction obtained by L1/TV optimization using
nine LR images. Details (feathers) are recovered better than by the
Wiener-like method and by L1/TV optimization using three LR images.
Table 4.2. Processing time to reconstruct images in the absence of noise. The Wiener-like method is much faster than the L1/TV method.
Processing Time (s)

Method of Reconstruction   Three LR images   Nine LR images
Wiener-like                0.15              0.22
L1/TV                      17                34

4.7. Comparing Methods in the Presence of Gaussian Noise
Gaussian noise with zero mean and a standard deviation of 20 is added to the
blurry image, resulting in a noisy and blurry image with a PSSEER of 25.2 dB. Figure
4.16 shows the blurry and noisy image. Figure 4.17 shows pixel intensities of the
contaminated blurry image compared with those for the original image. Gaussian noise
with a standard deviation of 20 does not significantly increase scattering compared with
that in Figure 4.7. But, a comparison of the images in Figure 4.3 and Figure 4.16 shows
how the blurred image is affected by the Gaussian noise.
Comparisons between pixel intensities of the image reconstructed by Wiener-like
deconvolution using three LR images and nine LR images and the original image are
shown in Figure 4.18 and Figure 4.19, respectively. Comparisons between pixel
intensities of the image reconstructed by L1/TV optimization using three LR images and
nine LR images and the original image are shown in Figure 4.20 and Figure 4.21,
respectively. Table 4.3 shows comparisons of the PSSEER achieved by both methods
for three and nine LR images. Table 4.3 shows that the reconstructions by Wiener-like
deconvolution and L1/TV optimization have increased PSSEER values compared with
that of Figure 4.17. Therefore, reconstruction using Wiener-like deconvolution and
L1/TV optimization, from a quantitative perspective, successfully enhanced the
goodness of the contaminated blurry image.
Figure 4.16. The blurred image contaminated with Gaussian noise (zero
mean and a standard deviation of 20). The quality of the image is worse
than that of the blurry image in Figure 4.3.
Figure 4.17. Comparison of the pixel intensities of the blurred image
contaminated with Gaussian noise and the original image. The x-axis is the
intensity of the original image; the y-axis is the intensity of the blurred
and noisy image. The straight line is obtained by linear regression.
Figure 4.18. Comparison of the pixel intensities of the Wiener-like reconstruction using
three LR images in the presence of Gaussian noise and the original image. The x-axis is the
intensity of the original image; the y-axis is the intensity of the reconstructed image.
The straight line is obtained by linear regression. Scaling caused by reconstruction is
represented in a smaller slope of the straight line compared with that of Figure 4.17.
Figure 4.19. Comparison of the pixel intensities of the Wiener-like reconstruction using
nine LR images in the presence of Gaussian noise and the original image. The x-axis is
the intensity of the original image; the y-axis is the intensity of the reconstructed
image. The straight line is obtained by linear regression. Scaling caused by
reconstruction is represented in a smaller slope compared with that of Figure 4.17.
Figure 4.20. Comparison of the pixel intensities of the L1/TV reconstruction using three
LR images in the presence of Gaussian noise and the original image. The x-axis is the
intensity of the original image; the y-axis is the intensity of the reconstructed image.
The straight line is obtained by linear regression. Scaling caused by reconstruction is
represented in a smaller slope of the straight line compared with that of Figure 4.17.
Figure 4.21. Comparison of the pixel intensities of the L1/TV reconstruction using nine
LR images in the presence of Gaussian noise and the original image. The x-axis is the
intensity of the original image; the y-axis is the intensity of the reconstructed image.
The straight line is obtained by linear regression. Scaling caused by reconstruction is
represented in a smaller slope of the straight line compared with that of Figure 4.17.
Table 4.3. A comparison of the PSSEER of the reconstructed images from a blurry
image in the presence of Gaussian noise with zero mean and standard deviation of 20
(PSSEER = 25.2 dB). The reconstruction of L1/TV using three SI LR images is better
than Wiener-like reconstruction using nine SI LR images.
PSSEER (dB)

Method of Reconstruction   Three SI LR images   Nine SI LR images
Wiener-like                27.15                27
L1/TV                      27.7                 28.6
Figure 4.22 and Figure 4.23 show the images reconstructed by Wiener-like
deconvolution using three LR images and nine LR images, respectively. Figure 4.24 and
Figure 4.25 show the images reconstructed by L1/TV optimization using three LR
images and nine LR images, respectively. These results show that the resolution
achieved by L1/TV is better. Table 4.4 shows the processing time in seconds of the
reconstruction of the HR images. The reconstruction using L1/TV optimization needs
more time compared with the case of no noise added to the blurry image. On the other
hand, there was no change in the processing time for the Wiener-like method.
Figure 4.22. Wiener-like deconvolution reconstructed image using three LR images
in the presence of Gaussian noise (zero mean and standard deviation 20). The
image has better resolution than the contaminated blurry image in Figure 4.16.
Figure 4.23. Wiener-like deconvolution reconstructed image using nine LR images
in the presence of Gaussian noise (zero mean and standard deviation 20). The image
has better resolution than the blurry and noisy image in Figure 4.16 and the
reconstructed image using three components in Figure 4.22. Ringing artifacts are
visible near edges and borders.
Figure 4.24. Reconstruction obtained by L1/TV optimization using three LR images in the
presence of Gaussian noise (zero mean and 20 standard deviation). Details are recovered
(feathers) better than by the Wiener-like method. The image has better resolution than the
contaminated blurry image in Figure 4.16. The smooth areas of the image contain some
artifacts, but the resolution is better than for Wiener-like deconvolution.
Figure 4.25. Reconstruction obtained by L1/TV optimization using nine LR images in the
presence of Gaussian noise (zero mean and standard deviation of 20). Details are recovered
(feathers) better than by the Wiener-like method. The image has better resolution than the
contaminated and blurry image in Figure 4.16. The artifacts in smooth areas seen in Figure 4.24 are
reduced in the case of nine LR images.
Table 4.4. Processing time to reconstruct images in the presence of Gaussian noise (zero
mean and standard deviation of 20). The presence of noise increases the time for
convergence for the L1/TV optimization method.
Processing Time (s)

Method of Reconstruction   Three LR images   Nine LR images
Wiener-like                0.15              0.22
L1/TV                      20                40

4.8. Comparing Methods in the Presence of Poisson Noise
The blurred image is contaminated with Poisson noise. Since Poisson noise is
pixel dependent, in this trial, the maximum intensity of the original image is set to 800.
This value makes the approximate average pixel intensity close to 400. Using the
MATLAB function for Poisson noise, pixels with an intensity of 400 are affected by
Poisson noise with a standard deviation of 20. After introducing the noise, the blurry and
noisy image maximum intensity is reset to 1000. Resetting the maximum intensity level
in this way generates Poisson noise with a magnitude similar to that of the Gaussian noise used
previously.
The contaminated blurry image has a PSSEER of 25 dB. Figure 4.26 shows the
contaminated blurry image. Figure 4.27 shows pixel intensities of the contaminated
blurry image versus those of the original image. The comparisons of pixel intensities of
the image reconstructed by Wiener-like deconvolution and the original image using
three LR images and nine LR images are shown in Figure 4.28 and Figure 4.29,
respectively. The comparisons of pixel intensities of the image reconstructed by L1/TV
optimization using three LR images and nine LR images and the original image are
shown in Figure 4.30 and Figure 4.31, respectively. Table 4.5 compares the PSSEER
values achieved by both methods using three and nine LR images; the values for the two
methods are close.
Figure 4.26. The blurred image contaminated with Poisson noise. The quality of the
image is worse than that of the blurry image in Figure 4.3.
Figure 4.27. Comparison of the pixel intensities of the blurred contaminated
with Poisson noise and the original images. The x-axis is the intensity of the
original image; the y-axis is the intensity of the blurred and noisy image. The
straight line is obtained by linear regression.
Figure 4.28. Comparison of the pixel intensities of the Wiener-like reconstruction using
three LR images in the presence of Poisson noise and the original image. The x-axis is the
intensity of the original image; the y-axis is the intensity of the blurred and noisy image.
The straight line is obtained by linear regression. Scaling caused by reconstruction is
represented in a smaller slope of the straight line compared with that of Figure 4.27.
Figure 4.29. Comparison of the pixel intensities of the Wiener-like reconstruction using
nine LR images in the presence of Poisson noise and the original image. The x-axis is the
intensity of the original image; the y-axis is the intensity of the blurred and noisy image.
The straight line is obtained by linear regression. Scaling caused by reconstruction is
represented in a smaller slope of the straight line compared with that of Figure 4.27.
Figure 4.30. Comparison of the pixel intensities of the L1/TV reconstruction using three
LR images in the presence of Poisson noise and the original image. The x-axis is the
intensity of the original image; the y-axis is the intensity of the blurred and noisy image.
The straight line is obtained by linear regression. Scaling caused by reconstruction is
represented in a smaller slope of the straight line compared with that of Figure 4.27.
Figure 4.31. Comparison of the pixel intensities of the L1/TV reconstruction using nine LR
images in the presence of Poisson noise and the original image. The x-axis is the intensity
of the original image; the y-axis is the intensity of the blurred and noisy image. The
straight line is obtained by linear regression. Scaling caused by reconstruction is
represented in a smaller slope of the straight line compared with that of Figure 4.27.
Table 4.5. A comparison of the PSSEER of the reconstructed images from a blurry
image contaminated with Poisson noise (PSSEER = 25 dB). The reconstruction by the
L1/TV method using three SI LR images is better than Wiener-like reconstruction using
nine SI LR images.
                           PSSEER (dB)
Method of Reconstruction   Three SI LR images   Nine SI LR images
Wiener-like                27                   26.8
L1/TV                      27.1                 28
Figure 4.32 and Figure 4.33 show the reconstructed images by Wiener-like
deconvolution using three LR images and nine LR images, respectively. Figure 4.34 and
Figure 4.35 show the reconstructed images by L1/TV optimization using three LR
images and nine LR images, respectively. These results show that the resolution
achieved by L1/TV is better. Table 4.6 shows the processing time in seconds for the
reconstruction of the HR images. The L1/TV method needed more time than in the
no-noise and Gaussian-noise cases.
Figure 4.32. Wiener-like deconvolution reconstruction using three LR images in the
presence of Poisson noise. The image has better resolution than the contaminated
blurry image in Figure 4.26.
Figure 4.33. Wiener-like deconvolution reconstruction using nine LR images in the
presence of Poisson noise. The image has better resolution than the contaminated
blurry image in Figure 4.26 and the reconstructed image using three components in
Figure 4.32. Ringing artifacts are visible near edges and borders.
Figure 4.34. Reconstruction obtained by L1/TV optimization using three LR images
in the presence of Poisson noise. Details are recovered (feathers) better than by the
Wiener-like method. The image has better resolution than the contaminated blurry
image in Figure 4.26. The smooth area of the image contains some artifact, but the
resolution is better than for Wiener-like deconvolution.
Figure 4.35. Reconstruction obtained by L1/TV optimization using nine LR
images in the presence of Poisson noise. Details are recovered (feathers) better
than by the Wiener-like method. The image has better resolution than the
contaminated blurry image in Figure 4.26. The artifact in smooth areas that
appears in Figure 4.34 is reduced in the case of nine LR images.
Table 4.6. Processing time to reconstruct images in the presence of Poisson noise. The
presence of Poisson noise affects the speed of convergence of the L1/TV optimization
method.
                           Processing time (s)
Method of Reconstruction   Three LR images   Nine LR images
Wiener-like                0.15              0.24
L1/TV                      22                44
Chapter 5
Discussion
In this study, using the same set of LR images obtained by the structured
illumination technique, L1/TV optimization performs better than Wiener-like
deconvolution both with and without noise, quantitatively and subjectively. The images
reconstructed by L1/TV optimization have better resolution than those obtained by
Wiener-like deconvolution.
Wiener-like deconvolution failed to overcome the phase residue that caused the
illumination pattern to be retained in the reconstructed images. It also caused ringing
artifacts that increased with the number of LR images used for reconstruction.
Moreover, the value of ε plays a significant role in the outcome of Wiener-like
deconvolution. In this study, after many trials, ε = 0.09 proved to be the optimum value,
producing a reconstructed image with fewer artifacts and better resolution. Increasing the
value of ε makes the reconstructed image more blurry; decreasing ε makes the
reconstructed image noisier, with more artifacts.
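The effect of ε can be illustrated with a minimal one-dimensional sketch of the same form of regularized inverse filter, written here in NumPy; the Gaussian blur and impulse signal are illustrative assumptions, not the images used in this study:

```python
import numpy as np

def wiener_like(blurred, H, eps):
    """Regularized inverse filter: conj(H) * Y / (|H|^2 + eps)."""
    Y = np.fft.fft(blurred)
    return np.real(np.fft.ifft(np.conj(H) * Y / (np.abs(H) ** 2 + eps)))

n = 256
f = np.fft.fftfreq(n)
H = np.exp(-2 * (np.pi * 3 * f) ** 2)        # illustrative Gaussian OTF
x = np.zeros(n); x[100] = 1.0                 # impulse "image"
y = np.real(np.fft.ifft(H * np.fft.fft(x)))   # blurred signal

sharp = wiener_like(y, H, 0.09)   # the value used in this study
blurry = wiener_like(y, H, 0.9)   # larger eps suppresses more high frequencies

# The restored impulse is lower and wider for the larger eps.
print(sharp.max() > blurry.max())  # True
```

With the larger ε the restored impulse is lower and wider, which is the blurring effect described above; with a very small ε, any noise in the blurred signal would instead be amplified.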
On the other hand, although L1/TV optimization was successful in
reconstructing an HR image in the presence of Gaussian noise with zero mean and a
standard deviation less than 100, it failed to converge in the case of Gaussian noise with
zero mean and a standard deviation larger than 100. Wiener-like deconvolution is able to
reconstruct an HR image with Gaussian noise with zero mean and a standard deviation larger
than 100. Since L1/TV optimization has many parameters that control its performance,
and considering the approximation suggested (using the absolute value for the complex
images), the reasons for failure need to be examined. Nonetheless, the low level of noise
introduced in this study does mimic real SIM applications, and as far as these
applications are concerned, L1/TV optimization is a successful alternative.
This study shows that using illumination patterns for SR produces offset and
scaling during the process of acquisition or reconstruction. This offset and scaling make
the use of signal to noise ratio, SNR, and peak signal to noise ratio, PSNR,
unsatisfactory as quantitative measures of performance. The peak signal to standard
error of the estimate ratio (PSSEER), suggested in this study, overcomes this offset
and scaling and provides a measure that is consistent with visual inspection of the
reconstructed images. Although the reconstruction results of L1/TV and Wiener-like
deconvolution are quantitatively close, they differ significantly in resolution. This
difference is especially visible in the case of no noise: the PSSEER difference between
L1/TV optimization and Wiener-like deconvolution is only about 1.5 dB, yet L1/TV
optimization resolved more details.
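A compact NumPy sketch of the PSSEER computation (mirroring the regression-based psee function in Appendix A; the test images here are synthetic and hypothetical):

```python
import numpy as np

def psseer(sig, ref):
    """Peak Signal to Standard Error of the Estimate Ratio, in dB.

    A linear fit sig ~ m*ref + b absorbs the offset and scaling that SR
    reconstruction introduces; the residual scatter around that fit is
    the standard error of the estimate.
    """
    r = ref.ravel()
    s = sig.ravel()
    m, b = np.polyfit(r, s, 1)                       # slope and intercept
    see = np.sqrt(np.mean((s - (m * r + b)) ** 2))   # standard error of the estimate
    peak = (m * r + b).max()                          # peak of the fitted line
    return 20 * np.log10(peak / see)

# An image that is only offset and scaled is penalized solely for its noise.
rng = np.random.default_rng(1)
ref = rng.uniform(0, 1000, (64, 64))
noisy_scaled = 0.5 * ref + 100 + rng.normal(0, 5, ref.shape)
print(psseer(noisy_scaled, ref))  # about 41-42 dB for these settings
```

Because the linear fit absorbs the offset (b) and scaling (m), the score is driven only by the residual scatter, which is what makes PSSEER insensitive to the offset and scaling discussed above.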
As expected, processing time is in favor of Wiener-like deconvolution because
of the iterative behavior of L1/TV optimization. The greater the noise added to the
blurry image, the slower L1/TV optimization performs.
Chapter 6
Summary, Conclusions, and Recommendations
6.1.
Summary
In this thesis, a modification of L1/TV optimization suggested in [16] is
developed to enhance the image reconstruction from multiple LR images obtained by
structured illumination. A computer algorithm is developed to perform structured
illumination, image reconstruction by Wiener-like deconvolution, and image
reconstruction by L1/TV optimization. The structured illumination principles, including
the process of separation and shifting, are presented and explained for one- and two-dimensional cases. The results of the reconstructed images using Wiener-like
deconvolution and the L1/TV method are compared. The study investigated the
reconstruction in cases of no noise, Gaussian noise, and Poisson noise.
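The separation step summarized above can be sketched as a per-pixel linear system; the NumPy example below uses the same three pattern phases as the code in Appendix A, with hypothetical component values c:

```python
import numpy as np

# Each raw SI image mixes three components weighted by the pattern phase
# phi_k; three phase shifts give a 3x3 linear system at each pixel.
phis = np.array([0, 2 * np.pi / 3, 4 * np.pi / 3])
M = np.stack([np.ones(3), np.exp(1j * phis), np.exp(-1j * phis)], axis=1)

# Hypothetical per-pixel components (c0: widefield; c+/-: shifted bands).
c = np.array([5.0, 1.0 - 2.0j, 1.0 + 2.0j])
raw = M @ c                       # the three acquired values at this pixel

recovered = np.linalg.solve(M, raw)
print(np.allclose(recovered, c))  # True
```

In the actual algorithm this inverse (the E matrix in Appendix A) is applied to the three phase-shifted raw images to separate the widefield component from the two frequency-shifted bands before shifting and fusion.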
6.2.
Conclusions
Image reconstruction by L1/TV optimization is a viable alternative for SIM.
The ability to reduce the number of orientations of the illumination
pattern and resolve more details gives the L1/TV method an advantage over Wiener-like
deconvolution. The results also show that L1/TV optimization reduces blurring and
noise contamination at the expense of more processing time.
This work suggested PSSEER as a quantitative method to assess the performance
of the reconstruction. The peak signal to standard error of the estimate ratio, PSSEER,
overcomes offset and scaling introduced during the SR process. The fact that PSSEER
can overcome offset and scaling makes it a good alternative to the common quantitative
measures of SNR and PSNR.
6.3.
Recommendations
The advantages of parallel processing can be used to enhance the speed of
L1/TV optimization. Many programming languages, including MATLAB, can process
the Fast Fourier Transform (FFT) and other functions using multiple processors to
reduce the time needed to reconstruct the HR image.
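As a sketch of this recommendation (using SciPy's FFT interface rather than MATLAB, and assuming SciPy is available), the workers argument spreads a transform across cores without changing the result:

```python
import numpy as np
from scipy import fft

x = np.random.default_rng(0).standard_normal((512, 512))

# workers=-1 asks SciPy to use all available cores for the 2-D FFT.
X_serial = fft.fft2(x)
X_parallel = fft.fft2(x, workers=-1)

print(np.allclose(X_serial, X_parallel))  # True
```

Since the L1/TV iterations are dominated by repeated 2-D FFTs, parallelizing just this call would cut the per-iteration cost.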
The performance of L1/TV optimization depends on the values of the weights
assigned to the regularization term and fidelity term of the cost function [16]. The
weights suggested in this thesis were not optimum; therefore, a better choice of weights
can be investigated to help improve the restoration quality.
This work uses alternating minimization to solve the L1/TV cost function.
Other algorithms can be used to solve the cost function, and their performance compared.
Appendix A
MATLAB code
%% Main Code
clear all;
clc
close all;
%% loading original image
oi=(rgb2gray(imread('lenna.png')));
oi=double(oi);
oi=oi/max(oi(:))*1000;
max(oi(:))
[n,m]=size(oi);
OI=fftshift(fft2(oi));
x=-n/2:n/2-1;y=-m/2:m/2-1;
[x,y]=meshgrid(x,y);
dfx=1/n; dfy=1/m;
fx=(-n/2:n/2-1)*dfx;
fy=(-m/2:m/2-1)*dfy;
[fx,fy]=meshgrid(fx,fy);
% gaussian filter (PSF)
sigmax=3;
sigmay=3;
H=sigmax*sigmay*2*pi*exp(-2*((sigmax*pi*fx).^2+(sigmay*pi*fy).^2)); H=H/max(H(:)); % Gaussian transfer function
h=abs(ifftshift(ifft2(H)));
H=fftshift(fft2(h));
% Blurring process
OIF=OI.*H;oif=abs((ifft2(OIF)));
figure(4),imagesc(fftshift(oif)),plotbrowser('on')
oif=fftshift(oif)+20*randn(n,m); % In case of Poisson noise, randn is replaced with poissrnd
%% Structured illumination
dfx=1/n; dfy=1/m;
k=1;
for theta=(0:pi/3:2*pi/3)+pi/4
f0x=1/(sigmax*2)*cos(theta);
f0y=1/(sigmay*2)*sin(theta);
phi1=0;phi2=2*pi/3;phi3=4*pi/3;
% Phase shift
%% Illumination pattern
ip1(:,:,k)=1+cos(2*pi*(f0x.*x+f0y.*y)+phi1);
ip2(:,:,k)=1+cos(2*pi*(f0x.*x+f0y.*y)+phi2);
ip3(:,:,k)=1+cos(2*pi*(f0x.*x+f0y.*y)+phi3);
IB1(:,:,k)=abs(ifftshift(ifft2(fftshift(fft2(oi.*ip1(:,:,k))).*H)));
IB1(:,:,k)=IB1(:,:,k)/max(max(IB1(:,:,k)))*1000+20*randn(m,n);
IB1(:,:,k)=fftshift(fft2(IB1(:,:,k)));
IB2(:,:,k)=abs(ifftshift(ifft2(fftshift(fft2(oi.*ip2(:,:,k))).*H)));
IB2(:,:,k)=IB2(:,:,k)/max(max(IB2(:,:,k)))*1000+20*randn(m,n);
IB2(:,:,k)=fftshift(fft2(IB2(:,:,k)));
IB3(:,:,k)=abs(ifftshift(ifft2(fftshift(fft2(oi.*ip3(:,:,k))).*H)));
IB3(:,:,k)=IB3(:,:,k)/max(max(IB3(:,:,k)))*1000+20*randn(m,n);
IB3(:,:,k)=fftshift(fft2(IB3(:,:,k)));
% Extracting the SI components
E=([1 1 1;1 exp(1i*2*pi/3) exp(-1i*2*pi/3);1 exp(1i*4*pi/3) exp(-1i*4*pi/3)])^-1;
OH1(:,:,k)=E(1,1)*IB1(:,:,k)+E(1,2)*IB2(:,:,k)+E(1,3)*IB3(:,:,k);
OH2(:,:,k)=E(2,1)*IB1(:,:,k)+E(2,2)*IB2(:,:,k)+E(2,3)*IB3(:,:,k);
OH3(:,:,k)=E(3,1)*IB1(:,:,k)+E(3,2)*IB2(:,:,k)+E(3,3)*IB3(:,:,k);
%% zero padding process
% padding the images
OH1p(:,:,k)=padarray(OH1(:,:,k),[round(1/(dfx*sigmax*2)) round(1/(dfy*sigmay*2))]);
OH2p(:,:,k)=padarray(OH2(:,:,k),[round(1/(dfx*sigmax*2)) round(1/(dfy*sigmay*2))]);
OH3p(:,:,k)=padarray(OH3(:,:,k),[round(1/(dfx*sigmax*2)) round(1/(dfy*sigmay*2))]);
[K,L,Z]=size(OH1p);
% shifting (we use a function that shifts via the Fourier shift theorem)
OH1p(:,:,k)=FourierShift2D(OH1p(:,:,k),[0 0]);
OH2p(:,:,k)=FourierShift2D(OH2p(:,:,k),[-f0y/dfy -f0x/dfx]);
OH3p(:,:,k)=FourierShift2D(OH3p(:,:,k),[f0y/dfy f0x/dfx]);
% padding and shifting the filter
H1(:,:,k)=padarray(H,[round(1/(dfx*sigmax*2)) round(1/(dfy*sigmay*2))]);
H2(:,:,k)=FourierShift2D(H1(:,:,k),[-f0y/dfx -f0x/dfy])/2;
H3(:,:,k)=FourierShift2D(H1(:,:,k),[f0y/dfx f0x/dfy])/2;
k=k+1;
end
%% Wiener filter
[K,L]=size(OH1p(:,:,1));
Iw3=zeros(K,L);Iw9=zeros(K,L);
eps=.09;
QSQNR=10/10;
Ichu=zeros(K,L);
Hsum3=zeros(K,L);
Hsum9=zeros(K,L);
tic
for i=1:1
%wiener filter for three LR images
Hsum3=Hsum3+(abs(H1(:,:,i)).^2+abs(H2(:,:,i)).^2+abs(H3(:,:,i)).^2);
Iw3=Iw3+QSQNR*(conj(H1(:,:,i)).*OH1p(:,:,i)+conj(H2(:,:,i)).*OH2p(:,:,i)+conj(H3(:,:,i)).*OH3p(:,:,i));
end
Iw3=Iw3./(eps+Hsum3);
toc
tic
for i=1:3
%wiener filter for nine LR images
Hsum9=Hsum9+(abs(H1(:,:,i)).^2+abs(H2(:,:,i)).^2+abs(H3(:,:,i)).^2);
Iw9=Iw9+QSQNR*(conj(H1(:,:,i)).*OH1p(:,:,i)+conj(H2(:,:,i)).*OH2p(:,:,i)+conj(H3(:,:,i)).*OH3p(:,:,i));
end
Iw9=Iw9./(eps+Hsum9);
toc
oh1p=(ifft2(OH1p));
oh2p=(ifft2(OH2p));
oh3p=(ifft2(OH3p));
h1=(ifft2(H1));
h2=(ifft2(H2));
h3=(ifft2(H3));
%% optimization process
mu=80;
tic
[XX3] = optimiz3(oh1p(:,:,1),oh2p(:,:,1),oh3p(:,:,1),abs(h1(:,:,1)),abs(h2(:,:,1)),abs(h3(:,:,1)),mu);
toc
%
tic
[XX9] = optimiz9(oh1p,oh2p,oh3p,abs(h1),abs(h2),abs(h3),mu);
toc
figure(1) % Original image
colormap('gray')
imagesc(oi); axis image; axis off
set(gca,'position',[0,0,1,1],'xtick',[],'ytick',[]); plotbrowser('on');
figure(2) % Blurry image (contaminated in case of Gaussian or Poisson noise)
colormap('gray')
imagesc(oif); axis image; axis off
set(gca,'position',[0,0,1,1],'xtick',[],'ytick',[]); plotbrowser('on');
figure(3) % Reconstruction using Wiener-like deconvolution using 3 LR images
colormap('gray')
imagesc(abs(fftshift(ifft2(Iw3)))); axis image; axis off
set(gca,'position',[0,0,1,1],'xtick',[],'ytick',[]); plotbrowser('on');
figure(4) % Reconstruction using Wiener-like deconvolution using 9 LR images
colormap('gray')
imagesc(abs(fftshift(ifft2(Iw9)))); axis image; axis off
set(gca,'position',[0,0,1,1],'xtick',[],'ytick',[]); plotbrowser('on');
figure(5) % Reconstruction using L1/TV using 3 LR images
colormap('gray')
imagesc(XX3.sol); axis image; axis off
set(gca,'position',[0,0,1,1],'xtick',[],'ytick',[]); plotbrowser('on');
figure(6) % Reconstruction using L1/TV using 9 LR images
colormap('gray')
imagesc(XX9.sol); axis image; axis off
set(gca,'position',[0,0,1,1],'xtick',[],'ytick',[]); plotbrowser('on');
SNR=snr(oif,oi) % SNR for the blurry image
BSNR=psnr(oif,oi) % PSNR for the blurry image
PSEE=psee(oif,oi); % PSSEER for the blurry image
SNRw3=snr(abs(fftshift(ifft2(Iw3))),imresize(oi,[K,L])); % SNR, Wiener reconstruction, 3 LR images
PSNRw3=psnr(abs(fftshift(ifft2(Iw3))),imresize(oi,[K,L])); % PSNR, Wiener reconstruction, 3 LR images
PSEEw3=psee(abs(fftshift(ifft2(Iw3))),imresize(oi,[K,L])); % PSSEER, Wiener reconstruction, 3 LR images
SNRw9=snr(abs(fftshift(ifft2(Iw9))),imresize(oi,[K,L])); % SNR, Wiener reconstruction, 9 LR images
PSNRw9=psnr(abs(fftshift(ifft2(Iw9))),imresize(oi,[K,L])); % PSNR, Wiener reconstruction, 9 LR images
PSEEw9=psee(abs(fftshift(ifft2(Iw9))),imresize(oi,[K,L])); % PSSEER, Wiener reconstruction, 9 LR images
SNRl13=snr(XX3.sol,imresize(oi,[K,L])); % SNR, L1/TV reconstruction, 3 LR images
PSNRl13=psnr(XX3.sol,imresize(oi,[K,L])); % PSNR, L1/TV reconstruction, 3 LR images
PSEEl13=psee(XX3.sol,imresize(oi,[K,L])); % PSSEER, L1/TV reconstruction, 3 LR images
SNRl19=snr(XX9.sol,imresize(oi,[K,L])); % SNR, L1/TV reconstruction, 9 LR images
PSNRl19=psnr(XX9.sol,imresize(oi,[K,L])); % PSNR, L1/TV reconstruction, 9 LR images
PSEEl19=psee(XX9.sol,imresize(oi,[K,L])); % PSSEER, L1/TV reconstruction, 9 LR images
XXW3=abs(fftshift(ifft2(Iw3)));
XXW9=abs(fftshift(ifft2(Iw9)));
% Plotting the zoomed part of the image
OOIf=imresize(oif,[K,L]);
figure, colormap('gray'), imagesc(OOIf(200:340,200:340)), axis off, axis image, plotbrowser('on') % Blurry image
figure, colormap('gray'), imagesc(XXW3(200:340,200:340)), axis off, axis image, plotbrowser('on') % Wiener reconstruction using 3 LR images
figure, colormap('gray'), imagesc(XXW9(200:340,200:340)), axis off, axis image, plotbrowser('on') % Wiener reconstruction using 9 LR images
figure, colormap('gray'), imagesc(XX3.sol(200:340,200:340)), axis off, axis image, plotbrowser('on') % L1/TV reconstruction using 3 LR images
figure, colormap('gray'), imagesc(XX9.sol(200:340,200:340)), axis off, axis image, plotbrowser('on') % L1/TV reconstruction using 9 LR images
Function psee
function x = psee(sig, ref)
% x = psee(sig, ref)
% psee -- Compute the Peak Signal to Standard Error of the Estimate
%         Ratio (PSSEER) for images
%
% Usage:
%   x = psee(sig, ref)  -- 1st time, or
%   x = psee(sig)       -- afterwards
%
% Input:
%   sig   Modified image
%   ref   Reference image
%
% Output:
%   x     PSSEER value
persistent ref_save;
if nargin == 2; ref_save = ref; end;
[K,L]=size(ref);
R1=ref(1:K*L);
R2=sig(1:K*L);
[r,m,b] = regression(R1,R2);
peak=max(ref_save(:));
figure, plot(R1,R2,'.',1:peak,m*(1:peak)+b,'k'), xlabel('Pixel Intensity of the Original Image'), ylabel('Pixel Intensity of the Constructed Image'), axis([0 peak 0 peak]), plotbrowser('on')
see=sqrt(sum((R2-(m*R1+b)).^2)/(K*L)); % standard error of the estimate
linearmax=max(m*R1+b);
x = 20*log10(linearmax/see);
Function optimiz3
function out = optimiz3(oh1p,oh2p,oh3p,h1,h2,h3,mu,opts)
%
disp('Optimiz3 is running, please wait ...');
[m n] = size(oh1p);
if nargin < 8; opts = []; end
opts = getopts(opts);
C = getC(oh1p,h1,h2,h3);
[D,Dt] = defDDt;
% initialization
X = abs(oh1p);
LamD1 = zeros(m,n);
LamD2 = LamD1;
Lam1 = LamD1;
Lam2 = Lam1;
Lam3 = Lam1;
beta0 = opts.beta0;
beta1 = opts.beta1;
beta2 = opts.beta2;
beta3 = opts.beta3;
gamma = opts.gamma;
print = opts.print;
Denom = C.DtD + beta1/beta0*C.H1tH1 + beta2/beta0*C.H2tH2 + beta3/beta0*C.H3tH3;
% finite diff
[D1X,D2X] = D(X);
KXF1 = abs( ifft2((C.H1 .* fft2(X)) - (fft2(oh1p))));
KXF2 = abs( ifft2((C.H2 .* fft2(X)) - (fft2(oh2p))));
KXF3 = abs( ifft2((C.H3 .* fft2(X)) - (fft2(oh3p))));
KXF=KXF1+KXF2+KXF3;
[tv,fid,f] = fval(D1X,D2X,KXF,mu);
out.snr = [];
out.nrmdX = [];
out.relchg = [];
out.f = f;
out.tv = tv;
out.fid = fid;
%% Main loop
for ii = 1:opts.maxitr
V1 = D1X + LamD1/beta0;
V2 = D2X + LamD2/beta0;
V31 = KXF1 + Lam1/beta1;
V32 = KXF2 + Lam2/beta2;
V33 = KXF3 + Lam3/beta3;
V = V1.^2 + V2.^2;
V = sqrt(V);
V(V==0) = 1;
% ==================
%   Shrinkage Step
% ==================
V = max(V - 1/beta1, 0)./V;
W1 = V1.*V;
W2 = V2.*V;
Z1 = max(abs(V31) - mu/beta1, 0).*sign(V31);
Z2 = max(abs(V32) - mu/beta2, 0).*sign(V32);
Z3 = max(abs(V33) - mu/beta3, 0).*sign(V33);
% ==================
%   X-subproblem
% ==================
Xp = X;
Temp1 = (beta1*Z1 - conj(Lam1))/beta0; Temp2 = (beta2*Z2 - conj(Lam2))/beta0; Temp3 = (beta3*Z3 - conj(Lam3))/beta0;
Temp1 = abs(ifft2(C.H1t .* fft2(Temp1))); Temp2 = abs(ifft2(C.H2t .* fft2(Temp2))); Temp3 = abs(ifft2(C.H3t .* fft2(Temp3)));
Temp = Temp1+Temp2+Temp3;
X = Dt(W1 - LamD1/beta0, W2 - LamD2/beta0) + Temp + beta1/beta0*C.H1tX + beta2/beta0*C.H2tX + beta3/beta0*C.H3tX;
X = fft2(X)./Denom;
X = abs(ifft2(X));
%snrX = snr(X);
%out.snr = [out.snr; snrX];
relchg = norm(X - Xp,'fro')/norm(X,'fro');
out.relchg = [out.relchg; relchg];
if print
    fprintf('Iter: %d, relchg: %4.2e\n',ii,relchg);
end
% ====================
% Check stopping rule
% ====================
if relchg < opts.relchg
out.sol = X;
out.itr = ii;
[D1X,D2X] = D(X);
KXF1 = abs( ifft2((C.H1 .* fft2(X)) - (fft2(oh1p))));
KXF2 = abs( ifft2((C.H2 .* fft2(X)) - (fft2(oh2p))));
KXF3 = abs( ifft2((C.H3 .* fft2(X)) - (fft2(oh3p))));
KXF=KXF1+KXF2+KXF3;
[tv,fid,f] = fval(D1X,D2X,KXF,mu);
out.f = [out.f; f];
out.tv = [out.tv; tv];
out.fid = [out.fid; fid];
disp('Done!');
return
end
% finite diff.
[D1X,D2X] = D(X);
KXF1 = abs( ifft2((C.H1 .* fft2(X)) - (fft2(oh1p))));
KXF2 = abs( ifft2((C.H2 .* fft2(X)) - (fft2(oh2p))));
KXF3 = abs( ifft2((C.H3 .* fft2(X)) - (fft2(oh3p))));
KXF=KXF1+KXF2+KXF3;
[tv,fid,f] = fval(D1X,D2X,KXF,mu);
out.f = [out.f; f];
out.tv = [out.tv; tv];
out.fid = [out.fid; fid];
% ==================
%   Update Lam
% ==================
LamD1 = LamD1 - gamma*beta0*(W1 - D1X);
LamD2 = LamD2 - gamma*beta0*(W2 - D2X);
Lam1 = Lam1 - gamma*beta1*(Z1 - KXF1);
Lam2 = Lam2 - gamma*beta2*(Z2 - KXF2);
Lam3 = Lam3 - gamma*beta3*(Z3 - KXF3);
end
out.sol = X;
out.itr = ii;
out.exit = 'Exit Normally';
disp('Done!');
if ii == opts.maxitr
out.exit = 'Maximum iteration reached!';
end
%% ------------------SUBFUNCTION-----------------------------
function opts = getopts(opts)
if ~isfield(opts,'maxitr')
opts.maxitr = 500;
end
if ~isfield(opts,'beta0')
opts.beta0 =15;
end
if ~isfield(opts,'beta1')
opts.beta1 = 30;
end
if ~isfield(opts,'beta2')
opts.beta2 = 80;
end
if ~isfield(opts,'beta3')
opts.beta3 = 80;
end
if ~isfield(opts,'gamma')
opts.gamma = 1.680;
end
if ~isfield(opts,'relchg')
opts.relchg = 1.e-3;
end
if ~isfield(opts,'print')
opts.print = 0;
end
%% ------------------SUBFUNCTION-----------------------------
function C = getC(X,h1,h2,h3)
[m,n] = size(X);
C.DtD = abs(psf2otf([1,-1],[m,n])).^2 + abs(psf2otf([1;-1],[m,n])).^2;
C.H1 = psf2otf(h1); C.H2 = psf2otf(h2); C.H3 = psf2otf(h3);
C.H1t = conj(C.H1); C.H2t = conj(C.H2); C.H3t = conj(C.H3);
C.H1tH1 = abs(C.H1).^2; C.H2tH2 = abs(C.H2).^2; C.H3tH3 = abs(C.H3).^2;
C.H1tX = abs(ifft2(C.H1t .* fft2(X))); C.H2tX = abs(ifft2(C.H2t .* fft2(X))); C.H3tX = abs(ifft2(C.H3t .* fft2(X)));
%% ------------------SUBFUNCTION-----------------------------
function [tv,fid,f] = fval(D1X,D2X,KXF,mu)
tv = sum(sum(sqrt(D1X.^2 + D2X.^2)));
fid = sum(abs(KXF(:)));
f = tv + mu * fid;
function [D,Dt] = defDDt
D = @(U) ForwardD(U);
Dt = @(X,Y) Dive(X,Y);
function [Dux,Duy] = ForwardD(U)
Dux = [diff(U,1,2), U(:,1) - U(:,end)];
Duy = [diff(U,1,1); U(1,:) - U(end,:)];
function DtXY = Dive(X,Y)
DtXY = [X(:,end) - X(:, 1), -diff(X,1,2)];
DtXY = DtXY + [Y(end,:) - Y(1, :); -diff(Y,1,1)];
Function optimiz9
function out = optimiz9(oh1p,oh2p,oh3p,h1,h2,h3,mu,opts)
%
disp('Optimiz9 is running, please wait ...');
[m n k] = size(oh1p);
if nargin < 8; opts = []; end
opts = getopts(opts);
C = getC(oh1p(:,:,1),h1,h2,h3);
[D,Dt] = defDDt;
% initialization
X = abs(oh1p(:,:,1));
LamD1 = zeros(m,n);
LamD2 = LamD1;
Lam1 = LamD1;Lam2 = Lam1;Lam3 = Lam1;
Lam4 = LamD1;Lam5 = Lam1;Lam6 = Lam1;
Lam7 = LamD1;Lam8 = Lam1;Lam9 = Lam1;
beta0 = opts.beta0;
beta1 = opts.beta1;beta2 = opts.beta2;beta3 = opts.beta3;
beta4 = opts.beta4;beta5 = opts.beta5;beta6 = opts.beta6;
beta7 = opts.beta7;beta8 = opts.beta8;beta9 = opts.beta9;
gamma = opts.gamma;
print = opts.print;
Denom = C.DtD + beta1/beta0*C.H1tH1 + beta2/beta0*C.H2tH2 + beta3/beta0*C.H3tH3;
Denom = Denom + beta4/beta0*C.H4tH4 + beta5/beta0*C.H5tH5 + beta6/beta0*C.H6tH6;
Denom = Denom + beta7/beta0*C.H7tH7 + beta8/beta0*C.H8tH8 + beta9/beta0*C.H9tH9;
% finite diff
[D1X,D2X] = D(X);
KXF1 = abs( ifft2((C.H1 .* fft2(X)) - (fft2(oh1p(:,:,1)))));
KXF2 = abs( ifft2((C.H2 .* fft2(X)) - (fft2(oh2p(:,:,1)))));
KXF3 = abs( ifft2((C.H3 .* fft2(X)) - (fft2(oh3p(:,:,1)))));
KXF4 = abs( ifft2((C.H4 .* fft2(X)) - (fft2(oh1p(:,:,2)))));
KXF5 = abs( ifft2((C.H5 .* fft2(X)) - (fft2(oh2p(:,:,2)))));
KXF6 = abs( ifft2((C.H6 .* fft2(X)) - (fft2(oh3p(:,:,2)))));
KXF7 = abs( ifft2((C.H7 .* fft2(X)) - (fft2(oh1p(:,:,3)))));
KXF8 = abs( ifft2((C.H8 .* fft2(X)) - (fft2(oh2p(:,:,3)))));
KXF9 = abs( ifft2((C.H9 .* fft2(X)) - (fft2(oh3p(:,:,3)))));
KXF=KXF1+KXF2+KXF3+KXF4+KXF5+KXF6+KXF7+KXF8+KXF9;
[tv,fid,f] = fval(D1X,D2X,KXF,mu);
out.snr = [];
out.nrmdX = [];
out.relchg = [];
out.f = f;
out.tv = tv;
out.fid = fid;
%% Main loop
for ii = 1:opts.maxitr
V1 = D1X + LamD1/beta0;
V2 = D2X + LamD2/beta0;
V31 = KXF1 + Lam1/beta1; V32 = KXF2 + Lam2/beta2; V33 = KXF3 + Lam3/beta3;
V34 = KXF4 + Lam4/beta4; V35 = KXF5 + Lam5/beta5; V36 = KXF6 + Lam6/beta6;
V37 = KXF7 + Lam7/beta7; V38 = KXF8 + Lam8/beta8; V39 = KXF9 + Lam9/beta9;
V = V1.^2 + V2.^2;
V = sqrt(V);
V(V==0) = 1;
% ==================
%   Shrinkage Step
% ==================
V = max(V - 1/beta0, 0)./V;
W1 = V1.*V;
W2 = V2.*V;
Z1 = max(abs(V31) - mu/beta1, 0).*sign(V31); Z2 = max(abs(V32) - mu/beta2, 0).*sign(V32); Z3 = max(abs(V33) - mu/beta3, 0).*sign(V33);
Z4 = max(abs(V34) - mu/beta4, 0).*sign(V34); Z5 = max(abs(V35) - mu/beta5, 0).*sign(V35); Z6 = max(abs(V36) - mu/beta6, 0).*sign(V36);
Z7 = max(abs(V37) - mu/beta7, 0).*sign(V37); Z8 = max(abs(V38) - mu/beta8, 0).*sign(V38); Z9 = max(abs(V39) - mu/beta9, 0).*sign(V39);
% ==================
%   X-subproblem
% ==================
Xp = X;
Temp1 = (beta1*Z1 - conj(Lam1))/beta0; Temp2 = (beta2*Z2 - conj(Lam2))/beta0; Temp3 = (beta3*Z3 - conj(Lam3))/beta0;
Temp4 = (beta4*Z4 - conj(Lam4))/beta0; Temp5 = (beta5*Z5 - conj(Lam5))/beta0; Temp6 = (beta6*Z6 - conj(Lam6))/beta0;
Temp7 = (beta7*Z7 - conj(Lam7))/beta0; Temp8 = (beta8*Z8 - conj(Lam8))/beta0; Temp9 = (beta9*Z9 - conj(Lam9))/beta0;
Temp1 = abs(ifft2(C.H1t .* fft2(Temp1))); Temp2 = abs(ifft2(C.H2t .* fft2(Temp2))); Temp3 = abs(ifft2(C.H3t .* fft2(Temp3)));
Temp4 = abs(ifft2(C.H4t .* fft2(Temp4))); Temp5 = abs(ifft2(C.H5t .* fft2(Temp5))); Temp6 = abs(ifft2(C.H6t .* fft2(Temp6)));
Temp7 = abs(ifft2(C.H7t .* fft2(Temp7))); Temp8 = abs(ifft2(C.H8t .* fft2(Temp8))); Temp9 = abs(ifft2(C.H9t .* fft2(Temp9)));
Temp = Temp1+Temp2+Temp3+Temp4+Temp5+Temp6+Temp7+Temp8+Temp9;
X = Dt(W1 - LamD1/beta0, W2 - LamD2/beta0) + Temp + beta1/beta0*C.H1tX + beta2/beta0*C.H2tX + beta3/beta0*C.H3tX;
X = X + beta4/beta0*C.H4tX + beta5/beta0*C.H5tX + beta6/beta0*C.H6tX;
X = X + beta7/beta0*C.H7tX + beta8/beta0*C.H8tX + beta9/beta0*C.H9tX;
X = fft2(X)./Denom;
X = abs(ifft2(X));
%snrX = snr(X);
%out.snr = [out.snr; snrX];
relchg = norm(X - Xp,'fro')/norm(X,'fro');
out.relchg = [out.relchg; relchg];
if print
    fprintf('Iter: %d, relchg: %4.2e\n',ii,relchg);
end
% ====================
% Check stopping rule
% ====================
if relchg < opts.relchg
out.sol = X;
out.itr = ii;
[D1X,D2X] = D(X);
KXF1 = abs( ifft2((C.H1 .* fft2(X)) - (fft2(oh1p(:,:,1)))));
KXF2 = abs( ifft2((C.H2 .* fft2(X)) - (fft2(oh2p(:,:,1)))));
KXF3 = abs( ifft2((C.H3 .* fft2(X)) - (fft2(oh3p(:,:,1)))));
KXF4 = abs( ifft2((C.H4 .* fft2(X)) - (fft2(oh1p(:,:,2)))));
KXF5 = abs( ifft2((C.H5 .* fft2(X)) - (fft2(oh2p(:,:,2)))));
KXF6 = abs( ifft2((C.H6 .* fft2(X)) - (fft2(oh3p(:,:,2)))));
KXF7 = abs( ifft2((C.H7 .* fft2(X)) - (fft2(oh1p(:,:,3)))));
KXF8 = abs( ifft2((C.H8 .* fft2(X)) - (fft2(oh2p(:,:,3)))));
KXF9 = abs( ifft2((C.H9 .* fft2(X)) - (fft2(oh3p(:,:,3)))));
KXF=KXF1+KXF2+KXF3+KXF4+KXF5+KXF6+KXF7+KXF8+KXF9;
[tv,fid,f] = fval(D1X,D2X,KXF,mu);
out.f = [out.f; f];
out.tv = [out.tv; tv];
out.fid = [out.fid; fid];
disp('Done!');
return
end
% finite diff.
[D1X,D2X] = D(X);
KXF1 = abs( ifft2((C.H1 .* fft2(X)) - (fft2(oh1p(:,:,1)))));
KXF2 = abs( ifft2((C.H2 .* fft2(X)) - (fft2(oh2p(:,:,1)))));
KXF3 = abs( ifft2((C.H3 .* fft2(X)) - (fft2(oh3p(:,:,1)))));
KXF4 = abs( ifft2((C.H4 .* fft2(X)) - (fft2(oh1p(:,:,2)))));
KXF5 = abs( ifft2((C.H5 .* fft2(X)) - (fft2(oh2p(:,:,2)))));
KXF6 = abs( ifft2((C.H6 .* fft2(X)) - (fft2(oh3p(:,:,2)))));
KXF7 = abs( ifft2((C.H7 .* fft2(X)) - (fft2(oh1p(:,:,3)))));
KXF8 = abs( ifft2((C.H8 .* fft2(X)) - (fft2(oh2p(:,:,3)))));
KXF9 = abs( ifft2((C.H9 .* fft2(X)) - (fft2(oh3p(:,:,3)))));
KXF=KXF1+KXF2+KXF3+KXF4+KXF5+KXF6+KXF7+KXF8+KXF9;
[tv,fid,f] = fval(D1X,D2X,KXF,mu);
out.f = [out.f; f];
out.tv = [out.tv; tv];
out.fid = [out.fid; fid];
% ==================
%   Update Lam
% ==================
LamD1 = LamD1 - gamma*beta0*(W1 - D1X);
LamD2 = LamD2 - gamma*beta0*(W2 - D2X);
Lam1 = Lam1 - gamma*beta1*(Z1 - KXF1); Lam2 = Lam2 - gamma*beta2*(Z2 - KXF2); Lam3 = Lam3 - gamma*beta3*(Z3 - KXF3);
Lam4 = Lam4 - gamma*beta4*(Z4 - KXF4); Lam5 = Lam5 - gamma*beta5*(Z5 - KXF5); Lam6 = Lam6 - gamma*beta6*(Z6 - KXF6);
Lam7 = Lam7 - gamma*beta7*(Z7 - KXF7); Lam8 = Lam8 - gamma*beta8*(Z8 - KXF8); Lam9 = Lam9 - gamma*beta9*(Z9 - KXF9);
end
out.sol = X;
out.itr = ii;
out.exit = 'Exit Normally';
disp('Done!');
if ii == opts.maxitr
out.exit = 'Maximum iteration reached!';
end
%% ------------------SUBFUNCTION-----------------------------
function opts = getopts(opts)
if ~isfield(opts,'maxitr')
opts.maxitr = 500;
end
if ~isfield(opts,'beta0')
opts.beta0 =15;
end
if ~isfield(opts,'beta1')
opts.beta1 = 30;
end
if ~isfield(opts,'beta2')
opts.beta2 = 20;
end
if ~isfield(opts,'beta3')
opts.beta3 = 20;
end
if ~isfield(opts,'beta4')
opts.beta4 = 30;
end
if ~isfield(opts,'beta5')
opts.beta5 = 25;
end
if ~isfield(opts,'beta6')
opts.beta6 = 25;
end
if ~isfield(opts,'beta7')
opts.beta7 = 30;
end
if ~isfield(opts,'beta8')
opts.beta8 = 22;
end
if ~isfield(opts,'beta9')
opts.beta9 = 22;
end
if ~isfield(opts,'gamma')
opts.gamma = 1.680;
end
if ~isfield(opts,'relchg')
opts.relchg = 1.e-3;
end
if ~isfield(opts,'print')
opts.print = 0;
end
%% ------------------SUBFUNCTION-----------------------------
function C = getC(X,h1,h2,h3)
[m,n] = size(X);
C.DtD = abs(psf2otf([1,-1],[m,n])).^2 + abs(psf2otf([1;-1],[m,n])).^2;
C.H1 = psf2otf(h1(:,:,1)); C.H2 = psf2otf(h2(:,:,1)); C.H3 = psf2otf(h3(:,:,1));
C.H4 = psf2otf(h1(:,:,2)); C.H5 = psf2otf(h2(:,:,2)); C.H6 = psf2otf(h3(:,:,2));
C.H7 = psf2otf(h1(:,:,3)); C.H8 = psf2otf(h2(:,:,3)); C.H9 = psf2otf(h3(:,:,3));
C.H1t = conj(C.H1); C.H2t = conj(C.H2); C.H3t = conj(C.H3);
C.H4t = conj(C.H4); C.H5t = conj(C.H5); C.H6t = conj(C.H6);
C.H7t = conj(C.H7); C.H8t = conj(C.H8); C.H9t = conj(C.H9);
C.H1tH1 = abs(C.H1).^2; C.H2tH2 = abs(C.H2).^2; C.H3tH3 = abs(C.H3).^2;
C.H4tH4 = abs(C.H4).^2; C.H5tH5 = abs(C.H5).^2; C.H6tH6 = abs(C.H6).^2;
C.H7tH7 = abs(C.H7).^2; C.H8tH8 = abs(C.H8).^2; C.H9tH9 = abs(C.H9).^2;
C.H1tX = abs(ifft2(C.H1t .* fft2(X))); C.H2tX = abs(ifft2(C.H2t .* fft2(X))); C.H3tX = abs(ifft2(C.H3t .* fft2(X)));
C.H4tX = abs(ifft2(C.H4t .* fft2(X))); C.H5tX = abs(ifft2(C.H5t .* fft2(X))); C.H6tX = abs(ifft2(C.H6t .* fft2(X)));
C.H7tX = abs(ifft2(C.H7t .* fft2(X))); C.H8tX = abs(ifft2(C.H8t .* fft2(X))); C.H9tX = abs(ifft2(C.H9t .* fft2(X)));
%% ------------------SUBFUNCTION-----------------------------
function [tv,fid,f] = fval(D1X,D2X,KXF,mu)
tv = sum(sum(sqrt(D1X.^2 + D2X.^2)));
fid = sum(abs(KXF(:)));
f = tv + mu * fid;
function [D,Dt] = defDDt
D = @(U) ForwardD(U);
Dt = @(X,Y) Dive(X,Y);
function [Dux,Duy] = ForwardD(U)
Dux = [diff(U,1,2), U(:,1) - U(:,end)];
Duy = [diff(U,1,1); U(1,:) - U(end,:)];
function DtXY = Dive(X,Y)
DtXY = [X(:,end) - X(:, 1), -diff(X,1,2)];
DtXY = DtXY + [Y(end,:) - Y(1, :); -diff(Y,1,1)];
Function FourierShift2D
function y = FourierShift2D(x, delta)
%
% y = FourierShift(x, [delta_x delta_y])
%
% Shifts x by delta cyclically. Uses the fourier shift theorem.
%
% Real inputs should give real outputs.
%
% By Tim Hutt, 26/03/2009
% Small fix thanks to Brian Krause, 11/02/2010
% The size of the matrix.
[N, M] = size(x);
% FFT of our possibly padded input signal.
X = fft2(x);
% The mathsy bit. The floors take care of odd-length signals.
x_shift = exp(-i * 2 * pi * delta(1) * [0:floor(N/2)-1 floor(-N/2):-1]' / N);
y_shift = exp(-i * 2 * pi * delta(2) * [0:floor(M/2)-1 floor(-M/2):-1] / M);
% Force conjugate symmetry. Otherwise this frequency component has no
% corresponding negative frequency to cancel out its imaginary part.
if mod(N, 2) == 0
x_shift(N/2+1) = real(x_shift(N/2+1));
end
if mod(M, 2) == 0
y_shift(M/2+1) = real(y_shift(M/2+1));
end
Y = X .* (x_shift * y_shift);
% Invert the FFT.
y = ifft2(Y);
% There should be no imaginary component (for real input
% signals) but due to numerical effects some remnants remain.
if isreal(x)
y = real(y);
end
end
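FourierShift2D applies a cyclic, possibly subpixel shift in the frequency domain. A NumPy counterpart, useful for sanity-checking the logic outside MATLAB (a sketch, not part of the thesis code; np.fft.fftfreq supplies the frequency grid that the bracketed index vectors above build by hand), could be:

```python
import numpy as np

def fourier_shift_2d(x, delta):
    """Cyclically shift a 2-D array by (delta_rows, delta_cols),
    which may be fractional, via the Fourier shift theorem."""
    n, m = x.shape
    X = np.fft.fft2(x)
    # Per-axis phase ramps; fftfreq(n) returns k/n in FFT ordering.
    xs = np.exp(-2j * np.pi * delta[0] * np.fft.fftfreq(n))
    ys = np.exp(-2j * np.pi * delta[1] * np.fft.fftfreq(m))
    # Force conjugate symmetry at the Nyquist bin for even sizes,
    # mirroring the MATLAB fix, so real input stays (nearly) real.
    if n % 2 == 0:
        xs[n // 2] = xs[n // 2].real
    if m % 2 == 0:
        ys[m // 2] = ys[m // 2].real
    y = np.fft.ifft2(X * np.outer(xs, ys))
    return y.real if np.isrealobj(x) else y
```

For integer shifts the result should match a plain circular shift of the array (e.g. np.roll), which gives a simple correctness check.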
References
[1] S. C. Park, M. K. Park, M. G. Kang, “Super-resolution image reconstruction: a
technical overview,” IEEE Signal Processing Magazine, vol. 20, pp. 21-36, May
2003.
[2] M. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using
structured illumination microscopy,” Journal of Microscopy, vol. 198, pp. 82-87,
2000.
[3] E. Abbe, “Beiträge zur Theorie des Mikroskops und der mikroskopischen
Wahrnehmung,” Archiv für Mikroskopische Anatomie, vol. 9, pp. 413-468, 1873.
[4] M. Gustafsson, “Extended resolution fluorescence microscopy,” Current Opinion
in Structural Biology, vol. 9, pp. 627-634, 1999.
[5] S. A. Shroff, J. R. Fienup, D. R. Williams, “OTF compensation in structured
illumination superresolution images,” Proc. SPIE, vol. 7094, pp. 2-11, 2008.
[6] T. Stathaki, Image Fusion: Algorithms and Applications, Academic Press,
London, 2011.
[7] S. Farsiu, D. Robinson, M. Elad, P. Milanfar, “Fast and robust multi-frame
superresolution,” IEEE Trans. Image Processing, vol. 13, pp. 1327–1344, 2004.
[8] M. A. Lukas, “Asymptotic optimality of generalized cross-validation for choosing
the regularization parameter,” Numerische Mathematik, vol. 66, pp. 41-66, 1993.
[9] N. Nguyen, P. Milanfar, G. Golub, “Efficient generalized cross-validation with
applications to parametric image restoration and resolution enhancement,” IEEE
Trans. Image Processing, vol. 10, pp. 1299-1308, 2001.
[10] P. C. Hansen, D. P. O’Leary, “The use of the L-curve in the regularization of
ill-posed problems,” SIAM Journal on Scientific Computing, vol. 14, pp. 1487-1503,
1993.
[11] Y. Wang, J. Yang, W. Yin, Y. Zhang, “A new alternating minimization
algorithm for total variation image reconstruction,” SIAM Journal on Imaging
Sciences, vol. 1, pp. 248–272, 2008.
[12] J. Yang, Y. Zhang, W. Yin, “An efficient TVL1 algorithm for deblurring
multichannel images corrupted by impulsive noise,” SIAM Journal on Scientific
Computing, vol. 31, pp. 2842–2865, 2009.
[13] L. Rudin, S. Osher, E. Fatemi, “Nonlinear total variation based noise removal
algorithms,” Physica D, vol. 60, pp. 259-268, 1992.
[14] T. F. Chan, S. Esedoglu, “Aspects of total variation regularized L1 function
approximation,” UCLA, Tech. Report, 2004.
[15] L. P. Yaroslavsky, H. J. Caulfield, “Deconvolution of multiple images of the
same object,” Appl. Opt., vol. 33, pp. 2157-2162, 1994.
[16] J. Yang, W. Yin, Y. Zhang, Y. Wang, “A fast algorithm for edge-preserving
variational multichannel image restoration,” SIAM Journal on Imaging Sciences,
vol. 2, pp. 569–592, 2009.
[17] Z. Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli, “Image quality
assessment: from error visibility to structural similarity,” IEEE Trans. Image
Processing, vol. 13, pp. 600-612, April 2004.
[18] Z. Wang, A. C. Bovik, “A universal image quality index,” IEEE Signal
Processing Letters, vol. 9, pp. 81-84, 2002.
[19] S. Alliney, S. A. Ruzinsky, “An algorithm for the minimization of mixed l1 and
l2 norms with application to Bayesian estimation,” IEEE Trans. Signal Processing,
vol. 42, pp. 618-627, 1994.
[20] N. Hagen, L. Gao, T. S. Tkaczyk, “Quantitative sectioning and noise analysis for
structured illumination microscopy,” Optics Express, vol. 20, pp. 403-413, 2012.