Lightfield Imaging and its Application in Microscopy

Michael Bayer
Advisors: Grover Swartzlander and Jinwei Gu
Chester F. Carlson Center for Imaging Science
Rochester Institute of Technology
Rochester, New York 14623
I. ABSTRACT
In conventional photography the focus of the camera must
be adjusted to ensure each captured image is in focus. This is
a result of all the light rays that reach an individual pixel
being summed. From integral photography the concept of
lightfield imaging was developed. With lightfield imaging,
the light ray field is sampled based on angle of incidence.
This system provides the user with the ability to refocus
images after the instant of exposure and to view a scene
from various viewpoints bounded by the main lens aperture [1].
This technology was used to build a prototype lightfield
microscope. A possible use for this prototype is monitoring the
trajectory of microscopic objects without the need for periodic
refocusing.
II. STRUCTURE AND SCIENCE OF A LIGHTFIELD CAMERA
Fig. 1: Traditional plenoptic camera. The main lens is focused
onto the microlens array (f2) and the microlens array is
focused onto the imaging sensor (f1).
The lightfield camera system seen in figure 1 consists of a
main lens which can be a generalization of a series of lenses,
a microlens array, and an image sensor. As light enters the
system, the main lens focuses the light onto the microlens
array. For a fixed main lens system, the focal length of the
main lens is the distance to the microlens array. As a result, a
portion of a scene at an effective infinity will focus onto the
microlens array. The microlenses are focused on the principal
plane of the main lens. These lenses are small compared
to the main lens and therefore, the main lens is effectively
at infinity from their viewpoint. The image formed on the
microlens array is in turn focused onto the image sensor, which
is located one microlens focal length behind the array. A
microlens should be thought of as an output image pixel, and
a photosensor pixel value should be thought of as one of the
many light rays that contribute to that output image pixel. [5]
All light entering the camera can be characterized by its
radiance passing through the main lens and microlens array.
The convention used to describe this radiance in equation 1
is L(u,v,s,t). The (u,v) coordinates represent a position on the
main lens and (s,t) are positions, or samples, of the lightfield
passing through the microlenses. The (s,t) plane is sampled via
an imaging sensor, i.e., the central pixel under each microlens.
For a visualization of these planes see figure 1. Considering
only the rays of light that pass through a single (u,v) coordinate
is akin to a pinhole camera. By holding a constant "point"
on the main lens, the aperture effectively shrinks to the (u,v)
point on the main lens. The irradiance reaching the sensor at
microlens position (s,t) is the weighted integral of the total
radiance passing through the system at a point on the main
lens and at a corresponding point on the microlens array.
EF(s, t) = 1/F² ∬ LF(u, v, s, t) du dv    (1)
Note: In equation 1 there is an optical vignetting term,
cos⁴(θ), that has been removed for clarity, where θ is the
angle between the light ray (u,v,s,t) and the sensor-plane
normal. For paraxial approximations this term can be ignored
altogether.
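As a concrete illustration of equation 1, a discrete lightfield can be integrated over its (u,v) samples to form the conventional image a sensor without a microlens array would record. This is a minimal sketch: the array shapes, names, and random data below are assumptions, not part of the original setup.

```python
import numpy as np

# Hypothetical discrete lightfield L[u, v, s, t]: 5x5 angular samples
# over the main-lens aperture and 64x64 spatial samples (microlenses).
rng = np.random.default_rng(0)
L = rng.random((5, 5, 64, 64))

F = 1.0  # main-lens-to-microlens separation (arbitrary units)

# Discrete analogue of equation 1: the irradiance at microlens (s, t)
# is the sum of radiance over all aperture positions (u, v).
E = L.sum(axis=(0, 1)) / F**2
```

Summing over (u,v) discards the angular information, which is exactly what a conventional camera pixel does.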
From lightfield imaging theory, there are three basic concepts that should be understood.
1) The light rays incident on each microlens are sampled
according to their angle of incidence. If the separation
between the microlenses and the imaging sensor de-
creased to zero, the lightfield camera would essentially
transform into a conventional camera.
2) Differing viewpoints of the scene are captured by synthetically shrinking the main lens aperture to a certain
(u,v) coordinate.
3) Differing viewpoints also provide parallax information
and a sense of depth in a scene. [5]
III. IMAGE EXTRACTION (SUB-APERTURE IMAGES)
Once a lightfield is captured, macroscopically, it will appear
to be a traditional photograph taken with a certain focus, such
as would be the case with a conventional camera. Figure 2 is an
example of this image. This is because under each microlens,
a tiny image of the lightfield arriving at each microlens is
captured. In the large scale these tiny images appear to create
the scene being imaged.
Fig. 2: A raw lightfield image has the appearance of being a
traditional photograph when viewed macroscopically.
When the image of the raw lightfield is magnified, it is clear
that the image is composed of many small circular regions.
These are the regions beneath each microlens in the system
and are the images of the lightfield reaching each microlens.
A magnified view is shown in figure 3. Each circular area in
this image is the area of the sensor under each microlens. One
such area can be seen highlighted by a red circle.
Fig. 3: A magnified region from figure 2. Notice the circular
regions. These are the areas of the microlenses, focused onto
the sensor.
Considering equation 1, if (u,v) is held constant by
selecting the same pixel coordinate under each microlens,
as is the case in figure 4, a pseudo-pinhole camera situation
is created and a sub-aperture image can be extracted. By
"the same pixel under each microlens" it is meant that each
microlens covers a fixed set of sensor pixels, one per (u,v)
coordinate. In figure 4, the microlens image has a five pixel
diameter, so there are approximately π(5/2)² ≈ 20 individual
coordinates under each microlens. This also corresponds
to the number of different sub-aperture images that can be
extracted: in this case, 20 images, one per (u,v) coordinate.
Increasing sensor pixel dimensions, or megapixels, provides
a denser population of (u,v) positions.
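The pixel-count arithmetic above can be checked directly; the five-pixel diameter is taken from figure 4, and everything else follows from the area of the circular microlens image.

```python
import math

d = 5  # microlens image diameter in sensor pixels (figure 4)

# Area of the circular region under one microlens, in pixels: this is
# the approximate number of angular samples, and therefore the number
# of sub-aperture images that can be extracted.
n_subaperture = math.pi * (d / 2) ** 2
print(round(n_subaperture))  # about 20
```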
It may be beneficial to consider a single microlens as a
representation of the main lens. The number of pixels beneath
each microlens is the number of sub-aperture images that can
be extracted. Increasing the number of pixels thus increases
the number of these images.
Varying the pixel coordinates (s,t) across the entire array
extracts an image that would have resulted had a conventional
image been taken with the sub-aperture as the new aperture
for the system. This light is focused onto the microlens array
at a specific perspective, depending on the location of the
sub-aperture. [4]
For example, the central coordinate (s,t) = (0,0) is extracted
across every microlens in the image. This isolates the light
rays that passed through the main lens at its central region.
An off-center coordinate extracts a sub-aperture image
corresponding to an off-center synthetic aperture on the main
lens (see figure 6). By extracting these coordinates periodically
across the microlens array, the lightfield is sampled.
The samples in question are rays that passed through a subset
of the full aperture (u,v) plane of the main lens.
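In code, extracting a sub-aperture image amounts to strided indexing of the raw sensor image: the same pixel offset is read under every microlens. The sensor size and pitch below are hypothetical, and real raw lightfields also require calibration of the microlens centers, which is ignored in this sketch.

```python
import numpy as np

pitch = 5  # pixels per microlens (hypothetical)
raw = np.random.default_rng(1).random((300, 300))  # raw sensor image

def sub_aperture(raw, pitch, u, v):
    """Sub-aperture image for aperture offset (u, v): take pixel
    (u, v) of the patch under every microlens."""
    return raw[u::pitch, v::pitch]

center_view = sub_aperture(raw, pitch, 2, 2)  # central (u, v) sample
```

Each distinct (u, v) offset yields one of the roughly twenty viewpoints discussed above.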
Fig. 4: An example of the pixel layout beneath a single
microlens. Taking coordinate (1,0) in this case, and extending
this coordinate across all microlenses, extracts a sub-aperture
image.
This provides a system that records different perspectives of
the same scene. Figure 6 is an example of what occurs when
(u,v) is held constant and (s,t) is varied across all microlenses.
In the case of figure 6, a sub-aperture image, or viewpoint
from a specific (u,v) coordinate, was extracted from a top and
a bottom pixel across all microlenses. By doing this, the original
aperture across the main lens was reduced to isolate the rays
that created these two perspectives.
Fig. 5: An example of digital refocusing applied to an image of three pencils at varying distances from the lightfield camera. (a) The pencil in the foreground is in focus. (b) The middle pencil is in focus.
Fig. 6: Two sub-aperture photographs obtained from a light
field by extracting the shown pixel under each microlens
(depicted on left). Note that the images are not the same, but
exhibit vertical parallax. Image credit from references [4] and
[5].
IV. DIGITAL REFOCUSING
The title of this section is meant to hint at the nature of
this refocusing method. For a standard camera, a physical
adjustment to the focus must be made for a proper image
to be acquired. Using a lightfield imaging system the need
to physically change the focus between each exposure is
removed. Instead, after the image is taken, using a computer
and software, the focus can be altered. Extending equation 1
to a synthetic coordinate (s’,t’), or a plane of focus that is
before or after the original plane of focus distance, F, results
in equation 2.
EαF(s', t') = 1/(α²F²) ∬ LF(u, v, s' + u(1 − 1/α), t' + v(1 − 1/α)) du dv    (2)
The alpha term is the depth of the virtual film plane relative
to F, and E(αF ) is the photograph formed at the virtual film
plane at a depth of (αF ). [5] Extending the focal length or
compressing it will alter the overall focus of the image and
refocus to the new virtual plane. From equation 2, refocusing
to a virtual plane is accomplished by shifting and adding
multiple sub-aperture images where the amount of shifting that
is required follows equation 3. The amount of shift increases
with the distance the sub-aperture image is from the central
sub-aperture image.
shift(horizontal, vertical) = (u(1 − 1/α), v(1 − 1/α))    (3)
This amount of shift is applied depending on the (u,v)
coordinate of each sub-aperture image. The degree of shifting
that occurs is dependent on the alpha value chosen. This is also
scaled by the (u,v) coordinate of the sub-aperture image being
shifted. Equation 4 is a more intuitive form of the continuous
function from equation 2. For a finite number of sub-aperture
images, refocusing is a summation of the shifted versions of
each individual sub-aperture image.
EαF(s', t') = 1/(α²F²) Σu=1..n Σv=1..m Iu,v(s' + u(1 − 1/α), t' + v(1 − 1/α))    (4)
When referencing an (s,t) coordinate or its synthetic
counterpart (s',t'), it is important to again mention that this
is the coordinate taken under each microlens in the image.
These concepts were applied to the images in figure 5. Using
the process of digital refocusing two separate planes of focus
could be extracted from a single image. Figure 5a focuses on
the pencil in the foreground whereas figure 5b is the same
image, but the focus has been moved further from the camera
to the second pencil by varying the alpha value. These two
images were acquired from one image and one original focus.
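The shift-and-add procedure of equations 3 and 4 can be sketched as follows. This is a simplified illustration: the toy sub-aperture images and integer-pixel shifts are assumptions (a real implementation interpolates for sub-pixel shifts), and the 1/(α²F²) scale is replaced by a plain average.

```python
import numpy as np

def refocus(views, alpha):
    """Shift-and-add refocusing (discrete form of equation 4).

    `views` maps an aperture coordinate (u, v), centred on (0, 0),
    to its sub-aperture image. Each image is shifted by
    u*(1 - 1/alpha), v*(1 - 1/alpha) pixels (equation 3), then the
    shifted images are averaged."""
    acc = None
    for (u, v), img in views.items():
        du = int(round(u * (1 - 1 / alpha)))
        dv = int(round(v * (1 - 1 / alpha)))
        shifted = np.roll(img, (du, dv), axis=(0, 1))
        acc = shifted if acc is None else acc + shifted
    return acc / len(views)

# Toy lightfield: 3x3 aperture samples of a flat grey scene.
views = {(u, v): np.full((32, 32), 0.5)
         for u in (-1, 0, 1) for v in (-1, 0, 1)}
refocused = refocus(views, alpha=1.25)
```

Off-center views shift more than the central view, consistent with the statement that the shift grows with the distance from the central sub-aperture image.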
V. LIGHTFIELD MICROSCOPY
The Lightfield Imaging System that has been discussed to
this point has been implemented in a camera setup, or a point
and shoot system for everyday imaging. This technology
can be integrated into other systems such as in the field of
microscopy. It is possible to create a lightfield microscope. In
doing this, one can apply the advantages of lightfield imaging
to microscopy.
There are two rules that govern a lightfield microscope’s
construction.
1) The microlens array must be positioned at the intermediate
image plane.
2) The back focal plane of the microlenses must be
imaged to capture the lightfield.
A standard microscope can be converted into a lightfield
microscope by removing the eyepiece, replacing it with a
microlens array, and positioning a sensor at the back focal
plane of the microlens array. A ray diagram for this situation
is found in figure 7.
For ease of assembly, the back focal plane of the microlens
array can be imaged using a relay, or 1:1 macro lens. A
custom camera would need to be made in order to place the
camera sensor at fIm as this number is on the order of a few
millimeters.
Fig. 7: Ray diagram of a lightfield microscope setup. Focal
length 1 is the objective’s focal length, 2 is the tube length
(generally 160 [mm]), and Focal length Im is the distance to
the back focal plane of the microlens array.
Figure 7 is the ray diagram for a non-infinity corrected
microscope. For further information and additional ray
diagrams for the infinity-corrected objective case, refer to
Levoy's paper, "Optical Recipes for Lightfield Microscopes."
[2]
To ensure that the positioning of each component is correct,
equation 5 should be applied, where f3 is the focal length
of the microlens array. Notice that the position of the sensor
behind the microlens array is slightly greater than the true
focal length.
fIm = 1 / (1/f3 − 1/f2)    (5)
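Plugging the prototype's numbers into equation 5 gives the sensor spacing directly. This is a sketch under the thin-lens assumption, using f2 = 160 [mm] (the tube length from figure 7) and f3 = 5.2 [mm] (the microlens focal length from the experimental setup).

```python
f2 = 160.0  # [mm] tube length (figure 7)
f3 = 5.2    # [mm] microlens focal length

# Equation 5: the sensor sits slightly beyond the microlens focal
# length because the intermediate image is at the finite distance f2.
f_Im = 1.0 / (1.0 / f3 - 1.0 / f2)
print(f_Im)  # ~5.37 [mm], slightly greater than f3
```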
VI. MICROLENS AND OBJECTIVE PAIRING
The resolving power of a multi-lens optical system is
governed by the smallest numerical aperture (NA) among its
lenses. [3] Referring to figure 8, if the main lens’ f number
is larger than the microlenses’ f number, an inadequate use
of sensor area results. When the opposite is true and the
microlenses’ f number is larger than the main lens’ f number,
overlapping occurs, reducing the usable area on the sensor
again.
N2 = f3 / pitch    (7)
N1 and N2 should be equal to ensure that the system uses
the maximum sensor area possible rather than having overlap
or unused area.
VII. RESOLUTION LIMITS OF THE LIGHTFIELD SYSTEM
Recalling figure 1, the imaging plane in the camera system
is located at the microlens array plane. Each microlens
consolidates the light incident at a given angle to a single
pixel. Due to this fact, the number of microlenses in a
lightfield system determines the maximum pixel dimensions
that can be extracted for each sub-aperture image. A term that
can be used for each microlens is a super-pixel. The number
of super-pixels governs spatial resolution because multiple
pixels are being imaged beneath each microlens.
The pixel density of the sensor can also limit the system.
In figure 4 the maximum number of sub-aperture images, or
images from specific perspectives was limited by the number
of sensor pixels beneath each microlens. Therefore, the pixel
density of the sensor determines the angular resolution of the
system. More pixels beneath a microlens will provide finer
discrimination of perspectives and there will be a greater
lightfield sampling.
Fig. 8: An illustration of matched and mismatched microlens
and main lens f numbers. Image credit from references [3] and
[5].
This is the reason for matching the f numbers of the
objective, which is the main lens for a microscope setup,
and the microlens array. Without proper matching, the quality
of the system will be degraded. Equation 6 expresses the f
number for an objective as a function of the magnification
and numerical aperture, where M is the magnification, NA is
the numerical aperture of the objective, and N is the resulting
f number.
N1 = M / (2 · NA)    (6)
To find the f number of the microlens array, equation 7
divides the focal length of the microlens array, defined as f3
earlier, by the pitch, which in this case is the aperture size of
the microlenses.
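For the prototype described in section VIII (30x / 0.35 NA objective, 150 [µm] pitch, 5200 [µm] focal length microlens array), equations 6 and 7 reproduce the approximate match reported there; the values below are taken from that description.

```python
M, NA = 30.0, 0.35         # objective magnification and numerical aperture
f3, pitch = 5200.0, 150.0  # microlens focal length and pitch [um]

N1 = M / (2 * NA)  # equation 6: objective f number, ~43
N2 = f3 / pitch    # equation 7: microlens f number, ~35
print(round(N1), round(N2))
```

Because N1 exceeds N2, the microlens images do not match their patches exactly and some sensor area is wasted, as illustrated in figure 8.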
In other words, the spatial resolution is controlled by the
number of microlenses (Ns ∗ Nt ) and angular resolution is
determined by the number of resolvable samples behind each
microlens, or (Nu ∗ Nv ). The total resolution for a lightfield
system can be written as Ns ∗ Nt ∗ Nu ∗ Nv . In microscopy the
total resolution is limited by the number of resolvable sample
spots in the specimen. The number of resolvable spots is also
the number of pixels beneath each microlens, or (Nu ∗ Nv ).
This criterion is known as the Sparrow limit and is defined
as the smallest spacing between two points on the specimen
such that intensity along a line connecting their centers in the
image barely shows a measurable dip. [3] On the intermediate
image plane the Sparrow limit can be expressed as
Robj = 0.47 · λ · M / NA    (8)
where λ is the wavelength of light, NA is the numerical
aperture of the objective, M is the magnification, and Robj is
the smallest spacing distance. Based on the balance between
spatial and angular resolution, equation 9 provides an upper
limit on the number of measurable spots for a system.
Nu ∗ Nv = (W ∗ H) / Robj    (9)
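Equations 8 and 9 can be evaluated for the full-frame case used in section VIII. The inputs are assumptions drawn from that section: green light at 535 [nm], a 30x/0.35 NA objective, and a 36x24 [mm] sensor.

```python
lam = 535e-9         # [m] wavelength of green light
M, NA = 30.0, 0.35   # objective magnification and numerical aperture
W, H = 36e-3, 24e-3  # full-frame sensor dimensions [m]

R_obj = 0.47 * lam * M / NA     # equation 8: Sparrow limit, ~21.6 [um]
spots = (W / R_obj, H / R_obj)  # equation 9: ~1670 x 1110 resolvable spots
print(R_obj * 1e6, spots)
```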
Another limiting factor is axial resolution, or depth of field.
This is the ability to refocus to features at varying depths
in a sample. For a microscope the total depth of field is a
combination of both the standard depth of field derived from
geometrical optics alone and an additional term to take into
account the wave optics of the microscope. This is given by
Dtot = Dwave + Dgeometric = λn/NA² + (n/(M ∗ NA)) ∗ e    (10)
where e is the spacing between samples in the image and
n is the immersion medium’s index of refraction. [3] In the
case of high NA microscope objectives oil immersion may
be necessary, therefore n would be the index of the oil used.
In most cases the index will be 1.0 (air) because immersion
is not necessary for low magnifications.
For a microscope with Nu = Nv, which is the case for
circular or square microlenses, equation 10 is dominated by
the geometrical term and becomes
Dtot = (2 + Nu) λn / (2 ∗ NA²)    (11)

VIII. EXPERIMENTAL SETUP
Using the microscope concepts outlined in the previous
section, a lightfield microscope was constructed, shown in
figure 9. In our prototype lightfield microscope, a Thorlabs 150
[µm] pitch, 5200 [µm] focal length, 10x10 [mm] microlens
array was used. The available objective that paired best with
the microlens array was 30x, 0.35 NA.
The f number of the microlens array is 35 and the objective’s
f number is 43. These are not perfectly paired and follow the
limitations in figure 8. From the Sparrow limit in equation
8, at 535 [nm], Robj = 21.6 [µm]. The maximum number of
resolvable spots for a full frame, 36x24 [mm] sensor was
1670x1110, from equation 9. With the microlens used in
this setup having a pitch of 150 [µm], the sub-aperture images
have a maximum size of 240x160 pixels. For
symmetrical microlenses, Nu =Nv = # of resolvable spots per
microlens. This is given by the microlens pitch divided by
Robj , which equals 6.94 resolvable spots per microlens. This
number specifies the angular resolution of this system. In
object space the lateral resolution is given by the microlens
pitch divided by the magnification of the objective, which
is 150 [µm] / 30x. This produces a lateral resolution on
the sample of 5 [µm]. Considering the axial resolution from
equation 11, the total depth of focus for this system is 19.5
[µm]. These results were found assuming that we have a
microlens array that has a WxH that fills the sensor of the
camera, i.e. 36x24 [mm]. Recalling the specifications for the
prototype lightfield microscope being discussed, the available
microlens array is 10x10 [mm]. This reduces the total number
of resolvable spots and the maximum sub-aperture image
resolution. These results are summarized in table I.
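The numbers in table I and in the text above can be reproduced from equations 8, 9, and 11. The inputs are the prototype's stated values; the small differences from the paper's 6.94 [spots/microlens] and 19.5 [µm] come from rounding Robj to 21.6 [µm].

```python
lam, n = 535e-9, 1.0  # green light, air immersion
M, NA = 30.0, 0.35    # 30x / 0.35 NA objective
pitch = 150e-6        # [m] microlens pitch

R_obj = 0.47 * lam * M / NA     # equation 8: Sparrow limit, ~21.6 [um]
spots_per_lens = pitch / R_obj  # angular resolution: ~6.9 spots/microlens
lateral = pitch / M             # object-space lateral resolution: 5 [um]
D_tot = (2 + spots_per_lens) * lam * n / (2 * NA**2)  # eq 11: ~19.6 [um]
print(R_obj * 1e6, spots_per_lens, lateral * 1e6, D_tot * 1e6)
```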
TABLE I: Resolution factors calculated for an optimum, full
frame microlens array, and for the microlens array used in the
prototype lightfield microscope. This microlens array measures
10x10 [mm] with 150 [µm] pitch. The wavelength of light
used throughout was green light at λ=535 [nm].
Factor                  | 30x/0.35NA, full frame array | 30x/0.35NA, 10x10 [mm] array
Robj                    | 21.6 [µm]                    | 21.6 [µm]
Total spots             | 1670x1110 [spots]            | 463x463 [spots]
Spots per microlens     | 6.94 [spots/microlens]       | 6.94 [spots/microlens]
Sub-aperture dimensions | 240x160 [pixels]             | 67x67 [pixels]
Angular resolution      | 6.94 [spots/microlens]       | 6.94 [spots/microlens]
Lateral resolution      | 5 [µm]                       | 5 [µm]
Axial resolution        | 19.5 [µm]                    | 19.5 [µm]
For an illumination source, a fiberoptic light source was
directed into a condenser setup. The illumination source
was calibrated for Köhler illumination for a uniform angular
distribution of light rays.
Fig. 9: The prototype lightfield microscope developed using
the theory outlined in this paper.
Fig. 10: An example of focusing at various depths using the
prototype lightfield microscope. The sample is made up of
semi-translucent spheres suspended in liquid. (a) Near focus.
(b) Mid-range focus. (c) Far focus.
Fig. 11: Two sub-aperture images that show an example of
parallax: (a) far left perspective, (b) far right perspective.
The circled regions are emphasized to show that these regions
move behind the objects in the foreground. This parallax is
impossible to achieve with a standard microscope.
Using the theory from section VII, comparisons of angular
resolution, sub-aperture pixel count, and axial resolution
versus microlens pitch are found in figures 12, 13, and 14.
These results were all calculated based on the
experimental setup using a 30x, 0.35 NA objective, and
assuming a wavelength of light equal to 535 [nm]. For each
plot a red box signifies the capabilities of the prototype
lightfield microscope.
Using the prototype lightfield microscope outlined above,
figure 10 shows the resulting refocused imagery for
semi-translucent spheres suspended in liquid. From figure 10a
to 10c, the focus is changed from a near focus to a far
focus. Parallax was also seen in the extracted sub-aperture
images. With a standard microscope, parallax is impossible
to achieve: a standard microscope is orthographic, meaning
it cannot produce different perspectives of a sample, and
translating the stage in the x and y directions does not
produce parallax either. The technology presented in this
paper, as seen in the results of the prototype lightfield
microscope, produces images that do exhibit parallax and do
allow for the extraction of perspectives. Figure 11 presents
two sub-aperture images that show parallax. The circled
regions in figures 11a and 11b are emphasized to show the
viewer that parallax is occurring.
Fig. 12: Angular resolution versus microlens pitch for different
objectives, assuming green light.
X. CONCLUSION AND FUTURE WORK
Fig. 13: Output pixel count versus microlens pitch, assuming
green light. This curve is not dependent on the objective and
therefore only one curve is plotted.
Using the theory outlined in this paper and its references,
a prototype lightfield microscope was constructed. This microscope successfully captured lightfield images of various
samples, providing imagery that can be refocused to different
depths, exhibit multiple viewpoints, and show parallax, which
is impossible with a standard microscope.
The focus for future work with this prototype microscope is to
be able to measure the trajectory and orientation of a moving,
microscopic object. To do this, increases in both lateral and
axial resolution are needed, as the microscopic objects in
question are approximately equal in size to the current lateral
resolution. With improvements, I believe this technology can be
used to map the trajectory and orientation of a moving object.
XI. ACKNOWLEDGEMENTS
I would like to thank Dr. Grover Swartzlander and Dr.
Jinwei Gu for their assistance and advisement throughout this
process. I would also like to thank Xiaopeng Peng for her help
throughout the project. Her skills proved invaluable.
Fig. 14: Axial resolution versus microlens pitch for different
objectives, assuming green light.
To summarize, as microlens pitch increases, angular resolution
increases, sub-aperture image resolution decreases, and
axial resolution increases. There is a constant balance that
must be considered when constructing a lightfield microscope:
increasing the capabilities in one area may reduce the
capabilities in another.
IX. POSSIBLE IMPROVEMENTS
This prototype lightfield microscope can be further optimized
based on the results found in table I. There are
improvements that have the potential to result in a significant
gain in output image resolution and refocusing quality. In my
opinion, the most important gain to be made is to obtain a
custom manufactured microlens array that is better suited for
the prototype setup. The 10x10 [mm] microlens array fills
only a fraction of the camera's field of view and results in
a sub-aperture spatial resolution of only 67x67 pixels, not
counting the microlenses that are obstructed by the mechanism
holding the array. This wastes sensor area and reduces the
field of view and output spatial resolution of the system.
Another consideration is matching the f number of the objective
with the microlens array; in this prototype, the microlens array
could only approximately match the 30x objective being used.
REFERENCES
[1] E. H. Adelson and J. Y. A. Wang. Single lens stereo with a plenoptic camera. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2), February 1992.
[2] M. Levoy. Optical recipes for lightfield microscopes. Technical memo, Stanford University, 2006.
[3] M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz. Light field microscopy. ACM Transactions on Graphics, 25(3), 2006.
[4] R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan. Light field photography with a hand-held plenoptic camera. Technical report, Stanford University, 2005.
[5] R. Ng. Digital Light Field Photography. PhD thesis, Stanford University, July 2006.