Spectral encoding enhances visual flexibility of surgical endoscopes

Citation: Barsi, Christopher, Andreas Velten, Mirella Lorrainy-Altoe, Atiq Rehman, and Ramesh Raskar. “Spectral Encoding Enhances Visual Flexibility of Surgical Endoscopes.” Edited by Tuan Vo-Dinh, Anita Mahadevan-Jansen, and Warren S. Grundfest. Advanced Biomedical and Clinical Diagnostic Systems XII (February 27, 2014). (Proceedings of SPIE; 8935). © 2014 SPIE.
As Published: http://dx.doi.org/10.1117/12.2037454
Publisher: Society of Photo-Optical Instrumentation Engineers (SPIE)
Version: Author's final manuscript
Citable Link: http://hdl.handle.net/1721.1/92454
Terms of Use: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Spectral encoding enhances visual flexibility of surgical endoscopes
Christopher Barsia, Andreas Velten*a,b, Mirella Lorrainy-Altoea, Atiq Rehmanc, Ramesh Raskara
a MIT Media Lab, 77 Massachusetts Avenue, Cambridge, MA, USA 02139;
b Morgridge Institute for Research and Laboratory for Optical and Computational Instrumentation, University of Wisconsin at Madison, 330 N. Orchard St., Madison, WI, USA 53715;
c Robotic & Minimally Invasive Cardiac Surgery, Sarasota Memorial Hospital, 1700 S. Tamiami Trail, Sarasota, FL, USA 34239
ABSTRACT
Endoscope cameras play an important and growing role as diagnostic and surgical tools. The endoscope camera usually provides the operator with a view of the scene directly ahead of the instrument. As in many remotely operated systems, the limited field of view and the inability to pan the camera make it challenging to gain situational awareness comparable to that of an operator with direct access to the scene. We present a spectral multiplexing technique for endoscopes that overlays the existing forward view with additional views at different angles to increase the effective field of view of the device. Our goal is to provide peripheral vision while minimally affecting the design and forward image quality of existing systems.
Keywords: robotic surgery, endoscopy, spectral multiplexing, image reconstruction
1. INTRODUCTION
1.1 Introduction
Endoscopic imaging systems are essential tools in medical diagnostics and in minimally invasive and robotic surgery. While providing many advantages over traditional open surgery, endoscopes present the surgeon with challenges that often arise from the well-known trade-off between field of view and image resolution and quality. Although modern tubular endoscopes provide a forward image at an acceptable resolution, approaching that of the human eye, they offer no peripheral vision and no ability to pan and tilt the camera view for better situational awareness of the scene. Moving components can be used to extend the field of view, but they risk damaging bodily tissue.
Figure 1. Schematic of spectrally-encoded increase in field of view. Two spectrally sensitive mirrors are used to separate
spatially the frontal and side views of an object through a rigid boroscope.
Although peripheral vision is desirable, it cannot come at the cost of imaging resolution in the main forward direction. Even a low-resolution peripheral view can offer the surgeon awareness of the surroundings; detailed knowledge of the periphery is not as crucial. Commercial optical tube endoscope systems are designed to
optimize the point spread function, illumination strength, tube diameter, and tube length.
Advanced Biomedical and Clinical Diagnostic and Surgical Guidance Systems XII, edited by Tuan Vo-Dinh, Anita Mahadevan-Jansen, Warren S. Grundfest, Proc. of SPIE Vol. 8935, 89351E © 2014 SPIE · CCC code: 1605-7422/14/$18 · doi: 10.1117/12.2037454
We present an endoscope
system that uses spectral multiplexing to use the tube and imaging system of the endoscope for two simultaneous video
feeds: one forward and one peripheral. The method utilizes the spectrum of the illumination by inserting wavelength-selective filters in the beam path at both the entrance and exit pupils of the device. A schematic of this contribution is shown in Figure 1. The filters send one spectral band to the front and another to the side. On the detection side, each band is sent to a different spatial location.
1.2 Related Work
Robotic surgical methods are useful for improving minimally invasive surgeries [1,2]. It has been demonstrated that wide-field methods can improve movement times and reduce errors in surgical tasks such as knot-tying and suturing [3,4]. Accordingly, much research has been done to increase the field of view and to generate panoramic images of human cavities [5,6]. Moving wedge prisms can be used to direct the light path to various peripheral locations [7]. Scanning fibers have been shown to provide high resolution in catheterscopes [8,9]. Polarization-based methods multiplex light based on its vector nature [10,11]. Many commercial devices exist [12-14].
However, spectral information is also important for, e.g., cancer detection [15], imaging fluorescent markers [16], and
imaging sub-surface features [17], so the spectral coding must be efficient. Some methods exist for mitigating
overlapping spectra (in fluorescence imaging) [18]. Wavelength (spectral) coding has been used for topography (shape)
[19]. Some work has studied various wavelength coding under solid state lighting [20].
In computational imaging, much work has been done in spectral coding in other fields. Spectroscopy using coded
apertures has a long history [21], with much recent progress in light of compressive sensing techniques [22-24]. Super-resolution effects have been demonstrated by spectral dilation [25].
2. EXPERIMENTAL DEMONSTRATION
2.1 Method and Setup
Spectral multiplexing employs passive multiband filters to separate an image into different channels by subdividing the
red, green, and blue color channels, so that wavelengths R, G, and B correspond to, e.g., the frontal view, and
wavelengths R + δR, G + δG, and B + δB correspond to the peripheral view. Because this subdivision is finer than the
spectral resolution of the human eye, these channels still carry full color information and can address the complete color
space perceivable by a human observer. An example of this technique used for left and right image separation is the
Dolby 3D cinema standard [26].
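The subdivision scheme can be illustrated with a short sketch. The band edges below are illustrative placeholders, since neither the paper nor the Dolby standard's exact boundaries are specified here; the point is only that each primary color band is split into two interleaved halves, one per view:

```python
# Hypothetical sub-band edges in nm; illustrative only, not the
# actual filter design of this system or of Dolby 3D.
BANDS = {
    "blue": (440, 490),
    "green": (520, 570),
    "red": (620, 670),
}

def route(wavelength_nm):
    """Return 'front' or 'side' for a wavelength inside a pass band.

    In this sketch the lower half of each color band carries the
    frontal view; the upper half (the shifted bands R + dR, G + dG,
    B + dB) carries the peripheral view. Wavelengths outside all
    bands return None.
    """
    for lo, hi in BANDS.values():
        mid = 0.5 * (lo + hi)
        if lo <= wavelength_nm < mid:
            return "front"
        if mid <= wavelength_nm <= hi:
            return "side"
    return None
```

Both halves of each band fall within the same perceptual color channel, which is why each view can still address the full color space.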
Although the ideal case would multiplex red, green, and blue channels to both the front and side views, for our current proof-of-concept we use standard dichroic filters designed as low-pass filters at 45 degrees. As a result, our system is monochrome, producing red and green images for the front and side views, respectively. Custom-designed multiband filters, similar to those used for image separation in the Dolby 3D cinema standard, would allow us to provide high-quality, full-color images.
Figure 2. Photograph of the setup. The light source (foreground) is fiber-coupled to a rigid boroscope (Olympus). Light at
the exit pupil is split by a dichroic mirror (right) to illuminate front and peripheral objects. Light is collected by the scope
and is separated on the detection side by an identical dichroic mirror (left). A camera records the front and side views in two
different locations.
A photograph of the experimental setup is shown in Figure 2. A rigid boroscope (5 Series, R060-032-000-50) is illuminated via fiber coupling with an Olympus ILK-7 white light source. On either side of the boroscope is placed
a multiplexing filter, here a pair of dichroic mirrors with long-pass edge at 567 nm (Thorlabs DMLP567). Both are
oriented at approximately 45° relative to the boroscope axis. The object is placed after the mirror at the exit pupil. The
object here is a sheet of paper with measuring lines printed on it. Part of the paper is directly in front of the mirror, and
part is folded to the side (periphery). Light reflected from the object is recorded by an Olympus SLR camera (Olympus E-PL2), which is moved between a front position and a peripheral position to record the two views. Thus, the multiplexing dichroic filters route one set of frequencies (green) to the “side” view and another set (red) to the “frontal” view. Images of the object are recorded with and without the filters.
Results are shown in Figure 3. Fig. 3a is an image recorded with only the exit-side mirror in place. As expected, both views overlap on the image plane, but there is clear spectral separation, with the green channel containing the side-view information and the red channel containing the frontal-view information. From this single image alone, the two views can be separated simply by splitting the RGB bitmap numerically into its channels. An image recorded with both mirrors in place is shown in Fig. 3b. Now the peripheral view has been completely filtered out of the “front-view” camera position; the image is the same as would be recorded with both mirrors removed. Note the significant vignetting in all images. This is due to the physical separation between the scope and the camera, which are designed to operate in contact; the separation is necessary here to insert the image-side mirror. Integrating the mirrors within the scope would eliminate this artifact.
Figure 3. a) Image with a single dichroic mirror placed on the object side of the boroscope. b) Same view, but with both
mirrors in place. Only the frontal view is recorded at this camera position. The peripheral view is directed to a different
image plane (c.f., Fig. 1).
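The numerical channel separation is trivial in this two-color proof of concept: with the frontal view carried by the red channel and the side view by the green channel, a single frame can be split per channel. A minimal sketch, assuming an H × W × 3 array in RGB order:

```python
import numpy as np

def split_views(rgb_frame):
    """Split one multiplexed RGB frame into front and side views.

    Assumes the frontal view is carried by the red channel and the
    peripheral view by the green channel, as in the proof-of-concept;
    a full multiband system would demultiplex finer sub-bands instead.
    """
    rgb = np.asarray(rgb_frame)
    front = rgb[..., 0]  # red channel -> frontal view
    side = rgb[..., 1]   # green channel -> side view
    return front, side
```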
Figure 4. a) Object under study. Dichroic mirror and boroscope are visible in the foreground. b,c) Images with the dichroic
mirror in place. d,e) Images without the dichroic mirror.
We show both front and peripheral views in Figure 4. Fig. 4a shows a photograph of the scene with the exit-side mirror
in place. We see clear spectral separation of the (white light) illumination into two colors. Figures 4b,c are the front
image and side image, respectively. Each component is routed successfully to the appropriate location. Figures 4d,e
show the images without the exit-pupil mirror in place. In this configuration, no illumination is directed to the side, and the frontal view receives the entire spectrum; the frontal image is therefore routed to both output channels. (There is an extra horizontal inversion in the side-view channel because one mirror is absent.)
2.2 Discussion
The central insight in this communication is that endoscopic images are enhanced by utilizing the wavelength parameter,
which allows for increased information throughput. A possible trade-off, however, is that the recorded images could
become noisier, as optical power is split between different channels. To begin to address this noise issue, we analyze the
central portion of the images in Figures 3b and 4b,c. A region of 60 × 400 pixels is selected. The intensity values of the
red, blue, and green channels, and the composite (sum) are plotted in Figure 4. We see that, for our specific
configuration, the pixel value does not exhibit an appreciable change in standard deviation (Table 1). In general, a more
detailed analysis, via calculation of Cramér-Rao bounds is necessary. The strength of the illumination light is limited by
heat generation in the tube. By filtering the illumination light with the same filters, it is still possible to tune illumination
strength for the forward and sideways channels independently or to disable the side view and send all illumination
through the forward channel. This would yeald nearly the same illumination power and the same signal to noise ratio in
the forward channel as in a system not using wavelength multiplexing.
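The tabulated statistics amount to a simple per-channel computation over the selected region. A sketch follows; the region coordinates are placeholders, since the exact pixel offsets are not given in the text:

```python
import numpy as np

def region_stats(img, y0=0, x0=0, h=60, w=400):
    """Per-channel mean and standard deviation over a rectangular
    region of an H x W x 3 image, as tabulated in Table 1.

    (y0, x0) marks the region's top-left corner; the defaults are
    placeholders, not the offsets actually used in the paper.
    """
    patch = np.asarray(img, dtype=float)[y0:y0 + h, x0:x0 + w]
    composite = patch.sum(axis=-1)  # sum of R, G, B (the cyan curve)
    return {
        "R": (patch[..., 0].mean(), patch[..., 0].std()),
        "G": (patch[..., 1].mean(), patch[..., 1].std()),
        "B": (patch[..., 2].mean(), patch[..., 2].std()),
        "composite": (composite.mean(), composite.std()),
    }
```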
Figure 5. Plot of pixel values for the central regions of Figs. 3b and 4b,c, respectively. A region of 60 × 400 pixels is plotted. Left: intensity values for Fig. 3b. Center: intensity values for Fig. 4b. Right: intensity values for Fig. 4c. Red, green, and blue curves represent the values of the respective color channels. The cyan curves are the sum of the R, G, and B channels.
                   Mean                          Standard Deviation
             Baseline   Ch (G)   Ch (R)      Baseline   Ch (G)   Ch (R)
Ch (R)         88.3      17.1    223.1         5.80      3.31     8.23
Ch (G)         80.0     135.4     61.7         4.69      9.87     4.74
Ch (B)         62.7      93.9      7.0         4.94      5.99     2.99
Composite     231.0     246.4    291.8        14.30     17.03    13.49
Table 1. Means and standard deviations of the curves in Figure 5. Note that the standard deviations are all relatively low compared to the means, indicating high SNR.
Similarly, the mirror design should provide a continuous panoramic view spanning the front and side views. Currently, because the components here are discrete, the filter thickness prevents illumination of the corner of the object (see, e.g., Figure 4a). To quantify the gap, we replace the ruled object with a tilted line. If the entire field were illuminated, stitching the frontal and peripheral views together would reconstruct a continuous line. Here, however, the non-illuminated corner precludes perfect reconstruction, and a gap is revealed (Figure 6). We measure the height of this gap,
using the known object size, and we calculate a gap angle of approximately 40°. For a commercial device, however, the mirrors would be integrated with the boroscope (e.g., via thin-film fabrication), making the gap far smaller.
Figure 6. Panoramic stitching of frontal and peripheral views. The gap in the sloped line indicates a missing “wedge” of
illumination in the corner of the object. The wedge angle, caused by the finite thickness of the mirror, is ~ 40°.
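Given a measured gap height and the object's distance from the exit pupil, the wedge angle can be estimated with elementary geometry. This is our simplified model, not the paper's stated calculation, which reports only the measured result of roughly 40°:

```python
import math

def gap_angle_deg(gap_height, distance):
    """Angular size of the non-illuminated wedge, estimated from the
    measured gap height and the distance from the exit pupil to the
    object, both in the same units (a small-scene approximation)."""
    return math.degrees(math.atan2(gap_height, distance))
```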
3. CONCLUSIONS
In conclusion, we have demonstrated a spectral coding method to enhance the field of view of endoscopes for use in
robotic-assisted surgery. The method is straightforward and involves spectral separation of frontal and peripheral views
using a spectrally selective pair of beamsplitters. Our proof-of-principle demonstration can be improved by designing a non-planar mirror to acquire a full 360° field of view after computational panoramic stitching. In other implementations, the device can include an actuator for the filter at the endoscope tip to provide a full-field view. In addition, the illumination strength can be tuned independently for the two channels. Furthermore, multiband channeling can enable additional flexibility for surgical endoscope systems, and it can be added to existing designs at little additional cost. We envision that this new degree of freedom will help improve mobility and time performance in robotic-assisted surgeries.
ACKNOWLEDGEMENTS
The authors thank Olympus Corporation for providing equipment for this project. This work was funded by the MIT Media Lab Consortium.
REFERENCES
[1] Felger, J., Reade, C., Nifong, L. and Chitwood Jr., R., “Robot-assisted sutureless minimally invasive mitral valve
repair,” Surg. Technol. Int. 12, 185-187 (2004).
[2] Ali, M. R., Bhaskerrao, B. and Wolfe, B. M., “Robot-assisted laparoscopic Roux-en-Y gastric bypass,” Surg.
Endosc. 19(4), 468-472 (2005).
[3] Cao, A., Ellis, R. D., Composto, A., Pandya, A. K. and Klein, M. D., “Supplemental wide field-of-view monitor
improves performance in surgical telerobotic movement time,” Int. J. Med. Rob. Comp. Assisted Surgery 2(4), 364-369 (2006).
[4] Cao, A., Ellis, R. D., Klein, E. D., Auner, G. W., Klein, M. D. and Pandya, A. K., “Comparison of a supplemental
wide field of view versus a single field of view with zoom on performance in minimally invasive surgery,” Surg.
Endosc. 22(6), 1445-1451 (2008).
[5] Fair, S. B. and Gilbert, J. A., “Panoramic endoscopy,” Proc. SPIE 1649, 203-207 (1992).
[6] Roulet, P., Konen, P., Villegas, M., Thibault, S. and Garneau, P. Y., “360° endoscopy using panomorph lens
technology,” Proc. SPIE 7558, 75580T (2010).
[7] Kobayashi, E., Sakuma, I., Konishi, K., Hashizume, M. and Dohi, T., “A robotic wide-angle view endoscope
system using wedge prisms,” Surg. Endosc. 18(9), 1396-1398 (2004).
[8] Lee, C. M., Engelbrecht, C. J., Soper, T. D., Helmchen, F. and Seibel, E. J., “Scanning fiber endoscopy with highly
flexible, 1-mm catheterscopes for wide-field, full-color imaging,” J. Biophotonics, 3(5-6), 385-407 (2010).
[9] Seibel, E. J. and Smithwick, Q. Y. J., “Unique features of optical scanning, single fiber endoscopy,” Lasers in
Surgery and Medicine 30(3), 177-183 (2002).
[10] Tsuruyama, T., Yamanaka, N., Kuwana, K., Masmune, K., Kim, K. and Dohi, T., “Polarizing beam splitter
endoscope for simultaneously observing front and lateral view,” Comp. Aided Surg. 3, 29-37 (2012).
[11] Kim, K., Kamiuchi, H., Masamune, K. and Dohi, T., “A new, safer, controllable field-of-view endoscope avoiding
movement inside body cavities,” Med. Eng. Phys. 33(2), 174-179 (2011).
[12] Olympus, “PCF-H190 EVIS EXERA III Video Colonoscope,” http://www.olympuseuropa.com/exera3/prod_pcfh190.html?page=gi.
[13] C-Vision, Inc., “C-ViewTM Endoscope,” http://www.intersci.com/media/CViewEndoscope.pdf.
[14] Richard Wolf, “Panoview Endoscopes,” http://www.richard-wolf.com/en/company/endoscopy/panoviewendoscopes.html
[15] Kester, R. T., Gao, L., Bedard, N. and Tkaczyk, T. S., “Real-time hyperspectral endoscope for early cancer
diagnostics,” Proc. SPIE 7555, 75550A (2010).
[16] Saunter, C. D., Semprini, S., Buckley, C., Mullins, J. and Girkin, J. M., “Micro-endoscope for in vivo widefield high
spatial resolution fluorescent imaging,” Biomed. Opt. Express 3(6), 1274-1278 (2013).
[17] Yelin, D., Bouma, B. E. and Tearney, G. J., “Volumetric sub-surface imaging using spectrally encoded endoscopy,”
Optics Expr. 16(3), 1748-1757 (2008).
[18] Yang, C., Hou, V., Nelson, L Y. and Seibel, E. J., “Mitigating fluorescence spectral overlap in wide-field
endoscopic imaging,” J. Biomed. Opt. 18(8), 86012 (2013).
[19] Froehly, L., Bachmann, A. H., Depeursinge, C., Lasser, T. and Lang, F., “Wavelength multiplexed spectral
interferometry for endoscopic topographic imaging,” Proc. SPIE 5864, SuD6 (2005).
[20] Farr, M., “Wavelength multiplexing endoscope,” US Patent Application US20090292168A1.
[21] Golay, M. J. E., "Multi-slit spectrometry," J. Opt. Soc. Am. 39(6), 437 (1949).
[22] Katz, O., Levitt, J. M. and Silberberg, Y., "Compressive Fourier transform spectroscopy," arXiv:1006.2553 [physics.optics], http://arxiv.org/abs/1006.2553.
[23] Mrozack, A., Marks, D. L. and Brady, D. J., "Coded aperture spectroscopy with denoising through sparsity," Opt.
Expr. 20(3), 2297-2309 (2012).
[24] Brady, D. J. and Gehm, M. E., "Compressive imaging spectrometers using coded apertures," Proc. SPIE 6246,
62460A (2006).
[25] Zalevsky, Z., Eckhouse, V., Konforti, N., Shemer, A. and Mendlovic, D., "Super resolving optical system based on
spectral dilation," Opt. Comm. 241(1-3), 43-50 (2004).
[26] Dolby, “Dolby 3D system components,” http://www.dolby.com/us/en/professional/hardware/cinema/dolby3d/dolby-3d-system.html.