Computational Imaging and Superresolution Program

Welcome
8:30   Welcome and Introductions; Big Picture: overview of everyone's interests (Mike Fiddy, Ravi Athale, Joe Mait)

Session I
9:30   Optical Superresolution I (Colin Sheppard)
10:30  Optical Superresolution II (Jim Fienup + Sapna Shroff)
11:00  Fundamental limits to optical systems (Raphael Piestun)
11:30  Coherence and subwavelength sensing (Aristide Dogariu)
12:00  Lunch and posters

Session II
1:00   Computational Cameras (Shree Nayar)
2:00   An overview of superresolution (Chuck Matson)
2:30   Spectral estimation algorithms I (Charlie Byrne)
3:00   Estimating the degree of polarization from intensity measurements (Tim Schulz)
3:30   GST-PDFT (Markus Testorf)
4:00   Light Field Sensing (Marc Levoy)
5:00   Motion invariant photography (Fredo Durand)
7:00   "Woodstock" Themed Reception and Dinner; brainstorming discussions

Session III
9:00   Biological & engineered information processing systems (Andreas Andreou)
10:00  Less is More: Coded Computational Photography (Ramesh Raskar)
11:00  New camera form factors (Jim Leger)
12:00  From macro to micro: the challenge of miniaturization (Kenny Kubala)
12:30  Lunch

Session IV
1:30   PERIODIC (Bob Plemmons/Sudhakar Prasad)
2:00   COMP-I advances (Bob Gibbons/Nikos Pitsianis/Andrew Portnoy)
2:30   Imaging demos; discussion and Futures Contest (PowerPoint 2030 concepts)
6:00   "Around the World" Themed Reception
       After-dinner speaker: Optical superresolution using compressive spectral imaging systems (Dave Brady)
       *Vote on 2030 concept papers!

Session V
8:30   Imaging with coded apertures (Bill Freeman)
9:30   Multiaperture imaging sensors (Keith Fife)
10:30  Feature specific imaging (Mark Neifeld)
11:30  Compressive imaging for wide-area persistent surveillance (Bob Muise)
12:00  Single pixel camera (Kevin Kelly)
12:30  Lunch

Open Forum: Challenges and Future Initiatives
1:30   Device Fabrication and Integration
2:00   Funding needs roundtable: Dennis Healy (DARPA), Eric Johnson (NSF), Dr. Todd Du Bosq (Army Night Vision), Tim Persons (IARPA)
4:00   Wrap up
5:00   End
6:00   "Mardi Gras" Themed Reception and Dinner; continue discussions... for those who are left!
Dr. Robert G. Wilhelm
Executive Director
Charlotte Research Institute
University of North Carolina at Charlotte
An experienced educator, researcher, engineer, and businessman, Dr. Robert G. Wilhelm provides executive and
administrative leadership for the Charlotte Research Institute (CRI), UNC Charlotte’s portal for business-university
science and technology partnerships. With its research centers housed in three new custom-designed buildings on the
Charlotte Research Institute Campus, CRI helps companies initiate new partnerships at UNC Charlotte and offers a
variety of opportunities to engage talented faculty and make use of specialized facilities that are available only at UNC
Charlotte.
Wilhelm is a Professor of Mechanical Engineering and Engineering Science in the William States Lee College of
Engineering. Dr. Wilhelm has wide experience in both academic and business circles. At UNC Charlotte since 1993,
Wilhelm was a founding faculty member for PhD programs in Mechanical Engineering, Biotechnology, Information
Technology, and Nanoscience. He served on the committees to form the School of Computing and Informatics and the
PhD program in Optical Sciences and Engineering. Most recently he served as the associate director of the Center for
Precision Metrology, an Industry/University Cooperative Research Center funded by the National Science Foundation.
Before coming to Charlotte, Wilhelm worked at the Palo Alto Laboratory of Rockwell Science Center and at Cincinnati
Milacron. He co-founded a high-technology manufacturing company, OpSource, Inc., in 2001.
Wilhelm holds a bachelor’s degree in industrial engineering from Wichita State University, a master’s degree in
industrial engineering from Purdue University, and a doctorate in mechanical engineering from the University of Illinois
at Urbana-Champaign. Wilhelm also pursued postgraduate studies in Great Britain as a Rotary Foundation Fellow. His
research and teaching have been recognized with the National Science Foundation Young Investigator Award. Dr.
Wilhelm serves on a number of regional, national, and international advisory boards for scientific research,
engineering, community and economic development, and philanthropy.
Dr. Michael A. Fiddy
Director
Center for Optoelectronics and Optical Communications
University of North Carolina at Charlotte
The Charlotte Research Institute and the Center for Optoelectronics and Optical Communications at the
University of North Carolina at Charlotte welcome participants to this workshop on Computational Imaging and
Superresolution. This is the fifth summer workshop to be sponsored by the Charlotte Research Institute, and we are
very grateful to them for their support. We also thank MITRE Corporation and the National Science Foundation for
their sponsorship of this workshop.
Recent advances in technologies for optical wavefront manipulation, optical detection, and digital post-processing have opened up new possibilities for imaging in the visible and IR. New imaging systems are emerging
which differ in form factor and capabilities from traditional imaging and camera designs. The DARPA MONTAGE
program pushed forward ideas for reduced form factor cameras incorporating new concepts in the integration of optical,
detection, and processing subsystems. This has led to emerging capabilities for co-design and joint optimization of
the optical, detection, and, more importantly, information processing aspects of imaging systems. A parallel effort,
IARPA's PERIODIC program, sought new functionality by exploiting a number of lenslets to capture and fuse different
information about a scene. This too has led to new ideas about what defines a camera. Both programs have
advanced the integration of micro-optics technologies and new algorithms that can reduce data acquisition while
extracting more information of value. This workshop brings together researchers with both hardware and software
expertise, as well as mathematicians and physicists who are actively working on the fundamental issues of information
theory and light-matter interactions, to bring new ideas to this exciting field. Diverse communities that have not
interacted before, such as the IEEE computational photography group and experts in fundamental limits to optical
superresolution, will participate. Our intention is to bring together these different communities and provide a stimulating
environment with ample opportunities for exchanging new ideas. Following the style of a Gordon Conference, we hope
to have provided sufficient time throughout each day and in the evenings for participants to interact with each other
and form long-term collaborative partnerships that advance the field.
A special thanks goes to those who have had to deal with all of the logistical and planning details that go into
making a workshop such as this a success. This meeting would not have been possible without the hard work and
dedication of Mark Clayton, Karen Ford, Jerri Price, Margaret Williams and Scott Williams.
MICHAEL FIDDY received his Ph.D. in Physics from the University of London in 1977, and was a post-doc in the
Department of Electronic and Electrical Engineering at University College London before becoming a tenured faculty
member in 1979 at Queen Elizabeth College and then King's College, London University. Between 1982 and 1987, he
held visiting professor positions at the Institute of Optics, Rochester, and the Catholic University of America in
Washington, DC. Dr. Fiddy moved to the University of Massachusetts Lowell in 1987 where he was Electrical and
Computer Engineering Department Head from 1994 until 2001. In 2002 he moved to UNC Charlotte to become the
founding director of the Center for Optoelectronics and Optical Communications. He was the topical editor for signal
and image processing for the J.O.S.A. A from 1994 until 2001 and has been the Editor-in-Chief of the journal Waves in
Random and Complex Media (Taylor and Francis) since 1996. He has chaired a number of conferences in his field, and
is a fellow of the Optical Society of America, the Institute of Physics, and the Society of Photo-Optical Instrumentation
Engineers (SPIE). His research interests include inverse problems and optical information processing.
We Thank our Sponsors:
Dr. Andreas G. Andreou
Electrical and Computer Engineering
Center for Language and Speech Processing and
Whitaker Biomedical Engineering Institute
Johns Hopkins University
andreou@jhu.edu
Title: Silicon Eyes
Biological sensory organs operate at performance levels set by fundamental physical limits, under severe constraints on size, weight, and energy
resources; the same constraints that sensor network devices have to meet. Eyes are specialized sensory structures in biological systems that are
employed to extract information from the intensity, polarization, and spectral content of the light that is reflected or emitted by objects in natural
environments. Reliable and timely answers to the questions "Is there anything out there?", "Where is it?", and eventually "What is it?" are the goal of
the processing that follows the photoreceptor mosaics. This is in contrast to CCD or CMOS video and still cameras, which have been developed for the
precise measurement of the spatial-temporal light intensity and color distribution, often within a fixed time interval, for accurate communication and
reproduction in electronic or printed media.
In this talk, I discuss bio-inspired image sensor architectures that employ local processing for data reduction and information extraction. I begin
with processing at the photon level to extract polarization information at each pixel. I then introduce circuits for analog local pre-processing for
spatial and temporal filtering and gain control, addressing issues of noise and device mismatch. I then examine the power/rate/latency tradeoffs of
synchronous and asynchronous schemes for accessing the pixel data in the 2D arrays. Anisochronous pulse time modulation, together with
address-event representation encoding and processing of data in distributed architectures, is an attractive alternative to traditional synchronous
digital signal processing. Finally, I discuss more recent work in single photon detection using deep sub-micron CMOS technologies and light field sensing pixels
fabricated using 3D SOI-CMOS technologies.
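The address-event idea above can be sketched in a few lines. The toy model below (the pixel array, threshold, and function name are all invented for illustration, not the circuits described in the talk) shows how the event rate tracks scene content rather than a fixed frame rate:

```python
import numpy as np

def aer_events(intensity, t_steps=100, threshold=1.0):
    """Toy address-event representation (AER) readout.

    Each pixel integrates its intensity and emits an "event" (its row,
    column address plus a timestamp) whenever the integral crosses the
    threshold, so bright pixels fire often and dark pixels rarely: the
    data rate follows scene content instead of the frame clock."""
    acc = np.zeros_like(intensity, dtype=float)
    events = []  # list of (t, row, col) address events
    for t in range(t_steps):
        acc += intensity
        fired = acc >= threshold
        for r, c in zip(*np.nonzero(fired)):
            events.append((t, int(r), int(c)))
        acc[fired] -= threshold
    return events

# A 2x2 "scene": one bright pixel, one dim pixel, two dark ones.
scene = np.array([[0.0, 0.5],
                  [0.05, 0.0]])
events = aer_events(scene)
# 55 events in total, versus 4 pixels x 100 frames = 400 samples for a
# synchronous frame-based readout of the same sequence.
```

The asynchronous access schemes discussed in the talk then arbitrate which pixel's address is placed on the shared bus when several fire at once, a detail this sketch ignores.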
References
M. Adlerstein Marwick and A.G. Andreou, “Fabrication and testing of single photon avalanche detectors in the TSMC 0.18um CMOS technology,”
Proceedings of the 41st Annual Conference on Information Sciences and Systems (CISS07), pp. 741-744, Baltimore, March 2007.
E. Culurciello and A.G. Andreou, “CMOS image sensors for sensor networks,” Analog Integrated Circuits and Signal Processing, Vol. 49, pp. 39-51,
2007.
F. Tejada, P.O. Pouliquen and A.G. Andreou, “Stacked, standing wave detectors in 3D SOI-CMOS,” Proceedings of the 2006 IEEE International
Symposium on Circuits and Systems, (ISCAS 2006), Kos, Greece, pp. 1315-1318, May 2006.
M.A Marwick and A.G. Andreou, “Retinomorphic system design in three dimensional SOI-CMOS,” Proceedings of the 2006 IEEE International
Symposium on Circuits and Systems, (ISCAS 2006), Kos, Greece, pp. 1655-1658, May 2006.
A.G. Andreou and Z.K. Kalayjian, “Polarization imaging: principles and integrated polarimeters,” IEEE Sensors Journal, Vol. 2, No. 6, pp. 566-576,
Dec. 2002.
P.A. Abshire and A.G. Andreou, “Capacity and energy cost of information in biological and silicon photoreceptors,” Proceedings of the IEEE, Vol.
89, No. 7, pp. 1052-1064, July 2001 (Invited Paper).
L.B. Wolff, T.A. Mancini, P.O. Pouliquen and A.G. Andreou, “Liquid crystal polarization camera,” IEEE Transactions on Robotics and Automation, Vol.
13, No. 2, pp. 195-203, April 1997.
ANDREAS ANDREOU received his Ph.D. in electrical engineering and computer science in 1986 from Johns Hopkins University.
Andreou became an assistant professor of electrical and computer engineering in 1989, associate professor in 1993, and professor in
1996. He now holds appointments in computer science and in the Whitaker Biomedical Institute. He is the co-founder of the Johns
Hopkins University Center for Language and Speech Processing. In 1996 and 1997 he was a visiting professor in the computation
and neural systems program at the California Institute of Technology. In 1989 and 1991 he was awarded the R.W. Hart Prize for
his work on mixed analog/digital integrated circuits for space applications. He is the recipient of the 1995 and 1997 Myril B. Reed
Best Paper Awards and the 2000 IEEE Circuits and Systems Society Darlington Award. During the summer of 2001 he was a visiting
professor in the department of systems engineering and machine intelligence at Tohoku University. Andreou's research interests
include sensors, micropower electronics, heterogeneous microsystems, and information processing in biological systems. He is a
co-editor of the IEEE Press book Low-Voltage/Low-Power Integrated Circuits and Systems, 1998 (translated into Japanese) and of the
Kluwer Academic Publishers book Adaptive Resonance Theory Microchips, 1998.
Amit Ashok
Senior Scientist
OmniVision CDM Optics Inc.
4001 Discovery Drive, Suite 130
Boulder, CO 80303
Tel: (303) 345-2180
Email: amit.ashok@cdm-optics.com
His main research interests are in the areas of computational optical imaging and statistical signal processing. His
research in optical point spread function engineering and multi-aperture imaging led to an ultra-thin imager design
with super-resolution capability as part of DARPA's MONTAGE program. His doctoral research work involved a formal
framework for an information-theoretic analysis of computational imaging systems and a task-specific approach to
imaging system design. He has published several journal articles on the topics of task-specific design and
information-theoretic analysis of computational imaging systems.
AMIT ASHOK is currently a senior scientist at CDM Optics. He received his Master's degree in Electrical Engineering
from the University of Cape Town in 2001 and is currently a PhD candidate in the Electrical Engineering department at
the University of Arizona.
Dr. Vasily N. Astratov
Associate Professor
Department of Physics and Optical Science
University of North Carolina at Charlotte
Tel: 704/ 687-8131
Fax: 704/ 687-8197
E-mail: astratov@uncc.edu
http://maxwell.uncc.edu/astratov/astratov.htm
Novel structures and materials: microcavities and photonic crystals, coupled resonator optical waveguides, opals.
Quantum optics: light-matter interaction, optical coupling between high-Q cavities, localization of light, polaritons.
Optoelectronics applications: integrated optical circuits, delay lines, switches and spectrometers on a chip.
VASILY ASTRATOV is an associate professor in the Department of Physics and Optical Science at the University of
North Carolina-Charlotte. He received his M.S. from the St. Petersburg State University, Russia, in 1981, and received
his Ph.D. degree from the A.F. Ioffe Physical-Technical Institute, St. Petersburg, in 1986. In 1993-1997 he headed a
research group at the Ioffe Institute where he pioneered studies of synthetic opals as new three-dimensional photonic
crystal structures, the work which directly resulted in a quest for high contrast opals with a complete photonic band
gap. In 1996 he was awarded a Royal Society grant that enabled a visit to the University of Sheffield, U.K. In
1997-2001 he worked as a postdoctoral scholar at the University of Sheffield where he developed novel surface
coupling techniques for studying photonic crystal waveguides, and was engaged in the studies of semiconductor
microcavities. He has been an assistant professor from 2002 to 2007 in the Department of Physics and Optical Science
at the University of North Carolina-Charlotte, where he is now an associate professor. His current research aims at
studying optical properties of novel mesoscopic structures and materials formed by coupled ultra-high-Q cavities. He
has been a topical editor for the journal Optics Express since 2005. In 2007 he organized and edited a Focus Issue of Optics
Express devoted to Physics and Applications of Microresonators. He was one of the hosts and a main organizer of the
CRI workshop on Physics of Microresonators in 2007. He has served as a technical committee member for CLEO/QELS
2006-07, Special session on Microresonators and Photonic Molecules at ICTON 07, and OECC/ ACOFT 08. He has been
a member of the international DFG panel on photonic crystals in Germany. He is a recipient of a number of awards
including Senior Visiting EPSRC Fellow Award in the UK in 2006, Award of the Exchange Program adopted between
Royal Society and Russian Academy of Sciences in 1996, and the Award in the Annual Competition from A.F. Ioffe
Physical-Technical Institute in 1985. He is a member of OSA and SPIE.
Dr. Ravi Athale
Principal Scientist
Emerging Technology Office
MITRE Corp.
MS H205
7515 Colshire Drive
McLean, VA 22102-7508
Dr. Athale is a Principal Scientist at MITRE Corporation and was until recently Photonics Program Manager at the Defense
Advanced Research Projects Agency (DARPA/MTO). Among other accomplishments at DARPA, Dr. Athale, along with Dr.
Dennis Healy, managed the Multiple Optical Non-Redundant Aperture Generalized Sensors (MONTAGE) program.
Current research interests include optical interconnections and switching and hybrid digital/optical imaging systems.
Dr. Athale has been issued several patents in optical processing and computing. He is a co-founder of HoloSpex, Inc.
and a co-inventor of HoloSpex glasses, the first consumer product based on far-field holograms.
RAVI ATHALE received his B.Sc. (1972) from the University of Bombay and M.Sc. (1974) from the Indian Institute of
Technology, Kanpur, both in Physics. He finished his Ph.D. (1980) in Electrical Engineering at the University of California,
San Diego. From 1981 to 1985 he worked as a Research Physicist at the US Naval Research Laboratory in Washington,
DC, where his areas of research were optical signal and image processing systems. From 1985 to 1990 he was a Senior
Principal Staff Member at BDM Corporation in McLean, VA where he headed a group in Optical Computing. His
research there was in optical interconnects and multistage switching networks and optical neural network
implementations. Since 2001 he has been a program manager for Photonics at the Defense Advanced Research
Projects Agency (DARPA). Prior to that he was an Associate Professor in the Electrical and Computer Engineering
Department at George Mason University, in Fairfax, VA. His research has been in the area of fiber optic signal
processing and analysis of fundamental limitations in optical interconnection networks. Dr. Athale was elected Fellow
of the Optical Society of America in 1989 and he is a member, Lasers and Electro-Optics Society, IEEE. He chaired the
first two topical meetings on Optical Computing in 1985 and 1987 and edited Critical Review of Technology volume on
Digital Optical Computing (SPIE, 1990). In 1992, under DARPA sponsorship, he founded the
Consortium for Optical and Optoelectronic Technologies in Computing (CO-OP). CO-OP, which he has directed since
then, is a unique experiment in transitioning emerging device technologies to the user and systems research
community at large. It has been responsible for organizing in cooperation with Lucent Bell Labs the first multi-project
foundry run for hybrid integrated optoelectronic device technology (CMOS-Multiple Quantum Well hybrid devices).
Dr. David J. Brady
Professor
Fitzpatrick Institute for Photonics, Electrical and Computer
Engineering Department
Duke University
Box 90291, Durham NC 27708
Tel: (919) 660-5394
Email: dbrady@duke.edu
Title: Optical superresolution using compressive spectral imaging systems
Optical superresolution may reflect sub-wavelength feature detection in microscopic systems or sub-diffraction-limit
detection in remote sensing systems. Both categories are enabled by spectral imaging. This talk considers the
limits of microscopic and remote sensing using emerging snapshot spectral imagers based on
compressive projections, and describes physical mechanisms for implementing such projections.
DAVID J. BRADY is Professor of Electrical and Computer Engineering at Duke University and leader of the Duke
Imaging and Spectroscopy Program (DISP). Brady was the founding director of the Fitzpatrick Institute for Photonics
and Founder of Centice Corporation, Blue Angel Optics and Distant Focus Corporation. Brady is a Fellow of the Optical
Society of America and SPIE and was program chair of the 2001 Optical Society Topical meeting on Integrated
Computational Imaging Systems and General Chair of the 2005 topical meeting on Computational Optical Sensing and
Imaging. As PI of the Compressive Optical MONTAGE Photography Initiative, Brady had the honor of integrating work
from UNC Charlotte, the University of Delaware, Michigan Tech University, Rice University, the University of Rochester,
Raytheon Company and Digital Optics Corporation with DISP’s integrated imaging systems.
Dr. Charles Byrne
Professor
Department of Mathematical Sciences,
University of Massachusetts Lowell, Lowell, MA
Charles_Byrne@uml.edu
http://faculty.uml.edu/cbyrne/cbyrne.html
Title: Prior Knowledge and Resolution Enhancement
The problem is to reconstruct a (possibly complex-valued) function f(r) of one or several variables from finitely many measurements
d_n, n=1,...,N, pertaining to the function. The function f(r) represents the physical object of interest, such as the spatial distribution
of acoustic energy in sonar, the distribution of x-ray-attenuating material in transmission tomography, the distribution of
radionuclide in emission tomography, the sources of reflected radio waves in radar, and so on. Often the reconstruction, or
estimate, of the function takes the form of an image in two or three dimensions; for that reason, we also speak of the problem as
one of image reconstruction. The data are obtained through measurements. Because there are only finitely many measurements,
the problem is highly under-determined and even noise-free data are insufficient to specify a unique solution. One way to solve
such under-determined problems is to replace f(r) with an N-vector and to use the data to determine the N entries of this vector.
An alternative method is to model f(r) as a member of a family of linear combinations of N preselected basis functions of the multivariable r. Then the data are used to determine the coefficients. This approach offers the user the opportunity to incorporate prior
information about f(r) in the choice of the basis functions. Such finite-parameter models for f(r) can be obtained through the use of
the minimum (weighted)-norm estimation procedure.
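The minimum-norm, basis-expansion estimate described above can be sketched numerically. Everything concrete below (the grid, the cosine basis, the random measurement operator) is invented for the sketch, not taken from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Object f(r) on a fine grid, modeled as a combination of 8 preselected
# basis functions; prior knowledge enters through this choice of basis.
x = np.linspace(0.0, 1.0, 200)
basis = np.stack([np.cos(2 * np.pi * k * x) for k in range(8)])
f = rng.normal(size=8) @ basis          # true object lies in the basis span

# Finitely many (here 5) linear, noise-free measurements d_n of f:
# far too few to pin down all 200 grid values.
A = rng.normal(size=(5, 200))
d = A @ f

# Work in coefficient space: d = M c with M a 5x8 matrix. The pseudoinverse
# yields the minimum-norm coefficient vector consistent with the data.
M = A @ basis.T
c_mn = np.linalg.pinv(M) @ d
f_est = c_mn @ basis                    # reconstructed object

# The estimate reproduces the data exactly, though with only 5 measurements
# it need not recover the true coefficients.
```

Weighted-norm variants replace the Euclidean norm on the coefficients with a prior-informed weighting, which changes the pseudoinverse but not the structure of the computation.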
References
[1] C. Byrne, M. Shieh, and M. Fiddy, “Image reconstruction: a unifying model for resolution enhancement and data extrapolation. Tutorial,” Journal of the Optical Society of America A, 23(2), 258-266 (2006).
[2] C. Byrne, M. Shieh, M. Testorf, and M. Fiddy, “Iterative image reconstruction using prior knowledge,” Journal of the Optical Society of America A, 23(6), 1292-1300 (2006).
CHARLIE BYRNE has a B.S. (1968) from Georgetown University and an M.A. (1970) and Ph.D. (1972) from the University of
Pittsburgh, all in mathematics. From 1972 until 1986 he was a member of the Mathematics Department at The Catholic University
of America, serving as chairman from 1983 to 1986. Since 1986 he has been a member of the Department of Mathematical
Sciences at the University of Massachusetts, Lowell, serving as chairman from 1987 to 1990. His early research work was in
functional analysis and topology. From 1981 to 1983 he was on leave-of-absence at the Division of Acoustics, Naval Research
Laboratory, Washington, D.C., doing acoustic signal processing. His work on reconstruction from limited data led to a collaboration with
Dr. Mike Fiddy, then of the University of London, on problems arising in optics. In June of 1986 he was a consultant in acoustic
signal processing for the Australian Department of Defence in Adelaide. Since about 1990 he has been working with members of
the Department of Nuclear Medicine, University of Massachusetts Medical School, Worcester. His research interests these days
include iterative reconstruction algorithms for medical imaging, particularly emission and transmission tomography, and more
general iterative algorithms in optimization theory.
Larry Candell
MIT Lincoln Laboratory
Division 9
244 Wood Street, S4-511
Lexington, MA 01821
(781) 981-7907
lmc@ll.mit.edu
Research Interests:
Mr. Candell started at Lincoln Laboratory in 1986 as an MIT Electrical Engineering and Computer Science Department
VI-A Program intern. In 1989, he joined the Laboratory full time in the Countermeasures Technology Group, designing
jammers, specialized “set-on” receiver systems, and high performance RF direction-of-arrival systems. Two years later,
he became involved with the National Oceanic and Atmospheric Administration’s weather satellites, performing
analysis and design of next-generation geostationary infrared imaging and sounding instruments. In 1996, he became
a group leader in the Sensor Technology and Systems Group and was responsible for running the Geostationary
Operational Environmental Satellite (GOES) program. In 1999, Mr. Candell led the formation of the Advanced Space
Systems and Concepts Group, which has focused on the design of novel electro-optical systems for surveillance and
communications, and became its Group Leader.
Mr. Candell has contributed to cutting-edge imaging systems, ranging from gigapixel cameras for persistent
surveillance to million-frame-per-second cameras for analyzing missile defense missions. He has developed not only
ground-based advanced sensor prototypes, but also sensor prototypes for air, rocket, and space platforms. He was the
associate program manager for the Mars Laser Communications Demonstration, responsible for the development of a
distributed aperture receiving system that could successfully decode with efficiencies of nearly 3 bits/photon. He has
been a key leader across the entire spectrum of Division 9’s system and technology programs, and has been
recognized in 2006 with a Lincoln Laboratory Technical Excellence Award.
In addition, Mr. Candell has served as a member and co-chair of the New Technology Initiatives Board and the
Strategic Core Technology Group. He has also been involved with management effectiveness at the Laboratory,
serving as the chair of the Group Leader Management Effectiveness Committee and helping to organize the Group
Leader Offsite programs.
LAWRENCE M. CANDELL is Assistant Head of the Aerospace Division at MIT Lincoln Laboratory. He holds SB and SM
degrees in electrical engineering and an SB degree in management, all from MIT. He specializes in signal processing,
electro-optical systems, and optical communications.
Aaron Cannistra
UNC Charlotte
Optics Student
9201 University City Blvd
Charlotte, NC 28262
atcannis@uncc.edu
Research Interests:
Diffractive, refractive, and sub-wavelength micro/nano-optics
Novel fabrication methods for micro/nano-optics
Microsystems integration and applications
Nanoreplication and nanomanufacturing
Multi-axis free-form micromachining
Near-field diffraction and Talbot self-imaging
AARON CANNISTRA received an AAS in Electrical Engineering Technology from Wilkes Community College in 2003
and received a BS in Electrical and Computer Engineering Technology from Western Carolina University in 2005. He is
currently a PhD candidate in the Optics Program at UNC Charlotte.
Dr. Angela Davies
UNC Charlotte
9201 University City Blvd
Charlotte, NC 28262
Areas of Research Specialization:
Precision Optics Metrology, Interferometry, Micro-optics Characterization
ANGELA DAVIES received her BS in Physics from the University of Oregon in 1988 and her MS and Ph.D.
from Cornell in 1991 and 1994 respectively. Prior to joining the Physics and Optical Science faculty at the
University of North Carolina at Charlotte, she held various positions at the National Institute of Standards
and Technology, specifically Postdoctoral Fellow and Physicist in the Physics Laboratory and Physicist in the
Manufacturing Engineering Laboratory.
Dr. Aristide Dogariu
CREOL, University of Central Florida
4000 Central Florida Blvd
Orlando, FL 32816
(407) 823-6839
adogariu@mail.ucf.edu
Title: Random EM fields: use and control of correlations
A naïve description considers light as a deterministic, oscillatory phenomenon. However, it is now well understood that
optical radiation is in fact a random process in both time and space, which requires a statistical description.
Controlling the stochastic properties of electromagnetic fields opens up new possibilities for characterizing not only the
sources of radiation but also their interaction with material systems. We will present recent advances in using spatial
field correlations for optical tomography and subwavelength sensing.
Optical waves interacting with inhomogeneous media give rise to complicated electromagnetic fields. When regarded
as a superposition of elementary waves with random phases and states of polarization, the statistics of these complex
fields lead to various measurable distributions. The conventional wisdom is to assume circular Gaussianity, but this
assumption is not always valid, and more involved analysis is necessary to provide information about the underlying
processes giving rise to random fields. We will review novel concepts in high-order polarimetry and will discuss their
use in a number of sensing applications.
ARISTIDE DOGARIU received his PhD from Hokkaido University and is the Florida Photonics Center of
Excellence Professor of Optics. His research interests include optical physics, wave propagation and
scattering, electromagnetism, and random media characterization. Within the College of Optics and
Photonics at the University of Central Florida he leads the Laboratory for Photonics Diagnostics of
Random Media (http://random.creol.ucf.edu/). Professor Dogariu is a Fellow of the Optical Society of
America and the American Physical Society, and currently serves as the editor of the Optical Technology
division of Applied Optics.
Dr. Todd Du Bosq
Physicist
U.S. Army RDECOM CERDEC
Night Vision and Electronic Sensors Directorate
10221 Burbeck Road
Fort Belvoir, VA 22060
Tel: (703) 704-1312
Email: todd.dubosq@us.army.mil
His research interests include modeling the target acquisition performance of advanced signal processing,
and terahertz imaging.
TODD DU BOSQ is currently a physicist in the modeling and simulation division at the U.S. Army’s
Night Vision and Electronic Sensors Directorate. He received a B.S. degree in physics from Stetson
University in 2001. He received his M.S. and Ph.D. degrees in physics from the University of
Central Florida in 2003 and 2007, respectively.
Dr. Fredo Durand
Associate Professor
MIT CSAIL
The Stata Center, 32-D426, 32 Vassar Street
Cambridge, MA 02139, USA
Tel : (617) 253 7223
Email: fredo@mit.edu
Web: http://people.csail.mit.edu/fredo/
Title: Motion-Invariant Photography
Object motion during camera exposure often leads to noticeable blurring artifacts. Proper elimination of this blur is challenging
because the blur kernel is unknown, varies over the image as a function of object velocity, and destroys high frequencies. In the
case of motions along a 1D direction (e.g. horizontal) we show that these challenges can be addressed using a sensor that moves
during the exposure. Through the analysis of motion blur as space-time integration, we show that a parabolic integration
(corresponding to constant sensor acceleration) leads to motion blur that is not only invariant to object velocity, but preserves
image frequency content nearly optimally. That is, static objects are degraded relative to their image from a static camera, but all
moving objects within a given range of motions reconstruct well. A single deconvolution kernel can be used to remove blur and
create sharp images of scenes with objects moving at different speeds, without requiring any segmentation and without knowledge
of the object speeds. We have built a prototype camera, show successful results for deblurring various motions, and compare with
other approaches.
References
Anat Levin, Peter Sand, Taeg Sang Cho, Fredo Durand, William T. Freeman. Motion-Invariant Photography.
SIGGRAPH, ACM Transactions on Graphics, Aug 2008.
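The core invariance is easy to verify numerically. The sketch below is an illustrative toy model (invented parameters, not the authors' code): it simulates a sensor following the parabolic path s(t) = a·t² over the exposure and histograms the object's position relative to the sensor, showing that the blur kernels for a static and a moving object nearly coincide once shifted into alignment.

```python
import numpy as np

def blur_psf(v, a=1.0, T=1.0, nbins=64, nsamples=200_000):
    """Blur kernel seen by an object moving at speed v while the sensor
    follows the parabolic path s(t) = a*t**2 over t in [-T, T]: the PSF
    is the time spent at each relative object-sensor position."""
    t = np.linspace(-T, T, nsamples)
    x = a * t**2 - v * t          # object position relative to the sensor
    x -= x.min()                  # remove the velocity-dependent shift
    hist, _ = np.histogram(x, bins=nbins, range=(0.0, 2.0 * a * T**2))
    return hist / hist.sum()

# Kernels for a static and a moving object largely coincide after the
# shift is removed -- the invariance that permits a single deconvolution
# kernel for all speeds in a range (the tails differ slightly).
p_static = blur_psf(0.0)
p_moving = blur_psf(0.5)
overlap = np.minimum(p_static, p_moving).sum()   # 1.0 would mean identical
print(f"kernel overlap: {overlap:.3f}")
```

For this parabola and speed range the analytic overlap is about 0.88; a pure translation of the sensor (linear motion) would instead give kernels that depend strongly on object speed.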
FREDO DURAND is an associate professor in Electrical Engineering and Computer Science at the Massachusetts Institute of
Technology, and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). He received his PhD from
Grenoble University, France, in 1999, supervised by Claude Puech and George Drettakis. From 1999 to 2002, he was a post-doc in
the MIT Computer Graphics Group with Julie Dorsey.
He works on both synthetic image generation and computational photography, where new algorithms afford powerful image
enhancement and the design of imaging systems that can record richer information about a scene. His research interests span most
aspects of picture generation and creation, with emphasis on mathematical analysis, signal processing, and inspiration from
perceptual sciences. He co-organized the first Symposium on Computational Photography and Video in 2005 and was on the
advisory board of the Image and Meaning 2 conference. He received an inaugural Eurographics Young Researcher Award in 2004,
an NSF CAREER award in 2005, an inaugural Microsoft Research New Faculty Fellowship in 2005, a Sloan fellowship in 2006, and a
Spira award for distinguished teaching in 2007.
Dr. Gary Euliss
The MITRE Corporation
Emerging Technologies Office
7515 Colshire Drive
McLean, VA 22102
geuliss@mitre.org
(703) 983-6472
While working at the Army Research Laboratory, Dr. Euliss was responsible for identifying and initiating independent
research projects concentrating on the application of information theory and linear systems theory to problems in
imaging, optical signal processing, fiber optics, integrated optics, and optical interconnects. It was during his last few
years at ARL that Dr. Euliss became interested in computational imaging, and received an Army Research and
Development Achievement award for research in the area of aperture coding. As a principal engineer at Applied
Photonics, he was responsible for research and development activities in the general areas of imaging and optical
interconnects. Projects included a novel hyperspectral imaging concept developed under the DARPA Photonic
Wavelength and Spectral Sensing Program, and smart pixel-based architectures for LADAR scene projection applied to
hardware-in-the-loop testing facilities (funded by the Air Force Research Laboratory). Since joining MITRE, Dr. Euliss
helped to launch a new computational imaging thrust within the Emerging Technologies Office, and his current
responsibilities include working to expand that activity. He has an interest in prototype development and
demonstration of novel, unconventional imaging architectures. To that end Dr. Euliss is the principal investigator on a
MITRE-funded project to demonstrate a novel coaxial dual-band imaging system. His other current interests include
coded-aperture imaging, detector plane processing, information theory of imaging, “nano-inspired” optical materials,
and fundamental scaling issues in computational imaging.
GARY EULISS received his BS from Southwest Missouri State University (1980), his MS from Kansas State University
(1983), and his PhD from George Mason University (1994). Dr. Euliss is a senior researcher in the Emerging
Technologies Office of The MITRE Corporation in McLean, Virginia. Previously, he worked as a physicist at the Army
Research Laboratory, and a principal engineer at Applied Photonics Corporation, a start-up company in Fairfax,
Virginia.
Dr. Michael Feldman
Entrepreneur
Founder, Digital Optics Corp.
Former President, CTO and Chairman, Digital Optics Corp
Former CTO-Optics, Tessera Corp.
Email: mrfeldman@mac.com
Dr. Feldman is an expert in optical interconnects and diffractive and refractive micro-optics. He is well
known for his work on the miniaturization of optics for a wide range of applications, including
communications, data storage, and semiconductor manufacturing. His technology is used in semiconductor
optics, communications, and photonics, and more recently by camera-phone manufacturers.
MICHAEL FELDMAN obtained his BSE in 1984 from Duke University and he earned a doctorate at the
University of California, San Diego. Following that, he took a teaching position in the Electrical and
Computer Engineering Department at UNC Charlotte in 1989. In 1991, he and one of his graduate students,
Hudson Welch, started Digital Optics Corp.
Dr. Feldman led Digital Optics through various roles including President, CTO and Chairman of the Board
from 1991 through its sale to Tessera, Inc. in 2006. During this time Digital Optics won several awards for
high growth and technology innovation, including the National Society of Professional Engineers New Product
Award for medium-sized companies, the Deloitte & Touche Fast 500, and the Inc. 500.
Dr. Feldman is an inventor on more than 70 patents and the recipient of the Duke University Distinguished
Young Alumni Award for the year 2000.
Keith Fife
Graduate Student in Electrical Engineering
Stanford University
257 Packard Building
350 Serra Mall
Stanford, CA 94305
Tel: (650) 725-9696
Email: kfife@alum.mit.edu
Title: Devices for Integrated Multi-Aperture Imaging
There has been significant development of image sensors over the last decade with work on CCDs and CMOS-based devices.
Several issues have been addressed such as sensitivity, resolution, capture rate, dynamic range, dark current, crosstalk, power
consumption, manufacturability and cost. One consistent limitation in the design of conventional image sensors has been that the
sensing area is constrained to a regular array of photosites used to recover an intensity distribution in the focal plane of the
imaging system. There are both practical and fundamental issues that limit the scalability or performance of these image sensors.
This research explores an alternative, multi-aperture approach to imaging, whereby the integrated image sensor is partitioned into
an array of apertures, each with its own local subarray of pixels and image-forming optics. A virtual image is focused a certain
distance above the sensor such that the apertures capture overlapping views of the scene. The subimages are post-processed to
obtain both a high resolution 2D image and a depth map. A key feature of this design is in the use of submicron pixels to obtain
accurate depth measurements derived from the localization of features within adjacent subarrays. The pixels are scaled beyond the
conventional limits because the displacement of features between subarrays may be estimated to smaller dimensions than the spot
size of a diffraction- or aberration-limited lens. Other benefits include the ability to (i) image objects at close proximity to the sensor
without the need for objective optics, (ii) achieve excellent color separation through a per-aperture color filter array, (iii) relax the
requirements on the camera objective optics, and (iv) increase the tolerance to defective pixels. The multi-aperture architecture is
also highly scalable, making it possible to increase pixel counts well beyond current levels.
Fabricated pixel sizes down to 0.5um pitch will be presented along with a prototype multi-aperture image sensor, which comprises
a 166x76 array of 16x16, 0.7um pixel, FT-CCD subarrays with local readout circuit and per-column 10-bit ADCs fabricated in a
0.11um CMOS process modified for buried-channel charge transfer. Global snapshot image acquisition with CDS is performed at up
to 15fps with 0.15V/lux-s responsivity, 3500e- well capacity, 5e- read noise, 33e-/sec dark signal, 57 dB dynamic range, and 35 dB
peak SNR.
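The sub-pixel localization underlying the depth claim can be sketched in a few lines: fitting a parabola to the peak of the cross-correlation between two subarray views recovers a feature displacement well below one pixel. This is a generic illustration with invented values (feature position, width, shift), not the sensor's actual processing.

```python
import numpy as np

n, true_shift, sigma = 64, 3.3, 2.5
x = np.arange(n, dtype=float)
# The same feature as seen by two adjacent subarrays, displaced by a
# non-integer number of pixels (all values invented for illustration).
view_a = np.exp(-0.5 * ((x - 30.0) / sigma) ** 2)
view_b = np.exp(-0.5 * ((x - 30.0 - true_shift) / sigma) ** 2)

corr = np.correlate(view_b, view_a, mode="full")
k = int(np.argmax(corr))
# Three-point parabolic interpolation around the integer correlation peak
# localizes the displacement to a small fraction of a pixel.
y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
shift = (k - (n - 1)) + frac        # lag 0 sits at index n-1 in 'full' mode
print(f"estimated shift: {shift:.3f} px (true: {true_shift} px)")
```

The integer correlation peak alone would quantize the disparity (and hence the depth estimate) to whole pixels; the parabolic fit is one standard way to go below that.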
KEITH FIFE is currently a Ph.D. student in the Department of Electrical Engineering at Stanford University. He received his B.S.
and M.Eng. degrees in Electrical Engineering from the Massachusetts Institute of Technology in 1999. He won the MIT 6.270 robot
competition and an EE departmental award for his master's thesis. His work and research has led to several patents in imaging
devices, circuits and systems. After finishing at MIT, he co-founded an image sensor company to develop solutions for consumer
and automotive imaging markets. One product was recognized as "Best of CES" in 2001 and as "World's Thinnest Camera" by
Guinness World Records in 2002. In 2003, he received a Hertz Foundation fellowship and returned to graduate school to work on
devices and architectures for new imaging systems.
Dr. James R. Fienup
Professor
Institute of Optics
University of Rochester
Rochester, NY 14627
Tel: (585) 275 8009
Email: fienup@optics.rochester.edu
Title: Structured Illumination Imaging for Superresolution
The presentation will begin with a brief review of some super-resolution techniques including Super-SVA [1], then provide detail on the structured
illumination approach. Sinusoidally patterned illumination has been used to obtain lateral superresolution as well as axial sectioning in microscopy
[2-7]. In this talk we discuss the superresolution aspect of this technique. The sinusoidal illumination frequency heterodynes the superresolution
frequencies of the object into a low frequency moiré pattern which now lies within the passband of the imaging system. In order to extract
superresolution from this moiré beat pattern, multiple images are taken of the object with distinct phase shifts of the sinusoidal illumination. This
process is repeated for one or two more orientations of the sinusoidal illumination and the extracted superresolution information from the different
orientations is then combined appropriately to obtain a superresolved image.
The processing of the sinusoidally patterned images requires accurate knowledge of the phase shifts in the sinusoidal illumination and hence this
technique is usually restricted to imaging stationary objects using precise, pre-calibrated phase shifting elements. We discuss the application of this
technique to obtain lateral superresolution in fluorescent moving objects such as live or in vivo tissue, specifically the human retina in vivo. We
discuss methods of estimating the phase shifts in the sinusoidal illumination a posteriori to allow for unknown, random object motion. We also
discuss the combination of the different superresolution components to obtain an appropriately weighted, OTF compensated superresolved image.
References
[1] H.C. Stankwitz and M.R. Kosek, “Super-Resolution for SAR/ISAR RCS Measurement Using Spatially Variant Apodization,” Proceedings of the
Antenna Measurement Techniques Association (AMTA) 17th Annual Meeting and Symposium, Williamsburg, VA, 13-17 November 1995.
[2] M. G. L. Gustafsson, "Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy," Journal of Microscopy, Vol.
198, Pt 2, pp 82 – 87 (May 2000).
[3] R. Heintzmann, C. Cremer, "Laterally Modulated Excitation Microscopy: Improvement of resolution by using a diffraction grating," Optical
Biopsies and Microscopic Techniques III, Irving J. Biglo, Herbert Schneckenburger, Jan Slavik, Katrina Svanberg, M.D., Pierre M. Viallet, Editors,
Proceedings of SPIE Vol. 3568, pp. 185 – 196 (1999).
[4] M. A. A. Neil, R. Juskaitis, and T. Wilson, "Method of obtaining optical sectioning by using structured light in a conventional microscope," Opt.
Lett. 22, 1905-1907 (1997).
[5] D. Karadaglić and T. Wilson, “Image formation in structured illumination wide-field fluorescence microscopy,” Micron (2008), doi:
10.1016/j.micron.2008.01.017.
[6] L. H. Schaefer, D. Schuster, J. Schaffer, "Structured illumination microscopy: artifact analysis and reduction utilizing a parameter optimization
approach," Journal of Microscopy 216:2, 165-174 (2004).
[7] S. A. Shroff, J. R. Fienup, and D. R. Williams, "OTF compensation in structured illumination superresolution images," in Unconventional Imaging
IV, edited by Jean J. Dolne, Thomas J. Karr, Victor L. Gamiz, Proceedings of SPIE Vol. 7094 (SPIE, Bellingham, WA), in press, (2008).
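The heterodyning step described above can be illustrated numerically: multiplying object detail at a frequency beyond the system cutoff by a sinusoidal illumination pattern produces a moiré beat at the difference frequency, which does fit within the passband. The sketch below is a 1D toy model with invented frequencies, not the talk's actual processing chain (which also requires the phase-shifted acquisitions and recombination discussed above).

```python
import numpy as np

N = 1024
x = np.arange(N) / N
f_obj, f_ill, f_cut = 180.0, 150.0, 100.0   # object detail lies above the cutoff

obj = 1.0 + np.cos(2 * np.pi * f_obj * x)     # object with unresolvable detail
illum = 1.0 + np.cos(2 * np.pi * f_ill * x)   # sinusoidally patterned illumination

def lowpass(sig, f_cut):
    """Ideal band limit standing in for the imaging system's OTF support."""
    F = np.fft.fft(sig)
    f = np.fft.fftfreq(len(sig), d=1.0 / len(sig))
    F[np.abs(f) > f_cut] = 0.0
    return np.fft.ifft(F).real

direct = lowpass(obj, f_cut)          # uniform illumination: the 180-cycle detail is lost
moire = lowpass(obj * illum, f_cut)   # the beat at f_obj - f_ill = 30 cycles survives

spec = np.abs(np.fft.rfft(moire))
print("energy at the beat frequency:", spec[int(f_obj - f_ill)])
```

Under uniform illumination the filtered image is featureless, while the patterned image carries the object's high-frequency content shifted down to the 30-cycle moiré term, ready to be demodulated back to its true frequency.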
JAMES R. FIENUP is the Robert E. Hopkins Professor of Optics at the University of Rochester, Institute of Optics. He is also
Professor, Center for Visual Science, Senior Scientist in the Laboratory for Laser Energetics, and Professor of Electrical and
Computer Engineering. Prior to coming to Rochester in 2002 he was a Senior/Chief Scientist at ERIM/Veridian Systems (now
General Dynamics/AIS). He received his Ph.D. (1975) in Applied Physics at Stanford University where he was an NSF Graduate
Fellow. He is a Fellow of the Optical Society of America (OSA) and of SPIE, and won SPIE’s Rudolf Kingslake Medal and Prize and
the ICO’s International Prize in Optics. He was the Editor-in-Chief of JOSA A, Division Editor of Applied Optics - Information
Processing, Associate Editor of Optics Letters, and is currently Chair of the Publications Council of the OSA. His research interests
center around imaging science. His work includes unconventional imaging, phase retrieval, wavefront sensing, image
reconstruction and restoration, and image quality assessment. These techniques are applied to passive and active optical
imaging systems, synthetic-aperture radar, and biomedical imaging modalities. His past work has also included diffractive optics
and moving-target detection.
Dr. Bill Freeman
Professor
Massachusetts Institute of Technology
32 Vassar St. 32-D476
Cambridge, MA 02139
Tel: (617) 253-8828
Email: billf@mit.edu
Title: Imaging with coded apertures, and a Bayesian analysis of cameras
First half of talk:
A conventional camera captures blurred versions of scene information away from the plane of focus. Camera systems
have been proposed that allow for recording all-focus images, or for extracting depth, but to record both
simultaneously has required more extensive hardware and reduced spatial resolution. We propose a simple
modification to a conventional camera that allows for the simultaneous recovery of both (a) high resolution image
information and (b) depth information adequate for semi-automatic extraction of a layered depth representation of the
image.
Our modification is to insert a patterned occluder within the aperture of the camera lens, creating a coded aperture.
We introduce a criterion for depth discriminability which we use to design the preferred aperture pattern. Using a
statistical model of images, we can recover both depth information and an all-focus image from single photographs
taken with the modified camera. A layered depth map is then extracted, requiring user-drawn strokes to clarify layer
assignments in some cases. The resulting sharp image and layered depth map can be combined for various
photographic applications, including automatic scene segmentation, post-exposure refocusing, or re-rendering of the
scene from an alternate viewpoint. Joint work with Anat Levin, Rob Fergus, and Fredo Durand (SIGGRAPH 2007).
Second half of talk:
The growing flexibility of digital photography has led to the development of a large selection of unconventional
cameras, ranging from multi-camera systems to phase plates, plenoptic and coded aperture cameras. These designs
follow very different approaches to the tasks of image or depth reconstruction, raising the need for a meaningful
comparison across camera types.
This paper introduces a unified framework for comparison. The data in each sensor element of a camera is modeled
as a linear projection of the 4D light field. We pose the imaging task as Bayesian inference: given the observed noisy
light field projections and a prior for the light field signal, estimate the original light field. Under a common set of
imaging conditions, we compare the performance of various camera designs, including some unconventional ones.
This framework allows us to better understand the tradeoffs of each camera and to optimize performance.
Joint work with Anat Levin and Fredo Durand.
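A minimal numerical version of this Bayesian framing, with a 1D stand-in for the 4D light field, a random projection standing in for the camera design, and a Gaussian smoothness prior (all illustrative choices on our part, not the paper's models): the MAP estimate then reduces to a regularized least-squares solve.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1D stand-in for the 4D light field, observed through a camera modeled
# as a linear projection A, with additive sensor noise.
n, m, sigma = 32, 16, 0.05
light_field = np.cumsum(rng.standard_normal(n)) * 0.1   # smooth signal
A = rng.standard_normal((m, n)) / np.sqrt(n)            # camera design as a matrix
y = A @ light_field + sigma * rng.standard_normal(m)

# Gaussian smoothness prior -> MAP estimation is regularized least squares:
#   l_hat = argmin ||y - A l||^2 + lam * ||D l||^2
D = np.diff(np.eye(n), axis=0)   # finite-difference operator encoding the prior
lam = 0.1
l_hat = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)
print("relative error:",
      np.linalg.norm(l_hat - light_field) / np.linalg.norm(light_field))
```

Because every camera becomes "a matrix A plus a noise level," designs as different as pinholes, lens arrays, and coded apertures can be scored in exactly the same way: by how well the posterior estimate recovers light fields drawn from the prior.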
BILL FREEMAN is a professor of Electrical Engineering and Computer Science at MIT, and a member of the
Computer Science and Artificial Intelligence Laboratory (CSAIL). He does research in computer vision, computer
graphics, and machine learning, studying how to represent, manipulate, and understand images. Before joining MIT,
he worked for 9 years at Mitsubishi Electric Research Labs, for 6 years at the Polaroid Corporation, and for 1 year as a
Foreign Expert at the Taiyuan University of Technology, Shanxi, China. Hobbies include flying cameras in kites.
Dr. Dennis Healy
Department of Mathematics
University of Maryland
College Park, MD 20742
and DARPA, Microsystems Technology Office (MTO)
3701 North Fairfax Drive
Arlington, Virginia, 22203-1714
Research Interests:
Dr. Healy manages several programs where mathematical algorithms play a central role in the optimization, control,
and exploitation of microelectronic and optical systems. The Analog-to-Information (A-to-I) program is exploring new
ways to extract information from complex signals, seeking significant reduction of the sampling resources required by
classical Shannon representations, effectively concentrating meaningful information into less data. The Multiple Optical
Non-redundant Aperture Generalized Sensors (MONTAGE) program investigates an analogous approach in the optical
domain, freeing imaging cameras from some of the constraints of classical Fourier optics to develop new imaging sensors
with radically different form, fit, and function compared to existing systems. The Non-Linear Mathematics for Mixed
Signal Microsystems (NLMMSM) program seeks to provide increased ability to extract signals from noisy and interfering
backgrounds by dealing more effectively with the non-linearities inherent in all electronics processing.
DENNIS HEALY was an associate professor in the Computer Science and Mathematics Departments at Dartmouth
College and was Summer Faculty Fellow at the Naval Ocean Systems Center (now SPAWAR). He holds bachelor's
degrees in physics and mathematics from the University of California at San Diego (UCSD) and earned a Doctorate in
Mathematics from UCSD in 1986. He has authored over 90 publications on the subjects of mathematical physics,
statistics, optical sciences, electrical engineering, biomedical engineering, magnetic resonance, signal and image
processing, mathematics, applied mathematics, and theoretical computer science. He is a member of the editorial
board of the Journal of Fourier Analysis and Applications and of the IEEE Press Series on Biomedical Engineering.
Professor Healy is on the faculty of the Mathematics Department, as well as that of the Applied Mathematics and
Scientific Computation Program. He is also an affiliate Professor of Bioengineering. Dr. Healy rejoined DARPA in 2003
as a Program Manager for the Microsystems Technology Office (MTO). He had previously headed the Applied and
Computational Mathematics Program in DARPA's Defense Sciences Office. In addition, Professor Healy is a Research
Program Consultant for the National Institute on Alcohol Abuse and Alcoholism (NIAAA) at NIH.
Dr. Eric G. Johnson
Associate Director
Center for Optoelectronics and Optical Communications
University of North Carolina at Charlotte
9201 University City Blvd. Charlotte NC 28223
Phone: (704) 687-8123
Email: egjohnso@uncc.edu
Dr. Johnson is currently a program manager at NSF with program responsibilities in Electronics, Photonics & Device
Technologies (EPDT) in the Electrical, Communications and Cyber Systems (ECCS) Division.
His research interests span micro and nano-fabrication methods. In his Micro-Photonics Laboratory he and his group
have been active in developing 3D Nano Optical Elements, Photonic Crystals, Bio-inspired Optics, Narrow Linewidth
Filters, Dual Grating Resonator (DGR), Guided Mode Resonance Filter (GMR), Lasers & Amplifiers, Grating Coupled
Surface Emitting Lasers, Fiber Lasers, Master Oscillator Power Amplifier (MOPA) Devices, Sensors & Detectors,
Multimode Interference (MMI) Based Devices, Silicon & GaAs based Resonant Cavity Devices, Integration of
Micro/Nano-Optics, Gratings, Lenses, Prisms and Multiplexed Elements.
ERIC G. JOHNSON is a Professor of Optics/Physics and ECE at the University of North Carolina at Charlotte. Prior to
this, he was an Associate Professor with the College of Optics and Photonics at UCF and has been a leading innovator
in the field of micro-optics and nano-optics for over a decade. Dr. Johnson was also a recipient of a NSF CAREER
award for Three Dimensional Nano-Optical Elements. These research efforts have produced over 100 publications and 9
issued patents, with an additional 4 pending. Dr. Johnson is the current Chair of the Optics in Information Science
Division of OSA and the former OSA Technical Group Chair for Holography and Diffractive Optics in the Information
Systems Division. Dr. Johnson also serves as a Topical Editor for Applied Optics and an Associate Editor for SPIE’s
Journal of MEMS. He also serves on the Board of Directors for SPIE and is a member of OSA, IEEE, and a Fellow of
SPIE.
Dr. Kevin F. Kelly
Assistant Professor
ECE Department, Rice University
6100 Main St., MS 366
Houston, TX 77005
Tel: (713) 348-3565
Email: kkelly@rice.edu
Title: A Single Pixel Camera - Compressive Imaging with a Random Basis
Compressed sensing is a new sampling theory that allows signals to be reconstructed from sub-Nyquist measurements.[1,2]
This can significantly reduce the computation required for image and video acquisition and encoding, at least at the sensor end.
Compressed sensing relies on the sparsity of the signal in some known domain that is incoherent with the
measurement domain. We exploit this technique to build a single-pixel camera based on an optical modulator and a single
photosensor.[3,4] Random projections of the signal (image) are taken by the optical modulator, which displays a random
pattern corresponding to the measurement domain (random noise). This randomly projected signal is collected on the photosensor and
later used for reconstructing the signal. This process simultaneously compresses the signal because the measurement projects the
signal onto a white-noise basis. Subsequently, the data from this incoherent basis is reconstructed into a complete real-space
image. Given its compressive nature, far fewer measurements are required than the total number of pixels, which greatly
decreases the acquisition time of the signal. In addition, the intensity of the compressed signal at the detector is much greater
than its raster-scan counterpart, and therefore results in greater signal sensitivity and improved image quality. In this scheme we
trade the spatial extent of a sampling array for sequential sampling over time with a single detector. Applications of
this technique in hyperspectral and infrared imaging along with its use in confocal microscopy will be discussed.
References
[1] Emmanuel Candès, Justin Romberg, and Terence Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency
information, IEEE Trans. on Information Theory, 52(2) pp. 489 - 509, February 2006.
[2] David Donoho, Compressed sensing, IEEE Trans. on Information Theory, 52(4), pp. 1289 - 1306, April 2006.
[3] Dharmpal Takhar, Jason Laska, Michael Wakin, Marco Duarte, Dror Baron, Shriram Sarvotham, Kevin Kelly, and Richard Baraniuk, A new compressive
imaging camera architecture using optical-domain compression, Computational Imaging IV at SPIE Electronic Imaging, San Jose, California, January 2006.
[4] Marco Duarte, Mark Davenport, Dharmpal Takhar, Jason Laska, Ting Sun, Kevin Kelly, and Richard Baraniuk, Single-pixel imaging via compressive
sampling, IEEE Signal Processing Magazine, 25(2), pp. 83-91, March 2008.
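The measurement-and-reconstruction loop can be sketched end to end, with a random ±1 matrix standing in for the micromirror patterns and orthogonal matching pursuit standing in for the sparse solver (the references use other reconstruction algorithms as well; all dimensions here are invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse "scene": n pixels, only k of them nonzero.
n, k, m = 256, 5, 80
scene = np.zeros(n)
scene[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)

# Each measurement: the modulator displays one random +/-1 pattern and the
# single photodetector integrates, i.e. records one inner product.
Phi = rng.choice([-1.0, 1.0], size=(m, n))
y = Phi @ scene

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily pick the pattern column most
    correlated with the residual, then least-squares fit on that support."""
    support, residual = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    estimate = np.zeros(Phi.shape[1])
    estimate[support] = coef
    return estimate

recovered = omp(Phi, y, k)
print("reconstruction error:", np.linalg.norm(recovered - scene))
```

With m = 80 measurements for n = 256 pixels, the 5-sparse scene is recovered essentially exactly, which is the sense in which the single detector plus random patterns replaces a full pixel array.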
KEVIN KELLY is an assistant professor of Electrical and Computer Engineering and a member of the Smalley Institute
for Nanoscale Science and Technology at Rice University in Houston, Texas. He received a B.S. in engineering physics
from the Colorado School of Mines in 1993, followed by M.S. (1996) and Ph.D. (1999) degrees in applied physics from Rice
University. He was a postdoctoral fellow at the Institute for Materials Research in Sendai, Japan and in the Chemistry
department at the Pennsylvania State University. He returned to Rice in 2002 where his lab has focused on imaging
and spectroscopy at the nanoscale, in particular carbon nanotube systems, conducting polymers, plasmonic
nanostructures, and single molecule devices including the Nanocar. His most recent research involves imaging on the
macroscale with the development of a single-pixel camera based on compressive sensing in collaboration with fellow ECE
professor Richard Baraniuk.
Dr. Kenny Kubala
CDM Optics
4001 Discovery Dr.
Suite 130
Boulder, CO 80305
(303) 345-2115
kennyk@cdm-optics.com
Dr. Kubala’s university research was in variable-addressability imaging systems, which create an information-efficient
transformation matched to what the human visual system can perceive. At Omnivision CDM Optics his primary
research areas are non-conventional optics and computational imaging. He has published 20 papers and has 36
issued or pending patents in the areas of non-conventional optics and computational imaging.
KENNY KUBALA is the Director of Strategic Technology at Omnivision CDM Optics. He received his Ph.D. in electrical
engineering from the University of Colorado in 2001.
Dr. Jim Leger
Professor
University of Minnesota
Department of Electrical and Computer Engineering
Tel: (612) 625-0838
Email: leger@umn.edu
Title: New Camera Form Factors
Traditionally, large-aperture optical systems require a length comparable to the aperture size to form an image, resulting
in a total volume that can be excessive in some applications. This talk explores several possible designs for “large
aperture” optical systems in a planar format. We begin by making the connection between confocal imaging systems
and guided modes in an optical waveguide. It is shown that the resolution of a grating coupler on the surface of a
waveguide is proportional to the length of the coupler, implying that high resolution imaging of distant objects is
possible with inherently flat optics. We review basic experiments that demonstrate this imaging using various
waveguide coupling schemes. Several conceptual designs for active imaging are described utilizing this technology.
We then turn our attention to passive applications of planar imaging. The radiometric efficiency of our proposed
imaging systems is of primary concern, along with the chromatic performance of the optics. We propose several
possible architectures for handling chromatic aberrations and investigate the performance over a broad spectral band.
In one possible realization, harmonic gratings (gratings with multiple 2π phase wrappings) are used and
post-processing is suggested to recover the signal. In another realization, the spectral blur caused by the chromatic
aberration is utilized in a manner similar to Computed Tomographic Imaging Spectrometry, whereby several spectral
blurs are recorded and tomographic algorithms are used to reconstruct a hyperspectral image. We emphasize
that these ideas are in the conceptual stage, and point out several of the possible challenges in realizing them.
JAMES LEGER received his BS degree in Applied Physics from the California Institute of Technology (1974) and Ph.D. degree
in Electrical Engineering from the University of California, San Diego (1980). He has held previous positions at the 3M Company,
and MIT Lincoln Laboratory. He is currently professor of Electrical Engineering at the University of Minnesota, where he holds
both the Cymer Professorship of Electrical Engineering and the Mr. and Mrs. George W. Taylor distinguished teaching
professorship. His research group is studying a wide variety of optical techniques, including mode control of semiconductor and
solid-state lasers, laser focal spot design by polarization manipulation, laser metrology, and design of microoptical systems.
Prof. Leger has been awarded the 1998 Joseph Fraunhofer Award/Robert M. Burley Prize by the Optical Society of America, the
1998 Eta Kappa Nu outstanding teaching professor award, the 2000 George Taylor Award for Outstanding Research at the
University of Minnesota, the 2006 Eta Kappa Nu Outstanding Teaching Professor award, the ITSB Professor of the Year award
(2006), and the Morse Award for Outstanding Undergraduate Teaching (2006). He has recently been inducted into the academy
of distinguished teachers at the University of Minnesota. He is a Fellow of the Optical Society of America (and former member of
the board of governors), Fellow of the Institute of Electrical and Electronics Engineers (IEEE), and Fellow of the International
Society for Optical Engineering (SPIE).
Dr. Marc Levoy
Stanford University
Computer Science Department
366 Gates Computer Science Building
Stanford University
Stanford, CA 94305
(650) 725-4089
levoy@cs.stanford.edu
Title: Light Field Sensing
The scalar light field is a four-dimensional function representing radiance along rays as a function of position and direction in space.
One can think of light fields as a collection of photographs, each captured from a slightly different viewpoint. Unlike conventional
photographs, light fields permit manipulation of viewpoint and focus after the imagery has been recorded.
In this talk, I will briefly review the theory of light fields, and I will describe three devices we have built in our laboratory for
capturing them:
an array of 128 synchronized VGA-resolution video cameras [1], a handheld camera in which a microlens array has been inserted
between the sensor and main lens [2], and a microscope, in which a similar microlens array has been inserted at the intermediate
image plane [3].
Time permitting, I will also present an ongoing effort to generate 4D light field illumination in a microscope - using a video projector
and second microlens array. Combined with our light field microscope, this illumination system allows us to illuminate any shape in
a 3D volume, to measure and correct for aberrations in optical systems, and to measure the 8D reflectance properties of opaque
surfaces, among other applications.
References:
[1] Wilburn, B., Joshi, N., Vaish, V., Talvala, E., Antunez, E., Barth, A., Adams, A., Levoy, M., Horowitz, M., "High Performance
Imaging Using Large Camera Arrays", Proc. SIGGRAPH 2005.
[2] Ng, R., Levoy, M., Bredif, M., Duval, G., Horowitz, M., Hanrahan, P., "Light Field Photography with a Hand-Held Plenoptic
Camera", Stanford Tech Report CTSR 2005-02, April, 2005.
[3] Levoy, M., Ng, R., Adams, A., Footer, M., Horowitz, M., "Light field microscopy," Proc. SIGGRAPH 2006.
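The post-capture refocusing property mentioned above follows from simple shift-and-add over the viewpoint dimension. A 1D toy light field (invented geometry, not data from the systems above) makes this concrete: a point source traces a line of slope equal to its disparity in (viewpoint, pixel) coordinates, and averaging the views after shifting by that slope refocuses onto it.

```python
import numpy as np

# Toy 1D light field L[u, x]: n_views viewpoints along u, each seeing a
# point source displaced by (disparity * u) pixels.
n_views, n_px, disparity = 9, 64, 2
L = np.zeros((n_views, n_px))
for u in range(n_views):
    L[u, 20 + disparity * u] = 1.0

def refocus(L, slope):
    """Shift each view by -slope*u and average: synthetic refocusing onto
    the depth plane whose disparity equals `slope`."""
    return np.mean([np.roll(L[u], -slope * u) for u in range(len(L))], axis=0)

sharp = refocus(L, disparity)   # slope matches the source's depth: sharp peak
blurry = refocus(L, 0)          # focused at the wrong depth: energy spreads
print("in-focus peak:", sharp.max(), "out-of-focus peak:", round(blurry.max(), 3))
```

Since the full set of views is retained, the same recorded data can be refocused to any slope after the fact, which is exactly the post-capture focus manipulation that distinguishes light fields from conventional photographs.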
MARC LEVOY is a Professor of Computer Science and Electrical Engineering at Stanford University. He received degrees in
Architecture from Cornell University in 1976 and 1978 and a PhD in Computer Science from the University of North Carolina in
1989. His research interests include computer-assisted cartoon animation, volume rendering (for which he won the SIGGRAPH
Computer Graphics Achievement Award in 1996), 3D scanning, light field sensing and display, computational photography, and
computational microscopy.
Other awards: Charles Goodwin Sands Medal for best undergraduate thesis (Cornell University College of Architecture, 1976),
National Science Foundation Presidential Young Investigator (1991), ACM Fellow (2007).
Dr. Joseph Mait
Senior Technical Researcher (ST)
US Army Research Laboratory
AMSRD-ARL-SE-R
2800 Powder Mill Road
Adelphi, MD 20783-1197
Tel: (301) 394-2462
Email: jmait@arl.army.mil
Title: Imaging: Its Past, Present, and Futures
Despite the fact that electronic detectors have replaced film in imaging systems, the optical design principles applied to most
electronic-based imaging systems remain conventional. They are based on the simple principle that, for systems designed to
record images, the role of the designer is to produce optics whose impulse response is matched in size to the detector pixel and
whose field of view is matched in size to the detector array. So long as designers apply conventional notions of image
formation, so-called digital camera designs should more properly be referred to as film-less cameras. They represent little more
than an electronic replacement of film-based imagers, much as early automobiles were nothing more than “horse-less
carriages.”
Just as we have seen the evolution of the horse-less carriage into an array of specialized motorized vehicles, we predict a similar
evolution to occur with film-less cameras. If one considers that face detection, motion compensation, and color de-mosaicing are
features included in some commercially available digital cameras, this evolution is beginning. Government support has also been
instrumental in developing imagers that exploit optical design and signal processing to achieve capabilities not otherwise possible,
for example, high performance thin imagers in the visible and infrared, and high dynamic range imagers.
As new capabilities begin to appear, imagers will no doubt become specialized, just as automobiles have. Some imagers may push
the limits of conventional imaging, for example, by increasing resolution beyond the diffraction limit. Other imagers may push the
limits of information extraction through multi-modal sensing (e.g., polarization and wavelength) and adaptation. To this end, pixels
may not even be appropriate as a common currency of information commerce.
The issues raised in this talk are meant to motivate discussion in the remainder of the workshop.
JOSEPH MAIT received his BSEE from the University of Virginia in 1979 and received his graduate degrees from the Georgia
Institute of Technology; his MSEE in 1980 and Ph.D. in 1985. Since 1989 Dr. Mait has been with the U.S. Army Research
Laboratory (formerly Harry Diamond Laboratories) and served in several positions. He is presently a senior technical researcher.
Dr. Mait has academic experience as a professor of Electrical Engineering at the University of Virginia and as an adjunct professor
at the University of Maryland, College Park. He has also held visiting positions at the Lehrstuhl für Angewandte Optik, Universität
Erlangen-Nürnberg, Germany and the Center for Technology and National Security Policy at the National Defense University in
Washington DC. Dr. Mait's research interests include sensors and the application of optics, photonics, and electro-magnetics to
sensing and sensor signal processing. Particular research areas include diffractive optic design, integrated computational imaging
systems, and signal processing. He is a Fellow of the professional societies SPIE and OSA, and a senior member of IEEE. He is a
Raven from the University of Virginia.
Dr. Chuck Matson
Senior Scientist
Directed Energy Directorate
Air Force Research Laboratory
Kirtland AFB, NM 87117
Tel: (505) 846-2049
Email: chudaw@comcast.net
Title: An Overview of Superresolution
Superresolution is a term that is used to describe a number of phenomena including restoring unmeasured spatial frequency
information [1]; removing aliasing from data [2]; measuring sub-wavelength detail [3]; encoding, transmitting, and decoding image
details optically that normally would be outside the transmission system spatial frequency cutoff [4]; decreasing system point
spread function widths optically [5]; and decreasing system point spread function widths with the use of post-processing [6]. In
this talk, I will present an overview of superresolution in each of these areas by describing what is meant by the term
“superresolution”, explaining how superresolution is accomplished, and characterizing the amount of superresolution that is
reasonably possible. I will illustrate the discussion with examples.
References
[1] M. Bertero and C. de Mol, “Super-resolution by data inversion,” in Progress in Optics XXXVI, E. Wolf, ed., Elsevier (1996)
[2] S. C. Park, M. K. Park, and M. G. Kang, “Super-resolution image reconstruction: A technical overview,” IEEE Signal Processing
Magazine (May 2003)
[3] Selected Papers on Near-Field Optics, S. Jutamulia, editor, SPIE Milestone Series, volume MS 172 (2002)
[4] Z. Zalevsky and D. Mendlovic, Optical Superresolution, Springer-Verlag, New York (2003)
[5] T. R. M. Sales and G. Michael Morris, “Fundamental limits of optical superresolution,” Optics Letters, vol. 22, pp. 582-584 (1997)
[6] M. Bertero and P. Boccacci, Introduction to Inverse Problems in Imaging, Institute of Physics Publishing (1998)
CHUCK MATSON is a senior scientist with the Air Force Research Laboratory's Directed Energy Directorate. He received his
MSEE and PhD degrees from the Air Force Institute of Technology (Dayton, Ohio) in 1983 and 1986, respectively. His research
interests include imaging through atmospheric turbulence, information-theoretic investigations into fundamental limits to image
quality, non-spatially-resolved target identification and characterization, and imaging through turbid media. He is a Fellow of the
Optical Society of America. He was an associate editor and is currently an advisory editor for Optics Express. He is one of the
program chairs for the 2009 OSA Signal Recovery and Synthesis Topical Meeting.
Dr. Mark Mirotznik
Associate Professor
The Catholic University of America
620 Michigan Ave, NE
Washington, DC 20064
Tel: (202) 319-4380
Email: mirotznik@cua.edu
Title: Multi-aperture/multi-modal computational imaging platforms developed during the PERIODIC program
Even the best commercially available imaging systems employ a mere concatenation of individually optimized optical,
sensor, digital processing, and visualization subsystems, with scant attention paid to the optimization of the overall
system performance against the full panoply of system trade-offs. In the emerging paradigm of integrated imaging
systems, the focus has slowly but surely begun to shift toward end-to-end optimized systems that maximize the
information content of images relative to a set of prescribed imaging tasks. Digital post-processing is an essential
ingredient of this approach. The design of the optical and detection subsystems is optimized for efficient information
gathering; post-processing merely renders the gathered information in the most desirable form (e.g., visually most
pleasing form) in the final image. Information is clearly the most important metric of integrated-imaging system
performance. In the PERIODIC concept we employ an array of imaging channels combined with a set of diverse
optical elements to maximize information content. Array imaging, rather akin to the function of compound eyes of
insects like flies, is at the leading edge of the ongoing computational-imaging revolution. Multiple optical elements
permit the use of a range of information gathering strategies in parallel that can turn what would otherwise be simple
yet powerful digital imagers into comprehensive scene interrogators with multiple functionalities, such as high spatial
and spectral resolution, high dynamic range, high depth and width of the field of view, excellent target recognition
capabilities, and well optimized computational strategies that employ data compression and fusion. In this talk we will
present the hardware platforms developed during the PERIODIC program and some representative results.
MARK MIROTZNIK is an associate professor of electrical engineering at The Catholic University of America. His
research interests are in computational electromagnetics, subwavelength optical devices and computational imaging.
His current research is focused on the development of computational array imaging systems that can be used to
remove artifacts produced from atmospheric obscurants (i.e. haze, fog, light rain) using spectral/polarimetric
information.
Dr. Robert Muise
Senior Staff Engineer
Lockheed Martin Missiles and Fire Control
5600 Sand Lake Rd.
Mail Point 450
Orlando, FL 32819
Tel: (407)356-8014
Email: robert.r.muise@lmco.com
Title: Compressive imaging for wide-area persistent surveillance
We consider the application of compressive imaging theory to the problem of wide-area persistent surveillance. Although
compressive sensing theory has enjoyed significant research attention, mainly because of the possibility of orders-of-magnitude
gains in signal/image processing applications, practical applications of compressive imaging have not kept pace, largely for
lack of an optical architecture that directly improves current sensing capabilities. There are now cases in the literature
and under study in which optical architectures have been developed that require the incorporation of compressive imaging
in order to perform the intended exploitation task. This presentation utilizes one such architecture to show a dramatic
(two orders of magnitude) increase in performance for the application of wide-area persistent surveillance. The architecture,
a field-of-view (FOV) multiplexing imager, will be described, and its relation to compressive imaging will be presented and
exploited for increased field-of-regard (FOR) imaging. A simulated example will be given with qualitatively impressive results.
The FOV-multiplexing optical architecture, while showing a significant performance increase over current capabilities, also
opens interesting research questions for future sensor design as well as for image reconstruction algorithms.
ROBERT MUISE is Senior Staff Engineer in the applied research department of Lockheed Martin Missiles and Fire
Control. He received his Ph.D. in Applied Mathematics from the University of Central Florida. His research interests
include image processing and exploitation (including automatic target detection/recognition/tracking), Integrated
Sensing and Processing, Compressive Sensing, and applications involving image and video exploitation with novel
sensors. He is a Senior member of IEEE, a member of SIAM and the imaging science activity group, and is part of the
organizing committee for the annual SPIE conference on automatic target recognition.
Dr. Shree K. Nayar
T. C. Chang Professor of Computer Science
Columbia University
New York City, New York
(212) 939-7092
nayar@cs.columbia.edu
Title: Computational Cameras: Redefining the Image
In this talk, we will first present the concept of a computational camera. It is a device that embodies the convergence
of the camera and the computer. It uses new optics to select rays from the scene in unusual ways, and an appropriate
algorithm to process the selected rays. This ability to manipulate images before they are recorded and process the
recorded images before they are presented is a powerful one. It enables us to experience our visual world in rich and
compelling ways. We will show computational cameras that can capture wide angle, high dynamic range,
multispectral, and depth images. Finally, we will explore the use of a programmable light source as a more
sophisticated camera flash. We will show how the use of such a flash enables a camera to produce images that reveal
the complex interactions of light within objects as well as between them.
SHREE K. NAYAR received his PhD degree in Electrical and Computer Engineering from the Robotics Institute at
Carnegie Mellon University in 1990. He is currently the T. C. Chang Professor of Computer Science at Columbia
University. He co-directs the Columbia Vision and Graphics Center. He also heads the Columbia Computer Vision
Laboratory (CAVE), which is dedicated to the development of advanced computer vision systems. His research is
focused on three areas: the creation of novel cameras, the design of physics-based models for vision, and the
development of algorithms for scene understanding. His work is motivated by applications in the fields of digital
imaging, computer graphics, and robotics.
He has received best paper awards at ICCV 1990, ICPR 1994, CVPR 1994, ICCV 1995, CVPR 2000 and CVPR 2004. He
is the recipient of the David Marr Prize (1990 and 1995), the David and Lucile Packard Fellowship (1992), the National
Young Investigator Award (1993), the NTT Distinguished Scientific Achievement Award (1994), the Keck Foundation
Award for Excellence in Teaching (1995) and the Columbia Great Teacher Award (2006). In February 2008, he was
elected to the National Academy of Engineering.
Dr. Mark A. Neifeld
Professor
University of Arizona
Tucson, AZ 85721
Tel: (520) 621-6102
Email: neifeld@ece.arizona.edu
Title: Feature Specific Imaging
Feature-specific imaging (FSI) is a technique by which optical measurements employing a non-traditional basis may be
used to efficiently extract spatial, temporal, and/or spectral object information. Because the measurement
dimensionality of a FSI system is often much lower than the native dimensionality of the object space, FSI is
sometimes called compressive imaging. This presentation will discuss several candidate optical systems for FSI. The
performance of FSI will be analyzed for both general purpose imaging (i.e., image reconstruction) and task-specific
imaging (e.g., target detection and/or tracking). FSI will also be discussed as a convenient framework within which the
joint optimization of optical and post-processing resources may be undertaken.
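The core idea can be sketched in a few lines: rather than sampling all n object pixels, record m << n linear projections onto a measurement basis and reconstruct linearly. The random basis below is only an illustrative stand-in for the optimized feature bases discussed in the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 64, 16                    # native vs. measurement dimensionality (m << n)
x = np.zeros(n)                  # a simple object with a few bright features
x[[5, 20, 41]] = [1.0, -0.5, 2.0]

# Feature-specific measurement: project the object onto m non-traditional
# (here random) basis vectors instead of sampling all n pixels
P = rng.standard_normal((m, n)) / np.sqrt(m)
y = P @ x                        # m measurements

# Minimum-norm linear reconstruction from the compressed measurements
x_hat = np.linalg.pinv(P) @ y
```

For task-specific imaging, the reconstruction step would be replaced by a detector or classifier operating directly on the feature measurements y.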
References
[1] Mark A. Neifeld and Premchandra Shankar, “Feature-Specific Imaging,” Applied Optics, Vol. 42, No. 17, pp. 3379-3389, June 2003.
MARK A. NEIFELD received the B.S.E.E. degree from the Georgia Institute of Technology in 1985 and the M.S. and
Ph.D. degrees from the California Institute of Technology in 1987 and 1991 respectively. In 1991 he joined the faculty
of the Department of Electrical and Computer Engineering and the Optical Sciences Center at the University of Arizona
in Tucson, AZ. He has coauthored more than 80 journal articles and more than 200 conference papers in the areas of
optical communications and storage, coding and signal processing, and optical imaging and processing systems. His
current interests include information- and communication-theoretic methods in image processing, nontraditional
imaging techniques that exploit the joint optimization of optical and post-processing degrees of freedom, coding and
modulation for fiber and free-space optical communications, and applications of slow and fast light. Professor Neifeld
is a Fellow of the Optical Society of America and a member of the SPIE, IEEE, and APS. He has served on the
organizing committees of numerous conferences and symposia. He has also been a two-term topical editor for Applied
Optics and a three-time Guest Editor of special issues of Applied Optics.
Dr. Victor Paúl Pauca
Assistant Professor
Computer Science Department
Wake Forest University
PO Box 7311
Winston-Salem, NC 27109
Tel. (336) 758-5454
Web: www.cs.wfu.edu/~pauca
Research Interests:
Computational imaging, inverse problems, high performance computing
PAÚL PAUCA is an Assistant Professor of Computer Science at Wake Forest University. He is an active
member of the PERIODIC research team, contributing to the numerical simulation and computational
aspects of this project. His research over the last few years has been funded by AFOSR, ARO, IARPA, and
the North Carolina Biotechnology Center.
Dr. Timothy M. Persons
Technical Director
Intelligence Advanced Research Projects Activity
Office of the Director of National Intelligence
Dr. Persons is a 2007 DNI S&T Fellow whose research focuses on computational imaging systems. He has also
recently been selected as the James Madison University (JMU) Physics Alumnus of 2007. He received his B.Sc.
(Physics) from JMU, a M.Sc. (Nuclear Physics) from Emory University, and a M.Sc. (Computer Science) and Ph.D.
(Biomedical Engineering) degrees from Wake Forest University. He is a senior member of the Institute of Electrical
and Electronics Engineers, the Association for Computing Machinery, and the Sigma Xi research honor society. He has
authored an array of journal, conference, and technical articles at various classification levels. He also serves as a
Ruling Elder in the Presbyterian Church in America. He is married to Gena D. (née Crater) Persons and they are the
proud parents of Leah Elizabeth (4) and Timothy Daniel (3).
TIMOTHY M. PERSONS was appointed the Technical Director of the Intelligence Advanced Research Projects
Activity (IARPA) in October 2007. He is the Director’s advisor on strategic planning, technical oversight and
measurement of programmatic investment performance, intra and intergovernmental science and technology (S&T)
relationships, initiation of seedling research and development efforts, and reporting to the Director of National
Intelligence (DNI), Congressional, and Intelligence Community (IC) stakeholders on behalf of the entire IARPA
corporate enterprise. He has also served as the research manager for the IARPA Quantum Information Science and
Technology research portfolio.
Prior to joining IARPA, Dr. Persons became the Technical Director and Chief Scientist of the Disruptive Technology Office
(DTO, formerly the Advanced Research and Development Activity (ARDA)) in September 2005, having served as Acting Deputy
Director of ARDA from March to September 2005 and as its Technical Director beginning in November 2002. From July 2001 to
November of 2002, he served as the Technical Director for the National Security Agency’s (NSA) Human Interface
Security Group, whose mission is to research, design, and test next-generation biometric identification and
authentication systems. Prior to joining the NSA, Dr. Persons was a radiation physicist with the University of North
Carolina at Chapel Hill.
Dr. Rafael Piestun
Associate Professor
University of Colorado at Boulder
UCB 425, Boulder, CO 80309
Tel: (303) 735-0894
piestun@colorado.edu
Title: Fundamental limits to optical systems
In this talk we will discuss fundamental limits to optical systems based on the total number of communication channels
available between the object space and the image space. These channels include both weakly scattering modes and
strongly scattering modes [1]. We use information theoretic concepts to account for the effect of noise.
In the second part of this talk, as an example of how to overcome these limitations, we present a new paradigm for
high-speed, three-dimensional (3D) information acquisition using engineered point spread functions. An information
theoretic analysis shows an inherent and significant improvement in depth estimation of at least one order of
magnitude with respect to traditional methods that use just lenses. This principle is particularly important in the
microscopy domain because it can offer simultaneously high temporal resolution and 3D-spatial accuracy. We will
discuss recent efforts to create computational optical systems to sense nanoscale object features.
1. R. Piestun and D. A. B. Miller, "Electromagnetic degrees of freedom of an optical system," J. Opt. Soc. Am. A 17,
892-902 (2000)
2. A. Greengard, Y. Y. Schechner, and R. Piestun, “Depth from diffracted rotation,” Opt. Lett. 31, 181-183 (2006)
3. S. R. P. Pavani and R. Piestun, "High-efficiency rotating point spread functions," Opt. Express 16, 3484-3489 (2008)
RAFAEL PIESTUN received the degree of Ingeniero Electricista (1990) from the Universidad de la Republica
(Uruguay) and the MSc. (1994) and Ph.D. (1998) degrees in Electrical Engineering from the Technion – Israel Institute
of Technology. From 1998 to 2000 he was a researcher at Stanford University. Since 2001 he has been with the Department
of Electrical and Computer Engineering and the Department of Physics at the University of Colorado – Boulder where
he is an Associate Professor. He was a Fulbright scholar and an Eshkol fellow, and has received a Honda Initiation Grant award, a
Minerva award, an El-Op prize, and a Gutwirth prize. He served on the editorial committee of Optics and Photonics
News and is currently a topical editor of Applied Optics. His areas of interest include nanophotonic devices and
computational optical imaging.
Dr. Nikos Pitsianis
Assistant Research Professor
Duke University
Departments of ECE and CS
Durham, NC 27708
Tel: (919) 660-6500
Email: Nikos.P.Pitsianis@Duke.edu
Research Interests:
The focus of my research activities is on high-performance computer algorithms and architectures. My research
interests intersect mainly with the following three traditionally categorized areas: (1) computational science and
numerical algorithms, such as matrix computations, fast transforms, and optimization techniques; (2) high
performance computer systems and architectures, and (3) image and signal processing applications.
I am interested in utilizing compiler-aided fast transforms and special-purpose high-performance computing
architectures in building efficient sensing systems. With the help of special-purpose compilers and appropriately
designed mathematical abstractions, we can explore the potential in high performance computation by manipulating
domain-specific mathematical structures to match them to a given computer architecture and vice-versa. At Duke
University, my colleagues and I are working toward more complex structures in discrete transforms for broader
applications in signal and image processing, computational physics, computational chemistry, and information
processing, especially efficient algorithms and computing architectures for discrete transforms of unequal-space
sampled data.
I have also been involved in integrated sensing and signal processing since I joined the Duke faculty. In integrated
sensing and processing, computer technologies and computation techniques are brought closer to important sensing
systems. At Duke, with Sun and Brady, we have introduced the reference structure tomography (RST) framework, and
developed compressive sampling with QCT coding and decoding, among other theoretical and technical advances.
Within this framework, my colleagues and I have designed imaging cameras with thin optics and compressive
sampling, and developed schemes to estimate source parameters and classify sources.
NIKOS PITSIANIS is currently an Assistant Research Professor with the Department of Electrical and Computer
Engineering and Computer Science of Duke University in Durham, NC. He received the M.S. and Ph.D. degrees in
Computer Science from Cornell University. His research interests include high performance algorithms and
architectures for signal and image processing.
Dr. Bob Plemmons
Wake Forest University
Mathematics and Computer Science
Dept. Mathematics, Box 7388
Wake Forest University
Winston-Salem, NC 27109
(336) 758-5358
plemmons@wfu.edu
Title: PERIODIC Project: The Design, Construction, and Testing of Multimodal Imagers
The overarching goal of the PERIODIC project is to design, study, and construct a suite of computational
imaging systems that capture, process, and render scene information in the most efficient manner possible in order to
generate images of high quality, resolution, and discrimination. These systems are intrinsically multi-modal and
computational, emphasizing fundamental trade-offs both within the space of spatial, temporal, spectral, and
polarization data and among the optical, sensor, and processing sub-systems for a well optimized application-based,
resource-allocation strategy. In this two-part talk, we report on the past successes, current development, and future
plans of the project.
The computational and hardware sides of the project have seen the development of new superresolution
reconstruction algorithms for array image reconstruction, including fast registration methods, fusion of
polarization and spectral data, variational edge-preserving regularization techniques, and implementations on FPGA
systems for near real-time computation. Application areas of these computational techniques currently being tested
include biometric systems for iris recognition and hand fingerprints, biomedical systems for burn analysis, and most
recently single-snapshot dehazing made possible by a new PERIODIC camera capturing polarization and spectral data.
The theoretical development of the project has involved fundamental studies of the prospects and limits on
the extent of digital and optical superresolution in computational imagers with the help of information theory. These
studies have been greatly illuminated by our exploration of the implications of the theory of generalized
sampling expansions for digital resolution enhancement from sequences of low-resolution images and for prior
knowledge such as the finiteness of the object support.
BOB PLEMMONS, Z. Smith Reynolds Professor of Mathematics and Computer Science, joined the Wake Forest
University faculty in 1990. His current research includes computational mathematics applied to problems arising in
image and signal processing, optics, and photonics. His work is supported by grants from the Army Research Office
(ARO) on the topic of "novel image quality control systems", the Air Force Office of Scientific Research (AFOSR) on the
topic of "novel imaging tools for space situational awareness", and by the Intelligence Advanced Research Projects
Activity (IARPA) on the topic of “multi-aperture, multi-functional computational imaging systems.” Plemmons received
his bachelor's degree in mathematics from Wake Forest in 1962 and his doctorate in applied mathematics from Auburn
University in 1965. Before joining the Wake Forest faculty, Plemmons' experience included founding the University of
North Carolina System's Center for Research in Scientific Computation at North Carolina State University in 1988. Active in
U.S. Department of Defense (DoD) research for 35 years, he is the author of more than 150 papers and three books
on computational mathematics, and has also testified before two U.S. Congressional Committees as a consultant on
DoD basic research priorities.
Andrew Portnoy
Research Assistant
Duke University
Department of ECE Box 90291
Durham, NC 27708
Tel: (203) 470-6877
Email: adp4@duke.edu
Title: COMP-I Advances and Performance Metrics
As computational imaging systems evolve it becomes increasingly important to establish quantitative performance metrics for
comparison. Traditional standards are often incomplete when evaluating digitally processed data and need to be adapted to fairly
characterize the response of today’s sensors.
First, a brief discussion of the current status of the Compressive Optical MONTAGE Photography Initiative (COMP-I) [1] program will
be presented. Sponsored by the Defense Advanced Research Projects Agency, the MONTAGE program is short for Multiple Optical
Non-redundant Aperture Generalized Sensors. The latest COMP-I cameras use a multichannel lenslet array integrated on a long
wave infrared (LWIR) imaging sensor operating in the 8-12 μm wavelength range. Image synthesis integrates information from
these multiple subimages into a single high resolution reconstruction. The presentation will also include the most recent results on
characterizing the COMP-I camera’s performance in comparison to a conventional system. In particular, this will address Noise
Equivalent Temperature Difference (NETD) [2].
Thermal imaging systems are often calibrated to measure the equivalent blackbody temperature distribution of a scene. NETD is a
metric for characterizing a system’s effective temperature resolution. By definition, NETD is the temperature difference at
which the signal-to-noise ratio is unity. NETD translates pixel fluctuations resulting from system noise into an absolute
temperature scale. As noise
statistics vary with operating temperature, the corresponding NETD fluctuates. To experimentally calculate NETD, we image a
collimated target aperture illuminated with a blackbody source. Especially for computational imaging systems, any meaningful
NETD measurement must also more explicitly explain signal to noise ratio. We describe denoising techniques in our reconstruction
algorithms and how they affect quantitative metrics. Preliminary results show comparable data between our multiple aperture
camera and a conventional system using the same microbolometer technology.
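The measurement procedure above reduces to a simple per-pixel computation: divide the temporal noise by the responsivity (signal change per kelvin) estimated between two blackbody settings. A minimal sketch, with hypothetical frame stacks as input:

```python
import numpy as np

def netd(frames_hot, frames_cold, dT):
    """Estimate NETD from frame stacks at two blackbody temperatures.

    frames_hot, frames_cold -- arrays of shape (N, Y, X): N frames at each setting
    dT                      -- blackbody temperature difference in kelvin

    NETD is the temperature difference at which SNR = 1, i.e. the temporal
    noise expressed on an absolute temperature scale.
    """
    # per-pixel responsivity: signal change (counts) per kelvin
    response = (frames_hot.mean(axis=0) - frames_cold.mean(axis=0)) / dT
    # per-pixel temporal noise (counts)
    noise = frames_cold.std(axis=0)
    # average the per-pixel NETD over the array
    return np.mean(noise / response)
```

For example, a sensor responding at 5 counts/K with 1 count of temporal noise would yield an NETD of 0.2 K. Any denoising in the reconstruction chain lowers `noise` and hence the reported NETD, which is why, as noted above, the metric must be stated together with the processing applied.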
References:
[1] David J. Brady, Michael Feldman, Nikos Pitsianis, J.P. Guo, Andrew Portnoy, and Michael Fiddy, “Compressive optical MONTAGE
photography”
Proc. SPIE 5907, 590708 (2005)
[2] ASTM Standard E1543 - 00 (2006), “Standard Test Method for Noise Equivalent Temperature Difference of Thermal Imaging
Systems,” ASTM International, West Conshohocken, PA, www.astm.org
ANDREW PORTNOY is a Ph.D. candidate in the Department of Electrical and Computer Engineering at Duke University. He also
received his M.S. (’06) in ECE and B.S.E. (’04) in ECE and Computer Science at Duke. His research is in the field of computational
optical sensors with expected graduation in spring 2009. Specifically, Portnoy’s work has been focused in multichannel imaging
systems most recently in the long wave infrared (LWIR) wavelength band. He has also researched focal plane coding, multiplex
holography, and hyperspectral imaging. He is a member of the Optical Society of America and has delivered an invited talk at their
Frontiers in Optics Annual Meeting. Portnoy was awarded a graduate student fellowship through the National Science Foundation’s
EAPSI program for summer research in Taiwan at National Chiao Tung University.
Dr. Sudhakar Prasad
University of New Mexico
Physics and Astronomy
800 Yale Blvd NE
Albuquerque, NM 87122
(505) 277-5876
sprasad@unm.edu
Title: PERIODIC Project: The Design, Construction, and Testing of Multimodal Imagers
The overarching goal of the PERIODIC project is to design, study, and construct a suite of computational
imaging systems that capture, process, and render scene information in the most efficient manner possible in order to
generate images of high quality, resolution, and discrimination. These systems are intrinsically multi-modal and
computational, emphasizing fundamental trade-offs both within the space of spatial, temporal, spectral, and
polarization data and among the optical, sensor, and processing sub-systems for a well optimized application-based,
resource-allocation strategy. In this two-part talk, we report on the past successes, current development, and future
plans of the project.
The computational and hardware sides of the project have seen the development of new superresolution
reconstruction algorithms for array image reconstruction, including fast registration methods, fusion of
polarization and spectral data, variational edge-preserving regularization techniques, and implementations on FPGA
systems for near real-time computation. Application areas of these computational techniques currently being tested
include biometric systems for iris recognition and hand fingerprints, biomedical systems for burn analysis, and most
recently single-snapshot dehazing made possible by a new PERIODIC camera capturing polarization and spectral data.
The theoretical development of the project has involved fundamental studies of the prospects and limits on
the extent of digital and optical superresolution in computational imagers with the help of information theory. These
studies have been greatly illuminated by our exploration of the implications of the theory of generalized
sampling expansions for digital resolution enhancement from sequences of low-resolution images and for prior
knowledge such as the finiteness of the object support.
SUDHAKAR PRASAD is currently a Professor of Physics and Astronomy at the University of New Mexico. As the
Director of the Center for Advanced Studies during the period 2000-2005, he actively sought and supported
interdisciplinary research activities in the natural sciences and mathematics at UNM. He has worked in the area of
astronomical imaging and image processing for the past 18 years with generous funding support by AFOSR, AFRL,
NASA, ARO, and IARPA, and often brings his early research background in quantum optics and field theory to bear on
problems of interest to the imaging community. In recent years, he has been applying concepts of Shannon and Fisher
information to derive fundamental limits on the information content of images degraded by noise and turbulence, and
on the restorability of those images. In 1999-2000, he led an AFOSR-funded effort to establish an imaging research
program at MHPCC, which later spawned a large five-year (2002-2007) AFOSR-PRET program at UNM’s Maui Scientific
Research Center. As the overall PI of the original multi-aperture computational imaging program funded by IARPA
(then ARDA) in 2005, he has played an important role in the development of the current phase of the PERIODIC
project, of which he is a co-leader. He has published nearly 80 original papers in fields ranging from quantum optics to
quantum field theory to astronomical imaging and image processing. He is a Fellow of the Optical Society of America.
Dr. Ramesh Raskar
MIT
Media Lab
E15-324 Media Lab MIT
20 Ames Street
Cambridge, MA 02139
Tel: (617) 953-9799
raskar@media.mit.edu
Title: Less is More: Coded Computational Photography
Computational Photography is an emerging multi-disciplinary field that is at the intersection of optics, signal processing, computer graphics+vision,
electronics, art, and online sharing in social networks. The field is evolving through three phases. The first phase was about building a super-camera
that has enhanced performance in terms of the traditional parameters, such as dynamic range, field of view or depth of field. I call this Epsilon
Photography. Because a camera's capabilities are limited, the scene is sampled via multiple photos, each captured with an epsilon variation of the camera
parameters. This corresponds to low-level vision: estimating pixels and pixel features. The second phase is building tools that go beyond
capabilities of this super-camera. I call this Coded Photography. The goal here is to reversibly encode information about the scene in a single
photograph (or a very few photographs) so that the corresponding decoding allows powerful decomposition of the image into light fields, motion
deblurred images, global/direct illumination components, or distinctions between geometric and material discontinuities. This corresponds to
mid-level vision: segmentation, organization, inferring shapes, materials and edges. The third phase will be about going beyond the radiometric
quantities and challenging the notion that a camera should mimic a single-chambered human eye. Instead of recovering physical parameters, the
goal will be to capture the visual essence of the scene and analyze the perceptually critical components. I call this Essence Photography and it
may loosely resemble depiction of the world after high level vision processing. It will spawn new forms of visual artistic expression and
communication.
In this talk, I will focus on Coded Photography. 'Less is more' in Coded Photography: by blocking light over time or space, we can preserve more
details about the scene in the recorded single photograph.
1. Coded Exposure: By blocking light in time, by fluttering the shutter open and closed in a carefully chosen binary sequence, we can preserve high
spatial frequencies of fast moving objects to support high quality motion deblurring.
2. Coded Aperture Optical Heterodyning: By blocking light near the sensor with a sinusoidal grating mask, we can record 4D light field on a 2D
sensor. And by blocking light with a mask at the aperture, we can extend the depth of field and achieve full-resolution digital refocusing.
3. Coded Illumination: By observing blocked light at silhouettes, a multi-flash camera can locate depth discontinuities in challenging scenes without
depth recovery.
4. Coded Sensors: By sensing intensities with lateral inhibition, a ‘Gradient Camera’ can record large as well as subtle changes in intensity to recover
a high-dynamic range image.
5. Coded Spectrum: By blocking parts of a ‘rainbow’, we can create cameras with digitally programmable wavelength profile.
I will show several applications and describe emerging techniques to recover scene parameters from coded photographs.
Recent joint work with Jack Tumblin, Amit Agrawal, Ashok Veeraraghavan and Ankit Mohan
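The coded-exposure idea (item 1) can be sketched in 1-D: fluttering the shutter replaces a box blur with convolution by a binary code whose spectrum has no deep nulls, so the blur can be undone stably. The sketch below uses a short, hypothetical code and noise-free data; real flutter-shutter work uses longer, carefully optimized sequences.

```python
import numpy as np

# A short binary shutter code; a real system would use a longer
# sequence optimized for a flat spectrum (this one is illustrative).
code = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1], dtype=float)

rng = np.random.default_rng(1)
signal = rng.random(64)        # 1-D stand-in for a moving object's profile

# Fluttered-shutter motion blur = convolution with the code.
blurred = np.convolve(signal, code)

# Build the (tall) convolution matrix and deblur by least squares;
# an invertible code keeps this system well conditioned.
n = signal.size
A = np.zeros((blurred.size, n))
for i in range(n):
    A[i:i + code.size, i] = code
deblurred, *_ = np.linalg.lstsq(A, blurred, rcond=None)
```

A conventional open shutter corresponds to an all-ones code, whose spectrum has nulls that make the same least-squares solve badly conditioned; that contrast is the point of the coded exposure.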
Ramesh Raskar joined the Media Lab in spring 2008 as head of the Camera Culture research group. The group focuses on developing tools to help
us capture and share the visual experience. This research involves developing novel cameras with unusual optical elements, programmable
illumination, digital wavelength control, and femtosecond analysis of light transport, as well as tools to decompose pixels into perceptually
meaningful components. Raskar's research also involves creating a universal platform for the sharing and consumption of visual media.
Raskar received his PhD from the University of North Carolina at Chapel Hill, where he introduced "Shader Lamps," a novel method for seamlessly
merging synthetic elements into the real world using projector-camera based spatial augmented reality. In 2004, Raskar received the TR100 Award
from Technology Review, which recognizes top young innovators under the age of 35, and in 2003, the Global Indus Technovator Award, instituted
at MIT to recognize the top 20 Indian technology innovators
worldwide. He holds 30 US patents and has received four Mitsubishi Electric Invention Awards. He is currently co-authoring a book on
Computational Photography.
Dr. Timothy Schulz
Professor and Dean
Michigan Tech
1400 Townsend Drive
Houghton, MI 49931
Tel: (906) 482-9223
Email: schulz@mtu.edu
Title: Estimating the degree of polarization from intensity measurements
The degree of polarization for a quasi-monochromatic field is a real number P between 0 and 1 that is used to
describe the extent to which a field is polarized [1,2]. Completely polarized fields – linear or circular – have P = 1, and
completely unpolarized fields have P = 0. For situations involving active laser illumination, several authors have
suggested and studied the use of the degree of polarization (and related parameters) of the reflected field as a feature
that can be used for object identification and classification [3-5]. In this presentation, performance bounds [6] are
presented for the estimation of the degree of polarization for situations involving the measurement and processing of
i) two orthogonal field components; ii) two orthogonal intensity components [7,8]; and iii) the total field intensity, and
these bounds are used to demonstrate that sensors that record only intensity data – and, hence, avoid the utilization
of optical interferometers – can be designed and utilized for the estimation of the degree of polarization.
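The quantity being estimated can be made concrete with the textbook Stokes-parameter definition of P, computed here from six ideal, noise-free analyzer intensities. This is only the definition, not the estimators or performance bounds presented in the talk, and the function name is illustrative.

```python
import numpy as np

def degree_of_polarization(i0, i90, i45, i135, ircp, ilcp):
    """Degree of polarization from six ideal analyzer intensities
    (linear at 0/90/45/135 degrees, right/left circular)."""
    s0 = i0 + i90                 # total intensity
    s1 = i0 - i90                 # horizontal minus vertical linear
    s2 = i45 - i135               # +45 minus -45 linear
    s3 = ircp - ilcp              # right minus left circular
    return np.sqrt(s1**2 + s2**2 + s3**2) / s0

# Completely polarized (horizontal linear) light: P = 1.
p_lin = degree_of_polarization(1.0, 0.0, 0.5, 0.5, 0.5, 0.5)
# Completely unpolarized light: P = 0.
p_unp = degree_of_polarization(0.5, 0.5, 0.5, 0.5, 0.5, 0.5)
```

With noisy measurements the division and square root make this naive plug-in estimator biased, which is why performance bounds for the different measurement schemes matter.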
References
[1] J. W. Goodman, Statistical Optics, Wiley, 1985.
[2] L. Mandel and E. Wolf, Optical Coherence and Quantum Optics , Cambridge University Press, 1995.
[3] J. E. Solomon, “Polarization imaging,” Appl. Opt. 20(9), pp. 1537-1544, 1981
[4] J. S. Tyo, M. P. Rowe, E. N. Pugh, Jr., and N. Engheta, “Target detection in optically scattering media by polarization-difference imaging,” Appl. Opt. 35(11), pp. 1855-1870, 1996.
[5] F. Goudail and P. Refregier, “Statistical algorithms for target detection in coherent active polarimetric images,” J.
Opt. Soc. Am. A 18(12) pp.3049-3060, 2001.
[6] T. J. Schulz and N. K. Gupta, “Performance bounds for high-light-level amplitude and intensity interferometry,” J.
Opt. Soc. Am. A 15(6), pp.1619-1625, 1998.
[7] T. J. Schulz, “Estimation of the squared modulus of the mutual intensity from high-light-level intensity
measurements,” J. Opt. Soc. Am. A 12(6), pp.1331-1337, 1995.
[8] E. Wolf, “Correlation between photons in partially polarized light beams,” Proc. Phys. Soc. 76, pp. 424-426, 1960.
TIM SCHULZ is Dean for the College of Engineering, and the Dave House Professor of Electrical and Computer Engineering at
Michigan Tech. He received his D.Sc. in Electrical Engineering from Washington University in St. Louis, and worked for the
Environmental Research Institute of Michigan in Ann Arbor, MI prior to joining Michigan Tech. His research is directed toward the
development of computational sensing and imaging systems with an emphasis on the joint design of optical systems and
computational estimation techniques. He is a Fellow of the Optical Society of America and of SPIE – the International Society for
Optical Engineering. He is currently serving as a topical editor for the Journal of the Optical Society of America, and has served as
topical editor for IEEE Transactions on Image Processing, and for Applied Optics.
Dr. Colin Sheppard
Professor
Division of Bioengineering
National University of Singapore
7 Engineering Drive 1
Singapore 117574
Tel: (65) 6516 1911
Email: colin@nus.edu.sg
Title: Fundamentals of Superresolution
The principles of superresolution of an optical system are introduced based on its information capacity [1-3], and the different classes of
superresolution scheme are distinguished [4]. For some methods, the spatial frequency cut-off is unchanged. For others, the spatial frequency cut-off
can be increased. It is shown how structured illumination and partial coherence can lead to a four-fold improvement in spatial frequency cut-off
compared with a conventional coherent system [5-7]. Even stronger improvement holds for full 3D imaging. Still other systems can exhibit
unrestricted enhancement of the cut-off. Various measures for the focusing properties of a wave are described [8]. These include the intensity
at the focus compared with either the input power or the total intensity in the focal plane, and the width of the central lobe. Simple performance
parameters for focusing are introduced [9], valid for nonparaxial and vectorial optical systems [10-15]. The effects of polarization on the focusing of
light are discussed [12-15]. Although radial polarization results in a smaller central lobe, the intensity at the focus can be lower than for transverse
polarization. Different forms of transverse polarization can be based on linear mixtures of electric and magnetic dipole components, or alternatively
transverse electric and transverse magnetic components. For Bessel beams, electric dipole polarization results in the greatest intensity at the focus
for a given input power, but transverse electric polarization results in the smallest central lobe.
References
[1] Toraldo di Francia, G (1955). "Super-resolution." Optica Acta 2: 5-8.
[2] Cox, IJ and Sheppard, CJR (1986). "Information capacity and resolution in an optical system." J. Opt. Soc. Am. A 3: 1152-1158.
[3] Sheppard, CJR and Larkin, K (2003). "Information capacity and resolution in three dimensional imaging." Optik 114: 548-550.
[4] Sheppard CJR (2007) Fundamentals of superresolution, Micron, 38: 165-169
[5] Sheppard, CJR (1986). "The spatial frequency cut-off in three-dimensional imaging." Optik 72: 131-133.
[6] Sheppard, Colin (2005) Superlens overcomes diffraction limit - Comment, http://optics.org/articles/news/11/4/17/comment/view/186
[7] Sheppard Colin (2007) Developments in 3D Microscopy, SPIE Newsroom 10.1117/2.1200705.0707, http://spie.org/x14016.xml
[8] Sheppard CJR, Alonso MA, Moore NJ (2008) Localization measures for high-aperture wavefields based on pupil moments, J. Opt. A: Pure and Appl. Opt. 10: 0333001
[9] Sheppard CJR, Hegedus ZS (1988) Axial behavior of pupil plane filters, J. Opt. Soc. Amer. A 5: 643-647.
[10] Sheppard CJR (2007) Filter performance parameters for high aperture focusing, Opt. Lett.32: 1653-1655
[11] Sheppard CJR, Ledesma S, Campos J, Escalera JC (2007) Improved expressions for gain factors for complex filters, Opt. Lett. 32: 1713-1715
[12] Sheppard, CJR and Larkin, KG (1994). "Optimal concentration of electromagnetic radiation." J. mod. Optics 41: 1495-1505.
[13] Sheppard CJR, Martinez-Corral M (2008) Filter performance parameters for vectorial high-aperture wave-fields, Opt. Lett. 33: 476-578.
[14] Sheppard, CJR and Török, P (1997). "Electromagnetic field in the focal region of an electric dipole wave." Optik 104: 175-177.
[15] Sheppard CJR, Yew EYS (2008) Performance parameters for focusing of radial polarization, Opt. Lett. 33: 497-499
COLIN SHEPPARD is Professor of Bioengineering and Professor of Diagnostic Radiology at National University of Singapore.
Previously, he was a fellow of Pembroke and St. John’s Colleges, Oxford, and Professor of Physics at Sydney University. He
received his Ph. D. (74) in Engineering at Cambridge University and the D. Sc. in Physical Sciences from Oxford University.
Professor Sheppard’s main area of research is in confocal microscopy, including instrumental development and investigation of
novel techniques with biomedical and industrial applications. He developed one of the world's first confocal microscopes in
1975, and launched the start-up company which marketed the first commercial confocal microscope in 1982. He proposed
various techniques of nonlinear microscopy in 1978. These included the proposal of 2-photon fluorescence microscopy and CARS
microscopy, and the publication of the first images from scanning second-harmonic microscopy. His research interests also
include diffraction and focusing, beam and pulse propagation, scattering and image formation. He was elected Fellow of the
Institute of Physics and Fellow of the Institution of Electrical Engineers. He has received several awards including the Alexander
von Humboldt Research Award, the Institute of Physics Optics and Photonics Division Prize, UK NPL Metrology Award, BTG
Academic Enterprise Award, IEE Gyr and Landis Prize, and the Prince of Wales Award for Industrial Innovation (presented by
HRH Prince Charles on BBC TV). He has served as Vice-President of the International Commission for Optics (ICO) and President
of the International Society for Optics Within Life Sciences (OWLS). He is Editor-in-Chief of Journal of Optics A: Pure and Applied
Optics (the official journal of the European Optical Society).
Sapna A. Shroff
Department of Electrical and Computer Engineering
Institute of Optics
Center for Visual Science
University of Rochester
Rochester, NY 14627
Tel: (585) 273 5991
Email: sapna@optics.rochester.edu
Title: Structured Illumination Imaging for Superresolution
The presentation will begin with a brief review of some super-resolution techniques including Super-SVA [1], then provide detail on the structured
illumination approach. Sinusoidally patterned illumination has been used to obtain lateral superresolution as well as axial sectioning in microscopy
[2-7]. In this talk we discuss the superresolution aspect of this technique. The sinusoidal illumination frequency heterodynes the superresolution
frequencies of the object into a low frequency moiré pattern which now lies within the passband of the imaging system. In order to extract
superresolution from this moiré beat pattern, multiple images are taken of the object with distinct phase shifts of the sinusoidal illumination. This
process is repeated for one or two more orientations of the sinusoidal illumination and the extracted superresolution information from the different
orientations is then combined appropriately to obtain a superresolved image.
The processing of the sinusoidally patterned images requires accurate knowledge of the phase shifts in the sinusoidal illumination and hence this
technique is usually restricted to imaging stationary objects using precise, pre-calibrated phase shifting elements. We discuss the application of this
technique to obtain lateral superresolution in fluorescent moving objects such as live or in vivo tissue, specifically the human retina in vivo. We
discuss methods of estimating the phase shifts in the sinusoidal illumination a posteriori to allow for unknown, random object motion. We also
discuss the combination of the different superresolution components to obtain an appropriately weighted, OTF compensated superresolved image.
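The phase-shift demodulation step described above can be sketched as a per-pixel 3x3 linear unmixing of three phase-shifted images. This is a noise-free synthetic sketch with known phases; the work presented here additionally estimates the phases a posteriori and applies OTF-weighted recombination. All names are illustrative.

```python
import numpy as np

def separate_components(images, phases):
    """Unmix the 0 and +/-1 moire orders from three phase-shifted
    sinusoidally illuminated images; `phases` are the (known or
    estimated) illumination phase shifts in radians."""
    # Per pixel: D_n = C0 + C+ * exp(i*phi_n) + C- * exp(-i*phi_n)
    M = np.array([[1.0, np.exp(1j * p), np.exp(-1j * p)] for p in phases])
    flat = images.reshape(3, -1).astype(complex)
    c0, cplus, cminus = np.linalg.solve(M, flat)
    h, w = images.shape[1:]
    return c0.reshape(h, w), cplus.reshape(h, w), cminus.reshape(h, w)

# Synthetic check: build images from known components, then recover them.
rng = np.random.default_rng(2)
h, w = 4, 4
comp0 = rng.random((h, w))                     # unshifted order (real)
compp = rng.random((h, w)) * np.exp(1j * rng.random((h, w)))
compm = np.conj(compp)                         # real images => conjugate orders
phases = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
stack = np.stack([(comp0 + compp * np.exp(1j * p)
                   + compm * np.exp(-1j * p)).real for p in phases])
rec0, recp, recm = separate_components(stack, phases)
```

The recovered +/-1 orders carry object frequencies shifted by the illumination frequency; shifting them back in the Fourier domain and repeating for other grating orientations yields the superresolved reconstruction.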
References
[1] H.C. Stankwitz and M.R. Kosek, “Super-Resolution for SAR/ISAR RCS Measurement Using Spatially Variant Apodization,” Proceedings of the
Antenna Measurement Techniques Association (AMTA) 17th Annual Meeting and Symposium, Williamsburg, VA, 13-17 November 1995.
[2] M. Gustafsson, "Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy," Journal of Microscopy, Vol.
198, Pt 2, pp 82 – 87 (May 2000).
[3] R. Heintzmann, C. Cremer, "Laterally Modulated Excitation Microscopy: Improvement of resolution by using a diffraction grating," Optical
Biopsies and Microscopic Techniques III, Irving J. Biglo, Herbert Schneckenburger, Jan Slavik, Katrina Svanberg, M.D., Pierre M. Viallet, Editors,
Proceedings of SPIE Vol. 3568, pp. 185 – 196 (1999).
[4] M. A. A. Neil, R. Juskaitis, and T. Wilson, "Method of obtaining optical sectioning by using structured light in a conventional microscope," Opt.
Lett. 22, 1905-1907 (1997).
[5] Karadaglić, D., Wilson, T., “Image formation in structured illumination wide-field fluorescence microscopy,” Micron (2008), doi:
10.1016/j.micron.2008.01.017.
[6] L. H. Schaefer, D. Schuster, J. Schaffer, "Structured illumination microscopy: artifact analysis and reduction utilizing a parameter optimization
approach," Journal of Microscopy 216:2, 165-174 (2004).
[7] S. A. Shroff, J. R. Fienup, and D. R. Williams, "OTF compensation in structured illumination superresolution images," in Unconventional Imaging
IV, edited by Jean J. Dolne, Thomas J. Karr, Victor L. Gamiz, Proceedings of SPIE Vol. 7094 (SPIE, Bellingham, WA), in press, (2008).
SAPNA A. SHROFF is a graduate student working toward her Ph.D. in Electrical and Computer Engineering at the University of
Rochester, advised by Professor James R. Fienup at the Institute of Optics and Professor David R. Williams at the Center for
Visual Science. She has an M.S. in Electrical and Computer Engineering from the University of Rochester, 2005 and a B.E. in
Electronics and Telecommunications Engineering from Mumbai University, 2003.
Her primary areas of interest are imaging and image processing. Her research involves superresolved imaging of the human
retina in vivo using structured illumination. Her research encompasses theoretical, experimental and post-processing aspects
involved in obtaining lateral superresolution and axial sectioning using structured illumination as well as areas of
ophthalmological imaging using adaptive optics flood-illuminated and scanning-confocal retinal imaging systems.
Dr. Michael D. Stenner
Sr. Multi-Discipline Sys. Eng.
The MITRE Corporation
202 Burlington Road, M/S E060
Bedford, MA 01730-1420
Tel: (781) 271-3446
Email: mstenner@mitre.org
Dr. Stenner is generally interested in the field of computational imaging (CI), with specific interest in multiplexed
sensing, task-specific compressive sensing, image coding and system design tradeoffs. As part of the MONTAGE
program, he developed the Multi-Domain Optimization software framework for developing CI systems with both optical
and post-processing degrees of freedom. He also maintains an interest in fast- and slow-light pulse propagation.
MICHAEL STENNER has been a Sr. Multi-Discipline Sys. Eng. at MITRE for approximately two months. He joined
MITRE to pursue development of computational imaging systems. Before joining MITRE, Dr. Stenner worked as a
post-doctoral researcher at the University of Arizona, where he developed the Multi-Domain Optimization software
framework as part of the MONTAGE program. He received his Ph.D. from the Duke University Physics Department for
work in fast- and slow-light pulse propagation.
Dr. Thomas Suleski
Associate Professor
UNC Charlotte
Dept. of Physics and Optical Science
Charlotte, NC 28223
Tel: (704) 687-8159
Email: tsuleski@uncc.edu
Research Interests:
Diffractive, refractive, and sub-wavelength micro/nano-optics
Novel fabrication methods for micro/nano-optics
Microsystems integration and applications
Nanoreplication and nanomanufacturing
Multi-axis free-form micromachining
Near-field diffraction and Talbot self-imaging
THOMAS SULESKI received a B.S. in physics from the University of Toledo in 1991, and M.S. and Ph.D. degrees in
physics from the Georgia Institute of Technology in 1993 and 1996, respectively. He has been an active researcher in
micro/nano-optics since 1991.
Dr. Suleski performed research in fabrication and integration of micro-optics at Digital Optics Corporation from 1996
until 2003, most recently as Manager of New Technology. In 2003, he joined the faculty of the University of North
Carolina at Charlotte in the Department of Physics and Optical Science. Dr. Suleski holds 9 patents and over 80
technical publications on the design, fabrication, and testing of micro- and nano-optical components and systems, and
is co-author of Diffractive Optics: Design, Fabrication, and Test (Bellingham, WA: SPIE Press, 2003).
Dr. Suleski is a Fellow of SPIE, the International Society for Optical Engineering, and a member of the Optical Society
of America. He currently serves as Senior Editor for the SPIE Journal of Micro/Nanolithography, MEMS, and MOEMS, as
well as MEMS/MOEMS Symposium Co-Chair at the annual SPIE Photonics West conference. Dr. Suleski previously
served as Group Chair for Holography and Diffractive Optics for the Optical Society of America Science and
Engineering Council from 2004-2006, and has chaired multiple conferences on micro/nano-optics technologies and
applications.
Dr. Xiaobai Sun
Associate Professor
Duke University
450 Research Drive, D107 LSRC
Durham, NC 27708
Tel: (919) 660-6518
Email: Xiaobai@cs.duke.edu
Research Interests
Numerical analysis, matrix theory, high-performance scientific computing and parallel computing.
Current Projects:
Theory and algorithm development for large matrix computation problems arising in computational science and
engineering.
Ph.D., University of Maryland at College Park, 1991
M.S., Academia Sinica, Beijing, China, 1983
Publications:
[1] Sun, X. “A Methodology Towards Automatic Implementation of N-body Algorithms” to appear in J. Numer. Comp.
[2] Sun, X. and Pitsianis, N. “A Matrix Version of the Fast Multipole Method” SIAM Review, 43.2, (2002): 289-300
[3] Sun, X., Jin, W. and Chase, J. “FastSlim: Prefetch-Safe Trace Reduction for I/O System Simulation” ACM Transactions on Modeling and Computer Simulation, 2000
[4] Pauca, P., Ellerbroek, B., Plemmons, B. and Sun, X. “Structured Matrix Representatives of Two-parameter (Hankel) Transforms in Adaptive Optics” Linear Algebra and Its Applications, 316, (2000): 29-43
[5] Greengard, L. and Sun, X. “A New Version of the Fast Gauss Transform” Documenta Mathematica, Extra Volume ICM III, (1998): 575-584
Dr. Markus Testorf
Assistant Professor of Engineering
Thayer School of Engineering, Dartmouth College
8000 Cummings Hall, Hanover, NH 03755, U.S.A.
Phone: (603) 646 2610
Markus.Testorf@osamember.org
Title: Superresolution Imaging: A Skeptic's Perspective
Current interest in optical imaging technology is driven by the synergy of optical hardware, which supports flexible data acquisition
schemes, and numerical image reconstruction algorithms optimum for a given imaging task. One of the most important
performance measures is the image resolution and many systems claim super-resolution, i.e. surpassing the classical Rayleigh limit.
It is argued that so-called superresolution methods can be separated into essentially three categories. Firstly, systems based on a
unique physical principle, which cannot be compared directly to other imaging modalities, and for which the label “superresolution”
is typically inappropriate. Secondly, systems based on encoding strategies for channelling high bandwidth signals through low
bandwidth systems. Here, the resolution enhancement is often defined in relation to a subsystem, while the entire imaging system
is acting in strict accordance with Rayleigh's resolution limit. Thirdly, methods which provide genuine bandwidth extrapolation and
superresolution imaging, but which show either rather limited performance, are applicable only to a rather small class of signals, or
exhibit high sensitivity to signal imperfections.
Emphasising the physics of image formation, a number of superresolution modalities are investigated. It is illustrated why any claim
of superresolution should be met with skepticism. At the same time, classifying superresolution methods by investigating mutual
similarities and differences is shown to carry the promise of improved information processing and image reconstruction capabilities.
Instrumental to this task is the analysis of sampling and image reconstruction in terms of optical phase spaces or joint space-spatial
frequency representations. The phase space analysis suggests that the key performance measure of any reconstruction method is
not resolution, but the recovered number of degrees of freedom of the input signal. This provides a powerful heuristic approach for
distinguishing between methods aimed at extracting the degrees of freedom of the signal with a minimum set of samples, and
methods which attempt bandwidth extrapolation beyond the measured signal bandwidth.
The presentation reviews the Rayleigh resolution limit as the baseline for further discussion. Then, Lukosz type superresolution is
identified as the prototype for many schemes currently discussed in the context of digital superresolution, structured illumination,
as well as generalized and compressive sampling. Finally, superresolution filters and bandwidth extrapolation based on prior
information are discussed as a platform to revisit fundamentals of image formation and the limits to image resolution and signal
recovery.
MARKUS TESTORF received his Ph.D. in physics from the University of Erlangen-Nuremberg, Germany in 1994. He
has worked at the INAOE, Mexico, the University of Hagen, Germany, and the University of Massachusetts-Lowell.
Since 2003 he has been with the Thayer School of Engineering at Dartmouth College. His research interests include inverse
problems, optical imaging as well as the design and application of diffractive optics and nano-optics. In his research he
particularly enjoys using phase-space methods to gain a better and intuitive understanding of optical phenomena. Dr.
Testorf has authored or co-authored one book publication, 50 journal papers, and about 100 conference papers. He is a
member of OSA, SPIE, EOS, and the German Optical Society (DGaO). He is currently chair of the OSA technical group
“Diffractive Optics and Holography” and serves as topical editor of Applied Optics. He is also general chair of the OSA
Topical Meeting on Signal Recovery and Synthesis 2009.
Dr. Todd Torgersen
Associate Professor
Wake Forest University
Department of Computer Science
Winston-Salem, NC 27109
Phone: (336) 758 - 5536
Email: torgerse@wfu.edu
Research Interests:
Research interests include image processing, image restoration, lenslet array imaging, phase diversity, wavefront
encoding, and inverse problems. Most recent work has been in collaboration with the PERIODIC project. The on-going
PERIODIC project investigates novel imaging data-diversity modalities including amplitude, wavelength, phase,
and polarization diversity. Application of multi-spectral imaging for rapid assessment of thermal injury is planned for
July 2008.
TODD TORGERSEN
Professional Preparation:
Syracuse University Mathematics BS May, 1975
Syracuse University Mathematics MS May, 1977
University of Delaware Computer Science Ph. D. May, 1989
Appointments:
Associate Professor, Department of Computer Science, Wake Forest University. July 1995 to present.
Assistant Professor, Department of Mathematics and Computer Science, Wake Forest University, August 1989 to June 1995.
Instructor, Department of Mathematics and Computer Science, Glassboro State College, August 1980 to May 1985.
Relevant Publications:
R. Barnard, J. Chung, J. Nagy, V. P. Pauca, R. J. Plemmons, J. van der Gracht, G. Behrmann, S. Mathews, M. Mirotznik, and S.
Prasad, “High-Resolution Iris Image Reconstruction from Low-Resolution Imagery,” Proceedings of SPIE Conference (6313) on
Advanced Signal Processing Algorithms, Architectures and Implementations XVI, held August 13-17, 2006, San Diego, CA.
S. Prasad, V. P. Pauca, R. J. Plemmons, and J. van der Gracht, “High-Resolution Imaging Using Integrated Optical Systems,”
International Journal on Imaging Systems and Technology, Vol. 14, No. 2, pp. 67–75, 2004 (invited paper).
S. Prasad, R. J. Plemmons, V. P. Pauca, and J. van der Gracht, “Pupil-Phase Optimization for Extended-Focus, Aberration-Corrected
Imaging Systems,” Advanced Signal Processing Algorithms, Architectures, and Implementations XIV, Proc. Of SPIE, Vol. 5559, pp.
335–345, 2004.
Dr. Robert Tyson
UNC Charlotte
Optics Student
9201 University City Blvd.
Charlotte, NC 28262
atcannis@uncc.edu
Research Interests and Areas of Specialization:
Adaptive Optics, Diffraction Theory, Fourier Optics, Atmospheric Propagation
ROBERT TYSON received his BS in Physics from Penn State University in 1970 and his MS and Ph.D. in
Physics from West Virginia University in 1976 and 1978 respectively. He is a Fellow of SPIE - The
International Society for Optical Engineering - and in 2006 was a Visiting Scientist at the National
University of Ireland – Galway. He is the author of three books: Field Guide to Adaptive Optics (SPIE Press, 2004);
Principles of Adaptive Optics, 2nd Edition (Academic Press, 1997); and Adaptive Optics Engineering
Handbook (Marcel Dekker, New York, 2000).
Dr. Joseph van der Gracht
Holospex, Inc.
6470 Freetown Rd
Ste 200-104
Columbia, MD 21044
(410) 740-0494
vanderj@holospex.com
Title: Form birefringent Pupil Phase Engineering for Imaging Polarized Objects
Subwavelength diffractive optical design can be used to develop elements that respond differently depending on the
incident polarization orientation. This polarization selectivity has been applied to polarization-selective beam splitters
that can be used to steer light of differing polarization toward different detector elements, thus providing physically
distinct imaging channels at the detector plane. In this work, I propose the use of polarization-selective pupil plane
masks that produce different point spread functions (PSFs) for orthogonal polarizations. The two imaging channels
are imaged onto the same set of detectors and should have sufficient blur to obscure image details. After detection,
image restoration can be applied to selectively enhance only those features from the desired channel.
In the initial simulation study, I borrow from the work of Stossel and George who suggested the use of a PSF
composed of a spatially random distribution of impulse functions to create sufficient blur to hide image detail while
providing a reasonably well-conditioned restoration problem. I show proof-of-concept by simply choosing two
different random distributions for the two PSF’s. I anticipate practical difficulties in the case of dim objects of one
polarization adjacent to bright objects of the orthogonal polarization. More sophisticated pupil phase engineering
design techniques can be applied to address this problem.
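The blur-then-restore concept can be sketched in 1-D. To keep this toy's inverse filter provably well conditioned, a strong central tap is added to the impulse PSF, which departs from the purely random impulse PSFs of Stossel and George; all positions and weights are illustrative, and the data are noise-free.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
scene = rng.random(n)

# PSF: a few scattered impulses plus a dominant central tap.  The
# central tap guarantees |H(k)| >= 0.6 - 5*0.08 = 0.2 at every
# frequency, so plain inverse filtering is stable in this toy.
psf = np.zeros(n)
psf[0] = 0.6
psf[[5, 12, 20, 33, 47]] = 0.08

# Blur by circular convolution, then restore by inverse filtering.
H = np.fft.fft(psf)
blurred = np.fft.ifft(np.fft.fft(scene) * H).real
restored = np.fft.ifft(np.fft.fft(blurred) / H).real
```

With detector noise, the plain inverse filter would be replaced by a Wiener filter, and the conditioning of the two polarization channels' PSFs governs how much a dim channel suffers next to a bright one.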
References:
1. M. S. Mirotznik, D. M. Pustai, D. W. Prather, and J. N. Mait, "Design of Two-Dimensional Polarization-Selective
Diffractive Optical Elements with Form-Birefringent Microstructures," Appl. Opt. 43, 5947-5954 (2004).
2. M. S. Mirotznik, J. van der Gracht, D. Pustai, and S. Mathews, "Design of cubic-phase optical elements using
subwavelength microstructures," Opt. Express 16, 1250-1259 (2008).
3. B.J. Stossel and N. George, “Multiple-point impulse responses: controlled blurring and recovery,” Opt. Comm.,
Volume 121, Number 4, 1 December 1995 , pp. 156-165(10).
JOE VAN DER GRACHT has been forming blurry images from an early age, first accidentally and then intentionally.
The intentional blurring began in 1995 when Dr. van der Gracht co-founded HoloSpex, Inc. with Dr. Ravi Athale in
order to design and manufacture spectacles with computer generated holographic lenses that gently blur the scene
while providing striking holographic reconstructions around each point-like light source in a scene. To date, HoloSpex has
blurred the vision of over 150 million observers. Later in 1995, as an employee of the Army Research Lab, Dr. van der
Gracht performed the first laboratory experiment to validate the seminal wavefront coding work of Ed Dowski and Tom
Cathey. More recently, Dr. van der Gracht has worked on a variety of computational imaging projects including the
PERIODIC multichannel imaging architecture.
[UNC Charlotte campus map: academic buildings, campus housing, dining services, places of interest, parking areas (resident, commuter, and faculty/staff lots and decks), and transit stop legend. Map prepared by Facilities Management, (704) 687-2000, www.uncc.edu.]
Bldg. # 58 – Grigg Hall – Optics Center
Bldg. # 57 – Duke Centennial Hall
Lunch and Refreshments
Daily Workshop Meetings held in Rm # 345
Bldg. # 544 – Witherspoon Residence Hall
Overnight Accommodations, Breakfast and Evening Meals
Emergency Contact Information
Campus Police – from campus
Emergency dial 911
Non-emergency 7-2200
From off campus dial 704-687-2200
Carolinas Medical Center (University Hospital)
704-568-6000
Other Contacts
Witherspoon Residence Hall Front Desk
7-4980
From off campus dial 704-687-4980
Optoelectronics Center
7-8117
From off campus dial 704-687-8117
Scott Williams Cell Phone
704-315-1333
Off Campus Transportation
Crown Cab
704-334-6666
Yellow Cab
704-332-6161