Form for Basic Research Challenge Topic

Basic Research Challenge Proposal Form
1. Program Officer Name / Code: Reza Malek-Madani, Don Wagner, Tristan Nguyen/31
2. Brief Topic Description –
Compressive Sensing for Networked Information Processing. The classical Shannon sampling theorem provides a
criterion for the reconstruction of band-limited signals: if the signal’s spectrum is known to have compact support,
the signal can be reconstructed from uniform samples taken at the so-called Nyquist rate, which is proportional to
the extent of that spectral support. This result, which in essence states that uniform sampling loses no information
provided one samples at least twice as fast as the signal’s bandwidth, works very well in practice when the
bandwidth is small, but is quite impractical in many applications where the Nyquist rate is prohibitively large.
Compressive sensing is a relatively new tool that reconstructs a signal from samples by exploiting its sparsity
rather than its bandwidth. The key new information, sparsity, is the a priori knowledge that the signal has
relatively few large coefficients when expanded in a suitable basis (a wavelet basis, say). This knowledge is then
exploited by sampling the signal not with point samples but with more general random linear functionals, that is,
random linear projections of the signal. These samples are then fed to an optimization process that reconstructs
the signal, typically from considerably fewer samples than the Nyquist criterion would dictate. The goal of this
initiative is to develop the rigorous
theoretical and computational tools that are necessary to make compressive sensing an effective tool for signal
processing involving networked sensors. This tool will be particularly effective in settings where multiple,
relatively inexpensive sensors are employed, as in Counter-IED operations, or where the signal is generated by
frequency hopping, as in most cell-phone transmissions.
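To make the sampling model concrete, the following minimal sketch (in Python with NumPy; the dimensions, the Gaussian measurement ensemble, and the identity stand-in for a wavelet basis are illustrative assumptions, not part of this proposal) acquires m random linear projections of a k-sparse signal, with m far below the Nyquist count n.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 512    # Nyquist-rate sample count over the observation window
    k = 12     # the signal has only k large coefficients (k-sparse)

    # Signal that is sparse in an orthonormal basis Psi; the identity matrix
    # stands in here for a wavelet basis purely to keep the sketch short.
    Psi = np.eye(n)
    a = np.zeros(n)
    a[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    x = Psi @ a

    # Compressive sampling: m random linear functionals (inner products with
    # random vectors), with m a small multiple of k rather than of n.
    m = 5 * k
    Phi = rng.normal(size=(m, n))
    y = Phi @ x    # the m compressive measurements sent to the decoder

    print(f"Nyquist samples: {n}, compressive measurements: {m}")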
3. Background (What is the current state of the art?)
The basic mathematical theory of compressive sensing can be found in the two seminal papers by Donoho [1] and
by Candes, Romberg and Tao [2]. In addition to the theorems that describe reconstruction of signals and the
necessary sample size, these papers contain algorithms for such reconstructions. Compressive sensing begins
with the premise that natural signals are often sparse when represented in bases such as wavelets. For this reason
wavelets have proved to be very effective in compressing natural signals, especially images. One of the main
computational costs of wavelet compression is determining which coefficients exceed a given threshold;
one typically computes thousands of coefficients, of which only a small percentage, say 1%, turn out to be
large enough to meet the threshold criterion. While we gain an enormous amount of compression by
keeping just 1% of the wavelet coefficients, we have essentially discarded 99% of the data we have
sampled, and at a huge computational cost: the kind of computation that cannot easily be implemented on the
inexpensive sensors we have in mind. On the other hand, reconstruction of the signal from that 1% of its
coefficients is quite fast, which is one of the main attractions of the wavelet compression algorithm; on a
desktop computer, images are reconstructed in microseconds, as our daily experience attests.
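The threshold-and-keep procedure described above can be sketched as follows. This is a minimal illustration assuming Python with NumPy and the PyWavelets package; the test signal is built, by construction, from about 1% of the db4 wavelet coefficients, and all dimensions are placeholders.

    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    n = 1024

    # Test signal that is sparse by construction in the db4 wavelet basis:
    # take all-zero coefficients, set ~1% of them, and invert the transform.
    zeros = pywt.wavedec(np.zeros(n), 'db4', mode='periodization', level=6)
    arr, slices = pywt.coeffs_to_array(zeros)
    arr[rng.choice(arr.size, 10, replace=False)] = 5.0 * rng.normal(size=10)
    signal = pywt.waverec(pywt.array_to_coeffs(arr, slices, output_format='wavedec'),
                          'db4', mode='periodization')

    # Classical wavelet compression: compute ALL coefficients first, then keep
    # only the ~1% whose magnitude meets the threshold.
    carr, cslices = pywt.coeffs_to_array(
        pywt.wavedec(signal, 'db4', mode='periodization', level=6))
    k = max(1, int(0.01 * carr.size))
    thr = np.sort(np.abs(carr))[-k]
    kept = np.where(np.abs(carr) >= thr, carr, 0.0)

    # Reconstruction from the surviving 1% is fast and essentially exact here,
    # even though every coefficient had to be computed before thresholding.
    rec = pywt.waverec(pywt.array_to_coeffs(kept, cslices, output_format='wavedec'),
                       'db4', mode='periodization')
    print("kept", k, "of", carr.size, "coefficients; relative error:",
          np.linalg.norm(rec - signal) / np.linalg.norm(signal))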
By contrast, compressive sensing combines sensing and compression in its sampling strategy to overcome the
wastefulness of first over-sampling a signal and then discarding a large percentage of it. This strategy has two
components to it. First, instead of point sampling, the signal is sampled by applying random projections (inner
products). The remarkable mathematical theory described in [1] and [2] shows that the number of random
projections needed is a constant multiple of the (small number of) coefficients in the sparse representation of the
signal. Second, the signal is reconstructed using optimization tools (the reconstruction process is quite ill-posed
and requires additional constraints), most commonly by applying convex optimization (linear programming, see
[3]). The power of this technique is illustrated in the two figures below, which show a signal and its
reconstruction using compressed sensing; the original signal is a sum of twenty wavelet functions, and the
reconstruction is achieved from about three dozen random projections followed by convex optimization.
[Figure: Original Signal and Reconstructed Signal]
In contrast with reconstruction from wavelet compression, which as noted above typically takes microseconds, compressive sensing reconstruction can take several orders of magnitude longer.
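The reconstruction step described above, l_1 minimization cast as a linear program, can be sketched as follows. This is a minimal illustration, not the setup used to produce the figures: it assumes Python with NumPy and SciPy's linprog solver, uses a signal that is sparse in the canonical basis rather than a wavelet basis, and uses illustrative dimensions (a 20-sparse signal of length 256 measured with 100 random projections).

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    n, s, m = 256, 20, 100          # signal length, sparsity, number of projections

    # A 20-sparse signal (sparse in the canonical basis for simplicity).
    x_true = np.zeros(n)
    x_true[rng.choice(n, s, replace=False)] = rng.normal(size=s)

    # Sampling: m random linear projections (inner products with Gaussian vectors).
    Phi = rng.normal(size=(m, n)) / np.sqrt(m)
    y = Phi @ x_true

    # Decoding by basis pursuit:  minimize ||x||_1  subject to  Phi x = y.
    # Cast as a linear program by splitting x = u - v with u, v >= 0.
    c = np.ones(2 * n)
    A_eq = np.hstack([Phi, -Phi])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method='highs')
    x_hat = res.x[:n] - res.x[n:]

    print("relative reconstruction error:",
          np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))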
4. Objectives (What challenges does the topic address? What are the expected outcomes?)
Compressed Sensing has demonstrated the capability of recovering certain signals with far fewer samples than
required by the classical Shannon model. The signals captured are those that have a sparse
representation with respect to some representation system (basis or frame). This new model for signals meshes
well with application domains such as surveillance and sensing, especially when a network of multiple sensors
with diverse capabilities is involved. Compressed sensing has the potential to impact our computational
ability in much the same way that multigrid methods and wavelet representations impacted the modeling
communities in the eighties and nineties. Perhaps more importantly, once its boundaries are understood at the
level expected by the end of this initiative, compressed sensing will have a significant impact on the
design of sensors.
5. Approach (Include partnerships with other Codes or Agencies)
This initiative will address several areas:
a) The usefulness of compressed sensing will depend on whether sensors can be built that implement the
compressed sensing theory. This will be problem dependent. In some domains, such as medical imaging, the
capability of implementing compressed sensing has already been demonstrated. In other areas of DoD interest,
such as acoustics and electromagnetic imaging, new research is needed to identify the boundaries of
effectiveness of compressed sensing in relation to the cost of measurements.
b) One of the challenges in compressed sensing is to find more user-friendly compressed sensing systems.
Ideally, this would replace random sensing with a deterministic sensing system. Whether this is possible is still
not known. If we are not able to design the linear functionals deterministically, we need to understand what
sort of randomness we can actually implement in a circuit. In practice, random number generators are
pseudo-random generators, and the compressed sensing methodology does not as yet apply to them.
c) Another aspect of compressed sensing is the decoding of the signal from the compressed samples. In this
direction, we need to understand what the fastest possible decoders are. Most likely, the fastest decoding will be
achieved only when the sensing stage is designed in concert with the decoder.
d) Currently, several optimization techniques are used to reconstruct signals, the most popular being l_1
convex optimization (linear programming). In the past twelve months, other optimization methods,
including “greedy pursuit” and weighted l_2 optimization, have been introduced, all with credible success;
a sketch of one such greedy decoder appears after this list. An important research question is which
optimization method will ultimately prove best for this inherently ill-posed problem.
e) It is important, and within reach, to understand what role compressive sensing will play in classification
and image segmentation for problems where hyperspectral information is available (see [4]).
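As referenced in item d), the following is a minimal sketch of one greedy-pursuit decoder, orthogonal matching pursuit, assuming Python with NumPy; the dimensions and the Gaussian measurement ensemble are illustrative, and the sparsity level is assumed known to the decoder.

    import numpy as np

    def omp(Phi, y, s):
        """Orthogonal matching pursuit: greedily recover an s-sparse signal x
        from y = Phi @ x by repeatedly selecting the column of Phi most
        correlated with the current residual."""
        n = Phi.shape[1]
        support, residual = [], y.copy()
        x_hat = np.zeros(n)
        for _ in range(s):
            idx = int(np.argmax(np.abs(Phi.T @ residual)))   # best new column
            if idx not in support:
                support.append(idx)
            coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            x_hat = np.zeros(n)
            x_hat[support] = coef                            # least-squares fit on support
            residual = y - Phi @ x_hat                       # unexplained part of y
        return x_hat

    # Illustrative use: a 20-sparse signal of length 256 from 128 random projections.
    rng = np.random.default_rng(1)
    n, s, m = 256, 20, 128
    x_true = np.zeros(n)
    x_true[rng.choice(n, s, replace=False)] = rng.normal(size=s)
    Phi = rng.normal(size=(m, n)) / np.sqrt(m)
    x_hat = omp(Phi, Phi @ x_true, s)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))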
6. Potential Naval Relevance (How could the knowledge or discovery contribute to the Naval S&T strategy?)
The delivery of a rigorous set of analytical and computational tools will lead to the design of new sensors. It will
also contribute significantly to our ability to fuse information from networked sensors.
7. Risks/ Challenges (Is the topic pushing scientific frontiers? Is it technically feasible?)
This initiative will push the development of the theory of compressed sensing to a new level where the specific
DoD and Navy applications will require obtaining optimal bounds and robust algorithms. It will also lead to a
focused collaboration among mathematicians, computer scientists and sensor designers.
8. What makes this a new topic area for ONR? Does it have the potential to attract new performers?
Compressed sensing is arguably the most exciting new mathematical discovery of the past five years. The level of activity,
especially by some of the most gifted mathematical analysts and computer scientists, makes this topic ripe for
investment where the quality of deliverables will undoubtedly be high. This topic is particularly appropriate for
ONR Code 31 because at its foundation it requires contributions from Computational Analysis and Optimization
Theory, two subareas that have had spectacular records of innovation in the past two decades.
9. Four Year Budget (Include any unique facilities or resources required)
The research primarily involves developing theoretical and computational tools. Its cost lies mainly in supporting
graduate students and postdoctoral fellows at the major institutions that have demonstrated expertise in the
disciplines of computational analysis and optimization. We estimate that a budget of $1,000K per year is needed
for four years.
10. References:
[1] D. Donoho, “Compressed sensing,” IEEE Trans. Inform. Theory, vol. 52, no. 4, pp. 1289–1306, April 2006.
[2] E. Candes, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly
incomplete frequency information,” IEEE Trans. Inform. Theory, vol. 52, no. 2, pp. 489–509, Feb. 2006.
[3] D. Donoho, “For most large underdetermined systems of linear equations the minimal l_1-norm solution is
also the sparsest solution,” Comm. Pure Appl. Math., vol. 59, pp. 797–829, 2006.
[4] R. Willett, M. Gehm, and D. Brady, “Multiscale reconstruction for computational spectral imaging,” in Proc.
Computational Imaging V at SPIE Electronic Imaging, San Jose, CA, Jan. 2007.