
International Journal of Engineering Trends and Technology (IJETT) – Volume 9 Number 15 - Mar 2014
Compressive Sensing & Its Various Applications
Prateek Paliwal #1, Manish Sharma*2
#* SOE, Sanghvi Institute of Management & Science, Indore (M.P.), India.
Abstract- Conventional techniques for sampling signals follow Shannon's theorem: to reconstruct a signal without error, the sampling rate must be at least the Nyquist rate. This paper surveys the emerging theory of compressed sensing, also known as compressive sampling or CS, which shows that these traditional approaches can be far from optimal. Remarkably, it is possible to reconstruct images or signals of scientific interest accurately, and sometimes even exactly, from a number of samples far smaller than the desired resolution of the signal. Compressive sensing has far-reaching implications. For instance, it suggests new data acquisition protocols that translate analog information into digital form using fewer measurements than was previously considered necessary. This new sensing paradigm may lead to fundamental methods for sampling and compressing data at the same time. In this brief overview, we explain some of the important mathematical insights underlying this new theory and discuss some of the interactions between compressive sampling and other fields such as statistics, information theory, coding theory, and theoretical computer science. Compressive sensing is one of the newest tools for simultaneous sensing and compression of data. It enables a significant reduction in the sampling and computation costs for signals having a sparse representation in some basis.
Keywords- Compressive sensing, sparsity, underdetermined
systems of linear equations, applications of compressive sensing.
I. INTRODUCTION
One of the crucial facts of signal processing is the Nyquist/Shannon sampling theory, which states that the number of samples needed to reconstruct a signal without error is dictated by its bandwidth – the length of the shortest interval which contains the support of the spectrum of the signal under study. In contrast to this fact, an alternative theory of "compressive sampling" has emerged which shows that super-resolved signals and images can be reconstructed from far fewer data/measurements than what is usually considered necessary. The aim of this paper is to survey and present some of the essential mathematical insights behind this new theory. An attractive aspect of compressive sensing is that it has significant interactions with, and bearings on, several fields in the applied sciences and engineering such as statistics, information theory, coding theory, theoretical computer science, and others as well. Compressive sensing investigates the recovery of a signal that can be sparsely represented over a complete basis, given a small number of linear combinations of the signal. Compressive sensing is a paradigm for acquiring signals and has a wide range of applications. The basic premise is that one can recover a sparse or compressible signal from far fewer measurements than traditional methods require [1].
From a general perspective, sparsity and compressibility play a fundamental role in many fields of science. Sparsity leads to efficient estimation; for example, the quality of estimation by thresholding or shrinkage algorithms depends on the sparsity of the signal we wish to estimate. Sparsity leads to efficient compression; for example, the precision of a transform coder depends on the sparsity of the signal we wish to encode [2]. Sparsity leads to dimensionality reduction and efficient modelling. The novelty here is that sparsity also has bearing on the data acquisition process itself and leads to efficient data acquisition protocols: compressive sensing shows how to translate analog data directly into an already compressed digital form, requiring fewer resources or costing less money [3], [4]. Because typical signals have some structure, they can be compressed efficiently without much loss. For instance, modern transform coders such as JPEG2000 exploit the fact that many signals have a sparse representation in a fixed basis, meaning that one can store or transmit only a small number of adaptively chosen transform coefficients rather than all the signal samples. The way this typically works is that one acquires the full signal, computes the complete set of transform coefficients, encodes the largest coefficients, and discards all the others. This process of massive data acquisition followed by compression is extremely wasteful. This raises a fundamental question: because most signals are compressible, why spend so much effort acquiring all the data when we know that most of it will be discarded? Could it be possible to acquire the data in already compressed form so that one does not need to throw anything away? "Compressive sampling", also known as "compressed sensing" [3], shows that this is indeed possible.
This is the theory of compressive sampling, a paradigm that goes against the common wisdom in data acquisition. The compressive sensing paradigm asserts that one can recover signals from fewer samples or measurements than traditional methods use. CS theory relies on two principles: sparsity, which pertains to the signals of interest, and incoherence, which pertains to the sensing modality.
■ Sparsity expresses the idea that the "information rate" of a continuous-time signal may be much smaller than suggested by its bandwidth, or that a discrete-time signal depends on a number of degrees of freedom which is comparably much smaller than its length. More precisely, CS exploits the fact that many natural signals are sparse or compressible in the sense that they have concise representations when expressed in the proper basis Ψ.
■ Incoherence extends the duality between time and frequency and expresses the idea that objects having a sparse representation in Ψ must be spread out in the domain in which they are acquired. Incoherence asserts that, unlike the signal of interest, the sensing waveforms have an extremely dense representation in Ψ.
The important observation is that one can design efficient sensing protocols that capture the useful information content embedded in a sparse signal and condense it into a small amount of data. These protocols are non-adaptive and simply require correlating the signal with a small number of fixed waveforms that are incoherent with the basis in which the signal is sparse. What is most remarkable about these sampling protocols is that they allow a sensor to very efficiently capture the information in a sparse signal without trying to comprehend that signal. Moreover, there is a way to use numerical optimization to reconstruct the full-length signal from the small amount of collected data. In other words, CS is a very simple and efficient signal acquisition protocol which samples - in a signal-independent fashion - at a low rate and later uses computational power for reconstruction from what appears to be an incomplete set of measurements.
II. UNDERSAMPLED MEASUREMENTS
Consider the general problem of reconstructing a vector x ∈ R^N from linear measurements y about x of the form

y_k = ⟨x, φ_k⟩, k = 1, . . . , K,   or   y = Φx.   (1)

That is, we acquire information about the unknown signal by sensing x against K vectors φ_k ∈ R^N. We are interested in the "underdetermined" case K << N, where we have many fewer measurements than unknown signal values. Problems of this type arise in a countless number of applications. In radiology and biomedical imaging, for instance, one is typically able to collect far fewer measurements about an image of interest than the number of unknown pixels. In wideband radio frequency signal analysis, one may only be able to acquire a signal at a rate much lower than the Nyquist rate because of current limitations in analog-to-digital converter technology.
At first glance, solving the underdetermined system of equations appears hopeless, as it is easy to construct examples for which it clearly cannot be done. But suppose now that the signal x is compressible, meaning that it essentially depends on a number of degrees of freedom which is smaller than N. For instance, suppose our signal is sparse in the sense that it can be written, either exactly or accurately, as a superposition of a small number of vectors in some fixed basis. Then this assumption radically changes the problem, making the search for solutions feasible. Indeed, accurate and sometimes exact recovery is possible by solving a simple convex optimization problem.
III. MATHEMATICS OF COMPRESSIVE SENSING
A. Sparsity and Incoherence
In all that follows, we adopt an abstract and general point of view when discussing the recovery of a vector x ∈ R^N. In practical instances, the vector x may be the coefficients of a signal f ∈ R^N in an orthonormal basis Ψ:

x_i = ⟨f, ψ_i⟩, i = 1, . . . , N.   (2)

For example, we might choose to expand the signal as a superposition of spikes (the canonical basis of R^N), sinusoids, B-splines, wavelets [5], and so on. As a side note, it is not essential to restrict attention to orthogonal expansions, as the theory and practice of compressive sampling accommodate other types of expansions. For example, x might be the coefficients of a digital image in a tight frame of curvelets [6]. To keep using convenient matrix notation, one can write the decomposition (2) as x = Ψf, where Ψ is the N by N matrix with the waveforms ψ_i as rows, or equivalently, f = Ψ*x.
We will say that a signal f is sparse in the Ψ-domain if the corresponding coefficient sequence is supported on a small set, and compressible if the sequence is concentrated near a small set. Suppose we have available undersampled data about f of the same form as before, y_k = ⟨f, φ_k⟩, k = 1, . . . , K. Expressed differently, we collect partial information about x via y = Φ′x, where Φ′ = ΦΨ*. In this setup, one would recover f by finding - among all coefficient sequences consistent with the data - the decomposition with minimum ℓ1-norm:

min ‖x̃‖_ℓ1   subject to   Φ′x̃ = y.   (P1)
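As an illustration, (P1) can be solved with off-the-shelf linear programming by splitting x̃ into its positive and negative parts. The following minimal sketch (assuming NumPy and SciPy are available; the problem sizes and the Gaussian sensing matrix are illustrative choices, not specified in the paper) recovers an S-sparse signal from K << N random measurements:

```python
# A minimal sketch of the l1 recovery program (P1). Writing x = u - v
# with u, v >= 0 turns  min ||x||_1  s.t.  Phi x = y  into a standard LP.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, K, S = 128, 48, 5                 # length, measurements, sparsity (illustrative)

x_true = np.zeros(N)
x_true[rng.choice(N, S, replace=False)] = rng.standard_normal(S)

Phi = rng.standard_normal((K, N)) / np.sqrt(K)   # Gaussian sensing matrix
y = Phi @ x_true                                 # K << N measurements

c = np.ones(2 * N)                   # objective: sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([Phi, -Phi])        # Phi @ (u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

With K on the order of S · log(N/S) Gaussian measurements, the recovered x_hat typically matches x_true to solver precision.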
With this in mind, the key concept underlying the theory of
compressive sampling is a kind of uncertainty relation, which
we explain next.
B. Recovery of sparse signals
In [4], Candès and Tao introduced the notion of the uniform uncertainty principle (UUP), which they refined in [7]. The UUP essentially states that the K × N sensing matrix Φ obeys a "restricted isometry hypothesis." Let Φ_T, T ⊂ {1, . . . , N}, be the K × |T| submatrix obtained by extracting the columns of Φ corresponding to the indices in T; then [8] defines the S-restricted isometry constant δ_S of Φ as the smallest quantity such that

(1 − δ_S) ‖c‖² ≤ ‖Φ_T c‖² ≤ (1 + δ_S) ‖c‖²

for all subsets T with |T| ≤ S and all coefficient sequences (c_j), j ∈ T. This property essentially requires that every set of columns of cardinality less than S approximately behaves like an orthonormal system. An important result is that if the columns of the sensing matrix Φ are approximately orthogonal in this sense, then the exact recovery phenomenon occurs.
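To make the restricted isometry property concrete, the following small experiment (an assumed setup, not taken from the paper) estimates δ_S empirically for a Gaussian matrix by sampling random column subsets and checking how far their squared singular values stray from 1:

```python
# Empirical peek at the S-restricted isometry constant of a Gaussian
# matrix: random K x S submatrices should act nearly orthonormally.
import numpy as np

rng = np.random.default_rng(1)
N, K, S = 512, 128, 10
Phi = rng.standard_normal((K, N)) / np.sqrt(K)

worst = 0.0
for _ in range(200):                          # sample random supports T
    T = rng.choice(N, S, replace=False)
    sv2 = np.linalg.svd(Phi[:, T], compute_uv=False) ** 2
    worst = max(worst, abs(sv2.max() - 1), abs(1 - sv2.min()))

print("empirical lower bound on delta_S:", worst)   # typically well below 1
```

This only lower-bounds δ_S (it samples a few supports rather than all of them), but it illustrates why random matrices are such good candidates.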
C. Recovery of compressible signals
In practice, signals may not be supported on a set of small size. Instead, they may only be concentrated near a sparse set. For example, a common model in signal processing assumes that the coefficients of elements taken from a signal class decay rapidly, typically like a power law. Smooth signals, piecewise-smooth signals, and images with bounded variation are all of this type [2]. A natural question is how one can recover a signal that is only nearly sparse. For an arbitrary vector x in R^N, denote by x_S its best S-sparse approximation; that is, x_S is the approximation obtained by keeping the S largest entries of x and setting the others to zero. It turns out that if the sensing matrix obeys the UUP at level S, then the reconstruction error is not much worse than ‖x − x_S‖, the error one would incur if the locations and amplitudes of the S largest entries were known exactly.
D. Random matrices
All of this would be academic unless one could design a sensing matrix which allows us to recover as many of the large entries of x as possible with as few measurements K as possible. We would like the condition δ_2S + θ_S,2S < 1 to hold for large values of S, ideally of the order of K. This poses a design problem. How should one design a matrix Φ - that is to say, a collection of N vectors in K dimensions - so that any subset of columns of size about S is approximately orthogonal? And for what values of S is this possible?
While it might be difficult to exhibit a matrix which provably obeys the UUP for very large values of S, we know that simple randomized constructions will do so with overwhelming probability. We give an example. Sample N vectors on the unit sphere of R^K independently and uniformly at random. Then the condition holds for S = O(K/log(N/K)) with probability 1 − π_N, where π_N = O(e^(−γN)) for some γ > 0. The reason why this holds may be explained by a sort of "blessing of high-dimensionality": because the high-dimensional sphere is mostly empty, it is possible to pack many vectors while maintaining approximate orthogonality.
■ Gaussian measurements. Here we assume that the entries of the K by N sensing matrix Φ are independently sampled from the normal distribution with mean zero and variance 1/K. Then, if

S ≤ C · K/log(N/K),   (5)

Φ obeys the condition with probability 1 − O(e^(−γN)) for some γ > 0. The proof uses known concentration results about the singular values of Gaussian matrices [8], [9].
■ Binary measurements. Suppose that the entries of the K by N sensing matrix Φ are independently sampled from the symmetric Bernoulli distribution P(Φ_ki = ±1/√K) = 1/2. Then it is conjectured that the conditions are satisfied with probability 1 − O(e^(−γN)) for some γ > 0, provided that S obeys (5). The proof of this fact would probably follow from new concentration results about the smallest singular value of a subgaussian matrix [10]. Note that the exact reconstruction property for S-sparse signals with S obeying (5) is known to hold for binary measurements [4].
■ Fourier measurements. Suppose now that Φ is a partial Fourier matrix obtained by selecting K rows uniformly at random and renormalizing the columns so that they are unit-normed. Then Candès and Tao [4] showed that the UUP holds with overwhelming probability if S ≤ C · K/(log N)^6. Recently, Rudelson and Vershynin [11] improved this result to S ≤ C · K/(log N)^4. This result is nontrivial and uses sophisticated techniques from geometric functional analysis and probability in Banach spaces. It is conjectured that S ≤ C · K/log N holds.
■ Incoherent measurements. Suppose now that Φ is obtained by selecting K rows uniformly at random from an N by N orthonormal matrix U and renormalizing the columns so that they are unit-normed. As before, we may think of U as the matrix ΦΨ* which maps the object from the Ψ-domain to the Φ-domain. Then the arguments used in [4], [11] to prove that the UUP holds for incomplete Fourier matrices extend to this more general situation. In particular, the UUP holds with overwhelming probability provided that

S ≤ C · (1/μ²) · K/(log N)^6,

where μ := √N · max_{i,j} |U_ij| (observe that for the Fourier matrix μ = 1, which gives the result in the special case of the Fourier ensemble above). With U = ΦΨ*, μ = √N · max_{k,j} |⟨φ_k, ψ_j⟩|, which is referred to as the mutual coherence between the measurement basis Φ and the sparsity basis Ψ [12], [13]. The greater the incoherence of the measurement/sparsity pair (Φ, Ψ), the smaller the number of measurements needed.
In short, one can establish the UUP for a few interesting
random ensembles and we expect that in the future, many
more results of this type will become available.
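As a hedged numerical illustration (an assumed toy setup), the spike/Fourier pair attains the smallest possible coherence μ = 1, which a few lines of NumPy confirm:

```python
# Mutual coherence mu = sqrt(N) * max |<phi_k, psi_j>| for the
# spike/Fourier pair, which is maximally incoherent (mu = 1).
import numpy as np

N = 64
Psi = np.eye(N)                               # spike (canonical) basis
U = np.fft.fft(np.eye(N)) / np.sqrt(N)        # unitary Fourier matrix

mu = np.sqrt(N) * np.max(np.abs(U @ Psi))
print("mutual coherence:", mu)                # prints 1.0
```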
E. Optimality
It is interesting to specialize our recovery theorems to selected measurement ensembles now that we have established the UUP for concrete values of S. Consider the Gaussian measurement ensemble in which the entries of Φ are i.i.d. N(0, 1/K). Our results say that one can recover any S-sparse vector from a random projection of dimension about O(S · log(N/S)); see also [14]. Next, suppose that x is taken from a weak-ℓp ball of radius R for some 0 < p < 1, or from the ℓ1-ball of radius R for p = 1. Then we have shown that for all x ∈ wℓp(R), the reconstruction x* obeys

‖x* − x‖_ℓ2 ≤ C · R · (K/log(N/K))^(1/2 − 1/p),   (8)
a bound which has also been proven in [3]. Can one find a possibly adaptive set of measurements and a reconstruction algorithm that would yield a better bound than (8)? By adaptive, we mean that one could use a sequential measurement procedure where, at each stage, one would have the option to decide which linear functional to use next based on the data collected up to that stage.
IV. ROBUST COMPRESSIVE SENSING
In any realistic application, we cannot expect to measure Φx without any error, and we now turn our attention to the robustness of compressive sampling vis-à-vis measurement errors. This is a very important issue because any real-world sensor is subject to at least a small amount of noise, and one thus immediately understands that, to be widely applicable, the methodology needs to be stable: small perturbations in the observed data should induce small perturbations in the reconstruction. Fortunately, the recovery procedures may be adapted to be surprisingly stable and robust vis-à-vis arbitrary perturbations. Suppose our observations are inaccurate and consider the model

y = Φx + e,

where e is a stochastic or deterministic error term with bounded energy ‖e‖_ℓ2 ≤ ε. Because we have inaccurate measurements, we now use a noise-aware variant of (P1) which relaxes the data fidelity term. We propose a reconstruction program of the form

min ‖x̃‖_ℓ1   subject to   ‖y − Φx̃‖_ℓ2 ≤ ε.   (P2)

The difference with (P1) is that we only ask the reconstruction to be consistent with the data in the sense that y − Φx̃ be within the noise level. The program (P2) has a unique solution, is again convex, and is a special instance of a second-order cone program (SOCP) [15].
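As a sketch, (P2) can be posed directly as an SOCP with a convex modeling package such as CVXPY (assumed available here; the problem sizes and noise level are illustrative assumptions):

```python
# A hedged sketch of the noise-aware program (P2) as an SOCP.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
N, K, S, eps = 128, 48, 5, 0.05

x_true = np.zeros(N)
x_true[rng.choice(N, S, replace=False)] = rng.standard_normal(S)
Phi = rng.standard_normal((K, N)) / np.sqrt(K)

e = rng.standard_normal(K)
e *= 0.5 * eps / np.linalg.norm(e)        # perturbation with ||e|| <= eps
y = Phi @ x_true + e                      # noisy measurements

x = cp.Variable(N)
prob = cp.Problem(cp.Minimize(cp.norm1(x)),
                  [cp.norm(y - Phi @ x, 2) <= eps])
prob.solve()                              # solved as a second-order cone program
print("reconstruction error:", np.linalg.norm(x.value - x_true))
```

The reconstruction error degrades gracefully with ε, which is exactly the stability property described above.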
V. APPLICATIONS
In practice, many signals are sparse or compressible. Hence compressive sensing has a broad range of applications and extensions in many areas, ranging from medicine and coding theory to astronomy and geophysics. Because sparse signals arise in many natural phenomena, compressed sensing lends itself to a wide variety of situations. Three main applications of the theory are error correction, imaging, and radar.
A. Error Correction
When signals are sent from one place to another, they are encoded and pick up errors along the way. Since the errors typically occur in only a few places, sparse recovery can be applied to reconstruct the signal from the corrupted encoded data [17]. The error correction problem is a classic problem in coding theory. The theory usually assumes that data values lie in some finite field, but there are many practical applications for encoding over the continuous reals. In digital communications, for example, one may wish to protect the results of onboard computations that are real-valued. These computations are performed by circuits that are subject to faults caused by external factors. These and many other examples are hard real-world error correction problems.
The error correction problem is set up as follows. Consider an m-dimensional input vector f ∈ R^m that we wish to transmit reliably to a remote receiver. In coding theory, this is called the "plaintext". We transmit the measurements z = Af (the "ciphertext"), where A is the d × m measurement matrix, or the linear code. It is clear that if the linear code A has full rank, we can recover the input vector f from the ciphertext z. But, as is often the case in practice, we consider the setting where the ciphertext z has been corrupted. We then wish to reconstruct the input signal f from the corrupted measurements z′ = Af + e, where e ∈ R^d is a sparse error vector. To recast this in the usual compressed sensing setting, consider a matrix B whose kernel is the range of A. Applying B to both sides of the equation z′ = Af + e gives Bz′ = Be. Setting y = Bz′, the problem becomes one of reconstructing the sparse vector e from its linear measurements y. Once we have recovered the error vector e, we have access to the actual measurements Af, and since A has full rank, we can recover the input signal f.
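A small end-to-end sketch of this decoding scheme follows (the dimensions, the Gaussian code A, and the helper names are illustrative assumptions, not from the paper): B is built so that its kernel is the range of A, the sparse error e is found by ℓ1 minimization, and f is then recovered by least squares.

```python
# Sketch of error correction via sparse recovery: B @ A = 0, so
# B z' = B e, and the sparse error e is recovered by l1 minimization.
import numpy as np
from scipy.linalg import null_space
from scipy.optimize import linprog

rng = np.random.default_rng(3)
m, d, s = 32, 96, 4                        # plaintext dim, code length, #errors

A = rng.standard_normal((d, m))            # the linear code (full rank w.p. 1)
f = rng.standard_normal(m)                 # plaintext
e = np.zeros(d)
e[rng.choice(d, s, replace=False)] = rng.standard_normal(s)
z_corrupt = A @ f + e                      # corrupted ciphertext

B = null_space(A.T).T                      # rows span range(A)^perp, so B @ A = 0
y = B @ z_corrupt                          # = B @ e

res = linprog(np.ones(2 * d), A_eq=np.hstack([B, -B]), b_eq=y,
              bounds=[(0, None)] * (2 * d))
e_hat = res.x[:d] - res.x[d:]
f_hat = np.linalg.lstsq(A, z_corrupt - e_hat, rcond=None)[0]
print("plaintext error:", np.linalg.norm(f_hat - f))
```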
B. Imaging
Image processing is probably one of the areas that has embraced compressive sensing most enthusiastically. Most images are sparse with respect to some basis. Because of this, many applications in imaging are able to take advantage of the machinery provided by compressed sensing. The typical digital camera today records every pixel in an image before compressing that data and storing the compressed image. Thanks to silicon sensors, digital cameras can operate in the megapixel range. A natural question arises: why acquire this very large amount of data just to throw most of it away immediately? This question ignited the emerging theory of compressive imaging.
In this new framework, the idea is to directly acquire random linear measurements of an image without the troublesome step of capturing every pixel first. Several issues then arise. The first is how to reconstruct the image from its random linear measurements. The second is how to actually acquire the random linear measurements without first capturing the entire image. Compressed sensing supplies the solution to both. The single-pixel compressive sampling camera also operates over a much broader range of the light spectrum than traditional cameras that use silicon. For example, because silicon cannot capture a wide range of the spectrum, a digital camera that captures infrared images is much more complex and costly.
Compressed sensing is also applied in medical imaging, in particular to magnetic resonance (MR) images, which sample Fourier coefficients of an image [17]. These images are typically sparse and can thus exploit the theory of compressed sensing. MR imaging is normally very time consuming, as the speed of data collection is limited by many constraints. Thus it is extremely beneficial to reduce the number of measurements collected without sacrificing quality of the MR image [17], [18]. Compressed sensing provides exactly this, and many compressed sensing algorithms have been designed specifically with MR images in mind.
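To mirror this setting in one dimension, the following hedged sketch (an assumed toy setup, using the DCT in place of the MR Fourier ensemble) recovers a transform-sparse signal from a random subset of its samples via the same ℓ1 program:

```python
# Toy 1D analogue of compressive imaging: a signal sparse in the DCT
# domain is recovered from K random point samples via l1 minimization.
import numpy as np
from scipy.fft import idct
from scipy.optimize import linprog

rng = np.random.default_rng(4)
N, K, S = 256, 80, 8

x = np.zeros(N)                                  # sparse DCT coefficients
x[rng.choice(N, S, replace=False)] = rng.standard_normal(S)
Psi_star = idct(np.eye(N), axis=0, norm="ortho") # synthesis matrix: f = Psi* x
f = Psi_star @ x                                 # the "image" (here a 1D signal)

rows = rng.choice(N, K, replace=False)           # K random samples of f
A = Psi_star[rows, :]                            # Phi' = Phi Psi*
y = f[rows]

res = linprog(np.ones(2 * N), A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]
print("coefficient error:", np.linalg.norm(x_hat - x))
```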
C. Radar
There are various other applications of compressed sensing; one more is compressive radar imaging. A radar system transmits some sort of pulse and then uses a matched filter to correlate the received signal with that pulse. The receiver uses a pulse compression system along with a high-rate analog-to-digital (A/D) converter. This conventional approach is not only complicated and expensive, but the resolution of targets in this classical framework is limited by the radar uncertainty principle. Compressive radar imaging tackles these problems by discretizing the time-frequency plane into a grid and considering each possible target scene as a matrix [17]-[19]. If the number of targets is small enough, then the grid will be sparsely populated, and we can employ compressed sensing techniques to recover the target scene.
A compressible signal can be captured efficiently using a number of incoherent measurements [16] that is comparable to its information level S << n. This has far-reaching consequences, and several further possible applications follow.
A. Data compression: In some circumstances, the sparse basis Ψ may be unknown at the encoder or impractical to implement for data compression. As suggested by "random sensing", however, a randomly designed Φ can be considered a universal encoding strategy, as it need not be designed with regard to the structure of Ψ. This universality may be particularly helpful for distributed source coding in multi-signal settings such as sensor networks [20].
B. Channel coding: There are significant connections with the problem of recovering signals from highly incomplete measurements, as explained in [21]. CS principles such as sparsity, randomness, and convex optimization can be used to design fast error-correcting codes that protect against errors during transmission.
C. Inverse problems: In other situations, the only way to capture f may be to employ a measurement system Φ of a certain modality [16]. Nonetheless, if a sparse basis Ψ exists for f that is incoherent with Φ, efficient sensing is still possible. One such application involves MR angiography [22] and other types of MR setups [23], where Φ records a subset of the Fourier transform and the desired image f is sparse in the time or wavelet domains. Lustig et al. [18] discuss this application in more detail.
D. Data acquisition: Finally, in various situations the full collection of n discrete-time samples of an analog signal may be difficult to obtain. It could then be helpful to design physical sampling devices that directly record discrete, low-rate incoherent measurements of the incident analog signal. These applications suggest that mathematical and computational techniques could have an enormous impact in areas where standard hardware design faces significant limitations. For example, conventional imaging devices that use CMOS technology are limited essentially to the visible spectrum. However, a CS camera that collects incoherent measurements using a digital micromirror array could significantly expand these capabilities [24].
VI. FUTURE ENHANCEMENTS
Our objective in this short survey was merely to introduce the new compressive sensing concepts. We described a technique based on an uncertainty principle which gives a powerful and unified treatment of some of the main results underlying this theory. The theory gives conditions for exact, approximate, and stable recovery which are almost necessary. Another benefit is that it makes the exposition reasonably simple. The early papers on compressive sensing [3], [4] have spawned a large and interesting literature in which other approaches and ideas have been proposed. Rudelson and Vershynin have used tools from modern Banach space theory to derive powerful results for Gaussian ensembles [11], [25], [26]. In this area, Pajor and his colleagues have established the existence of abstract reconstruction procedures from subgaussian measurements (including random binary sensing matrices) with powerful reconstruction properties. In a different direction, Donoho and Tanner have leveraged results from polytope geometry to obtain very precise estimates of the minimal number of Gaussian measurements needed to reconstruct S-sparse signals [27], [28]; see also [11]. Tropp and Gilbert reported results about the performance of greedy methods for compressive sampling [29]. Haupt and Nowak have quantified the performance of combinatorial optimization procedures for estimating a signal from undersampled random projections in noisy environments [30]. Finally, Rauhut has worked out variations on the Fourier sampling theorem in which sparse continuous-time trigonometric polynomials are randomly sampled in time [31]. Because of space limitations, we are unfortunately unable to do complete justice to this rapidly growing literature.
We would like to emphasize that there are many aspects of compressive sampling that we have not touched upon. For example, we have not discussed the practical performance of this new theory. In fact, numerical experiments have shown that compressive sampling behaves extremely well in practice. Further, numerical simulations with noisy data show that compressive sampling is very stable and performs well in noisy environments.
We would like to close this article by returning to its main theme: compressive sampling invites us to rethink how data are acquired. If one were to collect a comparably small number of general linear measurements rather than the usual pixels, one could in principle reconstruct an image with essentially the same resolution as one would obtain by measuring all the pixels. Therefore, if one could design incoherent sensors, the payoff could be extremely large, and several teams have already reported progress in this direction. Compressive sampling may also address challenges in the processing of wideband radio frequency signals, since trends in high-speed analog-to-digital converter technology indicate that current capabilities fall well short of needs and that hardware implementations of high-precision Shannon-based conversion seem out of reach for decades to come. Finally, compressive sampling has already found applications in wireless sensor networks [32]. Here, compressive sampling allows energy-efficient estimation of sensor data using comparably few sensor nodes. The power of these estimation schemes is that they require no prior information about the sensed data. All these applications are novel and exciting.
REFERENCES
[1] Yue Mao, "Compressive Sensing," May 2010.
[2] Donoho, D. L., Vetterli, M., DeVore, R. A., Daubechies, I., Data compression and harmonic analysis. IEEE Trans. Inform. Theory 44 (1998), 2435–2476.
[3] Donoho, D. L., Compressed sensing. Technical Report, Stanford University, 2004.
[4] Candès, E. J., Tao, T., Near-optimal signal recovery from random projections and universal encoding strategies. IEEE Trans. Inform. Theory, 2004, submitted.
[5] Mallat, S., A Wavelet Tour of Signal Processing. Academic Press, San Diego, CA, 1998.
[6] Candès, E. J., Donoho, D. L., New tight frames of curvelets and optimal representations of objects with piecewise C2 singularities. Comm. Pure Appl. Math. 57 (2004), 219–266.
[7] Candès, E. J., Tao, T., Decoding by linear programming. IEEE Trans. Inform. Theory 51 (2005), 4203–4215.
[8] Davidson, K. R., Szarek, S. J., Local operator theory, random matrices and Banach spaces. In Handbook of the Geometry of Banach Spaces (ed. W. B. Johnson, J. Lindenstrauss), Vol. I, North-Holland, Amsterdam, 2001, 317–366; Corrigendum, Vol. 2, 2003, 1819–1820.
[9] Szarek, S. J., Condition numbers of random matrices. J. Complexity 7 (1991), 131–149.
[10] Litvak, A. E., Pajor, A., Rudelson, M., Tomczak-Jaegermann, N., Smallest singular value of random matrices and geometry of random polytopes. Manuscript, 2004.
[11] Rudelson, M., Vershynin, R., Sparse reconstruction by convex relaxation: Fourier and Gaussian measurements. Preprint, 2006.
[12] Donoho, D. L., Elad, M., Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization. Proc. Natl. Acad. Sci. USA 100 (2003), 2197–2202.
[13] Elad, M., Bruckstein, A. M., A generalized uncertainty principle and sparse representation in pairs of RN bases. IEEE Trans. Inform. Theory 48 (2002), 2558–2567.
[14] Donoho, D. L., For most large underdetermined systems of linear equations the minimal ℓ1-norm solution is also the sparsest solution. Comm. Pure Appl. Math. 59 (2006), 797–829.
[15] Boyd, S., Vandenberghe, L., Convex Optimization. Cambridge University Press, Cambridge, 2004.
[16] Candès, E. J., Wakin, M. B., "An Introduction to Compressive Sensing." IEEE Signal Processing Magazine, March 2008.
[17] Needell, D., Topics in Compressed Sensing. 2009.
[18] Lustig, M., Donoho, D., Pauly, J. M., Sparse MRI: The application of compressed sensing for rapid MR imaging. Magnetic Resonance Systems Research Laboratory, submitted.
[19] Schulz, A., da Silva, E. A. B., Velho, L., Compressive Sensing.
[20] Baron, D., Wakin, M. B., Duarte, M. F., Sarvotham, S., Baraniuk, R. G., Distributed compressed sensing. Preprint, 2005.
[21] Candès, E., Tao, T., Decoding by linear programming. IEEE Trans. Inform. Theory 51 (12) (2005), 4203–4215.
[22] Candès, E., Romberg, J., Tao, T., Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory 52 (2) (2006), 489–509.
[23] Lustig, M., Donoho, D. L., Pauly, J. M., Rapid MR imaging with compressed sensing and randomly under-sampled 3DFT trajectories. In Proc. 14th Ann. Meeting ISMRM, Seattle, WA, May 2006.
[24] Takhar, D., Bansal, V., Wakin, M., Duarte, M., Baron, D., Kelly, K. F., Baraniuk, R. G., A compressed sensing camera: New theory and an implementation using digital micromirrors. In Proc. Computational Imaging IV at SPIE Electronic Imaging, San Jose, CA, 2006.
[25] Candès, E. J., Rudelson, M., Tao, T., Vershynin, R., Error correction via linear programming. In Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS), IEEE Comput. Soc. Press, Los Alamitos, CA, 2005, 295–308.
[26] Rudelson, M., Vershynin, R., Geometric approach to error-correcting codes and reconstruction of signals. Internat. Math. Res. Notices 2005 (64) (2005), 4019–4041.
[27] Donoho, D. L., Neighborly polytopes and sparse solutions of underdetermined linear equations. Technical Report, Stanford University, 2005.
[28] Donoho, D. L., Tanner, J., Neighborliness of randomly projected simplices in high dimensions. Proc. Natl. Acad. Sci. USA 102 (2005), 9452–9457.
[29] Tropp, J. A., Gilbert, A. C., Signal recovery from partial information via orthogonal matching pursuit. Preprint, University of Michigan, 2005.
[30] Haupt, J., Nowak, R., Signal reconstruction from noisy random projections. IEEE Trans. Inform. Theory, submitted.
[31] Rauhut, H., Random sampling of sparse trigonometric polynomials. Preprint, 2005.
[32] Bajwa, W. U., Haupt, J., Sayeed, A. M., Nowak, R., Compressive wireless sensing. In Proc. 5th Intl. Conf. on Information Processing in Sensor Networks (IPSN '06), Nashville, TN, 2006, 134–142.