2D Sparse Sampling Algorithm for ND Fredholm Equations with Applications to NMR Relaxometry
Ariel Hafftka
Applied Mathematics and Scientific Computation, University of Maryland, College Park
ahafftka@math.umd.edu

Hasan Celik
Laboratory of Clinical Investigation, National Institute on Aging, National Institutes of Health
hasan.celik@nih.gov

Alexander Cloninger
Applied Mathematics, Yale University
alexander.cloninger@yale.edu

Wojciech Czaja
Department of Mathematics, University of Maryland, College Park
wojtek@math.umd.edu

Richard G. Spencer
Laboratory of Clinical Investigation, National Institute on Aging, National Institutes of Health
spencerri@mail.nih.gov
Abstract—In [1], Cloninger, Czaja, Bai, and Basser developed an algorithm for compressive-sampling-based data acquisition for the solution of 2D Fredholm equations. We extend the algorithm to N-dimensional data by randomly sampling in 2 dimensions and fully sampling in the remaining N − 2 dimensions. This new algorithm has direct applications to 3-dimensional nuclear magnetic resonance relaxometry and related experiments, such as T1-D-T2 or T1-T1,ρ-T2. In these experiments, the first two parameters are time-consuming to acquire, so sparse sampling in the first two parameters can provide significant experimental time savings, while compressive sampling is unnecessary in the third parameter.
I. INTRODUCTION
We consider the problem of solving discrete separable Fredholm integral equations of the form

m = (K1 ⊗ · · · ⊗ KN) f + e,    (1)

where m is the observed data vector of length m1 · · · mN, f ≥ 0 is a nonnegative distribution vector of length n1 · · · nN to be solved for, e is an unknown small error vector of length m1 · · · mN, and each Ki is a known mi × ni matrix.
The problem (1) can be rewritten in terms of ND arrays (tensors) in the form

M = (K1, . . . , KN) · F + E,    (2)

where M and E are of size m1 × · · · × mN, F is of size n1 × · · · × nN, and (K1, . . . , KN) · F denotes the result of multiplying F by Ki along the ith axis, for i = 1, . . . , N. The tensors M, E, and F are obtained by arranging the entries of m, e, and f into tensors lexicographically.
In nuclear magnetic resonance relaxometry and related
experiments, M denotes the experimentally acquired data and
F denotes the unknown distribution of specific parameters
characterizing a sample. Parameters of interest include, for
example, spin lattice relaxation time (T1 ), spin-spin relaxation
time (T2), and diffusion coefficient (D). T1 and T2 indicate the rates at which perturbed magnetization returns to equilibrium along the longitudinal direction and in the transverse plane, respectively, providing information about molecular composition, mobility, and microscopic structure, while D indicates translational mobility.
In conventional 1D NMR experiments, for example a T2
experiment, F is a column vector giving the distribution of
T2 values in the sample. A given sample can be characterized
by its distribution of relaxation times [9].
Higher dimensional NMR experiments aim to compute
the joint density function F of one or more parameters.
For example, in a T1-T2 experiment, F denotes the joint
distribution of T1 and T2 . These 2D experiments have seen
growing applications in the chemical and biological sciences
and permit a much more complete description of materials
[10] [11]. Given the success of 2D relaxometry and related
experiments, it is clear that the ability to determine the joint
density function of multiple parameters in a sample provides
tremendous analytic power. Higher dimensional experiments
have also been observed to exhibit improved recovery stability
[4]. It would therefore be of great value to have available
higher dimensional NMR experiments for materials and tissue
characterization. However, each additional dimension results
in a substantial increase in data acquisition time.
Compressive sensing (CS) is a mathematical theory based
on the idea that if a signal is sparse in some basis, it can
often be accurately recovered from a small set of incoherent
measurements [5], [6]. For the case N = 2, Cloninger et al. developed a CS-based algorithm for the solution of (2) from observations of M on a random subset of its entries [1].
While there have been extensive applications of CS to
magnetic resonance imaging (MRI) [8] using various types
of sparsity, such as with respect to L1 , TV, and wavelet bases,
we are not aware of any previous applications of CS to NMR
relaxometry or related experiments other than Algorithm 1 in
[1]. Unlike MRI, which requires Fourier methods, relaxometry
problems require the solution of an inverse Laplace transform.
One natural way to extend the algorithm in [1] to N D would
be to randomly sample M . Filling in the missing entries of
M is a low-rank tensor completion problem, for which several
algorithms have recently been developed [12] [7]. However,
for some experiments, such as T1 - D - T2 , random sampling
of M along the axis corresponding to T2 does not provide
significant experimental time savings, so random sampling of M is not an efficient sampling strategy. If we randomly
sample M in two axes and fully sample in the remaining
axes, the problem naturally splits into independent matrix
completion problems. With this motivation, we develop an
extension of the algorithm in [1] that uses random sampling
along the first 2 axes of M and full sampling along the
remaining N − 2 axes of M .
Section II establishes notation, Section III summarizes a
standard algorithm for the solution of relaxometry and related
problems, and Section IV restates the 2D reconstruction algorithm from [1]. Section V generalizes the 2D algorithm to
N dimensions using sampling along 2D slices. In Sections VI
and VII we state the error bound from [1] and prove a similar
bound for the N D algorithm. Sections VIII and IX describe
results on simulated and experimental data.
II. NOTATION
An N-tensor M of size m1 × · · · × mN is an element of R^(m1×···×mN), i.e., an ND matrix, with entries denoted by M[i1, . . . , iN]. We lexicographically order the indices of an N-tensor by the condition that (i1, . . . , iN) < (j1, . . . , jN) if and only if i1 < j1, or, for some 1 ≤ k < N, i1 = j1, i2 = j2, . . . , ik = jk and ik+1 < jk+1. We define rowvec(M) to be the length m1 m2 · · · mN column vector obtained by ordering the entries of M lexicographically. For example, if M is a matrix, rowvec(M) is the column vector obtained by concatenating the rows of M. Given a column vector v of length m1 · · · mN, we define rowreshape(v, m1, . . . , mN) to be the N-tensor of size m1 × · · · × mN obtained by rearranging the entries of v into an N-tensor according to the lexicographical ordering. Hence

rowreshape(rowvec(M), m1, . . . , mN) = M.
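The lexicographical ordering above is exactly the row-major (C) ordering used by NumPy, so rowvec and rowreshape can be realized directly; the following is a minimal illustrative sketch (the function names mirror our notation and are not from [1]):

import numpy as np

def rowvec(M):
    # Flatten an N-tensor into a column vector in lexicographical (row-major, C) order.
    return M.reshape(-1, order="C")

def rowreshape(v, *shape):
    # Rearrange a vector back into an N-tensor of the given size, using the same ordering.
    return np.asarray(v).reshape(shape, order="C")

M = np.arange(24).reshape(2, 3, 4)  # example 3-tensor of size 2 x 3 x 4
assert np.array_equal(rowreshape(rowvec(M), 2, 3, 4), M)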
The Frobenius norm of M is defined by ||M||F = ||rowvec(M)||2 = (Σ_{i1,...,iN} |M[i1, . . . , iN]|²)^(1/2). If M is a matrix, the operator norm ||M||2 = σ1(M) is the largest singular value of M. The nuclear norm ||M||∗ = Σ_{i=1}^{rank(M)} σi(M) is the sum of the singular values of M.
If A is m1 × n1 and B is m2 × n2, we define the tensor product of A and B, denoted A ⊗ B, to be the (m1 m2) × (n1 n2) matrix

A ⊗ B :=  [ A[1, 1]B    · · ·  A[1, n1]B  ]
          [    ⋮          ⋱        ⋮      ]
          [ A[m1, 1]B   · · ·  A[m1, n1]B ]
For any fixed indices i3 , . . . , iN , we let M [·, ·, i3 , . . . , iN ]
denote the matrix obtained by fixing the last N − 2 indices of
M . Any vector obtained by fixing all indices except the k-th,
i.e., of the form A[j1 , . . . , jk−1 , ·, jk+1 , . . . , jN ], is called a
k-column of A.
If Ω ⊂ {1, . . . , m1 } × · · · × {1, . . . , mN } is an arbitrary
subset of the indices of M , we let M [Ω] be the vector obtained
by listing the entries M [i1 , . . . , iN ] for which (i1 , . . . , iN ) ∈
Ω in the lexicographical ordering.
The tensor product of N matrices Ki of size mi × ni , for
i = 1, . . . , N , is a matrix of size (m1 · · · mN ) × (n1 · · · nN )
defined by iterating the previous definition. If F is an N -tensor
of size n1 × · · · × nN and Ki are matrices of size mi × ni
for i = 1, . . . , N, we define (K1, . . . , KN) · F := rowreshape((K1 ⊗ · · · ⊗ KN) rowvec(F), m1, . . . , mN).
It can be shown that (K1 , . . . , KN ) · F is the N -tensor of size
m1 × · · · × mN obtained by multiplying all 1-columns of F
by K1 , all 2-columns of F by K2 , . . . , and all N -columns
of F by KN , in any order. For example, if F is a matrix,
(K1, K2) · F = K1 F K2' is the result of multiplying all the
columns of F by K1 and all the resulting rows by K2 .
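As a concrete illustration of this definition, (K1, . . . , KN) · F can be computed by applying each Ki along the ith axis; the sketch below (our own helper, assuming NumPy) checks the matrix case (K1, K2) · F = K1 F K2':

import numpy as np

def mode_multiply(Ks, F):
    # Apply K_i along the i-th axis of F, for i = 1, ..., N; the order of the axes does not matter.
    out = F
    for axis, K in enumerate(Ks):
        out = np.moveaxis(np.tensordot(K, out, axes=(1, axis)), 0, axis)
    return out

K1, K2 = np.random.randn(5, 3), np.random.randn(6, 4)
F = np.random.randn(3, 4)
assert np.allclose(mode_multiply([K1, K2], F), K1 @ F @ K2.T)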
If Ji and Ki are mi × ni and ni × ri matrices for i =
1, . . . , N ,
(J1 ⊗ · · · ⊗ JN )(K1 ⊗ · · · ⊗ KN ) = (J1 K1 ) ⊗ · · · ⊗ (JN KN ).
If Ki are matrices of size mi × ni for i = 1, . . . , N with singular value decompositions (SVDs) given by Ki = Ui Si Vi', it is shown in [22] that the tensor product has the SVD

K1 ⊗ · · · ⊗ KN = (U1 ⊗ · · · ⊗ UN)(S1 ⊗ · · · ⊗ SN)(V1 ⊗ · · · ⊗ VN)'.

Let Ki = Ui Si Vi' be the reduced SVD of each kernel.
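This factorization follows from the mixed product property and can be checked numerically; in the small sketch below (for illustration only), the singular values S1 ⊗ S2 come out unsorted, but the factorization itself holds exactly:

import numpy as np

K1, K2 = np.random.randn(4, 3), np.random.randn(5, 2)
U1, s1, V1t = np.linalg.svd(K1, full_matrices=False)   # K1 = U1 diag(s1) V1'
U2, s2, V2t = np.linalg.svd(K2, full_matrices=False)   # K2 = U2 diag(s2) V2'

lhs = np.kron(K1, K2)
rhs = np.kron(U1, U2) @ np.kron(np.diag(s1), np.diag(s2)) @ np.kron(V1t, V2t)
assert np.allclose(lhs, rhs)   # (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD)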
III. VSH ALGORITHM
In applications, the kernels Ki have rapidly decaying singular values, so (2) is ill-conditioned. To reduce the sensitivity to noise, we solve, for α > 0, the Tikhonov regularization problem

min_{F≥0} ||M − (K1, . . . , KN) · F||²F + α||F||²F.    (3)
Our ND algorithm and the 2D algorithm in [1] approximately solve (3) using the Venkataramanan, Song, Hurlimann
(VSH) Algorithm developed in [2], which we summarize as
follows. Since the singular values of each kernel decay rapidly,
we assume that each Ki has low rank ri ≤ min(mi , ni ). This
assumption is equivalent to truncating the singular values of
each kernel, and thus improves the condition number of the
kernel K1 ⊗ · · · ⊗ KN . Under this low-rank assumption, the
problem (3) is equivalent to
min_{F≥0} ||M̃ − (K̃1, . . . , K̃N) · F||²F + α||F||²F,    (4)

where the compressed data M̃ := (U1', . . . , UN') · M is of size r1 × · · · × rN and, for i = 1, . . . , N, the compressed kernels K̃i := Ui' Ki = Si Vi' are of size ri × ni. The VSH Algorithm can be used to rapidly solve (4).
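A minimal sketch of this compression step, reusing the mode_multiply helper from the Section II sketch and truncating each kernel to a chosen rank ri (the variable names are ours, not from [2]):

import numpy as np

def compress(Ks, M, ranks):
    # Truncated SVD of each kernel K_i = U_i S_i V_i'; keep the leading r_i singular components.
    Us, K_tildes = [], []
    for K, r in zip(Ks, ranks):
        U, s, Vt = np.linalg.svd(K, full_matrices=False)
        Us.append(U[:, :r])
        K_tildes.append(np.diag(s[:r]) @ Vt[:r, :])   # K~_i = S_i V_i'
    M_tilde = mode_multiply([U.T for U in Us], M)     # M~ = (U_1', ..., U_N') . M
    return Us, K_tildes, M_tilde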
Since the solution F of (3) and (4) only depends on the
compressed data M̃, Cloninger et al. observed that any CS
recovery approach should aim to recover M̃ , not the full data
set M [1]. Once M̃ is recovered using compressive sensing,
F can be obtained from the VSH Algorithm.
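For intuition only, at toy sizes the regularized problem (4) can be solved by stacking the Tikhonov term onto the separable kernel and calling a nonnegative least-squares solver; this brute-force baseline is not the VSH Algorithm of [2], which exploits the tensor product structure to remain fast at realistic sizes:

import numpy as np
from functools import reduce
from scipy.optimize import nnls

def solve_compressed(K_tildes, M_tilde, alpha):
    # min_{f >= 0} ||rowvec(M~) - (K~_1 ⊗ ... ⊗ K~_N) f||^2 + alpha ||f||^2,
    # rewritten as an augmented nonnegative least-squares problem (feasible only at small sizes).
    K = reduce(np.kron, K_tildes)                        # (r1...rN) x (n1...nN)
    A = np.vstack([K, np.sqrt(alpha) * np.eye(K.shape[1])])
    b = np.concatenate([M_tilde.reshape(-1), np.zeros(K.shape[1])])
    f, _ = nnls(A, b)
    return f.reshape([Kt.shape[1] for Kt in K_tildes])   # F as an N-tensor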
IV. 2D RECONSTRUCTION ALGORITHM
The 2D algorithm in [1] aims to recover M̃ from measurements of M on a random subset of its indices. Cloninger et al. observed that since Ui' Ui = I_{ri}, M = (K1, K2) · F = (U1 S1 V1', U2 S2 V2') · F = (U1, U2) · M̃. Hence, for each (i1, i2), we have

M[i1, i2] = (U1[i1, ·], U2[i2, ·]) · M̃ = ⟨U1[i1, ·]' U2[i2, ·], M̃⟩.    (5)

As we shall see later, for our applications the rank 1 matrices {U1[i, ·]' U2[j, ·]}_{i,j} are highly incoherent, and thus provide a robust set of measurements from which to recover M̃.
There is extensive previous work showing that low rank
matrices can often be accurately recovered by nuclear norm
minimization, under certain favorable conditions, such as
incoherence [14] [15] [16] [17] [13] [18] [19] [20].
We now state the algorithm in [1]. Let J = {1, . . . , m1} × {1, . . . , m2}. Fix the number of measurements P ≤ |J|.
Algorithm 1. (from [1]) 2D Reconstruction
1) Choose a sampling set Ω ⊂ J with |Ω| = P uniformly at random.
2) Let y[i1, i2] = M[i1, i2] + e[i1, i2] denote noisy measurements of the entries of M, with ||e[Ω]||F ≤ ε. Only the entries y[i1, i2] with (i1, i2) ∈ Ω will be used.
3) Reconstruct M̃ by approximately solving the nuclear norm minimization

   min ||M̃||∗  subject to  ||((U1, U2) · M̃ − y)[Ω]||2 ≤ ε.    (6)

4) Solve (4) using the VSH Algorithm, starting with the data M̃ recovered in the previous step.
An approximate solution of (6) can be rapidly obtained by
fixed point continuation (FPC), a singular value thresholding
algorithm [3].
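A simplified sketch of such a singular value thresholding iteration for (6), written for its penalized form min_X λ||X||∗ + (1/2) Σ_{(i,j)∈Ω} (⟨U1[i,·]' U2[j,·], X⟩ − y[i,j])²; the step size and λ are illustrative and would need tuning, and this is a stripped-down stand-in for the FPC method of [3], not the authors' implementation:

import numpy as np

def shrink(X, tau):
    # Singular value soft-thresholding operator.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def fpc_recover(U1, U2, omega, y, lam=1e-3, step=1.0, iters=500):
    # Recover the r1 x r2 matrix M~ from measurements y = <U1[i,:]' U2[j,:], M~>, (i, j) in omega.
    X = np.zeros((U1.shape[1], U2.shape[1]))
    for _ in range(iters):
        grad = np.zeros_like(X)
        for (i, j), yij in zip(omega, y):
            resid = U1[i, :] @ X @ U2[j, :] - yij          # <U1[i,:]' U2[j,:], X>
            grad += resid * np.outer(U1[i, :], U2[j, :])
        X = shrink(X - step * grad, step * lam)            # gradient step + nuclear-norm prox
    return X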
V. EXTENSION TO 2 + (N − 2) DIMENSIONS
Let N ≥ 3. We will show that Algorithm 1 can be extended
to N D by randomly sampling in the first 2 axes and fully
sampling in the remaining N −2 axes. Let J = {1, . . . , m1 }×
{1, . . . , m2 }, K = {1, . . . , m3 } × · · · × {1, . . . , mN }, and
K̃ = {1, . . . , r3 } × · · · × {1, . . . , rN }.
Let Ω ⊂ J with |Ω| = P as in Algorithm 1. We will show
how M̃ can be recovered from observations of M [Ω × K].
For X ∈ R^(m1×···×mN), define

P(X) = (I_{m1}, I_{m2}, U3', . . . , UN') · X,

the compression of X along axes 3, . . . , N.
Observe that P(M) = (U1, U2, I_{r3}, . . . , I_{rN}) · M̃. Hence, for each (i1, i2) ∈ Ω and (j3, . . . , jN) ∈ K̃,

P(M)[i1, i2, j3, . . . , jN] = (U1[i1, ·], U2[i2, ·]) · M̃[·, ·, j3, . . . , jN].    (7)

These measurements of M̃[·, ·, j3, . . . , jN] are of the same form as (5). Furthermore, P(M)[Ω × K̃] is directly computable from M[Ω × K].
Algorithm 2. N-Dimensional Reconstruction
1) Choose Ω ⊂ J with |Ω| = P uniformly at random.
2) Let y[i1, . . . , iN] = M[i1, . . . , iN] + e[i1, . . . , iN] be measurements of M, with ||e[Ω × K]||F ≤ ε.
3) For each (j3, . . . , jN) ∈ K̃, reconstruct M̃[·, ·, j3, . . . , jN] by approximately solving the nuclear norm minimization

   min ||M̃[·, ·, j3, . . . , jN]||∗  subject to  ||((U1, U2) · M̃[·, ·, j3, . . . , jN] − P(y)[·, ·, j3, . . . , jN])[Ω]||2 ≤ ε.
4) Solve (4) using the VSH Algorithm.
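A sketch of the outer loop of Algorithm 2 under these definitions, reusing mode_multiply and fpc_recover from the earlier sketches; y is passed as a full array whose entries outside Ω × K are never read:

import itertools
import numpy as np

def algorithm2_recover(U_list, omega, y, lam=1e-3):
    # U_list = [U1, ..., UN]; y[i1, i2, j3, ..., jN] is observed for (i1, i2) in omega.
    # Step 1: compress the data along axes 3, ..., N to obtain P(y).
    P_y = mode_multiply([np.eye(U_list[0].shape[0]), np.eye(U_list[1].shape[0])]
                        + [U.T for U in U_list[2:]], y)
    ranks = [U.shape[1] for U in U_list]
    M_tilde = np.zeros(ranks)
    # Step 2: complete each 2D slice of M~ independently by nuclear norm minimization.
    for idx in itertools.product(*[range(r) for r in ranks[2:]]):
        slice_meas = [P_y[(i, j) + idx] for (i, j) in omega]
        M_tilde[(slice(None), slice(None)) + idx] = fpc_recover(
            U_list[0], U_list[1], omega, slice_meas, lam=lam)
    return M_tilde  # then pass M~ to the VSH Algorithm to solve (4)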
VI. 2D RECONSTRUCTION ERROR ESTIMATE
We summarize the error bound for (6) in the case N = 2, derived and proved in [1], which relies on results on the restricted isometry property (RIP) [20] for random incoherent measurements and on bounds on nuclear norm minimization recovery error [14] under the RIP.
Definition 1. Let V be a Hilbert space. A collection {vi} ⊂ V is a bounded norm Parseval tight frame for V if it is bounded and if for all x ∈ V, ||x||² = Σ_i |⟨x, vi⟩|².
The set of matrices V = R^(r1×r2) is a Hilbert space with the Euclidean inner product. Since the columns of U1 and of U2 are orthonormal, it can be easily shown that {U1[i, ·]' U2[j, ·]}_{(i,j)∈J} is a bounded norm Parseval tight frame for V.
Definition 2. (from [1], based on [20]) Let V = R^(r1×r2). A bounded norm Parseval tight frame {vj}_{j∈J} for V with finite index set J is said to have incoherence parameter µ if for all j ∈ J, ||vj||²2 ≤ µ max(r1, r2)/|J|.
A small incoherence parameter µ ensures that if X is a low rank matrix, the inner product magnitudes |⟨vj, X⟩| are not too concentrated on any small subset of indices [20]. Intuitively, this means that randomly chosen measurements ⟨vj, X⟩ are more likely to capture enough information to reconstruct X.
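For the rank 1 frame used here, the incoherence parameter can be computed directly from the row norms of U1 and U2, since ||U1[i,·]' U2[j,·]||²F = ||U1[i,·]||² ||U2[j,·]||²; a small sketch (our own helper, consistent with Definition 2):

import numpy as np

def incoherence(U1, U2):
    # Smallest mu such that ||v_ij||_2^2 <= mu * max(r1, r2) / |J| for all (i, j) in J,
    # where v_ij = U1[i,:]' U2[j,:] is rank 1, so its operator and Frobenius norms agree.
    row1 = np.sum(U1**2, axis=1)        # ||U1[i,:]||^2
    row2 = np.sum(U2**2, axis=1)        # ||U2[j,:]||^2
    sq_norms = np.outer(row1, row2)     # ||v_ij||_F^2
    J = U1.shape[0] * U2.shape[0]
    return sq_norms.max() * J / max(U1.shape[1], U2.shape[1])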
For a Parseval tight frame in which each vj is rank 1, we have ||vj||2 = ||vj||F, and it follows that the incoherence parameter can be bounded above and below:

min(r1, r2) ≤ µ ≤ (|J| / (r1 r2)) min(r1, r2).    (8)

The upper bound is attained by any frame containing a unit vector, while the lower bound is attained, for example, by the Fourier frame F_{(i,j)}[k, l] = (1/√(m1 m2)) exp(2πi(ik/m1 + jl/m2)). We will see later that in our applications, {U1[i, ·]' U2[j, ·]}_{(i,j)∈J} is highly incoherent.

We now state the error estimate from [1]. For any matrix X, let Xr denote the best rank r approximation of X in the Frobenius norm.
Theorem 1. (from [1]) Assume that the Parseval tight frame {U1[i, ·]' U2[j, ·]} has incoherence µ. Let r ≥ 0 and 0 < δ < 1/10. If the number of measurements P satisfies

P ≥ C µ (5r) max(r1, r2) log⁵(max(r1, r2)) log(P) / δ²,    (9)

then with probability greater than 1 − exp(−C) over the choice of Ω, the solution M̃est obtained in Algorithm 1 satisfies

||M̃est − M̃||F ≤ C0 ||M̃ − M̃r||∗ / √r + C1 (P/|J|)^(−1/2) ε

for all matrices M̃, where C0 and C1 are small constants.
VII. ERROR ESTIMATE FOR 2 + (N − 2) DIMENSIONAL SAMPLING
We now state our main result, which bounds the recovery
error for Algorithm 2. If X is a tensor, we define Xr to be
the tensor for which each Xr [·, ·, j3 , . . . , jN ] is the best rank
r approximation of X[·, ·, j3 , . . . , jN ].
Theorem 2. Let N ≥ 3. With notation as in Algorithm 2, assume that the Parseval tight frame {U1[i, ·]' U2[j, ·]} has incoherence µ. Let r ≥ 0 and 0 < δ < 1/10. If the number of measurements P satisfies (9), then with probability greater than 1 − exp(−C) over the choice of Ω, the solution M̃est obtained in Algorithm 2 satisfies

||M̃est − M̃||²F ≤ (2C0²/r) Σ ||(M̃ − M̃r)[·, ·, j3, . . . , jN]||²∗ + 2C1² (P/|J|)^(−1) (r3 · · · rN) ε²

for all tensors M̃, where the summation is over all (j3, . . . , jN) ∈ K̃ and C0 and C1 are as in Theorem 1.
Proof. For each (j3, . . . , jN) ∈ K̃, the sampling operation in (7) is of the same form as in Algorithm 1. Hence we apply the result of Theorem 1 to P(y)[·, ·, j3, . . . , jN] in place of y and M̃[·, ·, j3, . . . , jN] in place of M̃ to obtain

||(M̃est − M̃)[·, ·, j3, . . . , jN]||²F ≤ ( C0 ||(M̃ − M̃r)[·, ·, j3, . . . , jN]||∗ / √r + C1 (P/|J|)^(−1/2) ε )²
≤ 2C0² ||(M̃ − M̃r)[·, ·, j3, . . . , jN]||²∗ / r + 2C1² (P/|J|)^(−1) ε²,

where the second inequality uses (a + b)² ≤ 2a² + 2b². Summing over (j3, . . . , jN) ∈ K̃ gives the conclusion, since |K̃| = r3 · · · rN.
TABLE I
3-DIMENSIONAL SIMULATION

                                  100%     25%      10%      5%       2.5%     1%
Relative error in peak centroid
  Peak 1                          0.0081   0.0081   0.0085   0.0086   0.0312   0.3934
  Peak 2                          0.0024   0.0025   0.0027   0.0028   0.0158   0.3536
  Peak 3                          0.0067   0.0069   0.0068   0.0070   0.0132   0.1257
Relative error in peak integral
  Peak 1                          0.0092   0.0099   0.0083   0.0088   0.0233   0.2267
  Peak 2                          0.0132   0.0131   0.0145   0.0149   0.0301   0.2211
  Peak 3                          0.0217   0.0223   0.0221   0.0230   0.0414   0.3154
Admissibility                     1        1        1        1        0.98     0.6
Relative error in M̃               0        3.3e-5   5.8e-5   8.7e-5   0.0024   0.0663
VIII. 3-DIMENSIONAL SIMULATIONS
We performed simulations on a 3-dimensional distribution
F of size 64×64×64 with 3 hemispherical peaks of radius r =
0.1 and with centers c1 = (0.7, 0.3, 0.7), c2 = (0.3, 0.7, 0.3),
and c3 = (0.3, 0.5, 0.7). The kernels are defined for i = 1, 2, 3
by
Ki[j, k] = exp(−τi[j]/ti[k]).
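For reference, one such exponential kernel can be generated as follows, using the grid sizes specified below for i = 1, 2 (a sketch for illustration; the variable names are ours):

import numpy as np

def exponential_kernel(tau, t):
    # K[j, k] = exp(-tau[j] / t[k]) for acquisition times tau and relaxation-type parameters t.
    return np.exp(-np.outer(tau, 1.0 / t))

tau = np.logspace(np.log10(0.05), np.log10(4.0), 128)  # 128 logarithmically spaced points in [0.05, 4]
t = np.linspace(0.05, 1.0, 64)                         # 64 linearly spaced points in [0.05, 1]
K = exponential_kernel(tau, t)                         # 128 x 64 kernel with rapidly decaying singular values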
For i = 1, 2, τi consists of mi = 128 logarithmically spaced points from 0.05 to 4, while τ3 consists of m3 = 1024 linearly spaced points from 0.05 to 4. For i = 1, 2, 3, ti consists of ni = 64 points linearly spaced from 0.05 to 1. The data M̃ has a signal-to-noise ratio (SNR) of approximately 256, where SNR = ||M̃||F /||Ẽ||F and E is pseudorandom Gaussian noise.
The regularization parameter was α = 10⁻¹⁰. The singular values for the kernels were truncated so that the condition number was at most 10⁴. For each sampling percentage, Algorithm 2 was performed for 50 random choices of Ω. Each inversion F is considered admissible if it has three peaks, excluding small edge artifacts. The relative errors for M̃est and for the peak centers of mass and integrals of the recovered distribution Fest are reported in Table I, with all relative errors computed with respect to the true data and averaged over admissible inversions.
Fig. 1. Simulated 3-peak distribution F recovered with full sampling (left) and 5% compressive sampling on 2D slices (right).

IX. 3-DIMENSIONAL T1-D-T2 OLIVE OIL EXPERIMENT
We tested Algorithm 2 on a 3D T1-D-T2 experiment performed on an olive oil sample. For each sampling percentage, we report in Table II the relative error in M̃est and in the recovered peak centers of mass and integrals, averaged over the admissible results from 50 random sampling sets Ω. The data M is of size 64 × 64 × 128 and F is of size 64 × 64 × 64. The compressed data M̃ was of size 8 × 4 × 12. The inversion parameters were α = 10⁻⁹ and the singular values were truncated so that the resulting kernel has condition number at most 10⁵.
Experimental details: Experimental data was collected on the olive oil sample at 25°C using a 400 MHz Bruker Avance III NMR spectrometer equipped with a 5 mm Micro2.5 micro-imaging solenoidal coil. The pulse sequence consisted of an inversion recovery module with variable inversion times, followed by stimulated echo diffusion encoding with variable diffusion-sensitizing gradient strengths and a CPMG sequence with acquisition at echo maxima. Experimental parameters included: echo time TE = 2 ms, number of echoes NE = 512, repetition time TR = 6 s, number of inversions NI = 64 with inversion times sampled logarithmically between 50 and 3250 ms, and 64 diffusion sensitization b-values logarithmically spaced between 1.25 and 5085 s/mm², with a diffusion encoding period of ∆ = 20 ms and bipolar encoding gradient duration δ = 1 ms for each gradient value.
Fig. 2. Experimental 2-peak T1-D-T2 distribution F recovered with full sampling (left) and 5% compressive sampling on 2D slices (right). * denotes inversion artifacts.

TABLE II
OLIVE OIL T1-D-T2 EXPERIMENTAL RESULTS

                                  100%     25%      10%      5%       2.5%     1%
Relative error in peak centroid
  Peak 1                          0        0.0005   0.0012   0.0086   0.0261   0.1117
  Peak 2                          0        0.0003   0.0005   0.0019   0.0075   0.1573
Relative error in peak integral
  Peak 1                          0        0.0004   0.0010   0.0076   0.0262   0.2566
  Peak 2                          0        0.0003   0.0007   0.0022   0.0097   0.2594
Admissibility                     1        1        1        1        0.66     0.12
Relative error in M̃               0        0.0006   0.0014   0.0067   0.0580   0.2769

X. INCOHERENCE IN PRACTICE
The incoherence for the Parseval tight frame, for the example of the simulated data, was µ = 94.17. Since r1 = r2 = 7 and |J| = 128², inequality (8) gives the theoretical bounds 7 ≤ µ ≤ 128²/7 ≈ 2340.57. Hence, qualitatively, µ is fairly close to its theoretical lower bound. This suggests that it is possible to obtain successful recovery from few measurements, as supported by theory. For the experimental data, the incoherence was 161.60, and the theoretical bounds were 4 < µ < 512.
While these values of µ are not as small as desired, the 90th percentiles of ||vj||²2 |J|/max(r1, r2) were 7.92 and 8.13 for the simulated and experimental data, respectively. Hence, most of the frame elements have small norm. As suggested in [1], the idea of asymptotic coherence could be used to further improve our sampling bound [21].
The bound on P in (9) depends only on µ, the rank r, and the dimensions r1 and r2, not explicitly on the size of the data |J|. We remark that while the bound on P is not useful when m1 × m2 is small, for very large data sizes m1 × m2, Theorem 2 guarantees good recovery with high probability at very small sampling percentages, provided that µ grows slowly compared to |J|.
XI. C ONCLUSION
The 2D sparse sampling algorithm for NMR relaxometry
and related experiments introduced in [1] can be extended
to ND problems by applying it to 2D data slices. We have
proved a guarantee of successful recovery using our algorithm
and demonstrated its effectiveness on simulated data and on
experimental data from an olive oil sample. We find that
significant subsampling can be performed while maintaining
excellent fidelity of the recovered model.
ACKNOWLEDGMENT
This work was supported by the Intramural Research Program, National Institute on Aging, of the National Institutes
of Health. AC was supported by NSF Award DMS-1402254.
WC was supported by HDTRA 1-13-1-0015.
REFERENCES
[1] Cloninger, A., Czaja, W., Bai, R., & Basser, P. J. (Jul 2014). Solving 2D Fredholm
Integral from Incomplete Measurements Using Compressive Sensing. SIAM J. on
Imag. Sci., 7, 3, 1775-1798.
[2] Venkataramanan, L., Song, Y.-Q., & Hurlimann, M. D. (January 2002). Solving
Fredholm Integrals of the First Kind With Tensor Product Structure in 2 and 2.5
Dimensions. IEEE Trans. on Sig. Proc., 50, 1017-1026.
[3] Ma, S., Goldfarb, D., & Chen, L. (Jan 2011). Fixed point and Bregman iterative
methods for matrix rank minimization. Mathematical Programming, 128, 1-2.
[4] Celik, H., Bouhrara, M., Reiter, D. A., Fishbein, K. W., & Spencer, R. G. (Nov
2013). Stabilization of the inverse Laplace transform of multiexponential decay
through introduction of a second dimension. J. Mag. Res., 236, 2, 134-139.
[5] Candes, E. J., Romberg, J., & Tao, T. (Jan 2006). Robust Uncertainty Principles:
Exact Signal Reconstruction From Highly Incomplete Frequency Information. IEEE
Trans. on Inf. Theory, 52, 2, 489.
[6] Donoho, D. L. (Jan 2006). Compressed Sensing. IEEE Trans. on Inf. Theory, 52,
4, 1289.
[7] Gandy, S., Recht, B., & Yamada, I. (2011) Tensor completion and low-n-rank tensor
recovery via convex optimization. Inverse Problems. 27, 2.
[8] Lustig, M., Donoho, D., & Pauly, J. M. (Dec, 2007). Sparse MRI: The application
of compressed sensing for rapid MR imaging. Mag. Res. in Med., 58, 6, 1182-1195.
[9] Reiter, D. A., Lin, P.-C., Fishbein, K. W., & Spencer, R. G. (Apr 2009). Multicomponent T2 relaxation analysis in cartilage. Mag. Res. Med., 61, 4, 803-809.
[10] Callaghan, P. T., Arns, C. H., Galvosas, P., Hunter, M. W., Qiao, Y., & Washburn,
K. E. (2007). Recent Fourier and Laplace perspectives for multidimensional NMR
in porous media. Mag. Res. Imag., 25, 4, 441-444.
[11] Hills, B. P. (2009). Relaxometry: Two-Dimensional Methods. Enc. of Mag. Res.
[12] Liu, J., Musialski, P., Wonka, P., & Ye, J. (Jan 2013). Tensor completion for
estimating missing values in visual data. IEEE Trans. on Pattern Analysis and
Machine Intel., 35, 1, 208-20.
[13] Candès, E. J., & Tao, T. (May 2010). The power of convex relaxation: Near-optimal
matrix completion. IEEE Trans. on Inf. Theory, 56, 5, 2053-2080.
[14] Fazel, M., Candes, E., Recht, B., & Parrilo, P. (Dec 2008). Compressed sensing
and robust recovery of low rank matrices. Asilomar Conf. 1043-1047.
[15] Gross, D. (Mar 2011). Recovering low-rank matrices from few coefficients in any
basis. IEEE Trans. on Inf. Theory, 57, 3, 1548-1566.
[16] Recht, B., Fazel, M., & Parrilo, P. A. (Nov 2010). Guaranteed minimum-rank
solutions of linear matrix equations via nuclear norm minimization. SIAM Review,
52, 3, 471-501.
[17] Chen, Y. (2015) Incoherence-Optimal Matrix Completion. IEEE Tr. on Inf. Theory.
[18] Candes, E., & Recht, B. (Jun 2012). Exact matrix completion via convex optimization. Comm. of the ACM, 55, 6, 111-119.
[19] Cai, J. F., Candes, E. J., & Shen, Z. (Apr 2010). A singular value thresholding
algorithm for matrix completion. Siam J. on Opt., 20, 4, 1956-1982.
[20] Liu, Yi-Kai. (2011). Universal low-rank matrix recovery from Pauli measurements.
Adv. in Neural Inf. Proc. Sys. 1638-1646.
[21] Adcock, B., Hansen, A. C., Poon, C., Roman, B. (Feb 2013) Breaking the coherence
barrier: A new theory for compressed sensing.
[22] Golub, G. H., & Van Loan, C. F. (1996) Matrix computations. Johns Hopkins.