Deterministic Construction of Quasi-Cyclic Sparse Sensing Matrices from One-Coincidence Sequences

Weijun Zeng, Huali Wang and Guangjie Xu
Institute of Communications Engineering
PLA University of Science and Technology
Nanjing, China
Email: zwj3103@126.com

Lu Gan
School of Engineering and Design
Brunel University
West London, United Kingdom
Email: lu.gan@brunel.ac.uk
Abstract—In this paper, a new class of deterministic sparse matrices derived from quasi-cyclic (QC) low-density parity-check (LDPC) codes is presented for compressed sensing (CS). In contrast to random and other deterministic matrices, the proposed matrices are generated from circulant permutation matrices, so they require less memory for storage and lower computational cost for sensing. Their sizes are also quite flexible compared with existing fixed-size deterministic matrices. Furthermore, both the coherence and the null space property of the proposed matrices are investigated; in particular, an upper bound on the signal sparsity k is given for exact recovery. Finally, we carry out extensive numerical simulations, which show that our sparse matrices outperform Gaussian random matrices in some scenarios.
I. INTRODUCTION
Classically, compressed sensing (CS) theory considers a discrete-time sparse signal x ∈ R^N and calls the signal k-sparse if x has at most k nonzero elements. The system obtains the observation y ∈ R^M from a linear projection in the noiseless setting [1], [2], i.e.,

y = H · x,   (1)
where H is an M × N sensing matrix. The solution to this system can be formulated as

min_{x∈R^N} ∥x∥0  subject to  H · x = y,   (2)

which is a non-convex optimization problem, also called the ℓ0-minimization problem. Generally, there are two ways to recover the k-sparse signal x in CS. The first is to form a convex relaxation of (2); we can faithfully recover x via ℓ1-minimization:

min_{x∈R^N} ∥x∥1  subject to  H · x = y.   (3)
The second approach applies greedy algorithms to the ℓ0-minimization (2), such as orthogonal matching pursuit (OMP).
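For concreteness, the following is a minimal OMP sketch in Python/NumPy. It is our illustration rather than the implementation used in this paper; the function name `omp` and the stopping rule (exactly k greedy iterations) are our own choices.

```python
import numpy as np

def omp(H, y, k):
    """Minimal OMP sketch: greedily pick k columns of H to explain y."""
    M, N = H.shape
    residual = y.astype(float)
    support = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(H.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit of y on the selected columns.
        coef, *_ = np.linalg.lstsq(H[:, support], y, rcond=None)
        residual = y - H[:, support] @ coef
    x_hat = np.zeros(N)
    x_hat[support] = coef
    return x_hat
```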
A. Sensing Matrix
Apart from the reconstruction algorithms, another main concern in CS is the construction of the sensing matrix. In fact, a "good" sensing matrix can not only reduce the number of observations, but can also reduce the reconstruction time complexity.
To recover the signal exactly and to decide which sensing matrix is "good", several criteria have been proposed. An insightful and useful criterion called the restricted isometry property (RIP) was proposed by Candès and Tao [3]. It has been proved that if H satisfies the RIP of order k with a sufficiently small restricted isometry constant δ, signals with sparsity O(k) can be exactly recovered by ℓ0- or ℓ1-minimization.
In [4], Xu proposed a necessary and sufficient condition for exact recovery, named the null space property (NSP). The null space of H, N(H), is the set {x ∈ R^N : Hx = 0}. The following lemma states the condition under which ℓ0-minimization can exactly recover all k-sparse signals, here called (ℓ0, k)-recoverability.

Lemma 1. A matrix H ∈ R^{M×N} has (ℓ0, k)-recoverability if and only if N(H)\{0} contains no 2k-sparse vector.
In this paper, we theoretically analyze the NSP of the proposed sensing matrices according to Lemma 1. In general, however, there is no efficient algorithm to verify whether a deterministic matrix satisfies the RIP or the NSP. Therefore, it is desirable to find other criteria. Most explicit constructions of RIP matrices are based on bounding the mutual coherence between the columns of the sensing matrix.
Definition 2 (Coherence). Let H = (h1, h2, ···, hN) be an M × N sensing matrix. The coherence between the columns of H is defined as

µ(H) = max_{1≤i≠j≤N} |⟨hi, hj⟩| / (∥hi∥2 ∥hj∥2).   (4)
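As a quick numerical aid (our sketch, not part of the paper), the coherence (4) can be computed by normalizing the columns and taking the largest off-diagonal entry of the Gram matrix:

```python
import numpy as np

def coherence(H):
    """Mutual coherence: max over i != j of |<h_i,h_j>| / (||h_i||_2 ||h_j||_2)."""
    Hn = H / np.linalg.norm(H, axis=0, keepdims=True)  # unit-norm columns
    G = np.abs(Hn.T @ Hn)                              # absolute Gram matrix
    np.fill_diagonal(G, 0.0)                           # exclude i == j
    return float(G.max())
```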
For deterministic signals, if k < (1/2)(µ(H)^{−1} + 1), then x is the unique minimizer of (2) [1]. For generic signals, [5] gives the following proposition.

Proposition 3. Suppose ∥H∥ = ρ and µ(H) ≤ c/log N, where c is an absolute constant and ∥·∥ denotes the spectral norm. If the sparsity level satisfies k ≤ cN/(ρ log N) and the support of the signal x is drawn uniformly at random, then, except with probability O(N^{−1}), x is the unique minimizer of (2).

In this paper, we also bound the coherence of the proposed matrices and find that the proposed matrix is weakly incoherent. In particular, we focus on generic signals for the theoretical analysis and simulation results via Proposition 3.
B. Related Work and Main Contribution
Gaussian and Rademacher random sensing matrices satisfy the RIP with high probability; they are the standard choice for analyzing a CS system, but they suffer from storage and computational issues and are impractical in real applications. Taking these issues into account, DeVore constructed p² × p^{r+1} deterministic sensing matrices using finite fields [6]; such a construction yields cyclic matrices, which are attractive for circuit implementation, and these matrices satisfy the RIP of order k < p/r + 1, but with fixed sizes, where p is a prime power and 1 < r < p is an integer. Inspired by algebraic geometry codes, Li introduced a new deterministic construction via algebraic curves over finite fields, which is a natural generalization of DeVore's construction [7]. Li then introduced the concept of near-orthogonal systems to characterize matrices with low coherence [8] and used an embedding operation [9] to merge sparse matrices with low coherence [10]. Recently, Xia proposed deterministic binary sparse sensing matrices based on finite geometry and showed their large sparks [11]-[13]. They also used array codes to construct quasi-cyclic (QC) sensing matrices [13] and gave two lower bounds on the spark of the sensing matrices [11].
Inspired by the connection between QC sensing matrices and hash families [14], we construct deterministic sparse sensing matrices from low-density parity-check (LDPC) codes whose circulant matrices are shifted according to one-coincidence sequences (OCSs) [15] (OCSs are described in the next section), a generalization of the array codes in [13]. The main advantage is that the proposed sensing matrices have flexible sizes and make hardware realization convenient. We first show that the coherence is µ(H) = 1/dv, where dv is the uniform column weight of the sensing matrix H. Afterwards, we analyze the (ℓ0, k)-recoverability of the proposed matrices; that is, any sparse signal with sparsity k ≤ σ(dv, g)/2 can be exactly recovered by ℓ0-minimization, where σ(dv, g) is the size of the smallest stopping set of H.
II. CONSTRUCTION OF QUASI-CYCLIC SPARSE SENSING MATRICES
A. The Sensing Matrices Design
In this subsection, we present a new class of deterministic sparse matrices built from QC-LDPC codes [15]. The proposed sensing matrix H can be represented by a dv × dc array of circulant permutation matrices P as follows:
H = [ P^{a00}        P^{a01}        ···  P^{a0(dc−1)}
      P^{a10}        P^{a11}        ···  P^{a1(dc−1)}
      ···            ···            ···  ···
      P^{a(dv−1)0}   P^{a(dv−1)1}   ···  P^{a(dv−1)(dc−1)} ],   (5)
where aij ∈ {0, 1, ···, p − 1}, p is a prime power, 0 ≤ i ≤ dv − 1, 0 ≤ j ≤ dc − 1, and P^{aij} denotes the p × p identity matrix with its columns cyclically shifted to the right by aij positions. The exponent matrix E(H) of H is defined by
E(H) = [ a00        a01        ···  a0(dc−1)
         a10        a11        ···  a1(dc−1)
         ···        ···        ···  ···
         a(dv−1)0   a(dv−1)1   ···  a(dv−1)(dc−1) ].   (6)
The design of the exponent matrix E(H) has been widely investigated; for instance, array codes are presented in [13]. Here, we design the exponent matrix by adopting one-coincidence sequences (OCSs) [15]. OCSs are multilevel sequences with the property that any two different sequences have at most one element in common.
The element aij of the exponent matrix E(H) can be generated from OCSs as follows:

aij = [α(mi + nj)² + φi + φj] mod p,   (7)

where α ∈ {1, 2, ···, p − 1}, mi, nj ∈ {1, 2, ···, p − 1}, φi, φj ∈ {0, 1, ···, p − 1}, and the elements of the two sequences {m0, m1, ···, m(dv−1)} and {n0, n1, ···, n(dc−1)} are randomly selected from GF(p) such that mi ≠ mj and ni ≠ nj whenever i ≠ j.
Compared with other random and deterministic sensing matrices, the size of the proposed sensing matrices is dv·p × dc·p. From this point of view, the size of the proposed sensing matrices is very flexible: we can adjust the parameters p, dc and dv to obtain the desired sensing matrices, rather than being limited to fixed sizes. In [15], it has been proved that the OCS-LDPC code is free of cycles of length 4.
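The construction (5) with exponents (7) admits a compact sketch, given below under our own helper names (`ocs_exponent_matrix`, `expand_qc`); note that only the small exponent matrix E(H) needs to be stored, which is where the memory advantage over dense random matrices comes from.

```python
import numpy as np

def ocs_exponent_matrix(p, m_seq, n_seq, alpha=1, phi_row=None, phi_col=None):
    """Exponent matrix (7): a_ij = [alpha*(m_i + n_j)^2 + phi_i + phi_j] mod p."""
    dv, dc = len(m_seq), len(n_seq)
    phi_row = phi_row if phi_row is not None else [0] * dv
    phi_col = phi_col if phi_col is not None else [0] * dc
    E = np.empty((dv, dc), dtype=int)
    for i, m in enumerate(m_seq):
        for j, n in enumerate(n_seq):
            E[i, j] = (alpha * (m + n) ** 2 + phi_row[i] + phi_col[j]) % p
    return E

def expand_qc(E, p):
    """Expand E into the (dv*p) x (dc*p) matrix (5) of shifted identity blocks."""
    I = np.eye(p, dtype=int)
    # np.roll(I, a, axis=1) is the identity with columns shifted right by a.
    return np.block([[np.roll(I, a, axis=1) for a in row] for row in E])
```

For instance, `expand_qc(ocs_exponent_matrix(p, m_seq, n_seq), p)` returns a `len(m_seq)·p × len(n_seq)·p` binary sensing matrix with uniform column weight `len(m_seq)`.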
B. Column Replacement
In this subsection, we consider the construction of sensing matrices from the point of view of the column replacement technique, which we first introduce. Let P ∈ R^{p×p}, P = (Pij), be an origin matrix and let B ∈ {1, ···, p}^{m×n}, B = (Bij), be a pattern matrix. We obtain a new matrix H ∈ R^{mp×n}, H = (Hij), through column replacement of P into B, that is, H_{(a−1)p+b,c} = P_{b,Bac} for 1 ≤ a ≤ m, 1 ≤ b ≤ p, 1 ≤ c ≤ n.
Since the submatrix P^{aij} of H is a circulant permutation matrix whose columns are cyclically shifted to the right by aij positions, we can obtain the pattern matrix B ∈ {1, ···, p}^{dv × dc·p} with B_{i,(j−1)p+1:(j−1)p+p} = ((p − aij + 1, p − aij + 2, ···, p − aij + p) mod p). In our main result, Theorem 5, we adopt column replacement to prove the (ℓ0, k)-recoverability of the proposed matrix H: since the origin matrix P meets (ℓ0, k)-recoverability, the matrix H may also meet (ℓ0, k)-recoverability with a proper pattern matrix B.
III. MAIN RESULTS
In this section, we show the low coherence of the obtained matrices and analyze their (ℓ0, k)-recoverability.
A. Coherence
The following theorem suggests that the sparse sensing
matrices built from OCSs have low coherence.
Theorem 4. For the sensing matrix H built from OCSs, as shown in (5), we have

µ(H) = 1/dv,   (8)

where dv is the uniform column weight of the sensing matrix H.
Proof. Suppose H has N columns h1, h2, ···, hN; then ∥hi∥2 = √dv for 1 ≤ i ≤ N. Since the OCS-LDPC code is free of cycles of length 4, any two distinct columns of H share at most one common "1" across all rows, so the maximum inner product of any two distinct columns is 1, and we have

µ(H) = 1/dv.
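Theorem 4 is easy to check numerically with the sketches given earlier (our illustration; it assumes the hypothetical helpers `ocs_exponent_matrix`, `expand_qc` and `coherence` defined above):

```python
# Numerical check of Theorem 4 for a small instance with dv = 4:
p = 11
E = ocs_exponent_matrix(p, m_seq=[1, 2, 3, 4], n_seq=[1, 2, 3, 5, 7])
H = expand_qc(E, p)          # 44 x 55 binary matrix, column weight dv = 4
print(coherence(H))          # expect mu(H) = 1/dv = 0.25
```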
Remark 1: For generic signals, Proposition 3 requires that our proposed matrix satisfy µ(H) ≤ c/log N, i.e.,

1/dv ≤ c/log(dc p).   (9)

Since the spectral norm of our proposed matrix is ρ = √(dv dc), a generic signal with sparsity k ≤ cN/(ρ log N), i.e.,

k ≤ c √(dc) p / (√(dv) log(dc p)),   (10)

can be exactly recovered by (2) with probability 1 − O(N^{−1}). When the parameters of our proposed matrix satisfy dv ≥ log(dc p)/c, inequality (9) holds and the sparsity level k is bounded as in (10). Inequality (9) shows that there is a threshold c that measures the recovery performance of our proposed matrix: when log(dc p)/dv < c, the recovery performance is good; when log(dc p)/dv > c, the recovery performance may degrade.
B. Null Space Property

As discussed in Section I, the NSP or RIP of deterministic matrices cannot be verified efficiently. Much previous research on sparse sensing matrices therefore considers only other criteria, as in Theorem 4. For our proposed sensing matrices, the verification of the NSP is carried out by reducing to a much smaller permutation matrix. In particular, we exploit the fact that the proposed sensing matrix is formed by inflating a circulant permutation matrix. By combining the methods of CS and hash families in [14], we investigate the (ℓ0, k)-recoverability of the sparse sensing matrices built from OCSs.

Before stating the (ℓ0, k)-recoverability of the sparse sensing matrices, we introduce the notion of a stopping set, which is related to the error performance of LDPC codes. A stopping set S of H is a subset of the columns of H whose submatrix contains no row of weight one. We denote by σ(dv, g) the size of the smallest stopping set of H, where dv is the column weight of H and g is the girth of H, i.e., the length of its smallest cycle. For regular LDPC codes, Orlitsky et al. [16] showed that σ(dv, 6) = dv + 1, σ(dv, 8) = 2dv, and, for larger g, σ(dv, g) ≤ ((dv − 2)/4)(dv − 1)^{g+2}, where dv > 2.

The following theorem considers (ℓ0, k)-recoverability.
Theorem 5. Suppose that P is a p × p circulant permutation matrix, where p is a prime power, the resulting sensing matrix H is characterized by a dv × dc array of circulant permutation matrices, and the element aij of its exponent matrix E(H) is generated from OCSs as shown in (7). Then H is a sensing matrix that meets the (ℓ0, k)-null space condition, where k ≤ σ(dv, g)/2.
Proof. As discussed in Section II-B, the construction of the sensing matrices can be viewed through the column replacement technique. We prove the (ℓ0, k)-recoverability of H by contradiction. Suppose that H does not meet (ℓ0, k)-recoverability; from Lemma 1, there exists a 2k-sparse vector z ∈ N(H)\{0}. Let supp{z+} = {z1+, ···, zs+} ≜ {i : zi > 0} and supp{z−} = {z1−, ···, zs′−} ≜ {i : zi < 0}, where, without loss of generality, we suppose s′ ≥ s. Since z is 2k-sparse and nonzero, s ≤ k and s′ ≥ 1. From the definition of a stopping set, when k ≤ σ(dv, g)/2 there cannot exist a stopping set between {z1+, ···, zs+} and {z1−}. That is, there is a row γ of the pattern matrix B such that B_{γ,z1+} ≠ B_{γ,z1−}, or B_{γ,zε+} ≠ B_{γ,zi+} and B_{γ,zε+} ≠ B_{γ,zj−}, where 1 ≤ ε ≠ j ≤ s. We have Hγ z = 0 since Hz = 0. Then we form a special vector w ∈ R^p by setting wη = Σ{zi : Bγi = η, 1 ≤ i ≤ N}, where 1 ≤ η ≤ p. The process of forming w can be regarded as a projection: z is projected onto R^p according to the elements in row γ of B. Since H can be viewed as the column replacement of P into B and Hγ z = 0 holds, it follows that P w = 0. Then w is 2k-sparse since z is 2k-sparse, and w is nonzero since w_{B_{γ,z1−}} < 0 or w_{B_{γ,zε+}} > 0. From Lemma 1, P does not meet (ℓ0, k)-recoverability; but it is clear that the p × p circulant permutation matrix P does meet (ℓ0, k)-recoverability, a contradiction.
Remark 2: In [11], a lower bound spark(H) ≥ σ(dv, g) was proved for binary matrices, where spark(H) is defined as the smallest number of columns of H that are linearly dependent [17]; it is also shown in [17] that spark(H) ≥ 1 + 1/µ(H) and that exact recovery requires spark(H) > 2k. In the current work, we present a novel analysis of the (ℓ0, k)-null space condition via the column replacement technique, and we also obtain an upper bound on the sparsity for exact recovery.
Remark 3: In [18], the authors also showed that girth can be used to certify good sensing matrices. In [12], the reconstruction guarantees of binary sensing matrices were examined based on girth. Those two papers focus on general binary sensing matrices and the analysis of their performance; in this paper, we focus on the condition for exact recovery via the column replacement technique. From Theorem 5, our results show that the larger the girth g is, the larger the size of the smallest stopping set will be, and thus the larger the exactly recoverable signal sparsity k is. In the next section, the simulations show the positive influence of girth on the recovery performance.
IV. SIMULATION RESULTS
The proposed parity-check matrices H have been shown to be "good" sensing matrices according to Theorems 4 and 5. In this section, we provide simulation results for the matrices built from OCSs.

The same experimental conditions as in [8] are used in all simulations. Sparse signals are generated as follows. The support of the sparse signal is generated uniformly at random, while the corresponding nonzero values are generated i.i.d. from the standard Gaussian distribution. The sparse sensing matrices are generated by (5) and (7); in (7) we set α = 1, φi = 0 for 0 ≤ i ≤ dv − 1, and φj = 0 for 0 ≤ j ≤ dc − 1. It should be noted that we also tried other values for these parameters; the results are very similar to the above settings, so they are omitted. Moreover, we compare with the matrices constructed in [13]. However, since the sizes of DeVore's matrices in [6] and the BCH matrices in [9] are fixed, a comparison would be unfair, so we omit it in this paper.

Here, we denote the Gaussian random matrix as Gaussian matrix (M × N), the proposed matrix as proposed matrix (H(dv, p, dc)), and the IAC matrix in [13] as IAC (H(dv, p)). For signal reconstruction, we adopt the OMP algorithm to solve the ℓ0-minimization and denote its solution as x̂. For each sparsity k, 1000 Monte Carlo trials are performed, and we declare recovery successful if the reconstruction signal-to-noise ratio satisfies SNR(x) = 10 · log10(∥x∥2/∥x − x̂∥2) dB ≥ 100.
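A sketch of this experimental loop is given below (ours, reusing the hypothetical `omp` sketch from Section I; the success criterion follows the SNR threshold above):

```python
import numpy as np

def recovery_rate(H, k, trials=1000, snr_db=100.0, seed=0):
    """Fraction of random k-sparse Gaussian signals recovered by OMP."""
    rng = np.random.default_rng(seed)
    M, N = H.shape
    Hf = H.astype(float)
    successes = 0
    for _ in range(trials):
        x = np.zeros(N)
        support = rng.choice(N, size=k, replace=False)  # uniform random support
        x[support] = rng.standard_normal(k)             # i.i.d. Gaussian values
        x_hat = omp(Hf, Hf @ x, k)
        err = np.linalg.norm(x - x_hat)
        snr = np.inf if err == 0 else 10 * np.log10(np.linalg.norm(x) / err)
        successes += (snr >= snr_db)
    return successes / trials
```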
Firstly, the proposed matrices with uniform column weight dv = 4, prime power p = 11, 13, 19 and dc = p are adopted; the corresponding Gaussian matrices and IAC matrices of the same sizes are chosen in Fig. 1. From Theorems 4 and 5, when dv ≥ log(dc p)/c is satisfied, generic signals with sparsity k ≤ c √(dc) p/(√(dv) log(dc p)) can be recovered by the OMP algorithm with probability 1 − O((dc p)^{−1}). Fig. 1 shows that:
• When the prime power p is small, the performance of the proposed matrices (H(dv, p, dc)) and IAC (H(dv, p)) is notably better than that of the corresponding Gaussian matrices. As p increases, the performance of the proposed and IAC matrices degrades and, to some extent, becomes worse than that of the Gaussian matrices. This is because, as p increases, inequality (9) no longer holds and the conditions of Proposition 3 are violated.
• On the other hand, when inequality (9) holds, Fig. 2 shows that, as p increases, the recoverable sparsity of generic signals also increases, since the sparsity bound k ≤ c √(dc) p/(√(dv) log(dc p)) is proportional to p. In practice, some generic signals with sparsity k > c √(dc) p/(√(dv) log(dc p)) can also be recovered.
• Throughout Fig. 1, the performance of the proposed matrices and the IAC matrices is almost the same. This phenomenon persists whether p is small or large; for the sake of space, we omit the corresponding simulation results.
Fig. 1. The perfect recovery percentage of noiseless signals versus sparsity k, for the proposed matrices H(4,11,11), H(4,13,13), H(4,19,19), the IAC matrices H(4,11), H(4,13), H(4,19), and the corresponding Gaussian matrices of sizes 44 × 121, 52 × 169, 76 × 361.
Fig. 2. The perfect recovery percentage of noiseless signals versus sparsity k, for the proposed matrices H(4,19,6), H(4,19,9), H(4,19,17) and the corresponding Gaussian matrices of sizes 76 × 114, 76 × 171, 76 × 323.
Secondly, we examine the performance of the flexible proposed matrix as dc changes. Here, we set the prime power p = 19, dc = 6, 9, 17, and again adopt Gaussian matrices as the benchmark. As shown in Fig. 2:
• The proposed matrix outperforms the Gaussian matrix, and the gap between them widens as dc decreases. As in the first experiment, this is because, as dc increases, inequality (9) no longer holds and the conditions of Proposition 3 are violated.
• When inequality (9) holds, Fig. 2 shows that the proposed matrices H(dv, p, dc) perform better than the bound in (10); that is, some generic signals with sparsity k > c √(dc) p/(√(dv) log(dc p)) can also be recovered.
Thirdly, in the above two experiments the column weight dv = 4 is constant and the prime power p varies. In this experiment, we fix the prime power p = 29 and dc = 23, and let dv = 4, 5, 6. As shown in Fig. 3, the performance is again in line with Theorems 4 and 5, and even better than the theoretical bound. When dv is small, the performance of the proposed matrices is slightly worse than that of the Gaussian matrices, but as dv increases, the proposed matrices surpass the Gaussian matrices.

From the above three experiments and the statement of Remark 1, we can conclude that there is a threshold of log(dc p)/dv such that, when log(dc p)/dv is smaller than the threshold, the proposed matrices are better than the corresponding Gaussian matrices.
Fig. 3. The perfect recovery percentage of noiseless signals versus sparsity k, for the proposed matrices H(4,29,23), H(5,29,23), H(6,29,23) and the corresponding Gaussian matrices of sizes 116 × 667, 145 × 667, 174 × 667.

Fig. 4. The thresholds of log(dc p)/dv for the proposed matrices H(dv, p, dc) that perform better than the corresponding Gaussian matrices (matrices that perform worse than Gaussian matrices are not listed).

Fig. 5. The perfect recovery percentage of the proposed matrix H(3, 29, 5) with different girths, versus sparsity k.
In order to investigate this threshold, we ran numerous simulations and summarized the proposed matrices H(dv, p, dc) that are better than the corresponding Gaussian matrices; see Fig. 4. From Fig. 4, we can see that:
• For fixed p, there is a threshold of log(dc p)/dv (shown in bold) such that, when log(dc p)/dv is smaller than the threshold for any dc and dv, the proposed matrices are better than the corresponding Gaussian matrices.
• As p increases, the threshold of log(dc p)/dv below which the proposed matrices outperform the Gaussian matrices decreases. Given this fact, our proposed matrices can play an important role in practice, since they reduce the storage space when the column dimension N = dc p is huge.
Finally, we verify the influence of girth on the recovery performance. We consider the proposed matrix H(3, 29, 5). Setting {m0, m1, m2} = {0, 1, 3}, we obtain a sensing matrix with girth 6 by setting {n0, n1, n2, n3, n4} = {0, 1, 2, 3, 4}, with girth 8 by setting {n0, n1, n2, n3, n4} = {0, 1, 2, 5, 8}, and with girth 10 by setting {n0, n1, n2, n3, n4} = {0, 1, 5, 14, 25}. As shown in Fig. 5, as the girth increases, the performance of the proposed matrices improves, which verifies Theorem 5 and the statement of Remark 3; that is, large girth has a positive influence on the recovery performance.
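The girth of H can be controlled entirely through the exponent matrix. For instance, the standard condition for a QC matrix of circulant permutation blocks to be free of 4-cycles (girth ≥ 6) is that aij − aij′ + ai′j′ − ai′j ≢ 0 (mod p) for all i ≠ i′ and j ≠ j′; analogous conditions on longer alternating sums govern girths 8 and 10. A sketch of the 4-cycle check (ours) needs only E(H):

```python
from itertools import combinations

def four_cycle_free(E, p):
    """True iff the QC expansion of E has girth >= 6, i.e. no 4-cycles:
    E[i,j] - E[i,j2] + E[i2,j2] - E[i2,j] != 0 (mod p) for i != i2, j != j2."""
    dv, dc = E.shape
    for i, i2 in combinations(range(dv), 2):
        for j, j2 in combinations(range(dc), 2):
            if (E[i, j] - E[i, j2] + E[i2, j2] - E[i2, j]) % p == 0:
                return False
    return True
```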
V. CONCLUSION
In this paper, we studied a new class of deterministic sparse sensing matrices built from QC-LDPC codes for CS. The coherence and (ℓ0, k)-recoverability of the proposed matrices were analyzed, and we obtained two bounds on the recoverable signal sparsity k. Simulations reveal that the proposed matrices significantly outperform Gaussian random matrices under some conditions, and the theoretical results about the signal sparsity k have also been verified. On the other hand, compared with existing fixed-size deterministic matrices and random matrices, the proposed sensing matrices not only require less memory for storage and lower computational cost for sensing and reconstruction, but their sizes are also quite flexible, so they can attain wide application.
REFERENCES
[1] D. L. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory, vol. 52, pp. 1289-1306, Jul. 2006.
[2] E. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inf. Theory, vol. 52, pp. 489-509, Feb. 2006.
[3] E. J. Candès and T. Tao, "Decoding by linear programming," IEEE Trans. Inf. Theory, vol. 51, no. 12, pp. 4203-4215, Dec. 2005.
[4] W. Xu and B. Hassibi, "Compressed sensing over the Grassmann manifold: A unified analytical framework," in Proc. 46th Allerton Conf. Commun., Control, Comput., Monticello, IL, pp. 562-567, Sep. 2008.
[5] J. A. Tropp, "The sparsity gap: Uncertainty principles proportional to dimension," in Proc. 44th Annu. Conf. Information Sciences and Systems (CISS), pp. 1-6, 2010.
[6] R. A. DeVore, "Deterministic constructions of compressed sensing matrices," J. Complexity, vol. 23, pp. 918-925, 2007.
[7] S. Li, F. Gao, G. Ge, and S. Zhang, "Deterministic construction of compressed sensing matrices via algebraic curves," IEEE Trans. Inf. Theory, vol. 58, no. 8, pp. 5035-5041, Apr. 2012.
[8] S. Li and G. Ge, "Deterministic sensing matrices arising from near orthogonal systems," IEEE Trans. Inf. Theory, vol. 60, no. 4, pp. 2291-2302, Apr. 2014.
[9] A. Amini and F. Marvasti, "Deterministic construction of binary, bipolar, and ternary compressed sensing matrices," IEEE Trans. Inf. Theory, vol. 57, no. 4, pp. 2360-2370, Mar. 2011.
[10] S. Li and G. Ge, "Deterministic construction of sparse sensing matrices via finite geometry," IEEE Trans. Signal Process., vol. 62, no. 11, pp. 2850-2859, Jun. 2014.
[11] X. J. Liu and S. T. Xia, "Sparks and deterministic constructions of binary measurement matrices from finite geometry," arXiv preprint arXiv:1301.5952.
[12] X. J. Liu and S. T. Xia, "Reconstruction guarantee analysis of binary measurement matrices based on girth," in Proc. IEEE Int. Symp. Information Theory (ISIT), pp. 474-478, 2013.
[13] X. J. Liu and S. T. Xia, "Constructions of quasi-cyclic measurement matrices based on array codes," in Proc. IEEE Int. Symp. Information Theory (ISIT), 2013.
[14] C. J. Colbourn, D. Horsley, and C. McLean, "Compressive sensing matrices and hash families," IEEE Trans. Commun., vol. 59, no. 7, pp. 1840-1845, 2011.
[15] C. M. Huang et al., "Construction of quasi-cyclic LDPC codes from quadratic congruences," IEEE Commun. Lett., vol. 12, no. 4, pp. 313-315, Apr. 2008.
[16] A. Orlitsky, R. L. Urbanke, K. Viswanathan, and J. Zhang, "Stopping sets and the girth of Tanner graphs," in Proc. IEEE Int. Symp. Information Theory, Lausanne, Switzerland, p. 2, Jun. 2002.
[17] D. L. Donoho and M. Elad, "Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization," Proc. Nat. Acad. Sci., vol. 100, no. 5, pp. 2197-2202, 2003.
[18] A. G. Dimakis, R. Smarandache, and P. O. Vontobel, "LDPC codes for compressed sensing," IEEE Trans. Inf. Theory, vol. 58, no. 5, pp. 3093-3114, May 2012.