Overview of the Mathematics of Compressive Sensing
Simon Foucart
Reading Seminar on
“Compressive Sensing, Extensions, and Applications”
Texas A&M University
29 October 2015
Optimality of Uniform Guarantees
Gelfand Widths

For a subset $K$ of a normed space $X$, define
\[
E^m(K, X) := \inf\Big\{ \sup_{x \in K} \|x - \Delta(Ax)\| \;:\; A : X \to \mathbb{R}^m \text{ linear},\ \Delta : \mathbb{R}^m \to X \Big\}.
\]
The Gelfand $m$-width of $K$ in $X$ is
\[
d^m(K, X) := \inf\Big\{ \sup_{x \in K \cap L^m} \|x\| \;:\; L^m \text{ subspace of } X,\ \operatorname{codim}(L^m) \le m \Big\}.
\]
If $-K = K$, then
\[
d^m(K, X) \le E^m(K, X),
\]
and if in addition $K + K \subseteq a K$, then
\[
E^m(K, X) \le a\, d^m(K, X).
\]
For a symmetric convex body such as the ball $B_1^N$, one can take $a = 2$, so the two quantities agree up to a factor of $2$.
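As an aside (not on the slides), the quantity inside the Gelfand width can be probed numerically. A minimal Monte Carlo sketch, assuming NumPy and a Gaussian matrix $A$; it is purely illustrative, since the sampled maximum under-estimates the supremum over $B_1^N \cap \ker A$ for this particular subspace, while any single subspace only over-estimates the infimum defining $d^m$:

```python
import numpy as np

# Monte Carlo sketch: sample random directions of ker A, push them to the
# boundary of the l1 ball, and track the largest l2 norm seen.  This is a
# lower estimate of sup { ||x||_2 : x in B_1^N, x in ker A } for one random
# subspace of codimension m -- a heuristic picture, not a width computation.
rng = np.random.default_rng(0)
N, m = 200, 50

A = rng.standard_normal((m, N))           # random measurement map
_, _, Vt = np.linalg.svd(A)
kernel_basis = Vt[m:].T                   # orthonormal basis of ker A, shape (N, N-m)

width_estimate = 0.0
for _ in range(10_000):
    v = kernel_basis @ rng.standard_normal(N - m)   # random v in ker A
    x = v / np.linalg.norm(v, 1)                    # scale to the l1 sphere
    width_estimate = max(width_estimate, np.linalg.norm(x, 2))

print(f"sampled max of ||x||_2 over B_1^N ∩ ker A: {width_estimate:.4f}")
# theoretical scale of d^m(B_1^N, l_2^N), up to constants:
print(f"min(1, ln(eN/m)/m)^(1/2) = {min(1, np.log(np.e * N / m) / m) ** 0.5:.4f}")
```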
Gelfand Widths of $\ell_1$-Balls: Upper Bound

Let $A \in \mathbb{R}^{m \times N}$ with $m \asymp c\, s \ln(eN/m)$ be such that $\delta_{2s} < 1/2$.
Let $\Delta_1 : \mathbb{R}^m \to \mathbb{R}^N$ be the $\ell_1$-minimization map. Given $1 < p \le 2$, for any $x \in B_1^N$,
\[
\|x - \Delta_1(Ax)\|_p \le \frac{C}{s^{1-1/p}}\, \sigma_s(x)_1 \le \frac{C}{s^{1-1/p}} \approx \frac{C'}{(m/\ln(eN/m))^{1-1/p}}.
\]
This gives an upper bound for $E^m(B_1^N, \ell_p^N)$, and in turn
\[
d^m(B_1^N, \ell_p^N) \le C \min\Big\{1, \frac{\ln(eN/m)}{m}\Big\}^{1-1/p}.
\]
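The decoder $\Delta_1$ can be realized as a linear program. A minimal sketch, assuming SciPy's `linprog`; the sizes $N, m, s$ are illustrative, and the condition $\delta_{2s} < 1/2$ is only expected with high probability for a Gaussian matrix, not verified here:

```python
import numpy as np
from scipy.optimize import linprog

# l1-minimization decoder: minimize ||z||_1 subject to Az = y, written as a
# linear program in variables (z, u) with -u <= z <= u, objective sum(u).
def delta1(A, y):
    m, N = A.shape
    c = np.concatenate([np.zeros(N), np.ones(N)])    # minimize sum(u)
    A_ub = np.block([[np.eye(N), -np.eye(N)],        #  z - u <= 0
                     [-np.eye(N), -np.eye(N)]])      # -z - u <= 0
    A_eq = np.hstack([A, np.zeros((m, N))])          # Az = y
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * N), A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * N + [(0, None)] * N)
    return res.x[:N]

rng = np.random.default_rng(1)
N, m, s = 120, 60, 8                                 # m well above s*ln(eN/m)
A = rng.standard_normal((m, N)) / np.sqrt(m)         # delta_2s < 1/2 w.h.p.
x = np.zeros(N)
x[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
x /= np.linalg.norm(x, 1)                            # put x in B_1^N

x_hat = delta1(A, A @ x)
print(f"||x - Delta1(Ax)||_2 = {np.linalg.norm(x - x_hat):.2e}")
```

For an exactly $s$-sparse $x$ we have $\sigma_s(x)_1 = 0$, so the bound above predicts exact recovery, which the script typically confirms up to solver tolerance.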
Gelfand Widths of $\ell_1$-Balls: Lower Bound

The Gelfand width of $B_1^N$ in $\ell_p^N$, $p > 1$, also satisfies
\[
d^m(B_1^N, \ell_p^N) \ge c \min\Big\{1, \frac{\ln(eN/m)}{m}\Big\}^{1-1/p}.
\]
Once established, this will show the optimality of the CS results.
Indeed, suppose the existence of $(A, \Delta)$ such that
\[
\|x - \Delta(Ax)\|_p \le \frac{C}{s^{1-1/p}}\, \sigma_s(x)_1 \le \frac{C}{s^{1-1/p}}\, \|x\|_1 \qquad \text{for all } x \in \mathbb{R}^N.
\]
Then
\[
c \min\Big\{1, \frac{\ln(eN/m)}{m}\Big\}^{1-1/p} \le d^m(B_1^N, \ell_p^N) \le E^m(B_1^N, \ell_p^N) \le \frac{C}{s^{1-1/p}},
\]
in other words
\[
c' \min\Big\{1, \frac{\ln(eN/m)}{m}\Big\} \le \frac{1}{s}.
\]
We derive either $s \le 1/c'$ or $m \ge c' s \ln(eN/m)$, in which case
\[
m \ \ge\ c' s \ln\frac{eN}{s} + c' m \underbrace{\frac{s}{m} \ln\frac{s}{m}}_{\ge -1/e} \ \ge\ \cdots \ \ge\ c'' s \ln\frac{eN}{s}.
\]
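The step hidden behind the dots uses the elementary bound $t \ln t \ge -1/e$, valid for all $t > 0$ (minimize: $\frac{d}{dt}[t \ln t] = \ln t + 1 = 0$ at $t = 1/e$, with value $-1/e$). A worked version, with the constants as on the slide:

```latex
% Split ln(eN/m) = ln(eN/s) + ln(s/m), so that
%   m >= c' s ln(eN/m) = c' s ln(eN/s) + c' m (s/m) ln(s/m).
% Since (s/m) ln(s/m) >= -1/e, the second term is at least -c' m / e, hence
\[
  m \;\ge\; c' s \ln\frac{eN}{s} - \frac{c'}{e}\, m
  \quad\Longrightarrow\quad
  \Bigl(1 + \frac{c'}{e}\Bigr) m \;\ge\; c' s \ln\frac{eN}{s}
  \quad\Longrightarrow\quad
  m \;\ge\; c'' s \ln\frac{eN}{s},
  \qquad c'' := \frac{c'}{1 + c'/e}.
\]
```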
The lower estimate for $d^m(B_1^N, \ell_p^N)$: two key insights

• Small width implies $\ell_1$-recovery of $s$-sparse vectors for large $s$.
  There is a matrix $A \in \mathbb{R}^{m \times N}$ such that every $s$-sparse $x \in \mathbb{R}^N$ is a minimizer of $\|z\|_1$ subject to $Az = Ax$ for
  \[
  s \approx \Big( \frac{1}{2\, d^m(B_1^N, \ell_p^N)} \Big)^{\frac{p}{p-1}}.
  \]

• $\ell_1$-recovery of $s$-sparse vectors only possible for moderate $s$.
  For $s \ge 2$, if $A \in \mathbb{R}^{m \times N}$ is a matrix such that every $s$-sparse vector $x$ is a minimizer of $\|z\|_1$ subject to $Az = Ax$, then
  \[
  m \ge c_1 s \ln\frac{N}{c_2 s}, \qquad c_1 \ge 0.45,\ c_2 = 4.
  \]
Deriving the lower estimate

Suppose that $d^m(B_1^N, \ell_p^N) < (c\mu)^{1-1/p}/2$, where
\[
\mu := \min\Big\{1, \frac{\ln(eN/m)}{m}\Big\} \le 1.
\]
Setting $s \approx 1/(c\mu) \ge 2$, there exists $A \in \mathbb{R}^{m \times N}$ allowing $\ell_1$-recovery of all $s$-sparse vectors. Therefore
\[
m \ \ge\ c_1 s \ln\frac{N}{c_2 s} \ \ge\ c_1 s \ln\frac{N}{c_2' m} \ \ge\ c_1' s \ln\frac{eN}{m} \ \ge\ \frac{c_1'}{c} \cdot \frac{\ln(eN/m)}{\min\big\{1, \frac{\ln(eN/m)}{m}\big\}} \ \ge\ \frac{c_1'}{c}\, m \ >\ m,
\]
provided $c$ is chosen small enough. This is a contradiction.
Insight 1: width and $\ell_1$-recovery

Every $s$-sparse $x \in \mathbb{R}^N$ is a minimizer of $\|z\|_1$ subject to $Az = Ax$ if and only if the null space property of order $s$ holds, i.e.,
\[
\|v_S\|_1 \le \|v\|_1 / 2 \qquad \text{for all } v \in \ker A \text{ and all } S \subseteq [N] \text{ with } |S| \le s
\]
(equivalently, $\|v_S\|_1 \le \|v_{\bar S}\|_1$, since $\|v\|_1 = \|v_S\|_1 + \|v_{\bar S}\|_1$).
Setting $d := d^m(B_1^N, \ell_p^N)$, there exists $A \in \mathbb{R}^{m \times N}$ such that
\[
\|v\|_p \le d \|v\|_1 \qquad \text{for all } v \in \ker A.
\]
Then, for $v \in \ker A$ and $S \subseteq [N]$ with $|S| \le s$,
\[
\|v_S\|_1 \le s^{1-1/p} \|v_S\|_p \le s^{1-1/p} \|v\|_p \le d\, s^{1-1/p} \|v\|_1.
\]
Choose $s \approx \big(\frac{1}{2d}\big)^{\frac{p}{p-1}}$ to derive the null space property of order $s$.
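The null space property can be probed numerically: for a fixed $v$, the maximum of $\|v_S\|_1$ over $|S| \le s$ is the sum of the $s$ largest $|v_i|$. A minimal Monte Carlo sketch, assuming NumPy; sampling can disprove the NSP (any ratio above $1/2$) but cannot certify it, since that would require optimizing over all of $\ker A$:

```python
import numpy as np

# Heuristic NSP check: sample random v in ker A and compute the worst ratio
# (sum of s largest |v_i|) / ||v||_1, which equals max_{|S|<=s} ||v_S||_1/||v||_1.
def nsp_worst_ratio(A, s, trials=20_000, seed=2):
    rng = np.random.default_rng(seed)
    m, N = A.shape
    _, _, Vt = np.linalg.svd(A)
    kernel_basis = Vt[m:].T                    # basis of ker A, shape (N, N-m)
    worst = 0.0
    for _ in range(trials):
        v = kernel_basis @ rng.standard_normal(N - m)
        a = np.sort(np.abs(v))[::-1]           # magnitudes, largest first
        worst = max(worst, a[:s].sum() / a.sum())
    return worst                               # NSP of order s needs <= 1/2

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 100))
print(f"worst sampled ||v_S||_1 / ||v||_1 for s=4: {nsp_worst_ratio(A, 4):.3f}")
```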
A combinatorial lemma

Lemma. There exist $n \ge \big(\frac{N}{4s}\big)^{s/2}$ subsets $S^1, \dots, S^n$ of $[N]$ of size $s$ such that
\[
|S^j \cap S^k| < \frac{s}{2} \qquad \text{for all } 1 \le j \ne k \le n.
\]
Define $s$-sparse vectors $x^1, \dots, x^n$ by
\[
(x^j)_i = \begin{cases} 1/s & \text{if } i \in S^j, \\ 0 & \text{if } i \notin S^j. \end{cases}
\]
Note that $\|x^j\|_1 = 1$ and, since $x^j - x^k$ has $2(s - |S^j \cap S^k|)$ nonzero entries of magnitude $1/s$,
\[
\|x^j - x^k\|_1 = \frac{2(s - |S^j \cap S^k|)}{s} > 1 \qquad \text{for all } 1 \le j \ne k \le n.
\]
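A minimal sketch of the probabilistic construction behind the lemma, assuming NumPy (the trial count and the sizes $N, s$ are illustrative choices):

```python
import numpy as np

# Draw random s-subsets of [N] and keep one whenever it meets every subset
# kept so far in fewer than s/2 elements.
rng = np.random.default_rng(4)
N, s = 200, 10

kept = []                                        # 0/1 indicator vectors
for _ in range(1_000):
    ind = np.zeros(N, dtype=int)
    ind[rng.choice(N, s, replace=False)] = 1
    if all(ind @ other < s / 2 for other in kept):   # |S ∩ T| < s/2 for all kept T
        kept.append(ind)

M = np.array(kept)
print(f"kept n = {len(kept)} subsets; the lemma guarantees "
      f"n >= (N/(4s))^(s/2) = {(N / (4 * s)) ** (s / 2):.0f} exist")

# The associated vectors x^j = ind^j / s are 1-separated in l1:
# ||x^j - x^k||_1 = 2(s - |S^j ∩ S^k|)/s > 1.
inter = M @ M.T                                  # pairwise intersection sizes
dists = 2 * (s - inter) / s
print(f"min pairwise l1 distance: {dists[~np.eye(len(kept), dtype=bool)].min():.2f}")
```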
Insight 2: $\ell_1$-recovery and number of measurements

For $A \in \mathbb{R}^{m \times N}$, suppose that every $2s$-sparse vector $x \in \mathbb{R}^N$ is a minimizer of $\|z\|_1$ subject to $Az = Ax$.
In the quotient space $\ell_1^N / \ker A$, this means
\[
\|[x]\| := \inf_{v \in \ker A} \|x - v\|_1 = \|x\|_1 \qquad \text{for all } 2s\text{-sparse } x \in \mathbb{R}^N.
\]
In particular, since each $x^j$ and each $x^j - x^k$ is $2s$-sparse,
\[
\|[x^j]\| = 1 \quad \text{and} \quad \|[x^j] - [x^k]\| > 1 \qquad \text{for all } 1 \le j \ne k \le n.
\]
The size of this $1$-separated subset of the unit sphere of the (at most $m$-dimensional) quotient space satisfies
\[
\Big(\frac{N}{4s}\Big)^{s/2} \le n \le \Big(1 + \frac{2}{1}\Big)^m = 3^m.
\]
Taking the logarithm yields
\[
m \ge \frac{s}{\ln 9} \ln\frac{N}{4s}.
\]
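A quick numeric reading of the bound, assuming Python's standard library (the values of $N$ and $s$ are illustrative):

```python
from math import log

# Counting argument: (N/(4s))^(s/2) <= n <= 3^m forces
# m >= (s/2) ln(N/(4s)) / ln 3 = (s / ln 9) ln(N/(4s)).
N, s = 10_000, 50
m_min = (s / log(9)) * log(N / (4 * s))
print(f"N={N}, s={s}: exact l1-recovery of every s-sparse vector "
      f"needs m >= {m_min:.0f} measurements")
```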