Overview of the Mathematics of Compressive Sensing
Simon Foucart
Reading Seminar on "Compressive Sensing, Extensions, and Applications"
Texas A&M University
8 October 2015
Coherence-based Recovery Guarantees
Coherence

For a matrix with $\ell_2$-normalized columns $a_1, \dots, a_N$, define
$$\mu := \max_{i \neq j} |\langle a_i, a_j \rangle|.$$
As a rule, the smaller the coherence, the better. However, the Welch bound reads
$$\mu \geq \sqrt{\frac{N - m}{m(N - 1)}}.$$
- The Welch bound is achieved at, and only at, equiangular tight frames.
- Deterministic matrices with coherence $\mu \leq c/\sqrt{m}$ exist.
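As a quick numerical companion (a minimal NumPy sketch; the Gaussian matrix and the sizes are illustrative choices, not from the slides), the coherence of a column-normalized matrix can be read off its Gram matrix and compared against the Welch bound:

import numpy as np

def coherence(A):
    # l2-normalize the columns, then take the largest off-diagonal
    # inner product in absolute value
    A = A / np.linalg.norm(A, axis=0)
    G = np.abs(A.conj().T @ A)
    np.fill_diagonal(G, 0.0)
    return G.max()

def welch_bound(m, N):
    # lower bound on the coherence of any m x N matrix
    return np.sqrt((N - m) / (m * (N - 1)))

rng = np.random.default_rng(0)
m, N = 64, 256
A = rng.standard_normal((m, N))
print(coherence(A), ">=", welch_bound(m, N))  # the Welch bound always holds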
Recovery Conditions using Coherence

- Every $s$-sparse $x \in \mathbb{C}^N$ is recovered from $y = Ax \in \mathbb{C}^m$ via at most $s$ iterations of OMP provided
$$\mu < \frac{1}{2s - 1}.$$
- Every $s$-sparse $x \in \mathbb{C}^N$ is recovered from $y = Ax \in \mathbb{C}^m$ via Basis Pursuit provided
$$\mu < \frac{1}{2s - 1}.$$
- In fact, the Exact Recovery Condition can be rephrased as
$$\|A_S^\dagger A_{\overline{S}}\|_{1 \to 1} < 1,$$
and this implies the Null Space Property.
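Inverting the condition $\mu < 1/(2s-1)$ gives $s < (1 + 1/\mu)/2$, so a measured coherence immediately yields the largest sparsity level covered by these guarantees (a small illustrative helper, not from the slides):

import math

def max_guaranteed_sparsity(mu):
    # largest integer s with mu < 1/(2s - 1), i.e. s < (1 + 1/mu)/2;
    # ceil(.) - 1 handles the strict inequality at integer values
    return math.ceil((1.0 + 1.0 / mu) / 2.0) - 1

print(max_guaranteed_sparsity(0.1))  # 5, since 0.1 < 1/9 but 0.1 >= 1/11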
Situation circa 2004

The coherence conditions for $s$-sparse recovery are of the type
$$\mu \leq \frac{c}{s},$$
while the Welch bound (for $N \geq 2m$, say) reads
$$\mu \geq \frac{c}{\sqrt{m}}.$$
Combining the two bounds forces $\sqrt{m} \geq c\, s$, so arguments based on coherence necessitate
$$m \geq c\, s^2.$$
This is far from the ideal linear scaling of $m$ in $s$...
We now introduce new tools to break this quadratic barrier.
RIP-based Recovery Guarantees
Restricted Isometry Constants

Restricted Isometry Constant $\delta_s$ = the smallest $\delta > 0$ such that
$$(1 - \delta)\|z\|_2^2 \leq \|Az\|_2^2 \leq (1 + \delta)\|z\|_2^2$$
for all $s$-sparse $z \in \mathbb{C}^N$.
Alternative form:
$$\delta_s = \max_{\operatorname{card}(S) \leq s} \|A_S^* A_S - \operatorname{Id}\|_{2 \to 2}.$$
Suitable CS matrices have small restricted isometry constants. Typically, $\delta_s \leq \delta_*$ holds for a number of random measurements
$$m \approx \frac{c}{\delta_*^2}\, s \ln(eN/s).$$
In fact, $\delta_s \leq \delta_*$ imposes $m \geq \dfrac{c}{\delta_*^2}\, s$.
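Computing $\delta_s$ exactly is combinatorial, but for small sizes the alternative form can be evaluated by brute force (an illustrative NumPy sketch; the dimensions are toy choices so that the enumeration stays feasible):

import numpy as np
from itertools import combinations

def restricted_isometry_constant(A, s):
    # delta_s = max over |S| = s of ||A_S^* A_S - Id||_{2->2};
    # subsets of size exactly s suffice, since the operator norm
    # can only grow when indices are added
    N = A.shape[1]
    delta = 0.0
    for S in combinations(range(N), s):
        cols = list(S)
        G = A[:, cols].conj().T @ A[:, cols]
        eigs = np.linalg.eigvalsh(G - np.eye(s))  # Hermitian: norm = max |eig|
        delta = max(delta, np.abs(eigs).max())
    return delta

rng = np.random.default_rng(0)
m, N, s = 20, 30, 3
A = rng.standard_normal((m, N)) / np.sqrt(m)      # common CS normalization
print(restricted_isometry_constant(A, s))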
Exact Sparse Recovery via $\ell_1$-Minimization when $\delta_{2s} < 1/3$

Take $v \in \ker A \setminus \{0\}$. Consider sets $S_0, S_1, \dots$ of $s$ indices ordered by decreasing magnitude of entries of $v$. Since $Av = 0$, we have $A(v_{S_0}) = \sum_{k \geq 1} A(-v_{S_k})$, so
$$\|v_{S_0}\|_2^2 \leq \frac{1}{1 - \delta_{2s}} \|A(v_{S_0})\|_2^2 = \frac{1}{1 - \delta_{2s}} \sum_{k \geq 1} \langle A(v_{S_0}), A(-v_{S_k}) \rangle \leq \frac{1}{1 - \delta_{2s}} \sum_{k \geq 1} \delta_{2s} \|v_{S_0}\|_2 \|v_{S_k}\|_2.$$
Simplify by $\|v_{S_0}\|_2$ and observe that
$$\|v_{S_k}\|_2 \leq \frac{1}{\sqrt{s}} \|v_{S_{k-1}}\|_1$$
to obtain
$$\|v_{S_0}\|_2 \leq \frac{\delta_{2s}}{1 - \delta_{2s}} \frac{1}{\sqrt{s}} \|v\|_1, \quad \text{hence} \quad \|v_{S_0}\|_1 \leq \frac{\delta_{2s}}{1 - \delta_{2s}} \|v\|_1.$$
Note that $\rho := \delta_{2s}/(1 - \delta_{2s}) < 1/2$ whenever $\delta_{2s} < 1/3$.
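The conclusion is precisely the Null Space Property of order $s$ with constant $\rho < 1/2$. As an empirical illustration (a sketch only: testing random kernel vectors is not a proof, and the sizes are toy choices), one can sample $\ker A$ for a Gaussian matrix and check the inequality:

import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
m, N, s = 50, 100, 5
A = rng.standard_normal((m, N)) / np.sqrt(m)

V = null_space(A)                              # orthonormal basis of ker A
for _ in range(1000):
    v = V @ rng.standard_normal(V.shape[1])    # random v in ker A \ {0}
    mags = np.sort(np.abs(v))[::-1]
    # ||v_{S_0}||_1 = sum of the s largest magnitudes of v
    assert mags[:s].sum() < 0.5 * mags.sum()
print("NSP inequality held on all sampled kernel vectors")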
Stable and Robust Sparse Recovery via $\ell_1$-Minimization

Objective: for $p \in [1,2]$, for all $x \in \mathbb{C}^N$ and $e \in \mathbb{C}^m$ with $\|e\|_2 \leq \eta$:
$$\|x - \Delta(Ax + e)\|_p \leq \overbrace{\frac{C}{s^{1-1/p}} \min_{x_s\ s\text{-sparse}} \|x - x_s\|_1}^{\text{stability}} + \overbrace{D\, s^{1/p - 1/2}\, \eta}^{\text{robustness}},$$
where $\Delta(y) = \Delta_{1,\eta}(y) := \operatorname{argmin} \|z\|_1$ subject to $\|Az - y\|_2 \leq \eta$.

Taking $x = v \in \mathbb{C}^N$, $e = -Av \in \mathbb{C}^m$, and $\eta = \|Av\|_2$ gives
$$\|v\|_p \leq \frac{C}{s^{1-1/p}} \|v_{\overline{S}}\|_1 + D\, s^{1/p-1/2} \|Av\|_2$$
for all $S \subseteq [N]$ with $\operatorname{card}(S) = s$.
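The decoder $\Delta_{1,\eta}$ is a convex program (quadratically constrained basis pursuit). A minimal sketch using CVXPY, assuming that package is available (all sizes, the noise level, and the Gaussian matrix are illustrative):

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
m, N, s, eta = 50, 128, 5, 0.1
A = rng.standard_normal((m, N)) / np.sqrt(m)

x = np.zeros(N)
x[rng.choice(N, s, replace=False)] = rng.standard_normal(s)  # s-sparse signal
e = rng.standard_normal(m)
e *= eta / np.linalg.norm(e)                                 # ||e||_2 <= eta
y = A @ x + e

z = cp.Variable(N)
problem = cp.Problem(cp.Minimize(cp.norm1(z)),
                     [cp.norm(A @ z - y, 2) <= eta])
problem.solve()
print("recovery error:", np.linalg.norm(z.value - x))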
Robust Null Space Property

For $q \in [1,2]$, $A \in \mathbb{C}^{m \times N}$ has the $\ell_q$-robust null space property of order $s$ (with respect to $\|\cdot\|$) with constants $0 < \rho < 1$ and $\tau > 0$ if, for any set $S \subset [N]$ with $\operatorname{card}(S) \leq s$,
$$\|v_S\|_q \leq \frac{\rho}{s^{1-1/q}} \|v_{\overline{S}}\|_1 + \tau \|Av\| \quad \text{for all } v \in \mathbb{C}^N.$$
- $\ell_q$-RNSP (w.r.t. $\|\cdot\|$) $\Rightarrow$ $\ell_p$-RNSP (w.r.t. $s^{1/p-1/q}\, \|\cdot\|$) whenever $1 \leq p \leq q \leq 2$.
- $\ell_q$-RNSP (w.r.t. $\|\cdot\|$) implies that, for $1 \leq p \leq q$,
$$\|z - x\|_p \leq \frac{C}{s^{1-1/p}} \big( \|z\|_1 - \|x\|_1 + 2\sigma_s(x)_1 \big) + D\, s^{1/p-1/q} \|A(z - x)\|$$
for all $x, z \in \mathbb{C}^N$. The converse holds when $q = 1$.
- $\ell_2$-RNSP (w.r.t. $\|\cdot\|_2$) holds when $\delta_{2s} < 1/\sqrt{2}$.
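As a worked instance (a standard specialization, spelled out here for concreteness), take $z = \Delta_{1,\eta}(Ax + e)$ in the second bullet with $q = 2$ and $\|\cdot\| = \|\cdot\|_2$: since $x$ is feasible for the program, $\|z\|_1 \leq \|x\|_1$, and $\|A(z - x)\|_2 \leq \|Az - y\|_2 + \|e\|_2 \leq 2\eta$. Hence
$$\|z - x\|_p \leq \frac{C}{s^{1-1/p}} \big( \underbrace{\|z\|_1 - \|x\|_1}_{\leq\, 0} + 2\sigma_s(x)_1 \big) + 2D\, s^{1/p-1/2}\, \eta,$$
which recovers the stable and robust recovery estimate of the previous slide, up to the values of the constants.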
Iterative Hard Thresholding and Hard Thresholding Pursuit

- Solving the rectangular system $Ax = y$ amounts to solving the square system $A^* A x = A^* y$;
- classical iterative methods suggest the iteration
$$x^{n+1} = x^n + A^*(y - A x^n);$$
- at each iteration, keep the $s$ largest absolute entries and set the other ones to zero.

IHT: Start with an $s$-sparse $x^0 \in \mathbb{C}^N$ and iterate
$$(\mathrm{IHT}) \quad x^{n+1} = H_s\big(x^n + A^*(y - A x^n)\big)$$
until a stopping criterion is met.

HTP: Start with an $s$-sparse $x^0 \in \mathbb{C}^N$ and iterate
$$(\mathrm{HTP}_1) \quad S^{n+1} = \{\text{indices of } s \text{ largest absolute entries of } x^n + A^*(y - A x^n)\},$$
$$(\mathrm{HTP}_2) \quad x^{n+1} = \operatorname{argmin} \{\|y - Az\|_2 : \operatorname{supp}(z) \subseteq S^{n+1}\},$$
until a stopping criterion is met ($S^{n+1} = S^n$ is natural here).
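Both algorithms are a few lines of NumPy (a minimal sketch; a fixed iteration cap stands in for the stopping criteria, and the test sizes are illustrative):

import numpy as np

def hard_threshold(z, s):
    # H_s: keep the s largest absolute entries, zero out the rest
    out = np.zeros_like(z)
    keep = np.argsort(np.abs(z))[-s:]
    out[keep] = z[keep]
    return out

def iht(A, y, s, n_iter=200):
    x = np.zeros(A.shape[1])                  # the zero vector is s-sparse
    for _ in range(n_iter):
        x = hard_threshold(x + A.conj().T @ (y - A @ x), s)
    return x

def htp(A, y, s, n_iter=200):
    x = np.zeros(A.shape[1])
    S_prev = None
    for _ in range(n_iter):
        # (HTP1): support of the s largest entries of the gradient step
        S = np.sort(np.argsort(np.abs(x + A.conj().T @ (y - A @ x)))[-s:])
        if S_prev is not None and np.array_equal(S, S_prev):
            break                             # natural criterion S^{n+1} = S^n
        # (HTP2): least-squares fit restricted to the support S
        x = np.zeros(A.shape[1])
        x[S] = np.linalg.lstsq(A[:, S], y, rcond=None)[0]
        S_prev = S
    return x

rng = np.random.default_rng(3)
m, N, s = 100, 200, 5
A = rng.standard_normal((m, N)) / np.sqrt(m)
x_true = np.zeros(N)
x_true[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true
print(np.linalg.norm(iht(A, y, s) - x_true),
      np.linalg.norm(htp(A, y, s) - x_true))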
Stable and Robust Sparse Recovery via IHT and HTP

Suppose that $\delta_{3s} < 1/\sqrt{3}$. Then, for some $\rho < 1$ and $\tau > 0$, the following facts hold for all $x \in \mathbb{C}^N$ and $e \in \mathbb{C}^m$, where $S$ denotes an index set of $s$ largest absolute entries of $x$ (the term $A x_{\overline{S}} + e$ plays the role of an effective measurement error, since $y = A x_S + (A x_{\overline{S}} + e)$):
- $\|x_S - x^{n+1}\|_2 \leq \rho \|x_S - x^n\|_2 + (1 - \rho)\, \tau \|A x_{\overline{S}} + e\|_2$,
- $\|x_S - x^n\|_2 \leq \rho^n \|x_S - x^0\|_2 + \tau \|A x_{\overline{S}} + e\|_2$,
- the output $x^\sharp$ of the algorithms satisfies
$$\|x_S - x^\sharp\|_2 \leq \tau \|A x_{\overline{S}} + e\|_2,$$
- with $2s$ in place of $s$ in the algorithms, if $\delta_{6s} < 1/\sqrt{3}$, then
$$\|x - x^\sharp\|_p \leq \frac{C}{s^{1-1/p}}\, \sigma_s(x)_1 + D\, s^{1/p-1/2} \|e\|_2, \quad 1 \leq p \leq 2,$$
- for HTP, (pseudo)robustness is achieved in $\leq c\, s$ iterations.
Recovery by Orthogonal Matching Pursuit

OMP: Start with $S^0 = \emptyset$ and $x^0 = 0$, and iterate
$$S^n = S^{n-1} \cup \{j^n\}, \quad j^n := \operatorname{argmax}_j \big| \big(A^*(y - A x^{n-1})\big)_j \big|,$$
$$x^n = \operatorname{argmin}_z \{\|y - Az\|_2 : \operatorname{supp}(z) \subseteq S^n\}.$$
If $A$ has $\ell_2$-normalized columns $a_1, \dots, a_N$, then
$$\|y - A x^n\|_2^2 = \|y - A x^{n-1}\|_2^2 - \frac{\big|\big(A^*(y - A x^{n-1})\big)_{j^n}\big|^2}{d\big(a_{j^n}, \operatorname{span}[a_j,\ j \in S^{n-1}]\big)^2} \leq \|y - A x^{n-1}\|_2^2 - \big|\big(A^*(y - A x^{n-1})\big)_{j^n}\big|^2,$$
where the inequality uses $d(a_{j^n}, \operatorname{span}[a_j,\ j \in S^{n-1}]) \leq \|a_{j^n}\|_2 = 1$.
- Suppose that $\delta_{13s} < 1/6$. Then $s$-sparse recovery via $12s$ iterations of OMP is stable and robust.
- Challenge: a natural proof of OMP success in $c\, s$ iterations?
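OMP as stated above, in a minimal NumPy sketch (illustrative sizes; the stopping rule here is simply a fixed number of iterations):

import numpy as np

def omp(A, y, n_iter):
    # greedy index selection followed by a least-squares fit
    # on the current support
    N = A.shape[1]
    S, x = [], np.zeros(N)
    for _ in range(n_iter):
        corr = A.conj().T @ (y - A @ x)           # correlations with residual
        S.append(int(np.argmax(np.abs(corr))))    # j^n
        x = np.zeros(N)
        x[S] = np.linalg.lstsq(A[:, S], y, rcond=None)[0]  # supp(x) in S^n
    return x

rng = np.random.default_rng(4)
m, N, s = 40, 100, 4
A = rng.standard_normal((m, N))
A /= np.linalg.norm(A, axis=0)                    # l2-normalized columns
x_true = np.zeros(N)
x_true[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true
print(np.linalg.norm(omp(A, y, s) - x_true))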