A generalized Prony method for sparse approximation

Gerlind Plonka and Thomas Peter
Institute for Numerical and Applied Mathematics
University of Göttingen
Dolomites Research Week on Approximation
September 2014
Outline

• Original Prony method: reconstruction of sparse exponential sums
• Ben-Or & Tiwari method: reconstruction of sparse polynomials
• The generalized Prony method
  - Application to the shift and dilation operators
  - Application to the Sturm-Liouville operator
  - Recovery of sparse vectors
• Numerical issues
Reconstruction of sparse exponential sums (Prony’s method)
Function:
\[ f(x) = \sum_{j=1}^{M} c_j\, e^{T_j x} \]
We have:  $M$, $f(\ell)$, $\ell = 0, \ldots, 2M-1$
We want:  $c_j, T_j \in \mathbb{C}$, where $-\pi \le \operatorname{Im} T_j < \pi$, $j = 1, \ldots, M$.
Consider the Prony polynomial
\[ P(z) := \prod_{j=1}^{M} \bigl(z - e^{T_j}\bigr) = \sum_{\ell=0}^{M} p_\ell z^\ell \]
with unknown parameters $T_j$ and $p_M = 1$.
Then
\[ \sum_{\ell=0}^{M} p_\ell\, f(\ell+m)
 = \sum_{\ell=0}^{M} p_\ell \sum_{j=1}^{M} c_j\, e^{T_j(\ell+m)}
 = \sum_{j=1}^{M} c_j\, e^{T_j m} \sum_{\ell=0}^{M} p_\ell\, e^{T_j \ell}
 = \sum_{j=1}^{M} c_j\, e^{T_j m}\, P\bigl(e^{T_j}\bigr) = 0,
 \qquad m = 0, \ldots, M-1. \]
Reconstruction algorithm
Input: $f(\ell)$, $\ell = 0, \ldots, 2M-1$

• Solve the Hankel system
\[ \begin{pmatrix}
   f(0) & f(1) & \cdots & f(M-1) \\
   f(1) & f(2) & \cdots & f(M) \\
   \vdots & \vdots & & \vdots \\
   f(M-1) & f(M) & \cdots & f(2M-2)
   \end{pmatrix}
   \begin{pmatrix} p_0 \\ p_1 \\ \vdots \\ p_{M-1} \end{pmatrix}
 = - \begin{pmatrix} f(M) \\ f(M+1) \\ \vdots \\ f(2M-1) \end{pmatrix} . \]

• Compute the zeros of the Prony polynomial $P(z) = \sum_{\ell=0}^{M} p_\ell z^\ell$ and
  extract the parameters $T_j$ from its zeros $z_j = e^{T_j}$, $j = 1, \ldots, M$.

• Compute the $c_j$ by solving the linear system
\[ f(\ell) = \sum_{j=1}^{M} c_j\, e^{T_j \ell}, \qquad \ell = 0, \ldots, 2M-1. \]

Output: parameters $T_j$ and $c_j$, $j = 1, \ldots, M$.
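The following is a minimal numerical sketch of this reconstruction algorithm in Python/NumPy. The function name `prony` and the test data are illustrative choices of mine; the sketch assumes exact, noise-free samples and a well-conditioned Hankel matrix.

```python
import numpy as np

def prony(samples, M):
    """Recover T_j and c_j from the 2M samples f(0), ..., f(2M-1) of
    f(x) = sum_{j=1}^M c_j exp(T_j x)."""
    f = np.asarray(samples, dtype=complex)
    # Hankel system for the Prony polynomial coefficients p_0, ..., p_{M-1}
    H = np.array([[f[l + m] for l in range(M)] for m in range(M)])
    p = np.linalg.solve(H, -f[M:2 * M])
    # P(z) = z^M + p_{M-1} z^{M-1} + ... + p_0; its zeros are z_j = exp(T_j)
    z = np.roots(np.concatenate(([1.0], p[::-1])))
    T = np.log(z)                                  # principal branch of the logarithm
    # Overdetermined Vandermonde system f(l) = sum_j c_j z_j^l
    V = np.vander(z, N=2 * M, increasing=True).T   # V[l, j] = z_j^l
    c = np.linalg.lstsq(V, f, rcond=None)[0]
    return T, c

# Small test: two exponentials with known parameters
T_true = np.array([0.5 + 1.0j, -0.2 - 2.0j])
c_true = np.array([2.0, -1.0 + 0.5j])
samples = [np.sum(c_true * np.exp(T_true * l)) for l in range(4)]
T_est, c_est = prony(samples, M=2)
```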
Literature
[Prony] (1795):               Reconstruction of a difference equation
[Schmidt] (1979):             MUSIC (Multiple Signal Classification)
[Roy, Kailath] (1989):        ESPRIT (Estimation of signal parameters via
                              rotational invariance techniques)
[Hua, Sarkar] (1990):         Matrix-pencil method
[Stoica, Moses] (2000):       Annihilating filters
[Potts, Tasche] (2010, 2011): Approximate Prony method

Golub, Milanfar, Varah ('99); Vetterli, Marziliano, Blu ('02);
Maravić, Vetterli ('04); Elad, Milanfar, Golub ('04);
Beylkin, Monzon ('05, '10); Batenkov, Sarig, Yomdin ('12, '13);
Filbir, Mhaskar, Prestin ('12); Peter, Potts, Tasche ('11, '12, '13);
Plonka, Wischerhoff ('13); ...
Reconstruction of M-sparse polynomials
Ben-Or & Tiwari algorithm

Function:
\[ f(x) = \sum_{j=1}^{M} c_j\, x^{d_j}, \qquad 0 \le d_1 < d_2 < \ldots < d_M = N, \; d_j \in \mathbb{N}_0, \; c_j \in \mathbb{C}. \]
We have:  $M$, $f(x_0^\ell)$, $\ell = 0, \ldots, 2M-1$ (the $x_0^\ell$ pairwise different)
We want:  coefficients $c_j \in \mathbb{C} \setminus \{0\}$, indices $d_j$ of the “active” monomials

Consider the Prony polynomial
\[ P(z) := \prod_{j=1}^{M} \bigl(z - x_0^{d_j}\bigr) = \sum_{\ell=0}^{M} p_\ell z^\ell \]
with unknown zeros $x_0^{d_j}$ and $p_M = 1$.
Prony polynomial
\[ P(z) := \prod_{j=1}^{M} \bigl(z - x_0^{d_j}\bigr) = \sum_{\ell=0}^{M} p_\ell z^\ell . \]
Then
\[ \sum_{\ell=0}^{M} p_\ell\, f(x_0^{\ell+m})
 = \sum_{\ell=0}^{M} p_\ell \sum_{j=1}^{M} c_j\, x_0^{d_j(\ell+m)}
 = \sum_{j=1}^{M} c_j\, x_0^{d_j m} \sum_{\ell=0}^{M} p_\ell\, x_0^{d_j \ell}
 = \sum_{j=1}^{M} c_j\, x_0^{d_j m}\, P\bigl(x_0^{d_j}\bigr) = 0,
 \qquad m = 0, \ldots, M-1. \]
Hence
\[ \sum_{\ell=0}^{M-1} p_\ell\, f(x_0^{\ell+m}) = -f(x_0^{M+m}),
 \qquad m = 0, \ldots, M-1. \]
Compute $p_\ell$ $\;\Rightarrow\;$ $P(z)$ $\;\Rightarrow\;$ zeros $x_0^{d_j}$ $\;\Rightarrow\;$ $d_j$ $\;\Rightarrow\;$ $c_j$.
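A minimal univariate sketch of this recovery pipeline in Python/NumPy. The function name `ben_or_tiwari`, the base `x0`, and the test polynomial are illustrative assumptions of mine; the sketch assumes exact evaluations and pairwise different powers $x_0^{d_j}$.

```python
import numpy as np

def ben_or_tiwari(f, M, x0=2.0):
    """Recover exponents d_j and coefficients c_j of a univariate M-sparse
    polynomial f(x) = sum_j c_j x^{d_j} from its values at powers of x0."""
    samples = np.array([f(x0 ** l) for l in range(2 * M)], dtype=float)
    # Hankel system for the Prony polynomial coefficients p_0, ..., p_{M-1}
    H = np.array([[samples[l + m] for l in range(M)] for m in range(M)])
    p = np.linalg.solve(H, -samples[M:2 * M])
    # Zeros of P(z) are x0^{d_j}; recover the exponents via logarithms
    z = np.real(np.roots(np.concatenate(([1.0], p[::-1]))))
    d = np.rint(np.log(z) / np.log(x0)).astype(int)
    # Overdetermined Vandermonde system for the coefficients c_j
    V = np.array([[(x0 ** l) ** dj for dj in d] for l in range(2 * M)])
    c = np.linalg.lstsq(V, samples, rcond=None)[0]
    return d, c

# Small test: f(x) = 3 x^5 - 2 x^9, i.e. M = 2 active monomials
d_est, c_est = ben_or_tiwari(lambda x: 3 * x**5 - 2 * x**9, M=2)
```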
Literature
[Ben-Or, Tiwari] (1988):               Reconstruction of multivariate M-sparse polynomials
[Grigoriev, Karpinski, Singer] (1990): Sparse polynomial interpolation over finite fields
[Dress, Grabmeier] (1991):             Interpolation of k-sparse character sums
[Lakshman, Saunders] (1995):           Sparse polynomial interpolation in non-standard bases
[Kaltofen, Lee] (2003):                Early termination in sparse polynomial interpolation
[Giesbrecht, Labahn, Lee] (2008):      Sparse interpolation of multivariate polynomials
The generalized Prony method (Peter, Plonka (2013))
Let $V$ be a normed vector space and let $A : V \to V$ be a linear operator.
Let $\{e_n : n \in I\}$ be a set of eigenfunctions of $A$ to pairwise different
eigenvalues $\lambda_n \in \mathbb{C}$,
\[ A e_n = \lambda_n e_n . \]
Let
\[ f = \sum_{j \in J} c_j e_j, \qquad J \subset I \text{ with } |J| = M, \; c_j \in \mathbb{C}. \]
Let $F : V \to \mathbb{C}$ be a linear functional with $F(e_n) \ne 0$ for all $n \in I$.

We have:  $M$, $F(A^\ell f)$ for $\ell = 0, \ldots, 2M-1$
We want:  $J \subset I$, $c_j \in \mathbb{C}$ for $j \in J$
Prony polynomial
\[ P(z) = \prod_{j \in J} (z - \lambda_j) = \sum_{\ell=0}^{M} p_\ell z^\ell \]
with unknown $\lambda_j$, i.e., with unknown $J$. Hence, for $m = 0, 1, \ldots$
\[ \sum_{\ell=0}^{M} p_\ell\, F(A^{\ell+m} f)
 = \sum_{\ell=0}^{M} p_\ell\, F\Bigl(\sum_{j \in J} c_j A^{\ell+m} e_j\Bigr)
 = \sum_{\ell=0}^{M} p_\ell\, F\Bigl(\sum_{j \in J} c_j \lambda_j^{\ell+m} e_j\Bigr) \]
\[ = \sum_{j \in J} c_j \lambda_j^{m} \Bigl(\sum_{\ell=0}^{M} p_\ell \lambda_j^{\ell}\Bigr) F(e_j)
 = \sum_{j \in J} c_j \lambda_j^{m}\, P(\lambda_j)\, F(e_j) = 0 . \]
Thus, if the values $F(A^\ell f)$, $\ell = 0, \ldots, 2M-1$, are known, then we can compute the
index set $J \subset I$ of the active eigenfunctions and the coefficients $c_j$.
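The derivation above leads to the same Hankel-plus-rootfinding computation as before, now applied to the generalized moments $F(A^k f)$. A minimal Python/NumPy sketch (the helper name `generalized_prony` is my own choice; it returns the active eigenvalues $\lambda_j$ and the weights $c_j F(e_j)$):

```python
import numpy as np

def generalized_prony(moments, M):
    """Recover the active eigenvalues lambda_j and the weights w_j = c_j * F(e_j)
    from the 2M generalized moments h[k] = F(A^k f), k = 0, ..., 2M-1."""
    h = np.asarray(moments, dtype=complex)
    # Hankel system for the Prony polynomial coefficients p_0, ..., p_{M-1}
    H = np.array([[h[l + m] for l in range(M)] for m in range(M)])
    p = np.linalg.solve(H, -h[M:2 * M])
    # Zeros of P(z) = z^M + p_{M-1} z^{M-1} + ... + p_0 are the active eigenvalues
    lam = np.roots(np.concatenate(([1.0], p[::-1])))
    # Overdetermined Vandermonde system h[k] = sum_j w_j * lam_j^k
    V = np.vander(lam, N=2 * M, increasing=True).T
    w = np.linalg.lstsq(V, h, rcond=None)[0]
    return lam, w
```

Identifying the index $j$ from the recovered $\lambda_j$ and dividing $w_j$ by $F(e_j)$ then yields the coefficients $c_j$.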
Application to linear operators: shift operator
Choose the shift operator $S_h : C(\mathbb{R}) \to C(\mathbb{R})$, $h > 0$,
\[ S_h f(x) := f(x+h) . \]
Eigenfunctions of $S_h$:
\[ S_h\, e^{T_j x} = e^{T_j (x+h)} = e^{T_j h}\, e^{T_j x},
   \qquad T_j \in \mathbb{C}, \; \operatorname{Im} T_j \in \bigl[-\tfrac{\pi}{h}, \tfrac{\pi}{h}\bigr). \]
Prony method: For the reconstruction of $f(x) = \sum_{j=1}^{M} c_j e^{T_j x}$ we need
$F(S_h^\ell f) = F(f(\cdot + h\ell))$, $\ell = 0, \ldots, 2M-1$.

Put $F(f) := f(x_0)$.   Then $F(S_h^\ell f) = f(x_0 + h\ell)$.
Put $F(f) := \int_{x_0}^{x_0+h} f(x)\,dx$.   Then $F(S_h^\ell f) = \int_{x_0+\ell h}^{x_0+(\ell+1)h} f(x)\,dx$.
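With the point-evaluation functional this is exactly the classical Prony setting. A short illustration reusing the hypothetical `generalized_prony` helper sketched above (the test signal and parameters are my own choices):

```python
import numpy as np

# f(x) = 2 exp(0.5 x) - exp(-1.2 x), so M = 2, with F(f) = f(x0)
T_true = np.array([0.5, -1.2])
c_true = np.array([2.0, -1.0])
f = lambda x: np.sum(c_true * np.exp(T_true * x))

h, x0, M = 0.1, 0.0, 2
moments = [f(x0 + h * l) for l in range(2 * M)]   # F(S_h^l f) = f(x0 + h l)
lam, w = generalized_prony(moments, M)            # eigenvalues exp(T_j h)
T_rec = np.log(lam) / h                           # recovered parameters T_j
c_rec = w / np.exp(T_rec * x0)                    # divide out F(e^{T_j .}) = exp(T_j x0)
```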
Application to linear operators: dilation operator
Choose the dilation operator $D_h : C(\mathbb{R}) \to C(\mathbb{R})$,
\[ D_h f(x) := f(hx) . \]
Eigenfunctions of $D_h$:
\[ D_h\, x^{p_j} = (hx)^{p_j} = h^{p_j} x^{p_j}, \qquad p_j \in \mathbb{C}, \; x \in \mathbb{R}. \]
We need: the $h^{p_j}$ are pairwise different for all $j \in I$.

Ben-Or & Tiwari method: For the reconstruction of $f(x) = \sum_{j=1}^{M} c_j x^{p_j}$ we need
$F(D_h^\ell f) = F(f(h^\ell \cdot))$, $\ell = 0, \ldots, 2M-1$.

Put $F(f) := f(x_0)$.   Then $F(D_h^\ell f) = f(h^\ell x_0)$.
Put $F(f) := \int_0^1 f(x)\,dx$.   Then $F(D_h^\ell f) = \frac{1}{h^\ell} \int_0^{h^\ell} f(x)\,dx$.
Application to linear operators: Sturm-Liouville operator
Choose the Sturm-Liouville operator $L_{p,q} : C^\infty(\mathbb{R}) \to C^\infty(\mathbb{R})$,
\[ L_{p,q} f(x) := p(x) f''(x) + q(x) f'(x), \]
where $p(x)$ and $q(x)$ are polynomials of degree 2 and 1, respectively. It is well known that,
for suitably defined $p(x)$ and $q(x)$, the eigenfunctions $Q_n$ of this differential operator are
orthogonal polynomials with eigenvalues $\lambda_n$, $n \in \mathbb{N}_0$, i.e.,
\[ L_{p,q} Q_n = \lambda_n Q_n . \]
For convenience, we list the most prominent orthogonal polynomials with their corresponding
$p(x)$, $q(x)$ and eigenvalues $\lambda_n$ in Table 1.

  p(x)      q(x)                    λ_n                name                  symbol
  1 − x²    β − α − (α + β + 2)x    −n(n + α + β + 1)  Jacobi                P_n^{(α,β)}
  1 − x²    −(2α + 1)x              −n(n + 2α)         Gegenbauer            C_n^{(α)}
  1 − x²    −2x                     −n(n + 1)          Legendre              P_n
  1 − x²    −x                      −n²                Chebyshev, 1st kind   T_n
  1 − x²    −3x                     −n(n + 2)          Chebyshev, 2nd kind   U_n
  1         −2x                     −2n                Hermite               H_n
  x         α + 1 − x               −n                 Laguerre              L_n^{(α)}

Table 1. Polynomials p(x) and q(x) defining the Sturm-Liouville operator, with the
corresponding eigenvalues λ_n and eigenfunctions.
Obviously, Gegenbauer, Legendre, and Chebyshev polynomials are special cases of Jacobi
polynomials, where we have $C_n^{(\alpha)} := P_n^{(\alpha-1/2,\,\alpha-1/2)}$, $P_n := P_n^{(0,0)}$,
$T_n := P_n^{(-1/2,\,-1/2)}$, and $U_n := P_n^{(1/2,\,1/2)}$.
Sparse sums of orthogonal polynomials
Function:
\[ f(x) = \sum_{j=1}^{M} c_{n_j} Q_{n_j}(x) . \]
We want: $c_{n_j} \in \mathbb{C} \setminus \{0\}$, indices $n_j$ of the “active” basis polynomials $Q_{n_j}$.

Now $f$ can be uniquely recovered from
\[ F(L_{p,q}^{k} f) = L_{p,q}^{k} f(x_0)
   = \sum_{j=1}^{M} c_{n_j} \lambda_{n_j}^{k} Q_{n_j}(x_0),
   \qquad k = 0, \ldots, 2M-1. \]

Theorem. For each polynomial $f$ and each $x \in \mathbb{R}$, the values $L_{p,q}^{k} f(x)$,
$k = 0, \ldots, 2M-1$, are uniquely determined by $f^{(m)}(x)$, $m = 0, \ldots, 4M-2$.
If $p(x_0) = 0$, then $L_{p,q} f(x_0)$ reduces to $L_{p,q} f(x_0) = q(x_0) f'(x_0)$, and the
values $L_{p,q}^{k} f(x_0)$, $k = 0, \ldots, 2M-1$, can be determined uniquely by
$f^{(m)}(x_0)$, $m = 0, \ldots, 2M-1$.
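As a concrete illustration for the Legendre case ($\lambda_n = -n(n+1)$), the following Python/SciPy sketch simulates the moments $F(L_{p,q}^k f) = L_{p,q}^k f(x_0)$ of a sparse Legendre expansion and recovers the active indices from the eigenvalues, reusing the hypothetical `generalized_prony` helper from above. The test expansion and the point $x_0$ are my own choices.

```python
import numpy as np
from scipy.special import eval_legendre

# Sparse Legendre expansion f = 3 P_4 - 2 P_7, i.e. M = 2 active terms
n_true = np.array([4, 7])
c_true = np.array([3.0, -2.0])
lam_true = -n_true * (n_true + 1)        # Legendre eigenvalues lambda_n = -n(n+1)

x0, M = 0.3, 2
# Simulated moments F(L^k f) = sum_j c_{n_j} lambda_{n_j}^k P_{n_j}(x0)
moments = [np.sum(c_true * lam_true**k * eval_legendre(n_true, x0))
           for k in range(2 * M)]

lam, w = generalized_prony(moments, M)                             # hypothetical helper from above
n_rec = np.rint((-1 + np.sqrt(1 - 4 * lam.real)) / 2).astype(int)  # invert lambda = -n(n+1)
c_rec = w.real / eval_legendre(n_rec, x0)                          # divide out F(P_n) = P_n(x0)
```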
Example: Sparse Laguerre expansion

Operator equation (Laguerre polynomials):
\[ x\, \bigl(L_n^{(\alpha)}\bigr)''(x) + (\alpha + 1 - x)\, \bigl(L_n^{(\alpha)}\bigr)'(x)
   = -n\, L_n^{(\alpha)}(x) \]

Sparse Laguerre expansion (with $\alpha = 0$):
\[ f(x) = \sum_{j=1}^{6} c_{n_j} L_{n_j}(x) \]

Given values: $f(0), f'(0), \ldots, f^{(11)}(0)$.

  j   n_j   c_{n_j}   ñ_j                  c̃_{n_j}
  1   142   −3        142.0000000018223    −2.999999999999987
  2   125   −1        125.0000000494359    −1.000000000000034
  3    91    2         90.9999998114290     2.000000000000063
  4    69   −3         69.0000003316075    −3.000000000000058
  5    53   −1         53.0000003445395    −0.999999999999988
  6    11    2         10.9999999973030     2.000000000000004

Table 2. Numerical evaluation of the indices of the active basis polynomials and of the
coefficients of a sparse Laguerre expansion.
Application to linear operators: Recovery of sparse vectors
Choose the operator $D : \mathbb{C}^N \to \mathbb{C}^N$,
\[ D x := \operatorname{diag}(d_0, \ldots, d_{N-1})\, x \]
with pairwise different $d_j$.

Eigenvectors of $D$:  $D e_j = d_j e_j$, $j = 0, \ldots, N-1$.

We want to reconstruct
\[ x = \sum_{j=1}^{M} c_{n_j} e_{n_j}, \qquad c_{n_j} \in \mathbb{C}, \; 0 \le n_1 < \ldots < n_M \le N-1. \]
We need $F(D^\ell x)$, $\ell = 0, \ldots, 2M-1$.

Let $F(x) := \mathbf{1}^T x := \sum_{j=0}^{N-1} x_j$. Then
\[ F(D^\ell x) = \mathbf{1}^T D^\ell x = a_\ell^T x,
   \qquad \text{where } a_\ell^T = \bigl(d_0^\ell, \ldots, d_{N-1}^\ell\bigr), \; \ell = 0, \ldots, 2M-1. \]
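A minimal finite-dimensional sketch, again reusing the hypothetical `generalized_prony` helper from above; the diagonal, the sparsity and the support are my own test choices. Since $F(e_{n_j}) = \mathbf{1}^T e_{n_j} = 1$, the recovered weights are the coefficients themselves.

```python
import numpy as np

N, M = 16, 2
d = np.arange(1, N + 1, dtype=float)        # pairwise different diagonal entries d_j
x = np.zeros(N)
x[3], x[9] = 2.0, -1.5                      # M-sparse vector with support {3, 9}

# Moments F(D^l x) = 1^T D^l x = sum_j d_j^l x_j, l = 0, ..., 2M-1
moments = [np.sum(d**l * x) for l in range(2 * M)]

lam, w = generalized_prony(moments, M)      # eigenvalues d_{n_j}, weights c_{n_j}
support = sorted(int(np.argmin(np.abs(d - l))) for l in lam.real)
```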
Example: Sparse vectors
Choose
\[ D = \operatorname{diag}\bigl(\omega_N^{0}, \omega_N^{1}, \ldots, \omega_N^{N-1}\bigr), \]
where $\omega_N := e^{-2\pi i/N}$ denotes the $N$-th root of unity.
Then an $M$-sparse vector $x$ can be recovered from
\[ y = F_{2M,N}\, x, \qquad \text{where } F_{2M,N} = \bigl(\omega_N^{k\ell}\bigr)_{k,\ell=0}^{2M-1,\,N-1} \in \mathbb{C}^{2M \times N}. \]

More generally, choose $\sigma \in \mathbb{N}$ with $(\sigma, N) = 1$ and
\[ D = \operatorname{diag}\bigl(\omega_N^{0}, \omega_N^{\sigma}, \ldots, \omega_N^{\sigma(N-1)}\bigr). \]
Then an $M$-sparse vector $x$ can be recovered from
\[ y = \widetilde{F}_{2M,N}\, x, \qquad \text{where } \widetilde{F}_{2M,N} = \bigl(\omega_N^{\sigma k\ell}\bigr)_{k,\ell=0}^{2M-1,\,N-1} \in \mathbb{C}^{2M \times N}. \]
Numerical issues
Let $f = \sum_{j \in J} c_j e_j$ with $J \subset I$ and $|J| = M$, and $A e_n = \lambda_n e_n$ for $n \in I$.

Algorithm (Recovery of f)
Input: $M$, $F(A^k f)$, $k = 0, \ldots, 2M-1$.

• Solve the linear system
\[ \sum_{k=0}^{M-1} p_k F(A^{k+m} f) = -F(A^{M+m} f), \qquad m = 0, \ldots, M-1. \tag{1} \]

• Form the Prony polynomial $P(z) = \sum_{k=0}^{M} p_k z^k$. Compute the zeros
  $\lambda_j$, $j \in J$, of $P(z)$ and determine the corresponding eigenfunctions $e_j$, $j \in J$.

• Compute the coefficients $c_j$ by solving the overdetermined system
\[ F(A^k f) = \sum_{j \in J} c_j \lambda_j^{k} F(e_j), \qquad k = 0, \ldots, 2M-1. \]

Output: $c_j$, $e_j$, $j \in J$, determining $f$.
Numerical issues II (Generalization of [Potts,Tasche ’13])
Numerically stable procedure for steps 1 and 2:
Let $g_k := \bigl(F(A^{k+m} f)\bigr)_{m=0}^{M-1}$, $k = 0, \ldots, M$, and
\[ G := \bigl(F(A^{k+m} f)\bigr)_{k,m=0}^{M-1} = (g_0 \; \ldots \; g_{M-1})
   \qquad \text{and} \qquad \widetilde{G} := (g_1 \; \ldots \; g_M). \]
Then (1) yields
\[ G\, p = -g_M . \]
Let $C(P)$ be the companion matrix of the Prony polynomial $P(z)$ with eigenvalues
$\lambda_j$, $j \in J$. Then
\[ G\, C(P) = \widetilde{G} . \]
Hence the $\lambda_j$ are the eigenvalues of the matrix pencil
\[ z G - \widetilde{G}, \qquad z \in \mathbb{C}. \]
Now apply a QR decomposition or an SVD to $(g_0 \; \ldots \; g_M)$.
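A small Python/SciPy sketch of this matrix-pencil variant (the function name is my own choice; it extracts the eigenvalues directly from the pencil $zG - \widetilde{G}$ with a generalized eigenvalue solver and omits the QR/SVD preprocessing mentioned above):

```python
import numpy as np
from scipy.linalg import eig

def pencil_eigenvalues(h, M):
    """Recover the active eigenvalues lambda_j from the 2M moments
    h[k] = F(A^k f) via the matrix pencil z*G - G_tilde."""
    h = np.asarray(h, dtype=complex)
    G = np.array([[h[k + m] for k in range(M)] for m in range(M)])                # (g_0 ... g_{M-1})
    G_tilde = np.array([[h[k + m] for k in range(1, M + 1)] for m in range(M)])   # (g_1 ... g_M)
    # Generalized eigenvalue problem G_tilde v = z G v
    return eig(G_tilde, G, right=False)
```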
References
• Thomas Peter, Gerlind Plonka:
  A generalized Prony method for reconstruction of sparse sums of eigenfunctions of
  linear operators. Inverse Problems 29 (2013), 025001.
• Thomas Peter, Gerlind Plonka, Daniela Roşca:
  Representation of sparse Legendre expansions. Journal of Symbolic Computation 50
  (2013), 159-169.
• Gerlind Plonka, Marius Wischerhoff:
  How many Fourier samples are needed for real function reconstruction? Journal of
  Applied Mathematics and Computing 42 (2013), 117-137.
• Thomas Peter:
  Generalized Prony method. PhD thesis, University of Göttingen, September 2013.
• Gerlind Plonka, Manfred Tasche:
  Prony methods for recovery of structured functions. GAMM-Mitteilungen, 2014, to appear.