Dynamical Sampling with an Additive Forcing Term
Akram Aldroubi
Keri Kornelson
Department of Mathematics
Vanderbilt University
Nashville, TN 37240, U.S.A.
Email: akram.aldroubi@vanderbilt.edu
Department of Mathematics
University of Oklahoma
Norman, OK, 73019, U.S.A.
Email: kkornelson@ou.edu
Abstract—In this paper we discuss a system of dynamical
sampling, i.e., sampling a signal x that evolves in time under
the action of an evolution operator A. We examine the time-space sampling that allows for reconstruction of x. Here we
describe the possible reconstruction systems when the system also
contains an unknown constant forcing term σ. We give conditions
under which both x and σ can be reconstructed from the spatio-temporal set of samples.
I. INTRODUCTION
The notion of dynamical sampling, in which a signal is
sampled in both space and time, was introduced in [1] and
further developed in [2]–[5]. This notion was inspired by the
work of Vetterli et al. [6]–[10]. This situation is different
from the typical sampling and reconstruction problems (see
[11], [12] and the references therein), in which a function x
is to be reconstructed from its samples; dynamical sampling
proposes to reconstruct x from coarse samples of x and coarse
samples of its subsequent states x_t that result from the action
of a given evolution operator A.
Let x ∈ ℓ²(I) be a signal, where I is a countable set, and
let Ω ⊆ I. Suppose that x varies in time increments under the
action of the operator A on ℓ²(I), resulting in the vectors
$$x_0 = x, \qquad x_1 = Ax, \qquad x_2 = A(Ax) = A^2 x, \qquad \ldots$$
The figure below shows selected space samples from an
index set Ω on the horizontal axis and the time samples on
the vertical axis. Each location i ∈ Ω is sampled until time ℓ_i.

[Figure: time samples (vertical axis) versus space samples (horizontal axis)]
The fundamental dynamical sampling problem is to find
conditions on Ω, A, and the number ℓ_i of time increments
such that measurements at each component i ∈ Ω over times
ℓ_i can be used to reconstruct x. In other words, we want to
reconstruct x from

$$Y = \{\langle A^\ell x, e_i\rangle : \ell = 0, 1, \ldots, \ell_i;\ i \in \Omega\}. \qquad (I.1)$$
It is known that the problem reduces to finding conditions
under which {A^{*ℓ} e_i : i ∈ Ω, ℓ = 0, 1, ..., ℓ_i} is complete (if
no stability is required) or is a frame (if stability is required).
Recall that a frame for ℓ²(I) is a collection of vectors {f_j} ⊂
ℓ²(I) for which there exist positive constants A, B giving

$$A \|f\|^2 \le \sum_j |\langle f, f_j\rangle|^2 \le B \|f\|^2$$

for all f ∈ ℓ²(I).
Lemma I.1 ([2]). Let F_Ω denote the set F_Ω = {A^{*ℓ} e_i : i ∈
Ω, ℓ = 0, 1, ..., ℓ_i}. Then
1) Any x ∈ ℓ²(I) can be recovered uniquely from the
sampling set Y in (I.1) if and only if the set F_Ω is
complete in ℓ²(I).
2) Any x ∈ ℓ²(I) can be recovered in a stable way from
the sampling set Y in (I.1) if and only if the set F_Ω is
a frame for ℓ²(I).
Note that for finite dimensional spaces (ℓ²({1, ..., N}) = C^N)
completeness and the frame property are equivalent.
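In this finite dimensional setting the recovery can be carried out numerically. The following is a minimal sketch in Python/NumPy; the function name, the dictionary layout of the samples, and the least-squares solver are illustrative choices, not part of the original development. It simply stacks the rows e_i^T A^ℓ that generate the samples in (I.1) and solves the resulting linear system.

```python
import numpy as np

def recover_signal(A, Omega, levels, samples):
    """Recover x from space-time samples y[(i, l)] = <A^l x, e_i>,
    taken at locations i in Omega for l = 0, 1, ..., levels[i].
    Recovery is unique (and stable) exactly when the stacked rows
    below have rank N, i.e. when F_Omega is complete (a frame)."""
    N = A.shape[0]
    rows, rhs = [], []
    for i in Omega:
        Apow = np.eye(N)                      # A^0
        for l in range(levels[i] + 1):
            rows.append(Apow[i, :])           # e_i^T A^l
            rhs.append(samples[(i, l)])
            Apow = Apow @ A                   # advance to A^{l+1}
    M = np.vstack(rows)
    x_hat, *_ = np.linalg.lstsq(M, np.asarray(rhs), rcond=None)
    return x_hat, np.linalg.matrix_rank(M)

# Illustrative use with synthetic data:
# rng = np.random.default_rng(0)
# A = rng.standard_normal((4, 4)); x = rng.standard_normal(4)
# Omega, levels = [0, 2], {0: 3, 2: 3}
# samples = {(i, l): (np.linalg.matrix_power(A, l) @ x)[i]
#            for i in Omega for l in range(levels[i] + 1)}
# x_hat, rank = recover_signal(A, Omega, levels, samples)
```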
For finite dimensional spaces, a necessary and sufficient
condition on an operator A, the sample set Ω and the time
levels `i was given in [2] to ensure reconstruction of the
signal x0 given enough space-time samples. For the purpose
of this note, we will only consider the special case when A
is a diagonalizable operator. In this case, we can write the
decomposition A^* = B^{-1}DB for A^*, where D is diagonal
and of the form

$$D = \begin{pmatrix} \lambda_1 I_1 & & & \\ & \lambda_2 I_2 & & \\ & & \ddots & \\ & & & \lambda_k I_k \end{pmatrix}, \qquad (I.2)$$
where {λ1 , λ2 , . . . , λk } are the distinct complex eigenvalues
of A∗ and, for each j = 1, . . . , k, Ij is the identity matrix of
dimension equal to that of the λj -eigenspace.
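Such a decomposition can be computed numerically. A small sketch, assuming A is diagonalizable (the function name is an illustrative choice), using NumPy's eigendecomposition of A^*:

```python
import numpy as np

def diagonalize_adjoint(A):
    """Return (B, eigvals) with A^* = B^{-1} diag(eigvals) B, assuming A is
    diagonalizable. np.linalg.eig gives A^* = V diag(w) V^{-1}; take B = V^{-1}.
    The vectors b_i = B e_i used below are then simply the columns of B."""
    w, V = np.linalg.eig(A.conj().T)          # A^* = V diag(w) V^{-1}
    return np.linalg.inv(V), w
```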
We will need some definitions before discussing the connection to dynamical sampling.
Definition I.2. Let D be an N × N diagonal matrix of the
form in Equation (I.2) and let S = {b_i}_{i=1}^m ⊂ C^N. Let P_j be
the projection onto the λ_j-eigenspace of D for j = 1, ..., k.
We say that the set of vectors S has the projection property on
D if for each j = 1, ..., k, {P_j b_i}_{i=1}^m is a frame (a spanning
set) for the λ_j-eigenspace P_j(C^N).
For example, if every eigenvalue of D is nonzero and has
multiplicity 1, then a singleton {b} will have the projection
property if and only if there are no zeros in the standard
basis representation of b. It can be shown that a necessary
condition for the projection property is that the cardinality of
the set S must be greater than or equal to the dimension of
any eigenspace of A [2].
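Numerically, the projection property amounts to a rank check on the coordinates of the b_i restricted to each eigenspace. A small sketch, under the assumption that D is given by its diagonal entries (names are illustrative):

```python
import numpy as np

def has_projection_property(eigvals, S, tol=1e-10):
    """Check Definition I.2 for D = diag(eigvals) and S = {b_i}.
    For each distinct eigenvalue, the coordinates of the b_i belonging to
    that eigenspace must span it (a frame, since the space is finite dim)."""
    eigvals = np.asarray(eigvals)
    B = np.column_stack(list(S))                        # N x m matrix of the b_i
    for lam in np.unique(eigvals):
        idx = np.where(np.abs(eigvals - lam) < tol)[0]  # coordinates of the eigenspace
        if np.linalg.matrix_rank(B[idx, :], tol=tol) < len(idx):
            return False
    return True

# Example from the text: with distinct eigenvalues, a single vector b works
# exactly when none of its entries vanish.
# has_projection_property([1, 2, 3], [np.array([1.0, -2.0, 0.5])])  -> True
# has_projection_property([1, 2, 3], [np.array([1.0,  0.0, 0.5])])  -> False
```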
Definition I.3. Let D be a diagonal N × N matrix of the form
in Equation (I.2) and let b ∈ C^N. The annihilating polynomial
of D for b is a monic polynomial m_b^D of minimal degree such
that

$$[m_b^D(D)]\, b = 0.$$

The degree of the polynomial m_b^D will be denoted by r_b.
Given the diagonalizable operator A with Jordan decomposition A^* = B^{-1}DB and a set Ω ⊆ {1, ..., N}, we take the
columns of the matrix B corresponding to Ω: let b_i = Be_i, i ∈ Ω, and note that these vectors are linearly
independent. It turns out that F_Ω is complete (is a frame) if
and only if the set

$$E = \bigcup_{i \,:\, b_i \neq 0} \{b_i, Db_i, D^2 b_i, \ldots, D^{r_i - 1} b_i\}$$

is complete (is a frame) for C^N [2]. Thus, the necessary and
sufficient conditions are given in terms of the operator D and
the set S defined above. The following proposition has been
proved in [2].

Proposition I.4 ([2]). Let D be a matrix of the form in
Equation (I.2), and let S = {b_i}_{i=1}^m ⊂ C^N. For each
i = 1, ..., m, let r_i be the minimal degree of an annihilating
polynomial of D for b_i. S satisfies the projection property from
Definition I.2 if and only if the set

$$E = \bigcup_{i \,:\, b_i \neq 0} \{b_i, Db_i, D^2 b_i, \ldots, D^{r_i - 1} b_i\}$$

is a frame for C^N.

Corollary I.5. Given a diagonalizable N × N matrix A with
Jordan decomposition A^* = B^{-1}DB where D is of the form
in Equation (I.2), and given Ω ⊆ {1, ..., N}, let {b_i}_{i∈Ω} be
the vectors {Be_i}_{i∈Ω}. Then any x ∈ C^N can be reconstructed
from the samples Y as shown in (I.1) if and only if {b_i}_{i∈Ω}
has the projection property for D.

II. SOURCE TERM

Besides the initial state x_0, there are situations in which a
constant source σ is feeding the evolving system. In this case
the time evolution of the system is given by
$$x_0 = x, \qquad x_1 = Ax + \sigma, \qquad x_2 = A(Ax + \sigma) + \sigma = A^2 x + (A + I)\sigma, \qquad \ldots$$
Our goal here is to determine conditions on A, Ω, and ℓ_i
such that x = x_0 and σ can both be reconstructed from
measurements

$$Y = \{\langle x_\ell, e_i\rangle : \ell = 0, 1, \ldots, \ell_i;\ i \in \Omega\}. \qquad (II.1)$$
We begin by making the assumption that x0 = 0 while the
forcing term σ ≠ 0. We motivate this particular hypothesis by
noting that it would be valid in a case where the evolution
system modeled by A is dissipative, e.g., the spectral radius
R(A) of A is strictly smaller than 1. For this case any initial
state x0 is quickly driven to 0. If we delay sampling until
the impact of the original signal is below our measurement
threshold, we find ourselves in the case where x0 = 0 while
σ 6= 0. In this case, we discover that the conditions for
reconstruction are exactly the same as with a nonzero x0 and
no forcing term.
Let

$$C_\ell := I + A + \cdots + A^\ell. \qquad (II.2)$$
We can now state a result similar to Lemma I.1.
Lemma II.1. Let G_Ω denote the set G_Ω = {C_ℓ^* e_i : i ∈
Ω, ℓ = 0, 1, ..., ℓ_i}. Then
1) Any x ∈ ℓ²(I) can be recovered uniquely from the
sampling set Y in (II.1) if and only if the set G_Ω is
complete in ℓ²(I).
2) Any x ∈ ℓ²(I) can be recovered in a stable way from
the sampling set Y in (II.1) if and only if the set G_Ω is
a frame for ℓ²(I).
It is not difficult to show that the set {C_ℓ^* e_i : i ∈ Ω, ℓ =
0, 1, ..., ℓ_i} is complete in ℓ²(I) if and only if {A^{*ℓ} e_i : i ∈
Ω, ℓ = 0, 1, ..., ℓ_i} is complete in ℓ²(I). Thus, we have the
following proposition.
Proposition II.2. Let A be diagonalizable with decomposition
A^* = B^{-1}DB and let Ω ⊆ {1, ..., N} be the fixed
measurement set. Assume the forcing term σ ≠ 0 but the initial
signal x_0 = 0. Then σ can be reconstructed if and only if the
set {b_i}_{i∈Ω} has the projection property for D, where b_i = Be_i
for each i ∈ Ω.
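As an illustration of this setting, σ can be recovered numerically from the samples ⟨x_ℓ, e_i⟩ by noting that x_ℓ = C_{ℓ−1}σ when x_0 = 0, so every sample is one linear equation in σ. A minimal sketch (the function name and data layout are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def recover_forcing(A, Omega, levels, samples):
    """Recover a constant forcing term sigma when x_0 = 0.
    Since x_l = C_{l-1} sigma with C_l = I + A + ... + A^l, each sample
    <x_l, e_i> = e_i^T C_{l-1} sigma is a linear equation in sigma.
    (The l = 0 samples vanish identically and carry no information.)"""
    N = A.shape[0]
    rows, rhs = [], []
    for i in Omega:
        C = np.zeros((N, N))                  # will hold C_{l-1}
        Apow = np.eye(N)                      # A^{l-1} term to be added next
        for l in range(1, levels[i] + 1):
            C = C + Apow                      # C_{l-1} = I + A + ... + A^{l-1}
            Apow = Apow @ A
            rows.append(C[i, :])              # e_i^T C_{l-1}
            rhs.append(samples[(i, l)])
    sigma_hat, *_ = np.linalg.lstsq(np.vstack(rows), np.asarray(rhs), rcond=None)
    return sigma_hat
```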
III. FORCING AND INITIAL STATE
We now consider the case where both the initial signal x_0
and the forcing term σ are nonzero. We move the problem
into C^{2N} and seek to solve simultaneously for x_0 and σ.
A. Constant source
Using Lemmas I.1 and II.1, it is not difficult to show that
we can reconstruct the vector [x_0 σ]^T from the sampling set
Ω exactly when F_Ω = {A^{*ℓ} e_i : i ∈ Ω, ℓ = 0, 1, ..., ℓ_i} forms
a frame for C^N and when the set G_Ω = {C_ℓ^* e_i : i ∈ Ω, ℓ =
0, 1, ..., ℓ_i} forms a frame for C^N, where C_ℓ is as in Equation
(II.2).
Let A^* = B^{-1}DB as before, and let b_i = Be_i for each
i ∈ Ω. Then the frame condition above reduces to the two
conditions:
1) the set {D^ℓ b_i : i ∈ Ω, ℓ = 0, 1, ..., ℓ_i} is a frame for C^N;
2) the set ⋃_{i∈Ω} {b_i, (D + I)b_i, (D^2 + D + I)b_i, ..., (D^{ℓ_i} +
··· + D + I)b_i} is a frame for C^N.
The first condition implies the second, via Lemma II.1.
We consider the vector y = [x^T σ^T]^T in C^{2N}. Consider
{b_i}_{i∈Ω} as row vectors. We now construct the block matrix
M having two block columns of 1 × N blocks:
$$M = \begin{pmatrix}
b_1 & 0 \\
Db_1 & b_1 \\
D^2 b_1 & (D + I)\, b_1 \\
D^3 b_1 & (D^2 + D + I)\, b_1 \\
\vdots & \vdots \\
b_2 & 0 \\
Db_2 & b_2 \\
D^2 b_2 & (D + I)\, b_2 \\
\vdots & \vdots
\end{pmatrix} \qquad (III.1)$$
Lemma III.1. The vector y can be reconstructed for matrix
A, {e_i}_{i∈Ω}, and ℓ_i exactly when the matrix M has rank 2N. A
necessary, but not sufficient, condition for this is that the vectors
{b_i}_{i∈Ω} satisfy the projection property for D.
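The rank condition of Lemma III.1 can be tested numerically. The sketch below works directly in the standard basis rather than with the diagonalized quantities D and b_i = Be_i used in (III.1); since the two matrices differ by an invertible change of variables, the rank-2N criterion is the same. Names and layout are illustrative assumptions.

```python
import numpy as np

def joint_measurement_matrix(A, Omega, levels):
    """Stack the rows [e_i^T A^l | e_i^T C_{l-1}] acting on y = [x_0; sigma],
    where C_{-1} = 0 and C_l = I + A + ... + A^l.  Each sample <x_l, e_i>
    equals such a row times y, so y is recoverable exactly when the stacked
    matrix has rank 2N."""
    N = A.shape[0]
    rows = []
    for i in Omega:
        Apow, C = np.eye(N), np.zeros((N, N))     # A^0 and C_{-1} = 0
        for l in range(levels[i] + 1):
            rows.append(np.concatenate([Apow[i, :], C[i, :]]))
            C = C + Apow                          # becomes C_l for the next step
            Apow = Apow @ A
    return np.vstack(rows)

# y = [x_0; sigma] is recoverable iff the rank equals 2N:
# M = joint_measurement_matrix(A, Omega, levels)
# recoverable = np.linalg.matrix_rank(M) == 2 * A.shape[0]
```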
We vary the number of rows, which correspond to the
time-space samples, in order to create a full-rank matrix. We
begin by examining the rows of M for linear dependence
relations. For each i ∈ Ω, let r_i be the degree of the
annihilating polynomial of D for b_i and let ℓ_i = r_i − 1.
The vectors {b_i, Db_i, ..., D^{ℓ_i} b_i} are thus linearly independent,
while D^{r_i} b_i is linearly dependent on them. This yields the following lemma.
Lemma III.2. The matrix

$$M_1 = \begin{pmatrix}
b_1 & 0 \\
Db_1 & b_1 \\
D^2 b_1 & (D + I)\, b_1 \\
\vdots & \vdots \\
D^{\ell_1} b_1 & (D^{\ell_1 - 1} + \cdots + D + I)\, b_1 \\
D^{\ell_1 + 1} b_1 & (D^{\ell_1} + \cdots + D + I)\, b_1
\end{pmatrix}$$
has linearly independent rows.
However, an argument using row operations shows that
adding one further row to the matrix M1 above (corresponding
to taking one more time sample at the position e1 ) will break
the linear independence of the rows. This occurs as a result
of the degree of the minimal annihilating polynomial for b1 .
Lemma III.3. The last row of the matrix

$$M_1^{+} = \begin{pmatrix}
b_1 & 0 \\
Db_1 & b_1 \\
D^2 b_1 & (D + I)\, b_1 \\
\vdots & \vdots \\
D^{\ell_1 + 1} b_1 & (D^{\ell_1} + \cdots + D + I)\, b_1 \\
D^{\ell_1 + 2} b_1 & (D^{\ell_1 + 1} + \cdots + D + I)\, b_1
\end{pmatrix}$$

is linearly dependent on the other matrix rows. Moreover, all
rows of the form

$$\begin{pmatrix} D^{\ell_1 + p} b_1 & (D^{\ell_1 + p - 1} + \cdots + D + I)\, b_1 \end{pmatrix}$$
are linearly dependent on the rows of the original matrix M1 .
We now explore how many new linearly independent rows
are produced by the inclusion of a second vector b2 . In our
dynamical sampling scheme, this corresponds to adding a
second sensor location e2 to take space samples. Let M be the
matrix M1 above with the maximal number of independent
rows generated by b1 , followed by rows generated by the
vector b2 .
$$M = \begin{pmatrix}
b_1 & 0 \\
Db_1 & b_1 \\
D^2 b_1 & (D + I)\, b_1 \\
\vdots & \vdots \\
D^{\ell_1 + 1} b_1 & (D^{\ell_1} + \cdots + D + I)\, b_1 \\
b_2 & 0 \\
Db_2 & b_2 \\
D^2 b_2 & (D + I)\, b_2 \\
\vdots & \vdots
\end{pmatrix} \qquad (III.2)$$
We have proved that the rows with b_1 in M are linearly independent. We also know that the rows with b_2 will be linearly
independent amongst themselves up to D^{ℓ_2+1} b_2. Suppose,
however, that for some k ≤ ℓ_2 we have D^k b_2 in the span
of Z = {b_1, Db_1, ..., D^{ℓ_1} b_1, b_2, ..., D^{k−1} b_2}. In this case,
we find that D^{k+1} b_2 (together with all higher powers of D
applied to b_2) is also in the span of Z.
If D^k b_2 is in the span of Z, a set of row operations will
show that the rows of M from the top down to the row

$$\begin{pmatrix} D^{k+1} b_2 & (D^{k} + \cdots + D + I)\, b_2 \end{pmatrix}$$

form a linearly dependent set, and all further rows are likewise
linearly dependent on the rows above them.
This sequence of lemmas about the linear independence of
the rows of M in Equation (III.2) yields the following results.
Theorem III.4. If a set ⋃_{i=1}^p {b_i, Db_i, ..., D^{ℓ_i} b_i} ∪
{b_{p+1}, Db_{p+1}, ..., D^k b_{p+1}} spans C^N, each new point of
space sampling, represented by more vectors b_j, will provide
at most one additional linearly independent row in the
corresponding matrix M.
Corollary III.5. Suppose for a vector b_1 that r_1 = N,
i.e. {b_1, Db_1, ..., D^{N−1} b_1} is a basis for C^N. Then N − 1
additional vectors {b_i} are needed to successfully reconstruct
both the signal x and the forcing term σ.
Corollary III.6. If the index set of space samples Ω, used to
sample signals in C^N, has |Ω| = m, then the rank of the matrix M is at
most N + m. In other words, reconstruction of both a signal
x and a forcing term σ, both in C^N, requires N space positions
of sampling.
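A quick numerical experiment, under illustrative assumptions (a random test matrix scaled to keep its powers bounded, sensors at the first m coordinates), shows the rank saturation behind Corollary III.6: taking ever more time samples does not push the rank of the stacked measurement rows past N + m.

```python
import numpy as np

# Sketch illustrating Corollary III.6: with m < N sensors, the rank of the
# stacked rows [e_i^T A^l | e_i^T C_{l-1}] never exceeds N + m, however many
# time samples are taken, so the 2N unknowns [x_0; sigma] cannot both be found.
rng = np.random.default_rng(1)
N, m = 6, 3
A = rng.standard_normal((N, N))
A = A / (1.5 * np.max(np.abs(np.linalg.eigvals(A))))   # keep powers of A bounded
for L in (N, 2 * N, 4 * N):                            # ever longer time horizons
    rows = []
    for i in range(m):                                 # sensors e_0, ..., e_{m-1}
        Apow, C = np.eye(N), np.zeros((N, N))          # A^0 and C_{-1} = 0
        for l in range(L + 1):
            rows.append(np.concatenate([Apow[i, :], C[i, :]]))
            C, Apow = C + Apow, Apow @ A
    print(L, np.linalg.matrix_rank(np.vstack(rows)))   # expected to stall at N + m = 9
```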
This result shows that when a forcing term is present,
sampling more in time does not allow us to reduce the need
for space samples. The space sampling is necessary to be
able to differentiate between the signal and the forcing term.
However, in many applications the spectral radius R(A) of A is
less than 1, i.e., the evolution operator dissipates energy and
any initial condition dies out. In these cases we can always
assume that x_0 = 0 and use the result of the preceding section.
Moreover, if additional assumptions on σ or x_0 are made (e.g.,
sparsity), then it is possible to reduce the size of Ω and
still differentiate between the two components x_0 and σ.
B. Forcing delay
For the general case in which the spectral radius R(A) of A is
not less than 1, it is possible that the source of the forcing
component σ is not present when the initial samples are taken,
but instead begins at time t_0, resulting in the system
$$x_0 = x, \qquad x_1 = Ax, \qquad \ldots, \qquad x_{t_0} = A^{t_0} x + \sigma, \qquad \ldots, \qquad x_k = A^k x + C_{k - t_0}\, \sigma,$$

where C_{k−t_0} is defined as in Equation (II.2).
Suppose that for Ω ⊆ {1, 2, ..., N} and some time restrictions {ℓ_i}_{i∈Ω}, the set F_Ω = {A^{*ℓ} e_i : i ∈ Ω, ℓ =
0, 1, ..., ℓ_i} is complete in C^N, so that any x ∈ C^N can be
recovered from the samples ⟨A^ℓ x, e_i⟩ at each i ∈ Ω for times
ℓ = 0, 1, ..., ℓ_i. If t_0 ≥ max{ℓ_i : i ∈ Ω}, then the samples
taken are enough to reconstruct x before the forcing term
begins. In this situation, we avoid the necessity to distinguish
between the desired signal x_0 and the forcing portion σ.
If, however, 0 < t0 < `i for at least one i ∈ Ω, we must
combine the information gathered on the signal alone and
the information gathered with the forcing term present. If we
consider a single sensor location b_1, the matrix M_1 having the
maximal number of linearly independent rows, as computed
in Lemma III.2, becomes
$$M_1 = \begin{pmatrix}
b_1 & 0 \\
Db_1 & 0 \\
\vdots & \vdots \\
D^{t_0 - 1} b_1 & 0 \\
D^{t_0} b_1 & b_1 \\
D^{t_0 + 1} b_1 & (D + I)\, b_1 \\
\vdots & \vdots \\
D^{\ell_1 + t_0} b_1 & (D^{\ell_1} + \cdots + D + I)\, b_1
\end{pmatrix}.$$
Lemma III.7. Let the forcing term in the dynamical sampling
system be delayed until time t_0. The matrix M_1 above has
ℓ_1 + t_0 linearly independent rows generated from the single
element b_1, where ℓ_1 is one less than the degree of the minimal
annihilating polynomial of D for b_1. This is the maximum
number of independent rows generated by b_1.
When a second sensing location is added, as in Equation
(III.2), we see that the delay in the forcing term results in
an additional increase in information. Suppose Dk b2 is in the
span of Z = {b_1, Db_1, ..., D^{ℓ_1} b_1, b_2, Db_2, ..., D^{k−1} b_2}. If
k < t0 , we will see (up to) 2k additional linearly independent
rows generated in the matrix M .
$$M = \begin{pmatrix}
b_1 & 0 \\
Db_1 & 0 \\
\vdots & \vdots \\
D^{\ell_1 + t_0} b_1 & (D^{\ell_1} + \cdots + D + I)\, b_1 \\
b_2 & 0 \\
Db_2 & 0 \\
\vdots & \vdots \\
D^{k-1} b_2 & 0 \\
(*) & (*) \\
D^{t_0} b_2 & b_2 \\
\vdots & \vdots \\
D^{t_0 + k} b_2 & (D^{k} + \cdots + D + I)\, b_2
\end{pmatrix} \qquad (III.3)$$
The rows indicated by (∗) are not part of the linearly
independent set. Therefore, in the case where t0 ≥ k, there are
2k linearly independent rows in M that are generated by b2 .
In the case where t0 < k, there are k +t0 linearly independent
rows generated by b2 .
Lemma III.8. The matrix M above has ℓ_1 + t_0 linearly
independent rows generated by b_1 and min(ℓ_2 + t_0, 2k, k + t_0)
linearly independent rows generated by b_2.
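The row counts in Lemmas III.7 and III.8 can likewise be checked numerically by building the delayed measurement rows and comparing ranks with and without the second sensor. A sketch under the same illustrative conventions as before (standard-basis rows rather than the paper's D and b_i; names are assumptions):

```python
import numpy as np

def delayed_measurement_matrix(A, Omega, levels, t0):
    """Rows [e_i^T A^l | e_i^T S_l] when the forcing term starts at time t0,
    with S_l = 0 for l < t0 and S_l = C_{l-t0} = I + A + ... + A^{l-t0} for
    l >= t0, so that x_l = A^l x_0 + S_l sigma."""
    N = A.shape[0]
    rows = []
    for i in Omega:
        Apow = np.eye(N)                      # A^l
        Bpow = np.eye(N)                      # A^{l-t0}, used once l >= t0
        S = np.zeros((N, N))                  # S_l
        for l in range(levels[i] + 1):
            if l >= t0:
                S = S + Bpow                  # S_l = C_{l-t0}
                Bpow = Bpow @ A
            rows.append(np.concatenate([Apow[i, :], S[i, :]]))
            Apow = Apow @ A
    return np.vstack(rows)

# The number of independent rows contributed by a second sensor is then
# rank(M for both sensors) - rank(M for the first sensor alone), which can be
# compared with the counts min(l_2 + t0, 2k, k + t0) of Lemma III.8.
```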
Thus, using the two lemmas above, a careful examination of
the set Ω, the values of ℓ_i for each i ∈ Ω, the delay t_0, and the
matrix M will give the necessary and/or sufficient conditions
for the exact reconstruction of both x and σ. One conclusion
is the following.
Theorem III.9. If the forcing term begins at time t_0 and a
set ⋃_{i=1}^p {b_i, Db_i, ..., D^{ℓ_i} b_i} ∪ {b_{p+1}, Db_{p+1}, ..., D^k b_{p+1}} spans
C^N, each new point of space sampling, represented by rows
generated by the vector b_j, will provide at most t_0 additional
linearly independent rows in the matrix M.
IV. CONCLUSIONS
In this note, we present preliminary results about the impact
of a forcing term on dynamical sampling. We find that, in the
most general case with the forcing term and the original state
both nonzero, there is no time-space tradeoff benefit. Sampling
in time is of only limited value in separating out the initial
signal from the forcing expression. There is increased benefit,
however, if the signal decays over time or if the forcing term
is not present when sampling begins.
This note presents our preliminary results on this problem;
many further questions about this form of sampling system
remain under investigation.
ACKNOWLEDGEMENTS
Akram Aldroubi is supported in part by the collaborative
NSF ATD grant DMS-1322099. The work of Keri Kornelson
was supported in part by Simons Foundation Grant #244718.
REFERENCES
[1] A. Aldroubi, J. Davis, and I. Krishtal, “Dynamical sampling: time-space
trade-off,” Appl. Comput. Harmon. Anal., vol. 34, no. 3, pp. 495–503,
2013. [Online]. Available: http://dx.doi.org/10.1016/j.acha.2012.09.002
[2] A. Aldroubi, C. Cabrelli, U. Molter, and S. Tang, “Dynamical sampling,”
2015, arXiv:1409.8333.
[3] R. Aceska, A. Aldroubi, J. Davis, and A. Petrosyan, “Dynamical
sampling in shift-invariant spaces,” in Commutative and noncommutative
harmonic analysis and applications, ser. Contemp. Math. Amer. Math.
Soc., Providence, RI, 2013, vol. 603, pp. 139–148. [Online]. Available:
http://dx.doi.org/10.1090/conm/603/12047
[4] A. Aldroubi, J. Davis, and I. Krishtal, “Exact reconstruction of spatially
undersampled signals in evolutionary systems,” J. Fourier Anal.
Appl., DOI: 10.1007/s00041-014-9359-9, arXiv:1312.3203. [Online].
Available: http://arxiv.org/abs/1312.3203
[5] A. Aldroubi, I. Krishtal, and E. Weber, “Finite dimensional dynamical
sampling: an overview,” in Excursions in harmonic analysis. Volume
4, ser. Appl. Numer. Harmon. Anal. Birkhäuser/Springer, New York,
2015, to appear.
[6] J. Ranieri, A. Chebira, Y. M. Lu, and M. Vetterli, “Sampling and
reconstructing diffusion fields with localized sources,” in Acoustics,
Speech and Signal Processing (ICASSP), 2011 IEEE International
Conference on, May 2011, pp. 4016–4019.
[7] Y. Lu and M. Vetterli, “Spatial super-resolution of a diffusion field by
temporal oversampling in sensor networks,” in Acoustics, Speech and
Signal Processing, 2009. ICASSP 2009. IEEE International Conference
on, April 2009, pp. 2249–2252.
[8] T. Blu, P.-L. Dragotti, M. Vetterli, P. Marziliano, and L. Coulot, “Sparse
sampling of signal innovations,” Signal Processing Magazine, IEEE,
vol. 25, no. 2, pp. 31–40, March 2008.
[9] A. Hormati, O. Roy, Y. Lu, and M. Vetterli, “Distributed sampling
of signals linked by sparse filtering: Theory and applications,” Signal
Processing, IEEE Transactions on, vol. 58, no. 3, pp. 1095–1109, March
2010.
[10] Y. Lu, P.-L. Dragotti, and M. Vetterli, “Localization of diffusive sources
using spatiotemporal measurements,” in Communication, Control, and
Computing (Allerton), 2011 49th Annual Allerton Conference on, Sept
2011, pp. 1072–1076.
[11] A. Aldroubi and K. Gröchenig, “Nonuniform sampling and
reconstruction in shift-invariant spaces,” SIAM Rev., vol. 43,
no. 4, pp. 585–620 (electronic), 2001. [Online]. Available:
http://dx.doi.org/10.1137/S0036144501386986
[12] J. J. Benedetto and P. J. S. G. Ferreira, Eds., Modern sampling theory,
ser. Applied and Numerical Harmonic Analysis. Birkhäuser Boston,
Inc., Boston, MA, 2001, mathematics and applications. [Online].
Available: http://dx.doi.org/10.1007/978-1-4612-0143-4