A General Approach for Convergence Analysis of Adaptive Sampling-Based Signal Processing

Holger Boche∗
Technische Universität München
Lehrstuhl für Theoretische Informationstechnik
E-mail: boche@tum.de

Ullrich J. Mönich†
Massachusetts Institute of Technology
Research Laboratory of Electronics
E-mail: moenich@mit.edu
Abstract—It is well-known that there exist bandlimited signals
for which certain sampling series are divergent. One possible
way of circumventing the divergence is to adapt the sampling
series to the signals. In this paper we study adaptivity in the
number of summands that are used in each approximation step,
and whether this kind of adaptive signal processing can improve
the convergence behavior of the sampling series. We approach
the problem by considering approximation processes in general
Banach spaces and show that adaptivity reduces the set of signals
with divergence from a residual set to a meager or empty set. Due
to the non-linearity of the adaptive approximation process, this
study cannot be done by using the Banach–Steinhaus theory. We
present examples from sampling-based signal processing in
which strong divergence, which is connected to the effectiveness
of adaptive signal processing, has recently been observed.
I. NOTATION
Let $\hat{f}$ denote the Fourier transform of a function $f$. $L^p(\mathbb{R})$, $1 \le p < \infty$, is the space of all $p$th-power Lebesgue integrable functions on $\mathbb{R}$, with the usual norm $\|\cdot\|_p$, and $L^\infty(\mathbb{R})$ is the space of all functions for which the essential supremum norm $\|\cdot\|_\infty$ is finite.

For $0 < \sigma < \infty$ let $\mathcal{B}_\sigma$ be the set of all entire functions $f$ with the property that for all $\epsilon > 0$ there exists a constant $C(\epsilon)$ with $|f(z)| \le C(\epsilon) \exp((\sigma + \epsilon)|z|)$ for all $z \in \mathbb{C}$. The Bernstein space $\mathcal{B}_\sigma^p$, $1 \le p \le \infty$, consists of all functions in $\mathcal{B}_\sigma$ whose restriction to the real line is in $L^p(\mathbb{R})$. The norm for $\mathcal{B}_\sigma^p$ is given by the $L^p$-norm on the real line, i.e., $\|\cdot\|_{\mathcal{B}_\sigma^p} = \|\cdot\|_p$. A function in $\mathcal{B}_\sigma^p$ is called bandlimited to $\sigma$.

For $0 < \sigma < \infty$ and $1 \le p \le \infty$, we denote by $\mathcal{PW}_\sigma^p$ the Paley–Wiener space of functions $f$ with a representation $f(z) = \frac{1}{2\pi} \int_{-\sigma}^{\sigma} g(\omega) e^{iz\omega} \, d\omega$, $z \in \mathbb{C}$, for some $g \in L^p[-\sigma, \sigma]$. If $f \in \mathcal{PW}_\sigma^p$, then $g = \hat{f}$. The norm for $\mathcal{PW}_\sigma^p$, $1 \le p < \infty$, is given by $\|f\|_{\mathcal{PW}_\sigma^p} = \left( \frac{1}{2\pi} \int_{-\sigma}^{\sigma} |\hat{f}(\omega)|^p \, d\omega \right)^{1/p}$.
II. INTRODUCTION AND MOTIVATION
Sampling theory studies the reconstruction of a signal in
terms of its samples. In addition to its mathematical significance, sampling theory plays a fundamental role in modern signal and information processing because it is the basis for today's digital world [1].

∗ This work was partly supported by the German Research Foundation (DFG) under grant BO 1734/20-1.
† U. Mönich was supported by the German Research Foundation (DFG) under grant MO 2572/1-1.
978-1-4673-7353-1/15/$31.00 © 2015 IEEE
The fundamental initial result of the theory states that the
Shannon sampling series
$$\sum_{k=-\infty}^{\infty} f(k) \frac{\sin(\pi(t-k))}{\pi(t-k)} \tag{1}$$
can be used to reconstruct bandlimited signals $f$ with finite $L^2$-norm from their samples $\{f(k)\}_{k\in\mathbb{Z}}$. Since this initial result,
many different sampling theorems have been developed, and
determining the signal classes for which the theorems hold
and the modes of convergence now constitutes an entire area
of research [2]–[5].
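As a quick numerical illustration of (1) (our own sketch, not part of the paper), the truncated series can be evaluated directly; the shifted sinc below is a hypothetical test signal in $\mathcal{PW}_\pi^2$, for which the series converges:

```python
import numpy as np

def shannon_partial_sum(f, t, N):
    """Truncated Shannon series: sum_{k=-N}^{N} f(k) sinc(t - k).

    np.sinc(x) = sin(pi x)/(pi x), which is exactly the kernel in (1).
    """
    k = np.arange(-N, N + 1)
    return np.sum(f(k) * np.sinc(t - k))

# Hypothetical test signal, bandlimited to pi with finite L^2 norm.
f = lambda t: np.sinc(t - 0.3)

t0 = 0.25
errors = [abs(shannon_partial_sum(f, t0, N) - f(t0)) for N in (10, 100, 1000)]
# The approximation error at t0 shrinks as more samples are included.
```

For $p = 2$ the truncation error decays as $N$ grows, which the list `errors` shows empirically; the divergence phenomena discussed below concern $p = 1$.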
Recently, sampling series that do not converge for all signals
from a given signal space have been analyzed [6], [7]. In
order to obtain the divergence results, the Banach–Steinhaus
theorem [8] or techniques related to Baire’s category theorem
[9, pp. 11] have been employed. However, these techniques
are not sufficient to answer the question whether adaptive
signal processing can improve the convergence behavior of
the sampling series, because adaptivity leads to a non-linear
approximation operator. In this paper we address this question
and analyze the structure of the set of signals for which we
have divergence. In particular, we are interested in whether the size of this set can be reduced if adaptivity is employed.
A. Divergence of the Shannon Sampling Series
For signals f ∈ PW pπ , 1 < p < ∞, the series (1) converges
absolutely and uniformly on all of R [4]. However, for p = 1,
i.e., for $f \in \mathcal{PW}_\pi^1$ we have
$$\limsup_{N\to\infty} \max_{t\in\mathbb{R}} \left| \sum_{k=-N}^{N} f(k) \frac{\sin(\pi(t-k))}{\pi(t-k)} \right| = \infty, \tag{2}$$
that is the peak value diverges as N tends to infinity. Recently,
this result has been strengthened in [10], where it has been
shown that there exists a signal $f \in \mathcal{PW}_\pi^1$ such that
$$\lim_{N\to\infty} \max_{t\in\mathbb{R}} \left| \sum_{k=-N}^{N} f(k) \frac{\sin(\pi(t-k))}{\pi(t-k)} \right| = \infty. \tag{3}$$
It is important to understand the difference in the divergence
behavior of (2) and (3). In (3) we have divergence in terms of
the lim whereas in (2) we only have divergence in terms of the
lim sup. In order to illustrate this difference, we consider the
expressions $\limsup_{n\to\infty} x_n = \infty$ and $\lim_{n\to\infty} x_n = \infty$ for a general sequence $\{x_n\}_{n\in\mathbb{N}} \subset \mathbb{C}$. The lim sup divergence is a much weaker notion of divergence, because it merely guarantees the existence of a subsequence $\{N_n\}_{n\in\mathbb{N}}$ of the natural numbers for which we have $\lim_{n\to\infty} x_{N_n} = \infty$. This leaves the possibility that there exists a different subsequence $\{N_n^*\}_{n\in\mathbb{N}}$ such that $\lim_{n\to\infty} x_{N_n^*} < \infty$. In contrast, for the lim divergence we have divergence for all subsequences. Therefore, we call the lim divergence strong divergence.
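A toy numerical sketch (ours, not from the paper) makes the distinction concrete: for the first sequence below an adaptive choice of indices avoids the blow-up, while for the second no subsequence can escape.

```python
# x_n diverges only in the lim sup sense: the even-index subsequence is
# unbounded, while the odd-index subsequence is identically zero.
x = lambda n: n if n % 2 == 0 else 0
# y_n diverges strongly: every subsequence tends to infinity.
y = lambda n: n

odd_tail = [x(n) for n in range(1, 1000, 2)]
even_tail = [x(n) for n in range(2, 1000, 2)]

assert max(odd_tail) == 0          # a "good" subsequence exists for x
assert even_tail[-1] == 998        # ... even though x is unbounded
assert min(y(n) for n in range(900, 1000)) >= 900  # no escape for y
```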
For the further discussion we need the following concepts
from metric spaces [9]. A subset M of a metric space X is
said to be nowhere dense in X if the closure [M ] does not
contain a non-empty open set of X. M is said to be meager (or
of the first category) if M is the countable union of sets each
of which is nowhere dense in X. M is said to be nonmeager (or of the second category) if it is not meager. The complement
of a meager set is called a residual set. Meager sets may
be considered as “small”. According to Baire’s theorem [9],
in a complete metric space any residual set is dense and
nonmeager. One property that shows the richness of residual
sets is the following: The countable intersection of residual
sets is always a residual set. Further, any subset of a meager
set is a meager set and any superset of a residual set is a
residual set. In particular, we will use the following fact in our proof: in a complete metric space, any open and dense set is a residual set because its complement is nowhere dense.
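A standard example of this size dichotomy (our illustration, not from the paper): the rationals are meager in $\mathbb{R}$, so the irrationals form a residual set.

```latex
% Each singleton {q} is closed with empty interior, hence nowhere dense in R.
\mathbb{Q} \;=\; \bigcup_{q \in \mathbb{Q}} \{q\}
\quad\Longrightarrow\quad \mathbb{Q} \text{ is meager in } \mathbb{R},
\qquad
\mathbb{R} \setminus \mathbb{Q} \text{ is residual, hence dense by Baire's theorem.}
```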
Divergence results as in (2) are usually proved by using
the uniform boundedness principle, which is also known as the Banach–Steinhaus theorem [8]. As an immediate consequence,
the obtained divergence is in terms of the lim sup and not
a statement about strong divergence. However, the strength
of the uniform boundedness principle is that the divergence
statement holds not only for a single signal but immediately
for a large set of signals: the set of all signals for which we
have divergence is a residual set.
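The quantitative mechanism behind such arguments for the Shannon series is the growth of $\max_t \sum_{k=-N}^{N} |\operatorname{sinc}(t-k)|$, which is unbounded in $N$ (roughly logarithmic growth). A quick numerical check (our sketch, not from the paper):

```python
import numpy as np

def sinc_abs_sum_max(N, t_grid):
    """max over the grid of sum_{k=-N}^{N} |sinc(t - k)|: an operator-norm-type
    quantity whose unboundedness drives lim sup divergence results like (2)."""
    k = np.arange(-N, N + 1)
    return max(np.sum(np.abs(np.sinc(t - k))) for t in t_grid)

t_grid = np.linspace(0.01, 0.99, 99)
L = [sinc_abs_sum_max(N, t_grid) for N in (10, 100, 1000)]
# L increases without bound as N grows.
```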
B. System Approximation
While the Shannon sampling series (1) is concerned with
the reconstruction of a bandlimited signal f from its samples
{f (k)}k∈Z , a slightly more general problem is the approximation of the output T f of a stable linear time-invariant
(LTI) system T from the samples {f (k)}k∈Z of the input
signal f . This is the situation that is encountered in digital
signal processing applications, where the interest is not in the
reconstruction of a signal, but rather in the implementation of
a system, i.e., in some transformation T f of the sampled input
signal f [11].
We briefly review some basic definitions and facts about
stable linear time-invariant (LTI) systems. A linear system $T : \mathcal{PW}_\pi^p \to \mathcal{PW}_\pi^p$, $1 \le p \le \infty$, is called stable if the operator $T$ is bounded, i.e., if $\|T\| = \sup_{\|f\|_{\mathcal{PW}_\pi^p} \le 1} \|Tf\|_{\mathcal{PW}_\pi^p} < \infty$. Furthermore, it is called time-invariant if $(Tf(\,\cdot\, - a))(t) = (Tf)(t-a)$ for all $f \in \mathcal{PW}_\pi^p$ and $t, a \in \mathbb{R}$. For every stable LTI system $T : \mathcal{PW}_\pi^1 \to \mathcal{PW}_\pi^1$, there exists exactly one function $\hat{h}_T \in L^\infty[-\pi, \pi]$ such that
$$(Tf)(t) = \frac{1}{2\pi} \int_{-\pi}^{\pi} \hat{f}(\omega) \hat{h}_T(\omega) e^{i\omega t} \, d\omega, \quad t \in \mathbb{R}, \tag{4}$$
for all $f \in \mathcal{PW}_\pi^1$ [12]. $h_T = T\operatorname{sinc}$ is called the impulse response of the system $T$.
A natural approach to approximate the system output T f
from the samples of $f$ is to use the approximation series
$$\sum_{k=-\infty}^{\infty} f(k) h_T(t-k). \tag{5}$$
In order to analyze the convergence behavior of (5), we introduce the abbreviation
$$(T_N f)(t) = \sum_{k=-N}^{N} f(k) h_T(t-k). \tag{6}$$
The convergence behavior of the system approximation process (6) is more problematic than the convergence behavior
of the Shannon sampling series. While the Shannon sampling
series is locally uniformly convergent for all f ∈ PW 1π [13],
the system approximation process TN can be divergent [12].
For all $t \in \mathbb{R}$ there exist a stable LTI system $T : \mathcal{PW}_\pi^1 \to \mathcal{PW}_\pi^1$ and a signal $f \in \mathcal{PW}_\pi^1$ such that
$$\limsup_{N\to\infty} |(T_N f)(t)| = \infty. \tag{7}$$
This divergence result is even true for oversampling and any
arbitrary choice of the reconstruction kernel.
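For concreteness, the process (6) can be evaluated numerically. Below is our own sketch using the bandlimited Hilbert transform, whose impulse response is $h_T(t) = (1 - \cos(\pi t))/(\pi t)$; the shifted sinc is a hypothetical well-behaved input, so the partial sums settle down here, while the divergence in (7) concerns worst-case signals in $\mathcal{PW}_\pi^1$:

```python
import numpy as np

def h_hilbert(t):
    """Impulse response of the bandlimited Hilbert transform:
    h_T(t) = (1 - cos(pi t)) / (pi t), with h_T(0) = 0."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    nz = t != 0
    out[nz] = (1.0 - np.cos(np.pi * t[nz])) / (np.pi * t[nz])
    return out

def T_N(f, h, t, N):
    """Partial sum (T_N f)(t) = sum_{k=-N}^{N} f(k) h(t - k), cf. (6)."""
    k = np.arange(-N, N + 1)
    return np.sum(f(k) * h(t - k))

f = lambda t: np.sinc(t - 0.3)               # hypothetical input signal
exact = h_hilbert(np.array([0.5 - 0.3]))[0]  # (Tf)(0.5) = h_T(0.5 - 0.3)
vals = [T_N(f, h_hilbert, 0.5, N) for N in (10, 100, 1000)]
# vals approaches `exact` as N grows.
```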
However, the divergence as stated in (7) is only weak lim sup divergence, not strong divergence. Thus, a clever choice of a subsequence $\{N_n\}_{n\in\mathbb{N}}$, i.e., of the number of samples that are used in each approximation step, may lead to a convergent approximation process. It is not guaranteed that such a subsequence exists; if it exists, the subsequence will in general depend on the signal $f$. In this case the approximation process $T_{N_n} f$ would be adapted to the signal $f$. The problem
of finding an index sequence, depending on the signal f , that is
suitable for achieving the desired goal, is the task of adaptive
signal processing.
As for the global approximation behavior of the Shannon
sampling series, this kind of adaptiveness is useless, because
we have strong divergence according to (3), and thus no
subsequence exists that leads to convergence.
In general, the question of whether adaptive signal processing can improve the behavior of an approximation process is important for practical applications. In the case of divergence, it is interesting to know the size of the set of signals for which we have divergence. Both questions will be addressed in Sections III–V.
III. APPROXIMATION PROCESSES IN BANACH SPACES
We will approach the question of strong divergence in a more general and abstract setting. To this end we consider two Banach spaces, $\mathcal{B}_1$ and $\mathcal{B}_2$, and a bounded linear operator $T : \mathcal{B}_1 \to \mathcal{B}_2$. We want to approximate $T$ by a sequence of bounded linear operators $\{U_N\}_{N\in\mathbb{N}}$, mapping from $\mathcal{B}_1$ to $\mathcal{B}_2$. The operator norm of $T$ is given by $\|T\| = \sup_{\|f\|_{\mathcal{B}_1} \le 1} \|Tf\|_{\mathcal{B}_2}$. The norm for $U_N$ is defined analogously.
This setting includes many relevant special cases, as shown
by the following examples:
Examples.
1) The Shannon sampling series (global convergence): $\mathcal{B}_1 = \mathcal{PW}_\pi^1$, $\mathcal{B}_2 = \mathcal{B}_\pi^\infty$, $T = \mathrm{Id}$,
$$(U_N f)(t) = \sum_{k=-N}^{N} f(k) \frac{\sin(\pi(t-k))}{\pi(t-k)}.$$
2) System approximation process (global convergence): $\mathcal{B}_1 = \mathcal{PW}_\pi^1$, $\mathcal{B}_2 = \mathcal{B}_\pi^\infty$, $T : \mathcal{PW}_\pi^1 \to \mathcal{PW}_\pi^1$ a stable LTI system,
$$(U_N f)(t) = \sum_{k=-N}^{N} f(k) h_T(t-k), \tag{8}$$
where $h_T(t) = (T\operatorname{sinc})(t)$.
3) System approximation process (pointwise convergence at $t \in \mathbb{R}$): $\mathcal{B}_1 = \mathcal{PW}_\pi^1$, $\mathcal{B}_2 = \mathbb{C}$, $T : \mathcal{PW}_\pi^1 \to \mathbb{C}$, $f \mapsto (\tilde{T} f)(t)$, where $\tilde{T} : \mathcal{PW}_\pi^1 \to \mathcal{PW}_\pi^1$ is a stable LTI system.
4) Non-equidistant sampling series and system approximation processes.
For our analyses we assume:
(A1) There exists a dense subset $S_1$ of $\mathcal{B}_1$ such that
$$\lim_{N\to\infty} \|U_N f - Tf\|_{\mathcal{B}_2} = 0$$
for all $f \in S_1$.
(A2) We have $\limsup_{N\to\infty} \|U_N\| = \infty$.
For the Shannon sampling series and the system approximation process, assumption (A1) is naturally fulfilled, because we have convergence for all signals in $\mathcal{PW}_\pi^2$, which is a dense subspace of $\mathcal{PW}_\pi^1$. Assumption (A2) is necessary, because if (A2) were not fulfilled, i.e., if we had $\limsup_{N\to\infty} \|U_N\| < \infty$, we would have convergence for all $f \in \mathcal{B}_1$, and nothing would need to be analyzed.

If (A2) is fulfilled, then it follows from the Banach–Steinhaus theorem that the set
$$D_{\mathrm{BS}} = \left\{ f \in \mathcal{B}_1 : \limsup_{N\to\infty} \|U_N f\|_{\mathcal{B}_2} = \infty \right\}$$
is a residual set in $\mathcal{B}_1$. If $\liminf_{N\to\infty} \|U_N\| < \infty$ then there exists a universal subsequence $\{N_l\}_{l\in\mathbb{N}}$ of the natural numbers such that
$$\lim_{l\to\infty} \|U_{N_l} f - Tf\|_{\mathcal{B}_2} = 0 \tag{9}$$
for all $f \in \mathcal{B}_1$. In this case, adaptive signal processing would lead to convergence for the whole space $\mathcal{B}_1$. Hence, we want to study the case
(A2') $\lim_{N\to\infty} \|U_N\| = \infty$
and analyze under what circumstances the set
$$D_{\mathrm{strong}} = \left\{ f \in \mathcal{B}_1 : \lim_{N\to\infty} \|U_N f\|_{\mathcal{B}_2} = \infty \right\},$$
which contains all the signals for which adaptive signal processing is useless, is non-empty. Further, in the case where $D_{\mathrm{strong}}$ is non-empty, we are interested in its structure.

Remark 1. For the Shannon sampling series (Example 1) we already know from (3) that the set $D_{\mathrm{strong}}$ is non-empty. Consequently, (A2') has to be true for Example 1. For the system approximation (Example 2), it has been shown in [14] that the Hilbert transform is a system for which we have strong divergence even with oversampling.

It is clear that the Banach–Steinhaus theorem together with condition (A2') immediately gives that there cannot exist a universal subsequence $\{N_l\}_{l\in\mathbb{N}}$ of the natural numbers such that (9) is true for all $f \in \mathcal{B}_1$. However, the Banach–Steinhaus theorem does not make a statement about the question whether it is possible to find for every signal $f \in \mathcal{B}_1$ a subsequence $\{N_l(f)\}_{l\in\mathbb{N}}$ of the natural numbers such that (9) is true. In this case the subsequence $\{N_l(f)\}_{l\in\mathbb{N}}$ is adapted to the signal $f$. Clearly, the answer to this question is negative if and only if $D_{\mathrm{strong}} \neq \emptyset$.

It is easy to see that if we have strong divergence then we have strong divergence not only for one signal but for all signals from a dense subset of $\mathcal{B}_1$.

Observation 1. Let $\mathcal{B}_1$ and $\mathcal{B}_2$ be two Banach spaces and $T : \mathcal{B}_1 \to \mathcal{B}_2$ a bounded linear operator. Further, let $\{U_N\}_{N\in\mathbb{N}}$ be a sequence of bounded linear operators mapping from $\mathcal{B}_1$ to $\mathcal{B}_2$ such that (A1) and (A2') are fulfilled. If $D_{\mathrm{strong}} \neq \emptyset$ then $D_{\mathrm{strong}}$ is dense in $\mathcal{B}_1$.

IV. SET OF SIGNALS WITHOUT STRONG DIVERGENCE

In the case where $D_{\mathrm{strong}}$ is not empty, we want to analyze the set $D_{\mathrm{sb}} = D_{\mathrm{BS}} \setminus D_{\mathrm{strong}}$. Clearly, $D_{\mathrm{sb}}$ is given by
$$D_{\mathrm{sb}} = \left\{ f \in \mathcal{B}_1 : \limsup_{N\to\infty} \|U_N f\|_{\mathcal{B}_2} = \infty \ \text{ and } \ \liminf_{N\to\infty} \|U_N f\|_{\mathcal{B}_2} < \infty \right\}$$
and contains all signals for which the approximation process diverges and adaptive signal processing leads to a bounded approximation process. The subset of signals for which adaptive signal processing leads to a convergent approximation process will be analyzed in Section V.

We have the following theorem.

Theorem 1. Let $\mathcal{B}_1$ and $\mathcal{B}_2$ be two Banach spaces and $T : \mathcal{B}_1 \to \mathcal{B}_2$ a bounded linear operator. Further, let $\{U_N\}_{N\in\mathbb{N}}$ be a sequence of bounded linear operators mapping from $\mathcal{B}_1$ to $\mathcal{B}_2$ such that (A1) and (A2') are fulfilled. Then $D_{\mathrm{sb}}$ is a residual set in $\mathcal{B}_1$.

Proof. For the proof we introduce the set
$$D_2 = \left\{ f \in \mathcal{B}_1 : \limsup_{N\to\infty} \|U_N f\|_{\mathcal{B}_2} = \infty \ \text{ and } \ \liminf_{N\to\infty} \|U_N f\|_{\mathcal{B}_2} \le \|Tf\|_{\mathcal{B}_2} + 1 \right\}.$$
We will show that $D_2$ is a residual set. Since $D_2 \subset D_{\mathrm{sb}}$, this implies that $D_{\mathrm{sb}}$ is also a residual set.
For $M, N, K \in \mathbb{N}$ we consider the set
$$D(M,N,K) = \left\{ f \in \mathcal{B}_1 : \|U_N f\|_{\mathcal{B}_2} > K \ \text{ and } \ \|U_M f\|_{\mathcal{B}_2} < \|Tf\|_{\mathcal{B}_2} + 1 \right\}.$$
We first prove that $D(M,N,K)$ is an open set. (The interesting case is where $D(M,N,K) \neq \emptyset$; if $D(M,N,K) = \emptyset$ then $D(M,N,K)$ is open by definition.) Let $f \in D(M,N,K)$ be arbitrary but fixed. We need to show that there exists an $\epsilon > 0$ such that $V_\epsilon(f) \subset D(M,N,K)$, where
$$V_\epsilon(f) = \{ g \in \mathcal{B}_1 : \|f - g\|_{\mathcal{B}_1} < \epsilon \} \tag{10}$$
denotes the neighborhood of $f$ with radius $\epsilon$. Since $f \in D(M,N,K)$, we have $\|U_N f\|_{\mathcal{B}_2} > K$ and $\|U_M f\|_{\mathcal{B}_2} < \|Tf\|_{\mathcal{B}_2} + 1$, and therefore $C_1(f) = \|U_N f\|_{\mathcal{B}_2} - K > 0$ and $C_2(f) = \|U_M f\|_{\mathcal{B}_2} - \|Tf\|_{\mathcal{B}_2} < 1$. For $f_* \in \mathcal{B}_1$ we have
$$\|U_N f\|_{\mathcal{B}_2} = \|U_N(f - f_*) + U_N f_*\|_{\mathcal{B}_2} \le \|U_N(f - f_*)\|_{\mathcal{B}_2} + \|U_N f_*\|_{\mathcal{B}_2} \le \|U_N\| \, \|f - f_*\|_{\mathcal{B}_1} + \|U_N f_*\|_{\mathcal{B}_2}.$$
Thus, for $f_* \in \mathcal{B}_1$ with $\|f - f_*\|_{\mathcal{B}_1} < C_1(f)/\|U_N\|$, we have $\|U_N f_*\|_{\mathcal{B}_2} > K$. Further, for $f_* \in \mathcal{B}_1$, we have
$$\begin{aligned}
\|U_M f_*\|_{\mathcal{B}_2} - \|Tf_*\|_{\mathcal{B}_2}
&= \|U_M f_* - U_M f + U_M f\|_{\mathcal{B}_2} + \|Tf\|_{\mathcal{B}_2} - \|Tf_*\|_{\mathcal{B}_2} - \|Tf\|_{\mathcal{B}_2} \\
&\le \|U_M f\|_{\mathcal{B}_2} + \|U_M(f - f_*)\|_{\mathcal{B}_2} + \|T(f - f_*)\|_{\mathcal{B}_2} - \|Tf\|_{\mathcal{B}_2} \\
&= C_2(f) + \|U_M(f - f_*)\|_{\mathcal{B}_2} + \|T(f - f_*)\|_{\mathcal{B}_2} \\
&\le C_2(f) + (\|U_M\| + \|T\|) \|f - f_*\|_{\mathcal{B}_1}.
\end{aligned}$$
Thus, for $f_* \in \mathcal{B}_1$ with
$$(\|U_M\| + \|T\|) \|f - f_*\|_{\mathcal{B}_1} < 1 - C_2(f)$$
we have
$$\|U_M f_*\|_{\mathcal{B}_2} < \|Tf_*\|_{\mathcal{B}_2} + C_2(f) + (1 - C_2(f)) = \|Tf_*\|_{\mathcal{B}_2} + 1.$$
Hence, if we choose an $\epsilon_*$ with
$$0 < \epsilon_* < \min\left\{ \frac{C_1(f)}{\|U_N\|}, \frac{1 - C_2(f)}{\|U_M\| + \|T\|} \right\},$$
it follows that for all $g \in V_{\epsilon_*}(f)$ we have $g \in D(M,N,K)$. This shows that the set $D(M,N,K)$ is open.
Next, we will prove that for all $N_0, K \in \mathbb{N}$ the set
$$\tilde{D}(N_0, K) = \bigcup_{M,N \ge N_0} D(M,N,K) \tag{11}$$
is dense in $\mathcal{B}_1$. Let $N_0, K \in \mathbb{N}$, $f \in \mathcal{B}_1$, and $\epsilon > 0$ be arbitrary but fixed. Due to assumption (A1), there exists an $f_1 \in S_1$ such that
$$\|f - f_1\|_{\mathcal{B}_1} < \frac{\epsilon}{2},$$
and an $N_1 = N_1(f_1)$ such that for all $N \ge N_1$ we have
$$\|U_N f_1\|_{\mathcal{B}_2} < \|Tf_1\|_{\mathcal{B}_2} + 1. \tag{12}$$
We choose $\hat{N} > \max\{N_0, N_1\}$ so large that
$$\|U_{\hat{N}}\| > \frac{2}{\epsilon} \left( K + \|Tf_1\|_{\mathcal{B}_2} + 1 \right),$$
which is possible due to assumption (A2'). Since
$$\|U_{\hat{N}}\| = \sup_{\|f\|_{\mathcal{B}_1} \le 1} \|U_{\hat{N}} f\|_{\mathcal{B}_2} = \sup_{\substack{\|f\|_{\mathcal{B}_1} = 1, \\ f \in S_1}} \|U_{\hat{N}} f\|_{\mathcal{B}_2},$$
there exists an $f_{\hat{N}} \in S_1$ with $\|f_{\hat{N}}\|_{\mathcal{B}_1} = 1$ such that
$$\|U_{\hat{N}} f_{\hat{N}}\|_{\mathcal{B}_2} > \frac{2}{\epsilon} \left( K + \|Tf_1\|_{\mathcal{B}_2} + 1 \right). \tag{13}$$
Now we consider $f_* = f_1 + \frac{\epsilon}{2} f_{\hat{N}}$. Clearly, we have $\|f - f_*\|_{\mathcal{B}_1} < \epsilon$. To complete the proof that $\tilde{D}(N_0, K)$ is dense in $\mathcal{B}_1$, we will show that $f_* \in \tilde{D}(N_0, K)$. We have
$$\begin{aligned}
\|U_{\hat{N}} f_*\|_{\mathcal{B}_2} &= \left\| \tfrac{\epsilon}{2} U_{\hat{N}} f_{\hat{N}} + U_{\hat{N}} f_1 \right\|_{\mathcal{B}_2} \\
&\ge \tfrac{\epsilon}{2} \|U_{\hat{N}} f_{\hat{N}}\|_{\mathcal{B}_2} - \|U_{\hat{N}} f_1\|_{\mathcal{B}_2} \\
&> \tfrac{\epsilon}{2} \cdot \tfrac{2}{\epsilon} \left( K + \|Tf_1\|_{\mathcal{B}_2} + 1 \right) - \|Tf_1\|_{\mathcal{B}_2} - 1 \\
&= K + \|Tf_1\|_{\mathcal{B}_2} + 1 - \|Tf_1\|_{\mathcal{B}_2} - 1 \\
&= K,
\end{aligned} \tag{14}$$
where we used (12) and (13) in the third to last line. Since $f_* \in S_1$, there exists an $M_0 \in \mathbb{N}$ such that for all $M > M_0$ we have $\|U_M f_* - Tf_*\|_{\mathcal{B}_2} < 1$. Hence, for $\hat{M} > \max\{N_0, M_0\}$, we have
$$\|U_{\hat{M}} f_*\|_{\mathcal{B}_2} < \|Tf_*\|_{\mathcal{B}_2} + 1. \tag{15}$$
It follows from (14) and (15) that $f_* \in D(\hat{M}, \hat{N}, K)$. Since $\hat{M}, \hat{N} \ge N_0$, it follows that $f_* \in \tilde{D}(N_0, K)$, which shows that $\tilde{D}(N_0, K)$ is dense in $\mathcal{B}_1$.
Further, since $\tilde{D}(N_0, K)$ is the union of open sets, it follows that $\tilde{D}(N_0, K)$ is open. Thus, we have established that $\tilde{D}(N_0, K)$ is an open set that is dense in $\mathcal{B}_1$. This is true for all $N_0$ and $K$ in $\mathbb{N}$. It follows that
$$D_3 = \bigcap_{K=1}^{\infty} \bigcap_{N_0=1}^{\infty} \tilde{D}(N_0, K) \tag{16}$$
is a residual set in $\mathcal{B}_1$.
Let $f \in D_3$ be arbitrary but fixed. From (11) and (16) we see that for every $N_0, K \in \mathbb{N}$ there exist natural numbers $N_{N_0,K}$ and $M_{N_0,K}$ satisfying $\min\{N_{N_0,K}, M_{N_0,K}\} \ge N_0$, $\|U_{N_{N_0,K}} f\|_{\mathcal{B}_2} > K$, and $\|U_{M_{N_0,K}} f\|_{\mathcal{B}_2} < \|Tf\|_{\mathcal{B}_2} + 1$. It follows that
$$\limsup_{N\to\infty} \|U_N f\|_{\mathcal{B}_2} = \infty$$
and
$$\liminf_{N\to\infty} \|U_N f\|_{\mathcal{B}_2} \le \|Tf\|_{\mathcal{B}_2} + 1,$$
that is, we have $f \in D_2$. This shows that $D_3 \subset D_2$, and since $D_3$ is a residual set, it follows that $D_2$ and consequently $D_{\mathrm{sb}}$ is a residual set.
Corollary 1. Let B1 and B2 be two Banach spaces and
T : B1 → B2 a bounded linear operator. Further, let
{UN }N ∈N be a sequence of bounded linear operators mapping
from B1 to B2 such that (A1) and (A2’) are fulfilled. Then the
set Dstrong is either empty or a meager set.
Proof. Let $f \in D_{\mathrm{strong}}$ be arbitrary. Then we have $\liminf_{N\to\infty} \|U_N f\|_{\mathcal{B}_2} = \infty$. Thus the second condition in the definition of the set $D_{\mathrm{sb}}$ is not fulfilled, which means $f \in \mathcal{B}_1 \setminus D_{\mathrm{sb}}$. Since $f \in D_{\mathrm{strong}}$ was arbitrary, we see that $D_{\mathrm{strong}} \subset \mathcal{B}_1 \setminus D_{\mathrm{sb}}$. According to Theorem 1, $D_{\mathrm{sb}}$ is a residual set in $\mathcal{B}_1$, and consequently $\mathcal{B}_1 \setminus D_{\mathrm{sb}}$ is a meager set. It follows that $D_{\mathrm{strong}} \subset \mathcal{B}_1 \setminus D_{\mathrm{sb}}$ is a meager set.
In [10] it has been shown that there exists a signal f ∈ PW 1π
such that the peak value of the Shannon sampling series
diverges strongly. Moreover, in [14] strong divergence has
been proved for the peak value of the system approximation
process (8) if the system is the Hilbert transform, even when
oversampling is applied. Now we know from Corollary 1 that,
in both cases, the set of signals with strong divergence can be
at most a meager set.
V. SET OF SIGNALS WITH A CONVERGENT SUBSEQUENCE
Next, we want to analyze for which signals f ∈ DBS
we can find a convergent subsequence, that is, we want to
analyze the set of signals for which the approximation process
diverges and adaptive signal processing leads to a convergent
approximation process. This set is given by
Dsc = f ∈ B1 : lim supkUN f kB2 = ∞
N →∞
and lim inf kUN f − T f kB2 = 0 .
N →∞
Obviously, we have Dsc ⊂ Dsb . From Theorem 1 we already
know that Dsb is a residual set. The next theorem shows that
even Dsc is a residual set.
Theorem 2. Let B1 and B2 be two Banach spaces and
T : B1 → B2 a bounded linear operator. Further, let
{UN }N ∈N be a sequence of bounded linear operators mapping
from B1 to B2 such that (A1) and (A2’) are fulfilled. Then Dsc
is a residual set in B1 .
The proof of Theorem 2 is similar to the proof of Theorem 1
and omitted due to space constraints.
VI. CONCLUSION
Due to our assumption (A2’), the set of signals for which the
approximation process diverges is a residual set. Corollary 1
shows that Dstrong , i.e., the set of signals for which we have
divergence even with adaptivity, is either empty or a meager
set. Thus, adaptivity, i.e., the adaptive choice of the number
of summands in each approximation step, always reduces the
set of signals with divergence from a residual set to a meager
set. This answers the question about the size of the divergence
set, which was posed in [15].
The answer to the question whether adaptivity creates an
approximation process that converges for all signals f ∈ B1
depends on the specific approximation process. For the global
reconstruction behavior of the Shannon sampling series (Example 1), we have seen that adaptivity in the number of
samples for the reconstruction cannot completely eliminate the
observed divergence, because we have strong divergence for
certain signals. For fixed t ∈ R in the system approximation
case (Example 3) the situation is different: we do not have
strong divergence, according to the following result [16].
Theorem 3. Let T : PW 1π → PW 1π be a stable LTI system,
t ∈ R, and f ∈ PW 1π . There exists a monotonically increasing
subsequence {Nk = Nk (t, f, T )}k∈N of the natural numbers
such that limk→∞ (TNk f )(t) = (T f )(t).
Thus, in certain cases, adaptive signal processing can even
be used to create an approximation process that is convergent
for all f ∈ PW 1π .
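Theorem 3 guarantees the existence of such a subsequence but gives no construction. A naive greedy search (our illustration, with a toy sequence standing in for $(T_{N} f)(t)$ and a known target standing in for $(Tf)(t)$) would look like this:

```python
def adaptive_subsequence(approx, target, K, N_max=100000):
    """Greedy sketch of what Theorem 3 asserts: for each k, pick the first
    index N_k > N_{k-1} with |approx(N_k) - target| < 1/k.

    Illustration only: the theorem guarantees existence of such indices for
    (T_N f)(t) at fixed t; it does not provide this search procedure.
    """
    indices, N = [], 0
    for k in range(1, K + 1):
        N += 1
        while abs(approx(N) - target) >= 1.0 / k:
            N += 1
            if N > N_max:
                raise RuntimeError("no suitable index found within budget")
        indices.append(N)
    return indices

# Toy approximation sequence: unbounded overall, but the odd-index
# subsequence hits the target 0 exactly.
a = lambda n: 0.0 if n % 2 else float(n)
idx = adaptive_subsequence(a, target=0.0, K=5)
# idx = [1, 3, 5, 7, 9] (strictly increasing, along the "good" subsequence)
```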
REFERENCES
[1] C. E. Shannon, "Communication in the presence of noise," Proceedings of the IRE, vol. 37, no. 1, pp. 10–21, Jan. 1949.
[2] A. J. Jerri, "The Shannon sampling theorem–its various extensions and applications: A tutorial review," Proceedings of the IEEE, vol. 65, no. 11, pp. 1565–1596, Nov. 1977.
[3] J. R. Higgins, "Five short stories about the cardinal series," Bull. Amer. Math. Soc., vol. 12, no. 1, pp. 45–89, 1985.
[4] P. L. Butzer, W. Splettstößer, and R. L. Stens, "The sampling theorem and linear prediction in signal analysis," Jahresbericht der Deutschen Mathematiker-Vereinigung, vol. 90, no. 1, pp. 1–70, Jan. 1988.
[5] F. Marvasti, Ed., Nonuniform Sampling: Theory and Practice. Kluwer Academic / Plenum Publishers, 2001.
[6] H. Boche and U. J. Mönich, "There exists no globally uniformly convergent reconstruction for the Paley–Wiener space $\mathcal{PW}_\pi^1$ of bandlimited functions sampled at Nyquist rate," IEEE Transactions on Signal Processing, vol. 56, no. 7, pp. 3170–3179, Jul. 2008.
[7] ——, "The class of bandlimited functions with unstable reconstruction under thresholding," in Proceedings of the 8th International Conference on Sampling Theory and Applications (SampTA'09), May 2009.
[8] S. Banach and H. Steinhaus, "Sur le principe de la condensation de singularités," Fundamenta Mathematicae, vol. 9, pp. 50–61, 1927.
[9] K. Yosida, Functional Analysis. Springer-Verlag, 1971.
[10] H. Boche and B. Farrell, "Strong divergence of reconstruction procedures for the Paley–Wiener space $\mathcal{PW}_\pi^1$ and the Hardy space $\mathcal{H}^1$," Journal of Approximation Theory, vol. 183, pp. 98–117, Jul. 2014.
[11] H. Boche and U. J. Mönich, "Signal and system approximation from general measurements," in New Perspectives on Approximation and Sampling Theory — Festschrift in Honor of Paul Butzer's 85th Birthday, ser. Applied and Numerical Harmonic Analysis. Birkhäuser (Springer-Verlag), 2014, pp. 115–148.
[12] ——, "Sampling-type representations of signals and systems," Sampling Theory in Signal and Image Processing, vol. 9, no. 1–3, pp. 119–153, 2010.
[13] J. L. Brown, Jr., "Bounds for truncation error in sampling expansions of band-limited signals," IEEE Transactions on Information Theory, vol. 15, no. 4, pp. 440–444, Jul. 1969.
[14] H. Boche and U. J. Mönich, "Adaptive signal and system approximation and strong divergence," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '15), 2015, accepted.
[15] H. Boche and V. Pohl, "System approximations and generalized measurements in modern sampling theory," in Sampling Theory, a Renaissance, ser. Applied and Numerical Harmonic Analysis. Birkhäuser (Springer-Verlag), 2014, to be published, preprint available at http://arxiv.org/abs/1410.5872.
[16] H. Boche and U. J. Mönich, "Strong divergence for system approximations," in preparation, 2015.