
MATEMATIČKI VESNIK
originalni naučni rad
research paper
65, 4 (2013), 511–518
December 2013
RATE OF CONVERGENCE OF SOME NEURAL NETWORK
OPERATORS TO THE UNIT-UNIVARIATE CASE, REVISITED
George A. Anastassiou
Abstract. This article deals with the determination of the rate of convergence to the unit of some neural network operators, namely the "normalized bell and squashing type operators". This is given through the modulus of continuity of the involved function or its derivative, which appears in the right-hand side of the associated Jackson type inequalities.

2010 AMS Subject Classification: 41A17, 41A25, 41A30, 41A36.
Keywords and phrases: Neural network approximation; bell and squashing functions; modulus of continuity.
1. Introduction
The Cardaliaguet-Euvrard operators were first introduced and studied extensively in [2], where the authors, among many other things, proved that these operators converge uniformly on compacta to the unit over continuous and bounded functions. Our "normalized bell and squashing type operators" (1) and (18) were motivated and inspired by [2]. The work in [2] is qualitative, and the bell-shaped function used there is general. Our work, though greatly motivated by [2], is quantitative, and the bell-shaped and "squashing" functions used are of compact support. We produce a series of inequalities giving close upper bounds on the errors in approximating the unit operator by the above neural network induced operators. All constants involved are explicitly determined. These are mainly pointwise estimates involving the first modulus of continuity of the engaged continuous function or some of its derivatives. This work is a continuation and simplification of our earlier work in [1].
2. Convergence with rates of the normalized bell type neural network operators
We need the following (see [2]).
Definition 1. A function b : R → R is said to be bell-shaped if b belongs to L¹ and its integral is nonzero, if it is nondecreasing on (−∞, a) and nonincreasing on [a, +∞), where a belongs to R. In particular b(x) is a nonnegative number, and at a the function b takes a global maximum; a is the center of the bell-shaped function. A bell-shaped function is said to be centered if its center is zero. The function b(x) may have jump discontinuities. In this work we consider only centered bell-shaped functions of compact support [−T, T], T > 0.
Example 2. (1) b(x) can be the characteristic function over [−1, 1].
(2) b(x) can be the hat function over [−1, 1], i.e.,
$$b(x) = \begin{cases} 1 + x, & -1 \le x \le 0,\\ 1 - x, & 0 < x \le 1,\\ 0, & \text{elsewhere.}\end{cases}$$
Here we consider functions f : R → R that are either continuous and bounded, or
uniformly continuous.
In this article we study the pointwise convergence with rates over the real line,
to the unit operator, of the “normalized bell type neural network operators”,
$$(H_n(f))(x) := \frac{\sum_{k=-n^2}^{n^2} f\left(\frac{k}{n}\right)\, b\left(n^{1-\alpha}\left(x - \frac{k}{n}\right)\right)}{\sum_{k=-n^2}^{n^2} b\left(n^{1-\alpha}\left(x - \frac{k}{n}\right)\right)}, \tag{1}$$
where 0 < α < 1 and x ∈ R, n ∈ N. The terms in the ratio of sums (1) can be
nonzero iff
$$\left|n^{1-\alpha}\left(x - \frac{k}{n}\right)\right| \le T, \quad \text{i.e.,} \quad \left|x - \frac{k}{n}\right| \le \frac{T}{n^{1-\alpha}},$$
iff
$$nx - Tn^{\alpha} \le k \le nx + Tn^{\alpha}. \tag{2}$$
In order to have the desired order of numbers
$$-n^2 \le nx - Tn^{\alpha} \le nx + Tn^{\alpha} \le n^2, \tag{3}$$
it is sufficient to assume that
$$n \ge T + |x|. \tag{4}$$
When x ∈ [−T, T] it is enough to assume n ≥ 2T, which implies (3).
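To make the construction concrete, the following is a minimal numerical sketch of $H_n$ as defined in (1), taking the hat function of Example 2 as the bell-shaped b (so T = 1); the names H, hat and the test function sin are our illustrative choices, not part of the paper.

```python
import math

def hat(x):
    """Hat function of Example 2: centered bell-shaped, compact support [-1, 1], T = 1."""
    if -1.0 <= x <= 0.0:
        return 1.0 + x
    if 0.0 < x <= 1.0:
        return 1.0 - x
    return 0.0

def H(f, x, n, alpha=0.5):
    """Normalized bell type operator (1): a ratio of weighted sums over k = -n^2, ..., n^2."""
    num = den = 0.0
    for k in range(-n * n, n * n + 1):
        w = hat(n ** (1.0 - alpha) * (x - k / n))
        num += f(k / n) * w
        den += w
    return num / den  # den > 0 once n satisfies (5), since some k then obeys (2)

print(H(math.sin, 0.3, n=50))  # close to sin(0.3) = 0.29552...
```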
Proposition 3. (see [1]) Let a ≤ b, a, b ∈ R. Let card(k) (≥ 0) be the maximum number of integers contained in [a, b]. Then
$$\max(0, (b - a) - 1) \le \mathrm{card}(k) \le (b - a) + 1.$$
Note 4. We would like to establish a lower bound on card(k) over the interval $[nx - Tn^{\alpha}, nx + Tn^{\alpha}]$. From Proposition 3 we get that
$$\mathrm{card}(k) \ge \max(2Tn^{\alpha} - 1, 0).$$
We obtain card(k) ≥ 1, if
$$2Tn^{\alpha} - 1 \ge 1 \quad \text{iff} \quad n \ge T^{-\frac{1}{\alpha}}.$$
So to have the desired order (3) and card(k) ≥ 1 over $[nx - Tn^{\alpha}, nx + Tn^{\alpha}]$, we need to consider
$$n \ge \max\left(T + |x|,\, T^{-\frac{1}{\alpha}}\right). \tag{5}$$
Also notice that card(k) → +∞, as n → +∞.
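A quick numerical check of Note 4 (a sketch; the values of T, x, α are arbitrary test choices of ours):

```python
import math

T, x, alpha = 1.0, 0.3, 0.5
for n in (2, 10, 100, 1000):
    lo, hi = n * x - T * n ** alpha, n * x + T * n ** alpha
    card = math.floor(hi) - math.ceil(lo) + 1        # number of integers k in [lo, hi]
    print(n, card, max(2 * T * n ** alpha - 1, 0))   # card(k) dominates the bound and grows
```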
Denote by [·] the integral part of a number and by ⌈·⌉ its ceiling. Here comes our first main result.
Theorem 5. Let x ∈ R, T > 0 and n ∈ N such that $n \ge \max\left(T + |x|,\, T^{-\frac{1}{\alpha}}\right)$. Then
$$|(H_n(f))(x) - f(x)| \le \omega_1\left(f, \frac{T}{n^{1-\alpha}}\right), \tag{6}$$
where ω₁ is the first modulus of continuity of f.
Proof. Call
$$V(x) := \sum_{k=\lceil nx - Tn^{\alpha}\rceil}^{[nx + Tn^{\alpha}]} b\left(n^{1-\alpha}\left(x - \frac{k}{n}\right)\right).$$
Clearly we obtain
$$\Delta := |(H_n(f))(x) - f(x)| = \left|\frac{\sum_{k=\lceil nx - Tn^{\alpha}\rceil}^{[nx + Tn^{\alpha}]} f\left(\frac{k}{n}\right) b\left(n^{1-\alpha}\left(x - \frac{k}{n}\right)\right)}{V(x)} - f(x)\right|.$$
The last comes by the compact support [−T, T] of b and (2).
Hence it holds
$$\Delta = \left|\sum_{k=\lceil nx - Tn^{\alpha}\rceil}^{[nx + Tn^{\alpha}]} \left(f\left(\frac{k}{n}\right) - f(x)\right) \frac{b\left(n^{1-\alpha}\left(x - \frac{k}{n}\right)\right)}{V(x)}\right| \le \sum_{k=\lceil nx - Tn^{\alpha}\rceil}^{[nx + Tn^{\alpha}]} \omega_1\left(f, \left|\frac{k}{n} - x\right|\right) \frac{b\left(n^{1-\alpha}\left(x - \frac{k}{n}\right)\right)}{V(x)}.$$
Thus
$$|(H_n(f))(x) - f(x)| \le \omega_1\left(f, \frac{T}{n^{1-\alpha}}\right) \left(\frac{\sum_{k=\lceil nx - Tn^{\alpha}\rceil}^{[nx + Tn^{\alpha}]} b\left(n^{1-\alpha}\left(x - \frac{k}{n}\right)\right)}{V(x)}\right) = \omega_1\left(f, \frac{T}{n^{1-\alpha}}\right),$$
proving the claim.
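As a numerical sanity check of (6), the sketch below (our construction, with the hat function of Example 2, so T = 1) compares the actual error with the bound for f = sin, for which ω₁(f, δ) ≤ δ; the inner sum runs only over the k allowed by (2), exactly as in the proof.

```python
import math

def hat(x):
    return max(0.0, 1.0 - abs(x))  # hat function of Example 2; support [-1, 1], T = 1

def H(f, x, n, alpha, T=1.0):
    # By (2) only k in [nx - T n^alpha, nx + T n^alpha] contribute, so sum over just those.
    lo, hi = math.ceil(n * x - T * n ** alpha), math.floor(n * x + T * n ** alpha)
    num = den = 0.0
    for k in range(lo, hi + 1):
        w = hat(n ** (1 - alpha) * (x - k / n))
        num += f(k / n) * w
        den += w
    return num / den

x, alpha, T = 0.3, 0.5, 1.0
for n in (10, 100, 1000):
    err = abs(H(math.sin, x, n, alpha) - math.sin(x))
    print(n, err, T / n ** (1 - alpha))  # err <= omega_1(sin, T/n^(1-alpha)) <= T/n^(1-alpha)
```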
Our second main result follows.
Theorem 6. Let x ∈ R, T > 0 and n ∈ N such that $n \ge \max\left(T + |x|,\, T^{-\frac{1}{\alpha}}\right)$. Let f ∈ Cᴺ(R), N ∈ N, such that f⁽ᴺ⁾ is a uniformly continuous function or f⁽ᴺ⁾ is continuous and bounded. Then
$$|(H_n(f))(x) - f(x)| \le \left(\sum_{j=1}^{N} \frac{|f^{(j)}(x)|\, T^j}{n^{j(1-\alpha)}\, j!}\right) + \omega_1\left(f^{(N)}, \frac{T}{n^{1-\alpha}}\right) \cdot \frac{T^N}{N!\, n^{N(1-\alpha)}}. \tag{7}$$
Notice that as n → ∞ we have R.H.S.(7) → 0, therefore L.H.S.(7) → 0; i.e., (7) gives us with rates the pointwise convergence of (Hₙ(f))(x) → f(x), as n → +∞, x ∈ R.
Proof. Note that b here is of compact support [−T, T ] and all assumptions are
as earlier. By Taylor’s formula we have that
$$f\left(\frac{k}{n}\right) = \sum_{j=0}^{N} \frac{f^{(j)}(x)}{j!}\left(\frac{k}{n} - x\right)^j + \int_x^{\frac{k}{n}} \left(f^{(N)}(t) - f^{(N)}(x)\right) \frac{\left(\frac{k}{n} - t\right)^{N-1}}{(N-1)!}\, dt.$$
Call
$$V(x) := \sum_{k=\lceil nx - Tn^{\alpha}\rceil}^{[nx + Tn^{\alpha}]} b\left(n^{1-\alpha}\left(x - \frac{k}{n}\right)\right).$$
Hence
$$\frac{f\left(\frac{k}{n}\right) b\left(n^{1-\alpha}\left(x - \frac{k}{n}\right)\right)}{V(x)} = \sum_{j=0}^{N} \frac{f^{(j)}(x)}{j!}\left(\frac{k}{n} - x\right)^j \frac{b\left(n^{1-\alpha}\left(x - \frac{k}{n}\right)\right)}{V(x)}$$
$$+ \frac{b\left(n^{1-\alpha}\left(x - \frac{k}{n}\right)\right)}{V(x)} \int_x^{\frac{k}{n}} \left(f^{(N)}(t) - f^{(N)}(x)\right) \frac{\left(\frac{k}{n} - t\right)^{N-1}}{(N-1)!}\, dt.$$
Thus
$$(H_n(f))(x) - f(x) = \sum_{k=\lceil nx - Tn^{\alpha}\rceil}^{[nx + Tn^{\alpha}]} \frac{f\left(\frac{k}{n}\right) b\left(n^{1-\alpha}\left(x - \frac{k}{n}\right)\right)}{V(x)} - f(x)$$
$$= \sum_{j=1}^{N} \frac{f^{(j)}(x)}{j!} \left(\sum_{k=\lceil nx - Tn^{\alpha}\rceil}^{[nx + Tn^{\alpha}]} \left(\frac{k}{n} - x\right)^j \frac{b\left(n^{1-\alpha}\left(x - \frac{k}{n}\right)\right)}{V(x)}\right) + R,$$
where
$$R := \sum_{k=\lceil nx - Tn^{\alpha}\rceil}^{[nx + Tn^{\alpha}]} \frac{b\left(n^{1-\alpha}\left(x - \frac{k}{n}\right)\right)}{V(x)} \int_x^{\frac{k}{n}} \left(f^{(N)}(t) - f^{(N)}(x)\right) \frac{\left(\frac{k}{n} - t\right)^{N-1}}{(N-1)!}\, dt. \tag{8}$$
So that
$$|(H_n(f))(x) - f(x)| \le \sum_{j=1}^{N} \frac{|f^{(j)}(x)|}{j!} \left(\sum_{k=\lceil nx - Tn^{\alpha}\rceil}^{[nx + Tn^{\alpha}]} \frac{b\left(n^{1-\alpha}\left(x - \frac{k}{n}\right)\right)}{V(x)} \cdot \frac{T^j}{n^{j(1-\alpha)}}\right) + |R|.$$
And hence
$$|(H_n(f))(x) - f(x)| \le \left(\sum_{j=1}^{N} \frac{T^j\, |f^{(j)}(x)|}{n^{j(1-\alpha)}\, j!}\right) + |R|. \tag{9}$$
Next we estimate
$$|R| = \left|\sum_{k=\lceil nx - Tn^{\alpha}\rceil}^{[nx + Tn^{\alpha}]} \frac{b\left(n^{1-\alpha}\left(x - \frac{k}{n}\right)\right)}{V(x)} \int_x^{\frac{k}{n}} \left(f^{(N)}(t) - f^{(N)}(x)\right) \frac{\left(\frac{k}{n} - t\right)^{N-1}}{(N-1)!}\, dt\right|$$
$$\le \sum_{k=\lceil nx - Tn^{\alpha}\rceil}^{[nx + Tn^{\alpha}]} \frac{b\left(n^{1-\alpha}\left(x - \frac{k}{n}\right)\right)}{V(x)} \left|\int_x^{\frac{k}{n}} \left(f^{(N)}(t) - f^{(N)}(x)\right) \frac{\left(\frac{k}{n} - t\right)^{N-1}}{(N-1)!}\, dt\right|$$
$$\le \sum_{k=\lceil nx - Tn^{\alpha}\rceil}^{[nx + Tn^{\alpha}]} \frac{b\left(n^{1-\alpha}\left(x - \frac{k}{n}\right)\right)}{V(x)} \cdot \gamma \le (*), \tag{10}$$
where
$$\gamma := \left|\int_x^{\frac{k}{n}} \left|f^{(N)}(t) - f^{(N)}(x)\right| \frac{\left|\frac{k}{n} - t\right|^{N-1}}{(N-1)!}\, dt\right|,$$
and
$$(*) \le \sum_{k=\lceil nx - Tn^{\alpha}\rceil}^{[nx + Tn^{\alpha}]} \frac{b\left(n^{1-\alpha}\left(x - \frac{k}{n}\right)\right)}{V(x)} \cdot \varphi = \varphi, \tag{11}$$
where
$$\varphi := \omega_1\left(f^{(N)}, \frac{T}{n^{1-\alpha}}\right) \frac{T^N}{N!\, n^{N(1-\alpha)}}.$$
The last part of inequality (11) comes from the following:
(i) Let $x \le \frac{k}{n}$. Then
$$\gamma = \int_x^{\frac{k}{n}} \left|f^{(N)}(t) - f^{(N)}(x)\right| \frac{\left|\frac{k}{n} - t\right|^{N-1}}{(N-1)!}\, dt \le \int_x^{\frac{k}{n}} \omega_1\left(f^{(N)}, |t - x|\right) \frac{\left|\frac{k}{n} - t\right|^{N-1}}{(N-1)!}\, dt$$
$$\le \omega_1\left(f^{(N)}, \left|x - \frac{k}{n}\right|\right) \int_x^{\frac{k}{n}} \frac{\left(\frac{k}{n} - t\right)^{N-1}}{(N-1)!}\, dt \le \omega_1\left(f^{(N)}, \frac{T}{n^{1-\alpha}}\right) \frac{\left(\frac{k}{n} - x\right)^N}{N!} \le \omega_1\left(f^{(N)}, \frac{T}{n^{1-\alpha}}\right) \frac{T^N}{N!\, n^{N(1-\alpha)}}; \tag{12}$$
i.e., when $x \le \frac{k}{n}$ we get
$$\gamma \le \omega_1\left(f^{(N)}, \frac{T}{n^{1-\alpha}}\right) \frac{T^N}{N!\, n^{N(1-\alpha)}}. \tag{13}$$
(ii) Let $x \ge \frac{k}{n}$. Then
$$\gamma = \left|\int_{\frac{k}{n}}^{x} \left|f^{(N)}(t) - f^{(N)}(x)\right| \frac{\left|t - \frac{k}{n}\right|^{N-1}}{(N-1)!}\, dt\right| = \int_{\frac{k}{n}}^{x} \left|f^{(N)}(t) - f^{(N)}(x)\right| \frac{\left(t - \frac{k}{n}\right)^{N-1}}{(N-1)!}\, dt$$
$$\le \int_{\frac{k}{n}}^{x} \omega_1\left(f^{(N)}, |t - x|\right) \frac{\left(t - \frac{k}{n}\right)^{N-1}}{(N-1)!}\, dt \le \omega_1\left(f^{(N)}, \left|x - \frac{k}{n}\right|\right) \int_{\frac{k}{n}}^{x} \frac{\left(t - \frac{k}{n}\right)^{N-1}}{(N-1)!}\, dt$$
$$= \omega_1\left(f^{(N)}, \left|x - \frac{k}{n}\right|\right) \frac{\left(x - \frac{k}{n}\right)^N}{N!} \le \omega_1\left(f^{(N)}, \frac{T}{n^{1-\alpha}}\right) \frac{T^N}{N!\, n^{N(1-\alpha)}}.$$
Thus in both cases we have
$$\gamma \le \omega_1\left(f^{(N)}, \frac{T}{n^{1-\alpha}}\right) \frac{T^N}{N!\, n^{N(1-\alpha)}}. \tag{14}$$
Consequently from (11), (12) and (14) we obtain
$$|R| \le \omega_1\left(f^{(N)}, \frac{T}{n^{1-\alpha}}\right) \frac{T^N}{N!\, n^{N(1-\alpha)}}. \tag{15}$$
Finally from (15) and (9) we conclude inequality (7).
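To see what the right-hand side of (7) gives concretely, here is a small sketch with the test choices f = sin, N = 1, x = 0.3, T = 1 (ours, not from the paper); since |cos′| ≤ 1 and |cos| ≤ 1, we may use ω₁(cos, δ) ≤ min(δ, 2), so the printed value is an upper estimate of R.H.S.(7).

```python
import math

x, T, alpha, N = 0.3, 1.0, 0.5, 1
for n in (10, 100, 1000):
    d = T / n ** (1 - alpha)                      # the argument of omega_1 in (7)
    rhs = abs(math.cos(x)) * d + min(d, 2.0) * d  # |f'(x)| T/n^(1-a) + omega_1(f', d) T/(1! n^(1-a))
    print(n, rhs)  # tends to 0 like n^(-(1-alpha)); the j = 1 term dominates
```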
Corollary 7. Let b(x) be a centered bell-shaped continuous function on R of compact support [−T, T]. Let x ∈ [−T*, T*], T* > 0, and n ∈ N be such that $n \ge \max\left(T + T^*,\, T^{-\frac{1}{\alpha}}\right)$, 0 < α < 1. Consider p ≥ 1. Then
$$\|H_n(f) - f\|_{p,[-T^*,T^*]} \le \omega_1\left(f, \frac{T}{n^{1-\alpha}}\right) \cdot 2^{\frac{1}{p}} \cdot T^{*\frac{1}{p}}. \tag{16}$$
From (16) we get the Lᵖ convergence of Hₙ(f) to f with rates.
Corollary 8. Let b(x) be a centered bell-shaped continuous function on R of compact support [−T, T]. Let x ∈ [−T*, T*], T* > 0, and n ∈ N be such that $n \ge \max\left(T + T^*,\, T^{-\frac{1}{\alpha}}\right)$, 0 < α < 1. Consider p ≥ 1. Then
$$\|H_n(f) - f\|_{p,[-T^*,T^*]} \le \left(\sum_{j=1}^{N} \frac{T^j \cdot \|f^{(j)}\|_{p,[-T^*,T^*]}}{n^{j(1-\alpha)}\, j!}\right) + \omega_1\left(f^{(N)}, \frac{T}{n^{1-\alpha}}\right) \frac{2^{\frac{1}{p}}\, T^N\, T^{*\frac{1}{p}}}{N!\, n^{N(1-\alpha)}}, \tag{17}$$
where N ≥ 1.
Here from (17) we get again the Lᵖ convergence of Hₙ(f) to f with rates.
Proof. Inequality (17) follows by integration of (7) and the properties of the Lᵖ-norm.
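A rough numerical illustration of the Lᵖ error in (16)/(17) (a sketch; the Riemann-sum quadrature and the test choices f = sin, T* = 1, p = 2 and the hat-function b are ours):

```python
import math

def hat(x):
    return max(0.0, 1.0 - abs(x))  # bell-shaped b with support [-1, 1], T = 1

def H(f, x, n, alpha, T=1.0):
    lo, hi = math.ceil(n * x - T * n ** alpha), math.floor(n * x + T * n ** alpha)
    num = den = 0.0
    for k in range(lo, hi + 1):
        w = hat(n ** (1 - alpha) * (x - k / n))
        num += f(k / n) * w
        den += w
    return num / den

Tstar, p, alpha, m = 1.0, 2.0, 0.5, 400
for n in (10, 100):  # n >= max(T + T*, T^(-1/alpha)) = 2 holds
    xs = [-Tstar + 2 * Tstar * i / m for i in range(m + 1)]
    riemann = sum(abs(H(math.sin, x, n, alpha) - math.sin(x)) ** p for x in xs) * (2 * Tstar / m)
    print(n, riemann ** (1 / p))  # L^p error on [-T*, T*]; shrinks as n grows, cf. (16)
```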
2.1. The “normalized squashing type operators” and their convergence to the unit with rates
We need
Definition 9. Let S : R → R be a nonnegative function with compact support [−T, T], T > 0, which is nondecreasing there and can be continuous only on either (−∞, T] or [−T, T]. S can have jump discontinuities. We call S the "squashing function" (see also [2]).
Let f : R → R be either uniformly continuous or continuous and bounded. For
x ∈ R we define the “normalized squashing type operator”
$$(K_n(f))(x) := \frac{\sum_{k=-n^2}^{n^2} f\left(\frac{k}{n}\right) \cdot S\left(n^{1-\alpha} \cdot \left(x - \frac{k}{n}\right)\right)}{\sum_{k=-n^2}^{n^2} S\left(n^{1-\alpha} \cdot \left(x - \frac{k}{n}\right)\right)}, \tag{18}$$
where 0 < α < 1 and n ∈ N : $n \ge \max\left(T + |x|,\, T^{-\frac{1}{\alpha}}\right)$. It is clear that
$$(K_n(f))(x) = \frac{\sum_{k=\lceil nx - Tn^{\alpha}\rceil}^{[nx + Tn^{\alpha}]} f\left(\frac{k}{n}\right) \cdot S\left(n^{1-\alpha} \cdot \left(x - \frac{k}{n}\right)\right)}{W(x)},$$
where
$$W(x) := \sum_{k=\lceil nx - Tn^{\alpha}\rceil}^{[nx + Tn^{\alpha}]} S\left(n^{1-\alpha} \cdot \left(x - \frac{k}{n}\right)\right).$$
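For illustration, a minimal numerical sketch of (18) (the names S and K are ours); the ramp below satisfies Definition 9 with T = 1, being nondecreasing on [−1, 1] with a jump at x = 1.

```python
import math

def S(x):
    """A squashing function per Definition 9: support [-1, 1], nondecreasing there, jump at 1."""
    return (x + 1.0) / 2.0 if -1.0 <= x <= 1.0 else 0.0

def K(f, x, n, alpha=0.5, T=1.0):
    """Normalized squashing type operator (18), summed over the k allowed by (2)."""
    lo, hi = math.ceil(n * x - T * n ** alpha), math.floor(n * x + T * n ** alpha)
    num = den = 0.0
    for k in range(lo, hi + 1):
        w = S(n ** (1 - alpha) * (x - k / n))
        num += f(k / n) * w
        den += w
    return num / den

print(K(math.sin, 0.3, n=50))  # close to sin(0.3), in line with Theorem 10 below
```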
Here we give the pointwise convergence with rates of (Kn f ) (x) → f (x), as n →
+∞, x ∈ R.
Theorem 10. Under the above terms and assumptions we obtain
$$|K_n(f)(x) - f(x)| \le \omega_1\left(f, \frac{T}{n^{1-\alpha}}\right).$$
Proof. As in Theorem 5.
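Continuing the sketch above (reusing K and S, with f = sin so that ω₁(f, δ) ≤ δ), a quick check that the error in Theorem 10 stays under the bound:

```python
import math

x, alpha, T = 0.3, 0.5, 1.0
for n in (10, 100, 1000):
    err = abs(K(math.sin, x, n, alpha) - math.sin(x))  # K, S as defined in the sketch above
    print(n, err, T / n ** (1 - alpha))  # err <= omega_1(sin, T/n^(1-alpha)) <= T/n^(1-alpha)
```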
Theorem 11. Let x ∈ R, T > 0 and n ∈ N such that $n \ge \max\left(T + |x|,\, T^{-\frac{1}{\alpha}}\right)$. Let f ∈ Cᴺ(R), N ∈ N, such that f⁽ᴺ⁾ is a uniformly continuous function or f⁽ᴺ⁾ is continuous and bounded. Then
$$|(K_n(f))(x) - f(x)| \le \left(\sum_{j=1}^{N} \frac{|f^{(j)}(x)|\, T^j}{j!\, n^{j(1-\alpha)}}\right) + \omega_1\left(f^{(N)}, \frac{T}{n^{1-\alpha}}\right) \cdot \frac{T^N}{N!\, n^{N(1-\alpha)}}.$$
So we obtain the pointwise convergence of Kn (f ) to f with rates.
Proof. Similar to the proof of Theorem 6; it is omitted.
Note 12. The maps Hₙ, Kₙ are positive linear operators reproducing constants; in particular,
$$H_n(1) = K_n(1) = 1.$$
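Reusing the H and K sketches above, this reproduction of constants is immediate to confirm: for f ≡ 1 the numerator and denominator of (1) and (18) coincide term by term, so the ratio is exactly 1 even in floating point.

```python
one = lambda t: 1.0
print(H(one, 0.3, 50, 0.5), K(one, 0.3, 50))  # -> 1.0 1.0 (H, K from the sketches above)
```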
REFERENCES
[1] G.A. Anastassiou, Rate of convergence of some neural network operators to the unit-univariate case, J. Math. Anal. Appl. 212 (1997), 237–262.
[2] P. Cardaliaguet, G. Euvrard, Approximation of a function and its derivative with a neural network, Neural Networks 5 (1992), 207–220.
(received 23.12.2011; available online 10.09.2012)
Department of Mathematical Sciences, University of Memphis, Memphis, TN 38152, U.S.A.
E-mail: ganastss@memphis.edu