MONOTONE SOLUTIONS OF DYNAMIC SYSTEMS ON TIME SCALES
L. ERBE, A. PETERSON, AND C. C. TISDELL
Lynn Erbe and Allan Peterson
Department of Mathematics
University of Nebraska - Lincoln
Lincoln, NE 68588-0323
USA
lerbe@math.unl.edu
apeterso@math.unl.edu
Christopher C. Tisdell
School of Mathematics
The University of New South Wales
Sydney, NSW 2052
AUSTRALIA
cct@maths.unsw.edu.au
Abstract. We are concerned with proving that solutions of certain dynamical systems on time scales satisfy some monotonicity conditions. These results in turn yield important results for nth order linear scalar equations. We then give a related result for a third order nonlinear (Emden–Fowler type) dynamic equation.
Key words: time scales, dynamic equations, monotone solutions.
AMS Subject Classification: 39A10.
This paper is dedicated to the memory of Bernd Aulbach.
1. Introduction
First we give some introductory definitions and results concerning the time scale
calculus that will be used in this paper. For more detailed information see the books
[2], [3], and [8] and the papers [1], [7]. The set T is a time scale provided it is a
nonempty closed subset of the real numbers R. The forward jump operator σ and
the backward jump operator ρ are defined by
σ(t) := inf{τ > t : τ ∈ T} ∈ T,
and
ρ(t) := sup{τ < t : τ ∈ T} ∈ T,
for all t ∈ T, where inf ∅ := sup T and sup ∅ := inf T, where ∅ denotes the empty
set. We assume throughout that T has the topology that it inherits from the standard
topology on the real numbers R. If σ(t) > t, we say t is right-scattered, while if
ρ(t) < t we say t is left-scattered. If σ(t) = t and t < sup T we say t is right-dense,
while if ρ(t) = t and t > inf T we say t is left-dense. The function x : T → R is said
to be right-dense continuous (rd-continuous) and we write x ∈ Crd provided x is
continuous at each right-dense point in T and at each left-dense point in T left-hand
limits exist (finite). The function x : T → R is said to be regressive provided the
regressivity condition
1 + µ(t)x(t) ≠ 0, t ∈ T
holds. Let R denote the set of all functions x : T → R such that x is right-dense
continuous and regressive and let
R+ := {x ∈ R : 1 + µ(t)x(t) > 0, t ∈ T}.
The set R is called the set of regressive functions and the set R+ is called the set
of positively regressive functions.
Throughout this paper we make the blanket assumption that a < b are points in T.
We define the time scale interval
[a, b]T := {t ∈ T such that a ≤ t ≤ b}
and other types of time scale intervals are defined similarly.
Time scale calculus unifies continuous and discrete calculus and is much more general, as T can be any nonempty closed subset of the reals R. For example, it includes quantum calculus [5], which is time scale calculus on the time scales
q^Z ∪ {0} := {0, 1, q^{±1}, q^{±2}, q^{±3}, · · · },  q > 0,  q ≠ 1,
and hZ := {0, ±h, ±2h, ±3h, · · · }.
Definition 1. Assume x : T → R and fix t ∈ T^κ; then we define x^∆(t) to be the number (provided it exists) with the property that given any ε > 0, there is a neighborhood U of t such that
|[x(σ(t)) − x(s)] − x^∆(t)[σ(t) − s]| ≤ ε|σ(t) − s|,
for all s ∈ U. We call x^∆(t) the (delta) derivative of x at t.
We write x ∈ C^n_rd if the n-th delta derivative function, denoted by x^{∆^n}, is rd-continuous.
The following theorem concerning (delta) differentiation is due to Hilger [7]. See also
[2, Theorem 1.16].
Theorem 1. Assume that g : T → R^n and let t ∈ T^κ.
(i) If g is differentiable at t, then g is continuous at t.
(ii) If g is continuous at t and t is right-scattered, then g is differentiable at t with
g^∆(t) = (g(σ(t)) − g(t))/(σ(t) − t).
(iii) If g is differentiable at t and t is right-dense, then
g^∆(t) = lim_{s→t} (g(t) − g(s))/(t − s).
(iv) If g is differentiable at t, then
(1)  g(σ(t)) = g(t) + µ(t)g^∆(t).
Note that
g^∆(t) = g′(t) if T = R,  and  g^∆(t) = ∆g(t) := g(t + 1) − g(t) if T = Z,
where ∆ is the forward difference operator. If T = q^{N_0} := {1, q, q², q³, · · · }, q > 1, then one gets the so-called q derivative (quantum derivative) [5]
g^∆(t) = D_q g(t) := (g(qt) − g(t))/((q − 1)t).
See [5] for some important applications of this quantum derivative. This q derivative is
called the Hahn derivative in orthogonal polynomial theory (where it is usually assumed
that 0 < q < 1 with a related time scale).
2. Main Results
Our first main result concerns the first-order linear vector dynamic equation
(2)  x^∆ = A(t)x^σ,
where x^σ denotes the composite function x ◦ σ.
Let us recall some notation. We write A(t) ≤ 0 provided each element aij (t) of A(t)
satisfies aij (t) ≤ 0. We say A is right-dense continuous on T provided each element
of A is right-dense continuous on T. Finally we say A is a regressive matrix function
on T provided I + µ(t)A(t) is invertible for t ∈ T. In the proof of the next theorem we will use the fact [2, Chapter 5] that if the n × n matrix function A is regressive and right-dense continuous on T, t0 ∈ T^κ, and x0 ∈ R^n, then the IVP
x∆ = A(t)xσ ,
x(t0 ) = x0
has a unique solution defined on all of T. Throughout the remainder of the paper we
assume ω := sup T = ∞ or that ω ∈ T is left-dense and we will be concerned with
the behavior of solutions on [a, ω)T . If ω < ∞, then we do not assume that the matrix
function A is defined at ω.
Theorem 2. Assume that the n × n matrix function A is regressive and right-dense
continuous on [a, ω)T , with A(t) ≤ 0 on T. Then the linear dynamic system (2) has a
nontrivial solution x satisfying
x(t) ≥ 0,  x^∆(t) ≤ 0,  t ∈ [a, ω)_T.
Proof. Assume τ ∈ (a, ω)T , y0 ∈ Rn with y0 > 0, and let y(t, τ ) be the solution of the
IVP
y ∆ = A(t)y σ , y(τ ) = y0 .
We claim that y(t, τ) > 0 on [a, τ]_T. Assume not; then there is a t1 ∈ [a, τ)_T such that
either σ(t1 ) = t1 , y(t, τ ) > 0 on (t1 , τ ]T and at least one component of y(t, τ ) is zero
at t1 or σ(t1 ) > t1 , y(t, τ ) > 0 on [σ(t1 ), τ ]T , and at least one component of y(t, τ ) is
nonpositive at t1 . In either case
y ∆ (t, τ ) = A(t)y σ (t, τ ) ≤ 0
for t ∈ [t1 , τ )T . It follows from this that
y(t1 , τ ) ≥ y(τ, τ ) = y0 > 0,
which is a contradiction. Hence y(t, τ) > 0 on [a, τ]_T for each τ ∈ (a, ω)_T. Let {τ_n}_{n=1}^∞ ⊂ (a, ω)_T with lim_{n→∞} τ_n = ω and let
x_n(t) := y(t, τ_n)/‖y(a, τ_n)‖,  t ∈ [a, ω)_T,  n ≥ 1.
Then for each n ≥ 1, x_n is a solution of (2) with ‖x_n(a)‖ = 1. It follows that there is a subsequence {x_{n_k}(a)}_{k=1}^∞ such that
lim_{k→∞} x_{n_k}(a) = x_0,  where ‖x_0‖ = 1.
Let x be the solution of the limit IVP
x^∆ = A(t)x^σ,  x(a) = x_0.
Then
x(t) = lim_{k→∞} x_{n_k}(t) ≥ 0,  t ∈ [a, ω)_T,
and so it follows that
x^∆(t) = A(t)x^σ(t) ≤ 0,  t ∈ [a, ω)_T.
We next give the corresponding result for an alternative form of a first order linear system,
(3)  x^∆ = B(t)x.
Corollary 3. Assume that B is a regressive and right-dense continuous matrix function on [a, ω)_T, and that (I + µ(t)B(t))^{−1}B(t) ≤ 0 (or B(t)(I + µ(t)B(t))^{−1} ≤ 0) on [a, ω)_T. Then the linear dynamic system (3) has a nontrivial solution x satisfying
x(t) ≥ 0,  x^∆(t) ≤ 0,  t ∈ [a, ω)_T.
Proof. Using x^σ(t) = x(t) + µ(t)x^∆(t) (see part (iv) of Theorem 1) it is easy to see that the vector dynamic equation (3) is equivalent to the vector dynamic equation
x^∆ = (I + µ(t)B(t))^{−1}B(t)x^σ.
Also it is easy to verify that
(I + µ(t)B(t))^{−1}B(t) = B(t)(I + µ(t)B(t))^{−1}.
This corollary then follows from Theorem 2.
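The commutation identity used above holds because B commutes with I + µB and hence with its inverse; it can be spot-checked numerically (a sketch, ours; B and µ below are arbitrary choices making I + µB invertible):

```python
# Quick check (ours) of the identity (I + mu*B)^{-1} B = B (I + mu*B)^{-1}
# used in the proof of Corollary 3.

import numpy as np

B = np.array([[-1.0, -2.0],
              [0.0, -3.0]])
mu = 0.5                                  # I + mu*B is invertible for this choice
M = np.linalg.inv(np.eye(2) + mu * B)

# B commutes with I + mu*B, hence with its inverse M.
assert np.allclose(M @ B, B @ M)
```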
We now can use Theorem 2 to prove the analogous result (Theorem 4) for the nth order scalar linear dynamic equation
(4)  u^{∆^n} + p_{n−1}(t)u^{∆^{n−1}σ} + p_{n−2}(t)u^{∆^{n−2}σ} + · · · + p_0(t)u^σ = 0.
We say that equation (4) is regressive on [a, ω)_T in case p_{n−1} ∈ R([a, ω)_T) and p_i ∈ C_rd([a, ω)_T), 0 ≤ i ≤ n − 1. Under these conditions all initial value problems for
(4) have unique solutions that exist on [a, ω)T (see [2, Section 5.5]).
Theorem 4. Assume (4) is regressive and that the coefficient functions p_i in (4) satisfy (−1)^{n+i} p_{i−1}(t) ≥ 0 on [a, ω)_T, 1 ≤ i ≤ n. Then (4) has a solution satisfying
(5)  u(t) > 0,  (−1)^i u^{∆^i}(t) ≥ 0,  1 ≤ i ≤ n,  t ∈ [a, ω)_T.
Proof. Let u be a solution of (4) on [a, ω)_T and set
\[
x(t) = D\begin{bmatrix} u(t)\\ u^{\Delta}(t)\\ \vdots\\ u^{\Delta^{n-1}}(t)\end{bmatrix},\qquad t \in [a, \omega)_{\mathbb T},
\]
where D is the diagonal matrix
(6)  D := diag{1, −1, 1, · · · , (−1)^{n−1}}.
Then
\[
x^{\Delta}(t) = D\begin{bmatrix} u^{\Delta}(t)\\ u^{\Delta^2}(t)\\ \vdots\\ u^{\Delta^n}(t)\end{bmatrix},\qquad t \in [a, \omega)_{\mathbb T}.
\]
Using the formula (1) we get that
(7)  u^{∆^i σ}(t) = u^{∆^i}(t) + µ(t)u^{∆^{i+1}}(t),  1 ≤ i ≤ n − 1,
and since u is a solution of (4) we have
(8)  u^{∆^n}(t) = −p_0(t)u^σ(t) − p_1(t)u^{∆σ}(t) − · · · − p_{n−1}(t)u^{∆^{n−1}σ}(t).
The equations (7) and (8) can be written in the vector form
(9)
\[
\begin{bmatrix}
0 & 1 & 0 & \cdots & 0\\
\vdots & \ddots & \ddots & \ddots & \vdots\\
0 & \cdots & \cdots & 0 & 1\\
-p_0 & -p_1 & -p_2 & \cdots & -p_{n-1}
\end{bmatrix}
\begin{bmatrix} u^{\sigma}\\ u^{\Delta\sigma}\\ \vdots\\ u^{\Delta^{n-1}\sigma}\end{bmatrix}
=
\begin{bmatrix}
1 & \mu & 0 & \cdots & 0\\
0 & 1 & \mu & \ddots & \vdots\\
\vdots & \ddots & \ddots & \ddots & 0\\
0 & \cdots & 0 & 1 & \mu\\
0 & \cdots & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} u^{\Delta}\\ u^{\Delta^2}\\ \vdots\\ u^{\Delta^n}\end{bmatrix}.
\]
It follows, after a simple calculation of the inverse of the matrix appearing on the right hand side of (9), that
\[
\begin{bmatrix} u^{\Delta}\\ u^{\Delta^2}\\ \vdots\\ u^{\Delta^n}\end{bmatrix}
=
\begin{bmatrix}
1 & -\mu & \mu^2 & \cdots & (-1)^{n-1}\mu^{n-1}\\
0 & 1 & -\mu & \ddots & (-1)^{n-2}\mu^{n-2}\\
\vdots & & \ddots & \ddots & \vdots\\
0 & \cdots & 0 & 1 & -\mu\\
0 & \cdots & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
0 & 1 & 0 & \cdots & 0\\
\vdots & \ddots & \ddots & \ddots & \vdots\\
0 & \cdots & \cdots & 0 & 1\\
-p_0 & -p_1 & -p_2 & \cdots & -p_{n-1}
\end{bmatrix}
\begin{bmatrix} u^{\sigma}\\ u^{\Delta\sigma}\\ \vdots\\ u^{\Delta^{n-1}\sigma}\end{bmatrix}.
\]
Hence
\[
x^{\Delta} = D\begin{bmatrix} u^{\Delta}\\ u^{\Delta^2}\\ \vdots\\ u^{\Delta^n}\end{bmatrix}
= BC \cdot D\begin{bmatrix} u^{\sigma}\\ u^{\Delta\sigma}\\ \vdots\\ u^{\Delta^{n-1}\sigma}\end{bmatrix}
= BCx^{\sigma},
\]
where
\[
B = D\begin{bmatrix}
1 & -\mu & \mu^2 & \cdots & (-1)^{n-1}\mu^{n-1}\\
0 & 1 & -\mu & \ddots & (-1)^{n-2}\mu^{n-2}\\
\vdots & & \ddots & \ddots & \vdots\\
0 & \cdots & 0 & 1 & -\mu\\
0 & \cdots & 0 & 0 & 1
\end{bmatrix}
=
\begin{bmatrix}
1 & -\mu & \mu^2 & \cdots & (-1)^{n-1}\mu^{n-1}\\
0 & -1 & \mu & \cdots & (-1)^{n-1}\mu^{n-2}\\
\vdots & & \ddots & \ddots & \vdots\\
0 & \cdots & 0 & (-1)^{n-2} & (-1)^{n-1}\mu\\
0 & \cdots & 0 & 0 & (-1)^{n-1}
\end{bmatrix}
\]
and
\[
C = \begin{bmatrix}
0 & 1 & 0 & \cdots & 0\\
\vdots & \ddots & \ddots & \ddots & \vdots\\
0 & \cdots & \cdots & 0 & 1\\
-p_0 & -p_1 & -p_2 & \cdots & -p_{n-1}
\end{bmatrix} D
=
\begin{bmatrix}
0 & -1 & 0 & \cdots & 0\\
0 & 0 & 1 & \ddots & \vdots\\
\vdots & & \ddots & \ddots & 0\\
0 & \cdots & \cdots & 0 & (-1)^{n-1}\\
-p_0 & p_1 & -p_2 & \cdots & (-1)^n p_{n-1}
\end{bmatrix}.
\]
Since the sign of every element in the ith column of B(t) is (−1)^{i−1}, and since, by the sign assumptions on the coefficient functions p_i, the sign of every element in the ith row of C(t) is (−1)^i, it follows that
A(t) := B(t)C(t) ≤ 0
on [a, ω)_T. Therefore, from Theorem 2 there is a nontrivial solution u of (4) on [a, ω)_T satisfying


\[
x(t) = D\begin{bmatrix} u(t)\\ u^{\Delta}(t)\\ \vdots\\ u^{\Delta^{n-1}}(t)\end{bmatrix} \ge 0,\qquad t \in [a, \omega)_{\mathbb T},
\]
and
\[
x^{\Delta}(t) = D\begin{bmatrix} u^{\Delta}(t)\\ u^{\Delta^2}(t)\\ \vdots\\ u^{\Delta^n}(t)\end{bmatrix} \le 0,\qquad t \in [a, \omega)_{\mathbb T}.
\]
It follows that u satisfies (5).
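As a sanity check (ours, not from the paper), one can build B(t) and C(t) explicitly for n = 3 and confirm that BC ≤ 0 entrywise under the sign hypotheses of Theorem 4; µ and the p_i below are arbitrary admissible values:

```python
# Sign argument of Theorem 4 for n = 3 (ours): with (-1)^{n+i} p_{i-1} >= 0,
# i.e. p0 >= 0, p1 <= 0, p2 >= 0, the product A = B C is entrywise <= 0,
# so Theorem 2 applies to x^Delta = A x^sigma.

import numpy as np

mu = 0.5
p0, p1, p2 = 0.4, -0.3, 0.2

# B = D S^{-1}: every element of column i has sign (-1)^{i-1}.
B = np.array([[1.0, -mu, mu ** 2],
              [0.0, -1.0, mu],
              [0.0, 0.0, 1.0]])
# C = (companion matrix) D: every element of row i has sign (-1)^i.
C = np.array([[0.0, -1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-p0, p1, -p2]])

A = B @ C
assert (A <= 0).all()
```

Each entry of BC is a sum of products with signs (−1)^{k−1}·(−1)^k = −1, which is exactly the column/row sign bookkeeping in the proof.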
A second important form of an nth order linear scalar equation (see [2, Section 5.5]) is
(10)  u^{∆^n} + q_{n−1}(t)u^{∆^{n−1}} + · · · + q_0(t)u = 0.
We say the dynamic equation (10) is regressive provided the coefficient functions q_i(t), 0 ≤ i ≤ n − 1, are rd-continuous on T and the regressivity condition
R(t) := 1 + ∑_{j=0}^{n−1} (−µ(t))^{n−j} q_j(t) ≠ 0,  t ∈ T,
holds. It follows ([2, Corollary 5.90]) that if the dynamic equation (10) is regressive on [a, ω)_T, then every initial value problem has a unique solution and all solutions exist on [a, ω)_T.
Corollary 5. Assume that the dynamic equation (10) is regressive and
(11)  R(t) ∑_{j=0}^{i−1} (−1)^{n−j−1} µ^{i−j−1}(t) q_j(t) ≥ 0,
for t ∈ [a, ω)_T, 1 ≤ i ≤ n. Then the dynamic equation (10) has a nontrivial solution u satisfying (5).
Proof. This follows from the fact (see [2, Theorem 5.99]) that the dynamic equations (4) and (10) are equivalent if
p_i(t) := (1/R(t)) ∑_{j=0}^{i} (−µ(t))^{i−j} q_j(t),  0 ≤ i ≤ n − 1.
Hence we have from (11) that
(−1)^{n+i} p_{i−1}(t) = (1/R(t)) ∑_{j=0}^{i−1} (−1)^{n−j−1} µ^{i−j−1}(t) q_j(t) ≥ 0
for t ∈ [a, ω)_T, 1 ≤ i ≤ n. The result then follows from Theorem 4.
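The sign identity in the proof is pure algebra and can be checked numerically for concrete data (a sketch, ours; n, µ, R and the q_j below are arbitrary choices with R ≠ 0):

```python
# Numerical check (ours) that (-1)^{n+i} p_{i-1}
#   = (1/R) * sum_{j=0}^{i-1} (-1)^{n-j-1} mu^{i-j-1} q_j,
# with p_i defined as in the proof of Corollary 5.

n = 4
mu, R = 0.7, 1.3                  # arbitrary, with R != 0
q = [0.5, -0.2, 0.9, -1.1]        # arbitrary q_0, ..., q_{n-1}

def p(i):
    # p_i from the equivalence of (4) and (10) ([2, Theorem 5.99])
    return sum((-mu) ** (i - j) * q[j] for j in range(i + 1)) / R

for i in range(1, n + 1):
    lhs = (-1) ** (n + i) * p(i - 1)
    rhs = sum((-1) ** (n - j - 1) * mu ** (i - j - 1) * q[j]
              for j in range(i)) / R
    assert abs(lhs - rhs) < 1e-12
```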
In the next theorem we see that we can relax the sign condition on the coefficient
function pn−1 in Theorem 4 and get a slightly different conclusion. In Theorem 6 we
consider the generalized exponential function eq (t, t0 ) for q ∈ R. See [2, Section 2.2]
for an elementary development of this generalized exponential function.
Theorem 6. Assume p_{n−1} ∈ R^+ and that the coefficient functions p_i satisfy
(−1)^{n+i} p_{i−1}(t) ≥ 0 on [a, ω)_T,  1 ≤ i ≤ n − 1.
Then the dynamic equation (4) has a solution u satisfying
(12)  u(t) > 0,  (−1)^i u^{∆^i}(t) ≥ 0, 1 ≤ i ≤ n − 1,  (−1)^n (p u^{∆^{n−1}})^∆(t) ≥ 0,
for t ∈ [a, ω)_T, where p(t) := e_{p_{n−1}}(t, a).
Proof. Since p_{n−1} ∈ R^+, we have (by [2, Theorem 2.48]) that
p(t) := e_{p_{n−1}}(t, a) > 0
for all t ∈ T. Letting u be a solution of (4) and multiplying both sides of (4) by the integrating factor p(t), we get that u is a solution of
(13)  (p u^{∆^{n−1}})^∆ + q_{n−2}(t)u^{∆^{n−2}σ} + · · · + q_0(t)u^σ = 0,
where q_i(t) := p(t)p_i(t), 0 ≤ i ≤ n − 2. Note that
(14)  (−1)^{n+i} q_{i−1}(t) = p(t)(−1)^{n+i} p_{i−1}(t) ≥ 0,  t ∈ [a, ω)_T,  1 ≤ i ≤ n − 1.
Let u be a solution of (4); then u is a solution of (13). Setting
\[
x(t) = D\begin{bmatrix} u(t)\\ u^{\Delta}(t)\\ \vdots\\ u^{\Delta^{n-2}}(t)\\ (p u^{\Delta^{n-1}})(t)\end{bmatrix},\qquad t \in [a, \omega)_{\mathbb T},
\]
where D is given by (6), we get, using an argument very similar to that in the proof of Theorem 4, that x solves a system of the form
x^∆ = A(t)x^σ,  where A(t) = B(t)C(t),
and where in this case (suppressing arguments)
\[
B = \begin{bmatrix}
1 & -\mu & \mu^2 & \cdots & (-1)^{n-3}\mu^{n-3} & (-1)^{n-2}\tfrac{\mu^{n-2}}{p} & (-1)^{n-1}\tfrac{\mu^{n-1}}{p}\\
0 & -1 & \mu & \cdots & (-1)^{n-3}\mu^{n-4} & (-1)^{n-2}\tfrac{\mu^{n-3}}{p} & (-1)^{n-1}\tfrac{\mu^{n-2}}{p}\\
\vdots & & \ddots & & \vdots & \vdots & \vdots\\
0 & 0 & \cdots & & (-1)^{n-3} & (-1)^{n-2}\tfrac{\mu}{p} & (-1)^{n-1}\tfrac{\mu^2}{p}\\
0 & 0 & \cdots & & 0 & (-1)^{n-2}\tfrac{1}{p} & (-1)^{n-1}\tfrac{\mu}{p}\\
0 & 0 & \cdots & & 0 & 0 & (-1)^{n-1}
\end{bmatrix}
\]
and
\[
C = \begin{bmatrix}
0 & -1 & 0 & \cdots & \cdots & 0\\
0 & 0 & 1 & \ddots & & \vdots\\
\vdots & & & \ddots & \ddots & \vdots\\
0 & \cdots & \cdots & \cdots & 0 & (-1)^{n-1}\\
-q_0 & q_1 & -q_2 & \cdots & (-1)^{n-1}q_{n-2} & 0
\end{bmatrix}.
\]
Using the sign conditions (14) on the coefficient functions q_i(t), the rest of the proof is similar to the end of the proof of Theorem 4.
We can now slightly improve Corollary 5.
Corollary 7. Assume that the dynamic equation (10) is regressive and q ∈ R^+, where
q(t) := (1/R(t)) ∑_{j=0}^{n−1} (−µ(t))^{n−1−j} q_j(t).
Further assume that
(15)  R(t) ∑_{j=0}^{i−1} (−1)^{n−j−1} µ^{i−j−1}(t) q_j(t) ≥ 0,
for t ∈ [a, ω)_T, 1 ≤ i ≤ n − 1. Then the dynamic equation (10) has a nontrivial solution u satisfying
u(t) > 0,  (−1)^i u^{∆^i}(t) ≥ 0,  t ∈ [a, ω)_T,  1 ≤ i ≤ n − 1,
and
(−1)^n (e_q(t, a) u^{∆^{n−1}}(t))^∆ ≥ 0,  t ∈ [a, ω)_T.
3. A Third Order Nonlinear Dynamic Equation
In this section we will be concerned with the third order nonlinear dynamic equation
(16)  x^{∆∆∆} + p(t)x^{∆σ} + r(t)x^{γσ} = 0,
where γ is a quotient of odd integers and p, r are rd-continuous functions on [a, ω)_T. This may be considered as an analogue of the third order Emden–Fowler equation.
These results are related to some results of Erbe [4] dealing with monotonicity properties of a third order nonlinear differential equation. To help us prove our main result
concerning the dynamic equation (16) we first prove two important lemmas.
Lemma 8. If x is a solution of (16) and b ∈ [a, ω)_T, then
(17)  ∫_a^b r(t)(x^γ(t)x^∆(t))^σ ∆t = ∫_a^b [(x^{∆∆}(t))² − p(t)(x^{∆σ}(t))²] ∆t − [x^∆(t)x^{∆∆}(t)]_a^b.
Proof. Assume x is a solution of (16); then multiplying both sides of equation (16) by x^{∆σ}(t) and integrating from a to b we get
∫_a^b x^{∆σ}(t)x^{∆∆∆}(t) ∆t + ∫_a^b p(t)(x^{∆σ}(t))² ∆t + ∫_a^b r(t)(x^γ(t)x^∆(t))^σ ∆t = 0.
After an integration by parts on the first term one easily gets the desired result (17).
In connection with the third order dynamic equation (16), we will be concerned with the second order dynamic equation
(18)  y^{∆∆} + p(t)y^σ = 0.
Definition. We say that (18) is right-disfocal on [a, ω)_T provided that for each s ∈ T, if y is a solution of (18) with y(s) = 0 and y^∆(s) > 0, then y^∆(t) > 0 on (s, ω)_T.
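The identity (17) of Lemma 8 can be checked directly on T = Z, where the integrals become finite sums and a solution of (16) is generated by the recursion the equation induces; a sketch (ours; the data p, r, γ and the initial values below are arbitrary):

```python
# Check (ours) of identity (17) on T = Z: integrals are sums over t = a..b-1,
# x^Delta(t) = x(t+1) - x(t), and [f]_a^b = f(b) - f(a).

a, b, gamma = 0, 6, 3
p = lambda t: 0.1 * (-1) ** t        # arbitrary data on Z
r = lambda t: -0.02 * t

x = {0: 0.0, 1: 0.2, 2: 0.5}         # arbitrary initial values
for t in range(a, b):
    # (16) on Z: x^{DDD}(t) = -p(t) x^{D sigma}(t) - r(t) x^{gamma sigma}(t)
    x[t + 3] = (3 * x[t + 2] - 3 * x[t + 1] + x[t]
                - p(t) * (x[t + 2] - x[t + 1])
                - r(t) * x[t + 1] ** gamma)

d1 = lambda t: x[t + 1] - x[t]                   # x^Delta
d2 = lambda t: x[t + 2] - 2 * x[t + 1] + x[t]    # x^{Delta Delta}

lhs = sum(r(t) * x[t + 1] ** gamma * d1(t + 1) for t in range(a, b))
rhs = (sum(d2(t) ** 2 - p(t) * d1(t + 1) ** 2 for t in range(a, b))
       - (d1(b) * d2(b) - d1(a) * d2(a)))
assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(rhs))
```

The only tool behind (17) is the time-scales integration by parts formula, which on Z is Abel summation, so the two sides agree up to rounding.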
Lemma 9. Assume v(t) > 0 with v ∈ C²_rd and assume y ∈ C¹_rd. Then
(19)  ∫_a^b [(y^∆(t))² + (v^{∆∆}(t)/v^σ(t))(y^σ(t))²] ∆t = ∫_a^b F²(t) ∆t + [y²(t)v^∆(t)/v(t)]_a^b,
where F(t) := y^∆(t)√(v(t)/v^σ(t)) − y(t)v^∆(t)/√(v(t)v^σ(t)).
Proof. Consider (here we suppress arguments)
\begin{align*}
\int_a^b \Big[(y^{\Delta})^2 &+ \frac{v^{\Delta\Delta}}{v^{\sigma}}(y^{\sigma})^2\Big]\Delta t\\
&= \int_a^b \Big[(y^{\Delta})^2 + v^{\Delta\Delta}\frac{(y^2)^{\sigma}}{v^{\sigma}}\Big]\Delta t\\
&= \int_a^b \Big[(y^{\Delta})^2 - v^{\Delta}\Big(\frac{y^2}{v}\Big)^{\Delta}\Big]\Delta t + \Big[\frac{y^2 v^{\Delta}}{v}\Big]_a^b && \text{(integrating by parts)}\\
&= \int_a^b \Big[(y^{\Delta})^2 - v^{\Delta}\,\frac{v(y^2)^{\Delta} - y^2 v^{\Delta}}{v v^{\sigma}}\Big]\Delta t + \Big[\frac{y^2 v^{\Delta}}{v}\Big]_a^b && \text{(quotient rule)}\\
&= \int_a^b \Big[(y^{\Delta})^2 - v^{\Delta}\,\frac{2vyy^{\Delta} + \mu v(y^{\Delta})^2 - y^2 v^{\Delta}}{v v^{\sigma}}\Big]\Delta t + \Big[\frac{y^2 v^{\Delta}}{v}\Big]_a^b && \text{(product rule)}\\
&= \int_a^b \Big[(y^{\Delta})^2 - \frac{2yy^{\Delta}v^{\Delta}}{v^{\sigma}} - \frac{\mu(y^{\Delta})^2 v^{\Delta}}{v^{\sigma}} + \frac{y^2(v^{\Delta})^2}{v v^{\sigma}}\Big]\Delta t + \Big[\frac{y^2 v^{\Delta}}{v}\Big]_a^b\\
&= \int_a^b \Big[\frac{(y^{\Delta})^2}{v^{\sigma}}\,(v^{\sigma} - \mu v^{\Delta}) - \frac{2yy^{\Delta}v^{\Delta}}{v^{\sigma}} + \frac{y^2(v^{\Delta})^2}{v v^{\sigma}}\Big]\Delta t + \Big[\frac{y^2 v^{\Delta}}{v}\Big]_a^b\\
&= \int_a^b \Big[\frac{v}{v^{\sigma}}\,(y^{\Delta})^2 - \frac{2yy^{\Delta}v^{\Delta}}{v^{\sigma}} + \frac{y^2(v^{\Delta})^2}{v v^{\sigma}}\Big]\Delta t + \Big[\frac{y^2 v^{\Delta}}{v}\Big]_a^b\\
&= \int_a^b \Big[y^{\Delta}\sqrt{\frac{v}{v^{\sigma}}} - \frac{y v^{\Delta}}{\sqrt{v v^{\sigma}}}\Big]^2 \Delta t + \Big[\frac{y^2 v^{\Delta}}{v}\Big]_a^b\\
&= \int_a^b F^2\,\Delta t + \Big[\frac{y^2 v^{\Delta}}{v}\Big]_a^b.
\end{align*}
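On T = Z, where µ ≡ 1 and integrals are finite sums, the identity (19) can be verified directly for arbitrary y and positive v; a sketch (ours, with arbitrarily chosen values):

```python
# Check (ours) of the Lemma 9 identity (19) on T = Z:
# sigma(t) = t + 1, mu = 1, integrals are sums over t = a..b-1.

import math

a, b = 0, 7
y = [0.3, -1.0, 2.0, 0.5, -0.7, 1.2, 0.1, 0.9, -0.4]   # arbitrary
v = [1.0, 2.5, 0.7, 1.3, 3.0, 0.4, 1.1, 2.2, 0.9]      # arbitrary, v(t) > 0

dy = lambda t: y[t + 1] - y[t]
dv = lambda t: v[t + 1] - v[t]
ddv = lambda t: v[t + 2] - 2 * v[t + 1] + v[t]

def F(t):
    # F = y^Delta sqrt(v/v^sigma) - y v^Delta / sqrt(v v^sigma)
    return (dy(t) * math.sqrt(v[t] / v[t + 1])
            - y[t] * dv(t) / math.sqrt(v[t] * v[t + 1]))

lhs = sum(dy(t) ** 2 + ddv(t) / v[t + 1] * y[t + 1] ** 2 for t in range(a, b))
rhs = (sum(F(t) ** 2 for t in range(a, b))
       + y[b] ** 2 * dv(b) / v[b] - y[a] ** 2 * dv(a) / v[a])
assert abs(lhs - rhs) < 1e-8
```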
With the aid of Lemmas 8 and 9 we can now easily prove the following theorem.
Theorem 10. If (18) is right-disfocal on [a, ∞)_T and r(t) ≤ 0 on [a, ∞)_T and not identically zero on any nondegenerate time scale subinterval of [a, ∞)_T, then (16) has a solution x satisfying
x(t) > 0,  x^∆(t) > 0,  x^{∆∆}(t) > 0  on (σ(a), ∞)_T.
Proof. Let x be a solution of (16) satisfying
x(a) = x^∆(a) = 0,  x^{∆∆}(a) > 0.
The claim is that x(t) > 0, x^∆(t) > 0, and x^{∆∆}(t) > 0 on (σ(a), ∞)_T. Assume this is not the case. Then there is a first b ∈ (σ(a), ω)_T such that x^{∆∆}(b) ≤ 0. From Lemma 8, using x^∆(a) = 0, we get
∫_a^b r(t)(x^γ(t)x^∆(t))^σ ∆t = ∫_a^b [(x^{∆∆}(t))² − p(t)(x^{∆σ}(t))²] ∆t − [x^∆(t)x^{∆∆}(t)]_a^b
= ∫_a^b [(x^{∆∆}(t))² − p(t)(x^{∆σ}(t))²] ∆t − x^∆(b)x^{∆∆}(b).
Since (18) is right-disfocal, it is easy to see that there is a solution v of (18) satisfying
v(t) > 0,  v^∆(t) > 0,  t ∈ [a, b]_T.
Then by Lemma 9, with y(t) := x^∆(t), we have that
\begin{align*}
\int_a^b r(t)(x^{\gamma}(t)x^{\Delta}(t))^{\sigma}\,\Delta t
&= \int_a^b [(x^{\Delta\Delta}(t))^2 - p(t)(x^{\Delta\sigma}(t))^2]\,\Delta t - x^{\Delta}(b)x^{\Delta\Delta}(b)\\
&= \int_a^b \Big[(y^{\Delta}(t))^2 + \frac{v^{\Delta\Delta}(t)}{v^{\sigma}(t)}(y^{\sigma}(t))^2\Big]\Delta t - x^{\Delta}(b)x^{\Delta\Delta}(b)\\
&= \int_a^b F^2(t)\,\Delta t + \Big[\frac{y^2 v^{\Delta}}{v}\Big]_a^b - x^{\Delta}(b)x^{\Delta\Delta}(b)\\
&\ge \Big[\frac{(x^{\Delta})^2 v^{\Delta}}{v}\Big]_a^b - x^{\Delta}(b)x^{\Delta\Delta}(b)\\
&= \frac{(x^{\Delta}(b))^2}{v(b)}\,v^{\Delta}(b) - x^{\Delta}(b)x^{\Delta\Delta}(b),
\end{align*}
where in the last step we used x^∆(a) = 0.
Since the left-hand side is strictly negative, to get a contradiction it suffices to show that the right-hand side satisfies
D := (x^∆(b))²v^∆(b)/v(b) − x^∆(b)x^{∆∆}(b) ≥ 0.
We know that x^{∆∆}(b) ≤ 0. If x^{∆∆}(b) = 0, then
D = (x^∆(b))²v^∆(b)/v(b) ≥ 0.
Next assume that x^{∆∆}(b) < 0. In this case ρ(b) < b, and since x^{∆∆}(ρ(b)) > 0 implies x^∆(b) > 0, we get that
D = (x^∆(b))²v^∆(b)/v(b) − x^∆(b)x^{∆∆}(b) ≥ 0,
and this is the desired contradiction.
Acknowledgement: This research was supported by the Australian Research
Council’s Discovery Project DP0450752.
References
[1] R. Agarwal and M. Bohner, Basic calculus on time scales and some of its applications, Results Math., 35 (1999), 3–22.
[2] M. Bohner and A. Peterson, Dynamic Equations on Time Scales: An Introduction with Applications, Birkhäuser, Boston, 2001.
[3] M. Bohner and A. Peterson, editors, Advances in Dynamic Equations on Time Scales, Birkhäuser, Boston, 2003.
[4] L. Erbe, Oscillation, nonoscillation, and asymptotic behavior for third order nonlinear differential equations, Annali di Mat. Pura ed Applicata, 100 (1976), 373–391.
[5] V. Kac and P. Cheung, Quantum Calculus, Universitext, Springer, New York, 2002.
[6] P. Hartman, Ordinary Differential Equations, Wiley, New York, 1964.
[7] S. Hilger, Analysis on measure chains: a unified approach to continuous and discrete calculus, Results Math., 18 (1990), 18–56.
[8] B. Kaymakçalan, V. Lakshmikantham, and S. Sivasundaram, Dynamical Systems on Measure Chains, Kluwer Academic Publishers, Boston, 1996.