Green's Functions for Elliptic and Parabolic Equations with Random Coefficients

New York Journal of Mathematics
New York J. Math. 6 (2000) 153–225.
Green's Functions for Elliptic and Parabolic Equations with Random Coefficients
Joseph G. Conlon and Ali Naddaf
Abstract. This paper is concerned with linear uniformly elliptic and parabolic partial differential equations in divergence form. It is assumed that the coefficients of the equations are random variables, constant in time. The Green's functions for the equations are then random variables. Regularity properties for expectation values of Green's functions are obtained. In particular, it is shown that the expectation value is a continuously differentiable function whose derivatives are bounded by the corresponding derivatives of the heat equation. Similar results are obtained for the related finite difference equations.
Contents

1. Introduction                              153
2. Proof of Theorem 1.6                      159
3. Proof of Theorem 1.5                      174
4. Proof of Theorem 1.4: Diagonal Case       192
5. Proof of Theorem 1.4: Off-Diagonal Case   204
6. Proof of Theorem 1.2                      215
References                                   224

1. Introduction
Let $(\Omega, \mathcal{F}, \mu)$ be a probability space and $a : \Omega \to \mathbf{R}^{d(d+1)/2}$ be a bounded measurable function from $\Omega$ to the space of symmetric $d \times d$ matrices. We assume that there are positive constants $\lambda, \Lambda$ such that

(1.1)    $\lambda I_d \le a(\omega) \le \Lambda I_d$, $\omega \in \Omega$,

in the sense of quadratic forms, where $I_d$ is the identity matrix in $d$ dimensions. We assume that $\mathbf{R}^d$ acts on $\Omega$ by translation operators $\tau_x : \Omega \to \Omega$, $x \in \mathbf{R}^d$, which are measure preserving and satisfy the properties $\tau_x \tau_y = \tau_{x+y}$, $\tau_0 = $ identity, $x, y \in \mathbf{R}^d$. We assume also that the function from $\mathbf{R}^d \times \Omega$ to $\Omega$ defined by $(x, \omega) \to \tau_x \omega$, $x \in \mathbf{R}^d$, $\omega \in \Omega$, is measurable. It follows that with probability 1 the function $a(x, \omega) = a(\tau_x \omega)$, $x \in \mathbf{R}^d$, is a Lebesgue measurable function from $\mathbf{R}^d$ to $d \times d$ matrices.

Received May 25, 2000.
Mathematics Subject Classification. 35R60, 60J75.
Key words and phrases. Green's functions, diffusions, random environments.
ISSN 1076-9803/00
Consider now, for $\omega \in \Omega$ such that $a(x, \omega)$ is a measurable function of $x \in \mathbf{R}^d$, the parabolic equation

(1.2)    $\dfrac{\partial u}{\partial t} = \sum_{i,j=1}^d \dfrac{\partial}{\partial x_i}\left[ a_{ij}(x, \omega)\, \dfrac{\partial u}{\partial x_j}(x, t, \omega) \right]$, $x \in \mathbf{R}^d$, $t > 0$,

         $u(x, 0, \omega) = f(x, \omega)$, $x \in \mathbf{R}^d$.

It is well known that the solution of this initial value problem can be written as

$u(x, t, \omega) = \int_{\mathbf{R}^d} G_a(x, y, t, \omega) f(y, \omega)\, dy,$

where $G_a(x, y, t, \omega)$ is the Green's function, and $G_a$ is measurable in $(x, y, t, \omega)$. Evidently $G_a$ is a positive function which satisfies

(1.3)    $\int_{\mathbf{R}^d} G_a(x, y, t, \omega)\, dy = 1.$

It also follows from the work of Aronson [1] (see also [5]) that there is a constant $C(d, \lambda, \Lambda)$, depending only on dimension $d$ and the uniform ellipticity constants $\lambda, \Lambda$ of (1.1), such that

$0 \le G_a(x, y, t, \omega) \le \dfrac{C(d, \lambda, \Lambda)}{t^{d/2}} \exp\left[ \dfrac{-|x - y|^2}{C(d, \lambda, \Lambda)\, t} \right].$
In this paper we shall be concerned with the expectation value of $G_a$ over $\Omega$. Denoting expectation value on $\Omega$ by $\langle \cdot \rangle$, we define the function $G_a(x, t)$, $x \in \mathbf{R}^d$, $t > 0$, by

(1.4)    $\langle G_a(x, 0, t, \cdot) \rangle = G_a(x, t).$

Using the fact that $\tau_x \tau_y = \tau_{x+y}$, $x, y \in \mathbf{R}^d$, we see from the uniqueness of solutions to (1.2) that

$G_a(x, y, t, \omega) = G_a(x - y, 0, t, \tau_y \omega),$

whence the measure preserving property of the operator $\tau_y$ yields the identity,

$\langle G_a(x, y, t, \cdot) \rangle = G_a(x - y, t).$

From (1.3), (1.4) we have

$\int_{\mathbf{R}^d} G_a(x, t)\, dx = 1, \quad t > 0,$

(1.5)    $0 \le G_a(x, t) \le \dfrac{C(d, \lambda, \Lambda)}{t^{d/2}} \exp\left[ \dfrac{-|x|^2}{C(d, \lambda, \Lambda)\, t} \right]$, $x \in \mathbf{R}^d$, $t > 0$.

In general one cannot say anything about the smoothness properties of the function $G_a(x, y, t, \omega)$. We shall, however, be able to prove here that $G_a(x, t)$ is a $C^1$ function of $(x, t)$, $x \in \mathbf{R}^d$, $t > 0$.
Theorem 1.1. $G_a(x, t)$ is a $C^1$ function of $(x, t)$, $x \in \mathbf{R}^d$, $t > 0$. There is a constant $C(d, \lambda, \Lambda)$, depending only on $d, \lambda, \Lambda$, such that

$\left| \dfrac{\partial G_a}{\partial t}(x, t) \right| \le \dfrac{C(d, \lambda, \Lambda)}{t^{d/2+1}} \exp\left[ \dfrac{-|x|^2}{C(d, \lambda, \Lambda)\, t} \right],$

$\left| \dfrac{\partial G_a}{\partial x_i}(x, t) \right| \le \dfrac{C(d, \lambda, \Lambda)}{t^{d/2+1/2}} \exp\left[ \dfrac{-|x|^2}{C(d, \lambda, \Lambda)\, t} \right].$
The Aronson inequality (1.5) shows us that $G_a(x, t)$ is bounded by the kernel of the heat equation. Theorem 1.1 proves that corresponding inequalities hold for the first derivatives of $G_a(x, t)$. We cannot use our methods to prove existence of second derivatives of $G_a(x, t)$ in the space variable $x$. In fact we are inclined to believe that second space derivatives do not in general exist in a pointwise sense.
As well as the parabolic problem (1.2) we also consider the corresponding elliptic problem,

(1.6)    $-\sum_{i,j=1}^d \dfrac{\partial}{\partial x_i}\left[ a_{ij}(x, \omega)\, \dfrac{\partial u}{\partial x_j}(x, \omega) \right] = f(x, \omega)$, $x \in \mathbf{R}^d$.

If $d \ge 3$ then the solution of (1.6) can be written as

$u(x, \omega) = \int_{\mathbf{R}^d} G_a(x, y, \omega) f(y, \omega)\, dy,$

where $G_a(x, y, \omega)$ is the Green's function and is measurable in $(x, y, \omega)$. It follows again by Aronson's work that there is a constant $C(d, \lambda, \Lambda)$, depending only on $d, \lambda, \Lambda$, such that

(1.7)    $0 \le G_a(x, y, \omega) \le C(d, \lambda, \Lambda)/|x - y|^{d-2}$, $d \ge 3$.
Again we consider the expectation of the Green's function, $G_a(x)$, defined by

$\langle G_a(x, y, \cdot) \rangle = G_a(x - y).$

It follows from (1.7) that

$0 \le G_a(x) \le C(d, \lambda, \Lambda)/|x|^{d-2}$, $d \ge 3$.

Theorem 1.2. Suppose $d \ge 3$. Then $G_a(x)$ is a $C^1$ function of $x$ for $x \ne 0$. There is a constant $C(d, \lambda, \Lambda)$, depending only on $d, \lambda, \Lambda$, such that

$\left| \dfrac{\partial G_a}{\partial x_i}(x) \right| \le \dfrac{C(d, \lambda, \Lambda)}{|x|^{d-1}}$, $x \ne 0$.
We can also derive estimates on the Fourier transforms of $G_a(x, t)$ and $G_a(x)$. For a function $f : \mathbf{R}^d \to \mathbf{C}$ we define its Fourier transform $\hat{f}$ by

$\hat{f}(\xi) = \int_{\mathbf{R}^d} f(x)\, e^{i x \cdot \xi}\, dx$, $\xi \in \mathbf{R}^d$.

Evidently from the equation before (1.5) we have that $|\hat{G}_a(\xi, t)| \le 1$.
Theorem 1.3. The function $\hat{G}_a(\xi, t)$ is continuous for $\xi \in \mathbf{R}^d$, $t > 0$, and differentiable with respect to $t$. Let $\delta$ satisfy $0 < \delta < 1$. Then there is a constant $C(\delta, \lambda, \Lambda)$ depending only on $\delta, \lambda, \Lambda$, such that

$|\hat{G}_a(\xi, t)| \le \dfrac{C(\delta, \lambda, \Lambda)}{[1 + \lambda |\xi|^2 t]^{\delta}},$

$\left| \dfrac{\partial \hat{G}_a}{\partial t}(\xi, t) \right| \le \dfrac{C(\delta, \lambda, \Lambda)\, |\xi|^2}{[1 + \lambda |\xi|^2 t]^{1+\delta}},$

where $|\xi|$ denotes the Euclidean norm of $\xi \in \mathbf{R}^d$.

Remark 1.1. Note that the dimension $d$ does not enter in the constant $C(\delta, \lambda, \Lambda)$. Also, our method of proof breaks down if we take $\delta = 1$.
In this paper we shall be mostly concerned with a discrete version of the parabolic and elliptic problems (1.2), (1.6). Then Theorems 1.1, 1.2, 1.3 can be obtained as a continuum limit of our results on the discrete problem. In the discrete problem we assume $\mathbf{Z}^d$ acts on $\Omega$ by translation operators $\tau_x : \Omega \to \Omega$, $x \in \mathbf{Z}^d$, which are measure preserving and satisfy the properties $\tau_x \tau_y = \tau_{x+y}$, $\tau_0 = $ identity. For functions $g : \mathbf{Z}^d \to \mathbf{R}$ we define the discrete derivative $\nabla_i g$ of $g$ in the $i$-th direction to be

$\nabla_i g(x) = g(x + \mathbf{e}_i) - g(x)$, $x \in \mathbf{Z}^d$,

where $\mathbf{e}_i \in \mathbf{Z}^d$ is the element with entry 1 in the $i$-th position and 0 in other positions. The formal adjoint of $\nabla_i$ is given by $\nabla_i^*$, where

$\nabla_i^* g(x) = g(x - \mathbf{e}_i) - g(x)$, $x \in \mathbf{Z}^d$.
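The two difference operators just defined satisfy the summation-by-parts identity $\sum_x (\nabla_i g)(x)\, h(x) = \sum_x g(x)\, (\nabla_i^* h)(x)$ for finitely supported $g, h$. A minimal numerical sketch of this adjointness, in one space dimension and with all names our own choices:

```python
import numpy as np

# Discrete derivative (grad g)(x) = g(x+1) - g(x) and its formal adjoint
# (grad* h)(x) = h(x-1) - h(x); on the integers these satisfy
# sum_x (grad g)(x) h(x) = sum_x g(x) (grad* h)(x) for finitely supported g, h.

rng = np.random.default_rng(0)
n = 64
g = np.zeros(n); h = np.zeros(n)
g[10:30] = rng.normal(size=20)   # finitely supported inside the window
h[20:40] = rng.normal(size=20)

grad = lambda f: np.roll(f, -1) - f       # f(x+1) - f(x); periodic wrap is harmless here
grad_star = lambda f: np.roll(f, 1) - f   # f(x-1) - f(x)

lhs = np.sum(grad(g) * h)
rhs = np.sum(g * grad_star(h))
assert abs(lhs - rhs) < 1e-12
```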
The discrete version of the problem (1.2) that we shall be interested in is given by

(1.8)    $\dfrac{\partial u}{\partial t} = -\sum_{i,j=1}^d \nabla_i^* \left[ a_{ij}(\tau_x \omega)\, \nabla_j u(x, t, \omega) \right]$, $x \in \mathbf{Z}^d$, $t > 0$,

         $u(x, 0, \omega) = f(x, \omega)$, $x \in \mathbf{Z}^d$.
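A quick illustrative discretization of (1.8) (one space dimension, a periodic chain, explicit Euler time stepping; the grid size, coefficients, and step are our choices, not the paper's) exhibits the two structural properties used repeatedly below: the evolution conserves total mass, mirroring $\sum_y G_a(x, y, t, \omega) = 1$, and preserves positivity of the kernel.

```python
import numpy as np

# Sketch of (1.8) on a periodic chain of n sites (d = 1) with random
# coefficients lambda <= a(x) <= Lambda, frozen in time.  The generator is
# u -> -grad*(a grad u); for small enough dt the explicit Euler step is a
# stochastic matrix, so mass is conserved and positivity is preserved.

rng = np.random.default_rng(1)
n, lam, Lam = 128, 0.5, 2.0
a = rng.uniform(lam, Lam, size=n)

grad = lambda f: np.roll(f, -1) - f
grad_star = lambda f: np.roll(f, 1) - f

u = np.zeros(n); u[0] = 1.0       # delta initial data: u(., t) is a column of G_a
dt = 0.1 / Lam                    # assumption: dt * Lam small enough for stability
for _ in range(2000):
    u = u - dt * grad_star(a * grad(u))

assert abs(u.sum() - 1.0) < 1e-10   # total mass conserved
assert u.min() > -1e-12             # kernel stays (numerically) nonnegative
```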
The solution of (1.8) can be written as

$u(x, t, \omega) = \sum_{y \in \mathbf{Z}^d} G_a(x, y, t, \omega) f(y, \omega),$

where $G_a(x, y, t, \omega)$ is the discrete Green's function. As in the continuous case, $G_a$ is a positive function which satisfies

$\sum_{y \in \mathbf{Z}^d} G_a(x, y, t, \omega) = 1.$

It also follows from the work of Carlen et al. [3] that there is a constant $C(d, \lambda, \Lambda)$ depending only on $d, \lambda, \Lambda$ such that

$0 \le G_a(x, y, t, \omega) \le \dfrac{C(d, \lambda, \Lambda)}{1 + t^{d/2}} \exp\left[ -\dfrac{\min\{|x - y|,\ |x - y|^2/t\}}{C(d, \lambda, \Lambda)} \right].$
Now let $G_a(x, t)$, $x \in \mathbf{Z}^d$, $t > 0$, be the expectation of the Green's function,

(1.9)    $\langle G_a(x, y, t, \cdot) \rangle = G_a(x - y, t).$

Then we have

$\sum_{x \in \mathbf{Z}^d} G_a(x, t) = 1, \quad t > 0,$

(1.10)    $G_a(x, t) \le \dfrac{C(d, \lambda, \Lambda)}{1 + t^{d/2}} \exp\left[ -\dfrac{\min\{|x|,\ |x|^2/t\}}{C(d, \lambda, \Lambda)} \right]$, $x \in \mathbf{Z}^d$, $t > 0$.
The discrete version of Theorem 1.1 which we shall prove is given by the following:

Theorem 1.4. $G_a(x, t)$, $x \in \mathbf{Z}^d$, $t > 0$, is differentiable in $t$. There is a constant $C(d, \lambda, \Lambda)$, depending only on $d, \lambda, \Lambda$, such that

$\left| \dfrac{\partial G_a}{\partial t}(x, t) \right| \le \dfrac{C(d, \lambda, \Lambda)}{1 + t^{d/2+1}} \exp\left[ -\dfrac{\min\{|x|,\ |x|^2/t\}}{C(d, \lambda, \Lambda)} \right],$

$|\nabla_i G_a(x, t)| \le \dfrac{C(d, \lambda, \Lambda)}{1 + t^{d/2+1/2}} \exp\left[ -\dfrac{\min\{|x|,\ |x|^2/t\}}{C(d, \lambda, \Lambda)} \right].$

Let $\delta$ satisfy $0 < \delta < 1$. Then there is a constant $C(\delta, d, \lambda, \Lambda)$ depending only on $\delta, d, \lambda, \Lambda$ such that

(1.11)    $|\nabla_i \nabla_j G_a(x, t)| \le \dfrac{C(\delta, d, \lambda, \Lambda)}{1 + t^{(d+1+\delta)/2}} \exp\left[ -\dfrac{\min\{|x|,\ |x|^2/t\}}{C(\delta, d, \lambda, \Lambda)} \right].$
Remark 1.2. As in Theorem 1.1, Theorem 1.4 shows that first derivatives of $G_a(x, t)$ are bounded by corresponding heat equation quantities. It also shows that second space derivatives are almost similarly bounded. We cannot put $\delta = 1$ in (1.11) since the constant $C(\delta, d, \lambda, \Lambda)$ diverges as $\delta \to 1$.
The elliptic problem corresponding to (1.8) is given by

(1.12)    $\sum_{i,j=1}^d \nabla_i^* \left[ a_{ij}(\tau_x \omega)\, \nabla_j u(x, \omega) \right] = f(x, \omega)$, $x \in \mathbf{Z}^d$.

If $d \ge 3$ then the solution of (1.12) can be written as

$u(x, \omega) = \sum_{y \in \mathbf{Z}^d} G_a(x, y, \omega) f(y, \omega),$

where $G_a(x, y, \omega)$ is the discrete Green's function. It follows from Carlen et al. [3] that there is a constant $C(d, \lambda, \Lambda)$ depending only on $d, \lambda, \Lambda$ such that

(1.13)    $0 \le G_a(x, y, \omega) \le C(d, \lambda, \Lambda)/[1 + |x - y|^{d-2}]$, $d \ge 3$.

Letting $G_a(x)$ be the expectation of the Green's function,

$\langle G_a(x, y, \cdot) \rangle = G_a(x - y),$

it follows from (1.13) that

(1.14)    $0 \le G_a(x) \le C(d, \lambda, \Lambda)/[1 + |x|^{d-2}]$, $d \ge 3$.
We shall prove a discrete version of Theorem 1.2 as follows:

Theorem 1.5. Suppose $d \ge 3$. Then there is a constant $C(d, \lambda, \Lambda)$, depending only on $d, \lambda, \Lambda$, such that

$|\nabla_i G_a(x)| \le C(d, \lambda, \Lambda)/[1 + |x|^{d-1}]$, $x \in \mathbf{Z}^d$.

Let $\delta$ satisfy $0 < \delta < 1$. Then there is a constant $C(\delta, d, \lambda, \Lambda)$ depending only on $\delta, d, \lambda, \Lambda$ such that

$|\nabla_i \nabla_j G_a(x)| \le C(\delta, d, \lambda, \Lambda)/[1 + |x|^{d-1+\delta}]$, $x \in \mathbf{Z}^d$.

Remark 1.3. As in Theorem 1.4 our estimates on the second derivatives of $G_a(x)$ diverge as $\delta \to 1$.
Next we turn to the discrete version of Theorem 1.3. For a function $f : \mathbf{Z}^d \to \mathbf{C}$ we define its Fourier transform $\hat{f}$ by

$\hat{f}(\xi) = \sum_{x \in \mathbf{Z}^d} f(x)\, e^{i x \cdot \xi}$, $\xi \in \mathbf{R}^d$.

For $1 \le k \le d$, $\xi \in \mathbf{R}^d$, let $e_k(\xi) = 1 - e^{i \mathbf{e}_k \cdot \xi}$ and $e(\xi)$ be the vector $e(\xi) = (e_1(\xi), \dots, e_d(\xi))$. Let $\hat{G}_a(\xi, t)$ be the Fourier transform of the function $G_a(x, t)$, $x \in \mathbf{Z}^d$, $t > 0$, defined by (1.9). From the equation before (1.10) it is clear that $|\hat{G}_a(\xi, t)| \le 1$.
Theorem 1.6. The function $\hat{G}_a(\xi, t)$ is continuous for $\xi \in \mathbf{R}^d$ and differentiable for $t > 0$. Let $\delta$ satisfy $0 < \delta < 1$. Then there is a constant $C(\delta, \lambda, \Lambda)$ depending only on $\delta, \lambda, \Lambda$, such that

$|\hat{G}_a(\xi, t)| \le \dfrac{C(\delta, \lambda, \Lambda)}{[1 + \lambda |e(\xi)|^2 t]^{\delta}},$

$\left| \dfrac{\partial \hat{G}_a}{\partial t}(\xi, t) \right| \le \dfrac{C(\delta, \lambda, \Lambda)\, |e(\xi)|^2}{[1 + \lambda |e(\xi)|^2 t]^{1+\delta}},$

where $|e(\xi)|$ denotes the Euclidean norm of $e(\xi) \in \mathbf{C}^d$.
In order to prove Theorems 1.1–1.6 we use a representation for the Fourier transform of the expectation of the Green's function for the elliptic problem (1.12), which was obtained in [4]. This in turn gives us a formula for the Laplace transform of the function $\hat{G}_a(\xi, t)$ of Theorem 1.6. We can prove Theorem 1.6 then by estimating the inverse Laplace transform. In order to prove Theorems 1.4, 1.5 we need to use interpolation theory, in particular the Hunt Interpolation Theorem [10]. Thus we prove that $\hat{G}_a(\xi, t)$ is in a weak $L^p$ space, which will then imply pointwise bounds on the Fourier inverse. We shall prove here Theorems 1.4–1.6 in detail. In the final section we shall show how to generalize the proof of Theorem 1.5 to prove Theorem 1.2. The proofs of Theorems 1.1 and 1.3 are left to the interested reader. We would like to thank Jana Björn and Vladimir Maz'ya for help with the proof of Lemma 2.6.

There is already a large body of literature on the problem of homogenization of solutions of elliptic and parabolic equations with random coefficients, [4], [6], [7], [8], [11]. These results prove in a certain sense that, asymptotically, the lowest frequency components of the functions $G_a(x)$ and $G_a(x, t)$ are the same as the corresponding quantities for a constant coefficient equation. The constant depends on the random matrix $a(\cdot)$. The problem of homogenization in a periodic medium has also been studied [2], [11], and similar results have been obtained.
2. Proof of Theorem 1.6

Let $\hat{G}_a(\xi, t)$, $\xi \in \mathbf{R}^d$, $t > 0$, be the function in Theorem 1.6. Our first goal will be to obtain a formula for the Laplace transform of $\hat{G}_a(\xi, t)$, which we denote by $\hat{G}_a(\xi, \eta)$,

$\hat{G}_a(\xi, \eta) = \int_0^\infty dt\; e^{-\eta t}\, \hat{G}_a(\xi, t), \quad \operatorname{Re}(\eta) > 0.$
It is evident that $\hat{G}_a(\xi, \eta)$ is the Fourier transform of the expectation of the Green's function for the elliptic problem,

(2.1)    $\eta\, u(x, \omega) + \sum_{i,j=1}^d \nabla_i^* \left[ a_{ij}(\tau_x \omega)\, \nabla_j u(x, \omega) \right] = f(x, \omega)$, $x \in \mathbf{Z}^d$.
In [4] we derived a formula for this. To do that we defined operators $\partial_i$, $1 \le i \le d$, on functions $\varphi : \Omega \to \mathbf{C}$ by $\partial_i \varphi(\omega) = \varphi(\tau_{\mathbf{e}_i} \omega) - \varphi(\omega)$, with corresponding adjoint operators $\partial_i^*$, $1 \le i \le d$, defined by $\partial_i^* \varphi(\omega) = \varphi(\tau_{-\mathbf{e}_i} \omega) - \varphi(\omega)$. Hence for $\xi \in \mathbf{R}^d$ we may define an operator $L_\xi$ on functions $\varphi : \Omega \to \mathbf{C}$ by

$L_\xi \varphi(\omega) = P \sum_{i,j=1}^d e^{i(\mathbf{e}_i - \mathbf{e}_j) \cdot \xi} \left[ \partial_i + e_i(-\xi) \right] a_{ij}(\omega) \left[ \partial_j + e_j(\xi) \right] \varphi(\omega),$

where $P$ is the projection orthogonal to the constant function and $e_j(\xi)$ is defined just before the statement of Theorem 1.6. Note that $L_\xi$ takes a function $\varphi$ to a function $L_\xi \varphi$ satisfying $\langle L_\xi \varphi \rangle = 0$. Now for $1 \le k \le d$, $\xi \in \mathbf{R}^d$, $\operatorname{Re}(\eta) > 0$, let $\psi_k(\xi, \eta, \omega)$ be the solution to the equation,

(2.2)    $[L_\xi + \eta]\, \psi_k(\xi, \eta, \omega) + \sum_{j=1}^d e^{i \mathbf{e}_j \cdot \xi} \left[ \partial_j + e_j(-\xi) \right] \left[ a_{kj}(\omega) - \langle a_{kj}(\cdot) \rangle \right] = 0.$

Then we may define a $d \times d$ matrix $q(\xi, \eta)$ by,

(2.3)    $q_{kk'}(\xi, \eta) = \langle a_{kk'}(\cdot) \rangle + \left\langle \sum_{j=1}^d a_{k'j}(\cdot)\, e^{-i \mathbf{e}_j \cdot \xi} \left[ \partial_j + e_j(\xi) \right] \psi_k(\xi, \eta, \cdot) \right\rangle.$

The function $\hat{G}_a(\xi, \eta)$ is then given by the formula,

(2.4)    $\hat{G}_a(\xi, \eta) = \dfrac{1}{\eta + e(\xi)\, q(\xi, \eta)\, e(-\xi)}$, $\xi \in \mathbf{R}^d$, $\operatorname{Re}(\eta) > 0$.
We actually established the formula (2.4) in [4] when $\eta$ is real and positive. In that case $q(\xi, \eta)$ is a $d \times d$ Hermitian matrix bounded below in the quadratic form sense by $\lambda I_d$. It follows that $\hat{G}_a(\xi, \eta)$ is finite for all positive $\eta$. We wish to establish this for all $\eta$ satisfying $\operatorname{Re}(\eta) > 0$. We can in fact argue this from (2.1). Suppose the function on the RHS of (2.1) is a function of $x$ only, $f(x, \omega) = f(x)$. Then the Fourier transform $\hat{u}(\xi, \omega)$ of the solution to (2.1) satisfies the equation,

$\langle \hat{u}(\xi, \cdot) \rangle = \hat{G}_a(\xi, \eta)\, \hat{f}(\xi)$, $\xi \in \mathbf{R}^d$.

If we multiply (2.1) by $\bar{u}(x, \omega)$, and sum with respect to $x$, we have by the Plancherel Theorem,

$|\eta|^2 \int_{[-\pi,\pi]^d} \left\langle |\hat{u}(\xi, \cdot)|^2 \right\rangle d\xi \le \int_{[-\pi,\pi]^d} |\hat{f}(\xi)|^2\, d\xi.$

Since $\hat{f}(\xi)$ is an arbitrary function it follows that $|\hat{G}_a(\xi, \eta)| \le 1/|\eta|$. We improve this inequality in the following:
Lemma 2.1. Suppose $\operatorname{Re}(\eta) > 0$ and $\xi \in \mathbf{R}^d$. Let $\zeta = (\zeta_1, \dots, \zeta_d) \in \mathbf{C}^d$. Then

(2.5)    $\operatorname{Re}\left[ \eta + \bar{\zeta}\, q(\xi, \eta)\, \zeta \right] \ge \operatorname{Re}(\eta) + \lambda |\zeta|^2,$

(2.6)    $\operatorname{Im}(\eta)\, \operatorname{Im}\left[ \bar{\zeta}\, q(\xi, \eta)\, \zeta \right] \ge 0.$
Proof. From (2.2), (2.3) we have that

$q_{kk'}(\xi, \eta) = \sum_{i,j=1}^d \left\langle a_{ij}(\cdot) \left[ \delta_{ki} + e^{i\mathbf{e}_i \cdot \xi} (\partial_i + e_i(-\xi))\, \overline{\psi_k(-\xi, \eta, \cdot)} \right] \left[ \delta_{k'j} + e^{-i\mathbf{e}_j \cdot \xi} (\partial_j + e_j(\xi))\, \psi_{k'}(\xi, \eta, \cdot) \right] \right\rangle + \eta \left\langle \overline{\psi_k(-\xi, \eta, \cdot)}\, \psi_{k'}(\xi, \eta, \cdot) \right\rangle.$

Thus we have

(2.7)    $\bar{\zeta}\, q(\xi, \eta)\, \zeta = \sum_{i,j=1}^d \left\langle a_{ij}(\cdot) \left[ \bar{\zeta}_i + e^{i\mathbf{e}_i \cdot \xi} (\partial_i + e_i(-\xi))\, \overline{\varphi(-\xi, \eta, \cdot)} \right] \left[ \zeta_j + e^{-i\mathbf{e}_j \cdot \xi} (\partial_j + e_j(\xi))\, \varphi(\xi, \eta, \cdot) \right] \right\rangle + \eta \left\langle \overline{\varphi(-\xi, \eta, \cdot)}\, \varphi(\xi, \eta, \cdot) \right\rangle,$

where

(2.8)    $\varphi(\xi, \eta, \cdot) = \sum_{k=1}^d \zeta_k\, \psi_k(\xi, \eta, \cdot).$

Evidently we have that

(2.9)    $[L_\xi + \eta]\, \varphi(\xi, \eta, \omega) + \sum_{k=1}^d \zeta_k \sum_{j=1}^d e^{i\mathbf{e}_j \cdot \xi} \left[ \partial_j + e_j(-\xi) \right] \left[ a_{kj}(\omega) - \langle a_{kj}(\cdot) \rangle \right] = 0.$

It follows from the last equation that

$[L_\xi + \bar{\eta}]\, \overline{\varphi(-\xi, \eta, \cdot)} = [L_\xi + \eta]\, \varphi(\xi, \eta, \cdot),$

whence, writing $\varphi_\pm = \varphi(\xi, \eta, \cdot) \pm \overline{\varphi(-\xi, \eta, \cdot)}$,

(2.10)    $[L_\xi + \operatorname{Re}(\eta)]\, \varphi_- = -i \operatorname{Im}(\eta)\, \varphi_+.$

Hence

(2.11)    $\left\langle \bar{\varphi}_-\, [L_\xi + \operatorname{Re}(\eta)]\, \varphi_- \right\rangle = -i \operatorname{Im}(\eta) \left\langle \bar{\varphi}_-\, \varphi_+ \right\rangle.$

Observe that since the LHS of (2.11) is real, the quantity $\langle \bar{\varphi}_- \varphi_+ \rangle$ is pure imaginary.
Next for $1 \le j \le d$, let us put

$A_j = \zeta_j + e^{-i\mathbf{e}_j \cdot \xi} \left[ \partial_j + e_j(\xi) \right] \tfrac{1}{2}\varphi_+, \qquad B_j = e^{-i\mathbf{e}_j \cdot \xi} \left[ \partial_j + e_j(\xi) \right] \tfrac{1}{2}\varphi_-,$

where $\varphi_\pm = \varphi(\xi, \eta, \cdot) \pm \overline{\varphi(-\xi, \eta, \cdot)}$. Then

$\bar{\zeta}\, q(\xi, \eta)\, \zeta = \sum_{i,j=1}^d \left\langle a_{ij}(\cdot)\, \overline{[A_i - B_i]}\, [A_j + B_j] \right\rangle + \eta \left\langle \overline{\varphi(-\xi, \eta, \cdot)}\, \varphi(\xi, \eta, \cdot) \right\rangle.$

We can decompose this sum into real and imaginary parts. Thus

$\sum_{i,j=1}^d \left\langle a_{ij}(\cdot)\, \overline{[A_i - B_i]}\, [A_j + B_j] \right\rangle = \sum_{i,j=1}^d \left\langle a_{ij}(\cdot)\, \bar{A}_i A_j \right\rangle - \sum_{i,j=1}^d \left\langle a_{ij}(\cdot)\, \bar{B}_i B_j \right\rangle + 2i \operatorname{Im} \sum_{i,j=1}^d \left\langle a_{ij}(\cdot)\, \bar{A}_i B_j \right\rangle.$

Evidently the first two terms on the RHS of the last equation are real while the third term is pure imaginary. We also have that

$\left\langle \overline{\varphi(-\xi, \eta, \cdot)}\, \varphi(\xi, \eta, \cdot) \right\rangle = \tfrac{1}{4} \left\langle \overline{[\varphi_+ - \varphi_-]}\, [\varphi_+ + \varphi_-] \right\rangle = \tfrac{1}{4} \left\langle |\varphi_+|^2 \right\rangle - \tfrac{1}{4} \left\langle |\varphi_-|^2 \right\rangle - \dfrac{i}{2\operatorname{Im}(\eta)} \left\langle \bar{\varphi}_-\, [L_\xi + \operatorname{Re}(\eta)]\, \varphi_- \right\rangle,$

where we have used (2.11). Observe that the first two terms on the RHS of the last equation are real while the third term is pure imaginary. Hence

$\eta \left\langle \overline{\varphi(-\xi, \eta, \cdot)}\, \varphi(\xi, \eta, \cdot) \right\rangle = \dfrac{\operatorname{Re}(\eta)}{4} \langle |\varphi_+|^2 \rangle - \dfrac{\operatorname{Re}(\eta)}{4} \langle |\varphi_-|^2 \rangle + \tfrac{1}{2} \left\langle \bar{\varphi}_-\, [L_\xi + \operatorname{Re}(\eta)]\, \varphi_- \right\rangle$
$\qquad + i \left\{ \dfrac{\operatorname{Im}(\eta)}{4} \langle |\varphi_+|^2 \rangle - \dfrac{\operatorname{Im}(\eta)}{4} \langle |\varphi_-|^2 \rangle - \dfrac{\operatorname{Re}(\eta)}{2\operatorname{Im}(\eta)} \left\langle \bar{\varphi}_-\, [L_\xi + \operatorname{Re}(\eta)]\, \varphi_- \right\rangle \right\}.$
We conclude then from the last four equations that

(2.12)    $\operatorname{Re}[\bar{\zeta} q(\xi,\eta) \zeta] = \sum_{i,j=1}^d \langle a_{ij}(\cdot) \bar{A}_i A_j \rangle - \sum_{i,j=1}^d \langle a_{ij}(\cdot) \bar{B}_i B_j \rangle + \dfrac{\operatorname{Re}(\eta)}{4} \langle |\varphi_+|^2 \rangle - \dfrac{\operatorname{Re}(\eta)}{4} \langle |\varphi_-|^2 \rangle + \tfrac{1}{2} \langle \bar{\varphi}_- [L_\xi + \operatorname{Re}(\eta)] \varphi_- \rangle,$

(2.13)    $\operatorname{Im}[\bar{\zeta} q(\xi,\eta) \zeta] = 2 \operatorname{Im} \sum_{i,j=1}^d \langle a_{ij}(\cdot) \bar{A}_i B_j \rangle + \dfrac{\operatorname{Im}(\eta)}{4} \langle |\varphi_+|^2 \rangle - \dfrac{\operatorname{Im}(\eta)}{4} \langle |\varphi_-|^2 \rangle - \dfrac{\operatorname{Re}(\eta)}{2\operatorname{Im}(\eta)} \langle \bar{\varphi}_- [L_\xi + \operatorname{Re}(\eta)] \varphi_- \rangle,$

where $\varphi_\pm = \varphi(\xi,\eta,\cdot) \pm \overline{\varphi(-\xi,\eta,\cdot)}$. We can simplify the expression on the RHS of (2.12) by observing that

$\sum_{i,j=1}^d \langle a_{ij}(\cdot)\, \bar{B}_i B_j \rangle = \tfrac{1}{4} \langle \bar{\varphi}_-\, L_\xi\, \varphi_- \rangle.$
Hence we obtain the expression,

(2.14)    $\operatorname{Re}[\bar{\zeta} q(\xi,\eta) \zeta] = \sum_{i,j=1}^d \langle a_{ij}(\cdot) \bar{A}_i A_j \rangle + \dfrac{\operatorname{Re}(\eta)}{4} \langle |\varphi_+|^2 \rangle + \tfrac{1}{4} \langle \bar{\varphi}_- [L_\xi + \operatorname{Re}(\eta)] \varphi_- \rangle,$

with $\varphi_\pm = \varphi(\xi,\eta,\cdot) \pm \overline{\varphi(-\xi,\eta,\cdot)}$. Now all the terms on the RHS of the last expression are nonnegative, and from Jensen's inequality,

$\sum_{i,j=1}^d \langle a_{ij}(\cdot)\, \bar{A}_i A_j \rangle \ge \lambda |\zeta|^2.$

The inequality in (2.5) follows from this.
To prove the inequality (2.6) we need to rewrite the RHS of (2.13). We have now

$\sum_{i,j=1}^d \langle a_{ij}(\cdot)\, \bar{A}_i B_j \rangle = \sum_{i,j=1}^d \langle a_{ij}(\cdot)\, \bar{\zeta}_i B_j \rangle + \tfrac{1}{4} \langle \bar{\varphi}_+\, L_\xi\, \varphi_- \rangle,$

where $\varphi_\pm = \varphi(\xi,\eta,\cdot) \pm \overline{\varphi(-\xi,\eta,\cdot)}$.
From (2.10) we have that

$\langle \bar{\varphi}_+\, L_\xi\, \varphi_- \rangle = -i \operatorname{Im}(\eta) \langle |\varphi_+|^2 \rangle - \operatorname{Re}(\eta) \langle \bar{\varphi}_+ \varphi_- \rangle = -i \operatorname{Im}(\eta) \langle |\varphi_+|^2 \rangle + i\, \dfrac{\operatorname{Re}(\eta)}{\operatorname{Im}(\eta)} \langle \bar{\varphi}_- [L_\xi + \operatorname{Re}(\eta)] \varphi_- \rangle,$

where we have used (2.11) and $\varphi_\pm = \varphi(\xi,\eta,\cdot) \pm \overline{\varphi(-\xi,\eta,\cdot)}$. We also have, from (2.9), that

$\sum_{i,j=1}^d \langle a_{ij}(\cdot)\, \bar{\zeta}_i B_j \rangle = \tfrac{1}{2} \left\langle \overline{\left\{ \sum_{i,j=1}^d \zeta_i\, e^{i\mathbf{e}_j \cdot \xi} \left[ \partial_j + e_j(-\xi) \right] a_{ij}(\cdot) \right\}}\, \varphi_- \right\rangle$
$= -\tfrac{1}{4} \left\langle \overline{\left\{ [L_\xi + \eta]\, \varphi(\xi,\eta,\cdot) + [L_\xi + \bar{\eta}]\, \overline{\varphi(-\xi,\eta,\cdot)} \right\}}\, \varphi_- \right\rangle$
$= -\tfrac{1}{4} \left\langle \overline{[L_\xi + \operatorname{Re}(\eta)]\, \varphi_+}\, \varphi_- \right\rangle + \dfrac{i}{4} \operatorname{Im}(\eta) \langle |\varphi_-|^2 \rangle$
$= -\tfrac{1}{4} \left\langle \bar{\varphi}_+\, [L_\xi + \operatorname{Re}(\eta)]\, \varphi_- \right\rangle + \dfrac{i}{4} \operatorname{Im}(\eta) \langle |\varphi_-|^2 \rangle$
$= \dfrac{i}{4} \operatorname{Im}(\eta) \left[ \langle |\varphi_+|^2 \rangle + \langle |\varphi_-|^2 \rangle \right].$

It follows now from (2.13) and the last three identities that

(2.15)    $\operatorname{Im}[\bar{\zeta}\, q(\xi,\eta)\, \zeta] = \tfrac{1}{4} \operatorname{Im}(\eta) \langle |\varphi_+|^2 \rangle + \tfrac{1}{4} \operatorname{Im}(\eta) \langle |\varphi_-|^2 \rangle.$

The inequality (2.6) follows.
Let us denote by $\hat{G}_a(\xi, t)$, $t > 0$, the inverse Laplace transform of $\hat{G}_a(\xi, \eta)$. Thus from (2.4) we have

(2.16)    $\hat{G}_a(\xi, t) = \lim_{N \to \infty} \dfrac{1}{2\pi} \int_{-N}^{N} \dfrac{e^{\eta t}}{\eta + e(\xi)\, q(\xi, \eta)\, e(-\xi)}\, d[\operatorname{Im}(\eta)].$

It follows from Lemma 2.1 that, provided $\operatorname{Re}(\eta) > 0$, the integral in (2.16) over the finite interval $-N < \operatorname{Im}(\eta) < N$ exists for all $N$. We need then to show that the limit as $N \to \infty$ in (2.16) exists.
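As a sanity check on the inversion formula (2.16), consider the trivial scalar situation in which $e(\xi) q(\xi,\eta) e(-\xi)$ is replaced by a constant $c > 0$, so that $\hat{G}_a = 1/(\eta + c)$ and the inverse Laplace transform must be $e^{-ct}$. A rough numerical version of the truncated contour integral (the values of $c$, $\operatorname{Re}(\eta)$, $t$, the cutoff $N$, and the grid are all our choices):

```python
import numpy as np

# Numerical Bromwich inversion of 1/(eta + c) along the line Re(eta) = sigma,
# truncated at |Im(eta)| = N, as in (2.16); the answer should be exp(-c t).

c, sigma, t = 0.7, 0.3, 2.0
N = 4000.0
y = np.linspace(-N, N, 2_000_001)          # Im(eta) grid
eta = sigma + 1j * y
integrand = np.exp(eta * t) / (eta + c)
dy = y[1] - y[0]
# trapezoid rule in Im(eta)
val = (dy * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))).real / (2 * np.pi)
assert abs(val - np.exp(-c * t)) < 1e-3
```

The oscillatory tail of the truncated integral decays like $e^{\sigma t}/(N t)$, which is why a moderate cutoff already suffices here.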
Lemma 2.2. Suppose $\operatorname{Re}(\eta) > 0$ and $\xi \in \mathbf{R}^d$. Then the limit in (2.16) as $N \to \infty$ exists and is independent of $\operatorname{Re}(\eta) > 0$.
Proof. We first note that for any $\zeta \in \mathbf{C}^d$, $\bar{\zeta}\, q(\xi, \eta)\, \zeta$ and $\bar{\zeta}\, q(\xi, \bar{\eta})\, \zeta$ are complex conjugates. This follows easily from (2.7). We conclude from this that

(2.17)    $\dfrac{1}{2\pi} \int_{-N}^{N} \dfrac{e^{\eta t}}{\eta + e(\xi) q(\xi,\eta) e(-\xi)}\, d[\operatorname{Im}(\eta)] = \dfrac{1}{\pi} \operatorname{Re} \int_0^N \dfrac{e^{\eta t}}{\eta + e(\xi) q(\xi,\eta) e(-\xi)}\, d[\operatorname{Im}(\eta)]$
$= \dfrac{1}{\pi} \exp[\operatorname{Re}(\eta) t] \left\{ \int_0^N h(\eta) \cos[\operatorname{Im}(\eta) t]\, d[\operatorname{Im}(\eta)] + \int_0^N k(\eta) \sin[\operatorname{Im}(\eta) t]\, d[\operatorname{Im}(\eta)] \right\},$

where

(2.18)    $h(\eta) = \dfrac{\operatorname{Re}[\eta + e(\xi) q(\xi,\eta) e(-\xi)]}{|\eta + e(\xi) q(\xi,\eta) e(-\xi)|^2}, \qquad k(\eta) = \dfrac{\operatorname{Im}[\eta + e(\xi) q(\xi,\eta) e(-\xi)]}{|\eta + e(\xi) q(\xi,\eta) e(-\xi)|^2}.$

We show there is a constant $C_{\lambda,\Lambda}$, depending only on $\lambda, \Lambda$, such that

(2.19)    $\int_0^\infty |h(\eta)|\, d[\operatorname{Im}(\eta)] \le C_{\lambda,\Lambda}, \quad \operatorname{Re}(\eta) > 0.$
To see this, observe from (2.14), (2.15) (applied with $\zeta = e(-\xi)$) that

(2.20)    $|h(\eta)| \le \dfrac{\operatorname{Re}(\eta) + \rho}{[\operatorname{Re}(\eta) + \rho]^2 + \operatorname{Im}(\eta)^2},$

where

$\rho = \dfrac{\sum_{i,j=1}^d \langle a_{ij}(\cdot) \bar{A}_i A_j \rangle + \tfrac{1}{4} \langle \bar{\varphi}_- L_\xi \varphi_- \rangle}{1 + \tfrac{1}{4} \langle |\varphi_+|^2 \rangle + \tfrac{1}{4} \langle |\varphi_-|^2 \rangle},$

with $\varphi_\pm = \varphi(\xi,\eta,\cdot) \pm \overline{\varphi(-\xi,\eta,\cdot)}$. It is easy to see that

(2.21)    $\sum_{i,j=1}^d \langle a_{ij}(\cdot) \bar{A}_i A_j \rangle + \tfrac{1}{4} \langle \bar{\varphi}_- L_\xi \varphi_- \rangle \ge \lambda |e(\xi)|^2.$
We can also obtain an upper bound using the fact that

$\sum_{i,j=1}^d \langle a_{ij}(\cdot) \bar{A}_i A_j \rangle + \tfrac{1}{4} \langle \bar{\varphi}_- L_\xi \varphi_- \rangle \le 2\left[ \sum_{i,j=1}^d \langle a_{ij}(\cdot)\, \overline{e_i(-\xi)}\, e_j(-\xi) \rangle + \tfrac{1}{4} \langle \bar{\varphi}_+ L_\xi \varphi_+ \rangle \right] + \tfrac{1}{4} \langle \bar{\varphi}_- L_\xi \varphi_- \rangle$
$\le 2\Lambda |e(\xi)|^2 + \tfrac{1}{4} \langle \bar{\varphi}_+ L_\xi \varphi_+ \rangle + \tfrac{1}{2} \left\langle \overline{\varphi(\xi,\eta,\cdot)}\, L_\xi\, \varphi(\xi,\eta,\cdot) \right\rangle + \tfrac{1}{2} \left\langle \varphi(-\xi,\eta,\cdot)\, L_\xi\, \overline{\varphi(-\xi,\eta,\cdot)} \right\rangle,$

where we have used the parallelogram identity $\tfrac{1}{4}\langle \bar{\varphi}_+ L_\xi \varphi_+ \rangle + \tfrac{1}{4}\langle \bar{\varphi}_- L_\xi \varphi_- \rangle = \tfrac{1}{2}\langle \overline{\varphi(\xi,\eta,\cdot)} L_\xi \varphi(\xi,\eta,\cdot) \rangle + \tfrac{1}{2}\langle \varphi(-\xi,\eta,\cdot) L_\xi \overline{\varphi(-\xi,\eta,\cdot)} \rangle$. We see from (2.9) that

$\left\langle \overline{\varphi(\xi,\eta,\cdot)}\, L_\xi\, \varphi(\xi,\eta,\cdot) \right\rangle \le \Lambda |e(\xi)|^2, \qquad \left\langle \varphi(-\xi,\eta,\cdot)\, L_\xi\, \overline{\varphi(-\xi,\eta,\cdot)} \right\rangle \le \Lambda |e(\xi)|^2,$

whence also

$\langle \bar{\varphi}_+\, L_\xi\, \varphi_+ \rangle \le 4\Lambda |e(\xi)|^2.$

Hence we obtain an upper bound

(2.22)    $\sum_{i,j=1}^d \langle a_{ij}(\cdot) \bar{A}_i A_j \rangle + \tfrac{1}{4} \langle \bar{\varphi}_- L_\xi \varphi_- \rangle \le 4\Lambda |e(\xi)|^2.$

Observe that the upper and lower bounds (2.21), (2.22) are comparable for all $\operatorname{Re}(\eta) > 0$, $\xi \in \mathbf{R}^d$.
Next we need to find upper and lower bounds on the quantity,

$\tfrac{1}{4} \langle |\varphi_+|^2 \rangle + \tfrac{1}{4} \langle |\varphi_-|^2 \rangle = \tfrac{1}{2} \langle |\varphi(\xi,\eta,\cdot)|^2 \rangle + \tfrac{1}{2} \langle |\varphi(-\xi,\eta,\cdot)|^2 \rangle.$

Evidently zero is a trivial lower bound. To get an upper bound we use (2.9) again. We have from (2.9) that

$|\eta|\, \langle |\varphi(\xi,\eta,\cdot)|^2 \rangle \le \left\langle \overline{\varphi(\xi,\eta,\cdot)}\, L_\xi\, \varphi(\xi,\eta,\cdot) \right\rangle^{1/2} \left[ \sum_{i,j=1}^d \langle a_{ij}(\cdot)\, \overline{e_i(\xi)}\, e_j(\xi) \rangle \right]^{1/2} \le \Lambda |e(\xi)|^2,$

whence

$\langle |\varphi(\xi,\eta,\cdot)|^2 \rangle \le \Lambda |e(\xi)|^2 / |\eta|.$

We conclude then that

(2.23)    $\tfrac{1}{4} \langle |\varphi_+|^2 \rangle + \tfrac{1}{4} \langle |\varphi_-|^2 \rangle \le \Lambda |e(\xi)|^2 / |\eta|.$
We use this last inequality together with (2.21), (2.22) to prove (2.19). First note from (2.5) that

$|h(\eta)| \le 1 / [\operatorname{Re}(\eta) + \lambda |e(\xi)|^2], \quad \operatorname{Re}(\eta) > 0,\ \xi \in \mathbf{R}^d.$

We also have, using (2.20), (2.21), (2.22), (2.23), that

(2.24)    $|h(\eta)| \le \dfrac{\operatorname{Re}(\eta) + 4\Lambda |e(\xi)|^2}{\operatorname{Im}(\eta)^2 + [\operatorname{Re}(\eta) + \tfrac{1}{2}\lambda |e(\xi)|^2]^2}, \quad \operatorname{Re}(\eta) > 0,\ |\eta| \ge \Lambda |e(\xi)|^2,\ \xi \in \mathbf{R}^d.$

We then have

$\int_0^\infty |h(\eta)|\, d[\operatorname{Im}(\eta)] \le \int_0^{\Lambda |e(\xi)|^2} \dfrac{d[\operatorname{Im}(\eta)]}{\operatorname{Re}(\eta) + \lambda |e(\xi)|^2} + \int_{\Lambda |e(\xi)|^2}^{\infty} \dfrac{\operatorname{Re}(\eta) + 4\Lambda |e(\xi)|^2}{\operatorname{Im}(\eta)^2 + [\operatorname{Re}(\eta) + \tfrac{1}{2}\lambda |e(\xi)|^2]^2}\, d[\operatorname{Im}(\eta)] \le C_{\lambda,\Lambda},$

where $C_{\lambda,\Lambda}$ depends only on $\lambda, \Lambda$, whence (2.19) follows.
Next we wish to show that the function $k(\eta)$ of (2.18) satisfies

(2.25)    $|\partial k(\eta)/\partial[\operatorname{Im}(\eta)]| \le 1/|\operatorname{Im}(\eta)|^2, \quad \operatorname{Re}(\eta) > 0,\ \xi \in \mathbf{R}^d.$

In view of the analyticity in $\eta$ of $q(\xi, \eta)$ we have

$\dfrac{\partial k(\eta)}{\partial[\operatorname{Im}(\eta)]} = -\operatorname{Re} \dfrac{\partial}{\partial\eta}\left[ \dfrac{1}{\eta + e(\xi) q(\xi,\eta) e(-\xi)} \right] = \operatorname{Re} \dfrac{1 + e(\xi)\,[\partial q(\xi,\eta)/\partial\eta]\, e(-\xi)}{[\eta + e(\xi) q(\xi,\eta) e(-\xi)]^2}.$

Let us denote now by $\psi(\xi, \eta, \omega)$ the function $\varphi(\xi, \eta, \omega)$ of (2.8) with $\zeta = e(-\xi)$. It is easy to see then from (2.9) that

(2.26)    $\overline{\psi(\xi, \eta, \omega)} = \psi(-\xi, \bar{\eta}, \omega).$

It follows now from (2.7) and (2.9) that

$e(\xi)\, [\partial q(\xi,\eta)/\partial\eta]\, e(-\xi) = \langle \psi(-\xi,\eta,\cdot)\, \psi(\xi,\eta,\cdot) \rangle + \eta \left\langle \dfrac{\partial\psi}{\partial\eta}(-\xi,\eta,\cdot)\, \psi(\xi,\eta,\cdot) \right\rangle + \eta \left\langle \psi(-\xi,\eta,\cdot)\, \dfrac{\partial\psi}{\partial\eta}(\xi,\eta,\cdot) \right\rangle$
$\qquad + \left\langle \psi(-\xi,\eta,\cdot)\, L_\xi\, \dfrac{\partial\psi}{\partial\eta}(\xi,\eta,\cdot) \right\rangle + \left\langle \dfrac{\partial\psi}{\partial\eta}(-\xi,\eta,\cdot)\, L_\xi\, \psi(\xi,\eta,\cdot) \right\rangle$
$\qquad - \left\langle \dfrac{\partial\psi}{\partial\eta}(-\xi,\eta,\cdot)\, [L_\xi + \eta]\, \psi(\xi,\eta,\cdot) \right\rangle - \left\langle \psi(-\xi,\eta,\cdot)\, [L_\xi + \eta]\, \dfrac{\partial\psi}{\partial\eta}(\xi,\eta,\cdot) \right\rangle$
$= \langle \psi(-\xi,\eta,\cdot)\, \psi(\xi,\eta,\cdot) \rangle.$

Hence,

(2.27)    $\left| \dfrac{\partial k(\eta)}{\partial[\operatorname{Im}(\eta)]} \right| = \left| \operatorname{Re} \dfrac{1 + \langle \psi(-\xi,\eta,\cdot)\, \psi(\xi,\eta,\cdot) \rangle}{[\eta + e(\xi) q(\xi,\eta) e(-\xi)]^2} \right| \le \dfrac{1}{\operatorname{Im}(\eta)^2}$

from (2.15).
We can use (2.19), (2.27) to show the limit in (2.16) exists. In fact from (2.19) it follows that

$\lim_{N\to\infty} \int_0^N h(\eta)\cos[\operatorname{Im}(\eta)t]\, d[\operatorname{Im}(\eta)]$

exists. We also have that

$\int_0^N k(\eta)\sin[\operatorname{Im}(\eta)t]\, d[\operatorname{Im}(\eta)] = -\dfrac{1}{t}\int_0^N \dfrac{\partial k(\eta)}{\partial[\operatorname{Im}(\eta)]}\left\{ 1 - \cos[\operatorname{Im}(\eta)t] \right\} d[\operatorname{Im}(\eta)] + \dfrac{1}{t}\, k(\operatorname{Re}(\eta)+iN)\left\{ 1 - \cos[Nt] \right\}.$

From (2.6) we have that

$|k(\operatorname{Re}(\eta)+iN)\{1-\cos[Nt]\}| \le 2/N.$

From (2.27) we have that

(2.28)    $\dfrac{1}{t}\int_0^\infty \left| \dfrac{\partial k(\eta)}{\partial[\operatorname{Im}(\eta)]} \right| \left\{ 1 - \cos[\operatorname{Im}(\eta)t] \right\} d[\operatorname{Im}(\eta)] \le C$

for some universal constant $C$. Hence the limit,

$\lim_{N\to\infty} \int_0^N k(\eta)\sin[\operatorname{Im}(\eta)t]\, d[\operatorname{Im}(\eta)]$

exists, whence the result holds.
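The universal constant in (2.28) ultimately rests on the elementary identity $\int_0^\infty (1-\cos(yt))/y^2\, dy = \pi t/2$, which makes $(1/t)$ times the integral independent of $t$. A quick numerical check (the cutoff and the explicit tail correction are our choices):

```python
import numpy as np

# Check (after substituting u = y t) that int_0^infty (1 - cos u)/u^2 du = pi/2.
# We integrate up to U = 200*pi and add the exact tail int_U^infty du/u^2 = 1/U
# for the "1" part; the oscillatory cosine tail is O(1/U^2), below tolerance.

U = 200 * np.pi
u = np.linspace(1e-4, U, 2_000_001)
integrand = (1.0 - np.cos(u)) / u**2
du = u[1] - u[0]
head = du * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
tail = 1.0 / U
assert abs(head + tail - np.pi / 2) < 1e-3
```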
Lemma 2.3. Let $\hat{G}_a(\xi, t)$ be defined by (2.16). Then $\hat{G}_a(\xi, t)$ is a continuous bounded function. Furthermore, for any $\delta$, $0 < \delta < 1$, there is a constant $C(\delta, \lambda, \Lambda)$, depending only on $\delta, \lambda, \Lambda$, such that

(2.29)    $|\hat{G}_a(\xi, t)| \le C(\delta, \lambda, \Lambda)/[1 + \lambda |e(\xi)|^2 t]^{\delta}.$
Proof. Consider the integral on the LHS of (2.28). We can obtain an improvement on the estimate of (2.28) by improving the estimate of (2.27). We have now from (2.27), (2.14), (2.15), and (2.23) that

(2.30)    $\left| \dfrac{\partial k(\eta)}{\partial[\operatorname{Im}(\eta)]} \right| \le \dfrac{1 + \Lambda|e(\xi)|^2/|\eta|}{\lambda^2|e(\xi)|^4 + |\operatorname{Im}(\eta)|^2}.$

Assume now that $\lambda|e(\xi)|^2 t > 2$ and write the integral on the LHS of (2.28) as a sum,

$\dfrac{1}{t}\int_0^{1/t} + \dfrac{1}{t}\int_{1/t}^{\lambda|e(\xi)|^2} + \dfrac{1}{t}\int_{\lambda|e(\xi)|^2}^{\infty}.$

We have now from (2.28), (2.30), using $1-\cos[\operatorname{Im}(\eta)t] \le t^2\operatorname{Im}(\eta)^2/2$, that

$\dfrac{1}{t}\int_0^{1/t} \le \dfrac{1}{t}\int_0^{1/t} \dfrac{t^2\operatorname{Im}(\eta)^2\, [1 + \Lambda|e(\xi)|^2/\operatorname{Im}(\eta)]}{2\lambda^2|e(\xi)|^4}\, d[\operatorname{Im}(\eta)] \le \dfrac{C(\lambda,\Lambda)}{\lambda|e(\xi)|^2 t},$

for some constant $C(\lambda,\Lambda)$ depending only on $\lambda,\Lambda$. Next we have

$\dfrac{1}{t}\int_{1/t}^{\lambda|e(\xi)|^2} \le \dfrac{1}{t}\int_{1/t}^{\lambda|e(\xi)|^2} \dfrac{2[1 + \Lambda|e(\xi)|^2/\operatorname{Im}(\eta)]}{\lambda^2|e(\xi)|^4}\, d[\operatorname{Im}(\eta)] \le \dfrac{C(\lambda,\Lambda)\,[1 + \log(\lambda|e(\xi)|^2 t)]}{\lambda|e(\xi)|^2 t} \le \dfrac{C(\delta,\lambda,\Lambda)}{[\lambda|e(\xi)|^2 t]^{\delta}},$

for $0 < \delta < 1$. Finally, we have

$\dfrac{1}{t}\int_{\lambda|e(\xi)|^2}^{\infty} \le \dfrac{1}{t}\int_{\lambda|e(\xi)|^2}^{\infty} \dfrac{2\, d[\operatorname{Im}(\eta)]}{\operatorname{Im}(\eta)^2} \le \dfrac{2}{\lambda|e(\xi)|^2 t},$

where we used (2.25). We conclude therefore that the integral on the LHS of (2.28) is bounded by

$\dfrac{C(\delta,\lambda,\Lambda)}{[1 + \lambda|e(\xi)|^2 t]^{\delta}}$

for any $0 < \delta < 1$.
Next we need to estimate as $N \to \infty$ the integral

$\int_0^N h(\eta)\cos[\operatorname{Im}(\eta)t]\, d[\operatorname{Im}(\eta)] = -\dfrac{1}{t}\int_0^N \dfrac{\partial h(\eta)}{\partial[\operatorname{Im}(\eta)]}\sin[\operatorname{Im}(\eta)t]\, d[\operatorname{Im}(\eta)] + \dfrac{1}{t}\, h(\operatorname{Re}(\eta)+iN)\sin[Nt].$

It is clear from (2.24) that $\lim_{N\to\infty} h(\operatorname{Re}(\eta)+iN) = 0$. We have now that

(2.31)    $\dfrac{\partial h(\eta)}{\partial[\operatorname{Im}(\eta)]} = -\operatorname{Im}\dfrac{\partial}{\partial\eta}\left[ \dfrac{1}{\eta + e(\xi)q(\xi,\eta)e(-\xi)} \right] = \operatorname{Im}\dfrac{1 + e(\xi)\,[\partial q(\xi,\eta)/\partial\eta]\, e(-\xi)}{[\eta + e(\xi)q(\xi,\eta)e(-\xi)]^2}.$

Hence the estimates (2.27), (2.30) on the derivative of $k(\eta)$ apply equally to the derivative of $h$. We therefore write the integral

(2.32)    $\dfrac{1}{t}\int_0^\infty \left| \dfrac{\partial h(\eta)}{\partial[\operatorname{Im}(\eta)]} \right| |\sin[\operatorname{Im}(\eta)t]|\, d[\operatorname{Im}(\eta)] = \dfrac{1}{t}\int_0^{1/t} + \dfrac{1}{t}\int_{1/t}^{\lambda|e(\xi)|^2} + \dfrac{1}{t}\int_{\lambda|e(\xi)|^2}^{\infty}$

as a sum just as before. We have now from (2.30),

$\dfrac{1}{t}\int_0^{1/t} \left| \dfrac{\partial h(\eta)}{\partial[\operatorname{Im}(\eta)]} \right| |\sin[\operatorname{Im}(\eta)t]|\, d[\operatorname{Im}(\eta)] \le \dfrac{1}{t}\int_0^{1/t} \dfrac{t\operatorname{Im}(\eta)\,[1 + \Lambda|e(\xi)|^2/\operatorname{Im}(\eta)]}{\lambda^2|e(\xi)|^4}\, d[\operatorname{Im}(\eta)] \le \dfrac{C_{\lambda,\Lambda}}{\lambda|e(\xi)|^2 t},$

where $C_{\lambda,\Lambda}$ depends only on $\lambda, \Lambda$. The other integrals on the RHS of (2.32) are estimated just as for the corresponding integrals in $k$. We conclude that

$\left| \int_0^\infty h(\eta)\cos[\operatorname{Im}(\eta)t]\, d[\operatorname{Im}(\eta)] \right| \le C(\delta,\lambda,\Lambda)/[1 + \lambda|e(\xi)|^2 t]^{\delta}$

for any $0 < \delta < 1$. It follows now from (2.17) that (2.29) holds for any $0 < \delta < 1$.
Lemma 2.4. The function $\hat{G}_a(\xi, t)$, $\xi \in \mathbf{R}^d$, is differentiable in $t$ for $t > 0$. For any $\delta$, $0 < \delta < 1$, there is a constant $C(\delta, \lambda, \Lambda)$ depending only on $\delta, \lambda, \Lambda$ such that

$\left| \dfrac{\partial \hat{G}_a}{\partial t}(\xi, t) \right| \le \dfrac{C(\delta, \lambda, \Lambda)}{t\,[1 + \lambda|e(\xi)|^2 t]^{\delta}}, \quad t > 0.$
Proof. From Lemma 2.2 we have that

(2.33)    $\pi\exp[-\operatorname{Re}(\eta)t]\, \hat{G}_a(\xi, t) = \int_0^\infty h(\eta)\cos[\operatorname{Im}(\eta)t]\, d[\operatorname{Im}(\eta)] - \dfrac{1}{t}\int_0^\infty \dfrac{\partial k(\eta)}{\partial[\operatorname{Im}(\eta)]}\{1-\cos[\operatorname{Im}(\eta)t]\}\, d[\operatorname{Im}(\eta)].$

We consider the first term on the RHS of (2.33). Evidently, for finite $N$, we have

(2.34)    $\dfrac{\partial}{\partial t}\int_0^N h(\eta)\cos[\operatorname{Im}(\eta)t]\, d[\operatorname{Im}(\eta)] = -\int_0^N h(\eta)\operatorname{Im}(\eta)\sin[\operatorname{Im}(\eta)t]\, d[\operatorname{Im}(\eta)]$
$= \dfrac{1}{t}\int_0^N \dfrac{\partial}{\partial[\operatorname{Im}(\eta)]}\{h(\eta)\operatorname{Im}(\eta)\}\,\{1-\cos[\operatorname{Im}(\eta)t]\}\, d[\operatorname{Im}(\eta)] - \dfrac{1}{t}\, h(\operatorname{Re}(\eta)+iN)\, N\,\{1-\cos[Nt]\}.$

It is clear from the inequality (2.24) that

$\lim_{N\to\infty} h(\operatorname{Re}(\eta)+iN)\, N = 0.$
We have already seen from (2.24) that

(2.35)    $\int_0^\infty |h(\eta)|\, d[\operatorname{Im}(\eta)] \le C_{\lambda,\Lambda}$

for some constant $C_{\lambda,\Lambda}$ depending only on $\lambda, \Lambda$. We shall show now that we also have

(2.36)    $\int_0^\infty \left| \dfrac{\partial h(\eta)}{\partial[\operatorname{Im}(\eta)]} \right| |\operatorname{Im}(\eta)|\, d[\operatorname{Im}(\eta)] \le C_{\lambda,\Lambda}.$

To see this we use (2.31) to obtain

(2.37)    $\dfrac{\partial h(\eta)}{\partial[\operatorname{Im}(\eta)]} = \operatorname{Im}\dfrac{1 + \langle \psi(\xi,\eta,\cdot)\,\psi(-\xi,\eta,\cdot)\rangle}{[\eta + e(\xi)q(\xi,\eta)e(-\xi)]^2},$

where $\psi(\xi,\eta,\cdot)$ denotes the function $\varphi$ of (2.8) with $\zeta = e(-\xi)$. For any complex number $a + ib$ it is clear that

$\operatorname{Im}\dfrac{1}{(a+ib)^2} = \dfrac{-2ab}{(a^2+b^2)^2},$

whence

$\left| \operatorname{Im}\dfrac{1}{(a+ib)^2} \right| \le \min\left\{ \dfrac{1}{a^2},\ \dfrac{2|a|}{|b|^3} \right\}.$

We conclude then that

$\left| \operatorname{Im}\dfrac{1}{[\eta + e(\xi)q(\xi,\eta)e(-\xi)]^2} \right| \le \min\left\{ \dfrac{1}{\lambda^2|e(\xi)|^4},\ \dfrac{2[\operatorname{Re}(\eta) + 4\Lambda|e(\xi)|^2]}{|\operatorname{Im}(\eta)|^3} \right\}.$

It follows that, provided $\operatorname{Re}(\eta) \le \Lambda|e(\xi)|^2$ (which we may assume by Lemma 2.2),

(2.38)    $\int_0^\infty |\operatorname{Im}(\eta)|\, \left| \operatorname{Im}\dfrac{1}{[\eta + e(\xi)q(\xi,\eta)e(-\xi)]^2} \right| d[\operatorname{Im}(\eta)] \le \int_0^{\lambda|e(\xi)|^2} + \int_{\lambda|e(\xi)|^2}^{\infty} \le C_{\lambda,\Lambda}.$

We also have that

$\dfrac{|\langle \psi(\xi,\eta,\cdot)\,\psi(-\xi,\eta,\cdot)\rangle|}{|\eta + e(\xi)q(\xi,\eta)e(-\xi)|^2} \le \min\left\{ \dfrac{\Lambda|e(\xi)|^2}{\lambda^2|e(\xi)|^4\,|\operatorname{Im}(\eta)|},\ \dfrac{\Lambda|e(\xi)|^2}{|\operatorname{Im}(\eta)|^3} \right\}.$

We conclude that

(2.39)    $\int_0^\infty |\operatorname{Im}(\eta)|\, \dfrac{|\langle \psi(\xi,\eta,\cdot)\,\psi(-\xi,\eta,\cdot)\rangle|}{|\eta + e(\xi)q(\xi,\eta)e(-\xi)|^2}\, d[\operatorname{Im}(\eta)] \le C_{\lambda,\Lambda}.$
The inequality (2.36) follows from (2.38), (2.39). It follows from (2.34), (2.35), (2.36) that the first integral on the RHS of (2.33) is differentiable with respect to $t$ for $t > 0$ and

(2.40)    $\dfrac{\partial}{\partial t}\int_0^\infty h(\eta)\cos[\operatorname{Im}(\eta)t]\, d[\operatorname{Im}(\eta)] = \dfrac{1}{t}\int_0^\infty \dfrac{\partial}{\partial[\operatorname{Im}(\eta)]}\{h(\eta)\operatorname{Im}(\eta)\}\,\{1-\cos[\operatorname{Im}(\eta)t]\}\, d[\operatorname{Im}(\eta)].$

Furthermore, there is the inequality,

$\left| \dfrac{\partial}{\partial t}\int_0^\infty h(\eta)\cos[\operatorname{Im}(\eta)t]\, d[\operatorname{Im}(\eta)] \right| \le C_{\lambda,\Lambda}/t.$

Next we wish to improve this inequality to

(2.41)    $\left| \dfrac{\partial}{\partial t}\int_0^\infty h(\eta)\cos[\operatorname{Im}(\eta)t]\, d[\operatorname{Im}(\eta)] \right| \le \dfrac{C(\delta,\lambda,\Lambda)}{t\,[1 + \lambda|e(\xi)|^2 t]^{\delta}}.$

To do this we integrate by parts on the RHS of (2.40) to obtain

(2.42)    $\dfrac{\partial}{\partial t}\int_0^\infty h(\eta)\cos[\operatorname{Im}(\eta)t]\, d[\operatorname{Im}(\eta)] = \dfrac{1}{t^2}\int_0^\infty \left[ 2\dfrac{\partial h(\eta)}{\partial[\operatorname{Im}(\eta)]} + \operatorname{Im}(\eta)\dfrac{\partial^2 h(\eta)}{\partial[\operatorname{Im}(\eta)]^2} \right] \sin[\operatorname{Im}(\eta)t]\, d[\operatorname{Im}(\eta)].$

We have already seen in Lemma 2.3 that

$\dfrac{1}{t}\int_0^\infty \left| \dfrac{\partial h(\eta)}{\partial[\operatorname{Im}(\eta)]} \right| |\sin[\operatorname{Im}(\eta)t]|\, d[\operatorname{Im}(\eta)] \le \dfrac{C(\delta,\lambda,\Lambda)}{[\lambda|e(\xi)|^2 t]^{\delta}}$

for any $0 < \delta < 1$, where we assume $\lambda|e(\xi)|^2 t > 1$. The inequality (2.41) will follow therefore if we can show that

(2.43)    $\dfrac{1}{t}\int_0^\infty \left| \dfrac{\partial^2 h(\eta)}{\partial[\operatorname{Im}(\eta)]^2} \right| |\operatorname{Im}(\eta)|\, |\sin[\operatorname{Im}(\eta)t]|\, d[\operatorname{Im}(\eta)] \le \dfrac{C(\delta,\lambda,\Lambda)}{[\lambda|e(\xi)|^2 t]^{\delta}}, \quad \lambda|e(\xi)|^2 t > 1,\ 0 < \delta < 1.$
To prove this we use the fact that

(2.44)    $\left| \dfrac{\partial^2 h(\eta)}{\partial[\operatorname{Im}(\eta)]^2} \right| \le \left| \dfrac{\partial^2}{\partial\eta^2}\, \dfrac{1}{\eta + e(\xi)q(\xi,\eta)e(-\xi)} \right|,$

$\dfrac{\partial^2}{\partial\eta^2}\, \dfrac{1}{\eta + e(\xi)q(\xi,\eta)e(-\xi)} = -\dfrac{\partial}{\partial\eta}\, \dfrac{1 + \langle\psi(-\xi,\eta,\cdot)\,\psi(\xi,\eta,\cdot)\rangle}{[\eta + e(\xi)q(\xi,\eta)e(-\xi)]^2}$
$= \dfrac{2[1 + \langle\psi(-\xi,\eta,\cdot)\,\psi(\xi,\eta,\cdot)\rangle]^2}{[\eta + e(\xi)q(\xi,\eta)e(-\xi)]^3} - \dfrac{\langle[\partial\psi(-\xi,\eta,\cdot)/\partial\eta]\,\psi(\xi,\eta,\cdot)\rangle + \langle\psi(-\xi,\eta,\cdot)\,[\partial\psi(\xi,\eta,\cdot)/\partial\eta]\rangle}{[\eta + e(\xi)q(\xi,\eta)e(-\xi)]^2}.$

Observe now that similarly to (2.27), (2.30) we have that

(2.45)    $\dfrac{|1 + \langle\psi(-\xi,\eta,\cdot)\,\psi(\xi,\eta,\cdot)\rangle|^2}{|\eta + e(\xi)q(\xi,\eta)e(-\xi)|^3} \le \min\left\{ \dfrac{1}{|\operatorname{Im}(\eta)|^3},\ \dfrac{2}{\lambda^3|e(\xi)|^6} + \dfrac{2\Lambda^2}{\lambda^3|e(\xi)|^2\,|\eta|^2} \right\}.$
We can conclude from this last inequality, just as we argued in Lemma 2.3, that

$\dfrac{1}{t}\int_0^\infty \dfrac{|1 + \langle\psi(-\xi,\eta,\cdot)\,\psi(\xi,\eta,\cdot)\rangle|^2}{|\eta + e(\xi)q(\xi,\eta)e(-\xi)|^3}\, |\operatorname{Im}(\eta)|\, |\sin[\operatorname{Im}(\eta)t]|\, d[\operatorname{Im}(\eta)] \le \dfrac{C(\delta,\lambda,\Lambda)}{[\lambda|e(\xi)|^2 t]^{\delta}}$

for $0 < \delta < 1$, $\lambda|e(\xi)|^2 t > 1$. Next from (2.9) we see that $\partial\psi(\xi,\eta,\cdot)/\partial\eta$ satisfies the equation,

(2.46)    $[L_\xi + \eta]\, \dfrac{\partial\psi(\xi,\eta,\cdot)}{\partial\eta} + \psi(\xi,\eta,\cdot) = 0.$

From this equation and the Schwarz inequality we easily conclude that

$\left\langle \left| \dfrac{\partial\psi(\xi,\eta,\cdot)}{\partial\eta} \right|^2 \right\rangle \le |\eta|^{-2}\, \langle |\psi(\xi,\eta,\cdot)|^2 \rangle.$

It follows then that

(2.47)    $\dfrac{|\langle\psi(-\xi,\eta,\cdot)\,[\partial\psi(\xi,\eta,\cdot)/\partial\eta]\rangle|}{|\eta + e(\xi)q(\xi,\eta)e(-\xi)|^2} \le \min\left\{ \dfrac{1}{|\operatorname{Im}(\eta)|^3},\ \dfrac{\Lambda}{\lambda^2|e(\xi)|^2\,|\eta|^2} \right\}.$

Since this inequality is similar to (2.45) we conclude that (2.43) holds.
We have proved now that (2.41) holds. To complete the proof of the lemma we need to obtain a similar estimate for the second integral on the RHS of (2.33). To see this observe that we can readily conclude that the integral is differentiable in $t$ and

(2.48)    $\dfrac{\partial}{\partial t}\left[ -\dfrac{1}{t}\int_0^\infty \dfrac{\partial k(\eta)}{\partial[\operatorname{Im}(\eta)]}\{1-\cos[\operatorname{Im}(\eta)t]\}\, d[\operatorname{Im}(\eta)] \right]$
$= \dfrac{2}{t^2}\int_0^\infty \dfrac{\partial k(\eta)}{\partial[\operatorname{Im}(\eta)]}\{1-\cos[\operatorname{Im}(\eta)t]\}\, d[\operatorname{Im}(\eta)] + \dfrac{1}{t^2}\int_0^\infty \dfrac{\partial^2 k(\eta)}{\partial[\operatorname{Im}(\eta)]^2}\,\operatorname{Im}(\eta)\,\{1-\cos[\operatorname{Im}(\eta)t]\}\, d[\operatorname{Im}(\eta)].$

We have already seen in Lemma 2.3 that

$\dfrac{1}{t^2}\int_0^\infty \left| \dfrac{\partial k(\eta)}{\partial[\operatorname{Im}(\eta)]} \right| \{1-\cos[\operatorname{Im}(\eta)t]\}\, d[\operatorname{Im}(\eta)] \le \dfrac{C(\delta,\lambda,\Lambda)}{t\,[1 + \lambda|e(\xi)|^2 t]^{\delta}}$

for any $0 < \delta < 1$. Hence we need to concern ourselves with the second integral on the RHS of (2.48). Now it is clear that $\partial^2 k(\eta)/\partial[\operatorname{Im}(\eta)]^2$ satisfies the same estimates we have just established for $\partial^2 h(\eta)/\partial[\operatorname{Im}(\eta)]^2$. It follows in particular that

$\dfrac{1}{t^2}\int_0^\infty \left| \dfrac{\partial^2 k(\eta)}{\partial[\operatorname{Im}(\eta)]^2} \right| |\operatorname{Im}(\eta)|\,\{1-\cos[\operatorname{Im}(\eta)t]\}\, d[\operatorname{Im}(\eta)] \le \dfrac{1}{t^2}\int_0^\infty \dfrac{1-\cos[\operatorname{Im}(\eta)t]}{\operatorname{Im}(\eta)^2}\, d[\operatorname{Im}(\eta)] \le \dfrac{C}{t}$

for some universal constant $C$. Arguing as in Lemma 2.3 we also have that

$\dfrac{1}{t^2}\int_0^\infty \left| \dfrac{\partial^2 k(\eta)}{\partial[\operatorname{Im}(\eta)]^2} \right| |\operatorname{Im}(\eta)|\,\{1-\cos[\operatorname{Im}(\eta)t]\}\, d[\operatorname{Im}(\eta)] \le \dfrac{1}{t^2}\left\{ \int_0^{1/t} + \int_{1/t}^{\lambda|e(\xi)|^2} + \int_{\lambda|e(\xi)|^2}^{\infty} \right\} \le \dfrac{C(\delta,\lambda,\Lambda)}{t\,[\lambda|e(\xi)|^2 t]^{\delta}}$

for $0 < \delta < 1$, $\lambda|e(\xi)|^2 t > 1$. The last three inequalities then give us the same estimate on the derivative of the second integral on the RHS of (2.33) as we have already obtained for the first integral.
The estimate for $\partial\hat{G}_a(\xi,t)/\partial t$ in Lemma 2.4 diverges as $t \to 0$. We rectify this in the following:

Lemma 2.5. There is a constant $C_{\lambda,\Lambda}$ depending only on $\lambda,\Lambda$ such that

$\left| \dfrac{\partial\hat{G}_a}{\partial t}(\xi,t) \right| \le C_{\lambda,\Lambda}\, |e(\xi)|^2, \quad \xi \in \mathbf{R}^d,\ t > 0.$
Proof. To bound the derivative of the first integral on the RHS of (2.33) it is sufficient to show that

(2.49)    $\dfrac{1}{t}\int_0^\infty |h(\eta)|\,\{1-\cos[\operatorname{Im}(\eta)t]\}\, d[\operatorname{Im}(\eta)] \le C\Lambda|e(\xi)|^2,$

$\dfrac{1}{t}\int_0^\infty \left| \dfrac{\partial h(\eta)}{\partial[\operatorname{Im}(\eta)]} \right| |\operatorname{Im}(\eta)|\,\{1-\cos[\operatorname{Im}(\eta)t]\}\, d[\operatorname{Im}(\eta)] \le C\Lambda|e(\xi)|^2.$

We have now from (2.24) that

$|h(\eta)| \le 4\Lambda|e(\xi)|^2/\operatorname{Im}(\eta)^2, \quad \operatorname{Re}(\eta) = 0,$

and from the inequalities before (2.39) that

$\left| \dfrac{\partial h(\eta)}{\partial[\operatorname{Im}(\eta)]} \right| \le 9\Lambda|e(\xi)|^2/|\operatorname{Im}(\eta)|^3, \quad \operatorname{Re}(\eta) = 0.$

The inequalities (2.49) follow from these last two inequalities.
The derivative of the second integral is given by the RHS of (2.48). Using the identity

$\dfrac{1}{t^2}\int_0^N \left[ 2\dfrac{\partial k(\eta)}{\partial[\operatorname{Im}(\eta)]} + \operatorname{Im}(\eta)\dfrac{\partial^2 k(\eta)}{\partial[\operatorname{Im}(\eta)]^2} \right]\{1-\cos[\operatorname{Im}(\eta)t]\}\, d[\operatorname{Im}(\eta)]$
$= \int_0^N k(\eta)\operatorname{Im}(\eta)\cos[\operatorname{Im}(\eta)t]\, d[\operatorname{Im}(\eta)] + \left[ k(\operatorname{Re}(\eta)+iN) + N\dfrac{\partial k}{\partial[\operatorname{Im}(\eta)]}(\operatorname{Re}(\eta)+iN) \right]\dfrac{1-\cos[Nt]}{t^2} - \dfrac{N\,k(\operatorname{Re}(\eta)+iN)}{t}\sin[Nt],$

we see that the derivative of the second integral on the RHS of (2.33) is also given by the formula

(2.50)    $\lim_{m\to\infty}\int_0^{2\pi m/t} k(\eta)\operatorname{Im}(\eta)\cos[\operatorname{Im}(\eta)t]\, d[\operatorname{Im}(\eta)],$

where the limit in (2.50) is taken for integer $m \to \infty$. Writing $\eta + e(\xi)q(\xi,\eta)e(-\xi) = a(\eta) + i\,b(\eta)$ we see from (2.18) that

$\operatorname{Im}(\eta)\,k(\eta) = \dfrac{\operatorname{Im}(\eta)\,b(\eta)}{a(\eta)^2 + b(\eta)^2} = \dfrac{\operatorname{Im}(\eta)}{b(\eta)}\left[ 1 - \dfrac{a(\eta)^2}{a(\eta)^2 + b(\eta)^2} \right].$
Observe now that, just as before, we have

$\int_0^\infty \dfrac{|\operatorname{Im}(\eta)|}{|b(\eta)|}\cdot\dfrac{a(\eta)^2}{a(\eta)^2 + b(\eta)^2}\, d[\operatorname{Im}(\eta)] \le \int_0^\infty \dfrac{a(\eta)^2}{a(\eta)^2 + b(\eta)^2}\, d[\operatorname{Im}(\eta)] \le \int_0^{\Lambda|e(\xi)|^2} + \int_{\Lambda|e(\xi)|^2}^{\infty} \le C_{\lambda,\Lambda}\,|e(\xi)|^2$

for a constant $C_{\lambda,\Lambda}$ depending only on $\lambda,\Lambda$. We also have from (2.15) that

$\dfrac{\operatorname{Im}(\eta)}{b(\eta)} = 1\Big/\left[ 1 + \tfrac{1}{2}\langle|\psi(\xi,\eta,\cdot)|^2\rangle + \tfrac{1}{2}\langle|\psi(-\xi,\eta,\cdot)|^2\rangle \right] = 1 - \dfrac{\tfrac{1}{2}\langle|\psi(\xi,\eta,\cdot)|^2\rangle + \tfrac{1}{2}\langle|\psi(-\xi,\eta,\cdot)|^2\rangle}{1 + \tfrac{1}{2}\langle|\psi(\xi,\eta,\cdot)|^2\rangle + \tfrac{1}{2}\langle|\psi(-\xi,\eta,\cdot)|^2\rangle}.$

It follows therefore from (2.50) that the result will be complete if we can show that

(2.51)    $\int_0^\infty \langle|\psi(\xi,\eta,\cdot)|^2\rangle\, d[\operatorname{Im}(\eta)] \le C_{\lambda,\Lambda}\,|e(\xi)|^2,$

with a similar inequality for $\psi(-\xi,\eta,\cdot)$. Note that (2.51) does not follow from the bound $\langle|\psi(\xi,\eta,\cdot)|^2\rangle \le \Lambda|e(\xi)|^2/|\eta|$ which we have already established. In view of (2.8) the inequality (2.51) is a consequence of the following lemma.
Lemma 2.6. Let $\varphi(\xi,\eta,\cdot)$ be the function defined by (2.8). Then there is the inequality,

$\int_0^\infty \langle|\varphi(\xi,\eta,\cdot)|^2\rangle\, d[\operatorname{Im}(\eta)] \le \pi\Lambda|\zeta|^2.$
Proof. Let $\varphi(t,\xi,\cdot)$, $t > 0$, be the solution to the initial value problem,

(2.52)    $\dfrac{\partial\varphi(t,\xi,\cdot)}{\partial t} + L_\xi\,\varphi(t,\xi,\cdot) = 0, \quad t > 0,$

$\varphi(0,\xi,\omega) + \sum_{k=1}^d \zeta_k \sum_{j=1}^d e^{i\mathbf{e}_j\cdot\xi}\left[\partial_j + e_j(-\xi)\right]\left[a_{kj}(\omega) - \langle a_{kj}(\cdot)\rangle\right] = 0.$

It is clear that

$\varphi(\xi,\eta,\cdot) = \int_0^\infty e^{-\eta t}\,\varphi(t,\xi,\cdot)\, dt, \quad \operatorname{Re}(\eta) > 0.$

Hence the Plancherel Theorem yields

$\int_0^\infty \langle|\varphi(\xi,\eta,\cdot)|^2\rangle\, d[\operatorname{Im}(\eta)] \le 2\pi\int_0^\infty \langle|\varphi(t,\xi,\cdot)|^2\rangle\, dt.$

We can estimate the RHS of this last equation by defining a function $\chi(t,\xi,\cdot)$, $t > 0$, which satisfies the equation,

$L_\xi\,\chi(t,\xi,\cdot) = \varphi(t,\xi,\cdot), \quad t > 0.$

Hence (2.52) yields

(2.53)    $\left\langle \bar{\chi}(t,\xi,\cdot)\,\dfrac{\partial\varphi(t,\xi,\cdot)}{\partial t} \right\rangle + \langle|\varphi(t,\xi,\cdot)|^2\rangle = 0, \quad t > 0.$

Observe next that

$\left\langle \bar{\chi}(t,\xi,\cdot)\,\dfrac{\partial\varphi(t,\xi,\cdot)}{\partial t} \right\rangle = \dfrac{1}{2}\dfrac{\partial}{\partial t}\left\langle \bar{\chi}(t,\xi,\cdot)\, L_\xi\,\chi(t,\xi,\cdot) \right\rangle.$

Integrating (2.53) with respect to $t$ and using the positivity of $L_\xi$ we conclude that for any $\varepsilon > 0$, there is the inequality,

$\int_\varepsilon^\infty \langle|\varphi(t,\xi,\cdot)|^2\rangle\, dt \le \dfrac{1}{2}\left\langle \bar{\chi}(\varepsilon,\xi,\cdot)\, L_\xi\,\chi(\varepsilon,\xi,\cdot) \right\rangle.$

We have now from (2.52) that

$\left\langle \bar{\chi}(0,\xi,\cdot)\, L_\xi\,\chi(0,\xi,\cdot) \right\rangle \le \Lambda|\zeta|^2.$

The result follows now on letting $\varepsilon \to 0$.

3. Proof of Theorem 1.5
For $\eta$ real and positive, let $q(\xi,\eta)$ be the $d\times d$ matrix defined in (2.3). We shall show that the function $G_a(x)$ of Theorem 1.5 is given by,

(3.1)    $G_a(x) = \lim_{\eta\to 0}\dfrac{1}{(2\pi)^d}\int_{[-\pi,\pi]^d} \dfrac{e^{-ix\cdot\xi}}{e(\xi)\,q(\xi,\eta)\,e(-\xi)}\, d\xi, \quad x \in \mathbf{Z}^d.$

In view of the fact that $q(\xi,\eta) \ge \lambda I_d$, we see from the following lemma that the limit in (3.1) exists if $d \ge 3$.
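In the constant-coefficient case $a \equiv I_d$ the matrix $q$ reduces to the identity and the integrand in (3.1) to $e^{-ix\cdot\xi}/|e(\xi)|^2$. On a finite torus the analogous Green's function $G_\eta$ can be computed exactly by the discrete Fourier transform, whose symbol at frequency $\xi$ is $\eta + |e(\xi)|^2$. A small sketch for $d = 2$ with $\eta > 0$ (sizes are our choices; the $\eta \to 0$ limit itself requires $d \ge 3$):

```python
import numpy as np

# Constant-coefficient Green's function on the torus (Z/nZ)^2:
# sum_i grad_i* grad_i G + eta G = delta is diagonalized by the DFT,
# with eigenvalue eta + 4 sin^2(xi_1/2) + 4 sin^2(xi_2/2) = eta + |e(xi)|^2
# at xi = 2 pi k / n.

n, eta = 32, 0.1
k = 2 * np.pi * np.arange(n) / n
symbol = eta + 4 * np.sin(k[:, None] / 2) ** 2 + 4 * np.sin(k[None, :] / 2) ** 2
delta = np.zeros((n, n)); delta[0, 0] = 1.0
G = np.fft.ifft2(np.fft.fft2(delta) / symbol).real

def lap(f):   # sum_i grad_i* grad_i f, the positive lattice Laplacian
    return 4 * f - np.roll(f, 1, 0) - np.roll(f, -1, 0) - np.roll(f, 1, 1) - np.roll(f, -1, 1)

residual = lap(G) + eta * G - delta      # should vanish at every site
assert np.abs(residual).max() < 1e-10
```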
Lemma 3.1. The limit lim!0 q(
) exists for all 2 Rd .
Proof. For > 0 x 2 Zd, let G (x) be the Green's function satisfying the equation
d
X
ri ri G (x) + G (x) = (x) x 2 Zd
i=1
where (x) is the Kronecker function. Now for ' 2 L2 () there is a unique
solution 2 L2 () to the equation,
d
X
@i + ei (;
)] @i + ei (
)] (!) + (!) = '(!) ! 2 i=1
which can be written as
(3.2)
(!) =
X
x2Zd
G (x)e;ix '(x !) ! 2 :
Observe the RHS of (3.2) is square integrable since G (x) decreases exponentially
as jxj ! 1. Now for 1 k k0 d we dene operators Tkk by Tkk (') =
e;iek @k + ek (
)], where is the solution to the equation,
0
(3.3)
d
X
@ + e
i=1
i
;
)] @i + ei(
)] (!) + (!)
0
i(
= eiek @k + ek (;
)] '(!) ! 2 :
0
0
0
From (3.2) we see that
$$(3.4)\qquad T_{k,k',\eta}\,\varphi(\omega) = \sum_{x\in\mathbf{Z}^d} \nabla_k\nabla_{k'}^*\, G_\eta(x)\, e^{-i x\cdot\xi}\,\varphi(\tau_x\omega),\qquad \omega\in\Omega.$$
Since $\nabla_k\nabla_{k'}^* G_\eta(x)$ is exponentially decreasing as $|x|\to\infty$ it follows that $T_{k,k',\eta}$ is a bounded operator on $L^2(\Omega)$. Observe that the projection operator $P$ on $L^2(\Omega)$ orthogonal to the constant function commutes with $T_{k,k',\eta}$. It follows from (3.3) that $\|T_{k,k',\eta}\|\le 1$, independent of $\eta$ as $\eta\to 0$. We wish to show that there is an operator $T_{k,k',0}$ on $L^2(\Omega)$ with $\|T_{k,k',0}\|\le 1$ such that
$$(3.5)\qquad \lim_{\eta\to 0}\,\|T_{k,k',\eta}\,\varphi - T_{k,k',0}\,\varphi\| = 0,\qquad \varphi\in L^2(\Omega).$$
We follow the argument used to prove the von Neumann Ergodic Theorem [9]. Thus if $\varphi\in L^2(\Omega)$ satisfies $[\partial_{k'} + e_{k'}(-\xi)]\varphi = 0$, then $T_{k,k',\eta}\,\varphi = 0$. Thus we set $T_{k,k',0}\,\varphi = 0$ for $\varphi$ in the null space of $[\partial_{k'} + e_{k'}(-\xi)]$. Now the range of $[\partial_{k'} + e_{k'}(\xi)]$ is dense in the subspace of $L^2(\Omega)$ orthogonal to the null space of $[\partial_{k'} + e_{k'}(-\xi)]$. If $\varphi = e^{-i e_{k'}\cdot\xi}[\partial_{k'} + e_{k'}(\xi)]\,\psi$ with $\psi\in L^2(\Omega)$ then
$$T_{k,k',\eta}\,\varphi(\omega) = \sum_{x\in\mathbf{Z}^d} \nabla_k\nabla_{k'}^*\nabla_{k'}\, G_\eta(x)\, e^{-i x\cdot\xi}\,\psi(\tau_x\omega),\qquad \omega\in\Omega.$$
It is clear from this representation that if we take
$$T_{k,k',0}\,\varphi(\omega) = \sum_{x\in\mathbf{Z}^d} \nabla_k\nabla_{k'}^*\nabla_{k'}\, G_0(x)\, e^{-i x\cdot\xi}\,\psi(\tau_x\omega),\qquad \omega\in\Omega,$$
then $T_{k,k',0}(\varphi)\in L^2(\Omega)$ and (3.5) holds. Thus $T_{k,k',0}$ is defined on a dense subspace of $L^2(\Omega)$ and $\|T_{k,k',0}\|\le 1$. It follows easily that one can extend the definition of $T_{k,k',0}$ to all of $L^2(\Omega)$ so that (3.5) holds.
Suppose now $b:\Omega\to\mathbf{R}^{d(d+1)/2}$ is a bounded measurable function from $\Omega$ to the space of symmetric $d\times d$ matrices. We define $\|b\|$ to be
$$\|b\| = \sup\Big\{\Big|\sum_{i,j=1}^d b_{ij}(\omega)\,\zeta_i\,\zeta_j\Big| \;:\; \sum_{i=1}^d \zeta_i^2 = 1,\ \omega\in\Omega\Big\}.$$
Next let $\mathcal{H}(\Omega) = \{\varphi = (\varphi_1,\dots,\varphi_d) : \varphi_i\in L^2(\Omega),\ 1\le i\le d\}$ be the Hilbert space with norm $\|\varphi\|^2 = \|\varphi_1\|^2 + \cdots + \|\varphi_d\|^2$, $\varphi = (\varphi_1,\dots,\varphi_d)$. We define an operator $T_{b,\eta}$ on $\mathcal{H}(\Omega)$ by
$$\big[T_{b,\eta}\,\varphi(\cdot)\big]_k = \sum_{i,j=1}^d T_{k,i,\eta}\, b_{ij}(\cdot)\,\varphi_j(\cdot),\qquad 1\le k\le d.$$
Evidently,
$$\big[T_{b,\eta}\,\varphi(\cdot)\big]_k = e^{-i e_k\cdot\xi}\big[\partial_k + e_k(\xi)\big]\,\psi(\cdot),\qquad 1\le k\le d,$$
where $\psi(\cdot)$ satisfies the equation
$$\sum_{r=1}^d \big[\partial_r + e_r(-\xi)\big]\big[\partial_r + e_r(\xi)\big]\,\psi(\omega) + \eta\,\psi(\omega) = \sum_{i,j=1}^d e^{i e_i\cdot\xi}\big[\partial_i + e_i(-\xi)\big]\, b_{ij}(\cdot)\,\varphi_j(\cdot).$$
It follows that $T_{b,\eta}$ is a bounded operator on $\mathcal{H}(\Omega)$ with norm $\|T_{b,\eta}\|\le\|b\|$. In view of (3.5) there exists a bounded operator $T_{b,0}$ on $\mathcal{H}(\Omega)$ such that $\|T_{b,0}\|\le\|b\|$ and
$$\lim_{\eta\to 0}\,\|T_{b,\eta}\,\varphi - T_{b,0}\,\varphi\| = 0,\qquad \varphi\in\mathcal{H}(\Omega).$$
Let us take now $b(\cdot) = \big[\Lambda I_d - a(\cdot)\big]/\Lambda$, whence $\|b\|<1$. Let $\psi_k(\xi,\eta)$ be the function satisfying (2.2). Define $\Psi_k(\xi,\eta)$ to be
$$\Psi_k(\xi,\eta) = \Big(e^{-i e_1\cdot\xi}\big[\partial_1 + e_1(\xi)\big]\psi_k(\xi,\eta),\ \dots,\ e^{-i e_d\cdot\xi}\big[\partial_d + e_d(\xi)\big]\psi_k(\xi,\eta)\Big).$$
Then $\Psi_k(\xi,\eta)\in\mathcal{H}(\Omega)$ and satisfies the equation
$$(3.6)\qquad \Psi_k(\xi,\eta,\omega) - P\,T_{b,\eta}\,\Psi_k(\xi,\eta,\omega) + \frac{1}{\Lambda}\sum_{j=1}^d T_{j,\eta}\big[a_{kj}(\omega) - \langle a_{kj}(\cdot)\rangle\big] = 0,$$
where for $1\le j\le d$, $T_{j,\eta}$ is a bounded operator from $L^2(\Omega)$ to $\mathcal{H}(\Omega)$ defined by $T_{j,\eta}(\varphi) = (T_{1,j,\eta}\,\varphi,\dots,T_{d,j,\eta}\,\varphi)$. Writing
$$\Psi_k(\xi,\eta) = \big(\Psi_{k,1}(\xi,\eta),\dots,\Psi_{k,d}(\xi,\eta)\big),$$
we have from (2.3) that
$$(3.7)\qquad q_{kk'}(\xi,\eta) = \big\langle a_{kk'}(\cdot)\big\rangle + \sum_{j=1}^d \big\langle a_{k'j}(\cdot)\,\Psi_{k,j}(\xi,\eta)\big\rangle.$$
It is clear now that $\lim_{\eta\to 0} q(\xi,\eta)$ exists.
Our next goal is to establish some differentiability properties of $q(\xi,\eta)$ in $\xi$ which are uniform as $\eta\to 0$.

Lemma 3.2. Suppose $\eta>0$, $1\le k\le d$, $\varphi\in L^2(\Omega)$, and $b(\cdot)$ is a random symmetric matrix satisfying $\|b\|<1$. For $\xi\in\mathbf{R}^d$ let $\Psi(\xi,\eta)$ be the solution to the equation
$$(3.8)\qquad (I - P\,T_{b,\eta})\,\Psi(\xi,\eta) = T_{k,\eta}\,\varphi(\cdot).$$
Then $\Psi(\xi,\eta)$, regarded as a function from $\mathbf{R}^d$ to $\mathcal{H}(\Omega)$, is differentiable. The derivative of $\Psi(\xi,\eta)$ is given by
$$(3.9)\qquad \frac{\partial\Psi(\xi,\eta)}{\partial\xi_j} = (I - P\,T_{b,\eta})^{-1}\,\frac{\partial}{\partial\xi_j} T_{k,\eta}\,\varphi(\cdot) + (I - P\,T_{b,\eta})^{-1}\, P\,\frac{\partial}{\partial\xi_j} T_{b,\eta}\,(I - P\,T_{b,\eta})^{-1}\, T_{k,\eta}\,\varphi(\cdot),\qquad 1\le j\le d.$$
j
Proof. Consider rst the k0 th component of Tk '(), which is
Tk k '() =
X
rk rk G (x)e;ix '(x):
x2Zd
Since G (x) decreases exponentially as jxj ! 1 it is clear that Tk k '(), red
2
0
0
garded as a mapping from R to L () is dierentiable and
(3.10)
X
@ T
ixj rk rk G (x)eix '(x ):
'
(
)
=
;
k
k
@
j
x2Zd
0
0
0
Green's Functions for Equations with Random Coecients
177
We regard (3.10) as the definition of the operator $\partial/\partial\xi_j\; T_{k',k,\eta}$, which is clearly a bounded operator on $L^2(\Omega)$. Similarly we can define $\partial/\partial\xi_j\; T_{b,\eta}$ by
$$(3.11)\qquad \Big[\frac{\partial}{\partial\xi_j} T_{b,\eta}\,\psi(\cdot)\Big]_r = \sum_{i,l=1}^d \frac{\partial}{\partial\xi_j} T_{r,i,\eta}\, b_{il}(\cdot)\,\psi_l(\cdot),$$
$\psi = (\psi_1,\dots,\psi_d)\in\mathcal{H}(\Omega)$, $1\le r\le d$. Again it is clear that $\partial/\partial\xi_j\; T_{b,\eta}$ is a bounded operator on $\mathcal{H}(\Omega)$. From (3.8) $\Psi(\xi,\eta)$ is given by the Neumann series
$$\Psi(\xi,\eta) = \sum_{n=0}^\infty (P\,T_{b,\eta})^n\, T_{k,\eta}\,\varphi(\cdot),$$
which converges since $\|b\|<1$. Formally the derivative of $\Psi(\xi,\eta)$ is given by
$$(3.12)\qquad \frac{\partial\Psi(\xi,\eta)}{\partial\xi_j} = \sum_{n=0}^\infty (P\,T_{b,\eta})^n\,\frac{\partial}{\partial\xi_j} T_{k,\eta}\,\varphi(\cdot) + \sum_{n,m=0}^\infty (P\,T_{b,\eta})^n\Big(P\,\frac{\partial}{\partial\xi_j} T_{b,\eta}\Big)(P\,T_{b,\eta})^m\, T_{k,\eta}\,\varphi(\cdot).$$
Since the RHS of (3.12) converges it is easy to see that $\Psi(\xi,\eta)$, regarded as a mapping from $\mathbf{R}^d$ to $\mathcal{H}(\Omega)$, is differentiable and the derivative is given by (3.12). Finally observe that the RHS of (3.12) is the same as the RHS of (3.9).
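The series (3.12) also gives a quantitative bound which is used repeatedly in the estimates below; summing the two geometric series with $\|P\,T_{b,\eta}\|\le\|b\|$ and $\|P\|\le 1$ (a routine sketch in the notation above):

```latex
\Big\|\frac{\partial\Psi(\xi,\eta)}{\partial\xi_j}\Big\|
  \;\le\; \sum_{n=0}^\infty \|b\|^{\,n}
        \Big\|\frac{\partial}{\partial\xi_j}T_{k,\eta}\,\varphi\Big\|
  \;+\; \sum_{n,m=0}^\infty \|b\|^{\,n+m}
        \Big\|\frac{\partial}{\partial\xi_j}T_{b,\eta}\Big\|\,
        \big\|T_{k,\eta}\,\varphi\big\|
  \;=\; \frac{\big\|\partial_{\xi_j}T_{k,\eta}\,\varphi\big\|}{1-\|b\|}
  \;+\; \frac{\big\|\partial_{\xi_j}T_{b,\eta}\big\|\,\big\|T_{k,\eta}\,\varphi\big\|}{(1-\|b\|)^{2}} .
```

The two terms here correspond exactly to the two terms on the RHS of (3.9).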
For $2\le p<\infty$ let $L^p(\Omega\times[-\pi,\pi]^d)$ be the space of functions $\psi(\xi,\omega)$, $\xi\in[-\pi,\pi]^d$, $\omega\in\Omega$, such that $\|\psi\|_p<\infty$, where
$$\|\psi\|_p^p = \int_{[-\pi,\pi]^d} d\xi\, \big\langle|\psi(\xi,\cdot)|^2\big\rangle^{p/2}.$$
Suppose now $f:[-\pi,\pi]^d\to\mathbf{C}$ is a smooth periodic function. The Fourier transform $\hat f$ of $f$ is given by
$$\hat f(x) = \frac{1}{(2\pi)^d}\int_{[-\pi,\pi]^d} f(\xi)\, e^{-i x\cdot\xi}\, d\xi,\qquad x\in\mathbf{Z}^d.$$
Since $\hat f$ is rapidly decreasing we can define for $\varphi\in L^2(\Omega)$ an operator $T_\varphi$ by
$$(3.13)\qquad T_\varphi(f)(\xi,\omega) = \sum_{x\in\mathbf{Z}^d} \hat f(x)\, e^{-i x\cdot\xi}\,\varphi(\tau_x\omega),\qquad \xi\in[-\pi,\pi]^d,\ \omega\in\Omega.$$
Evidently $T_\varphi(f)\in L^\infty(\Omega\times[-\pi,\pi]^d)$.
Lemma 3.3. Suppose $2\le p\le\infty$ and $\varphi\in L^2(\Omega)$. Then the operator $T_\varphi$ extends to a bounded operator from $L^p([-\pi,\pi]^d)$ to $L^p(\Omega\times[-\pi,\pi]^d)$, and the norm of $T_\varphi$ satisfies $\|T_\varphi\|\le\|\varphi\|$.

Proof. Now by Bochner's Theorem [9] there is a positive measure $d\mu_\varphi$ on $[-\pi,\pi]^d$ such that
$$\big\langle \varphi(\tau_x\cdot)\,\overline{\varphi(\tau_y\cdot)}\big\rangle = \int_{[-\pi,\pi]^d} e^{i(x-y)\cdot\zeta}\, d\mu_\varphi(\zeta),$$
whence
$$(3.14)\qquad \big\langle |T_\varphi(f)(\xi,\cdot)|^2\big\rangle = \int_{[-\pi,\pi]^d} |f(\xi-\zeta)|^2\, d\mu_\varphi(\zeta).$$
Since $\int_{[-\pi,\pi]^d} d\mu_\varphi(\zeta) = \|\varphi\|^2$ the fact that $\|T_\varphi\|\le\|\varphi\|$ follows immediately from (3.14) for $p=2$, $p=\infty$. The fact that $\|T_\varphi\|\le\|\varphi\|$ when $2<p<\infty$ can be obtained by an application of Hölder's inequality.
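For the intermediate exponents the Hölder step can be sketched directly from (3.14): since $d\mu_\varphi/\|\varphi\|^2$ is a probability measure, Jensen's inequality with the convex function $t\mapsto t^{p/2}$ gives

```latex
\big\langle |T_\varphi(f)(\xi,\cdot)|^2 \big\rangle^{p/2}
  \;=\; \|\varphi\|^{p}\Big(\int_{[-\pi,\pi]^d} |f(\xi-\zeta)|^{2}\,
        \frac{d\mu_\varphi(\zeta)}{\|\varphi\|^{2}}\Big)^{p/2}
  \;\le\; \|\varphi\|^{p-2}\int_{[-\pi,\pi]^d} |f(\xi-\zeta)|^{p}\, d\mu_\varphi(\zeta),
```

and integrating in $\xi$ and using Fubini's theorem yields $\|T_\varphi(f)\|_p \le \|\varphi\|\,\|f\|_p$.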
Next for $2\le p<\infty$ let $\mathcal{H}^p(\Omega\times[-\pi,\pi]^d)$ be the space of functions $\Psi(\xi,\omega)$, $\xi\in[-\pi,\pi]^d$, with $\Psi(\xi,\cdot)\in\mathcal{H}(\Omega)$ such that $\|\Psi\|_p<\infty$, where
$$\|\Psi\|_p^p = \int_{[-\pi,\pi]^d} d\xi\,\|\Psi(\xi,\cdot)\|^p.$$
We define an operator $T_{\varphi,b,\eta}$ for $\varphi\in L^2(\Omega)$ by
$$(3.15)\qquad T_{\varphi,b,\eta}(f)(\xi,\cdot) = \sum_{x\in\mathbf{Z}^d} \hat f(x)\, e^{-i x\cdot\xi}\,\tau_x\, b(\cdot)\,(I - P\,T_{b,\eta})^{-1}\, T_{k,\eta}\,\varphi(\cdot).$$
It is clear that if $f$ is smooth then $T_{\varphi,b,\eta}(f)$ is in $\mathcal{H}^\infty(\Omega\times[-\pi,\pi]^d)$ provided $\|b\|<1$.
Lemma 3.4. Suppose $\|b\|<1$. Then:
(a) If $\varphi\in L^2(\Omega)$, $T_{\varphi,b,\eta}$ extends to a bounded operator from $L^\infty([-\pi,\pi]^d)$ to $\mathcal{H}^\infty(\Omega\times[-\pi,\pi]^d)$. The norm of $T_{\varphi,b,\eta}$ satisfies
$$\|T_{\varphi,b,\eta}\| \le \frac{\|b\|\,\|\varphi\|}{1-\|b\|}.$$
(b) If $\varphi\in L^\infty(\Omega)$, $T_{\varphi,b,\eta}$ extends to a bounded operator from $L^2([-\pi,\pi]^d)$ to $\mathcal{H}^2(\Omega\times[-\pi,\pi]^d)$. The norm of $T_{\varphi,b,\eta}$ satisfies
$$\|T_{\varphi,b,\eta}\| \le \frac{\sqrt d\,\|b\|\,\|\varphi\|_\infty}{(1-\|b\|)^2}.$$

Proof. To prove (a) observe that (3.14) implies
$$\|T_{\varphi,b,\eta}(f)(\xi,\cdot)\|^2 \le \|f\|_\infty^2\,\big\|b(\cdot)\,(I-P\,T_{b,\eta})^{-1}\, T_{k,\eta}\,\varphi(\cdot)\big\|^2 \le \frac{\|f\|_\infty^2\,\|b\|^2\,\|\varphi\|^2}{(1-\|b\|)^2}.$$
To prove (b) we consider the integral
$$(3.16)\qquad \int_{[-\pi,\pi]^d} d\xi\;\Big\|\sum_{x\in\mathbf{Z}^d}\hat f(x)\,e^{-i x\cdot\xi}\,\tau_x\,b(\cdot)\,(P\,T_{b,\eta})^m\,T_{k,\eta}\,\varphi\Big\|^2 = (2\pi)^d \sum_{r\in\mathbf{Z}^d}\big\|\gamma_2(r,\cdot)\big\|^2,$$
where $\gamma_1$ is given by
$$\gamma_1(r,\cdot) = \sum_{x_1+\cdots+x_{m+2}=r}\hat f(x_1)\,b(\tau_{x_1}\cdot)\prod_{j=2}^{m+1}\Big[\nabla\nabla G_\eta(x_j)\,P\, b(\tau_{x_1+\cdots+x_j}\cdot)\Big]\;\nabla\nabla_k G_\eta(x_{m+2})\,\varphi(\tau_r\cdot),$$
and $\gamma_2$ is given by
$$\gamma_2(r,\cdot) = \sum_{y_1,\dots,y_{m+1}}\hat f(y_1)\,b(\tau_{y_1}\cdot)\prod_{j=2}^{m+1}\Big[\nabla\nabla G_\eta(y_j-y_{j-1})\,P\, b(\tau_{y_j}\cdot)\Big]\;\nabla\nabla_k G_\eta(r-y_{m+1})\,\varphi(\tau_r\cdot).$$
Observe now that
$$(3.17)\qquad \hat f(y_1)\,b(\tau_{y_1}\cdot)\prod_{j=2}^{m+1}\nabla\nabla G_\eta(y_j-y_{j-1})\,P\,b(\tau_{y_j}\cdot)\;\nabla\nabla_k G_\eta(r-y_{m+1})\,\varphi(\tau_r\cdot)$$
$$= \hat f(y_1)\,b(\tau_{y_1}\cdot)\prod_{j=2}^{m+1}\nabla\nabla G_\eta(y_j-y_{j-1})\,b(\tau_{y_j}\cdot)\;\nabla\nabla_k G_\eta(r-y_{m+1})\,\varphi(\tau_r\cdot)$$
$$- \sum_{n=1}^{m}\hat f(y_1)\,b(\tau_{y_1}\cdot)\prod_{j=2}^{n}\nabla\nabla G_\eta(y_j-y_{j-1})\,b(\tau_{y_j}\cdot)\,\Big\langle \nabla\nabla G_\eta(y_{n+1}-y_n)\,b(\tau_{y_n}\cdot)\prod_{j=n+2}^{m+1}\nabla\nabla G_\eta(y_j-y_{j-1})\,P\,b(\tau_{y_j}\cdot)\;\nabla\nabla_k G_\eta(r-y_{m+1})\,\varphi(\tau_r\cdot)\Big\rangle.$$
Next let $\mathcal M$ be the space of complex $d\times d$ matrices and $L^2(\mathbf{Z}^d,\mathcal M)$ the set of functions $A:\mathbf{Z}^d\to\mathcal M$. We can make $L^2(\mathbf{Z}^d,\mathcal M)$ into a Hilbert space by defining the norm of $A$ to be
$$\|A\|_{\mathcal M}^2 = (2\pi)^d\sum_{x\in\mathbf{Z}^d}\mathrm{Tr}\big(A^*(x)A(x)\big).$$
We can also define an operator $T_\eta$ on $L^2(\mathbf{Z}^d,\mathcal M)$ by
$$T_\eta A(x) = \sum_{y} A(y)\,\nabla\nabla G_\eta(x-y),\qquad x\in\mathbf{Z}^d.$$
It is easy to see that $T_\eta$ is bounded on $L^2(\mathbf{Z}^d,\mathcal M)$ and $\|T_\eta\|\le 1$. For $n=1,\dots,m+1$, $\omega\in\Omega$, $y_n\in\mathbf{Z}^d$, let us define $A_n(y_n,\omega)\in\mathcal M$ by
$$A_n(y_n,\omega) = \sum_{y_1,\dots,y_{n-1}}\hat f(y_1)\,b(\tau_{y_1}\omega)\prod_{j=2}^{n}\nabla\nabla G_\eta(y_j-y_{j-1})\,b(\tau_{y_j}\omega).$$
It follows from the fact that $\|T_\eta\|\le 1$ that for any fixed $\omega\in\Omega$ the function $A_n(\cdot,\omega)\in L^2(\mathbf{Z}^d,\mathcal M)$ and
$$(3.18)\qquad \|A_n(\cdot,\omega)\|_{\mathcal M} \le \|b\|^n\,\|\hat f\, I_d\|_{\mathcal M} = \sqrt d\,\|b\|^n\,\|f\|_2.$$
Recall now that $P$ is the projection operator, $P\psi(\cdot) = \psi(\cdot) - \langle\psi\rangle$, $\psi\in L^2(\Omega)$. If we introduce the notation $\bar P$ so that $\psi(\cdot)\bar P = P\,\psi(\cdot)$, then the expression
$$\nabla\nabla G_\eta(y_{n+1}-y_n)\,b(\tau_{y_n}\cdot)\prod_{j=n+2}^{m+1}\nabla\nabla G_\eta(y_j-y_{j-1})\,P\,b(\tau_{y_j}\cdot)\;\nabla\nabla_k G_\eta(r-y_{m+1})\,\varphi(\tau_r\cdot)$$
is equal to
$$\nabla\nabla G_\eta(y_{n+1}-y_n)\,b(\tau_{y_n}\cdot)\prod_{j=n+2}^{m+1}\nabla\nabla G_\eta(y_j-y_{j-1})\,\bar P\,b(\tau_{y_j}\cdot)\;\nabla\nabla_k G_\eta(r-y_{m+1})\,\varphi(\tau_r\cdot).$$
Let us denote now by $L^2(\mathbf{Z}^d\times\Omega,\mathcal M)$ the set of functions $A:\mathbf{Z}^d\times\Omega\to\mathcal M$ with norm
$$\|A\|_{\mathcal M,\mathrm{ran}}^2 = (2\pi)^d\sum_{x\in\mathbf{Z}^d}\big\langle \mathrm{Tr}\big(A^*(x,\cdot)A(x,\cdot)\big)\big\rangle.$$
For $n=1,\dots,m$ define an operator $T_{n,\eta}$ on this space by
$$T_{n,\eta}A(y_{m+1},\cdot) = \sum_{y_n,\dots,y_m} A(y_n,\cdot)\,\nabla\nabla G_\eta(y_{n+1}-y_n)\,b(\tau_{y_n}\cdot)\prod_{j=n+2}^{m+1}\nabla\nabla G_\eta(y_j-y_{j-1})\,\bar P\, b(\tau_{y_j}\cdot).$$
We can see as before that $T_{n,\eta}$ is a bounded operator on $L^2(\mathbf{Z}^d\times\Omega,\mathcal M)$ and
$$(3.19)\qquad \|T_{n,\eta}\| \le \|b\|^{m+1-n}.$$
Observe now from (3.17) that $\gamma_2$ (the expression inside the norm in the last line of (3.16)) is given by
$$\gamma_2(r,\cdot) = \big[T_\eta A_{m+1}(r,\cdot)\,e_k\big]\,\varphi(\tau_r\cdot) - \sum_{n=1}^m \big[T_\eta\, T_{n,\eta}\, A_n^{\mathrm{ran}}(r,\cdot)\,e_k\big]\,\varphi(\tau_r\cdot),$$
where $A_n^{\mathrm{ran}}$ denotes that $A_n(x,\omega)$, $x\in\mathbf{Z}^d$, $\omega\in\Omega$, is to be regarded as a function of $x$ only, with parameter $\omega$, on which $T_{n,\eta}$ acts. We have now
$$(2\pi)^d\sum_{r\in\mathbf{Z}^d}\big\|\big[T_\eta A_{m+1}(r,\cdot)\,e_k\big]\,\varphi(\tau_r\cdot)\big\|^2 \le \|\varphi\|_\infty^2\,\big\langle\|A_{m+1}(\cdot,\cdot)\|_{\mathcal M}^2\big\rangle \le \|\varphi\|_\infty^2\, d\,\|b\|^{2(m+1)}\,\|f\|_2^2,$$
where we have used (3.18). Similarly we have
$$(2\pi)^d\sum_{r\in\mathbf{Z}^d}\big\|\big[T_\eta\,T_{n,\eta}\,A_n^{\mathrm{ran}}(r,\cdot)\,e_k\big]\,\varphi(\tau_r\cdot)\big\|^2 \le \|\varphi\|_\infty^2\,\big\|T_\eta\,T_{n,\eta}\,A_n^{\mathrm{ran}}(\cdot,\omega)\big\|_{\mathcal M,\mathrm{ran}}^2,$$
where $\omega$ denotes the random parameter for $A_n^{\mathrm{ran}}$; the expectation is then taken with respect to this parameter. If we use now (3.18), (3.19) we have
$$(2\pi)^d\sum_{r\in\mathbf{Z}^d}\big\|\big[T_\eta\,T_{n,\eta}\,A_n^{\mathrm{ran}}(r,\cdot)\,e_k\big]\,\varphi(\tau_r\cdot)\big\|^2 \le \|\varphi\|_\infty^2\, d\,\|b\|^{2(m+1)}\,\|f\|_2^2.$$
We conclude therefore from (3.16) that
$$\int_{[-\pi,\pi]^d} d\xi\;\Big\|\sum_{x\in\mathbf{Z}^d}\hat f(x)\,e^{-i x\cdot\xi}\,\tau_x\,b(\cdot)\,(P\,T_{b,\eta})^m\,T_{k,\eta}\,\varphi\Big\|^2 \le d\,(m+1)\,\|\varphi\|_\infty^2\,\|b\|^{2(m+1)}\,\|f\|_2^2,\qquad m\ge 0.$$
It follows that
$$\Big[\int_{[-\pi,\pi]^d}\|T_{\varphi,b,\eta}(f)(\xi,\cdot)\|^2\, d\xi\Big]^{1/2} \le \sqrt d\,\|\varphi\|_\infty\,\|f\|_2\sum_{m=0}^\infty \sqrt{m+1}\;\|b\|^{m+1} \le \sqrt d\,\|\varphi\|_\infty\,\|f\|_2\,\|b\|\big/(1-\|b\|)^2.$$
It follows from Lemma 3.3 and the Riesz-Thorin Interpolation Theorem [10] that if $\varphi\in L^\infty(\Omega)$ and $\|b\|<1$ then $T_{\varphi,b,\eta}$ is a bounded operator from $L^p([-\pi,\pi]^d)$ to $\mathcal H^p(\Omega\times[-\pi,\pi]^d)$, $2\le p\le\infty$, and the norm of $T_{\varphi,b,\eta}$ satisfies the inequality
$$\|T_{\varphi,b,\eta}\| \le \frac{C_d\,\|b\|\,\|\varphi\|_\infty}{(1-\|b\|)^2},$$
where $C_d$ is a constant depending only on $d$. Consider next the weak spaces $L^p_w([-\pi,\pi]^d)$ and $\mathcal H^p_w(\Omega\times[-\pi,\pi]^d)$, $2<p<\infty$. Thus $f\in L^p_w([-\pi,\pi]^d)$ if for all $\alpha>0$ there is the inequality
$$(3.20)\qquad \mathrm{meas}\big\{\xi\in[-\pi,\pi]^d : |f(\xi)|>\alpha\big\} \le C^p/\alpha^p.$$
The weak $L^p$ norm of $f$, $\|f\|_{p,w}$, is then the minimum constant $C$ such that (3.20) holds for all $\alpha>0$. Similarly $\Psi(\xi,\omega)\in\mathcal H^p_w(\Omega\times[-\pi,\pi]^d)$ if for all $\alpha>0$ there is the inequality
$$(3.21)\qquad \mathrm{meas}\big\{\xi\in[-\pi,\pi]^d : \|\Psi(\xi,\cdot)\|>\alpha\big\} \le C^p/\alpha^p.$$
The weak $L^p$ norm of $\Psi$, $\|\Psi\|_{p,w}$, is again the minimum constant $C$ such that (3.21) holds. Lemmas 3.3, 3.4 and Hunt's Interpolation Theorem [10] then imply the following:
Lemma 3.5. Suppose $2<p<\infty$. Then:
(a) There is a constant $C_p$ depending only on $p$ such that $T_\varphi$ is a bounded operator from $L^p_w([-\pi,\pi]^d)$ to $L^p_w(\Omega\times[-\pi,\pi]^d)$ and $\|T_\varphi\|\le C_p\,\|\varphi\|$.
(b) There is a constant $C_{p,d}$ depending only on $p$ and $d$ such that $T_{\varphi,b,\eta}$ is a bounded operator from $L^p_w([-\pi,\pi]^d)$ to $\mathcal H^p_w(\Omega\times[-\pi,\pi]^d)$ and
$$\|T_{\varphi,b,\eta}\| \le \frac{C_{p,d}\,\|b\|\,\|\varphi\|_\infty}{(1-\|b\|)^2}.$$
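A standard example worth keeping in mind for Lemma 3.6 below is the power function $f(\xi)=|\xi|^{-d/p}$, which lies in $L^p_w$ but not in $L^p$; the weak membership is a one-line computation (with $c_d$ the volume of the unit ball):

```latex
\mathrm{meas}\big\{\xi : |\xi|^{-d/p} > \alpha\big\}
  \;=\; \mathrm{meas}\big\{\xi : |\xi| < \alpha^{-p/d}\big\}
  \;=\; c_d\,\alpha^{-p},
```

which is exactly a bound of the form (3.20). The derivatives $\partial q_{kk'}/\partial\xi_i$ estimated below behave near $\xi=0$ like this example with $d=3$, $p=3$.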
We can use Lemma 3.5 to obtain bounds on the first two derivatives with respect to $\xi$ of the function $q(\xi,\eta)$ defined by (3.7).
Lemma 3.6. Let $d=3$, $\eta>0$, $1\le k,k'\le d$. Then $q_{kk'}(\xi,\eta)$ is a $C^1$ function of $\xi\in\mathbf{R}^d$, and for any $i,j$, $1\le i,j\le d$, the function $\partial q_{kk'}/\partial\xi_i\in L^3_w([-\pi,\pi]^d)$ and $\partial^2 q_{kk'}/\partial\xi_i\,\partial\xi_j\in L^{3/2}_w([-\pi,\pi]^d)$. Further, there is a constant $C(\lambda,\Lambda)$, depending only on $\lambda,\Lambda$, such that
$$\big\|\partial q_{kk'}/\partial\xi_i\big\|_{3,w} \le C(\lambda,\Lambda),\qquad \big\|\partial^2 q_{kk'}/\partial\xi_i\,\partial\xi_j\big\|_{3/2,w} \le C(\lambda,\Lambda).$$
Proof. Observe that the function $\Psi_k$ occurring in (3.7) satisfies (3.6) and hence corresponds to the function $\Psi$ of Lemma 3.2 with $\varphi\in L^\infty(\Omega)$. Observe next from (3.10) that
$$\frac{\partial}{\partial\xi_i}\, T_{k',k,\eta}\,\varphi = T_\varphi(f),$$
where $T_\varphi$ is defined by (3.13) and the function $f$ is
$$(3.22)\qquad f(\xi) = \frac{\partial}{\partial\xi_i}\Big[(e^{-i e_{k'}\cdot\xi}-1)(e^{i e_k\cdot\xi}-1)\,\hat G_\eta(\xi)\Big].$$
Clearly $f\in L^3_w([-\pi,\pi]^d)$ and $\|f\|_{3,w}\le C$, where $C$ is universal. Consider now the formula (3.9) for the derivative of $\Psi$. For the first term on the RHS of (3.9) we have
$$\Big\|(I-P\,T_{b,\eta})^{-1}\Big(\frac{\partial}{\partial\xi_i}T_{k,\eta}\Big)\varphi(\cdot)\Big\| \le \frac{1}{1-\|b\|}\,\Big\|\frac{\partial}{\partial\xi_i}T_{k,\eta}\,\varphi(\cdot)\Big\|.$$
It follows from Lemma 3.5(a) that
$$\frac{\partial}{\partial\xi_i}T_{k,\eta}\,\varphi(\cdot)\in L^3_w(\Omega\times[-\pi,\pi]^3),\qquad \Big\|\frac{\partial}{\partial\xi_i}T_{k,\eta}\,\varphi(\cdot)\Big\|_{3,w} \le C$$
for some universal constant $C$, since $\|\varphi\|$ is bounded by a constant times $\Lambda$. Hence the first term on the RHS of (3.9) is in $L^3_w(\Omega\times[-\pi,\pi]^3)$ with norm bounded by a constant depending only on $\lambda,\Lambda$. Similarly the second term on the RHS of (3.9) is bounded by
$$\frac{1}{1-\|b\|}\,\Big\|\frac{\partial}{\partial\xi_i}T_{b,\eta}\,(I-P\,T_{b,\eta})^{-1}\,T_{k,\eta}\,\varphi\Big\|.$$
It follows from (3.11), (3.15) that
$$\frac{\partial}{\partial\xi_i}T_{b,\eta}\,(I-P\,T_{b,\eta})^{-1}\,T_{k,\eta}\,\varphi = T_{\varphi,b,\eta}(f),$$
where $T_{\varphi,b,\eta}$ is like the operator (3.15) but acts on matrix valued functions $f(\xi)=[f_{ij}(\xi)]$, $\xi\in[-\pi,\pi]^d$. The functions $f_{ij}(\xi)$ are similar to (3.22) and hence are in $L^3_w([-\pi,\pi]^3)$. It follows by the argument of Lemma 3.4 and Lemma 3.5(b) that the second term on the RHS of (3.9) is in $L^3_w(\Omega\times[-\pi,\pi]^3)$ with norm bounded by a constant depending only on $\lambda,\Lambda$. We conclude that $\partial q_{kk'}/\partial\xi_i\in L^3_w([-\pi,\pi]^d)$ with norm bounded by a constant depending only on $\lambda,\Lambda$.

Next we turn to the second derivative, $\partial^2 q_{kk'}/\partial\xi_i\,\partial\xi_j$. To estimate this we need a formula for the second derivative of the function $\Psi$ of Lemma 3.2. One can see from (3.9) that
$$\frac{\partial^2\Psi(\xi,\eta)}{\partial\xi_i\,\partial\xi_j} = \big(I-P\,T_{b,\eta}\big)^{-1}\Big(P\frac{\partial T_{b,\eta}}{\partial\xi_j}\Big)\big(I-P\,T_{b,\eta}\big)^{-1}\frac{\partial T_{k,\eta}}{\partial\xi_i}\,\varphi(\cdot) + \big(I-P\,T_{b,\eta}\big)^{-1}\frac{\partial^2 T_{k,\eta}}{\partial\xi_i\,\partial\xi_j}\,\varphi(\cdot)$$
$$+ \big(I-P\,T_{b,\eta}\big)^{-1}\Big(P\frac{\partial T_{b,\eta}}{\partial\xi_i}\Big)\big(I-P\,T_{b,\eta}\big)^{-1}\frac{\partial T_{k,\eta}}{\partial\xi_j}\,\varphi(\cdot)$$
$$+ \big(I-P\,T_{b,\eta}\big)^{-1}\Big(P\frac{\partial T_{b,\eta}}{\partial\xi_j}\Big)\big(I-P\,T_{b,\eta}\big)^{-1}\Big(P\frac{\partial T_{b,\eta}}{\partial\xi_i}\Big)\big(I-P\,T_{b,\eta}\big)^{-1}T_{k,\eta}\,\varphi(\cdot)$$
$$+ \big(I-P\,T_{b,\eta}\big)^{-1}\Big(P\frac{\partial^2 T_{b,\eta}}{\partial\xi_i\,\partial\xi_j}\Big)\big(I-P\,T_{b,\eta}\big)^{-1}T_{k,\eta}\,\varphi(\cdot).$$
Let $\varphi_0\in\mathcal H^\infty(\Omega)$ and consider the expectation value $\big\langle\varphi_0(\cdot)\,\partial^2\Psi(\xi,\eta)/\partial\xi_i\,\partial\xi_j\big\rangle$. From the above this is a sum of five terms. The first term is given by
$$\Big\langle\varphi_0(\cdot)\,(I-P\,T_{b,\eta})^{-1}\Big(P\frac{\partial T_{b,\eta}}{\partial\xi_j}\Big)(I-P\,T_{b,\eta})^{-1}\frac{\partial T_{k,\eta}}{\partial\xi_i}\,\varphi(\cdot)\Big\rangle = \Big\langle\Big[\Big(P\frac{\partial T_{b,\eta}}{\partial\xi_j}\Big)(I-P\,T_{b,\eta})^{-1}\Big]^*\varphi_0(\cdot)\;\;(I-P\,T_{b,\eta})^{-1}\frac{\partial T_{k,\eta}}{\partial\xi_i}\,\varphi(\cdot)\Big\rangle,$$
whence we have
$$\Big|\Big\langle\varphi_0(\cdot)\,(I-P\,T_{b,\eta})^{-1}\Big(P\frac{\partial T_{b,\eta}}{\partial\xi_j}\Big)(I-P\,T_{b,\eta})^{-1}\frac{\partial T_{k,\eta}}{\partial\xi_i}\,\varphi(\cdot)\Big\rangle\Big| \le \Big\|\Big(P\frac{\partial T_{b,\eta}}{\partial\xi_j}\Big)(I-P\,T_{b,\eta})^{-1}\varphi_0(\cdot)\Big\|\;\Big\|\frac{\partial T_{k,\eta}}{\partial\xi_i}\,\varphi(\cdot)\Big\|\Big/(1-\|b\|).$$
We have already seen that both
$$\Big\|\Big(P\frac{\partial T_{b,\eta}}{\partial\xi_j}\Big)(I-P\,T_{b,\eta})^{-1}\varphi_0(\cdot)\Big\|,\qquad \Big\|\Big(\frac{\partial T_{k,\eta}}{\partial\xi_i}\Big)\varphi(\cdot)\Big\|$$
are in $L^3_w([-\pi,\pi]^3)$. We conclude that the first term in $\big\langle\varphi_0(\cdot)\,\partial^2\Psi(\xi,\eta)/\partial\xi_i\,\partial\xi_j\big\rangle$ is in $L^{3/2}_w([-\pi,\pi]^3)$ with norm depending only on $\lambda,\Lambda$. Consider now the second term. To estimate this observe that if $T_\varphi$ is given by (3.13) then
$$(3.23)\qquad T_\varphi(fg)(\xi,\omega) = \sum_{x\in\mathbf{Z}^d}\hat f(x)\,e^{-i x\cdot\xi}\,T_\varphi(g)(\xi,\tau_x\omega).$$
We have from (3.22) that
$$\frac{\partial^2}{\partial\xi_i\,\partial\xi_j}\,T_{k',k,\eta}\,\varphi = T_\varphi(fg),$$
where $fg\in L^{3/2}_w([-\pi,\pi]^3)$, whence we can choose $f,g$ so that $f,g\in L^3_w([-\pi,\pi]^3)$. With this choice of $f,g$ and using the formula (3.23) we can argue as for the first term of $\big\langle\varphi_0(\cdot)\,\partial^2\Psi/\partial\xi_i\,\partial\xi_j\big\rangle$ to conclude that the second term is also in $L^{3/2}_w([-\pi,\pi]^3)$ with norm depending only on $\lambda,\Lambda$. One can estimate the other three terms of $\big\langle\varphi_0(\cdot)\,\partial^2\Psi/\partial\xi_i\,\partial\xi_j\big\rangle$ similarly.
Proposition 3.1. Let $d=3$. Then the function $G_a(x)$ defined by (3.1) satisfies the inequalities
$$(3.24)\qquad 0 \le G_a(x) \le C(\lambda,\Lambda)\,[1+|x|]^{2-d},\qquad x\in\mathbf{Z}^d,$$
$$(3.25)\qquad |\nabla G_a(x)| \le C(\lambda,\Lambda)\,\log[2+|x|]\big/(1+|x|)^2,\qquad x\in\mathbf{Z}^d.$$
The constant $C(\lambda,\Lambda)$ depends only on $\lambda,\Lambda$.

Proof. For $\eta>0$ let $G_{a,\eta}(x)$ be defined by
$$G_{a,\eta}(x) = \frac{1}{(2\pi)^d}\int_{[-\pi,\pi]^d} d\xi\;\frac{e^{-i x\cdot\xi}}{e(\xi)\,q(\xi,\eta)\,e(-\xi)},\qquad x\in\mathbf{Z}^d.$$
Then by Lemma 3.1 it follows that $\lim_{\eta\to 0} G_{a,\eta}(x) = G_a(x)$. It will be sufficient therefore for us to obtain bounds on $G_{a,\eta}(x)$, $\nabla G_{a,\eta}(x)$ which are uniform as $\eta\to 0$. Since $q(\xi,\eta)\ge\lambda I_d$, $\xi\in[-\pi,\pi]^d$, we clearly have $G_{a,\eta}(x)\le C_d/\lambda$, where $C_d$ is a constant depending only on $d\ge 3$. To obtain the decay in (3.24) we write
$$(3.26)\qquad G_{a,\eta}(x) = \int_{|\xi|<\rho/|x|} + \int_{|\xi|>\rho/|x|},$$
where $\rho$ is a parameter, $1\le\rho\le 2$. Evidently there is a constant $C$ such that
$$\Big|\int_{|\xi|<\rho/|x|}\Big| \le \frac{C}{\lambda\,|x|}.$$
To bound the second integral in (3.26) we integrate by parts. Thus
$$(3.27)\qquad \frac{1}{(2\pi)^d}\int_{|\xi|>\rho/|x|} d\xi\;\frac{e^{-i x\cdot\xi}}{e(\xi)\,q(\xi,\eta)\,e(-\xi)} = \frac{1}{i x_1}\,\frac{1}{(2\pi)^d}\int_{|\xi|>\rho/|x|} d\xi\;e^{-i x\cdot\xi}\,\frac{\partial}{\partial\xi_1}\Big[\frac{1}{e(\xi)\,q(\xi,\eta)\,e(-\xi)}\Big]$$
$$+ \frac{1}{i x_1}\,\frac{1}{(2\pi)^d}\int_{|\xi|=\rho/|x|} dS\;\frac{e^{-i x\cdot\xi}}{e(\xi)\,q(\xi,\eta)\,e(-\xi)}\,\frac{\xi_1}{|\xi|},$$
where we have assumed wlog that $|x_1| = \max[|x_1|,\dots,|x_d|]$. Evidently the surface integral on the RHS of the last expression is bounded by $C/\lambda|x|$. We estimate the volume integral by integrating by parts again. Thus
$$(3.28)\qquad -\int_{|\xi|>\rho/|x|} d\xi\;\frac{e^{-i x\cdot\xi}}{\big[e(\xi)q(\xi,\eta)e(-\xi)\big]^2}\,\frac{\partial}{\partial\xi_1}\big[e(\xi)q(\xi,\eta)e(-\xi)\big]$$
$$= \frac{-1}{i x_1}\int_{|\xi|>\rho/|x|} d\xi\;e^{-i x\cdot\xi}\,\frac{\partial}{\partial\xi_1}\Big\{\frac{1}{\big[e(\xi)q(\xi,\eta)e(-\xi)\big]^2}\,\frac{\partial}{\partial\xi_1}\big[e(\xi)q(\xi,\eta)e(-\xi)\big]\Big\}$$
$$- \frac{1}{i x_1}\int_{|\xi|=\rho/|x|} dS\;\frac{e^{-i x\cdot\xi}}{\big[e(\xi)q(\xi,\eta)e(-\xi)\big]^2}\,\frac{\xi_1}{|\xi|}\,\frac{\partial}{\partial\xi_1}\big[e(\xi)q(\xi,\eta)e(-\xi)\big].$$
We divide the surface integral in the last expression into three parts corresponding to the three terms in the expression
$$\frac{\partial}{\partial\xi_1}\big[e(\xi)q(\xi,\eta)e(-\xi)\big] = \frac{\partial e(\xi)}{\partial\xi_1}\,q(\xi,\eta)\,e(-\xi) + e(\xi)\,\frac{\partial q}{\partial\xi_1}(\xi,\eta)\,e(-\xi) + e(\xi)\,q(\xi,\eta)\,\frac{\partial e}{\partial\xi_1}(-\xi).$$
The surface integral corresponding to the first and third terms in this expansion is bounded by
$$\frac{C}{|x|}\int_{|\xi|=\rho/|x|}\frac{\Lambda\,dS}{\lambda^2\,|\xi|^3} \le \frac{C'\,\Lambda}{\lambda^2},$$
where $C,C'$ are universal constants. The surface integral corresponding to the middle term is bounded by
$$(3.29)\qquad \frac{C}{\lambda^2\,|x|}\int_{|\xi|=\rho/|x|}\frac{\|\partial q(\xi,\eta)/\partial\xi_1\|}{|\xi|^2}\,dS$$
for some universal constant $C$. To bound this we use the well known fact that if $f\in L^p_w([-\pi,\pi]^3)$, $1<p<\infty$, then for any measurable set $E$ one has
$$(3.30)\qquad \int_E |f|\,d\xi \le C_p\,\|f\|_{p,w}\,m(E)^{1-1/p},$$
where the constant $C_p$ depends only on $p$. If we average the expression (3.29) over $1<\rho<2$, then we have from Lemma 3.6 that
$$\int_1^2 d\rho\;\frac{C}{\lambda^2\,|x|}\int_{|\xi|=\rho/|x|}\frac{\|\partial q(\xi,\eta)/\partial\xi_1\|}{|\xi|^2}\,dS \le \frac{C'\,|x|^2}{\lambda^2}\int_{1/|x|<|\xi|<2/|x|}\Big\|\frac{\partial q(\xi,\eta)}{\partial\xi_1}\Big\|\,d\xi \le C(\lambda,\Lambda),$$
where we have used (3.30) with $p=3$.
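The inequality (3.30) itself follows from the layer-cake formula, splitting the level-set integral at the height $\alpha_0$ where the two competing bounds cross; a short sketch:

```latex
\int_E |f|\,d\xi \;=\; \int_0^\infty \mathrm{meas}\big(E\cap\{|f|>\alpha\}\big)\,d\alpha
  \;\le\; \int_0^{\alpha_0} m(E)\,d\alpha
        + \int_{\alpha_0}^\infty \frac{\|f\|_{p,w}^{\,p}}{\alpha^{p}}\,d\alpha
  \;=\; m(E)\,\alpha_0 + \frac{\|f\|_{p,w}^{\,p}\,\alpha_0^{\,1-p}}{p-1},
```

and choosing $\alpha_0 = \|f\|_{p,w}\, m(E)^{-1/p}$ gives $\int_E|f|\,d\xi \le \frac{p}{p-1}\,\|f\|_{p,w}\, m(E)^{1-1/p}$, i.e. (3.30) with $C_p = p/(p-1)$.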
Next we consider the volume integral on the RHS of (3.28). For any $1\le\rho\le 2$ this is bounded by
$$(3.31)\qquad \frac{C(\lambda,\Lambda)}{|x|}\int_{|\xi|>1/|x|} d\xi\,\Big\{|\xi|^{-4} + |\xi|^{-3}\,\big\|\partial q(\xi,\eta)/\partial\xi_1\big\| + |\xi|^{-2}\,\big[\|\partial^2 q(\xi,\eta)/\partial\xi_1^2\| + \|\partial q(\xi,\eta)/\partial\xi_1\|^2\big]\Big\}$$
for some constant $C(\lambda,\Lambda)$ depending only on $\lambda,\Lambda$. Evidently there is a constant $C'(\lambda,\Lambda)$ such that
$$\frac{C(\lambda,\Lambda)}{|x|}\int_{|\xi|>1/|x|}\frac{d\xi}{|\xi|^4} \le C'(\lambda,\Lambda).$$
Also we have
$$\frac{C(\lambda,\Lambda)}{|x|}\int_{|\xi|>1/|x|}\frac{\|\partial q(\xi,\eta)/\partial\xi_1\|}{|\xi|^3}\,d\xi = \frac{C(\lambda,\Lambda)}{|x|}\sum_{n=0}^\infty \int_{2^{n+1}/|x|>|\xi|>2^n/|x|}\frac{\|\partial q(\xi,\eta)/\partial\xi_1\|}{|\xi|^3}\,d\xi$$
$$\le C(\lambda,\Lambda)\,|x|^2\sum_{n=0}^\infty 2^{-3n}\int_{2^{n+1}/|x|>|\xi|>2^n/|x|}\|\partial q(\xi,\eta)/\partial\xi_1\|\,d\xi \le C'(\lambda,\Lambda)\,|x|^2\sum_{n=0}^\infty 2^{-3n}\,[2^n/|x|]^2 \le C''(\lambda,\Lambda),$$
where we have used Lemma 3.6 and (3.30). We can similarly bound the third term in (3.31) using Lemma 3.6 and (3.30). We conclude that (3.31) is bounded by a constant depending only on $\lambda,\Lambda$. If we put this inequality together with the previous inequalities we obtain (3.24).

The proof of (3.25) is similar. For any unit vector $n\in\mathbf{R}^d$ we have
$$(3.32)\qquad n\cdot\nabla G_{a,\eta}(x) = \frac{1}{(2\pi)^d}\int_{[-\pi,\pi]^d} d\xi\;\frac{e^{-i x\cdot\xi}\,\big[n\cdot e(-\xi)\big]}{e(\xi)\,q(\xi,\eta)\,e(-\xi)}.$$
We do a decomposition similar to (3.26) and it is clear that
$$\Big|\int_{|\xi|<\rho/|x|}\Big| \le C/|x|^2.$$
For the $\{|\xi|>\rho/|x|\}$ integral we do a decomposition analogous to (3.27). It is clear the surface integral which appears is bounded by $C/|x|^2$. For the other integral we do a separate integration by parts as in (3.28). The average of the corresponding surface integral over $1\le\rho\le 2$ is bounded by $C(\lambda,\Lambda)/|x|$. The volume integral is bounded analogously to (3.31) by
$$\frac{C(\lambda,\Lambda)}{|x|}\int_{|\xi|>1/|x|} d\xi\,\Big\{|\xi|^{-3} + |\xi|^{-2}\,\big\|\partial q(\xi,\eta)/\partial\xi_1\big\| + |\xi|^{-1}\,\big[\|\partial^2 q(\xi,\eta)/\partial\xi_1^2\| + \|\partial q(\xi,\eta)/\partial\xi_1\|^2\big]\Big\}.$$
Arguing as before we see this is bounded by $C'(\lambda,\Lambda)\,\log[1+|x|]/|x|$. Putting this inequality together with the previous inequalities yields (3.25).
Proposition 3.1 gives an improvement of the estimate (1.14) when $d=3$. We wish now to obtain a corresponding improvement for all $d\ge 3$. To do this we need generalizations of Lemmas 3.4-3.6 appropriate for all $d\ge 3$. Let $A\in\mathcal M$, the space of complex $d\times d$ matrices. The norm of $A$ is defined to be
$$\|A\|^2 = \mathrm{Tr}(A^*A).$$
Similarly if $A:\Omega\to\mathcal M$ is a random function we define $\|A(\cdot)\|$ by
$$\|A(\cdot)\|^2 = \big\langle\mathrm{Tr}\big(A^*(\cdot)A(\cdot)\big)\big\rangle.$$
Consider now functions $A:[-\pi,\pi]^d\to\mathcal M$. For $2\le p\le\infty$ we can define the space $L^p([-\pi,\pi]^d,\mathcal M)$ with norm $\|A\|_p$ given by
$$\|A\|_p^p = \int_{[-\pi,\pi]^d}\|A(\xi)\|^p\,d\xi.$$
Similarly we can consider functions $A:[-\pi,\pi]^d\times\Omega\to\mathcal M$ and associate with them spaces $L^p([-\pi,\pi]^d\times\Omega,\mathcal M)$ with norm $\|A\|_p$ given by
$$\|A\|_p^p = \int_{[-\pi,\pi]^d}\|A(\xi,\cdot)\|^p\,d\xi.$$
We define an operator which generalizes the operator given in (3.15). For $n=1,2,\dots$ the operator $T_{n,\varphi,b,\eta}$ acts on $n$ functions $A_r:[-\pi,\pi]^d\to\mathcal M$, $1\le r\le n$. The resulting quantity $T_{n,\varphi,b,\eta}(A_1,\dots,A_n)$ is a function from $[-\pi,\pi]^d\times\Omega$ to $\mathcal M$. Specifically we define
$$(3.33)\qquad T_{n,\varphi,b,\eta}(A_1,\dots,A_n)(\xi,\cdot) = \sum_{x_1,\dots,x_n\in\mathbf{Z}^d}\ \prod_{r=1}^{n-1}\Big\{\hat A_r(x_r)\,e^{-i x_r\cdot\xi}\,\tau_{x_r}\,P\,b(\cdot)\,(I-P\,T_{b,\eta})^{-1}\Big\}\,\hat A_n(x_n)\,e^{-i x_n\cdot\xi}\,\tau_{x_n}\,\varphi(\cdot).$$
Evidently the operators (3.13), (3.15) correspond to the cases $n=1$ and $n=2$ in (3.33).

Lemma 3.7. Suppose $\infty\ge p_1,\dots,p_n\ge p\ge 2$ and $\frac{1}{p_1}+\cdots+\frac{1}{p_n} = \frac{1}{p}$. Then if $A_r\in L^{p_r}([-\pi,\pi]^d,\mathcal M)$, $1\le r\le n$, the function $T_{n,\varphi,b,\eta}(A_1,\dots,A_n)\in L^p([-\pi,\pi]^d\times\Omega,\mathcal M)$ and
$$(3.34)\qquad \|T_{n,\varphi,b,\eta}(A_1,\dots,A_n)\|_p \le \frac{\|b\|^{n-1}\,\|\varphi\|_\infty\prod_{r=1}^n\|A_r\|_{p_r}}{(1-\|b\|)^{2n}}.$$

Proof. Consider first the case $n=2$. If $A_1,A_2\in L^\infty([-\pi,\pi]^d,\mathcal M)$ then it is clear that $T_{2,\varphi,b,\eta}(A_1,A_2)\in L^\infty([-\pi,\pi]^d\times\Omega,\mathcal M)$ and
$$\|T_{2,\varphi,b,\eta}(A_1,A_2)\|_\infty \le \frac{\|b\|\,\|\varphi\|_\infty\,\|A_1\|_\infty\,\|A_2\|_\infty}{(1-\|b\|)^2}.$$
If $A_2\in L^\infty$, $A_1\in L^2$, then we can see by the argument of Lemma 3.4 that $T_{2,\varphi,b,\eta}(A_1,A_2)\in L^2$ and
$$(3.35)\qquad \|T_{2,\varphi,b,\eta}(A_1,A_2)\|_2 \le \frac{\|b\|\,\|\varphi\|_\infty\,\|A_2\|_\infty\,\|A_1\|_2}{(1-\|b\|)^2}.$$
It follows therefore by interpolation theory that if $A_2\in L^\infty$, $A_1\in L^p$, $p\ge 2$, then $T_{2,\varphi,b,\eta}(A_1,A_2)\in L^p$ and
$$(3.36)\qquad \|T_{2,\varphi,b,\eta}(A_1,A_2)\|_p \le \frac{\|b\|\,\|\varphi\|_\infty\,\|A_2\|_\infty\,\|A_1\|_p}{(1-\|b\|)^2}.$$
Suppose next that $A_1\in L^{p_1}$, $A_2\in L^{p_2}$ and $1/p_1+1/p_2=1/2$. We have now that
$$(3.37)\qquad \Big[\int_{[-\pi,\pi]^d} d\xi\,\|T_{2,\varphi,b,\eta}(A_1,A_2)(\xi,\cdot)\|^2\Big]^{1/2} \le \sum_{m=0}^\infty\Big[\int_{[-\pi,\pi]^d} d\xi\,\Big\|\sum_{x_1,x_2\in\mathbf{Z}^d}\hat A_1(x_1)\,e^{-i x_1\cdot\xi}\,\tau_{x_1}\,P\,b(\cdot)\,(P\,T_{b,\eta})^m\,\hat A_2(x_2)\,e^{-i x_2\cdot\xi}\,\tau_{x_2}\,\varphi(\cdot)\Big\|^2\Big]^{1/2}.$$
Observe now that as in (3.16) one has
$$\int_{[-\pi,\pi]^d} d\xi\,\Big\|\sum_{x_1,x_2\in\mathbf{Z}^d}\hat A_1(x_1)\,e^{-i x_1\cdot\xi}\,\tau_{x_1}\,b(\cdot)\,(T_{b,\eta})^m\,\hat A_2(x_2)\,e^{-i x_2\cdot\xi}\,\tau_{x_2}\,\varphi(\cdot)\Big\|^2$$
$$= (2\pi)^d\sum_{r\in\mathbf{Z}^d}\Big\|\sum_{y_1,\dots,y_{m+1}}\hat A_1(y_1)\,b(\tau_{y_1}\cdot)\prod_{j=2}^{m+1}\nabla\nabla G_\eta(y_j-y_{j-1})\,b(\tau_{y_j}\cdot)\;\hat A_2(r-y_{m+1})\,\varphi(\tau_r\cdot)\Big\|^2$$
$$\le (2\pi)^d\,\|\varphi\|_\infty^2\sum_{r\in\mathbf{Z}^d}\Big\|\sum_{y_1,\dots,y_{m+1}}\hat A_1(y_1)\,b(\tau_{y_1}\cdot)\prod_{j=2}^{m+1}\nabla\nabla G_\eta(y_j-y_{j-1})\,b(\tau_{y_j}\cdot)\;\hat A_2(r-y_{m+1})\Big\|^2$$
$$= \|\varphi\|_\infty^2\int_{[-\pi,\pi]^d} d\xi\,\Big\|\sum_{x_1\in\mathbf{Z}^d}\hat A_1(x_1)\,e^{-i x_1\cdot\xi}\,\tau_{x_1}\,b(\cdot)\,(T_{b,\eta})^m\,I_d\Big\|^2\,\|A_2(\xi)\|^2$$
$$\le \|\varphi\|_\infty^2\,\Big[\int_{[-\pi,\pi]^d} d\xi\,\Big\|\sum_{x_1\in\mathbf{Z}^d}\hat A_1(x_1)\,e^{-i x_1\cdot\xi}\,\tau_{x_1}\,b(\cdot)\,(T_{b,\eta})^m\,I_d\Big\|^{p_1}\Big]^{2/p_1}\,\|A_2\|_{p_2}^2.$$
We have already seen from interpolation theory that
$$\Big[\int_{[-\pi,\pi]^d} d\xi\,\Big\|\sum_{x_1\in\mathbf{Z}^d}\hat A_1(x_1)\,e^{-i x_1\cdot\xi}\,\tau_{x_1}\,b(\cdot)\,(T_{b,\eta})^m\,I_d\Big\|^{p_1}\Big]^{1/p_1} \le \|b\|^{m+1}\,\|A_1\|_{p_1}.$$
We conclude therefore that
$$\int_{[-\pi,\pi]^d} d\xi\,\Big\|\sum_{x_1,x_2\in\mathbf{Z}^d}\hat A_1(x_1)\,e^{-i x_1\cdot\xi}\,\tau_{x_1}\,b(\cdot)\,(T_{b,\eta})^m\,\hat A_2(x_2)\,e^{-i x_2\cdot\xi}\,\tau_{x_2}\,\varphi(\cdot)\Big\|^2 \le \|b\|^{2m+2}\,\|\varphi\|_\infty^2\,\|A_1\|_{p_1}^2\,\|A_2\|_{p_2}^2.$$
Arguing as in Lemma 3.4 we also see that
$$\int_{[-\pi,\pi]^d} d\xi\,\Big\|\sum_{x_1,x_2\in\mathbf{Z}^d}\hat A_1(x_1)\,e^{-i x_1\cdot\xi}\,\tau_{x_1}\,P\,b(\cdot)\,(P\,T_{b,\eta})^m\,\hat A_2(x_2)\,e^{-i x_2\cdot\xi}\,\tau_{x_2}\,\varphi(\cdot)\Big\|^2 \le (m+1)^2\,\|b\|^{2m+2}\,\|\varphi\|_\infty^2\,\|A_1\|_{p_1}^2\,\|A_2\|_{p_2}^2.$$
It follows now from (3.37) that
$$(3.38)\qquad \|T_{2,\varphi,b,\eta}(A_1,A_2)\|_2 \le \frac{\|b\|\,\|\varphi\|_\infty\,\|A_1\|_{p_1}\,\|A_2\|_{p_2}}{(1-\|b\|)^2},$$
provided $1/p_1+1/p_2=1/2$. We can now use the inequalities (3.36), (3.38) to do a further interpolation. Thus we have, if $p\ge 2$ and $1/p_1+1/p_2=1/p$, then
$$\|T_{2,\varphi,b,\eta}(A_1,A_2)\|_p \le \frac{\|b\|\,\|\varphi\|_\infty\,\|A_1\|_{p_1}\,\|A_2\|_{p_2}}{(1-\|b\|)^2}.$$
This proves the result for $n=2$.
To deal with $n=3$ we subdivide into 7 cases:
(a) $1/p_1 = 0$, $1/p_2 = 0$, $1/p_3 = 0$;
(b) $1/p_1 = 1/2$, $1/p_2 = 0$, $1/p_3 = 0$;
(c) $1/p_1 < 1/2$, $1/p_2 = 0$, $1/p_3 = 0$;
(d) $1/p_1 + 1/p_2 = 1/2$, $1/p_3 = 0$;
(e) $1/p_1 + 1/p_2 < 1/2$, $1/p_3 = 0$;
(f) $1/p_1 + 1/p_2 + 1/p_3 = 1/2$;
(g) $1/p_1 + 1/p_2 + 1/p_3 < 1/2$.
For (a) it is easy to see that
$$\|T_{3,\varphi,b,\eta}(A_1,A_2,A_3)\|_\infty \le \frac{\|b\|^2\,\|\varphi\|_\infty\,\|A_1\|_\infty\,\|A_2\|_\infty\,\|A_3\|_\infty}{(1-\|b\|)^2}.$$
For (b) we use the argument of Lemma 3.4 to conclude
$$\|T_{3,\varphi,b,\eta}(A_1,A_2,A_3)\|_2 \le \frac{\|b\|^2\,\|\varphi\|_\infty\,\|A_1\|_2\,\|A_2\|_\infty\,\|A_3\|_\infty}{(1-\|b\|)^4}.$$
The Riesz-Thorin Theorem applied to (a), (b) yields for (c) the inequality
$$\|T_{3,\varphi,b,\eta}(A_1,A_2,A_3)\|_p \le \frac{\|b\|^2\,\|\varphi\|_\infty\,\|A_1\|_{p_1}\,\|A_2\|_\infty\,\|A_3\|_\infty}{(1-\|b\|)^4},$$
where $p=p_1$. For (d) we use the argument used to obtain (3.38) to conclude that
$$\|T_{3,\varphi,b,\eta}(A_1,A_2,A_3)\|_2 \le \frac{\|b\|^2\,\|\varphi\|_\infty\,\|A_1\|_{p_1}\,\|A_2\|_{p_2}\,\|A_3\|_\infty}{(1-\|b\|)^4}.$$
Now the Riesz-Thorin Theorem applied to (c) and (d) yields for (e) the inequality
$$\|T_{3,\varphi,b,\eta}(A_1,A_2,A_3)\|_p \le \frac{\|b\|^2\,\|\varphi\|_\infty\,\|A_1\|_{p_1}\,\|A_2\|_{p_2}\,\|A_3\|_\infty}{(1-\|b\|)^4},$$
where $1/p = 1/p_1 + 1/p_2$. To obtain an inequality for (f) we use the argument used to obtain (3.38). This reduces us to the case dealt with in (e). Hence we can use the inequality for (e) to obtain the bound
$$\|T_{3,\varphi,b,\eta}(A_1,A_2,A_3)\|_2 \le \frac{\|b\|^2\,\|\varphi\|_\infty\,\|A_1\|_{p_1}\,\|A_2\|_{p_2}\,\|A_3\|_{p_3}}{(1-\|b\|)^4}.$$
Finally the Riesz-Thorin Theorem applied to (e) and (f) yields the inequality (3.34) with $n=3$ for (g). Since it is clear we can generalize the method for $n=3$ to all $n$, the result follows.
Lemma 3.8. Suppose $\infty > p_1,\dots,p_n \ge p > 2$ and $\frac1{p_1}+\cdots+\frac1{p_n} = \frac1p$. Then if $A_r\in L^{p_r}_w([-\pi,\pi]^d,\mathcal M)$, $1\le r\le n$, the function $T_{n,\varphi,b,\eta}(A_1,\dots,A_n)\in L^p_w([-\pi,\pi]^d\times\Omega,\mathcal M)$ and
$$(3.39)\qquad \|T_{n,\varphi,b,\eta}(A_1,\dots,A_n)\|_{p,w} \le C\,\|b\|^{n-1}\,\frac{\|\varphi\|_\infty\prod_{r=1}^n\|A_r\|_{p_r,w}}{(1-\|b\|)^{2n}},$$
where the constant $C$ depends only on $p_1,\dots,p_n$.

Proof. We use Lemma 3.7 and Hunt's Interpolation Theorem [10]. Suppose $n=2$. Now from Lemma 3.7 we know that for a given $p_1$, $2\le p_1\le\infty$,
$$\|T_{2,\varphi,b,\eta}(A_1,A_2)\|_{p_1} \le \frac{\|b\|\,\|\varphi\|_\infty\,\|A_1\|_{p_1}\,\|A_2\|_\infty}{(1-\|b\|)^2},\qquad \|T_{2,\varphi,b,\eta}(A_1,A_2)\|_2 \le \frac{\|b\|\,\|\varphi\|_\infty\,\|A_1\|_{p_1}\,\|A_2\|_{q_1}}{(1-\|b\|)^2},$$
where $1/p_1 + 1/q_1 = 1/2$. Hence the Hunt Theorem implies that
$$(3.40)\qquad \|T_{2,\varphi,b,\eta}(A_1,A_2)\|_{p,w} \le C(p_1,p_2)\,\frac{\|b\|\,\|\varphi\|_\infty\,\|A_1\|_{p_1}\,\|A_2\|_{p_2,w}}{(1-\|b\|)^2},$$
provided $1/p_1 + 1/p_2 = 1/p$, $p_2 < \infty$, where $C(p_1,p_2)$ is a constant depending only on $p_1,p_2$. Suppose now that $A_2$ is fixed with $\|A_2\|_{p_2,w}$ finite for some $p_2$, $2<p_2<\infty$. We consider the mapping
$$A_1 \to T_{2,\varphi,b,\eta}(A_1,A_2).$$
We see from (3.40) that this maps $L^\infty$ to $L^{p_2}_w$. For $\varepsilon>0$ let $p_1(\varepsilon)$ satisfy $1/p_1(\varepsilon) + 1/p_2 = 1/2 + \varepsilon = 1/p(\varepsilon)$. From (3.40) we also see that it maps $L^{p_1(\varepsilon)}$ to $L^{p(\varepsilon)}_w$. It follows again from interpolation theory that for any $p_1$, $p_1(\varepsilon) < p_1 < \infty$, it maps $L^{p_1}_w$ to $L^p_w$ where $1/p_1 + 1/p_2 = 1/p$. The inequality (3.39) for $n=2$ follows from this. It is clear this method can be generalized to all $n$.
Lemma 3.9. Let $d\ge 3$, $\eta>0$, $1\le k,k'\le d$. Then $q_{kk'}(\xi,\eta)$ is a $C^1$ function of $\xi\in[-\pi,\pi]^d$. Further, let $\alpha = (\alpha_1,\dots,\alpha_d)$, where $\alpha_i\ge 0$, $1\le i\le d$, and $|\alpha| = \alpha_1+\cdots+\alpha_d < d$. Then the function $\prod_{i=1}^d(\partial/\partial\xi_i)^{\alpha_i}\, q_{kk'}(\xi,\eta)$ is in $L^{d/|\alpha|}_w([-\pi,\pi]^d)$, and
$$\Big\|\prod_{i=1}^d\Big(\frac{\partial}{\partial\xi_i}\Big)^{\alpha_i} q_{kk'}\Big\|_{d/|\alpha|,w} \le C(\lambda,\Lambda,d),$$
where the constant $C(\lambda,\Lambda,d)$ depends only on $\lambda,\Lambda,d$.

Proof. We argue as in Lemma 3.6. It is easy to see that the function $\Psi$ of Lemma 3.2 has the property that
$$\prod_{i=1}^d\Big(\frac{\partial}{\partial\xi_i}\Big)^{\alpha_i}\,\Psi(\xi,\eta)$$
is a sum of terms $T_{n,\varphi,b,\eta}(A_1,\dots,A_n)(\xi,\cdot)\,e_k$, where $1\le n\le|\alpha|$, $A_r\in L^{p_r}_w$, $1\le r\le n$, and $1/p_1+\cdots+1/p_n = |\alpha|/d$. The result follows now from Lemma 3.8 if $|\alpha|<d/2$. To deal with the case $d/2\le|\alpha|<d$, we argue exactly as for the $d=3$ case with $|\alpha|=2$.

Proposition 3.2. Let $d\ge 3$. Then the function $G_a(x)$ defined by (3.1) satisfies the inequalities
$$0 \le G_a(x) \le C(\lambda,\Lambda,d)\,[1+|x|]^{2-d},\qquad x\in\mathbf{Z}^d,$$
$$|\nabla G_a(x)| \le C(\lambda,\Lambda,d)\,\log[2+|x|]\,(1+|x|)^{1-d},\qquad x\in\mathbf{Z}^d,$$
where the constant $C(\lambda,\Lambda,d)$ depends only on $\lambda,\Lambda,d$.

Proof. We argue as in Proposition 3.1, using Lemma 3.9.

Proposition 3.2 gives an alternative derivation of the inequality (1.14). We can extend the argument in the proposition to obtain Theorem 1.5. To do this we need the following improvement of Lemma 3.9.

Lemma 3.10. Let $q_{kk'}(\xi,\eta)$ be the function of Lemma 3.9 and $|\alpha| = d-1$. Suppose $\zeta\in\mathbf{R}^d$, $|\zeta|\le 1$. Then for any $\varepsilon$, $0<\varepsilon<1$, the function
$$\prod_{i=1}^d\Big(\frac{\partial}{\partial\xi_i}\Big)^{\alpha_i}\big[q_{kk'}(\xi+\zeta,\eta) - q_{kk'}(\xi,\eta)\big]\Big/|\zeta|^{1-\varepsilon}$$
is in $L^{d/(d-\varepsilon)}_w([-\pi,\pi]^d)$, and
$$\Big\|\prod_{i=1}^d\Big(\frac{\partial}{\partial\xi_i}\Big)^{\alpha_i}\big[q_{kk'}(\xi+\zeta,\eta) - q_{kk'}(\xi,\eta)\big]\Big/|\zeta|^{1-\varepsilon}\Big\|_{d/(d-\varepsilon),w} \le C(\lambda,\Lambda,d,\varepsilon),$$
where the constant $C(\lambda,\Lambda,d,\varepsilon)$ depends only on $\lambda,\Lambda,d,\varepsilon$.
Before proving Lemma 3.10 we first show how Theorem 1.5 follows from it.

Proof of Theorem 1.5. We shall confine ourselves to the case $d=3$ since the argument for $d>3$ is similar. Consider the representation (3.32) for $n\cdot\nabla G_{a,\eta}(x)$. We have now
$$\int_{|\xi|>\rho/|x|} = \frac{1}{i x_1}\,\frac{1}{(2\pi)^d}\int_{|\xi|>\rho/|x|} d\xi\;e^{-i x\cdot\xi}\,\frac{\partial}{\partial\xi_1}\Big[\frac{n\cdot e(-\xi)}{e(\xi)\,q(\xi,\eta)\,e(-\xi)}\Big] - \frac{1}{i x_1}\,\frac{1}{(2\pi)^d}\int_{|\xi|=\rho/|x|} dS\;e^{-i x\cdot\xi}\,\frac{n\cdot e(-\xi)}{e(\xi)\,q(\xi,\eta)\,e(-\xi)}\,\frac{\xi_1}{|\xi|}.$$
Evidently one has a bound for the surface integral,
$$\Big|\int_{|\xi|=\rho/|x|}\Big| \le C/|x|^2.$$
If we integrate by parts again in the volume integral we have
$$\frac{1}{i x_1}\,\frac{1}{(2\pi)^d}\int_{|\xi|>\rho/|x|} d\xi\;e^{-i x\cdot\xi}\,\frac{\partial}{\partial\xi_1}\Big[\frac{n\cdot e(-\xi)}{e(\xi)q(\xi,\eta)e(-\xi)}\Big] = \frac{-1}{x_1^2}\,\frac{1}{(2\pi)^d}\int_{|\xi|>\rho/|x|} d\xi\;e^{-i x\cdot\xi}\,\frac{\partial^2}{\partial\xi_1^2}\Big[\frac{n\cdot e(-\xi)}{e(\xi)q(\xi,\eta)e(-\xi)}\Big]$$
$$+ \frac{1}{x_1^2}\,\frac{1}{(2\pi)^d}\int_{|\xi|=\rho/|x|} dS\;e^{-i x\cdot\xi}\,\frac{\partial}{\partial\xi_1}\Big[\frac{n\cdot e(-\xi)}{e(\xi)q(\xi,\eta)e(-\xi)}\Big]\,\frac{\xi_1}{|\xi|}.$$
Since $q(\xi,\eta)$ is bounded and $\partial q(\xi,\eta)/\partial\xi_i$ is in $L^3_w$ it follows that the average of the surface integral over $1<\rho<2$ is bounded by $C/|x|^2$ for some constant $C$. To bound the volume integral let $\zeta\in\mathbf{R}^3$ be such that $e^{-i x\cdot\zeta} = -1$ and $\zeta$ has minimal magnitude. Then $|\zeta|\le 10/|x|$ and
$$\frac{1}{x_1^2}\,\frac{1}{(2\pi)^d}\int_{|\xi|>\rho/|x|} d\xi\;e^{-i x\cdot\xi}\,\frac{\partial^2}{\partial\xi_1^2}\Big[\frac{n\cdot e(-\xi)}{e(\xi)q(\xi,\eta)e(-\xi)}\Big]$$
$$= \frac{1}{2 x_1^2}\,\frac{1}{(2\pi)^d}\int_{|\xi|>100/|x|} d\xi\;e^{-i x\cdot\xi}\,\frac{\partial^2}{\partial\xi_1^2}\Big[\frac{n\cdot e(-\xi)}{e(\xi)q(\xi,\eta)e(-\xi)} - \frac{n\cdot e(-\xi-\zeta)}{e(\xi+\zeta)\,q(\xi+\zeta,\eta)\,e(-\xi-\zeta)}\Big] + \frac{1}{x_1^2}\int_{1/|x|<|\xi|<200/|x|} R(\xi)\,e^{-i x\cdot\xi}\,d\xi,$$
where $R(\xi)$ is the remainder term. Using the fact that $\partial q/\partial\xi_1\in L^3_w$, $\partial^2 q/\partial\xi_1^2\in L^{3/2}_w$, we can easily see that
$$\frac{1}{x_1^2}\int_{1/|x|<|\xi|<200/|x|} |R(\xi)|\,d\xi \le C/|x|^2$$
for some constant $C$. We can bound the first term above using Lemma 3.9 and Lemma 3.10. Evidently we will get a term
$$\frac{C\,|\zeta|^{1-\varepsilon}}{|x|^2}\int_{|\xi|>100/|x|}\Big\|\frac{\partial^2}{\partial\xi_1^2}\big[q(\xi+\zeta,\eta) - q(\xi,\eta)\big]\Big\|\Big/|\zeta|^{1-\varepsilon}\;d\xi.$$
From Lemma 3.10 this is bounded by
$$\frac{C\,|\zeta|^{1-\varepsilon}}{|x|^2}\sum_{n=0}^\infty\int_{2^{n+1}/|x|>|\xi|>2^n/|x|} \le \frac{C'\,|\zeta|^{1-\varepsilon}\,|x|^{1-\varepsilon}}{|x|^2}\sum_{n=0}^\infty 2^{-n(1-\varepsilon)} \le \frac{10\,C'}{|x|^2}$$
for some constant $C'$. Other terms are bounded using Lemma 3.9. We have proved the first inequality of Theorem 1.5. The second inequality of the theorem for $d=3$ is proved similarly.

Proof of Lemma 3.10. The argument follows the same lines as in Lemma 3.9. The main point to observe is that if $f(\xi)$ is given by
$$f(\xi) = \prod_{i=1}^d\Big(\frac{\partial}{\partial\xi_i}\Big)^{\alpha_i}\Big[e_{k'}(-\xi)\,e_k(\xi)\,\hat G_\eta(\xi)\Big],$$
with $|\alpha|<d$, then for any $\varepsilon$, $0<\varepsilon<1$, the function $\big[f(\xi+\zeta) - f(\xi)\big]/|\zeta|^{1-\varepsilon}$ is in $L^{d/(1+|\alpha|-\varepsilon)}_w$ and there is a constant $C(d,\varepsilon)$ depending only on $d,\varepsilon$ such that $\big\|[f(\xi+\zeta)-f(\xi)]/|\zeta|^{1-\varepsilon}\big\|_{d/(1+|\alpha|-\varepsilon),w} \le C(d,\varepsilon)$, provided $|\zeta|\le 1$.
4. Proof of Theorem 1.4: Diagonal Case

Here we shall prove the inequalities of Theorem 1.4, but without the exponential falloff term. We shall call this the diagonal case. First we show that Theorem 1.6 already gives us the diagonal case of the inequality (1.10) in dimension $d=1$.

Corollary 4.1. The function $G_a(x,t)$ satisfies the inequality
$$(4.1)\qquad 0 \le G_a(x,t) \le C(\lambda,\Lambda)\big/\big[1+\sqrt t\,\big]\qquad\text{if } d=1,$$
where the constant $C(\lambda,\Lambda)$ depends only on $\lambda,\Lambda$. If $d>1$, it satisfies an inequality
$$(4.2)\qquad 0 \le G_a(x,t) \le C_\varepsilon(d,\lambda,\Lambda)\big/\big[1+t^{1-\varepsilon}\big],$$
where $\varepsilon$ can be any number, $0<\varepsilon<1$, and $C_\varepsilon(d,\lambda,\Lambda)$ is a constant depending only on $\varepsilon,d,\lambda,\Lambda$.

Proof. We have
$$G_a(x,t) \le \frac{1}{(2\pi)^d}\int_{[-\pi,\pi]^d}\big|\hat G_a(\xi,t)\big|\,d\xi \le \frac{1}{(2\pi)^d}\int_{|\xi|<1/\sqrt t} d\xi + \frac{1}{(2\pi)^d}\int_{|\xi|>1/\sqrt t} d\xi.$$
Since $\hat G_a(\xi,t)$ is bounded on $[-\pi,\pi]^d$ it follows that
$$(4.3)\qquad \frac{1}{(2\pi)^d}\int_{|\xi|<1/\sqrt t} d\xi \le C(d,\lambda,\Lambda)\big/\big[1+t^{d/2}\big].$$
The integral over $\{|\xi|>1/\sqrt t\}$ is nonzero only if $t>1/\pi^2 d$. In that case one has from Theorem 1.6,
$$\frac{1}{(2\pi)^d}\int_{|\xi|>1/\sqrt t} d\xi \le C(\lambda,\Lambda)\int_{|\xi|>1/\sqrt t}\frac{d\xi}{(\lambda\,|\xi|^2\,t)^{\delta}}$$
for any $\delta$ satisfying $0<\delta\le 1$. For $d=1$ we have, on taking $\delta>1/2$, an inequality
$$\int_{|\xi|>1/\sqrt t}\frac{d\xi}{(\lambda\,|\xi|^2\,t)^{\delta}} \le \frac{C}{\lambda^\delta\,\sqrt t}.$$
The inequality (4.1) follows from this and (4.3). For $d>1$ and any $p>d/2$ we have
$$\int_{|\xi|>1/\sqrt t}\frac{d\xi}{(\lambda\,|\xi|^2\,t)^{\delta}} \le (2\pi)^{d(1-1/p)}\,\Big[\int_{|\xi|>1/\sqrt t}\frac{d\xi}{(\lambda\,|\xi|^2\,t)^{\delta p}}\Big]^{1/p} \le \frac{C_p}{\lambda^\delta\,t^{d/2p}},$$
where the constant $C_p$ depends only on $p$ and we have chosen $\delta$ to satisfy $1\ge\delta>d/2p$. The inequality (4.2) follows now from this last inequality and (4.3) on choosing $p$ to satisfy $d/2p = 1-\varepsilon$.
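The bound (4.3) is just the volume of the small ball: writing $\omega_d$ for the volume of the unit ball in $\mathbf{R}^d$,

```latex
\frac{1}{(2\pi)^d}\int_{|\xi|<1/\sqrt t} d\xi
  \;=\; \frac{\omega_d}{(2\pi)^d}\; t^{-d/2},
```

which, combined with the trivial bound $G_a(x,t)\le C$ valid for all $t$, gives the stated $C(d,\lambda,\Lambda)/[1+t^{d/2}]$.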
We can similarly use Theorem 1.6 to obtain estimates on the t derivative of
Ga (x t).
Corollary 4.2. The function Ga(x t) is dierentiable w.r. to t for t > 0 and the
derivative satises the inequality
@Ga (x t) C ( )=1 + t3=2 ]
if d = 1
@t
where the constant C ( ) depends only on . If d > 1, it satises an inequality
@Ga (x t) 2;"
@t C" (d )=1 + t ]
where " can be any number, 0 < " < 1, and C" (d ) is a constant depending only
on " d :
Joseph G. Conlon and Ali Naddaf
194
Observe now that Corollary 4.1 almost obtains the diagonal case of the inequality
(1.10) for d = 2. In order to obtain this inequality for d = 2 we shall have to use
the methods of Section 3.
Lemma 4.1. For d = 2, there is a constant C ( ) depending only on such
that
0 Ga (x t) C ( )=1 + t] t > 0 :
Proof. We shall use the notation of Section 2, in particular the functions h k
dened by (2.18). It is easy to see that
(4.4)
Z 1
Z 1 2
@ h(
)
1
h(
) cosIm()t]dIm()] = t2
@ Im()]2 f1 ; cosIm()t]gdIm()]:
0
0
We have now from (2.44), (2.45), (2.47) that
Z 1 2
@ h(
) dIm( )] < 2 :
2
je(
)j4
je()j2 @ Im()] Hence if t > 1, then
Z
Z 1
h(
;]d 0
Z
) cosIm()t]dIm()] d
Z
Z
2
C
+
+
je(
)j;4 d
=
je()j<1=pt je()j>1=pt t t2 je()j>1=pt
Z
1 Z je()j2 @ 2 h(
)
d
+
f
1
;
cosIm(
)
t
]
g
d
Im(
)]
@ Im()]2
je()j>1=pt t2 0
where we have used the fact that the LHS of (4.4) is bounded by a universal
constant. Evidently the rst two terms on the RHS of the last inequality are
bounded by C=t, so we concentrate on the third term.
In view of (2.44) and the inequalities following it we have
Z
1 je()j2 @ 2 h(
)
1
cosIm(
)
t
]
d
Im(
)]
d
@ Im()]2
je()j>1=pt t2 0
Z
Z je()j2 (
) 2
C
+ C
1
d
p
t
e(
) 4 Im() 1 cosIm()t]
je()j>1= t t2 0
Z
f;
g
j j f ;
j j
gdIm()]
for some constant C
depending only on . Now for Re() > 0 and 2 < p < 1
let hp (
) 2 ; ]2 , be the function
hp (
) = j
" 2
X
1
=
2
;
1
=p
Im()
k (
k=1
j
j
#1=2
j
where k (
) is given by (2.2). We shall show that hp 2 Lpw (; ]2 ) and there
is a constant Cp
depending only on p such that
(4.5)
khpkpw Cp
Re() > 0 2 < p < 1:
)2
Green's Functions for Equations with Random Coecients
195
Observe now from (2.8),(2.26) that
2
1 Z je()j j(
)j2 f1 ; cosIm()t]gdIm()]
t2 0
je(
)j4 Im() 2
Z je()j
2
; cosIm()t]g dIm()]
t1+11 =p 0 je(
)jh2pIm((
))1;1=p f1Im(
)t]1;1=p
Z je()j2
2
t1+1C =p 0 je(
)jh2pIm((
))1;1=p dIm()]
for some universal constant C . Next we have that
(4.6)
1 Z je()j
hp (
)2 dIm()] d
p
je(
j2Im()1;1=p
je()j>1= t t1+1=p 0
2
n
+2
Z
1
C X 2;2n 2 =t dIm()] Z
h (
)2 d
:
t1=p n=0
Im()1;1=p je()j<2n+1 =pt p
0
Z
2
From (3.30) it follows that
Z
hp (
)2 d
Cp khpk2pw 22n(1;2=p) =t1;2=p :
je()j<2n+1=pt
If we use the inequality (4.5) we see from the last inequality that the RHS of (4.6)
is bounded by Cp
=t for some constant Cp
depending only on p . We
conclude that if (4.5) holds then
Z 1
d
h(
2
;]
0
Z
) cosIm()t]dIm()]
C
=t
t>0
for some constant C
depending only on .
To prove (4.5) note that $\psi_k(\xi,\eta)$ is a sum of terms $PT_{\varphi,b}(f_\xi)$ where $T_{\varphi,b}$ is the operator (3.15), $\varphi$ is an entry of the matrix $a(\cdot)$ and $f_\xi$ is the Fourier transform of $\nabla_j G_\eta(x)$, $x\in\mathbf{Z}^d$, $1\le j\le d$. Hence
$$|f_\xi(\zeta)| \le \frac{|e(\zeta)|}{|e(\zeta)|^2+|\operatorname{Im}(\eta)|} \le \frac{C}{|e(\zeta)|^{2/p}\,|\operatorname{Im}(\eta)|^{1/2-1/p}},\qquad \zeta\in[-\pi,\pi]^2,$$
whence $f_\xi\in L^p_w([-\pi,\pi]^2)$ with norm bounded by a constant times $|\operatorname{Im}(\eta)|^{1/p-1/2}$. The inequality (4.5) follows now from Lemma 3.5.
Next, observe that for finite $N$,
$$\int_0^N k(\xi,\eta)\sin[\operatorname{Im}(\eta)t]\,d[\operatorname{Im}(\eta)] = -\frac{1}{t}\,k(\xi,\operatorname{Re}(\eta)+iN)\cos Nt + \frac{1}{t}\int_0^N\frac{\partial k(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]}\cos[\operatorname{Im}(\eta)t]\,d[\operatorname{Im}(\eta)],$$
where we have used the fact that $k(\xi,\eta)=0$ if $\operatorname{Im}(\eta)=0$. Integrating again by parts and letting $N\to\infty$ we conclude that
$$\lim_{N\to\infty}\int_0^N k(\xi,\eta)\sin[\operatorname{Im}(\eta)t]\,d[\operatorname{Im}(\eta)] = -\frac{1}{t^2}\int_0^\infty\frac{\partial^2 k(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]^2}\sin[\operatorname{Im}(\eta)t]\,d[\operatorname{Im}(\eta)].$$
Since the estimates on $\partial^2 k(\xi,\eta)/\partial[\operatorname{Im}(\eta)]^2$ are similar to those on $\partial^2 h(\xi,\eta)/\partial[\operatorname{Im}(\eta)]^2$ we can argue as previously to conclude that
$$\lim_{N\to\infty}\int_{[-\pi,\pi]^2} d\xi\,\Big|\int_0^N k(\xi,\eta)\sin[\operatorname{Im}(\eta)t]\,d[\operatorname{Im}(\eta)]\Big| \le C_{\lambda,\Lambda}/t,\qquad t>0,$$
for some constant $C_{\lambda,\Lambda}$ depending only on $\lambda,\Lambda$.
We can similarly sharpen the result of Corollary 4.2 when $d=2$.

Lemma 4.2. For $d=2$, there is a constant $C(\lambda,\Lambda)$ depending only on $\lambda,\Lambda$ such that
$$\Big|\frac{\partial G_a(x,t)}{\partial t}\Big| \le C(\lambda,\Lambda)\big/[1+t^2],\qquad t>0.$$
Proof. Integrating by parts in (2.42) we have that
$$\frac{\partial}{\partial t}\int_0^\infty h(\xi,\eta)\cos[\operatorname{Im}(\eta)t]\,d[\operatorname{Im}(\eta)] =
\lim_{N\to\infty}\frac{-1}{t^3}\int_0^N\Big[3\,\frac{\partial^2 h(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]^2}+\operatorname{Im}(\eta)\,\frac{\partial^3 h(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]^3}\Big]\{1-\cos[\operatorname{Im}(\eta)t]\}\,d[\operatorname{Im}(\eta)].$$
We can compute $\partial^3 h(\xi,\eta)/\partial[\operatorname{Im}(\eta)]^3$ from (2.44). It is clear we can derive similar estimates on $\operatorname{Im}(\eta)\,\partial^3 h(\xi,\eta)/\partial[\operatorname{Im}(\eta)]^3$ to the ones on $\partial^2 h(\xi,\eta)/\partial[\operatorname{Im}(\eta)]^2$ which we used in the proof of Lemma 4.1. We conclude therefore that there is a constant $C_{\lambda,\Lambda}$ depending only on $\lambda,\Lambda$ such that
$$\int_{[-\pi,\pi]^2} d\xi\,\Big|\frac{\partial}{\partial t}\int_0^\infty h(\xi,\eta)\cos[\operatorname{Im}(\eta)t]\,d[\operatorname{Im}(\eta)]\Big| \le C_{\lambda,\Lambda}/t^2,\qquad t>1.$$
From (2.48) we have that
$$\frac{\partial}{\partial t}\Big[\frac{-1}{t}\int_0^\infty\frac{\partial k(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]}\{1-\cos[\operatorname{Im}(\eta)t]\}\,d[\operatorname{Im}(\eta)]\Big]$$
$$= \frac{1}{t^2}\int_0^\infty\Big\{\frac{\partial}{\partial[\operatorname{Im}(\eta)]}\Big[\operatorname{Im}(\eta)\frac{\partial k(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]}\Big]+\frac{\partial k(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]}\Big\}\{1-\cos[\operatorname{Im}(\eta)t]\}\,d[\operatorname{Im}(\eta)]$$
$$= \lim_{N\to\infty}\frac{-1}{t^2}\int_0^N\Big\{\frac{\partial}{\partial[\operatorname{Im}(\eta)]}\Big[\operatorname{Im}(\eta)\frac{\partial k(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]}\Big]+\frac{\partial k(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]}\Big\}\cos[\operatorname{Im}(\eta)t]\,d[\operatorname{Im}(\eta)],$$
where we have used the fact that $k(\xi,\eta)=0$ if $\eta>0$ is real. Integrating now by parts we conclude that
$$\frac{\partial}{\partial t}\Big[\frac{-1}{t}\int_0^\infty\frac{\partial k(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]}\{1-\cos[\operatorname{Im}(\eta)t]\}\,d[\operatorname{Im}(\eta)]\Big]
= \lim_{N\to\infty}\frac{1}{t^3}\int_0^N\Big\{3\,\frac{\partial^2 k(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]^2}+\operatorname{Im}(\eta)\frac{\partial^3 k(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]^3}\Big\}\sin[\operatorname{Im}(\eta)t]\,d[\operatorname{Im}(\eta)].$$
We can then argue just as for the integral in $h$ that
$$\int_{[-\pi,\pi]^2} d\xi\,\Big|\frac{\partial}{\partial t}\Big[\frac{-1}{t}\int_0^\infty\frac{\partial k(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]}\{1-\cos[\operatorname{Im}(\eta)t]\}\,d[\operatorname{Im}(\eta)]\Big]\Big| \le C_{\lambda,\Lambda}/t^2,\qquad t>1,$$
for a constant $C_{\lambda,\Lambda}$ depending only on $\lambda,\Lambda$.
So far we have not obtained diagonal estimates, even in dimension 1, on spatial derivatives of $G_a(x,t)$, which correspond to the estimates of Theorem 1.4. Next we shall prove these estimates for the case $d=2$. The method readily extends to the case $d=1$.

Lemma 4.3. For $d=2$ there is a constant $C(\lambda,\Lambda)$ depending only on $\lambda,\Lambda$ such that
$$|\nabla_i G_a(x,t)| \le C(\lambda,\Lambda)\big/[1+t^{3/2}].$$
Let $\delta$ satisfy $0<\delta<1$. Then there is a constant $C(\lambda,\Lambda,\delta)$ depending only on $\lambda,\Lambda,\delta$ such that
$$|\nabla_i\nabla_j G_a(x,t)| \le C(\lambda,\Lambda,\delta)\big/[1+t^{(3+\delta)/2}].$$

Proof. If we use the fact that for $0<\delta\le 1$, $|e_j(-\xi)|\le 2^{1-\delta}|e_j(-\xi)|^\delta$, then we see from the proof of Lemma 4.1 that it is sufficient to show that for $t>1$,
(4.7)
$$\int_{|e(\xi)|>1/\sqrt t} d\xi\,|e(-\xi)|^{1+\delta}\,\frac{1}{t^2}\int_0^{|e(\xi)|^2}\Big|\frac{\partial^2 h(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]^2}\Big|\{1-\cos[\operatorname{Im}(\eta)t]\}\,d[\operatorname{Im}(\eta)] \le \frac{C}{t^{(3+\delta)/2}}.$$
This follows by the argument of Lemma 4.1 if we can choose $h_p\in L^p_w$ with $p<2/(1+\delta)$. We cannot do this since $h_p\in L^p_w$ only for $p>2$. To get around this we can argue similarly to Lemma 3.6 in the proof that $\partial^2 q_{k,k'}(\xi,\eta)/\partial\xi_i\partial\xi_j\in L^{3/2}_w$. For $\operatorname{Re}(\eta)>0$ and $2<p<\infty$, let $g_p(\xi,\eta)$, $\xi\in[-\pi,\pi]^2$, be the function
$$g_p(\xi,\eta) = |\operatorname{Im}(\eta)|^{1-1/p}\Big[\sum_{k,k'=1}^{2}\big|\big\langle\psi_k(-\xi,\eta)\,\partial\psi_{k'}(\xi,\eta)/\partial\eta\big\rangle\big|\Big]^{1/2}.$$
Then from Section 3 we see that $g_p\in L^p_w([-\pi,\pi]^2)$ and there is a constant $C_{p,\lambda,\Lambda}$ depending only on $p,\lambda,\Lambda$ such that
$$\|g_p\|_{p,w}\le C_{p,\lambda,\Lambda},\qquad \operatorname{Re}(\eta)>0,\ 2<p<\infty.$$
Observe now that the contribution of the last term on the RHS of (2.44) to the integral on the LHS of (4.7) is bounded by a constant times
$$\int_{|e(\xi)|>1/\sqrt t} d\xi\,\frac{1}{t^2}\int_0^{|e(\xi)|^2}\frac{g_p(\xi,\eta)^2}{|e(\xi)|^{1-\delta}\,\operatorname{Im}(\eta)^{2-2/p}}\,d[\operatorname{Im}(\eta)].$$
Arguing as in Lemma 4.1 we see that this is bounded by the RHS of (4.7) provided $\delta<1$. Note that as $\delta\to 1$ the estimate diverges. Since we can make similar estimates for the other terms on the RHS of (2.44), the result follows.
We wish to extend Lemmas 4.1, 4.2, 4.3 to $d\ge 3$. To do this we need the following lemma.

Lemma 4.4. The functions $h(\xi,\eta)$, $k(\xi,\eta)$, $\operatorname{Re}(\eta)>0$, have the property that
$$\partial^m h(\xi,\eta)/\partial[\operatorname{Im}(\eta)]^m = 0,\ m\text{ odd};\qquad \partial^m k(\xi,\eta)/\partial[\operatorname{Im}(\eta)]^m = 0,\ m\text{ even};\qquad \eta>0\text{ real}.$$
Proof. Note that we can see $\partial h(\xi,\eta)/\partial[\operatorname{Im}(\eta)]=0$ for real $\eta>0$ from (2.37) if we use (2.15) and the fact that $\psi(-\xi,\eta)=\overline{\psi(\xi,\eta)}$. More generally the result follows from the fact that the function $g(\eta)=h(\xi,\eta)+ik(\xi,\eta)$ is analytic for $\operatorname{Re}(\eta)>0$ and real when $\eta$ is real. In fact from the Cauchy-Riemann equations we have that
$$\frac{\partial^m g(\eta)}{\partial\eta^m} = (-i)^m\Big[\frac{\partial^m h(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]^m}+i\,\frac{\partial^m k(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]^m}\Big],\qquad m=0,1,2,\ldots.$$
Evidently the LHS of this identity is real for all $\eta>0$ real.
Lemma 4.5. For $d\ge 3$ there is a constant $C(\lambda,\Lambda,d)$ depending only on $\lambda,\Lambda,d$ such that
$$0\le G_a(x,t)\le C(\lambda,\Lambda,d)\big/[1+t^{d/2}],\qquad t>0.$$
Proof. From Lemma 4.4 we see that
(4.8)
$$\int_0^\infty h(\xi,\eta)\cos[\operatorname{Im}(\eta)t]\,d[\operatorname{Im}(\eta)]$$
is, for $d$ odd, one of the integrals (up to sign),
(4.9)
$$\frac{1}{t^{(d+1)/2}}\int_0^\infty\frac{\partial^{(d+1)/2}h(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]^{(d+1)/2}}\,\sin[\operatorname{Im}(\eta)t]\,d[\operatorname{Im}(\eta)],\qquad
\frac{1}{t^{(d+1)/2}}\int_0^\infty\frac{\partial^{(d+1)/2}h(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]^{(d+1)/2}}\,\{1-\cos[\operatorname{Im}(\eta)t]\}\,d[\operatorname{Im}(\eta)].$$
For $d$ even it is one of the integrals,
(4.10)
$$\frac{1}{t^{d/2+1}}\int_0^\infty\frac{\partial^{d/2+1}h(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]^{d/2+1}}\,\sin[\operatorname{Im}(\eta)t]\,d[\operatorname{Im}(\eta)],\qquad
\frac{1}{t^{d/2+1}}\int_0^\infty\frac{\partial^{d/2+1}h(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]^{d/2+1}}\,\{1-\cos[\operatorname{Im}(\eta)t]\}\,d[\operatorname{Im}(\eta)].$$
We can see from (2.44) that for $m=1,2,\ldots$ the derivative $\partial^m h(\xi,\eta)/\partial[\operatorname{Im}(\eta)]^m$ is bounded in absolute value by a sum of terms
(4.11)
$$\frac{\prod_{r=0}^{m-1}\big\langle\,\big|\partial^r\psi(\xi,\eta)/\partial\eta^r\big|^2\,\big\rangle^{\alpha_r/2}}{\big|\eta+e(\xi)q(\xi,\eta)e(-\xi)\big|^{\,m+1-\sum_{r=1}^{m-1}r\alpha_r}}$$
where the $\alpha_r$, $r=0,\ldots,m-1$, are nonnegative integers satisfying the inequality
(4.12)
$$\sum_{r=0}^{m-1}(2r+1)\alpha_r \le 2m.$$
By differentiating (2.46) sufficiently often and using Cauchy-Schwarz we obtain the inequality,
$$\Big\langle\Big|\frac{\partial^r\psi(\xi,\eta)}{\partial\eta^r}\Big|^2\Big\rangle \le \frac{(r!)^2\,\langle|\psi(\xi,\eta)|^2\rangle}{|\eta|^{2r}},\qquad r=0,1,2,\ldots.$$
Observe next that (4.12) implies that
$$m+1-\sum_{r=1}^{m-1}r\alpha_r > \frac{1}{2}\sum_{r=0}^{m-1}\alpha_r.$$
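The implication just displayed is elementary arithmetic; written out (a sketch):

```latex
\sum_{r=1}^{m-1} r\,\alpha_r
 = \frac{1}{2}\sum_{r=0}^{m-1}(2r+1)\alpha_r - \frac{1}{2}\sum_{r=0}^{m-1}\alpha_r
 \le m - \frac{1}{2}\sum_{r=0}^{m-1}\alpha_r ,
```

using (4.12) in the middle step, so that $m+1-\sum_{r=1}^{m-1}r\alpha_r \ge 1+\frac{1}{2}\sum_{r=0}^{m-1}\alpha_r > \frac{1}{2}\sum_{r=0}^{m-1}\alpha_r$.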
It follows therefore from (2.15) that there is a constant $C_m$ depending only on $m$ such that (4.11) is bounded by $C_m/|\operatorname{Im}(\eta)|^{m+1}$. Hence,
$$\int_{|e(\xi)|^2}^{\infty}\Big|\frac{\partial^m h(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]^m}\Big|\,d[\operatorname{Im}(\eta)] \le \frac{C_m}{|e(\xi)|^{2m}}$$
for some constant $C_m$ depending only on $m$.
Let us assume now that $d$ is odd and that the first integral in (4.9) is the correct representation of (4.8). Then, following the argument of Lemma 4.1, we have that
$$\int_{[-\pi,\pi]^d} d\xi\,\Big|\int_0^\infty h(\xi,\eta)\cos[\operatorname{Im}(\eta)t]\,d[\operatorname{Im}(\eta)]\Big|
= \Big\{\int_{|e(\xi)|<1/\sqrt t}+\int_{|e(\xi)|>1/\sqrt t}\Big\}\,d\xi\,\Big|\cdots\Big|$$
$$\le \frac{C_d}{t^{d/2}} + \frac{C}{t^{(d+1)/2}}\int_{|e(\xi)|>1/\sqrt t}|e(\xi)|^{-d-1}\,d\xi
+ \int_{|e(\xi)|>1/\sqrt t} d\xi\,\frac{1}{t^{(d+1)/2}}\Big|\int_0^{|e(\xi)|^2}\frac{\partial^{(d+1)/2}h(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]^{(d+1)/2}}\sin[\operatorname{Im}(\eta)t]\,d[\operatorname{Im}(\eta)]\Big|.$$
The first two terms on the RHS of the last inequality are bounded by $C_{\lambda,\Lambda,d}/t^{d/2}$ for some constant $C_{\lambda,\Lambda,d}$ depending only on $\lambda,\Lambda,d$. We are left therefore to deal with the final term.
Consider the simplest case of (4.11) when $\alpha_r=0$, $r>0$, and $\alpha_0\le 2m$. Then (4.11) is bounded by
$$\big\langle|\psi(\xi,\eta)|^2\big\rangle^{\alpha_0/2}\big/\,\lambda^{2(m+1)}|e(\xi)|^{2(m+1)}.$$
We shall show that
(4.13)
$$\int_{|e(\xi)|>1/\sqrt t} d\xi\,\frac{1}{t^{(d+1)/2}}\int_0^{|e(\xi)|^2}\frac{\big\langle|\psi(\xi,\eta)|^2\big\rangle^{\alpha_0/2}}{|e(\xi)|^{d+3}}\,\big|\sin[\operatorname{Im}(\eta)t]\big|\,d[\operatorname{Im}(\eta)] \le C_{\lambda,\Lambda,d}\big/t^{d/2}$$
provided $\alpha_0\le d+1$. To do this we define for $d\le p<\infty$ the function $h_p(\xi,\eta)$, $\xi\in[-\pi,\pi]^d$, by
(4.14)
$$h_p(\xi,\eta) = |\operatorname{Im}(\eta)|^{1/2-d/2p}\Big[\sum_{k=1}^{d}\big\langle|\psi_k(\xi,\eta)|^2\big\rangle\Big]^{1/2}$$
where $\psi_k(\xi,\eta)$ is given by (2.2). We can see similarly to the argument of Lemma 4.1 that $h_p\in L^p_w([-\pi,\pi]^d)$ and that there is a constant $C_{p,\lambda,\Lambda,d}$ depending on $p,\lambda,\Lambda,d$ such that
$$\|h_p\|_{p,w}\le C_{p,\lambda,\Lambda,d},\qquad \operatorname{Re}(\eta)>0,\ d\le p<\infty.$$
We have now that
$$\int_{|e(\xi)|>1/\sqrt t} d\xi\,\frac{1}{t^{(d+1)/2}}\int_0^{|e(\xi)|^2}\frac{\big\langle|\psi(\xi,\eta)|^2\big\rangle^{\alpha_0/2}}{|e(\xi)|^{d+3}}\,\big|\sin[\operatorname{Im}(\eta)t]\big|\,d[\operatorname{Im}(\eta)]$$
$$\le \int_{|e(\xi)|>1/\sqrt t} d\xi\,\frac{1}{t^{(d+1)/2}}\int_0^{|e(\xi)|^2}\frac{h_p(\xi,\eta)^{\alpha_0}}{|e(\xi)|^{d+3-\alpha_0}\,\operatorname{Im}(\eta)^{\alpha_0(1/2-d/2p)}}\,d[\operatorname{Im}(\eta)].$$
Since $\alpha_0\le d+1$ we can find $p>d+1$ such that $\alpha_0(1/2-d/2p)<1$. The proof of the inequality (4.13) follows now exactly the same lines as in (4.6).
Next we consider the general case of (4.11). To do this we define for $r=0,1,2,\ldots$, $\operatorname{Re}(\eta)>0$ and $p$ satisfying $d/(2r+1)\le p<\infty$, functions $h_{p,r}(\xi,\eta)$ by
$$h_{p,r}(\xi,\eta) = |\operatorname{Im}(\eta)|^{\,r+1/2-d/2p}\Big[\sum_{k=1}^{d}\Big\langle\Big|\frac{\partial^r\psi_k(\xi,\eta)}{\partial\eta^r}\Big|^2\Big\rangle\Big]^{1/2}.$$
Observe that $h_{p,0}(\xi,\eta)$ is the function (4.14). We shall show that for $p>\max[2,\,d/(2r+1)]$, the function $h_{p,r}\in L^p_w([-\pi,\pi]^d)$ and there is a constant $C_{p,r,\lambda,\Lambda,d}$ depending only on $p,r,\lambda,\Lambda,d$ such that
(4.15)
$$\|h_{p,r}\|_{p,w}\le C_{p,r,\lambda,\Lambda,d},\qquad \operatorname{Re}(\eta)>0,\ \max[2,\,d/(2r+1)]<p<\infty.$$
To prove this note that $\partial^r\psi_k(\xi,\eta)/\partial\eta^r$ is a sum of terms $PT^n_{\varphi,b}(A_1,\ldots,A_n)(\xi,\eta)$, where $T^n_{\varphi,b}$ is the operator (3.33) and $\varphi$ is an entry of the matrix $a(\cdot)$. Furthermore, there are positive integers $r_1,\ldots,r_n$ such that $r_1+\cdots+r_n=r+1$ and
$$\|A_1(\zeta,\eta)\| \le |e(\zeta)|\,\Big/\big[|e(\zeta)|^2+|\operatorname{Im}(\eta)|\big]^{r_1},\qquad
\|A_j(\zeta,\eta)\| \le |e(\zeta)|^2\,\Big/\big[|e(\zeta)|^2+|\operatorname{Im}(\eta)|\big]^{r_j+1},\ 2\le j\le n.$$
Suppose now $p_1,\ldots,p_n$ are positive numbers satisfying
(4.16)
$$p_1 \ge \frac{d}{2r_1-1},\qquad p_j \ge \frac{d}{2r_j},\quad 2\le j\le n.$$
If also $p_j>1$, $j=1,\ldots,n$, then we see that $A_j\in L^{p_j}_w([-\pi,\pi]^d)$, $1\le j\le n$, and
$$\|A_1\|_{p_1,w}\le C_{d,r}\big/|\operatorname{Im}(\eta)|^{\,r_1-1/2-d/2p_1},\qquad
\|A_j\|_{p_j,w}\le C_{d,r}\big/|\operatorname{Im}(\eta)|^{\,r_j-d/2p_j},\ 2\le j\le n,$$
where $C_{d,r}$ is a constant depending only on $d,r$. Note now that since
$$\frac{2r_1-1}{d}+\sum_{j=2}^{n}\frac{2r_j}{d} = \frac{2r+1}{d},$$
if $p$ satisfies $p>\max[2,\,d/(2r+1)]$, it is possible to choose $p_1,\ldots,p_n$ satisfying $p_j>2$, $1\le j\le n$, the inequalities (4.16) and the identity
$$\frac{1}{p_1}+\cdots+\frac{1}{p_n} = \frac{1}{p}.$$
The inequality (4.15) follows now from Lemma 3.7.
For the general case of (4.11) we need to show that
(4.17)
$$\int_{|e(\xi)|>1/\sqrt t} d\xi\,\frac{1}{t^{(d+1)/2}}\int_0^{|e(\xi)|^2}\frac{\prod_{r=0}^{(d-1)/2} h_{q_r,r}(\xi,\eta)^{\alpha_r}}{|e(\xi)|^{\,d+3-\sum_{r=0}^{(d-1)/2}(2r+1)\alpha_r}\;\operatorname{Im}(\eta)^{\,\sum_{r=0}^{(d-1)/2}(r+1/2-d/2q_r)\alpha_r}}\,d[\operatorname{Im}(\eta)] \le \frac{C_{\lambda,\Lambda,d}}{t^{d/2}}$$
where the $q_r$, $r=0,\ldots,(d-1)/2$, are restricted to satisfy
(4.18)
$$q_r > \max[2,\,d/(2r+1)],\qquad r=0,\ldots,(d-1)/2.$$
Define now $q$ by
(4.19)
$$\frac{1}{q} = \sum_{r=0}^{(d-1)/2}\frac{\alpha_r}{q_r}.$$
It follows then from (4.12) that
$$\sum_{r=0}^{(d-1)/2}\Big(r+\frac{1}{2}-\frac{d}{2q_r}\Big)\alpha_r = \frac{d_0+1}{2}-\frac{d}{2q},$$
where $d_0$ is an integer satisfying $-1\le d_0\le d$. Observe that the RHS of the last equality is strictly less than 1 if $d_0\le 1$, or if $d_0>1$ and $q<d/(d_0-1)$. Suppose now $q_r$, $r=0,\ldots,(d-1)/2$, and $q$ satisfy (4.18), (4.19), with $q>1$. Then it follows from (4.15) that
$$\prod_{r=0}^{(d-1)/2} h_{q_r,r}(\xi,\eta)^{\alpha_r} \in L^q_w([-\pi,\pi]^d)$$
with norm bounded by a constant depending only on $\lambda,\Lambda,d,q$ and the $q_r$, $r=0,\ldots,(d-1)/2$. If we can also arrange that for $d_0>1$, $q$ satisfies $q<d/(d_0-1)$, then we can see that (4.17) holds by arguing just as we did in Lemma 4.1. Evidently it is possible to choose the $q_r$, $r=0,\ldots,(d-1)/2$, such that $q_r>d/(2r+1)$ and $1<q<d/(d_0-1)$. It is not possible, however, to satisfy the condition $q_r>2$, $r=0,\ldots,(d-1)/2$, in general.
To deal with this problem we need to use a sharper estimate on the derivative $\partial^m h(\xi,\eta)/\partial[\operatorname{Im}(\eta)]^m$ than sums of terms of the form (4.11). For $r,j$ satisfying $0\le j\le r$, $0\le r\le m-1$, let $\alpha_{rj}$ be nonnegative integers satisfying the inequality
(4.20)
$$\sum_{r=0}^{m-1}\sum_{j=0}^{r}(1+r+j)\alpha_{rj} \le m.$$
If we define $\alpha_r$ for $0\le r\le m-1$ by
(4.21)
$$\alpha_r = \sum_{j=0}^{r}\alpha_{rj} + \sum_{j=r}^{m-1}\alpha_{jr}$$
then we see from (4.20) that $\alpha_r$ defined by (4.21) satisfies (4.12). It is also easy to see that $\partial^m h(\xi,\eta)/\partial[\operatorname{Im}(\eta)]^m$ is bounded in absolute value by a sum of terms,
(4.22)
$$\frac{\prod_{r=0}^{m-1}\prod_{j=0}^{r}\Big|\Big\langle\dfrac{\partial^r\psi(-\xi,\eta)}{\partial\eta^r}\,\dfrac{\partial^j\psi(\xi,\eta)}{\partial\eta^j}\Big\rangle\Big|^{\alpha_{rj}}}{\big|\eta+e(\xi)q(\xi,\eta)e(-\xi)\big|^{\,m+1-\sum_{r=1}^{m-1}r\alpha_r}}$$
where the $\alpha_{rj}$ satisfy (4.20) and the $\alpha_r$ are defined by (4.21). Evidently the Schwarz inequality implies that (4.22) is bounded by (4.11). We define for $r,j=0,1,2,\ldots$, $\operatorname{Re}(\eta)>0$ and $p$ satisfying $d/(r+j+1)\le p<\infty$, functions $h_{p,r,j}(\xi,\eta)$ by
$$h_{p,r,j}(\xi,\eta) = |\operatorname{Im}(\eta)|^{(r+j+1-d/p)/2}\Big[\sum_{k,k'=1}^{d}\Big|\Big\langle\frac{\partial^r\psi_k(-\xi,\eta)}{\partial\eta^r}\,\frac{\partial^j\psi_{k'}(\xi,\eta)}{\partial\eta^j}\Big\rangle\Big|\Big]^{1/2}.$$
We shall show that for $p>\max[2,\,d/(r+j+1)]$, the function $h_{p,r,j}\in L^p_w([-\pi,\pi]^d)$ and there is a constant $C_{p,r,j,\lambda,\Lambda,d}$ depending only on $p,r,j,\lambda,\Lambda,d$ such that
(4.23)
$$\|h_{p,r,j}\|_{p,w}\le C_{p,r,j,\lambda,\Lambda,d},\qquad \operatorname{Re}(\eta)>0,\ \max[2,\,d/(r+j+1)]<p<\infty.$$
In fact it is easy to see that (4.23) follows by arguing exactly as we did in Lemma 3.9 for the case $d/2\le\,\cdot\,<d$.
On replacing (4.11) by (4.22) the inequality (4.17) gets replaced by
(4.24)
$$\int_{|e(\xi)|>1/\sqrt t} d\xi\,\frac{1}{t^{(d+1)/2}}\int_0^{|e(\xi)|^2}\frac{\prod_{r=0}^{(d-1)/2}\prod_{j=0}^{r} h_{q_{rj},r,j}(\xi,\eta)^{2\alpha_{rj}}}{|e(\xi)|^{\,d+3-\sum_{r=0}^{(d-1)/2}(2r+1)\alpha_r}\;\operatorname{Im}(\eta)^{\,\sum_{r=0}^{(d-1)/2}\sum_{j=0}^{r}(r+j+1-d/q_{rj})\alpha_{rj}}}\,d[\operatorname{Im}(\eta)] \le \frac{C_{\lambda,\Lambda,d}}{t^{d/2}}$$
where the $\alpha_{rj}$, $\alpha_r$ satisfy (4.20), (4.21). Let $q$ be defined by
(4.25)
$$\frac{1}{q} = \sum_{r=0}^{(d-1)/2}\sum_{j=0}^{r}\frac{2\alpha_{rj}}{q_{rj}}.$$
Then from (4.25) it follows that if $q_{rj}>\max[2,\,d/(r+j+1)]$, $0\le j\le r$, $0\le r\le(d-1)/2$, and $q>1$, then the function
$$\prod_{r=0}^{(d-1)/2}\prod_{j=0}^{r} h_{q_{rj},r,j}(\xi,\eta)^{2\alpha_{rj}} \in L^q_w([-\pi,\pi]^d)$$
with norm bounded by a constant depending only on $\lambda,\Lambda,d$ and the $q_{rj}$. We have now from (4.20), (4.25) that
$$\sum_{r=0}^{(d-1)/2}\sum_{j=0}^{r}(r+j+1-d/q_{rj})\alpha_{rj} = \frac{d'+1}{2}-\frac{d}{2q}$$
where $d'$ is an integer satisfying $-1\le d'\le d$. As before, the RHS of the last equality is strictly less than 1 if $d'\le 1$, or if $d'>1$ and $q<d/(d'-1)$. Hence if $q<d/(d'-1)$ the power of $\operatorname{Im}(\eta)$ on the LHS of (4.24) is strictly less than 1 and hence integrable. Finally, observe that in (4.22) one has $\alpha_{rj}=0$ unless $r+j\le m-1=(d-1)/2$. Note that if $r+j<(d-1)/2$ then $d/(r+j+1)>2$. On the other hand if $\alpha_{rj}\ne 0$ for some $(r,j)$ with $r+j=m-1$ then $\alpha_{rj}=1$ and $\alpha_{r'j'}=0$ for $(r',j')\ne(r,j)$. In that case $d'=d$ and (4.25) becomes $1/q=2/q_{rj}$, whence the condition $q<d/(d-1)$ becomes $q_{rj}<2d/(d-1)$, so we may still choose $q_{rj}>2$. Thus (4.24) holds on appropriate choice of the $q_{rj}$.
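The bookkeeping between (4.20) and (4.21) can be checked mechanically. The following sketch (hypothetical helper name, not from the paper) enumerates small exponent arrays $\alpha_{rj}$ satisfying (4.20) and verifies that the $\alpha_r$ of (4.21) always satisfy (4.12):

```python
import itertools

def satisfies_412(m, max_val=2):
    """Enumerate nonnegative alpha_{r,j} (0 <= j <= r <= m-1) obeying (4.20),
    build alpha_r via (4.21), and confirm sum (2r+1) alpha_r <= 2m, i.e. (4.12)."""
    pairs = [(r, j) for r in range(m) for j in range(r + 1)]
    checked = 0
    for vals in itertools.product(range(max_val + 1), repeat=len(pairs)):
        a = dict(zip(pairs, vals))
        if sum((1 + r + j) * v for (r, j), v in a.items()) > m:
            continue  # violates (4.20)
        # (4.21): the diagonal entry alpha_{r,r} is counted in both sums
        alpha = [sum(v for (r, j), v in a.items() if r == s) +
                 sum(v for (r, j), v in a.items() if j == s) for s in range(m)]
        if sum((2 * s + 1) * alpha[s] for s in range(m)) > 2 * m:
            return False
        checked += 1
    return checked > 0

assert all(satisfies_412(m) for m in (1, 2, 3, 4))
```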
The proof of the lemma for $d$ odd is complete if we make the observation that from Lemma 4.4 one has
$$\frac{1}{t}\int_0^\infty\frac{\partial k(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]}\{1-\cos[\operatorname{Im}(\eta)t]\}\,d[\operatorname{Im}(\eta)] =
\frac{1}{t^{(d+1)/2}}\int_0^\infty\frac{\partial^{(d+1)/2}k(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]^{(d+1)/2}}\,\sin[\operatorname{Im}(\eta)t]\,d[\operatorname{Im}(\eta)]$$
or
$$\frac{1}{t^{(d+1)/2}}\int_0^\infty\frac{\partial^{(d+1)/2}k(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]^{(d+1)/2}}\,\{1-\cos[\operatorname{Im}(\eta)t]\}\,d[\operatorname{Im}(\eta)],$$
depending on the value of $d$. Since the estimates on $\partial^m k(\xi,\eta)/\partial[\operatorname{Im}(\eta)]^m$ are the same as those on $\partial^m h(\xi,\eta)/\partial[\operatorname{Im}(\eta)]^m$ the argument proceeds as before. The case for $d$ even is similar.
Lemma 4.6. For $d\ge 3$ there is a constant $C(\lambda,\Lambda,d)$ depending only on $\lambda,\Lambda,d$ such that
$$\Big|\frac{\partial G_a(x,t)}{\partial t}\Big| \le C(\lambda,\Lambda,d)\big/[1+t^{1+d/2}],\qquad t>0.$$
Proof. Suppose $d$ is odd and the first integral in (4.9) is the appropriate representation for this value of $d$. Then we have that
$$\frac{\partial}{\partial t}\int_0^\infty h(\xi,\eta)\cos[\operatorname{Im}(\eta)t]\,d[\operatorname{Im}(\eta)]$$
is a sum of the terms,
$$-\frac{(d+1)}{2\,t^{(d+3)/2}}\int_0^\infty\frac{\partial^{(d+1)/2}h(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]^{(d+1)/2}}\,\sin[\operatorname{Im}(\eta)t]\,d[\operatorname{Im}(\eta)],\qquad
\frac{1}{t^{(d+1)/2}}\int_0^\infty\frac{\partial^{(d+1)/2}h(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]^{(d+1)/2}}\,\operatorname{Im}(\eta)\cos[\operatorname{Im}(\eta)t]\,d[\operatorname{Im}(\eta)].$$
Evidently the first term is bounded by $C(\lambda,\Lambda,d)/[1+t^{1+d/2}]$ by Lemma 4.5. On integration by parts we can write the second term as a sum of two integrals,
$$\frac{1}{t^{(d+3)/2}}\int_0^\infty\frac{\partial^{(d+1)/2}h(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]^{(d+1)/2}}\,\sin[\operatorname{Im}(\eta)t]\,d[\operatorname{Im}(\eta)],\qquad
\frac{1}{t^{(d+3)/2}}\int_0^\infty\frac{\partial^{(d+3)/2}h(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]^{(d+3)/2}}\,\operatorname{Im}(\eta)\sin[\operatorname{Im}(\eta)t]\,d[\operatorname{Im}(\eta)].$$
Again Lemma 4.5 implies that the first integral is bounded by $C(\lambda,\Lambda,d)/[1+t^{1+d/2}]$, so we are left to deal with the second integral. Using the method of Lemma 4.1 we see that we are left to deal with
$$\int_{|e(\xi)|>1/\sqrt t} d\xi\,\frac{1}{t^{(d+3)/2}}\Big|\int_0^{|e(\xi)|^2}\frac{\partial^{(d+3)/2}h(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]^{(d+3)/2}}\,\operatorname{Im}(\eta)\sin[\operatorname{Im}(\eta)t]\,d[\operatorname{Im}(\eta)]\Big|.$$
Arguing exactly as in Lemma 4.5 we see this integral is bounded by $C(\lambda,\Lambda,d)/[1+t^{1+d/2}]$. We can similarly bound the corresponding integral in $k(\xi,\eta)$ and hence the result follows.
Lemma 4.7. For $d\ge 3$ there is a constant $C(\lambda,\Lambda,d)$ depending only on $\lambda,\Lambda,d$ such that
$$|\nabla_i G_a(x,t)| \le C(\lambda,\Lambda,d)\big/[1+t^{(d+1)/2}].$$
Let $\delta$ satisfy $0<\delta<1$. Then there is a constant $C(\lambda,\Lambda,\delta,d)$ depending only on $\lambda,\Lambda,\delta,d$ such that
$$|\nabla_i\nabla_j G_a(x,t)| \le C(\lambda,\Lambda,\delta,d)\big/[1+t^{(d+1+\delta)/2}].$$

Proof. Same as for Lemma 4.6.
5. Proof of Theorem 1.4: Off Diagonal Case

Here we shall complete the proof of Theorem 1.4. Thus we need to establish the exponential falloff in (1.10) and in the inequalities of Theorem 1.4. Evidently (1.10) implies that the periodic function $\hat G_a(\xi,t)$, $\xi\in\mathbf{R}^d$, can be analytically continued to $\mathbf{C}^d$. Our goal will be to establish analyticity properties of $\hat G_a(\xi,t)$ and derive from them the inequality (1.10) and Theorem 1.4.
Lemma 5.1. Suppose $\varepsilon$ satisfies $0<\varepsilon<1$. Then the periodic function $\hat G_a(\xi,t)$, $\xi\in\mathbf{R}^d$, can be analytically continued to the strip $\{\xi\in\mathbf{C}^d : |\operatorname{Im}(\xi)|<\varepsilon\}$. There are constants $C_1(\lambda,\Lambda,d)$, $C_2(\lambda,\Lambda,d)$ depending only on $\lambda,\Lambda,d$ such that
$$|\hat G_a(\xi,t)| \le C_1(\lambda,\Lambda,d)\exp[C_2(\lambda,\Lambda,d)\varepsilon^2 t],\qquad |\operatorname{Im}(\xi)|<\varepsilon,\ t>0.$$
Proof. We have already seen in Section 2 that the matrix $q(\xi,\eta)$ of (2.3) is defined for all $\xi\in\mathbf{R}^d$, $\operatorname{Re}(\eta)>0$, is continuous in $(\xi,\eta)$ and analytic in $\eta$ for fixed $\xi$. We shall show now that for any $\delta>0$ there exists a constant $C(\lambda,\Lambda,d,\delta)>0$ depending only on $\lambda,\Lambda,d,\delta$ such that if $\operatorname{Re}(\eta)>C(\lambda,\Lambda,d,\delta)\varepsilon^2$ then $q(\xi,\eta)$, $\xi\in\mathbf{R}^d$, can be analytically continued to the strip $\{\xi\in\mathbf{C}^d : |\operatorname{Im}(\xi)|<\varepsilon\}$ and
(5.1)
$$\|q(\xi,\eta)-q(\operatorname{Re}(\xi),\eta)\| < \delta,\qquad |\operatorname{Im}(\xi)|<\varepsilon,\ \operatorname{Re}(\eta)>C(\lambda,\Lambda,d,\delta)\varepsilon^2.$$
In view of (3.6), (3.7) this will follow if we can show that for any $\delta>0$ there exists a constant $C(d,\delta)>0$ depending only on $d,\delta$, such that if $\operatorname{Re}(\eta)>C(d,\delta)\varepsilon^2$, then the operator $T_{k,k'}^{\xi}$ of (3.4), which is bounded on $L^2(\Omega)$ for $\xi\in\mathbf{R}^d$, extends analytically to a bounded operator on $L^2(\Omega)$ for $|\operatorname{Im}(\xi)|<\varepsilon$ and
(5.2)
$$\|T_{k,k'}^{\xi}-T_{k,k'}^{\operatorname{Re}(\xi)}\| < \delta,\qquad |\operatorname{Im}(\xi)|<\varepsilon,\ \operatorname{Re}(\eta)>C(d,\delta)\varepsilon^2.$$
To prove (5.2) observe that the Green's function $G_\eta(x)$ satisfies an inequality
$$|\nabla_k\nabla_{k'}G_\eta(x)| \le \frac{C_d\,\exp\big[-g(\operatorname{Re}(\eta))\,|x|\big]}{[1+|x|]^d},\qquad x\in\mathbf{Z}^d,$$
where $g(z)$, $z>0$, is the function
$$g(z) = \begin{cases} c_d\sqrt z, & 0<z<1,\\ c_d\log(1+z), & z\ge 1.\end{cases}$$
Here $C_d$, $c_d$ are positive constants depending only on $d$. It follows in particular that there is a constant $C_1(d)$, depending only on $d$, such that if $0<\varepsilon<1$ and $\operatorname{Re}(\eta)>C_1(d)\varepsilon^2$, then the function $\nabla_k\nabla_{k'}G_\eta(x)\,e^{i\xi\cdot x}$ decreases exponentially in $x$ as $|x|\to\infty$, provided $|\operatorname{Im}(\xi)|<\varepsilon$. It follows from (3.4) that if $\operatorname{Re}(\eta)>C_1(d)\varepsilon^2$ then the bounded operators $T_{k,k'}^{\xi}$ on $L^2(\Omega)$, $\xi\in\mathbf{R}^d$, extend analytically to bounded operators on $L^2(\Omega)$ provided $\xi\in\mathbf{C}^d$ satisfies $|\operatorname{Im}(\xi)|<\varepsilon$.
To prove (5.2) we use Bochner's Theorem [9]. Thus for any $\varphi\in L^2(\Omega)$ there is a positive finite measure $d\mu_\varphi$ on $[-\pi,\pi]^d$ such that
$$\big\langle\varphi(\tau_x\,\cdot)\,\overline{\varphi(\tau_y\,\cdot)}\big\rangle = \int_{[-\pi,\pi]^d} e^{i(x-y)\cdot\zeta}\,d\mu_\varphi(\zeta).$$
Hence,
$$\|T_{k,k'}^{\xi}\varphi - T_{k,k'}^{\operatorname{Re}(\xi)}\varphi\|^2 = \int_{[-\pi,\pi]^d} d\mu_\varphi(\zeta)\,\Big|\sum_{x\in\mathbf{Z}^d}\nabla_k\nabla_{k'}G_\eta(x)\big[\exp\{ix\cdot(\zeta+\xi)\}-\exp\{ix\cdot(\zeta+\operatorname{Re}(\xi))\}\big]\Big|^2$$
$$= \int_{[-\pi,\pi]^d} d\mu_\varphi(\zeta)\,\Big|e_k(\zeta+\xi)e_{k'}(-\zeta-\xi)\hat G_\eta(\zeta+\xi) - e_k(\zeta+\operatorname{Re}(\xi))e_{k'}(-\zeta-\operatorname{Re}(\xi))\hat G_\eta(\zeta+\operatorname{Re}(\xi))\Big|^2.$$
We have now that
$$e_k(\zeta+\xi)e_{k'}(-\zeta-\xi)\hat G_\eta(\zeta+\xi) = \frac{e_k(\zeta+\xi)\,e_{k'}(-\zeta-\xi)}{\sum_{j=1}^d e_j(\zeta+\xi)e_j(-\zeta-\xi)+\eta}.$$
Observe that
$$|e_k(\zeta+\xi)-e_k(\zeta+\operatorname{Re}(\xi))| \le e^\varepsilon-1 < 2\varepsilon,\qquad \zeta\in[-\pi,\pi]^d,\ |\operatorname{Im}(\xi)|<\varepsilon<1.$$
Hence if $C_1(d)>24d$ and $\operatorname{Re}(\eta)>C_1(d)\varepsilon^2$ then
(5.3)
$$\Big|\sum_{j=1}^d e_j(\zeta+\xi)e_j(-\zeta-\xi)+\eta\Big| \ge \frac{1}{2}\Big[\sum_{j=1}^d e_j(\zeta+\operatorname{Re}(\xi))e_j(-\zeta-\operatorname{Re}(\xi))+\operatorname{Re}(\eta)\Big] \ge \frac{1}{2}C_1(d)\varepsilon^2,\qquad \zeta\in[-\pi,\pi]^d,\ |\operatorname{Im}(\xi)|<\varepsilon.$$
Similarly we see that
$$\Big|e_k(\zeta+\xi)e_{k'}(-\zeta-\xi)-e_k(\zeta+\operatorname{Re}(\xi))e_{k'}(-\zeta-\operatorname{Re}(\xi))\Big| \le 1000\varepsilon^2+100\varepsilon\Big[\sum_{j=1}^d e_j(\zeta+\operatorname{Re}(\xi))e_j(-\zeta-\operatorname{Re}(\xi))\Big]^{1/2},\qquad \zeta\in[-\pi,\pi]^d,\ |\operatorname{Im}(\xi)|<\varepsilon.$$
We conclude therefore that the integrand in the $d\mu_\varphi$ integral is bounded as
$$\Big|e_k(\zeta+\xi)e_{k'}(-\zeta-\xi)\hat G_\eta(\zeta+\xi)-e_k(\zeta+\operatorname{Re}(\xi))e_{k'}(-\zeta-\operatorname{Re}(\xi))\hat G_\eta(\zeta+\operatorname{Re}(\xi))\Big|$$
$$\le \frac{1000\varepsilon^2+100\varepsilon\Big[\sum_{j=1}^d e_j(\zeta+\operatorname{Re}(\xi))e_j(-\zeta-\operatorname{Re}(\xi))\Big]^{1/2}}{\Big|\sum_{j=1}^d e_j(\zeta+\xi)e_j(-\zeta-\xi)+\eta\Big|}$$
$$+ \frac{\big|e_k(\zeta+\operatorname{Re}(\xi))\big|\,\big|e_{k'}(-\zeta-\operatorname{Re}(\xi))\big|\Big[4d\varepsilon^2+2\varepsilon\sqrt d\,\Big(\sum_{j=1}^d|e_j(\zeta+\operatorname{Re}(\xi))|^2\Big)^{1/2}\Big]}{\Big|\sum_{j=1}^d e_j(\zeta+\xi)e_j(-\zeta-\xi)+\eta\Big|\,\Big[\sum_{j=1}^d e_j(\zeta+\operatorname{Re}(\xi))e_j(-\zeta-\operatorname{Re}(\xi))+\operatorname{Re}(\eta)\Big]}.$$
We can see from (5.3) that the expression on the RHS of the last inequality is bounded by
$$\frac{2000}{C_1(d)}+\frac{200}{\sqrt{C_1(d)}}+\frac{4d}{C_1(d)}+\frac{2\sqrt d}{\sqrt{C_1(d)}}.$$
Evidently this last expression can be made smaller than $\delta$ by choosing $C_1(d)$ sufficiently large, whence (5.2) follows.
We have shown that (5.1) holds. Next consider the function $h(\xi,\eta)$ defined by (2.18) for $\xi\in\mathbf{R}^d$, $\operatorname{Re}(\eta)>0$. Furthermore, (2.19) holds. We show now that there is a constant $C_{\lambda,\Lambda,d}$ depending only on $\lambda,\Lambda,d$ such that if $\operatorname{Re}(\eta)>C_{\lambda,\Lambda,d}\,\varepsilon^2$ then $h(\xi,\eta)$ may be analytically continued to the strip $\{\xi\in\mathbf{C}^d : |\operatorname{Im}(\xi)|<\varepsilon\}$ and
(5.4)
$$\int_0^\infty|h(\xi,\eta)|\,d[\operatorname{Im}(\eta)] \le C_{\lambda,\Lambda,d},\qquad |\operatorname{Im}(\xi)|<\varepsilon.$$
To do this we need to rewrite the identities (2.14), (2.15) in such a way that they extend analytically in $\xi$ from $\xi\in\mathbf{R}^d$ to the strip $|\operatorname{Im}(\xi)|<\varepsilon$. First consider (2.14). We define a function $A_j(\xi,\eta,\omega)$ for $\xi\in\mathbf{C}^d$, $\operatorname{Re}(\eta)>0$, $\omega\in\Omega$, by
$$A_j(\xi,\eta) = e_j(-\xi) + e^{-i\xi\cdot e_j}[\partial_j+e_j(\xi)]\,\tfrac{1}{2}\big\{\psi(\xi,\eta)+\bar\psi(\xi,\eta)\big\},$$
where $\bar\psi(\xi,\eta)$ is defined just before (2.26). It is easy to see that the complex conjugate satisfies $\overline{A_j(\xi,\eta)}=A_j(-\bar\xi,\bar\eta)$. We conclude from this and (2.14), (2.26) that
(5.5)
$$\operatorname{Re}\big[e(\xi)q(\xi,\eta)e(-\xi)\big] = \sum_{i,j=1}^d\big\langle a_{ij}(\cdot)\,A_i(-\xi,\eta)A_j(\xi,\eta)\big\rangle
+ \frac{\operatorname{Re}(\eta)}{4}\big\langle[\psi(\xi,\eta)+\bar\psi(\xi,\eta)]\,[\psi(-\xi,\eta)+\bar\psi(-\xi,\eta)]\big\rangle$$
$$+ \frac{1}{4}\big\langle[\psi(-\xi,\eta)-\bar\psi(-\xi,\eta)]\,[L+\operatorname{Re}(\eta)]\,[\psi(\xi,\eta)-\bar\psi(\xi,\eta)]\big\rangle.$$
Similarly we have from (2.15) that
(5.6)
$$\operatorname{Im}\big[e(\xi)q(\xi,\eta)e(-\xi)\big] = \frac{1}{2}\operatorname{Im}(\eta)\,\big\langle\psi(\xi,\eta)\,\bar\psi(-\xi,\eta)\big\rangle + \frac{1}{2}\operatorname{Im}(\eta)\,\big\langle\psi(-\xi,\eta)\,\bar\psi(\xi,\eta)\big\rangle.$$
We write now $h(\xi,\eta)$ as
(5.7)
$$h(\xi,\eta) = \frac{\operatorname{Re}(\eta)+\operatorname{Re}\big[e(\xi)q(\xi,\eta)e(-\xi)\big]}{\big[\operatorname{Re}(\eta)+\operatorname{Re}[e(\xi)q(\xi,\eta)e(-\xi)]\big]^2+\big[\operatorname{Im}(\eta)+\operatorname{Im}[e(\xi)q(\xi,\eta)e(-\xi)]\big]^2}$$
and use the expressions (5.5), (5.6) to analytically continue $h(\xi,\eta)$, $\xi\in\mathbf{R}^d$, to complex $\xi\in\mathbf{C}^d$. Note now that it follows from our proof of (5.1) that for any $\delta>0$ there exists a constant $C(\lambda,\Lambda,d,\delta)>0$ depending only on $\lambda,\Lambda,d,\delta$ such that if $\operatorname{Re}(\eta)>C(\lambda,\Lambda,d,\delta)\varepsilon^2$ then the function $\psi_k(\xi,\eta)$ of (2.2) from $\mathbf{R}^d$ to $L^2(\Omega)$ can be analytically continued to the strip $\{\xi\in\mathbf{C}^d : |\operatorname{Im}(\xi)|<\varepsilon\}$ and
(5.8)
$$\Big\langle\Big|e^{-i\xi\cdot e_j}[\partial_j+e_j(\xi)]\psi_k(\xi,\eta)-e^{-i\operatorname{Re}(\xi)\cdot e_j}[\partial_j+e_j(\operatorname{Re}(\xi))]\psi_k(\operatorname{Re}(\xi),\eta)\Big|^2\Big\rangle \le \delta^2,$$
$$\big\langle|\psi_k(\xi,\eta)-\psi_k(\operatorname{Re}(\xi),\eta)|^2\big\rangle \le \delta^2,\qquad 1\le j,k\le d,\ |\operatorname{Im}(\xi)|<\varepsilon,\ \operatorname{Re}(\eta)>C(\lambda,\Lambda,d,\delta)\varepsilon^2.$$
It is easy to see from this and (5.5), (5.6), (5.7) that there is a constant $C(\lambda,\Lambda,d,\delta)$ depending only on $\lambda,\Lambda,d,\delta$ such that
(5.9)
$$|h(\xi,\eta)| \le 2\min\Big[\frac{1}{\operatorname{Re}(\eta)+\lambda|e(\operatorname{Re}(\xi))|^2},\ \frac{\operatorname{Re}(\eta)+4\Lambda|e(\operatorname{Re}(\xi))|^2}{[\operatorname{Im}(\eta)]^2}\Big],\qquad |\operatorname{Im}(\xi)|<\varepsilon,\ \operatorname{Re}(\eta)>C(\lambda,\Lambda,d,\delta)\varepsilon^2.$$
It is clear now from this last inequality that $h(\xi,\eta)$ as defined by (5.7), (5.6), (5.5) can be analytically continued to $|\operatorname{Im}(\xi)|<\varepsilon$ and that (5.4) holds.
Next consider the function $k(\xi,\eta)$ defined by (2.18). We shall show that there is a constant $C_{\lambda,\Lambda,d}$ depending only on $\lambda,\Lambda,d$ such that the function $\partial k(\xi,\eta)/\partial[\operatorname{Im}(\eta)]$ can be analytically continued to the strip $|\operatorname{Im}(\xi)|<\varepsilon$, provided $\operatorname{Re}(\eta)>C_{\lambda,\Lambda,d}\,\varepsilon^2$. Furthermore there is the inequality
(5.10)
$$\big|\partial k(\xi,\eta)/\partial[\operatorname{Im}(\eta)]\big| \le \frac{2}{|\operatorname{Im}(\eta)|^2},\qquad |\operatorname{Im}(\xi)|<\varepsilon,\ \operatorname{Re}(\eta)>C_{\lambda,\Lambda,d}\,\varepsilon^2.$$
To see this we use the identity,
$$\frac{\partial k(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]} = \operatorname{Re}\,\frac{1+\big\langle\psi(-\xi,\eta)\psi(\xi,\eta)\big\rangle}{\big[\eta+e(\xi)q(\xi,\eta)e(-\xi)\big]^2},\qquad \xi\in\mathbf{R}^d.$$
The inequality (5.10) follows now from the proof of (2.27) and the inequalities (5.8). The proof of the lemma is completed by using the representation (2.17) with $\operatorname{Re}(\eta)=C_{\lambda,\Lambda,d}\,\varepsilon^2$ and the inequalities (5.4), (5.10).
Corollary 5.1. There are constants $C_1(d,\lambda,\Lambda)$ and $C_2(d,\lambda,\Lambda)>0$ depending only on $d,\lambda,\Lambda$ such that
$$0\le G_a(x,t) \le C_1(d,\lambda,\Lambda)\exp\big[-C_2(d,\lambda,\Lambda)\min\{|x|,\,|x|^2/t\}\big],\qquad x\in\mathbf{Z}^d,\ t>0.$$

Proof. Follows from Lemma 5.1 on writing
$$G_a(x,t) = \frac{1}{(2\pi)^d}\int_{[-\pi,\pi]^d}\hat G_a(\xi,t)\,e^{-ix\cdot\xi}\,d[\operatorname{Re}(\xi)]$$
and deforming the contour of integration to $\xi\in\mathbf{C}^d$ with $|\operatorname{Im}(\xi)|=\min[1,\,|x|/2C_2t]$, where $C_2=C_2(\lambda,\Lambda,d)$ is the constant in the statement of Lemma 5.1.
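The choice of contour in the proof above produces the stated exponential factor by elementary optimization; a sketch:

```latex
|G_a(x,t)| \le C_1(\lambda,\Lambda,d)\,
  \exp\!\big[C_2(\lambda,\Lambda,d)\,\varepsilon^2 t-\varepsilon|x|\big],
  \qquad \varepsilon=|\mathrm{Im}(\xi)| .
```

If $|x|\le 2C_2t$ one takes $\varepsilon=|x|/2C_2t$, giving the exponent $-|x|^2/4C_2t$; if $|x|>2C_2t$ one takes $\varepsilon=1$, giving $C_2t-|x|\le -|x|/2$. Together these yield the factor $\exp[-C\min\{|x|,|x|^2/t\}]$.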
Next we extend Lemma 2.3 to $\xi\in\mathbf{C}^d$.

Lemma 5.2. Let $\varepsilon$ satisfy $0\le\varepsilon<1$ and $\hat G_a(\xi,t)$, $\xi\in\mathbf{C}^d$, $|\operatorname{Im}(\xi)|<\varepsilon$, be the function of Lemma 5.1. Then for any $\alpha$, $0<\alpha<1$, there is a constant $C_1(\lambda,\Lambda,d,\alpha)$ depending only on $\lambda,\Lambda,d,\alpha$ and a constant $C_2(\lambda,\Lambda,d)$ depending only on $\lambda,\Lambda,d$ such that
$$|\hat G_a(\xi,t)| \le \frac{C_1(\lambda,\Lambda,d,\alpha)}{[1+|e(\operatorname{Re}(\xi))|^2 t]^{\alpha}}\exp[C_2(\lambda,\Lambda,d)\varepsilon^2 t],\qquad |\operatorname{Im}(\xi)|<\varepsilon,\ t>0.$$
Proof. Observe that one may choose $C_{\lambda,\Lambda,d}$ sufficiently large, depending only on $\lambda,\Lambda,d$, such that both (5.10) holds and the inequality
(5.11)
$$\big|\partial k(\xi,\eta)/\partial[\operatorname{Im}(\eta)]\big| \le \frac{2(1+\Lambda)}{|e(\operatorname{Re}(\xi))|^2\,|\eta|},\qquad |\operatorname{Im}(\xi)|<\varepsilon,\ \operatorname{Re}(\eta)>C_{\lambda,\Lambda,d}\,\varepsilon^2.$$
The inequality (5.11) is analogous to (2.30) and is proved using (5.8). We conclude then from (5.10), (5.8), just as we did in Lemma 2.3, that the LHS of (2.28) is bounded by $C_1(\lambda,\Lambda,d,\alpha)/[1+|e(\operatorname{Re}(\xi))|^2t]^{\alpha}$ if $|\operatorname{Im}(\xi)|<\varepsilon$, provided $\operatorname{Re}(\eta)>C_{\lambda,\Lambda,d}\,\varepsilon^2$. Since we can do similar estimates on $\partial h(\xi,\eta)/\partial[\operatorname{Im}(\eta)]$ the result follows just as in Lemma 2.3.
Corollary 5.2. The function $G_a(x,t)$ satisfies the inequality
$$0\le G_a(x,t) \le \frac{C_1(\lambda,\Lambda)}{\sqrt{1+t}}\exp\big[-C_2(\lambda,\Lambda)\min\{|x|,\,|x|^2/t\}\big]\qquad\text{if } d=1,$$
where the constants $C_1(\lambda,\Lambda)$ and $C_2(\lambda,\Lambda)$ depend only on $\lambda,\Lambda$. For $d>1$ it satisfies an inequality,
$$0\le G_a(x,t) \le \frac{C_1(\lambda,\Lambda,d,\alpha)}{1+t^{\alpha}}\exp\big[-C_2(\lambda,\Lambda,d)\min\{|x|,\,|x|^2/t\}\big]$$
for any $\alpha$, $0<\alpha<1$. The constant $C_1(\lambda,\Lambda,d,\alpha)$ depends only on $\lambda,\Lambda,d,\alpha$ and $C_2(\lambda,\Lambda,d)$ only on $\lambda,\Lambda,d$.

Proof. Same as for Corollary 4.1 on using the method of proof of Corollary 5.1 and Lemma 5.2.
Next we generalize Lemma 2.4.

Lemma 5.3. Let $\varepsilon$ satisfy $0<\varepsilon<1$ and $\hat G_a(\xi,t)$, $\xi\in\mathbf{C}^d$, $|\operatorname{Im}(\xi)|<\varepsilon$, be the function of Lemma 5.1. Then $\hat G_a(\xi,t)$ is differentiable for $t>0$. For any $\alpha$, $0<\alpha<1$, there is a constant $C_1(\lambda,\Lambda,d,\alpha)$ depending only on $\lambda,\Lambda,d,\alpha$ and a constant $C_2(\lambda,\Lambda,d)$ depending only on $\lambda,\Lambda,d$ such that
$$\Big|\frac{\partial\hat G_a(\xi,t)}{\partial t}\Big| \le \frac{C_1(\lambda,\Lambda,d,\alpha)}{t\,[1+|e(\operatorname{Re}(\xi))|^2t]^{\alpha}}\exp[C_2(\lambda,\Lambda,d)\varepsilon^2 t],\qquad |\operatorname{Im}(\xi)|<\varepsilon,\ t>0.$$
Proof. In analogy to the proof of the inequality (2.41) we see that there are constants $C_1(\lambda,\Lambda,d)>0$ and $C_2(\lambda,\Lambda,d)>0$ such that
(5.12)
$$\big|\partial h(\xi,\eta)/\partial[\operatorname{Im}(\eta)]\big| \le \frac{C_1(\lambda,\Lambda,d)}{|\operatorname{Im}(\eta)|}\,\min\Big[\frac{1}{\operatorname{Re}(\eta)+|e(\operatorname{Re}(\xi))|^2},\ \frac{\operatorname{Re}(\eta)+|e(\operatorname{Re}(\xi))|^2}{|\operatorname{Im}(\eta)|^2}\Big],\qquad |\operatorname{Im}(\xi)|<\varepsilon,\ \operatorname{Re}(\eta)>C_2(\lambda,\Lambda,d)\varepsilon^2.$$
It follows then just as in Lemma 2.4 that
(5.13)
$$\Big|\frac{\partial}{\partial t}\int_0^\infty h(\xi,\eta)\cos[\operatorname{Im}(\eta)t]\,d[\operatorname{Im}(\eta)]\Big| \le C_{\lambda,\Lambda,d}/t,\qquad |\operatorname{Im}(\xi)|<\varepsilon.$$
We can also see that there are constants $C_1(\lambda,\Lambda,d)>0$ and $C_2(\lambda,\Lambda,d)>0$ such that
(5.14)
$$\Big|\frac{\partial^2 h(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]^2}\Big| \le \frac{C_1(\lambda,\Lambda,d)}{|\operatorname{Im}(\eta)|^2}\,\min\Big[\frac{1}{\operatorname{Re}(\eta)+|e(\operatorname{Re}(\xi))|^2},\ \frac{1}{|\operatorname{Im}(\eta)|}\Big],\qquad |\operatorname{Im}(\xi)|<\varepsilon,\ \operatorname{Re}(\eta)>C_2(\lambda,\Lambda,d)\varepsilon^2.$$
We conclude from this last inequality and (5.13) that for any $\alpha>0$ there is a constant $C(\lambda,\Lambda,d,\alpha)$ such that
$$\Big|\frac{\partial}{\partial t}\int_0^\infty h(\xi,\eta)\cos[\operatorname{Im}(\eta)t]\,d[\operatorname{Im}(\eta)]\Big| \le \frac{C(\lambda,\Lambda,d,\alpha)}{t\,[1+|e(\operatorname{Re}(\xi))|^2t]^{\alpha}},\qquad |\operatorname{Im}(\xi)|<\varepsilon,$$
provided $\operatorname{Re}(\eta)>C_2(\lambda,\Lambda,d)\varepsilon^2$. Arguing similarly we see also that
$$\Big|\frac{\partial}{\partial t}\Big[\frac{1}{t}\int_0^\infty\frac{\partial k(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]}\{1-\cos[\operatorname{Im}(\eta)t]\}\,d[\operatorname{Im}(\eta)]\Big]\Big| \le \frac{C(\lambda,\Lambda,d,\alpha)}{t\,[1+|e(\operatorname{Re}(\xi))|^2t]^{\alpha}},\qquad |\operatorname{Im}(\xi)|<\varepsilon,\ \operatorname{Re}(\eta)>C(\lambda,\Lambda,d)\varepsilon^2.$$
The result of the lemma follows from these last two inequalities and
$$\frac{\partial}{\partial t}\exp[\operatorname{Re}(\eta)t] \le \frac{1}{t}\exp[2\operatorname{Re}(\eta)t].$$
Corollary 5.3. The function $G_a(x,t)$ is differentiable with respect to $t$ for $t>0$ and satisfies the inequality
$$\Big|\frac{\partial G_a(x,t)}{\partial t}\Big| \le \frac{C_1(\lambda,\Lambda)}{t\sqrt{1+t}}\exp\big[-C_2(\lambda,\Lambda)\min\{|x|,\,|x|^2/t\}\big]\qquad\text{if } d=1,$$
where the constants $C_1(\lambda,\Lambda)$ and $C_2(\lambda,\Lambda)$ depend only on $\lambda,\Lambda$. For $d>1$ it satisfies an inequality,
$$\Big|\frac{\partial G_a(x,t)}{\partial t}\Big| \le \frac{C_1(\lambda,\Lambda,d,\alpha)}{t\,[1+t^{\alpha}]}\exp\big[-C_2(\lambda,\Lambda,d)\min\{|x|,\,|x|^2/t\}\big]$$
for any $\alpha$, $0<\alpha<1$. The constant $C_1(\lambda,\Lambda,d,\alpha)$ depends only on $\lambda,\Lambda,d,\alpha$ and $C_2(\lambda,\Lambda,d)$ only on $\lambda,\Lambda,d$.

Proof. Same as for Corollary 5.2 on using Lemma 5.3.
Lemma 5.4. There are constants $C_1(\lambda,\Lambda,d)$ and $C_2(\lambda,\Lambda,d)$ depending only on $\lambda,\Lambda,d$ such that
$$\Big|\frac{\partial\hat G_a(\xi,t)}{\partial t}\Big| \le C_1(\lambda,\Lambda,d)\big[|e(\operatorname{Re}(\xi))|^2+\varepsilon^2\big]\exp[C_2(\lambda,\Lambda,d)\varepsilon^2 t],\qquad |\operatorname{Im}(\xi)|<\varepsilon,\ t>0.$$
Proof. In analogy to the inequalities (2.49) we have from (5.9), (5.12) the inequalities
(5.15)
$$\frac{1}{t}\int_0^\infty|h(\xi,\eta)|\{1-\cos[\operatorname{Im}(\eta)t]\}\,d[\operatorname{Im}(\eta)] \le C_1(\lambda,\Lambda,d)\big[|e(\operatorname{Re}(\xi))|^2+\operatorname{Re}(\eta)\big],$$
$$\frac{1}{t}\int_0^\infty\Big|\frac{\partial h(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]}\Big|\,|\operatorname{Im}(\eta)|\{1-\cos[\operatorname{Im}(\eta)t]\}\,d[\operatorname{Im}(\eta)] \le C_1(\lambda,\Lambda,d)\big[|e(\operatorname{Re}(\xi))|^2+\operatorname{Re}(\eta)\big],$$
provided $\xi\in\mathbf{C}^d$ and $\eta\in\mathbf{C}$ satisfy
(5.16)
$$|\operatorname{Im}(\xi)|<\varepsilon,\qquad \operatorname{Re}(\eta)>C_2(\lambda,\Lambda,d)\varepsilon^2,$$
where $C_1(\lambda,\Lambda,d)$ and $C_2(\lambda,\Lambda,d)$ depend only on $\lambda,\Lambda,d$.
Next we need to deal with the integral (2.50) in $k(\xi,\eta)$. Observe first from (5.8) that there are constants $C_1(\lambda,\Lambda,d)$ and $C_2(\lambda,\Lambda,d)$ depending only on $\lambda,\Lambda,d$ such that if (5.16) holds then
$$|\operatorname{Im}(\eta)|\,|k(\xi,\eta)| \le C_1(\lambda,\Lambda,d),\qquad |\operatorname{Im}(\eta)|\le\operatorname{Re}(\eta)+|e(\operatorname{Re}(\xi))|^2.$$
We can write $k(\xi,\eta)$ as in Lemma 2.5 to be
$$k(\xi,\eta) = \frac{-b(\xi,\eta)}{a(\xi,\eta)^2+b(\xi,\eta)^2}$$
where $a(\xi,\eta)$ is the sum of $\operatorname{Re}(\eta)$ and the RHS of (5.5), and $b(\xi,\eta)$ is the sum of $\operatorname{Im}(\eta)$ and the RHS of (5.6). We have now from (5.8) that
$$|b(\xi,\eta)-b(\operatorname{Re}(\xi),\eta)| \le \frac{1}{10}|\operatorname{Im}(\eta)|,\qquad |\operatorname{Im}(\eta)|>\operatorname{Re}(\eta)+|e(\operatorname{Re}(\xi))|^2,$$
provided (5.16) holds and the constant $C_2(\lambda,\Lambda,d)$ is sufficiently large. We also have that
$$|a(\xi,\eta)-a(\operatorname{Re}(\xi),\eta)| \le \frac{1}{10}\,a(\operatorname{Re}(\xi),\eta)$$
provided (5.16) holds and $C_2(\lambda,\Lambda,d)$ is sufficiently large. It follows from these last two inequalities that
$$\int_{\operatorname{Re}(\eta)+|e(\operatorname{Re}(\xi))|^2}^{\infty}\frac{|\operatorname{Im}(\eta)|}{|b(\xi,\eta)|}\cdot\frac{|a(\xi,\eta)|^2}{\big|a(\xi,\eta)^2+b(\xi,\eta)^2\big|}\,d[\operatorname{Im}(\eta)] \le C(\lambda,\Lambda,d)\big[\operatorname{Re}(\eta)+|e(\operatorname{Re}(\xi))|^2\big]$$
for some constant $C(\lambda,\Lambda,d)$, provided (5.16) holds with sufficiently large constant $C_2(\lambda,\Lambda,d)$. Now the following lemma implies that
$$\int_0^\infty\big\langle|\psi(\xi,\eta)|^2\big\rangle\,d[\operatorname{Im}(\eta)] \le C(\lambda,\Lambda,d)\big[|e(\operatorname{Re}(\xi))|^2+\varepsilon^2\big]$$
for some constant $C(\lambda,\Lambda,d)$ depending only on $\lambda,\Lambda,d$, provided (5.16) holds with sufficiently large $C_2(\lambda,\Lambda,d)$. We conclude then from these last inequalities that
(5.17)
$$\limsup_{m\to\infty}\Big|\int_0^{2\pi m/t} k(\xi,\eta)\operatorname{Im}(\eta)\cos[\operatorname{Im}(\eta)t]\,d[\operatorname{Im}(\eta)]\Big| \le C(\lambda,\Lambda,d)\big[\operatorname{Re}(\eta)+|e(\operatorname{Re}(\xi))|^2\big]$$
for some constant $C(\lambda,\Lambda,d)$ provided (5.16) holds. The lemma follows now from (5.15), (5.17).
Lemma 5.5. Let $\psi_k(\xi,\eta)$ be the function defined by (2.2), and $0<\varepsilon<1$. There is a constant $C_1(\lambda,\Lambda,d)$ depending on $\lambda,\Lambda,d$ such that if $\operatorname{Re}(\eta)>C_1(\lambda,\Lambda,d)\varepsilon^2$ then $\psi_k(\xi,\eta)$, regarded as a mapping from $\mathbf{R}^d$ to $L^2(\Omega)$, can be analytically continued to $\{\xi\in\mathbf{C}^d : |\operatorname{Im}(\xi)|<\varepsilon\}$. Furthermore, there is a constant $C_2(\lambda,\Lambda,d)$ depending only on $\lambda,\Lambda,d$ such that
$$\int_0^\infty\big\langle|\psi_k(\xi,\eta)|^2\big\rangle\,d[\operatorname{Im}(\eta)] \le C_2(\lambda,\Lambda,d),\qquad \operatorname{Re}(\eta)>C_1(\lambda,\Lambda,d)\varepsilon^2,\ |\operatorname{Im}(\xi)|<\varepsilon.$$
Proof. We proceed as in Lemma 2.6. Let $\psi_k(t,\xi)$, $t>0$, be the solution to the initial value problem,
(5.18)
$$\frac{\partial\psi_k(t,\xi)}{\partial t} + [L+\operatorname{Re}(\eta)]\,\psi_k(t,\xi) = 0,\quad t>0,$$
$$\psi_k(0,\xi) + \sum_{j=1}^d[\partial_j+e_j(-\xi)]\,e^{i\xi\cdot e_j}\big[a_{kj}(\cdot)-\langle a_{kj}(\cdot)\rangle\big] = 0.$$
It is clear that (5.18) is soluble for $\xi\in\mathbf{R}^d$, $\operatorname{Re}(\eta)>0$, and
(5.19)
$$\psi_k(\xi,\eta) = \int_0^\infty e^{-i\operatorname{Im}(\eta)t}\,\psi_k(t,\xi)\,dt.$$
Taking $\operatorname{Re}(\eta)>C_1(\lambda,\Lambda,d)\varepsilon^2$ for sufficiently large $C_1(\lambda,\Lambda,d)$ it is clear that one can solve (5.18) for $\xi\in\mathbf{C}^d$ with $|\operatorname{Im}(\xi)|<\varepsilon$, and the resulting function is analytic in $\xi$. Evidently the corresponding function $\psi_k(\xi,\eta)$ defined by (5.19) is analytic in $\xi$ for $|\operatorname{Im}(\xi)|<\varepsilon$. Furthermore, from the Plancherel Theorem we have that
$$\int_0^\infty\big\langle|\psi_k(\xi,\eta)|^2\big\rangle\,d[\operatorname{Im}(\eta)] \le 2\pi\int_0^\infty\big\langle|\psi_k(t,\xi)|^2\big\rangle\,dt.$$
Now for $\xi\in\mathbf{C}^d$ let $L^*_\xi$ be the adjoint of $L_\xi$ acting on $L^2(\Omega)$. Thus $L^*_\xi=L_{\bar\xi}$, where $\bar\xi$ is the complex conjugate of $\xi$. Let $\varphi_k(t,\xi)$ be the solution of the equation
$$[L^*+\operatorname{Re}(\eta)]\,\varphi_k(t,\xi) = \psi_k(t,\xi),\qquad t>0,\ |\operatorname{Im}(\xi)|<\varepsilon.$$
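The Plancherel step above can be sanity-checked on the scalar analogue of (5.18), where $L+\operatorname{Re}(\eta)$ is replaced by a positive number $\mu$, so $\psi(t)=\psi_0e^{-\mu t}$ and (5.19) becomes $\hat\psi(\omega)=\psi_0/(\mu+i\omega)$. An illustration with made-up values, not part of the proof:

```python
import numpy as np

# Scalar analogue: psi(t) = psi0 * exp(-mu t); its transform (5.19) is
# psihat(w) = psi0 / (mu + i w).  Check the one-sided Plancherel bound
#   int_0^inf |psihat(w)|^2 dw  <=  2*pi * int_0^inf |psi(t)|^2 dt.
mu, psi0 = 0.7, 1.3
w = np.linspace(0.0, 2000.0, 500_001)
f = np.abs(psi0 / (mu + 1j * w)) ** 2
lhs = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(w)))  # trapezoid rule
rhs = 2.0 * np.pi * psi0 ** 2 / (2.0 * mu)                # 2*pi * int_0^inf psi^2 dt
assert lhs <= rhs
assert abs(lhs - rhs / 2.0) < 1e-2 * rhs  # here the one-sided integral is exactly half
```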
It follows from (5.18) that
(5.20)
$$\Big\langle\varphi_k(t,\xi)\,\frac{\partial\psi_k(t,\xi)}{\partial t}\Big\rangle + \big\langle|\psi_k(t,\xi)|^2\big\rangle = 0,\qquad t>0.$$
We also have that
(5.21)
$$\operatorname{Re}\Big\langle\varphi_k(t,\xi)\,\frac{\partial\psi_k}{\partial t}\Big\rangle = \operatorname{Re}\Big\langle[L^*+\operatorname{Re}(\eta)]\varphi_k\,\frac{\partial\varphi_k}{\partial t}\Big\rangle
= \operatorname{Re}\Big\langle\frac{\partial\varphi_k}{\partial t}\,[L+\operatorname{Re}(\eta)]\varphi_k\Big\rangle + \operatorname{Re}\Big\langle\frac{\partial\varphi_k}{\partial t}\,[L^*-L]\varphi_k\Big\rangle.$$
We conclude that
$$\Big\langle\varphi_k\,\frac{\partial\psi_k}{\partial t}\Big\rangle = \frac{1}{2}\frac{\partial}{\partial t}\operatorname{Re}\big\langle[L+\operatorname{Re}(\eta)]\varphi_k\,\varphi_k\big\rangle + \frac{1}{2}\operatorname{Re}\Big\langle\frac{\partial\varphi_k}{\partial t}\,[L^*-L]\varphi_k\Big\rangle.$$
Observe now that
$$\Big\langle\frac{\partial\varphi_k}{\partial t}\,[L^*-L]\varphi_k\Big\rangle = -\big\langle\psi_k\,[L+\operatorname{Re}(\eta)][L^*+\operatorname{Re}(\eta)]^{-1}\,[L^*-L]\,[L^*+\operatorname{Re}(\eta)]^{-1}\psi_k\big\rangle.$$
Hence if $C_1(\lambda,\Lambda,d)$ is sufficiently large one has
$$\Big|\Big\langle\frac{\partial\varphi_k}{\partial t}\,[L^*-L]\varphi_k\Big\rangle\Big| \le \big\langle|\psi_k(t,\xi)|^2\big\rangle,\qquad t>0,\ |\operatorname{Im}(\xi)|<\varepsilon.$$
Putting this inequality together with (5.20), (5.21) we have that
$$\frac{1}{2}\frac{\partial}{\partial t}\operatorname{Re}\big\langle[L+\operatorname{Re}(\eta)]\varphi_k\,\varphi_k\big\rangle + \frac{1}{2}\big\langle|\psi_k(t,\xi)|^2\big\rangle \le 0,\qquad |\operatorname{Im}(\xi)|<\varepsilon,\ t>0.$$
Integrating this inequality with respect to $t$ we conclude
$$\int_0^\infty\big\langle|\psi_k(t,\xi)|^2\big\rangle\,dt \le \operatorname{Re}\big\langle\varphi_k(0,\xi)\,\psi_k(0,\xi)\big\rangle.$$
Arguing as in Lemma 2.6 we see that the RHS of this last inequality is bounded by a constant $C(\lambda,\Lambda,d)$ depending only on $\lambda,\Lambda,d$.
Corollary 5.4. There are constants $C_1(d,\lambda,\Lambda)$ and $C_2(d,\lambda,\Lambda)>0$ depending only on $d,\lambda,\Lambda$ such that
$$\Big|\frac{\partial G_a(x,t)}{\partial t}\Big| \le C_1(d,\lambda,\Lambda)\exp\big[-C_2(d,\lambda,\Lambda)\min\{|x|,\,|x|^2/t\}\big],\qquad x\in\mathbf{Z}^d,\ t>0.$$

Proof. Same as for Corollary 5.1 on using Lemma 5.4.
Corollary 5.2 proves (1.10) for $d=1$. Following the argument of Lemma 4.1 we shall use our methods to prove (1.10) for $d=2$.

Lemma 5.6. For $d=2$ there are positive constants $C_1(\lambda,\Lambda)$, $C_2(\lambda,\Lambda)$ depending only on $\lambda,\Lambda$ such that
$$0\le G_a(x,t)\le \frac{C_1(\lambda,\Lambda)}{1+t}\exp\big[-C_2(\lambda,\Lambda)\min\{|x|,\,|x|^2/t\}\big],\qquad x\in\mathbf{Z}^2,\ t>0.$$
Proof. It will be sufficient to show that there are constants $C_1(\lambda,\Lambda)$, $C_2(\lambda,\Lambda)$ such that
(5.22)
$$\int_{[-\pi,\pi]^2}\big|\hat G_a(\xi,t)\big|\,d[\operatorname{Re}(\xi)] \le \frac{C_1(\lambda,\Lambda)}{1+t}\exp[C_2(\lambda,\Lambda)\varepsilon^2 t],\qquad |\operatorname{Im}(\xi)|<\varepsilon,\ t>0.$$
In view of Lemma 5.1 it will be sufficient to prove (5.22) for $t\ge 1$. It is also evident from Lemma 5.1 that
$$\int_{|e(\operatorname{Re}(\xi))|<1/\sqrt t}\big|\hat G_a(\xi,t)\big|\,d[\operatorname{Re}(\xi)] \le \frac{C_1(\lambda,\Lambda)}{t}\exp[C_2(\lambda,\Lambda)\varepsilon^2 t],\qquad |\operatorname{Im}(\xi)|<\varepsilon,\ t>1,$$
whence we are left to show that
(5.23)
$$\int_{|e(\operatorname{Re}(\xi))|>1/\sqrt t}\big|\hat G_a(\xi,t)\big|\,d[\operatorname{Re}(\xi)] \le \frac{C_1(\lambda,\Lambda)}{t}\exp[C_2(\lambda,\Lambda)\varepsilon^2 t],\qquad |\operatorname{Im}(\xi)|<\varepsilon,\ t>1.$$
Now the integral on the LHS of (5.23) is a sum of an integral in $h(\xi,\eta)$ and $k(\xi,\eta)$. We first consider the integral in $h(\xi,\eta)$. Following the argument of Lemma 4.1 and using (5.14) we see that it is sufficient to show that
$$\int_{|e(\operatorname{Re}(\xi))|>1/\sqrt t} d[\operatorname{Re}(\xi)]\,\frac{1}{t^2}\int_0^{|e(\operatorname{Re}(\xi))|^2}\Big|\frac{\partial^2 h(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]^2}\Big|\{1-\cos[\operatorname{Im}(\eta)t]\}\,d[\operatorname{Im}(\eta)]$$
$$\le \frac{C_1(\lambda,\Lambda)}{t},\qquad |\operatorname{Im}(\xi)|<\varepsilon,\ \operatorname{Re}(\eta)>C_2(\lambda,\Lambda)\varepsilon^2,$$
for sufficiently large constant $C_2(\lambda,\Lambda)$ depending only on $\lambda,\Lambda$. Again, arguing as in Lemma 4.1, we see it is sufficient to show that
(5.24)
$$\int_{|e(\operatorname{Re}(\xi))|>1/\sqrt t} d[\operatorname{Re}(\xi)]\,\frac{1}{t^2}\int_0^{|e(\operatorname{Re}(\xi))|^2}\frac{\big\langle|\psi(\xi,\eta)|^2\big\rangle}{|e(\operatorname{Re}(\xi))|^4\,\operatorname{Im}(\eta)}\{1-\cos[\operatorname{Im}(\eta)t]\}\,d[\operatorname{Im}(\eta)]$$
$$\le \frac{C_1(\lambda,\Lambda)}{t},\qquad |\operatorname{Im}(\xi)|<\varepsilon,\ \operatorname{Re}(\eta)>C_2(\lambda,\Lambda)\varepsilon^2.$$
Define for $2<p<\infty$ a function $h_p(\xi,\eta)$ by
$$h_p(\xi,\eta) = |\operatorname{Im}(\eta)|^{1/2-1/p}\Big[\sum_{k=1}^{2}\big\langle|\psi_k(\xi,\eta)|^2\big\rangle\Big]^{1/2},\qquad \xi\in\mathbf{C}^d,\ |\operatorname{Im}(\xi)|<\varepsilon.$$
Suppose now $\operatorname{Im}(\xi)\in\mathbf{R}^2$ is fixed and regard $h_p(\xi,\eta)$ as a function of $\operatorname{Re}(\xi)\in[-\pi,\pi]^2$. Then one can see just as in Lemma 4.1 that if $|\operatorname{Im}(\xi)|<\varepsilon$ and $\operatorname{Re}(\eta)>C_2(\lambda,\Lambda)\varepsilon^2$ for sufficiently large constant $C_2(\lambda,\Lambda)$ depending only on $\lambda,\Lambda$, then $h_p\in L^p_w([-\pi,\pi]^2)$ and there is a constant $C_{p,\lambda,\Lambda}$ such that
$$\|h_p\|_{p,w}\le C_{p,\lambda,\Lambda},\qquad |\operatorname{Im}(\xi)|<\varepsilon,\ \operatorname{Re}(\eta)>C_2(\lambda,\Lambda)\varepsilon^2.$$
The inequality (5.24) follows from this last inequality just as in Lemma 4.1. Hence we have found an appropriate bound for the contribution to the LHS of (5.23) from the integral in $h(\xi,\eta)$. The contribution from the integral in $k(\xi,\eta)$ can be estimated similarly.
We can similarly generalize Lemma 4.2 to obtain the following.

Lemma 5.7. For $d=2$ there are positive constants $C_1(\lambda,\Lambda)$, $C_2(\lambda,\Lambda)$ depending only on $\lambda,\Lambda$ such that
$$\Big|\frac{\partial G_a(x,t)}{\partial t}\Big| \le \frac{C_1(\lambda,\Lambda)}{1+t^2}\exp\big[-C_2(\lambda,\Lambda)\min\{|x|,\,|x|^2/t\}\big],\qquad x\in\mathbf{Z}^2,\ t>0.$$
Next we wish to consider derivatives of $G_a(x,t)$ with respect to $x\in\mathbf{Z}^d$. If $d=1$ then it is clear from Corollary 5.2 that
$$|\nabla_i G_a(x,t)| \le \frac{C_1(\lambda,\Lambda)}{\sqrt{1+t}}\exp\big[-C_2(\lambda,\Lambda)\min\{|x|,\,|x|^2/t\}\big],\qquad x\in\mathbf{Z},\ t>0.$$
We can use Lemma 5.2 to obtain an improvement on this inequality.

Lemma 5.8. Suppose $d=1$ and $0<\delta<1$. Then there exist constants $C_1(\lambda,\Lambda,\delta)$ depending only on $\lambda,\Lambda,\delta$ and $C_2(\lambda,\Lambda)$ depending only on $\lambda,\Lambda$ such that
$$|\nabla_i G_a(x,t)| \le \frac{C_1(\lambda,\Lambda,\delta)}{(1+t)^{(1+\delta)/2}}\exp\big[-C_2(\lambda,\Lambda)\min\{|x|,\,|x|^2/t\}\big],\qquad x\in\mathbf{Z},\ t>0.$$
Proof. The result follows from Lemma 5.2 and the observation that
$$\int_{-\pi}^{\pi}\frac{|e(\operatorname{Re}(\xi))|^{\delta}}{[1+|e(\operatorname{Re}(\xi))|^2t]^{\alpha'}}\,d[\operatorname{Re}(\xi)] \le \frac{C(\alpha',\delta)}{[1+t]^{(1+\delta)/2}}$$
for any $\alpha'$ satisfying $(1+\delta)/2<\alpha'<1$, where $C(\alpha',\delta)$ is a constant depending only on $\alpha',\delta$.
We can improve Lemma 5.8 by using the techniques developed in Section 3.

Lemma 5.9. Suppose $d = 1$ and $0 < \delta < 1$. Then there are constants $C_1(\lambda,\Lambda)$, $C_2(\lambda,\Lambda)$ depending only on $\lambda,\Lambda$ and a constant $C_3(\lambda,\Lambda,\delta)$ depending on $\lambda,\Lambda,\delta$ such that
(5.25)
\[
|\nabla_i G_a(x,t)| \le \frac{C_1(\lambda,\Lambda)}{1+t}\,\exp\big[-C_2(\lambda,\Lambda)\min\{|x|,\,|x|^2/t\}\big],
\]
(5.26)
\[
|\nabla_i\nabla_j G_a(x,t)| \le \frac{C_3(\lambda,\Lambda,\delta)}{1+t^{1+\delta/2}}\,\exp\big[-C_2(\lambda,\Lambda)\min\{|x|,\,|x|^2/t\}\big], \qquad x\in\mathbf{Z},\ t>0 .
\]
Proof. It will be sufficient to show that for any $\delta$, $0 < \delta < 1$, there are constants $C_3(\lambda,\Lambda,\delta)$ and $C_2(\lambda,\Lambda)$ such that
(5.27)
\[
\int_{[-\pi,\pi]} |e(\operatorname{Re}\xi)|^{1+\delta}\,\big|\hat G_a(\xi,t)\big|\, d[\operatorname{Re}(\xi)] \le \frac{C_3(\lambda,\Lambda,\delta)}{t^{1+\delta/2}}\,\exp\big[C_2(\lambda,\Lambda)\varepsilon^2 t\big]
\]
for $|\operatorname{Im}(\xi)| < \varepsilon$, $t > 1$. Now from Lemma 5.1 we have
\[
\int_{|e(\operatorname{Re}\xi)|<1/\sqrt t} |e(\operatorname{Re}\xi)|^{1+\delta}\,\big|\hat G_a(\xi,t)\big|\, d[\operatorname{Re}(\xi)] \le \frac{C_3(\lambda,\Lambda,\delta)}{t^{1+\delta/2}}\,\exp\big[C_2(\lambda,\Lambda)\varepsilon^2 t\big]
\]
for $|\operatorname{Im}(\xi)| < \varepsilon$, so we are left to prove
(5.28)
\[
\int_{|e(\operatorname{Re}\xi)|>1/\sqrt t} |e(\operatorname{Re}\xi)|^{1+\delta}\,\big|\hat G_a(\xi,t)\big|\, d[\operatorname{Re}(\xi)] \le \frac{C_3(\lambda,\Lambda,\delta)}{t^{1+\delta/2}}\,\exp\big[C_2(\lambda,\Lambda)\varepsilon^2 t\big]
\]
for $|\operatorname{Im}(\xi)| < \varepsilon$, $t > 1$. Proceeding now as in Lemma 5.6 we write the integral on the LHS of (5.28) as an integral in $h(\xi,\eta)$ and an integral in $k(\xi,\eta)$. We first consider the integral in $h(\xi,\eta)$. If we use (5.14) we see that it is sufficient to show that
(5.29)
\[
\int_{|e(\operatorname{Re}\xi)|>1/\sqrt t} d[\operatorname{Re}(\xi)]\; |e(\operatorname{Re}\xi)|^{1+\delta}\, \frac1{t^2}\int_0^\infty |e(\operatorname{Re}\xi)|^2\,\Big|\frac{\partial^2 h(\xi,\eta)}{\partial[\operatorname{Im}(\eta)]^2}\Big|\,\{1-\cos[\operatorname{Im}(\eta)t]\}\, d[\operatorname{Im}(\eta)] \le \frac{C_3(\lambda,\Lambda,\delta)}{t^{1+\delta/2}},
\]
$|\operatorname{Im}(\xi)| < \varepsilon$, $\operatorname{Re}(\eta) > C_2(\lambda,\Lambda)\varepsilon^2$, $0 < \delta < 1$, for sufficiently large $C_2(\lambda,\Lambda)$ depending only on $\lambda,\Lambda$. Arguing as in Lemma 5.6 we see that to prove (5.29) it is sufficient to show that
(5.30)
\[
\int_{|e(\operatorname{Re}\xi)|>1/\sqrt t} d[\operatorname{Re}(\xi)]\; |e(\operatorname{Re}\xi)|^{1+\delta}\, \frac1{t^2}\int_0^\infty \frac{|e(\operatorname{Re}\xi)|^2\,\langle|\psi_1(\xi,\eta)|^2\rangle}{|e(\operatorname{Re}\xi)|^4\,\operatorname{Im}(\eta)}\,\{1-\cos[\operatorname{Im}(\eta)t]\}\, d[\operatorname{Im}(\eta)] \le \frac{C_3(\lambda,\Lambda,\delta)}{t^{1+\delta/2}},
\]
$|\operatorname{Im}(\xi)| < \varepsilon$, $\operatorname{Re}(\eta) > C_2(\lambda,\Lambda)\varepsilon^2$, $0 < \delta < 1$.
Define for $2 < p < \infty$ a function $h_p(\xi,\eta)$ by
\[
h_p(\xi,\eta) = |\operatorname{Im}(\eta)|^{1/2-1/2p}\,\langle|\psi_1(\xi,\eta)|^2\rangle^{1/2}, \qquad \xi\in\mathbf{C}^d,\ |\operatorname{Im}(\xi)| < \varepsilon .
\]
We fix now $\operatorname{Im}(\eta) \in \mathbf{R}$ and regard $h_p(\xi,\eta)$ as a function of $\operatorname{Re}(\xi) \in [-\pi,\pi]$. Then one can see just as in Lemma 5.6 that if $|\operatorname{Im}(\xi)| < \varepsilon$ and $\operatorname{Re}(\eta) > C_2(\lambda,\Lambda)\varepsilon^2$ for sufficiently large constant $C_2(\lambda,\Lambda)$ depending only on $\lambda,\Lambda$, then $h_p \in L^p_w([-\pi,\pi])$ and there is a constant $C_{p,\lambda,\Lambda}$ such that
\[
\|h_p\|_{p,w} \le C_{p,\lambda,\Lambda}, \qquad |\operatorname{Im}(\xi)| < \varepsilon,\ \operatorname{Re}(\eta) > C_2(\lambda,\Lambda)\varepsilon^2 .
\]
Arguing as in Lemma 4.1 we see that (5.30) holds if
\[
\frac1{t^{1/2+2/p}}\int_{|e(\operatorname{Re}\xi)|>1/\sqrt t} d[\operatorname{Re}(\xi)]\;|e(\operatorname{Re}\xi)|^{\delta-1}\int_0^\infty \frac{h_p(\xi,\eta)^2\, d[\operatorname{Im}(\eta)]}{[\operatorname{Im}(\eta)]^{1/2+1/p}} \le \frac{C_3(\lambda,\Lambda,\delta)}{t^{1+\delta/2}}, \qquad t>1 .
\]
The LHS of this last expression is bounded by
\[
\frac1{t^{1/2+2/p}}\sum_{n=0}^\infty \Big[\frac{\sqrt t}{2^n}\Big]^{1-\delta}\int_0^{2^{2n+2}/t} \frac{d[\operatorname{Im}(\eta)]}{[\operatorname{Im}(\eta)]^{1/2+1/p}}\int_{|e(\operatorname{Re}\xi)|<2^{n+1}/\sqrt t} h_p(\xi,\eta)^2\, d[\operatorname{Re}(\xi)]
\]
\[
\le \frac{C_{p,\lambda,\Lambda}^2}{t^{1/2+2/p}}\sum_{n=0}^\infty \Big[\frac{\sqrt t}{2^n}\Big]^{1-\delta}\Big[\frac{2^{2n+2}}{t}\Big]^{1/2-1/p}\Big[\frac{2^{n+1}}{\sqrt t}\Big]^{1-2/p} \le \frac{C(\lambda,\Lambda,\delta)}{t^{1+\delta/2}},
\]
provided we choose $p$ to satisfy $2 < p < 4/(1+\delta)$. Hence the contribution to the LHS of (5.28) from $h(\xi,\eta)$ is bounded appropriately. Since we can similarly estimate the contribution from $k(\xi,\eta)$ we have proved (5.28) and hence (5.27). Now (5.25) follows by taking $\delta = 0$ in (5.27), and (5.26) by taking $\delta$ close to 1. $\Box$

We have proven Theorem 1.4 for $d = 1$. It is clear by now that we can use the methods developed in Section 4 to extend the results of Lemmas 5.6, 5.7, 5.9 to all dimensions $d \ge 1$.
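For orientation, the exponents obtained in Lemmas 5.6–5.9 are precisely those of the heat kernel on $\mathbf{Z}^d$; the following summary (our annotation, with generic constants $C$, $c$) records the constant-coefficient benchmark against which the results above should be read:

```latex
% Heat-kernel scaling matched by Lemmas 5.6-5.9 (generic constants C, c):
\[
\big|\nabla_i^m\,\partial_t^n\, G_a(x,t)\big|
  \;\le\; \frac{C}{(1+t)^{(d+m+2n)/2}}
  \exp\bigl[-c\min\{|x|,\,|x|^2/t\}\bigr].
\]
% For d = 2, n = 1: exponent (2+2)/2 = 2, as in Lemma 5.7.
% For d = 1, m = 1: exponent 1, attained exactly in (5.25).
% For d = 1, m = 2: exponent 3/2; (5.26) with delta < 1 comes
% arbitrarily close via the exponent 1 + delta/2.
```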
6. Proof of Theorem 1.2
For $\eta > 0$, $x \in \mathbf{R}^d$, let $G_\eta(x)$ be the Green's function which satisfies the equation
\[
-\sum_{i=1}^d \frac{\partial^2 G_\eta(x)}{\partial x_i^2} + \eta\, G_\eta(x) = \delta(x), \qquad x\in\mathbf{R}^d,
\]
where $\delta(x)$ is the Dirac $\delta$ function. Analogously to (3.4) we define an operator $T_{k,k'}$ on $L^2(\Omega)$ by
(6.1)
\[
T_{k,k'}\varphi(\omega) = -\int_{\mathbf{R}^d} dx\, \frac{\partial^2 G_\eta(x)}{\partial x_k\,\partial x_{k'}}\, e^{-ix\cdot\xi}\,\varphi(\tau_x\omega), \qquad \omega\in\Omega .
\]
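Since $\hat G_\eta(\zeta) = (|\zeta|^2+\eta)^{-1}$, the kernel in (6.1) has an explicit Fourier symbol; the following computation (our annotation, with $\zeta$ the dual variable of $x$) is the mechanism behind the boundedness claims that follow:

```latex
% Symbol of T_{k,k'} and its uniform bound (our annotation):
\[
-\,\widehat{\frac{\partial^2 G_\eta}{\partial x_k\,\partial x_{k'}}}(\zeta)
  \;=\; \frac{\zeta_k\,\zeta_{k'}}{|\zeta|^2+\eta},
\qquad
0 \;\le\; \sum_{k,k'=1}^d \frac{\zeta_k\,\zeta_{k'}}{|\zeta|^2+\eta}\,\bar v_k v_{k'}
  \;=\; \frac{|\zeta\cdot v|^2}{|\zeta|^2+\eta} \;\le\; |v|^2,
\]
% so the matrix of symbols is nonnegative definite with norm at most one,
% uniformly in the dual variable and in eta > 0.
```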
Evidently $T_{k,k'}$ is a bounded operator on $L^2(\Omega)$. Just as in Section 3 we can define operators $T_{b,\xi,\eta}$ and $T_{j,\xi,\eta}$, $j = 1,\dots,d$, associated with the operators (6.1). We then have the following.
Lemma 6.1. Let $T_{b,\xi,\eta}$ and $T_{j,\xi,\eta}$, $j = 1,\dots,d$, be the operators associated with (6.1). Then if $\|b\|_\infty < 1$ the equation (3.6) has a unique solution $\Psi_k(\xi,\eta) \in H(\Omega)$, $\xi \in \mathbf{R}^d$, $\eta > 0$, which satisfies an inequality
\[
\|\Psi_k(\xi,\eta)\| \le C(\Lambda,d)\big/\big[1 - \|b\|_\infty\big], \qquad k = 1,\dots,d,
\]
where the constant $C(\Lambda,d)$ depends only on $\Lambda$, $d$. The function $(\xi,\eta) \to \Psi_k(\xi,\eta)$ from $\mathbf{R}^d \times \mathbf{R}^+$ to $H(\Omega)$ is continuous.
Next we put $b(\cdot) = [\Lambda I_d - a(\cdot)]/\Lambda$ and define a $d\times d$ matrix $q(\xi,\eta)$ by (3.7), where $\Psi_k(\xi,\eta)$, $k = 1,2,\dots,d$, are the functions of Lemma 6.1.
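The solvability mechanism in Lemma 6.1 is the standard Neumann-series (contraction) argument: if $\|B\| < 1$ then $(I - B)\psi = y$ has a unique solution satisfying $\|\psi\| \le \|y\|/(1 - \|B\|)$. The following sketch (our illustration with a toy matrix, not the operators of the paper) checks exactly this bound:

```python
import random

def neumann_solve(B, y, iters=200):
    # psi = y + B y + B^2 y + ..., converging when the norm of B is < 1
    psi = [0.0] * len(y)
    for _ in range(iters):
        psi = [y[i] + sum(B[i][j] * psi[j] for j in range(len(y)))
               for i in range(len(y))]
    return psi

random.seed(0)
n = 4
# random matrix rescaled so its row-sum (infinity) norm equals 0.5 < 1
B = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
norm = max(sum(abs(v) for v in row) for row in B)
B = [[v * 0.5 / norm for v in row] for row in B]
y = [1.0, -2.0, 0.5, 3.0]

psi = neumann_solve(B, y)
# residual of (I - B) psi = y, and the a-priori bound |psi| <= |y| / (1 - |B|)
residual = [psi[i] - sum(B[i][j] * psi[j] for j in range(n)) - y[i]
            for i in range(n)]
norm_psi = max(abs(v) for v in psi)
norm_y = max(abs(v) for v in y)
print(max(abs(r) for r in residual), norm_psi <= norm_y / (1 - 0.5))
```

The same series, term by term, is what is summed to produce the unique solution and the bound $C(\Lambda,d)/[1-\|b\|_\infty]$ in Lemma 6.1.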
Lemma 6.2. For $\xi \in \mathbf{R}^d$, $\eta > 0$, the matrix $q(\xi,\eta)$ is Hermitian and is a continuous function of $(\xi,\eta)$. Furthermore, there is the inequality
(6.2)
\[
\lambda I_d \le q(\xi,\eta) \le \Lambda I_d .
\]
Proof. We use the operators $T_{k,k'}$ of (6.1) to define an operator $T_{\xi,\eta}$ on $H(\Omega)$. Thus if $\varphi = (\varphi_1,\dots,\varphi_d) \in H(\Omega)$ then
\[
(T_{\xi,\eta}\varphi)_k = \sum_{k'=1}^d T_{k,k'}\varphi_{k'} .
\]
It is easy to see that $T_{\xi,\eta}$ is a bounded self-adjoint operator on $H(\Omega)$ which is nonnegative definite and has norm $\|T_{\xi,\eta}\| \le 1$. We can see this by using Bochner's Theorem as in Lemma 3.3. Now for $\zeta \in \mathbf{C}^d$, we put $\Psi_\zeta$ to be
\[
\Psi_\zeta = \sum_{k=1}^d \zeta_k \Psi_k ,
\]
where $\Psi_k$ is the solution of (3.6). Then from (3.7) we have that
\[
\bar\zeta\, q(\xi,\eta)\,\zeta = \big\langle\, \bar\zeta\, a(\cdot)\,\zeta + \bar\zeta\, a(\cdot)\,\Psi_\zeta(\xi,\eta)\,\big\rangle .
\]
It is also clear from (3.6) that $\Psi_\zeta$ satisfies
(6.3)
\[
\Psi_\zeta(\xi,\eta,\omega) - P\,T_{b,\xi,\eta/\Lambda}\,\Psi_\zeta(\xi,\eta,\omega) + \frac1\Lambda\, T_{\xi,\eta/\Lambda}\big[a(\omega)\zeta - \langle a(\cdot)\zeta\rangle\big] = 0 .
\]
To obtain the upper bound in (6.2) we observe that
(6.4)
\[
\big\langle\, \bar\zeta\, a(\cdot)\,\Psi_\zeta(\xi,\eta)\,\big\rangle \le 0 .
\]
To prove (6.4) we generate $\Psi_\zeta$ from (6.3) by a Neumann series. The first order approximation to the solution is then
\[
\Psi_\zeta(\xi,\eta) \simeq -\frac1\Lambda\, T_{\xi,\eta/\Lambda}\big[a(\cdot)\zeta - \langle a(\cdot)\zeta\rangle\big] .
\]
Putting this approximation into (6.4) yields
\[
\Big\langle \bar\zeta\, a(\cdot)\Big[-\frac1\Lambda T_{\xi,\eta/\Lambda}\big(a(\cdot) - \langle a(\cdot)\rangle\big)\zeta\Big]\Big\rangle = -\frac1\Lambda\big\langle \bar\zeta\,\big[a(\cdot) - \langle a\rangle\big]\, T_{\xi,\eta/\Lambda}\,\big[a(\cdot) - \langle a\rangle\big]\,\zeta\big\rangle \le 0 ,
\]
since $T_{\xi,\eta/\Lambda}$ is nonnegative definite. One can similarly argue that each term in the Neumann series makes a negative contribution to (6.4).

To prove the lower bound in (6.2) we use the fact that
\[
T_{\xi,\eta}^2 = T_{\xi,\eta}\big[1 - \eta A_{\xi,\eta}\big],
\]
where $A_{\xi,\eta}$ is the bounded operator on $L^2(\Omega)$ defined by
\[
A_{\xi,\eta}\varphi(\omega) = \int_{\mathbf{R}^d} dx\, G_\eta(x)\, e^{-ix\cdot\xi}\,\varphi(\tau_x\omega) .
\]
Note that $A_{\xi,\eta}$ is self-adjoint, nonnegative definite, and commutes with $T_{\xi,\eta}$. It follows now from (6.3) that
\[
T_{\xi,\eta/\Lambda}\,\Psi_\zeta = \Big[1 - \frac\eta\Lambda A_{\xi,\eta/\Lambda}\Big]\Psi_\zeta .
\]
Hence if we multiply (6.3) by $\bar\Psi_\zeta$ and take the expectation value we have that
\[
\|\Psi_\zeta(\xi,\eta)\|^2 - \big\langle \bar\Psi_\zeta(\xi,\eta)\, b(\cdot)\,\Psi_\zeta(\xi,\eta)\big\rangle + \frac\eta\Lambda\big\langle \bar\Psi_\zeta(\xi,\eta)\, A_{\xi,\eta/\Lambda}\, b(\cdot)\,\Psi_\zeta(\xi,\eta)\big\rangle
+ \frac1\Lambda\Big\langle \bar\Psi_\zeta(\xi,\eta)\Big[1 - \frac\eta\Lambda A_{\xi,\eta/\Lambda}\Big]\big[a(\cdot) - \langle a\rangle\big]\zeta\Big\rangle = 0 .
\]
If we define the operator $K_{\xi,\eta}$ by
\[
K_{\xi,\eta} = 1 - \frac\eta\Lambda A_{\xi,\eta/\Lambda},
\]
then the previous equation can be written as
\[
\|\Psi_\zeta(\xi,\eta)\|^2 = \big\langle K_{\xi,\eta}\bar\Psi_\zeta(\xi,\eta)\; b(\cdot)\,\Psi_\zeta(\xi,\eta)\big\rangle - \frac1\Lambda\big\langle K_{\xi,\eta}\bar\Psi_\zeta(\xi,\eta)\;\big[a(\cdot) - \Lambda I_d\big]\zeta\big\rangle .
\]
Applying the Schwarz inequality to this last equation, we obtain
\[
\|\Psi_\zeta(\xi,\eta)\|^2 \le \frac12\big\langle K_{\xi,\eta}\bar\Psi_\zeta\; b(\cdot)\, K_{\xi,\eta}\Psi_\zeta\big\rangle + \frac12\big\langle \bar\Psi_\zeta\, b(\cdot)\,\Psi_\zeta\big\rangle
+ \frac1{2\Lambda}\big\langle K_{\xi,\eta}\bar\Psi_\zeta\,\big[\Lambda I_d - a(\cdot)\big]\, K_{\xi,\eta}\Psi_\zeta\big\rangle + \frac1{2\Lambda}\big\langle \bar\zeta\,\big[\Lambda I_d - a(\cdot)\big]\,\zeta\big\rangle .
\]
Observing now that $K_{\xi,\eta}$ is also nonnegative definite and bounded above by the identity, we see from this last inequality that
\[
\big\langle \bar\Psi_\zeta(\xi,\eta)\, a(\cdot)\,\Psi_\zeta(\xi,\eta)\big\rangle \le \big\langle \bar\zeta\,\big[\Lambda I_d - a(\cdot)\big]\,\zeta\big\rangle .
\]
The lower bound in (6.2) follows now from the Schwarz inequality on writing
\[
\big\langle \bar\zeta\, a(\cdot)\,\Psi_\zeta(\xi,\eta)\big\rangle = \big\langle \bar\zeta\,\big[a(\cdot) - \Lambda I_d\big]\,\Psi_\zeta(\xi,\eta)\big\rangle . \qquad\Box
\]
We have defined the functions $\Psi_k(\xi,\eta)$, $k = 1,\dots,d$, corresponding to the solutions of (3.6). Next we wish to define functions $\Phi_k(\xi,\eta)$ corresponding to the solutions of (2.2). To do this we consider an equation adjoint to (3.6). Since $T_{b,\xi,\eta} = T_{\xi,\eta}\, b(\cdot)$, the adjoint $T^*_{b,\xi,\eta}$ of $T_{b,\xi,\eta}$ is $T^*_{b,\xi,\eta} = b(\cdot)\, T_{\xi,\eta}$. For $k = 1,\dots,d$ let $\Psi^*_k(\xi,\eta,\omega) \in H(\Omega)$ be the solution to the equation
(6.5)
\[
\Psi^*_k(\xi,\eta,\omega) - P\,T^*_{b,\xi,\eta/\Lambda}\,\Psi^*_k(\xi,\eta,\omega) + \frac1\Lambda\big[a_k(\omega) - \langle a_k(\cdot)\rangle\big] = 0 ,
\]
where $a_k(\omega)$ is the $k$ th column vector of the matrix $a(\omega)$. Just as in Lemma 6.1 we see that $\Psi^*_k$, regarded as a mapping from $\mathbf{R}^d \times \mathbf{R}^+$ to $H(\Omega)$, is continuous. We also define an operator $S_{\xi,\eta} : H(\Omega) \to L^2(\Omega)$ by
(6.6)
\[
S_{\xi,\eta}\varphi(\omega) = \sum_{k'=1}^d \int_{\mathbf{R}^d} dx\, \frac{\partial G_\eta(x)}{\partial x_{k'}}\, e^{-ix\cdot\xi}\,\varphi_{k'}(\tau_x\omega),
\]
where $\varphi = (\varphi_1,\dots,\varphi_d) \in H(\Omega)$. Evidently $S_{\xi,\eta}$ is a bounded operator. We then define the functions $\Phi_k(\xi,\eta,\omega)$, $k = 1,\dots,d$, by
(6.7)
\[
\Phi_k(\xi,\eta,\omega) = S_{\xi,\eta/\Lambda}\,\Psi^*_k(\xi,\eta,\omega), \qquad \omega\in\Omega ,
\]
where $\Psi^*_k(\xi,\eta,\omega)$ is the solution to (6.5). It is easy to see that there is a constant $C(\Lambda,d)$, depending only on $\Lambda$, $d$, such that
(6.8)
\[
\|\Phi_k(\xi,\eta)\| \le C(\Lambda,d)\big/\sqrt\eta .
\]
Let us define $\hat G_a(\xi,\eta)$ similarly to (2.4) by
(6.9)
\[
\hat G_a(\xi,\eta) = \frac{1}{\eta + \xi\cdot q(\xi,\eta)\,\xi}, \qquad \xi\in\mathbf{R}^d,\ \eta>0 ,
\]
where the matrix $q(\xi,\eta)$ is as in Lemma 6.2. Suppose now $f : \mathbf{R}^d \to \mathbf{R}$ is a $C^\infty$ function with compact support and Fourier transform $\hat f(\xi)$. We define $\hat v(\xi,\eta,\omega)$ to be
\[
\hat v(\xi,\eta,\omega) = \Big[1 - i\sum_{k=1}^d \xi_k\,\Phi_k(\xi,\eta,\omega)\Big]\,\hat G_a(\xi,\eta)\,\hat f(\xi) .
\]
Let $\eta > 0$ be fixed. Then $\hat v(\xi,\eta,\omega)$, regarded as a function of $\xi \in \mathbf{R}^d$ to $L^2(\Omega)$, is continuous and rapidly decreasing. Hence the Fourier inverse
\[
v(x,\eta,\omega) = \frac1{(2\pi)^d}\int_{\mathbf{R}^d} d\xi\, e^{-ix\cdot\xi}\,\hat v(\xi,\eta,\omega), \qquad x\in\mathbf{R}^d ,
\]
regarded as a mapping of $x \in \mathbf{R}^d$ to $L^2(\Omega)$, is $C^\infty$. In particular it follows that $v(x,\eta,\omega)$, regarded as a function of $(x,\omega) \in \mathbf{R}^d\times\Omega$ to $\mathbf{C}$, is measurable and in $L^2(\mathbf{R}^d\times\Omega)$. Define now the function $u(x,\eta,\omega) = v(x,\eta,\tau_x\omega)$. It is clear that $u(x,\eta,\omega)$, regarded as a function of $(x,\omega) \in \mathbf{R}^d\times\Omega$ to $\mathbf{C}$, is measurable and in $L^2(\mathbf{R}^d\times\Omega)$, with the same norm as $v$.
Lemma 6.3. With probability one the function $u(x,\eta,\cdot)$, $x \in \mathbf{R}^d$, is in $L^2(\mathbf{R}^d)$ and its distributional gradient $\nabla u(x,\eta,\cdot)$ is also in $L^2(\mathbf{R}^d)$. Furthermore, $u(x,\eta,\cdot)$ is a weak solution of the equation
(6.10)
\[
-\sum_{i,j=1}^d \frac{\partial}{\partial x_i}\Big[a_{ij}(\tau_x\cdot)\,\frac{\partial u}{\partial x_j}(x,\eta,\cdot)\Big] + \eta\, u(x,\eta,\cdot) = f(x), \qquad x\in\mathbf{R}^d .
\]
Proof. Since $u(x,\eta,\omega) \in L^2(\mathbf{R}^d\times\Omega)$, it follows that with probability one $u(x,\eta,\cdot)$, $x \in \mathbf{R}^d$, is in $L^2(\mathbf{R}^d)$. To see that the distributional gradient of $u(x,\eta,\cdot)$ is also in $L^2(\mathbf{R}^d)$ with probability one, we shall establish a formula for $\nabla u(x,\eta,\cdot)$. To do this we define, for any $C^\infty$ function of compact support $g : \mathbf{R}^d \to \mathbf{C}$, an operator $A_g$ on $L^2(\Omega)$ by
\[
A_g\varphi(\omega) = \int_{\mathbf{R}^d} dx\, g(x)\, e^{-ix\cdot\xi}\,\varphi(\tau_x\omega), \qquad \varphi\in L^2(\Omega) .
\]
Evidently $A_g$ is a bounded operator on $L^2(\Omega)$. Suppose now $\Psi_k(\xi,\eta) \in H(\Omega)$ is the function of Lemma 6.1 with components $\Psi_k = (\Psi_{k,1},\dots,\Psi_{k,d})$, and $\Phi_k(\xi,\eta)$ is given by (6.7). Then
(6.11)
\[
A_{\nabla_j g}\,\Phi_k(\xi,\eta) = -A_g\,\Psi_{k,j}(\xi,\eta) ,
\]
where $\nabla_j g$ is the $j$ th partial derivative of $g$. To see that (6.11) holds, observe that if $S_{\xi,\eta}$ is the operator of (6.6) then for $\varphi \in H(\Omega)$,
(6.12)
\[
A_{\nabla_j g}\, S_{\xi,\eta}\varphi = -A_g \sum_{k'=1}^d T_{j,k'}\varphi_{k'} ,
\]
where $\varphi = (\varphi_1,\dots,\varphi_d)$ and the $T_{j,k'}$ are the operators (6.1). The identity (6.11) follows now from (6.12) by observing that $\Psi_k(\xi,\eta) = T_{\xi,\eta/\Lambda}\,\Psi^*_k(\xi,\eta)$. We have now that
(6.13)
\[
-\int_{\mathbf{R}^d} dx\,\nabla_j g(x)\, u(x,\eta,\cdot) = -\int_{\mathbf{R}^d} dx\,\nabla_j g(x)\, v(x,\eta,\tau_x\cdot)
= -\frac1{(2\pi)^d}\int_{\mathbf{R}^d} d\xi\, A_{\nabla_j g}\,\hat v(\xi,\eta) = \frac1{(2\pi)^d}\int_{\mathbf{R}^d} d\xi\, A_g\,\hat v_j(\xi,\eta) ,
\]
where
(6.14)
\[
\hat v_j(\xi,\eta,\omega) = -i\Big[\xi_j + \sum_{k=1}^d \xi_k\,\Psi_{k,j}(\xi,\eta,\omega)\Big]\,\hat G_a(\xi,\eta)\,\hat f(\xi) .
\]
Now we put
\[
v_j(x,\eta,\omega) = \frac1{(2\pi)^d}\int_{\mathbf{R}^d} d\xi\, e^{-ix\cdot\xi}\,\hat v_j(\xi,\eta,\omega) .
\]
It is clear that $v_j(x,\eta,\omega)$, regarded as a function of $(x,\omega)$, is in $L^2(\mathbf{R}^d\times\Omega)$, whence $v_j(x,\eta,\tau_x\omega)$ is also in $L^2(\mathbf{R}^d\times\Omega)$. It follows now from (6.13) that the function $\nabla_j u(x,\eta,\omega) = v_j(x,\eta,\tau_x\omega)$ is in $L^2(\mathbf{R}^d)$ with probability 1 in $\omega$ and is the distributional derivative of $u(x,\eta,\omega)$.

Next we wish to show that with probability 1, $u(x,\eta,\cdot)$ is a weak solution of the equation (6.10). To do that we need to observe that for any $\varphi \in H(\Omega)$ and $C^\infty$ function $g : \mathbf{R}^d \to \mathbf{C}$ of compact support, one has
(6.15)
\[
\sum_{j,k'=1}^d A_{\nabla_j g}\, T_{j,k'}\,\varphi_{k'} = -A_g\, S_{\xi,\eta}\varphi + \sum_{k'=1}^d A_{\nabla_{k'} g}\,\varphi_{k'} .
\]
We have now, for any $C^\infty$ function $g : \mathbf{R}^d \to \mathbf{R}$ with compact support,
(6.16)
\[
\int_{\mathbf{R}^d} dx \sum_{i,j=1}^d \nabla_i g(x)\, a_{ij}(\tau_x\cdot)\,\nabla_j u(x,\eta,\cdot)
= \int_{\mathbf{R}^d} dx \sum_{i,j=1}^d \nabla_i g(x)\, a_{ij}(\tau_x\cdot)\, v_j(x,\eta,\tau_x\cdot)
= \frac1{(2\pi)^d}\int_{\mathbf{R}^d} d\xi \sum_{i,j=1}^d A_{\nabla_i g}\big[a_{ij}(\cdot)\,\hat v_j(\xi,\eta)\big] .
\]
Observe next that for any $k$, $1 \le k \le d$,
\[
\sum_{i,j=1}^d A_{\nabla_i g}\big[a_{ij}(\cdot)\,\Psi_{k,j}(\xi,\eta)\big] = \Lambda\sum_{i=1}^d A_{\nabla_i g}\,\Psi_{k,i}(\xi,\eta) - \Lambda\sum_{i,j=1}^d A_{\nabla_i g}\big[b_{ij}(\cdot)\,\Psi_{k,j}(\xi,\eta)\big] .
\]
If we use the fact that $\Psi_k = T_{\xi,\eta/\Lambda}\Psi^*_k$ and (6.15), then we see that
\[
\sum_{i=1}^d A_{\nabla_i g}\,\Psi_{k,i}(\xi,\eta) = -A_g\, S_{\xi,\eta/\Lambda}\Psi^*_k(\xi,\eta) + \sum_{i=1}^d A_{\nabla_i g}\,\Psi^*_{k,i}(\xi,\eta) .
\]
We also have from (6.5) that
\[
b(\omega)\,\Psi_k(\xi,\eta,\omega) = b(\omega)\, T_{\xi,\eta/\Lambda}\Psi^*_k(\xi,\eta,\omega)
= P\,T^*_{b,\xi,\eta/\Lambda}\Psi^*_k(\xi,\eta,\omega) + \big\langle b(\cdot)\Psi_k(\xi,\eta)\big\rangle
\]
\[
= \Psi^*_k(\xi,\eta,\omega) + \frac1\Lambda\big[a_k(\omega) - \langle a_k(\cdot)\rangle\big] + \big\langle b(\cdot)\Psi_k(\xi,\eta)\big\rangle
= \Psi^*_k(\xi,\eta,\omega) + \frac1\Lambda a_k(\omega) - \frac1\Lambda\big\langle a_k(\cdot) + a(\cdot)\Psi_k(\xi,\eta)\big\rangle .
\]
It follows now from the last three equations that
\[
\sum_{i,j=1}^d A_{\nabla_i g}\big[a_{ij}(\cdot)\,\Psi_{k,j}(\xi,\eta)\big] = -\Lambda\, A_g\,\Phi_k(\xi,\eta) - \sum_{j=1}^d A_{\nabla_j g}\big[a_{kj}(\cdot)\big] + i\,\hat g(\xi)\sum_{j=1}^d \xi_j\, q_{jk}(\xi,\eta) .
\]
Hence from (6.14), (6.16) we have that
\[
\int_{\mathbf{R}^d} dx \sum_{i,j=1}^d \nabla_i g(x)\, a_{ij}(\tau_x\cdot)\,\nabla_j u(x,\eta,\cdot)
= -\frac{\eta}{(2\pi)^d}\int_{\mathbf{R}^d} d\xi\, A_g\,\hat v(\xi,\eta) + \frac1{(2\pi)^d}\int_{\mathbf{R}^d} d\xi\,\big[\eta + \xi\cdot q(\xi,\eta)\xi\big]\,\hat G_a(\xi,\eta)\,\hat f(\xi)\,\hat g(\xi)
\]
\[
= -\eta\int_{\mathbf{R}^d} dx\, g(x)\, u(x,\eta,\cdot) + \int_{\mathbf{R}^d} dx\, g(x)\, f(x) ,
\]
where we have used (6.9). The result follows from this last equation. $\Box$
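The weak formulation just verified says that $\int \sum_{ij}\nabla_i g\, a_{ij}\nabla_j u + \eta\int g\,u = \int g\,f$ for every smooth compactly supported test function $g$. As a sanity check of this identity in the simplest possible setting (a constant scalar coefficient in $d = 1$, our toy example rather than the random-coefficient case), one can verify it numerically:

```python
import math

# toy check of the weak form of -(a u')' + eta*u = f with constant a, d = 1
a, eta = 2.0, 1.5
u = lambda x: math.exp(-x * x)                          # chosen solution
up = lambda x: -2.0 * x * math.exp(-x * x)              # u'
upp = lambda x: (4.0 * x * x - 2.0) * math.exp(-x * x)  # u''
f = lambda x: -a * upp(x) + eta * u(x)                  # matching right-hand side

g = lambda x: math.exp(-(x - 0.3) ** 2)                 # test function
gp = lambda x: -2.0 * (x - 0.3) * math.exp(-(x - 0.3) ** 2)

def quad(F, lo=-8.0, hi=8.0, n=20000):
    # midpoint rule; all integrands decay like exp(-x^2), so [-8, 8] suffices
    h = (hi - lo) / n
    return sum(F(lo + (k + 0.5) * h) for k in range(n)) * h

lhs = quad(lambda x: gp(x) * a * up(x)) + eta * quad(lambda x: g(x) * u(x))
rhs = quad(lambda x: g(x) * f(x))
print(abs(lhs - rhs))  # tiny: the weak form holds
```

Integration by parts turns $\int g' a u'$ into $-\int g (a u')'$, which is exactly why the two sides agree.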
Next, let $G_{a,\eta}(x,y,\omega)$, $x,y \in \mathbf{R}^d$, be the Green's function for the equation (6.10). It follows easily now from Lemma 6.3 that if $G_a(x,\eta)$ is the Fourier inverse of the function $\hat G_a(\xi,\eta)$ of (6.9), then
\[
G_a(x-y,\eta) = \big\langle G_{a,\eta}(x,y,\cdot)\big\rangle .
\]
We can now use the methods of Section 3 to estimate $G_a(x,\eta)$. We shall restrict ourselves to the case $d = 3$ since the method generalizes to all $d \ge 3$. Evidently one can generalize Lemma 3.6 to obtain:

Lemma 6.4. Let $d = 3$, $\eta > 0$, $1 \le k, k' \le d$. Then $q_{k,k'}(\xi,\eta)$ is a $C^1$ function of $\xi \in \mathbf{R}^d$ and, for any $i, j$, $1 \le i, j \le d$, the function $\partial q_{k,k'}/\partial\xi_i \in L^3_w(\mathbf{R}^d)$ and $\partial^2 q_{k,k'}/\partial\xi_i\,\partial\xi_j \in L^{3/2}_w(\mathbf{R}^d)$. Further, there is a constant $C_{\lambda,\Lambda}$, depending only on $\lambda,\Lambda$, such that
\[
\big\|\partial q_{k,k'}/\partial\xi_i\big\|_{3,w} \le C_{\lambda,\Lambda}, \qquad \big\|\partial^2 q_{k,k'}/\partial\xi_i\,\partial\xi_j\big\|_{3/2,w} \le C_{\lambda,\Lambda} .
\]
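Lemma 6.4 is phrased in the weak $L^p$ spaces; for the reader's convenience we recall the definition being used (our annotation):

```latex
% Weak L^p (Lorentz L^{p,\infty}) norm used in Lemma 6.4 and below:
\[
\|f\|_{p,w} \;=\; \sup_{s>0}\; s\,\big|\{\xi\in\mathbf{R}^d : |f(\xi)|>s\}\big|^{1/p},
\]
% A pointwise bound |f(xi)| <= C |xi|^{-d/p} implies f lies in L^p_w(R^d);
% e.g. a derivative bounded by C/|xi| on R^3 lies in L^3_w(R^3), which is
% how the membership of dq_{kk'}/dxi_i in L^3_w(R^3) should be read.
```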
We can use Lemma 6.4 to estimate $G_a(x,\eta)$. To do this suppose $\varphi : \mathbf{R}^d \to \mathbf{R}$ is a $C^\infty$ function of compact support satisfying
\[
\int_{\mathbf{R}^d} \varphi(y)\, dy = 1 .
\]
For $R > 0$ we put $\varphi_R(y) = R^d\varphi(Ry)$. Evidently the Fourier transform of $\varphi_R$ is a rapidly decreasing function and $\hat\varphi_R(\xi) = \hat\varphi(\xi/R)$, where $\hat\varphi(0) = 1$. We define now the function $G_a(x,\eta)$ by
(6.17)
\[
G_a(x,\eta) = \lim_{R\to\infty}\frac1{(2\pi)^d}\int_{\mathbf{R}^d} \frac{\hat\varphi_R(\xi)\, e^{-ix\cdot\xi}}{\eta + \xi\cdot q(\xi,\eta)\,\xi}\, d\xi .
\]

Lemma 6.5. Let $d = 3$. The function $G_a(x,\eta)$ defined by (6.17) is continuous in $(x,\eta)$, $x \in \mathbf{R}^d\setminus\{0\}$, $\eta > 0$. The limit $G_a(x) = \lim_{\eta\to0} G_a(x,\eta)$ exists for all $x \in \mathbf{R}^d\setminus\{0\}$ and $G_a(x)$ is a continuous function of $x$, $x \ne 0$. Further, there is a constant $C_{\lambda,\Lambda}$ depending only on $\lambda,\Lambda$ such that
(6.18)
\[
0 \le G_a(x,\eta) \le C_{\lambda,\Lambda}/|x|, \qquad x\in\mathbf{R}^d\setminus\{0\},\ \eta>0 .
\]
Proof. We argue as in Proposition 3.1. Thus for $\alpha$ satisfying $1 \le \alpha \le 2$, we write
(6.19)
\[
G_a(x,\eta) = \lim_{R\to\infty}\Big[\int_{|\xi|<\alpha/|x|} + \int_{|\xi|>\alpha/|x|}\Big] .
\]
It is clear that
\[
\lim_{R\to\infty}\int_{|\xi|<\alpha/|x|} = \frac1{(2\pi)^d}\int_{|\xi|<\alpha/|x|} \frac{e^{-ix\cdot\xi}\, d\xi}{\eta + \xi\cdot q(\xi,\eta)\,\xi} .
\]
To evaluate the limit as $R \to \infty$ in the second integral on the RHS of (6.19) we integrate by parts. Thus for fixed $R > 0$, assuming $x_1 \ne 0$,
(6.20)
\[
\int_{|\xi|>\alpha/|x|} = \frac1{ix_1}\frac1{(2\pi)^d}\int_{|\xi|>\alpha/|x|} d\xi\, e^{-ix\cdot\xi}\,\frac{\partial}{\partial\xi_1}\Big[\frac{\hat\varphi(\xi/R)}{\eta + \xi\cdot q(\xi,\eta)\xi}\Big]
+ \frac1{ix_1}\frac1{(2\pi)^d}\int_{|\xi|=\alpha/|x|} d\sigma(\xi)\, e^{-ix\cdot\xi}\,\frac{\hat\varphi(\xi/R)}{\eta + \xi\cdot q(\xi,\eta)\xi}\,\frac{\xi_1}{|\xi|} .
\]
Evidently for the surface integral in the last expression one has
\[
\lim_{R\to\infty}\frac1{ix_1}\frac1{(2\pi)^d}\int_{|\xi|=\alpha/|x|} = \frac1{ix_1}\frac1{(2\pi)^d}\int_{|\xi|=\alpha/|x|} d\sigma(\xi)\,\frac{e^{-ix\cdot\xi}}{\eta + \xi\cdot q(\xi,\eta)\xi}\,\frac{\xi_1}{|\xi|} .
\]
To evaluate the limit of the volume integral in (6.20) as $R \to \infty$, we need to integrate by parts again. Thus, for the integral over $\{|\xi| > \alpha/|x|\}$ on the RHS of (6.20) one has
(6.21)
\[
\int_{|\xi|>\alpha/|x|} = -\frac1{x_1^2}\frac1{(2\pi)^d}\int_{|\xi|>\alpha/|x|} d\xi\, e^{-ix\cdot\xi}\,\frac{\partial^2}{\partial\xi_1^2}\Big[\frac{\hat\varphi(\xi/R)}{\eta + \xi\cdot q(\xi,\eta)\xi}\Big]
- \frac1{x_1^2}\frac1{(2\pi)^d}\int_{|\xi|=\alpha/|x|} d\sigma(\xi)\, e^{-ix\cdot\xi}\,\frac{\partial}{\partial\xi_1}\Big[\frac{\hat\varphi(\xi/R)}{\eta + \xi\cdot q(\xi,\eta)\xi}\Big]\frac{\xi_1}{|\xi|} .
\]
In view of Lemma 6.4 it follows that the limit of the volume integral on the RHS of (6.21) is given by
\[
\lim_{R\to\infty}\int_{|\xi|>\alpha/|x|} = -\frac1{x_1^2}\frac1{(2\pi)^d}\int_{|\xi|>\alpha/|x|} d\xi\, e^{-ix\cdot\xi}\,\frac{\partial^2}{\partial\xi_1^2}\Big[\frac1{\eta + \xi\cdot q(\xi,\eta)\xi}\Big] .
\]
We can similarly see that the limit of the average over $1 \le \alpha \le 2$ of the surface integral on the RHS of (6.21) is given by
\[
\lim_{R\to\infty}\ \operatorname*{Av}_{1\le\alpha\le2}\int_{|\xi|=\alpha/|x|}
= \operatorname*{Av}_{1\le\alpha\le2}\Big\{-\frac1{x_1^2}\frac1{(2\pi)^d}\int_{|\xi|=\alpha/|x|} d\sigma(\xi)\, e^{-ix\cdot\xi}\,\frac{\partial}{\partial\xi_1}\Big[\frac1{\eta + \xi\cdot q(\xi,\eta)\xi}\Big]\frac{\xi_1}{|\xi|}\Big\} .
\]
We have therefore established a formula for the function $G_a(x,\eta)$. It easily follows from this that $G_a(x,\eta)$ is continuous in $(x,\eta)$, $x \in \mathbf{R}^3\setminus\{0\}$, $\eta > 0$. To show that the function $G_a(x) = \lim_{\eta\to0} G_a(x,\eta)$ exists we observe that $q(\xi,\eta)$ converges as $\eta \to 0$ to a function $q(\xi,0)$. This follows from the fact that the operators $T_{k,k'}$ of (6.1) converge strongly as $\eta \to 0$ to bounded operators $T_{k,k',0}$ on $L^2(\Omega)$. One can prove this last fact by using Bochner's Theorem. Suppose $p$ satisfies $1 < p < 3$. Then $\partial q_{k,k'}(\xi,\eta)/\partial\xi_i$ can be written as a sum,
\[
\partial q_{k,k'}(\xi,\eta)/\partial\xi_i = f_{k,k',i}(\xi,\eta) + g_{k,k',i}(\xi,\eta) .
\]
The function $f_{k,k',i}(\xi,\eta) \in L^3_w(\mathbf{R}^d)$ and converges in $L^3_w(\mathbf{R}^d)$ as $\eta \to 0$ to the distributional derivative $\partial q_{k,k'}(\xi,0)/\partial\xi_i$ of the function $q_{k,k'}(\xi,0)$. The function $g_{k,k',i}(\xi,\eta) \in L^p(\mathbf{R}^d)$ and converges as $\eta \to 0$ in $L^p(\mathbf{R}^d)$ to 0. This follows by writing
\[
\big[|\xi|^2 + \eta\big]^{-1} = f_\eta(\xi) + g_\eta(\xi),
\]
where $f_\eta \in L^\infty(\mathbf{R}^d)$ with $\|f_\eta(\cdot)\|_\infty \le \eta^{-1/2}$, and $g_\eta \in L^1(\mathbf{R}^d)$ with $\|g_\eta(\cdot)\|_1 \le C\eta^{1/4}$ and $g_\eta(\xi) = 0$ if $|\xi| > \eta^{1/4}$. One can also make a similar statement about convergence of the derivative $\partial^2 q_{k,k'}(\xi,\eta)/\partial\xi_i\,\partial\xi_j$ as $\eta \to 0$. We conclude that one can take the limit as $\eta \to 0$ in the integral formula we have established for $G_a(x,\eta)$. In view of Lemmas 6.2 and 6.4 the limiting function $G_a(x)$ is also continuous for $x \ne 0$. Finally the inequality (6.18) follows by exactly the same argument as we used in Proposition 3.1. $\Box$
We can complete the proof of Theorem 1.2 by applying the argument for the
proof of Theorem 1.5 at the end of Section 3.
Lemma 6.6. Let $d = 3$ and $G_a(x)$ be the function defined in Lemma 6.5. Then $G_a(x)$ is a $C^1$ function for $x \ne 0$ and there is a constant $C_{\lambda,\Lambda}$, depending only on $\lambda,\Lambda$, such that
(6.22)
\[
\Big|\frac{\partial G_a(x)}{\partial x_i}\Big| \le \frac{C_{\lambda,\Lambda}}{|x|^2}, \qquad x\ne0,\ i = 1,2,3 .
\]
Proof. Let $r > 0$ and suppose $x \in \mathbf{R}^3$ satisfies $10r < |x| < 20r$. From Lemma 6.5 we have that
(6.23)
\[
G_a(x) = \frac1{(2\pi)^3}\int_1^2 d\alpha\int_{|\xi|<\alpha/r} d\xi\,\frac{e^{-ix\cdot\xi}}{\xi\cdot q(\xi,0)\xi}
+ \frac1{ix_1}\frac1{(2\pi)^3}\int_1^2 d\alpha\int_{|\xi|=\alpha/r} d\sigma(\xi)\,\frac{e^{-ix\cdot\xi}}{\xi\cdot q(\xi,0)\xi}\,\frac{\xi_1}{|\xi|}
\]
\[
\quad - \frac1{x_1^2}\frac1{(2\pi)^3}\int_1^2 d\alpha\int_{|\xi|=\alpha/r} d\sigma(\xi)\, e^{-ix\cdot\xi}\,\frac{\partial}{\partial\xi_1}\Big[\frac1{\xi\cdot q(\xi,0)\xi}\Big]\frac{\xi_1}{|\xi|}
- \frac1{x_1^2}\frac1{(2\pi)^3}\int_1^2 d\alpha\int_{|\xi|>\alpha/r} d\xi\, e^{-ix\cdot\xi}\,\frac{\partial^2}{\partial\xi_1^2}\Big[\frac1{\xi\cdot q(\xi,0)\xi}\Big],
\]
where $q(\xi,0)$ is defined in Lemma 6.5. Let $H_a(x)$ be the final integral on the RHS of (6.23). Then it follows from Lemmas 6.2 and 6.4 that if $x_1 \ge |x|/\sqrt3$ then $G_a(x) - H_a(x)$ is a $C^1$ function and
\[
\Big|\frac{\partial}{\partial x_j}\big[G_a(x) - H_a(x)\big]\Big| \le \frac{C_{\lambda,\Lambda}}{|x|^2}, \qquad x\ne0,\ j = 1,2,3 .
\]
To show the differentiability of $H_a(x)$ we expand
(6.24)
\[
\frac{\partial^2}{\partial\xi_1^2}\Big[\frac1{\xi\cdot q(\xi,0)\xi}\Big] = \frac{-2\, q_{11}(\xi,0)}{[\xi\cdot q(\xi,0)\xi]^2} + \frac{8\Big[\operatorname{Re}\sum_{j=1}^3 q_{1j}(\xi,0)\,\xi_j\Big]^2}{[\xi\cdot q(\xi,0)\xi]^3}
\]
plus terms involving derivatives of $q(\xi,0)$. The contribution of the first term on the RHS of (6.24) to $H_a(x)$ is given by
(6.25)
\[
\frac{2}{x_1^2}\frac1{(2\pi)^3}\int_1^2 d\alpha\int_{|\xi|>\alpha/r} d\xi\, e^{-ix\cdot\xi}\,\frac{q_{11}(\xi,0)}{[\xi\cdot q(\xi,0)\xi]^2}
= \frac{2}{ix_1^3}\frac1{(2\pi)^3}\int_1^2 d\alpha\int_{|\xi|=\alpha/r} d\sigma(\xi)\, e^{-ix\cdot\xi}\,\frac{q_{11}(\xi,0)}{[\xi\cdot q(\xi,0)\xi]^2}\,\frac{\xi_1}{|\xi|}
\]
\[
\qquad + \frac{2}{ix_1^3}\frac1{(2\pi)^3}\int_1^2 d\alpha\int_{|\xi|>\alpha/r} d\xi\, e^{-ix\cdot\xi}\,\frac{\partial}{\partial\xi_1}\Big[\frac{q_{11}(\xi,0)}{[\xi\cdot q(\xi,0)\xi]^2}\Big] .
\]
Using the fact that $\partial q_{11}/\partial\xi_1 \in L^3_w(\mathbf{R}^3)$, it is easy to see that the RHS of (6.25) is a $C^1$ function of $x$, $x \ne 0$, and its derivative is bounded by the RHS of (6.22). The same argument can be used to estimate the contribution to $H_a(x)$ from all terms on the RHS of (6.24) except the term involving the second derivative of $q(\xi,0)$. The contribution to $H_a(x)$ from this term is given by $K_a(x)/x_1^2$, where
\[
K_a(x) = \frac1{(2\pi)^3}\int_1^2 d\alpha\int_{|\xi|>\alpha/r} d\xi\, e^{-ix\cdot\xi}\,\frac1{[\xi\cdot q(\xi,0)\xi]^2}\;\xi\cdot\frac{\partial^2 q(\xi,0)}{\partial\xi_1^2}\,\xi .
\]
We can see now just as in Lemma 3.10 that for any $\beta \in \mathbf{R}^3$, the function $\big[\partial^2 q(\xi+\beta,0)/\partial\xi_1^2 - \partial^2 q(\xi,0)/\partial\xi_1^2\big]/|\beta|^{1-\varepsilon}$ is in $L^{3/(3-\varepsilon)}_w(\mathbf{R}^3)$ and
(6.26)
\[
\big\|\big[\partial^2 q(\cdot+\beta,0)/\partial\xi_1^2 - \partial^2 q(\cdot,0)/\partial\xi_1^2\big]\big/|\beta|^{1-\varepsilon}\big\|_{3/(3-\varepsilon),w} \le C_{\lambda,\Lambda,\varepsilon},
\]
where $\varepsilon$ is any number satisfying $0 < \varepsilon < 1$. For $h \in \mathbf{R}^3$ we write
(6.27)
\[
K_a(x+h) - K_a(x) = \frac1{(2\pi)^3}\int_1^2 d\alpha\int_{|\xi|>\alpha/r} d\xi\, e^{-ix\cdot\xi}\big[g_h(\xi) - g_h(\xi+\beta)\big]
+ \int_{1/4r<|\xi|<4/r} d\xi\, e^{-ix\cdot\xi}\big[e^{-ih\cdot\xi} - 1\big]\frac{f(\xi)}{|\xi|^2} ,
\]
where
\[
g(\xi) = \frac1{[\xi\cdot q(\xi,0)\xi]^2}\;\xi\cdot\frac{\partial^2 q(\xi,0)}{\partial\xi_1^2}\,\xi , \qquad g_h(\xi) = \big[e^{-ih\cdot\xi} - 1\big]\, g(\xi) ,
\]
$f \in L^{3/2}_w(\mathbf{R}^3)$, and $\beta \in \mathbf{R}^3$ has the property that $x\cdot\beta = \pi$. In view of (6.26) and the fact that $f \in L^{3/2}_w(\mathbf{R}^3)$ it follows from the Dominated Convergence Theorem that one can take the limit in (6.27) as $h \to 0$ to obtain that $K_a(x)$ is differentiable in $x$ and
\[
\frac{\partial K_a(x)}{\partial x_j} = \frac{-i}{(2\pi)^3}\int_1^2 d\alpha\int_{|\xi|>\alpha/r} d\xi\, e^{-ix\cdot\xi}\big\{\xi_j\big[g(\xi) - g(\xi+\beta)\big] - \beta_j\, g(\xi+\beta)\big\}
- i\int_{1/4r<|\xi|<4/r} d\xi\, e^{-ix\cdot\xi}\,\xi_j\,\frac{f(\xi)}{|\xi|^2} .
\]
We can see from this last expression that $\partial K_a(x)/\partial x_j$ is continuous in $x$ and also $|\partial K_a(x)/\partial x_j| \le C_{\lambda,\Lambda}$. $\Box$
Department of Mathematics, University of Michigan, Ann Arbor, MI 48109
conlon@math.lsa.umich.edu http://www.math.lsa.umich.edu/~conlon/
Department of Mathematics, University of Michigan, Ann Arbor, MI 48109
naddaf@math.lsa.umich.edu
This paper is available via http://nyjm.albany.edu:8000/j/2000/6-10.html.