ASYMPTOTIC BEHAVIOR OF MINIMIZERS
OF A GINZBURG–LANDAU EQUATION WITH WEIGHT
NEAR THEIR ZEROES.
Anne Beaulieu
Équipe d'analyse et de mathématiques appliquées,
Université de Marne-la-Vallée, Cité Descartes, 5 boulevard Descartes,
Champs-sur-Marne, 77454 Marne-la-Vallée Cedex 2, France.
Rejeb Hadiji
CMLA, ENS de Cachan, 61, avenue du Président Wilson, 94235 Cachan Cedex, France.
And Université de Picardie, 33, rue Saint-Leu, 80039 Amiens Cedex 01, France.
1. Introduction.
Let $G$ be a smooth bounded domain in $\mathbb{R}^2$ and $p$ a smooth positive bounded map from $G$ into $\mathbb{R}$. For a given map $g\in C^1(\partial G;S^1)$ such that $\deg(g,\partial G)=d>0$, we consider the problem
$$(1.1)\qquad \min_{u\in H^1_g(G;\mathbb{C})}E_\varepsilon(u)$$
where $E_\varepsilon$ is the Ginzburg–Landau type functional
$$(1.2)\qquad E_\varepsilon(u)=\frac12\int_G p\,|\nabla u|^2+\frac1{4\varepsilon^2}\int_G p\,(1-|u|^2)^2$$
and
$$H^1_g(G;\mathbb{C})=\{u\in H^1(G;\mathbb{C});\ u=g\ \text{on }\partial G\}.$$
The problem (1.1) is a generalized form of a problem introduced and completely studied by Bethuel, Brezis and Hélein in [BBH1,2]. These authors proved in particular that in a starshaped domain $G$ and with $p\equiv1$ there exist a subsequence $\varepsilon_n$, exactly $d$ singularities $a_1,\dots,a_d$ in $G$ and a smooth harmonic map $u_\star$ in $G\setminus\{a_1,\dots,a_d\}$ such that $u_{\varepsilon_n}\to u_\star$ as $\varepsilon_n$ tends to $0$ in $C^{1,\alpha}_{\mathrm{loc}}(G\setminus\{a_1,\dots,a_d\})$ for all $0<\alpha<1$. Moreover, $(a_1,\dots,a_d)$ minimizes a renormalized energy $W$. For $\varepsilon_n<\varepsilon_0$ depending only on $g$ and $G$, $u_{\varepsilon_n}$ has exactly $d$ zeroes $x_1^{\varepsilon_n},\dots,x_d^{\varepsilon_n}$. These results have been generalized to arbitrary smooth domains (see [St], [dPF]).
In [CM], Comte and Mironescu estimate the rate of convergence of the zero $x_i^{\varepsilon_n}$ to $a_i$ when $(a_1,\dots,a_d)$ is a nondegenerate critical point of $W$. For a general bounded smooth and positive $p$ the problem (1.1) is studied in [AS1,2] by André and Shafrir and in [BH1,2]. Recall that (1.1) is an approximation of the Ginzburg–Landau model for the energy of superconductors. The weight $p$ models the pinning of vortices; the physical motivations are to study a thin film with variable thickness (see [DG]) or to introduce impurities in the superconductor (see [R]).
Let us summarize some results obtained in [AS1,2] and [BH1,2]. We denote
$$p_0=\min_{x\in\overline G}p(x)$$
and
$$\Lambda_1=\{x\in\overline G;\ p(x)=p_0\},\qquad \Lambda_2=\{x\in G;\ p(x)=p_0\}.$$
We know that there exist $m$ points $a_1,\dots,a_m$ in $\Lambda_1$, associated to the positive degrees $(d_1,\dots,d_m)$, $\sum_{i=1}^md_i=d$, a subsequence $u_{\varepsilon_n}$ and a map $u_\star$ such that $u_{\varepsilon_n}$ converges to $u_\star$. For $n$ large enough there exist exactly $d_i$ zeroes of $u_{\varepsilon_n}$ in a neighborhood of $a_i$, denoted $x_{ij}^{\varepsilon_n}$, $j=1,\dots,d_i$. There exists $\lambda>0$ such that $|x_{ij}^{\varepsilon_n}-a_i|\ge\lambda\varepsilon_n$ and $\deg(u_{\varepsilon_n},\partial B(x_{ij}^{\varepsilon_n},\lambda\varepsilon_n))=1$.
Now we suppose that for all $y\in\Lambda_1$ there exists $C_y>0$ such that
$$p(x)=p_0+C_y|x-y|^2+o(|x-y|^2).$$
In what follows we denote $a_i\in\Lambda_2$ for $i=1,\dots,l$ and $a_i\in\Lambda_1\setminus\Lambda_2$ for $i=l+1,\dots,m$. Let $l_2=\operatorname{Card}\Lambda_2$. If $l_2\ge d$ we have $m=d$, $\{a_1,\dots,a_d\}\subset\Lambda_2$, and $(a_1,\dots,a_d)$ minimizes $W_G(a,1,\dots,1,g,p)$, which is defined in (2.1) of the present paper with $\Omega=G$ and $g=h_0$. If $l_2<d$, the integers $m,l,d_1,\dots,d_m$ realize
$$F(d)=\min\Big\{\sum_{i=1}^{l}\frac{d_i^2-d_i}{2}+\sum_{i=l+1}^{m}\frac{2d_i^2-d_i}{2};\ \sum_{i=1}^{m}d_i=d\Big\}$$
(the cases $l=m$ and $l=0$ are included). In particular we have $\Lambda_2\subset\{a_1,\dots,a_m\}$.
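The selection rule above can be made concrete on a small hypothetical case, not taken from the paper: suppose $d=3$ and $\Lambda_2$ contains a single point, so at most one singularity can sit in the interior minimum set. Comparing the admissible splittings gives

```latex
% cost of a splitting: an interior point of degree d_i costs (d_i^2-d_i)/2,
% a boundary point of degree d_i costs (2d_i^2-d_i)/2
\begin{aligned}
&\text{one interior point, } d_1=3: &&\tfrac{9-3}{2}=3,\\
&\text{interior } d_1=2 \text{ and boundary } d_2=1: &&\tfrac{4-2}{2}+\tfrac{2-1}{2}=\tfrac32,\\
&\text{interior } d_1=1 \text{ and two boundary points } d_2=d_3=1: &&0+\tfrac12+\tfrac12=1,\\
&\text{one boundary point, } d_1=3: &&\tfrac{18-3}{2}=\tfrac{15}{2},
\end{aligned}
```

so $F(3)=1$, attained by one simple interior zero and two simple boundary zeroes. Degree-one interior points are free, which is why the minimization only forces zeroes onto the boundary when $\Lambda_2$ is too small.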
The results related to the zeroes of $u_{\varepsilon_n}$ are the following. The sequence
$$|x_{ij}^{\varepsilon_n}-x_{ik}^{\varepsilon_n}|^2\log\frac1{\varepsilon_n}$$
tends to a positive constant as $\varepsilon_n$ tends to $0$, for all $i$ and all $j\ne k$. Moreover
$$|x_{ij}^{\varepsilon_n}-a_i|^2\log\frac1{\varepsilon_n}$$
is bounded for all $i$ and all $j=1,\dots,d_i$. More precisely, if $i=1,\dots,l$ and $d_i=1$, then $|x_{i1}^{\varepsilon_n}-a_i|^2\log\frac1{\varepsilon_n}$ tends to $0$. If $i=1,\dots,l$ and $d_i>1$, then $|x_{ij}^{\varepsilon_n}-a_i|^2\log\frac1{\varepsilon_n}$ tends to a positive constant for at least $d_i-1$ zeroes $x_{ij}^{\varepsilon_n}$. If $i=l+1,\dots,m$, then $|x_{ij}^{\varepsilon_n}-a_i|^2\log\frac1{\varepsilon_n}$ and $\operatorname{dist}(x_{ij}^{\varepsilon_n},\partial G)^2\log\frac1{\varepsilon_n}$ tend to positive constants. In the particular case $d_i=1$ these constants are both equal to $\frac{p_0}{2C_{a_i}}$.
In the present paper we complete the results given in [AS1,2] and [BH1,2] concerning the choice of the location of the singularities $(a_1,\dots,a_m)$ and the estimate of the energy $E_\varepsilon(u_\varepsilon)$, which we carry out in a general setting. We denote
$$\delta_n=\frac1{\log^{1/2}\frac1{\varepsilon_n}},\qquad \omega_{ij}^{\varepsilon_n}=\frac{x_{ij}^{\varepsilon_n}-a_i}{\delta_n},\qquad \omega_{ij}=\lim_{\varepsilon_n\to0}\omega_{ij}^{\varepsilon_n}\quad\text{and}\quad y_{ij}^{\varepsilon_n}=a_i+\delta_n\omega_{ij}.$$
This work also concerns the characterization, for all $i$, of the configuration $(\omega_{i1},\dots,\omega_{id_i})$ and the estimate of the rate of convergence of the sequence $x_{ij}^{\varepsilon_n}-y_{ij}^{\varepsilon_n}$ to $0$. We set for $i=1,\dots,l$ and for $(\xi_1,\dots,\xi_{d_i})\in(\mathbb{R}^2)^{d_i}$
$$H_i(\xi_1,\dots,\xi_{d_i})=\sum_{j\ne k}\log\frac1{|\xi_j-\xi_k|}$$
and for $i=l+1,\dots,m$ and $(\xi_1,\dots,\xi_{d_i})\in(\mathbb{R}^2_+)^{d_i}$
$$H_i(\xi_1,\dots,\xi_{d_i})=\sum_{j\ne k}\log\frac1{|\xi_j-\xi_k|}+\sum_{j,k}\log\frac1{|\xi_j-\tilde\xi_k|},$$
where
$$\mathbb{R}^2_+=\{(x_1,x_2)\in\mathbb{R}^2;\ x_2>0\}$$
and $\tilde\xi_k=r(\xi_k)$, $r$ being the reflection with respect to $\{(x_1,x_2)\in\mathbb{R}^2;\ x_2=0\}$. Let us recall (see [AS2]) that for $i=1,\dots,l$, $(\omega_{i1},\dots,\omega_{id_i})$ realizes the minimum in $(\mathbb{R}^2)^{d_i}$ of
$$G_i(\xi_1,\dots,\xi_{d_i})=H_i(\xi_1,\dots,\xi_{d_i})+C_{a_i}\sum_{j=1}^{d_i}|\xi_j|^2.$$
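For $d_i=2$ the minimization of $G_i$ can be carried out by hand; the short computation below is an illustration added here, not part of the original text (it assumes the optimal pair is symmetric about the origin, which follows by minimizing over the center of mass), and it already displays the identity $C_{a_i}\sum_j|\omega_{ij}|^2=A_i$ used in Section 2, with the normalization $p_0=1$.

```latex
% d_i = 2, C = C_{a_i}: by symmetry take xi_2 = -xi_1, |xi_1| = r, so |xi_1 - xi_2| = 2r.
% Both ordered pairs contribute to H_i, hence
G_i(\xi_1,\xi_2)=2\log\frac1{2r}+2Cr^2,
\qquad
\frac{d}{dr}G_i=-\frac2r+4Cr=0\ \Longrightarrow\ r^2=\frac1{2C}.
% The two zeroes sit at antipodal points at distance 1/sqrt(2C) from a_i, and
C\sum_{j=1}^{2}|\xi_j|^2=2Cr^2=1=\frac{d_i^2-d_i}{2}.
```

Minimizing over the angle is trivial here; for larger $d_i$ the same scaling argument still pins down $C\sum_j|\xi_j|^2$ even though the optimal configuration is no longer explicit.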
We have the following theorems.

THEOREM 1. Let $i=1,\dots,l$. If $d_i=1$,
$$|x_{i1}^{\varepsilon_n}-a_i|=O\Big(\frac{\varepsilon_n^{1/3}}{\log^{1/2}\frac1{\varepsilon_n}}\Big).$$
If $d_i>1$ and if the configuration $(\omega_{i1},\dots,\omega_{id_i})$ realizes a nondegenerate minimum of $G_i$, then
$$|x_{ij}^{\varepsilon_n}-y_{ij}^{\varepsilon_n}|=O\Big(\frac1{\log^{3/2}\frac1{\varepsilon_n}}\log\log\frac1{\varepsilon_n}\Big).$$

THEOREM 2. For $i=l+1,\dots,m$ we assume that, locally near $a_i$, $G$ is the half plane $\mathbb{R}^2_+$. The configuration $(\omega_{i1},\dots,\omega_{id_i})$ realizes the minimum of $G_i$ in $(\mathbb{R}^2_+)^{d_i}$. If it is a nondegenerate minimum point, then
$$|x_{ij}^{\varepsilon_n}-y_{ij}^{\varepsilon_n}|=O\Big(\frac1{\log^{5/8}\frac1{\varepsilon_n}}\log^{1/2}\log\frac1{\varepsilon_n}\Big).$$
Next we set
$$A_i=\frac{d_i^2-d_i}2,\quad i=1,\dots,l$$
and
$$A_i=\frac{2d_i^2-d_i}2,\quad i=l+1,\dots,m.$$
We consider the universal constant $\gamma$ defined in [BBH2] by
$$\gamma=\lim_{\varepsilon\to0}\Big(I(\varepsilon,1)-\pi\log\frac1\varepsilon\Big),$$
$$I(\varepsilon,1)=\min_{u\in H^1_{x/|x|}}\Big(\frac12\int_{B(0,1)}|\nabla u|^2+\frac1{4\varepsilon^2}\int_{B(0,1)}(1-|u|^2)^2\Big).$$
For $b=(b_1,\dots,b_m)$, $W_G(b,d_1,\dots,d_m,g,p)$ is defined by (2.1) with $\Omega=G$ and $h_0=g$.
THEOREM 3. The configuration $(a_1,\dots,a_m)$ minimizes in $\Lambda_2^l\times(\Lambda_1\setminus\Lambda_2)^{m-l}$ the map
$$\widetilde W(b)=W_G(b,d_1,\dots,d_m,g,p)+\sum_{i=1}^m\ \min_{\{\xi;\ C_{b_i}\sum_{j=1}^{d_i}|\xi_j|^2=p_0A_i\}}H_i$$
and there exists a map $X(\varepsilon)$ that tends to $0$ as $\varepsilon$ tends to $0$ such that
$$E_\varepsilon(u_\varepsilon)=\pi dp_0\log\frac1\varepsilon+\pi p_0F(d)\Big(\log\log\frac1\varepsilon+1\Big)+\min_{\Lambda_2^l\times(\Lambda_1\setminus\Lambda_2)^{m-l}}\widetilde W+dp_0\gamma+X(\varepsilon).$$
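The $\log\log\frac1\varepsilon$ term can be read off heuristically from the scale $\delta_n$; the following back-of-the-envelope count is an added gloss, not part of the original argument.

```latex
% The d_i zeroes near a_i sit at mutual distance of order delta_n = log^{-1/2}(1/eps),
% so one unit of pairwise vortex interaction contributes
\pi p_0\log\frac1{\delta_n^2}=\pi p_0\log\log\frac1{\varepsilon_n}.
% Interior point: d_i^2 - d_i ordered-pair logarithms, i.e. A_i = (d_i^2-d_i)/2 units.
% Boundary point: the d_i^2 image terms raise the count to 2d_i^2 - d_i
% ordered logarithms, i.e. A_i = (2d_i^2-d_i)/2 units. Summing over i:
\sum_i\pi p_0A_i\log\log\frac1{\varepsilon_n}=\pi p_0F(d)\log\log\frac1{\varepsilon_n},
```

matching the coefficient $\pi p_0F(d)$ in the energy expansion of Theorem 3.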
2. Proof of Theorems 1, 2 and 3.

The domain $\Omega$ will be $G$ or $B(0,1)$. Let $h_0\in C^1(\partial\Omega;S^1)$ be such that $\deg(h_0,\partial\Omega)=d$. Let $m\in\mathbb{N}^*$. We consider the positive integers $d_1,\dots,d_m$ that verify $\sum_{j=1}^md_j=d$ and $b=(b_1,\dots,b_m)\in\Omega^l\times(\partial\Omega)^{m-l}$. We denote $\tilde d_j=d_j$ for $j=1,\dots,l$, $\tilde d_j=2d_j$ for $j=l+1,\dots,m$ and $\tilde d=\sum_{j=1}^m\tilde d_j$. In the sequel we distinguish two cases. In the case I, the points $b_j$ are fixed in $\overline\Omega$, $b_j\in\Omega$ for $j=1,\dots,l$, $b_j\in\partial\Omega$ for $j=l+1,\dots,m$, and in a neighborhood of $b_j$ we have $p(x)=p_0+C|x-b_j|^2+o(|x-b_j|^2)$. In the case II we have $b_j\in\Omega$ for all $j=1,\dots,m$, that is $l=m$, and there exists $\eta>0$ such that $b=(b_1,\dots,b_m)\in A_\eta$, where we denote $A_\eta$ the set of the $b=(b_1,\dots,b_m)\in\Omega^m$ verifying $|b_j-b_k|\ge2\eta$, $j\ne k$, the distance from $b_j$ to $\partial\Omega$ being greater than a positive constant independent of $\eta$. Moreover there exists $a_j\in\Omega$ with $|a_j-b_j|\le\eta$ and with $p(x)=p_0+C|x-a_j|^2+o(|x-a_j|^2)$ in a neighborhood of $a_j$. Let $h=(h_1,\dots,h_m)\in C^1(S^1;S^1)^m$ be such that
$$\deg(h_j,S^1)=d_j,\quad j=1,\dots,l$$
and
$$\deg(h_j,S^1)=2d_j,\quad j=l+1,\dots,m.$$
In the case I, since $\partial\Omega$ is regular, there exists, for $j=l+1,\dots,m$, a neighborhood $U_j$ of $b_j$ and a conformal change of variables $H_j:Q\to U_j$, such that $H_j(Q^+)=U_j\cap\Omega$ and $H_j(Q^0)=U_j\cap\partial\Omega$, where $Q=B(0,1)$, $Q^+=\{(x_1,x_2)\in Q;\ x_2>0\}$, $Q^0=\{(x_1,x_2)\in Q;\ x_2=0\}$. Let $r$ be the reflection with respect to $Q^0$. We suppose that
$$h_j\Big(\frac{x-b_j}{|x-b_j|}\Big)=h_j\Big(\frac{H_j\circ r\circ H_j^{-1}(x)-b_j}{|H_j\circ r\circ H_j^{-1}(x)-b_j|}\Big)\quad\text{for all }x\in U_j.$$
In the sequel we may assume for simplicity that there exists $\mu>0$ such that
$$\Omega\cap B(b_j,\mu)\ \text{is the half disc}\ \{(x_1,x_2);\ x_1^2+x_2^2\le\mu^2,\ x_2>0\},$$
and
$$\partial\Omega\cap B(b_j,\mu)=\{(x_1,x_2);\ x_1^2+x_2^2\le\mu^2,\ x_2=0\}.$$
We suppose that the balls $B(b_j,\mu)$, $j=1,\dots,m$, are disjoint. Now $h_j$ verifies, for all $x\in B(b_j,\mu)$,
$$h_j\Big(\frac{x-b_j}{|x-b_j|}\Big)=h_j\Big(\frac{r_j(x)-b_j}{|r_j(x)-b_j|}\Big)$$
where $r_j$ is the reflection associated to the flat boundary $\partial\Omega\cap B(b_j,\mu)$, $j=l+1,\dots,m$. We suppose that
$$h_j\Big(\frac{x-b_j}\mu\Big)=h_0(x)\quad\text{for }j=l+1,\dots,m\text{ and for all }x\in\partial\Omega\cap\partial B(b_j,\mu).$$
Let $M>0$ be fixed. We denote $\mathcal H_M$ the set of the regular maps $h:S^1\to S^1$ such that $\big|\frac{\partial^kh}{\partial\theta^k}\big|\le M$, $k\le3$. In the sequel we suppose that $h_0$ is in $\mathcal H_M$ and that $h_1,\dots,h_m$ are as above and are in $\mathcal H_M$. We define for $\rho<\mu$ in the case I and for $\rho<\eta$ in the case II
$$\Gamma_0=\Big(\partial\Omega\setminus\bigcup_{j=l+1}^mB(b_j,\rho)\Big)\cup\bigcup_{j=l+1}^m\big(\partial B(b_j,\rho)\cap\Omega\big),$$
$$\Gamma_j=\partial B(b_j,\rho),\quad j=1,\dots,l,$$
and
$$g_0(x)=\begin{cases}h_0(x)&\text{on }\partial\Omega\setminus\bigcup_{j=l+1}^mB(b_j,\rho),\\[4pt]h_j\Big(\dfrac{x-b_j}{|x-b_j|}\Big)&\text{on }\partial B(b_j,\rho)\cap\Omega,\ j=l+1,\dots,m,\end{cases}$$
$$g_j(x)=h_j\Big(\frac{x-b_j}{|x-b_j|}\Big)\quad\text{on }\partial B(b_j,\rho),\ j=1,\dots,l.$$
We set
$$\hat\Omega_b=\Omega\setminus\bigcup_{j=1}^mB(b_j,\rho).$$
We consider the following minimization problem
$$(E_b^\rho)\qquad\min_{u\in\mathcal E}\int_{\hat\Omega_b}p\,|\nabla u|^2$$
where
$$\mathcal E=\{v\in H^1(\hat\Omega_b;S^1);\ \forall j=0,\dots,m,\ \exists\alpha_j\in\mathbb{C},\ |\alpha_j|=1,\ \text{such that }v(z)=\alpha_jg_j\text{ on }\Gamma_j\}.$$
We turn to the definition of a renormalized energy in the sense of [BBH2], with an observation of Ragazzo (see [B]). This definition is the same as in [BH2] with a minor modification. We define
$$(2.1)\qquad W(b,d_1,\dots,d_m,h_0,p)=-\pi\sum_{i\ne j}d_id_j\,p(b_i)\log|b_i-b_j|-\pi\sum_{i,j}d_id_j\,R(b_i,b_j),$$
where the function $R_j=R(\cdot,b_j)$, $j=1,\dots,m$, is defined by
$$(2.2)\qquad\begin{cases}\operatorname{div}\Big(\dfrac1p\nabla R_j\Big)=-p(b_j)\,\nabla\dfrac1p\cdot\nabla\log|x-b_j|&\text{in }\Omega,\\[6pt]\dfrac1p\dfrac{\partial R_j}{\partial\nu}=\Big(1-\dfrac{p(b_j)}p\Big)\dfrac{\partial\log|x-b_j|}{\partial\nu}+\dfrac1{\tilde d}\dfrac{\partial\varphi_0}{\partial\tau}&\text{on }\partial\Omega,\end{cases}$$
with the normalization condition
$$(2.3)\qquad\int_{\partial\Omega}\big(h_0\wedge(h_0)_\tau\big)R_j=-p(b_j)\int_{\partial\Omega}\big(g\wedge g_\tau\big)\log|x-b_j|.$$
The function $\varphi_0$ is defined on $\partial\Omega$ by
$$h_0(z)=\prod_{j=1}^m\Big(\frac{z-b_j}{|z-b_j|}\Big)^{\tilde d_j}e^{i\varphi_0(z)}.$$
The function $R_j$ exists in $\Omega$. Indeed, we have
$$-p(b_j)\int_\Omega\nabla\frac1p\cdot\nabla\log|x-b_j|=\int_{\partial\Omega}\Big(\Big(1-\frac{p(b_j)}p\Big)\frac{\partial\log|x-b_j|}{\partial\nu}+\frac1{\tilde d}\frac{\partial\varphi_0}{\partial\tau}\Big).$$
We note that $R_j$ is the regular part of the function $H_j=H(\cdot,b_j)$ defined by
$$H(x,b_j)=R(x,b_j)+p(b_j)\log|x-b_j|,$$
which is the solution of
$$(2.4)\qquad\begin{cases}\operatorname{div}\Big(\dfrac1p\nabla H_j\Big)=2\pi\delta_{b_j}&\text{in }\Omega,\\[6pt]\dfrac1p\dfrac{\partial H_j}{\partial\nu}=\dfrac{\partial\log|x-b_j|}{\partial\nu}+\dfrac1{\tilde d}\dfrac{\partial\varphi_0}{\partial\tau}&\text{on }\partial\Omega,\end{cases}$$
with the normalization condition
$$(2.5)\qquad\int_{\partial\Omega}\big(h_0\wedge(h_0)_\tau\big)H_j=0.$$
1
Note that, for j = l + 1; :::; m, @ log@jx?bj j is in the dual of W 1? p ;p(@ ) for all p > 2 and
1
that if f 2 W 1? p ;p(@ ) we have
Z
@
log
j
x
?
b
j
j
<
; f >= r log jx ? bj j:rPf
@
1
where P : W 1? p ;p(@ ) ! W 1;p(
) is a continuous linear operator. Let us dene 0 by
0(x) =
d2j R
1
We dene W (hj ) = 2 B(0;1) j r j2
(2:6)
m
X
j =1
dj H (x; bj ):
, being the solution of
8 = 0 in B(0; 1)
>
>
>
< @ = hj (hj ) ? 1 on S 1
@
dj
>
Z
>
>
: 1 (hj (hj ) ) = 0:
S
Let us remark that W 1(hj ) is the renormalized energy in B(0; 1), for the boundary value
hj , a singularity of degree dj at the point 0 and for p constant and equal to 1.
We note that we have in the case I
$$(2.7)\qquad\|R_j\|_{W^{2,2}(\Omega)}\le M.$$
In the case II, we have for all $1\le q<2$
$$\|R_j\|_{W^{2,q}(\Omega)}\le M,$$
and these estimates are valid uniformly for $h_0$ in $\mathcal H_M$. More precisely, there exists a map $\Psi_{b_j}$ which is bounded in $W^{2,2}(\Omega)$, uniformly for $h_0\in\mathcal H_M$ and for $b\in A_\eta$, such that, for $j=1,\dots,m$,
$$(2.8)\qquad R(b_j,x)=-\frac{p^2(b_j)}2\,\nabla\frac1p(b_j)\cdot(x-b_j)\log|x-b_j|+\Psi_{b_j}(x).$$
Note that this is clear by standard elliptic estimates (see [GT]) applied to the equation satisfied by the map $\Psi_{b_i}$, that is
$$\begin{cases}\operatorname{div}\Big(\dfrac1p\nabla\Psi_{b_i}\Big)=\dfrac{p^2(b_i)}2\,\nabla\dfrac1p\cdot\Big((x-b_i)\cdot\nabla\dfrac1p(b_i)\,\nabla\log|x-b_i|+\nabla\dfrac1p(b_i)\log|x-b_i|\Big)\\[6pt]\qquad\qquad+\dfrac{p(b_i)}p\Big(p(b_i)\,\nabla\dfrac1p(b_i)\cdot\nabla\log|x-b_i|-p\,\nabla\dfrac1p\cdot\nabla\log|x-b_i|\Big)\quad\text{in }\Omega,\\[6pt]\dfrac1p\dfrac{\partial\Psi_{b_i}}{\partial\nu}=\Big(1-\dfrac{p(b_i)}p\Big)\dfrac{\partial\log|x-b_i|}{\partial\nu}+\dfrac1{\tilde d}\dfrac{\partial\varphi_0}{\partial\tau}\\[6pt]\qquad\qquad-\dfrac{p^2(b_i)}{2p}\,\nabla\dfrac1p(b_i)\cdot(x-b_i)\,\dfrac{\partial\log|x-b_i|}{\partial\nu}-\dfrac{p^2(b_i)}{2p}\log|x-b_i|\,\nabla\dfrac1p(b_i)\cdot\nu\quad\text{on }\partial\Omega.\end{cases}$$
Thus we have the following estimates, uniformly for $h_0\in\mathcal H_M$,
$$(2.9)\qquad|R(x,b_j)|=O\Big(C|b_j-a_j|\,|x-b_j|\log\frac1{|x-b_j|}\Big)+O(1)$$
and
$$(2.10)\qquad|\nabla R(x,b_j)|=O\Big(C|b_j-a_j|\log\frac1{|x-b_j|}\Big)+O(1).$$
Let us derive an estimate, in the case II, which will be useful in the sequel. Let $c_j$ be a point of $\Omega$ that satisfies the same conditions as $b_j$. We define
$$R_{b,c}(x)=\sum_{i=1}^m\big(R(b_i,x)-R(c_i,x)\big)$$
and
$$\Psi_{b,c}=\sum_{i=1}^m\big(\Psi_{b_i}-\Psi_{c_i}\big).$$
Using standard elliptic estimates, we obtain
$$\|\Psi_{b,c}\|_{W^{2,2}(\Omega)}=\sum_iO\big(C|b_i-c_i|\big),$$
thus (2.8) gives
$$(2.11)\qquad\|R_{b,c}\|_{L^\infty(\Omega)}=\sum_iO\big(C|b_i-c_i|\big).$$
In the next two results, the parameters $\eta$ and $C$ are taken into account only in the case II. We have the following theorem.

THEOREM 4. Let $\hat u_b$ be a minimizer for $(E_b^\rho)$. We have, uniformly for $(h_0,h_1,\dots,h_m)$ in $\mathcal H_M^{m+1}$, in the case I
$$\frac12\int_{\hat\Omega_b}p\,|\nabla\hat u_b|^2=\pi p_0\sum_{j=1}^ld_j^2\log\frac1\rho+2\pi p_0\sum_{j=l+1}^md_j^2\log\frac1\rho+W(b,d_1,\dots,d_m,h_0,p)$$
$$+\sum_{j=1}^lp_0W^1(h_j)+\sum_{j=l+1}^m\frac{p_0}2W^1(h_j)+O\Big(\rho\log^2\frac1\rho\Big),$$
and in the case II,
$$\frac12\int_{\hat\Omega_b}p\,|\nabla\hat u_b|^2=\pi\sum_{j=1}^md_j^2\,p(b_j)\log\frac1\rho+W(b,d_1,\dots,d_m,h_0,p)$$
$$+\sum_{j=1}^m\frac{p(b_j)^2}{p_0}W^1(h_j)+\sum_{j\ne k}O\Big(\frac\rho{|b_j-b_k|}\Big)+\sum_jO\Big(C|b_j-a_j|\,\rho\log\frac1\rho\Big).$$
We need the following proposition.

PROPOSITION 1. We have, uniformly for $(h_0,h_1,\dots,h_m)$ in $\mathcal H_M^{m+1}$,
$$\frac12\int_{\hat\Omega_b}\frac1p|\nabla\Phi_0|^2=\pi\sum_{j=1}^ld_j^2\,p(b_j)\log\frac1\rho+2\pi\sum_{j=l+1}^md_j^2\,p(b_j)\log\frac1\rho+W(b,d_1,\dots,d_m,h_0,p)+X(\rho),$$
where $X(\rho)=O\big(\rho\log\frac1\rho\big)$ in the case I and $X(\rho)=\sum_{j\ne k}O\big(\frac\rho{|b_j-b_k|}\big)+\sum_jO\big(C|b_j-a_j|\rho\log\frac1\rho\big)$ in the case II.
We postpone the proofs of Proposition 1 and Theorem 4 to Section 3 and we present the proofs of Theorems 1, 2 and 3. We start with some notations. Let $a_i\in G$ and let $x_{ij}^\varepsilon$, $j=1,\dots,d_i$, be the zeroes of $u_\varepsilon$ which tend to $a_i$. Let $\mu>0$ be such that $|a_i-a_j|>2\mu$ for $i\ne j$ and $0<R<\mu$ be such that $B(x_{ij}^\varepsilon,R)\subset B(a_i,\mu)$, $j=1,\dots,d_i$. We set $h_i(e^{i\theta})=\frac{u_\varepsilon}{|u_\varepsilon|}(a_i+\mu e^{i\theta})$ and $k_{ij}(e^{i\theta})=\frac{u_\varepsilon}{|u_\varepsilon|}(x_{ij}^\varepsilon+Re^{i\theta})$. For $i=l+1,\dots,m$ we define $h_i$ on $S^1$ by reflection with respect to the flat boundary $\partial G\cap B(a_i,\mu)$. We note that $k_{ij}$ and $h_i$ are in $\mathcal H_M$, $j=1,\dots,d_i$, for some $M$ independent of $\varepsilon$. Indeed, following the proof of Proposition 1.2, Part II in [AS2], we deduce that for any $k\ge1$ there exists $M$ independent of $\varepsilon$ such that
$$|D^ku_\varepsilon(x)|\le\frac M{|x_{ij}^\varepsilon-x|^k}\quad\text{in }B(x_{ij}^\varepsilon,R_0)\setminus B(x_{ij}^\varepsilon,\varepsilon)$$
and
$$|D^ku_\varepsilon(x)|\le\frac M{|x-a_i|^k}\quad\text{in }B(a_i,\mu_1)\setminus B(a_i,\mu_0),$$
where $B(x_{ij}^\varepsilon,R_0)$ does not contain any zero of $u_\varepsilon$ other than $x_{ij}^\varepsilon$ and $\mu_0$ is such that $x_{ij}^\varepsilon\in B(a_i,\mu_0)$, $j=1,\dots,d_i$. This implies that $k_{ij}$ and $h_i$ are in $\mathcal H_M$, $j=1,\dots,d_i$, for some $M$. In all what follows we denote $\varepsilon=\varepsilon_n$.
Proof of Theorem 1. We suppose now that $a_i\in G$ and $d_i=1$. We denote $a=a_i$ and $x^\varepsilon$ the zero of $u_\varepsilon$ that tends to $a$. We define $h=h_i$ and $k=k_{i1}$ as above. We know that $|x^\varepsilon-a|=o\big(\frac1{\log^{1/2}\frac1\varepsilon}\big)$. Let us give a lower bound for $E_\varepsilon(u_\varepsilon)$. As in [CM], proof of Theorem 5, we have
$$E_\varepsilon\big(u_\varepsilon,G\setminus B(a,\mu)\big)\ge\frac12\int_{G\setminus B(a,\mu)}p\,\Big|\nabla\Big(\frac{u_\varepsilon}{|u_\varepsilon|}\Big)\Big|^2+O\Big(\frac{\varepsilon^2}{\mu^2}\Big),$$
thus
$$E_\varepsilon(u_\varepsilon)\ge\frac12\int_{G\setminus B(a,\mu)}p\,\Big|\nabla\Big(\frac{u_\varepsilon}{|u_\varepsilon|}\Big)\Big|^2+\frac12\int_{B(a,\mu)\setminus B(x^\varepsilon,R)}p\,\Big|\nabla\Big(\frac{u_\varepsilon}{|u_\varepsilon|}\Big)\Big|^2+p_0I(\varepsilon,R,k)+O\Big(\frac{\varepsilon^2}{R^2}\Big).$$
Applying Theorem 4, case II, we infer
$$(2.12)\qquad E_\varepsilon(u_\varepsilon)\ge\frac12\int_{G\setminus B(a,\mu)}p\,\Big|\nabla\Big(\frac{u_\varepsilon}{|u_\varepsilon|}\Big)\Big|^2+\pi p(x^\varepsilon)\log\frac1R+W_{B(a,\mu)}(x^\varepsilon,1,h,p)$$
$$+\frac{p^2(x^\varepsilon)}{p_0}W^1(k)+p_0I(\varepsilon,R,k)+O(R)+O\Big(|x^\varepsilon-a|\,R\log\frac1R\Big)+O\Big(\frac{\varepsilon^2}{R^2}\Big).$$
We turn now to an upper bound for $E_\varepsilon(u_\varepsilon)$. We construct a function $w$ equal to $\frac{u_\varepsilon}{|u_\varepsilon|}$ in $G\setminus B(a,\mu)$, such that $w$ realizes $(E_a^R)$ in $B(a,\mu)\setminus B(a,R)$ with the boundary data $h$ and $k$, and realizes $I(\varepsilon,R,k)$ in $B(a,R)$. Theorem 4, case II, gives
$$(2.13)\qquad E_\varepsilon(u_\varepsilon)\le\frac12\int_{G\setminus B(a,\mu)}p\,\Big|\nabla\Big(\frac{u_\varepsilon}{|u_\varepsilon|}\Big)\Big|^2+\pi p_0\log\frac1R+W_{B(a,\mu)}(a,1,h,p)+p_0W^1(k)$$
$$+(p_0+CR^2)I(\varepsilon,R,k)+O(R)+o\big(R^2I(\varepsilon,R,k)\big).$$
Now, using the fact that $I(\varepsilon,R,k)=O\big(\log\frac R\varepsilon\big)$ and subtracting (2.13) from (2.12), we deduce
$$(2.14)\qquad\pi C|x^\varepsilon-a|^2\log\frac1R+\frac{p^2(x^\varepsilon)-p_0^2}{p_0}W^1(k)+W_{B(a,\mu)}(x^\varepsilon,1,h,p)-W_{B(a,\mu)}(a,1,h,p)$$
$$\le O\Big(\frac{\varepsilon^2}{R^2}\Big)+O(R)+O\Big(|x^\varepsilon-a|\,R\log\frac1R\Big)+O\Big(R^2\log\frac1\varepsilon\Big).$$
We have
$$p^2(x^\varepsilon)-p_0^2=O\big(|x^\varepsilon-a|^2\big)$$
and
$$W_{B(a,\mu)}(x^\varepsilon,1,h,p)-W_{B(a,\mu)}(a,1,h,p)=-\pi\big(R_{B(a,\mu)}(x^\varepsilon,x^\varepsilon)-R_{B(a,\mu)}(a,a)\big),$$
where $R_{B(a,\mu)}$ is defined in (2.2) with $\Omega=B(a,\mu)$ and $h_0=h$. By renormalization we get
$$R_{B(a,\mu)}(x^\varepsilon,x)=R(\omega^\varepsilon,y),$$
where $\omega^\varepsilon=\frac{x^\varepsilon-a}\mu$, $y=\frac{x-a}\mu$ and $R(\omega^\varepsilon,\cdot)$ is defined by (2.2) with $\Omega=B(0,1)$, $h_0=h$ and $p$ replaced by $\tilde p$, $\tilde p(y)=p(a+\mu y)$. Thus
$$R_{B(a,\mu)}(x^\varepsilon,x^\varepsilon)-R_{B(a,\mu)}(a,a)=R(\omega^\varepsilon,\omega^\varepsilon)-R(0,0).$$
Using the notation of (2.11) we set
$$R(\omega^\varepsilon,\omega^\varepsilon)-R(0,0)=R_{\omega^\varepsilon,0}(\omega^\varepsilon)+R_{\omega^\varepsilon,0}(0).$$
In (2.11) we replace $C$ by $C\mu^2$, in view of the definition of $\tilde p$, and we get
$$R(\omega^\varepsilon,\omega^\varepsilon)-R(0,0)=O\big(\mu^2|\omega^\varepsilon|\big),$$
which gives
$$W_{B(a,\mu)}(x^\varepsilon,1,h,p)-W_{B(a,\mu)}(a,1,h,p)=O\big(\mu|x^\varepsilon-a|\big).$$
Now we turn to (2.14) and we choose $\mu=\lambda|x^\varepsilon-a|$ for some given $\lambda>0$ independent of $\varepsilon$. We obtain
$$(2.15)\qquad\pi C|x^\varepsilon-a|^2\log\frac1R+O\big(|x^\varepsilon-a|^2\big)\le O\Big(\frac{\varepsilon^2}{R^2}\Big)+O(R)+O\Big(|x^\varepsilon-a|\,R\log\frac1R\Big)+O\Big(R^2\log\frac1\varepsilon\Big).$$
Let $X=|x^\varepsilon-a|$. We deduce from (2.15) that there exist $K_1>0$ and $K_2>0$ such that
$$(2.16)\qquad K_1X^2\log\frac1R-K_2XR\log\frac1R\le O\Big(\frac{\varepsilon^2}{R^2}+R+R^2\log\frac1\varepsilon\Big).$$
The optimal choice for $R$ is $R=\varepsilon^{2/3}$. This choice of $R$ is possible if we suppose that $|x^\varepsilon-a|\ge\lambda\varepsilon^{2/3}$ for some $\lambda>0$. In this case (2.16) gives
$$K_1X^2\log\frac1\varepsilon-K_2X\varepsilon^{2/3}\log\frac1\varepsilon\le O\big(\varepsilon^{2/3}\big)$$
and consequently
$$X=O\Big(\frac{\varepsilon^{1/3}}{\log^{1/2}\frac1\varepsilon}\Big).$$
We have proved the first part of Theorem 1.
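Why $R=\varepsilon^{2/3}$ balances (2.16) can be recorded in one line; this remark is an added gloss on the choice made above.

```latex
% In the right-hand side of (2.16) the first two terms move in opposite
% directions in R; equating them fixes the scale:
\frac{\varepsilon^2}{R^2}=R\ \Longleftrightarrow\ R=\varepsilon^{2/3},
\qquad\text{and then}\qquad
R^2\log\tfrac1\varepsilon=\varepsilon^{4/3}\log\tfrac1\varepsilon=o\big(\varepsilon^{2/3}\big),
% so the whole right-hand side is O(eps^{2/3}); solving the quadratic
% inequality K_1 X^2 log(1/eps) <= O(eps^{2/3}) + K_2 X eps^{2/3} log(1/eps)
% in X gives X = O(eps^{1/3} / log^{1/2}(1/eps)).
```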
We suppose now that $a_i\in G$ and $d_i>1$. We denote $a_i=a$, $x_{ij}^\varepsilon=x_j^\varepsilon$ and $\delta=\frac1{\log^{1/2}\frac1\varepsilon}$. We choose $\mu$ of the order of $\delta$. We denote $\omega_j^\varepsilon=\frac{x_j^\varepsilon-a}\delta$ and $\omega_j=\lim_{\varepsilon\to0}\omega_j^\varepsilon$. We set $\tilde p(x)=p(a+\delta x)$ and $\frac{\tilde u_\varepsilon}{|\tilde u_\varepsilon|}(x)=\frac{u_\varepsilon}{|u_\varepsilon|}(a+\delta x)$. We have the following lower bound for $E_\varepsilon(u_\varepsilon)$:
$$E_\varepsilon(u_\varepsilon)\ge\frac12\int_{G\setminus B(a,\mu)}p\,\Big|\nabla\frac{u_\varepsilon}{|u_\varepsilon|}\Big|^2+O\Big(\frac{\varepsilon^2}{\mu^2}\Big)+\frac12\int_{B(0,1)\setminus\cup_jB(\omega_j^\varepsilon,\frac R\delta)}\tilde p\,\Big|\nabla\frac{\tilde u_\varepsilon}{|\tilde u_\varepsilon|}\Big|^2+\sum_j\frac12\int_{B(x_j^\varepsilon,R)}p\,\Big|\nabla\frac{u_\varepsilon}{|u_\varepsilon|}\Big|^2+O\Big(\frac{\varepsilon^2}{R^2}\Big).$$
By Theorem 4, case II, we have
$$\frac12\int_{B(0,1)\setminus\cup_jB(\omega_j^\varepsilon,\frac R\delta)}\tilde p\,\Big|\nabla\frac{\tilde u_\varepsilon}{|\tilde u_\varepsilon|}\Big|^2\ge\pi\sum_jp(x_j^\varepsilon)\log\frac\delta R+W_{B(0,1)}(\omega^\varepsilon,1,\dots,1,h,\tilde p)$$
$$+\sum_j\frac{p^2(x_j^\varepsilon)}{p_0}W^1(k_j)+O\Big(\frac R\delta\Big)+O\Big(\frac R\delta\log\frac\delta R\Big).$$
We have used the fact that $\tilde p(x)=p_0+O(\delta^2|x|^2)$; thus the constant $C$ used in Theorem 4, case II, is here $C\delta^2$. Moreover
$$\frac12\int_{B(x_j^\varepsilon,R)}p\,\Big|\nabla\frac{u_\varepsilon}{|u_\varepsilon|}\Big|^2\ge\Big(p_0+C\min_{x\in B(x_j^\varepsilon,R)}|x-a|^2\Big)I(\varepsilon,R,k_j)\ge\Big(p_0+C\delta^2\Big(|\omega_j^\varepsilon|-\frac R\delta\Big)^2\Big)I(\varepsilon,R,k_j)$$
and
$$I(\varepsilon,R,k_j)=\pi\log\frac R\varepsilon+O(1).$$
We are led to the following lower bound ($\mu$ being of the order of $\delta$):
$$(2.17)\qquad E_\varepsilon(u_\varepsilon)\ge\frac12\int_{G\setminus B(a,\mu)}p\,\Big|\nabla\frac{u_\varepsilon}{|u_\varepsilon|}\Big|^2+\pi\sum_jp(x_j^\varepsilon)\log\frac\delta R+W_{B(0,1)}(\omega^\varepsilon,1,\dots,1,h,\tilde p)$$
$$+\sum_j\frac{p^2(x_j^\varepsilon)}{p_0}W^1(k_j)+\sum_jp_0I(\varepsilon,R,k_j)+\pi C\delta^2\sum_j|\omega_j^\varepsilon|^2\log\frac R\varepsilon$$
$$+O\Big(\frac{\varepsilon^2}{R^2}\Big)+O\Big(\frac R\delta\log\frac\delta R\Big)+O\Big(\frac R\delta\Big)+O\Big(R\log\frac R\varepsilon\Big).$$
We turn now to an upper bound for $E_\varepsilon(u_\varepsilon)$. Let $y_j$, $j=1,\dots,d_i$, be any points in $\mathbb{R}^2$ and $y_j^\varepsilon=a+\delta y_j$. We construct a map $w$ such that $w=\frac{u_\varepsilon}{|u_\varepsilon|}$ in $G\setminus B(a,\mu)$. In $B(a,\mu)\setminus\cup_jB(y_j^\varepsilon,R)$ we set $w(x)=\tilde w\big(\frac{x-a}\delta\big)$, where $\tilde w$ realizes $(E_y^{R/\delta})$ in $B(0,1)\setminus\cup_jB(y_j,\frac R\delta)$ with the weight $\tilde p$ and the boundary conditions $h$ and $k_j$, $j=1,\dots,d_i$. In $B(y_j^\varepsilon,R)$, $w$ realizes $I(\varepsilon,R,k_j)$. Using Theorem 4, case II, we obtain
$$(2.18)\qquad E_\varepsilon(u_\varepsilon)\le\frac12\int_{G\setminus B(a,\mu)}p\,\Big|\nabla\frac{u_\varepsilon}{|u_\varepsilon|}\Big|^2+\pi\sum_jp(y_j^\varepsilon)\log\frac\delta R+W_{B(0,1)}(y,1,\dots,1,h,\tilde p)$$
$$+\sum_j\frac{p^2(y_j^\varepsilon)}{p_0}W^1(k_j)+\sum_jp_0I(\varepsilon,R,k_j)+\pi C\delta^2\sum_j|y_j|^2\log\frac R\varepsilon+O\Big(R\log\frac R\varepsilon\Big)+O\Big(\frac R\delta\Big)+O\Big(\frac R\delta\log\frac\delta R\Big).$$
Subtracting (2.18) from (2.17) we are led to
$$(2.19)\qquad\pi\sum_j\big(p(x_j^\varepsilon)-p(y_j^\varepsilon)\big)\log\frac\delta R+W_{B(0,1)}(\omega^\varepsilon,1,\dots,1,h,\tilde p)-W_{B(0,1)}(y,1,\dots,1,h,\tilde p)$$
$$+\pi C\delta^2\sum_j\big(|\omega_j^\varepsilon|^2-|y_j|^2\big)\log\frac R\varepsilon+O\Big(\delta^2\sum_j|\omega_j^\varepsilon-y_j|\Big)\le O\Big(\frac{\varepsilon^2}{R^2}\Big)+O\Big(\frac R\delta\log\frac\delta R\Big)+O\Big(\frac R\delta\Big)+O\Big(R\log\frac R\varepsilon\Big).$$
We have used that $p^2(x_j^\varepsilon)-p^2(y_j^\varepsilon)=O\big(\delta^2|\omega_j^\varepsilon-y_j|\big)$. If we take the optimal choice $R=\varepsilon^{2/3}\delta^{1/3}$, the right hand side of (2.19) is $O\big(\frac{\varepsilon^{2/3}}{\delta^{2/3}}\big)$ and we obtain, since $\delta^2\log\frac\delta\varepsilon\to1$,
$$(2.20)\qquad\pi C\sum_j\big(|\omega_j^\varepsilon|^2-|y_j|^2\big)+W_{B(0,1)}(\omega^\varepsilon,1,\dots,1,h,\tilde p)-W_{B(0,1)}(y,1,\dots,1,h,\tilde p)\le O\Big(\frac{\varepsilon^{2/3}}{\delta^{2/3}}\Big)+O\Big(\delta^2\log\frac1\delta\Big)\sum_j|\omega_j^\varepsilon-y_j|.$$
In view of (2.1) we have
$$W_{B(0,1)}(\omega^\varepsilon,1,\dots,1,h,\tilde p)-W_{B(0,1)}(y,1,\dots,1,h,\tilde p)=-\pi\sum_{i\ne j}\big(\tilde p(\omega_i^\varepsilon)\log|\omega_i^\varepsilon-\omega_j^\varepsilon|-\tilde p(y_i)\log|y_i-y_j|\big)$$
$$-\pi\sum_{i,j}\big(R_{B(0,1)}(\omega_i^\varepsilon,\omega_j^\varepsilon)-R_{B(0,1)}(y_i,y_j)\big)$$
$$=\pi p_0\sum_{i\ne j}\log\frac{|y_i-y_j|}{|\omega_i^\varepsilon-\omega_j^\varepsilon|}+O\Big(\delta^2\log\frac1\delta\Big)\sum_i|y_i-\omega_i^\varepsilon|-\pi\sum_{i,j}\big(R_{B(0,1)}(\omega_i^\varepsilon,\omega_j^\varepsilon)-R_{B(0,1)}(y_i,y_j)\big).$$
Using (2.11) (with $C\delta^2$ in place of $C$, $b_i=y_i$ and $c_i=\omega_i^\varepsilon$) we deduce
$$\sum_{i,j}\big(R_{B(0,1)}(\omega_i^\varepsilon,\omega_j^\varepsilon)-R_{B(0,1)}(y_i,y_j)\big)=\sum_iO\big(\delta^2|y_i-\omega_i^\varepsilon|\big)$$
and finally
$$(2.21)\qquad W_{B(0,1)}(\omega^\varepsilon,1,\dots,1,h,\tilde p)-W_{B(0,1)}(y,1,\dots,1,h,\tilde p)=\pi p_0\sum_{i\ne j}\log\frac{|y_i-y_j|}{|\omega_i^\varepsilon-\omega_j^\varepsilon|}+O\Big(\delta^2\log\frac1\delta\Big)\sum_i|y_i-\omega_i^\varepsilon|.$$
We define
$$H(y)=-\pi p_0\sum_{i\ne j}\log|y_i-y_j|.$$
We are led by (2.20) to
$$(2.22)\qquad H(\omega^\varepsilon)-H(y)+\pi C\sum_i|\omega_i^\varepsilon|^2-\pi C\sum_i|y_i|^2\le O\Big(\frac{\varepsilon^{2/3}}{\delta^{2/3}}\Big)+O\Big(\delta^2\log\frac1\delta\Big)\sum_i|y_i-\omega_i^\varepsilon|.$$
Letting $\varepsilon\to0$ we find that $\omega$ minimizes $H(\xi)+\pi C\sum_i|\xi_i|^2$, which was proved in [AS]. If $\omega$ is a nondegenerate minimum of $H(\xi)+\pi C\sum_i|\xi_i|^2$ and if we set $y=\omega$ in (2.22), we obtain that for some $k>0$
$$k|\omega^\varepsilon-\omega|^2-k|\omega^\varepsilon-\omega|\,\delta^2\log\frac1\delta\le O\Big(\frac{\varepsilon^{2/3}}{\delta^{2/3}}\Big);$$
this gives
$$|\omega^\varepsilon-\omega|=O\Big(\delta^2\log\frac1\delta\Big).$$
We have proved the second part of Theorem 1.
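The limiting configurations appearing here can also be found numerically. The following script is an illustration added by the editor, not part of the paper: it runs plain gradient descent on a discrete analogue of $H(\xi)+C\sum_i|\xi_i|^2$ (with the weight $\pi p_0$ normalized to $1$) and checks the virial identity $C\sum_j|\xi_j|^2=(d^2-d)/2$, which holds at every critical point by scaling.

```python
import numpy as np

def grad(xi, C):
    """Gradient of  F(xi) = -sum_{j != k} log|xi_j - xi_k| + C * sum_j |xi_j|^2
    for xi of shape (d, 2); the double sum runs over ordered pairs."""
    g = 2.0 * C * xi.copy()
    d = len(xi)
    for j in range(d):
        for k in range(d):
            if j != k:
                diff = xi[j] - xi[k]
                g[j] -= 2.0 * diff / np.dot(diff, diff)  # repulsion, both ordered pairs
    return g

def minimize(d, C=1.0, steps=20000, lr=1e-3, seed=0):
    """Gradient descent from a random start; the confinement term makes F coercive."""
    xi = np.random.default_rng(seed).standard_normal((d, 2))
    for _ in range(steps):
        xi -= lr * grad(xi, C)
    return xi

for d in (2, 3):
    xi = minimize(d)
    # virial identity: C * sum |xi_j|^2 should approach (d^2 - d) / 2
    print(d, np.sum(xi**2))
```

For $d=2$ and $C=1$ the descent recovers two antipodal points at distance $\sqrt{2/C}$, with $C\sum_j|\xi_j|^2$ tending to $1$; for $d=3$, to $3$. The identity is insensitive to which critical point the descent lands on.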
Proof of Theorem 2.
Let us suppose that the points $a_1,\dots,a_l$ are in $G$ and $a_{l+1},\dots,a_m$ in $\partial G$. As usual, the conclusion will follow from precise upper and lower bounds for the energy. We start by giving a lower bound for the energy. Using Theorem 4, case I, we deduce that
$$(2.23)\qquad E_\varepsilon\Big(u_\varepsilon,G\setminus\bigcup_iB(a_i,2\mu)\Big)\ge\pi p_0\sum_{i=1}^ld_i^2\log\frac1{2\mu}+2\pi p_0\sum_{i=l+1}^md_i^2\log\frac1{2\mu}+W_G(a,d_1,\dots,d_m,g,p)$$
$$+\sum_{i=1}^lp_0W^1(h_i)+\sum_{i=l+1}^m\frac{p_0}2W^1(h_i)+O\Big(\frac{\varepsilon^2}{\mu^2}\Big)+O\Big(\mu\log^2\frac1\mu\Big).$$
On the other hand, for each $i$, we have
$$(2.24)\qquad E_\varepsilon\Big(u_\varepsilon,B(a_i,\mu)\cap G\setminus\bigcup_jB(x_{ij}^\varepsilon,R)\Big)\ge\frac12\int_{B(a_i,\mu)\cap G\setminus\cup_jB(x_{ij}^\varepsilon,R)}p\,\Big|\nabla\frac{u_\varepsilon}{|u_\varepsilon|}\Big|^2+O\Big(\frac{\varepsilon^2}{R^2}\Big).$$
Now, let $i=l+1,\dots,m$, that is $a_i\in\partial G$. We can write
$$(2.25)\qquad\frac{u_\varepsilon}{|u_\varepsilon|}(x)=\prod_j\frac{x-x_{ij}^\varepsilon}{|x-x_{ij}^\varepsilon|}\,\frac{x-\tilde x_{ij}^\varepsilon}{|x-\tilde x_{ij}^\varepsilon|}\,e^{i\psi_\varepsilon(x)}=v_\varepsilon(x)e^{i\psi_\varepsilon(x)},$$
where $\tilde x_{ij}^\varepsilon$ denotes the reflection of $x_{ij}^\varepsilon$ across the flat boundary.
We may suppose that locally $\partial G\cap B(a_i,\mu)$ is flat and we may extend $\psi_\varepsilon$ to $B(a_i,\mu)$ by reflection. Following respectively the proofs of Theorem 4 and Theorem 5 in [BMR], we get
$$(2.26)\qquad\frac12\int_{B(a_i,\mu)\cap G\setminus\cup_jB(x_{ij}^\varepsilon,R)}p\,\Big|\nabla\frac{u_\varepsilon}{|u_\varepsilon|}\Big|^2\ge\frac14\int_{B(a_i,\mu)\setminus\cup_jB(x_{ij}^\varepsilon,R)\setminus\cup_jB(\tilde x_{ij}^\varepsilon,R)}p\,|\nabla v_\varepsilon|^2+O(R)+O\Big(\sum_j\frac{|x_{ij}^\varepsilon-a_i|}\mu\Big)$$
and
$$(2.27)\qquad\frac12\int_{B(a_i,\mu)\setminus\cup_jB(x_{ij}^\varepsilon,R)\setminus\cup_jB(\tilde x_{ij}^\varepsilon,R)}p\,|\nabla v_\varepsilon|^2\ge2\pi p_0d_i\log\frac\mu R-2\pi p_0\sum_{k\ne j}\log|x_{ij}^\varepsilon-x_{ik}^\varepsilon|$$
$$-2\pi p_0\sum_{j,k}\log|x_{ij}^\varepsilon-\tilde x_{ik}^\varepsilon|+O\Big(\sum_j\frac{|x_{ij}^\varepsilon-a_i|}\mu\Big)+O\Big(\sum_{j,k}\frac R{|x_{ij}^\varepsilon-\tilde x_{ik}^\varepsilon|}\Big).$$
Since $\deg\big(\frac{u_\varepsilon}{|u_\varepsilon|},\partial B(a_i,\mu)\big)=d_i$ we have
$$E_\varepsilon\big(u_\varepsilon,B(a_i,2\mu)\cap G\setminus B(a_i,\mu)\big)\ge2\pi p_0d_i^2\log2.$$
We denote $C_i=C_{a_i}$. Using (2.24)–(2.27), we conclude that, for $a_i\in\partial G$,
$$(2.28)\qquad E_\varepsilon\big(u_\varepsilon,B(a_i,2\mu)\cap G\big)\ge\pi p_0d_i\log\frac\mu R+2\pi p_0d_i^2\log2-\pi p_0\sum_{k\ne j}\log|x_{ij}^\varepsilon-x_{ik}^\varepsilon|-\pi p_0\sum_{j,k}\log|x_{ij}^\varepsilon-\tilde x_{ik}^\varepsilon|$$
$$+\sum_j\big(p_0+C_i(|x_{ij}^\varepsilon-a_i|-R)^2\big)I(\varepsilon,R,k_{ij})+O\Big(\frac{\varepsilon^2}{R^2}\Big)+O(\mu)+O\Big(\frac R\mu\Big),$$
and for $a_i\in G$ we find
$$(2.29)\qquad E_\varepsilon\big(u_\varepsilon,B(a_i,2\mu)\big)\ge\pi p_0d_i\log\frac\mu R+\pi p_0d_i^2\log2-\pi p_0\sum_{k\ne j}\log|x_{ij}^\varepsilon-x_{ik}^\varepsilon|$$
$$+\sum_j\big(p_0+C_i(|x_{ij}^\varepsilon-a_i|-R)^2\big)I(\varepsilon,R,k_{ij})+O\Big(\frac{\varepsilon^2}{R^2}\Big)+O(\mu)+O\Big(\frac R\mu\Big).$$
In order to obtain an adequate upper bound for the energy, we construct the following map:
$$w=u_0\quad\text{in }G\setminus\bigcup_iB(a_i,2\mu),$$
where $u_0$ is the canonical map associated to $(a_1,\dots,a_m)$, $(d_1,\dots,d_m)$, $G$ and $g$ (see [BH2]). Next, we extend $w$ to $B(a_i,\mu)\cap G$, for $i$ in $\{l+1,\dots,m\}$. Let $y_{ij}^\varepsilon$, $j=1,\dots,d_i$, be any points in $B(a_i,\mu)\cap G$ such that $\operatorname{dist}(y_{ij}^\varepsilon,\partial G)$ and $|y_{ij}^\varepsilon-y_{ik}^\varepsilon|$ are of the order of $\delta$ for all $k\ne j$. Let $\varphi_0(x)$ be the smooth map defined on the flat boundary $\partial G\cap B(a_i,\mu)$ by
$$g(x)=e^{i\varphi_0(x)}.$$
We define, for $0\le r\le\mu$ and $-\frac\pi2\le\theta\le\frac\pi2$,
$$\overline\varphi_0(r,\theta)=\Big(\frac12+\frac\theta\pi\Big)\varphi_0\Big(r,\frac\pi2\Big)+\Big(\frac12-\frac\theta\pi\Big)\varphi_0\Big(r,-\frac\pi2\Big),$$
thus $\overline\varphi_0=\varphi_0$ on $\partial G\cap B(a_i,\mu)$. Let $\theta_0\in\,]0,\frac\pi2[$; we set, for $\theta_0\le\theta\le\frac\pi2$,
$$l_\varepsilon(r,\theta)=\frac{\theta-\theta_0}{\frac\pi2-\theta_0}\,\overline\varphi_0(r,\theta)+\frac{\frac\pi2-\theta}{\frac\pi2-\theta_0}\,\varphi_0(a_i),$$
for $-\frac\pi2\le\theta\le-\theta_0$,
$$l_\varepsilon(r,\theta)=\frac{-\theta-\theta_0}{\frac\pi2-\theta_0}\,\overline\varphi_0(r,\theta)+\frac{\frac\pi2+\theta}{\frac\pi2-\theta_0}\,\varphi_0(a_i),$$
and for $-\theta_0\le\theta\le\theta_0$,
$$l_\varepsilon(r,\theta)=\varphi_0(a_i).$$
By reflection, we obtain $l_\varepsilon$ in $B(a_i,\mu)$. A direct computation gives, for $\theta_0\le|\theta|\le\frac\pi2$,
$$(2.30)\qquad|\nabla l_\varepsilon|\le\frac M{\frac\pi2-\theta_0}.$$
Finally, on each $B(a_i,\mu)\cap G\setminus\cup_jB(y_{ij}^\varepsilon,R)$ we set
$$w(x)=\prod_j\frac{x-y_{ij}^\varepsilon}{|x-y_{ij}^\varepsilon|}\,\frac{x-\tilde y_{ij}^\varepsilon}{|x-\tilde y_{ij}^\varepsilon|}\,e^{il_\varepsilon(x)}=v(x)e^{il_\varepsilon(x)}.$$
Following the proof of Theorem 4 in [BMR] we obtain
$$\int_{B(a_i,\mu)\setminus\cup_jB(y_{ij}^\varepsilon,R)\setminus\cup_jB(\tilde y_{ij}^\varepsilon,R)}p\,|\nabla w|^2=\int_{B(a_i,\mu)\setminus\cup_jB(y_{ij}^\varepsilon,R)\setminus\cup_jB(\tilde y_{ij}^\varepsilon,R)}p\,|\nabla l_\varepsilon|^2$$
$$+\int_{B(a_i,\mu)\setminus\cup_jB(y_{ij}^\varepsilon,R)\setminus\cup_jB(\tilde y_{ij}^\varepsilon,R)}p\,|\nabla v|^2+O\Big(\sum_j\frac{|y_{ij}^\varepsilon-a_i|}\mu+R\Big).$$
Next, we use the proof of Theorem 5 of [BMR] and we are led to
$$(2.31)\qquad\frac12\int_{B(a_i,\mu)\setminus\cup_jB(y_{ij}^\varepsilon,R)\setminus\cup_jB(\tilde y_{ij}^\varepsilon,R)}p\,|\nabla v|^2=2\pi p_0d_i\log\frac\mu R-2\pi p_0\sum_{j\ne k}\log|y_{ij}^\varepsilon-y_{ik}^\varepsilon|-2\pi p_0\sum_{j,k}\log|y_{ij}^\varepsilon-\tilde y_{ik}^\varepsilon|$$
$$+O\Big(\sum_j\frac{|y_{ij}^\varepsilon-a_i|}\mu\Big)+O\Big(\sum_{j\ne k}\frac R{|y_{ij}^\varepsilon-\tilde y_{ik}^\varepsilon|}\Big)+O\Big(\sum_j|y_{ij}^\varepsilon-a_i|^2\log\frac1R\Big)+O\Big(\delta^2\log\frac1\delta\Big).$$
Combining (2.30) and (2.31) we obtain
$$(2.32)\qquad\frac12\int_{B(a_i,\mu)\cap G\setminus\cup_jB(y_{ij}^\varepsilon,R)}p\,|\nabla w|^2=\pi p_0d_i\log\frac\mu R-\pi p_0\sum_{j\ne k}\log|y_{ij}^\varepsilon-y_{ik}^\varepsilon|-\pi p_0\sum_{j,k}\log|y_{ij}^\varepsilon-\tilde y_{ik}^\varepsilon|$$
$$+O\Big(\Big(\frac\mu{\frac\pi2-\theta_0}\Big)^2\Big)+O(\mu)+O\Big(\delta^2\log\frac1\delta\Big)+O\Big(\delta^2\log\frac1R\Big)+O\Big(\frac R\delta\Big).$$
Now, we extend $w$ to $B(a_i,2\mu)\setminus B(a_i,\mu)$ as in [AS], Lemma 4.6. So, in a neighborhood of $a_i$ we set
$$u_0=e^{2id_i\theta}e^{i\phi_i}\qquad\text{and}\qquad w=e^{2id_i\theta}e^{i\tilde\phi_i}.$$
We have
$$|\nabla\tilde\phi_i|=O(|\nabla l_\varepsilon|)+O(\mu),$$
which gives by (2.30)
$$|\nabla\tilde\phi_i|=O\Big(\frac1{\frac\pi2-\theta_0}\Big)+O(\mu).$$
For $\mu\le r\le2\mu$ we set
$$\Phi(r,\theta)=\Big(2-\frac r\mu\Big)\tilde\phi_i+\Big(-1+\frac r\mu\Big)\phi_i.$$
Thus,
$$\Big|\frac{\partial\Phi}{\partial r}\Big|=O\Big(\frac1{\frac\pi2-\theta_0}\Big)+O(\mu)\qquad\text{and}\qquad\Big|\frac{\partial\Phi}{\partial\theta}\Big|=O\Big(\frac1{\frac\pi2-\theta_0}\Big)+O(\mu).$$
We set
$$w=e^{i\Phi+2id_i\theta}\quad\text{in }B(a_i,2\mu)\setminus B(a_i,\mu).$$
We have
$$|\nabla w(x)|=\frac{2d_i}{|x-a_i|}+O\Big(\mu+\frac1{\frac\pi2-\theta_0}\Big),$$
therefore
$$(2.33)\qquad\frac12\int_{B(a_i,2\mu)\setminus B(a_i,\mu)}p\,|\nabla w|^2=4\pi p_0d_i^2\log2+O\Big(\mu^2+\Big(\frac\mu{\frac\pi2-\theta_0}\Big)^2\Big).$$
On each $B(y_{ij}^\varepsilon,R)$ we take $w$ as a minimizer for $I(\varepsilon,R,k_{ij})$. Moreover, we may choose $\theta_0$ such that $\frac\pi2-\theta_0$ is independent of $\varepsilon$. By (2.32) and (2.33), we obtain for $a_i\in\partial G$
$$(2.34)\qquad\frac12\int_{B(a_i,2\mu)\cap G}p\,|\nabla w|^2=\pi p_0d_i\log\frac\mu R+2\pi p_0d_i^2\log2-\pi p_0\sum_{k\ne j}\log|y_{ij}^\varepsilon-y_{ik}^\varepsilon|-\pi p_0\sum_{j,k}\log|y_{ij}^\varepsilon-\tilde y_{ik}^\varepsilon|$$
$$+\sum_j\big(p_0+C_i(|y_{ij}^\varepsilon-a_i|+R)^2\big)I(\varepsilon,R,k_{ij})+O(\mu)+O\Big(\frac R\delta\Big)+O\Big(\delta^2\log\frac1\delta\Big)+O\Big(\delta^2\log\frac1R\Big).$$
For $a_i\in G$ we need not use the function $l_\varepsilon$, and it suffices to choose $y_{ij}^\varepsilon$ such that $|y_{ij}^\varepsilon-y_{ik}^\varepsilon|$ is of the order of $\delta$ for $j\ne k$ and the order of $|y_{ij}^\varepsilon-a_i|$ is not greater than $\delta$ for $j=1,\dots,d_i$. Hence, we obtain
$$(2.35)\qquad\frac12\int_{B(a_i,2\mu)}p\,|\nabla w|^2=\pi p_0d_i\log\frac\mu R+\pi p_0d_i^2\log2-\pi p_0\sum_{k\ne j}\log|y_{ij}^\varepsilon-y_{ik}^\varepsilon|$$
$$+\sum_j\big(p_0+C_i(|y_{ij}^\varepsilon-a_i|+R)^2\big)I(\varepsilon,R,k_{ij})+O(\mu)+O\Big(\frac R\delta\Big)+O\Big(\delta^2\log\frac1\delta\Big)+O\Big(\delta^2\log\frac1R\Big).$$
Recall that
$$\frac12\int_{G\setminus\cup_iB(a_i,2\mu)}p\,|\nabla u_0|^2=\frac12\int_{G\setminus\cup_iB(a_i,2\mu)}\frac1p|\nabla\Phi_0|^2.$$
" = x" for all
Let ai 2 @G. In order to study the zeroes of u" near ai , we may choose ykj
kj
k 6= i and j = 1; :::; dk . Let yij be any points in R2+ and set yij" = ai + yij . Using (2.23),
(2.28), (2.29), (2.34), (2.35) and Proposition 1, we deduce that for all i = l + 1; :::; m
X
?
Xk6=j
+
j
m p
l
X
X j x"ij ? x"ik j X
j
x"ij ? x"ik j
0 W 1 (h )
1
p
W
(
h
)
+
log
p0 log
?
p
+
0
i
0
i
2
j;k
(p0 + Ci(j x"ij ? ai j ?R)2 )I ("; R; kij )
X
?
k6=j
X
+
j
i=1
i=l+1
X j yij" ? y"ik j
j
yij" ? yik" j
p0 log
? p0 log
j;k
2
"
2
(p0 + Ci(j yij j +R) )I ("; R; kij ) + O( R2 ) + O( R ) + O( )
+ O( log2 1 ) + O(2 log R1 ):
We are led to
$$\sum_{i=1}^lp_0W^1(h_i)+\sum_{i=l+1}^m\frac{p_0}2W^1(h_i)+H_i(\omega_i^\varepsilon)-H_i(y_i)+\pi C_i\sum_j|\omega_{ij}^\varepsilon|^2-\pi C_i\sum_j|y_{ij}|^2$$
$$\le O\Big(\frac{\varepsilon^2}{R^2}\Big)+O\Big(\frac R\delta\Big)+O(\mu)+O\Big(\mu\log^2\frac1\mu\Big)+O\Big(\delta^2\log\frac1R\Big),$$
where $H_i$ now carries the weight $\pi p_0$ on each logarithm.
Now, we set $\mu=\alpha$ and $R=\delta\alpha$. The optimal choice for $\alpha$ is $\alpha=\delta^{1/2}\log\frac1\delta$, thus
$$\sum_{i=1}^lp_0W^1(h_i)+\sum_{i=l+1}^m\frac{p_0}2W^1(h_i)+H_i(\omega_i^\varepsilon)-H_i(y_i)+\pi C_i\sum_j|\omega_{ij}^\varepsilon|^2-\pi C_i\sum_j|y_{ij}|^2\le O\Big(\delta^{1/2}\log\frac1\delta\Big).$$
We know that, for $i=1,\dots,m$, $W^1(h_i)\ge0$. Letting $\varepsilon\to0$ we see that $(\omega_{i1},\dots,\omega_{id_i})$ minimizes $H_i(\xi_1,\dots,\xi_{d_i})+\pi C_i\sum_{j=1}^{d_i}|\xi_j|^2$. Next we choose $y_{ik}=\omega_{ik}$ for $k=1,\dots,d_i$. If $(\omega_{i1},\dots,\omega_{id_i})$ realizes a nondegenerate minimum of $H_i(y_{i1},\dots,y_{id_i})+\pi C_i\sum_j|y_{ij}|^2$ we get
$$\sum_k|\omega_{ik}-\omega_{ik}^\varepsilon|^2=O\Big(\delta^{1/2}\log\frac1\delta\Big).$$
In any case, we obtain for all $i=1,\dots,m$
$$W^1(h_i)=O\Big(\delta^{1/2}\log\frac1\delta\Big).$$
We have proved Theorem 2.
We now turn to the proof of Theorem 3. Using (2.23), (2.28) and (2.29) we obtain
$$(2.36)\qquad E_\varepsilon(u_\varepsilon)\ge\pi p_0\sum_{i=1}^ld_i^2\log\frac1{2\mu}+2\pi p_0\sum_{i=l+1}^md_i^2\log\frac1{2\mu}+\pi p_0\sum_{i=1}^md_i\log\frac\mu R+W_G(a,d_1,\dots,d_m,g,p)$$
$$-\pi p_0\sum_{i=1}^m\sum_{k\ne j}\log|x_{ij}^\varepsilon-x_{ik}^\varepsilon|-\pi p_0\sum_{i=l+1}^m\sum_{k,j}\log|x_{ij}^\varepsilon-\tilde x_{ik}^\varepsilon|+\sum_{i,j}\big(p_0+C_i(|x_{ij}^\varepsilon-a_i|-R)^2\big)I(\varepsilon,R,k_{ij})$$
$$+O\Big(R\log\frac R\varepsilon\Big)+O\Big(\frac{\varepsilon^2}{R^2}\Big)+O(\mu)+O\Big(\frac R\delta\Big)+O\Big(R\log\frac1\varepsilon\Big)+O\Big(\mu\log^2\frac1\mu\Big).$$
Now, we take $\mu=\lambda$ and $R=\lambda\delta$, where $\lambda$ is a constant independent of $\varepsilon$. As in [AS] we have $I(\varepsilon,R,k_{ij})=\pi\log\frac R\varepsilon+\gamma+X(\varepsilon,\lambda)$, where $X(\varepsilon,\lambda)$ tends to $0$ as $\varepsilon$ tends to $0$, $\lambda$ being fixed.
Setting $A_i=\frac{d_i^2-d_i}2$ for $i=1,\dots,l$ and $A_i=\frac{2d_i^2-d_i}2$ for $i=l+1,\dots,m$, we obtain, using (2.36),
$$(2.37)\qquad E_\varepsilon(u_\varepsilon)\ge W_G(a,d_1,\dots,d_m,g,p)+\pi p_0d\log\frac1\varepsilon+2\pi p_0\sum_{i=1}^mA_i\log\frac1\delta+\sum_{i=1}^mH_i(\omega_i)$$
$$+\pi\sum_{i=1}^mC_i\sum_j|\omega_{ij}|^2+dp_0\gamma+O(\lambda)+X(\varepsilon,\lambda),$$
where $X(\varepsilon,\lambda)$ tends to $0$ as $\varepsilon$ tends to $0$ and $\lambda$ is fixed.
Using Theorem 2, we know that for all $i$, $\omega_i$ minimizes
$$H_i(\xi_1,\dots,\xi_{d_i})+\pi C_i\sum_{j=1}^{d_i}|\xi_j|^2.$$
This leads for all $i$ to
$$C_i\sum_{j=1}^{d_i}|\omega_{ij}|^2=p_0A_i.$$
Thus
$$(2.38)\qquad H_i(\omega_{i1},\dots,\omega_{id_i})+\pi C_i\sum_{j=1}^{d_i}|\omega_{ij}|^2=\min_{\{\xi;\ C_{a_i}\sum_{j=1}^{d_i}|\xi_j|^2=p_0A_i\}}H_i+\pi p_0A_i.$$
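The identity $C_i\sum_j|\omega_{ij}|^2=p_0A_i$ follows from a one-line scaling (virial) argument; the derivation below is an added gloss, written with the weight $\pi p_0$ on each logarithm of $H_i$.

```latex
% H_i is a sum of 2A_i logarithms (the d_i^2 - d_i ordered pairs, plus the
% d_i^2 image terms in the boundary case), each weighted by pi*p_0, so
H_i(t\xi)=H_i(\xi)-2\pi p_0A_i\log t.
% If omega_i minimizes H_i + pi*C_i*sum|xi_j|^2, then t = 1 is critical for
% t -> H_i(t*omega_i) + pi*C_i*t^2*sum|omega_ij|^2, hence
\frac{d}{dt}\Big|_{t=1}=-2\pi p_0A_i+2\pi C_i\sum_{j=1}^{d_i}|\omega_{ij}|^2=0
\ \Longrightarrow\ C_i\sum_{j=1}^{d_i}|\omega_{ij}|^2=p_0A_i.
```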
Moreover, we know (see [BH]) that $m$ and $(d_1,\dots,d_m)$ realize
$$F(d)=\min_{\{m,d_1,\dots,d_m;\ \sum_{i=1}^md_i=d\}}\ \sum_{i=1}^mA_i.$$
Next, we may use the upper bound for $E_\varepsilon(u_\varepsilon)$ in the proof of Theorem 2 with $m$, $(d_1,\dots,d_m)$, any $m$ distinct points $(b_1,\dots,b_m)$ in $\Lambda_2^l\times(\Lambda_1\setminus\Lambda_2)^{m-l}$, and with, for all $i$, $(\omega_{i1},\dots,\omega_{id_i})$ being chosen as in (2.38). We are led to the following: the configuration $(a_1,\dots,a_m)$ minimizes in $\Lambda_2^l\times(\Lambda_1\setminus\Lambda_2)^{m-l}$ the map
$$\widetilde W(b)=W_G(b,d_1,\dots,d_m,g,p)+\sum_i\ \min_{\{\xi;\ C_{b_i}\sum_{j=1}^{d_i}|\xi_j|^2=p_0A_i\}}H_i.$$
Finally, we obtain
$$E_\varepsilon(u_\varepsilon)=\pi dp_0\log\frac1\varepsilon+\pi p_0F(d)\Big(\log\log\frac1\varepsilon+1\Big)+\min_{\Lambda_2^l\times(\Lambda_1\setminus\Lambda_2)^{m-l}}\widetilde W+dp_0\gamma+X(\varepsilon),$$
where $X(\varepsilon)$ tends to $0$ as $\varepsilon$ tends to $0$, which is the desired estimate.
3. Proof of Proposition 1 and Theorem 4.

Proof of Proposition 1. In the course of the proof we shall take into account the two cases. We have
$$(3.1)\qquad\int_{\hat\Omega_b}\frac1p|\nabla\Phi_0|^2=\int_{\partial\Omega\setminus\cup_{i=l+1}^mB(b_i,\rho)}\Phi_0\,\frac1p\frac{\partial\Phi_0}{\partial\nu}-\sum_{i=1}^m\int_{\partial B(b_i,\rho)\cap\Omega}\Phi_0\,\frac1p\frac{\partial\Phi_0}{\partial\nu}.$$
Using the fact that on $\partial\Omega\setminus\cup_{i=l+1}^mB(b_i,\rho)$ we have $\frac1p\frac{\partial\Phi_0}{\partial\nu}=h_0\wedge(h_0)_\tau$, and by the normalization condition (2.5), we obtain that
$$(3.2)\qquad\int_{\partial\Omega\setminus\cup_{i=l+1}^mB(b_i,\rho)}\Phi_0\,\frac1p\frac{\partial\Phi_0}{\partial\nu}=-\sum_{i=l+1}^m\int_{B(b_i,\rho)\cap\partial\Omega}\big(g\wedge g_\tau\big)\Phi_0=X(\rho),$$
where $X(\rho)=O\big(\rho\log\frac1\rho\big)$ in the case I and $X(\rho)=0$ in the case II. We set
$$S_j(x)=\Phi_0(x)-d_j\,p(b_j)\log|x-b_j|.$$
We have for $j=1,\dots,m$
$$(3.3)\qquad\int_{\partial B(b_j,\rho)\cap\Omega}\Phi_0\,\frac1p\frac{\partial\Phi_0}{\partial\nu}=\int_{\partial B(b_j,\rho)\cap\Omega}\frac1p\Big(\frac{\partial S_j}{\partial\nu}+d_j\,p(b_j)\frac{\partial\log|x-b_j|}{\partial\nu}\Big)\big(S_j+d_j\,p(b_j)\log|x-b_j|\big).$$
For $x\in\partial B(b_j,\rho)\cap\Omega$ we have $\frac{\partial\log|x-b_j|}{\partial\nu}=\frac1\rho$. Using (3.3) we have
$$(3.4)\qquad\int_{\partial B(b_j,\rho)\cap\Omega}\Phi_0\,\frac1p\frac{\partial\Phi_0}{\partial\nu}=\int_{\partial B(b_j,\rho)\cap\Omega}\frac1p\frac{\partial S_j}{\partial\nu}S_j+\frac{d_j\,p(b_j)}\rho\int_{\partial B(b_j,\rho)\cap\Omega}\frac{S_j}p$$
$$+d_j\,p(b_j)\log\rho\int_{\partial B(b_j,\rho)\cap\Omega}\frac1p\frac{\partial S_j}{\partial\nu}+\frac{d_j^2\,p(b_j)^2\log\rho}\rho\int_{\partial B(b_j,\rho)\cap\Omega}\frac1p.$$
Let us estimate the first term of (3.4). By (2.7), (2.9) and (2.10) we obtain in the case I
$$\|S_j\|_{L^\infty(B(b_j,\rho))}=O(1)\quad\text{and}\quad\|\nabla S_j\|_{L^\infty(B(b_j,\rho))}=O(1),$$
and in the case II
$$\|S_j\|_{L^\infty(B(b_j,\rho))}=\sum_{k\ne j}O\Big(\log\frac1{|b_j-b_k|}\Big)$$
and, for $x\in B(b_j,\rho)$,
$$|\nabla S_j(x)|=O\Big(C|b_j-a_j|\log\frac1{|x-b_j|}+\sum_{k\ne j}\frac1{|b_j-b_k|}\Big).$$
Thus we have in the case I
$$(3.5)\qquad\int_{\partial B(b_j,\rho)\cap\Omega}\frac1p\frac{\partial S_j}{\partial\nu}S_j=O(\rho)$$
and in the case II
$$(3.6)\qquad\int_{\partial B(b_j,\rho)}\frac1p\frac{\partial S_j}{\partial\nu}S_j=\int_{B(b_j,\rho)}\operatorname{div}\Big(\frac1p\nabla S_j\Big)S_j+\int_{B(b_j,\rho)}\frac1p|\nabla S_j|^2$$
$$=\sum_{k\ne j}O\Big(C|b_j-a_j|\,\rho\log\frac1{|b_j-b_k|}+\frac{\rho^2}{|b_j-b_k|^2}+C^2|b_j-a_j|^2\rho^2\log^2\frac1\rho\Big).$$
Next, we estimate the second term of (3.4). Let us prove that, respectively in the cases I and II, we have
$$(3.7)\qquad\frac{d_j\,p(b_j)}\rho\int_{\partial B(b_j,\rho)\cap\Omega}\frac{S_j}p=2\pi d_jS_j(b_j)+O(\rho)$$
$$\hspace{2.4cm}=2\pi d_jS_j(b_j)+\sum_{k\ne j}O\Big(C|b_j-a_j|\,\rho\log\frac1\rho+\frac\rho{|b_j-b_k|}+C\rho^2\log\frac1{|b_j-b_k|}\Big).$$
We first remark that we have, respectively in the case I and in the case II,
$$(3.8)\qquad\frac1{p(x)}-\frac1{p(b_j)}=O\big(C|x-b_j|^2\big)\quad\text{and}\quad\frac1{p(x)}-\frac1{p(b_j)}=O\big(C|x-b_j|\,|b_j-a_j|+C|x-b_j|^2\big);$$
this gives
$$\frac{d_j\,p(b_j)}\rho\int_{\partial B(b_j,\rho)\cap\Omega}\frac{S_j}p=\frac{d_j}\rho\int_{\partial B(b_j,\rho)\cap\Omega}S_j+O(C\rho)\quad\text{in the case I}$$
$$=\frac{d_j}\rho\int_{\partial B(b_j,\rho)}S_j+\sum_{k\ne j}O\Big(\big(C|b_j-a_j|\rho+C\rho^2\big)\log\frac1{|b_j-b_k|}\Big)\quad\text{in the case II.}$$
Respectively in the cases I and II we have, for $x\in\partial B(b_j,\rho)$,
$$(3.9)\qquad|S_j(x)-S_j(b_j)|=O(\rho)\quad\text{and}\quad|S_j(x)-S_j(b_j)|=O\Big(C|b_j-a_j|\,\rho\log\frac1\rho+\sum_{k\ne j}\frac\rho{|b_j-b_k|}\Big).$$
We have proved (3.7). Now we use
$$S_j(b_j)=\sum_id_i\,R(b_j,b_i)+\sum_{i\ne j}d_i\,p(b_i)\log|b_i-b_j|$$
to infer, in view of (3.7),
$$(3.10)\qquad\frac{d_j\,p(b_j)}\rho\int_{\partial B(b_j,\rho)\cap\Omega}\frac{S_j}p=2\pi d_j\sum_id_i\,R(b_j,b_i)+2\pi d_j\sum_{i\ne j}d_i\,p(b_i)\log|b_i-b_j|+X(\rho),$$
where $X(\rho)$ is equal, respectively in the cases I and II, to $O(\rho)$ and $\sum_{k\ne j}O\big(\frac\rho{|b_j-b_k|}+C|b_j-a_j|\rho\log\frac1\rho+C\rho^2\log\frac1{|b_j-b_k|}\big)$. We estimate the third term of (3.4). We easily get, respectively in the cases I and II,
$$(3.11)\qquad d_j\,p(b_j)\log\rho\int_{\partial B(b_j,\rho)\cap\Omega}\frac1p\frac{\partial S_j}{\partial\nu}=O\Big(\rho\log\frac1\rho\Big)$$
$$\hspace{2.4cm}=O\Big(C|a_j-b_j|\,\rho\log\frac1\rho+C\rho^2\log\frac1\rho\Big).$$
Finally, we evaluate the fourth term of (3.4). Using (3.8) we get in the case I
$$(3.12)\qquad\frac{d_j^2\,p(b_j)^2\log\rho}\rho\int_{\partial B(b_j,\rho)\cap\Omega}\frac1p=2\pi d_j^2\,p(b_j)\log\rho+O\Big(\rho^2\log\frac1\rho\Big),\quad j=l+1,\dots,m,$$
$$\hspace{2.4cm}=4\pi d_j^2\,p(b_j)\log\rho+O\Big(\rho^2\log\frac1\rho\Big),\quad j=1,\dots,l,$$
and, in the case II,
$$(3.13)\qquad\frac{d_j^2\,p(b_j)^2\log\rho}\rho\int_{\partial B(b_j,\rho)}\frac1p=4\pi d_j^2\,p(b_j)\log\rho+O\Big(C\rho\log\frac1\rho+C\rho^2\log\frac1\rho\Big).$$
Combining (3.4), (3.5), (3.6), (3.7), (3.10), (3.11) and (3.12) we deduce that
$$(3.14)\qquad\frac12\int_{\partial B(b_j,\rho)\cap\Omega}\Phi_0\,\frac1p\frac{\partial\Phi_0}{\partial\nu}=\pi\sum_{i=1}^md_jd_i\,R(b_j,b_i)+\pi\sum_{i\ne j}d_id_j\,p(b_i)\log|b_i-b_j|+\pi d_j\tilde d_j\,p(b_j)\log\rho+X(\rho),$$
where $X(\rho)=O\big(\rho\log\frac1\rho\big)$ in the case I and $X(\rho)=\sum_{k\ne j}O\big(\frac\rho{|b_j-b_k|}+C|b_j-a_j|\rho\log\frac1\rho+C\rho^2\log\frac1\rho\big)$ in the case II. Using (3.1), (3.2) and (3.14), we are led to the proof of Proposition 1.
We define $\tilde h_j(x)=(h_j\wedge(h_j)_\tau)\big(\frac{x-b_j}{\rho}\big)$ for $x\in\partial B(b_j,\rho)$. The proof of the following proposition is a consequence of the proof of Theorem I.4 of [BBH2].
PROPOSITION 2. Let $\hat u_b$ be a minimizer for the problem $(E_b)$. We have

$\int_{G_b}\frac{1}{p}\,|\nabla\hat u_b|^2=\int_{G_b}\frac{1}{p}\,|\nabla\Phi^b|^2$

where $\Phi^b$ is the solution of the linear problem

(3.15)
$\mathrm{div}\big(\frac{1}{p}\nabla\Phi^b\big)=0$ in $G_b$
$\frac{1}{p}\frac{\partial\Phi^b}{\partial\nu}=\frac{1}{\rho}\,\tilde h_j$ on $\partial B(b_j,\rho)\cap G$, $j=1,\dots,m$
$\frac{1}{p}\frac{\partial\Phi^b}{\partial\nu}=h_0\wedge(h_0)_\tau$ on $\partial G\setminus\cup_{j=l+1}^m B(b_j,\rho)$

with the normalization condition $\int_{\partial G}(h_0\wedge(h_0)_\tau)\,\Phi^b=0$.
Now we extend the function $p$ by a function, still denoted $p$, such that in the case I, for $j=l+1,\dots,m$, $p(x)=p(r(x))$, where $r$ is the reflection associated to the flat boundary $\partial G\cap B(b_j,\mu)$, and such that in both cases $p(x)=p_0$ for $|x|$ large enough. We define $\varphi^{b_j}$ by

$\mathrm{div}\big(\frac{1}{p}\nabla\varphi^{b_j}\big)=0$ in $\mathbb{R}^2\setminus\partial B(b_j,\rho)$
$\frac{1}{p}\frac{\partial\varphi^{b_j}}{\partial\nu}=\frac{1}{\rho}\Big(\frac{\tilde h_j}{d_j}-1\Big)$ on $\partial B(b_j,\rho)$
$\varphi^{b_j}$ bounded,
with the normalization condition $\int_{\partial B(b_j,\rho)}\tilde h_j\,\varphi^{b_j}=0$. In order to justify the existence of $\varphi^{b_j}$ in $\mathbb{R}^2\setminus B(b_j,\rho)$, let us remark that, by the inversion $\omega(z)=\varphi^{b_j}\big(b_j+\frac{\rho}{\bar z}\big)$ (we use the complex notation), the system that defines $\varphi^{b_j}$ is equivalent to

$\mathrm{div}\big(\frac{1}{\tilde p_j}\nabla\omega\big)=0$ in $B(0,1)$
$\frac{1}{\tilde p_j}\frac{\partial\omega}{\partial\nu}=-\frac{1}{\rho}\Big(\frac{\check h_j}{d_j}-1\Big)$ on $\partial B(0,1)$
$\int_{S^1}(h_j\wedge(h_j)_\tau)\,\omega=0,$

where $\tilde p_j(z)=p\big(b_j+\frac{\rho}{\bar z}\big)$ and $\check h_j(z)=(h_j\wedge(h_j)_\tau)(\bar z)$. Hence, the existence and the uniqueness of $\varphi^{b_j}$ are assured by the existence of a unique $\omega$, which follows from standard results. For $j=1,\dots,m$, we define $\tilde\varphi_j$ by

$\Delta\tilde\varphi_j=0$ in $\mathbb{R}^2\setminus S^1$
$\frac{\partial\tilde\varphi_j}{\partial\nu}=\frac{h_j\wedge(h_j)_\tau}{d_j}-1$ on $S^1$
$\tilde\varphi_j$ bounded
$\int_{\partial B(0,1)}(h_j\wedge(h_j)_\tau)\,\tilde\varphi_j=0.$
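The passage between the exterior problem and the problem on the unit disk rests on the conformal invariance of the weighted Dirichlet integral in two dimensions. The following computation is a standard sketch of this fact, with generic names $T$, $w$, $\tilde p$ (not taken from the paper):

```latex
% For a conformal change of variable x = T(z), with w(z) = \varphi(T(z))
% and \tilde p(z) = p(T(z)), the chain rule gives
%   \nabla_z w = |T'(z)|\, R(z)\, \nabla_x \varphi , \qquad R(z) \in O(2),
% so the weighted Dirichlet energy is invariant:
\int_{T(U)} \frac{1}{p(x)}\, |\nabla_x \varphi|^2 \, dx
  = \int_{U} \frac{1}{p(T(z))}\, |\nabla_x \varphi|^2\, |T'(z)|^2 \, dz
  = \int_{U} \frac{1}{\tilde p(z)}\, |\nabla_z w|^2 \, dz .
% Since div((1/p)\nabla\varphi) = 0 is the Euler--Lagrange equation of the
% left-hand side, w satisfies div((1/\tilde p)\nabla w) = 0 in U. Applied to
% the (anti-holomorphic) inversion T(z) = b_j + \rho/\bar z, this carries
% the exterior problem for \varphi^{b_j} to the unit disk.
```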
Let $\psi^{b_j}(x)=\tilde\varphi_j\big(\frac{x-b_j}{\rho}\big)$, for $x\in\mathbb{R}^2\setminus B(b_j,\rho)$. We note that $\psi^{b_j}$ is given by the explicit representation (given in [CM])

$\psi^{b_j}(x)=-\frac{1}{2\pi}\int_{\partial B(b_j,\rho)}\log|x-z|^2\,\Big(\frac{\tilde h_j(z)}{d_j}-1\Big)+H,$

where $H$ is a constant such that

$\int_{\partial B(b_j,\rho)}\tilde h_j\,\psi^{b_j}=0.$
We also use

$\nabla\psi^{b_j}(x)=-\frac{1}{\pi}\int_{\partial B(b_j,\rho)}\frac{x-z}{|x-z|^2}\,\Big(\frac{\tilde h_j(z)}{d_j}-1\Big)$

to infer that, for all $x\in\mathbb{R}^2\setminus B(b_j,\rho)$,

(3.16)   $|\psi^{b_j}(x)-H|=O\Big(\frac{\rho}{|x-b_j|-\rho}\Big)$

and

(3.17)   $|\nabla\psi^{b_j}(x)|=O\Big(\frac{\rho}{(|x-b_j|^2-\rho^2)^{1/2}}+\frac{\rho^2}{(|x-b_j|-\rho)^2}\Big).$
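The decay (3.16) can be read off from the representation above: the point is that the density has zero mean, so the leading logarithm cancels. A sketch, with the notation above:

```latex
% Since deg(h_j) = d_j gives \int_{S^1} h_j \wedge (h_j)_\tau = 2\pi d_j,
% the density satisfies \int_{\partial B(b_j,\rho)} (\tilde h_j/d_j - 1) = 0.
% Writing, for |x - b_j| > \rho and z \in \partial B(b_j,\rho),
%   \log|x-z|^2 = \log|x-b_j|^2 + O\!\Big(\frac{\rho}{|x-b_j|-\rho}\Big),
% the constant term \log|x-b_j|^2 integrates to zero against the density, so
\psi^{b_j}(x) - H
  = -\frac{1}{2\pi}\int_{\partial B(b_j,\rho)}
      \Big(\log|x-z|^2 - \log|x-b_j|^2\Big)
      \Big(\frac{\tilde h_j(z)}{d_j}-1\Big)
  = O\!\Big(\frac{\rho}{|x-b_j|-\rho}\Big),
% the total mass of the density being O(1) after the 1/\rho normalization
% of the Neumann data.
```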
LEMMA 1. We have the following estimates, uniformly for $(h_1,\dots,h_m)$ in $H_m^M$: in the case I, for $x\in\mathbb{R}^2\setminus B(b_j,\rho)$,

(3.18)   $|\nabla\varphi^{b_j}(x)|=O\Big(\frac{\rho}{(|x-b_j|^2-\rho^2)^{1/2}}+\frac{\rho^2}{(|x-b_j|-\rho)^2}\Big)+O(C\rho)$

and, for $x\in\partial G\cap B(b_j,\mu)\setminus B(b_j,\rho)$,

(3.19)   $\Big|\frac{\partial\varphi^{b_j}}{\partial\nu}(x)\Big|=O\Big(\frac{\rho}{(|x-b_j|^2-\rho^2)^{1/2}}\Big)+O(C\rho).$

In the case II we have, for all $x\in\mathbb{R}^2\setminus B(b_j,\rho)$,

(3.20)   $|\nabla\varphi^{b_j}(x)|=O\Big(\frac{\rho}{(|x-b_j|^2-\rho^2)^{1/2}}+\frac{\rho^2}{(|x-b_j|-\rho)^2}\Big)+O(C|b_j-a_j|+C\rho)$

and

(3.21)   $|\nabla\varphi^{b_j}(x)|=O\Big(\frac{\rho}{(|x-b_j|-\rho)^2}\Big)+O\Big(\frac{C|b_j-a_j|\rho+C\rho^2}{|x-b_j|-\rho}\Big).$
Proof. We note here $\omega(z)=\varphi^{b_j}\big(b_j+\frac{\rho}{\bar z}\big)$, $z\in B(0,1)$, $\tilde p_{b_j}(z)=\tilde p(z)=p\big(b_j+\frac{\rho}{\bar z}\big)$ and $\bar\omega(z)=p(b_j)\,\tilde\varphi_j\big(\frac{1}{\bar z}\big)=p(b_j)\,\psi^{b_j}\big(b_j+\frac{\rho}{\bar z}\big)$. We have

$\mathrm{div}\big(\frac{1}{\tilde p}\nabla(\omega-\bar\omega)\big)(z)=-p(b_j)\big(\nabla\frac{1}{\tilde p}\cdot\nabla\bar\omega\big)(z)$ in $B(0,1)$
$\frac{1}{\tilde p}\frac{\partial(\omega-\bar\omega)}{\partial\nu}=-\Big(\frac{p(b_j)}{\tilde p}-1\Big)\Big(\frac{h_j\wedge(h_j)_\tau}{d_j}-1\Big)$ on $\partial B(0,1)$.

There exists $\delta>0$ such that

$\nabla\frac{1}{p}\Big(b_j+\frac{\rho}{\bar z}\Big)=O\Big(C\Big|b_j-a_j+\frac{\rho}{\bar z}\Big|\Big)$ for $|z|\ge\delta$, $\quad =0$ for $|z|\le\delta$,

and for any second derivative of $\frac{1}{p}$, denoted $D^2\frac{1}{p}$, we have

$D^2\frac{1}{p}\Big(b_j+\frac{\rho}{\bar z}\Big)=O(C)$ for $|z|\ge\delta$, $\quad =0$ for $|z|\le\delta$.

Thus

$\Big\|\nabla\frac{1}{\tilde p}\cdot\nabla\bar\omega\Big\|_{H^1(B(0,1))}=O(C\rho^2)$ in the case I
$=O(C|b_j-a_j|\rho+C\rho^2)$ in the case II.
On the other hand we get

$\Big\|\Big(\frac{p(b_j)}{\tilde p}-1\Big)\Big(\frac{h_j\wedge(h_j)_\tau}{d_j}-1\Big)\Big\|_{H^{3/2}(S^1)}=O(C\rho^2)$ in the case I
$=O(C|b_j-a_j|\rho+C\rho^2)$ in the case II.

We deduce from standard estimates that

$\|\omega-\bar\omega\|_{H^3(B(0,1))}\le M\Big\|\nabla\frac{1}{\tilde p}\cdot\nabla\bar\omega\Big\|_{H^1(B(0,1))}+M\Big\|\Big(\frac{p(b_j)}{\tilde p}-1\Big)\Big(\frac{h_j\wedge(h_j)_\tau}{d_j}-1\Big)\Big\|_{H^{3/2}(S^1)},$

thus

(3.22)   $\|\omega-\bar\omega\|_{H^3(B(0,1))}=O(C\rho^2)$ in the case I
         $=O(C|b_j-a_j|\rho+C\rho^2)$ in the case II.

In particular we obtain

(3.23)   $\|\nabla\varphi^{b_j}-p(b_j)\nabla\psi^{b_j}\|_{L^2(G\setminus B(b_j,\rho))}=O(C\rho^2)$ in the case I
         $=O(C|b_j-a_j|\rho+C\rho^2)$ in the case II.

By (3.22) we are led to

(3.24)   $\|\nabla\varphi^{b_j}-p(b_j)\nabla\psi^{b_j}\|_{L^\infty(G\setminus B(b_j,\rho))}=O(C\rho)$ in the case I
         $=O(C|b_j-a_j|+C\rho)$ in the case II.
Now we use (3.17) and (3.24) to get (3.18) and (3.20). If $x\in\partial G\cap B(b_j,\mu)\setminus B(b_j,\rho)$, for an appropriate choice of coordinates $z=(z_1,z_2)$ we set

$\frac{\partial\psi^{b_j}}{\partial\nu}(x)=\frac{1}{\pi}\int_{\partial B(b_j,\rho)}\frac{z_2}{|x-z|^2}\,\Big(\frac{\tilde h_j(z)}{d_j}-1\Big).$

We denote $z_2=\rho\sin\theta$ and we use $|x-z|\ge|x-b_j|-\rho\cos\theta$ to get

$\frac{\partial\psi^{b_j}}{\partial\nu}(x)=O\Big(\frac{\rho}{(|x-b_j|^2-\rho^2)^{1/2}}\Big);$
thus, using (3.24), (3.19) is proved. In order to prove (3.21), we first remark that (3.22) leads to

$\|\varphi^{b_j}-p(b_j)\psi^{b_j}\|_{L^\infty(G\setminus B(b_j,\rho))}=O(C\rho^2)$ in the case I
$=O(C|b_j-a_j|\rho+C\rho^2)$ in the case II,

and this gives, in view of (3.16),

(3.25)   $|\varphi^{b_j}(x)-p(b_j)H|=O\Big(\frac{\rho}{|x-b_j|-\rho}\Big)+O(C\rho^2)$ in the case I
         $=O\Big(\frac{\rho}{|x-b_j|-\rho}\Big)+O(C|b_j-a_j|\rho+C\rho^2)$ in the case II.

Now we let, for $x\in\mathbb{R}^2\setminus B(0,1)$, $v(x)=\varphi^{b_j}(b_j+\rho x)$ and, for $y\in B(0,1)$, $\tilde v(y)=v\big(x+\frac{1}{2}(|x|-1)y\big)$. Standard estimates in $B(0,1)$ for the map $\tilde v-p(b_j)H$ give

$|\nabla\tilde v(0)|\le M\,\|\tilde v-p(b_j)H\|_{L^\infty(B(0,\frac{1}{2}))},$

thus

$|\nabla v(x)|\le\frac{M}{|x|-1}\,\|v-p(b_j)H\|_{L^\infty(B(x,\frac{1}{2}(|x|-1)))}.$

We set $z=b_j+\rho x$. We are led to

$|\nabla\varphi^{b_j}(z)|\le\frac{M}{|z-b_j|-\rho}\,\|\varphi^{b_j}-p(b_j)H\|_{L^\infty(B(z,\frac{1}{2}(|z-b_j|-\rho)))}$

and we use (3.25) to get (3.21).
We have the following Lemma.

LEMMA 2. In the case I, that is if $b_j\in\overline G$ and $p(x)=p_0+C|x-b_j|^2+o(|x-b_j|^2)$, we have, respectively when $b_j\in G$ and when $b_j\in\partial G$, uniformly for $(h_1,\dots,h_m)\in H_m^M$,

$\frac{1}{2}\int_{G_b}\frac{1}{p}\,|\nabla\varphi^{b_j}|^2=\frac{p_0}{d_j^2}\,W^1(h_j)+O(\rho^2)+O\Big(C\rho^2\log\frac{1}{\rho}\Big)$

and

$\frac{1}{2}\int_{G_b}\frac{1}{p}\,|\nabla\varphi^{b_j}|^2=\frac{p_0}{2d_j^2}\,W^1(h_j)+O(\rho^2)+O\Big(C\rho^2\log\frac{1}{\rho}\Big).$

Now in the case II, that is if $b_j\in G$, $|b_j-a_j|\le\mu$ and $p(x)=p_0+C|x-a_j|^2+o(|x-a_j|^2)$, we have, uniformly for $(h_1,\dots,h_m)\in H_m^M$,

$\frac{1}{2}\int_{G_b}\frac{1}{p}\,|\nabla\varphi^{b_j}|^2=\frac{p^2(b_j)}{p_0\,d_j^2}\,W^1(h_j)+O\Big(C\rho^2\log\frac{1}{\rho}\Big)+O(C\rho)+O\Big(\frac{\rho^2}{\mu^2}\Big).$
Proof. We first remark that the $W^1(h_j)$, $j=1,\dots,m$, are bounded uniformly for $(h_1,\dots,h_m)\in H_m^M$. Recall that $\psi^{b_j}(x)=\tilde\varphi_j\big(\frac{x-b_j}{\rho}\big)$. We claim that, respectively for $b_j\in\partial G$ and for $b_j\in G$,

(3.26)   $\frac{1}{2}\int_{G_b}\frac{1}{p}\,|\nabla\psi^{b_j}|^2=\frac{1}{2p_0 d_j^2}\,W^1(h_j)+O(\rho^2)+O\Big(C\rho^2\log\frac{1}{\rho}\Big)$
         $=\frac{1}{p_0 d_j^2}\,W^1(h_j)+O(\rho^2)+O\Big(C\rho^2\log\frac{1}{\rho}\Big).$
We only give the proof of the claim for $b_j\in\partial G$, since the case $b_j\in G$ follows [CM]. Using the fact that $h_j(\bar z)=\overline{h_j(z)}$ for $z\in S^1$, we transform $\tilde\varphi_j$, by the inversion $\chi(z)=\psi^{b_j}\big(b_j+\frac{\rho}{\bar z}\big)$, into the solution of the problem (2.6), and we transform $B(b_j,\mu)\setminus B(b_j,\rho)$ into the half ring $B_1^-\setminus B(0,\frac{\rho}{\mu})$, where $B_1^-=\{(x_1,x_2)\in\mathbb{R}^2;\ x_1^2+x_2^2\le 1,\ x_2<0\}$. The domain $G\setminus B(b_j,\mu)$ is transformed into a domain $D$ contained in $B(0,\frac{\rho}{\mu})$. We have, by (3.17),

$\frac{1}{2}\int_{G_b}\frac{1}{p}\,|\nabla\psi^{b_j}|^2\le\frac{1}{2p_0}\int_{\{x_2\le 0,\ \rho\le|x-b_j|\le\mu\}}|\nabla\psi^{b_j}|^2+\frac{1}{2p_0}\int_{G\setminus B(b_j,\mu)}|\nabla\psi^{b_j}|^2$
$\le\frac{1}{2p_0}\int_{B_1^-\setminus B(0,\frac{\rho}{\mu})}|\nabla\chi|^2+O\Big(\frac{\rho^2}{\mu^2}\Big).$

Since $h_j(\bar z)=\overline{h_j(z)}$, we see that $\chi(\bar z)=\chi(z)$, thus

$\frac{1}{2p_0}\int_{B_1^-\setminus B(0,\frac{\rho}{\mu})}|\nabla\chi|^2=\frac{1}{4p_0}\int_{B_1\setminus B(0,\frac{\rho}{\mu})}|\nabla\chi|^2.$
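The halving identity depends only on the reflection symmetry of $\chi$; written out, the argument is the following elementary computation:

```latex
% If \chi(\bar z) = \chi(z), then |\nabla\chi| is invariant under the
% reflection s(z) = \bar z, which maps B_1^- \setminus B(0,\rho/\mu) onto
% B_1^+ \setminus B(0,\rho/\mu) and preserves Lebesgue measure. Hence
\int_{B_1^- \setminus B(0,\rho/\mu)} |\nabla\chi|^2
  = \int_{B_1^+ \setminus B(0,\rho/\mu)} |\nabla\chi|^2
  = \frac{1}{2} \int_{B_1 \setminus B(0,\rho/\mu)} |\nabla\chi|^2 ,
% which is the identity used in the text (up to the factor 1/(2 p_0)).
```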
This gives, using the definition of $W^1(h_j)$,

$\frac{1}{2}\int_{G_b}\frac{1}{p}\,|\nabla\psi^{b_j}|^2\le\frac{1}{2p_0 d_j^2}\,W^1(h_j)+O(\rho^2).$

Now

$\frac{1}{2}\int_{G_b}\frac{1}{p}\,|\nabla\psi^{b_j}|^2\ge\frac{1}{2}\int_{\{x_2\le 0,\ \rho\le|x-b_j|\le\mu\}}\frac{1}{p}\,|\nabla\psi^{b_j}|^2\ge\frac{1}{2}\int_{B_1^-\setminus B(0,\frac{\rho}{\mu})}\frac{1}{\tilde p}\,|\nabla\chi|^2,$

where $\tilde p(z)=p\big(b_j+\frac{\rho}{\bar z}\big)$. By the symmetry of $\chi$ and $\tilde p$, we obtain

$\frac{1}{2}\int_{B_1^-\setminus B(0,\frac{\rho}{\mu})}\frac{1}{\tilde p}\,|\nabla\chi|^2=\frac{1}{4}\int_{B_1\setminus B(0,\frac{\rho}{\mu})}\frac{1}{\tilde p}\,|\nabla\chi|^2.$

Since $\tilde p(z)\le p_0+C\frac{\rho^2}{|z|^2}$ for all $z\in B_1\setminus B(0,\frac{\rho}{\mu})$, we have

$\frac{1}{2}\int_{G_b}\frac{1}{p}\,|\nabla\psi^{b_j}|^2\ge\frac{1}{2p_0 d_j^2}\,W^1(h_j)+O\Big(C\rho^2\log\frac{1}{\rho}\Big)+O(\rho^2),$
and this gives the proof of the claim (3.26). We use the claims (3.23) and (3.26) to get the proof of Lemma 2 in the case I. In the case II we denote $\omega^{b_j}(z)=\frac{1}{p(b_j)}\,\varphi^{b_j}\big(b_j+\frac{\rho}{\bar z}\big)$, $z\in B(0,1)$, and we have, using (3.21),

(3.27)   $\frac{1}{2}\int_{G_b}\frac{1}{p\,p^2(b_j)}\,|\nabla\varphi^{b_j}|^2\le\frac{1}{2}\int_{B_1\setminus B(0,\frac{\rho}{\mu})}\frac{1}{\tilde p_{b_j}}\,|\nabla\omega^{b_j}|^2+\frac{1}{2p_0^3}\int_{G\setminus B(b_j,\mu)}|\nabla\varphi^{b_j}|^2$
         $\le\frac{1}{2}\int_{B_1\setminus B(0,\frac{\rho}{\mu})}\frac{1}{\tilde p_{b_j}}\,|\nabla\omega^{b_j}|^2+O\Big(\frac{\rho^2}{\mu^2}\Big),$

where $\tilde p_{b_j}(z)=p\big(b_j+\frac{\rho}{\bar z}\big)$, and

(3.28)   $\frac{1}{2}\int_{G_b}\frac{1}{p\,p^2(b_j)}\,|\nabla\varphi^{b_j}|^2\ge\frac{1}{2}\int_{B_1\setminus B(0,\frac{\rho}{\mu})}\frac{1}{\tilde p_{b_j}}\,|\nabla\omega^{b_j}|^2.$
Now, $a_j$ being in $G$, we set $\bar h_j(x)=(h_j\wedge(h_j)_\tau)\big(\frac{x-a_j}{\rho}\big)$ for $x\in\partial B(a_j,\rho)$ and we consider the function $\varphi^{a_j}$ defined by

$\mathrm{div}\big(\frac{1}{p}\nabla\varphi^{a_j}\big)=0$ in $\mathbb{R}^2\setminus\partial B(a_j,\rho)$
$\frac{1}{p}\frac{\partial\varphi^{a_j}}{\partial\nu}=\frac{1}{\rho}\Big(\frac{\bar h_j}{d_j}-1\Big)$ on $\partial B(a_j,\rho)$
$\varphi^{a_j}$ bounded,

with the normalization condition $\int_{\partial B(a_j,\rho)}\bar h_j\,\varphi^{a_j}=0$. Using (3.27) and (3.28), we deduce
$\Big|\frac{1}{2}\int_{G_b}\frac{1}{p\,p^2(b_j)}\,|\nabla\varphi^{b_j}|^2-\frac{1}{2}\int_{G_a}\frac{1}{p\,p_0^2}\,|\nabla\varphi^{a_j}|^2\Big|$
$=\Big|\frac{1}{2}\int_{B_1\setminus B(0,\frac{\rho}{\mu})}\Big(\frac{1}{\tilde p_{b_j}}\,|\nabla\omega^{b_j}|^2-\frac{1}{\tilde p_{a_j}}\,|\nabla\omega^{a_j}|^2\Big)\Big|+O\Big(\frac{\rho^2}{\mu^2}\Big)$
$=\Big|\frac{1}{2}\int_{B_1}\Big(\frac{1}{\tilde p_{b_j}}\,|\nabla\omega^{b_j}|^2-\frac{1}{\tilde p_{a_j}}\,|\nabla\omega^{a_j}|^2\Big)\Big|+O\Big(\frac{\rho^2}{\mu^2}\Big),$

where $\omega^{a_j}(z)=\frac{1}{p_0}\,\varphi^{a_j}\big(a_j+\frac{\rho}{\bar z}\big)$ and $\tilde p_{a_j}(z)=p\big(a_j+\frac{\rho}{\bar z}\big)$. By the first part of the proof of Lemma 1, we have

$\|\omega^{b_j}-\omega^{a_j}\|_{H^2(B(0,1))}\le\|\omega^{b_j}-\tilde\omega\|_{H^2(B(0,1))}+\|\omega^{a_j}-\tilde\omega\|_{H^2(B(0,1))}=O(C|b_j-a_j|\rho+C\rho^2),$

where $\tilde\omega(z)=\tilde\varphi_j\big(\frac{1}{\bar z}\big)$. In particular we have

$\|\nabla\omega^{b_j}-\nabla\omega^{a_j}\|_{L^2(B(0,1))}=O(C|b_j-a_j|\rho+C\rho^2).$

Now

$\Big|\frac{1}{2}\int_{B_1}\Big(\frac{1}{\tilde p_{b_j}}\,|\nabla\omega^{b_j}|^2-\frac{1}{\tilde p_{a_j}}\,|\nabla\omega^{a_j}|^2\Big)\Big|\le\Big|\int_{B_1}\frac{1}{\tilde p_{a_j}}\big(|\nabla\omega^{b_j}|^2-|\nabla\omega^{a_j}|^2\big)\Big|+\Big|\int_{B_1}\Big(\frac{1}{\tilde p_{b_j}}-\frac{1}{\tilde p_{a_j}}\Big)|\nabla\omega^{b_j}|^2\Big|=O(C|a_j-b_j|\rho).$

Now we apply the result of the case I in the present Lemma and we see that

$\frac{1}{2}\int_{G_a}\frac{1}{p}\,|\nabla\varphi^{a_j}|^2=\frac{p_0}{d_j^2}\,W^1(h_j)+O(\rho^2)+O\Big(C\rho^2\log\frac{1}{\rho}\Big),$

and this gives the proof of Lemma 2.
Let $\eta^{b_j}$ be a solution in $\mathbb{R}^2\setminus\partial B(b_j,\rho)$ of

$\frac{\partial\eta^{b_j}}{\partial x_1}=-\frac{1}{p}\frac{\partial\varphi^{b_j}}{\partial x_2}$
$\frac{\partial\eta^{b_j}}{\partial x_2}=\frac{1}{p}\frac{\partial\varphi^{b_j}}{\partial x_1}.$

The existence of $\eta^{b_j}$ follows from the fact that $\int_{\partial B(b_j,\rho)}\frac{1}{p}\frac{\partial\varphi^{b_j}}{\partial\nu}=0$, and $\eta^{b_j}$ verifies

$\mathrm{div}\big(p\,\nabla\eta^{b_j}\big)=0$ in $\mathbb{R}^2\setminus\partial B(b_j,\rho)$
$\frac{\partial\eta^{b_j}}{\partial\tau}=\frac{1}{\rho}\Big(\frac{\tilde h_j}{d_j}-1\Big)$ on $\partial B(b_j,\rho).$

By (3.20), for any compact set $K$ in $\mathbb{R}^2\setminus\{b_j\}$ we obtain, in the case II,

(3.29)   $\max_K\eta^{b_j}-\min_K\eta^{b_j}=O\Big(\frac{\rho\,\mathrm{diam}(K)}{\mathrm{dist}(K,b_j)}+C|b_j-a_j|\,\mathrm{diam}(K)+C\rho\,\mathrm{diam}(K)\Big).$
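The two relations defining the conjugate function can be checked to be compatible, and they transfer the Neumann data of $\varphi^{b_j}$ to tangential data for the conjugate. A sketch of this elementary verification, writing $\eta^{b_j}$ for the conjugate as above:

```latex
% Compatibility: \nabla\eta^{b_j} = \frac{1}{p}(\nabla\varphi^{b_j})^{\perp}
% is curl-free away from \partial B(b_j,\rho) exactly when
%   \mathrm{div}\Big(\frac{1}{p}\nabla\varphi^{b_j}\Big) = 0,
% which is the equation satisfied by \varphi^{b_j}. Conversely,
%   \mathrm{div}\big(p\,\nabla\eta^{b_j}\big)
%   = \mathrm{div}\big((\nabla\varphi^{b_j})^{\perp}\big) = 0,
% and on \partial B(b_j,\rho), with (\tau,\nu) the tangent/normal frame,
\frac{\partial\eta^{b_j}}{\partial\tau}
  = \frac{1}{p}\frac{\partial\varphi^{b_j}}{\partial\nu}
  = \frac{1}{\rho}\Big(\frac{\tilde h_j}{d_j}-1\Big).
```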
Now we claim that in the case I

(3.30)   $\max_{\partial G\setminus\cup_{j=l+1}^m B(b_j,\rho)}\eta^{b_j}-\min_{\partial G\setminus\cup_{j=l+1}^m B(b_j,\rho)}\eta^{b_j}=O\Big(\rho\log\frac{1}{\rho}\Big).$

Indeed, letting $j=l+1,\dots,m$ and $\{x_0,x_1\}=\partial G\cap\partial B(b_j,\rho)$, the symmetry of $h_j$ gives $\int_{x_0}^{x_1}\big(\frac{\tilde h_j}{d_j}-1\big)=0$, that is $\eta^{b_j}(x_0)=\eta^{b_j}(x_1)$. Combining this fact with (3.19) we obtain (3.30). Now we define $\bar\Phi^b$ as the solution of the linear problem
$\mathrm{div}\big(\frac{1}{p}\nabla\bar\Phi^b\big)=0$ in $G_b$
$\frac{1}{p}\frac{\partial\bar\Phi^b}{\partial\nu}=\frac{d_j}{\rho}$ on $\partial B(b_j,\rho)\cap G$, $j=1,\dots,m$
$\frac{1}{p}\frac{\partial\bar\Phi^b}{\partial\nu}=h_0\wedge(h_0)_\tau$ on $\partial G\setminus\cup_{j=l+1}^m B(b_j,\rho)$

with the normalization condition $\int_{\partial G}(h_0\wedge(h_0)_\tau)\,\bar\Phi^b=0$. The function $\bar\Phi^b$ is equal to the function $\Phi^b$ in the particular case where $h_j(x)=\big(\frac{x}{|x|}\big)^{d_j}$. We write

(3.31)   $\Phi^b=\bar\Phi^b+\sum_{j=1}^m d_j\,\varphi^{b_j}+\hat\Phi^b,$
thus $\hat\Phi^b$ is the solution of

$\mathrm{div}\big(\frac{1}{p}\nabla\hat\Phi^b\big)=0$ in $G_b$
$\frac{1}{p}\frac{\partial\hat\Phi^b}{\partial\nu}=-\sum_{k\ne j}d_k\,\frac{1}{p}\frac{\partial\varphi^{b_k}}{\partial\nu}$ on $\partial B(b_j,\rho)\cap G$, $j=1,\dots,m$
$\frac{1}{p}\frac{\partial\hat\Phi^b}{\partial\nu}=-\sum_{k=1}^m d_k\,\frac{1}{p}\frac{\partial\varphi^{b_k}}{\partial\nu}$ on $\partial G\setminus\cup_{j=l+1}^m B(b_j,\rho)$

with the normalization condition $\int_{\partial G}(h_0\wedge(h_0)_\tau)\,\hat\Phi^b=-\sum_{j=1}^m d_j\int_{\partial G}(h_0\wedge(h_0)_\tau)\,\varphi^{b_j}$.
We have the following Lemma.

LEMMA 3. In the case I

$\int_{G_b}|\nabla\hat\Phi^b|^2=O\Big(\rho^2\log^3\frac{1}{\rho}\Big)$

and in the case II

$\int_{G_b}|\nabla\hat\Phi^b|^2=O(\rho^2).$

Proof. We have, for $j=1,\dots,l$,

$\int_{\partial B(b_j,\rho)}\frac{1}{p}\frac{\partial\varphi^{b_k}}{\partial\nu}=\int_{B(b_j,\rho)}\mathrm{div}\Big(\frac{1}{p}\nabla\varphi^{b_k}\Big)=0,$

and consequently

$\int_{\partial B(b_j,\rho)}\frac{1}{p}\frac{\partial\hat\Phi^b}{\partial\nu}=0,\quad j=1,\dots,l.$
Thus we may define a conjugate function of $\hat\Phi^b$, which we denote by $\Psi$ in this proof, with $\nabla\Psi=\frac{1}{p}(\nabla\hat\Phi^b)^\perp$. First let us prove that we may choose $\Psi$ such that

(3.32)   $|\Psi|_{L^\infty(G_b)}=O\Big(\rho\log\frac{1}{\rho}\Big)$ in the case I
         $=O(\rho)$ in the case II.

We have

$\int_{\gamma_j}\frac{\partial\Psi}{\partial\tau}=0,\quad j=1,\dots,l$

(here $\gamma_j=\partial B(b_j,\rho)$ for $j=1,\dots,l$, and $\gamma_0$ denotes the remaining component of $\partial G_b$), hence we are in position to apply Lemma I.4 of [BBH2] in order to infer

$\max_{G_b}\Psi-\min_{G_b}\Psi\le\sum_{j=1}^l\Big(\max_{\gamma_j}\Psi-\min_{\gamma_j}\Psi\Big)+\max_{\gamma_0}\Psi-\min_{\gamma_0}\Psi.$

For $j=1,\dots,l$, there exist $x$ and $y\in\gamma_j$ such that

$\max_{\gamma_j}\Psi-\min_{\gamma_j}\Psi=\Psi(x)-\Psi(y).$
We have

$\max_{\gamma_j}\Psi-\min_{\gamma_j}\Psi=O\Big(\Big|\int_x^y\sum_{k\ne j}d_k\,\frac{\partial\eta^{b_k}}{\partial\tau}(z)\,dz\Big|\Big).$

Using respectively (3.18) and (3.20), we get, respectively in the cases I and II,

$\max_{\gamma_j}\Psi-\min_{\gamma_j}\Psi=O(\rho^2)$

and

$\max_{\gamma_j}\Psi-\min_{\gamma_j}\Psi=O(C\rho+\rho^2).$

Let $x$ and $y$ be such that $\max_{\gamma_0}\Psi-\min_{\gamma_0}\Psi=\Psi(x)-\Psi(y)$. Let us consider the case I. If $x$ and $y$ are in the same connected component of $\partial G\setminus\cup_{j=l+1}^m B(b_j,\rho)$, we use (3.19) to get $\Psi(x)-\Psi(y)=O(\rho\log\frac{1}{\rho})$. If there exists $j$ such that $x$ and $y$ are in $\partial B(b_j,\rho)\cap G$, we conclude as in the previous case. Now, in the case II, we use (3.21) to get $\Psi(x)-\Psi(y)=O(\rho)$. Hence we have (3.32). Let us define $A_{2\rho}^j=\big(B(b_j,2\rho)\setminus B(b_j,\rho)\big)\cap G$.
Let us prove that in both cases I and II we have

(3.33)   $\|\nabla\Psi\|_{L^\infty(A_{2\rho}^j)}=O\Big(\log\frac{1}{\rho}\Big)$ in the case I
         $=O(1)$ in the case II.

First we consider the case I and $j=l+1,\dots,m$. Recall that we may assume that $\partial G\cap B(b_j,\mu)$ is the axis $x_2=0$. We denote by $\Theta$ and $\bar\Theta$ conjugate functions of $\Phi^b$ and $\bar\Phi^b$ (defined from $\Phi^b$ and $\bar\Phi^b$ as $\eta^{b_j}$ is from $\varphi^{b_j}$). We extend $\Theta-\bar\Theta$ to a map still denoted $\Theta-\bar\Theta$, defined in $B(b_j,\mu)\setminus B(b_j,\rho)$ by $(\Theta-\bar\Theta)(r(x))=(\Theta-\bar\Theta)(x)$, $r$ being the reflection associated to the axis $x_2=0$. Let

$v(x)=\big(\Theta-\bar\Theta-d_j\,\eta^{b_j}\big)(b_j+\rho x)=\Big(\sum_{k\ne j}d_k\,\eta^{b_k}+\Psi\Big)(b_j+\rho x).$
Since we have $\frac{\partial(\Theta-\bar\Theta)}{\partial\tau}=0$ on $\partial G\cap B(b_j,\mu)$, the map $v$ is in $W^{2,q}\big(B(0,\frac{\mu}{\rho})\setminus B(0,1)\big)$ for all $1<q<\infty$ and verifies

$\mathrm{div}(\tilde p\,\nabla v)=0$ in $B(0,\frac{\mu}{\rho})\setminus B(0,1)$
$\frac{\partial v}{\partial\tau}=0$ on $\partial B(0,1),$

where $\tilde p(x)=p(b_j+\rho x)$. Thanks to (3.18) we may choose $\eta^{b_k}$ such that

(3.34)   $\|\eta^{b_k}\|_{L^\infty(A_{2\rho}^j)}=O(\rho^2)$ for $k\ne j$,

and we may define $\bar v(x)=\big(\Psi+\sum_{k\ne j}d_k\,\eta^{b_k}\big)(b_j+\rho x)$. It verifies

$\mathrm{div}(\tilde p\,\nabla\bar v)=0$ in $B(0,\frac{\mu}{\rho})\setminus B(0,1)$
$\bar v=C(\rho)$ on $\partial B(0,1),$
where $C(\rho)$ is a constant that verifies, by (3.32) and (3.34), $|C(\rho)|=O(\rho\log\frac{1}{\rho})$. Standard elliptic estimates give

$|\nabla\bar v|_{L^\infty(B(0,2)\setminus B(0,1))}\le M\,|\bar v-C(\rho)|_{L^\infty(B(0,3)\setminus B(0,1))}.$

The constant $M$ is independent of $\rho$ because $\tilde p$ is bounded uniformly in $\rho$. But we have $\bar v(r(x))=\bar v(x)$, so, using (3.32) and (3.34), we obtain

$|\bar v|_{L^\infty(B(0,3)\setminus B(0,1))}=\Big|\Psi+\sum_{k\ne j}d_k\,\eta^{b_k}\Big|_{L^\infty((B(b_j,3\rho)\setminus B(b_j,\rho))\cap G)}=O\Big(\rho\log\frac{1}{\rho}\Big).$

We are led to

$|\nabla\Psi|_{L^\infty(A_{2\rho}^j)}=O\Big(\frac{1}{\rho}\cdot\rho\log\frac{1}{\rho}\Big)+\Big|\sum_{k\ne j}d_k\,\nabla\eta^{b_k}\Big|_{L^\infty(A_{2\rho}^j)}=O\Big(\log\frac{1}{\rho}\Big).$

We have proved (3.33) in the case I, for $j=l+1,\dots,m$. The proof of (3.33) in the case I with $j=1,\dots,l$ is the same, without the reflection $r$. In the case II, thanks to (3.29) we choose $\|\eta^{b_k}\|_{L^\infty(A_{2\rho}^j)}=O(\frac{\rho^2}{\mu^2}+C\rho)$, and the same proof as above, without the reflection $r$, leads to (3.33). So we have (3.33) in both cases. We claim now that we have in the case I, for $x\in\partial G\setminus\cup_{j=l+1}^m B(b_j,\rho)$,

(3.35)   $|\nabla\Psi(x)|=O\Big(\max_{j=l+1,\dots,m}\frac{\rho\log\frac{1}{\rho}}{|x-b_j|-\rho}\Big)$

and in the case II, for $x\in\partial G$,

(3.36)   $|\nabla\Psi(x)|=O(\rho).$

In order to prove (3.35) and (3.36), we define

$u=\Psi+\sum_{j=1}^m d_j\,\eta^{b_j}.$
We may choose the additive constants in $u=\Psi+\sum_{j=1}^m d_j\,\eta^{b_j}$ so that, due to (3.30) and (3.32), for all $x\in\partial G\setminus\cup_{j=l+1}^m B(b_j,\rho)$, in the case I

$|u(x)|=O\Big(\rho\log\frac{1}{\rho}\Big).$

In the case II, for all $x\in\partial G$ we may choose, due to (3.21) and (3.32),

$|u(x)|=O(\rho).$

In the case I, let $j=l+1,\dots,m$, let $x\in\partial G\cap B(b_j,\mu)\setminus B(b_j,2\rho)$, and let $\lambda=|x-b_j|-\rho$. We define

$\tilde u(y)=u(x+\lambda y),\quad y\in B_1^+.$

We remark that $\frac{\partial u}{\partial\tau}=0$ on $\partial G\setminus\cup_{j=l+1}^m B(b_j,\rho)$, thus $u$ is equal to a constant $C(\rho)$ on the connected component of $\partial G\setminus\cup_{j=l+1}^m B(b_j,\rho)$ that contains $x$. We have $C(\rho)=O(\rho\log\frac{1}{\rho})$. The map $\tilde u-C(\rho)$ verifies

$\mathrm{div}\big(\tilde p\,\nabla(\tilde u-C(\rho))\big)=0$ in $B_1^+$
$\tilde u-C(\rho)=0$ on $\partial(B_1^+)\cap\{x_2=0\}.$
Standard elliptic estimates in $B_1^+$ give

(3.37)   $|\nabla\tilde u|_{L^\infty(B(0,\frac{1}{2})^+)}\le M\,|\tilde u-C(\rho)|_{L^\infty(B_1^+)},$

where the constant $M$ is independent of $\rho$, since $\tilde p(y)=p(x+\lambda y)$ is bounded uniformly in $\rho$. The proof of (3.35) follows directly from (3.37). Now in the case II we have $C(\rho)=O(\rho)$, and a similar proof gives (3.36). We write

$\int_{G_b}|\nabla\Psi|^2=\int_{G_{2\rho}}|\nabla\Psi|^2+\sum_{j=1}^m\int_{A_{2\rho}^j}|\nabla\Psi|^2,$

where

$G_{2\rho}=G\setminus\cup_{j=1}^m B(b_j,2\rho).$

Thanks to (3.33), we get

$\int_{A_{2\rho}^j}|\nabla\Psi|^2=O\Big(\rho^2\log^2\frac{1}{\rho}\Big)$ in the case I
$=O(\rho^2)$ in the case II.
Now

$\int_{G_{2\rho}}|\nabla\Psi|^2=\int_{\partial G\setminus\cup_{j=1}^m B(b_j,2\rho)}\frac{\partial\Psi}{\partial\nu}\,\Psi+\sum_{j=1}^m\int_{\partial B(b_j,2\rho)\cap G}\frac{\partial\Psi}{\partial\nu}\,\Psi.$

By (3.32), (3.33), (3.35) and (3.36) this is $O(\rho^2\log^3\frac{1}{\rho})$ in the case I, and $O(\rho^2)$ in the case II. Since $|\nabla\hat\Phi^b|=p\,|\nabla\Psi|$ and $p$ is bounded, we have proved Lemma 3.
LEMMA 4. We have in the case I

$\int_{G_b}|\nabla\bar\Phi^b-\nabla\Phi_0|^2=O\Big(\rho^2\log\frac{1}{\rho}\Big)$

and in the case II

$\int_{G_b}|\nabla\bar\Phi^b-\nabla\Phi_0|^2=O\Big(\frac{\rho^2}{\mu^2}\Big)+O\Big(C^2\mu^2\rho^2\log^2\frac{1}{\rho}\Big).$

Proof. Let $u=\bar\Phi^b-\Phi_0$. It verifies

$\mathrm{div}\big(\frac{1}{p}\nabla u\big)=0$ in $G_b$
$\frac{1}{p}\frac{\partial u}{\partial\nu}=0$ on $\partial G\setminus\cup_{j=l+1}^m B(b_j,\rho)$
$\frac{1}{p}\frac{\partial u}{\partial\nu}=\frac{d_j}{\rho}-\frac{1}{p}\frac{\partial\Phi_0}{\partial\nu}$ on $\partial B(b_j,\rho)\cap G$, $j=1,\dots,m$.

We have

$\int_{\partial B(b_j,\rho)}\frac{1}{p}\frac{\partial u}{\partial\nu}=0,\quad j=1,\dots,l,$
so we may define a conjugate function $u^*$ of $u$, with $\nabla u^*=\frac{1}{p}(\nabla u)^\perp$. Moreover, by (2.7) and (2.10), we have on $\partial B(b_j,\rho)\cap G$, $j=1,\dots,m$, respectively in the cases I and II,

$\Big|\frac{1}{p}\frac{\partial u}{\partial\nu}\Big|=\Big|\frac{d_j}{\rho}\Big(1-\frac{p(b_j)}{p}\Big)-\sum_{k\ne j}d_k\,\frac{\partial R_k}{\partial\nu}-\sum_k d_k\,\frac{p(b_k)}{p}\,\frac{\partial\log|x-b_k|}{\partial\nu}\Big|=O(1)$

and

$\Big|\frac{1}{p}\frac{\partial u}{\partial\nu}\Big|=O\Big(\frac{1}{\mu}\Big)+O\Big(C\mu\log\frac{1}{\rho}\Big).$
As in the proof of Lemma 3, Lemma I.4 of [BBH2] gives in the case I

$\max_{G_b}u^*-\min_{G_b}u^*=O(\rho)$

and in the case II

$\max_{G_b}u^*-\min_{G_b}u^*=O\Big(C\mu\rho\log\frac{1}{\rho}+\frac{\rho}{\mu}\Big),$

and we choose, respectively in the cases I and II,

(3.38)   $|u^*|_{L^\infty(G_b)}=O(\rho)$ and $\quad =O\Big(\frac{\rho}{\mu}+C\mu\rho\log\frac{1}{\rho}\Big).$

The same proof as for (3.35) and (3.36) gives in the case I, for $x\in\partial G\setminus\cup_{j=l+1}^m B(b_j,2\rho)$,

(3.39)   $|\nabla u^*(x)|=O\Big(\max_{j=l+1,\dots,m}\frac{\rho}{|x-b_j|-\rho}\Big),$

while in the case II, for $x\in\partial G$,

(3.40)   $|\nabla u^*(x)|=O\Big(\frac{\rho}{\mu}+C\mu\rho\log\frac{1}{\rho}\Big).$
Now let us prove that in the case I we have, for all $x\in\partial B(b_j,2\rho)\cap G$,

(3.41)   $|\nabla u^*(x)|=O(1)$

and in the case II we have, for all $x\in\partial B(b_j,2\rho)$,

(3.42)   $|\nabla u^*(x)|=O\Big(\frac{1}{\mu}+C\mu\log\frac{1}{\rho}\Big).$

In the case I let $j=l+1,\dots,m$, let $x_0$ and $x_1$ be the two points of $\partial G\cap\partial B(b_j,2\rho)$, and let

$K=\partial B(b_j,2\rho)\cap G\setminus B\Big(x_0,\frac{\rho}{2}\Big)\setminus B\Big(x_1,\frac{\rho}{2}\Big).$

Using the change of variable $x=b_j+\rho y$ and standard estimates we get

(3.43)   $|\nabla u^*|_{L^\infty(K)}\le M.$

Then, $x_0$ being in $\partial G\cap\partial B(b_j,2\rho)$, we let $\tilde u(y)=u^*(x_0+\rho y)$. We remark that $u^*$ is equal to a constant $C(\rho)$ on $\partial G\cap B(b_j,\mu)\setminus B(b_j,\rho)$, and that by (3.38) we have $|C(\rho)|=O(\rho)$. Thus $\tilde u$ verifies

$\mathrm{div}\big(\tilde p\,\nabla(\tilde u-C(\rho))\big)=0$ in $B_1^+$
$\tilde u-C(\rho)=0$ on $\partial(B_1^+)\cap\{x_2=0\}.$

We use the following estimate

$|\nabla\tilde u|_{L^\infty(B_{1/2}^+)}\le M\,|\tilde u-C(\rho)|_{L^\infty(B_1^+)}$

to get

(3.44)   $|\nabla u^*|_{L^\infty(B(x_0,\frac{\rho}{2})\cap G)}\le M.$

The estimates (3.43) and (3.44) give (3.41). Now, in the case II, we use the change of variable $x=b_j+\rho y$ and standard estimates together with (3.38) to prove (3.42). Next we write, using (3.38)-(3.42),
(3.45)   $\int_{G_{2\rho}}|\nabla u^*|^2=\int_{\partial G\setminus\cup_{j=1}^m B(b_j,2\rho)}\frac{\partial u^*}{\partial\nu}\,u^*-\sum_{j=1}^m\int_{\partial B(b_j,2\rho)\cap G}\frac{\partial u^*}{\partial\nu}\,u^*$
         $=O\Big(\rho^2\log\frac{1}{\rho}\Big)$ in the case I
         $=O\Big(\Big(\frac{\rho}{\mu}+C\mu\rho\log\frac{1}{\rho}\Big)^2\Big)$ in the case II.

In order to achieve the proof of Lemma 4, we prove that, respectively in the cases I and II,

(3.46)   $\int_{A_{2\rho}^j}|\nabla u^*|^2=O(\rho^2)$ and $\quad =O\Big(\frac{\rho^2}{\mu^2}+C^2\mu^2\rho^2\log^2\frac{1}{\rho}\Big).$

In the case I, let $j=l+1,\dots,m$. We have by (3.37), (3.38) and (3.41)

$\mathrm{div}(p\,\nabla u^*)=0$ in $A_{2\rho}^j$
$|u^*|=O(\rho)$ on $\partial A_{2\rho}^j$
$\Big|\frac{\partial u^*}{\partial\nu}\Big|=O(1)$ on $\partial A_{2\rho}^j.$
Thus, letting $f(x)=u^*(b_j+\rho x)$,

$\mathrm{div}(\tilde p\,\nabla f)=0$ in $A_{2,1}^+$
$|f|=O(\rho)$ on $\partial(A_{2,1}^+)$
$\Big|\frac{\partial f}{\partial\nu}\Big|=O(\rho)$ on $\partial(A_{2,1}^+),$

where $A_{2,1}^+=\big(B(0,2)\setminus B(0,1)\big)\cap\{x_2>0\}$ and $\tilde p(x)=p(b_j+\rho x)$. We use an extension theorem, valid for domains with Lipschitz boundary (see [G]), to get a map $v\in H^1(A_{2,1}^+)$ having the same trace as $f$ on $\partial(A_{2,1}^+)$ and such that

$|v|_{H^1(A_{2,1}^+)}\le M\,|f|_{H^{1/2}(\partial A_{2,1}^+)}.$

We have

$\int_{A_{2,1}^+}\tilde p\,|\nabla(f-v)|^2=-\int_{A_{2,1}^+}\tilde p\,\nabla v\cdot\nabla(f-v)$

and thus

$|\nabla f|_{L^2(A_{2,1}^+)}\le 2M\,|\nabla v|_{L^2(A_{2,1}^+)}=O(\rho),$

which proves (3.46) in the case I, for $j=l+1,\dots,m$. In the case I with $j=1,\dots,l$, the proof follows [CM]. In the case II we have

$\mathrm{div}(\tilde p\,\nabla f)=0$ in $A_{2,1}$
$|f|=O\Big(\frac{\rho}{\mu}+C\mu\rho\log\frac{1}{\rho}\Big)$ on $\partial(A_{2,1})$
$\Big|\frac{\partial f}{\partial\nu}\Big|=O\Big(\frac{\rho}{\mu}+C\mu\rho\log\frac{1}{\rho}\Big)$ on $\partial(A_{2,1}),$

thus

$\|f\|_{H^1(A_{2,1})}\le M\,\|f\|_{H^{1/2}(\partial A_{2,1})}=O\Big(\frac{\rho}{\mu}+C\mu\rho\log\frac{1}{\rho}\Big).$

We have proved (3.46) in both cases. Since $|\nabla u|=p\,|\nabla u^*|$ and $p$ is bounded, by (3.45) and (3.46) we have proved Lemma 4.
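The energy comparison invoked in the proof above is the classical variational argument; a sketch, with $f$ the weighted-harmonic map and $v$ the extension of its trace:

```latex
% With f - v \in H^1_0, testing div(\tilde p \nabla f) = 0 against f - v gives
%   \int \tilde p\, \nabla f \cdot \nabla (f - v) = 0 ,
% hence
\int \tilde p\, |\nabla (f-v)|^2 = -\int \tilde p\, \nabla v \cdot \nabla (f-v)
  \le \Big(\int \tilde p\, |\nabla v|^2\Big)^{1/2}
      \Big(\int \tilde p\, |\nabla (f-v)|^2\Big)^{1/2},
% so \int \tilde p |\nabla(f-v)|^2 \le \int \tilde p |\nabla v|^2 and, by the
% triangle inequality and the bounds on \tilde p,
%   \|\nabla f\|_{L^2} \le 2M \|\nabla v\|_{L^2};
% the trace estimate for v then yields (3.46).
```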
LEMMA 5. We have in the case I

(3.47)   $\int_{G_b}\frac{1}{p}\,\nabla\varphi^{b_j}\cdot\nabla\bar\Phi^b=O\Big(\rho\log^2\frac{1}{\rho}\Big),$

(3.48)   $\int_{G_b}\frac{1}{p}\,\nabla\bar\Phi^b\cdot\nabla\hat\Phi^b=O\Big(\rho\log^2\frac{1}{\rho}\Big)$

and

(3.49)   $\int_{G_b}\frac{1}{p}\,\nabla\varphi^{b_j}\cdot\nabla\varphi^{b_k}=O\Big(\rho\log\frac{1}{\rho}\Big),\quad j\ne k,$

while in the case II we have

(3.50)   $\int_{G_b}\frac{1}{p}\,\nabla\varphi^{b_j}\cdot\nabla\bar\Phi^b=O\Big(\rho+C\rho\log\frac{1}{\rho}\Big),$

(3.51)   $\int_{G_b}\frac{1}{p}\,\nabla\bar\Phi^b\cdot\nabla\hat\Phi^b=O\Big(C\rho\log\frac{1}{\rho}+\frac{\rho^2}{\mu^2}\log\frac{1}{\rho}\Big)$

and

(3.52)   $\int_{G_b}\frac{1}{p}\,\nabla\varphi^{b_j}\cdot\nabla\varphi^{b_k}=O\Big(\rho+\frac{\rho^2}{\mu^2}\Big),\quad j\ne k.$
Proof. We begin with the proof of (3.47) and (3.50). We denote $u=\bar\Phi^b-\Phi_0$. Lemma 4 and the Cauchy-Schwarz inequality give

$\int_{G_b}\frac{1}{p}\,\nabla\varphi^{b_j}\cdot\nabla u=O\Big(\rho\log\frac{1}{\rho}\Big)$ in the case I
$=O\Big(\rho+C\rho\log\frac{1}{\rho}\Big)$ in the case II.

Moreover

$\int_{G_b}\frac{1}{p}\,\nabla\varphi^{b_j}\cdot\nabla\bar\Phi^b=\int_{G_b}\frac{1}{p}\,\nabla\varphi^{b_j}\cdot\nabla u+\int_{G_b}\frac{1}{p}\,\nabla\varphi^{b_j}\cdot\nabla\Phi_0.$

By (3.19), we have in the case I

(3.53)   $\int_{\partial G\setminus\cup_{j=l+1}^m B(b_j,\rho)}\frac{1}{p}\frac{\partial\varphi^{b_j}}{\partial\nu}\,\Phi_0=O\Big(\int_{r=\rho}^{r=\mu}\frac{\rho\,|\log r|}{(r^2-\rho^2)^{1/2}}\,dr\Big)+O(\rho)=O\Big(\rho\log^2\frac{1}{\rho}\Big).$

In the case II we have, by (3.21),

(3.54)   $\int_{\partial G}\frac{1}{p}\frac{\partial\varphi^{b_k}}{\partial\nu}\,\Phi_0=O(\rho).$

For $k\ne j$, (3.18) and (3.20) give

$\int_{\partial B(b_k,\rho)\cap G}\frac{1}{p}\frac{\partial\varphi^{b_j}}{\partial\nu}\,\Phi_0=O(\rho)$ in the case I
$=O\Big(\frac{\rho^2}{\mu^2}\log\frac{1}{\rho}+C\rho\log\frac{1}{\rho}\Big)$ in the case II.

Now we use the same notations as in the proof of Proposition 1 and (3.9) to get, for $x\in\partial B(b_j,\rho)\cap G$,

$\Phi_0(x)=d_j\,p(b_j)\log\rho+S_j(b_j)+O(\rho)$ in the case I
$=d_j\,p(b_j)\log\rho+S_j(b_j)+O\Big(C\rho\log\frac{1}{\rho}+\rho\Big)$ in the case II,

and we are led to

$\int_{\partial B(b_j,\rho)\cap G}\frac{1}{p}\frac{\partial\varphi^{b_j}}{\partial\nu}\,\Phi_0=O(\rho)$ in the case I
$=O\Big(C\rho\log\frac{1}{\rho}+\rho\Big)$ in the case II.

We have proved (3.47) and (3.50).
Let us prove (3.48) and (3.51). We set

$\int_{G_b}\frac{1}{p}\,\nabla\hat\Phi^b\cdot\nabla u=\int_{\partial G\setminus\cup_{j=l+1}^m B(b_j,2\rho)}\frac{1}{p}\frac{\partial\hat\Phi^b}{\partial\nu}\,u+\sum_k\int_{\partial B(b_k,2\rho)\cap G}\frac{1}{p}\frac{\partial\hat\Phi^b}{\partial\nu}\,u+\sum_k\int_{A_{2\rho}^k}\frac{1}{p}\,\nabla\hat\Phi^b\cdot\nabla u.$

We use Lemmas 3 and 4 and (3.32), (3.39)-(3.42) to get in the case I

$\int_{G_b}\frac{1}{p}\,\nabla\hat\Phi^b\cdot\nabla u=O\Big(\rho^2\log^2\frac{1}{\rho}\Big)$

and in the case II

$\int_{G_b}\frac{1}{p}\,\nabla\hat\Phi^b\cdot\nabla u=O\Big(\frac{\rho^2}{\mu^2}+C\rho^2\log\frac{1}{\rho}\Big).$

Now

$\int_{G_b}\frac{1}{p}\,\nabla\hat\Phi^b\cdot\nabla\Phi_0=\sum_k\int_{\partial G\setminus\cup_i B(b_i,\rho)}\frac{1}{p}\frac{\partial\varphi^{b_k}}{\partial\nu}\,\Phi_0+\sum_{j\ne k}\int_{\partial B(b_k,\rho)\cap G}\frac{1}{p}\frac{\partial\varphi^{b_j}}{\partial\nu}\,\Phi_0.$

For $j\ne k$ we have, by (3.18), (3.20), (2.7) and (2.9),

$\int_{\partial B(b_k,\rho)\cap G}\frac{1}{p}\frac{\partial\varphi^{b_j}}{\partial\nu}\,\Phi_0=O\Big(\rho^2\log\frac{1}{\rho}\Big)$ in the case I
$=O\Big(C\rho\log\frac{1}{\rho}\Big)+O\Big(\frac{\rho^2}{\mu^2}\log\frac{1}{\rho}\Big)$ in the case II.

We use (3.53) and (3.54) to complete the proof of (3.48) and (3.51).
Let us prove (3.49) and (3.52). Let $k\ne j$. We have

$\int_{G_b}\frac{1}{p}\,\nabla\varphi^{b_j}\cdot\nabla\varphi^{b_k}=\int_{\partial G\setminus\cup_{i=l+1}^m B(b_i,\rho)}\frac{1}{p}\frac{\partial\varphi^{b_j}}{\partial\nu}\,\varphi^{b_k}-\sum_{i=1}^m\int_{\partial B(b_i,\rho)\cap G}\frac{1}{p}\frac{\partial\varphi^{b_j}}{\partial\nu}\,\varphi^{b_k}.$

For $i=j$ we get, respectively in the cases I and II, in view of (3.18) and (3.21),

$\int_{\partial B(b_i,\rho)\cap G}\frac{1}{p}\frac{\partial\varphi^{b_j}}{\partial\nu}\,\varphi^{b_k}=\int_{\partial B(b_i,\rho)\cap G}\frac{1}{p}\frac{\partial\varphi^{b_j}}{\partial\nu}\,\big(\varphi^{b_k}-\varphi^{b_k}(b_i)\big)=O(\rho^2)$
$=O\Big(\frac{\rho^2}{\mu^2}\Big).$

For $i\ne j$ we have, using (3.18) and (3.21), respectively in the cases I and II,

$\int_{\partial B(b_i,\rho)\cap G}\frac{1}{p}\frac{\partial\varphi^{b_j}}{\partial\nu}\,\varphi^{b_k}=O(\rho^2)$
$=O\Big(\frac{\rho^2}{\mu^2}\Big).$

Moreover, in the case I we have, by (3.19),

$\int_{\partial G\setminus\cup_{i=l+1}^m B(b_i,\rho)}\frac{1}{p}\frac{\partial\varphi^{b_j}}{\partial\nu}\,\varphi^{b_k}=O(\rho)+O\Big(\int_{r=\rho}^{r=\mu}\frac{\rho\,dr}{(r^2-\rho^2)^{1/2}}\Big)=O\Big(\rho\log\frac{1}{\rho}\Big)$

and in the case II, by (3.21),

$\int_{\partial G}\frac{1}{p}\frac{\partial\varphi^{b_j}}{\partial\nu}\,\varphi^{b_k}=O(\rho).$

We have proved (3.49) and (3.52).
Proof of Theorem 4. We have, using (3.31),

$\frac{1}{2}\int_{G_b}\frac{1}{p}\,|\nabla\Phi^b|^2=\frac{1}{2}\int_{G_b}\frac{1}{p}\,|\nabla\bar\Phi^b|^2+\frac{1}{2}\int_{G_b}\frac{1}{p}\,|\nabla\hat\Phi^b|^2+\sum_j d_j\int_{G_b}\frac{1}{p}\,\nabla\bar\Phi^b\cdot\nabla\varphi^{b_j}$
$\quad +\int_{G_b}\frac{1}{p}\,\nabla\bar\Phi^b\cdot\nabla\hat\Phi^b+\sum_j d_j\int_{G_b}\frac{1}{p}\,\nabla\varphi^{b_j}\cdot\nabla\hat\Phi^b$
$\quad +\frac{1}{2}\sum_j d_j^2\int_{G_b}\frac{1}{p}\,|\nabla\varphi^{b_j}|^2+\frac{1}{2}\sum_{j\ne k}d_j d_k\int_{G_b}\frac{1}{p}\,\nabla\varphi^{b_j}\cdot\nabla\varphi^{b_k}.$

We use Lemma 5 and Lemma 3 to obtain

(3.55)   $\frac{1}{2}\int_{G_b}\frac{1}{p}\,|\nabla\Phi^b|^2=\frac{1}{2}\int_{G_b}\frac{1}{p}\,|\nabla\bar\Phi^b|^2+\frac{1}{2}\sum_{j=1}^m d_j^2\int_{G_b}\frac{1}{p}\,|\nabla\varphi^{b_j}|^2+X(\rho),$

where $X(\rho)=O\big(\rho\log^2\frac{1}{\rho}\big)$ in the case I, and

$X(\rho)=O\Big(C\rho\log\frac{1}{\rho}+\rho\Big)$

in the case II. Moreover we have
(3.56)   $\int_{G_b}\frac{1}{p}\,|\nabla\bar\Phi^b|^2=\int_{G_b}\frac{1}{p}\,|\nabla\Phi_0|^2+\int_{G_b}\frac{1}{p}\,|\nabla(\bar\Phi^b-\Phi_0)|^2+2\int_{G_b}\frac{1}{p}\,\nabla(\bar\Phi^b-\Phi_0)\cdot\nabla\Phi_0$
and

(3.57)   $\int_{G_b}\frac{1}{p}\,\nabla(\bar\Phi^b-\Phi_0)\cdot\nabla\Phi_0=\sum_i\int_{\partial B(b_i,\rho)\cap G}\frac{1}{p}\frac{\partial(\bar\Phi^b-\Phi_0)}{\partial\nu}\,\Phi_0$
         $=\sum_i\int_{\partial B(b_i,\rho)\cap G}\Phi_0\Big(\frac{d_i\,p(b_i)}{p\,\rho}-\frac{1}{p}\frac{\partial\Phi_0}{\partial\nu}\Big)+\sum_i\int_{\partial B(b_i,\rho)\cap G}\Phi_0\Big(1-\frac{p(b_i)}{p}\Big)\frac{d_i}{\rho}.$

We estimate the last two terms separately. Using the proof of Proposition 1 ((3.5), (3.6) and (3.11)) gives

(3.58)   $\int_{\partial B(b_i,\rho)\cap G}\Phi_0\Big(\frac{p(b_i)\,d_i}{p\,\rho}-\frac{1}{p}\frac{\partial\Phi_0}{\partial\nu}\Big)=-\int_{\partial B(b_i,\rho)\cap G}\frac{1}{p}\frac{\partial S_i}{\partial\nu}\,\big(S_i+d_i\,p(b_i)\log|x-b_i|\big)=O\Big(\rho\log\frac{1}{\rho}\Big)$ in the case I
         $=O\Big(C\rho\log\frac{1}{\rho}+\frac{\rho^2}{\mu^2}\Big)$ in the case II.

On the other hand we directly estimate, by the use of (3.8),

(3.59)   $\int_{\partial B(b_i,\rho)\cap G}\Phi_0\Big(1-\frac{p(b_i)}{p}\Big)\frac{d_i}{\rho}=O\Big(\rho^2\log\frac{1}{\rho}\Big)$ in the case I
         $=O\Big(C\rho\log\frac{1}{\rho}\Big)$ in the case II.

Combining (3.57)-(3.59) we obtain

(3.60)   $\int_{G_b}\frac{1}{p}\,\nabla(\bar\Phi^b-\Phi_0)\cdot\nabla\Phi_0=O\Big(\rho\log\frac{1}{\rho}\Big)$ in the case I
         $=O\Big(C\rho\log\frac{1}{\rho}+\frac{\rho^2}{\mu^2}\Big)$ in the case II.
We conclude by (3.56), (3.60) and Lemma 4 that

$\int_{G_b}\frac{1}{p}\,|\nabla\bar\Phi^b|^2=\int_{G_b}\frac{1}{p}\,|\nabla\Phi_0|^2+O\Big(\rho\log\frac{1}{\rho}\Big)$ in the case I
$=\int_{G_b}\frac{1}{p}\,|\nabla\Phi_0|^2+O\Big(\frac{\rho^2}{\mu^2}+C\rho\log\frac{1}{\rho}\Big)$ in the case II.

We are led to

$\frac{1}{2}\int_{G_b}\frac{1}{p}\,|\nabla\Phi^b|^2=\frac{1}{2}\int_{G_b}\frac{1}{p}\,|\nabla\Phi_0|^2+\frac{1}{2}\sum_{j=1}^m d_j^2\int_{G_b}\frac{1}{p}\,|\nabla\varphi^{b_j}|^2+X(\rho),$

where in the case I $X(\rho)=O\big(\rho\log^2\frac{1}{\rho}\big)$ and in the case II $X(\rho)=O\big(C\rho\log\frac{1}{\rho}+\rho\big)$. We use Lemma 2 and Proposition 1 to prove Theorem 4.
References

[AS1] N. André, I. Shafrir, Minimization of the Ginzburg-Landau functional with weight, C. R. Acad. Sci. Paris, t. 321, Série I, no. 8 (1995), 999-1004.
[AS2] N. André, I. Shafrir, Asymptotic behavior of minimizers for the Ginzburg-Landau functional with weight, Parts I and II, Arch. Rat. Mech. Anal.
[BH1] A. Beaulieu, R. Hadiji, On a class of Ginzburg-Landau equations with weight, PanAmerican Mathematical Journal 5, no. 4 (1995), 1-33.
[BH2] A. Beaulieu, R. Hadiji, A Ginzburg-Landau problem with weight having minima on the boundary, Proceedings of the Royal Society of Edinburgh 128A (1998), 1181-1215.
[BBH1] F. Bethuel, H. Brezis, F. Hélein, Asymptotics for the minimization of a Ginzburg-Landau functional, Calculus of Variations and PDE 1 (1993), 123-148.
[BBH2] F. Bethuel, H. Brezis, F. Hélein, Ginzburg-Landau vortices, Birkhäuser (1994).
[B] H. Brezis, Lecture notes on Ginzburg-Landau vortices, Scuola Normale Superiore, Pisa (1995).
[BMR] H. Brezis, F. Merle, T. Rivière, Quantization effects for $-\Delta u=u(1-|u|^2)$ in $\mathbb{R}^2$, Arch. Rat. Mech. Anal. 126 (1994), 35-58.
[CM] M. Comte, P. Mironescu, The behavior of a Ginzburg-Landau minimizer near its zeros, Calculus of Variations and PDE 4 (1996), 323-340.
[dPF] M. del Pino, P. Felmer, Local minimizers for the Ginzburg-Landau energy, Math. Z. 225 (1997), 671-684.
[DG] Q. Du, M. Gunzburger, A model for superconducting thin films having variable thickness, Physica D 69 (1994), 215-231.
[GT] D. Gilbarg, N. S. Trudinger, Elliptic partial differential equations of second order, Springer-Verlag (1983).
[G] P. Grisvard, Elliptic problems in nonsmooth domains, Pitman (1985).
[R] J. Rubinstein, On the asymptotic behavior of minimizers of the Ginzburg-Landau vortices, Z. Angew. Math. Phys. 46 (1995), 739-751.
[St] M. Struwe, On the asymptotic behavior of minimizers of the Ginzburg-Landau model in 2 dimensions, Differential and Integral Equations 7 (1994), 1613-1624, and erratum, Differential and Integral Equations 8 (1995), 124.