Abstract on:
“On the number of sums and differences”
by I. Z. Ruzsa
Introduction
Let A be a set of integers. We define:
A + A = {a1 + a2 : a1 , a2 ∈ A},
A − A = {a1 − a2 : a1 , a2 ∈ A},
kA = {k · a : a ∈ A}.
If |A| = n, then we have

2n − 1 ≤ |A + A| ≤ (n^2 + n)/2,
2n − 1 ≤ |A − A| ≤ n^2 − n + 1,

where equality on the left holds for arithmetic progressions, and the right-hand
bounds are attained by "generic" sets, in which there is no nontrivial coincidence
among sums or differences (for example A = {2^i : i ∈ [1, n]}).
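As a quick illustration of these extremes, here is a small computation (the helper functions and the concrete sets below are illustrative choices, not taken from the paper):

from itertools import product

def sumset(A):
    # A + A = {a1 + a2 : a1, a2 in A}
    return {a1 + a2 for a1, a2 in product(A, repeat=2)}

def diffset(A):
    # A - A = {a1 - a2 : a1, a2 in A}
    return {a1 - a2 for a1, a2 in product(A, repeat=2)}

n = 10
ap = [3 * i for i in range(n)]               # arithmetic progression: left-hand bound
generic = [2 ** i for i in range(1, n + 1)]  # "generic" set: right-hand bound

print(len(sumset(ap)), 2 * n - 1)                  # 19 19
print(len(sumset(generic)), (n * n + n) // 2)      # 55 55
print(len(diffset(ap)), 2 * n - 1)                 # 19 19
print(len(diffset(generic)), n * n - n + 1)        # 91 91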
We can observe that if a set A is generic for sums (attains the right-hand bound),
then it is also generic for differences, and vice versa: a nontrivial coincidence of
sums, say a + b = a′ + b′, yields a nontrivial coincidence of differences, namely
a − a′ = b′ − b.
However, the "almost" versions of these observations are far from equivalent;
almost all sums may be distinct while almost all differences are represented
multiply, and conversely:
Theorem 1. For every n > n_0 there is a set A such that

|A| = n,  |A + A| ≤ n^{2−c}  but  |A − A| ≥ n^2 − n^{2−c},

where c is a positive absolute constant. Also there is a set B such that

|B| = n,  |B − B| ≤ n^{2−c}  but  |B + B| ≥ n^2/2 − n^{2−c}.
Idea of the proof
For the first part of the theorem: take a set U such that

D = |U − U| > S = |U + U|

and form the set

V = U^k ⊂ Z^k,  V = {(u_1, . . . , u_k) : u_1, . . . , u_k ∈ U}.

This set has |V + V| = S^k and |V − V| = D^k; thus V − V is much larger than
V + V. We then consider a random subset X of V, obtained by picking elements
of V at random. Since the number of sums is much smaller than the number of
differences, we may expect that many sums start to collide (coincide) in a
random set while the differences do not.
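A minimal computational sketch of the product construction (the set U = {0, 1, 3}, the dimension k and the sampling probability below are illustrative choices; the actual proof takes k large and tunes the probability):

from itertools import product
import random

U = (0, 1, 3)                       # |U + U| = 6 < |U - U| = 7, i.e. S < D
k = 6

V = list(product(U, repeat=k))      # V = U^k, viewed inside Z^k

def add(v, w): return tuple(a + b for a, b in zip(v, w))
def sub(v, w): return tuple(a - b for a, b in zip(v, w))

# Sums and differences act coordinate-wise, so |V + V| = S^k and |V - V| = D^k.
print(len({add(v, w) for v in V for w in V}), 6 ** k)   # 46656 46656
print(len({sub(v, w) for v in V for w in V}), 7 ** k)   # 117649 117649

# The random subset X: keep each element of V independently with probability rho.
random.seed(0)
rho = 0.05
X = [v for v in V if random.random() < rho]
print(len(X))                       # about rho * |V| elements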
Technical Lemmas
Since X is chosen from V, with |V| = m, by including each element independently
with probability 0 < ρ < 1, we have E(|X|) = ρm. We will use Chebyshev's
inequality as a concentration result:

P(| |X| − ρm | ≥ K) ≤ ρ(1 − ρ)m / K^2.
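A quick empirical look at this bound (the values of m, ρ and K below are illustrative choices):

import random

m, rho, K = 729, 0.05, 15
trials = 20_000

random.seed(0)
exceed = 0
for _ in range(trials):
    size = sum(random.random() < rho for _ in range(m))   # |X| ~ Binomial(m, rho)
    if abs(size - rho * m) >= K:
        exceed += 1

# Empirical tail probability versus the Chebyshev bound (here about 0.154)
print(exceed / trials, rho * (1 - rho) * m / K ** 2)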
We will also need estimations for D = E|X − X| and S = E|X + X|. By definition,
D = ∑_n P(n ∈ X − X) and S = ∑_n P(n ∈ X + X). Let

σ(n) = |{(v_1, v_2) ∈ V × V : v_1 + v_2 = n}|  and  δ(n) = |{(v_1, v_2) ∈ V × V : v_1 − v_2 = n}|.
Lemma 2. We have

P(n ∉ X + X) = (1 − ρ^2)^{σ(n)/2}                    for n ∉ 2V,
P(n ∉ X + X) = (1 − ρ)(1 − ρ^2)^{(σ(n)−1)/2}         for n ∈ 2V.
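A Monte Carlo sanity check of this formula on a tiny example (the set V, the probability ρ and the target n are illustrative choices, not from the paper):

import random
from itertools import product

U = (0, 1, 3)
V = list(product(U, repeat=2))          # m = 9 elements of Z^2
rho = 0.5

def add(v, w): return tuple(a + b for a, b in zip(v, w))

n = (3, 4)                              # e.g. (0, 1) + (3, 3)
sigma = sum(1 for v, w in product(V, repeat=2) if add(v, w) == n)
in_2V = any(add(v, v) == n for v in V)

# Probability that n is NOT a sum of two elements of X, as predicted by Lemma 2
if in_2V:
    predicted = (1 - rho) * (1 - rho ** 2) ** ((sigma - 1) / 2)
else:
    predicted = (1 - rho ** 2) ** (sigma / 2)

random.seed(1)
trials, misses = 50_000, 0
for _ in range(trials):
    X = [v for v in V if random.random() < rho]
    if not any(add(x, y) == n for x in X for y in X):
        misses += 1

print(predicted, misses / trials)       # the two values should be close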
By estimating the probability of choosing two consecutive numbers when selecting
elements uniformly at random from the set [1, . . . , n], we can show:

Lemma 3. We have: (1 − ρ^2)^{δ(n)} ≤ P(n ∉ X − X) ≤ (1 − ρ^2 + ρ^3)^{δ(n)}.
Summarizing, we obtain:

Lemma 4.

∑_n (1 − (1 − ρ^2)^{σ(n)/2}) − ρm ≤ S ≤ ∑_n (1 − (1 − ρ^2)^{σ(n)/2})

and

∑_n (1 − (1 − ρ^2 + ρ^3)^{δ(n)}) ≤ D ≤ ∑_n (1 − (1 − ρ^2)^{δ(n)}).
We will use the following elementary estimate:

Lemma 5. For every 0 ≤ x ≤ 1, every integer y ≥ 0 and every 0 < ε < 1 we have

1 − (1 − x)^y ≤ (xy)^{1−ε},
1 − (1 − x)^y ≥ xy − (xy)^{1+ε}.
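A brute-force numerical check of these two inequalities on a small grid (the grid, the ε values and the float tolerance are illustrative choices):

# Check Lemma 5 for x on a grid in [0, 1], integer y < 50, several epsilons
def check(eps):
    for xi in range(101):
        x = xi / 100.0
        for y in range(50):
            lhs = 1 - (1 - x) ** y
            assert lhs <= (x * y) ** (1 - eps) + 1e-12
            assert lhs >= x * y - (x * y) ** (1 + eps) - 1e-12
    return True

print(all(check(eps) for eps in (0.1, 0.5, 0.9)))   # True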
Using Lemma 5, we can approximate the sums by powers. Defining Σ(t) = ∑_n σ(n)^t
and ∆(t) = ∑_n δ(n)^t, we can show:
Lemma 6. For every 0 < ε < 1 we have:

(ρ^2/2) m^2 − ρm − (ρ^2/2)^{1+ε} Σ(1 + ε) ≤ S ≤ (ρ^2/2)^{1−ε} Σ(1 − ε),
ρ^2 m^2 − ρ^3 m^2 − ρ^{2(1+ε)} ∆(1 + ε) ≤ D ≤ ρ^{2(1−ε)} ∆(1 − ε).
To prove each part of the theorem we approximate Σ(1 ± ε) and ∆(1 ± ε),
depending on which part we want to show.
For the first part of the theorem we also exclude the bad events: |X| far from
ρm, |X + X| too big, and |X − X| too small. We do this by making the probability
of each of these events ≤ 1/4: for the first we use Chebyshev's inequality, and
for the last two we use Markov's inequality (recall that E(|X + X|) = S and
E(|X − X|) = D). Since the three bad events together have probability at most
3/4, a set X avoiding all of them exists.