Hindawi Publishing Corporation
Journal of Probability and Statistics
Volume 2011, Article ID 202015, 16 pages
doi:10.1155/2011/202015
Research Article
Complete Convergence for Weighted Sums of Sequences of Negatively Dependent Random Variables
Qunying Wu
College of Science, Guilin University of Technology, Guilin 541004, China
Correspondence should be addressed to Qunying Wu, wqy666@glite.edu.cn
Received 30 September 2010; Accepted 21 January 2011
Academic Editor: A. Thavaneswaran
Copyright © 2011 Qunying Wu. This is an open access article distributed under the Creative
Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.
By applying the moment inequality for negatively dependent random variables, complete convergence for weighted sums of sequences of negatively dependent random variables is discussed. As a result, complete convergence theorems for sequences of negatively dependent random variables are extended.
1. Introduction and Lemmas
Definition 1.1. Random variables X and Y are said to be negatively dependent (ND) if

P(X \le x, Y \le y) \le P(X \le x)\,P(Y \le y)   (1.1)

for all x, y ∈ R. A collection of random variables is said to be pairwise negatively dependent (PND) if every pair of random variables in the collection satisfies (1.1).

It is important to note that (1.1) implies that

P(X > x, Y > y) \le P(X > x)\,P(Y > y)   (1.2)

for all x, y ∈ R. Moreover, (1.2) implies (1.1); hence, (1.1) and (1.2) are equivalent. However, (1.1) and (1.2) are not equivalent for a collection of three or more random variables. Consequently, the following definition is needed to define sequences of negatively dependent random variables.
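For two random variables, the equivalence of (1.1) and (1.2) comes from the identity P(X > x, Y > y) − P(X > x)P(Y > y) = P(X ≤ x, Y ≤ y) − P(X ≤ x)P(Y ≤ y). A minimal numeric sketch of this identity; the joint pmf below is an arbitrary illustrative choice, not taken from the paper:

```python
# Check the bivariate identity behind the equivalence of (1.1) and (1.2):
# the "lower-tail gap" equals the "upper-tail gap" for any joint pmf.
pmf = {(0, 0): 0.1, (0, 1): 0.4, (1, 0): 0.3, (1, 1): 0.2}  # arbitrary example

def prob(event):
    """Probability of an event, given as a predicate on outcomes (x, y)."""
    return sum(p for xy, p in pmf.items() if event(xy))

x = y = 0  # threshold
lower_gap = (prob(lambda v: v[0] <= x and v[1] <= y)
             - prob(lambda v: v[0] <= x) * prob(lambda v: v[1] <= y))
upper_gap = (prob(lambda v: v[0] > x and v[1] > y)
             - prob(lambda v: v[0] > x) * prob(lambda v: v[1] > y))
assert abs(lower_gap - upper_gap) < 1e-12  # both gaps equal -0.1 here
```

So (1.1) holds at a point (x, y) exactly when (1.2) does; for three or more variables no such identity is available, which is why Definition 1.2 imposes both inequalities.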
Definition 1.2. The random variables X_1, ..., X_n are said to be negatively dependent (ND) if, for all real x_1, ..., x_n,

P\Big(\bigcap_{j=1}^{n}\{X_j \le x_j\}\Big) \le \prod_{j=1}^{n} P(X_j \le x_j), \qquad P\Big(\bigcap_{j=1}^{n}\{X_j > x_j\}\Big) \le \prod_{j=1}^{n} P(X_j > x_j).   (1.3)

An infinite sequence of random variables {X_n; n ≥ 1} is said to be ND if every finite subset X_1, ..., X_n is ND.
Definition 1.3. Random variables X_1, X_2, ..., X_n, n ≥ 2, are said to be negatively associated (NA) if, for every pair of disjoint subsets A_1 and A_2 of {1, 2, ..., n},

\operatorname{cov}\big(f_1(X_i;\ i \in A_1),\, f_2(X_j;\ j \in A_2)\big) \le 0,   (1.4)

where f_1 and f_2 are increasing in every variable (or decreasing in every variable), provided this covariance exists. A sequence of random variables {X_n; n ≥ 1} is said to be NA if every finite subfamily is NA.

The definition of PND was given by Lehmann [1], the concept of ND was introduced by Bozorgnia et al. [2], and the definition of NA was introduced by Joag-Dev and Proschan [3]. These concepts of dependent random variables have been very useful in reliability theory and applications.

First, note that by letting f_1(X_1, ..., X_{n-1}) = I(X_1 \le x_1, ..., X_{n-1} \le x_{n-1}), f_2(X_n) = I(X_n \le x_n), and then f_1(X_1, ..., X_{n-1}) = I(X_1 > x_1, ..., X_{n-1} > x_{n-1}), f_2(X_n) = I(X_n > x_n), it is easy to see that NA implies (1.3). Hence, NA implies ND. But there are many examples which are ND but not NA. We list the following two examples.
Example 1.4. Let X_i be a binary random variable such that P(X_i = 1) = P(X_i = 0) = 0.5 for i = 1, 2, 3. Let (X_1, X_2, X_3) take the values (0, 0, 1), (0, 1, 0), (1, 0, 0), and (1, 1, 1), each with probability 1/4.

It can be verified that all the ND conditions hold. However,

P(X_1 + X_3 \le 1,\ X_2 \le 0) = \frac{4}{8} > \frac{3}{8} = P(X_1 + X_3 \le 1)\,P(X_2 \le 0).   (1.5)

Hence, X_1, X_2, and X_3 are not NA.
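The NA-violating inequality (1.5) can be checked mechanically. A short sketch (illustrative only) that enumerates the four equally likely outcomes of Example 1.4; since the indicators of {X_1 + X_3 ≤ 1} and {X_2 ≤ 0} are both coordinatewise decreasing, NA would force the joint probability to be at most the product:

```python
# Verify inequality (1.5) for Example 1.4: NA would require
# P(X1 + X3 <= 1, X2 <= 0) <= P(X1 + X3 <= 1) * P(X2 <= 0); it fails here.
outcomes = [(0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)]  # each with probability 1/4

def prob(event):
    return sum(0.25 for o in outcomes if event(o))

joint = prob(lambda o: o[0] + o[2] <= 1 and o[1] <= 0)                   # 4/8
factored = prob(lambda o: o[0] + o[2] <= 1) * prob(lambda o: o[1] <= 0)  # 3/8
assert joint == 0.5 and factored == 0.375 and joint > factored
```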
In the next example, X = (X_1, X_2, X_3, X_4) possesses ND but does not possess NA; it was obtained by Joag-Dev and Proschan [3].

Example 1.5. Let X_i be a binary random variable such that P(X_i = 1) = .5 for i = 1, 2, 3, 4. Let (X_1, X_2) and (X_3, X_4) have the same bivariate distribution, and let X = (X_1, X_2, X_3, X_4) have the joint distribution shown in Table 1.
Table 1

(X_1, X_2) \ (X_3, X_4)   (0,0)    (0,1)    (1,0)    (1,1)    Marginal
(0,0)                     .0577    .0623    .0623    .0577    .24
(0,1)                     .0623    .0677    .0677    .0623    .26
(1,0)                     .0623    .0677    .0677    .0623    .26
(1,1)                     .0577    .0623    .0623    .0577    .24
Marginal                  .24      .26      .26      .24
It can be verified that all the ND conditions hold. However,

P(X_i = 1,\ i = 1, 2, 3, 4) > P(X_1 = X_2 = 1)\,P(X_3 = X_4 = 1),   (1.6)

violating NA.
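A sketch (my own check, not part of the paper) confirming inequality (1.6) directly from Table 1:

```python
# Joint pmf of (X1, X2) x (X3, X4) from Table 1; rows and columns are both
# indexed by the pairs (0,0), (0,1), (1,0), (1,1).
table = [
    [0.0577, 0.0623, 0.0623, 0.0577],
    [0.0623, 0.0677, 0.0677, 0.0623],
    [0.0623, 0.0677, 0.0677, 0.0623],
    [0.0577, 0.0623, 0.0623, 0.0577],
]
total = sum(sum(row) for row in table)
p_all_ones = table[3][3]              # P(X1 = X2 = X3 = X4 = 1) = .0577
p12 = sum(table[3])                   # P(X1 = X2 = 1), row marginal = .24
p34 = sum(row[3] for row in table)    # P(X3 = X4 = 1), column marginal = .24
assert abs(total - 1.0) < 1e-9
assert p_all_ones > p12 * p34         # .0577 > .0576, so (1.6) holds
```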
The above examples show that ND does not imply NA, and that ND is much weaker than NA. In the papers cited above, a number of well-known multivariate distributions are shown to possess the ND property, such as (a) the multinomial, (b) the convolution of unlike multinomials, (c) the multivariate hypergeometric, (d) the Dirichlet, (e) the Dirichlet compound multinomial, and (f) multinormals having certain covariance matrices. Because of the wide applications of ND random variables, the notion of ND random variables has received more and more attention recently. A series of useful results have been established (cf. Bozorgnia et al. [2], Amini [4], Fakoor and Azarnoosh [5], Nili Sani et al. [6], Klesov et al. [7], and Wu and Jiang [8]). Hence, extending the limit properties of independent or NA random variables to the case of ND random variables is highly desirable and of considerable significance in theory and applications. In this paper we study and obtain some probability inequalities and some complete convergence theorems for weighted sums of sequences of negatively dependent random variables.
In the following, let a_n \ll b_n (a_n \gg b_n) denote that there exists a constant c > 0 such that a_n \le c b_n (a_n \ge c b_n) for sufficiently large n, and let a_n \approx b_n mean that both a_n \ll b_n and a_n \gg b_n. Also, let \log x denote \ln(\max(e, x)) and S_n = \sum_{j=1}^{n} X_j.

Lemma 1.6 (see [2]). Let X_1, ..., X_n be ND random variables and {f_n; n ≥ 1} a sequence of Borel functions, all of which are monotone increasing (or all monotone decreasing). Then {f_n(X_n); n ≥ 1} is still a sequence of ND random variables.
Lemma 1.7 (see [2]). Let X_1, ..., X_n be nonnegative ND random variables. Then

E\Big(\prod_{j=1}^{n} X_j\Big) \le \prod_{j=1}^{n} E X_j.   (1.7)

In particular, let X_1, ..., X_n be ND, and let t_1, ..., t_n be all nonnegative (or all nonpositive) real numbers. Then

E\Big(\exp\Big(\sum_{j=1}^{n} t_j X_j\Big)\Big) \le \prod_{j=1}^{n} E\exp(t_j X_j).   (1.8)
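Lemma 1.7 can be sanity-checked on a concrete dependent family. Multinomial coordinates are NA (Joag-Dev and Proschan [3]) and hence ND, so the two-variable case of (1.7) predicts E(X_1 X_2) ≤ EX_1 · EX_2. The sketch below enumerates a small multinomial exactly; the parameters n = 3, p = (0.2, 0.3, 0.5) are my own illustrative choice:

```python
from math import factorial

n, p = 3, (0.2, 0.3, 0.5)  # Multinomial(3; 0.2, 0.3, 0.5)

def pmf(k1, k2, k3):
    # Multinomial probability: n! / (k1! k2! k3!) * p1^k1 * p2^k2 * p3^k3
    coef = factorial(n) // (factorial(k1) * factorial(k2) * factorial(k3))
    return coef * p[0] ** k1 * p[1] ** k2 * p[2] ** k3

support = [(k1, k2, n - k1 - k2) for k1 in range(n + 1) for k2 in range(n + 1 - k1)]
e1 = sum(k[0] * pmf(*k) for k in support)          # = n * p1 = 0.6
e2 = sum(k[1] * pmf(*k) for k in support)          # = n * p2 = 0.9
e12 = sum(k[0] * k[1] * pmf(*k) for k in support)  # = n(n-1) * p1 * p2 = 0.36
assert e12 <= e1 * e2   # cov(X1, X2) = -n p1 p2 < 0, consistent with (1.7)
```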
Lemma 1.8. Let {X_n; n ≥ 1} be an ND sequence with EX_n = 0 and E|X_n|^p < ∞, p ≥ 2. Then, for B_n = \sum_{i=1}^{n} EX_i^2,

E|S_n|^p \le c_p\Big(\sum_{i=1}^{n} E|X_i|^p + B_n^{p/2}\Big),   (1.9)

E\max_{1\le i\le n}|S_i|^p \le c_p \log^p n\,\Big(\sum_{i=1}^{n} E|X_i|^p + B_n^{p/2}\Big),   (1.10)

where c_p > 0 depends only on p.

Remark 1.9. If {X_n; n ≥ 1} is a sequence of independent random variables, then (1.9) is the classic Rosenthal inequality [9]. Therefore, (1.9) is a generalization of the Rosenthal inequality.
n
Proof of Lemma 1.8. Let a > 0, Xi minXi , a, and Sn i1 Xi . It is easy to show that
{Xi ; i ≥ 1} is a negatively dependent sequence by Lemma 1.6. Noting that ex − 1 − x/x2
is a nondecreasing function of x on R and that EXi ≤ EXi 0, tXi ≤ ta, we have
tXi
E e
1
tEXi
E
etXi − 1 − tXi
t2 Xi2
t Xi
2
2
≤ 1 eta − 1 − ta a−2 EXi2
≤ 1 eta − 1 − ta a−2 EXi2
≤ exp eta − 1 − ta a−2 EXi2 .
1.11
Here the last inequality follows from 1 x ≤ ex , for all x ∈ R.
Noting that B_n = \sum_{i=1}^{n} EX_i^2 and that \{\overline{X}_i; i \ge 1\} is ND, we conclude from the above inequality and Lemma 1.7 that, for any x > 0 and h > 0,

e^{-hx} E e^{h\overline{S}_n} = e^{-hx} E\prod_{i=1}^{n} e^{h\overline{X}_i} \le e^{-hx}\prod_{i=1}^{n} E e^{h\overline{X}_i} \le \exp\big(-hx + \big(e^{ha} - 1 - ha\big)a^{-2} B_n\big).   (1.12)
Letting h = \frac{1}{a}\ln\big(\frac{xa}{B_n} + 1\big) > 0, we get

\big(e^{ha} - 1 - ha\big)a^{-2} B_n = \frac{x}{a} - \frac{B_n}{a^2}\ln\Big(\frac{xa}{B_n} + 1\Big) \le \frac{x}{a}.   (1.13)

Substituting this into (1.12), we get furthermore

e^{-hx} E e^{h\overline{S}_n} \le \exp\Big(\frac{x}{a} - \frac{x}{a}\ln\Big(\frac{xa}{B_n} + 1\Big)\Big).   (1.14)
Putting x/a = t into the above inequality, we get

P(S_n \ge x) \le \sum_{i=1}^{n} P(X_i > a) + P(\overline{S}_n \ge x)
\le \sum_{i=1}^{n} P(X_i > a) + e^{-hx} E e^{h\overline{S}_n}
\le \sum_{i=1}^{n} P\Big(X_i > \frac{x}{t}\Big) + \exp\Big(t - t\ln\Big(\frac{x^2}{tB_n} + 1\Big)\Big)
= \sum_{i=1}^{n} P\Big(X_i > \frac{x}{t}\Big) + e^{t}\Big(\frac{x^2}{tB_n} + 1\Big)^{-t}.   (1.15)

Letting -X_i take the place of X_i in the above inequality, we get

P(S_n \le -x) = P(-S_n \ge x) \le \sum_{i=1}^{n} P\Big(-X_i > \frac{x}{t}\Big) + e^{t}\Big(\frac{x^2}{tB_n} + 1\Big)^{-t}
= \sum_{i=1}^{n} P\Big(X_i < -\frac{x}{t}\Big) + e^{t}\Big(\frac{x^2}{tB_n} + 1\Big)^{-t}.   (1.16)
Thus

P(|S_n| \ge x) = P(S_n \ge x) + P(S_n \le -x) \le \sum_{i=1}^{n} P\Big(|X_i| \ge \frac{x}{t}\Big) + 2e^{t}\Big(\frac{x^2}{tB_n} + 1\Big)^{-t}.   (1.17)

Multiplying (1.17) by p x^{p-1}, letting t = p, and integrating over 0 < x < ∞, according to

E|X|^p = p\int_0^{\infty} x^{p-1} P(|X| \ge x)\,dx,   (1.18)
we obtain

E|S_n|^p = p\int_0^{\infty} x^{p-1} P(|S_n| \ge x)\,dx
\le p\sum_{i=1}^{n}\int_0^{\infty} x^{p-1} P\Big(|X_i| \ge \frac{x}{p}\Big)dx + 2pe^{p}\int_0^{\infty} x^{p-1}\Big(\frac{x^2}{pB_n} + 1\Big)^{-p}dx
= p^{p+1}\sum_{i=1}^{n} E|X_i|^p + pe^{p}(pB_n)^{p/2}\int_0^{\infty}\frac{u^{p/2-1}}{(1+u)^{p}}\,du
= p^{p+1}\sum_{i=1}^{n} E|X_i|^p + p^{p/2+1}e^{p}B\Big(\frac{p}{2}, \frac{p}{2}\Big) B_n^{p/2},   (1.19)

where B(\alpha, \beta) = \int_0^1 x^{\alpha-1}(1-x)^{\beta-1}dx = \int_0^{\infty} x^{\alpha-1}(1+x)^{-\alpha-\beta}dx, \alpha, \beta > 0, is the Beta function. Letting c_p = \max\big(p^{p+1},\ p^{1+p/2}e^{p}B(p/2, p/2)\big), we can deduce (1.9) from (1.19). From (1.9), we can prove (1.10) in a way similar to Stout [10, Theorem 2.3.1].
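The proof yields the explicit (non-optimal) constant c_p = max(p^{p+1}, p^{1+p/2} e^p B(p/2, p/2)). A sketch tabulating it via the identity B(α, β) = Γ(α)Γ(β)/Γ(α + β); the lemma itself needs only that some finite c_p exists:

```python
from math import gamma, e

def beta(a, b):
    # Beta function via the Gamma function identity B(a, b) = G(a)G(b)/G(a+b)
    return gamma(a) * gamma(b) / gamma(a + b)

def c_p(p):
    # Constant extracted from (1.19): max(p^{p+1}, p^{1+p/2} e^p B(p/2, p/2))
    return max(p ** (p + 1), p ** (1 + p / 2) * e ** p * beta(p / 2, p / 2))

assert abs(beta(1.0, 1.0) - 1.0) < 1e-12     # B(1, 1) = 1
assert c_p(2.0) > 8 and c_p(4.0) > c_p(2.0)  # the constant grows quickly with p
```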
Lemma 1.10. Let {X_n; n ≥ 1} be a sequence of ND random variables. Then there exists a positive constant c such that, for any x ≥ 0 and all n ≥ 1,

\Big(1 - P\big(\max_{1\le k\le n}|X_k| > x\big)\Big)^2 \sum_{k=1}^{n} P(|X_k| > x) \le c\,P\big(\max_{1\le k\le n}|X_k| > x\big).   (1.20)
Proof. Let A_k = \{|X_k| > x\} and \alpha_n = 1 - P\big(\bigcup_{k=1}^{n} A_k\big) = 1 - P(\max_{1\le k\le n}|X_k| > x). Without loss of generality, assume that \alpha_n > 0. Note that \{I(X_k > x) - EI(X_k > x); k \ge 1\} and \{I(X_k < -x) - EI(X_k < -x); k \ge 1\} are still ND by Lemma 1.6. Using (1.9), we get

E\Big(\sum_{k=1}^{n}\big(I_{A_k} - EI_{A_k}\big)\Big)^2 = E\Big(\sum_{k=1}^{n}\big(I(X_k > x) - EI(X_k > x) + I(X_k < -x) - EI(X_k < -x)\big)\Big)^2
\le 2E\Big(\sum_{k=1}^{n}\big(I(X_k > x) - EI(X_k > x)\big)\Big)^2 + 2E\Big(\sum_{k=1}^{n}\big(I(X_k < -x) - EI(X_k < -x)\big)\Big)^2
\le c\sum_{k=1}^{n} P(A_k).   (1.21)
Combining this with the Cauchy-Schwarz inequality, we obtain

\sum_{k=1}^{n} P(A_k) = \sum_{k=1}^{n} P\Big(A_k,\ \bigcup_{j=1}^{n} A_j\Big) = E\Big(\sum_{k=1}^{n} I_{A_k}\, I_{\bigcup_{j=1}^{n} A_j}\Big)
= E\Big(\sum_{k=1}^{n}\big(I_{A_k} - EI_{A_k}\big) I_{\bigcup_{j=1}^{n} A_j}\Big) + \sum_{k=1}^{n} P(A_k)\, P\Big(\bigcup_{j=1}^{n} A_j\Big)
\le \Big(E\Big(\sum_{k=1}^{n}\big(I_{A_k} - EI_{A_k}\big)\Big)^2\ EI_{\bigcup_{j=1}^{n} A_j}\Big)^{1/2} + (1-\alpha_n)\sum_{k=1}^{n} P(A_k)
\le \Big(\frac{c(1-\alpha_n)}{\alpha_n}\,\alpha_n\sum_{k=1}^{n} P(A_k)\Big)^{1/2} + (1-\alpha_n)\sum_{k=1}^{n} P(A_k)
\le \frac{1}{2}\Big(\frac{c(1-\alpha_n)}{\alpha_n} + \alpha_n\sum_{k=1}^{n} P(A_k)\Big) + (1-\alpha_n)\sum_{k=1}^{n} P(A_k).   (1.22)

Thus

\alpha_n^2\sum_{k=1}^{n} P(A_k) \le c(1-\alpha_n),   (1.23)

that is,

\Big(1 - P\big(\max_{1\le k\le n}|X_k| > x\big)\Big)^2 \sum_{k=1}^{n} P(|X_k| > x) \le c\,P\big(\max_{1\le k\le n}|X_k| > x\big).   (1.24)
2. Main Results and the Proofs

The concept of complete convergence of a sequence of random variables was introduced by Hsu and Robbins [11] as follows. A sequence {Y_n; n ≥ 1} of random variables converges completely to the constant c if \sum_{n=1}^{\infty} P(|Y_n - c| > \varepsilon) < \infty for all ε > 0. In view of the Borel-Cantelli lemma, this implies that Y_n → c almost surely. Therefore, complete convergence is one of the most important problems in probability theory. Hsu and Robbins [11] proved that the sequence of arithmetic means of independent and identically distributed (i.i.d.) random variables converges completely to the expected value if the variance of the summands is finite. Baum and Katz [12] proved that if {X, X_n; n ≥ 1} is a sequence of i.i.d. random variables with mean zero, then E|X|^{p(t+2)} < \infty (1 \le p < 2, t \ge -1) is equivalent to the condition that \sum_{n=1}^{\infty} n^{t} P\big(|\sum_{i=1}^{n} X_i|/n^{1/p} > \varepsilon\big) < \infty for all ε > 0. Recent results on complete convergence can be found in Li et al. [13], Liang and Su [14], Wu [15, 16], and Sung [17].

In this paper we study complete convergence for negatively dependent random variables. As a result, we extend some complete convergence theorems for independent random variables to negatively dependent random variables without necessarily imposing any extra conditions.
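For bounded i.i.d. summands, the summability demanded by complete convergence is easy to see numerically from the classical Hoeffding bound (my own illustration, not part of the paper's argument): if EX_i = 0 and |X_i| ≤ 1, then P(|S_n/n| > ε) ≤ 2 exp(-nε²/2), which is a summable geometric series:

```python
from math import exp

eps = 0.1
# Hoeffding tail-bound terms b_n = 2 exp(-n eps^2 / 2) dominate P(|S_n/n| > eps).
terms = [2 * exp(-n * eps ** 2 / 2) for n in range(1, 10_001)]
partial = sum(terms)
q = exp(-eps ** 2 / 2)        # common ratio of the geometric series
full_sum = 2 * q / (1 - q)    # sum over all n >= 1; finite, roughly 4/eps^2
assert partial < full_sum     # partial sums stay below the finite limit
```

Since the dominating series converges, so does the series of tail probabilities, which is exactly the complete convergence of S_n/n to 0.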
Theorem 2.1. Let {X, X_n; n ≥ 1} be a sequence of identically distributed ND random variables and {a_{nk}; 1 ≤ k ≤ n, n ≥ 1} an array of real numbers, and let r > 1, p > 2. Suppose that, for some 2 ≤ q < p,

N(n, m+1) = \#\big\{k \ge 1;\ |a_{nk}| \ge (m+1)^{-1/p}\big\} \approx m^{q(r-1)/p}, \quad n, m \ge 1,   (2.1)

EX = 0 \quad \text{for } 1 \le q(r-1),   (2.2)

\sum_{k=1}^{n} a_{nk}^2 \ll n^{\delta} \quad \text{for } 2 \le q(r-1) \text{ and some } 0 < \delta < \frac{2}{p}.   (2.3)

Then, for r ≥ 2,

E|X|^{p(r-1)} < \infty   (2.4)

if and only if

\sum_{n=1}^{\infty} n^{r-2}\, P\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} a_{ni} X_i\Big| > \varepsilon n^{1/p}\Big) < \infty, \quad \forall \varepsilon > 0.   (2.5)

For 1 < r < 2, (2.4) implies (2.5); conversely, (2.5) together with n^{r-2} P(\max_{1\le k\le n}|a_{nk}X_k| > n^{1/p}) decreasing in n implies (2.4).
For p = 2, q = 2, we have the following theorem.

Theorem 2.2. Let {X, X_n; n ≥ 1} be a sequence of identically distributed ND random variables and {a_{nk}; 1 ≤ k ≤ n, n ≥ 1} an array of real numbers, and let r > 1. Suppose that

N(n, m+1) = \#\big\{k;\ |a_{nk}| \ge (m+1)^{-1/2}\big\} \approx m^{r-1}, \quad n, m \ge 1,   (2.6)

EX = 0 \quad \text{for } 1 \le 2(r-1), \qquad \sum_{k=1}^{n} |a_{nk}|^{2(r-1)} = O(1).   (2.7)

Then, for r ≥ 2,

E|X|^{2(r-1)}\log|X| < \infty   (2.8)

if and only if

\sum_{n=1}^{\infty} n^{r-2}\, P\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} a_{ni} X_i\Big| > \varepsilon n^{1/2}\Big) < \infty, \quad \forall \varepsilon > 0.   (2.9)

For 1 < r < 2, (2.8) implies (2.9); conversely, (2.9) together with n^{r-2} P(\max_{1\le k\le n}|a_{nk}X_k| > n^{1/2}) decreasing in n implies (2.8).
Remark 2.3. Since NA random variables are a special case of ND random variables, Theorems 2.1 and 2.2 extend the work of Liang and Su [14, Theorem 2.1].

Remark 2.4. Since, for some 2 ≤ q ≤ p, \sum_{k\in\mathbb{N}} |a_{nk}|^{q(r-1)} \ll 1 as n → ∞ implies that

N(n, m+1) = \#\big\{k \ge 1;\ |a_{nk}| \ge (m+1)^{-1/p}\big\} \ll m^{q(r-1)/p} \quad \text{as } n \to \infty,   (2.10)

taking r = 2, conditions (2.1) and (2.6) are weaker than conditions (2.13) and (2.9) in Li et al. [13]. Therefore, Theorems 2.1 and 2.2 not only promote and improve the work of Li et al. [13, Theorem 2.2] from i.i.d. random variables to an ND setting but also obtain their necessities and relax the range of r.
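Condition (2.1) admits simple concrete weights. For a_{nk} = k^{-1/(q(r-1))} (my own example, not from the paper), |a_{nk}| ≥ (m+1)^{-1/p} exactly when k ≤ (m+1)^{q(r-1)/p}, so N(n, m+1) ≈ m^{q(r-1)/p} as required; a quick numeric check:

```python
p, q, r = 3.0, 2.0, 2.0        # illustrative parameters; q * (r - 1) = 2
n = 10_000

def a(k):
    return k ** (-1.0 / (q * (r - 1)))   # a_nk = k^{-1/(q(r-1))}

def N(m):
    # N(n, m+1) = #{k <= n : a_nk >= (m+1)^{-1/p}}
    return sum(1 for k in range(1, n + 1) if a(k) >= (m + 1) ** (-1.0 / p))

for m in (10, 100, 1000):
    ratio = N(m) / m ** (q * (r - 1) / p)
    assert 0.5 < ratio < 2.0   # N(n, m+1) ~ m^{q(r-1)/p} up to constants
```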
Proof of Theorem 2.1. (2.4) ⇒ (2.5). To prove (2.5) it suffices to show that

\sum_{n=1}^{\infty} n^{r-2}\, P\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k} a_{ni}^{\pm} X_i\Big| > \varepsilon n^{1/p}\Big) < \infty, \quad \forall \varepsilon > 0,   (2.11)

where a_{ni}^{+} = \max(a_{ni}, 0) and a_{ni}^{-} = \max(-a_{ni}, 0). Thus, without loss of generality, we can assume that a_{ni} > 0 for all n ≥ 1, i ≤ n. For 0 < α < 1/p small enough and a sufficiently large integer K, both of which will be determined later, let

X_{ni}^{(1)} = -n^{\alpha} I(a_{ni}X_i < -n^{\alpha}) + a_{ni}X_i\, I(a_{ni}|X_i| \le n^{\alpha}) + n^{\alpha} I(a_{ni}X_i > n^{\alpha}),
X_{ni}^{(2)} = (a_{ni}X_i - n^{\alpha})\, I\big(n^{\alpha} < a_{ni}X_i < \varepsilon n^{1/p}/K\big),
X_{ni}^{(3)} = (a_{ni}X_i + n^{\alpha})\, I\big(-\varepsilon n^{1/p}/K < a_{ni}X_i < -n^{\alpha}\big),
X_{ni}^{(4)} = a_{ni}X_i - X_{ni}^{(1)} - X_{ni}^{(2)} - X_{ni}^{(3)}
            = (a_{ni}X_i + n^{\alpha})\, I\big(a_{ni}X_i \le -\varepsilon n^{1/p}/K\big) + (a_{ni}X_i - n^{\alpha})\, I\big(a_{ni}X_i \ge \varepsilon n^{1/p}/K\big),   (2.12)

S_{nk}^{(j)} = \sum_{i=1}^{k} X_{ni}^{(j)}, \quad j = 1, 2, 3, 4;\ 1 \le k \le n,\ n \ge 1.

Thus S_{nk} = \sum_{i=1}^{k} a_{ni}X_i = \sum_{j=1}^{4} S_{nk}^{(j)}. Note that

\Big\{\max_{1\le k\le n}|S_{nk}| > 4\varepsilon n^{1/p}\Big\} \subseteq \bigcup_{j=1}^{4}\Big\{\max_{1\le k\le n}\big|S_{nk}^{(j)}\big| > \varepsilon n^{1/p}\Big\}.   (2.13)

So, to prove (2.5) it suffices to show that

I_j := \sum_{n=1}^{\infty} n^{r-2}\, P\Big(\max_{1\le k\le n}\big|S_{nk}^{(j)}\big| > \varepsilon n^{1/p}\Big) < \infty, \quad j = 1, 2, 3, 4.   (2.14)
For any q' > q,

\sum_{i=1}^{n} a_{ni}^{q'(r-1)} = \sum_{j=1}^{\infty}\ \sum_{(j+1)^{-1} \le a_{ni}^{p} < j^{-1}} a_{ni}^{q'(r-1)}
\le \sum_{j=1}^{\infty}\big(N(n, j+1) - N(n, j)\big)\, j^{-q'(r-1)/p}
= \sum_{j=1}^{\infty} N(n, j+1)\big(j^{-q'(r-1)/p} - (j+1)^{-q'(r-1)/p}\big)
\ll \sum_{j=1}^{\infty} j^{-1-(q'-q)(r-1)/p} < \infty.   (2.15)
Now, we prove that

n^{-1/p}\max_{1\le k\le n}\big|ES_{nk}^{(1)}\big| \to 0, \quad n \to \infty.   (2.16)

(i) For 0 < q(r-1) < 1, taking q < q' < p such that 0 < q'(r-1) < 1, by (2.4) and (2.15) we get

n^{-1/p}\max_{1\le k\le n}\big|ES_{nk}^{(1)}\big| \le n^{-1/p}\sum_{i=1}^{n}\Big(E|a_{ni}X_i|\, I(|a_{ni}X_i| \le n^{\alpha}) + n^{\alpha} P(|a_{ni}X_i| > n^{\alpha})\Big)
\le n^{-1/p}\Big(n^{\alpha(1-q'(r-1))}\sum_{i=1}^{n} E|a_{ni}X_i|^{q'(r-1)}\, I(|a_{ni}X_i| \le n^{\alpha}) + n^{\alpha-\alpha q'(r-1)}\sum_{i=1}^{n} E|a_{ni}X_i|^{q'(r-1)}\Big)
\ll n^{-1/p+\alpha-\alpha q'(r-1)} \to 0, \quad n \to \infty.   (2.17)

(ii) For 1 ≤ q(r-1), taking q < q' < p, by (2.2), (2.4), and (2.15) we get

n^{-1/p}\max_{1\le k\le n}\big|ES_{nk}^{(1)}\big| \le n^{-1/p}\sum_{i=1}^{n}\Big(E|a_{ni}X_i|\, I(|a_{ni}X_i| > n^{\alpha}) + n^{\alpha} P(|a_{ni}X_i| > n^{\alpha})\Big)
\le n^{-1/p}\sum_{i=1}^{n}\Big(E\Big(|a_{ni}X_i|\Big(\frac{|a_{ni}X_i|}{n^{\alpha}}\Big)^{q'(r-1)-1}\Big) + n^{\alpha-\alpha q'(r-1)} E|a_{ni}X_i|^{q'(r-1)}\Big)
\ll n^{-1/p+\alpha-\alpha q'(r-1)} \to 0.   (2.18)
Hence, (2.16) holds. Therefore, to prove I_1 < ∞ it suffices to prove that

I_1 = \sum_{n=1}^{\infty} n^{r-2}\, P\Big(\max_{1\le k\le n}\big|S_{nk}^{(1)} - ES_{nk}^{(1)}\big| > \varepsilon n^{1/p}\Big) < \infty, \quad \forall \varepsilon > 0.   (2.19)

Note that \{X_{ni}^{(1)}; 1 \le i \le n, n \ge 1\} is still ND by the definition of X_{ni}^{(1)} and Lemma 1.6. Using the Markov inequality and Lemma 1.8, we get, for a suitably large M, which will be determined later,

I_1 \ll \sum_{n=1}^{\infty} n^{r-2-M/p}\, E\max_{1\le k\le n}\big|S_{nk}^{(1)} - ES_{nk}^{(1)}\big|^{M}
\ll \sum_{n=1}^{\infty} n^{r-2-M/p}\log^{M} n\,\Big(\sum_{i=1}^{n} E\big|X_{ni}^{(1)}\big|^{M} + \Big(\sum_{i=1}^{n} E\big(X_{ni}^{(1)}\big)^2\Big)^{M/2}\Big)
=: I_{11} + I_{12}.   (2.20)
Taking M > \max\big(2,\ p(r-1)(1-\alpha q')/(1-\alpha p)\big), so that r - 2 - M/p + \alpha M - \alpha q'(r-1) < -1, we get, by (2.15),

I_{11} \le \sum_{n=1}^{\infty} n^{r-2-M/p}\log^{M} n \sum_{i=1}^{n}\Big(E|a_{ni}X_i|^{M}\, I(|a_{ni}X_i| \le n^{\alpha}) + n^{M\alpha}\, P(|a_{ni}X_i| > n^{\alpha})\Big)
\le \sum_{n=1}^{\infty} n^{r-2-M/p}\log^{M} n \sum_{i=1}^{n}\Big(n^{\alpha(M-q'(r-1))} E|a_{ni}X_i|^{q'(r-1)} + n^{\alpha(M-q'(r-1))} E|a_{ni}X_i|^{q'(r-1)}\Big)
\ll \sum_{n=1}^{\infty} n^{r-2-M/p+\alpha M-\alpha q'(r-1)}\log^{M} n < \infty.   (2.21)

(i) For q(r-1) < 2, taking q < q' < p such that q'(r-1) < 2, and taking M > \max\big(2,\ 2p(r-1)/(2 - 2\alpha p + \alpha p q'(r-1))\big), so that r - 2 - M/p + \alpha M - M\alpha q'(r-1)/2 < -1, we have, from (2.15),

I_{12} \le \sum_{n=1}^{\infty} n^{r-2-M/p}\log^{M} n\,\Big(\sum_{i=1}^{n}\big(n^{\alpha(2-q'(r-1))} E|a_{ni}X_i|^{q'(r-1)}\, I(|a_{ni}X_i| \le n^{\alpha}) + n^{2\alpha-\alpha q'(r-1)} E|a_{ni}X_i|^{q'(r-1)}\big)\Big)^{M/2}
\ll \sum_{n=1}^{\infty} n^{r-2-M/p+\alpha M-M\alpha q'(r-1)/2}\log^{M} n < \infty.   (2.22)
(ii) For q(r-1) ≥ 2, taking q < q' < p and M > \max\big(2,\ 2p(r-1)/(2 - p\delta)\big), where δ is defined by (2.3), we get, from (2.3), (2.4), (2.15), and r - 2 - M/p + \delta M/2 < -1,

I_{12} \le \sum_{n=1}^{\infty} n^{r-2-M/p}\log^{M} n\,\Big(\sum_{i=1}^{n}\big(a_{ni}^2 + n^{2\alpha-\alpha q'(r-1)} E|a_{ni}X_i|^{q'(r-1)}\big)\Big)^{M/2}
\ll \sum_{n=1}^{\infty} n^{r-2-M/p+\delta M/2}\log^{M} n < \infty.   (2.23)
Since each nonzero summand X_{ni}^{(2)} is smaller than \varepsilon n^{1/p}/K,

\Big\{\Big|\sum_{i=1}^{n} X_{ni}^{(2)}\Big| > \varepsilon n^{1/p}\Big\} = \Big\{\Big|\sum_{i=1}^{n}(a_{ni}X_i - n^{\alpha})\, I\big(n^{\alpha} < a_{ni}X_i < \varepsilon n^{1/p}/K\big)\Big| > \varepsilon n^{1/p}\Big\}
\subseteq \big\{\text{there exist at least } K \text{ indices } k \text{ such that } a_{nk}X_k > n^{\alpha}\big\},   (2.24)

and we have

P\Big(\Big|\sum_{i=1}^{n} X_{ni}^{(2)}\Big| > \varepsilon n^{1/p}\Big) \le \sum_{1\le i_1 < i_2 < \cdots < i_K \le n} P\big(a_{ni_1}X_{i_1} > n^{\alpha},\ a_{ni_2}X_{i_2} > n^{\alpha},\ \ldots,\ a_{ni_K}X_{i_K} > n^{\alpha}\big).   (2.25)
By Lemma 1.6, \{a_{ni}X_i; 1 \le i \le n, n \ge 1\} is still ND. Hence, for q < q' < p we conclude that

P\Big(\Big|\sum_{i=1}^{n} X_{ni}^{(2)}\Big| > \varepsilon n^{1/p}\Big) \le \sum_{1\le i_1 < \cdots < i_K \le n}\ \prod_{j=1}^{K} P\big(a_{ni_j}X_{i_j} > n^{\alpha}\big)
\le \Big(\sum_{i=1}^{n} P(|a_{ni}X_i| > n^{\alpha})\Big)^{K}
\le \Big(n^{-\alpha q'(r-1)}\sum_{i=1}^{n} E|a_{ni}X_i|^{q'(r-1)}\Big)^{K}
\ll n^{-\alpha q'(r-1)K},   (2.26)

via (2.4) and (2.15). Moreover, X_{ni}^{(2)} \ge 0 by its definition, so \max_{1\le k\le n}|S_{nk}^{(2)}| = \sum_{i=1}^{n} X_{ni}^{(2)}. Hence, by (2.26) and by taking α > 0 and K such that r - 2 - \alpha K q'(r-1) < -1, we have

I_2 = \sum_{n=1}^{\infty} n^{r-2}\, P\Big(\sum_{i=1}^{n} X_{ni}^{(2)} > \varepsilon n^{1/p}\Big) \ll \sum_{n=1}^{\infty} n^{r-2-\alpha q'(r-1)K} < \infty.   (2.27)

Similarly, X_{ni}^{(3)} \le 0 and I_3 < ∞.
Finally, we prove that I_4 < ∞. Let Y = KX/\varepsilon. By the definition of X_{ni}^{(4)} and (2.1), we have

P\Big(\max_{1\le k\le n}\big|S_{nk}^{(4)}\big| > \varepsilon n^{1/p}\Big) \le P\Big(\sum_{i=1}^{n}\big|X_{ni}^{(4)}\big| > \varepsilon n^{1/p}\Big)
\le P\Big(\bigcup_{i=1}^{n}\Big\{a_{ni}|X_i| > \frac{\varepsilon n^{1/p}}{K}\Big\}\Big)
\le \sum_{i=1}^{n} P\Big(a_{ni}|X_i| > \frac{\varepsilon n^{1/p}}{K}\Big)
\le \sum_{j=1}^{\infty}\ \sum_{(j+1)^{-1} \le a_{ni}^{p} < j^{-1}} P\big(|Y| > (nj)^{1/p}\big)
\le \sum_{j=1}^{\infty}\big(N(n, j+1) - N(n, j)\big)\sum_{l \ge nj} P\big(l \le |Y|^p < l+1\big)
= \sum_{l \ge n}\ \sum_{1 \le j \le l/n}\big(N(n, j+1) - N(n, j)\big)\, P\big(l \le |Y|^p < l+1\big)
\approx \sum_{l \ge n}\Big(\frac{l}{n}\Big)^{q(r-1)/p} P\big(l \le |Y|^p < l+1\big).   (2.28)

Combining this with (2.15),

I_4 \approx \sum_{n=1}^{\infty} n^{r-2}\sum_{l \ge n}\Big(\frac{l}{n}\Big)^{q(r-1)/p} P\big(l \le |Y|^p < l+1\big)
= \sum_{l=1}^{\infty}\sum_{n=1}^{l} n^{r-2-q(r-1)/p}\, l^{q(r-1)/p}\, P\big(l \le |Y|^p < l+1\big)
\approx \sum_{l=1}^{\infty} l^{r-1}\, P\big(l \le |Y|^p < l+1\big)
\approx E|Y|^{p(r-1)} \approx E|X|^{p(r-1)} < \infty.   (2.29)
Now we prove (2.5) ⇒ (2.4). Since

\max_{1\le j\le n}\big|a_{nj}X_j\big| \le \max_{1\le j\le n}\Big|\sum_{i=1}^{j} a_{ni}X_i\Big| + \max_{1\le j\le n}\Big|\sum_{i=1}^{j-1} a_{ni}X_i\Big|,   (2.30)

it follows from (2.5) that

\sum_{n=1}^{\infty} n^{r-2}\, P\Big(\max_{1\le j\le n}\big|a_{nj}X_j\big| > n^{1/p}\Big) < \infty.   (2.31)

Combining this with the hypotheses of Theorem 2.1,

P\Big(\max_{1\le j\le n}\big|a_{nj}X_j\big| > n^{1/p}\Big) \to 0, \quad n \to \infty.   (2.32)

Thus, for sufficiently large n,

P\Big(\max_{1\le j\le n}\big|a_{nj}X_j\big| > n^{1/p}\Big) < \frac{1}{2}.   (2.33)

By Lemma 1.6, \{a_{nj}X_j; 1 \le j \le n, n \ge 1\} is still ND. By applying Lemma 1.10 and (2.33), we obtain

\sum_{k=1}^{n} P\big(|a_{nk}X_k| > n^{1/p}\big) \le 4c\, P\Big(\max_{1\le k\le n}|a_{nk}X_k| > n^{1/p}\Big).   (2.34)

Substituting the above inequality into (2.31), we get

\sum_{n=1}^{\infty} n^{r-2}\sum_{k=1}^{n} P\big(|a_{nk}X_k| > n^{1/p}\big) < \infty.   (2.35)

So, by the argument used in the proof of I_4 < ∞,

E|X|^{p(r-1)} \approx \sum_{n=1}^{\infty} n^{r-2}\sum_{k=1}^{n} P\big(|a_{nk}X_k| > n^{1/p}\big) < \infty.   (2.36)
Proof of Theorem 2.2. Let p = 2, α < 1/p = 1/2, and K > 1/(2α). Using the same notation and method as in the proof of Theorem 2.1, we only give the parts that differ.

Letting (2.7) take the place of (2.15), and arguing as in the proof of (2.16), we obtain

n^{-1/2}\max_{1\le k\le n}\big|ES_{nk}^{(1)}\big| \ll n^{-1/2+\alpha-2\alpha(r-1)} \to 0, \quad n \to \infty.   (2.37)

Taking M > \max(2, 2(r-1)), we have

I_{11} \ll \sum_{n=1}^{\infty} n^{-1-(1-2\alpha)(M/2-(r-1))}\log^{M} n < \infty.   (2.38)

For r - 1 ≤ 1, taking M > \max\big(2,\ 2(r-1)/(1 - 2\alpha + 2\alpha(r-1))\big), we get

I_{12} \ll \sum_{n=1}^{\infty} n^{-1-((1-2\alpha+2\alpha(r-1))M/2-(r-1))}\log^{M} n < \infty.   (2.39)
For r - 1 > 1, E(X_{ni}^{(1)})^2 < ∞ from (2.8). Letting M > 2(r-1)^2, by the Hölder inequality,

I_{12} \ll \sum_{n=1}^{\infty} n^{r-2-M/2}\log^{M} n\,\Big(\sum_{i=1}^{n} E(a_{ni}X_i)^2\Big)^{M/2}
\ll \sum_{n=1}^{\infty} n^{r-2-M/2}\log^{M} n\,\Big(\Big(\sum_{i=1}^{n} |a_{ni}|^{2(r-1)}\Big)^{1/(r-1)}\Big(\sum_{i=1}^{n} 1\Big)^{(r-2)/(r-1)}\Big)^{M/2}
\ll \sum_{n=1}^{\infty} n^{-1-(M/(2(r-1))-(r-1))}\log^{M} n < \infty.   (2.40)

By the definition of K,

I_2 \ll \sum_{n=1}^{\infty} n^{-1-(r-1)(2\alpha K-1)} < \infty.   (2.41)
Similarly to the proof of (2.29), we have

I_4 \approx \sum_{l=1}^{\infty}\sum_{n=1}^{l} n^{-1}\, l^{r-1}\, P\big(l \le |Y|^2 < l+1\big)
\approx \sum_{l=1}^{\infty} l^{r-1}\log l\; P\big(l \le |Y|^2 < l+1\big)
\approx E\big(|Y|^{2(r-1)}\log|Y|\big) \approx E\big(|X|^{2(r-1)}\log|X|\big) < \infty.   (2.42)

(2.9) ⇒ (2.8). Using the same method as in the necessity part of the proof of Theorem 2.1, we easily get

E\big(|X|^{2(r-1)}\log|X|\big) \approx \sum_{n=1}^{\infty} n^{r-2}\sum_{k=1}^{n} P\big(|a_{nk}X_k| > n^{1/2}\big) < \infty.   (2.43)
Acknowledgments

The author is very grateful to the referees and the editors for their valuable comments and helpful suggestions that improved the clarity and readability of the paper. This work was supported by the National Natural Science Foundation of China (11061012), the Support Program of the New Century Guangxi (China) Ten-Hundred-Thousand Talents Project (2005214), and the Guangxi (China) Science Foundation (2010GXNSFA013120). Professor Qunying Wu's research interests are probability and statistics.
References

[1] E. L. Lehmann, "Some concepts of dependence," Annals of Mathematical Statistics, vol. 37, no. 5, pp. 1137-1153, 1966.
[2] A. Bozorgnia, R. F. Patterson, and R. L. Taylor, "Limit theorems for ND r.v.'s," Tech. Rep., University of Georgia, Athens, Ga, USA, 1993.
[3] K. Joag-Dev and F. Proschan, "Negative association of random variables, with applications," The Annals of Statistics, vol. 11, no. 1, pp. 286-295, 1983.
[4] M. Amini, Some contribution to limit theorems for negatively dependent random variable, Ph.D. thesis, 2000.
[5] V. Fakoor and H. A. Azarnoosh, "Probability inequalities for sums of negatively dependent random variables," Pakistan Journal of Statistics, vol. 21, no. 3, pp. 257-264, 2005.
[6] H. R. Nili Sani, M. Amini, and A. Bozorgnia, "Strong laws for weighted sums of negative dependent random variables," Journal of Sciences, Islamic Republic of Iran, vol. 16, no. 3, pp. 261-265, 2005.
[7] O. Klesov, A. Rosalsky, and A. I. Volodin, "On the almost sure growth rate of sums of lower negatively dependent nonnegative random variables," Statistics & Probability Letters, vol. 71, no. 2, pp. 193-202, 2005.
[8] Q. Y. Wu and Y. Y. Jiang, "The strong consistency of M estimator in a linear model for negatively dependent random samples," Communications in Statistics, vol. 40, no. 3, pp. 467-491, 2011.
[9] H. P. Rosenthal, "On the subspaces of L^p (p > 2) spanned by sequences of independent random variables," Israel Journal of Mathematics, vol. 8, pp. 273-303, 1970.
[10] W. F. Stout, Almost Sure Convergence, Academic Press, New York, NY, USA, 1974.
[11] P. L. Hsu and H. Robbins, "Complete convergence and the law of large numbers," Proceedings of the National Academy of Sciences of the United States of America, vol. 33, pp. 25-31, 1947.
[12] L. E. Baum and M. Katz, "Convergence rates in the law of large numbers," Transactions of the American Mathematical Society, vol. 120, pp. 108-123, 1965.
[13] D. L. Li, M. B. Rao, T. F. Jiang, and X. C. Wang, "Complete convergence and almost sure convergence of weighted sums of random variables," Journal of Theoretical Probability, vol. 8, no. 1, pp. 49-76, 1995.
[14] H.-Y. Liang and C. Su, "Complete convergence for weighted sums of NA sequences," Statistics & Probability Letters, vol. 45, no. 1, pp. 85-95, 1999.
[15] Q. Y. Wu, "Convergence properties of pairwise NQD random sequences," Acta Mathematica Sinica, Chinese Series, vol. 45, no. 3, pp. 617-624, 2002 (Chinese).
[16] Q. Y. Wu, "Complete convergence for negatively dependent sequences of random variables," Journal of Inequalities and Applications, vol. 2010, Article ID 507293, 10 pages, 2010.
[17] S. H. Sung, "Moment inequalities and complete moment convergence," Journal of Inequalities and Applications, vol. 2009, Article ID 271265, 14 pages, 2009.