Hindawi Publishing Corporation
Journal of Probability and Statistics
Volume 2011, Article ID 181409, 13 pages
doi:10.1155/2011/181409
Research Article
A Limit Theorem for Random Products of Trimmed
Sums of i.i.d. Random Variables
Fa-mei Zheng
School of Mathematical Science, Huaiyin Normal University, Huaian 223300, China
Correspondence should be addressed to Fa-mei Zheng, 16032@hytc.edu.cn
Received 13 May 2011; Revised 25 July 2011; Accepted 11 August 2011
Academic Editor: Man Lai Tang
Copyright © 2011 Fa-mei Zheng. This is an open access article distributed under the Creative
Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.
Let $\{X, X_i;\ i \ge 1\}$ be a sequence of independent and identically distributed positive random variables with a continuous distribution function $F$, and assume that $F$ has a medium tail. Denote $S_n = \sum_{i=1}^n X_i$, $S_n(a) = \sum_{i=1}^n X_i I(M_n - a < X_i \le M_n)$, and $V_n^2 = \sum_{i=1}^n (X_i - \bar X)^2$, where $M_n = \max_{1\le i\le n} X_i$, $\bar X = (1/n)\sum_{i=1}^n X_i$, and $a > 0$ is a fixed constant. Under some suitable conditions, we show that
\[
\left( \prod_{k=1}^{\lfloor nt\rfloor} \frac{T_k(a)}{\mu k} \right)^{\mu/V_n} \xrightarrow{d} \exp\left\{ \int_0^t \frac{W(x)}{x}\,dx \right\} \quad \text{in } D[0,1], \text{ as } n \to \infty,
\]
where $T_k(a) = S_k - S_k(a)$ is the trimmed sum and $\{W(t);\ t \ge 0\}$ is a standard Wiener process.
1. Introduction
Let $\{X_n;\ n \ge 1\}$ be a sequence of random variables and define the partial sum $S_n = \sum_{i=1}^n X_i$ and $V_n^2 = \sum_{i=1}^n (X_i - \bar X)^2$ for $n \ge 1$, where $\bar X = (1/n)\sum_{i=1}^n X_i$. In the past years, the asymptotic behavior of products of various random variables has been widely studied. Arnold and Villaseñor [1] considered sums of records and obtained the following form of the central limit theorem (CLT) for independent and identically distributed (i.i.d.) exponential random variables with mean equal to one:
\[
\frac{\sum_{k=1}^n \log S_k - n\log n + n}{\sqrt{2n}} \xrightarrow{d} N \quad \text{as } n \to \infty. \tag{1.1}
\]
Here and in the sequel, $N$ is a standard normal random variable, and $\xrightarrow{d}$, $\xrightarrow{p}$, $\xrightarrow{a.s.}$ stand for convergence in distribution, in probability, and almost surely, respectively. Observe that, via the Stirling formula, the relation (1.1) can be equivalently stated as
\[
\left( \prod_{k=1}^n \frac{S_k}{k} \right)^{1/\sqrt{n}} \xrightarrow{d} e^{\sqrt{2}N}. \tag{1.2}
\]
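As a quick illustration of (1.1) and (1.2), the following Monte Carlo sketch (not part of the original article; numpy, the sample size, the number of replications, and the seed are arbitrary choices) simulates the normalized log-product for Exp(1) summands; exponentiating $\sqrt 2$ times this statistic recovers the quantity in (1.2) up to the Stirling correction.

import numpy as np

# Monte Carlo sketch of (1.1): for i.i.d. Exp(1) summands, the statistic
# (sum_k log S_k - n log n + n)/sqrt(2n) should be approximately N(0, 1).
rng = np.random.default_rng(0)
n, reps = 2000, 1000
stats = np.empty(reps)
for r in range(reps):
    x = rng.exponential(scale=1.0, size=n)   # mean one, as in Arnold and Villasenor [1]
    s = np.cumsum(x)                         # partial sums S_1, ..., S_n
    stats[r] = (np.sum(np.log(s)) - n * np.log(n) + n) / np.sqrt(2 * n)
print("sample mean %.3f (limit 0), sample std %.3f (limit 1)" % (stats.mean(), stats.std()))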
In particular, Rempała and Wesołowski [2] removed the condition that the distribution is exponential and showed that the asymptotic behavior of products of partial sums holds for any sequence of i.i.d. positive square integrable random variables. Namely, they proved the following theorem.
Theorem A. Let $\{X_n;\ n \ge 1\}$ be a sequence of i.i.d. positive square integrable random variables with $EX_1 = \mu$, $\mathrm{Var}(X_1) = \sigma^2 > 0$, and coefficient of variation $\gamma = \sigma/\mu$. Then, one has
\[
\left( \frac{\prod_{k=1}^n S_k}{n!\,\mu^n} \right)^{1/(\gamma\sqrt n)} \xrightarrow{d} e^{\sqrt 2 N} \quad \text{as } n \to \infty. \tag{1.3}
\]
Recently, the above result was extended by Qi [3], who showed that whenever $\{X_n;\ n \ge 1\}$ is in the domain of attraction of a stable law $L$ with index $\alpha \in (1, 2]$, there exists a numerical sequence $A_n$ (for $\alpha = 2$, it can be taken as $\sigma\sqrt n$) such that
\[
\left( \frac{\prod_{k=1}^n S_k}{n!\,\mu^n} \right)^{\mu/A_n} \xrightarrow{d} e^{(\Gamma(\alpha+1))^{1/\alpha} L}, \tag{1.4}
\]
as $n \to \infty$, where $\Gamma(\alpha+1) = \int_0^\infty x^\alpha e^{-x}\,dx$. Furthermore, Zhang and Huang [4] extended Theorem A to the invariance principle.
In this paper, we aim to study the weak invariance principle for self-normalized products of trimmed sums of i.i.d. sequences. Before stating our main results, we need to introduce some necessary notions. Let $\{X, X_n;\ n \ge 1\}$ be a sequence of i.i.d. random variables with a continuous distribution function $F$. Assume that the right extremity of $F$ satisfies
\[
\gamma_F = \sup\{x : F(x) < 1\} = \infty, \tag{1.5}
\]
and that the limiting tail quotient
\[
\lim_{x\to\infty} \frac{\overline F(x+a)}{\overline F(x)} \tag{1.6}
\]
exists, where $\overline F(x) = 1 - F(x)$. Then the above limit is $e^{-ca}$ for some $c \in [0, \infty]$, and $F$ (or $X$) is said to have a thick tail if $c = 0$, a medium tail if $0 < c < \infty$, and a thin tail if $c = \infty$. Denote $M_n = \max_{1\le j\le n} X_j$. For a fixed constant $a > 0$, we say that $X_j$ is a near-maximum if and only if $X_j \in (M_n - a, M_n]$, and the number of near-maxima is
\[
K_n(a) := \mathrm{Card}\{ j \le n : X_j \in (M_n - a, M_n] \}. \tag{1.7}
\]
These concepts were first introduced by Pakes and Steutel [5], and their limit properties have been widely studied by Pakes and Steutel [5], Pakes and Li [6], Li [7], Pakes [8], and Hu and Su [9]. Now, set
\[
S_n(a) := \sum_{i=1}^n X_i\, I\{M_n - a < X_i \le M_n\}, \tag{1.8}
\]
where
\[
I\{A\} = \begin{cases} 1, & \omega \in A, \\ 0, & \omega \notin A, \end{cases} \qquad T_n(a) := S_n - S_n(a), \tag{1.9}
\]
which are the sum of near-maxima and the trimmed sum, respectively. From Remark 1 of Hu and Su [9], we have that if $F$ has a medium tail and $EX \ne 0$, then $T_n(a)/n \xrightarrow{a.s.} EX$, which implies that, with probability one, $\mathrm{Card}\{k : T_k(a) = 0,\ k \ge 1\}$ is finite. Thus, we can redefine $T_k(a) = 1$ if $T_k(a) = 0$.
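As a concrete illustration of the quantities just defined (this snippet is not part of the paper; the helper name, the Exp(1) distribution, the sample size, and $a = 0.5$ are our own arbitrary choices), $M_n$, $K_n(a)$, $S_n(a)$, and $T_n(a)$ can be computed directly from a sample:

import numpy as np

def near_max_quantities(x, a):
    # Return (M_n, K_n(a), S_n(a), T_n(a)) for the sample x and a fixed window a > 0.
    m_n = x.max()                              # M_n = max_{1<=i<=n} X_i
    near = (x > m_n - a) & (x <= m_n)          # indicator {M_n - a < X_i <= M_n}
    k_n = int(near.sum())                      # K_n(a): number of near-maxima
    s_n_a = float(x[near].sum())               # S_n(a): sum of near-maxima
    t_n = float(x.sum() - s_n_a)               # T_n(a) = S_n - S_n(a): trimmed sum
    return m_n, k_n, s_n_a, t_n

rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0, size=10_000)    # Exp(1) has a medium tail (c = 1)
print(near_max_quantities(x, a=0.5))
# For a medium-tailed F, T_n(a)/n should be close to EX (= 1 here) for large n.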
2. Main Result
Now we are ready to state our main results.
Theorem 2.1. Let $\{X, X_n;\ n \ge 1\}$ be a sequence of positive i.i.d. random variables with a continuous distribution function $F$, $EX = \mu$, and $\mathrm{Var}(X) = \sigma^2$. Assume that $F$ has a medium tail. Then, one has
\[
\left( \prod_{k=1}^{\lfloor nt\rfloor} \frac{T_k(a)}{\mu k} \right)^{\mu/V_n} \xrightarrow{d} \exp\left\{ \int_0^t \frac{W(x)}{x}\,dx \right\} \quad \text{in } D[0,1], \text{ as } n \to \infty, \tag{2.1}
\]
where $\{W(t);\ t \ge 0\}$ is a standard Wiener process.
In particular, when we take $t = 1$, it yields the following corollary.
Corollary 2.2. Under the assumptions of Theorem 2.1, one has
\[
\left( \prod_{k=1}^{n} \frac{T_k(a)}{\mu k} \right)^{\mu/V_n} \xrightarrow{d} e^{\sqrt 2 N}, \tag{2.2}
\]
as $n \to \infty$, where $N$ is a standard normal random variable.
Remark 2.3. Since $\int_0^1 W(x)/x\,dx$ is a normal random variable with
\[
E\int_0^1 \frac{W(x)}{x}\,dx = \int_0^1 \frac{EW(x)}{x}\,dx = 0, \qquad
E\left( \int_0^1 \frac{W(x)}{x}\,dx \right)^2 = \int_0^1\!\!\int_0^1 \frac{EW(x)W(y)}{xy}\,dx\,dy = \int_0^1\!\!\int_0^1 \frac{\min(x,y)}{xy}\,dx\,dy = 2, \tag{2.3}
\]
Corollary 2.2 follows from Theorem 2.1 immediately.
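The moment computation in (2.3) can be checked numerically. The following sketch (not part of the paper; the grid size, replication count, and seed are arbitrary choices) discretizes the Wiener process and approximates the integral by a Riemann sum starting at $x = 1/N$, which removes only a negligible share of the variance:

import numpy as np

# Monte Carlo check of Remark 2.3: Z = int_0^1 W(x)/x dx should have mean 0 and variance 2.
rng = np.random.default_rng(2)
N, reps = 20_000, 2000
dt = 1.0 / N
t = np.arange(1, N + 1) * dt                             # grid points 1/N, 2/N, ..., 1
z = np.empty(reps)
for r in range(reps):
    w = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=N))  # W on the grid
    z[r] = np.sum(w / t) * dt                            # Riemann sum of W(x)/x
print("sample mean %.3f (limit 0), sample variance %.3f (limit 2)" % (z.mean(), z.var()))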
3. Proof of Theorem 2.1
In this section, we will give the proof of Theorem 2.1. In the sequel, let $C$ denote a positive constant which may take different values in different places, and let $\lfloor x\rfloor$ denote the largest integer not exceeding $x$.
Note that, via Remark 1 of Hu and Su [9], we have $C_k := T_k(a)/(\mu k) \xrightarrow{a.s.} 1$. It follows that for any $\delta > 0$ there exists a positive integer $R$ such that
\[
P\left( \sup_{k\ge R} |C_k - 1| > \delta \right) < \delta. \tag{3.1}
\]
Consequently, there exist two sequences $\delta_m \downarrow 0$ (with $\delta_1 = 1/2$) and $R^*_m \uparrow \infty$ such that
\[
P\left( \sup_{k\ge R^*_m} |C_k - 1| > \delta_m \right) < \delta_m. \tag{3.2}
\]
The strong law of large numbers also implies that there exists a sequence $R'_m \uparrow \infty$ such that
\[
\sup_{k\ge R'_m} |C_k - 1| \le \frac{1}{m} \quad \text{a.s.} \tag{3.3}
\]
Here and in the sequel, we take $R_m = \max\{R^*_m, R'_m\}$, which yields
\[
P\left( \sup_{k\ge R_m} |C_k - 1| > \delta_m \right) < \delta_m, \qquad \sup_{k\ge R_m} |C_k - 1| \le \frac{1}{m} \quad \text{a.s.} \tag{3.4}
\]
Then, it leads to
\[
\begin{aligned}
P\left( \frac{\mu}{V_n} \sum_{k=1}^{\lfloor nt\rfloor} \log C_k \le x \right)
&= P\left( \frac{\mu}{V_n} \sum_{k=1}^{\lfloor nt\rfloor} \log C_k \le x,\ \sup_{k\ge R_m} |C_k - 1| > \delta_m \right) \\
&\quad + P\left( \frac{\mu}{V_n} \sum_{k=1}^{\lfloor nt\rfloor} \log C_k \le x,\ \sup_{k\ge R_m} |C_k - 1| \le \delta_m \right) \\
&=: A_{m,n} + B_{m,n},
\end{aligned} \tag{3.5}
\]
and $A_{m,n} < \delta_m$. By using the expansion of the logarithm $\log(1+x) = x - x^2/(2(1+\theta x)^2)$, where $\theta \in (0,1)$ depends on $x$ and $|x| < 1$, we have that
\[
\begin{aligned}
B_{m,n}
&= P\left( \frac{\mu}{V_n} \sum_{k=1}^{R_m\wedge(\lfloor nt\rfloor-1)} \log C_k
 + \frac{\mu}{V_n} \sum_{k=R_m\wedge(\lfloor nt\rfloor-1)+1}^{\lfloor nt\rfloor} \log\bigl(1 + (C_k - 1)\bigr) \le x,\ \sup_{k\ge R_m} |C_k - 1| \le \delta_m \right) \\
&= P\left( \frac{\mu}{V_n} \sum_{k=1}^{R_m\wedge(\lfloor nt\rfloor-1)} \log C_k
 + \frac{\mu}{V_n} \sum_{k=R_m\wedge(\lfloor nt\rfloor-1)+1}^{\lfloor nt\rfloor} (C_k - 1)
 - \frac{\mu}{V_n} \sum_{k=R_m\wedge(\lfloor nt\rfloor-1)+1}^{\lfloor nt\rfloor} \frac{(C_k-1)^2}{2(1+\theta_k(C_k-1))^2} \le x,\ \sup_{k\ge R_m}|C_k-1| \le \delta_m \right) \\
&= P\left( \frac{\mu}{V_n} \sum_{k=1}^{R_m\wedge(\lfloor nt\rfloor-1)} \log C_k
 + \frac{\mu}{V_n} \sum_{k=R_m\wedge(\lfloor nt\rfloor-1)+1}^{\lfloor nt\rfloor} (C_k - 1)
 - \frac{\mu}{V_n} \sum_{k=R_m\wedge(\lfloor nt\rfloor-1)+1}^{\lfloor nt\rfloor} \frac{(C_k-1)^2}{2(1+\theta_k(C_k-1))^2}\, I\left(\sup_{k\ge R_m}|C_k-1|\le\delta_m\right) \le x \right) \\
&\quad - P\left( \frac{\mu}{V_n} \sum_{k=1}^{R_m\wedge(\lfloor nt\rfloor-1)} \log C_k
 + \frac{\mu}{V_n} \sum_{k=R_m\wedge(\lfloor nt\rfloor-1)+1}^{\lfloor nt\rfloor} (C_k - 1) \le x,\ \sup_{k\ge R_m}|C_k-1| > \delta_m \right) \\
&=: D_{m,n} - E_{m,n},
\end{aligned} \tag{3.6}
\]
where the $\theta_k$ ($k = 1, \ldots, \lfloor nt\rfloor$) are $(0,1)$-valued and $E_{m,n} < \delta_m$.
Also, we can rewrite $D_{m,n}$ as
\[
D_{m,n} = P\Biggl( \frac{\mu}{V_n} \sum_{k=1}^{R_m\wedge(\lfloor nt\rfloor-1)} \bigl(\log C_k - C_k + 1\bigr)
 + \frac{\mu}{V_n}\sum_{k=1}^{\lfloor nt\rfloor}(C_k - 1)
 - \frac{\mu}{V_n}\sum_{k=R_m\wedge(\lfloor nt\rfloor-1)+1}^{\lfloor nt\rfloor} \frac{(C_k-1)^2}{2(1+\theta_k(C_k-1))^2}\, I\left(\sup_{k\ge R_m}|C_k-1|\le\delta_m\right) \le x \Biggr). \tag{3.7}
\]
Observe that, for any fixed $m$, it is easy to obtain
\[
\frac{\mu}{V_n} \sum_{k=1}^{R_m\wedge(\lfloor nt\rfloor-1)} \bigl(\log C_k - C_k + 1\bigr) \xrightarrow{p} 0 \quad \text{as } n \to \infty, \tag{3.8}
\]
by noting that $V_n^2 \xrightarrow{p} \infty$.
And if $R_m \ge \lfloor nt\rfloor - 1$, then we have
\[
\frac{\mu}{V_n}\cdot \frac{(C_{\lfloor nt\rfloor}-1)^2}{2\bigl(1+(C_{\lfloor nt\rfloor}-1)\theta_{\lfloor nt\rfloor}\bigr)^2} \le \frac{C}{V_n} \xrightarrow{a.s.} 0, \tag{3.9}
\]
as $n \to \infty$. If $R_m < \lfloor nt\rfloor - 1$, then $R_m + 1 < \lfloor nt\rfloor$. Denote
\[
F_{m,n} := \frac{\mu}{V_n} \sum_{k=R_m+1}^{\lfloor nt\rfloor} \frac{(C_k-1)^2}{2(1+\theta_k(C_k-1))^2}\, I\left(\sup_{k\ge R_m}|C_k-1|\le\delta_m\right), \tag{3.10}
\]
and, by observing that $x^2/(1+\theta x)^2 \le 4x^2$ when $|x| \le 1/2$, we can obtain
\[
\begin{aligned}
F_{m,n}
&\le \frac{C}{V_n}\sum_{k=R_m+1}^{\lfloor nt\rfloor} (C_k - 1)^2
 = \frac{C}{V_n}\sum_{k=R_m+1}^{\lfloor nt\rfloor}\left( \frac{S_k - S_k(a)}{\mu k} - 1 \right)^2 \\
&\le \frac{C}{V_n}\sum_{k=R_m+1}^{\lfloor nt\rfloor}\left( \frac{S_k}{\mu k} - 1 \right)^2
 + \frac{C}{V_n}\sum_{k=R_m+1}^{\lfloor nt\rfloor}\left( \frac{S_k(a)}{\mu k} \right)^2
 =: H_{m,n} + L_{m,n}.
\end{aligned} \tag{3.11}
\]
For any $\varepsilon > 0$, by Markov's inequality, we have
\[
\begin{aligned}
P\left( \frac{C}{\sqrt n}\sum_{k=R_m+1}^{\lfloor nt\rfloor}\left(\frac{S_k}{\mu k}-1\right)^2 > \varepsilon \right)
&\le \frac{C}{\varepsilon\sqrt n} \sum_{k=R_m+1}^{\lfloor nt\rfloor} E\left(\frac{S_k}{\mu k}-1\right)^2
 = \frac{C}{\varepsilon\sqrt n}\sum_{k=R_m+1}^{\lfloor nt\rfloor} \mathrm{Var}\left(\frac{S_k}{\mu k}\right) \\
&= \frac{C\sigma^2}{\varepsilon\mu^2\sqrt n}\sum_{k=R_m+1}^{\lfloor nt\rfloor}\frac{1}{k} \longrightarrow 0.
\end{aligned} \tag{3.12}
\]
Then $H_{m,n} \xrightarrow{p} 0$. To obtain this result, we need the following fact:
\[
\frac{V_n^2}{n} \xrightarrow{a.s.} \sigma^2, \qquad \frac{\sum_{i=1}^n (X_i - \bar X)^2}{\sum_{i=1}^n (X_i - \mu)^2} \xrightarrow{a.s.} 1, \quad \text{as } n \to \infty. \tag{3.13}
\]
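Before proving (3.13), here is a quick numerical sanity check (not part of the paper; the Exp(1) distribution, for which $\mu = \sigma^2 = 1$, and the sample size are arbitrary choices):

import numpy as np

# Check of (3.13): V_n^2/n should be close to sigma^2 and the ratio of centered
# sums of squares close to 1 for a large sample.
rng = np.random.default_rng(3)
mu, sigma2 = 1.0, 1.0
x = rng.exponential(scale=1.0, size=200_000)
vn2 = np.sum((x - x.mean()) ** 2)              # V_n^2 = sum_i (X_i - Xbar)^2
ratio = vn2 / np.sum((x - mu) ** 2)
print("V_n^2/n = %.4f (limit %.1f), ratio = %.5f (limit 1)" % (vn2 / x.size, sigma2, ratio))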
Indeed,
\[
\frac{\sum_{i=1}^n (X_i - \bar X)^2}{\sum_{i=1}^n (X_i - \mu)^2}
 = \frac{\sum_{i=1}^n (X_i - \mu)^2 - n(\mu - \bar X)^2}{\sum_{i=1}^n (X_i - \mu)^2}
 = 1 - \frac{(\mu - \bar X)^2}{(1/n)\sum_{i=1}^n (X_i - \mu)^2}. \tag{3.14}
\]
Now, we choose two constants $N > 0$ and $0 < \delta < 1$ such that $P(|X - \mu| > N) > \delta$. Hence, in view of the strong law of large numbers, we have, for $n$ large enough,
\[
\begin{aligned}
\frac{(\mu-\bar X)^2}{(1/n)\sum_{i=1}^n (X_i-\mu)^2}
&\le \frac{(\mu-\bar X)^2}{(1/n)\sum_{i=1}^n (X_i-\mu)^2 I\bigl(|X_i-\mu| > N\bigr)}
 \le \frac{(\mu-\bar X)^2}{(N^2/n)\sum_{i=1}^n I\bigl(|X_i-\mu|>N\bigr)} \\
&= \frac{o(1)}{N^2\bigl(P(|X-\mu|>N) + o(1)\bigr)} = o(1) \quad \text{a.s.,}
\end{aligned} \tag{3.15}
\]
which, together with (3.14), implies that
\[
\frac{V_n^2}{\sum_{i=1}^n (X_i-\mu)^2} = \frac{\sum_{i=1}^n (X_i - \bar X)^2}{\sum_{i=1}^n (X_i-\mu)^2} \xrightarrow{a.s.} 1, \tag{3.16}
\]
as $n \to \infty$. Furthermore, in view of the strong law of large numbers again, we obtain
\[
\frac{V_n^2}{n} = \frac{\sum_{i=1}^n (X_i - \bar X)^2}{\sum_{i=1}^n (X_i - \mu)^2} \cdot \frac{\sum_{i=1}^n (X_i - \mu)^2}{n} \xrightarrow{a.s.} \sigma^2, \tag{3.17}
\]
a.s.
as n → ∞, where σ 2 VarX > 0. For Lm,n , by noting that Sn a/Sn → 0, as n → ∞ see
Hu and Su 9, thus we can easily get
Sn a Sn a Sn a.s.
·
−→ 0,
n
Sn
n
3.18
as $n \to \infty$. Then, for any $0 < \delta' < 1$, there exists a positive integer $R'$ such that
\[
P\left( \sup_{k\ge R'} \frac{S_k(a)}{k} \ge \delta' \right) < \delta'. \tag{3.19}
\]
Consequently, coupled with (3.18), we have
\[
\begin{aligned}
P\bigl(L_{m,n} > \delta'\bigr)
&\le P\left( \frac{C}{V_n}\sum_{k=1}^{n} \left(\frac{S_k(a)}{\mu k}\right)^2 > \delta',\ \sup_{k\ge R'}\frac{S_k(a)}{k} < \delta' \right)
 + P\left( \sup_{k\ge R'}\frac{S_k(a)}{k} \ge \delta' \right) \\
&\le P\left( \frac{C}{V_n}\sum_{k=1}^{n}\frac{S_k(a)}{k} > \delta' \right) + \delta'.
\end{aligned} \tag{3.20}
\]
Clearly, to show $L_{m,n} \xrightarrow{p} 0$ as $n \to \infty$, it is sufficient to prove
\[
\frac{1}{V_n}\sum_{k=1}^{n}\frac{S_k(a)}{k} \xrightarrow{p} 0. \tag{3.21}
\]
Indeed, combined with (3.17), we only need to show
\[
\frac{1}{\sqrt n}\sum_{k=1}^{n}\frac{S_k(a)}{k} \xrightarrow{p} 0. \tag{3.22}
\]
As a matter of fact, by the definitions of $S_n(a)$ and $K_n(a)$, we have
\[
(M_n - a)K_n(a) < S_n(a) \le M_n K_n(a). \tag{3.23}
\]
In view of the fact that $M_n \uparrow \infty$ a.s., we can get from Hu and Su [9] that
\[
\frac{S_n(a)}{M_n} \overset{a.s.}{\sim} K_n(a), \tag{3.24}
\]
and thus it suffices to prove
\[
\frac{1}{\sqrt n}\sum_{k=1}^{n}\frac{M_k K_k(a)}{k} \xrightarrow{p} 0. \tag{3.25}
\]
Actually, for all $\varepsilon, \delta' > 0$ and all $N_1$ large enough, we have that
\[
\begin{aligned}
P\left( \frac{1}{\sqrt n}\sum_{k=1}^{n}\frac{K_k(a)M_k}{k} > \varepsilon \right)
&= P\left( \frac{1}{\sqrt n}\sum_{k=1}^{n}\frac{K_k(a)}{\sqrt k}\cdot\frac{M_k}{\sqrt k} > \varepsilon \right) \\
&\le P\left( \frac{1}{\sqrt n}\sum_{k=1}^{n}\frac{K_k(a)}{\sqrt k}\cdot\delta' > \varepsilon,\ \sup_{k\ge N_1}\frac{M_k}{\sqrt k} < \delta' \right)
 + P\left( \sup_{k\ge N_1}\frac{M_k}{\sqrt k} \ge \delta' \right).
\end{aligned} \tag{3.26}
\]
Observe that if $F$ has a medium tail, then $M_n/\sqrt n = (M_n/\log n)(\log n/\sqrt n) \xrightarrow{a.s.} 0$, by noting that $M_n/\log n \xrightarrow{a.s.} 1/c$ [9], where $c$ is the limit defined in Section 1. Thus, it follows that
\[
P\left( \sup_{k\ge N_1}\frac{M_k}{\sqrt k} \ge \delta' \right) \longrightarrow 0, \tag{3.27}
\]
as $N_1 \to \infty$. Further, by Markov's inequality and the boundedness of $EK_k(a)$ from Hu and Su [9], we have
\[
\begin{aligned}
P\left( \frac{1}{\sqrt n}\sum_{k=1}^{n}\frac{K_k(a)}{\sqrt k}\cdot\delta' > \varepsilon,\ \sup_{k\ge N_1}\frac{M_k}{\sqrt k} < \delta' \right)
&\le P\left( \frac{\delta'}{\sqrt n}\sum_{k=1}^{n}\frac{K_k(a)}{\sqrt k} > \varepsilon \right)
 \le \frac{C\delta'}{\varepsilon\sqrt n}\sum_{k=1}^{n}\frac{EK_k(a)}{\sqrt k} \\
&\le \frac{C\delta'}{\varepsilon\sqrt n}\sum_{k=1}^{n}\frac{1}{\sqrt k} \le C\frac{\delta'}{\varepsilon},
\end{aligned} \tag{3.28}
\]
and, hence, the proof of (3.22) is complete. Thus $L_{m,n} \xrightarrow{p} 0$ follows. Finally, in order to complete the proof, it is sufficient to show that
\[
\frac{\mu}{V_n}\sum_{k=1}^{\lfloor nt\rfloor}(C_k - 1) \xrightarrow{d} \int_0^t \frac{W(x)}{x}\,dx \quad \text{in } D[0,1], \tag{3.29}
\]
and, coupled with (3.21), we only need to prove
\[
Y_n(t) := \frac{\mu}{V_n}\sum_{k=1}^{\lfloor nt\rfloor}\left(\frac{S_k}{\mu k} - 1\right)
 = \frac{1}{V_n}\sum_{k=1}^{\lfloor nt\rfloor}\frac{S_k-\mu k}{k}
 \xrightarrow{d} \int_0^t \frac{W(x)}{x}\,dx \quad \text{in } D[0,1]. \tag{3.30}
\]
Let
\[
H(f)(t) = \begin{cases} \displaystyle\int_\varepsilon^t \frac{f(x)}{x}\,dx, & t > \varepsilon, \\ 0, & 0 \le t \le \varepsilon, \end{cases}
\qquad
Y_{n,\varepsilon}(t) = \begin{cases} \displaystyle\frac{1}{V_n}\sum_{k=\lfloor n\varepsilon\rfloor+1}^{\lfloor nt\rfloor}\frac{S_k-\mu k}{k}, & t > \varepsilon, \\ 0, & 0 \le t \le \varepsilon. \end{cases} \tag{3.31}
\]
It is obvious that
\[
\max_{0\le t\le 1}\left| \int_0^t \frac{W(x)}{x}\,dx - H(W)(t) \right| = \sup_{0\le t\le\varepsilon}\left| \int_0^t \frac{W(x)}{x}\,dx \right| \xrightarrow{a.s.} 0, \quad \text{as } \varepsilon \to 0. \tag{3.32}
\]
Note that
\[
\max_{0\le t\le\varepsilon}\bigl|Y_n(t) - Y_{n,\varepsilon}(t)\bigr| = \max_{0\le t\le\varepsilon}\left| \frac{1}{V_n}\sum_{k=1}^{\lfloor nt\rfloor}\frac{S_k-\mu k}{k} \right| \le \frac{1}{V_n}\sum_{k=1}^{\lfloor n\varepsilon\rfloor}\frac{|S_k-\mu k|}{k}, \tag{3.33}
\]
and then, for any $\varepsilon_1 > 0$, by the Cauchy-Schwarz inequality and (3.17), it follows that
\[
\begin{aligned}
\lim_{\varepsilon\to 0}\limsup_{n\to\infty} P\left( \max_{0\le t\le\varepsilon}\bigl|Y_n(t)-Y_{n,\varepsilon}(t)\bigr| \ge \varepsilon_1 \right)
&\le \lim_{\varepsilon\to 0}\limsup_{n\to\infty} P\left( \frac{1}{V_n}\sum_{k=1}^{\lfloor n\varepsilon\rfloor}\frac{|S_k-\mu k|}{k} \ge \varepsilon_1 \right)
 \le \lim_{\varepsilon\to 0}\limsup_{n\to\infty}\frac{C}{\sqrt n}\sum_{k=1}^{\lfloor n\varepsilon\rfloor}\frac{E|S_k-\mu k|}{k} \\
&\le \lim_{\varepsilon\to 0}\limsup_{n\to\infty}\frac{C}{\sqrt n}\sum_{k=1}^{\lfloor n\varepsilon\rfloor}\frac{1}{k}\bigl(\mathrm{Var}(S_k-\mu k)\bigr)^{1/2}
 = \lim_{\varepsilon\to 0}\limsup_{n\to\infty}\frac{C}{\sqrt n}\sum_{k=1}^{\lfloor n\varepsilon\rfloor}\frac{1}{\sqrt k} \\
&\le \lim_{\varepsilon\to 0}\limsup_{n\to\infty}\frac{C}{\sqrt n}\sqrt{n\varepsilon}
 = \lim_{\varepsilon\to 0} C\sqrt{\varepsilon} = 0.
\end{aligned} \tag{3.34}
\]
Furthermore, we can obtain
\[
\begin{aligned}
&\sup_{\varepsilon\le t\le 1}\left|\frac{1}{V_n}\sum_{k=\lfloor n\varepsilon\rfloor+1}^{\lfloor nt\rfloor}\frac{S_k-\mu k}{k}-\frac{1}{V_n}\int_{n\varepsilon}^{nt}\frac{S_{\lfloor x\rfloor}-x\mu}{x}\,dx\right|\\
&\quad\le\sup_{\varepsilon\le t\le 1}\left|\frac{1}{V_n}\int_{\lfloor n\varepsilon\rfloor+1}^{\lfloor nt\rfloor+1}\frac{S_{\lfloor x\rfloor}-x\mu}{x}\,dx-\frac{1}{V_n}\int_{n\varepsilon}^{nt}\frac{S_{\lfloor x\rfloor}-x\mu}{x}\,dx\right|
 +\sup_{\varepsilon\le t\le 1}\frac{1}{V_n}\int_{\lfloor n\varepsilon\rfloor+1}^{\lfloor nt\rfloor+1}\bigl|S_{\lfloor x\rfloor}-x\mu\bigr|\left|\frac{1}{\lfloor x\rfloor}-\frac{1}{x}\right|dx\\
&\quad\le\frac{1}{V_n}\left|\int_{n\varepsilon}^{\lfloor n\varepsilon\rfloor+1}\frac{S_{\lfloor x\rfloor}-x\mu}{x}\,dx\right|
 +\sup_{\varepsilon\le t\le 1}\frac{1}{V_n}\left|\int_{nt}^{\lfloor nt\rfloor+1}\frac{S_{\lfloor x\rfloor}-x\mu}{x}\,dx\right|
 +\sup_{\varepsilon\le t\le 1}\frac{1}{V_n}\int_{\lfloor n\varepsilon\rfloor+1}^{\lfloor nt\rfloor+1}\bigl|S_{\lfloor x\rfloor}-x\mu\bigr|\left|\frac{1}{\lfloor x\rfloor}-\frac{1}{x}\right|dx\\
&\quad\le\frac{\max_{k\le n}|S_k-\mu k|}{V_n}\left(\frac{2}{n\varepsilon}+\sup_{\varepsilon\le t\le 1}\frac{2}{nt}\right)
 \le C\,\frac{\max_{k\le n}|S_k-\mu k|}{nV_n}
 = C\,\frac{\max_{k\le n}\bigl|\sum_{i=1}^k(X_i-\mu)\bigr|}{nV_n}
 \le\frac{C}{V_n}\cdot\frac{\sum_{i=1}^n|X_i-\mu|}{n}\xrightarrow{a.s.}0.
\end{aligned} \tag{3.35}
\]
Therefore, uniformly for $t \in [\varepsilon, 1]$, we have
\[
\frac{1}{V_n}\sum_{k=\lfloor n\varepsilon\rfloor+1}^{\lfloor nt\rfloor}\frac{S_k-\mu k}{k}
 = \frac{1}{V_n}\int_{n\varepsilon}^{nt}\frac{S_{\lfloor x\rfloor}-x\mu}{x}\,dx + o_P(1)
 = \int_\varepsilon^t\frac{W_n(x)}{x}\,dx + o_P(1), \tag{3.36}
\]
where $W_n(t) := (S_{\lfloor nt\rfloor} - \lfloor nt\rfloor\mu)/V_n$. Notice that $H(\cdot)$ is a continuous mapping on the space $D[0,1]$. Thus, using the continuous mapping theorem (cf. Theorem 2.7 of Billingsley [10]), it follows that
\[
Y_{n,\varepsilon}(t) = H(W_n)(t) + o_P(1) \xrightarrow{d} H(W)(t) \quad \text{in } D[0,1], \text{ as } n \to \infty. \tag{3.37}
\]
Hence, (3.32), (3.34), and (3.37), coupled with Theorem 3.2 of Billingsley [10], lead to (3.30). The proof is now completed.
4. Application to U-Statistics
A useful notion of a U-statistic was introduced by Hoeffding [11]. Let a U-statistic be defined as
\[
U_n = \binom{n}{m}^{-1} \sum_{1\le i_1<\cdots<i_m\le n} h(X_{i_1},\ldots,X_{i_m}), \tag{4.1}
\]
where $h$ is a symmetric real function of $m$ arguments and $\{X_i;\ i \ge 1\}$ is a sequence of i.i.d. random variables. If we take $m = 1$ and $h(x) = x$, then $U_n$ reduces to $S_n/n$. Assume that $E h^2(X_1,\ldots,X_m) < \infty$, and let
\[
h_1(x) = E\,h(x, X_2,\ldots,X_m), \qquad \widehat U_n = \frac{m}{n}\sum_{i=1}^n \bigl(h_1(X_i) - Eh\bigr) + Eh. \tag{4.2}
\]
n Rn ,
Un U
4.3
Thus, we may write
where
Rn −1
n
m
HXi1 , . . . , Xim ,
1≤i1 <···<im ≤n
m
Hx1 , . . . , xm hx1 , . . . , xm − h1 xi − Eh − Eh.
4.4
i1
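As a small numerical illustration of the decomposition (4.1)-(4.4) (not part of the paper; the product kernel $h(x,y) = xy$ with $m = 2$, the Exp(1) distribution, and the sample size are our own choices, and for this kernel $Eh = \mu^2$ and $h_1(x) = \mu x$ in closed form), one can check that the remainder $R_n = U_n - \widehat U_n$ is small:

import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
mu = 1.0                                       # EX = 1 for Exp(1)
x = rng.exponential(scale=1.0, size=300)

# U_n = C(n, 2)^{-1} * sum_{i<j} h(X_i, X_j) for the product kernel h(x, y) = x*y.
u_n = np.mean([xi * xj for xi, xj in combinations(x, 2)])

eh = mu ** 2                                   # Eh = (EX)^2
h1 = mu * x                                    # h_1(X_i) = E[h(X_i, X_2) | X_i]
u_hat = (2.0 / x.size) * np.sum(h1 - eh) + eh  # Hajek projection (4.2) with m = 2
r_n = u_n - u_hat                              # remainder in (4.3); n*Var(R_n) -> 0 by (4.5)
print("U_n = %.4f, U_hat_n = %.4f, R_n = %.2e" % (u_n, u_hat, r_n))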
It is well known (cf. Resnick [12]) that
\[
\mathrm{Cov}\bigl(\widehat U_n, R_n\bigr) = 0, \qquad n\,\mathrm{Var}(R_n) \longrightarrow 0, \quad \text{as } n \to \infty. \tag{4.5}
\]
Theorem 2.1 is now extended to U-statistics as follows.
Theorem 4.1. Let $U_n$ be a U-statistic defined as above. Assume that $Eh^2 < \infty$ and $P(h(X_1,\ldots,X_m) > 0) = 1$. Denote $\mu = Eh > 0$ and $\sigma^2 = \mathrm{Var}(h_1(X_1)) \ne 0$. Then,
\[
\left( \prod_{k=m}^{\lfloor nt\rfloor} \frac{U_k}{\mu} \right)^{\mu/(mV_n)} \xrightarrow{d} \exp\left\{ \int_0^t \frac{W(x)}{x}\,dx \right\} \quad \text{in } D[0,1], \text{ as } n\to\infty, \tag{4.6}
\]
where $W(x)$ is a standard Wiener process and $V_n^2 = \sum_{i=1}^n (X_i - \bar X)^2$.
In order to prove this theorem, by (3.17), we only need to prove
\[
\frac{\mu}{m\sigma\sqrt n}\sum_{k=m}^{\lfloor nt\rfloor}\left(\frac{U_k}{\mu}-1\right) \xrightarrow{d} \int_0^t\frac{W(x)}{x}\,dx \quad \text{in } D[0,1], \text{ as } n\to\infty. \tag{4.7}
\]
If this result is true, then, with the fact that $U_n \xrightarrow{a.s.} Eh = \mu$ (deduced from $E|h| < \infty$; see Resnick [12]) and (4.3), Theorem 4.1 follows immediately from the method used in the proof of Theorem 2.1 with $S_k/k$ replaced by $U_k$. Now, we begin to show (4.7). By (4.3), we have
\[
\frac{\mu}{m\sqrt n}\sum_{k=m}^{\lfloor nt\rfloor}\left(\frac{U_k}{\mu}-1\right)
 = \frac{\mu}{m\sqrt n}\sum_{k=m}^{\lfloor nt\rfloor}\left(\frac{\widehat U_k}{\mu}-1\right)
 + \frac{\mu}{m\sqrt n}\sum_{k=m}^{\lfloor nt\rfloor}\frac{R_k}{\mu}. \tag{4.8}
\]
By applying (3.30) to the random variables $mh_1(X_i)$, $i \ge 1$, we have
\[
\frac{\mu}{m\sqrt n}\sum_{k=m}^{\lfloor nt\rfloor}\left(\frac{\widehat U_k}{\mu}-1\right)
 = \frac{\mu}{\sqrt n}\left[\sum_{k=1}^{\lfloor nt\rfloor}\left(\frac{\sum_{i=1}^k h_1(X_i)}{\mu k}-1\right)
 - \sum_{k=1}^{m-1}\left(\frac{\sum_{i=1}^k h_1(X_i)}{\mu k}-1\right)\right]
 \xrightarrow{d} \sigma\int_0^t\frac{W(x)}{x}\,dx \tag{4.9}
\]
in $D[0,1]$, as $n \to \infty$, since the second expression converges to zero a.s. as $n \to \infty$. Therefore, for proving (4.7), we only need to prove
\[
\frac{\mu}{m\sqrt n}\sum_{k=m}^{\lfloor nt\rfloor}\frac{R_k}{\mu} \xrightarrow{p} 0, \quad \text{as } n \to \infty, \tag{4.10}
\]
and it is sufficient to demonstrate
\[
\widetilde R_n := \frac{\mu}{m\sqrt n}\sum_{k=m}^{n}\frac{R_k}{\mu} \xrightarrow{p} 0, \quad \text{as } n\to\infty. \tag{4.11}
\]
Indeed, we can easily obtain $E\widetilde R_n^2 \to 0$ as $n \to \infty$ from Hoeffding [11]. Thus, we complete the proof of (4.7), and, hence, Theorem 4.1 holds.
Acknowledgment
The author thanks the referees for valuable comments that have led to improvements in this
work.
References
[1] B. C. Arnold and J. A. Villaseñor, "The asymptotic distributions of sums of records," Extremes, vol. 1, no. 3, pp. 351–363, 1999.
[2] G. Rempała and J. Wesołowski, "Asymptotics for products of sums and U-statistics," Electronic Communications in Probability, vol. 7, pp. 47–54, 2002.
[3] Y. Qi, "Limit distributions for products of sums," Statistics & Probability Letters, vol. 62, no. 1, pp. 93–100, 2003.
[4] L.-X. Zhang and W. Huang, "A note on the invariance principle of the product of sums of random variables," Electronic Communications in Probability, vol. 12, pp. 51–56, 2007.
[5] A. G. Pakes and F. W. Steutel, "On the number of records near the maximum," The Australian Journal of Statistics, vol. 39, no. 2, pp. 179–192, 1997.
[6] A. G. Pakes and Y. Li, "Limit laws for the number of near maxima via the Poisson approximation," Statistics & Probability Letters, vol. 40, no. 4, pp. 395–401, 1998.
[7] Y. Li, "A note on the number of records near the maximum," Statistics & Probability Letters, vol. 43, no. 2, pp. 153–158, 1999.
[8] A. G. Pakes, "The number and sum of near-maxima for thin-tailed populations," Advances in Applied Probability, vol. 32, no. 4, pp. 1100–1116, 2000.
[9] Z. Hu and C. Su, "Limit theorems for the number and sum of near-maxima for medium tails," Statistics & Probability Letters, vol. 63, no. 3, pp. 229–237, 2003.
[10] P. Billingsley, Convergence of Probability Measures, Wiley Series in Probability and Statistics, John Wiley & Sons, New York, NY, USA, 2nd edition, 1999.
[11] W. Hoeffding, "A class of statistics with asymptotically normal distribution," Annals of Mathematical Statistics, vol. 19, pp. 293–325, 1948.
[12] S. I. Resnick, "Limit laws for record values," Stochastic Processes and their Applications, vol. 1, pp. 67–82, 1973.