
V. T. N. Anh: Department of Mathematics, Hoa Lu University, Ninh Binh, Vietnam. Email: [email protected]
N. T. T. Hien: Department of Mathematics, Vinh University, Nghe An, Vietnam. Email: [email protected]
L. V. Thanh: Department of Mathematics, Vinh University, Nghe An, Vietnam. Email: [email protected]
V. T. H. Van: Department of Mathematics, Vinh University, Nghe An, Vietnam. Email: [email protected]

The Marcinkiewicz–Zygmund-Type Strong Law of Large Numbers with General Normalizing Sequences
(The paper was supported by NAFOSTED, Grant No. 101.03-2015.11.)

Vu T. N. Anh · Nguyen T. T. Hien · Le V. Thanh (corresponding author) · Vo T. H. Van
Abstract

This paper establishes complete convergence for weighted sums and the Marcinkiewicz–Zygmund-type strong law of large numbers for sequences of negatively associated and identically distributed random variables $\{X,X_{n},n\geq 1\}$ with general normalizing constants under a moment condition that $ER(X)<\infty$, where $R(\cdot)$ is a regularly varying function. The result is new even when the random variables are independent and identically distributed (i.i.d.), and a special case of this result comes close to a solution to an open question raised by Chen and Sung (Statist Probab Lett 92:45–52, 2014). The proof exploits some properties of slowly varying functions and of de Bruijn conjugates. A counterpart of the main result obtained by Martikainen (J Math Sci 75(5):1944–1946, 1995) on the Marcinkiewicz–Zygmund-type strong law of large numbers for pairwise i.i.d. random variables is also presented. Two illustrative examples are provided, including a strong law of large numbers for pairwise negatively dependent random variables which have the same distribution as the random variable appearing in the St. Petersburg game.

Keywords:
Weighted sum · Negative association · Negative dependence · Complete convergence · Strong law of large numbers · Normalizing constant · Slowly varying function
MSC:
60F15

1 Introduction

The motivation of this paper is an open question raised recently by Chen and Sung ChenSung14 . Let $1<\alpha\leq 2$, $\gamma>0$ and let $\{X,X_{n},n\geq 1\}$ be a sequence of negatively associated and identically distributed random variables with $E(X)=0$. Sung Sung11 proved that if

\begin{cases}E|X|^{\gamma}<\infty\ \text{ for }\ \gamma>\alpha,\\ E|X|^{\alpha}\log(|X|+2)<\infty\ \text{ for }\ \gamma=\alpha,\\ E|X|^{\alpha}<\infty\ \text{ for }\ \gamma<\alpha,\end{cases} (1.1)

then

\sum_{n=1}^{\infty}n^{-1}P\left(\max_{1\leq k\leq n}\left|\sum_{i=1}^{k}a_{ni}X_{i}\right|>\varepsilon n^{1/\alpha}\log^{1/\gamma}(n)\right)<\infty\ \text{ for all }\ \varepsilon>0, (1.2)

where $\{a_{ni},n\geq 1,1\leq i\leq n\}$ are constants satisfying

\sup_{n\geq 1}\dfrac{\sum_{i=1}^{n}|a_{ni}|^{\alpha}}{n}<\infty. (1.3)

Here and thereafter, $\log$ denotes the logarithm to the base $2$. Chen and Sung ChenSung14 proved that for the case where $\gamma>\alpha$, the condition $E|X|^{\gamma}<\infty$ is optimal. They raised an open question about finding the optimal condition for (1.2) when $\gamma\leq\alpha$. For the case where $\gamma<\alpha$, Chen and Sung (ChenSung14, Corollary 2.2) proved that (1.2) holds under an almost optimal condition that

E\left(|X|^{\alpha}\log^{1-\alpha/\gamma}(|X|+2)\right)<\infty.

In this note, by using some results related to regularly varying functions, we provide necessary and sufficient conditions for

\sum_{n}n^{-1}P\left(\max_{1\leq k\leq n}\left|\sum_{i=1}^{k}a_{ni}X_{i}\right|>\varepsilon n^{1/\alpha}\tilde{L}(n^{1/\alpha})\right)<\infty\ \text{ for all }\ \varepsilon>0, (1.4)

where $\tilde{L}(\cdot)$ is the de Bruijn conjugate of a slowly varying function $L(\cdot)$ defined on $[A,\infty)$ for some $A>0$. This result is new even when the random variables are i.i.d. By letting $L(x)\equiv\log^{-1/\gamma}(x)$, $x\geq 2$, we obtain the optimal moment condition for (1.2).

Weak laws of large numbers with norming constants of the form $n^{1/\alpha}\tilde{L}(n^{1/\alpha})$ were studied by Gut Gut04 , and Matsumoto and Nakata MN13 . The Marcinkiewicz–Zygmund strong law of large numbers has been extended and generalized in many directions by a number of authors; see BaKa ; DedeckerMerlevede ; GutStadmueller ; HechnerHeinkel ; Rio95a ; Rio95b ; Szewczak and references therein. To the best of our knowledge, there is no result in the literature that considers the strong law of large numbers with general normalizing constants $n^{1/\alpha}\tilde{L}(n^{1/\alpha})$, except Gut and Stadtmüller GutStadmueller , who studied the Kolmogorov strong law of large numbers, but for delayed sums. The main result of this paper fills this gap. Recently, Miao et al. MMX have studied the Marcinkiewicz–Zygmund-type strong law of large numbers where the norming constants are of the form $n^{1/\alpha}\log^{\beta/\alpha}n$ for some $\beta\geq 0$, which is a special case of our result.

The concept of negative association of random variables was introduced by Joag-Dev and Proschan JoPr . A collection $\{X_{1},\dots,X_{n}\}$ of random variables is said to be negatively associated if for any disjoint subsets $A,B$ of $\{1,\dots,n\}$ and any real coordinatewise nondecreasing functions $f$ on $\mathbb{R}^{|A|}$ and $g$ on $\mathbb{R}^{|B|}$,

\text{Cov}(f(X_{k},k\in A),g(X_{k},k\in B))\leq 0 (1.5)

whenever the covariance exists, where $|A|$ denotes the cardinality of $A$. A sequence $\{X_{n},n\geq 1\}$ of random variables is said to be negatively associated if every finite subfamily is negatively associated.

There is a weaker concept of dependence called negative dependence, which was introduced by Lehmann Lehmann66 and further investigated by Ebrahimi and Ghosh EG81 and Block et al. BSS . A collection of random variables $\{X_{1},\dots,X_{n}\}$ is said to be negatively dependent if for all $x_{1},\dots,x_{n}\in\mathbb{R}$,

P(X_{1}\leq x_{1},\dots,X_{n}\leq x_{n})\leq P(X_{1}\leq x_{1})\dots P(X_{n}\leq x_{n}),

and

P(X_{1}>x_{1},\dots,X_{n}>x_{n})\leq P(X_{1}>x_{1})\dots P(X_{n}>x_{n}).

A sequence of random variables $\{X_{i},i\geq 1\}$ is said to be negatively dependent if for any $n\geq 1$, the collection $\{X_{1},\dots,X_{n}\}$ is negatively dependent. A sequence of random variables $\{X_{i},i\geq 1\}$ is said to be pairwise negatively dependent if for all $x,y\in\mathbb{R}$ and for all $i\not=j$,

P(X_{i}\leq x,X_{j}\leq y)\leq P(X_{i}\leq x)P(X_{j}\leq y).

It is well known and easy to prove that $\{X_{i},i\geq 1\}$ is pairwise negatively dependent if and only if for all $x,y\in\mathbb{R}$ and for all $i\not=j$,

P(X_{i}>x,X_{j}>y)\leq P(X_{i}>x)P(X_{j}>y).

By Joag-Dev and Proschan (JoPr, Property P3), negative association implies negative dependence. For examples of negatively dependent random variables which are not negatively associated, see (JoPr, p. 289). Of course, pairwise independence implies pairwise negative dependence, but pairwise independence and negative dependence do not imply each other. Joag-Dev and Proschan JoPr pointed out that many useful distributions enjoy the negative association property (and therefore are negatively dependent), including the multinomial distribution, the multivariate hypergeometric distribution, the Dirichlet distribution, strongly Rayleigh distributions and the distribution of random sampling without replacement. Limit theorems for negatively associated and negatively dependent random variables have received extensive attention. We refer to JiLi ; Matula92 ; Shao00 and references therein. These concepts of dependence can be extended to Hilbert space-valued random variables; see, e.g., BDD ; HTV ; KKH ; Thanh13 , among others.
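As a simple worked illustration of the first distribution on this list (our own example, not taken from the cited references), consider the indicators of a single multinomial trial. Let $(X_{1},\dots,X_{k})$ be multinomial with one trial, so that each $X_{i}$ is an indicator and $\sum_{i=1}^{k}X_{i}=1$. For $i\neq j$ and $x,y\in[0,1)$,

P(X_{i}>x,\,X_{j}>y)=P(X_{i}=1,\,X_{j}=1)=0\leq P(X_{i}>x)P(X_{j}>y),

and the inequality is trivial for other values of $x,y$; hence $(X_{i},X_{j})$ is pairwise negatively dependent, and in fact the whole vector is negatively associated by Joag-Dev and Proschan JoPr .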

The rest of the paper is arranged as follows. Section 2 presents some results on slowly varying functions needed in proving the main results. Section 3 focuses on complete convergence for weighted sums of negatively associated and identically distributed random variables. In Sect. 4, we apply a result concerning slowly varying functions developed in Sect. 2 to give a counterpart of Martikainen’s strong law of large numbers (see Martikainen ) for sequences of pairwise negatively dependent and identically distributed random variables. As an application, we prove a strong law of large numbers for pairwise negatively dependent random variables which have the same distribution as the random variable appearing in the St. Petersburg game.

2 Some Facts Concerning Slowly Varying Functions

Some technical results concerning slowly varying functions will be presented in this section.

The notion of regularly varying function can be found in (Seneta76, Chapter 1). A real-valued function $R(\cdot)$ is said to be regularly varying with index of regular variation $\rho$ ($\rho\in\mathbb{R}$) if it is a positive and measurable function on $[A,\infty)$ for some $A>0$, and for each $\lambda>0$,

\lim_{x\to\infty}\dfrac{R(\lambda x)}{R(x)}=\lambda^{\rho}. (2.1)

A regularly varying function with index of regular variation $\rho=0$ is called slowly varying. It is well known that a function $R(\cdot)$ is regularly varying with index of regular variation $\rho$ if and only if it can be written in the form

R(x)=x^{\rho}L(x) (2.2)

where $L(\cdot)$ is a slowly varying function (see, e.g., (Seneta76, p. 2)). For regularly varying functions and their important role in probability, we refer to Seneta Seneta76 , Bingham, Goldie and Teugels BGT , and the more recent survey paper by Jessen and Mikosch JeMi . Regular variation is also one of the key notions for modeling the behavior of large telecommunications networks; see, e.g., Heath et al. HRS , Mikosch et al. MRRS .

The basic result in the theory of slowly varying functions is the representation theorem (see, e.g., (BGT, Theorem 1.3.1)), which states that for a positive and measurable function $L(\cdot)$ defined on $[A,\infty)$ for some $A>0$, $L(\cdot)$ is slowly varying if and only if it can be written in the form

L(x)=c(x)\exp\left(\int_{B}^{x}\dfrac{\varepsilon(u)\,du}{u}\right)

for some $B\geq A$ and for all $x\geq B$, where $c(\cdot)$ is a positive bounded measurable function defined on $[B,\infty)$ satisfying $\lim_{x\to\infty}c(x)=c\in(0,\infty)$ and $\varepsilon(\cdot)$ is a continuous function defined on $[B,\infty)$ satisfying $\lim_{x\to\infty}\varepsilon(x)=0$. Seneta Seneta73 (see also (BGT, Lemma 1.3.2)) proved that if $L(\cdot)$ is a slowly varying function defined on $[A,\infty)$ for some $A>0$, then there exists $B\geq A$ such that $L(x)$ is bounded on every finite closed interval $[a,b]\subset[B,\infty)$.

Let $L(\cdot)$ be a slowly varying function. Then by (BGT, Theorem 1.5.13), there exists a slowly varying function $\tilde{L}(\cdot)$, unique up to asymptotic equivalence, satisfying

\lim_{x\to\infty}L(x)\tilde{L}\left(xL(x)\right)=1\ \text{ and }\ \lim_{x\to\infty}\tilde{L}(x)L\left(x\tilde{L}(x)\right)=1. (2.3)

The function $\tilde{L}$ is called the de Bruijn conjugate of $L$, and $\left(L,\tilde{L}\right)$ is called a (slowly varying) conjugate pair (see, e.g., (BGT, p. 29)). By (BGT, Proposition 1.5.14), if $\left(L,\tilde{L}\right)$ is a conjugate pair, then for $a,b,\alpha>0$, each of $\left(L(ax),\tilde{L}(bx)\right)$, $\left(aL(x),a^{-1}\tilde{L}(x)\right)$, $\left(\left(L(x^{\alpha})\right)^{1/\alpha},(\tilde{L}(x^{\alpha}))^{1/\alpha}\right)$ is a conjugate pair. Bojanić and Seneta Bojanic71 (see also Theorem 2.3.3 and Corollary 2.3.4 in BGT ) proved that if $L(\cdot)$ is a slowly varying function satisfying

\lim_{x\to\infty}\left(\dfrac{L(\lambda_{0}x)}{L(x)}-1\right)\log(L(x))=0, (2.4)

for some $\lambda_{0}>1$, then for every $\alpha\in\mathbb{R}$,

\lim_{x\to\infty}\dfrac{L(xL^{\alpha}(x))}{L(x)}=1, (2.5)

and therefore, we can choose (up to asymptotic equivalence) $\tilde{L}(x)=1/L(x)$. In particular, if $L(x)=\log(x)$ then $\tilde{L}(x)=1/\log(x)$.
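As a quick check of condition (2.4) in the particular case just mentioned (our own computation, included for the reader's convenience), take $L(x)=\log(x)$ and any $\lambda_{0}>1$:

\left(\dfrac{\log(\lambda_{0}x)}{\log(x)}-1\right)\log(\log(x))=\dfrac{\log(\lambda_{0})}{\log(x)}\,\log(\log(x))\longrightarrow 0\ \text{ as }x\to\infty,

so (2.4) holds and, up to asymptotic equivalence, $\tilde{L}(x)=1/\log(x)$, in agreement with the statement above.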

The following lemma follows from Theorem 1.5.12 and Proposition 1.5.15 in BGT . Here and thereafter, for a slowly varying function $L(\cdot)$ defined on $[A,\infty)$ for some $A>0$, we denote the de Bruijn conjugate of $L(\cdot)$ by $\tilde{L}(\cdot)$. Without loss of generality, we assume that $\tilde{L}(\cdot)$ is also defined on $[A,\infty)$, and that $L(x)$ and $\tilde{L}(x)$ are both bounded on finite closed intervals.

Lemma 2.1

Let $\alpha,\beta>0$ and $L(\cdot)$ be a slowly varying function. Let $f(x)=x^{\alpha\beta}L^{\alpha}(x^{\beta})$ and $g(x)=x^{1/(\alpha\beta)}\tilde{L}^{1/\beta}(x^{1/\alpha})$. Then

\lim_{x\to\infty}\dfrac{f(g(x))}{x}=\lim_{x\to\infty}\dfrac{g(f(x))}{x}=1. (2.6)

The second lemma shows that we can approximate a slowly varying function $L(\cdot)$ by a differentiable slowly varying function $L_{1}(\cdot)$. See Galambos and Seneta (GS73, p. 111) for a proof.

Lemma 2.2

For any slowly varying function $L(\cdot)$ defined on $[A,\infty)$ for some $A>0$, there exists a differentiable slowly varying function $L_{1}(\cdot)$ defined on $[B,\infty)$ for some $B\geq A$ such that

\lim_{x\to\infty}\dfrac{L(x)}{L_{1}(x)}=1\ \text{ and }\ \lim_{x\to\infty}\dfrac{xL_{1}^{\prime}(x)}{L_{1}(x)}=0.

Conversely, if $L(\cdot)$ is a positive differentiable function satisfying

\lim_{x\to\infty}\dfrac{xL^{\prime}(x)}{L(x)}=0, (2.7)

then $L(\cdot)$ is a slowly varying function.

Because of Lemma 2.2, we can work with differentiable slowly varying functions $L(\cdot)$ that satisfy (2.7) in our setting.
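For instance (a standard worked example, stated here for concreteness), the logarithmic functions used later in the paper satisfy (2.7): for $L(x)=\log^{c}(x)$ with $c\in\mathbb{R}$ fixed,

\dfrac{xL^{\prime}(x)}{L(x)}=\dfrac{c}{\ln(x)}\longrightarrow 0\ \text{ as }x\to\infty,

so in particular $\log^{1/\gamma}(x)$ and $\log^{-1/\gamma}(x)$ are differentiable slowly varying functions.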

The proof of Lemma 2.3 (i) follows from direct calculations (by taking the derivative). Lemma 2.3 (ii) is an easy consequence of the representation theorem stated above.

Lemma 2.3

Let $p>0$ and let $L(\cdot)$ be a slowly varying function defined on $[A,\infty)$ for some $A>0$, satisfying (2.7). Then the following statements hold.

(i) There exists $B\geq A$ such that $x^{p}L(x)$ is increasing on $[B,\infty)$, $x^{-p}L(x)$ is decreasing on $[B,\infty)$, and $\lim_{x\to\infty}x^{p}L(x)=\infty$, $\lim_{x\to\infty}x^{-p}L(x)=0$.

(ii) For all $\lambda>0$,

\lim_{x\to\infty}\dfrac{L(x)}{L(x+\lambda)}=1.

Remark 2.4

If we do not have the assumption that $L(\cdot)$ satisfies (2.7), then we still have $x^{p}L(x)\rightarrow\infty$, $x^{-p}L(x)\rightarrow 0$ as $x\to\infty$ (see Seneta (Seneta76, p. 18)), but we do not have the monotonicity as in Lemma 2.3 (i).

The following lemma is a direct consequence of Karamata's theorem (see (BGT, Theorem 1.5.10)), as was so kindly pointed out to us by the referee.

Lemma 2.5

Let $p>1$, $q\in\mathbb{R}$ and $L(\cdot)$ be a differentiable slowly varying function defined on $[A,\infty)$ for some $A>0$. Then

\sum_{k=n}^{\infty}\dfrac{L^{q}(k)}{k^{p}}\sim\dfrac{L^{q}(n)}{(p-1)n^{p-1}}. (2.8)
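For readers unfamiliar with Karamata's theorem, the asymptotics (2.8) can be read off from an integral comparison (a sketch of the standard argument, not a complete proof): since $L^{q}(\cdot)$ is again slowly varying,

\sum_{k=n}^{\infty}\dfrac{L^{q}(k)}{k^{p}}\sim\int_{n}^{\infty}\dfrac{L^{q}(x)}{x^{p}}\,dx\sim\dfrac{L^{q}(n)}{(p-1)n^{p-1}}\ \text{ as }n\to\infty,

where the second asymptotic equivalence is Karamata's theorem applied to the slowly varying function $L^{q}(\cdot)$ with exponent $-p<-1$.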

The following proposition gives a criterion for $E\left(|X|^{\alpha}L^{\alpha}(|X|+A)\right)<\infty$.

Proposition 2.6

Let $\alpha\geq 1$, and let $X$ be a random variable. Let $L(\cdot)$ be a slowly varying function defined on $[A,\infty)$ for some $A>0$. Assume that $x^{\alpha}L^{\alpha}(x)$ and $x^{1/\alpha}\tilde{L}(x^{1/\alpha})$ are increasing on $[A,\infty)$. Then

E\left(|X|^{\alpha}L^{\alpha}(|X|+A)\right)<\infty\ \text{ if and only if }\ \sum_{n\geq A^{\alpha}}P(|X|>b_{n})<\infty, (2.9)

where $b_{n}=n^{1/\alpha}\tilde{L}\left(n^{1/\alpha}\right)$, $n\geq A^{\alpha}$.

Proof

Let $f(x)=x^{\alpha}L^{\alpha}(x)$, $g(x)=x^{1/\alpha}\tilde{L}(x^{1/\alpha})$. Since $L(\cdot)$ is positive and bounded on finite closed intervals,

E\left(|X|^{\alpha}L^{\alpha}(|X|+A)\right)<\infty\ \text{ if and only if }\ E\left(f(|X|+A)\right)<\infty.

For a nonnegative random variable $Y$, $EY<\infty$ if and only if $\sum_{n=1}^{\infty}P(Y>n)<\infty$. Applying this, we have that $E\left(f(|X|+A)\right)<\infty$ if and only if

\sum_{n=1}^{\infty}P\left(f(|X|+A)>n\right)<\infty. (2.10)

By using Lemma 2.1 with $\beta=1$, we have $f(g(x))\sim g(f(x))\sim x$ as $x\to\infty$. Combining this with the assumption that $f(x)$ and $g(x)$ are increasing on $[A,\infty)$, we see that (2.10) is equivalent to

\sum_{n\geq A^{\alpha}}P\left(|X|>n^{1/\alpha}\tilde{L}(n^{1/\alpha})\right)<\infty. (2.11)

The proof of the proposition is completed.
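To illustrate the criterion (a worked special case of our own, not needed in the sequel), take $\alpha=1$ and $L(x)=\log(x)$, so that $\tilde{L}(x)=1/\log(x)$ and $b_{n}=n/\log(n)$. Then, for $A$ large enough that the monotonicity assumptions of Proposition 2.6 hold,

E\left(|X|\log(|X|+A)\right)<\infty\ \text{ if and only if }\ \sum_{n\geq A}P\left(|X|>\dfrac{n}{\log(n)}\right)<\infty.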

3 Complete Convergence for Weighted Sums of Negatively Associated and Identically Distributed Random Variables

In the following theorem, we establish complete convergence for weighted sums of negatively associated and identically distributed random variables. Theorem 3.1 is new even when the random variables are i.i.d. A special case of this result comes close to a solution of an open question of Chen and Sung ChenSung14 . In subsequent derivations, the symbol $C$ denotes a generic positive constant whose value may be different for each appearance.

Theorem 3.1

Let $1\leq\alpha<2$, $\{X,X_{n},\,n\geq 1\}$ be a sequence of negatively associated and identically distributed random variables and $L(\cdot)$ a slowly varying function defined on $[A,\infty)$ for some $A>0$. When $\alpha=1$, we assume further that $L(x)\geq 1$ and is increasing on $[A,\infty)$. Let $b_{n}=n^{1/\alpha}\tilde{L}(n^{1/\alpha})$, $n\geq A^{\alpha}$. Then the following four statements are equivalent.

(i) The random variable $X$ satisfies

E(X)=0,\ E\left(|X|^{\alpha}L^{\alpha}(|X|+A)\right)<\infty. (3.1)

(ii) For every array of constants $\{a_{ni},n\geq 1,1\leq i\leq n\}$ satisfying

\sum_{i=1}^{n}a_{ni}^{2}\leq Cn,\ n\geq 1, (3.2)

we have

\sum_{n\geq A^{\alpha}}n^{-1}P\left(\max_{1\leq k\leq n}\left|\sum_{i=1}^{k}a_{ni}X_{i}\right|>\varepsilon b_{n}\right)<\infty\ \text{ for all }\varepsilon>0. (3.3)

(iii)

\sum_{n\geq A^{\alpha}}n^{-1}P\left(\max_{1\leq k\leq n}\left|\sum_{i=1}^{k}X_{i}\right|>\varepsilon b_{n}\right)<\infty\ \text{ for all }\varepsilon>0. (3.4)

(iv) The strong law of large numbers

\lim_{n\to\infty}\dfrac{\max_{1\leq k\leq n}\left|\sum_{i=1}^{k}X_{i}\right|}{b_{n}}=0\ \text{ a.s.} (3.5)

holds.

Proof

For simplicity, we assume that $A^{\alpha}$ is an integer since we can take $[A^{\alpha}]+1$ otherwise. By Lemmas 2.2 and 2.3, without loss of generality, we can assume that $x^{1/\alpha}\tilde{L}(x^{1/\alpha})$ and $x^{\alpha-1}L^{\alpha}(x)$ are increasing on $[A,\infty)$ and that $x^{\alpha-2}L^{\alpha}(x)$ is decreasing on $[A,\infty)$. We may also assume that $a_{ni}\geq 0$ since we can use the decomposition $a_{ni}=a_{ni}^{+}-a_{ni}^{-}$ in the general case.

Firstly, we prove the implication [(i) $\Rightarrow$ (ii)]. Assume that (3.1) holds and that $\{a_{ni},n\geq 1,1\leq i\leq n\}$ are constants satisfying (3.2); we will prove that (3.3) holds. For $n\geq A^{\alpha}$, set

X_{ni}=-b_{n}I(X_{i}<-b_{n})+X_{i}I(|X_{i}|\leq b_{n})+b_{n}I(X_{i}>b_{n}),\ 1\leq i\leq n,

and

S_{nk}=\sum_{i=1}^{k}\Big(a_{ni}X_{ni}-E(a_{ni}X_{ni})\Big),\ 1\leq k\leq n.

Let $\varepsilon>0$ be arbitrary. For $n\geq A^{\alpha}$,

\begin{split}&P\left(\max_{1\leq k\leq n}\left|\sum_{i=1}^{k}a_{ni}X_{i}\right|>\varepsilon b_{n}\right)\leq P\left(\max_{1\leq k\leq n}|X_{k}|>b_{n}\right)+P\left(\max_{1\leq k\leq n}\left|\sum_{i=1}^{k}a_{ni}X_{ni}\right|>\varepsilon b_{n}\right)\\ &\leq P\Big(\max_{1\leq k\leq n}|X_{k}|>b_{n}\Big)+P\Big(\max_{1\leq k\leq n}|S_{nk}|>\varepsilon b_{n}-\sum_{i=1}^{n}\left|E(a_{ni}X_{ni})\right|\Big).\end{split} (3.6)

By the second half of (3.1) and Proposition 2.6, we have

\begin{split}\sum_{n\geq A^{\alpha}}n^{-1}P\Big(\max_{1\leq k\leq n}|X_{k}|>b_{n}\Big)&\leq\sum_{n\geq A^{\alpha}}n^{-1}\sum_{k=1}^{n}P\Big(|X_{k}|>b_{n}\Big)\\ &=\sum_{n\geq A^{\alpha}}P(|X|>b_{n})<\infty.\end{split} (3.7)

For $n\geq 1$, by the Cauchy-Schwarz inequality and (3.2),

\Big(\sum_{i=1}^{n}|a_{ni}|\Big)^{2}\leq n\Big(\sum_{i=1}^{n}a_{ni}^{2}\Big)\leq Cn^{2}. (3.8)

For $n\geq A^{\alpha}$, the first half of (3.1) and (3.8) imply that

\begin{split}\dfrac{\sum_{i=1}^{n}|E(a_{ni}X_{ni})|}{b_{n}}&\leq\dfrac{\sum_{i=1}^{n}|a_{ni}|\left(\left|EX_{i}I(|X_{i}|\leq b_{n})\right|+b_{n}P(|X_{i}|>b_{n})\right)}{b_{n}}\\ &\leq\dfrac{Cn\left(\big|E(XI(|X|\leq b_{n}))\big|+b_{n}P(|X|>b_{n})\right)}{b_{n}}\\ &=\dfrac{Cn\left(\big|E(XI(|X|>b_{n}))\big|+b_{n}P(|X|>b_{n})\right)}{b_{n}}\\ &\leq\dfrac{CnE|X|I(|X|>b_{n})}{b_{n}}.\end{split} (3.9)

For $n$ large enough and for $\omega\in(|X|>b_{n})$, we have

\begin{split}\dfrac{n}{b_{n}}&=\dfrac{n^{(\alpha-1)/\alpha}\tilde{L}^{\alpha-1}(n^{1/\alpha})}{\tilde{L}^{\alpha}(n^{1/\alpha})}\\ &=\dfrac{\left(n^{1/\alpha}\tilde{L}(n^{1/\alpha})\right)^{\alpha-1}L^{\alpha}\left(n^{1/\alpha}\tilde{L}(n^{1/\alpha})\right)}{\tilde{L}^{\alpha}(n^{1/\alpha})L^{\alpha}\left(n^{1/\alpha}\tilde{L}(n^{1/\alpha})\right)}\\ &\leq Cb_{n}^{\alpha-1}L^{\alpha}(b_{n})\leq C|X(\omega)|^{\alpha-1}L^{\alpha}(|X(\omega)|),\end{split} (3.10)

where we have applied (2.3) in the first inequality and the monotonicity of $x^{\alpha-1}L^{\alpha}(x)$ in the second inequality. Combining (3.9), (3.10), the second half of (3.1) and using Lemma 2.3 (ii), we have

\begin{split}\dfrac{\sum_{i=1}^{n}|E(a_{ni}X_{ni})|}{b_{n}}&\leq CE\left(|X|^{\alpha}L^{\alpha}(|X|)I\left(|X|>b_{n}\right)\right)\\ &\leq CE\left(|X|^{\alpha}L^{\alpha}(|X|+A)I\left(|X|>b_{n}\right)\right)\\ &\to 0\ \text{ as }n\to\infty.\end{split} (3.11)

From (3.6), (3.7) and (3.11), to obtain (3.3), it remains to show that

\sum_{n\geq A^{\alpha}}n^{-1}P\Big(\max_{1\leq j\leq n}|S_{nj}|>b_{n}\varepsilon/2\Big)<\infty. (3.12)

Set $b_{A^{\alpha}-1}=0$. For $B$ large enough, we have

\begin{split}&\sum_{n\geq A^{\alpha}}\dfrac{1}{n}P\Big(\max_{1\leq k\leq n}|S_{nk}|>b_{n}\varepsilon/2\Big)\leq\sum_{n\geq A^{\alpha}}\dfrac{4}{\varepsilon^{2}nb_{n}^{2}}E\Big(\max_{1\leq j\leq n}|S_{nj}|\Big)^{2}\\ &\leq\sum_{n\geq A^{\alpha}}\dfrac{4}{\varepsilon^{2}nb_{n}^{2}}\sum_{i=1}^{n}E\Big(a_{ni}X_{ni}-E(a_{ni}X_{ni})\Big)^{2}\\ &\leq\sum_{n\geq A^{\alpha}}\dfrac{4\left(\sum_{i=1}^{n}a_{ni}^{2}\right)\left(EX^{2}I(|X|\leq b_{n})+b_{n}^{2}P(|X|>b_{n})\right)}{\varepsilon^{2}nb_{n}^{2}}\\ &\leq C\sum_{n\geq A^{\alpha}}\left(\dfrac{E(X^{2}I(|X|\leq b_{n}))}{b_{n}^{2}}+P(|X|>b_{n})\right)\\ &\leq C+C\sum_{n\geq A^{\alpha}}\dfrac{1}{n^{2/\alpha}\tilde{L}^{2}(n^{1/\alpha})}\sum_{A^{\alpha}\leq i\leq n}E\left(X^{2}I(b_{i-1}<|X|\leq b_{i})\right)\\ &=C+C\sum_{i\geq B}\left(\sum_{n\geq i}\dfrac{1}{n^{2/\alpha}\tilde{L}^{2}(n^{1/\alpha})}\right)E\left(X^{2}I(b_{i-1}<|X|\leq b_{i})\right)\\ &\leq C+C\sum_{i\geq B}i^{(\alpha-2)/\alpha}\tilde{L}^{-2}(i^{1/\alpha})E\left(X^{2}I(b_{i-1}<|X|\leq b_{i})\right),\end{split} (3.13)

where we have used Chebyshev's inequality in the first inequality, the Kolmogorov maximal inequality (see Shao (Shao00, Theorem 2)) in the second inequality, (3.2) in the fourth inequality, Proposition 2.6 and the second half of (3.1) in the fifth inequality and Lemma 2.5 in the last inequality. For $\omega\in(b_{i-1}<|X|\leq b_{i})$, we have

\begin{split}i^{(\alpha-2)/\alpha}\tilde{L}^{-2}(i^{1/\alpha})&=\dfrac{i^{(\alpha-2)/\alpha}\tilde{L}^{\alpha-2}(i^{1/\alpha})L^{\alpha}\left(i^{1/\alpha}\tilde{L}(i^{1/\alpha})\right)}{\tilde{L}^{\alpha}(i^{1/\alpha})L^{\alpha}\left(i^{1/\alpha}\tilde{L}(i^{1/\alpha})\right)}\\ &\leq C\left(i^{1/\alpha}\tilde{L}(i^{1/\alpha})\right)^{\alpha-2}L^{\alpha}\left(i^{1/\alpha}\tilde{L}(i^{1/\alpha})\right)\\ &=Cb_{i}^{\alpha-2}L^{\alpha}\left(b_{i}\right)\leq C|X(\omega)|^{\alpha-2}L^{\alpha}\left(|X(\omega)|\right),\end{split} (3.14)

where we have applied (2.3) in the first inequality and the monotonicity of $x^{\alpha-2}L^{\alpha}(x)$ in the second inequality. Combining (3.13), (3.14), the second half of (3.1) and using Lemma 2.3 (ii), we have

\begin{split}\sum_{n\geq A^{\alpha}}\dfrac{1}{n}P\Big(\max_{1\leq k\leq n}|S_{nk}|>b_{n}\varepsilon/2\Big)&\leq C+CE\left(|X|^{\alpha}L^{\alpha}(|X|+A)\right)<\infty,\end{split} (3.15)

thereby proving (3.12).

The implication [(ii) $\Rightarrow$ (iii)] is immediate by letting $a_{ni}\equiv 1$. Now, we assume that (iii) holds. Since

b_{n}=n^{1/\alpha}\tilde{L}(n^{1/\alpha})\uparrow\infty\ \text{ and }\ \dfrac{b_{2n}}{b_{n}}=\dfrac{2^{1/\alpha}\tilde{L}((2n)^{1/\alpha})}{\tilde{L}(n^{1/\alpha})}\leq C,

it follows from the proof of (Sung14, Lemma 2.4) that (see (2.1) in Sung14 )

\lim_{k\to\infty}\dfrac{\max_{1\leq i\leq 2^{k+1}}|\sum_{j=1}^{i}X_{j}|}{b_{2^{k}}}=0\ \text{ a.s.} (3.16)

For $2^{k}\leq n<2^{k+1}$,

\dfrac{\max_{1\leq i\leq n}|\sum_{j=1}^{i}X_{j}|}{b_{n}}\leq\dfrac{\max_{1\leq i\leq 2^{k+1}}|\sum_{j=1}^{i}X_{j}|}{b_{2^{k}}}. (3.17)

Combining (3.16) and (3.17), we obtain (3.5).

Finally, we prove the implication [(iv)\Rightarrow(i)]. It follows from (3.5) that

\lim_{n\to\infty}\dfrac{\max_{1\leq k\leq n}\left|X_{k}\right|}{b_{n}}=0\ \text{ a.s.} (3.18)

Since $\{X,X_{n},n\geq 1\}$ is a sequence of negatively associated random variables, $\{(X_{n}>b_{n}),n\geq 1\}$ are pairwise negatively correlated events, and so are $\{(X_{n}<-b_{n}),n\geq 1\}$. By the generalized Borel-Cantelli lemma (see, e.g., Petrov ), it follows from (3.18) that

\sum_{n\geq A^{\alpha}}P(|X|>b_{n})=\sum_{n\geq A^{\alpha}}P(|X_{n}|>b_{n})<\infty (3.19)

which, by Proposition 2.6, is equivalent to

E\left(|X|^{\alpha}L^{\alpha}(|X|+A)\right)<\infty. (3.20)

From (3.20), we have $E|X|<\infty$. Since $|X-EX|\leq|X|+E|X|$ and $L(\cdot)$ is differentiable slowly varying, (3.20) further implies

E\left(|X-EX|^{\alpha}L^{\alpha}(|X-EX|+A)\right)<\infty. (3.21)

From (3.21) and the proof of [(i) $\Rightarrow$ (iv)], we have

\begin{split}\lim_{n\to\infty}\left(\dfrac{\sum_{i=1}^{n}X_{i}}{b_{n}}-\dfrac{n^{(\alpha-1)/\alpha}EX}{\tilde{L}(n^{1/\alpha})}\right)&=\lim_{n\to\infty}\dfrac{\sum_{i=1}^{n}(X_{i}-EX_{i})}{b_{n}}\\ &=0\ \text{ a.s.}\end{split} (3.22)

For the case where $1<\alpha<2$, we have from Remark 2.4 that $n^{(\alpha-1)/\alpha}/\tilde{L}(n^{1/\alpha})\rightarrow\infty$ as $n\to\infty$. For the case where $\alpha=1$, we have from (2.3) that $n^{(\alpha-1)/\alpha}/\tilde{L}(n^{1/\alpha})=1/\tilde{L}(n)\sim L(n\tilde{L}(n))\geq 1$. It thus follows from (3.5) and (3.22) that $E(X)=0$, i.e., the first half of (3.1) holds. The proof is completed.

By letting $L(x)\equiv 1$, Theorem 3.1 generalizes a seminal result of Baum and Katz BaKa on complete convergence for sums of independent random variables to weighted sums of negatively associated random variables. Recently, Miao et al. MMX proved the following proposition.

Proposition 3.2 (MMX , Theorem 2.1)

Let $0<\alpha<2$ and let $\{X,X_{n},n\geq 1\}$ be a strictly stationary negatively associated sequence with $E|X|^{\alpha}\log^{-\beta}(|X|+2)<\infty$ for some $\beta\geq 0$. In the case where $1<\alpha<2$, assume further that $EX=0$. Then for any $\delta\geq(1+\alpha^{2}-\alpha)/\alpha$, we have

\lim_{n\to\infty}\dfrac{\sum_{i=1}^{n}X_{i}}{n^{1/\alpha}(\log n)^{\beta(1-\alpha+\delta)}}=0\ \text{ a.s.} (3.23)

We observe that one only needs to verify (3.23) for the case where $\delta=(1+\alpha^{2}-\alpha)/\alpha$. In this case, (3.23) becomes

\lim_{n\to\infty}\dfrac{\sum_{i=1}^{n}X_{i}}{n^{1/\alpha}(\log n)^{\beta/\alpha}}=0\ \text{ a.s.} (3.24)

For the case where (i) $1<\alpha<2,\beta\geq 0$ or (ii) $\alpha=1,\beta=0$, by letting $L(x)\equiv\log^{-\beta/\alpha}(x+2)$, we see that (3.5) reduces to (3.24). Therefore, Proposition 3.2 is a special case of Theorem 3.1. For the case where $\alpha=1$ and $\beta>0$, we will show in the next section (Sect. 4) that Proposition 3.2 holds under a weaker condition that $\{X,X_{n},n\geq 1\}$ are pairwise negatively dependent.

Now, we consider another special case where $1<\alpha<2$, $\gamma>0$ and $L(x)=\log^{-1/\gamma}(x)$, $x\geq 2$. Then

b_{n}=n^{1/\alpha}L^{-1}(n^{1/\alpha})=\left(\dfrac{1}{\alpha}\right)^{1/\gamma}n^{1/\alpha}\log^{1/\gamma}(n),\ n\geq 2,

and we have the following corollary. This result comes close to a solution to the open question raised by Chen and Sung ChenSung14 which we mentioned in the Introduction.
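The short calculation behind this choice of $b_{n}$ (spelled out here for the reader's convenience) is as follows. Since $L(x)=\log^{-1/\gamma}(x)$ satisfies (2.4), its de Bruijn conjugate may be taken to be $\tilde{L}(x)=1/L(x)=\log^{1/\gamma}(x)$, and hence

b_{n}=n^{1/\alpha}\tilde{L}\left(n^{1/\alpha}\right)=n^{1/\alpha}\left(\log n^{1/\alpha}\right)^{1/\gamma}=n^{1/\alpha}\left(\dfrac{1}{\alpha}\log n\right)^{1/\gamma}=\left(\dfrac{1}{\alpha}\right)^{1/\gamma}n^{1/\alpha}\log^{1/\gamma}(n).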

Corollary 3.3

Let $1<\alpha<2$, $\gamma>0$ and $\{X,X_{n},\,n\geq 1\}$ be a sequence of negatively associated and identically distributed random variables. Then the following statements are equivalent.

(i) The random variable $X$ satisfies

E(X)=0\ \text{ and }\ E\left(|X|^{\alpha}/\log^{\alpha/\gamma}(|X|+2)\right)<\infty.

(ii) For every array of constants $\{a_{ni},n\geq 1,1\leq i\leq n\}$ satisfying (3.2), we have (1.2).

(iii) The strong law of large numbers

\lim_{n\to\infty}\dfrac{\max_{1\leq k\leq n}\left|\sum_{i=1}^{k}X_{i}\right|}{n^{1/\alpha}\log^{1/\gamma}n}=0\ \text{ a.s.}

holds.

Remark 3.4

When $\alpha\leq 2$, by Hölder's inequality, our condition (3.2) implies (1.3). From Corollary 3.3, we see that by slightly strengthening (1.3) to (3.2), we obtain the optimal moment condition for (1.2).

In the following example, we show that the moment condition provided by Chen and Sung (ChenSung14, Corollary 2.2) is violated, but Corollary 3.3 can still be applied.

Example 3.5

Let $1<\alpha<2$, $\gamma>0$ and $\{X,X_{n},n\geq 1\}$ be a sequence of negatively associated and identically distributed random variables with the common density function

f(x)=\dfrac{b}{|x|^{\alpha+1}\log^{1-\alpha/\gamma}(|x|+2)\log^{2}(\log(|x|+2))}I(|x|>1),

where $b$ is the normalization constant. Then

EX=0,\ E\left(|X|^{\alpha}/\log^{\alpha/\gamma}(|X|+2)\right)<\infty.

Therefore, by applying Corollary 3.3 with $a_{ni}\equiv 1$, we obtain

\sum_{n=1}^{\infty}n^{-1}P\left(\max_{1\leq k\leq n}\left|\sum_{i=1}^{k}X_{i}\right|>\varepsilon n^{1/\alpha}\log^{1/\gamma}(n)\right)<\infty\ \text{ for all }\ \varepsilon>0,

and

\lim_{n\to\infty}\dfrac{\max_{1\leq k\leq n}\left|\sum_{i=1}^{k}X_{i}\right|}{n^{1/\alpha}\log^{1/\gamma}n}=0\ \text{ a.s.}

In this example, we cannot apply Corollary 2.2 in Chen and Sung ChenSung14 since

E\left(|X|^{\alpha}\log^{1-\alpha/\gamma}(|X|+2)\right)=\infty. (3.25)

4 Strong Law of Large Numbers for Sequences of Pairwise Negatively Dependent and Identically Distributed Random Variables

For a sequence of i.i.d. random variables $\{X,X_{n},n\geq 1\}$, the classical Hartman–Wintner law of the iterated logarithm states that $E(X)=0$ and $E(X^{2})<\infty$ are necessary and sufficient conditions for the law of the iterated logarithm to hold.

By letting $L(x)\equiv 1$ in Theorem 3.1, we see that the Marcinkiewicz–Zygmund strong law of large numbers holds for sequences of negatively associated and identically distributed random variables under the optimal condition $E|X|^{\alpha}<\infty$. However, in Theorem 3.1, for the case where $\alpha=1$, we require $L(x)\geq 1$ for $x\geq A$. The reason is that we need $E|X|<\infty$ in the proof. The aim of this section is to establish the strong law of large numbers for the case where $E|X|=\infty$. It turns out that a similar strong law of large numbers still holds even for pairwise negatively dependent random variables. On the law of the iterated logarithm, this line of research was initiated by Feller Feller68a and completely developed by Kuelbs and Zinn KZinn83 , Einmahl Einmahl93 , and Einmahl and Li EL05 ; EL08 , where the authors proved general laws of the iterated logarithm for sequences of i.i.d. random variables with $E(X^{2})=\infty$. The normalizing sequences in the laws of the iterated logarithm in Einmahl and Li EL05 ; EL08 are also of the form $\sqrt{nL(n)}$, where $L(n)$ is a slowly varying increasing function.

It is worth noting that for pairwise i.i.d. random variables, the Marcinkiewicz–Zygmund strong law of large numbers holds under the optimal moment condition $E|X|^{\alpha}<\infty$, $1\leq\alpha<2$ (see Etemadi Etemadi81 for the case where $\alpha=1$ and Rio Rio95b for the case where $1<\alpha<2$). In the case where the random variables are pairwise independent, but not identically distributed, Csörgő et al. CTT proved that the Kolmogorov condition alone does not ensure the strong law of large numbers. Bose and Chandra BoseChandra , and Chandra and Goswami ChandraGoswami03 generalized the Marcinkiewicz–Zygmund-type law of large numbers in the pairwise independent case under the so-called Cesàro uniform integrability condition.

Shen et al. SZV established a strong law of large numbers for pairwise negatively dependent and identically distributed random variables under a very general condition. Precisely, Shen et al. (SZV, Theorems 3 and 5) proved that if $\{b_{n},n\geq 1\}$ is a sequence of positive constants with $b_{n}/n\uparrow\infty$ and if $\{X,X_{n},n\geq 1\}$ is a sequence of pairwise negatively dependent and identically distributed random variables, then $\sum_{n=1}^{\infty}P(|X|>b_{n})<\infty$ if and only if $\sum_{n=1}^{\infty}n^{-1}P\left(\max_{1\leq k\leq n}|\sum_{i=1}^{k}X_{i}|>b_{n}\varepsilon\right)<\infty$ for all $\varepsilon>0$. By combining this result of Shen et al. SZV with Proposition 2.6, we have the following theorem.
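In outline (a sketch of the reduction, modulo the monotonicity assumptions in Proposition 2.6, which can be arranged by Lemmas 2.2 and 2.3), Theorem 4.1 is obtained by taking $b_{n}=n\tilde{L}(n)$ in the result of Shen et al. SZV : then $b_{n}/n=\tilde{L}(n)\uparrow\infty$ by assumption, and Proposition 2.6 with $\alpha=1$ translates the tail condition into a moment condition:

\sum_{n\geq A}P\bigl(|X|>n\tilde{L}(n)\bigr)<\infty\ \text{ if and only if }\ E\left(|X|L(|X|+A)\right)<\infty.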

Theorem 4.1

Let $\{X,X_{n},\,n\geq 1\}$ be a sequence of pairwise negatively dependent and identically distributed random variables, and let $L(\cdot)$ be a slowly varying function defined on $[A,\infty)$ for some $A>0$ with $\tilde{L}(x)\uparrow\infty$ as $x\to\infty$. Then the following statements are equivalent.

(i) The random variable $X$ satisfies

E\left(|X|L(|X|+A)\right)<\infty. (4.1)

(ii)

\sum_{n\geq A}n^{-1}P\left(\max_{1\leq k\leq n}\left|\sum_{i=1}^{k}X_{i}\right|>\varepsilon n\tilde{L}(n)\right)<\infty\ \text{ for all }\varepsilon>0. (4.2)

(iii) The following strong law of large numbers holds:

\lim_{n\to\infty}\dfrac{\sum_{i=1}^{n}|X_{i}|}{n\tilde{L}(n)}=0\ \text{ a.s.} (4.3)

Martikainen Martikainen proved that if $\{X,X_{n},n\geq 1\}$ is a sequence of pairwise i.i.d. mean-zero random variables, then $E|X|\log^{\gamma}(|X|+2)<\infty$ for some $\gamma>0$ if and only if $\lim_{n\to\infty}\dfrac{\sum_{i=1}^{n}X_{i}}{n\log^{-\gamma}(n)}=0$ a.s. By letting $L(x)\equiv\log^{-\gamma}(x)$ for some $\gamma>0$ in Theorem 4.1, we obtain that for sequences of pairwise negatively dependent and identically distributed random variables, $E|X|\log^{-\gamma}(|X|+2)<\infty$ if and only if $\lim_{n\to\infty}\dfrac{\sum_{i=1}^{n}X_{i}}{n\log^{\gamma}(n)}=0$ a.s. Therefore, a very special case of Theorem 4.1 can be considered as a counterpart of the main result in Martikainen Martikainen . This special case also extends Proposition 3.2 (for the case where $\alpha=1,\beta>0$) to pairwise negatively dependent random variables.

Finally, we present the following example to illustrate Theorem 4.1. This example concerns a random variable appearing in the St. Petersburg game.

Example 4.2

The St. Petersburg game is defined as follows: a fair coin is tossed repeatedly until a head appears. If this happens at trial number $n$, you receive $2^{n}$ euros. The random variable $X$ behind the game has probability mass function

P(X=2^{n})=\dfrac{1}{2^{n}},\ n\geq 1. (4.4)

Since $E(X)=\infty$, a fair price for you to participate in the game would be impossible. To set the fee as a function of the number of games, Feller (Feller68, Chapter X) (see also Gut Gut04 ) proved that

\lim_{n\to\infty}\dfrac{\sum_{i=1}^{n}X_{i}}{n\log n}=1\ \text{ in probability}, (4.5)

where $\{X_{n},n\geq 1\}$ are independent random variables which have the same distribution as $X$.

By Theorem 2 of Chow and Robbins ChowRobbins , it is impossible to have almost sure convergence in (4.5). The natural question that comes to mind is what would be an “optimal” (or “smallest”) choice of $\{b_{n},n\geq 1\}$ in order for

\lim_{n\to\infty}\dfrac{\sum_{i=1}^{n}X_{i}}{b_{n}}=0\ \text{ a.s.}

to hold? It turns out that we can have such a strong law of large numbers even by requiring only that the random variables $\{X_{n},n\geq 1\}$ are pairwise negatively dependent and have the same distribution as $X$. To see this, let

L(x)=\left((\log|x|)(\log(\log(4+|x|)))^{1+\gamma}\right)^{-1},

where $\gamma$ is positive, arbitrarily small, but fixed; then $E\left(|X|L(|X|)\right)<\infty$. By Theorem 4.1, the Borel-Cantelli lemma and some easy computations (sketched after (4.7) below), we can show that

\lim_{n\to\infty}\dfrac{\sum_{i=1}^{n}X_{i}}{n(\log n)(\log(\log(4+n)))^{1+\gamma}}=0\ \text{ a.s.,} (4.6)

and

\limsup_{n\to\infty}\dfrac{\sum_{i=1}^{n}X_{i}}{n(\log n)(\log(\log(4+n)))\log(\log(\log(4+n)))}=\infty\ \text{ a.s.} (4.7)
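The easy computations referred to above can be sketched as follows (our own verification; recall that $\log$ is the base-$2$ logarithm, so $\log 2^{n}=n$ and $1/x\leq P(X>x)\leq 2/x$ for $x\geq 2$). First,

E\left(|X|L(|X|)\right)=\sum_{n=1}^{\infty}2^{n}\cdot\dfrac{1}{2^{n}}\cdot\dfrac{1}{n\left(\log(\log(4+2^{n}))\right)^{1+\gamma}}<\infty,

since $\log(\log(4+2^{n}))\sim\log n$ and the terms are therefore of order $1/(n\log^{1+\gamma}n)$; this gives (4.1) and hence (4.6) by Theorem 4.1. Conversely, writing $c_{n}=n(\log n)(\log(\log(4+n)))\log(\log(\log(4+n)))$, we have $P(X>Kc_{n})\geq 1/(Kc_{n})$ for every fixed $K>0$ and $\sum_{n}1/c_{n}=\infty$, so the generalized Borel-Cantelli lemma for pairwise negatively dependent events (see Petrov ) gives $\limsup_{n\to\infty}X_{n}/c_{n}=\infty$ a.s.; since the $X_{i}$ are positive, $\sum_{i=1}^{n}X_{i}\geq X_{n}$, and (4.7) follows.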
Remark 4.3

For the i.i.d. case, Csörgő and Simons CsorgoSimons obtained (4.6) and (4.7) by applying their strong law of large numbers for trimmed sums.

Acknowledgments. The authors are grateful to the referee for constructive, perceptive and substantial comments and suggestions which enabled us to greatly improve the paper. In particular, the referee’s suggestions on the asymptotic inverse of the regularly varying function xαL(x)x^{\alpha}L(x) enabled us to obtain Theorems 3.1 and 4.1 which are considerably more general than those of the initial version of the paper.

References

  • (1) Baum, L.E., Katz, M.: Convergence rates in the law of large numbers. Trans. Amer. Math. Soc. 120, 108–123 (1965)
  • (2) Bose, A., Chandra, T. K.: Cesàro uniform integrability and LpL_{p}-convergence. Sankhya Ser. A 55, 12–28 (1993)
  • (3) Bingham, N. H., Goldie, C. M., Teugels, J. L.: Regular variation (Encyclopedia of Mathematics and its Applications, 27). Cambridge University Press, Cambridge (1989)
  • (4) Block, H. W., Savits, T. H., Shaked, M.: Some concepts of negative dependence, Ann. Probab., 10, no. 3, 765–772 (1982)
  • (5) Bojanić, R., Seneta, E.: Slowly varying functions and asymptotic relations. J. Math. Anal. Appl. 34, 302–315 (1971)
  • (6) Burton, R. M., Dabrowski A. R., Dehling, H.: An invariance principle for weakly associated random vectors, Stochastic Process. Appl. 23, 301–306 (1986)
  • (7) Chandra, T. K., Goswami, A.: Cesàro α\alpha-integrability and laws of large numbers. I. J. Theoret. Probab. 16, no. 3, 655–669 (2003)
  • (8) Chen, P., Sung, S. H.: On the strong convergence for weighted sums of negatively associated random variables. Statist. Probab. Lett. 92, 45–52 (2014)
  • (9) Chow, Y. S., Robbins, H.: On sums of independent random variables with infinite moments and “fair” games. Proc. Nat. Acad. Sci. U.S.A. 47, 330–335 (1961)
  • (10) Csörgő, S., Simons, G.: A strong law of large numbers for trimmed sums, with applications to generalized St. Petersburg games. Statist. Probab. Lett. 26, no. 1, 65–73 (1996)
  • (11) Csörgő, S., Tandori, K., Totik, V.: On the strong law of large numbers for pairwise independent random variables, Acta Math. Hungar., 42, no. 3–4, 319–330 (1983)
  • (12) Dedecker, J., Merlevède, F.: Convergence rates in the law of large numbers for Banach-valued dependent variables. Theory Probab. Appl. 52, no. 3, 416–438 (2008)
  • (13) Ebrahimi, N., Ghosh, M.: Multivariate negative dependence. Comm. Statist. A - Theory Methods. 10, no. 4, 307–337 (1981)
  • (14) Einmahl, U.: Toward a general law of the iterated logarithm in Banach space. Ann. Probab. 21, no. 4, 2012–2045 (1993)
  • (15) Einmahl, U., Li, D-L.: Some results on two-sided LIL behavior. Ann. Probab. 33, no. 4, 1601–1624 (2005)
  • (16) Einmahl, U., Li, D-L.: Characterization of LIL behavior in Banach space. Trans. Amer. Math. Soc. 360, no. 12, 6677–6693 (2008)
  • (17) Etemadi, N.: An elementary proof of the strong law of large numbers. Z. Wahrsch. Verw. Gebiete 55, no. 1, 119–122 (1981)
  • (18) Feller, W.: An extension of the law of the iterated logarithm to variables without variance. J. Math. Mech. 18, 343–355 (1968)
  • (19) Feller, W.: An Introduction to Probability Theory and Its Applications, Vol 1, 3rd edn. John Wiley, New York (1968)
  • (20) Galambos, J., Seneta, E.: Regularly varying sequences. Proc. Amer. Math. Soc. 41, 110–116 (1973)
  • (21) Gut, A.: An extension of the Kolmogorov-Feller weak law of large numbers with an application to the St. Petersburg game. J. Theoret. Probab. 17, no. 3, 769–779 (2004)
  • (22) Gut, A., Stadtmüller, U.: On the strong law of large numbers for delayed sums and random fields. Acta Math. Hungar. 129, no. 1-2, 182–203 (2010)
  • (23) Heath, D., Resnick, S., Samorodnitsky, G.: Heavy tails and long range dependence in ON/OFF processes and associated fluid models. Math. Oper. Res. 23, 145–165 (1998)
  • (24) Hechner, F., Heinkel, B.: The Marcinkiewicz–Zygmund LLN in Banach spaces: a generalized martingale approach. J. Theoret. Probab. 23, no. 2, 509–522 (2010)
  • (25) Hien, N. T. T., Thanh, L. V., Van, V. T. H.: On the negative dependence in Hilbert spaces with applications. Appl. Math. 64, no. 1, 45–59 (2019)
  • (26) Jessen, A. H., Mikosch, T.: Regularly varying functions. Publ. Inst. Math. (Beograd) (N.S.) 80, 171–192 (2006)
  • (27) Jing, B. Y., Liang, H. Y.: Strong limit theorems for weighted sums of negatively associated random variables. J. Theoret. Probab. 21, 890–909 (2008)
  • (28) Joag-Dev, K., Proschan, F.: Negative association of random variables, with applications. Ann. Statist. 11, 286–295 (1983)
  • (29) Ko, M. H., Kim T. S., Han, K. H.: A note on the almost sure convergence for dependent random variables in a Hilbert space, J. Theoret. Probab. 22, 506–513 (2009)
  • (30) Kuelbs, J., Zinn, J.: Some results of LIL behavior. Ann. Probab. 11, no. 3, 506–557 (1983)
  • (31) Lehmann, E. L.: Some concepts of dependence, Ann. Math. Statist., 37, 1137–1153 (1966)
  • (32) Martikainen, A. I.: A remark on the strong law of large numbers for sums of pairwise independent random variables. J. Math. Sci. 75, no. 5, 1944–1946 (1995)
  • (33) Matsumoto, K., Nakata, T.: Limit theorems for a generalized Feller game. J. Appl. Probab. 50, no. 1, 54–63 (2013)
  • (34) Matula, P.: A note on the almost sure convergence of sums of negatively dependent random variables. Statist. Probab. Lett. 15, 209–213 (1992)
  • (35) Miao, Y., Mu, J., Xu, J.: An analogue for Marcinkiewicz–Zygmund strong law of negatively associated random variables. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM. 111, no. 3, 697–705 (2017)
  • (36) Mikosch, T., Resnick, S., Rootzén, H., Stegeman, A.: Is network traffic approximated by stable Lévy motion or fractional Brownian motion? Ann. Appl. Probab. 12, 23–68 (2002)
  • (37) Petrov, V.V.: A note on the Borel-Cantelli lemma. Stat. Prob. Lett., 58, 283–286 (2002)
  • (38) Rio, E.: A maximal inequality and dependent Marcinkiewicz–Zygmund strong laws. Ann. Probab. 23, no. 2, 918–937 (1995)
  • (39) Rio, E.: Vitesses de convergence dans la loi forte pour des suites dépendantes. (French) [Rates of convergence in the strong law for dependent sequences] C. R. Acad. Sci. Paris Sér. I Math. 320, no. 4, 469–474 (1995)
  • (40) Seneta, E.: An interpretation of some aspects of Karamata’s theory of regular variation. Publ. Inst. Math. (Beograd) (N.S.), 15, 111–119 (1973)
  • (41) Seneta, E.: Regularly varying functions. Lecture Notes in Mathematics, Vol. 508. Springer-Verlag, Berlin-New York (1976)
  • (42) Shao, Q. M.: A comparison theorem on moment inequalities between negatively associated and independent random variables. J. Theoret. Probab. 13, 343–356 (2000)
  • (43) Shen, A., Zhang, Y., Volodin, A.: On the strong convergence and complete convergence for pairwise NQD random variables. Abstr. Appl. Anal., Art. ID 949608, 7 pp (2014)
  • (44) Sung, S. H.: On the strong convergence for weighted sums of random variables. Statist. Papers. 52, no. 2, 447–454 (2011)
  • (45) Sung, S. H.: Marcinkiewicz–Zygmund type strong law of large numbers for pairwise i.i.d. random variables. J. Theoret. Probab. 27, no. 1, 96–106 (2014)
  • (46) Szewczak, Z.: On Marcinkiewicz–Zygmund laws. J. Math. Anal. Appl., 375, no. 2, 738–744 (2011)
  • (47) Thanh, L. V.: On the almost sure convergence for dependent random vectors in Hilbert spaces, Acta Math. Hungar. 139, no. 3, 276–285 (2013)