On the maxima and sums of homogeneous Gaussian random fields

Zhongquan Tan (a,b,*), Linjun Tang (b)

(a) School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, PR China
(b) College of Mathematics, Physics and Information Engineering, Jiaxing University, Jiaxing 314001, PR China

(*) Corresponding author at: School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, PR China. E-mail address: [email protected] (Z. Tan).

Research supported by the National Science Foundation of China (No. 11501250), the Natural Science Foundation of Zhejiang Province of China (Nos. LQ14A010012, LY15A010019), the China Postdoctoral Science Foundation (No. 2016M600460) and the Postdoctoral Research Program of Zhejiang Province (No. 507100-X81601).

Article history: received 16 November 2016; received in revised form 25 January 2017; accepted 25 January 2017.

Abstract. Let $\{X_n\}_{n\ge 1}$ be a homogeneous Gaussian random field with mean 0 and variance 1. Under some conditions on the correlation function of the field, the joint limiting distribution of the maximum and the sum of the Gaussian random field is obtained. The result is also extended to continuous time Gaussian random fields.

MSC: primary 62G70; secondary 60G60

Keywords: Maxima; Sums; Gaussian random field; Weakly and strongly dependent

1. Introduction

Let $\{X_n\}_{n\ge 1}$ be a sequence of random variables and denote by $M_n = \max\{X_k,\ 1 \le k \le n\}$ and $S_n = \sum_{k=1}^{n} X_k$ the partial maximum and the partial sum, respectively. The joint limiting distributions of the partial maxima and partial sums of a sequence of random variables have been studied by many authors, since they play an important role in theoretical and applied probability. Chow and Teugels (1979) first studied the joint asymptotic behavior of the partial sum and maximum for independent and identically distributed random variables. Anderson and Turkman (1991a,b) considered the strong mixing case under some additional conditions, which were later removed by Hsing (1995). Based on the limit results for the maxima from Berman (1974) and Mittal and Ylvisaker (1975), many authors dealt with this problem for Gaussian sequences under various dependence conditions. Suppose that $\{X_n\}_{n\ge 1}$ is a sequence of stationary Gaussian random variables with correlation function $r_n$ satisfying the following condition:
\[
r_n \ln n \to \gamma \in [0, \infty] \quad \text{as } n \to \infty. \tag{1}
\]
For $\gamma \in [0, \infty)$, Ho and Hsing (1996) and McCormick and Qi (2000) showed that for any $x, y \in \mathbb{R}$,
\[
\lim_{n\to\infty} P\Big( a_n (M_n - b_n) \le x,\ \frac{S_n}{\sqrt{\operatorname{Var}(S_n)}} \le y \Big)
= \int_{-\infty}^{y} \exp\big( -e^{-x-\gamma+\sqrt{2\gamma}\,z} \big)\, \phi(z)\, dz, \tag{2}
\]
where $\phi(\cdot)$ stands for the standard normal density function and
\[
a_n = (2\ln n)^{1/2}, \qquad b_n = (2\ln n)^{1/2} - \frac{\ln\ln n + \ln 4\pi}{2(2\ln n)^{1/2}}. \tag{3}
\]
For the case $\gamma = \infty$, if $r_n$ is convex with $r_n = o(1)$ and $(r_n \ln n)^{-1}$ is monotone with $(r_n \ln n)^{-1} = o(1)$, McCormick and Qi (2000) and Peng and Nadarajah (2002) proved that
\[
\lim_{n\to\infty} P\Big( r_n^{-1/2}\big(M_n - (1-r_n)^{1/2} b_n\big) \le x,\ \frac{S_n}{\sqrt{\operatorname{Var}(S_n)}} \le y \Big) = \Phi(\min\{x, y\}). \tag{4}
\]
It is worth pointing out that the above conditions were provided by Mittal and Ylvisaker (1975), who derived the limit distribution of the maxima; the convexity condition may be replaced by $r_n$ nonincreasing (see McCormick and Mittal, 1976), but at a considerable computational expense. In the literature, condition (1) with $\gamma = 0$ is referred to as weak dependence or Berman's condition, and the sequence $X$ is accordingly called a weakly dependent Gaussian sequence. In analogy, a sequence $X$ whose correlation function satisfies condition (1) with $\gamma > 0$ is called a strongly dependent Gaussian sequence. For some recent work on the joint limiting distribution of partial sums and maxima for Gaussian sequences, we refer to Tan and Yang (2015) and the references therein.

In this work, we are interested in the limit theorem for the maxima and partial sums of weakly and strongly dependent Gaussian random fields. It is well known that Gaussian random fields play a very important role in many applied sciences, such as image analysis, atmospheric sciences, geostatistics, hydrology and agriculture, among others; see the monograph of Adler and Taylor (2007) for details.

Throughout this paper, denote the set of all positive integers and all non-negative integers by $\mathbb{Z}$ and $\mathbb{N}$, respectively, and let $\mathbb{Z}^d$ and $\mathbb{N}^d$ be the $d$-dimensional product spaces of $\mathbb{Z}$ and $\mathbb{N}$, respectively, where $d \ge 2$. We only consider the case $d = 2$, since the results for higher dimensions follow by analogous arguments. For $i = (i_1, i_2)$ and $j = (j_1, j_2)$, the expressions $i \le j$, $i - j$ and $i^j$ mean $i_k \le j_k$ for $k = 1, 2$, $(i_1 - j_1, i_2 - j_2)$ and $(i_1^{j_1}, i_2^{j_2})$, respectively; $|i|$ and $n \to \infty$ mean $(|i_1|, |i_2|)$ and $n_k \to \infty$ for $k = 1, 2$, respectively. Let $I_n = \{j \in \mathbb{Z}^2 : 1 \le j_i \le n_i,\ i = 1, 2\}$ and $J_n = \{j \in \mathbb{N}^2 : 0 \le j_i \le n_i,\ i = 1, 2\}$. Let $\chi_E$ be the number of elements of any subset $E$ of $\mathbb{Z}^2$, let $\chi_k = \prod_{i : k_i \ne 0} |k_i|$ for $k = (k_1, k_2)$, and set $\chi_0 = 1$; note that $\chi_k = \chi_{I_k}$ when $k \in \mathbb{Z}^2$. Let $\Phi(\cdot)$ and $\phi(\cdot)$ denote the cumulative distribution function of a standard Gaussian random variable and its density function, respectively.

Choi (2002) studied the extremes of stationary Gaussian random fields and obtained a Gumbel-type extreme limit theorem, and Pereira (2010) extended Choi's results to non-stationary Gaussian random fields. The above mentioned work was extended by Tan (2013) to more general cases. Let $\{X_n\}_{n\ge 1}$ be a homogeneous Gaussian random field on $\mathbb{Z}^2$ with mean 0, variance 1 and covariance functions $r_n = r_{j,j+n} = E X_j X_{j+n}$ satisfying the following conditions:

A1: $\lim_{n\to\infty} r_n \ln \chi_n = r \in [0, \infty)$, and both $r_{(n_1,0)} \ln n_1$ and $r_{(0,n_2)} \ln n_2$ are bounded;
A2: $|r_n| < 1$ for $n \ne 0$.

Under the above two conditions, Tan (2013) proved the following result.

Theorem 1.1. Let $\{X_n\}_{n\ge 1}$ be a homogeneous Gaussian random field on $\mathbb{Z}^2$ with mean 0 and variance 1. Assume that the covariance functions $r_n$ satisfy A1 and A2. Then
\[
\lim_{n\to\infty} P\big( a_n(M(I_n) - b_n) \le x \big)
= \int_{-\infty}^{+\infty} \exp\big( -\exp(-x - r + \sqrt{2r}\,z) \big)\, \phi(z)\, dz, \tag{5}
\]
where $M(I_n) = \max_{i\in I_n} X_i$, $a_n = \sqrt{2\ln \chi_n}$ and $b_n = a_n - \frac{\ln\ln \chi_n + \ln(4\pi)}{2 a_n}$.

The case $r = \infty$ has been studied by McCormick (1978) and Mathew and McCormick (1993), but the conditions there are not simple extensions of those provided by Mittal and Ylvisaker (1975). Moreover, isotropy restricts one to a complicated class of correlation functions, and only a special class of convex isotropic covariance functions having an integral representation introduced by Mittal (1976) is tractable; see McCormick (1978) and Mathew and McCormick (1993) for more details. For further results concerning extremes of Gaussian random fields we refer the reader to Piterbarg (1996), Pereira (2010), Choi (2010), Tan and Wang (2014), Dębicki et al. (2015) and Pereira and Tan (in press). For other related results on random fields, we refer to Pereira and Ferreira (2006), Pereira (2009) and Ferreira and Pereira (2012).

In this paper, we derive the joint limiting distribution of the maximum and the partial sum for a Gaussian random field satisfying the conditions of Theorem 1.1, and we extend the result to continuous time Gaussian random fields. The paper is organized as follows. Section 2 displays the main result, some extensions of the main result are discussed in Section 3, and all the proofs are relegated to Section 4.
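The limit laws in (2) and (5) (and in (6) and (8) below) all share the mixed Gumbel form $\int \exp(-e^{-x-r+\sqrt{2r}\,z})\phi(z)\,dz$. As a quick numerical illustration (ours, not part of the original paper; the helper name `limit_cdf` and the parameter values are arbitrary), the integral is easy to evaluate by quadrature, and for $r = 0$ it factorizes into the Gumbel law times $\Phi(y)$:

```python
# A small numerical sketch (ours, not from the paper): evaluating the mixed
# Gumbel limit F(x, y; r) = int_{-inf}^{y} exp(-e^{-x-r+sqrt(2r) z}) phi(z) dz.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def limit_cdf(x: float, y: float, r: float) -> float:
    integrand = lambda z: np.exp(-np.exp(-x - r + np.sqrt(2.0 * r) * z)) * norm.pdf(z)
    value, _abs_err = quad(integrand, -np.inf, y)
    return value

x, y = 0.5, 1.0
# Weak dependence (r = 0): the integral factorizes as exp(-e^{-x}) * Phi(y),
# i.e. the maximum and the sum are asymptotically independent.
print(limit_cdf(x, y, 0.0), np.exp(-np.exp(-x)) * norm.cdf(y))
# Strong dependence (r > 0): no such factorization.
print(limit_cdf(x, y, 0.5))
```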


2. Main result

Now, we state our main result.

Theorem 2.1. Let $\{X_n\}_{n\ge 1}$ be a homogeneous Gaussian random field on $\mathbb{Z}^2$ with mean 0 and variance 1. Assume that the covariance functions $r_n = E X_j X_{j+n}$ satisfy A1 and A2. Then
\[
\lim_{n\to\infty} P\Big( a_n(M(I_n) - b_n) \le x,\ \frac{S(I_n)}{\sqrt{\operatorname{Var}(S(I_n))}} \le y \Big)
= \int_{-\infty}^{y} \exp\big( -\exp(-x - r + \sqrt{2r}\,z) \big)\, \phi(z)\, dz, \tag{6}
\]
where $S(I_n) = \sum_{k\in I_n} X_k$ and $a_n, b_n$ are defined in Theorem 1.1.

Remark 2.1. (i) As in the one-dimensional case, in the weakly dependent case the maximum and the sum are asymptotically independent, while in the strongly dependent case they are asymptotically dependent. (ii) Letting $y \to \infty$ in Theorem 2.1, we recover Theorem 1.1, so Theorem 2.1 is an extension of Theorem 1.1. (iii) To study the case $r = \infty$, we need more precise arguments on the covariance functions; this will be done, based on the work of McCormick (1978) and Mathew and McCormick (1993), in a future paper.
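Theorem 2.1 lends itself to a direct Monte Carlo check in the simplest admissible case $r_n = 0$ for $n \ne 0$, an i.i.d. field, for which A1 and A2 hold with $r = 0$ and $\operatorname{Var}(S(I_n)) = \chi_n$. The sketch below is ours (grid size, levels and replication count are arbitrary); agreement with the limit is only rough, since the convergence is logarithmic in $\chi_n$:

```python
# Monte Carlo sketch (ours, not the authors' code) of Theorem 2.1 for an
# i.i.d. standard Gaussian field, the simplest case satisfying A1-A2 (r = 0).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n1, n2 = 100, 100
chi_n = n1 * n2
a_n = np.sqrt(2.0 * np.log(chi_n))
b_n = a_n - (np.log(np.log(chi_n)) + np.log(4.0 * np.pi)) / (2.0 * a_n)

x, y, reps, hits = 0.5, 1.0, 2000, 0
for _ in range(reps):
    X = rng.standard_normal((n1, n2))
    M = X.max()                             # M(I_n)
    S = X.sum() / np.sqrt(chi_n)            # S(I_n)/sqrt(Var(S(I_n))), Var = chi_n here
    hits += (a_n * (M - b_n) <= x) and (S <= y)

print(hits / reps)                          # empirical joint probability
print(np.exp(-np.exp(-x)) * norm.cdf(y))    # limiting value exp(-e^{-x}) Phi(y)
```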

3. Extension

In this section, we extend Theorem 2.1 to continuous time Gaussian random fields. Let $\{X(t) : t \ge 0\}$ be a homogeneous Gaussian field with mean 0, variance 1 and covariance function $r(t) = \operatorname{Cov}(X(t), X(0))$. We assume that the covariance function satisfies the following conditions:

B1: $r(t) = 1 - |t_1|^{\alpha_1} - |t_2|^{\alpha_2} + o(|t_1|^{\alpha_1} + |t_2|^{\alpha_2})$ as $t \to 0$, with $\alpha_i \in (0, 2]$;
B2: $r(t) < 1$ for $t \ne 0$;
B3: $\lim_{t\to\infty} r(t) \ln \chi_t = r \in [0, \infty)$, and both $r(t_1, 0)\ln t_1$ and $r(0, t_2)\ln t_2$ are bounded.

If conditions B1 and B2 hold, Theorem 7.1 of Piterbarg (1996) shows that, for any fixed $h > 0$,
\[
P\Big( \max_{t\in[0,h_1]\times[0,h_2]} X(t) > u \Big)
= H_{\alpha_1} H_{\alpha_2}\, h_1 h_2\, u^{2/\alpha_1 + 2/\alpha_2} (1 - \Phi(u))(1 + o(1)), \tag{7}
\]
as $u \to \infty$, where $H_{\alpha_i}$, $i = 1, 2$, are the Pickands constants, defined by $H_\alpha = \lim_{\lambda\to\infty} H_\alpha(\lambda)/\lambda$ with
\[
H_\alpha(\lambda) = E \exp\Big( \max_{t\in[0,\lambda]} \big( \sqrt{2}\, B_{\alpha/2}(t) - t^{\alpha} \big) \Big)
\]
and $B_H$ a fractional Brownian motion, that is, a zero mean Gaussian process with stationary increments such that $E B_H^2(t) = |t|^{2H}$. It is well known that $0 < H_\alpha < \infty$; see e.g. Pickands (1969), Leadbetter et al. (1983) and Piterbarg (1996).

For $T \ge 0$, denote $I_T = [0, T]$ and let
\[
M(I_T) = \max_{t\in I_T} X(t), \qquad S(I_T) = \int_{I_T} X(t)\, dt.
\]

Theorem 3.1. Let $\{X(t) : t \ge 0\}$ be a homogeneous Gaussian field with covariance function $r(t)$ satisfying B1, B2 and B3. Then we have
\[
\lim_{T\to\infty} P\Big( a_T(M(I_T) - b_T) \le x,\ \frac{S(I_T)}{\sqrt{\operatorname{Var}(S(I_T))}} \le y \Big)
= \int_{-\infty}^{y} \exp\big( -\exp(-x - r + \sqrt{2r}\,z) \big)\, \phi(z)\, dz, \tag{8}
\]
where $a_T = \sqrt{2\ln\chi_T}$ and $b_T = a_T + a_T^{-1} \ln\big( (2\pi)^{-1/2} H_{\alpha_1} H_{\alpha_2} (2\ln\chi_T)^{1/\alpha_1 + 1/\alpha_2 - 1/2} \big)$.
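Unlike the discrete case, the constant $b_T$ involves the Pickands constants, which are known in closed form only for $\alpha = 1$ ($H_1 = 1$) and $\alpha = 2$ ($H_2 = 1/\sqrt{\pi}$). A small sketch (ours; it assumes the reconstructed form of $b_T$ above, and the parameter values are arbitrary) of the norming constants for such a field:

```python
# Sketch (ours) of the norming constants a_T, b_T of Theorem 3.1 for
# alpha_1 = 1, alpha_2 = 2, the two cases with known Pickands constants:
# H_1 = 1 and H_2 = 1/sqrt(pi).
import numpy as np

def norming_constants(T1, T2, alpha1, alpha2, H1, H2):
    chi_T = T1 * T2                       # measure of the index set [0,T1]x[0,T2]
    a_T = np.sqrt(2.0 * np.log(chi_T))
    b_T = a_T + np.log((2.0 * np.pi) ** (-0.5) * H1 * H2
                       * (2.0 * np.log(chi_T)) ** (1.0 / alpha1 + 1.0 / alpha2 - 0.5)) / a_T
    return a_T, b_T

print(norming_constants(1e4, 1e4, 1.0, 2.0, 1.0, 1.0 / np.sqrt(np.pi)))
```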


4. Proofs

In this section, we give the proofs of Theorems 2.1 and 3.1. We split each proof into two cases, the weakly dependent case and the strongly dependent case. In the former case we use some ideas from McCormick and Qi (2000), while in the latter case we use some techniques from Ho and Hsing (1996). For simplicity, let $u_n = u_n(x) = a_n^{-1}x + b_n$ and $u_T = u_T(x) = a_T^{-1}x + b_T$.

4.1. Proof of Theorem 2.1

We first prove that the maxima and the sum are asymptotically independent in the weakly dependent case. To that end, we follow some procedures of McCormick and Qi (2000). Recall that $I_n = \{j \in \mathbb{Z}^2 : 1 \le j_i \le n_i,\ i = 1, 2\}$, $J_n = \{j \in \mathbb{N}^2 : 0 \le j_i \le n_i,\ i = 1, 2\}$, $S(I_n) = \sum_{k\in I_n} X_k$ and $M(I_n) = \max_{k\in I_n} X_k$. Note that under the conditions A1 and A2 we have
\[
\lim_{n\to\infty} \frac{\ln \chi_n}{\chi_n} \sum_{k\in J_n} |r_k| = 0,
\]
thus there exists a sequence $m(n)$ such that
\[
\lim_{n\to\infty} m(n) = \infty
\quad\text{and}\quad
\lim_{n\to\infty} \frac{m(n) \ln \chi_n}{\chi_n} \sum_{k\in J_n} |r_k| = 0.
\]
Let
\[
\sigma_n = \sqrt{\operatorname{Var}(S(I_n))},
\qquad
\delta_n(k) = \frac{E\big(X_k S(I_n)\big)}{\sigma_n} = \frac{\sum_{i\in I_n} r_{k-i}}{\sigma_n}, \quad k \in I_n,
\]
and define the sets
\[
I_n^{+} = \{k \in I_n : \delta_n(k) \ge 0\} \quad\text{and}\quad I_n^{-} = \{k \in I_n : \delta_n(k) < 0\}.
\]
Note that $I_n^{+} \ne \emptyset$. Let
\[
s_n^{+} = \sum_{k\in I_n^{+}} \delta_n(k) \quad\text{and}\quad s_n^{-} = \sum_{k\in I_n^{-}} \delta_n(k).
\]
Next define
\[
K_n^{+} = \Big\{ k \in I_n^{+} : \frac{\delta_n(k)}{s_n^{+}} \ge \frac{\ln m(n)}{\chi_n} \Big\}
\quad\text{and}\quad
K_n^{-} = \Big\{ k \in I_n^{-} : \frac{\delta_n(k)}{s_n^{-}} \ge \frac{\ln m(n)}{\chi_n} \Big\}.
\]
Finally, set
\[
K_n = K_n^{+} \cup K_n^{-} \quad\text{and}\quad Q_n = I_n \setminus K_n. \tag{9}
\]

Lemma 4.1. For the $Q_n$ defined in (9), we have
\[
\Big| P\Big( \max_{k\in Q_n} X_k \le u_n \Big) - P\Big( \max_{k\in I_n} X_k \le u_n \Big) \Big|
\le P\Big( \max_{k\in K_n} X_k \ge u_n \Big) \to 0
\]
as $n \to \infty$.

Proof. By the stationarity of $X_k$, we have
\[
P\Big( \max_{k\in K_n} X_k \ge u_n \Big) \le \sharp(K_n)\,(1 - \Phi(u_n)),
\]
where $\sharp(\cdot)$ denotes cardinality. Observe that
\[
\sharp(K_n^{+})\, \frac{\ln m(n)}{\chi_n} \le \sum_{k\in K_n^{+}} \frac{\delta_n(k)}{s_n^{+}} \le 1
\quad\text{and}\quad
\sharp(K_n^{-})\, \frac{\ln m(n)}{\chi_n} \le \sum_{k\in K_n^{-}} \frac{\delta_n(k)}{s_n^{-}} \le 1.
\]
Therefore
\[
\sharp(K_n) \le \frac{2\chi_n}{\ln m(n)}.
\]
Thus we have
\[
P\Big( \max_{k\in K_n} X_k \ge u_n \Big) \le \frac{2\chi_n}{\ln m(n)}\, (1 - \Phi(u_n)) = o(1)
\]
as $n \to \infty$, since $\chi_n(1 - \Phi(u_n)) \to e^{-x}$ as $n \to \infty$ by the choice of $a_n$ and $b_n$. This completes the proof. $\square$

Next, we produce an intermediary sequence sufficiently close to $X_k$ but independent of $S(I_n)$. For $k \in Q_n$ define constants $c_n(k)$ by
\[
c_n(k) = \frac{\delta_n(k)}{s_n^{+}}, \quad k \in I_n^{+} \cap Q_n,
\]
and, if $I_n^{-} \ne \emptyset$,
\[
c_n(k) = \frac{\delta_n(k)}{s_n^{-}}, \quad k \in I_n^{-} \cap Q_n.
\]
Define $\{Y_k,\ k \in Q_n\}$ by
\[
Y_k = X_k - c_n(k) S(I_n^{+}) \ \text{ if } k \in I_n^{+} \cap Q_n
\quad\text{and}\quad
Y_k = X_k - c_n(k) S(I_n^{-}) \ \text{ if } k \in I_n^{-} \cap Q_n.
\]
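The point of this construction is that each $Y_k$ is exactly uncorrelated with $S(I_n)$ (verified at the start of the proof of Theorem 2.1 below), while $c_n(k)$ is small on $Q_n$. A numerical sketch (ours; the exponential covariance is a toy choice, not from the paper) checking the decorrelation identity:

```python
# Numerical check (ours) that Y_k = X_k - c_n(k) S(I_n^+) is uncorrelated with
# S(I_n), for the toy covariance r_k = exp(-|k_1| - |k_2|) on a small grid.
# Here every delta_n(k) > 0, so I_n^+ = I_n and S(I_n^+) = S(I_n).
import numpy as np
from itertools import product

n1, n2 = 6, 6
sites = list(product(range(1, n1 + 1), range(1, n2 + 1)))
r = lambda k1, k2: np.exp(-abs(k1) - abs(k2))        # toy covariance, r_0 = 1

sigma2 = sum(r(k[0] - l[0], k[1] - l[1]) for k in sites for l in sites)
sigma = np.sqrt(sigma2)                               # sigma_n
delta = {k: sum(r(k[0] - i[0], k[1] - i[1]) for i in sites) / sigma for k in sites}
s_plus = sum(delta.values())                          # s_n^+ (I_n^+ = I_n here)

for k in sites[:3]:
    c = delta[k] / s_plus                             # c_n(k)
    # E(Y_k S(I_n)) = E(X_k S(I_n)) - c_n(k) E(S(I_n^+) S(I_n))
    #              = sigma_n delta_n(k) - c_n(k) s_n^+ sigma_n = 0
    print(sigma * delta[k] - c * (s_plus * sigma))    # ~ 0 up to rounding
```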

Lemma 4.2. For $\{Y_k,\ k \in Q_n\}$ defined as above, we have
\[
a_n \Big[ \max_{k\in Q_n} X_k - \max_{k\in Q_n} Y_k \Big] \xrightarrow{L_1} 0
\]
as $n \to \infty$.

Proof. We have
\[
\Big| \max_{k\in Q_n} X_k - \max_{k\in Q_n} Y_k \Big|
\le \max_{k\in Q_n} |X_k - Y_k|
\le \max_{k\in Q_n} |c_n(k)| \big[ |S(I_n^{+})| + |S(I_n^{-})| \big]
\le \frac{\ln m(n)}{\chi_n} \big[ |S(I_n^{+})| + |S(I_n^{-})| \big].
\]
Therefore, using the above fact and the Cauchy–Schwarz inequality,
\begin{align*}
E\Big| \max_{k\in Q_n} X_k - \max_{k\in Q_n} Y_k \Big|
&\le \frac{\ln m(n)}{\chi_n} \big[ \operatorname{Var}^{1/2}(S(I_n^{+})) + \operatorname{Var}^{1/2}(S(I_n^{-})) \big] \\
&\le 2\, \frac{\ln m(n)}{\chi_n} \Big( \sum_{k\in I_n} \sum_{l\in I_n} |r_{k-l}| \Big)^{1/2}
\le 2\, \frac{\ln m(n)}{\chi_n} \Big( 2\chi_n \sum_{k\in J_n} |r_k| \Big)^{1/2} \\
&= O(1)\, \frac{\ln m(n)}{\sqrt{m(n)}} \Big( \frac{1}{\ln \chi_n}\, \frac{m(n)\ln\chi_n}{\chi_n} \sum_{k\in J_n} |r_k| \Big)^{1/2}
= o(a_n^{-1}),
\end{align*}
which completes the proof of the lemma. $\square$

Proof of Theorem 2.1 for the weakly dependent case. We first observe that, for $k \in Q_n$,
\[
E\big( Y_k S(I_n) \big) = E\big( X_k S(I_n) \big) - c_n(k)\, E\big( S(I_n^{\pm}) S(I_n) \big)
= \sigma_n \delta_n(k) - \frac{\delta_n(k)}{s_n^{\pm}}\, s_n^{\pm} \sigma_n = 0.
\]
Therefore, since $(Y_k, S(I_n))$ has a joint normal distribution, the variables are independent. Now observe that
\begin{align*}
&\Big| P\Big( \max_{k\in I_n} X_k \le u_n,\ S(I_n)/\sigma_n \le y \Big) - \exp\{-e^{-x}\}\Phi(y) \Big| \\
&\quad\le \Big| P\Big( \max_{k\in I_n} X_k \le u_n,\ S(I_n)/\sigma_n \le y \Big) - P\Big( \max_{k\in Q_n} X_k \le u_n,\ S(I_n)/\sigma_n \le y \Big) \Big| \\
&\qquad+ \Big| P\Big( \max_{k\in Q_n} X_k \le u_n,\ S(I_n)/\sigma_n \le y \Big) - P\Big( \max_{k\in Q_n} Y_k \le u_n,\ S(I_n)/\sigma_n \le y \Big) \Big| \\
&\qquad+ \Big| P\Big( \max_{k\in Q_n} Y_k \le u_n,\ S(I_n)/\sigma_n \le y \Big) - \exp\{-e^{-x}\}\Phi(y) \Big|
\quad =: \sum_{i=1}^{3} W_{ni}.
\end{align*}
By Lemma 4.1, it is easy to see that
\[
W_{n1} \le \Big| P\Big( \max_{k\in Q_n} X_k \le u_n \Big) - P\Big( \max_{k\in I_n} X_k \le u_n \Big) \Big|
\le P\Big( \max_{k\in K_n} X_k > u_n \Big) \to 0, \tag{10}
\]
as $n \to \infty$. Using Lemma 4.2, Theorem 1.1 and (10), we have
\[
W_{n2} \le E\Big| I\Big( \max_{k\in Q_n} Y_k \le u_n \Big) - I\Big( \max_{k\in Q_n} X_k \le u_n \Big) \Big|
\le P\Big( x - a_n \Delta_n \le a_n\Big( \max_{k\in Q_n} X_k - b_n \Big) \le x + a_n \Delta_n \Big) \to 0, \tag{11}
\]
as $n \to \infty$, where $\Delta_n = |\max_{k\in Q_n} X_k - \max_{k\in Q_n} Y_k|$. For $W_{n3}$, we have, by the independence, (10), (11) and Theorem 1.1,
\begin{align*}
W_{n3} &= \Phi(y)\, \Big| P\Big( \max_{k\in Q_n} Y_k \le u_n \Big) - \exp\{-e^{-x}\} \Big| \\
&\le \Phi(y) \Big| P\Big( \max_{k\in Q_n} Y_k \le u_n \Big) - P\Big( \max_{k\in Q_n} X_k \le u_n \Big) \Big|
+ \Phi(y) \Big| P\Big( \max_{k\in Q_n} X_k \le u_n \Big) - P\Big( \max_{k\in I_n} X_k \le u_n \Big) \Big| \\
&\qquad+ \Phi(y) \Big| P\Big( \max_{k\in I_n} X_k \le u_n \Big) - \exp\{-e^{-x}\} \Big| \to 0,
\end{align*}
as $n \to \infty$. The proof for the weakly dependent case is complete. $\square$

Proof of Theorem 2.1 for the strongly dependent case. Let $\{Z_n\}_{n\ge 1}$ be an independent standardized Gaussian random field. Let
\[
\rho_n = r/\ln \chi_n
\]
and define

\[
\xi_i = \sqrt{1 - \rho_n}\, Z_i + \sqrt{\rho_n}\, V, \quad i \in I_n,
\]
where $V$ is a standard Gaussian random variable independent of $\{Z_n\}_{n\ge 1}$. Note that the covariance function of $\{\xi_i\}_{i\in I_n}$ is equal to $\rho_n$. Let $M_\xi(I_n) = \max_{i\in I_n} \xi_i$ and $\tilde r_{k,n} = E\big( X_k S(I_n)/\sigma_n \big)$.
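The comparison field $\{\xi_i\}$ is a one-factor Gaussian field: conditionally on $V = z$, its sites are i.i.d., which is what produces the mixture form of the limit. A toy simulation (ours; sizes and the value of $\rho$ are arbitrary) of the construction and its constant correlation:

```python
# Toy sketch (ours) of the comparison field used in the strongly dependent
# case: xi_i = sqrt(1 - rho) Z_i + sqrt(rho) V has correlation rho between
# any two distinct sites.
import numpy as np

rng = np.random.default_rng(1)
n1, n2, rho, reps = 50, 50, 0.05, 4000

acc = 0.0
for _ in range(reps):
    Z = rng.standard_normal((n1, n2))       # independent standardized field
    V = rng.standard_normal()               # common factor
    xi = np.sqrt(1.0 - rho) * Z + np.sqrt(rho) * V
    acc += xi[0, 0] * xi[10, 20]            # product at two distinct sites
print(acc / reps)                            # roughly rho (here 0.05)
```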

Firstly, we prove that
\[
\Big| P\big( M(I_n) \le u_n,\ S(I_n)/\sigma_n \le y \big) - P\big( M_\xi(I_n) \le u_n,\ V \le y \big) \Big| \to 0 \tag{12}
\]
as $n \to \infty$. By the Normal Comparison Lemma (see Leadbetter et al., 1983 and Piterbarg, 1996), we have
\[
\Big| P\big( M(I_n) \le u_n,\ S(I_n)/\sigma_n \le y \big) - P\big( M_\xi(I_n) \le u_n,\ V \le y \big) \Big|
\le \sum_{k\ne l \in I_n} |r_{k,l} - \rho_n| \int_0^1 \phi\big( u_n, u_n, \theta_{k,l}^{(h)} \big)\, dh
+ \sum_{k\in I_n} \big| \tilde r_{k,n} - \sqrt{\rho_n} \big| \int_0^1 \phi\big( u_n, y, \vartheta_{k,n}^{(h)} \big)\, dh
=: \Sigma_{n,1} + \Sigma_{n,2}, \tag{13}
\]
where $\theta_{k,l}^{(h)} = h r_{k,l} + (1-h)\rho_n$, $\vartheta_{k,n}^{(h)} = h \tilde r_{k,n} + (1-h)\sqrt{\rho_n}$ and $\phi(x, y, r)$ stands for the two-dimensional normal density function, i.e.,
\[
\phi(x, y, r) = \frac{1}{2\pi\sqrt{1 - r^2}} \exp\Big( -\frac{x^2 - 2rxy + y^2}{2(1 - r^2)} \Big).
\]
By some computations and Lemma 3.2 of Tan (2013), we have, for some constant $K > 0$,
\[
\Sigma_{n,1} \le K \chi_n \sum_{k\in J_n,\, k\ne 0} |r_k - \rho_n| \exp\Big( -\frac{u_n^2}{1 + \omega_k} \Big) \to 0,
\]
as $n \to \infty$, where $\omega_k = \max\{\rho_n, |r_k|\}$. Now, to prove (12), it suffices to show that $\Sigma_{n,2} \to 0$ as $n \to \infty$. For simplicity, write $\vartheta$ for $\vartheta_{k,n}^{(h)}$. Since $x e^{-x}$ is decreasing on $(1, \infty)$ and, for large $n$, $\sup_{k\in I_n} |\vartheta_{k,n}^{(h)}| \le \varepsilon < 1$, we have, for large $n$,
\[
\frac{u_n^2 - 2\vartheta y u_n + y^2}{2(1 - \vartheta^2)} \exp\Big( -\frac{u_n^2 - 2\vartheta y u_n + y^2}{2(1 - \vartheta^2)} \Big)
\le \frac{u_n^2 - 2\vartheta y u_n + y^2}{2} \exp\Big( -\frac{u_n^2 - 2\vartheta y u_n + y^2}{2} \Big),
\]
and hence
\[
\frac{1}{\sqrt{1 - \vartheta^2}} \exp\Big( -\frac{u_n^2 - 2\vartheta y u_n + y^2}{2(1 - \vartheta^2)} \Big)
= \frac{2\sqrt{1 - \vartheta^2}}{u_n^2 - 2\vartheta y u_n + y^2} \cdot \frac{u_n^2 - 2\vartheta y u_n + y^2}{2(1 - \vartheta^2)} \exp\Big( -\frac{u_n^2 - 2\vartheta y u_n + y^2}{2(1 - \vartheta^2)} \Big)
\le \exp\Big( -\frac{u_n^2 - 2\vartheta y u_n + y^2}{2} \Big).
\]
Thus, for $n$ large enough,
\[
\Sigma_{n,2} \le \sum_{k\in I_n} \big| \tilde r_{k,n} - \sqrt{\rho_n} \big| \int_0^1 \exp\Big( -\frac{u_n^2 - 2\vartheta y u_n + y^2}{2} \Big)\, dh.
\]
Now we assume that $y \ne 0$; the case $y = 0$ can be handled by a continuity argument or by slightly modifying the proof below. Since $e^{-u_n^2/2} \sim 2\sqrt{\pi}\, e^{-x}\, \sqrt{\ln \chi_n}/\chi_n$, $u_n \sim \sqrt{2\ln\chi_n}$ and
\[
\int_0^1 \exp(\vartheta y u_n)\, dh
= \exp\big( \sqrt{\rho_n}\, y u_n \big)\, \frac{\exp\{ [\tilde r_{k,n} - \rho_n^{1/2}]\, y u_n \} - 1}{[\tilde r_{k,n} - \rho_n^{1/2}]\, y u_n},
\]
we have, for some constant $C > 0$,
\[
\Sigma_{n,2} \le C\, \frac{\exp(-x + \sqrt{2r}\, y - y^2/2)}{|y|}\, \frac{1}{\chi_n} \sum_{k\in I_n} \big| \exp\{ [\tilde r_{k,n} - \rho_n^{1/2}]\, y u_n \} - 1 \big|.
\]
Consequently, to show that $\Sigma_{n,2} \to 0$ as $n \to \infty$, it suffices to show that, for some sequence $\varepsilon_n \in (0, 1)$ which tends to 1 and such that $n^{\varepsilon_n - 1} \to (0, 0)$, we have
\[
\max_{k\in[n^{\varepsilon_n},\, n - n^{\varepsilon_n}]} \big| \tilde r_{k,n} - \rho_n^{1/2} \big| \sqrt{\ln \chi_n} \to 0 \tag{14}
\]

as $n \to \infty$. First, note that $\sqrt{\rho_n}\sqrt{\ln\chi_n} = \sqrt{r}$, and write
\[
\tilde r_{k,n} = E\Big( X_k \frac{S(I_n)}{\sigma_n} \Big)
= \frac{\sum_{l\in I_n} r_{k,l}}{\sigma_n}
= \frac{\sum_{l\in J_{k-1}} r_l + \sum_{l\in I_{n-k}} r_l}{\Big( \sum_{k\in I_n} \big[ \sum_{l\in J_{k-1}} r_l + \sum_{l\in I_{n-k}} r_l \big] \Big)^{1/2}}.
\]
Using the simple facts that, as $n \to \infty$,
\[
\sum_{l\in J_{k-1}} r_l \sim r\, \frac{\chi_k}{\ln \chi_k}
\quad\text{and}\quad
\sum_{l\in I_{n-k}} r_l \sim r\, \frac{\chi_{n-k}}{\ln \chi_{n-k}}, \tag{15}
\]
we conclude readily from condition A1 that, uniformly for $k \in [n^{\varepsilon_n}, n - n^{\varepsilon_n}]$,
\[
\sum_{l\in J_{k-1}} r_l + \sum_{l\in I_{n-k}} r_l \sim r\, \frac{\chi_n}{\ln \chi_n},
\]
as $n \to \infty$. Using fact (15) and conditions A1 and A2 again, we have, as $n \to \infty$,
\[
\sum_{k\in I_n} \sum_{l\in J_{k-1}} r_l \sim r \sum_{k\in I_n} \frac{\chi_k}{\ln\chi_k} \sim \frac{r (\chi_n)^2}{2\ln\chi_n}
\quad\text{and}\quad
\sum_{k\in I_n} \sum_{l\in I_{n-k}} r_l \sim r \sum_{k\in I_n} \frac{\chi_{n-k}}{\ln\chi_{n-k}} \sim \frac{r (\chi_n)^2}{2\ln\chi_n}.
\]
Thus we conclude that, as $n \to \infty$, uniformly for $k \in [n^{\varepsilon_n}, n - n^{\varepsilon_n}]$,
\[
\tilde r_{k,n} \sqrt{\ln\chi_n} \to \sqrt{r},
\]
which proves (14), and thus we get that $\Sigma_{n,2} \to 0$ as $n \to \infty$.

Secondly, in view of the definition of $\{\xi_i\}_{i\in I_n}$, we have
\begin{align*}
P\big( M_\xi(I_n) \le u_n,\ V \le y \big)
&= P\Big( \max_{i\in I_n} \xi_i \le u_n,\ V \le y \Big)
= P\Big( \sqrt{1 - \rho_n} \max_{i\in I_n} Z_i + \sqrt{\rho_n}\, V \le u_n,\ V \le y \Big) \\
&= \int_{-\infty}^{y} P\Big( \sqrt{1 - \rho_n} \max_{i\in I_n} Z_i + \sqrt{\rho_n}\, z \le u_n \Big) \phi(z)\, dz
= \int_{-\infty}^{y} P\Big( \max_{i\in I_n} Z_i \le \frac{u_n - \sqrt{\rho_n}\, z}{\sqrt{1 - \rho_n}} \Big) \phi(z)\, dz. \tag{16}
\end{align*}
It is easy to check that, as $n \to \infty$,
\[
u_n^{(z)} := \frac{u_n - \sqrt{\rho_n}\, z}{\sqrt{1 - \rho_n}}
= (u_n - \sqrt{\rho_n}\, z)\Big( 1 + \frac{\rho_n}{2} + o(\rho_n) \Big)
= u_n + \frac{r - \sqrt{2r}\, z}{u_n} + o(u_n^{-1}).
\]
Thus, it follows from $\lim_{n\to\infty} \chi_n (1 - \Phi(u_n)) = e^{-x}$ that
\[
\lim_{n\to\infty} \chi_n \big( 1 - \Phi(u_n^{(z)}) \big) = e^{-x - r + \sqrt{2r}\, z}.
\]
Now, noting that $Z = \{Z_n\}_{n\ge 1}$ is an independent standardized Gaussian random field, we have
\[
P\Big( \max_{i\in I_n} Z_i \le u_n^{(z)} \Big)
= \prod_{i\in I_n} P\big( Z_i \le u_n^{(z)} \big)
= \Big( 1 - \big( 1 - \Phi(u_n^{(z)}) \big) \Big)^{\chi_n}
= \exp\Big( -\chi_n \big( 1 - \Phi(u_n^{(z)}) \big)(1 + o(1)) \Big)
\to \exp\big( -e^{-x - r + \sqrt{2r}\, z} \big),
\]
as $n \to \infty$. Combining the above result with (12), (16) and applying the dominated convergence theorem, we obtain the desired result. $\square$
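For completeness, the tail step above (which is used again in the continuous case) can be checked directly from Mills' ratio; the following short verification is ours, not part of the original proof:

```latex
% Check of chi_n (1 - Phi(u_n^{(z)})) -> e^{-x - r + sqrt(2r) z}, using
% 1 - Phi(u) ~ phi(u)/u as u -> infty and
% u_n^{(z)} = u_n + (r - sqrt(2r) z)/u_n + o(u_n^{-1}), so that
% (u_n^{(z)})^2 - u_n^2 = 2(r - sqrt(2r) z) + o(1):
\begin{align*}
\frac{1-\Phi(u_n^{(z)})}{1-\Phi(u_n)}
  &\sim \frac{u_n}{u_n^{(z)}}
        \exp\!\Big(-\tfrac{1}{2}\big[(u_n^{(z)})^2-u_n^2\big]\Big)
   \to e^{-(r-\sqrt{2r}\,z)},\\
\chi_n\big(1-\Phi(u_n^{(z)})\big)
  &= \chi_n\big(1-\Phi(u_n)\big)\cdot
     \frac{1-\Phi(u_n^{(z)})}{1-\Phi(u_n)}
   \longrightarrow e^{-x}\cdot e^{-r+\sqrt{2r}\,z}.
\end{align*}
```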

4.2. Proof of Theorem 3.1

To prove Theorem 3.1, the continuous set $I_T = [0, T]$ must be discretized. We therefore introduce the uniform grids $R(q_i) = \{k q_i : k = 0, 1, 2, \ldots\}$, $q_i > 0$, $i = 1, 2$, where, for some $\varepsilon > 0$,
\[
q_i = \varepsilon (2\ln T_i)^{-1/\alpha_i}.
\]
For simplicity, let $\sigma_T = \sqrt{\operatorname{Var}(S(I_T))}$. The following lemma plays an important role in the proof of Theorem 3.1.

Lemma 4.3. Assume that B1 and B2 hold. Then, for the rectangle $I_h = [0, h]$ with fixed $h > 0$, we have
\[
\Big| P\Big( \max_{t\in I_h} X(t) \le u_T \Big) - P\Big( \max_{t\in I_h \cap R(q_1)\times R(q_2)} X(t) \le u_T \Big) \Big|
\le h_1 h_2\, \kappa(\varepsilon)\, u_T^{2/\alpha_1 + 2/\alpha_2} (1 - \Phi(u_T))(1 + o(1)),
\]
where $\kappa(\varepsilon) \to 0$ as $\varepsilon \downarrow 0$.

Proof. See Lemma 1 of Dębicki et al. (2015). $\square$

Proof of Theorem 3.1 for the weakly dependent case. Using Lemma 4.3, it is easy to show that
\[
\Big| P\Big( \max_{t\in I_T} X(t) \le u_T,\ S(I_T)/\sigma_T \le y \Big) - P\Big( \max_{t\in I_T \cap R(q_1)\times R(q_2)} X(t) \le u_T,\ S(I_T)/\sigma_T \le y \Big) \Big| \to 0,
\]
as $T \to \infty$ and $\varepsilon \downarrow 0$. Thus, to establish the result, it suffices to show that
\[
\lim_{\varepsilon\downarrow 0} \lim_{T\to\infty} P\Big( \max_{t\in I_T \cap R(q_1)\times R(q_2)} X(t) \le u_T,\ S(I_T)/\sigma_T \le y \Big) = \exp(-e^{-x})\Phi(y).
\]
Applying first Theorem 2.1, which gives the asymptotic independence, and then letting $\varepsilon \downarrow 0$, we obtain the desired result. $\square$

To deal with the strongly dependent case, divide $[0, T_i]$ into intervals of length $1 - \delta$ alternating with shorter intervals of length $\delta$ for some constant $\delta \in (0, 1)$, and denote the long intervals by $S_k^{(i)}$, $k = 1, 2, \ldots, n_i = \lfloor T_i \rfloor$, where $\lfloor x \rfloor$ denotes the integral part of $x$. Denote $S = \cup S_i = \cup S_{i_1}^{(1)} \times \cup S_{i_2}^{(2)}$ and $R = I_T \setminus S$; it will be shown that the region $R$ plays no role in our consideration. Let $\{X_i(t),\ t \ge 0\}$, $i \ge 1$, be independent copies of $\{X(t),\ t \ge 0\}$ and let $\{\eta(t),\ t \ge 0\}$ be such that $\eta(t) = X_i(t)$ for $t \in E_i = [i_1 - 1, i_1) \times [i_2 - 1, i_2)$. Denote by $\gamma(s, t)$ the covariance function of $\{\eta(t),\ t \ge 0\}$. It is easy to see that
\[
\gamma(s, t) =
\begin{cases}
r(t - s), & s \in E_i,\ t \in E_j,\ i = j; \\
0, & s \in E_i,\ t \in E_j,\ i \ne j.
\end{cases}
\]
Define
\[
\xi_T(t) = (1 - \rho(T))^{1/2}\, \eta(t) + \rho^{1/2}(T)\, U, \quad t \in I_T,
\]
where $\rho(T) = r/\ln \chi_T$ and $U$ is a standard normal random variable which is independent of $\{\eta(t),\ t \ge 0\}$. Denote by $\varrho(s, t)$ the covariance function of $\{\xi_T(t),\ t \in I_T\}$. It is easy to check that
\[
\varrho(s, t) =
\begin{cases}
r(t - s) + (1 - r(t - s))\rho(T), & s \in E_i,\ t \in E_j,\ i = j; \\
\rho(T), & s \in E_i,\ t \in E_j,\ i \ne j.
\end{cases} \tag{17}
\]

Proof of Theorem 3.1 for the strongly dependent case. Firstly, by the same arguments as in the proof of Lemma 3.1 of Tan and Wang (2015), we can show that, as $\delta \to 0$,
\[
\Big| P\big( a_T(M(I_T) - b_T) \le x,\ S(I_T)/\sigma_T \le y \big) - P\Big( a_T\Big( \max_{t\in S} X(t) - b_T \Big) \le x,\ S(I_T)/\sigma_T \le y \Big) \Big| \to 0. \tag{18}
\]
Secondly, by Lemma 4.3, we have
\[
\Big| P\Big( a_T\Big( \max_{t\in S} X(t) - b_T \Big) \le x,\ S(I_T)/\sigma_T \le y \Big) - P\Big( a_T\Big( \max_{t\in R(q_1)\times R(q_2)\cap S} X(t) - b_T \Big) \le x,\ S(I_T)/\sigma_T \le y \Big) \Big| \to 0
\]
as $T \to \infty$ and $\varepsilon \downarrow 0$.

Thirdly, we show that
\[
\Big| P\Big( a_T\Big( \max_{t\in R(q_1)\times R(q_2)\cap S} X(t) - b_T \Big) \le x,\ S(I_T)/\sigma_T \le y \Big) - P\Big( a_T\Big( \max_{t\in R(q_1)\times R(q_2)\cap S} \xi_T(t) - b_T \Big) \le x,\ U \le y \Big) \Big| \to 0
\]
uniformly for $\varepsilon > 0$, as $T \to \infty$. Let $\tilde r(kq, T) = E\big( X(kq) S(I_T)/\sigma_T \big)$. By the Normal Comparison Lemma again, we have
\begin{align*}
&\Big| P\Big( a_T\Big( \max_{t\in R(q_1)\times R(q_2)\cap S} X(t) - b_T \Big) \le x,\ S(I_T)/\sigma_T \le y \Big) - P\Big( a_T\Big( \max_{t\in R(q_1)\times R(q_2)\cap S} \xi_T(t) - b_T \Big) \le x,\ U \le y \Big) \Big| \\
&\quad\le \sum_{kq \ne lq \in S} |r(kq - lq) - \varrho(kq, lq)| \int_0^1 \phi\big( u_T, u_T, \theta_{k,l}^{(h)} \big)\, dh
+ \sum_{kq \in S} \big| \tilde r(kq, T) - \sqrt{\rho(T)} \big| \int_0^1 \phi\big( u_T, y, \vartheta_{k,T}^{(h)} \big)\, dh
=: \Sigma_{T,3} + \Sigma_{T,4},
\end{align*}
where $\theta_{k,l}^{(h)} = h r(kq - lq) + (1-h)\rho(T)$ and $\vartheta_{k,T}^{(h)} = h \tilde r(kq, T) + (1-h)\sqrt{\rho(T)}$. By some computations and Lemma B1 of Tan and Wang (2015), we have, for some constant $K > 0$,
\[
\Sigma_{T,3} \le K\, \frac{T_1 T_2}{q_1 q_2} \sum_{kq\in I_T} |r(kq) - \rho(T)| \exp\Big( -\frac{u_T^2}{1 + \omega_{kq}} \Big) \to 0,
\]
as $T \to \infty$, where $\omega_{kq} = \max\{\rho(T), |r(kq)|\}$. By the same arguments as for $\Sigma_{n,2}$ in (13), we can show that $\Sigma_{T,4} = o(1)$ as $T \to \infty$.

Fourthly, we prove that, as $\varepsilon \downarrow 0$,
\[
\Big| P\Big( a_T\Big( \max_{t\in R(q_1)\times R(q_2)\cap S} \xi_T(t) - b_T \Big) \le x,\ U \le y \Big)
- \int_{-\infty}^{y} \prod_{i_1=1}^{n_1} \prod_{i_2=1}^{n_2} P\Big( \max_{t\in S_i} \eta(t) \le u_T^{(z)} \Big) \phi(z)\, dz \Big| \to 0, \tag{19}
\]
where
\[
u_T^{(z)} := \frac{b_T + x/a_T - \rho^{1/2}(T)\, z}{(1 - \rho(T))^{1/2}}.
\]
By the definition of $\{\xi_T(t),\ t \in I_T\}$, we have
\begin{align*}
P\Big( a_T\Big( \max_{t\in R(q_1)\times R(q_2)\cap S} \xi_T(t) - b_T \Big) \le x,\ U \le y \Big)
&= \int_{-\infty}^{y} P\Big( \max_{t\in R(q_1)\times R(q_2)\cap S} \eta(t) \le u_T^{(z)} \Big) \phi(z)\, dz \\
&= \int_{-\infty}^{y} \prod_{i_1=1}^{n_1} \prod_{i_2=1}^{n_2} P\Big( \max_{t\in R(q_1)\times R(q_2)\cap S_i} \eta(t) \le u_T^{(z)} \Big) \phi(z)\, dz.
\end{align*}
As in the discrete case, a direct calculation leads to
\[
u_T^{(z)} = \frac{x + r - \sqrt{2r}\, z}{a_T} + b_T + o(a_T^{-1}).
\]
Now, by Lemma 4.3 again and the dominated convergence theorem, we have
\[
\Big| \int_{-\infty}^{y} \prod_{i_1=1}^{n_1} \prod_{i_2=1}^{n_2} P\Big( \max_{t\in R(q_1)\times R(q_2)\cap S_i} \eta(t) \le u_T^{(z)} \Big) \phi(z)\, dz
- \int_{-\infty}^{y} \prod_{i_1=1}^{n_1} \prod_{i_2=1}^{n_2} P\Big( \max_{t\in S_i} \eta(t) \le u_T^{(z)} \Big) \phi(z)\, dz \Big| \to 0 \tag{20}
\]
as $\varepsilon \downarrow 0$. Thus, (19) is proved. By (17)–(20), in order to prove Theorem 3.1, it suffices to show that
\[
\Big| \int_{-\infty}^{y} \prod_{i_1=1}^{n_1} \prod_{i_2=1}^{n_2} P\Big( \max_{t\in S_i} \eta(t) \le u_T^{(z)} \Big) \phi(z)\, dz
- \int_{-\infty}^{y} \exp\big( -e^{-x - r + \sqrt{2r}\, z} \big) \phi(z)\, dz \Big| \to 0
\]
as $T \to \infty$. Using the stationarity of $\{\eta(t),\ t \ge 0\}$ and (7), we have
\begin{align*}
\prod_{i_1=1}^{n_1} \prod_{i_2=1}^{n_2} P\Big( \max_{t\in S_i} \eta(t) \le u_T^{(z)} \Big)
&= \Big( 1 - P\Big( \max_{t\in S_1} \eta(t) > u_T^{(z)} \Big) \Big)^{n_1 n_2} \\
&= \exp\Big( n_1 n_2 \ln\Big( 1 - P\Big( \max_{t\in S_1} \eta(t) > u_T^{(z)} \Big) \Big) \Big) \\
&= \exp\Big( -n_1 n_2\, P\Big( \max_{t\in S_1} \eta(t) > u_T^{(z)} \Big)(1 + o(1)) \Big) \\
&= \exp\Big( -n_1 n_2\, H_{\alpha_1} H_{\alpha_2} (1 - \delta)^2 \big( u_T^{(z)} \big)^{2/\alpha_1 + 2/\alpha_2} \big( 1 - \Phi(u_T^{(z)}) \big)(1 + o(1)) \Big).
\end{align*}
Now, letting $\delta \to 0$ and using the definitions of $a_T$, $b_T$ and $u_T^{(z)}$, we have
\[
n_1 n_2\, H_{\alpha_1} H_{\alpha_2} (1 - \delta)^2 \big( u_T^{(z)} \big)^{2/\alpha_1 + 2/\alpha_2} \big( 1 - \Phi(u_T^{(z)}) \big) \to e^{-x - r + \sqrt{2r}\, z},
\]
as $T \to \infty$, which together with the dominated convergence theorem completes the proof of Theorem 3.1. $\square$


Acknowledgments

The authors would like to thank the referee and the editor for their thorough reading and valuable suggestions, which greatly improved the original results of the paper.

References

Adler, R., Taylor, J.E., 2007. Random Fields and Geometry. Springer, New York.
Anderson, C.W., Turkman, K.F., 1991a. The joint limiting distribution of sums and maxima of stationary sequences. J. Appl. Probab. 28, 33–44.
Anderson, C.W., Turkman, K.F., 1991b. Sums and maxima in stationary sequences. J. Appl. Probab. 28, 715–716.
Berman, S.M., 1974. Sojourns and extremes of Gaussian processes. Ann. Probab. 2, 999–1026.
Choi, H., 2002. Central limit theory and extremes of random fields (Ph.D. dissertation), Univ. of North Carolina at Chapel Hill.
Choi, H., 2010. Almost sure limit theorem for stationary Gaussian random fields. J. Korean Stat. Soc. 39, 449–454.
Chow, T.L., Teugels, J.L., 1979. The sum and the maximum of i.i.d. random variables. In: Proceedings of the Second Prague Symposium on Asymptotic Statistics (Hradec Kralove, 1978). North-Holland, New York, pp. 81–92.
Dębicki, K., Hashorva, E., Soja-Kukieła, N., 2015. Extremes of homogeneous Gaussian random fields. J. Appl. Probab. 52, 55–67.
Ferreira, H., Pereira, L., 2008. How to compute the extremal index of stationary random fields. Statist. Probab. Lett. 78, 1301–1304.
Ferreira, H., Pereira, L., 2012. Point processes of exceedances by random fields. J. Statist. Plann. Inference 142, 773–779.
Ho, H.C., Hsing, T., 1996. On the asymptotic joint distribution of the sum and maximum of stationary normal random variables. J. Appl. Probab. 33, 138–145.
Ho, H.C., McCormick, W.P., 1999. Asymptotic distribution of sum and maximum for strongly dependent Gaussian processes. J. Appl. Probab. 36, 1031–1044.
Hsing, T., 1995. A note on the asymptotic independence of the sum and maximum of strongly mixing stationary random variables. Ann. Probab. 23, 938–947.
Leadbetter, M.R., Lindgren, G., Rootzén, H., 1983. Extremes and Related Properties of Random Sequences and Processes. Springer-Verlag, New York.
Mathew, G., McCormick, W.P., 1993. A complete Poisson convergence result for a strongly dependent isotropic Gaussian random field. Comm. Statist. Stoch. Models 9, 13–29.
McCormick, W.P., 1978. Weak and strong laws for the maxima of stationary Gaussian processes (Ph.D. dissertation), Stanford University, California.
McCormick, W.P., Mittal, Y., 1976. On weak convergence of the maximum. Tech. Report No. 81, Dept. of Statist., Stanford University.
McCormick, W.P., Qi, Y.C., 2000. Asymptotic distribution for the sum and the maximum of Gaussian processes. J. Appl. Probab. 37, 958–971.
Mittal, Y., 1976. A class of isotropic covariance functions. Pacific J. Math. 64, 517–538.
Mittal, Y., Ylvisaker, D., 1975. Limit distribution for the maximum of stationary Gaussian processes. Stochastic Process. Appl. 3, 1–18.
Peng, Z.X., Nadarajah, S., 2002. On the joint limiting distribution of sums and maxima of stationary normal sequence. Theory Probab. Appl. 47, 817–820.
Pereira, L., 2009. The asymptotic location of the maximum of a stationary random field. Statist. Probab. Lett. 79, 2166–2169.
Pereira, L., 2010. On the extremal behavior of a nonstationary normal random field. J. Statist. Plann. Inference 140, 3567–3576.
Pereira, L., Ferreira, H., 2006. Limiting crossing probabilities of random fields. J. Appl. Probab. 43, 884–891.
Pereira, L., Tan, Z., 2016. Almost sure convergence for the maximum of nonstationary random fields. J. Theor. Probab. http://dx.doi.org/10.1007/s10959-015-0663-3 (in press).
Pickands, J. III, 1969. Upcrossing probabilities for stationary Gaussian processes. Trans. Amer. Math. Soc. 145, 51–73.
Piterbarg, V.I., 1996. Asymptotic Methods in the Theory of Gaussian Processes and Fields. AMS, Providence.
Tan, Z., 2013. The limit theorems on extremes for Gaussian random fields. Statist. Probab. Lett. 83, 436–444.
Tan, Z., Wang, Y., 2014. Almost sure asymptotics for extremes of non-stationary Gaussian random fields. Chinese Ann. Math. Ser. B 35, 125–138.
Tan, Z., Wang, K., 2015. On Piterbarg's max-discretisation theorem for homogeneous Gaussian random fields. J. Math. Anal. Appl. 429, 969–994.
Tan, Z., Yang, Y., 2015. The maxima and sums of multivariate non-stationary Gaussian sequences. Appl. Math. J. Chinese Univ. 30, 197–209.
