
Statistics & Probability Letters 76 (2006) 340–348 www.elsevier.com/locate/stapro

Wild bootstrap estimation in partially linear models with heteroscedasticity

Jinhong You^a,*, Gemai Chen^b

^a Department of Biostatistics, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-7400, USA
^b Department of Mathematics & Statistics, University of Calgary, Calgary, Alta., Canada T2N 1N4

Received 9 December 2002; received in revised form 29 June 2005; available online 6 September 2005

Abstract

This paper uses the wild bootstrap technique in the estimation of a heteroscedastic partially linear regression model. We show that this approach provides a reliable approximation to the asymptotic distribution of the semiparametric least-squares estimator of the linear regression coefficients, and consistent estimators of the asymptotic covariance matrices, even when the error variances are unequal. In comparison, this robustness property is not shared by the bootstrap estimation proposed in Liang et al. (2000. Bootstrap approximation in a partially linear regression model. J. Statist. Plann. Inference 91, 413–426).
© 2005 Elsevier B.V. All rights reserved.

MSC: primary 62G05; 62G20

Keywords: Wild bootstrap; Partially linear regression models; Limit distribution; Consistency; Robustness

1. Introduction

In recent years, fueled by modern computing power, various attempts have been made to relax the assumptions of the linear regression model and hence widen its applicability, since a wrong model for the regression function can lead to excessive modeling bias and erroneous conclusions. Of particular importance is the partially linear regression model proposed by Engle et al. (1986), which widens the scope of applications by leaving the relationship between the response and part of the covariates unspecified. A partially linear regression model can be written as

$$y_i = x_i'\beta + g(t_i) + e_i, \quad i = 1, \ldots, n, \tag{1.1}$$

where the $y_i$ are responses, $x_i = (x_{i1}, \ldots, x_{ip})'$ and the $t_i$ are design points, $\beta = (\beta_1, \ldots, \beta_p)'$ is an unknown parameter vector, $g(\cdot)$ is an unknown function, and the $e_i$ are unobservable random errors. Eq. (1.1) defines a class of semiparametric regression models which involve unknown finite-dimensional parameters as well as an unknown function. When the $e_i$ are i.i.d. random variables, Heckman (1986), Chen

* Corresponding author. Tel.: +1 919 966 0229.

E-mail address: [email protected] (J. You).
0167-7152/$ - see front matter © 2005 Elsevier B.V. All rights reserved.
doi:10.1016/j.spl.2005.08.027


(1988), Speckman (1988), Eubank and Speckman (1993), Hamilton and Truong (1997) and Shi and Li (1995) used various estimation methods, such as the kernel method, the spline method, series estimation, local linear estimation, two-stage estimation, robust estimation and so on, to obtain estimators of the unknown quantities in (1.1), and discussed the asymptotic properties of these estimators. A recent survey of the theory and applications of model (1.1) can be found in the monograph of Härdle et al. (2000).

It is well known that the bootstrap technique is a useful tool for approximating an unknown probability distribution and its characteristics such as moments. Several authors have studied bootstrap estimation in the setting of model (1.1). For example, Hong and Cheng (1993) considered bootstrap estimators for $\beta$ in model (1.1) with $\{(x_i', t_i, e_i), i = 1, \ldots, n\}$ being i.i.d. random variables. Liang et al. (2000) explained the advantage of the bootstrap method, constructed bootstrap estimators for $\beta$ and the error variance $\sigma^2$, and studied their asymptotic normality. In their case $(x_i', t_i)$ are known fixed design points and the $e_i$ are i.i.d. random variables.

However, the i.i.d. assumption on the errors is not always reasonable. For example, when dealing with gasoline demand across Organization for Economic Co-operation and Development (OECD) countries, or when estimating cost functions for US airline firms, one should expect to find heteroscedasticity in the disturbance term (Baltagi, 1995). The bootstrap scheme of Liang et al. (2000) does not take into account the fact that the variances of the observations differ when heteroscedasticity is present in the model. Therefore, their bootstrap estimator is neither consistent nor asymptotically unbiased under heteroscedasticity. According to Wu (1986) and Shao (1988), when the errors are heteroscedastic, a wild bootstrap procedure is more appropriate.
In this paper, for a semiparametric least-squares estimator (SLSE) $\hat\beta_n$ of $\beta$ based on the partial kernel method (Speckman, 1988), we construct a wild bootstrap statistic $\hat\beta_w$. It is shown that $\sqrt{n}(\hat\beta_w - \hat\beta_n)$ and $\sqrt{n}(\hat\beta_n - \beta)$ have the same limit distribution. This implies that the wild bootstrap technique provides a reliable method to approximate the asymptotic distribution of the SLSE $\hat\beta_n$. Moreover, an estimator based on $\hat\beta_w$ for the covariance matrix of the SLSE is also presented and its consistency established even when the error variances are unequal. This robustness against heteroscedasticity is not shared by the bootstrap estimation proposed by Liang et al. (2000).

The rest of the paper is organized as follows. The wild bootstrap procedure is presented in Section 2. The main results are given in Section 3. A simulation study is reported in Section 4, conclusions are drawn in Section 5, and proofs of the main results are relegated to Section 6.

2. Wild bootstrap methodology

Throughout this paper we assume that the design points $x_i$ and $t_i$ are fixed, $t_i \in [0,1]$, and that $x_i$ and $t_i$ are related via $x_{is} = h_s(t_i) + u_{is}$, $i = 1, \ldots, n$ and $s = 1, \ldots, p$. The reasonableness of this relation is discussed in Speckman (1988). Further, we assume that $\mathrm{Var}(e_i) = \sigma_i^2$, where the $\sigma_i^2$ may differ across $i = 1, \ldots, n$, and that the vector $(1, \ldots, 1)'$ is not in the space spanned by the column vectors of $X = (x_1, \ldots, x_n)'$, which ensures the identifiability of model (1.1) according to Chen (1988).

We first introduce the partial kernel method, which is used to construct an SLSE of $\beta$. Assume that $\{(x_i', t_i, y_i), i = 1, \ldots, n\}$ satisfy model (1.1). If $\beta$ is known to be the true parameter, then from $Ee_i = 0$ we have $g(t_i) = E(y_i - x_i'\beta)$, $i = 1, \ldots, n$. Hence, a natural nonparametric estimator of $g(\cdot)$ given $\beta$ is $\tilde g_n(t; \beta) = \sum_{i=1}^n W_{ni}(t)(y_i - x_i'\beta)$, where the weight function is

$$W_{ni}(t) = \frac{1}{nh} K\!\left(\frac{t_i - t}{h}\right)$$

with $K(\cdot)$ some density function and $h$ the bandwidth. To estimate $\beta$, we seek to minimize

$$SS(\beta) = \sum_{i=1}^n \left[ y_i - x_i'\beta - \tilde g_n(t_i; \beta) \right]^2 = \sum_{i=1}^n (\hat y_i - \hat x_i'\beta)^2, \tag{2.1}$$

where $\hat x_i = x_i - \sum_{j=1}^n W_{nj}(t_i) x_j$ and $\hat y_i = y_i - \sum_{j=1}^n W_{nj}(t_i) y_j$. The minimizer of (2.1) is

$$\hat\beta_n = (\widehat X{}'\widehat X)^{-1} \widehat X{}'\hat y,$$
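As a concrete numerical illustration (not part of the paper), the partial kernel estimator can be sketched in Python. The Gaussian kernel and the bandwidth value below are illustrative choices, and the function names are our own.

```python
import numpy as np

def kernel_weights(t, h):
    # W[i, j] = W_nj(t_i) = (1/(n h)) K((t_j - t_i)/h) with a Gaussian kernel K.
    n = len(t)
    d = (t[:, None] - t[None, :]) / h
    return np.exp(-0.5 * d ** 2) / (np.sqrt(2.0 * np.pi) * n * h)

def slse(y, X, t, h):
    # Partial kernel method: residualize y and the columns of X against t,
    # then solve the least-squares problem (2.1):
    #   beta_hat = (Xhat' Xhat)^{-1} Xhat' yhat.
    W = kernel_weights(t, h)
    y_hat = y - W @ y                      # yhat_i = y_i - sum_j W_nj(t_i) y_j
    X_hat = X - W @ X                      # xhat_i = x_i - sum_j W_nj(t_i) x_j
    beta = np.linalg.solve(X_hat.T @ X_hat, X_hat.T @ y_hat)
    g_hat = W @ (y - X @ beta)             # ghat_n(t_i)
    resid = y_hat - X_hat @ beta           # estimated residuals ehat_i
    return beta, g_hat, resid
```

On data simulated from model (1.1), `slse` recovers the linear coefficient to within sampling error while `g_hat` tracks the nonparametric component.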


where $\widehat X = (\hat x_1, \ldots, \hat x_n)'$ and $\hat y = (\hat y_1, \ldots, \hat y_n)'$. The estimator $\hat\beta_n$ is called an SLSE of $\beta$. An estimator of the nonparametric component $g(\cdot)$ based on $\hat\beta_n$ is $\hat g_n(t) = \sum_{i=1}^n W_{ni}(t)(y_i - x_i'\hat\beta_n)$. From $\hat\beta_n$ and $\hat g_n(\cdot)$ we can define the estimated residuals as

$$\hat e_i = y_i - x_i'\hat\beta_n - \hat g_n(t_i) = \hat y_i - \hat x_i'\hat\beta_n, \quad i = 1, \ldots, n.$$

These estimated residuals will be used to define the wild bootstrap statistics. The wild bootstrap method proposed by Wu (1986) proceeds as follows: obtain a set of $n$ numbers from a distribution with mean zero and unit variance, multiply them by the residuals, form a bootstrap sample, estimate the model from the new sample, repeat the procedure a large number of times, and then use the estimators from these bootstrap replications to approximate the asymptotic distribution and to estimate the covariance matrix. For model (1.1) we can construct a wild bootstrap statistic for $\beta$ as follows. Let the $\epsilon_i$ be random variables with a distribution function $F(\cdot)$ such that $E_F \epsilon_i = 0$, $E_F \epsilon_i^2 = 1$ and $E_F |\epsilon_i|^3 < \infty$. Then we set up the bootstrap sample $(y_1^*, \ldots, y_n^*)$ with

$$y_i^* = x_i'\hat\beta_n + \hat g_n(t_i) + \epsilon_i \hat e_i, \quad i = 1, \ldots, n.$$
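The resampling scheme just described can be sketched end to end in Python. This is a minimal sketch, assuming a Gaussian kernel and taking $F$ to be the standard normal distribution; the number of replicates `B` and all function names are illustrative choices of ours.

```python
import numpy as np

def wild_bootstrap(y, X, t, h, B=400, rng=None):
    # Wild bootstrap for the SLSE in the partially linear model:
    # resample y*_i = x_i' beta_n + ghat(t_i) + eps_i * ehat_i and refit.
    rng = np.random.default_rng() if rng is None else rng
    n = len(y)
    d = (t[:, None] - t[None, :]) / h
    W = np.exp(-0.5 * d ** 2) / (np.sqrt(2.0 * np.pi) * n * h)  # W[i, j] = W_nj(t_i)
    y_hat, X_hat = y - W @ y, X - W @ X
    G = np.linalg.solve(X_hat.T @ X_hat, X_hat.T)   # (Xhat'Xhat)^{-1} Xhat'
    beta_n = G @ y_hat                              # SLSE
    g_hat = W @ (y - X @ beta_n)                    # ghat_n(t_i)
    e_hat = y_hat - X_hat @ beta_n                  # estimated residuals
    mean_star = X @ beta_n + g_hat                  # fixed part of y*_i
    draws = np.empty((B, X.shape[1]))
    for b in range(B):
        eps = rng.standard_normal(n)                # eps_i ~ F, mean 0, variance 1
        y_star = mean_star + eps * e_hat
        draws[b] = G @ (y_star - W @ y_star)        # beta_w for this replicate
    diff = draws - beta_n
    Sigma_w = diff.T @ diff / B                     # Monte Carlo version of Sigma_w
    return beta_n, draws, Sigma_w
```

Because the multipliers rescale each residual individually, `Sigma_w` remains a sensible estimator of the covariance of the SLSE even when the error variances differ across observations.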

From the bootstrap sample, define a wild bootstrap statistic for $\beta$ by

$$\hat\beta_w = (\widehat X{}'\widehat X)^{-1} \widehat X{}'\hat y^*,$$

where $\hat y_i^* = y_i^* - \sum_{j=1}^n W_{nj}(t_i) y_j^*$ and $\hat y^* = (\hat y_1^*, \ldots, \hat y_n^*)'$. Under the moment conditions on $\epsilon_i$ it is easy to see that $E_F \hat\beta_w = \hat\beta_n$. Therefore, the wild bootstrap estimator of $\Sigma = \mathrm{Cov}(\hat\beta_n)$ is given by

$$\Sigma_w = E_F\!\left[ (\hat\beta_w - \hat\beta_n)(\hat\beta_w - \hat\beta_n)' \right],$$

where $E_F$ denotes expectation under $F$.

In some situations the parameter of interest is $\theta = m(\beta)$, where $m(\cdot)$ is a function from $R^p$ to $R$. For example, $\theta$ may be a root or turning point of the mean polynomial in a polynomial regression model. Then the wild bootstrap estimators of $\theta$ and $\mathrm{Var}(m(\hat\beta_n))$ are $m(\hat\beta_w)$ and $\Sigma_w^m = E_F(m(\hat\beta_w) - m(\hat\beta_n))^2$, respectively.

3. The main results

In order to investigate the asymptotic properties of the wild bootstrap statistics $\hat\beta_w$ and $\Sigma_w$ we begin with the following assumptions. These assumptions, while a bit lengthy in appearance, are actually quite mild and easily satisfied; see Härdle et al. (2000).

Assumption 3.1. $\max_{1 \le i \le n} \left\| \sum_{j=1}^n W_{nj}(t_i) u_j \right\| = o(1)$, $\lim_{n\to\infty} n^{-1} \sum_{i=1}^n u_i u_i' = B > 0$ and $\|u_j\| \le c$, where $\|\cdot\|$ denotes the Euclidean norm, $c$ is a positive constant and $u_i = (u_{i1}, \ldots, u_{ip})'$.

Assumption 3.2. The functions $g(\cdot)$ and $h_j(\cdot)$, $j = 1, \ldots, p$, satisfy the Lipschitz condition of order 1 on $[0,1]$.

Assumption 3.3. The function $K(\cdot)$ is a symmetric density function and satisfies the Lipschitz condition of order 1. Moreover, the bandwidth $h$ satisfies $nh^8 \to 0$ and $nh^2/(\log n)^2 \to \infty$ as $n \to \infty$.

We are now ready to establish the main results.

Theorem 3.1. Suppose that Assumptions 3.1–3.3 hold and there exist constants $c_1$, $c_2$ and $c_3$ such that $0 < c_1 \le \sigma_i^2 \le c_2 < \infty$ and $m_{4i} = Ee_i^4 \le c_3 < \infty$ for all $i = 1, \ldots, n$. Then $n(\Sigma_w - \Sigma) \to_p 0$ as $n \to \infty$, where "$\to_p$" denotes convergence in probability.

The next theorem shows that the bootstrap distribution under the wild bootstrap, namely the distribution of $\sqrt{n}(\hat\beta_w - \hat\beta_n)$ under $F$, converges to a normal distribution.


Theorem 3.2. Under the conditions of Theorem 3.1,

$$\sup_{-\infty < x < \infty} \left| P_F\!\left( \sqrt{n}\, a'(\hat\beta_w - \hat\beta_n) \le x \right) - \Phi\!\left( \frac{x}{(a'\Sigma a)^{1/2}} \right) \right| \to_p 0,$$

where $a$ is any nonzero constant $p$-vector and $\Phi(\cdot)$ denotes the standard normal distribution function.

Remark 3.4. By Lemma 6.4 and the proof of Theorem 3.1 we can see that $\sqrt{n}(\hat\beta_w - \hat\beta_n)$ and $\sqrt{n}(\hat\beta_n - \beta)$ have the same asymptotic distribution.

For $m(\hat\beta_w)$ and $\Sigma_w^m$ we have the following asymptotic results.

Corollary 1. Under the conditions of Theorem 3.1, and assuming that $m(\cdot)$ has a bounded continuous second derivative in some neighborhood of $\beta$,

$$\sup_{-\infty < x < \infty} \left| P_F\!\left( \sqrt{n}\, [m(\hat\beta_w) - m(\hat\beta_n)] \le x \right) - \Phi\!\left( \frac{x}{[(\nabla m(\beta))'\Sigma\, \nabla m(\beta)]^{1/2}} \right) \right| \to_p 0,$$

where $\nabla m(\cdot)$ denotes the gradient of $m(\cdot)$. Further, $n[\Sigma_w^m - \mathrm{Var}(m(\hat\beta_n))] = o_p(1)$.

Remark 3.5. A consistent estimator of $\mathrm{Var}(m(\hat\beta_n))$ can also be constructed by the delta method. However, when the true parameter $\beta$ is close to a discontinuity point of $\nabla m(\cdot)$, the delta method may not provide an accurate variance estimator. Moreover, the delta method requires a theoretical derivation of $\nabla m(\cdot)$, which may not be easy to evaluate directly. See Shao (1988) for more discussion. The estimator $\Sigma_w^m$, on the other hand, avoids these shortcomings.

Theorems 3.1, 3.2 and Corollary 1 can be used to make inference about $\beta$ or $m(\beta)$. For example, a $100(1-\alpha_0)\%$ two-sided confidence interval for $m(\beta)$ is

$$\left[ m(\hat\beta_n) - \frac{1}{\sqrt{n}} z(1 - \alpha_0/2),\; m(\hat\beta_n) - \frac{1}{\sqrt{n}} z(\alpha_0/2) \right],$$

where

$$P_F\!\left( \sqrt{n}\,(m(\hat\beta_w) - m(\hat\beta_n)) \le z(\alpha_0) \right) = \alpha_0, \quad 0 < \alpha_0 < 1.$$
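Given a collection of bootstrap draws of $\hat\beta_w$, the confidence interval above can be computed from the empirical quantiles of $\sqrt{n}(m(\hat\beta_w) - m(\hat\beta_n))$. The following Python sketch is an illustration of ours, not code from the paper; the synthetic draws in the test are likewise artificial.

```python
import numpy as np

def wild_boot_ci(beta_n, boot_draws, n, m, alpha0=0.05):
    # 100(1 - alpha0)% interval for theta = m(beta):
    #   [ m(beta_n) - z(1 - alpha0/2)/sqrt(n),  m(beta_n) - z(alpha0/2)/sqrt(n) ],
    # where z(a) is the a-quantile of sqrt(n) (m(beta_w) - m(beta_n))
    # over the bootstrap replicates.
    theta_n = m(beta_n)
    z = np.sqrt(n) * (np.array([m(b) for b in boot_draws]) - theta_n)
    lo = theta_n - np.quantile(z, 1.0 - alpha0 / 2.0) / np.sqrt(n)
    hi = theta_n - np.quantile(z, alpha0 / 2.0) / np.sqrt(n)
    return lo, hi
```

Note that the $\sqrt{n}$ factors cancel, so this reduces to the familiar basic bootstrap interval built from the quantiles of $m(\hat\beta_w) - m(\hat\beta_n)$.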

4. Simulation study

We conduct a simulation study to demonstrate the gains of using the wild bootstrap over the usual normal approximation and the bootstrap estimation proposed by Liang et al. (2000). The data are generated from

$$y_i = x_i \beta + g(t_i) + \sigma(t_i) e_i, \quad i = 1, \ldots, n,$$

where $t_i \sim U(0,1)$, $x_i \sim N(0,1)$, $g(t_i) = \sin(2\pi t_i)$ and $\beta = 1.5$. We consider two cases for $\sigma(\cdot)$: homoscedasticity, i.e., $\sigma(t) \equiv 1$, and heteroscedasticity, i.e., $\sigma(t) = 1.2 + \cos(2\pi t)$. For a chosen sample size, 1000 samples are generated (the $x_i$ and $t_i$ values are generated only once), and for each sample three 95% confidence intervals for $\beta = 1.5$ are calculated: one using the normal approximation (AN), and the other two using the wild bootstrap with 400 bootstrap replicates (WB) and the bootstrap proposed by Liang et al. (2000) (B(L)). Moreover, three estimators of the asymptotic variance are calculated, based on AN, WB and B(L), respectively. We use the weight function

$$W_{ni}(t_j) = \frac{1}{nh} K\!\left( \frac{t_i - t_j}{h} \right) = \frac{1}{nh} \frac{1}{\sqrt{2\pi}} e^{-(t_i - t_j)^2 / 2h^2}.$$

The bandwidth $h$ is selected by grid search. In addition, we take $F(\cdot)$ to be the standard normal distribution. Some simulated coverage percentages (CP) and the means of the estimators of the asymptotic variance (AV) are displayed in Table 1.
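For reproducibility, the data-generating design of this section can be written as a short Python function. The parameter values are taken from the text; the random-number handling is an illustrative detail of ours.

```python
import numpy as np

def generate_sample(n, heteroscedastic, rng):
    # Simulation design of Section 4:
    #   y_i = 1.5 x_i + sin(2 pi t_i) + sigma(t_i) e_i,
    # with t_i ~ U(0, 1), x_i ~ N(0, 1), e_i ~ N(0, 1),
    # sigma(t) = 1 (homoscedastic) or 1.2 + cos(2 pi t) (heteroscedastic).
    t = rng.uniform(0.0, 1.0, n)
    x = rng.normal(0.0, 1.0, n)
    sigma = 1.2 + np.cos(2.0 * np.pi * t) if heteroscedastic else np.ones(n)
    y = 1.5 * x + np.sin(2.0 * np.pi * t) + sigma * rng.normal(size=n)
    return y, x, t
```

Under the heteroscedastic design the average error variance is $E\sigma^2(t) = 1.2^2 + 0.5 = 1.94$, so the generated responses are visibly noisier than in the homoscedastic case.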


Table 1
Simulated coverage percentages (CP) of the 95% confidence intervals for $\beta = 1.5$ and means of the estimators of the asymptotic variance (AV)

                         n = 50            n = 100           n = 200
                      CP      AV        CP      AV        CP      AV
Homoscedasticity
  AN                0.9090  0.0210    0.9250  0.0108    0.9390  0.0055
  WB                0.9390  0.0209    0.9440  0.0108    0.9620  0.0056
  B(L)              0.9420  0.0213    0.9490  0.0110    0.9550  0.0057
Heteroscedasticity
  AN                0.8940  0.0298    0.9270  0.0159    0.9340  0.0071
  WB                0.9630  0.0298    0.9520  0.0159    0.9540  0.0071
  B(L)              0.9510  0.0268    0.9740  0.0206    0.9700  0.0087

We can see from Table 1 that the wild bootstrap and the bootstrap proposed by Liang et al. (2000) have higher coverage percentages than the normal approximation. The estimator of the asymptotic variance based on the wild bootstrap is very close to the true asymptotic variance in both the homoscedastic and the heteroscedastic case. In contrast, the estimator of the asymptotic variance based on the bootstrap proposed by Liang et al. (2000) shows a large discrepancy from the true asymptotic variance in the heteroscedastic case.

5. Concluding remarks

We have shown in this paper that the wild bootstrap technique can be used in the estimation of model (1.1) to give an accurate approximation to the distribution of the SLSE $\hat\beta_n$ and good estimators of the corresponding asymptotic covariance matrix. The estimation is shown to be robust to heteroscedasticity.

6. Proofs of main results

In order to prove the main results we first introduce several lemmas.

Lemma 6.1. Suppose that Assumptions 3.2 and 3.3 hold. Then as $n \to \infty$,

$$\max_{0 \le s \le p} \max_{1 \le i \le n} \left| G_s(t_i) - \sum_{j=1}^n W_{nj}(t_i) G_s(t_j) \right| = O(h^2),$$

where $G_0(\cdot) = g(\cdot)$ and $G_s(\cdot) = h_s(\cdot)$, $s = 1, \ldots, p$.

Lemma 6.2. Suppose that Assumptions 3.1–3.3 hold. Then $\lim_{n\to\infty} n^{-1} \widehat X{}'\widehat X = B$, where $B$ is defined in Assumption 3.1.

The proofs of Lemmas 6.1 and 6.2 can be found in Gao (1995).

Lemma 6.3. For any sequence of independent random variables $\{V_i, i = 1, \ldots, n\}$ with mean zero and finite $(2+\delta)$th moment, and for an array of positive numbers $\{a_{ij}, i, j = 1, \ldots, n\}$ such that $\max_{1 \le i,j \le n} |a_{ij}| \le n^{-p_1}$ for some $0 \le p_1 \le 1$ and $\sum_{i=1}^n a_{ij} = O(n^{p_2})$ for some $p_2 \ge \max\{0,\, 2/(2+\delta) - p_1\}$, we have

$$\max_{1 \le j \le n} \left| \sum_{i=1}^n a_{ij} V_i \right| = O_p\!\left( n^{-(p_1 - p_2)/2} \log n \right).$$

The proof of Lemma 6.3 can be found in Härdle et al. (2000).


Lemma 6.4. Suppose that Assumptions 3.1–3.3 hold and there exist constants $c_1$ and $c_2$ such that $0 < c_1 \le \sigma_i^2 \le c_2 < \infty$. Then we have

$$\sqrt{n}(\hat\beta_n - \beta) \to_D N(0, B^{-1}\Omega B^{-1}) \quad \text{and} \quad \max_{1 \le i \le n} |\hat g_n(t_i) - g(t_i)| = O_p\!\left\{ h^2 + \left( \frac{\log n}{nh} \right)^{1/2} \right\},$$

where $\Omega = \lim_{n\to\infty} n^{-1} U' \mathrm{diag}(\sigma_1^2, \ldots, \sigma_n^2) U$, provided the limit exists, and $U = (u_1, \ldots, u_n)'$.

Proof. The proof of the asymptotic normality of $\hat\beta_n$ is similar to that of Theorem 1(i) of Gao (1995); we omit the details. According to the definition of $\hat g_n(t_i)$ it holds that

$$\max_{1 \le i \le n} |\hat g_n(t_i) - g(t_i)| \le \max_{1 \le i \le n} \left| \sum_{j=1}^n W_{nj}(t_i) x_j'(\beta - \hat\beta_n) \right| + \max_{1 \le i \le n} \left| \sum_{j=1}^n W_{nj}(t_i) g(t_j) - g(t_i) \right| + \max_{1 \le i \le n} \left| \sum_{j=1}^n W_{nj}(t_i) e_j \right| = I_1 + I_2 + I_3, \text{ say}.$$

$I_1$ can be decomposed as

$$I_1 \le \max_{1 \le i \le n} \sum_{s=1}^p \left| \sum_{j=1}^n W_{nj}(t_i) u_{js} \right| |\beta_s - \hat\beta_{ns}| + \max_{1 \le i \le n} \sum_{s=1}^p \left| \sum_{j=1}^n W_{nj}(t_i) h_s(t_j) \right| |\beta_s - \hat\beta_{ns}| = I_{11} + I_{12}, \text{ say},$$

where $\hat\beta_{ns}$ and $\beta_s$ are the $s$th components of $\hat\beta_n$ and $\beta$, respectively. It is easy to see that

$$I_{11} \le p \max_{1 \le s \le p} |\beta_s - \hat\beta_{ns}| \max_{1 \le i \le n} \left\| \sum_{j=1}^n W_{nj}(t_i) u_j \right\| = o_p(n^{-1/2})$$

by Assumption 3.1 and the root-$n$ consistency of $\hat\beta_n$. Similarly, combining Lemma 6.1, $I_{12} = o_p(n^{-1/2})$. Moreover, by Lemmas 6.1 and 6.3 it holds that $I_2 = O(h^2)$ and $I_3 = O_p(\log n / \sqrt{nh})$. The proof then follows. $\square$

Proof of Theorem 3.1. We first show that

$$n^{-1} \sum_{i=1}^n \hat e_i^2\, \hat x_i \hat x_i' \to_p \Omega \quad \text{as } n \to \infty.$$

From the definition of $\hat e_i$ we have $\hat e_i = x_i'(\beta - \hat\beta_n) + (g(t_i) - \hat g_n(t_i)) + e_i$. Therefore, it holds that

$$\frac{1}{n}\sum_{i=1}^n \hat e_i^2\, \hat x_i \hat x_i' = \frac{1}{n}\sum_{i=1}^n e_i^2\, \hat x_i \hat x_i' + \frac{2}{n}\sum_{i=1}^n e_i \left[ x_i'(\beta - \hat\beta_n) + (g(t_i) - \hat g_n(t_i)) \right] \hat x_i \hat x_i' + \frac{1}{n}\sum_{i=1}^n \left[ x_i'(\beta - \hat\beta_n) + (g(t_i) - \hat g_n(t_i)) \right]^2 \hat x_i \hat x_i' = J_1 + J_2 + J_3, \text{ say}.$$

Similar to the proof of Lemma 6.2 we have

$$EJ_1 = n^{-1} \widehat X{}' \mathrm{diag}(\sigma_1^2, \ldots, \sigma_n^2)\, \widehat X \to \Omega \quad \text{as } n \to \infty.$$

Since the $e_i$ are independent with finite fourth moments $m_{4i}$, it holds that

$$\mathrm{Cov}\!\left( \frac{1}{n}\sum_{i=1}^n e_i^2\, \hat x_i \hat x_i' \right) = \frac{1}{n^2} \sum_{i=1}^n (m_{4i} - \sigma_i^4)\, \hat x_i \hat x_i' \odot \hat x_i \hat x_i', \tag{6.1}$$


where $\odot$ denotes the Hadamard product of two matrices. According to Miller (1974), expression (6.1) is bounded above by

$$\max_{1 \le i \le n} (m_{4i} - \sigma_i^4) \cdot \frac{\max_{1 \le i \le n} |\hat x_i \hat x_i'|}{n} \cdot \frac{1}{n}\sum_{i=1}^n |\hat x_i \hat x_i'|,$$

where $|A| = (|a_{kj}|)$ and $\max_{1 \le i \le n} A_i = (\max_{1 \le i \le n} a_{ikj})$. Lemma 6.2 implies that

$$\max_{1 \le i \le n} \frac{|\hat x_i \hat x_i'|}{n} \to 0, \quad \text{a matrix of zeros.} \tag{6.2}$$

Furthermore,

$$\frac{1}{n}\sum_{i=1}^n |\hat x_i \hat x_i'| \le C_n, \tag{6.3}$$

where $A \le C$ means $a_{kj} \le c_{kj}$ for all $k, j$, and the $(k,j)$ element of $C_n$ is

$$c_{nkj} = \left( \frac{1}{n}\sum_{i=1}^n \hat x_{ik}^2 \right)^{1/2} \left( \frac{1}{n}\sum_{i=1}^n \hat x_{ij}^2 \right)^{1/2} \to b_{kk}^{1/2} b_{jj}^{1/2} \le c < \infty.$$

This implies that

$$\frac{1}{n^2}\sum_{i=1}^n (m_{4i} - \sigma_i^4)\, \hat x_i \hat x_i' \odot \hat x_i \hat x_i' \to 0 \quad \text{as } n \to \infty.$$

Therefore, $J_1 \to_p \Omega$. On the other hand, combining Lemma 6.4, (6.2) and (6.3) we have

$$\frac{1}{n}\sum_{i=1}^n \left[ x_i'(\beta - \hat\beta_n) + (g(t_i) - \hat g_n(t_i)) \right]^2 |\hat x_i \hat x_i'| \le 2 \max_{1 \le i \le n} x_i'(\beta - \hat\beta_n)(\beta - \hat\beta_n)' x_i\, \frac{1}{n}\sum_{i=1}^n |\hat x_i \hat x_i'| + 2 \max_{1 \le i \le n} (g(t_i) - \hat g_n(t_i))^2\, \frac{1}{n}\sum_{i=1}^n |\hat x_i \hat x_i'| \to_p 0 \quad \text{as } n \to \infty.$$

Hence, $J_3 \to_p 0$. Moreover, by the Cauchy–Schwarz inequality we can show that $J_2 \to_p 0$. On the other hand, according to the definition of $\hat\beta_w$, the moment conditions on $\epsilon_i$ and Lemma 6.2 we have

$$n E_F\!\left[ (\hat\beta_w - \hat\beta_n)(\hat\beta_w - \hat\beta_n)' \right] = n (\widehat X{}'\widehat X)^{-1} \widehat X{}' \mathrm{diag}(\hat e_1^2, \ldots, \hat e_n^2)\, \widehat X (\widehat X{}'\widehat X)^{-1} \to_p B^{-1}\Omega B^{-1}.$$

Further,

$$\mathrm{Cov}(\hat\beta_n) = (\widehat X{}'\widehat X)^{-1} \sum_{i=1}^n \sum_{j=1}^n \hat x_i \hat x_j'\, E\!\left[ \left( e_i - \sum_{i_1=1}^n W_{n i_1}(t_i) e_{i_1} \right)\!\left( e_j - \sum_{j_1=1}^n W_{n j_1}(t_j) e_{j_1} \right) \right] (\widehat X{}'\widehat X)^{-1}$$

$$= (\widehat X{}'\widehat X)^{-1} \sum_{i=1}^n \hat x_i \hat x_i' \sigma_i^2\, (\widehat X{}'\widehat X)^{-1} - (\widehat X{}'\widehat X)^{-1} \sum_{i=1}^n \sum_{j=1}^n \hat x_i \hat x_j' \left\{ W_{ni}(t_j)\sigma_i^2 + W_{nj}(t_i)\sigma_j^2 \right\} (\widehat X{}'\widehat X)^{-1}$$

$$\quad + (\widehat X{}'\widehat X)^{-1} \sum_{i=1}^n \sum_{j=1}^n \hat x_i \hat x_j' \left[ \sum_{k=1}^n W_{nk}(t_i) W_{nk}(t_j)\, \sigma_k^2 \right] (\widehat X{}'\widehat X)^{-1},$$

so that $n\,\mathrm{Cov}(\hat\beta_n) = B^{-1}\Omega B^{-1} + o(1)$ by Lemma 6.2 and Assumption 3.3. Therefore, the proof is complete. $\square$


Proof of Theorem 3.2. According to the definition of $\hat\beta_w$, for any nonzero constant $p$-vector $a$ it holds that

$$\sqrt{n}\, a'(\hat\beta_w - \hat\beta_n) = \sqrt{n}\, a'(\widehat X{}'\widehat X)^{-1} \sum_{i=1}^n \hat x_i \left( \epsilon_i \hat e_i - \sum_{j=1}^n W_{nj}(t_i)\, \epsilon_j \hat e_j \right) = \sqrt{n}\,(\epsilon_1^* + \cdots + \epsilon_n^*) - K_1,$$

where

$$\epsilon_i^* = a'(\widehat X{}'\widehat X)^{-1} \hat x_i\, \epsilon_i \hat e_i, \quad i = 1, \ldots, n, \quad \text{and} \quad K_1 = \sqrt{n}\, a'(\widehat X{}'\widehat X)^{-1} \sum_{i=1}^n \hat x_i \sum_{j=1}^n W_{nj}(t_i)\, \epsilon_j \hat e_j.$$

Obviously, under $F$ the $\epsilon_i^*$ are independent random variables. Denote

$$s_n^2 = \mathrm{Var}_F\!\left( \sqrt{n} \sum_{i=1}^n \epsilon_i^* \right) \quad \text{and} \quad G_{3n} = \sum_{i=1}^n E_F |\epsilon_i^*|^3.$$

By the Berry–Esséen theorem (Chow and Teicher, 1988), for some constant $c_4 < \infty$,

$$\sup_{-\infty < x < \infty} \left| P_F\!\left( \sqrt{n} \sum_{i=1}^n \epsilon_i^* \le x \right) - \Phi\!\left( \frac{x}{s_n} \right) \right| \le \frac{c_4\, n^{3/2} G_{3n}}{(s_n^2)^{3/2}}.$$

According to Theorem 3.1,

$$s_n^2 = n \sum_{i=1}^n E_F \epsilon_i^{*2} = n \sum_{i=1}^n a'(\widehat X{}'\widehat X)^{-1} \hat x_i \hat x_i'\, \hat e_i^2\, (\widehat X{}'\widehat X)^{-1} a \to_p a' B^{-1}\Omega B^{-1} a \quad \text{as } n \to \infty.$$

Therefore, in order to establish Theorem 3.2, it remains to prove that $n^{3/2} G_{3n} \to_p 0$ and $K_1 = o_p(1)$. Combining Assumptions 3.1–3.3, Lemmas 6.1 and 6.2, and the fact that if the $E|\epsilon_i|^s$ are uniformly bounded then $\max_{1 \le i \le n} |\epsilon_i| = o_p(n^{1/s})$, we have

$$n^{3/2} G_{3n} = n^{3/2} \sum_{i=1}^n E_F \left| a'(\widehat X{}'\widehat X)^{-1} \hat x_i \left( e_i - \sum_{j=1}^n W_{nj}(t_i) e_j + g(t_i) - \sum_{j=1}^n W_{nj}(t_i) g(t_j) + \hat x_i'(\beta - \hat\beta_n) \right) \right|^3 E_F|\epsilon_i|^3$$

$$= O(n^{-3/2}) \sum_{i=1}^n \left[ |a'\hat x_i e_i| + O_p\!\left\{ h^2 + \left( \frac{\log n}{nh} \right)^{1/2} \right\} \right]^3 E_F|\epsilon_i|^3 = o_p(1).$$

Therefore, $n^{3/2} G_{3n} \to_p 0$. By the definition of $\hat e_j$ it is easy to see that

$$\max_{1 \le i \le n} \left| \sum_{j=1}^n W_{nj}(t_i)\, \epsilon_j \hat e_j \right| \le \max_{1 \le i \le n} \left| \sum_{j=1}^n W_{nj}(t_i)\, \epsilon_j e_j \right| + \max_{1 \le i \le n} \left| \sum_{j=1}^n W_{nj}(t_i)\, \epsilon_j \right| \left( \max_{1 \le j \le n} \left| \sum_{k=1}^n W_{nk}(t_j) e_k \right| + \max_{1 \le j \le n} \left| g(t_j) - \sum_{k=1}^n W_{nk}(t_j) g(t_k) \right| + \max_{1 \le j \le n} |\hat x_j'(\beta - \hat\beta_n)| \right) = O_p\!\left\{ h^2 + \left( \frac{\log n}{nh} \right)^{1/2} \right\}.$$


Therefore, applying Lemma 6.1 and Assumption 3.1, for $s = 1, \ldots, p$,

$$\sum_{i=1}^n \hat x_{is} \sum_{j=1}^n W_{nj}(t_i)\, \epsilon_j \hat e_j = \sum_{i=1}^n \sum_{j=1}^n W_{nj}(t_i)\, u_{is}\, \epsilon_j \hat e_j + \sum_{i=1}^n \left[ h_s(t_i) - \sum_{k=1}^n W_{nk}(t_i) h_s(t_k) \right] \sum_{j=1}^n W_{nj}(t_i)\, \epsilon_j \hat e_j - \sum_{i=1}^n \left[ \sum_{k=1}^n W_{nk}(t_i) u_{ks} \right] \sum_{j=1}^n W_{nj}(t_i)\, \epsilon_j \hat e_j = o_p(n^{1/2}).$$

Combining this with Lemma 6.2, it holds that $K_1 = o_p(1)$. The proof is complete. $\square$

References

Baltagi, B.H., 1995. Econometric Analysis of Panel Data. Wiley, New York.
Chen, H., 1988. Convergence rates for parametric components in a partly linear model. Ann. Statist. 16, 136–146.
Chow, Y.S., Teicher, H., 1988. Probability Theory. Springer, New York.
Engle, R.F., Granger, C.W.J., Rice, J., Weiss, A., 1986. Semiparametric estimates of the relation between weather and electricity sales. J. Amer. Statist. Assoc. 80, 310–319.
Eubank, R., Speckman, P., 1993. Trigonometric series regression estimators with an application to partially linear models. J. Multivariate Anal. 32, 70–84.
Gao, J., 1995. Asymptotic theory for partially linear models. Commun. Statist. Theory Methods 24, 1985–2009.
Hamilton, A., Truong, K., 1997. Local linear estimation in partially linear models. J. Multivariate Anal. 60, 1–19.
Härdle, W., Liang, H., Gao, J., 2000. Partially Linear Models. Physica-Verlag, Heidelberg.
Heckman, N., 1986. Spline smoothing in a partially linear model. J. Roy. Statist. Soc. Ser. B 48, 244–248.
Hong, S., Cheng, P., 1993. Bootstrap approximation of estimation for parameter in a semiparametric regression model. Sci. China Ser. A 23, 239–251.
Liang, H., Härdle, W., Sommerfeld, V., 2000. Bootstrap approximation in a partially linear regression model. J. Statist. Plann. Inference 91, 413–426.
Miller Jr., R.G., 1974. An unbalanced jackknife. Ann. Statist. 2, 880–891.
Shao, J., 1988. On resampling methods for variance and bias estimation in linear models. Ann. Statist. 16, 996–1008.
Shi, P., Li, G., 1995. A note on the convergence rates of M-estimates for the partially linear model. Statistics 26, 27–47.
Speckman, P., 1988. Kernel smoothing in partial linear models. J. Roy. Statist. Soc. Ser. B 50, 413–436.
Wu, C.F.J., 1986. Jackknife, bootstrap and other resampling methods in regression analysis (with discussion). Ann. Statist. 14, 1261–1350.
