Statistica Sinica: Supplement

STATISTICAL INFERENCE FOR MEAN FUNCTION OF LONGITUDINAL IMAGING DATA OVER COMPLICATED DOMAINS

Qirui Hu$^1$ and Jie Li$^2$

$^1$Tsinghua University, Beijing 100084, China. $^2$Renmin University of China, Beijing 100872, China.

Supplementary Materials

In this supplement, Sections A and B provide basic lemmas and proofs of the theorems, respectively. Section C reports additional simulation results.

Throughout this supplementary document, $O_p$ (or $o_p$) denotes a sequence of random variables of a certain order in probability; for instance, $o_p(n^{-1/2})$ means a smaller order than $n^{-1/2}$ in probability. By $O_{a.s.}$ (or $o_{a.s.}$) we denote almost sure $O$ (or $o$). In addition, $U_p$ denotes a sequence of random functions that are $O_p$ uniformly over the domain. For any vector $\mathbf{a}=(a_1,\ldots,a_n)^\top\in\mathbb{R}^n$, denote the norm $\|\mathbf{a}\|_r=(|a_1|^r+\cdots+|a_n|^r)^{1/r}$, $1\le r<+\infty$, and $\|\mathbf{a}\|_\infty=\max(|a_1|,\ldots,|a_n|)$. For any matrix $\mathbf{A}=(a_{ij})_{i=1,j=1}^{m,n}$, denote its $L_r$ norm as $\|\mathbf{A}\|_r=\max_{\mathbf{a}\in\mathbb{R}^n,\mathbf{a}\ne 0}\|\mathbf{A}\mathbf{a}\|_r\|\mathbf{a}\|_r^{-1}$ for $r<+\infty$, and $\|\mathbf{A}\|_\infty=\max_{1\le i\le m}\sum_{j=1}^n|a_{ij}|$ for $r=\infty$. For any random variable $X$ that is $L_p$-integrable, denote its $L_p$ norm as $\|X\|_p=(E|X|^p)^{1/p}$.

A. Preliminaries

To investigate the estimation structure in greater depth, it is convenient to decompose the estimation error $\hat\eta_t(x)-\eta_t(x)$ into three terms. In the same ordering as equation (2.5), denote the data vectors $\mathbf{m}=\{m(x_{ij})\}_{i=1,j=1}^{M,N_i}$, $\mathbf{e}_t=\{\sigma(x_{ij})\varepsilon_{t,ij}\}_{i=1,j=1}^{M,N_i}$, $\boldsymbol{\phi}_k=\{\phi_k(x_{ij})\}_{i=1,j=1}^{M,N_i}$ and $\mathbf{R}_t=\sum_{k=1}^\infty\xi_{tk}\boldsymbol{\phi}_k$. Then the estimator $\hat\eta_t(x)$ can be decomposed into three terms:

$\hat\eta_t(x)=\tilde m(x)+\tilde\xi_t(x)+\tilde e_t(x)$,  (S.1)

where $\tilde m(x)$, $\tilde\xi_t(x)$, $\tilde e_t(x)$ are the solutions of (2.6) with $Y_{t,ij}$ replaced by $m(x_{ij})$, $R_t(x_{ij})$, $\sigma(x_{ij})\varepsilon_{t,ij}$ respectively, i.e.,

$\tilde m(x)=\tilde{\mathbf{B}}(x)^\top(\tilde{\mathbf{X}}^\top\tilde{\mathbf{X}})^{-1}\tilde{\mathbf{X}}^\top\mathbf{m}$, $\tilde\xi_t(x)=\tilde{\mathbf{B}}(x)^\top(\tilde{\mathbf{X}}^\top\tilde{\mathbf{X}})^{-1}\tilde{\mathbf{X}}^\top\mathbf{R}_t$, $\tilde e_t(x)=\tilde{\mathbf{B}}(x)^\top(\tilde{\mathbf{X}}^\top\tilde{\mathbf{X}})^{-1}\tilde{\mathbf{X}}^\top\mathbf{e}_t$.

For any $L^2$-integrable functions $\phi(x)$ and $\varphi(x)$ defined on $\Omega$, take $\langle\phi,\varphi\rangle=\int_\Omega\phi(x)\varphi(x)\,dx$ as their theoretical inner product and $\langle\phi,\varphi\rangle_{2,N}=N^{-1}\sum_{i=1}^N\phi(x_i)\varphi(x_i)$ as their empirical inner product, with regular $L^2$ norm $\|\phi\|_{L^2}^2=\langle\phi,\phi\rangle$ and empirical norm $\|\phi\|_{2,N}^2=\langle\phi,\phi\rangle_{2,N}$.

Recall the set of transformed Bernstein basis polynomials $\{\tilde B_\ell(x)\}_{\ell=1}^q$ and $\tilde{\mathbf{B}}(x)=\mathbf{Q}_2^\top\mathbf{B}(x)$, $\tilde{\mathbf{X}}=\mathbf{X}\mathbf{Q}_2$ defined in Section 2.2; denote

$\Gamma_{N,0}=N^{-1}\tilde{\mathbf{X}}^\top\tilde{\mathbf{X}}=\{\langle\tilde B_\ell,\tilde B_{\ell'}\rangle_{2,N}\}_{\ell,\ell'=1}^q$, $\mathbf{V}=\mathbf{Q}_2\Gamma_{N,0}^{-1}\mathbf{Q}_2^\top$,

two symmetric positive definite matrices.

Lemma A.1 (Lemma B.6 of Wang et al. (2020)). Suppose that $\triangle$ is a $\pi$-quasi-uniform triangulation. If $N^{1/2}|\triangle|\to\infty$ as $N\to\infty$, then there exist constants $0<c<C<\infty$ such that, with probability approaching 1 as $N\to\infty$, $n\to\infty$, one has
$c|\triangle|^2\le\lambda_{\min}(\Gamma_{N,0})\le\lambda_{\max}(\Gamma_{N,0})\le C|\triangle|^2$.

Lemma A.2 (Theorem 10.10 of Lai and Schumaker (2007)). Suppose that $\triangle$ is a $\pi$-quasi-uniform triangulation of a polygonal domain $\Omega$, and $g(\cdot)\in W^{d+1,\infty}(\Omega)$. For any bi-integer $(a_1,a_2)$ with $0\le a_1+a_2\le d$, there exists a spline $g^*\in S_d^r(\triangle)$ ($d\ge 3r+2$) such that
$\|\nabla_{z_1}^{a_1}\nabla_{z_2}^{a_2}(g-g^*)\|_\infty\le C|\triangle|^{d+1-a_1-a_2}|g|_{d+1,\infty}$,
where $C$ is a constant depending on $d$, $r$ and the shape parameter $\pi$.

Lemma A.3 (Theorem 2.6.7 of Csörgő and Révész (1981)). Suppose that $\xi_i$, $1\le i\le n$, are iid with $E(\xi_1)=0$, $E(\xi_1^2)=1$, and $H(x)>0$ ($x\ge 0$) is an increasing continuous function such that $x^{-2-\gamma}H(x)$ is increasing for some $\gamma>0$ and $x^{-1}\log H(x)$ is decreasing, with $EH(|\xi_1|)<\infty$. Then there exist constants $C_1,C_2,a>0$ which depend only on the distribution of $\xi_1$ and a sequence of Brownian motions $\{W_n(m)\}_{n=1}^\infty$, such that for any $\{x_n\}_{n=1}^\infty$ satisfying $H^{-1}(n)<x_n<C_1(n\log n)^{1/2}$ and $S_m=\sum_{i=1}^m\xi_i$, then
$P\{\max_{1\le m\le n}|S_m-W_n(m)|>x_n\}\le C_2n\{H(ax_n)\}^{-1}$.
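The three-term decomposition (S.1) rests only on the linearity of the least-squares map $\mathbf{Y}\mapsto(\tilde{\mathbf{X}}^\top\tilde{\mathbf{X}})^{-1}\tilde{\mathbf{X}}^\top\mathbf{Y}$ in the response vector. A minimal numerical sketch of this fact, using a hypothetical small random design matrix in place of the paper's spline design (illustrative only, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
N, q = 50, 4
X = rng.standard_normal((N, q))   # stand-in for the transformed design X-tilde
m = rng.standard_normal(N)        # mean-function values m(x_ij)
R = rng.standard_normal(N)        # trajectory values R_t(x_ij)
e = rng.standard_normal(N)        # noise values sigma(x_ij) * eps_t,ij

def fit(y, X=X):
    """Least-squares coefficient map y -> (X'X)^{-1} X'y."""
    return np.linalg.solve(X.T @ X, X.T @ y)

# Fitting Y = m + R + e equals the sum of the three separate fits,
# which is exactly the content of decomposition (S.1).
lhs = fit(m + R + e)
rhs = fit(m) + fit(R) + fit(e)
assert np.allclose(lhs, rhs)
```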
Lemma A.4 (Theorem 1.5.4 of van der Vaart (1998)). Let $T$ be an arbitrary set and let $X_\alpha:\Omega_\alpha\to\ell^\infty(T)$ be arbitrary. Then $X_\alpha$ converges weakly to a tight limit if and only if $X_\alpha$ is asymptotically tight and the marginals $(X_\alpha(t_1),\ldots,X_\alpha(t_k))$ converge weakly to a limit for every finite subset $t_1,\ldots,t_k$ of $T$. If $X_\alpha$ is asymptotically tight and its marginals converge weakly to the marginals $(X(t_1),\ldots,X(t_k))$ of a stochastic process $X$, then there is a version of $X$ with uniformly bounded sample paths and $X_\alpha\xrightarrow{d}X$.

Lemma A.5 (Theorem 1.5.6 of van der Vaart (1998)). A net $X_\alpha:\Omega_\alpha\to\ell^\infty(T)$ is asymptotically tight if and only if $X_\alpha(t)$ is asymptotically tight in $\mathbb{R}$ for every $t$, and for all $\varepsilon,\eta>0$ there exists a finite partition $T=\cup_{i=1}^kT_i$ such that

$\limsup_\alpha P\{\sup_{1\le i\le k}\sup_{s,t\in T_i}|X_\alpha(s)-X_\alpha(t)|>\varepsilon\}<\eta$.  (S.2)

Lemma A.6. For a $\pi$-quasi-uniform triangulation $\triangle$, if $N^{1/2}|\triangle|\to\infty$ as $N\to\infty$, then there exist constants $0<c<C<\infty$ such that, with probability approaching 1 as $N\to\infty$, $n\to\infty$, one has
$c|\triangle|^{-2}\le\lambda_{\min}(\mathbf{V})\le\lambda_{\max}(\mathbf{V})\le C|\triangle|^{-2}$.

Proof. For any vector $\theta$, one has $\theta^\top\mathbf{V}\theta=\theta^\top\mathbf{Q}_2\Gamma_{N,0}^{-1}\mathbf{Q}_2^\top\theta$. According to Lemma A.1,
$c|\triangle|^{-2}\|\mathbf{Q}_2^\top\theta\|_2^2\le\theta^\top\mathbf{Q}_2\Gamma_{N,0}^{-1}\mathbf{Q}_2^\top\theta\le C|\triangle|^{-2}\|\mathbf{Q}_2^\top\theta\|_2^2$.
Note that $\|\mathbf{Q}_2^\top\theta\|_2^2=\theta^\top\mathbf{Q}_2\mathbf{Q}_2^\top\theta$ and the eigenvalues of $\mathbf{Q}_2\mathbf{Q}_2^\top$ are either 0 or 1, thus $\|\mathbf{Q}_2^\top\theta\|_2^2\le\theta^\top\theta$, which leads to
$c|\triangle|^{-2}\|\theta\|_2^2\le\theta^\top\mathbf{Q}_2\Gamma_{N,0}^{-1}\mathbf{Q}_2^\top\theta\le C|\triangle|^{-2}\|\theta\|_2^2$.
Hence $c|\triangle|^{-2}\le\lambda_{\min}(\mathbf{V})\le\lambda_{\max}(\mathbf{V})\le C|\triangle|^{-2}$.

Lemma A.7. For any Bernstein basis polynomials $B_\ell(x)$, $x\in\Omega$, of degree $d\ge 0$, we have
$\sum_{i=1}^M\sum_{j=1}^{N_i}B_\ell(x_{ij})=O(N|\triangle|^2)$, $\forall\ell=1,\ldots,p$,  (S.3)
$\sum_{\ell=1}^pB_\ell(x_{ij})=O(1)$, $\forall i=1,\ldots,M$, $j=1,\ldots,N_i$,  (S.4)
$\max_{1\le\ell,\ell'\le p}\max_{1\le i\le M,1\le j\le N_i}\sup_{x\in T_h}|B_\ell(x_{ij})B_{\ell'}(x_{ij})-B_\ell(x)B_{\ell'}(x)|=O(N^{-1/2}|\triangle|^{-1})$,  (S.5)
where $T_h\in\triangle$ is the triangle that contains $x_{ij}$.

Proof. It is trivial that (S.3) holds. For any fixed $x_{ij}$, $i=1,\ldots,M$, $j=1,\ldots,N_i$, assume that $T_h\in\triangle$ is the triangle that contains $x_{ij}$. Note that there are $d^*=(d+1)(d+2)/2$ Bernstein basis polynomials on each triangle, then
$\sum_{\ell=1}^pB_\ell(x_{ij})=\sum_{\{\ell:\lceil\ell/d^*\rceil=h\}}B_\ell(x_{ij})\le(d+1)(d+2)/2=O(1)$.
Denote by $\omega(f,h)=\max\{|f(x,y)-f(\tilde x,\tilde y)|:(x,y),(\tilde x,\tilde y)\in T,|x-\tilde x|^2+|y-\tilde y|^2\le h^2\}$ the modulus of continuity of $f$ relative to the triangle $T$. Since $B_\ell(\cdot)\in C^1(T)$ for any $\ell=1,\ldots,p$, then $\omega(B_\ell,N^{-1/2})\le N^{-1/2}|B_\ell|_{1,\infty,T}\le CN^{-1/2}|\triangle|^{-1}$, thus
$|B_\ell(x_{ij})B_{\ell'}(x_{ij})-B_\ell(x)B_{\ell'}(x)|=|B_\ell(x_{ij})B_{\ell'}(x_{ij})-B_\ell(x)B_{\ell'}(x_{ij})+B_\ell(x)B_{\ell'}(x_{ij})-B_\ell(x)B_{\ell'}(x)|$
$\le|B_{\ell'}(x_{ij})||B_\ell(x_{ij})-B_\ell(x)|+|B_\ell(x)||B_{\ell'}(x_{ij})-B_{\ell'}(x)|\le|B_{\ell'}(x_{ij})|\omega(B_\ell,N^{-1/2})+|B_\ell(x)|\omega(B_{\ell'},N^{-1/2})=O(N^{-1/2}|\triangle|^{-1})$.
The proof is completed.

For any function $\phi\in C(\Omega)$, denote the vector $\boldsymbol\phi=(\phi(x_{ij}))^\top$, ordered as in (2.5), and the function $\tilde\phi(x)=\tilde{\mathbf{B}}(x)^\top(\tilde{\mathbf{X}}^\top\tilde{\mathbf{X}})^{-1}\tilde{\mathbf{X}}^\top\boldsymbol\phi$.

Lemma A.8. There exists $c_d\in(0,\infty)$ such that when $n$ is large enough, $\|\tilde\phi\|_{\infty,\Omega}\le c_d\|\phi\|_{\infty,\Omega}$ for any $\phi\in C(\Omega)$. Furthermore, if $\phi\in W^{d+1,\infty}(\Omega)$, then there exists $\tilde C_{d,r}$ such that
$\|\tilde\phi-\phi\|_{\infty,\Omega}\le\tilde C_{d,r}|\phi|_{d+1,\infty,\Omega}|\triangle|^{d+1}$.

Proof. Note that for any $x\in\Omega$, at most $(d+1)(d+2)/2$ of $B_1(x),\ldots,B_p(x)$ lie between 0 and 1, the others being 0, so
$\|\tilde\phi\|_{\infty,\Omega}=\|N^{-1}\mathbf{B}(x)^\top\mathbf{V}\mathbf{X}^\top\boldsymbol\phi\|_{\infty,\Omega}\le\frac{(d+1)(d+2)}{2}N^{-1}\|\mathbf{V}\mathbf{X}^\top\boldsymbol\phi\|_\infty\le\frac{(d+1)(d+2)}{2N}C|\triangle|^{-2}\|\phi\|_{\infty,\Omega}\|\mathbf{X}^\top 1_N\|_\infty$,
in which $1_N=(1,\ldots,1)^\top$ is an $N$-dimensional constant vector. Clearly, (S.3) ensures that
$\|\mathbf{X}^\top 1_N\|_\infty=\max_{1\le\ell\le p}\sum_{i=1}^M\sum_{j=1}^{N_i}B_\ell(x_{ij})\le CN|\triangle|^2$,
which implies $\|\tilde\phi\|_{\infty,\Omega}\le c_d\|\phi\|_{\infty,\Omega}$.

Now if $\phi\in W^{d+1,\infty}(\Omega)$, let $g\in S_d^r(\triangle)$ be such that $\|g-\phi\|_\infty\le C_{d,r}|\triangle|^{d+1}|\phi|_{d+1,\infty,\Omega}$ according to Lemma A.2; then $\tilde g\equiv g$ as $g\in S_d^r(\triangle)$, hence
$\|\tilde\phi-\phi\|_{\infty,\Omega}\le\|\tilde\phi-\tilde g\|_{\infty,\Omega}+\|\phi-g\|_{\infty,\Omega}\le(c_d+1)\|\phi-g\|_{\infty,\Omega}\le\tilde C_{d,r}|\phi|_{d+1,\infty,\Omega}|\triangle|^{d+1}$.
The proof is completed.
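The counting bound (S.4), and the fact used in the proof of Lemma A.8 that at most $(d+1)(d+2)/2$ basis functions are nonzero at any point, both follow from the local structure of the Bernstein basis: on a single triangle there are exactly $(d+1)(d+2)/2$ polynomials $B_{ijk}(x)=\frac{d!}{i!j!k!}b_1^ib_2^jb_3^k$ (with $b_1,b_2,b_3$ the barycentric coordinates and $i+j+k=d$), and by the multinomial theorem they form a partition of unity. A self-contained numerical check for one triangle with $d=2$ (illustrative, not the paper's code):

```python
from itertools import product
from math import factorial

d = 2
b = (0.2, 0.3, 0.5)  # barycentric coordinates of an interior point; sum to 1

basis = [
    factorial(d) / (factorial(i) * factorial(j) * factorial(k))
    * b[0] ** i * b[1] ** j * b[2] ** k
    for i, j, k in product(range(d + 1), repeat=3)
    if i + j + k == d
]

# Exactly (d+1)(d+2)/2 polynomials live on one triangle, and they sum to one,
# so the sum over the active basis functions in (S.4) is O(1).
assert len(basis) == (d + 1) * (d + 2) // 2   # 6 for d = 2
assert abs(sum(basis) - 1.0) < 1e-12
```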
Lemma A.9. For $n>2$, $a>2$, $W_i\sim N(0,\sigma_i^2)$, $\sigma_i>0$, $i=1,\ldots,n$,
$P\{\max_{1\le i\le n}|W_i/\sigma_i|>a\sqrt{\log n}\}<\sqrt{2/\pi}\,n^{1-a^2/2}$.  (S.6)
As $n\to\infty$, $(\max_{1\le i\le n}|W_i|)/(\max_{1\le i\le n}\sigma_i)\le\max_{1\le i\le n}|W_i/\sigma_i|=O_{a.s.}(\sqrt{\log n})$.

Proof. Note that
$P\{\max_{1\le i\le n}|W_i/\sigma_i|>a\sqrt{\log n}\}\le\sum_{i=1}^nP\{|W_i/\sigma_i|>a\sqrt{\log n}\}\le2n\{1-\Phi(a\sqrt{\log n})\}<2n\frac{\phi(a\sqrt{\log n})}{a\sqrt{\log n}}\le2n\phi(a\sqrt{\log n})=\sqrt{2/\pi}\,n^{1-a^2/2}$,
for $n>2$, $a>2$, which proves (S.6). The lemma follows by applying the Borel–Cantelli lemma with a choice of $a>2$.

Lemma A.10. Assumption (A5) holds under Assumptions (A3), (A4) and (A5′).

Proof. Under Assumption (A5′), $E|\zeta_{tk}|^{r_1}<\infty$, $r_1>4+2\alpha$, $E|\varepsilon_{t,ij}|^{r_2}<\infty$, $r_2\omega>2+\theta$, where $\alpha$ is defined in Assumption (A4) and $\theta$ is defined in Assumption (A3), so there exists some $\beta_1\in(0,1/2)$ such that $r_1>(2+\alpha)/\beta_1$. Let $H(x)=x^{r_1}$. Lemma A.3 entails that there exist constants $c_{1k}$ and $a_k$ depending on the distribution of $\zeta_{tk}$, such that for $x_n=(n+I_n)^{\beta_1}$, $(n+I_n)/H(a_kx_n)=a_k^{-r_1}(n+I_n)^{1-r_1\beta_1}$, and iid $N(0,1)$ variables $Z_{tk,\zeta}$,
$P\{\max_{-I_n+1\le\tau\le n}|\sum_{t=-I_n+1}^\tau\zeta_{tk}-\sum_{t=-I_n+1}^\tau Z_{tk,\zeta}|>(n+I_n)^{\beta_1}\}<c_{1k}a_k^{-r_1}(n+I_n)^{1-r_1\beta_1}$.
Since there are only a finite number of distinct distributions for $\{\zeta_{tk}\}_{t=-I_n+1,k=1}^{n,k_n}$ by Assumption (A5′), there exists a common $c_1>0$ such that
$\max_{1\le k\le k_n}P\{\max_{-I_n+1\le\tau\le n}|\sum_{t=-I_n+1}^\tau\zeta_{tk}-\sum_{t=-I_n+1}^\tau Z_{tk,\zeta}|>(n+I_n)^{\beta_1}\}<c_1(n+I_n)^{1-r_1\beta_1}$.
Since $I_n\asymp n^\iota$, $0<\iota<1$, by definition, the $C_r$ inequality leads to $(n+I_n)^{\beta_1}\le C_0n^{\beta_1}$ for some constant $C_0$. Because $1-r_1\beta_1<0$, it is clear that $(n+I_n)^{1-r_1\beta_1}<n^{1-r_1\beta_1}$. Thus one has
$\max_{1\le k\le k_n}P\{\max_{-I_n+1\le\tau\le n}|\sum_{t=-I_n+1}^\tau\zeta_{tk}-\sum_{t=-I_n+1}^\tau Z_{tk,\zeta}|>C_0n^{\beta_1}\}<c_1n^{1-r_1\beta_1}$.
Recalling that $r_1>(2+\alpha)/\beta_1$, one can let $\gamma_1=r_1\beta_1-1-\alpha>1$, and there exists a $C_1>0$ such that
$P\{\max_{1\le k\le k_n}\max_{-I_n+1\le\tau\le n}|\sum_{t=-I_n+1}^\tau\zeta_{tk}-\sum_{t=-I_n+1}^\tau Z_{tk,\zeta}|>C_0n^{\beta_1}\}<k_nc_1n^{1-r_1\beta_1}\le C_1n^{-\gamma_1}$.
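The Gaussian maximal bound (S.6) of Lemma A.9, which drives the uniform almost-sure rates in Lemmas A.12 and A.13 below, is easy to sanity-check by simulation. A hypothetical Monte Carlo sketch (illustrative parameters, not part of the proofs): for $a=2.5$ the right side of (S.6) is below $10^{-4}$, so the event $\max_i|W_i/\sigma_i|\le a\sqrt{\log n}$ holds in virtually every run.

```python
import numpy as np

rng = np.random.default_rng(0)
n, a = 2000, 2.5
sigma = rng.uniform(0.5, 2.0, size=n)     # heterogeneous standard deviations
W = sigma * rng.standard_normal(n)        # W_i ~ N(0, sigma_i^2)

threshold = a * np.sqrt(np.log(n))        # a * sqrt(log n) with a > 2
ratio_max = np.max(np.abs(W / sigma))

# The exceedance probability is bounded by sqrt(2/pi) * n^(1 - a^2/2).
bound = np.sqrt(2 / np.pi) * n ** (1 - a**2 / 2)
assert bound < 1e-4
assert ratio_max <= threshold
```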
Similarly, under Assumption (A5′), let $H(x)=x^{r_2}$. Lemma A.3 entails that there exist constants $c_2$ and $b$ depending on the distribution of $\varepsilon_{ij}$, such that for $x_N=N^{\beta_2}$, $N/H(bx_N)=b^{-r_2}N^{1-r_2\beta_2}$, and iid standard normal random variables $\{Z_{t,ij,\varepsilon}\}_{t=1,i=1,j=1}^{n,M,N_i}$ such that
$\max_{1\le t\le n}P\{\max_{1\le\tau\le N}|\sum_{k=1}^\tau\varepsilon_{t,f_1(k)f_2(k)}-\sum_{k=1}^\tau Z_{t,f_1(k)f_2(k),\varepsilon}|>N^{\beta_2}\}\le c_2b^{-r_2}N^{1-r_2\beta_2}$.
Assumption (A3) states that $n=O(N^\theta)$, so there is a $C_2>0$ such that
$P\{\max_{1\le t\le n}\max_{1\le\tau\le N}|\sum_{k=1}^\tau\varepsilon_{t,f_1(k)f_2(k)}-\sum_{k=1}^\tau Z_{t,f_1(k)f_2(k),\varepsilon}|>N^{\beta_2}\}\le C_2N^{\theta+1-r_2\beta_2}$.
Note that $r_2\omega>2+\theta$; one can choose $\beta_2\in(0,\omega)$ such that $r_2\beta_2>2+\theta$, which ensures that $\gamma_2=\beta_2r_2-1-\theta>1$, and Assumption (A5) follows. The lemma holds consequently.

Lemma A.11. Under Assumptions (A5) and (A5′), as $n\to\infty$, there are constants $C_3,C_4\in(0,+\infty)$, $\gamma_3\in(1,+\infty)$, $\beta_3\in(0,1/2)$ and a series of $N(0,1)$ variables $Z_{tk,\xi}=\sum_{t'=0}^\infty a_{t'k}Z_{t-t',k,\zeta}$, $t=1,\ldots,n$, $k=1,\ldots,k_n$, with $\mathrm{Cov}(Z_{jk,\xi},Z_{j+h,k,\xi})=\sum_{m=0}^\infty a_{mk}a_{m+h,k}$, $1\le j\le n$, $1\le h\le n-j$, such that
$P\{\max_{1\le k\le k_n}\max_{1\le\tau\le n}|\sum_{t=1}^\tau\xi_{tk}-\sum_{t=1}^\tau Z_{tk,\xi}|>C_3n^{\beta_3}\}<C_4n^{-\gamma_3}$.  (S.7)

Proof. Since $\sum_{t=0}^\infty a_{tk}^2=1$ and $|a_{tk}|<C_at^{\rho_a}$, for $t=0,\ldots,n$, $k=1,\ldots,k_n$, together with $I_n\asymp n^\iota$, then $(t'+I_n)^{\rho_a}\le C(t'I_n)^{\rho_a/2}$ for $t'\ge 1$ and some constant $C$. There also exists a constant $M>0$ such that $\sum_{t=0}^{I_n}|a_{tk}|<M$. It is clear that
$\xi_{tk}=\sum_{t'=0}^{I_n}a_{t'k}\zeta_{t-t',k}+\sum_{t'=I_n+1}^\infty a_{t'k}\zeta_{t-t',k}$,
$|\xi_{tk}-\sum_{t'=0}^{I_n}a_{t'k}\zeta_{t-t',k}|\le\sum_{t'=I_n+1}^\infty C_a(t')^{\rho_a}|\zeta_{t-t',k}|\le Cn^{\rho_a\iota/2}\sum_{t'=1}^\infty(t')^{\rho_a/2}|\zeta_{t-I_n-t',k}|$.
Hence,
$\max_{1\le k\le k_n}\max_{1\le\tau\le n}|\sum_{t=1}^\tau(\xi_{tk}-\sum_{t'=0}^{I_n}a_{t'k}\zeta_{t-t',k})|\le Cn^{\rho_a\iota/2+1}\max_{1\le k\le k_n}\max_{1\le t\le n}W_{tk}$,
where $W_{tk}=\sum_{t'=1}^\infty(t')^{\rho_a/2}|\zeta_{t-I_n-t',k}|$. Noticing that $\sup_{t,k}E|\zeta_{t,k}|^{r_1}<\infty$, where $r_1>4+2\alpha$,
$\|W_{tk}\|_{r_1}\le\sum_{t'=1}^\infty(t')^{\rho_a/2}\|\zeta_{t-I_n-t',k}\|_{r_1}<\infty$.
Therefore $EW_{tk}^{r_1}<K$ for some $K>0$, $t=1,\ldots,n$, $k=1,\ldots,k_n$. Note that $k_n=O(n^\alpha)$ in Assumption (A4), thus
$P\{Cn^{\rho_a\iota/2+1}\max_{1\le k\le k_n}\max_{1\le t\le n}W_{tk}>Mn^{\beta_3}\}<nk_n\frac{C^{r_1}K}{M^{r_1}}n^{-(\beta_3-\rho_a\iota/2-1)r_1}<\frac{C^{r_1}K}{M^{r_1}}n^{-(\beta_3-\rho_a\iota/2-1)r_1+1+\alpha}$.
So,
$P\{\max_{1\le k\le k_n}\max_{1\le\tau\le n}|\sum_{t=1}^\tau\xi_{tk}-\sum_{t=1}^\tau\sum_{t'=0}^{I_n}a_{t'k}\zeta_{t-t',k}|>Mn^{\beta_3}\}<\frac{C^{r_1}K}{M^{r_1}}n^{-(\beta_3-\rho_a\iota/2-1)r_1+1+\alpha}$.
Next, define $U_{tk}=\sum_{t'=I_n+1}^\infty a_{t'k}Z_{t-t',k,\zeta}$; then $U_{tk}\sim N(0,\sum_{t'=I_n+1}^\infty a_{t'k}^2)$, $k=1,\ldots,k_n$. It is obvious that
$\max_{1\le k\le k_n}\max_{1\le\tau\le n}|\sum_{t=1}^\tau\sum_{t'=I_n+1}^\infty a_{t'k}Z_{t-t',k,\zeta}|\le n\max_{1\le k\le k_n}\max_{1\le t\le n}|U_{tk}|$.
Note that $\sum_{t'=I_n+1}^\infty a_{t'k}^2<Cn^{(2\rho_a+1)\iota}$ for some $C>0$, $k=1,\ldots,k_n$, and $k_n=O(n^\alpha)$ for some $\alpha>0$; one has
$P\{n\max_{1\le k\le k_n}\max_{1\le t\le n}|U_{tk}|>Mn^{\beta_3}\}<nk_n\frac{Cn^{(2\rho_a+1)\iota}}{M^2(n^{\beta_3-1})^2}<\frac{C}{M^2}n^{(2\rho_a+1)\iota-2\beta_3+\alpha+3}$,
which leads to
$P\{\max_{1\le k\le k_n}\max_{1\le\tau\le n}|\sum_{t=1}^\tau\sum_{t'=I_n+1}^\infty a_{t'k}Z_{t-t',k,\zeta}|>Mn^{\beta_3}\}<\frac{C}{M^2}n^{(2\rho_a+1)\iota-2\beta_3+\alpha+3}$.
Now Assumption (A5) entails that for $0\le t'\le I_n$, $1\le t\le n$, $-I_n+1\le t-t'\le n$,
$P\{\max_{1\le k\le k_n}\max_{-I_n+1\le\tau\le n}|\sum_{t=-I_n+1}^\tau\zeta_{tk}-\sum_{t=-I_n+1}^\tau Z_{tk,\zeta}|>C_0n^{\beta_3}\}<C_1n^{-\gamma_1}$.
Then,
$P\{\max_{1\le k\le k_n}\max_{1\le\tau\le n}|\sum_{t=1}^\tau\sum_{t'=0}^{I_n}a_{t'k}(\zeta_{t-t',k}-Z_{t-t',k,\zeta})|>2MC_0n^{\beta_3}\}$
$\le P\{\max_{1\le k\le k_n}\sum_{t'=0}^{I_n}|a_{t'k}|\max_{1\le\tau\le n}|\sum_{t=1}^\tau(\zeta_{t-t',k}-Z_{t-t',k,\zeta})|>2MC_0n^{\beta_3}\}$
$\le P\{M\max_{1\le k\le k_n}\max_{1\le\tau\le n}\max_{0\le t'\le I_n}|\sum_{t=1}^\tau\zeta_{t-t',k}-\sum_{t=1}^\tau Z_{t-t',k,\zeta}|>2MC_0n^{\beta_3}\}$
$\le P\{2\max_{1\le k\le k_n}\max_{-I_n+1\le\tau\le n}|\sum_{t=-I_n+1}^\tau\zeta_{tk}-\sum_{t=-I_n+1}^\tau Z_{tk,\zeta}|>2C_0n^{\beta_3}\}<C_1n^{-\gamma_1}$.
Hence,
$P\{\max_{1\le k\le k_n}\max_{1\le\tau\le n}|\sum_{t=1}^\tau\xi_{tk}-\sum_{t=1}^\tau Z_{tk,\xi}|>4MC_0n^{\beta_3}\}$
$\le P\{\max_{1\le k\le k_n}\max_{1\le\tau\le n}|\sum_{t=1}^\tau(\xi_{tk}-\sum_{t'=0}^{I_n}a_{t'k}\zeta_{t-t',k})|>MC_0n^{\beta_3}\}$
$\quad+P\{\max_{1\le k\le k_n}\max_{1\le\tau\le n}|\sum_{t=1}^\tau\sum_{t'=0}^{I_n}a_{t'k}(\zeta_{t-t',k}-Z_{t-t',k,\zeta})|>2MC_0n^{\beta_3}\}$
$\quad+P\{\max_{1\le k\le k_n}\max_{1\le\tau\le n}|\sum_{t=1}^\tau\sum_{t'=I_n+1}^\infty a_{t'k}Z_{t-t',k,\zeta}|>MC_0n^{\beta_3}\}$
$\le\frac{C^{r_1}K}{M^{r_1}}n^{-(\beta_3-\rho_a\iota/2-1)r_1+1+\alpha}+C_1n^{-\gamma_1}+\frac{C}{M^2}n^{(2\rho_a+1)\iota-2\beta_3+\alpha+3}<C_4n^{-\gamma_3}$.
Denote $C_3=4MC_0$ and $Z_{tk,\xi}=\sum_{t'=0}^\infty a_{t'k}Z_{t-t',k,\zeta}$, $t=1,\ldots,n$, $k=1,\ldots,k_n$; then $\{Z_{tk,\xi}\}_{t=1,k=1}^{n,k_n}$ are $N(0,1)$ variables with $\mathrm{Cov}(Z_{jk,\xi},Z_{j+h,k,\xi})=\sum_{m=0}^\infty a_{mk}a_{m+h,k}$, $1\le j\le n$, $1\le h\le n-j$, thus
$P\{\max_{1\le k\le k_n}\max_{1\le\tau\le n}|\sum_{t=1}^\tau\xi_{tk}-\sum_{t=1}^\tau Z_{tk,\xi}|>C_3n^{\beta_3}\}<C_4n^{-\gamma_3}$.
The proof is completed.

Lemma A.12. Under Assumptions (A2), (A5) and (A6),
$\max_{1\le\ell\le p}(nN)^{-1}|\sum_{t=1}^n\sum_{i=1}^M\sum_{j=1}^{N_i}B_\ell(x_{ij})\sigma(x_{ij})Z_{t,ij,\varepsilon}|=O_{a.s.}(n^{-1/2}N^{-1/2}|\triangle|\log^{1/2}N)$.

Proof.
Note that $(nN)^{-1}\sum_{t=1}^n\sum_{i=1}^M\sum_{j=1}^{N_i}B_\ell(x_{ij})\sigma(x_{ij})Z_{t,ij,\varepsilon}=N^{-1}\sum_{i=1}^M\sum_{j=1}^{N_i}B_\ell(x_{ij})\sigma(x_{ij})\bar Z_{\cdot ij,\varepsilon}$, where $\bar Z_{\cdot ij,\varepsilon}=n^{-1}\sum_{t=1}^nZ_{t,ij,\varepsilon}$; then apply Lemma A.9 to obtain the uniform bound for the zero-mean Gaussian variables $N^{-1}\sum_{i=1}^M\sum_{j=1}^{N_i}B_\ell(x_{ij})\sigma(x_{ij})\bar Z_{\cdot ij,\varepsilon}$, $1\le\ell\le p$, with variance
$E\{N^{-1}\sum_{i=1}^M\sum_{j=1}^{N_i}B_\ell(x_{ij})\sigma(x_{ij})\bar Z_{\cdot ij,\varepsilon}\}^2=n^{-1}N^{-2}\sum_{i=1}^M\sum_{j=1}^{N_i}B_\ell^2(x_{ij})\sigma^2(x_{ij})=n^{-1}N^{-1}\|B_\ell\sigma\|_{2,N}^2\asymp|\triangle|^2N^{-1}n^{-1}$.
It follows from Lemma A.9 that
$\max_{1\le\ell\le p}|N^{-1}\sum_{i=1}^M\sum_{j=1}^{N_i}B_\ell(x_{ij})\sigma(x_{ij})\bar Z_{\cdot ij,\varepsilon}|=O_{a.s.}\{n^{-1/2}N^{-1/2}|\triangle|\log^{1/2}p\}=O_{a.s.}(n^{-1/2}N^{-1/2}|\triangle|\log^{1/2}N)$,  (S.8)
where the last step follows from Assumption (A6) on the order of $|\triangle|$ relative to $N$. Thus the lemma holds.

Lemma A.13. Under Assumptions (A2), (A5) and (A6), one has
$\sup_{x\in\Omega}|n^{-1}\sum_{t=1}^n\tilde e_t(x)|=O_{a.s.}(n^{-1/2}N^{-1/2}|\triangle|^{-1}\log^{1/2}N+N^{\beta_2-1/2}|\triangle|^{-1}+N^{\beta_2-1}|\triangle|^{-2})$.

Proof. According to Assumption (A5), it is trivial that
$\max_{1\le t\le n}\max_{1\le\tau\le N}N^{-1}|\sum_{k=1}^\tau\{\varepsilon_{t,f_1(k)f_2(k)}-Z_{t,f_1(k)f_2(k),\varepsilon}\}|=O_{a.s.}(N^{\beta_2-1})$.
The bivariate spline satisfies
$|B_\ell(x_{f_1(k)f_2(k)})-B_\ell(x_{f_1(k+1)f_2(k+1)})|\le C|\triangle|^{-1}N^{-1/2}$
uniformly over $1\le k\le N$ and $1\le\ell\le p$, while the Lipschitz continuity in Assumption (A2) ensures that
$|\sigma(x_{f_1(k)f_2(k)})-\sigma(x_{f_1(k+1)f_2(k+1)})|\le LN^{-1/2}\le C|\triangle|^{-1}N^{-1/2}$
uniformly over $1\le k\le N$.
Note that for $1\le\ell\le p$, both $B_\ell(\cdot)$ and $\sigma(\cdot)$ are bounded on $\Omega$; then
$|B_\ell(x_{f_1(k)f_2(k)})\sigma(x_{f_1(k)f_2(k)})-B_\ell(x_{f_1(k+1)f_2(k+1)})\sigma(x_{f_1(k+1)f_2(k+1)})|$
$\le|B_\ell(x_{f_1(k)f_2(k)})-B_\ell(x_{f_1(k+1)f_2(k+1)})|\,\sigma(x_{f_1(k)f_2(k)})+|\sigma(x_{f_1(k)f_2(k)})-\sigma(x_{f_1(k+1)f_2(k+1)})|\,B_\ell(x_{f_1(k+1)f_2(k+1)})\le C|\triangle|^{-1}N^{-1/2}$.
Noting the support set of $B_\ell(\cdot)$, one obtains
$\sum_{k=1}^{N-1}|B_\ell(x_{f_1(k)f_2(k)})\sigma(x_{f_1(k)f_2(k)})-B_\ell(x_{f_1(k+1)f_2(k+1)})\sigma(x_{f_1(k+1)f_2(k+1)})|\le CN|\triangle|^2\cdot|\triangle|^{-1}N^{-1/2}\le CN^{1/2}|\triangle|$.
Hence, for $1\le\ell\le p$, $1\le t\le n$, by summation by parts,
$N^{-1}|\sum_{k=1}^NB_\ell(x_{f_1(k)f_2(k)})\sigma(x_{f_1(k)f_2(k)})\{Z_{t,f_1(k)f_2(k),\varepsilon}-\varepsilon_{t,f_1(k)f_2(k)}\}|$
$=N^{-1}|\sum_{k=1}^{N-1}\{B_\ell(x_{f_1(k)f_2(k)})\sigma(x_{f_1(k)f_2(k)})-B_\ell(x_{f_1(k+1)f_2(k+1)})\sigma(x_{f_1(k+1)f_2(k+1)})\}\sum_{m=1}^k\{Z_{t,f_1(m)f_2(m),\varepsilon}-\varepsilon_{t,f_1(m)f_2(m)}\}$
$\quad+B_\ell(x_{f_1(N)f_2(N)})\sigma(x_{f_1(N)f_2(N)})\sum_{k=1}^N\{Z_{t,f_1(k)f_2(k),\varepsilon}-\varepsilon_{t,f_1(k)f_2(k)}\}|$
$\le\{\max_{1\le t\le n}\max_{1\le\tau\le N}N^{-1}|\sum_{k=1}^\tau(\varepsilon_{t,f_1(k)f_2(k)}-Z_{t,f_1(k)f_2(k),\varepsilon})|\}\times CN^{1/2}|\triangle|+CN^{-1}|\sum_{k=1}^N\{Z_{t,f_1(k)f_2(k),\varepsilon}-\varepsilon_{t,f_1(k)f_2(k)}\}|$
$=O_{a.s.}(N^{\beta_2-1/2}|\triangle|+N^{\beta_2-1})$.
Hence,
$\max_{1\le\ell\le p}(nN)^{-1}|\sum_{t=1}^n\sum_{k=1}^NB_\ell(x_{f_1(k)f_2(k)})\sigma(x_{f_1(k)f_2(k)})\{\varepsilon_{t,f_1(k)f_2(k)}-Z_{t,f_1(k)f_2(k),\varepsilon}\}|=O_{a.s.}(N^{\beta_2-1/2}|\triangle|+N^{\beta_2-1})$.
Apply the triangle inequality and the result in Lemma A.12; then
$\max_{1\le\ell\le p}(nN)^{-1}|\sum_{t=1}^n\sum_{k=1}^NB_\ell(x_{f_1(k)f_2(k)})\sigma(x_{f_1(k)f_2(k)})\varepsilon_{t,f_1(k)f_2(k)}|=O_{a.s.}(n^{-1/2}N^{-1/2}|\triangle|\log^{1/2}N+N^{\beta_2-1/2}|\triangle|+N^{\beta_2-1})$.
It is clear that $(nN)^{-1}\mathbf{X}^\top\sum_{t=1}^n\mathbf{e}_t=\{(nN)^{-1}\sum_{t=1}^n\sum_{i=1}^M\sum_{j=1}^{N_i}B_\ell(x_{ij})\sigma(x_{ij})\varepsilon_{t,ij}\}_{\ell=1}^p$, hence
$\|(nN)^{-1}\mathbf{X}^\top\sum_{t=1}^n\mathbf{e}_t\|_\infty=O_{a.s.}(n^{-1/2}N^{-1/2}|\triangle|\log^{1/2}N+N^{\beta_2-1/2}|\triangle|+N^{\beta_2-1})$.
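The summation-by-parts (Abel) identity used above, $\sum_{k=1}^Nc_kd_k=\sum_{k=1}^{N-1}(c_k-c_{k+1})S_k+c_NS_N$ with $S_k=\sum_{m=1}^kd_m$, can be confirmed numerically. A quick check with generic sequences (hypothetical data standing in for the weights $B_\ell\sigma$ and the increments $Z-\varepsilon$):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100
c = rng.standard_normal(N)   # weights, e.g. B_l(x_k) * sigma(x_k)
d = rng.standard_normal(N)   # increments, e.g. Z_k - eps_k
S = np.cumsum(d)             # partial sums S_k

direct = np.sum(c * d)
abel = np.sum((c[:-1] - c[1:]) * S[:-1]) + c[-1] * S[-1]
assert np.allclose(direct, abel)
```

This is why a uniform bound on the partial sums $S_k$, together with the total variation of the weights, controls the whole weighted sum.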
By recalling the definition of $\tilde e_t(x)$ and Lemma A.6, one obtains
$\sup_{x\in\Omega}|n^{-1}\sum_{t=1}^n\tilde e_t(x)|=\sup_{x\in\Omega}|n^{-1}N^{-1}\mathbf{B}(x)^\top\mathbf{V}\mathbf{X}^\top\sum_{t=1}^n\mathbf{e}_t|=O_{a.s.}(n^{-1/2}N^{-1/2}|\triangle|^{-1}\log^{1/2}N+N^{\beta_2-1/2}|\triangle|^{-1}+N^{\beta_2-1}|\triangle|^{-2})$.
The proof is completed.

B. Proof of theorems

B.1 Proof of Theorem 1

Under Lemma A.11, $\mathrm{Cov}(Z_{jk,\xi},Z_{j+h,k,\xi})=\sum_{m=0}^\infty a_{mk}a_{m+h,k}$, $1\le j\le n$, $1\le h\le n-j$. Then
$\mathrm{Var}(\bar Z_{\cdot k,\xi})=E\{n^{-1}\sum_{t=1}^nZ_{tk,\xi}\}^2=n^{-2}\{n+2E(\sum_{1\le t<j\le n}Z_{tk,\xi}Z_{jk,\xi})\}$
$=n^{-1}+2n^{-2}\{(n-1)\sum_{t=0}^\infty a_{tk}a_{t+1,k}+(n-2)\sum_{t=0}^\infty a_{tk}a_{t+2,k}+\cdots+[n-(n-1)]\sum_{t=0}^\infty a_{tk}a_{t+n-1,k}\}$,
where $\bar Z_{\cdot k,\xi}=n^{-1}\sum_{t=1}^nZ_{tk,\xi}$. We denote $\tilde\varphi_k(x)=\bar Z_{\cdot k,\xi}\phi_k(x)$, $k=1,\ldots,\infty$, and define $\varphi_n(x)=n^{1/2}G_\varphi(x,x)^{-1/2}\sum_{k=1}^\infty\tilde\varphi_k(x)$. For $x_1,\ldots,x_l\in\Omega$ and $b_1,\ldots,b_l\in\mathbb{R}$,
$\mathrm{Var}\{\sum_{i=1}^lb_i\varphi_n(x_i)\}=\mathrm{Var}\{n^{1/2}\sum_{i=1}^lb_iG_\varphi(x_i,x_i)^{-1/2}\sum_{k=1}^\infty\bar Z_{\cdot k,\xi}\phi_k(x_i)\}$
$=n\sum_{i=1}^lb_i^2G_\varphi(x_i,x_i)^{-1}\sum_{k=1}^\infty\phi_k^2(x_i)\mathrm{Var}(\bar Z_{\cdot k,\xi})+2n\sum_{1\le i<j\le l}b_ib_jG_\varphi(x_i,x_i)^{-1/2}G_\varphi(x_j,x_j)^{-1/2}\sum_{k=1}^\infty\phi_k(x_i)\phi_k(x_j)\mathrm{Var}(\bar Z_{\cdot k,\xi})$,
so the finite-dimensional distributions of $\varphi_n$ converge to those of the Gaussian process $\varphi$.  (S.10)

There exists $C_G>0$ such that $G_\varphi(x,x)\ge C_G$, $x\in\Omega$. Denote $\omega(\varphi_n,\delta)=\sup_{x,x'\in\Omega,|x-x'|\le\delta}|\varphi_n(x)-\varphi_n(x')|$. Given the partition $\Omega=\cup_{i=1}^kT_i$ with $|T_i|<\delta$, one has $\sup_{1\le i\le k}\sup_{x,x'\in T_i}|\varphi_n(x)-\varphi_n(x')|\le\omega(\varphi_n,\delta)$. The definition of $\omega(\varphi_n,\delta)$ implies that
$\omega(\varphi_n,\delta)=\sup_{x,x'\in\Omega,|x-x'|\le\delta}|\varphi_n(x)-\varphi_n(x')|\le\sup_{x,x'\in\Omega,|x-x'|\le\delta}n^{1/2}C_G^{-1/2}\sum_{k=1}^\infty|\phi_k(x)-\phi_k(x')||\bar Z_{\cdot k,\xi}|\le n^{1/2}C_G^{-1/2}\delta^\mu\sum_{k=1}^\infty\|\phi_k\|_{0,\mu}|\bar Z_{\cdot k,\xi}|$.
Since $E|\bar Z_{\cdot k,\xi}|=(2/\pi)^{1/2}\{\mathrm{Var}(\bar Z_{\cdot k,\xi})\}^{1/2}$, thus
$P[\omega(\varphi_n,\delta)\ge\epsilon]\le P\{n^{1/2}\delta^\mu C_G^{-1/2}\sum_{k=1}^\infty\|\phi_k\|_{0,\mu}|\bar Z_{\cdot k,\xi}|\ge\epsilon\}\le\epsilon^{-1}n^{1/2}\delta^\mu C_G^{-1/2}\sum_{k=1}^\infty\|\phi_k\|_{0,\mu}E|\bar Z_{\cdot k,\xi}|=\epsilon^{-1}(2/\pi)^{1/2}\delta^\mu C_G^{-1/2}\sum_{k=1}^\infty\|\phi_k\|_{0,\mu}\{n\mathrm{Var}(\bar Z_{\cdot k,\xi})\}^{1/2}$.
Note that $\sum_{k=1}^\infty\|\phi_k\|_{0,\mu}<+\infty$ under Assumption (A4) and $n\mathrm{Var}(\bar Z_{\cdot k,\xi})\to1+2\sum_{t=0}^\infty\sum_{t'=t+1}^\infty a_{tk}a_{t'k}$ as $n\to\infty$; it is clear that
$\lim_{\delta\to0}\limsup_{n\to\infty}\epsilon^{-1}(2/\pi)^{1/2}\delta^\mu C_G^{-1/2}\sum_{k=1}^\infty\|\phi_k\|_{0,\mu}\{n\mathrm{Var}(\bar Z_{\cdot k,\xi})\}^{1/2}=0$,
thus equation (S.2) is satisfied. According to (S.10) and Lemma A.4, $\varphi_n\xrightarrow{D}\varphi$.

Denote $\bar\xi_{\cdot k}=n^{-1}\sum_{t=1}^n\xi_{tk}$, and note that
$n^{1/2}\sup_{x\in\Omega}G_\varphi(x,x)^{-1/2}|\sum_{k=1}^\infty(\bar Z_{\cdot k,\xi}-\bar\xi_{\cdot k})\phi_k(x)|$
$\le n^{1/2}\sup_{x\in\Omega}G_\varphi(x,x)^{-1/2}\sum_{k=1}^{k_n}|\bar Z_{\cdot k,\xi}-\bar\xi_{\cdot k}||\phi_k(x)|+n^{1/2}\sup_{x\in\Omega}G_\varphi(x,x)^{-1/2}\sum_{k=k_n+1}^\infty|\bar Z_{\cdot k,\xi}-\bar\xi_{\cdot k}||\phi_k(x)|$.
According to (S.7),
$P\{\max_{1\le k\le k_n}|\bar\xi_{\cdot k}-\bar Z_{\cdot k,\xi}|>C_3n^{\beta_3-1}\}<C_4n^{-\gamma_3}$.
By the Borel–Cantelli lemma, one has
$\max_{1\le k\le k_n}|\bar\xi_{\cdot k}-\bar Z_{\cdot k,\xi}|=O_{a.s.}(n^{\beta_3-1})$.  (S.11)
By Assumption (A4), $\sum_{k=1}^\infty\|\phi_k\|_\infty<+\infty$, thus $\sum_{k=1}^{k_n}\|\phi_k\|_\infty<C$ for some constant $C$. Together with (S.11) and Assumption (A3), one obtains that
$n^{1/2}\sup_{x\in\Omega}G_\varphi(x,x)^{-1/2}\sum_{k=1}^{k_n}|\bar Z_{\cdot k,\xi}-\bar\xi_{\cdot k}||\phi_k(x)|\le n^{1/2}C_G^{-1/2}\sum_{k=1}^{k_n}\|\phi_k\|_{\infty,\Omega}\max_{1\le k\le k_n}|\bar\xi_{\cdot k}-\bar Z_{\cdot k,\xi}|\le n^{1/2}C_G^{-1/2}C\,O_{a.s.}(n^{\beta_3-1})=O_{a.s.}(n^{\beta_3-1/2})=o_{a.s.}(1)$.  (S.12)
Note that
$E\bar\xi_{\cdot k}^2=E\bar Z_{\cdot k,\xi}^2=n^{-1}+2n^{-2}\sum_{m=1}^{n-1}\sum_{t=0}^\infty(n-m)a_{tk}a_{t+m,k}$,
thus $E|\bar\xi_{\cdot k}|\le(E\bar\xi_{\cdot k}^2)^{1/2}=O(n^{-1/2})$ and likewise $E|\bar Z_{\cdot k,\xi}|=O(n^{-1/2})$. In addition, Assumption (A4) states that $\sum_{k=k_n+1}^\infty\|\phi_k\|_\infty=O(n^{-1/2})$; then
$E\{n^{1/2}\sup_{x\in\Omega}G_\varphi(x,x)^{-1/2}\sum_{k=k_n+1}^\infty|\bar Z_{\cdot k,\xi}-\bar\xi_{\cdot k}||\phi_k(x)|\}\le n^{1/2}C_G^{-1/2}\sum_{k=k_n+1}^\infty\|\phi_k\|_{\infty,\Omega}E|\bar Z_{\cdot k,\xi}-\bar\xi_{\cdot k}|\le n^{1/2}C_G^{-1/2}O(n^{-1/2})O(n^{-1/2})=O(n^{-1/2})$.  (S.13)
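The exact expansion of $\mathrm{Var}(\bar Z_{\cdot k,\xi})$ used in this proof can be verified for a finite moving average, where the infinite lag sums collapse to a single term. A minimal sketch with a hypothetical MA(1) sequence (weights $a_0=0.8$, $a_1=0.6$, so $a_0^2+a_1^2=1$ as in the paper's normalization):

```python
import numpy as np

a0, a1 = 0.8, 0.6   # MA(1) weights with a0^2 + a1^2 = 1 (illustrative values)
n = 25

# Exact Var of the sample mean from the covariance matrix of (Z_1, ..., Z_n):
# Cov(Z_s, Z_t) = 1 on the diagonal and a0*a1 at lag one, zero otherwise.
cov = np.zeros((n, n))
for s in range(n):
    for t in range(n):
        if s == t:
            cov[s, t] = a0**2 + a1**2
        elif abs(s - t) == 1:
            cov[s, t] = a0 * a1
var_mean = cov.sum() / n**2

# The displayed formula n^{-1} + 2 n^{-2} sum_m (n - m) sum_t a_t a_{t+m}:
# for an MA(1) only the lag-1 term sum_t a_t a_{t+1} = a0*a1 survives.
formula = 1 / n + 2 / n**2 * (n - 1) * a0 * a1
assert np.isclose(var_mean, formula)
```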
x∈Ω The proof is completed by applying Slutsky’s theorem. B.2 Proof of Theorem 2 e e > X) e −1 Xφ e k . According to the equation For any k = 1 . . . , ∞, let φek (x) = B(x)( X (S.1), ηbt (x) − ηt (x) = m(x) e − m(x) + ξet (x) − ξt (x) + eet (x) (S.9) INFERENCE FOR LONGITUDINAL IMAGING DATA By Lemma A.8, for any k = 1 . . . , ∞, there exist a constant Cd,r , independent of m and φk , such that km e − mk∞,Ω ≤ Cd,r |m|d+1,Ω,∞ |4|d+1 , φek − φk ≤ Cd,r |φk |d+1,Ω,∞ |4|d+1 , ∞,Ω which implies that ξet − ξt where Wt = ∞,Ω = ∞ X |ξtk | φek − φk k=1 ∞,Ω ≤ Cd,r Wt |4|d+1 , P∞ k=1 |ξtk | |φk |d+1,Ω,∞ , t = 1, . . . , n are identically distributed nonnega- tive random variables with r1 -th finite absolute moment under the Assumptions (A4) and (A5). Hence  P max Wi > (n log n) 2/r1 1≤t≤n  EWtr1 ≤n = EWtr1 n−1 (log n)−2 , (n log n)2  and the Borel-Canteill lemma ensures that max Wi = Oa.s (n log n)2/r1 . Together 1≤t≤n with equation (S.9) and Lemma A.13, one can obtain that sup |m(x) b − m(x)| = sup n x∈Ω x∈Ω −1 n X t=1 (b ηt (x) − ηt (x)) INFERENCE FOR LONGITUDINAL IMAGING DATA ≤ max sup ξet (x) − ξt (x) + sup |m(x) e − m(x)| + max sup n−1 1≤t≤n x∈Ω  1≤t≤n x∈Ω x∈Ω d+1 = Oa.s. |4| 2/r1 (n log n) d+1 + |4| +N β2 −1/2 −1 |4| eet (x) t=1 −1/2 +n n X N −1/2 −1 |4| log 1/2 N    = Oa.s. |4|d+1 (n log n)2/r1 + N β2 −1/2 |4|−1 = Op n−1/2 . B.3 Proof of Theorem 3 From Assumption (A5), one obtains that ρa < −2, satisfying the assumptions of Theorem 3 in Wu (2005). Together with Assumption (A50 ), one derives that there exists i.i.d. normal sequences Ztk,ξ , t = 1, . . . , n, k = 1, . . . , κn , such that max 1≤k≤κn n X ξtk − p λ∗k t=1 n X  Ztk,ξ = Oa.s. n1/4+α (log n)3/4 (log log n)1/2 . (S.13) t=1 Note that α < 1/4, thus (S.11) - (S.13) still hold. One can complete the poof following the same arguments in the proof of Theorem 1 and 2. B.4 Proof of Theorem 4 Define the B-approximate process as ξet (·) = E {ξt (·) |ζik , i ≥ t − B, k ≥ 1} = ∞ X k=1 ξetk,B φk (·) = ∞ B X X k=1 t0 =0 ! 
at0 ,k ζt−t0 ,k φk (·). INFERENCE FOR LONGITUDINAL IMAGING DATA Let δj (x) = B −1 PBj t=B(j−1)+1 {ηt (x) − m (x)} and δj (x) = B e −1 PBj t=B(j−1)+1 ξt (x). e Define the corresponding covariance functions Gϕ,B (x, x0 ) = ∞ X ( φk (x)φk (x0 ) 1 + 2 B X (1 − t/B) t=0 k=1 ∞ X ) atk at0 k , t0 =t+1 l X eϕ (x, x0 ) = B G δj (x) δj (x0 ) − δ (x) δ (x0 ) , l j=1 l o B X ne 0 0 0 e e e e δj (x) δj (x ) − δ (x) δ (x ) , Gϕ,B (x, x ) = l j=1 P in which Gϕ,B (x, x0 ) = BCov(δej (x), δej (x0 )), δ (x) = l−1 lj=1 δj (x) and δe (x) = l−1 Pl 0 0 0 e e e j=1 δj (x), while Gϕ (x, x ) and Gϕ,B (x, x ) are block estimators of Gϕ (x, x ) by infeasible trajectories and B-approximate trajectories respectively. bϕ (x, x0 ) into the following We decompose the difference between Gϕ (x, x0 ) and G four terms: bϕ (x, x0 ) sup Gϕ (x, x0 ) − G x,x0 ∈Ω ≤ bϕ (x, x0 ) − G eϕ (x, x0 ) + sup G eϕ (x, x0 ) − G eϕ,B (x, x0 ) sup G x,x0 ∈Ω x,x0 ∈Ω eϕ,B (x, x0 ) − Gϕ,B (x, x0 ) + sup |Gϕ,B (x, x0 ) − Gϕ (x, x0 )| . + sup G x,x0 ∈Ω x,x0 ∈Ω INFERENCE FOR LONGITUDINAL IMAGING DATA Lemma A.14. Under Assumptions (A1)–(A6), there exists   4/r1 2/r1 0 0 d+1 β2 −1/2 −1 b e sup Gϕ (x, x ) − Gϕ (x, x ) = Op B (n log n) |4| + B (n log n) N |4| . x,x0 ∈Ω Proof. Note that  bϕ (x, x0 ) − G eϕ (x, x0 ) ≤ 2B max kδj k + δbj sup G ∞ 1≤j≤l x,x0 ∈Ω  ∞ max δj − δbj 1≤j≤j hence max δj − δbj 1≤j≤l max δbj 1≤j≤l ∞ ∞ = max kηt − ηbt k∞ + Op (n−1/2 ), 1≤t≤n ≤ max kδj k∞ + max δj − δbj 1≤j≤l 1≤j≤l ∞ . Similar with the proof of Theorem 2, one can get max kδj k∞ ≤ max kξt k∞ = Op ((n log n)2/r1 ). 1≤j≤l 1≤t≤n The proof is completed. Lemma A.15. Under Assumptions (A1)–(A6), one has   2/r1 0 0 ρa +3/2 e e sup Gϕ (x, x ) − Gϕ,B (x, x ) = Op B l log l (n log n) . x,x0 ∈Ω ∞ , INFERENCE FOR LONGITUDINAL IMAGING DATA Proof. Following the analogous steps of the previous lemma, one has eϕ (x, x0 ) − G eϕ,B (x, x0 ) sup G ≤ 2B max kδj k∞ max δj − δej 1≤j≤l x,x0 ∈Ω 1≤j≤l ∞ . 
Note that 2 E δj − δej ∞ ≤ ≤ ∞ X E k=1 ∞ X ∞ X 2 at0 k ζt−t0 ,k kφk k2∞ t0 =B+1 ∞ X (B + 1)2ρa kφk k2∞ k=1 t0 =B+1 2ρa +1 = O(B ), thus  P max δj − δej 1≤l≤j ∞ >B ρa +1/2  l log l Borel-Cantelli lemma leads to that max1≤l≤j δj − δej ≤ ∞ 1 . l log2 l  = O B ρa +1/2 l log l . Com- bining with max1≤j≤l kδj k∞ = Op ((n log n)2/r1 ), the lemma holds consequently. Lemma A.16. Under Assumptions (A1)–(A6), one has eϕ,B (x, x0 ) − Gϕ,B (x, x0 ) = Op (Bl−1/2 ). sup G x,x0 ∈Ω INFERENCE FOR LONGITUDINAL IMAGING DATA Proof. Denote δjk = B −1 0 PBj t=B(j−1)+1 0 eϕ,B (x, x ) − Gϕ,B (x, x ) G PB t0 =0 at0 ,k ζt−t0 ,k , then ≤ ∞  X  e Bδ·,kk − λk,B φk (x)φk (x0 ) k=1 +2 ∞ X Bδ·,kk0 φk (x)φk (x0 ) , k 1, it follows that  2 ek,B , E (Bδ·,kk ) = E Bδjk =λ ( ) l l  2 X X 4 2 2 ek,B e2 E Bδ·,kk − λ = l−2 E B 2 δjk +2 B 2 δik δjk − λ k,B j=1 i
