Rolf Nevanlinna Institute, Univ. of Helsinki
Spatial statistics, fall 2003

Math exercises VI

To get the credit, hand in your solutions by 16.12.

1. Let $\{U(s),\ s \in \mathbb{R}\}$ be a stationary spatial process with covariogram $C$, let $s_i$, $i = 1, 2, \ldots, n$, be locations in $\mathbb{R}$ and $a_i$, $i = 1, 2, \ldots, n$, real numbers. Prove that (p. 238)

\[
\operatorname{Var}\Bigl(\sum_{i=1}^{n} a_i U(s_i)\Bigr) \;=\; \sum_{j=1}^{n} \sum_{k=1}^{n} a_j a_k\, C(s_j - s_k). \tag{1}
\]

Basic results (i) and (ii) given for Math ex. I may be useful here too.
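
As a numerical sanity check of identity (1) (an illustration only, not part of the required proof), one can compare the two sides under an assumed covariogram. The exponential covariogram, the locations and the coefficients below are arbitrary choices for the sketch.

import numpy as np

# Assumed covariogram for the sketch: C(h) = exp(-|h|), sill 1.
def C(h):
    return np.exp(-np.abs(h))

s = np.array([0.0, 0.4, 1.1, 2.5])     # locations s_1, ..., s_n
a = np.array([1.0, -2.0, 0.5, 3.0])    # coefficients a_1, ..., a_n
n = len(s)

# Right-hand side of (1): the double sum.
rhs = sum(a[j] * a[k] * C(s[j] - s[k]) for j in range(n) for k in range(n))

# Left-hand side: Monte Carlo estimate of Var(sum_i a_i U(s_i)) for a
# zero-mean Gaussian process with covariance matrix [C(s_j - s_k)].
Sigma = C(s[:, None] - s[None, :])
rng = np.random.default_rng(0)
U = rng.multivariate_normal(np.zeros(n), Sigma, size=200_000)
lhs = np.var(U @ a)

print(rhs, lhs)   # the two values agree up to simulation error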

2. Let $h_{.95}$ be the practical range (p. 248) of a theoretical variogram $\gamma(h; \theta, \sigma^2)$ determined by a correlogram $\rho(h; \theta)$ and sill $\sigma^2$ as on p. 242 (no nugget effect). Prove that $\rho(h_{.95}; \theta) = 0.05$.

3. Show that, for the spherical model, $h_{.95} = \theta r$, where $r$ is one solution of $x^3 - 3x + 1.9 = 0$.

Let us then turn to spatial regression (sec. 4.5). To understand why the GLS estimator on p. 276 looks like it does, consider multivariate Gaussian responses $y_i = Y(s_i)$, $i = 1, 2, \ldots, n$, whose distribution is determined by the expected values $E(Y(s_i)) = x^T(s_i)\beta$ and covariance matrix $\Sigma$:

\[
f(y) = \kappa \exp\{-(y - X\beta)^T \Sigma^{-1} (y - X\beta)/2\},
\]

where $\kappa$ does not depend on $\beta$. Maximising $f$ with respect to $\beta$ then gives the maximum likelihood estimator. Clearly, the maximum of $f$ is the minimum of

\[
g(\beta) = (y - X\beta)^T \Sigma^{-1} (y - X\beta),
\]

which is a generalised sum of squares of the residuals (if $\Sigma$ is the identity matrix, then $g$ reduces to the ordinary sum of squares considered in Math ex. V).

4. Find the derivative vector $g'(\beta)$ of $g(\beta)$ (as in Math ex. V).

5. Show that $g'(\hat{\beta}) = \mathbf{0}$, where $\hat{\beta} = (X^T \Sigma^{-1} X)^{-1} X^T \Sigma^{-1} y$ is the GLS estimator and $\mathbf{0}$ is a vector of zeroes.
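
As a numerical illustration of exercises 4 and 5 (not a proof), the sketch below computes the GLS estimator for an assumed design matrix, covariance matrix and simulated data, and checks by finite differences that the derivative of the generalised sum of squares vanishes at $\hat{\beta}$; all the concrete numbers are arbitrary choices.

import numpy as np

rng = np.random.default_rng(1)

# Assumed example: n = 5 responses, 2 regression coefficients.
X = np.column_stack([np.ones(5), np.arange(5.0)])                      # design matrix
Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(5), np.arange(5)))   # a valid covariance matrix
y = rng.multivariate_normal(X @ np.array([1.0, 2.0]), Sigma)           # simulated responses

Sigma_inv = np.linalg.inv(Sigma)

# GLS estimator of exercise 5.
beta_hat = np.linalg.solve(X.T @ Sigma_inv @ X, X.T @ Sigma_inv @ y)

# Generalised sum of squares g(beta) and its numerical gradient at beta_hat.
def g(beta):
    r = y - X @ beta
    return r @ Sigma_inv @ r

eps = 1e-6
grad = np.array([(g(beta_hat + eps * e) - g(beta_hat - eps * e)) / (2 * eps)
                 for e in np.eye(2)])
print(beta_hat)
print(grad)   # numerically a vector of zeroes, as exercise 5 claims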

6. Let us then look at the special case $X = \mathbf{1} = (1, 1, \ldots, 1)^T$ again. Prove that in this case

\[
\hat{\beta} \;=\; \frac{\sum_{i=1}^{n} w_i y_i}{\sum_{i=1}^{n} w_i},
\]

where $w_i$ is the sum of all elements in the $i$th column of $\Sigma^{-1}$.
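
Continuing the same numerical sketch (same assumed covariance matrix, now with $X = \mathbf{1}$), the following lines compare the GLS estimator with the weighted average of exercise 6.

import numpy as np

rng = np.random.default_rng(1)
n = 5
Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
y = rng.multivariate_normal(np.full(n, 3.0), Sigma)   # simulated responses with constant mean

Sigma_inv = np.linalg.inv(Sigma)
ones = np.ones(n)

# GLS estimator with X = 1 (a single intercept parameter).
beta_hat = (ones @ Sigma_inv @ y) / (ones @ Sigma_inv @ ones)

# Weighted average of exercise 6: w_i = sum of the ith column of Sigma^{-1}.
w = Sigma_inv.sum(axis=0)
weighted_avg = (w @ y) / w.sum()

print(beta_hat, weighted_avg)   # the two values coincide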

Finally, a few exercises to derive the simple kriging formulae (p. 293-294). Let $U$ be a zero-mean spatial process on $\mathbb{R}$ with covariogram $C$. For a linear predictor

\[
\hat{U}(s) \;=\; \sum_{i=1}^{n} \lambda_i(s)\, U(s_i)
\]

of $U(s)$, the mean squared error

\[
\mathrm{MSE}(\hat{U}(s)) \;=\; E[\{U(s) - \hat{U}(s)\}^2]
\]

can be expanded into the sum

\[
\mathrm{MSE}(\hat{U}(s)) \;=\; E[\{U(s)\}^2] - 2\,E\{\hat{U}(s)\,U(s)\} + E[\{\hat{U}(s)\}^2],
\]

where $E[\{U(s)\}^2] = \operatorname{Var}\{U(s)\}$, $E\{\hat{U}(s)\,U(s)\} = \operatorname{Cov}\{\hat{U}(s), U(s)\}$ and $E[\{\hat{U}(s)\}^2] = \operatorname{Var}\{\hat{U}(s)\}$, since $E\{U(s)\} = 0$ for all $s \in \mathbb{R}$ (compare to Math ex. II).

7. Using (1), show that

\[
\operatorname{Var}\{\hat{U}(s)\} \;=\; \lambda^T \Sigma \lambda, \tag{2}
\]

where $\lambda = (\lambda_1(s), \lambda_2(s), \ldots, \lambda_n(s))^T$ and $\Sigma = [C(s_i - s_j)]_{i,j=1}^{n}$.
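
A quick numerical check of exercise 7 (illustration only, with the same assumed covariogram and locations as above and arbitrary weights): the double sum in (1) with $a_i = \lambda_i(s)$ equals the quadratic form $\lambda^T \Sigma \lambda$.

import numpy as np

def C(h):
    return np.exp(-np.abs(h))                 # assumed covariogram

s = np.array([0.0, 0.4, 1.1, 2.5])            # locations s_1, ..., s_n
lam = np.array([0.3, 0.2, 0.4, 0.1])          # arbitrary weights lambda_i(s)
n = len(s)

Sigma = C(s[:, None] - s[None, :])            # Sigma = [C(s_i - s_j)]

double_sum = sum(lam[j] * lam[k] * C(s[j] - s[k])
                 for j in range(n) for k in range(n))
quad_form = lam @ Sigma @ lam

print(double_sum, quad_form)   # identical up to floating point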

8. Prove that

\[
\operatorname{Cov}\{\hat{U}(s), U(s)\} \;=\; \lambda^T c(s), \tag{3}
\]

where $c(s) = (C(s_1 - s), C(s_2 - s), \ldots, C(s_n - s))^T$ (see Math ex. II/5).

Results (2) and (3) allow us to write $\mathrm{MSE}(\hat{U}(s))$, as a function of $\lambda$, as follows:

\[
g(\lambda) \;=\; C(0) - 2\,\lambda^T c(s) + \lambda^T \Sigma \lambda.
\]

9. What is the derivative $g'(\lambda)$ of $g(\lambda)$?

10. Show that $g'(\lambda) = \mathbf{0}$, when $\lambda = \Sigma^{-1} c(s)$.

11. Derive a simplified expression for $g(\lambda)$, when $\lambda = \Sigma^{-1} c(s)$.
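
To close, a numerical sketch related to exercises 10 and 11 (again only an illustration under the same assumed covariogram and locations, with a hypothetical prediction location $s$): it forms the weights $\lambda = \Sigma^{-1} c(s)$, checks by finite differences that the derivative of $g$ vanishes there, and checks that random perturbations of the weights do not decrease $g$.

import numpy as np

def C(h):
    return np.exp(-np.abs(h))                 # assumed covariogram

s = np.array([0.0, 0.4, 1.1, 2.5])            # data locations s_1, ..., s_n
s0 = 0.7                                      # hypothetical prediction location s
Sigma = C(s[:, None] - s[None, :])            # [C(s_i - s_j)]
c_vec = C(s - s0)                             # c(s) = (C(s_i - s))_i

def g(lam):
    # g(lambda) = C(0) - 2 lambda^T c(s) + lambda^T Sigma lambda
    return C(0.0) - 2 * lam @ c_vec + lam @ Sigma @ lam

lam_opt = np.linalg.solve(Sigma, c_vec)       # lambda = Sigma^{-1} c(s)

eps = 1e-6
grad = np.array([(g(lam_opt + eps * e) - g(lam_opt - eps * e)) / (2 * eps)
                 for e in np.eye(len(s))])
print(grad)                                   # numerically a vector of zeroes

rng = np.random.default_rng(2)
perturbed = min(g(lam_opt + 0.1 * rng.standard_normal(len(s))) for _ in range(100))
print(g(lam_opt), perturbed)                  # g at lambda = Sigma^{-1} c(s) is the smaller value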
