Download as pdf or txt
Download as pdf or txt
You are on page 1of 47

Specification Testing for Panel Spatial Models

Monalisa Sen1 and Anil K. Bera2

Abstract
Specification of a model is one of the most fundamental problems in econometrics. In practice, specification tests are generally carried out in a piecemeal fashion, for example, testing
the presence of one-eect at a time ignoring the potential presence of other forms of misspecification. Many of the suggested tests in the literature require estimation of complex models
and even then those tests cannot take account of multiple forms of departures. Using Bera
and Yoon (1993) general test principle and a spatial panel model framework, we first propose an overall test for all possible misspecification. We derive adjusted Raos score (RS)
tests for random eect, serial correlation, spatial lag and spatial error, which can identify
the definite cause(s) of rejection of the basic model and thus aiding in the steps for model
revision. For empirical researchers, our suggested procedures provide simple strategies for
model specification search employing only ordinary least squares (OLS) residuals from standard linear model for spatial panel data. Through an extensive simulation study, we find
that the proposed tests have good finite sample properties both in terms of size and power.
Finally, to illustrate the usefulness of our procedures, we provide an empirical application of
our test strategy in the context of the convergence theory of incomes of dierent economies
which is a widely studied empirical problem in macro-economic growth theory. Our empirical illustration brings home problems in using and interpreting unadjusted tests and how
these problems are rectified by using our proposed adjusted tests.
Keywords: Raos Score, Specification Test, Spatial Models, Panel Models, Weight Matrix
1

Universite catholique de Louvain, CORE, Voie du Roman Pays-34, Louvain-la-Neuve, B-1348. Email:
monalisa.sen@uclouvain.be
2
Department of Economics, University of Illinois at Urbana-Champaign, 225E David Kinley Hall, 1407
W. Gregory Drive, Urbana, IL 61801. Email: abera@illinois.edu
3
An earlier version of this paper was presented at the North America Econometric Society Summer
Meeting (2011) and Vth World Conference of Spatial Econometrics Association (2011). We are grateful to
the participants at these conferences for comments and suggestions; though the remaining shortcomings are
solely ours.
Preprint submitted to Elsevier

April 1, 2014

1. INTRODUCTION
Econometricians interest on problems that arise when the assumed model (used in constructing specification test) deviates from the data generating process (DGP) goes a long way
back. As emphasized by Haavelmo (1944), in testing any economic relations, specification of
a set of possible alternatives, called the priori admissible hypothesis, 0 , is of fundamental
importance. Misspecification of the priori admissible hypotheses was termed as type-III error
by Bera and Yoon (1993), and Welsh (1996, p. 119) also pointed out of a similar concept in
the statistics literature. Broadly speaking, the alternative hypothesis may be misspecified in
three dierent ways. In the first one, what we shall call complete misspecification, the set
0

of assumed alternative hypothesis, 0 , and the DGP , say, are mutually exclusive. This
happens, for instance, if in the context of panel data model, one test for serial independence
when the DGP has random individual eects but no serial dependence. The second case,
underspecification occurs when the alternative is a subset of a more general model repre0

senting the DGP, i.e., 0 . This happens, for example, when both serial correlation and
individual eects are present, but are tested separately (one-at-a-time assuming absence of
other eect). The last case is overtesting which results from overspecification, i.e., when
0

. This can happen if a joint test for serial correlation and random individual eects

is conducted when only one eect is present in DGP. It can be expected that consequences of
overtesting may not be that serious (possibly will only lead to some loss of power), whereas
those of undertesting can lead to highly misleading results, seriously aecting both size and
power [see Bera and Jarque (1982) and Bera (2000)]. Using the asymptotic distributions of
standard Raos score (RS) test under local misspecification, Bera and Yoon (1993) suggested
an adjusted RS test that is robust under misspecification and asymptotically equivalent to
the optimal Neymans C() test. As we will discuss, an attractive feature of this approach
is that the adjusted test is based on the joint null hypothesis of no misspecification, thereby
requiring estimation of the model in its simplest form. A surprising additivity property
also enables us to calculate the adjusted tests quite eortlessly, and interpret the test results
intuitively.

The plan of the rest of the paper is as follows. In the next section we provide a brief
review of existing literature on testing panel and spatial models. We develop the spatial
panel model framework in Section 3 and present the log-likelihood function. Section 4
reviews the main results on the general theory of tests when the alternative is misspecified
and then formulates the new diagnostic tests for the spatial panel model taking account of
misspecification in multiple directions. To illustrate the usefulness of our proposed tests,
in Section 5, we demonstrate how our methodology can assist a practitioner to reformulate
his/her model using an empirical example. For that purpose, we use Heston, Summers
and Aten (2002) Penn World Table, that contains data on real income, investment and
population (among many other variables) for a large number of countries and the growthmodel of Ertur and Koch (2007). From our illustration, it is clear that use of unadjusted RS
tests can lead to misleading inference whereas the adjusted version lead in right direction(s)
of specification search. To investigate the finite sample performance of our suggested and
some available tests, we carry out an extensive simulation study. The results presented in
Section 6, demonstrate that both in terms of size and power, our proposed tests posseses
good small sample properties. Finally, we conclude in Section 7.
2. A Brief Survey of the Literature
The origins of specification testing for spatial models can be traced back to Moran (1950a,
1950b). Much later this area was further enriched by many researchers, for example, see Cli
and Ord (1972), Brandsma and Ketellapa (1979), Burridge (1980), Anselin (1980, 1988) and
Kelejian and Robinson (1992). Most of these papers focused on tests for specific alternative
hypothesis in the form of either spatial lag or spatial error dependence based on ordinary
least squares (OLS) residuals. Separate applications of one-directional tests when other or
both kinds of dependencies are present will lead to unreliable inference. It may be natural to consider a joint test for lag and error autocorrelations. Apart from the problem of
overtesting (when only one kind of dependence characterizes the DGP), the problem with
such a test is that one cannot identify the exact nature of spatial dependence once the joint
null hypothesis is rejected. Conditional tests can deal with this problem, i.e., to use test for
spatial error dependence after estimating a spatial lag model, and vice versa. This, how3

ever, requires maximum likelihood (ML) estimation, and the simplicity of test based on OLS
residuals is lost. Anselin, Bera, Florax and Yoon (1996) was possibly the first paper to study
systematically the consequences of testing one kind of dependence (lag or error) at a time.
Using the Bera and Yoon (1993) (henceforth BY) general approach, Anselin et al. (1996) also
developed OLS-based adjusted RS test for lag (error dependence) in the possible presence
of error (lag) dependence. Their Monte Carlo study demonstrated that the adjusted tests
are very capable of identifying the exact source(s) of dependence and they have very good
finite sample size and power properties. In a similar fashion, in context of panel data model,
Bera, Sosa-Escudero and Yoon (2001) showed that when one tests for either random eects
or serial correlation without taking account of the presence of other eect, the test rejects
the true null hypothesis far too often under the presence of the unconsidered parameter.
They found that the presence of serial correlation made the Bruesh and Pagan (1980) test
for random eects to have excessive size. Similar over rejection occur for the test of serial
correlation when the presence of random eect is ignored. Bera et al. (2001) developed sizerobust tests (for random eect and serial correlation) that assist in identifying the source(s)
of misspecification in specific direction(s).
Now if we combine the models considered in Anselin et al. (1996) and Bera et al. (2001),
we have the spatial panel model, potentially with four sources of departure (from the classical
regression model) coming from four extra parameters due to the spatial lag, spatial error,
random eect and (time series) serial correlation. The spatial panel model has been studied
extensively and has gained much popularity over time given the wide availability of the
longitudinal data. See, for instance, Elhorst (2003), Lee and Yu (2010) and Pesaran and
Tosseti (2011). In this paper, we investigate a number of strategies to test against multiple
form of misspecification in spatial panel model, and derive an overall test and a number of
adjusted tests that take the account of possible misspecification in multiple directions. For
empirical researchers our suggested procedures provide simple strategies to identify specific
direction(s) in which the basic model needs revision using only OLS residuals from the
standard linear model.
Many researchers have conducted conditional and marginal specification tests in spatial
panel models. Baltagi, Song and Koh (2003) proposed conditional Lagrange multiplier (LM)
4

tests, which test for random regional eects given the presence of spatial error correlation
and also, spatial error correlation given the presence of random regional eects. Baltagi et
al. (2007) adds another dimension to the correlation in the error structure, namely, serial
correlation in the remainder error term. Both these were based on the extension of spatial
error models (SEM). Baltagi and Liu (2008) developed similar LM and likelihood ratio (LR)
tests with spatial lag dependence and random individual eects in a panel regression model.
Their paper derives conditional LM tests for the absence of random individual eects without ignoring the possible presence of spatial lag dependence and vice-versa. Baltagi, Song
and Kwon (2009) considered a panel regression with heteroscedasticity as well as spatially
correlated disturbances. As in previous works, Baltagi et al. (2009) derived the conditional
LM and marginal LM tests. However the specification tests proposed in the above papers
require ML estimation of nuisance parameters and such a strategy will get more complex
as we add more parameters to generalize the model in multiple directions. Based on BY,
Montes-Rojas (2010) has proposed an adjusted RS test for autocorrelation in presence of
random eects and vice-versa, after estimating the spatial dependent parameter using ML
and instrumental variable estimation methods. The possibility of using only OLS estimator
to construct RS-type adjusted tests means wide applicability of the robustness approach that
we are proposing.
3. A Spatial Panel Model
We consider the following spatial panel model:

yit =

N
X

mij yjt + Xit + uit

(1)

j=1

uit = i + it , where i IID(0,


it =

N
X

2
)

(2)

wij jt + vit

(3)

j=1

vit = vit

+ eit , where eit IIDN (0,

2
e ),

(4)

for i = 1, 2, . . . , N ; t = 1, 2, . . . , T. Here yit is the observed value of the dependent variable for
ith location/unit at tth time, Xit denotes the observations on non-stochastic regressors and
eit is the regression disturbance. Spatial dependence is captured by the weight matrices M =
(mij ) and W = (wij ), i, j = 1, 2 . . . , n. The matrices M and W are row-standardized and
the diagonal elements are set to zero. The testing parameters of interest are random eects
(

2
),

serial correlation (), spatial lag dependence ( ) and spatial error dependence ( ). The

regression coefficient vector

and innovation variance

2
e

are the nuisance parameters.

In matrix form, the equations (1) - (4) can be written compactly as


y = (IT M )y + X + u,
where y is of dimension N T 1, X is N T K,

(5)

is k 1, u is N T 1, IT is an identity

matrix of dimension T T and denotes Kronecker product. Here X is assumed to be of


full column rank and its elements are bounded in absolute value. The disturbance term can
be expressed as
u = (T IN ) + (IT B 1 )v.
Here B = (IN

(6)

W ) and T is vector of ones of dimension T. Under this setup, the variance-

covariance matrix of u is given by


=

2 [JT

IN ] + [V (B 0 B) 1 ],

(7)

where JT is a matrix of ones of dimension T T , and V is the familiar T T variance


-covariance matrix for AR (1) process in equation (4),
V = E(v 0 v) = [
with

1
1

1
6
6 ..
V1 = 6 .
4
T

V1 ]

..
.
1

..
.

2
e IN

= V

T 1

...
7
.. 7
..
.
. 7,
5
... ...
1

2
e IN ,

(8)

and V =

1
V.
1 2 1

The log-likelihood function of the above model can be written as:


L=

NT
ln2
2

where A = (IN
1
ln || =
2
2

1
ln || + T ln |A|
2

X ]0 1 [(IT A)y

X ],

(9)

M ). Following Baltagi et. al (2007), we can write

N
ln(1
2
2

1
[(IT A)y
2

where d = + (T

2 ) +
1), =

1
ln |d2 (1
2
q

1+
1

)2 IN + (B 0 B) 1 | +

and

2
e

. Substituting

NT
ln
2
1
2

2
e

(T

1) ln |B|,

ln || in L , we obtain

NT
N
1
NT
ln2+ ln(1 2 )
ln |d2 (1 )2 IN +(B 0 B) 1 |
ln e2 +(T 1) ln |B|+T ln |A|
2
2
2
2
1
[(IT A)y X ]0 1 [(IT A)y X ]. (10)
2

L=

The above log-likelihood function will be used to derive the test statistics in the next
section.
4. Derivation of the Specification Tests
4.1. A Brief Review of Bera and Yoon Test Principle
Consider a general model represented by the log-likelihood L( , , ) where the parameters

and

are, respectively, (p 1), (r 1) and (s 1) vectors. Here we assume

that underlying density function satisfies the regularity conditions, as stated in Serfling
(1980), Lehmann and Romano (2005), for the MLE to have asymptotic Gaussian distribution. Suppose a researcher sets
function L1 ( , ) = L( , ,

0 ),

and tests H0 :

where

and

using the log-likelihood

are known. The RS statistic for test-

ing H0 in L1 ( , ) will be denoted by RS . Let us denote = ( 0 ,


= ( 0 ,

0
0,

0 0
0 )(p+r+s)1

, where is the ML estimator of

under

define the score vector and the information matrix, respectively, as

0
0

, 0 )0(p+r+s)1 and
and

0.

We

da () =

where a = ( 0 ,

@L()
@a

and

RS =

E[

6
1 @ 2 L()
6
]
=
6J
n @@0
4
J

J
J
J

7
7
J 7,
5
J

(11)

, 0 )0 and n is the sample size. If L1 ( , ) were the true model, then it is

well known that under H0 :

where J

J() =

=J
()

J J

1
0J
d ()
n

1 d ()
!
()

2
r (0),

(12)

2
r ( 1 ),

where the non-centrality

J .

Under local alternative H1 :

p , RS
n

parameter
1

1 ()

= 0J

(13)

Given this setup, i.e., under no misspecification, asymptotically the test will have the correct
size and locally optimal. Now suppose that the true log-likelihood function is L2 ( , ) so that
the considered alternative L1 ( , ) is (completely) misspecified. Using the local misspecification

, Davidson and MacKinnon (1987) and Saikkonen (1989) derived the

asymptotic distribution of RS under L2 ( , ) as RS !

2
r ( 2 ),

where the non-centrality

parameter
2

with J
2 , RS

= J

J J

2(

) = 0J

1
. J

.!

(14)

J . Owing to the presence of this non-centrality parameter

will reject the true null hypothesis H0 :

excessive size. Here the crucial term is J

more often, i.e., the test will have

which can be interpreted as partial covariance

between the score vectors d and d after eliminating the linear eect of d on d and d . If
J

= 0, then asymptotically the local presence of

has no eect on RS . BY suggested

a modification to RS to overcome this problem of over-rejection, so that the resulting test

is valid under the local presence of . The modified statistic is given by


RS =

[d ()
n

()J

1
.

()]
0 [J
()d

()

()J

1
.

()J

[d ()

.
.

()]

()J

1
1
.

()].

()d
(15)

This new test essentially adjusts the mean and variance of the standard RS statistics RS ,
and, under Ho :

RS !
while under H1 :

2
r (0),

(16)

2
r ( 3 ),

(17)

+ n ,
RS !

where
3

3 ()

= 0 (J

1
. J

).

(18)

Note the results in (16) and (17) are valid both under presence or absence of local
misspecification, since the asymptotic distribution of RS is unaected by the local departure
of

from

0.

BY shows that for local misspecification the adjusted test is asymptotically equivalent
to Neymans C() test and thus shares its optimal properties. Three observations are worth
noting regarding RS . First, RS requires estimation only under the joint null, namely
=

and

0.

That means, in most cases, as we will see later, we can conduct our

tests based on only OLS residuals. Given the full specification of the model L( , , ), it is
of course possible to derive RS test for

after estimating

(and ) by ML method,

which are generally referred to as conditional tests. However, ML estimation of


difficult in some instances. Second, when J

could be

= 0, which is a simple condition to check,

RS = RS and thus RS is an asymptotically valid test in the local presence of . Finally,


let RS
and

denote the joint RS test statistic for testing hypothesis of the form H0 :
=

using the alternative model L( , , ), then [for a proof see Bera, Bilias and

Yoon (2007), Bera, Montes-Rojas and Sosa-Escudero (2009)]


RS

= RS + RS = RS + RS ,
9

(19)

where RS and RS are, respectively, the counterparts of RS and RS for testing H0 :


0.

This is a very useful identity since it implies that a joint RS test for two parameter vectors

and

can be decomposed into sum of two orthogonal components: (i) the adjusted statistic

for one parameter vector and (ii) (unadjusted) marginal test statistic for the other. Since
many econometrics softwares provide the marginal (and sometime the joint) test statistics,
the adjusted versions can be obtained eortlessly.
Significance of RS
parameter vector

indicates some form of misspecification in the basic model with

only. However, the correct source(s) of departure can be identified only

by using the adjusted statistics RS and RS not the marginal ones (RS and RS ). This
testing strategy is close to the idea of Hillier (1991) in the sense that it partitions the overall
rejection region to obtain evidence about the specific direction(s) in which the basic model
needs revision. And it achieves that without estimating any of the nuisance parameters.
4.2. Score and Information Matrix
We are interested in testing H0 :

= 0 in the possible presence of the parameter

vector

. For the spatial panel model (1)-(4), the full parameter vector is given by =

( 0,

2
, ,

and

2
e,

, )0 . In context of our earlier notation = ( 0 ,

, 0 )0 ,

= ( 0,

could be any combinations of the parameters under test, namely (

2
, ,

2 0
e)

and

, ) . The

main advantage of using RS test principal is that we need estimation of 0 only under the
joint null H0a :

==

= = 0 i.e., of 00 = ( 0 ,

2
0
e , 0, 0, 0, 0) .

For simplicity we assume

the weight matrices W and M to be same. This is often realistic in practice, since there
may be good reasons to expect the structure of spatial dependence to be the same for the
dependent variable y and the innovation term . On the basis of the derivations given in the
Appendix, Section 1, the score functions and the information matrix J evaluated under H0a
i.e., restricted ML estimation of 0 with = ( 0 , e2 )0 are :
@L
=0
@

(20)

@L
=0
@ e2

(21)

@L
N T u0 (JT IN )
u
=
[
2
0
2

@
u u
2 e
10

1]

(22)

@L
N T u0 (G IN )
u
=
[
]
0
@
2
u u

(23)

@L
u0 [(IT W )y]
=
2
@
e

(24)

@L
N T u0 (IT (W + W 0 ))
u
=
[
],
@
2
u0 u

(25)

x is the OLS residual vector of dimension N T 1, e2 =

with u = y

u
0 u

NT

and G =

@V1
| a,
@ Ho

where G is bidiagonal matrix with bidiagonal elements all equal to one.


The information matrix J, defined in (11), under the joint null Hoa is
2

6
6
6
6
6
6
6
J(0 ) = 6
6
6
6
6
6
4

X0X
2

NT
2 e4

NT
2 e4

NT
2 e4

NT
2 e4
N (T 1)
2

X 0 (IT W )X
2

1 @2L
0 )
N T @ @

7
7
7
0
0
0
7
7
N (T 1)
7
0
0
2
7
e
7
7
N (T 1)
0
0
7
7
7
0
T tr(W 2 + W W 0 ) T tr(W 2 + W W 0 )7
5
2
0
0
T tr(W + W W )
H
(26)
e

where J() = E(

X 0 (IT W )X
2

and H = T tr(W 2 + W W 0 ) +

0 X 0 (IT W 0 )(IT W )X
.
2

The detailed

derivation and expression of each of the terms of the information matrix are relegated to the
appendix.
Apart from the RS statistic for full joint null hypothesis H0a , we propose four (modified)
test statistics for the following hypotheses:
I) Hob :

= 0 in presence of , , .

II) Hoc : = 0 in presence of


III) Hod :

= 0 in presence of

IV) Hoe : = 0 in presence of

2
,

, .

2
, , .
2
, ,

These four will guide us to identify the correct source(s) of departure(s) from H0a :
=

= = 0, when it is rejected. One can test various combinations of I) to IV) by testing

11

two/three parameters at a time under the null and compute additional ten test statistic
(as is done sometimes in practice). However, we would argue that is not necessary. Also
keeping the total number of tests to a minimum is beneficial to avoid the pre-testing problem
since in practice, researchers reformulate their model based on test outcomes. There is a
big advantage in considering test statistics that require estimation only under the joint null.
Given the full specification of the model in equation (1) - (4), it is of course possible to
derive conditional RS and likelihood ratio (LR) tests, for say,

= 0 in the presence of

, , as advocated in Baltagi et al. (2003), Baltagi et al. (2007) and Baltagi and Liu
2

(2008). However, that requires ML estimation of (, , ) (and also of


Let us take the case I, i.e., for Hob :
J

2
e

6= 0, where

= 0 in presence of , , , the term J

= (, , )0 . Thus the parameter

and

for LR test).

i.e.,

is not independent

of (, , ) and vice-versa and therefore, the marginal RS test statistic based on the raw
score d

, i.e. RS

for Hob :

= (, , )0 = (0, 0, 0) is not a valid test

= 0 assuming

under the presence of , , . Instead RS 2 which eliminates the eects of (, , ) without


estimating them, would be a more appropriate statistic, as discussed above. Therefore, the
focus of our strategy is to carry out the specification test for a general model with minimum
estimation. As we will see later from our Monte Carlo results, we lose very little in terms of
finite sample size and power. Though RS 2 does not require explicit estimation of (, , ),
eect of these parameters have been taken into account through the use of the eective
score d 2 . . Of course, given the current computing power, it is not that difficult to estimate
a complex model. However, it could be sometime hard to ensure the stability of many
parameter estimates. Also theoretically the stationarity regions of the parameter space have
not been fully worked out as discussed in Elhorst (2010).
4.3. Adjusted RS Tests
We now discuss the test statistics for each of the above hypotheses. From equation (15)
recall the form of locally size adjusted RS for H0 :
RS =

1
[d
n

For each of the following test,

1
.

d ]0 [J

= ( 0,

2 0
e) ,

12

= 0 in presence of parameter :

1
.

] 1 [d

is one parameter of (

1
.

d ].

2 , ,

, ) and

is

the rest three.


Let us consider the test statistics one-by-one. Detail derivations are in Appendix.
I) Hbo :

= 0 in presence of , , .

Here we are testing the significance of random location/ individual eect in presence of
time series autocorrelation of errors, spatial error dependence and spatial lag dependence.
In particular we have

=
J

= (J

= (, , )0 . Here

and

0 0) = [

N (T 1)
0 0],
2
e

which implies that the unadjusted RS is not a valid test under the local presence of , , .
However, note that only the partial covariance between d
for d

and d ; d

and d is nonzero, while it is zero

and d . This fact gets reflected in the unadjusted and adjusted version

of the test statistic for Hob :


RS

RS
where A =

u
0 (JT IN )
u
u
0 u

[d
J

2 2
. e

1 and B =

(d 2 )2
J

2
2 . e

J 1 d ] 2

J 1 J

N T A2
2(T 1)

=
2

N T 2 (A B)2
,
2(T 1)(T 2)

u
0 (GIN )
u
.
u
0 u

In the numerator of RS 2 , the eective (adjusted) score d 2 = [d


part of d

(27)

which is orthogonal to d . For other nuisance parameters

J 1 d ] is that

(spatial error lag)

and (spatial dependence lag) we do not need to make any adjustment since they do not
have any asymptotic eect on

as far as the testing is concerned. Similar interpretation

applies to the denominator of RS 2 which reflects the adjustment needed in variance part
for changing the raw score to eective score. Thus inference regarding

is aected only by

the presence of and is independent of the spatial aspects of the model. This separation
between time and space aspects panel spatial model is quite interesting, and we consider
it a plus point that our adjusted test take account of such information implied by model.
Moreover from equations (14) and (18) we know that overall power (in presence of local

13

misspecification) of RS

and RS 2 are guided by the non-centrality parameters.

2
)

2(

4
(J

2) =
2 J
e

2 .

N 2
(T
T

and
3(

2
)

4
(J

2 .

2
e

J 1 J 2 ) =

respectively. As it is clear from eqution (28),


only on

N
2

1)

(T
4
e

1)(T

2),

is unduely aected by , where as

(28)
3

depends

2
.

II) Hco : = 0 in presence of

2
,

, .

Here we test the significance of time-series autocorrelation in presence of random eect,


spatial lag and spatial error dependence, i.e.,
J

= (J

= and

0 0) = [

=(

2
,

, )0 . Here

N (T 1)
0 0].
2
e

Again this expression can be interpreted that the inference on will be aected only by the
presence of random eect, not by the presence of spatial dependence. The unadjusted and
adjusted test statistics for this case are:
(d )2
N T 2B2
RS =
=
J
4(T 1)2

RS

[d

J 2 J

2
2 . e

J 2 J

2 2
. e

d 2 ]2

J 2

N T 2 (B 2A
)2
T
.
4(T 1)(T T2 )

(29)

Here also the overall power of RS and RS in case of local misspecification can be obtained
respectively, from
2 ()

3 ()

= 2 (J

J 2 J

N
4
e

1
2 2
. e

(T

1)

J 2 ) = N (T

1)(1

2
).
T

For the rest of the two cases we provide only the algebraic expressions:
14

(30)

III) Hdo :
Here

2
, , .

= 0 in presence of

and

2
0
, , )

=(

and
J

where J

= (0 0 J ),

= T tr(W 2 + W W 0 ). The test statistics are


0

(W +W
[ N T u (IT 2
d2
u0 u

RS =
=

J
T2

RS =
where Z =

1
[
u0 E y
2 e2

[d

J J. 1 d ]2

J J. 1 J

0 ))
u

]2

ZZ 0
,
T[1 TJ. 1 ]

(31)

TJ. 1 (
u(E + E 0 )
u)], E = (IT W ) and T = T tr(W 2 + W W 0 ).

The non-centrality parameter of RS and RS are, respectively,


2(

) = 2 T

and
3(

)=

IV) Heo : = 0 in presence of


Here

= and

=(

2
, ,

J J. 1 J ) =

(J

2
, ,

T[1

TJ. 1 ].

(32)

)0 and
J

= (0 0 J ),

where J = T tr(W 2 + W W 0 ). The test statistics are

RS =

RS

[d
=
J.

d2
(
u0 E y)2
=
4 J.
J.
e

[ 2 12 [
u0 E y (
u(E + E 0 )
u)]]2
J J 1 d ] 2
e
=
.
J J 1 J
J.
T
15

(33)

Similarly, the non-centrality parameters of RS and RS , respectively, are


2 ( )

3 ( )

= 2 (J.

J. 1 T2

J J

T).

J ) = 2 (J.

(34)

Going back to the separation between non-spatial dimension of the model (

and ) and

its spatial part ( and ) we show that the following partial covariances are all zero:
J(

).

eect of d

= 0 i.e., the partial covariance between d and (d , d ) after eliminating the


and d is zero.

Similarly, J

2
(

).

= 0, J

2
).

= 0, J (

By decomposing the joint RS for H0a :


RS

= RS

+ RS

2
).

= 0 and J(

==

2
).

= 0.

= = 0, as in equation (19) we obtain

= RS 2 + RS + RS + RS = RS

As expected the omnibus test statistic RS

)(

+ RS + RS + RS . (35)

is not the sum of four marginal RS

statistics. The above result support our finding that the unadjusted RS over- rejects the
null as it fails to take into account of the eect of the relevant interaction eects within the
spatial and non-spatial parameters. From the above decomposition, one can trivially obtain
the adjusted RS tests from their unadjusted counterparts as follows:
RS 2 = RS

RS

RS = RS

RS

(37)

(36)

RS = RS

RS

(38)

RS = RS

RS

(39)

This provides a substantial computational simplicity for practitioners. One can easily
obtain the joint RS (two directional) and marginal RS (one directional) for the parameters
using any popular statistical package like STATA, R, Matlab, based on an OLS residuals, and
then obtain the adjusted test statistics as above. Thus our methodology is implementable

16

without any computational burden, unlike the LR and conditional LM tests.


5. An Application
We now present an application that illustrates the usefulness of our proposed tests. The
data consist of a sample of 91 countries over the period 1961-1995. These countries are
those of the Mankiw, Romer and Weil (1992) non-oil sample, for which Heston, Summers
and Aten (2002) Penn World Table (PWT version 6.1) provide data. We use a slight variation of Ertur and Koch (2007) growth model that explicitly takes account of technological
interdependence among countries and examines the impact of neighborhood eect. The
magnitude of physical capital externalities at steady state, which is not usually identified in
the literature, is estimated using a spatially augmented Solow model. We illustrate how a
practitioner, after estimating the simplest model would proceed to identify the dependent
structures and reformulate the model accordingly. We consider the following model:

ln(

Yit
)=
Lit

0+

1 ln sit +

2 ln(nit +g+ )+

N
X

wij ln(

j6=i

N
Yjt X
)+
wij (
Ljt j6=i

ln sjt +

ln(nj t+g+ ))+uit

uit = i + it
it =

N
X

wij jt + vit

j6=i

vit = vit

+ eit ,

where Y is real GDP, L is the number of workers, s is the saving rate, and n is the average
growth of the working-age population (ages 15 to 64). The coefficients , depreciation of
physical capital and g, the balanced growth rate are taken to be known at value 0.05 ( + g)
as is common in the literature [Mankiw-Romer-Weil (1992), Ertur and Koch (2007)]. Finally,
wij are the elements of weight matrix W, based on geographical distance. We estimate the
model using OLS method under our null hypothesis, i.e., when all the four eects are absent
and then compute the following test statistics: (i) joint test for all four departures, i.e.
random eect, serial correlation, spatial error lag and spatial lag, (RS
for random eect and serial correlation (RS

17

),

(ii) joint test

), (iii) joint test for spatial error lag and lag

dependence (RS ), (iv) the Breush-Pagan test for random eects (RS 2 ), (v) the modified
version (RS 2 ), (vi) the RS test of serial correlation test ( RS ), (vii) the corresponding
modified version ( RS ), (viii) the RS test of spatial error dependence ( RS ), (ix) proposed
modified version (RS ), (x) RS test of spatial lag dependence test ( RS ), and lastly, (xi) the
derived modified version (RS ). To identify specific departure(s) there is no need to consider
any other combination of tests due to the asymptotic independence discussed earlier. Here
we are reporting the unadjusted RS tests, mainly for comparison purpose, knowing well that
they are not informative. The test statistics are presented in Tables 1 and 2.
All of the test statistics are computed individually, and we verified the equalities in
equations (36) - (39). The omnibus statistic (RS
compared to

4
2

= 220.01) rejects the joint null when

critical value at any level. Later in Section 6, through our Monte Carlo case

study we will demonstrate the good finite sample size of RS


eect and serial correlation, RS
RS

The joint tests for random

= 189.45 and for spatial error dependence and spatial lag,

= 30.56 are highly significant after comparing them to

2
2

critical points. These joint

tests are however not informative about the specific direction(s) of the misspecification(s).
All the unadjusted statistics RS 2 , RS , RS and RS strongly reject the respective null
hypothesis. If an investigator takes these rejections at their face values, then s/he would
attempt to incorporate all four parameters into the final model. However, as we pointed out
these one-directional tests are not valid in presence of other possible eects. Significance of
each parameter can only be evaluated correctly by considering our modified tests. Three of
the modified versions RS 2 , RS and RS still reject the respective null at 1 % significance
level, when compared to

2
1

critical value, though it is interesting to see how the values of

the statistics reduce after modification. A somewhat striking result, that the value of RS
is 0.10 in contrast to that of RS which is 20.01. From our analytical results in the previous
section it is clear that 20.01 is not only for the spatial error dependence but also reflect the
presence of lag dependence. Thus the misspecification of the basic model can be thought
to come from the presence of random eects, serial correlation and spatial lag (rather than
spatial error dependence) of the real income of the countries.
This example seems to illustrate clearly the main points of the paper: the proposed
modified versions of RS tests are more informative than the unmodified counterparts. It
18

Table 1: Joint test statistics

RS 2
220.01

RS 2
189.45

RS
30.56

Table 2: Unadjusted and adjusted one-directional test statistics

RS 2
157.14

RS 2
183.03

RS
32.31

RS
6.36

RS
26.01

RS
0.10

RS
30.46

RS
4.55

is worth noting a few observations from our analytical and the empirical results. Since
RS

= RS

+ RS , joint test for serial correlation and random eect is independent

of the joint test for spatial lag and spatial error dependence. However, further additivity
fails, as we note: RS

6= RS

+ RS and RS

6= RS + RS . This is due to the non-zero

interaction eects between parameters, and thus unadjusted statistics are contaminated by
the presence of other parameters.
We have
RS + RS

RS + RS

RS
RS

= RS

= RS

RS 2 = RS
RS = RS

RS = 25.89
RS = 25.91

Thus, 25.89 can be viewed as a measure of the interaction between


25.91 is the measure for

and , and similarly

and . Also these interaction eects are equal to the correction

needed for the respective unadjusted tests.


It is important to emphasize again that the implementation of the modified tests is based
solely on OLS residuals and parameter estimates. Some currently available test strategies
relies on ML estimation of the general spatial panel model with all the parameters, and
then carrying out LR or conditional RS tests individually or jointly. However, we propose
asymptotically equivalent tests without estimating the complex model at all. In the next
section we demonstrate that though our suggested tests are valid only for large samples
and local misspecification, they perform quite well in finite samples and also for not-so-local
departures. We also show that a very little is lost in terms of size and power in using our

19

simple tests compared to the full-fledge computationally demanding tests.


6. Monte Carlo Results
To facilitate comparisons with existing results we follow a structure close to Baltagi,
Song, Jung and Koh (2007) and Baltagi and Liu (2008). The data were generated using the
model:

yit = +

N
X

wij yjt + Xit + uit

(40)

j=1

uit = i + it , where i IID(0,


it =

N
X

(41)

wij jt + vit

(42)

j=1

vit = vit
We set = 5 and

+ eit , where eit IIDN (0,

2
e ).

(43)

= 0.5. The independent variable Xit is generated using:


Xit = 0.4Xit

+ 'it ,

(44)

where ' U nif orm[ 0.5, 0.5] and Xi0 = 5 + 10'i0 . For weight matrix W , we consider
rook design. We fixed
2
, ,

and

2
e

= 20 and let =

2+ 2

. Values of all the four parameters

are varied over a range from 0 to 0.5. We have considered two dierent pairs

for (N, T ), namely (25, 7), (49, 7), (25, 12), (49, 12). For lack of space we report the results for
(25, 12), (49, 12). The results for other pairs are quite comparable to the reported ones and
are available on request. Each Monte Carlo experiment is consist of generating 1,000 samples
for each dierent parameter settings. Thus the maximum standard error of the estimates
q
0.5)
of the size and power would be (0.5(1
= 0.015. The parameters were estimated using
1000
OLS, and eleven test statistics, namely RS

, RS

, RS , RS 2 , RS 2 , RS , RS , RS ,

RS , RS and RS were computed. As discussed earlier, in practice we do not need to compute all these statistics; we do it here for comparative evaluation. The tables and graphs
are based on the nominal size of 0.05. In order to elaborate our results systematically, we
divided the results in two sections. In Section 6.1, we present the Monte Carlo results for
20

RS

, RS

, RS , RS 2 , RS 2 , RS and RS , i.e., the dierent test statistics for the auto-

correlation and individual random eects, both in presence and absence of spatial parameters
and , and in Section 6.2, the rest of the results are reported.
6.1. Results for tests relating to

and

We discuss the results for the following parameter settings: i)

= 0 and = 0 , i.e.,

when there is no spatial dependence; a case similar to Baltagi and Li (1995) and Bera et.al
(2001). The result pertaining to case (i) are reported in Tables 3 and 4.
ii)

= 0.3 and = 0.05 (i.e., in local presence of the spatial lag and error dependence).

iii)

= 0.05 and = 0.3 (i.e., when there is local presence of the spatial parameters).
In addition, Figures 1 to Figure 8 illustrate the size and power of our adjusted test

statistics RS 2 and RS .
Let us now consider the performances of the tests in terms of power and sizes of RS and
RS 2 . For N=25, T=12, the estimated rejection probabilities are reported in Table 3, and
for N=49, T=12 it is reported in Table 4. For both of these tables, the estimated rejection
probabilities are for data generated with

= 0 and = 0 . We also illustrate the results

of the other two cases [(ii) and (iii)] graphically from Figures 1- 8. Let us first concentrate
on RS 2 and RS

, which are designed to test H0 :

power for RS 2 vis-a-vis RS

= 0. When = 0, there is a loss of

, and this loss gets minimized as deviates more and more

from zero. While RS 2 does not sustain much loss in power when = 0, we notice that RS
reject H0 :

= 0 too often when

= 0, but 6= 0. This unwanted rejection probabilities

is due to presence of , as shown in equation (27). RS 2 also has some rejection probabilities
but the problem is less severe as can be seen from equation (27). We showed in Table 6 that
our results are similar as in Table 3 in Baltagi et al. (2007). One dimensional LM and LR
tests, as in Table 3 in Baltagi et. al (2007), are obtained after estimating the parameters
2

and

. However, RS in Table 6 illustrates that one can obtain very similar results, like

conditional LM or LR tests, using our adjusted RS test.


Moreover, even when we allow for case ii)

= 0.3 and = 0.05, case iii)

= 0.05 and

= 0.3; we find close results as in Table 3 and 4 above, i.e., RS 2 are robust only under the
local misspecification, i.e., for low values of . From Figures 1) and 2) we can see that RS 2
21

Table 3: Estimated Rejection Probabilities with =

RS

0
0
0
0
0
0
0
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.1
0.1
0.1
0.1
0.1
0.1
0.1
0.3
0.3
0.3
0.3
0.3
0.3
0.3
0.5
0.5
0.5
0.5
0.5
0.5
0.5

0
0.05
0.1
0.2
0.3
0.4
0.5
0
0.05
0.1
0.2
0.3
0.4
0.5
0
0.05
0.1
0.2
0.3
0.4
0.5
0
0.05
0.1
0.2
0.3
0.4
0.5
0
0.05
0.1
0.2
0.3
0.4
0.5

0.047
0.609
0.818
0.985
1.000
1.000
1.000
0.399
0.621
0.786
0.974
1.000
1.000
1.000
0.381
0.582
0.800
0.984
0.998
1.000
1.000
0.440
0.572
0.719
0.913
0.996
1.000
1.000
0.431
0.536
0.648
0.841
0.963
0.995
1.000

RS

0.069
0.858
0.943
0.999
1.000
1.000
1.000
0.681
0.839
0.942
0.998
1.000
1.000
1.000
0.686
0.838
0.944
0.996
1.000
1.000
1.000
0.704
0.822
0.903
0.978
1.000
1.000
1.000
0.719
0.796
0.859
0.961
0.991
1.000
1.000

= 0. Sample size: N = 25, T = 12

RS 2

RS

RS

0.054
0.068
0.130
0.580
0.938
0.974
0.999
0.547
0.614
0.682
0.836
0.925
0.972
0.993
0.543
0.605
0.706
0.829
0.930
0.969
0.994
0.789
0.856
0.985
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000

0.048
0.057
0.082
0.390
0.482
0.614
0.781
0.209
0.293
0.305
0.380
0.463
0.611
0.781
0.400
0.412
0.600
0.682
0.761
0.84
0.98
0.705
0.811
0.844
0.973
1.000
1.000
1.000
0.905
0.911
0.944
0.973
1.000
1.000
1.000

0.050
0.188
0.670
0.999
1.000
1.000
1.000
0.281
0.577
0.858
1.000
1.000
1.000
1.000
0.687
0.854
0.961
0.999
1.000
1.000
1.000
0.701
0.832
0.926
0.990
1.000
1.000
1.000
0.709
0.814
0.878
0.970
0.997
0.999
1.000

0.049
0.122
0.652
0.985
1.000
1.000
1.000
0.059
0.320
0.505
0.983
0.998
1.000
1.000
0.057
0.592
0.800
0.977
0.997
1.000
1.000
0.051
0.552
0.800
0.921
0.989
0.994
1.000
0.053
0.182
0.432
0.835
0.946
0.982
0.989

RS

is size robust for local misspecification of the parameters under both the cases ii)
and = 0.05, and iii)

= 0.3

= 0.05 and = 0.3. Comparing the power (Figures 3-4), it is clearly

evident that the power loss gets minimized for RS 2 as deviates from 0.
22

Table 4: Estimated Rejection Probabilities with =

RS

0
0
0
0
0
0
0
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.1
0.1
0.1
0.1
0.1
0.1
0.1
0.3
0.3
0.3
0.3
0.3
0.3
0.3
0.5
0.5
0.5
0.5
0.5
0.5
0.5

0
0.05
0.1
0.2
0.3
0.4
0.5
0
0.05
0.1
0.2
0.3
0.4
0.5
0
0.05
0.1
0.2
0.3
0.4
0.5
0
0.05
0.1
0.2
0.3
0.4
0.5
0
0.05
0.1
0.2
0.3
0.4
0.5

0.060
0.935
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000

RS

0.059
0.991
0.999
1.000
1.000
1.000
1.000
0.945
0.989
0.999
1.000
1.000
1.000
1.000
0.928
0.988
0.999
1.000
1.000
1.000
1.000
0.953
0.984
0.994
1.000
1.000
1.000
1.000
0.961
0.982
0.989
1.000
1.000
1.000
1.000

23

= 0. Sample size: N = 49, T = 12

RS 2

RS

RS

0.053
0.415
0.853
0.986
0.996
1.000
1.000
0.848
0.889
0.943
0.990
0.998
1.000
1.000
0.850
0.894
0.941
0.987
0.995
1.000
1.000
0.855
0.904
0.930
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000

0.047
0.054
0.056
0.187
0.725
0.856
0.949
0.350
0.420
0.508
0.617
0.696
0.847
0.943
0.519
0.533
0.537
0.579
0.713
0.838
0.943
0.760
0.812
0.942
0.990
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000

0.073
0.992
1.000
1.000
1.000
1.000
1.000
0.531
0.989
1.000
1.000
1.000
1.000
1.000
0.924
0.988
1.000
1.000
1.000
1.000
1.000
0.938
0.983
0.997
1.000
1.000
1.000
1.000
0.948
0.980
0.996
1.000
1.000
1.000
1.000

0.051
0.797
0.992
1.000
1.000
1.000
1.000
0.047
0.776
0.987
1.000
1.000
1.000
1.000
0.051
0.884
0.977
1.000
1.000
1.000
1.000
0.054
0.871
0.949
0.999
1.000
1.000
1.000
0.045
0.810
0.918
0.987
0.998
1.000
1.000

RS

Size Comparison of RS (mu) and RS (mu*)


,

and

2)

0.8

0.8

Estimated Size

Estimated Size

1)

0.6
0.4

RS(mu*)

0.2

RS(mu)

and

0.6
0.4

RS(mu*)

0.2

RS(mu)

0
0
0.05
0.1
0.15
0.2
0.25
0.3
0.35
0.4
0.45
0.5

0
0.05
0.1
0.15
0.2
0.25
0.3
0.35
0.4
0.45
0.5

Rho

Rho

Power Comparison
4)
1

0.8

0.8

0.6
RS(mu*)
0.4
RS(mu)

0.2

Estimated Power

Estimated Power

3)

0.6
RS(mu*)
0.4

RS(mu)

0.2
0

0
0

0.05

0.1
sigmamu

0.3

0.5

0.05

0.1
sigmamu

0.3

0.5

The Figures 1-4 confirm that the local presence of the spatial dimensions do not aect
RS 2 drastically, which confirms our mathematical proof. The features of RS 2 is more or
less similar when

= 0 and = 0 vis-a-vis their local departures from 0. It means that the

inference of the parameter

does not depend on the local presence of spatial parameters

and .
In similar way, we can explain the size and power of RS using Table 3 and 4 and Figures
5 - 8. From TableL 3 and 4 we note that when

= 0 then RS has better power than

RS . However, unlike RS 2 , the power of RS is much closer to RS . The real benefit of


RS is noticed when = 0 but > 0; the performance of RS is remarkable. For the case
= 0, = 0 , for N=25, T=12 and N=49, T=12 it is evident from the Tables 3 and Table 4.
Even when there is local presence of the parameters

and , the size of RS is significantly

better than RS when > 0. In other words, RS is performing much better than what it
is expected to perform; i.e. not rejecting = 0 when is indeed zero even for large values

24

Size Comparison of RS (rho) and RS (rho*)


and

6)

0.8

0.8
Estimated Size

Estimated Size

5)

0.6
RS(rho*)

0.4

RS(rho)
0.2

and

0.6
RS(rho*)

0.4

RS(rho)
0.2

0
0

0.05

0.1

0.3

0.5

0.05

sigmamu

0.1

0.3

0.5

sigmamu

Power Comparison
8)
1

0.8

0.8

Estimated Power

Estimated Power

7)

0.6
RS(rho*)

0.4

RS(rho)
0.2

0.6
RS(rho*)

0.4

RS(rho)

0.2
0

0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5

0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5

rho

rho

of . The non central parameter of RS , as shown in equation (29) is independent of any


nuisance parameters, which explains the robust performance of the test statistics. On the
other hand RS rejects the null too often even when is actually zero. This can be easily
observed from equation (29) which is a function of

2
.

The power comparison also gives a

very impressive result.


Results from the joint statistics RS
respective null hypotheses H0 :

= =

and RS

is informative when we accept the

= = 0 and H0 :

However, if the null is rejected we need to decompose RS

= = 0 respectively.

and RS

to extract exact

source(s) of misspecification. However, overall they have good power. These results are
consistent with Bera et al. (2001) and also Montes-Rojas (2010). However, we dier from
each of them in our basic model framework, which is more general than both Bera et al.
(2001) and Montes-Rojas (2010).
Table 5 and 6 further illustrates the usefulness of our proposed tests by comparing to the
conditional LM tests.

25

Table 5: Estimated rejection probabilities of RS , RS|

=0
RS

RS|

0
0.2
0.4
0.6
0.8
0.9

0.054
0.730
1.000
1.000
1.000
1.000

0.062
0.815
1.000
1.000
1.000
1.000

= 0.2
RS

RS|

0.055
0.790
1.000
1.000
1.000
1.000

0.051
0.816
0.990
1.000
1.000
1.000

with
2

= 0. Sample Size: N=25,T=12

= 0.5
RS

RS|

0.055
0.833
0.990
1.000
1.000
1.000

0.063
0.848
0.982
1.000
1.000
1.000

Table 5 illustrates that the Monte Carlo results for the conditional LM test, derived in
Baltagi et al. (2007, p. 8-9), i.e., one dimensional conditional test for C.2, H0i : = 0 in
2

presence of

and , which are very similar when

= 0 and

6= 0. This supports our

mathematical results further: the inference on is aected only by the presence of random
eect, not by the presence of spatial dependence. Infact the results in Table 5 is close to the
Monte Carlo results in Baltagi and Li (1995) which illustrated one dimensional conditional
test of in presence of
2
.

after estimating

only. Table 2 in Baltagi and Li (1995) report the power of RS|

For example, the Monte Carlo results for = 0.4 and

which is comparable to RS|


estimating

and

of Baltagi et al. (2007) where RS|

= 0 is 1.000,

is calculated after

.This illustrates our findings further that asymptotically the LM or RS

test statisticLs of time series parameters, i.e. and


i.e.

are independent of spatial parameters

and . Even in the finite sample the size and power do not dier much as illustrated

in our Table 5.We also conducted the Monte Carlo experiments for our adjusted RS test for
error autocorrelation, i.e., RS , assuming

6= 0 and = 0 and compared it with the one

dimensional LM test derived in Baltagi et al. (2007).


Here RS|

refers to the one dimensional conditional LM test as derived in Baltagi et

al. (2007). The rejection probabilities for RS|

are the ones reported in Baltagi et. al

(2007) in Table 3 for N=25, T=12. We computed RS for

6= 0 and > 0. As noted before

RS can be computed using simple OLS residuals, whereas computation of RS|


estimation of

and

2
.

requires

Results reported in Table 6 further supports our findings, i.e., on

one hand the performance of our adjusted RS is very similar to one directional conditional
LM test; on the other, our adjusted RS test is simple to compute than conditional LM test.
26

Table 6: Estimated rejection probabilities of RS , RS|

0
0.2
0.4
0.6
0.8
0
0.2
0.4
0.6
0.8
0

=0
RS

RS|

0
0
0
0
0
0.
0.2
0.2
0.2
0.2
0.4

0.054
0.056
0.058
0.059
0.055
0.730
0.790
0.865
0.802
0.775
1.000

0.051
0.061
0.047
0.051
0.041
0.803
0.785
0.842
0.826
0.813
1.000

with

= 0.2
RS

RS|

0.053
0.051
0.055
0.057
0.046
0.790
0.810
0.802
0.816
0.755
1.000

0.053
0.061
0.053
0.056
0.046
0.817
0.827
0.812
0.816
0.814
1.000

6= 0. Sample Size: N=25,T=12

= 0.5
RS

RS|

0.055
0.051
0.047
0.037
0.031
0.833
0.814
0.828
0.802
0.745
1.000

0.070
0.045
0.035
0.042
0.031
0.848
0.817
0.810
0.810
0.810
1.000

6.2. Results for tests relating to and


Let us now consider the parameters of spatial dimensions. To explore the performance
of these tests we have performed the Monte Carlo study for three cases: i) = = 0 (This
case is exactly similar to Anselin et. al. (1996), and our results are comparable to their
findings.)
ii) = 0.05 and = 0.3
iii) = 0.3 and = 0.05
The results of the last two cases are comparable to the Monte Carlo results of Baltagi et
al. (2007) and Baltagi and Liu (2008). Table 7 and Table 8 give the estimated rejection
probabilities of the tests RS , RS , RS , RS , RS

and RS , for the case = = 0 for

sample size (N,T):(25,12),(49,12) respectively. The RS has power against a spatial lag,
although less than the lag tests i.e., RS . The behavior of RS is interesting. It has no power
against lag dependence i.e. , as it should. For small values of , the rejection frequency of
RS is very close to its expected value of 0.05. In fact for (N, T =49,12) this size robustness
of RS is more evident as the rejection frequency is close to 0.05 even when = 0.5. In other
words, RS does its job very well, even more than what it is designed to do for. However
the rejection frequency of RS is large in presence of even when

is actually equal to zero.

This reiterates our result that RS is robust to local misspecification, while the test results

27

of RS can be very misleading in presence of such nuisance parameters. In terms of power,


RS is trailing just behind RS as can be clearly seen from the tables. From Table 7 and
Table 8, it is evident that RS is size robust for local misspecification of . For

> 0 and

= 0, RS has rejection probabilities higher than 0.05, but it is much less than RS . This
unwanted rejection probabilities of RS is due to the non-centrality term which depends on
. As mentioned before, RS is designed to be robust only under local misspecification, i.e.,
for low values of . From that point of view, it does a good job; the performances deteriorate
as

takes higher values. From the tables we also note that when > 0 an increase in

enhances the rejection probabilities of RS . This is due to the presence of

in the non-

centrality parameter of RS . But the non-centrality parameter of RS does not depend on


(Proofs regarding non centrality parameters of these tests can be found in Bera and Yoon
(1993)).This result is valid only asymptotically and for local departures of .
We further investigated the behavior of RS for the two other cases, i.e., in local presence
of the error autocorrelation and random eect:
ii) = 0.05 and = 0.3
iii) = 0.3 and = 0.05
The results are explained through the Figures 9 to 16. The size of RS is much better than
its unadjusted counterpart in local presence of all the three parameters , and . The power
of RS is slightly less than that of RS , given = 0 for both the above cases. The Figures
9-12, clearly show that the rejection probabilities are very close to 0.05 for RS for varying
from 0, 0.05, 0.1, 0.3 and 0.5. The rejection probability of RS increases with

as it should

be and can be explained by equation (31). On the other hand the rejection probability of
RS is always very high even when

= 0 and being away from zero. This can be explained

using the non centrality parameter in presence of nuisance parameters as shown in equation
(31). This re-iterates our earlier result as evident from Table 5 and 6, when both the random
region eect and error autocorrelation eect were absent i.e, = = 0. These experimental
results provide further support to our mathematical findings.
Finally, we discuss the experimental results of RS and RS in local presence of and
, i.e., for the following two cases: ii) = 0.05 and = 0.3 iii) = 0.3 and = 0.05 Figures
13-16 explains the results of the tests for the above two cases. The size of the test RS is
28

Table 7: Estimated Rejection Probabilities with = = 0. Sample size: N = 25, T = 12

0
0.05
0.1
0.2
0.3
0.4
0.5
0
0.05
0.1
0.2
0.3
0.4
0.5
0
0.05
0.1
0.2
0.3
0.4
0.5
0
0.05
0.1
0.2
0.3
0.4
0.5
0
0.05
0.1
0.2
0.3
0.4
0.5

RS

0
0
0
0
0
0
0
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.1
0.1
0.1
0.1
0.1
0.1
0.1
0.3
0.3
0.3
0.3
0.3
0.3
0.3
0.5
0.5
0.5
0.5
0.5
0.5
0.5

0.059
0.797
0.998
1.000
1.000
1.000
1.000
0.099
0.999
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000

RS

0.055
0.397
0.543
0.998
1.000
1.000
1.000
0.158
0.327
0.523
0.867
0.992
1.000
1.000
0.069
0.356
0.603
0.834
0.933
1.000
1.000
0.180
0.390
0.650
0.878
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000

29

RS

RS

RS

RS

0.051
0.231
0.378
0.596
0.865
0.910
0.996
0.258
0.482
0.695
1.000
1.000
1.000
1.000
0.683
0.893
0.999
1.000
1.000
1.000
1.000
0.399
0.487
0.698
1.000
1.000
1.000
1.000
0.432
0.790
0.876
0.998
1.000
1.000
1.000

0.056
0.184
0.288
0.329
0.587
0.745
0.889
0.059
0.373
0.567
0.967
0.996
0.999
1.000
0.055
0.275
0.309
0.876
0.998
1.000
1.000
0.082
0.386
0.597
0.999
1.000
1.000
1.000
0.048
0.543
0.699
0.878
1.000
1.000
1.000

0.051
0.774
0.896
0.999
1.000
1.000
1.000
0.386
0.595
0.799
0.995
1.000
1.000
1.000
0.790
0.899
0.900
1.000
1.000
1.000
1.000
0.856
0.980
0.999
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000

0.052
0.054
0.060
0.141
0.159
0.164
0.191
0.151
0.140
0.117
0.205
0.228
0.273
0.297
0.360
0.461
0.596
0.620
0.635
0.691
0.696
0.666
0.747
0.802
0.907
0.966
0.987
0.999
0.929
0.943
0.950
0.999
1.000
0.997
0.996

Table 8: Estimated Rejection Probabilities with = = 0. Sample size: N = 49, T = 12

0
0.05
0.1
0.2
0.3
0.4
0.5
0
0.05
0.1
0.2
0.3
0.4
0.5
0
0.05
0.1
0.2
0.3
0.4
0.5
0
0.05
0.1
0.2
0.3
0.4
0.5
0
0.05
0.1
0.2
0.3
0.4
0.5

RS

0
0
0
0
0
0
0
0.05
0.05
0.05
0.05
0.05
0.05
0.05
0.1
0.1
0.1
0.1
0.1
0.1
0.1
0.3
0.3
0.3
0.3
0.3
0.3
0.3
0.5
0.5
0.5
0.5
0.5
0.5
0.5

0.052
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000

RS

0.055
0.122
0.365
0.567
0.990
1.000
1.000
0.076
0.187
0.432
0.765
0.990
1.000
1.000
0.063
0.246
0.534
0.898
0.970
1.000
1.000
0.324
0.521
0.876
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000

30

RS

RS

RS

RS

0.052
0.261
0.597
0.789
0.866
0.976
0.999
0.053
0.384
0.557
0.896
0.956
0.990
1.000
0.050
0.487
0.569
0.789
0.887
0.980
1.000
0.049
0.542
0.715
0.987
1.000
1.000
1.000
0.045
0.532
0.723
0.998
1.000
1.000
1.000

0.076
0.396
0.678
0.823
0.935
0.999
1.000
0.279
0.594
0.689
0.967
0.999
1.000
1.000
0.367
0.698
0.798
0.899
0.989
1.000
1.000
0.494
0.799
0.870
0.999
1.000
1.000
1.000
0.546
0.787
0.886
1.000
1.000
1.000
1.000

0.051
0.053
0.056
0.087
0.131
0.159
0.173
0.150
0.178
0.284
0.308
0.369
0.496
0.598
0.585
0.663
0.763
0.920
0.986
0.999
1.000
0.773
0.835
0.890
0.968
0.994
1.000
1.000
0.834
0.873
0.919
0.962
0.984
0.998
1.000

0.062
0.374
0.596
0.999
1.000
1.000
1.000
0.350
0.560
0.739
0.995
1.000
1.000
1.000
0.750
0.846
0.997
1.000
1.000
1.000
1.000
0.998
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000

Size Comparison of RS (lamda) and RS (lamda*)


,

and

10)

0.8

0.8

0.6

RS(lamda*)

0.4

RS(lamda)

0.2

Estimated Size

Estimated Size

9)

and

0.6

RS(lamda*)

0.4

RS(lamda)

0.2
0

0.05

0.1

0.3

0.5

tau

0.05

0.1

0.3

0.5

tau

Power Comparison
11)

12)
1
Estimated Power

Estimated Power

1
0.8
0.6
0.4

RS(lamda*)

0.2

RS(lamda)

0.8
0.6
0.4

RS(lamda*)

0.2

RS(lamda)

0
0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5

0.5

0.4

0.45

0.3

0.35

0.2

0.25

0.1

0.15

0.05

lamda

lamda

The estimated size of RS_λ* remains robust to the local presence of τ, and it increases only to approximately 0.4 when τ = 0.5. In contrast, the size of RS_λ approaches 1 when τ = 0.5. These rejection probabilities can be explained by equations (33) and (34). Thus, although RS_λ* still suffers from some unwanted rejection probability (as discussed before), the problem is much less severe for RS_λ* than for RS_λ, even when the random-effect and serial-correlation parameters are present locally. The power of RS_λ* trails behind that of RS_λ, but the two become close to each other for larger values of λ. These results are along the same lines as what we found when those parameters were zero, i.e., the local presence of the random effect and of time-series error autocorrelation does not influence the inference based on these tests. This once again supports our analytical findings regarding these tests.
One important point to note is that these one-dimensional size-robust tests are more informative not only than their marginal counterparts but also than the joint tests: the four-dimensional joint RS test and the two-dimensional joint RS tests. The joint tests are optimal only when all four parameters are zero, and they fail to identify the exact source of misspecification. This is evident from Tables 3-4 and 7-8.


[13) and 14): Size comparison of RS(τ) and RS(τ*), estimated size plotted against λ (λ from 0 to 0.5). 15) and 16): Power comparison of RS(τ) and RS(τ*), estimated power plotted against τ (τ = 0.05, 0.1, 0.3, 0.5).]

In addition, as stated before, our conditional tests not only give intuitive results that can be explained both analytically and mathematically, but they are also easier to compute (they are all based on OLS estimates) than the one-dimensional conditional LM and LR tests. Moreover, one can easily derive our adjusted test statistics from the unadjusted ones (Eqs. 35-38), as shown in Section 4.
7. Conclusion
Based only on OLS estimation, in this paper we have proposed robust Rao's score (RS) tests for random effect, serial correlation, spatial error and spatial lag dependence in the context of a spatial panel model. The tests are robust in the sense that they are asymptotically valid in the (local) presence of nuisance parameters. Once the standard RS tests for each parameter are available, our robust tests require very little extra computation. Thus practitioners can identify specific direction(s) in which to reformulate the basic model without going through any complex estimation. Our empirical illustration in the context of the convergence theory of incomes of different countries demonstrates the usefulness of our proposed tests, in particular for identifying the exact form of spatial dependence. We have also investigated the finite-sample size and power properties of our proposed tests through an extensive Monte Carlo study and compared them with the performance of some of the available tests. Our tests perform very well in finite samples and compare favorably with other tests that require explicit estimation of nuisance parameters. Moreover, although our methodology is developed only for local misspecification, results from our simulation experiments show that in certain cases our tests perform quite well for non-local departures.


8. References
Anselin, L. (1988), Lagrange multiplier test diagnostics for spatial dependence and spatial heterogeneity, Geographical Analysis 20(1), 1-17.
Anselin, L. (1998a), Spatial Econometrics: Methods and Models, Kluwer Academic Publishers, Dordrecht.
Anselin, L. (2001), Rao's score test in spatial econometrics, Journal of Statistical Planning and Inference 97(1), 113-139.
Anselin, L. and A.K. Bera (1998), Spatial dependence in linear regression models with an introduction to spatial econometrics, Statistics: Textbooks and Monographs 155, 237-290.
Anselin, L., A.K. Bera, R. Florax and M.J. Yoon (1996), Simple diagnostic tests for spatial dependence, Regional Science and Urban Economics 26(1), 77-104.
Anselin, L. (1980), Estimation Methods for Spatial Autoregressive Structures, Regional Science Dissertation & Monograph Series 8, Program in Urban and Regional Studies, Cornell University.
Baltagi, B.H. and Q. Li (1995), Testing AR(1) against MA(1) disturbances in an error component model, Journal of Econometrics 68(1), 133-151.
Baltagi, B.H., S.H. Song and J.H. Kwon (2009), Testing for heteroskedasticity and spatial correlation in a random effects panel data model, Computational Statistics & Data Analysis 53(8), 2897-2922.
Baltagi, B.H. and L. Liu (2008), Testing for random effects and spatial lag dependence in panel data models, Statistics & Probability Letters 78(18), 3304-3306.
Baltagi, B.H., S.H. Song, B.C. Jung and W. Koh (2007), Testing for serial correlation, spatial autocorrelation and random effects using panel data, Journal of Econometrics 140(1), 5-51.
Baltagi, B.H., S.H. Song and W. Koh (2003), Testing panel data regression models with spatial error correlation, Journal of Econometrics 117(1), 123-150.
Bera, A.K., Y. Bilias and M.J. Yoon (2007), Adjustments of Rao's score test for distributional and local parametric misspecifications, Working paper, University of Illinois at Urbana-Champaign.
Bera, A.K. (2000), Hypothesis testing in the 20th century with a special reference to testing with misspecified models, Statistics: Textbooks and Monographs 161, 33-92.
Bera, A.K. and C.M. Jarque (1982), Model specification tests: A simultaneous approach, Journal of Econometrics 20(1), 59-82.
Bera, A.K., G. Montes-Rojas and W. Sosa-Escudero (2009), Testing under local misspecification and artificial regressions, Economics Letters 104(2), 66-68.
Bera, A.K. and M.J. Yoon (1993), Specification testing with locally misspecified alternatives, Econometric Theory 9(4), 649-658.
Bera, A.K., W. Sosa-Escudero and M. Yoon (2001), Tests for the error component model in the presence of local misspecification, Journal of Econometrics 101(1), 1-23.
Bera, A.K. and Y. Bilias (2001a), On some optimality properties of Fisher-Rao score function in testing and estimation, Communications in Statistics - Theory and Methods 30(8-9), 1533-1559.
Bera, A.K. and Y. Bilias (2001b), Rao's score, Neyman's C(alpha) and Silvey's LM tests: an essay on historical developments and some new results, Journal of Statistical Planning and Inference 97(1), 9-44.
Bera, A.K. and Y. Bilias (2002), The MM, ME, ML, EL, EF and GMM approaches to estimation: a synthesis, Journal of Econometrics 107(1-2), 51-86.
Brandsma, A.S. and R.H. Ketellapper (1979), Further evidence on alternative procedures for testing of spatial autocorrelation among regression disturbances, in: Exploratory and Explanatory Statistical Analysis of Spatial Data, pp. 113-136.
Breusch, T.S. and A.R. Pagan (1980), The Lagrange multiplier test and its applications to model specification in econometrics, The Review of Economic Studies 47(1), 239-253.
Burridge, P. (1980), On the Cliff-Ord test for spatial correlation, Journal of the Royal Statistical Society, Series B (Methodological), 107-108.
Cliff, A. and K. Ord (1972), Testing for spatial autocorrelation among regression residuals, Geographical Analysis 4(3), 267-284.
Coe, D.T. and E. Helpman (1995), International R&D spillovers, European Economic Review 39(5), 859-887.
Davidson, R. and J.G. MacKinnon (1987), Implicit alternatives and the local power of test statistics, Econometrica, 1305-1329.
Elhorst, J.P. (2003), Specification and estimation of spatial panel data models, International Regional Science Review 26(3), 244-268.
Elhorst, J.P. (2010), Dynamic panels with endogenous interaction effects when T is small, Regional Science and Urban Economics 40(5), 272-282.
Elhorst, J.P. (2005), Unconditional maximum likelihood estimation of linear and log-linear dynamic models for spatial panels, Geographical Analysis 37(1), 85-106.
Ertur, C. and W. Koch (2007), Growth, technological interdependence and spatial externalities: theory and evidence, Journal of Applied Econometrics 22(6), 1033-1062.
Haavelmo, T. (1944), The probability approach in econometrics, Econometrica 12.
Hillier, G.H. (1991), On multiple diagnostic procedures for the linear model, Journal of Econometrics 47(1), 47-66.
Kelejian, H.H. and D.P. Robinson (1992), Spatial autocorrelation: A new computationally simple test with an application to per capita county police expenditures, Regional Science and Urban Economics 22(3), 317-331.
Lee, L.F. (2002), Consistency and efficiency of least squares estimation for mixed regressive, spatial autoregressive models, Econometric Theory 18(2), 252-277.
Lee, L.F. (2004), Asymptotic distributions of quasi-maximum likelihood estimators for spatial autoregressive models, Econometrica 72, 1899-1925.
Lee, L.F. and J. Yu (2010), Estimation of spatial autoregressive panel data models with fixed effects, Journal of Econometrics 154(2), 165-185.
Mankiw, N.G., D. Romer and D.N. Weil (1992), A contribution to the empirics of economic growth, The Quarterly Journal of Economics 107(2), 407-437.
Montes-Rojas, G.V. (2010), Testing for random effects and serial correlation in spatial autoregressive models, Journal of Statistical Planning and Inference 140(4), 1013-1020.
Moran, P.A.P. (1950a), Some remarks on animal population dynamics, Biometrics 6(3), 250-258.
Moran, P.A.P. (1950b), Notes on continuous stochastic phenomena, Biometrika 37(1-2), 17-23.
Nerlove, M. (1999), Properties of alternative estimators of dynamic panel models: an empirical analysis of cross-country data for the study of economic growth, Cambridge University Press, Cambridge.
Pesaran, M.H. and E. Tosetti (2011), Large panels with common factors and spatial correlation, Journal of Econometrics 161(2), 182-202.
Saikkonen, P. (1989), Asymptotic relative efficiency of the classical test statistics under misspecification, Journal of Econometrics 42(3), 351-369.
Welsh, A.H. (1996), Aspects of Statistical Inference, John Wiley & Sons, New York.


9. Appendix
We consider the following dynamic panel spatial model, which combines all the different piecewise frameworks discussed in Section 2:

y_it = τ Σ_{j=1}^{N} m_ij y_jt + X_it β + u_it,                                    (45)
u_it = μ_i + ε_it,                                                                 (46)
ε_it = λ Σ_{j=1}^{N} w_ij ε_jt + v_it,                                             (47)
v_it = ρ v_{i,t−1} + e_it,   where e_it ~ IIDN(0, σ_e²),                            (48)

for i = 1, 2, ..., N; t = 1, 2, ..., T. Here y_it is the observation on the i-th individual at time t, X_it denotes the observations on the non-stochastic regressors, and u_it is the regression disturbance. Spatial dependence is captured by the weight matrices M = (m_ij) and W = (w_ij). In this framework we consider spatial lag dependence (τ), spatial error dependence (λ), serial correlation in the error (ρ) and the individual effect (μ_i). We consider a random-effects model, i.e., μ_i ~ IID(0, σ_μ²), as in Sen and Bera (2011). W and M are row-standardized weight matrices whose diagonal elements are zero, such that (I − τM) and (I − λW) are non-singular, where I is an identity matrix of dimension N. We assume that the model satisfies the regularity conditions given in Lee and Yu (2010).
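To make the data-generating process concrete, the following sketch simulates (45)-(48). It is only an illustration under the notation reconstructed above and is not the authors' code; it takes M = W, and the circulant weight matrix, parameter values and variable names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
N, T = 49, 12
tau, lam, rho = 0.2, 0.2, 0.3          # spatial lag, spatial error, serial correlation (assumed symbols)
sig_mu, sig_e = 1.0, 1.0               # std. dev. of the random effect and of e_it
beta = np.array([1.0, 0.5])

# Row-standardized weight matrix with zero diagonal (two nearest neighbours on a circle); M = W.
W = np.zeros((N, N))
for i in range(N):
    W[i, (i - 1) % N] = W[i, (i + 1) % N] = 0.5

A_inv = np.linalg.inv(np.eye(N) - tau * W)     # (I_N - tau*M)^(-1)
B_inv = np.linalg.inv(np.eye(N) - lam * W)     # (I_N - lam*W)^(-1)

X = np.column_stack([np.ones(N * T), rng.normal(size=N * T)])    # NT x 2, stacked by time period

mu = sig_mu * rng.normal(size=N)                                 # random individual effects, eq. (46)
v = np.zeros((T, N))
v[0] = sig_e / np.sqrt(1 - rho ** 2) * rng.normal(size=N)        # stationary start of the AR(1), eq. (48)
for t in range(1, T):
    v[t] = rho * v[t - 1] + sig_e * rng.normal(size=N)
eps = v @ B_inv.T                                                # eps_t = (I_N - lam*W)^(-1) v_t, eq. (47)
u = (mu[None, :] + eps).reshape(N * T)                           # u_it = mu_i + eps_it
y = np.kron(np.eye(T), A_inv) @ (X @ beta + u)                   # y_t = (I_N - tau*W)^(-1)(X_t beta + u_t), eq. (45)

Setting tau = lam = rho = sig_mu = 0 in this sketch reproduces the classical pooled regression that serves as the null model.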


In matrix form, equations (45)-(48) can be rewritten as

y = τ(I_T⊗M)y + Xβ + u,                                                            (49)

where y is of dimension NT×1, X is NT×K, β is K×1 and u is NT×1. X is assumed to be of full column rank and its elements are bounded in absolute value. The disturbance term can be expressed as

u = (ι_T⊗I_N)μ + (I_T⊗B^{-1})ν,                                                     (50)

where B = (I_N − λW), ι_T is a vector of ones of dimension T, I_T is an identity matrix of dimension T×T, and ⊗ denotes the Kronecker product. Under this setup, the variance-covariance matrix of u is given by

Ω = σ_μ² (J_T⊗I_N) + V⊗(B'B)^{-1},                                                  (51)

where J_T is a T×T matrix of ones and V is the familiar T×T variance-covariance matrix of the AR(1) process in equation (48), i.e., E(νν') = V⊗I_N with

V = [σ_e²/(1−ρ²)] V_1,   where V_1 is the T×T matrix with (s,t) element ρ^{|s−t|}:

V_1 = [ 1         ρ         ρ²    ...   ρ^{T−1}
        ρ         1         ρ     ...   ρ^{T−2}
        ...       ...       ...   ...   ...
        ρ^{T−1}   ρ^{T−2}   ...   ρ     1      ].                                   (52)
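The covariance structure in (50)-(52) is a direct Kronecker-product construction. The sketch below is a hedged illustration (assumed variable names, stacking by time period) of how Ω and V_1 can be built:

import numpy as np

def ar1_corr(T, rho):
    """V_1 in (52): the T x T matrix with (s, t) element rho**|s - t|."""
    idx = np.arange(T)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def omega(W, T, sig_mu2, sig_e2, rho, lam):
    """Omega in (51): sig_mu^2 (J_T kron I_N) + V kron (B'B)^(-1), with B = I_N - lam*W."""
    N = W.shape[0]
    J_T = np.ones((T, T))
    B = np.eye(N) - lam * W
    V = sig_e2 / (1.0 - rho ** 2) * ar1_corr(T, rho)
    return sig_mu2 * np.kron(J_T, np.eye(N)) + np.kron(V, np.linalg.inv(B.T @ B))

# Under the joint null (sig_mu2 = rho = lam = 0), omega(...) collapses to sig_e2 * I_NT.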

The log-likelihood function of the above model can be written as

L = −(NT/2) ln 2π − (1/2) ln|Ω| + T ln|A| − (1/2) [(I_T⊗A)y − Xβ]' Ω^{-1} [(I_T⊗A)y − Xβ],        (53)

where A = (I_N − τM). Following Sen and Bera (2011), we can write

−(1/2) ln|Ω| = (N/2) ln(1−ρ²) − (NT/2) ln σ_e² − (1/2) ln|d²(1−ρ)² φ I_N + (B'B)^{-1}| + (T−1) ln|B|,

where d² = α² + (T−1), α = sqrt[(1+ρ)/(1−ρ)] and φ = σ_μ²/σ_e². Thus, substituting ln|Ω| in L, we obtain

L = −(NT/2) ln 2π + (N/2) ln(1−ρ²) − (1/2) ln|d²(1−ρ)² φ I_N + (B'B)^{-1}| − (NT/2) ln σ_e²
    + (T−1) ln|B| + T ln|A| − (1/2) [(I_T⊗A)y − Xβ]' Ω^{-1} [(I_T⊗A)y − Xβ].                      (54)

9.1. Derivation of Scores

Using the log-likelihood function in equation (53), the respective score functions are as follows:

∂L/∂β = X'Ω^{-1}u,                                                                  (55)
∂L/∂σ_e² = −(1/2) tr[Ω^{-1}(∂Ω/∂σ_e²)] + (1/2) u'Ω^{-1}(∂Ω/∂σ_e²)Ω^{-1}u,            (56)
∂L/∂σ_μ² = −(1/2) tr[Ω^{-1}(J_T⊗I_N)] + (1/2) u'Ω^{-1}(J_T⊗I_N)Ω^{-1}u,             (57)
∂L/∂ρ = −(1/2) tr[Ω^{-1}(∂Ω/∂ρ)] + (1/2) u'Ω^{-1}(∂Ω/∂ρ)Ω^{-1}u,                    (58)
∂L/∂τ = −T tr(A^{-1}W) + u'Ω^{-1}(I_T⊗W)y,                                          (59)
∂L/∂λ = −(1/2) tr[Ω^{-1}(∂Ω/∂λ)] + (1/2) u'Ω^{-1}(∂Ω/∂λ)Ω^{-1}u,                    (60)

where u = (I_T⊗A)y − Xβ, ∂Ω/∂σ_e² = (V/σ_e²)⊗(B'B)^{-1}, ∂Ω/∂σ_μ² = J_T⊗I_N, ∂Ω/∂ρ = (∂V/∂ρ)⊗(B'B)^{-1} and ∂Ω/∂λ = V⊗[(B'B)^{-1}(B'W + W'B)(B'B)^{-1}]. (In these and the subsequent expressions the two weight matrices are taken to be the same, M = W.)

The score functions evaluated under H0^a, i.e., at the restricted MLE θ̃ with ω = (β', σ_e²)', are:

∂L/∂β = 0,                                                                          (61)
∂L/∂σ_e² = 0,                                                                       (62)
∂L/∂σ_μ² = (NT/(2σ̃_e²)) [ũ'(J_T⊗I_N)ũ/(ũ'ũ) − 1],                                  (63)
∂L/∂ρ = (NT/2) [ũ'(G⊗I_N)ũ/(ũ'ũ)],                                                  (64)
∂L/∂τ = ũ'(I_T⊗W)y / σ̃_e²,                                                         (65)
∂L/∂λ = (NT/2) [ũ'(I_T⊗(W+W'))ũ/(ũ'ũ)],                                             (66)

where G is the T×T matrix with ones on the first sub- and super-diagonals and zeros elsewhere, ũ = y − Xβ̃ is the OLS residual vector, and σ̃_e² = ũ'ũ/(NT).
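All of the restricted scores (61)-(66) are simple functions of the OLS residuals. The following sketch computes them under the notation reconstructed above (variable and function names are illustrative assumptions; the data are stacked by time period and M = W):

import numpy as np

def restricted_scores(y, X, W, T):
    """Score components (63)-(66) and the quantities A and B, evaluated at the OLS estimates."""
    N = W.shape[0]
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ beta_ols                              # OLS residuals, NT x 1
    sig2 = u @ u / (N * T)                            # sigma_e^2 tilde
    J_T = np.ones((T, T))
    G = np.eye(T, k=1) + np.eye(T, k=-1)              # ones on the first sub- and super-diagonals
    I_N = np.eye(N)
    A = u @ np.kron(J_T, I_N) @ u / (u @ u) - 1.0
    B = u @ np.kron(G, I_N) @ u / (u @ u)
    E = np.kron(np.eye(T), W)                         # I_T kron W
    d_mu  = N * T / (2.0 * sig2) * A                  # eq. (63): random effect
    d_rho = N * T / 2.0 * B                           # eq. (64): serial correlation
    d_tau = u @ E @ y / sig2                          # eq. (65): spatial lag
    d_lam = N * T / 2.0 * (u @ (E + E.T) @ u) / (u @ u)   # eq. (66): spatial error
    return A, B, d_mu, d_rho, d_tau, d_lam, sig2, beta_ols, u

With A and B in hand, the adjusted statistics (88) and (89) of Section 9.3 follow immediately.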

9.2. Derivation of Information Matrix

To derive the information matrix under the joint null H0^a, we need the second-order derivatives of the log-likelihood and their expectations. Differentiating equation (54) with respect to β together with each of the remaining parameters gives

∂²L/∂β∂β' = −X'Ω^{-1}X,                                                             (67)
∂²L/∂β∂σ_e² = −X'Ω^{-1}(∂Ω/∂σ_e²)Ω^{-1}u,                                           (68)
∂²L/∂β∂σ_μ² = −X'Ω^{-1}(J_T⊗I_N)Ω^{-1}u,                                            (69)
∂²L/∂β∂ρ = −X'Ω^{-1}(∂Ω/∂ρ)Ω^{-1}u,                                                 (70)
∂²L/∂β∂τ = −X'Ω^{-1}(I_T⊗W)y,                                                       (71)
∂²L/∂β∂λ = −X'Ω^{-1}(∂Ω/∂λ)Ω^{-1}u,                                                 (72)

with ∂Ω/∂σ_e² = (V/σ_e²)⊗(B'B)^{-1}, ∂Ω/∂ρ = (∂V/∂ρ)⊗(B'B)^{-1} and ∂Ω/∂λ = V⊗[(B'B)^{-1}(B'W + W'B)(B'B)^{-1}].

Differentiating equations (56)-(60) with respect to σ_e², σ_μ², ρ, λ and τ in the same manner yields the remaining cross derivatives, equations (73)-(85); they involve the same building blocks Ω, ∂Ω/∂(·), A and W. Evaluating all of the second-order derivatives under the joint null H0^a: σ_μ² = ρ = λ = τ = 0 and taking expectations, the only non-zero terms are:
E[−∂²L/∂β∂β'] = X'X/σ_e²,
E[−∂²L/∂β∂τ] = X'(I_T⊗W)Xβ/σ_e²,
E[−∂²L/∂σ_e²∂σ_e²] = NT/(2σ_e⁴),
E[−∂²L/∂σ_e²∂σ_μ²] = NT/(2σ_e⁴),
E[−∂²L/∂σ_μ²∂σ_μ²] = NT²/(2σ_e⁴),
E[−∂²L/∂σ_μ²∂ρ] = N(T−1)/σ_e²,
E[−∂²L/∂ρ∂ρ] = N(T−1),
E[−∂²L/∂τ∂τ] = T·tr(W² + WW') + β'X'(I_T⊗W')(I_T⊗W)Xβ/σ_e² ≡ H,
E[−∂²L/∂τ∂λ] = T·tr(W² + WW') ≡ T_W,
E[−∂²L/∂λ∂λ] = T·tr(W² + WW') = T_W.

All other second-order derivatives have zero expectation under the joint null. Thus the information matrix J, equation (12), under H0^a is

J(θ̃) = (1/NT) × the matrix below, with rows and columns ordered as (β', σ_e², σ_μ², ρ, τ, λ):

row β:     [ X'X/σ_e²,             0,            0,             0,            X'(I_T⊗W)Xβ/σ_e²,   0   ]
row σ_e²:  [ 0,                    NT/(2σ_e⁴),   NT/(2σ_e⁴),    0,            0,                  0   ]
row σ_μ²:  [ 0,                    NT/(2σ_e⁴),   NT²/(2σ_e⁴),   N(T−1)/σ_e²,  0,                  0   ]
row ρ:     [ 0,                    0,            N(T−1)/σ_e²,   N(T−1),       0,                  0   ]
row τ:     [ β'X'(I_T⊗W')X/σ_e²,   0,            0,             0,            H,                  T_W ]
row λ:     [ 0,                    0,            0,             0,            T_W,                T_W ]        (86)

where J(θ̃) = E[−(1/NT) ∂²L/∂θ∂θ'] evaluated at the restricted estimate θ̃.
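As a numerical companion to (86), the sketch below assembles the non-zero expected-information blocks listed above from W, X, the OLS coefficient vector and σ̃_e². It is a tentative illustration based on the reconstructed entries; the function and key names are assumptions.

import numpy as np

def info_blocks_under_null(X, W, beta, sig2, T):
    """Non-zero blocks of the expected information (86) under H0^a (not scaled by 1/NT)."""
    N = W.shape[0]
    NT = N * T
    T_W = T * np.trace(W @ W + W @ W.T)
    WXb = np.kron(np.eye(T), W) @ X @ beta            # (I_T kron W) X beta
    return {
        ("beta", "beta"): X.T @ X / sig2,
        ("beta", "tau"): X.T @ WXb / sig2,            # the only non-zero cross term with beta
        ("se2", "se2"): NT / (2.0 * sig2 ** 2),
        ("se2", "smu2"): NT / (2.0 * sig2 ** 2),
        ("smu2", "smu2"): NT * T / (2.0 * sig2 ** 2),
        ("smu2", "rho"): N * (T - 1) / sig2,
        ("rho", "rho"): N * (T - 1.0),
        ("tau", "tau"): T_W + WXb @ WXb / sig2,       # H in (86)
        ("tau", "lam"): T_W,
        ("lam", "lam"): T_W,
    }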

9.3. Derivation of test statistics

The adjusted RS test statistic for a parameter (vector) γ, allowing for local misspecification in φ, is

RS*_γ = [d_γ(θ̃) − J_{γφ·ω}(θ̃)J_{φφ·ω}^{-1}(θ̃)d_φ(θ̃)]' [J_{γγ·ω}(θ̃) − J_{γφ·ω}(θ̃)J_{φφ·ω}^{-1}(θ̃)J_{φγ·ω}(θ̃)]^{-1} [d_γ(θ̃) − J_{γφ·ω}(θ̃)J_{φφ·ω}^{-1}(θ̃)d_φ(θ̃)],        (87)

where ω = (β', σ_e²)' collects the nuisance parameters estimated under the joint null, "·ω" denotes partialling out ω, and γ and φ are different combinations of the parameters (σ_μ², ρ, λ, τ).

I) H0^c: σ_μ² = 0 in the presence of ρ, λ, τ.
Here γ = σ_μ², φ = (ρ, λ, τ), d_γ = d_{σ_μ²} and d_φ = (d_ρ, d_λ, d_τ)'. Since the spatial parameters have zero partial covariance with σ_μ² under H0^a (Section 9.4), only the serial-correlation component enters the adjustment, and (87) reduces to

RS*_{σ_μ²} = [d_{σ_μ²} − J_{σ_μ²ρ·ω}J_{ρρ·ω}^{-1}d_ρ]² / [J_{σ_μ²σ_μ²·ω} − J_{σ_μ²ρ·ω}J_{ρρ·ω}^{-1}J_{ρσ_μ²·ω}] = NT²(A − B)² / [2(T−1)(T−2)],        (88)

where A = ũ'(J_T⊗I_N)ũ/(ũ'ũ) − 1 and B = ũ'(G⊗I_N)ũ/(ũ'ũ).

II) H0^d: ρ = 0 in the presence of σ_μ², λ, τ.
Here γ = ρ and φ = (σ_μ², λ, τ); only the random-effect component enters the adjustment, and

RS*_ρ = [d_ρ − J_{ρσ_μ²·ω}J_{σ_μ²σ_μ²·ω}^{-1}d_{σ_μ²}]² / [J_{ρρ·ω} − J_{ρσ_μ²·ω}J_{σ_μ²σ_μ²·ω}^{-1}J_{σ_μ²ρ·ω}] = NT²(B − 2A/T)² / [4(T−1)(T−2)/T].        (89)

III) H0^g: λ = 0 in the presence of σ_μ², ρ, τ.
Here γ = λ and φ = (σ_μ², ρ, τ); only the spatial lag component enters the adjustment, and

RS*_λ = Z² / [T_W(1 − T_W J_{ττ·ω}^{-1})],        (90)

where Z = d_λ − T_W J_{ττ·ω}^{-1} d_τ = (1/(2σ̃_e²))[ũ'(E+E')ũ − 2T_W J_{ττ·ω}^{-1} ũ'Ey], E = (I_T⊗W), T_W = T·tr(W² + WW') and J_{ττ·ω} = T_W + ((I_T⊗W)Xβ̃)'[I_{NT} − X(X'X)^{-1}X']((I_T⊗W)Xβ̃)/σ̃_e².

IV) H0^f: τ = 0 in the presence of σ_μ², ρ, λ.
Here γ = τ and φ = (σ_μ², ρ, λ); only the spatial error component enters the adjustment, and

RS*_τ = [d_τ − J_{τλ·ω}J_{λλ·ω}^{-1}d_λ]² / [J_{ττ·ω} − J_{τλ·ω}J_{λλ·ω}^{-1}J_{λτ·ω}] = [(1/σ̃_e²)(ũ'Ey − (1/2)ũ'(E+E')ũ)]² / [J_{ττ·ω} − T_W].        (91)
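Computationally, the adjustment in (87)-(91) is a routine partialling-out step once the scores and the nuisance-partialled information blocks are available. A generic sketch (a hypothetical helper, not the authors' code):

import numpy as np

def rs_adjusted(d_g, d_p, J_gg, J_gp, J_pp):
    """Bera-Yoon adjusted score statistic as in (87):
    RS*_g = (d_g - J_gp J_pp^{-1} d_p)' (J_gg - J_gp J_pp^{-1} J_pg)^{-1} (d_g - J_gp J_pp^{-1} d_p).
    All blocks are assumed to already have omega = (beta', sigma_e^2)' partialled out."""
    d_g = np.atleast_1d(np.asarray(d_g, dtype=float))
    d_p = np.atleast_1d(np.asarray(d_p, dtype=float))
    J_gg = np.atleast_2d(np.asarray(J_gg, dtype=float))
    J_pp = np.atleast_2d(np.asarray(J_pp, dtype=float))
    J_gp = np.atleast_2d(np.asarray(J_gp, dtype=float)).reshape(d_g.size, d_p.size)
    adj_score = d_g - J_gp @ np.linalg.solve(J_pp, d_p)
    adj_info = J_gg - J_gp @ np.linalg.solve(J_pp, J_gp.T)
    return float(adj_score @ np.linalg.solve(adj_info, adj_score))

# Example: the random-effect test adjusted for local serial correlation, as in (88),
# uses d_g = d_mu, d_p = d_rho and the corresponding sigma_e^2-partialled blocks of (86).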

9.4. Derivation of partial covariances

The adjusted statistics in Section 9.3 use the fact that, under H0^a, the partial covariances between the random-effect and serial-correlation parameters on the one hand and the spatial parameters on the other are all zero once the nuisance parameters ω = (β', σ_e²)' are partialled out. For a generic pair (γ, φ) of these parameters,

J_{γφ·ω}(θ̃) = J_{γφ} − J_{γω} J_{ωω}^{-1} J_{ωφ},

and evaluating the blocks of the information matrix (86) under H0^a gives

A) J_{σ_μ²λ·ω}(θ̃) = 0,   B) J_{σ_μ²τ·ω}(θ̃) = 0,   C) J_{ρλ·ω}(θ̃) = 0,   D) J_{ρτ·ω}(θ̃) = 0,
E) the entire 2×2 block of partial covariances between (σ_μ², ρ) and (λ, τ) is therefore the zero matrix.

In each case the direct cross term J_{γφ} is zero, and the correction term J_{γω}J_{ωω}^{-1}J_{ωφ} also vanishes because J_{ωω} is block diagonal in (β, σ_e²) and, for every pairing above, at least one of the two factors J_{γω} and J_{ωφ} is zero in exactly the positions where the other is not. This is why each adjusted test in Section 9.3 involves only the single score correction indicated there.
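These zero partial covariances are easy to verify numerically. A small self-contained check, with illustrative data and the assumed parameter ordering, for the pairing of σ_μ² with the spatial lag parameter:

import numpy as np

rng = np.random.default_rng(1)
N, T, K = 20, 6, 2
NT = N * T
W = np.zeros((N, N))
for i in range(N):
    W[i, (i - 1) % N] = W[i, (i + 1) % N] = 0.5
X = np.column_stack([np.ones(NT), rng.normal(size=NT)])
beta, sig2 = np.array([1.0, 0.5]), 1.0

J_omega = np.block([[X.T @ X / sig2, np.zeros((K, 1))],
                    [np.zeros((1, K)), np.array([[NT / (2.0 * sig2 ** 2)]])]])        # (beta, sigma_e^2) block
J_mu_omega = np.concatenate([np.zeros(K), [NT / (2.0 * sig2 ** 2)]])                  # cross of sigma_mu^2 with omega
J_tau_omega = np.concatenate([X.T @ np.kron(np.eye(T), W) @ X @ beta / sig2, [0.0]])  # cross of tau with omega
J_mu_tau = 0.0                                                                        # direct cross term under H0^a

partial_cov = J_mu_tau - J_mu_omega @ np.linalg.solve(J_omega, J_tau_omega)
print(abs(partial_cov) < 1e-10)    # True: J_{sigma_mu^2, tau . omega} = 0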
