Cointegration For Applied Economist

You might also like

Download as pdf
Download as pdf
You are on page 1of 140
Se meV Cel ace) a re Re eu Edited by B. Bhaskara Rao Cointegration for the Applied Economist Second Edition Edited by B. Bhaskara Rao PIDE LIBRARY P.O, Box 1091, Islamabad. AccNo_ 32266 MENU 47 og palgrave macmillan Contents List of Tables List of Figures and Screens Preface to the First Edition Preface to the Second Edit Notes on Contributors 1 Introduction B. Bhaskara Rao 1.1 Introduction 1.2. Unit roots and cointegration 1.3. Economic implications 1.4 An overview of the papers 1.5 Concluding observations 2. A Primer on Cointegration with an Application to Money and Income David A. Dickey, Dennis W. Jansen and Daniel L. Thornton 2.1 Introduction : a general framework 2.2.1. Locating stationary linear combination of variables 2.2.2 Multiple cointegrating vectors 2.2.3 ° Tests for cointegration and their relation to unit root tests 2.3. Is there an economic interpretation of cointegration vectors? 2.3.1 Cointegration with exogenous variables 2.3.2 Should there be many or few cointegrating vectors? 2.4 Alternative tests for cointegration 2.4.1 A note about distributions 2.4.2. Other approaches to cointegration xiii xiv xvi xvii 10 10 u 12 13 15 16 16 20 2 22 23 The Significance of Unit Roots and the Pitfalls of Mechanical Statistics Ron Smith 4.1 Introduction 4.2 Mechanical statistics 4.3 Applied econometrics 4.4 Significance 4.5 Unit roots 4.6 VARs, error correction and cointegration 4.7 Weak exogeneity 4.8 Identification 4.9 Conclusions Unit Roots and Structural Breaks: A Survey of the Literature Joseph P. Byrne and Roger Perman $.1 Introduction 5.2. Unit roots and ADF tests 5.3. Exogenous structural breaks 5.4 Endogenous structural breaks 5.5 Non-linear breaks and GLS detrending 5.6 Multiple structural breaks 5.6.1 Two structural breaks 5.6.2 Multiple breaks 5.7 Unit roots and structural breaks: applied Papers 5.8 Other issues 5.9 Conclusion 5.10 Software New Unit Root Tests Designed for the Trend-Break Stationary Alternative: Simulation Evidence and Empirical Applications Amit Sen 6.1 Introduction 6.2 Model and test statistics 6.3 Finite sample size and power 6.4 Empirical applications 6.4.1 Extended Nelson-Plosser data set Contents vii 101 101 103 104 107 109 us 122 124 127 129 129 130 131 132 134 135 136 136 137 138 139 141 143 143 145 159 188 188 Contents ix 8 Panel Cointegration Analysis: An Empirical Example 222 NR. Vasudeva Murthy 8.1 Introduction 222 8.2 Model specification, data and cointegration analysis 224 8.3. Empirical results 228 8.4 Conclusions 233 Appendix to Chapter 8 233 i i References 245 \ Index 237 List of Tables. xi 6.3D Asymptotic critical values for F(A‘, m), m= 2,3,4,...,10 158 64 Critical values for J34(m), m = 2,3,4,...,10 with A= 0.05 160 65 Critical values for J75(m), m = 2,3,4,...,10 witha =0.05 161 6.6 Critical values for J;°(m), m = 2,3,4,...,10 with A= 0.05 162 6.7 Finite sample size of J#(m) with m = 2,3,4,..., 10, 5% nominal size DGP (a = 1): Yr = Ye-1 +P Aya +e + Wert, er ~ N(O,1) 164 6.8 Finite sample power of /#(m) with m= 2,3,4,..., 10, 5% nominal size Crash DGP: yz = @DU f(T) +24, 2 = 02-1 + eA%1 +e,+. Were ~ NO, 1) 164 6.9 Finite sample size of J#(m) with m = 2,3, 4,...,10,5% nominal size DGP (a = 1): ¥¢ = Yr-1 +P AY-1 Her +H er-1, €, ~ NO,1) 166 6.10 Finite sample power of /#(m) with m = 2,3,4,...,10,5% nominal size Changing Growth DGP: y, = y DTy(Tf) + 2, % = a + PAM + et + Wer, er ~ NO,1) 166 6.11 Finite sample size of J(m) with m = 2, 3, 4 nominal size DGP (=U): yt = yea tp Aya tert Werner ~ NO, 1) 168 6.12 Finite sample power of /E(m) with m = 2, 3,4,...,10,5% nominal size Mixed DGP: y¢ = @ DU(TE) + yDT:(T§) + 24, 2 = 07-1 + PAY1 +e + Ve-1,e% ~ NOO,1) 169 6.13 Finite sample size of /;4(m) with m = 2,3, 4,..., 10, 5% nominal size DGP @ = 1): ye = ye +P AM tert Ver, 10, 5% er ~ N(O,1) 177 6.14 Finite sample power of with m = 2,3,4,...,10,5% nominal size Crash DGP: yr = @DU;(T§) + 2, 2p = 021 + p Ary ter + Wer-1,e ~ N(O,1) 177 6.15 Finite sample size of /¢¥(m) with m = 2,3, 4,..., 10, 5% nominal size DGP(@ =D: Ye = Ye +o Ay-1 Hert Ver1, &% ~ N(O,1) 179 6.16 Finite sample power of J78(m) with m= 2,3,4,...,10,5% nominal size Changing Growth DGP: y; = yDT}(T§) + ze, = 02-1 + PA%1 +e + Wert, e ~ NO,1) 179 List of Figures and Screens Figures 2.1 The income velocity of M1 1953:1 to 1988:4 2.2 The income velocity of M2 1953:1 to 1988:4 2.3. The reciprocals of income velocities of M1, M2 and the non-M1 components of M2 1953:1 to 1988:4 3A.1.__Log real consumers’ expenditure (C) 3A.2 Change in log real consumers’ expenditure (AC) 3A.3 Log real personal disposable income (1) 3A.4 Change in log real personal disposable income (Al) 3A. Log real wealth (W) 3A.6 Change in log real wealth (AW) 34.7 Autocorrelation function of C: 1955:1 to 1991:2 3A.8 Autocorrelation function of AC: 1966:4 to 1991:2 3A.9 Autocorrelation function of I: 1955:1 to 1991:2 3A.10 Autocorrelation function of Al: 1955:2 to 1991:2 3A.11 Autocorrelation function of W’: 1966:4 to 1991:2 3A.12 Autocorrelation function of AW: 1967:1 to 1991:2 3.13 Coefficient of W;_1 and its 25.8. Based on Recursive OLS 4.1 Logarithms of US real GNP, consumption ani 7.1 Data in levels 7.2 Real money and real GDP 7.3. Interest rates Screens 8A.1 Loaded data file window 8A.2 Group unit root dialogue box 8A.3 Group unit root tests 8A.4_— The IPS test results for level 8A.S__ Individual ADF tests 8A.6 Rats 6.01 window 8A.7__ Pedroni cointegration tests results window in rats 8A.8a Estimating the cointegrating equation using Pedroni GMPFMOLS in RATS 8A.8b Pedroni GMPFMOLS results Window in RATS xiii 26 27 27 85 85 86 86 86 87 87 88 88 89 89 90 92 us 208 206 206 234 235 236 237 238 239 240 241 241 Preface to the First Edition xv the appendix, will strengthen students’ practical econometric and quan- titative skills. Therefore, this book will be a valuable source of project topics to the teachers of these courses ‘We must record our deep sense of gratitude to all the contributors to this volume. Their response to our initial request to revise their papers to make them more pedagogic was immediate and spontaneous. We are grateful to Giovanna Davitti, Commissioning Editor of the Macmillan Press, for her encouragement, patience and guidance. Thanks are also due to Naroj Nafratitti and Mayda Shahinian of the School of Mathematics, University of New South Wales, for skillfully typing the manuscript, and to Tarlok Singh, the Reserve Bank of India, for help with proof reading. Our final thanks go to Professor V.K. Srivastava, University of Lucknow, for constant encouragement and help. B. Bhaskara Rao Preface to the Second Edition xvii Patiently conducted the laboratory sessions for my course. Saten Kumar hhas also prepared the table of contents and the list of references. B. Bhaskara Rao July 2007 Sydney. Notes on Contributors xix from the State University of New York at Binghamton. Professor Murthy has published in the Journal of Health Economics, Economics Letters, Public Finance, Applied Economics, National Tax Journal, Journal of Interna- tional Technology and Information Management and Public Choice. Contact information: vmurthy@creighton.edu Roger Perman is Lecturer in Economics at the University of Strathclyde, Glasgow, UK. He has graduated from the universities of Manchester and London, His research interests are in the areas of econometrics and regional economics. Amit Sen is Assistant Professor of Economics at Xavier University. He received his PhD from the North Carolina State University. His research focuses on unit root time series analysis, structural stability testing and applied econometrics. He has published in the Journal of Business and Economic Statistics, the Econometrics Journal, Statistics and Probability Let- ters, Economics Letters, Empirical Economics, Applied Economics, Structural Change and Economic Dynamics and Public Choice. Contact information: sen@xavier.edu and homepage: http://staff.xu.edu/~sen Ron Smith has been at Birkbeck College, University of London, since 1976, previously teaching at Cambridge. He has been Professor of Applied Economics since 1985 and visiting professor at London Busi- ness School and the University of Colorado. He has written 4 books, been an editor of another 4 and has published over 150 papers mainly in applied econometrics and deferice economics. He Is responsible for the graduate econometrics teaching at Birkbeck and has provided econo- metrics training for many public and private sector groups including, the Bank of England. He has worked as a consultant to a range of bod- ies and firms. Contact information: R.Smith@bbk.ac.uk and homepage: http://www.econ.bbk.ac.uk/faculty/smith/ Daniel L. Thornton is Assistant Vice President and Research Economist at the Federal Reserve Bank of St. Louis. He joined the research depart- ment of the Federal Reserve Bank of St. Louis in 1981 asa staff economist and was promoted to his current position in 1989. He has published numerous research papers in the Federal Reserve Bank of St. Louis Review and in other professional journals. He is Associate Editor of the Jour- nal of International Financial Markets Institutions and Money. He is a native of Clinton, Iowa and received his PhD in Economics from the University of Missouri-Columbia in 1976. Contact information: Daniel.L.Thornton@stls.frb.org 1 Introduction B. Bhaskara Rao 1.1 Introduction Methodological revolutions in economics are not new. The major impact of a revolution is that it calls for a fundamental change in our way of thinking about modelling economic phenomena. Such revolutions in economics are invariably controversial, partly because they often imply that existing policy measures are inappropriate and should be abandoned in favour of a new set of policies. The old and new policies may have different adherents depending on their sense of economic fairness and justice. Therefore, it is hard to derive widely acceptable conclusions on the relative merits of these revolutions. The unit roots and cointegration revolution have both economic and statistical implications. Although its economic implications may be controversial, its statistical implica- tions are less controversial. We shall first describe briefly its statistical implications. 1.2 Unit roots and cointegration The standard classical methods of estimation, which we routinely use in applied econometric work, are based on the assumption that the means and variances of the variables are well-defined constants and indepen- dent of time. However, applications of the unit root tests have shown that these assumptions are not satisfied for a large number of macroe- conomic variables, Variables whose means and variances change over time are known as non-stationary or unit root variables. Furthermore the unit root revolution has also shown that using classical estimation methods, such as the ordinary least squares (OLS), to estimate relation- ships with unit root variables gives misleading inferences. This is known Introduction 3 cointegration relationships. Therefore, it is desirable to use more than one technique to estimate the long-run relationships. ‘The aforesaid outline is a highly simplified and condensed version of the major steps in applying unit root and cointegration techniques. In practice, however, the applied economist will encounter several problems. Nonetheless, it can be said that in the applied econometric works some widely used unit root tests are the augmented Dickey-Fuller test (ADF) and the more powerful alternative tests like the general- ized least squares ADF test or GLSADF and the Elliot, Rothenberg and Stock (ERS) test. These tests can be implemented with many standard software packages like EViews, RATS and TSP. For estimating the coin- tegrating equations the more popular estimation methods are the Engle and Granger two-step method, the general to specific approach (GETS), the Phillips-Hanson technique, bounds test and the Johanson maximum, likelihood method. The papers in the first edition of this book adequately explained some of these standard and frequently used procedures. We have included two such papers from the original edition in this volume which cover some basic and widely used procedures. 1.3. Economic implications The economic implications of the unit roots and cointegration lit- erature are also important especially to provide some perspective to those who are interested mainly in applying alternative unit root tests. ‘The mainstream Keynesian and neoclassical alternative macroeconomic paradigms have treated economic fluctuations as temporary deviations froma stable trend rate of growth of output and offered different explana- tions for these fluctuations. These dominant macroeconomic paradigms also assume that economic fluctuations are due to aggregate demand shocks and these fluctuations, sooner or later, die down and the econ- ‘omy will eventually reach its full employment equilibrium. Therefore, by and large, demand shocks are unlikely to have any permanent effects on ‘the full employment output. In contrast, the real business cycle theories argued that economic fluctuations are due to shocks to aggregate supply and that they are likely to have permanent effects on the level of output. While it may be hard to directly evaluate these two theories, devel- ‘opments in time series econometrics offered a simpler and an indirect method of evaluation. The now classic Nelson and Plosser (1982) unit root tests have shown that several US macro variables are unit root vari- ables. Doubts have been raised on the separation of cycles and trends in the traditional macroeconomic theories, and therefore the real business Introduction § 1.4 An overview of the papers We have included two papers from the first edition of this book because many readers said that they have been very useful. These are the classic Paper on the demand for money by Dickey, Jansen and Thornton and an equally valuable expository and slightly revised paper on unit roots and cointegration by Holden and Perman. Dickey, Jansen and Thornton use the Dickey-Fuller and the ADF tests to test for unit roots in the variables of the demand for money function. Both tests yield similar conclusions. Next, they explain how the cointe- grating regressions can be estimated using three different approaches namely, the Engle and Granger two-step, the Johansen ML, and the Stock-Watson procedures. The Johansen ML method seems to yield more satisfactory results. A much appreciated aspect of this paper is a detailed step-by-step explanation of the Johansen ML and the Stock-Watson methods in the Appendix. Holden and Perman’s paper is somewhat similar in scope to the Dickey, Jansen and Thornton paper, illustrating among other things, the useful. ness of the Phillips and Perron non-parametric test. The discussion of various unit root tests is very comprehensive, and an easy to follow step- by-step sequential procedure to conduct these tests is explained. The error-correction formulation is discussed in some detail. Although the Johansen ML method is used, there is a detailed discussion of the appli- cation of cointegration in econometric modelling. They apply some of these tests to the UK consumption function in their Appendix. The remaining five papers in this volume by Ron Smith, Joseph Byrne and Roger Perman, Amit Sen, Roselyne Joyeux and Vasudeva Murthy — are new and discuss various aspects of the recent developments in the ‘unit roots and cointegration literature, In his ‘Significance of Unit Roots and the Pitfalls of Mechanical Statis- tics’, Ron Smith says that unit roots, vector auto-regressions (VARs) and cointegration play a central, though controversial, role in modern time series econometrics. His paper uses these concepts to examine one aspect of applied econometrics, the prevalence of mechanistic application of statistical techniques and the problems with such an approach, After reviewing the general methodological questions, Smith examines the issues of estimation and testing, using a small macro-model of the US as. an example. We strongly recommend that readers also read Smith (2002) to get a good methodological perspective on the present developments in the Introduction. 7 Byrne and Perman for further details. The programmes to compute Sen's tests can be obtained from his homepage or by contact him. He illustrates the use these tests to evaluate all series in the extended Nelson-Plosser data set and finds evidence against the unit root null hypothesis for the US real GNP, real per capita GNP, industrial production, employ- ment, and common stock prices series. He also tests for the presence of a unit root in the real per capita GDP series of 18 OECD countries. The unit root null is rejected for Austria, Belgium, Canada, Denmark, the United Kingdom and the United States. Therefore, his findings raise fresh doubts on the support for the real business cycle theory in the original Nelson-Plosser tests. In ‘How to Deal with Structural Breaks in Practical Cointegration Analysis”, Roselyne Joyeux illustrates how to estimate a cointegrating equation with a known structural break. Structural breaks have to be accounted for when testing for cointegration among the variables in a ‘VAR model. Her paper shows how to specify and include intervention dummies, and illustrates with an example the latest developments in the use of intervention dummies when testing for unit roots and coin- tegration in a VAR framework. A simple explanation of the specification of intervention dummies is provided together with an application. This is another important gap filler in applied work, because there are very few guidelines on how to proceed further if it is found that all or some variables contain unit roots with structural breaks. From the applied economist’s point of view, if the break dates are different for different variables, it is even more difficult to estimate cointegrating equations. In our view, the problems caused by structural breaks, although well explored for the unit root tests, are not ade- quately addressed for the estimation of cointegrating equations. Burke and Hunter (2005) for example, point out that ‘Probably the main future of economic time series that is capable of undermining cointegrating analysis is that of structural beaks’. If the unit root tests with structural breaks show that all the variables in a model are stationary in levels, then the standard classical methods of estimation with appropriate shift dummies can be used.” However, as noted above, if the break dates are different and some variables are stationary in their levels and others are not, cointegration techniques become difficult to apply. A pragmatic methodological alternative is to determine the dominant common break date in a set of variables, as briefly mentioned by Byrne and Perman, through a systems method. By and large, the Gregory and Hansen (1996) method does this within the Engle-Granger cointegration framework, by identifying an endogenous break. Their procedure is briefly explained in Introduction 9 Chapter numbers, however, aie not prefixed to equations because none of the chapters refers to the equations in another chapter. Since one of the objectives of this book is to encourage replications, all the data sets used in the papers are made available on the publisher's book homepage. Further details on any missing data and software routines can be obtained by contacting the respective authors. An old Chinese proverb goes something like this: If I hear, I know. If I see, I believe. If I do, I understand. Therefore, applied economists are strongly advised to replicate the results from various papers. It is our hope that our book will encourage many to do quality research with the help of some recent econometric techniques. Notes 1. An intuitive explanation of the identification and endogeneity concepts in cointegration is given in Rao (2007). 2. For an informative survey of ECM see Alagoskafis and Smith (1991), 3. Alternative unit root tests have also economic implications, and we shall examine them shortly. 4, In applied work, some have used shift dummies in the cointegrating or in the short-run dynamic equations, Some others have estimated separate coin- ‘tegrating equations for the sub-samples after determining the break dates. Needless to say, these are arbitrary procedures. 5. A bright graduate student has asked If he can estimate a cointegrating equation between the bank rate, and imports to tests whether a recent rise In the bank rate in Fifi, from 3.5 percent to 3.75 per cent, will decrease the high growth in Imports. There area number of ad hoc applications ofthe unit roots and cointegration techniques, especialy in the time series applied growth lit erature with grossly misspecified equations, where the growth rate is simply regressed on a single variable like foreign aid or even defense expenditure etc. 6. See also Rao (2007) which is very much influenced by Smith's methodological views 7. The well-known GETS method, developed at the London School of Eco- nomics, can be used to distinguish between the long- and short-run relation- ships. GETS was originally developed for this purpose, well before the unit roots and cointegration methods had much impact. The distinguishing fea- ture of GETS is the dynamic adjustment process based on the ertor-correction ‘model, which is later used in all the time series models. 8. Rao and Kumar (2007), inspired by Byrne and Perman and Joyeux, is an easy to read application of the Gregory and Hansen (1992) technique to estimate cointegrating equations with a single endogenous structural break, 9. Seealso Franses (2001) fora discussion of cointegration with structural breaks. The dominant break date, for example, may be first determined endoge- ously with the Gregory and Hansen method and then used to implement the Johansen procedure. 10, CATS and MALCOLM were run using RATS version 6.02b. A Primer on Cointegration 11 time-series analysis on the other. The chapter then addresses the broader question of the economic interpretation of cointegration by contrasting it with the usual linear, dynamic, simultaneous equation model which is frequently used in macroeconomics. ‘The chapter goes on to compare three recently proposed tests for cointegration and outlines the procedures for applying these tests. ‘An application of these tests to U.S. time-series data using alternative monetary aggregates, income and interest rates suggests that there is a stable long-run relationship among real output, interest rates and several monetary aggregates, including the monetary base. 2.2. Testing for cointegration: a general framework Because of the close correspondence between tests for cointegration and standard tests for unit roots, it is useful to begin the discussion by considering the univariate time-series model yew er — Wen a where y; denotes some univariate time series, u is the mean of the series and e; is a random ertor with an expected value of zero and a constant, finite variance. The coefficient g measures the degree of persistence of deviations of y¢ from w. When @= 1, these deviations are permanent. In this case, yt is said to follow a random walk - it can wander arbitrarily far from any given constant if enough time passes.4 In fact, when = 1 the variance of ye approaches infinity as t increases and p the mean of yr is not defined. Alternatively, when jel < 1, the series is said to be mean reverting and the variance of yt is finite. Although there is a similarity between the tests for cointegration and unit roots, as we shall see below, these tests are not identical. Tests for unit roots are performed on univariate time-series. In contrast, cointe- gration deals with the relationship among a group of variables, where (unconditionally) each has a unit root. ‘Tobe specific, consider Irving Fisher's important equation of exchange, MV =Pa, where M is a measure of nominal money, V is the velocity of money, P is the overall level of prices and q is real output.5 This equation can be rewritten in natural logarithms as. InM +InV—InP—Ing= @) In this form, the equation of exchange is an identity. The theory of the demand for money, however, converts this identity into an equation ‘A Primer on Cointegration 13 chosen a priori, but are determined by choosing among all possible vec- tors, tests for cointegration encounter the type of distributional problems associated with order statistics and multiple comparisons. Hence, it is ‘useful to discuss some of these problems in more detail In multiple comparison tests, an experimenter is usually concerned with, say, comparing the highest and lowest sample means among several. He wants to find the pair of sample means with the. largest differ- ence to see whether the difference is statistically significant, When the means to be compared are chosen ahead of time, tests for a significant difference between the means can be done using the usual t-statistics. If, however, the means to be compared are chosen simply because they are the largest and smallest from a sample of, say, five means, the rejection rate under the null hypothesis will be much higher than that implied by the percentile of the t-distribution. In order to control for the experi- mentwise error rate, as it is called, tables of distributions of highest mean minus lowest mean (standardized) have been computed for the case of no true differences in the population means. These ‘tables of the studen- tized range’ are then used to test for significant differences between the highest and lowest means.” The price paid for controlling the experimentwise error rate is a loss in Power. That ts, the difference between the means must be much larger than in the case of the standard t-test before it can be declared significant at some predetermined significance level. Thus, the power of the test to detect significant differences is reduced, In an analogous way, it is difficult to reject the null hypothesis that there are no stationary linear combinations when the observed data are used to estimate the most stationary-looking linear combination before testing for cointegration. This loss of power is evident in the test tables given by Stock and Watson or Johansen, where the percentiles are shifted far away from those of the standard unit root distribution of Dickey and Fuller (1979). Consequently, detecting cointegrating relationships among variables is relatively hard. Some power can be gained, however, if economic theory is used to assign values to some coefficients, a priori. Indeed, if theory fully specified the cointegrating vector, as in our exam- ple of the Fisher equation, using conventional unit root tests to test for cointegration would be appropriate. 2.2.2, Multiple cointegrating vectors Until now, the possibility that only one linear combination of variables is stationary has been considered; however, this need not be the case. In cases where more than two time-series are being considered, more ‘A Primer on Cointegration 15 This is accomplished by obtaining a k by n sub-matrix of B', 6’, of rank k such that the transformed series p'Y; is stationary. The k rows of B’ asso- ciated with these stationary series are called ‘cointegrating vectors’. The remaining n — k unit root combinations are termed ‘common trends.'!! 2.2.3. Tests for cointegration and their relation to unit root tests In illustrating tests for cointegration, we draw the analogy between tests for cointegration and tests for unit roots. Any autoregressive time-series of order p can be written in terms of its first difference, one lag level and (p~ 1) lag differences. Consider first the univariate case, VOSA + a2vf_g +--+ OpY_p + et, (8) where yf = yt ~ u. The equation can be reparameterized as DYE = PLAY i 1 + b2O¥E_2 + + bp-1 Yt pga — Vt-p + et (6) where bj = -1+a; +a2+---+ajandc=1-a,—a2~---~ap. Alternatively, equation (5) can be reparameterized as. Ay = dy AYE + d2A¥¢_ +--+ dp 1 AYE py — Hed Het, ”) where dp_y = ~ dp, dp-2= ~ ap - dp-1,...,d) = ~ dp ~ dp_y ~--»— a. The Dickey and Fuller test uses the t-statistic from the ordinary least squares regression to test the null hypothesis that the coefficient c in equation (7) is equal to zero. The t-statistic is the likelihood ratio test of the null hypothesis of a unit root, where the likelihood is conditional upon an initial value, yo.2 Consider now the multivariate analogue to equation (5): ¥e=Ai¥e-1 +A2¥e-2 +AgVe-g +--+ ApYenp tet, 8) where 41,A2,...,Ap are n x n matrices. Equation (8), too, can be reparameterized as either AY SPAY 1 + T2A¥e2 +--+ Tp 1AYepii— Yep ter, (9) or AY, =O, A¥e-1 + 62O¥iu2 +--+ 6p-1AKe-pyi —WYi-i ter. (10) If the matrix y= (I ~ Ay — Az ~--- ~ Ap) is full rank, then any linear combination of Y; will be stationary. If y is a matrix of zeros, then any linear combination of ¥; will be a unit root process and, hence, non-stationary. 13 A Primer on Cointegration 17 A™T exists, so that the dynamic reduced form can be written as Yean¥e + 0X +e (a2) where x(= A~1B) and [(= A~!C) are matrices of unknown reduced-form parameters. Equation (12) contains only predetermined variables on the right-hand side, so that the dynamic response of ¥; to X; can be studied by recursive substitution using equation (12). Letting L denote the Jag operator (LY; = Y;_;), the result is written as YeaU taht nL? bo PX eta tL? + Jer 3) The infinite series (/ +L +271? +..-), evaluated at L = 1, converges to (I~ m)"" ifall of the eigenvalues of x areless than 1 in absolute value. ‘The expected value of equation (13), conditional on X,, is E(Yp) = (U4 my PL? 4 PX, I= nL) 1PX, 4) Equation (14) is used to investigate the long-run response of ¥; to a change in one or more of the elements of X;. Assuming that X; is a vector of zeros for all < 0, and X; = 4 for all t > 0, and that the system converges, then, in the limit (as) Thus, (I - 7)~1P6 gives the long-run response of Y; to a permanent change, 8, in X. Theil and Boot (1962) have termed (I — x)~1F the final- form multipliers. ‘The cointegration techniques of Johansen and Stock and Watson start with an equation similar to equation (12). There are two key differences, however. First, all of the variables are explicitly endogenous. Second, because m has unit eigenvalues corresponding to the common trends in the system, (J — x) is not invertible, and therefore, the final-form multipliers in (15) are undefined.!8 Arelated technical point is that the initial conditions are not transient. To see this in equation (12), let X;=0 for all t and substitute recursively to obtain Vp seq meq tte ig te tat ley $a VQ. (16) Because the matrix ! does not converge to a matrix of zeros as t approaches infinity, the initial condition is not transient and must A Primer on Cointegration 19 where it is assumed that | 02: 0 tean'= | “ ay | 7 for all t. Again, assume that A-! exists, so equation (21) can be rewritten as ¥ |_[ atB -a-tcp ][ Ya br [k]-[or Ife )-[S] or [*]-[¢ ® lee EE}: 23) Cointegrating vectors and their interpretation depend on the assump- tions made about x and D. For example, if all of the eigenvalues of x are less than one, so that conditional on X;, ¥¢ is stationary, the common trends in the system are the result of X; being non-stationary. The num- ber of common trends will be equal to the number of unit roots in D. If D = 1, for example, then there will be g common trends and n cointe- grating vectors. In this case, there is a steady-state representation for the | endogenous variables conditional on the exogenous variables.20 ‘That is, the final-form multipliers of Theil and Boot are conditional on the exogenous variables. They do not exist, however, because the | variance of the exogenous variables is unbounded and, hence, so is the unconditional variance of the endogenous variables. If x also has some unit eigenvalues, however, then part of the non- stationarity of the system is due to instability in the dynamics of the ‘endogenous variables in the system.2" In this case, the conditional final-form multipliers, analogous to those obtained by Theil and Boot, exist along the stationary directions given by the cointegrating vectors as before. An important aspect of this formulation is that, given the proposed structure of the system, the researcher can identify whether the common trends stem from the struc- tural dynamics or are simply a manifestation of the stochastic properties of the exogenous variables. Under the assumption on (e-nz)’, estimating cointegrating vectors for the system given by the equations in (23) is the ‘same as estimating them for the system given by the equations in (8).22 Hence, any of the multivariate methods discussed in the next section can be employed. The reader is cautioned, however, that these proce- dures would have to be modified to impose the upper triangular structure of the system given by the equation in (23),23 A Primer on Cointegration 21 the same; it seems desirable to have many cointegrating vectors.26 Alter- natively stated, we prefer economic models that have unique steady-state equilibria. Accordingly, researchers are interested not only in testing to. see whether variables are cointegrated, but in obtaining precise estimates of the cointegrating vectors. 2.4 Alternative tests for cointegration With a number of tests for cointegration being available, it is impor- tant to understand their similarities and their differences. The purpose of this section is to discuss the salient features of alternative tests for coin- tegration. Step-by-step instructions on how to perform two of the more difficult of these (the Stock- Watson and Johansen test) are presented in the Appendix. The relationship among tests for cointegration developed by Johansen (1988), Engle and Granger (1987), Stock and Watson (1988) and Fountis and Dickey (1989) can be illustrated by considering the multi- variate model Y= ApYe- + ADV a2 +--+ ApYr-p + et. (24) Johansen reparameterizes equation (24) as AY=PrAYe1 +P2O¥e-2 5-4 Tp AYe-pat — WYeep ter (25) where, as before, = (I - Ay ~ Az ~-+- ~ Ap). He then makes use of the fact that any m by n matrix y, of rank k < 1 can be written as the product of two n by k matrices of rank k ~ that is, ¥ =a" where a and B are n by k matrices of rank k. He maximizes the likelihood function for ¥; conditional on any given f using standard least squares formulas for regression of AY; on A¥1—1, A¥t-2,...,AYi-ps1 and B'Y;—p. The solution to this maximization problem gives estimates of Py, 2,..., p-1 and a conditional on f. Once this is done, B, or more specifically, the row space of , is estimated. In cointegration, it is only possible to determine the rank of ap’.27 Specifically, obtaining unique elements of a and B without imposing arbitrary constraints is impossible. The rank of y can be obtained by computing canonical correlations between AY; and Y¢_p, adjusting for all intervening lags. Johansen chooses to put the lag level at the largest lag but, as has been shown earlier, this is not crucial. In the Johansen approach, —af’ is the coefficient matrix on the lagged level. Upon pre-multiplying equation (25) by 6’, the last term A Primer on Cointegration 23 of these parameters complicates the test procedures is illustrated for the univariate, case.29 Consider a simple version of the univariate model given by equation (5): (rt — 2) = (2 + Ut-1 =H) — Ae Ve-2 — W) + er (26) If g=1, w drops out. The series y; is a unit root process with no tendency to move toward any level. In this case, however, the test of the null hypothesis that @=1 is complicated by the presence of a nuisance parameter, 1. Regardless of g, as noted before, the model can be reparameterized as Aye= ~(@— DODYt-1 ~ 4) + AQ AYe-1 + ere (27) Notice that the coefficient on (yr ~ u) is now a multiple of @ — 1. In ordinary regression, multiplying a regressor by a constant changes the distribution of the regression coefficient by a multiplicative constant but does not change the distribution of its t-statistic. The same holds true asymptotically in cointegration.2° If4 is known, the estimated coefficient on (yz_1 - ) could be divided by (1~a) and the limit distribution n(@,.~1) listed in Fuller (1976, section 8.5) could be obtained. If 4 is unknown, however, an approximation must be obtained by dividing by (1-2), where iis estimated by regressing ‘yr on Aye_1, Suggested by imposing the null hypothesis @= 1. Alternatively, the univariate model could be written as Ut —A¥t-1) (@ = 1 yr-1 — A¥e-2 = HCL 2) + Fee (28) Since i is a consistent estimate of 2, it can be used to filter ys. That is, Fr=y — ip. ft can then be regressed on Af;_y (with an intercept) oron F;_1 ~F, where F is the mean of the series, F;, to get a test statistic asymptotically equivalent to n(,.—1)- 2.4.2. Other approaches to cointegration The approach of Fountis and Dickey (1989) is similar to that of Stock and ‘Watson but only allows for the possibility that there is one unit root. As such, it is much less general than either of the above approaches. It does, however, provide estimates of the cointegrating vectors and common trend, based on the coefficient matrix of the lagged levels. Although there isan asymptotic link between variances and lag coefficients as established above, the actual definition of a unit root process is in terms of lag rela- tionships. All approaches to cointegration use lags in testing. However, in transforming Y;, Stock and Watson use only the variance-covariance matrix while Fountis and Dickey (and Johansen) use the lag information. A Primer on Cointegration 25 But if M1 is the relevant measute of money, the Fisher equation suggests that there should be at least one cointegrating relationship between M1, its velocity, real income and the price level. The specification of tests for cointegration depend on the specification of the demand for money. Consequently, it is important to review the theory of money demand before performing tests for cointegration. ‘A general specification for the long-run demand for money is M!=fP,.Q2) (29) where M and Q denote the nominal money stock and the nominal income level, respectively, P denotes the level of prices and Z denotes all other variables that affect money demand - for example, current and expected future real interest rates, the expected rate of inflation, etc. Assuming that economic agents do not suffer from money illusion, equation (29) can be written as M4/P =m =f(QYP,2)=f(4,2). (30) That is, the demand for real money balances, m4, is a function of real income q, and some other variables. Furthermore, it is commonly assumed that the demand for money is homogeneous of degree one in eal income, so that equation (30) can be written as mi /q=h(Z), G1) where h(Z) is the famous k in the Cambridge cash balance equation which is the reciprocal of the income velocity of money. In equilibrium, the demand for real money equals the supply of real money, m’, so that ‘n(Z) is observed simply as the ratio of real money stock to real income. 2.5.1. The velocity of M1 and M2 Consider now two alternative monetary aggregates, M1 and M2. In this framework, the reciprocals of their velocities can be written, respec- tively, as ml/q=h\(Z), and (32) m2/q = hAW). 3) The specification for M2 allows for the possibility that there are factors that affect its demand that do not affect the demand for M1 - that is, Z ‘A Primer on Cointegration 27 ato Ratio 19 to 18 18 17] h7 16 16 15 1s 195254 86 68 60 62 G4 65 6B 70 72 74 76 78 OO OO BO HH OH) Figure 2.2 The income velocity of M2 1953:1 to 1988:4 1 4 195258 56 58 60 62 64 68 68 70 72 7a 76 7a G0 62 G4 85 68 1900 Figure 2.3. The reciprocals of income velocities of M1, M2 and the non-M1 components of M2 19S3:1 to 1988:4 the velocity of MN1M2. Also, a comparison of the series in Figure 2.3 reveals that much of the variability in the reciprocal of M2 velocity is associated with variability in the reciprocal of NMIM2 velocity, rather than variability of the reciprocal of M1 velocity.3637 ‘A Primer on Cointegration 29 ratio of currency to total checkabie deposits, denoted K, is also included because it is the most important determinant of the money multiplier. 40 In both parts, the income and price level measures are real GNP, q, and the GNP deflator, P, respectively. Two measures of nominal interest rates, R, are used: the three-month Treasury bill rate, R3M, and the yield on 10-year government securities, RLOY. The data consist of quarterly obser- vations from 1953:2 to 1988:4 and all data are transformed to natural logarithms. 2.5.4 Tests for the order of integration Before testing for cointegration, the order of integration of the individual time-series must be determined. Tests for unit roots are performed on all of the data using the augmented Dickey-Fuller test with three lagged differences. The null hypothesis is that the variable under investigation has unit root, against the alternative that it does not. The substantially negative values of the reported test statistic lead to rejection of the null hypothesis. The tests are performed sequentially. The first column in the top half of Table 2.1 reports tests of stationarity of the levels of the time-series about a non-zero mean. The critical values of the test statistic (t,] are tabulated in Fuller (1976) and discussed in Dickey and Fuller (1979). The reported test statistics indicate that the null hypothesis cannot be rejected for any variable. We then test for stationarity about a deterministic time trend, using the Dickey-Fuller statistic {¢,]. The results of this test are given in the second column in the top half of Table 2.1. Critical values for this test statistic are tabulated in Fuller (1976). With the exception of R3M, the null hypothesis that the time-series has a unit root cannot be rejected. The bottom half of Table 2.1 reports results for the augmented Dickey- Fuller test on first differences of the variables. The null hypothesis of unit root is rejected for all of the timeseries using differenced data. ‘These results are broadly consistent with the hypothesis that the indi- vidual time-series are individually 1(1). Because these data appear to be stationary in first differences, no further tests are performed. 2.5.5 Tests for cointegration using three methodologies Tests for cointegration for real M1, real income and either R3M or R10Y using methodologies proposed by Johansen, Stock-Watson and Engle- Granger are presented in Table 2.2. For the Johansen and Engle-Granger tests three lagged differences were used. Both the test statistics and the estimated cointegrating vector, setting the coefficient on (M1/P) equal A Primer on Cointegration 31 Table 2.2. Tests for cointegration for M1 Se Test Statistics Cointegrating Vector eee ee | q RBM RIOY | Johansen Test! | Trace Max. Eigenvalue kst kas2 k=0 k=l 323" $2 12 27.1" 400,680 -0.369- 284 60 18 224° 42 0.845, 0.570 ‘Stock-Watson Test? 93,2) 24.91 0.398 ~0.152 0 - =15.61 0.635 = 0.353 Engle-Granger Test Dependent Test Variable Statistics MiP -2.26 0.383 -0131 0 ~ 2.08 0.538 = - 0.303 q 3.92 0.671 -0271 = 3.18 0.794 - 0.449 R 4.86" 0.730 -0.359 + ~3.36 0826 = = 0.496 SO Notes: * Indicates statistical significance at the S per cent level 1. Critical values: Trace Max. Eigenvalue k=O ks1 ks2 k=O kal 313-178 81 21346 2. Critical values for 43,2) is -31.. 3. Critical values for ADF taken from Engle and Yoo (1987), 100 observations 200 observations 3.93 3.78 4. For an interpretation of the coefficients in the last three columns, see fn. 42. ‘two nominal interest rates. Moreover, there appears to be a single coin- tegrating vector. The maximum eigenvalue test of k= 1 vs. k=2 fails to reject the null hypothesis of k= 1. Thus, there are two common trends and one cointegrating vector. A Primer on Cointegration 33 Table 2.3 Tests for cointegration for the broader monetary aggregates ‘Trace Statistic Max. Eigenvalue Aggregate k=O ks1 ks2 k=O & 1 M2 337" 13916 198 123 387° 173 440 214 129 NMIM2 534° 20.6" 27 328" 179° 397" 163 37 234" 12.6 ‘Notes: * Indicate statistical significance atthe $ per cent level Critical Values Tiace Max. Eigenvalue ke0 kel ks2 kao 313-178 81213146 Table 2.4 Normalized cointegrating vectors and hypothesis tests Cointegrating Vector Hypothesis Test Aggregate og = R3M_~—sRIOY. sg? «RSM? ROY? MI 0680-0369 - = .223 21377 084s = - = -0.870 061 = 14.66" M2 0971 0016 = =- 0200.16 - 1044 = 0.031 O21 - 0.24 NMIM2 1.043 0217 - = 009 8" = 0078 - = 0646 2.72 = 4.30" Notes: * Indicates statistical significance a the § percent level 1. Nall hypothesis is that the income elasticity i one. 2, Null hypothesis is thatthe interest elasticity is zero, interest elasticities for M1 and NM1M2 should equal that of M2. While there are no formal tests of these cross-equation restrictions, the point estimates in Table 2.4 indicate that, with the exception of the income elasticity when R3M is used, these restrictions do not do too much violence to the data. For the three-month rate, the sum of the interest elasticities for M1 and NM1M2 is -0.15 compared with the estimated elasticity for M2 of 0.02; the sum of the income elasticities is 1.72, com- pared with an estimated income elasticity for M2 of 0.97. For the 10-year rate, the sum of the elasticities is -0.08, compared with the estimated elasticity of -0.03; the sum of the income elasticities is 0.92, compared ‘A Primer on Gointegration 35 are consistent with the proposition that the long-run demand for money is stable, even though they may not be estimates of the long-run money demand function itself.*6 They also suggest that the reason M2 velocity is stable is because it includes transactions and non-transactions components that are close substitutes for each other in the long run. In particular, the upward drift in M1 velocity appears to be due largely to a relatively steady shift from M1 to the non-transactions deposits in M2. The magnitude of the trend movements in these variables is approximately equal so that M2 velocity is essentially trendless over the estimation period.*” 2.6 Summary and conclusions ‘This paper reviews the concept of cointegration, notes the relation- between tests for it and common tests for unit roots and considers its implications for the relationship among real money balances, real income and nominal interest rates. We argue that if M2 and nominal income are cointegrated, while M1 and nominal income are not, there necessarily exists a stationary long-run relationship between M1 and the non-M1 components of M2. We also argue that, if M1, real income and the nominal interest rate are cointegrated, the same could be true for real income, the nominal interest rate, the monetary base and a proxy for the monetary base/money multiplier. Tests for cointegration among real M1, real income and one of two interest rates using three alternative procedures show that the results are sensitive to the method used. Nevertheless, the technique proposed by Johansen indicates that there is a single cointegrating relation- ship among these variables. While the cointegrating vector cannot be interpreted as the long-run demand for money, the estimated long- run income and interest elasticities are consistent with those often hypothesized and estimated for the long-run demand for money. ‘We also show that the hypothesized long-run relationship for the cointegrating vectors for M1, M2 and the non-M1 components of M2, namely that the sum of the income and interest elasticities for M1 and the non-M1 components of M2 equal the income and interest elasticities of M2, is supported by the data. Finally, we show that if the currency- deposit ratio is used to proxy the monetary base multiplier, the real monetary base, real income, the interest rate and the currency-deposit ratio are cointegrated. A Primer on Cointegration 37 Note: The squared canonical correlations are the solution to the determinan- tal equation lO Ska ~ 540560 Sig O where Syg=N-! Nw N YE LeLSo0=N~! SP Ded, and Sy =N~! SN, LyDj and D, and L; are column vectors of residuals from steps 2 and 3. The maximum likelihood estimates of the k cointegrating vectors (k columns of f; for which o?5,48; = Sip S5Sjo A) ‘Step-by-step application of the Stock-Watson approach to cointegration: In the Stock-Watson approach the null hypothesis is that there are m common trends (n ~ k = m) against the alternative that there are less than m, ‘common trends. There are six steps: say m~4q, 1. Pick the autoregressive order p for the model 2. Compute the eigenvectors of S-¥;¥;, that is, do a principal components analysis of Ye. 3. Using the m principal components with highest variance, that is, largest eigen- values, fit a vector autoregression to the differences. If P; is the vector of m principal components (select as described in the text) then the autoregressive model is denoted AP =AyAP_1 +--+ Ap-14P:-pa1 + ¢,, where, as before, p stands for the number of lags in the ‘original’ autoregressive. This provides 3 filter to use in step 4. 4. Compute a filtered version, f, of Py by Fe =P; —AyPy-1 — reduces the multi-lag model to a one lag model. S. Regress AF; on F;_; getting coefficient matrix B. Compute the eigenvalues of 8, normalize, and compare to the distributional tables of Stock and Watson (1988). Rejecting the null hypothesis of m common trends in favour of the alternative of m~q common trends means a reduction in the number of common trends by q and thus an increase of q in the number of cointegrating vectors. ~ApPp. This Notes 1. That is, formal statistical tests often cannot reject the null hypothesis of a unit oot. The results of these tests, however, are sensitive to how the tests are performed, that is, whether an MA or AR data generating processes is assumed, as in Schwert (1987), and whether the test is performed using classical of Bayesian statistical inference; see Sims (1988) and Sims and Uhlig (1988). These sensitivities are partly due to the ack of power these tests have against an alternative hypothesis ofa stationary but large root, Itcan give ise to the possibility ofa spurious relationship among the levels of the economic variables. Also, the parameter estimates from a regression of one such variable on others are inconsistent unless the variables are cointegrated. 16. 7, 18. 21 22, 23. 24, 2s, 26. 22. ‘A Primer on Cointegration 39 for computing the test statistics are straightforward least-squares formulas. It 1s the distribution of the test statistics that are non-standard. (Of course, because equation (7) is essentially a multivariate VAR, in principle, it should be possible to give these a ‘structural’ interpretation by imposing ‘identifying restrictions on the reduced-form parameters, as has been done recently for VAR models. For example, see Bernanke (1986) and Blanchard and Quah (1989). Stock and Watson (1988) find that the nominal Federal funds, the three- month Treasury bill and one-year Treasury bill rates are cointegrated. Finding unit eigenvalues in x Is equivalent to finding zero eigenvalues in y. This is so because y=1 — x $0 that {y ~ Al] =[a( — x) = l= ix — (0 — yl = x ~81|=0, s0 A=0 1s equivalent to 5=1 More technically, the eigenvalues of !- p’« are less than one in absolute value ‘A form of this model has been proposed recently by Hoffman and Rasche (1990). This is the Stock-Watson formulation, except that X; isa set of latent variables. Conditional on the unobserved X, the endogenous variables are assumed to be stationary. This so because x-M TD oO p-w |= -MiID-Ai=0. If these error terms are not independently distributed, obtaining consistent estimates of the cointegrating vector will be more difficult. For an example of this case, see Stock and Watson (1989). Indeed, both Stock and Watson (1988) and Johansen (1988) allow for the possibility of exogenous variables in the sense of equation (22); however, they do not impose the exogeneity restriction ex ante. In terms of the model given by the equations in (22), when (1 ~ x) Is full rank, the mote cointegrating vectors in D, the more directions in which the ‘final-form’ multipliers will exist. Of course, the subsystem defined by equation (20) converges to a point in R2, This point is given by the intersection of this plane and the line given by the Intersection of the two cointegrating vectors. Having said that, we should hasten to add that it Is very unlikely that these tests will indicate that there are a large number of cointegrating vectors. ‘These tests lack the power to reject the null hypothesis of zero cointegrat. Ing vectors. At a more practical level, it is well known that macroeconomic time-series data are highly correlated so that, typically, the generalized vari- ance of the matrix of such variables is concentrated on relatively few principal components. See fn 27, ‘The space spanned by the columns of the a and f’ matrices is important because af’ can be obtained by many choices of matrices. To see this, note that for any non-singular k by k matrix, H, (@H~!\(Hp')=ap’. For example, let X be an n by k matrix and let / = (X'X) be a k matrix of rank Then there exists a k by k matrix T such that T’T = A is a diagonal matrix with the eigenvalues of J on the diagonal. The columns of T are the eigen- vectors of J. These eigenvectors are called ‘principal components’ because the ‘generalized variance off, |, is equal to the trace of A. Because often many of 38. 39. 40. 41 42. 4. 44 4s. 46. 47. 48. A Primer on Cointegration 41 For example, it is not reasonable to obtain this result by simply assuming that the monetary base is the appropriate measure for money because it is composed of currency and bank reserves. It is not necessary that H Include variables that are not included in Z. If it does not, however, there isan identification problem. That is, one cannot te the difference between the above model and simply treating the monetary base as the appropriate monetary aggregate. Since this possibilty is difficult to conceive of, itis useful if includes variables that are not in Z, as in our empirical work which follows. A simple linear regression of the multiplier (or its growth rate) on K (or its growth rate) using quarterly data indicates that K alone explains 95 per cent Of the variation in the level of the multiplier. Moreover, the growth rate of K alone explains 84 percent ofthe variation in the growth rate of the multiplier. Johansen and Juselius (1990, p. 9) pointed out that ‘One would, however, expect the power of this procedure [the trace test} to be low, since it does rnot use the information that the last three eigenvalues have been found Not to differ significantly from zero. Thus one would expect the maximum eigenvalue test to produce more clea cut results” The estimated cointegrating vector from the Engle-Granger method, when the equation is normalized on real output, is (M1/P) = -.271R+.671q. While the estimated coefficient on the three-month Treasury bill rate is markedly different from that when cointegration is indicated, the estimated coefficient on output is nearly dential The x2(1) statistics for the test ofthe coefficient on output are 2.23 and .61, respectively forthe three-month and 10-year rates, and the test statistics for the interest rates are 21.37 and 14.66, for the two rates, respectively. If a 10 per cent significance level is used, the test indicates there are two cointegrating vectors for M2 when the 10-year bond rate is used. Taking the log of equation (36) and letting the money multiplier bea function ‘ofboth K and the interes rate and h(2) be solely afunction of the interest rate, results in In mb = Ing + Inh(R) ~Inmm(R, K), where h’ < O and amm/aR > 0 and amm/aK <0. ‘This equation implies that the long-run elasticity of In mb with respect to Ing is unity the elasticity of In mb with respect to Ris negative, but, smaller than the estimate for the long-run demand for money, and that the elasticity with respect to K is negative These results are both quantitatively and quantitatively similar to: those obtained by Hoffman and Rasche (1989), ‘The stable long-run relationship between real income, the real monetary base, nominal interest rates and the currency-deposit ratio, is also consistent with the idea recently put forth by McCallum (1987) that nominal GNP can be controlled in the long-run by monetary base targeting. Theoretically, a cointegrating vector is associated with ¢?, 3, o0,0. Thethe- retical counterpart ofthe trace test for Hg :k = 3or less cointegrating vectors is -N Ff gln(1~0) = Oso the rest statistic would be within sampling ertor of 2 or less cointegrating vectors, the theoretical value ofthe test Incl ~ oP) = =Nin(1 ~ 3) > Oand as N gets large, this diverges to too. Note that o2 and o3 (which are both zero) do not contribute to the 3 Unit Roots and Cointegration for the Economist Darryl Holden and Roger Perman 3.1 Introduction Previous papers by one of the authors, Perman (1989, 1991), have proved popular amongst applied economists seeking an introduction to the new econometrics of unit roots and cointegration. The aim of the present paper is, as before, to provide a comprehensive overview of the field in a ‘manner which minimises the technical knowledge required of the reader and which offers intuitive explanations wherever possible. Other useful surveys, at a slightly higher technical level, include the special issues of the Oxford Bulletin of Economics and Statistics (March 1986, August 1992), Dolado et al. (1990) and Campbell and Perron (1991). In this introduction we motivate the study of unit roots and cointegration, and outline the contents of the rest of the chapter, Economic variables, such as consumption expenditure, income and wealth, which are the variables used in our illustrative examples, are often transformed before being used in a regression analysis. A com- ‘mon transformation of a time series variable involves first differencing. However, the level of a variable and its first difference will typically be very different in terms of mean and variation. It is of interest, there- fore, to ask whether there are formal arguments in favour of or against differencing. One approach to this issue starts by noting that estima- tion and hypothesis testing, using the least squares method, is justified ‘only when the various variables being used are stationary. Differencing would be appealing, therefore, if the fist differences of a set of variables were stationary, with the variables themselves being non-stationary. This prompts several questions: What does stationary mean? How can we determine whether a variable is stationary or not? How can we determine whether the first difference of a variable is stationary, if the B Unit Roots and Cointegration for the Economist 45 In Section 3.6 the implications of the cointegration literature for applied econometric work are explained by discussing different approaches to the determination of consumption expenditure. The chapter discusses a large number of theoretical ideas and empirical techniques which are illustrated in the Appendix to this chapter. How- ever, two important topics are not covered in this chapter. Issues involving the finite sample properties of large sample techniques are not dealt with in any detail. We have also ignored issues raised by the use of non-adjusted quarterly data. A discussion of the latter can be found in Muscattelli and Hurn (1990). 3.2. Stationarity and unit roots 3.2.1 Stationary time series ‘We begin by defining stationarity. A time series is stationary, if its mean, variance and autocovariances are independent of time. Thus, suppose y; isa time series (or stochastic process) that is defined for t = 1,2,... and for t = 0,1, -2,.... Formally, y¢ is covariance (weakly) stationary if the following conditions are satisfied; see Harvey (1981a, p. 22): Eo =a @ Ely ~ w)?} = var(rr) = x(0) 2) Ele ~ WY t=1 ~ WI = COVYE Yt) = XCD, T= 12a 8) Equations (1) and (2) require the process to have a constant mean and variance, while (3) requires that the covariance between any two values from the series (an autocovariance) depends only on the time interval between those two values (r) and not on the point in time (t). The ‘mean, variance and autocovariances are thus required to be independent of time. For many purposes the autocorrelation function is more useful than the autocovariance function in (3) and is defined by OVE: Ft=r) x() Wary waryr—) x10)’ * 3.2.2. The first order autoregressive process: AR(1) core, Yer 1:2) In order to illustrate the application of the conditions for stationarity in (1), (2) and (3) we consider the process defined by Ye = pyra ter, 171,01, O) Unit Roots and Cointegration for the Economist 47 absolute value. We refer to being less than one in absolute value as the stationarity condition. This condition can be expressed in a different way if we return to equation (5) and write it in the form Pye = er where p(L),= 1 - pL, is a linear function of L, the lag operator. The root of this function (that is, the solution to p(L) = 0) is given by L = (1/p), So that the requirement that p has absolute value less than one is equivalent to requiring that the root of p(L) is greater than one in absolute value. Furthermore, p(L) has a unit root, that is, the AR(1) process has a unit root, if and only if p is 1. In this case, the stationarity condition is not satisfied. The AR(1) process with a unit root is non-stationary. To explore the implications of this, we contrast the unit root (p = 1) case with the stationary case (o is less than one in absolute value). However, since the validity of assuming that the process starts in the infinite past is unclear when we do not assume stationarity, we now assume the process starts at f = 0, and therefore we replace (4) with N= nate, t= 1,2, an where yo is assumed to be a fixed initial value for the process. We retain the previous assumptions as far as the e are concerned. Repeated backwards substitution in (11) allows us to write n= Yo + et + per + P7et-2 ++ + ph ley and to obtain E(y), var(y) and cov(y, yr) ina straightforward manner. If we assume that p = 1, so that Ye=Yorer tera te2t-- ey eater we obtain Ey) = y0 var(yt) = to? COT(Yt, Ye—r) Unit Roots and Cointegration for the Economist 49 which contrasts with the result in (15) in that, as r increases, cont(yt, Yee) fades away in the stationary case but remains constant at 1 in the unit root case. This theoretical result illustrates a common infor- mal rule for deciding whether a times series should be considered to be stationary or not: for a stationary series, the estimated autocorrelations should fade away rapidly as r increases whereas for a non-stationary series they should not tend to. 3. Since aye Rah 12, in the unit root case, and aye s Oet-s , sm2,... in the stationary case, we see that a ‘shock’, or ‘innovation’, has a sus- tained effect in the unit root case and an effect that diminishes with time in the stationary case. In the case of the AR(1) process in (11) we discuss the unit root tests of Dickey and Fuller (1979, 1981) in some detail below. The tests have p= 1s the null hypothesis and p < 1 as the alternative hypothesis and follow from Dickey and Fuller's consideration of the distribution of in (16) when p = 1. Dickey and Fuller (1979) show that the statistic T@-p)=TH-1) (8) has a limiting distribution. However, in contrast to (17) where the limit- Ing distribution is normal, this limiting distribution is of a non-standard type and does not lead to hypothesis tests based on critical values from conventional distributions such as the normal, t and F. The critical val- ues for unit root tests based on the limiting distribution for the statistic in (18) (or on the limiting distribution for the least squares ¢ statistic for p = 1) have to be obtained by Dickey and Fuller using simulation techniques. ‘There is a further point that emerges when we contrast (17) and (18). The fact that 4 — p is scaled by VT in (17) suggests that the distribution of A ~ p collapses, as T increases, at the same rate that VT increases. The collapsing of this distribution as T increases is a reflection of the fact that f is a consistent estimator of p. In (18) A ~ 1 is scaled by T which suggests that the distribution of f ~ 1 collapses, as T increases, at the same rate that T increases. The collapsing of this distribution as T Unit Roots and Cointegration for the Economist $1 and (L=agh)") = 1+ a2h + o3L? +0313 + which implies that we can write the AR(2) process as a moving average process of infinite order YO + agh + afl? +... yep where =1, 6.50) +2, 82 =07 +aya2 +23, 1 2 But itis true that any moving average process is stationary? so that a and ‘a2 being less than one in absolute value are the stationarity conditions for the AR(2) process. Since @) = 1/r) and a2 = 1/rg it follows that stationarity requires that r, and rp, the two roots of p(L), are greater than one in absolute value.> ‘Suppose now that the AR(2) process is non-stationary because one of 3 the two roots of p(L) is one (Le. p(L) has a single unit root). Let r, = 1 2 and assume ry is greater than one in absolute value. In this case, since 2 a2 =1/r2 = 1, we have mena N pibyye =~ Ld - Dye = = aL) Aye = bye ~ 0 By1-1 =e and the non-stationary AR(2) which explains y¢ implies Ay; is generated by a stationary AR(1) process. Thus y; is I(1) and Ay; is 1(0) in this case, If both of the roots of p(L) are one (Le. p(L) has two unit roots) then = land a= ply: = - DA ~ Ly, = A - Ly Aye = Oye — Bye =e so that Ayr is I(1) and A?y;, the second difference of ye, is 1(0). The yr series needs differencing twice in order that a stationary series is obtained and is therefore (2), integrated of order two. 3.3. Testing for unit roots 3.3.1 The Dickey-Fuller tests As noted above Dickey and Fuller (1979, 1981) consider the AR(1) process in (11) where yo isa fixed initial value and e; is an ID sequence of random variables. We wish to test He 1 against. Ha: <1. Unit Roots and Cointegration for the Economist $3 with a mean of a/(1 ~ p) and (20.3) a stationary AR(1) process about a linear trend if 6 is non-zero. If the data are generated according to (20.1) with p equal to one then it can be said that yt is integrated of order one and is a random walk without drift. If the data are generated according to (20.2) with p equal to one and a non-zero then y; is again integrated of order one and is a random walk with nonzero drift. Note that in this case (20.2) can be also written in the form Weaviyte with yf=y-at. If the data are generated according to (20.3) with p equal to one and 6 non-zero then y; is a random walk about a non-linear time trend since (20.3) can then be written in the form Ye=viy ter with van [os Be 5 2) It is important to understand the differences between these three specifications and note the terminology used. At this point it is useful to make explicit the various alternative com- binations of estimating equation and true parameter values that can be considered, We may choose to estimate either (20.1), (20.2) or (20.3). On the other hand, the values of # and f in (20.3) will necessarily be accord with one and only one of the following possibilities «= 0; B=0 0 a=0; B40 a «40; p=0 an «#0; B40. ayy Thus, for example, case (1) implies (20.1) is correct, (20.2) includes an ‘unrequired intercept term and (20.3) includes both an unrequired inter- cept and an unrequired time trend. Case (II) implies (20.1) excludes a required time trend, (20.2) includes an unrequired intercept and excludes a required time trend and (20.3) includes an unrequired intercept. Dickey and Fuller (1979) derive a limiting distribution for the least squares t-statistic for the null hypothesis that » = 1 where (20.1) to (20.3) are each in turn assumed to be the estimated equation, but in each case under the assumption that case (I) is correct (i.e. that (20.1) generates the data). When (20.1) fs the estimated equation we denote the standard least squares t-statistic for p = 1 as ty. When (20.2) is the Unit Roots and Cointegration for the Economist 5S Table 3.1 A schematic table for unit root tests Assumptions Required in deriving Stat. Eqn. Null Alternative Distribution" Critical Values f 201 op pst F852) pal (top) f 202 p p< F852) (@,) = 0,1) centre) % 203 p p TB and 0 otherwise DU; =1 if > TBand 0 otherwise and where TB is the time of the break. Both the intercept and the trend coefficient are thus allowed to change after TB.!0 If (26) is assumed to describe how the data are generated and one looks at the distribution of the least squares estimator of p in the misspecified equation Maat Br + Vea +e Perron is able to show that the normalised bias T(p - 1) has a prob- ability limit that varies with 4 = (TB/T) but is always between 0 and 0.5. Since the S per cent critical value for this statistic is -21.8, it follows that ‘the unit root hypothesis could not be rejected even asymptotically’; Unit Roots and Cointegration for the Economist 63 fist oil price shock. Given this, it seems sensible to evaluate the unit root hypothesis in a setting that allows for the possibility of structural breaks. When this is done Perron finds that the hypothesis is rejected in the majority of cases (in ten out of the thirteen cases which use annual data and in the single case that involves quarterly data) and that the ‘trend stationary’ hypothesis is more appropriate. There has been much further development in this area since Perron’s seminal (1989) paper appeared. This recent literature includes, among other things, testing for a structural break where the date of that break is unknown a priori and testing for the existence of multiple structural breaks. As this literature is reviewed at some length in another chapter in this volume by Byrne and Perman, we shall consider it no further here. 3.3.6 Trend and difference stationarity Consider the ‘difference stationary’ equation Mesa pye1 + er with p (29) or yn =yotar+ S as against the ‘trend stationary’ equation Ves Mt Be ter (30) where, in whichever equation we assume actually generates the data, we take ¢ to satisfy the conditions of Phillips and Perron. If (30) is perceived to be a plausible alternative to (29) as an explanation of y; then it is useful to consider the probability of rejecting the unit root hypothesis when (29) is estimated but (30) describes the data generation process. Perron (1988) has considered this issue and demonstrates that T(— 1), the standardised bias, and tp, the t statistic for p = 1, both have probability limits of zero when (30) is correct. This implies that, for a sufficiently large sample, we are essentially assured of accepting the unit root hypothesis if (30) actually describes how the data are generated, so that (29) ‘cannot distinguish a stationary process around a linear trend from a unit root’. Perron uses this result in support of the use of (20.3) in Unit root testing, The argument carries greater weight the more plausible we consider (30) to be as a potential alternative to the unit root hypoth- esis, as Maddala (1992) stresses (so that a different argument would be Unit Roots and Cointegration for the Economist 65 an extra pound of income per week will lead to increased expenditure of 50 pence per week if 6 = 0.5 without reference to any equilibrium or long-run relationship between consumption and income that may exist. The decision to increase spending by 50 pence per week, given an extra pound of income, is made without considering whether current spending is too high or too low, given the current level of income. This problem with (31) is reflected in the fact that it does not have an equilib- tium, or long-run, solution. If we interpret equilibrium as meaning the variables become constant we want to impose C= C1=C2=-- and h=h1=h2= on (31). Doing this, and ignoring the disturbance term, gives 0 = 0 and so does not give us an expression for C in terms of J. Such an expression, if it existed, would be the equilibrium solution of (31). Now suppose that the value of C in equilibrium is given by C* and assume C* = f(1). We can attempt to improve on (31) by introducing a variable that takes into account the level of C in period (t — 1) relative to the equilibrium value of C for the same period (f(J;_1)). This leads to the specification ACp = BAN + (Cr — fe) + in which both short-run and long-run factors are allowed a role to play in determining how C is changed from its value in period t — 1. The new variable Is the period t ~ 1 discrepancy between actual and equilibrium Cand, given this, we expect 6 to be a negative parameter. If we assume that f is a linear function and write Ch = fl) = Ah we obtain AC = At + (Cea — Mp1) + Ur (32) or AC = Bl + OCy-1 ~ OA + Up (33) suggesting that a regression of AC; on Alt, Cy_1 and fy is required, A specification of this type is called an error correction mechanism and has proved popular in applied econometric research. Unit Roots and Cointegration for the Economist 67 We note the following points 1. An a vector which leads to a’¥; being a stationary random variable will have at least two non-zero elements. This is because we exclude the zero vector, and any a vector with a single non-zero element will result in a non-zero multiple of a single element of ¥;, which must be 11). 2. If we find an @ vector such that @’Y; is 1(0) then any non-trivial scalar multiple of the a vector will also lead to a stationary linear combination of the elements of ¥;. 3. We may be able to find a second linear combination of the elements of ¥; which is stationary and which is not related to the initial linear combination in the manner outlined above. Thus, the two @ vectors which we find will be linearly independent. In fact, we may find up to k ~ 1 linearly independent a vectors (a1,a2,...,a4-1, Say) such that (qj)'Y; is stationary (for i = 1,2,...,k — 1). We can exclude the possibility of finding k linearly independent a vectors, each giving a stationary linear combination of the elements of ¥,, since this would imply that ¥; is a stationary vector.'2 In general, suppose that aj, f= 1,2,...,r (0 0. Here the null is that the process is stationary, This is the basis of the KPSW test. Now suppose that you have decided which variable you are interested in and which transformation, linear logarithmic and so on and have loaded the data into an econometric package, say EViews 5, and are ready to test for unit roots. This involves clicking on a variable in the workfile, choosing view and unit root test. You will get a dialogue box. At the top is a slot labelled test type. EViews offers six different unit root tests with ADF as the default. There are other tests not included in this list, which will probably be included in future versions of the package. The EViews tests differ as to whether the null is a unit root (5 of the 6) or the null is stationarity (KPSW); whether they deal with serial correlation para- metrically by adding lagged changes (e.g. ADF) or non-parametrically like PP; and whether they improve power by using Generalised Least Squares (e.g. Elliott Rothenberg and Stock). With each test you have to choose between three treatments of the deterministic elements: inter- cept; trend and intercept; or neither. You have to choose lag length for parametric tests, but, of course, this can be done automatically using a model selection criterion. EViews offers you a choice of six model selec- tion criteria that you might use to choose lag-length: Akaike, Schwarz and so on, so you still have to make a choice. Similarly, you have to choose spectral estimation method and bandwidth for non-parametric tests, and EViews offers you seven spectral estimation methods and two The Significance of Unit Roots 115 1980 1955 1960 1965 1970 1975 1980 1985 ; —y I ==-- cs) | Figure 4.1 Logarithms of US real GNP, consumption and investment ‘The three US series from KPSW, log real GNP yr, log real consumption, | ye. and log real investment, i; that we are going to analyse are plot- | ted in Figure 4.1. The tests are not unanimous but suggest that income and consumption are [(1), but except for the ERS point-optimal, suggest that investment is [(0). The reasons for this is that while income and consumption, the top two series, are quite smooth, investment is more volatile and thus crosses its trend more often. For pedagogical purposes, we will treat all three as I(1), as did KPSW. 4.6 VARs, error correction and cointegration The autoregression used above is univariate, but economists are primarily interested in the interactions between variables. The multivariate form of the autoregression is the Vector Autoregression (VAR). Suppose that ‘we were interested in the interaction between log income, y; and log consumption cr. The second order, two lags, VAR involves regressing each variable on two lagged values of itself and the other variable, as well as any deterministic component such as a trend. For instance, Mea ao tay +r r t+ Pyrat P42 +t tere @) Ge = A294 yaa + a22C-4 + a1yt-2 + @o2cr_2 + at + err Vector Autoregression Estimates Date: 07/17/06 Time: 13:17 Sample: 1949Q1 1988Q4 Included observations: 160 ‘The Significance of Unit Roots 117 Standard errors in () & t-statistics in [] —— Ly Lc Se Ly(-1) 1.107106 0.163578 (0.08551) (0.05372) 112.9478), (3.04498) Ly(-2) 0.336353 -0.243080 (0.08420) (0.05290) {-3.99466] [-4.59502) Lc(-1) 0.423745, 0.847969 (0.14149) (0.08889) (2.99498), (9.53944), Lct-2) 0.208893 0.195260 (0.14510) (0.09116) [-1.43966] (2.14193), c 0.010678 0.154678 (0.13513) (0.08490) {-0.07902] (-1.82196] u 1098-06 0.000125, (0.00012) (7-78-05) (0.00888), {1.62080}, Sa ae eS Resquared 0.996454 0.998732 Adj. R-squared 0.996339 0.998691 Sum sq. resids 0.022212 0.008767 S.E. equation 0.012010 0.007545, Festatistic 8655.089, 24256.20 Log likelihood 483.5550 857.9210 Akaike AIC ~5.969437 6.899012 Schwarz SC 5.854118, ~6.783693 Mean dependent 4.365908 4.600347 S.D. dependent 0.198483, 0.208520 Determinant resid covariance 6246-09 (dot adj.) Determinant resid covariance S.78E-09 Log likelihood 1063.446 Akaike information criterion, =13.14307 Schwarz criterion -12.91244 a ee The Significance of Unit Roots 119 so the two variables keep close together, making the linear combination (the error or deviation from equilibrium) stationary. As with the drunk farmer and his dog, keeping them close requires some feedback between the two variables, and this implies that there must be Granger-causality in at least one direction. Cointegration offered an empirical way of dealing with two fundamen. tal economic issues. First, it provided a way of testing the stability of the great ratios, Kosobud and Klein (1961). These included purchasing power parity, constant real exchange rate; stable money demand, constant velocity of circulation; consumption function proportionality, constant consumption (saving) income ratios, and so on. Theorists tended to take them as empirical (if stylised) facts; empirical economists tended to take them as theoretical predictions. If the logarithms of the original variables were I(1), the logarithms of these ratios should be 1(0), and this was testable. The second possibility cointegration appeared to offer ‘was to operationalise the fundamental economic concept of equilibrium as meaning that some linear combination of non-stationary variables should be stationary. Under this definition, one could test for the exis- tence of equilibrium, estimate the equilibrium relationship and measure the adjustment processes that returned the economy to equilibrium. ‘Once the basic idea was appreciated a large number of techniques were proposed for testing the null hypothesis of no cointegration and estimat- ing the cointegrating vectors, if the null hypothesis was rejected. There are also KPSW-like tests which have cointegration as the null. Although the statistical properties of the alternative tests vary, the market leader is the procedure proposed by Johansen (1988). It is very widely used because it was rapidly implemented in popular econometric packages; it handles multiple cointegrating vectors easily, and it integrates testing and estimation within a consistent framework. ‘The changes in each variable are linear functions of this (0) variable, which is usually interpreted as a deviation from equilibrium: Bye = by +01 (Ce-1 ~ Bo ~ Biye—1 ~ Bat) + 811 Ay¥e-1 + 8120¢¢-1 + Mer Ace = by + a2(Cr-1 — Bo ~ Biye—1 ~ Bat) + 821 dyt-1 + $22.Ace—1 + Woe © A teparameterisation does not change the number of parameters esti- mated (4) and (5) each involve estimating 12 parameters, 6 in each equation, and the equations are statistically identical. But in moving The Significance of Unit Roots 121 Vector Error Correction Estimates Date: 07/17/06 Time: 13:35 Sample: 1949Q1 1988Q¢ Included observations: 160 Standard errors in () & t-statistic in [] Cointegrating Eq: CointEqi Lc(-1) 1.000000 1-1) 1.127538 (0.11215) [-10.0542] @ TREND(47Q1) 0.000231 (0.00048) (0.48024), G ~0,342692 Error Correction: DiLc) Diy CointEqt 0.083165 0.197965, (0.02601), (0.04098) (3.19770) [4.83120] Daci-1) -0.203080 0.212187 (0.09096) (0.14331) [-2.23225} (1.48038) Davi) 0.241904 0.336850 (0.05314) (0.08372) (4.88248) 1 4.02359] c 0.004639 0.001843, (0.00070) (0.00110) 16.6091], [1.67979] Resquared 0.140005 0.271447 Adj. R-squared 0.123467 0.257436 Sum sq. resids 0.008962 0.022246 S.E. equation 0.007579 0.011942 Fsstatistic, 8.465483 19.37433 Log likelihood 356.1646 483.4297 ‘Akaike AIC ~6.902057 5.992872 ‘Schwarz SC ~6.825178 5.915992 Mean dependent 0.004716 0.004288 S.D. dependent 0.008096 0.013858 Continued The Significance of Unit Roots 123 normal is a constant. V(yr/x1) = 02. The second, later, story of regres- sion, started with a causal model supplemented with an unobserved error Bo + 61% + up, and made various assumptions about the error this later story which came to dominate econometric text ‘There is a one-to-one mapping between the two stories, but some felt unhappy about the second story, because it involved unobservables, tr; call them angels, and you can make any assumptions you like about angels, because you never see them. In contrast, the first story is about things you can observe, the distributions of y; and xp. To establish the properties of the estimators of f, the second story needed some exogeneity assumption. Three were used, (a) the regressors were non- stochastic, which apart from deterministic elements is not appropriate in economics; (b) the regressors, xp, were distributed independently of the true unobserved errors, strict exogeneity; and (c) the regressors were uncorrelated with u;, predetermined. With (a) or (b) the estimators were unbiased, a small sample property; with (c) they were only consistent, a large sample property. A different approach to exogeneity comes from (7). Suppose that your parameters of interest are y, and y are just a function of the parameters Of the conditional distribution, @ (the regression coefficients and vari- ances, in this instance) and there are no links between the parameters of the conditional and marginal distribution, @¢ and 6m are variation free, then x; is weakly exogenous for y. Essentially, this says that there is no Information in the process that determines xy, that will help you estimate ‘y. Examples where weak exogeneity fails are cases like errors in variables and simultaneity, where you need to know about the process generating 4% to estimate the parameters of interest. Engle, Hendry and Richard who introduced this terminology to econometrics, also introduced two other concepts. Whereas weak exogeneity was needed for estimation, strong exogeneity was needed for forecasting and super-exogeneity for policy analysis, Exogeneity ceases to be the property of a variable. A variable may be weakly exogenous for one purpose, parameter of interest, but not for another. It may be weakly exogenous for some parameters in y, but not for others. This may well be the case in the VECM. Taking (6) and dropping the change terms for convenience: Aye = by +e (Cr-1 — Bo ~ Biyt-1 ~ Bat) + Ue ‘cy = ba +.402(6r-1 ~ Bo ~ BrYt-1 ~ Bat) + tae The Significance of Unit Roots 125 normalisation: choosing the depetident variable as consumption (by list- ing it first in Eviews) and setting its coefficient to unity. For r > 1, we require more restrictions. Let us extend the model above to three variables by including invest- ment so we have x; = (¥,¢, i)’ and decide as did KPSW that there are two cointegrating vectors. Pesaran and Smith (1998) consider in some detail how many cointegrating vectors there might be, which is compli- cated by the fact that investment may be (0). In this case, it is natural to treat the two cointegrating vectors as a consumption function and an investment function and to say that investment does not appear in the consumption function and consumption does not appear in the investment function. This gives the two restrictions on the consumption function: that the coefficient of consumption is unity and of investment is zero. Similarly, for the investment function: the coefficient of invest- ment is unity and of consumption is zero. Just identifying restrictions cannot be tested, but play a central role in interpretation. The difficulty is that many different interpretations, using different just identifying restrictions, may be observationally equivalent, so there is no data-based way to decide between them. If one specifies two cointegrating vectors and orders, the variables (LC LI LY) EViews automatically produces the right just identifying restric. tions in this case, though they may not be appropriate in other cases. The estimates are Vector Error Correction Estimates Date: 07/17/06 Time: 13:48 Sample: 1949Q1 1988Q4 Included observations: 160 Standard errors in () & t-statistics in [] Cointegrating Eq: Cointéqi CointEq2 Lc(-1) 1.000000 0.000000 Lu-1) 0.000000 1.000000 ty) 1.113086 1.351910 (0.10830) (0.20095) [-10.2782} {-6.72744) @ TREND(47Q1) 0.000168 0.001365 (0.00046) (0.00086) [0.36266] 1.58430} c ~0.274075 0.07862 See Continued The Significance of Unit Roots 127 Individually, they are not significantly different from one. In Eviews, these are imposed under the VEC restrictions as: b(1, 1) = 1, b(1, 2) = 0, b(1,3) = -1, (2,1) = 0, (2,2) = 1, b(2,3) = —1. When these are imposed, the test statistic, which is asymptotically Chi squared (2) under the null hypothesis that both income elasticities are unity is 1.33, with ap value of 0.51. Sowe can accept these two over identifying restrictions. The test for over-identifying restrictions can have poor small sample proper- ties and a tendency to over-teject. With the over-identifying restrictions imposed, the adjustment coefficient on the second CE in the consump- tion equation is insignificant as above and on the first CE only just on the margin of significance. A test of a(1, 1) = 0, a(1,2) = 0 is just rejected with a p value of 0.043, so consumption cannot be treated as weakly exogenous. Long-run restrictions come from arbitrage conditions or from standard economic functions like consumption functions and investment func: tions as above. For example, Garratt et al. (2003) construct a structural cointegrating VAR for the UK economy in eight variables, with the oil price as exogenous. They find five cointegrating vectors. These are pur- chasing power parity; a Fisher effect, real interest rates are stationary; international interest rate parity; convergence between domestic and foreign output; and money-demand function. Theory specifies most of these coefficients. The twenty three over-identifying restrictions implied by this structure ate not rejected. This means that the system has a sim- ple long-run structure of the sort that would arise from many theoretical models. Apart from the constants, there are only two freely estimated parameters, the coefficients of the interest rate and trend in the money demand function. 4.9 Conclusions Vector Autoregressions provide a flexible way to fit the relationships between variables. Unit roots and cointegration provide a way to describe the time-series properties of the data and the way that long-run pro- cesses cause variables to hang together. In many cases, these long-run relations can be given a natural theoretical interpretation in terms of arbitrage processes, which produce the various international parity con- ditions. VARs need not be atheoretical, and long-run cointegrating VARs ‘use more theoretical information than typical Dynamic Stochastic Gen- eral Equilibrium Models, where the standard practice is to use an ad hoc statistical procedure, such as the Hodrick-Prescott filter, to detrend the data, throwing away all the long-run theoretical information. 5 Unit Roots and Structural Breaks: A Survey of the Literature Joseph P, Byrne and Roger Perman $.1 Introduction In the past three decades, empirical work in applied economic research has been fundamentally changed by a revolution in time-series mod- elling. In particular, it is now widely accepted that there are substantial implications for empirical modelling when one or more of the time series being used is found to contain a unit root. Whether, in fact, typical eco- nomic data sets are unit root processes is a hotly debated topic. Nelson and Plosser (1982) were of the view that almost all macroeconomic time series used in applied work have a unit root. This was forcibly challenged by Perron (1989) who suggested that it may be necessary to isolate some unique economic events and consider them as changing the pattern of time series permanently. Consequently, many time series should not be modelled by an AR(p) process, where p denotes lag length, with fixed parameters in the deterministic components. Perron went on to suggest that the results of Nelson and Plosser (1982) were not decisive if rare occurrences or structural breaks were accounted for and that much of the persistence of time series was due to infrequent permanent shocks. ‘The approach of identifying isolated economic events a priori has given way to a new literature which tests for breaks and break dates simulta- neously. Whilst these new developments have partially overturned the initial results, it is certainly the case that work on structural breaks, initi- ated by Perron, has ‘dramatically altered the face’ of applied time-series analysis, according to Hansen (2001). In this paper, we consider developments since the original Perron (1989) paper. These include considering whether break dates should be determined endogenously (and so considered unknown), which extends the initial treatment by Perron of breaks as exogenous. This strand of 129 Unit Roots and Structural Breaks 131 where yr is a time series of T observations and yu, = uo + yt are deter- ministic terms (if uo # 0 there is a constant, and deterministic trend when j1 #0). The ADF test statistic has a null hypothesis of a unit-root process (i.e. » = 0) against the alternative of a stationary (p < 0 and 441 = 0) or trend stationary (p < O and yx; #0) process. ‘An issue often raised in the time-series literature is the difficulty of differentiating between trend stationary and difference stationary pro- cesses. Deterministic trends do not always appear to be linear, and shocks sometimes have permanent effects. Another major concern has been the low power of ADF tests and the inability to reject a false null of unit s00t; see, for example, DeJong et al. (1992). The ADF-GLS test of Elliott, Rothenborg and Stock (1996) achieves improvements in power by esti- mating the deterministic regressors before estimating the autoregressive parameter. Noting that increasing the number of deterministic compo- nents (from no constant, to constant, to trend and constant) reduces the ctitical values and hence the ability to reject the null of unit root (or the power of ADF tests), Elliott et al. (1996) have developed tests based on GLS detrending. These tests are found to have both improved power and size properties compared to the conventional OLS-based ADF tests; see Elliott et al. (1996). 5.3. Exogenous structural breaks The possible importance of structural breaks for the implementation and interpretation of unit root tests was first emphasised by Perron (1989) and Rappoport and Reichlin (1989). Perron (1989) suggested that structural change in time series can influence the results of tests for unit roots. In particular, time series for which an uncritical application of ADF-type tests infers the existence of a unit root may often better be characterised bya single permanent break in a deterministic component of a stationary or trend-stationary process. Perron’s results are based on the following general ADF model with shifts in mean and trend: pd Aye = evt-at Do yyy tae tue (2) ja where yr = Ho+Hf deny tuit+u} (t-Tp)drrg are the possible deterministic terms (which contains a constant when ug # 0, and deterministic trend when y # 0) and djry is a dummy variable taking values of 1 from Tp and zero prior to that; see below. The break date is at time Tp Unit Roots and Structural Breaks 133 date as exogenously determined and known ex ante has often been considered inappropriate in the subsequent theoretical and empirical li erature. According to Christiano (1992), Banerjee, Lumsdaine and Stock (1992), Zivot and Andrews (1992), Perron and Vogelsand (1992) and Chu and White (1992), identification of the break date may not be unrelated to the data, and if the critical values of the test assume the opposite, there may be substantial size distortions (i.e. the tests will have a tendency to over-reject the null hypothesis of unit root). The main innovation of these papers is to suggest that the date of the break should be identified endogenously when testing for breaks. Intuitively, the tests apply the Perron (1989) methodology for each possible break date in the sample, or some part of that sample, and then choose the break at the point where evidence against the null is most strong. The endogenous structural break test of Zivot and Andrews (1992) is a sequential test which utilises the full sample and uses a different dummy variable for each possible break date. Here the break date is selected where the t-statistic from an ADF test of unit root is at a minimum (i. where its absolute value is maximum). Consequently, a break date will be cho- sen where the evidence is least favourable for the unit root null, The Zivot and Andrews (1992) minimum t-statistic has its own asymptotic theory and critical values.” The latter are more negative than those pro- vided by Perron (1989) and may suggest greater difficulty in rejecting the unit root null (which we discuss in greater detail in Section 5.7 below). Banerjee et al. (1992) also tests for endogenous break dates and utilises sequential, rolling and recursive tests. The non-sequential tests use sub- samples to determine the number of breaks and can be viewed as not using the full information set, which may have implications for the power of these tests. Tests include the max DF statistic, the min DF statistic and the difference between the two statistics. Zivot and Andrews (1992) and Banerjee et al. (1992) test the joint null hypothesis of a unit root with no break in the series. As a consequence, accepting the null hypothesis in the context of the Zivot and Andrews (ZA) and Banerjee et al. (B) tests does not imply unit root but rather unit root without break, Perron (1994), on the other hand, considers a test of the unit root hypothesis where the change in the slope is allowed under both the null and the alternative hypotheses. Critical values are derived for ZA and B tests assuming no structural breaks under the null. Nunes, Newbold and Kuan (1997) suggest that there may be some size distortions where such critical values are used in the presence of structural breaks under the null.3 Lee and Strazicich (1999) discuss the source of the size Unit Roots and Structural Breaks 135 6 and y are scalar parameters. Where 9 + oo, the shift function becomes a simple (0, 1) dummy variable. Estimation of the parameters n = (140, 141, y')’ is obtained by minimis- ing the generalised sum of squared errors of equation (10). This amount to, under the null hypothesis, minimising Qp(a, 8.0%) = (Y — Z@)n)'Ela*)“"Y — Z@)n) (13) where a* is the vector of coefficients in a*(L), E(a*) = Cov(V)/a2, V = (1,...,vp)! the error vector of the model, ¥ = [y1, Ay2,..., Ayr] and Zz Zz : 23} with Zy = (1,0,...,0Y,Z2 = (1,1,..., 1]! and Z3 = (fi), Bf200),..., SFO’. Saikkonen and Litkepohl (2002) and Lanne et al. (2002) suggest estimating the deterministic term of equation (9) first, following the approach of Elliott et al. (1996) using GLS de-trending. Consequently, this deterministic component is subtracted from the original series, and then ADF tests are applied to the adjusted series. It may be obvious in some situations that a break has occurred at a particular date (e.g. the Great Crash with Perron’s (1989) empirical model and unification in a study of German monetary policy by Lutkepohl and Wolters (2003)). In addition, the approach is extended to a situation of an unknown break date by Lanne et al. (2003). A different asymptotic distribution is utilised when a linear trend is incorporated, and there may be an improvement in power when a trend is not present (Litkepohl 2004). Hence, all prior information should be utilised when deciding whether a deterministic trend is important. ‘Two parameters have to be identified: the lag length of the ADF test and the break date. If the break date is known, then it should be imposed, and subsequently identification of the lag length is required. If the date of the break is known, it can be imposed and then the AR identified by standard procedures. In situations where the break date is unknown, Lanne et al. (2003) recommend that a generous lag be first utilised to obtain the break and then a more detailed analysis of the AR order be pursued primarily in an attempt to improve the power of the test. Non- standard critical values come from Lanne et al. (2002) and depend on whether a linear trend is excluded from the tests.> 5.6 Multiple structural breaks In addition to relaxing the assumption that breaks are known and dis- crete, further assumptions of Perron’s (1989) initial paper have been Unit Roots and Structural Breaks 137 Table 5.1 Unit root tests and the Nelson and Plosser data set OE Stationary (with possible Model #of Breaks Unit Root breaks) SS rrr Nelsonand ADF 0 B 1 Plosser (1982) Perron (1989) Exogenous 1 3 10 breaks Zivot and Endogenous 1 10 3 Andrews breaks (1992) Lumsdaine Endogenous 2 8 5 and Papell breaks (1997) — to account for the actual number of breaks will lead break tests to fail to reject the null of unit root. Kapetanois (2005) examine the unit root hypothesis with drift but no breaks against a trend stationary alternative hypothesis with x breaks in the constant and/or trend. This also is a sequential approach, and Kapetanois (2005) argues that it is computationally efficient. This is important in the context of the argument by Lee and Strazicich (2001) that the computational burden of tests with more than two breaks (e.g. through a grid search) would increase significantly with three or more breaks. ‘Again the tests which allow for the possibility of multiple breaks ~ Lumsdaine and Papell (1997), Ohara (1999) and Kapetanois (2005) — do not allow for breaks under the unit root null hypothesis. This may potentially bias these tests, 5.7. Unit roots and structural breaks: applied papers One of the most influential studies of the unit root properties of actual economic data was by Nelson and Plosser (1982) who considered whether Us macroeconomic data were non-stationary. Abstracting from the pos- sibility of structural breaks in the time series, the authors examined 14 macroeconomic time series over the period 1909-1970 and discovered that 13 contained a unit root (in particular, the unemployment rate did not contain a unit root). This led many researchers to conclude that Unit Roots and Structural Breaks 139 to structural breaks. This is a minimum t-statistic where the null hypoth- esis implies a stationary time series with a break and contrasts with the null of unit root minimum t-statistic which provides the least favourable break date for the unit root null. Instead with Lee and Strazicich break points are chosen which give most favourable results for stationarity with abreak. The authors use Monte Carlo simulations to suggest that the test performs reasonably well, when the structural break is large, in identify- ing the break date. Also the test has reasonable power. Harvey and Mills (2004) also consider a test with a stationary process as the null hypoth- esis against an alternative of unit root. In addition, their test allows for smooth transition in linear trend under the null hypothesis. ‘Alternative approaches to estimating break dates have been proposed using Bayesian procedures and also Markov Switching methods. Kim and Maddala (1991) consider multiple unknown breaks using BIC crite- ria and Gibbs Sampling. Garcia and Perron (1996) identify breaks using MS methods, based on work by Hamilton (1989), which also identi fies the break date. This work assumes a number of regimes (two and three) to identify the break dates in real interest rates. This has impli- cations for identifying the break date. Markov switching methods have also been used by Murray and Nelson (2002), who use a parametric boot- strap of a Markov switching model for real GDP, and Nelson, Piget and Zivot (2001). ‘An approach to testing for a stochastic trend in a multivariate set- ting with known break points is provided by Busetti (2002). This test considers the null that a number of series are stationary and with a sim- ilar trend with breaks, and has an alternative that at least one series is non-stationary. Bai, Lumsdaine and Stock (1998) also utilise multivariate methods when identifying confidence intervals around structural breaks. Additionally in a multivariate setting, there has been recent interest in non-stationary panel data methods accounting for structural breaks, including those by Tzavalis (2002), Murray and Papell (2002), Im et al (2005) and Westerlund (2006). An alternative question to that of testing for unit root in a univariate context is an examination of persistence in time series in the presence of structural breaks. This issue is examined by Diebold and Inoue (2001), Kurozumi (2005) and Busetti and Taylor (2004). 5.9 Conclusion ‘This paper considers the literature on testing for unit roots in the pres- ence of structural breaks. We emphasise, consistent with Perron (1989), Unit Roots and Structural Breaks 141 5.10 Software All of the tests reviewed in this chapter can be implemented in pro: ‘grammable or quasi-programmable econometric software packages such as GAUSS, TSP and RATS. For such software packages, associated web sites often contain libraries of routines or procedures donated by users of those packages that contain ready-made implementations. We list a few examples below for the RATS package. In addition, authors occasionally supply their own software imple- mentations. One notable example is the Jmulti package, available at www.jmulti.de. This implements the tests proposed by Saikkonen and Litkepoh! (2002) and also provides tests for unknown break dates. The Package consists of a Java program and runs on Windows (98SE and later) and Linux. Rats implementations: wwweestima.com (see ‘Procedures and Examples’) 1, Zivot and Andrews Sequential Test 2. Banerjee et al. tests Rolling, recursive and sequential tests, 3. Perron (1997) Endogenous breaks [RATS has also a discussion forum: http://www.estima.com/forum/ index.php where clarifications may be sought. A new version of RATS is also shortly due for release (editor).] Notes 1. Work by Granger and Newbold (1974) on estimation with non-stationary data first Identified the problem of spurious regression, and Engle and Granger (1987) provided the mechanics to identify long-run relationships using coin. tegration. 2, Zivot and Andrews (1992) provide both asymptotic and small sample critical values. 3. Garcia and Perron (1995) consider breaks in real interest rates and estimate structural break dates using a Markov Switching model. 4, See also Leybourne, Newbold and Vougas (1998) where the break is modelled as a smooth transition in linear trend which is endogenously determined. Maddala and Kim (1998) also suggest that gradual structural breaks should receive more attention in the literature S. ADF tests robust to structural breaks and utilising GLS detrending tests have also been proposed by Perron and Rodriguez (2003). 6. Ctitical values for the case of two breaks are provided in Ohara (1999). 7. Zivot and Andrews (1992) make a distinction between asymptotic critical values and finite sample critical values obtained by bootstrap methods. When New Unit Root Tests Designed for the Trend-Break Stationary Alternative: Simulation Evidence and Empirical Applications Amit Sen* 6.1 Introduction Since the seminal paper of Perron (1989) appeared, it has been well rec- ognized that conventional unit root tests of Dickey and Fuller (1979) lack power against the trend stationary alternative that allows for a break in the trend function. Perron (1989) proposed three different char- acterizations of the trend-break stationary alternative, namely, (1) the Crash model that allows for a break in the intercept; (2) the Changing Growth model that allows for a break in the slope with the two seg- ments joined at the break-date; and (3) the Mixed model that allows for a simultaneous break in the intercept and slope in the trend function. Perron (1989) proposed statistics for the unit root null hypothesis when the break-date under the alternative is known. Perron's strategy was to specify the location of break-date (T,) and then estimate a regression that nests the random walk null and the trend-break stationary alterna- tive of choice. The unit root statistic is the t-statistic on the first lag of the dependent variable, denoted by ti,-(T}) where i = A,B, or C corresponds to the Crash, the Changing Growth, or the Mixed model respectively. Numerous studies have extended Perron’s (1989) tests to the case when the break date is unknown to the practitioner; see Banerjee, Lumsdaine, and Stock (1992), Lee and Strazicich (2004), Murray and Zivot (1998), Perron (1997), Perron and Vogelsang (1992), Vogelsang and Perron (1998), and Zivot and Andrews (1992). The most popular strategy is “Part of this research was supported by a Summer Fellowship grant from Xavier University. We thank Herman Bierens for kindly providing the extended Nelson-Plosser data set 143 New Unit Root Tests 145 In Section 6.4, we present the empirical results, and Section 6.5 provides some concluding comments. Details regarding the sources of data used in the empirical applications are given in Appendix A, and the calculation and use of the /-statistics is provided in Appendix B. 6.2 Model and test statistics In this section, we briefly discuss the data generating process under three different characterizations of the trend-break stationary alternative and the unit root null hypothesis, as well as the test-statisics for the unit root null hypothesis. Consider the time series {yr}7_g. Following Perron (1989), we consider the following three models: Model (A): yp = 4 + ODUL(T§) + t+ ue a Model (B): yp =u + BC+ yDTy(T§) + uy (2) Model (C): yp = « + ODUs(TS) + Bt + yDT (TE) + uy @) where Tf is the true break-date, DU,(T§) = (t > Tf), DT(T§) = (t- Tf) A(t > Tf), and 1(t > Tf) is an indicator function that takes on the value Oift < Tf and 1 if t > Tf. For the asymptotic results, it is assumed that the true break-fraction is constant, that is, 4° = T{/T € (0, 1). We assume that u is an ARMA(p + 1,q) process, that is, A(L)up = B(L)ep, where er ~ iid.(0, 02) with finite fourth moments, A(L) = (1 ~aL)A*(L) and B(L) are polynomials in the lag operator L of order p + 1 and q respectively, A*(L) and B(L) have all roots outside the unit circle, and yp is a fixed constant.2 Under the alternative hypothesis, |a| < 1 and so up = A(L)~1B(L)ep. In this case, Model (A) or the Crash model (with @ # 0) allows for a break in the intercept, Model (B) or the Changing Growth model (with y 4 0) allows for a break in the slope with the two segments joined at the time of the break, and Model (C) or the Mixed model (with @ # 0 and y 4 0) allows for a simultaneous break in the intercept and the slope. Under the unit root nuil hypothesis, a = 1, and soit follows that uy = uy_1-+A* (Ly BiLyey and, Yee HAN tMe (a) where v; = A*(L)~!B(L)e,.4 Next, we describe the unit-root statistics for the trend-break stationary alternative given in equations (1)-(3), and their respective limiting null New Unit Root Tests 147 Table 6.1A Finite sample critical values for JA(¢,m), m = 2,3, T=100 +10 and % maz mad mad mas ma6 m=7 m=8 mad maid x 502 1 0.0001 0.0075 0.0371 0.0931 0.1421 0.2182 0.3185 0.3968 0.5104 25 0.0004 0.0184 0.0687 0.1475 0.2268 0.3303 0.4494 0.5641 0.6742 S 0.0016 0.0355 0.1161 0.2357 0.3308 0.4632 0.6037 0.7400 0.8691 10 0.0060 0.0731 0.2045 0.3591 0.4879 0.6640 0.8524 1.0279 1.1995 i =03 1 0.0001 0.0073 0.0445 0.0985 0.1706 0.2476 0.3297 0.4253 0.5130 2 0.0004 0.0210 0.0859 0.1555 0.2606 0.3562 0.4608 0.8716 0.7019 5 0.0015 0.0445 0.1377 0.2300 0.3643 0.4927 0.6117 0.7815 0.9070 10 0.0056 0.0924 0.2277 0.3583 0.5360 0.7011 0.8789 1.0637 1.2272 f= 0.4 1 0.0001 0.0081 0.0449 0.1158 0.1740 0.2659 0.3297 0.4403 0.5408 25 0,0006 0.0238 0.0829 0.1813 0.2606 0.3790 0.4807 0.5953 0.7135 5 0.0025 0.0496 0.1362 0.2629 0.3702 0.5070 0.6391 0.7947 0.9474 10 0.0091 0.1044 0.2229 0.4046 0.5461 0.7316 0.8887 1.0826 1.2661 i = 0S 1 0.0002 0.0094 0.0493 0.0975 0.1907 0.2729 0.3743 0.4725 0.5772 25 0.0008 0.0232 0.0964 0.1756 0.2805 0.3794 0.5180 0.6319 0.7533, S 0.0033 0.0456 0.1513 0.2610 0.4078 0.5116 0.6696 0.8046 0.9770 10 0.0125 0.0962 0.2537 0.3920 0.5858 0.7253 0.9369 1.1016 1.2998 a 06 1 0.0001 0.0104 0.0460 0.1050 0.1608 0.2630 0.3596 0.4355. 0.5348 25 0.0006 0.0258 0.0800 0.1758 0.2556 0.3900 0.4919 0.6120 0.7391 5 0.0027 0.0497 0.1332 0.2533 0.3727 0.5225 0.6542 0.7994 0.9478 10 0.0099 0.1019 0.2278 0.4059 0.5514 0.7353 0.8940 1.0745 1.2549 x 20.7 10,0001 0.0107 0.0448 0.0973 0.1725 0.2476 0.3447 0.4455 0.5190 25 0.0003 0.0241 0.0860 0.1622 0.2653 0.3587 0.4864 0.5981 0.7044, 5S 0.0014 0.0482 0.1451 0.2402 0.3792 0.4898 0.6330 0.7779 0.9081 10 00052 0.0968 0.2380 0.3664 0.5427 0.6864 0.8741 1.0497 1.1995, i =08 1 0.0001 0.0079 0.0467 0.0956 0.1613 0.2289 0.3328 0.4238 0.5051 25 0.0004 0.0171 0.0821 0.1684 0.2371 0.3381 0.4692 0.5732. 0.6831 5 0.0017 0.0345 0.1321 0.2430 0.3385 0.4757 0.6176 0.7422 0.8738 10 0.0064 0.0729 0.2188 0.3682 0.4963 0.6599 0.8550 0.9995 1.1687 ‘New Unit Root Tests. 149 Table 6.16 Finite sample critical values for JS, m = 2,3,4,.-..10 and T= 500 ee % m=2 m=3 mad mas m=6 m=? ma8 mad maid eee 10,0001 2.5 0.0003 3 0.0016 10 0.0063 10,0000 2.5 0.0003 5 0.0013 10 0.0050 1 0.0001 2.5 0.0005 S 0.0022 10 0.0086 1 0.0001 2.5 0.0007 5 0.0029 10 0.0112 1 0.0001 2.5 0.0006 S 0.0022 10 0.0089 1 0.0001 2.5 0.0003 5 0.0013, 10 0.0052 10,0001 2.8 0.0004 S 0.0016 10 0.0061 0.0075 0.0178 0.0366 0.0732 0.0078 0.0195 0.0434 0.0922 o.o112 0.0244 0.0504 0.1012 0.0087 0.0217 0.0450 0.0955 0.0081 0.0236 0.0472 1041 0.0088 0.0208 0.0438 0.0921 0.0067 0.0162 0.0325 0.0722 0.0463 0.0826 0.1286 0.2211 0.0408 0.0799 0.1322 0.2240 0.0482 0.0840 0.1416 0.2361 0.0460 0.0931 0.1536 0.2874 0.0438 0.0828 0.1408 0.2327 0.0497 0.0876 0.1417 0.2299 0.0370 0.0767 0.1282 0.2124 a= 02 0.1055. 0.1599, 0.1699 0.2460, 0.2526 0.3509 0.3690 0.4966 a= 03 0.0889 0.172 0.1532 0.2581 0.2272 0.3598 0.3463. 0.5300, a= 04 0.1064 0.1717 1823 0.2602 0.2676 0.3769 0.4017 0.5391 af = 05 0.1020 0.1866 0.1748 0.2815 0.2616 0.4066 0.4037 0.5819 206 0.1001 0.1761 0.1762 0.2586 0.2604 0.3691 0.4033 0.5404 M07 0.0995 0.1777 0.1618 0.2596 0.2343. 0.3644 0.3849 0.5253 M08 0.0967 0.1538 0.1618 0.2345 0.2358 0.3346 0.3549 0.4765 0.2293 0.3461 0.4732 0.6634 0.2361 03419 0.4650 0.6674 0.2582 0.3776 0.5065 0.7200 0.2583 0.3723 0.5102 0.7274 0.2507 0.3709 0.5102 0.7326 0.2504 0.3580 0.4894 0.6812 0.2329 0.3351 0.4448, 0.6458 0.3299 0.4641 0.6072 0.8238 0.3180 0.4519 0.6076 0.8318 0.3273 0.4763 0.6390 0.8741 0.3649 0.5065 0.6829 0.9325 0.3284 0.4657 0.6392 0.8761 0.3319 0.4631 0.6141 0.8484 0.3242 0.4506 0.5973 0.8334 0.4106 0.5322 0.7266 0.9801 0.4033 0.5671 0.721 1.0047 0.4197 0.5948 0.7793 1.0652 0.4507 0.6003 0.7958 1.0974 0.4247 0.5821 0.716 1.0512 0.4360 0.5882 0.760 1.0461 0.4059 0.5506 0.7195 0.9848 0.5100 0.6835 ossas 11411 0.4987 0.6849 0.8890 1.1689 0.5278 0.7122 0.9175 1.2495 0.5476 0.7378 0.9654 1.3044 0.5072 0.6965 0.9193 1.2473 0.5146 0.7106 0.9013 1.2028 0.4917 0.6611 0.8519 1.1471 a aan aanne SmaI ‘New Unit Root Tests 151 Table 6.2A Finite sample critical values for JBS,m), m = 2,3,4, T=100 10 and % maz mead mead meas m=6 m=? m=8 m=9 m=10 1 0.0001 2.5 0.0005 5 0.0019 10 0.0088 10,0001 2.5 0.0006 Ss 0.0023 10 0.0083 10,0001 2.8 0.0004 5 0.0012 10 0.0049 1 0.0000 2.5 0.0001 S 0.0006 10 0.0026 10,0000 2.5 0.0003 S 0.0013 10 0.0055 1 0.0001 2.$ 0.0006 5 0.0022 10 0.0085 1 0.0001 2.5 0.0007 S 0.0024 10 0.0095 o.o1s 0.0257 0.0519 0.1062 0.0038 0.0145 0.0277 0.0613 0.0062 o.o1as 0.0314 0.0673 0.0068 0.0170 0.0353 0.0713 0.0059 0.0167 0.0335 0.0690 0.0085 0.0151 0.0294 0.0382 0.0103 0.0261 0.0527 0.1061 0.0372 0.0665 o.11s4 1914 0.0382 0.0654 0.1082 0.1846 0.0389 o.o714 0.1162 0.1948 0.0314 0.0895 0.0925 0.1399 0.0383 0.0680 0.1124 0.1921 0.0360 0.0651 0.1122 0.1849 0.0354 0.0684 0.1090 0.1867 a 0.2 0.0808 0.1583 0.1346 0.2428 0.2112 0.3434 0.3258 0.5013 03 0.0959 0.1425 0.1913 0.2219 0.2246 0.3079 0.3425 0.4404 M04 0.0769 0.1489 0.1327 0.2251 0.1972 03182 0.2953 0.4587 af 05 0.0864 0.1316 0.1419 0.2092 0.2059 0.2808 0.3103 0.4048 206 0.0733. 0.1405 0.1261 0.2106 0.1871 0.2996 0.2885 0.4475 = 07 0.0859 0.1329 0.1393 0.2034 0.2082. 0.2949 0.3360 0.4288 a= 08 0.0866 0.1590 0.1435. 0.2469 0.2218 0.3534 0.3345. 0.5239 0.2288 0.3284 0.4521 0.6469 0.2289 0.3346 0.4519 0.6219 0.2137 0.3048, 0.4104 0.8657 0.2149 0.3067 0.3999 0.5711 0.1978 0.2849 0.3926 0.5542 0.2147 0.3225 0.4238 0.5921 0.2277 03317 0.4631 0.6531 0.2883 0.3950 0.5409 0.7683 03159 0.4407 0.5779 0.7812 0.2933 0.4076 0.5430 0.7383 0.2739 0.3848 0.5047 0.6881 0.2888 0.4051 0.5358 0.7162 0.3029 0.4127 0.5380 0.7430 0.2840 0.4085 0.5652 0.7939 0.3699 0.5213 0.6903 0.9614 0.3830 osi10 0.6639 09115 0.3692 0.4953 0.6550 0.8774 0.3792 0.5197 0.6571 0.8716 0.3489 0.4989 0.6360 0.8399 0.3716 0.4968, 0.6470 0.8812 0.3996 0.5543 0.7263 0.9792 0.4638 0.6383 0.8474 1.1501 0.4725 0.6394 0.8293 1.1075 0.4441 0.5938 0.713 1.0254 0.4500 0.6082 0.7827 0.9949 0.4460 0.6033 0.7760 1.0339 0.4721 0.6340 1.0714 0.4844 0.6867 0.8792 1.1536 New Unit Root Tests 153 Table 6.2C Finite sample critical values for J8(5,m, m = 2,3,4,...,10 and T = 500 eee eee % m=2 m=3 mad m=S m=6 m=7 m=8 m=9 m=O ee eee a= 02 10,0001 0.0096 0.0343 0.0808 0.1510 0.2215 0.2840 0.3749 0.4603 25 0.000$ 0.0254 0.0690 0.1345 0.2307 0.3336 0.4010 0.5335 0.6520 5 0.0023 0.0521 0.1094 0.2073 0.3424 0.4598 0.5451 0.6988 0.8585 10 0.0087 0.1017 0.1851 0.3268 0.5021 0.6346 0.7502 0.9670 1.1333 af =03 1 0.0001 0.0048 0.0347 0.0855 0.1404 0.2147 0.2999 0.3521 0.4581 2.8 0.0006 0.0132 0.0660 0.1405 0.2086 0.3220 0.4229 0:5066 0.6331 0.0021 0.0277 0.1097 0.2147 0.2964 0.4343 0.5483 0.6535. 0.7987 10 0.0076 0.0589 0.1835 0.3305 0.4307 0.5983 0.7490 0.8728 1.0561 = 04 10,0001 0.0059 0.0377 0.0792 0.1407 0.1938 0.2673 03518 0.4312 25 0.0004 0.0148 0.0705 0.1273 0.2166 0.2854 0.3923 0.4903 0.5796 5 0.0014 0.0328 0.1148 0.1873 0.3162 0.3999 0.5254 0.6389 0.7638 10 0.0054 0.0683 0.1934 0.2940 0.4562 0.5705 0.7311 0.8767 1.0177 ae = 05 1 0.0000 0.0066 0.0279 0.0837 0.1323 0.2221 0.2755 0.3550 0.4255 28 0.0002 0.0161 0.0552 0.1391 0.2095 0:3098 0.3798 0.4810 0.5713 0.0008 0.0337 0.0980 0.2041 0.2841 0.4115 0.4989 0.6418 0.7348 10 0.0031 0.0749 0.1659 0.3076 0.4127 0.5783 0.6827 0.8636 0.9776 i= 06 10,0000 0.0070 0.0373 0.0857 0.1501 0.2250 0.3114 0.3820 0.4567 28 0.0003 0.0161 0.0734 0.1336 0.2331 0.3147 0.4141 0.5078 0.6008 S$ 0.0015 0.0318 0.1182 0.1984 0.3233 0.4148 0.5474 0.6595 0.7724 10 0.0058 0.0674 0.1996 0.2936 0.4681 0.5661 0.7387 0.8743 1.0360 = 07 1 0.0001 0.0057 0.0336 0.0891 0.1458 0.2239 0.2865 0.3578 0.4521 25 0.0008 0.0151 0.0673 0.1508 0.2122 0.3184 0.4080 0.4866 0.6225 5 0.0022 0.0310 0.1146 0.2246 0.2931 0.4295 0.5510 0.6454 0.8116 10 0.0087 0.0620 0.1969 0.3428 0.4331 0.6019 0.7482 0.8667 1.0729 af = 08 1 0.0001 0.0109 0.0354 0.0818 0.1634 0.2373 0.2916 0.3779 0.4746 28 0.0008 0.0273 0.0676 0.1440 0.2420 0.3330 0.4064 0.5196 0.6346 S 0.0022 0.0510 0.1128 0.2054 0.3389 0.4497 0.5408 0.7026 0.8369 10 0.0093 0.1062 0.1875 0.3232 0.4939 0.6337 0.7504 0.9493 1.1311 aS SSS ee New Unit Root Tests 155 Table 6.3A Finite sample critical values for JE,m), m = 2,3,4,...,10 and T=100 % mal mad mad mes mab ma? m=B mad m=10 10,0001 0.0091 2.5 0.0006 0.0217 S 0.0021 0.0443 10 0.0082 0.0938 10,0001 0.0057 2.8 0.0005 0.0148 5 0.0018 0.0307 10 0.0070 0.0633 1 0.0000 0.0043 2.5 0.0003 0.0125 $ 0,0012 0.0271 10 0.0053 0.0582 10,0000 0.0051 25 0.0002 0.0144 5 0.0008. 0.0298 10 0.0036 0.0590 1 0.0001 0.0051 2.8 0.0003 0.0130 S$ 0.0012 0.0275 10 0.0049 0.0567 10,0001 0.0063 2.5 0.0005 0.0158 S 0.0020 0.0330 10 0.0071 0.0657 1 0.0001 0.0091 2.8 0.0005 0.0222 S 0.0020 0.0441 10 0.0080 0.0922 0.0347 0.0685 0.1116 0.1931 0.0292 0.0579 0.0965 0.1672 0.0314 0.0601 0.0985, 0.1632 0.0289 0.0540 0.0873 0.1508 0.0335 0.0613 0.1031 0.1703 0.0346 0.0894 0.0976 0.1713 0.0326 0.0629 0.1080 0.1860 af 20.2 0.0726 0.1358 0.1217 0.2202 0.1879 0.3115 0.2959 0.4572 af 203 0.0777 0.1264 0.1288. 0.1937 0.1971 0.2735 0.2983 0.3988 w= 04 0.0716 0.1356 0.1167 0.2016 0.1719 0.2777 0.260 0.4043, a= 08 0.0774 0.1244 0.1314 0.1983 0.1953. 0.2698 0.2836 0.3889 = 06 0.0732 0.1367 0.1216 (0.2126 0.1802 0.2876 0.2728 0.4083 a =07, 0.0781 0.1276 0.1290 0.1983 0.1953 0.2759 0.2981 0.4020 M508 0.0723 0.1425 0.1208 0.2159 0.1817 0.3093 0.2913 0.4347 0.2080 0.2980 0.4133 0.5884 0.1995 0.2838 0.3958 0.5418 0.1943 0.2761 0.3721 0.5208 0.1961 0.2797 0.3766 0.5265 0.1987 0.278 03778 0.5203, 0.1955 0.2832 0.3836 0.5420 o2ti4 0.3100 0.4142 0.5820 0.2546 0.3523 0.4292 0.3719 0.4751 0.5802 0.5015 0.6314 0.7626 0.7013 0.8698 1.0265 0.2752 0.3403 0.4243 0.3848 0.4542 0.5657 0.5011 0.5827 0.7247 0.6868 0.8080 0.9765 0.2634 0.3395 0.4098 0.3633 0.4573 0.5434 0.4814 0.5832 0.6875 0.6578 0.7801 0.8923 0.2517 0.3237 0.3907 0.3522 0.4458 0.5330 0.4674 0.5839 0.6987 0.6382 0.7812 0.9066 0.2665 0.3277 0.4069 0.3733 0.4607 0.5387 0.4862 0.5916 0.7011 0.6595 0.7868 0.9195 0.2663 0.3117 0.4061 03741 0.4429 0.5701 0.5028 0.6136 0.7358 0.6971 0.8086 0.9829 0.2728 0.3573 0.4383 0.3837 0.4991 0.6127 0.5083 0.6530 0.7973 0.7003 0.8670 1.0455 New Unit Root Tests 1S7 Table 6.3C Finite sample critical values for JE(S,m), m = 2,3,4,...,10 and T = 500 % m=z mad mad m=S m=6 m=? m=8 m=9 m=l0 1 0.0001 2.5 0.000S Ss 0.0021 10 0.0083 1 0.0001 2.5 0.0008 5 0.0019 10 0.0073 10,0001 25 0.0003 Ss 0.0014 10 0.0056 10.0000 25 0.0002 Ss 0.0009 10 0.0037 10,0001 2.5 0.0003 5 0.0012 10 0.0049 1 0.0001 2.8 0.0005 S 0.0018 10 0.0070 10,0001 25 0.0006 5 0.0024 10 0.0090 0.0106 o.0244 0.0472 0.0932 0.0068 0.0163 0.0308 0.0639 0.0084 0.0137 0.0272 0.0886 0.0083 0.0126 0.0287 0.0620 0.0051 0.0130 0.0261 0.0556 0.0075 0.0159 0.0332 0.0662 0.0077 0.0216 0.0480 0.0989 x= 02 0.0342 0.0787 0.1812 0.0686 0.1278 0.2195 0.1129 0.1940 0.3190 0.1896 0.3071 0.4691 x =03 0.0332 0.0769 0.1293 0.0549 0.1363 0.2003 0.0962 0.1972 0.2836 0.1620 0.3079 0.4073 = 04 0.0269 0.0672 0.1183 0.0580 0.1113 0.1892 0.0944 0.1681 0.2670 0.1873 0.2561 0.3876 05 0.0283 0.0773 0.1276 0.0538 0.1276 0.1891 0.0903 0.1896 0.2674 0.1529 0.2847 0.3899 i206 0.0314 0.0740 0.1293, 0.0596 0.1174 0.1964 0.0985 0.1732 0.2728 0.1654 0.2674 0.3926 207 0.0291 0.0723 0.1271 0.0387 0.1304 0.1912 0.1003 0.1917 0.2728 0.1681 0.3012 0.3984 i =08 0.0356 0.0795 0.1507 0.0711 0.1337 0.2336 0.1169 0.2020 0.3259 0.1959 0.3138 0.4677 0.2104 0.3132 0.4295 0.6138 0.2057 0.2898 0.3929 0.5431 0.1850 0.2635, 0.3562 0.4961 0.1793 0.2765 03794 0.5145 0.1918 0.2677 0.3562 0.5013, 0.1979 0.2856 0.3873 0.5377 0.2220 0.3226 0.4268 0.6133, 0.2677 0.3843 0.s175 0.7262 0.2668 0.3794 0.4997 0.6806 0.2556 0.3650 0.4678 0.6324 0.2533 0.3608 0.4694 0.6238 0.2595 03444 0.4651 0.6383 0.2719 0.3766 0.5067 0.6774 0.2735 0.3909 0.5169 0.7173 0.3493 0.4920 0.6411 0.8988 0.3375 0.4535 0.5830 0.7903 0.3291 0.4586 0.5708 0.7525 0.3253 0.4475 0.5824 0.7589 0.3127 0.4298 0.5791 0.7694 0.3387 0.4635 0.5949 0.7993 0.3464 0.4898 0.6498 0.8772 0.4428 0.6064 0.7901 1.0673 0.4313 0.8861 0.7099 0.9414 0.4190 0.5395 0.6821 0.8851 0.3823 0.5372 0.6728 0.8716 0.3868 0.5325 0.6812 0.8945, 0.4139 0.5550 0.7246 0.9412 0.4433 0.5956 0.7736 1.0260 New Unit Root Tests 159 he did not report their critical values. The finite sample critical values of Fy (Tpem) (i = A,B,C) for break-fraction 4° 0.2,0.3,...,0.8, and the largest superfluous time trend polynomial m = 2,3,4,...,10 are presented in Tables 6.1A-6.3D. The asymptotic critical values for F(T, m) = A,B,C) ate also reported in Tables 6.1A-6.3D, and these are calculated by simulating the Wiener process over the grid {0,1/N,2/N,...,1} on the unit-interval [0,1] with N = 1,000, and R = 10,000 replications. It should be pointed out that small sample evi- dence reported in Park (1990) suggests that one should use two or more superfluous time trends since one superfluous polynomial is insuf- ficient to discriminate between the unit root null hypothesis and the trend-stationary alternative. When the break-date is unknown, we assume that the true break- fraction lies in the interval A = (49, 1—Ag] (0, 1), that is, the break-date Tp belongs to the set ({4oT}, 407] + 1,...,T ~ oT}, where [1 is the smallest integer function. For each possible break-date in this index set, we calculate J}(T,,m) using equation (8). Based on the sequence UL(Ty, m\ THOT) indexed by the break-date, the unit root statistic for the unknown break-date case is defined as FE Om) = InfryetRoT WoT HH1...T-boThI7To™, i= A,B,C (10) While Vogelsang (1998b) suggested the use of /+!(m) (i = A,B,C) to test the unit root null hypothesis against the trend-break stationary alter- native when the break-date is unknown, he did not report the critical values for these statistics.5 The finite sample critical values (for sample size T = 100, 200, 500) and the asymptotic critical values of J#!(m) for i = A,B,C with trimming parameter 49 = 0.05 and largest super- fluous time trend polynomials m = 2,3,...,10 are presented in Tables 6.4-6.6. The asymptotic critical values, also shown in Tables 6.4-6.6, are calculated by approximating a Wiener process over the grid (0, 1/N,2/N, ..., 1] on the unit-interval [0,1] with N = 1,000, and R = 10,000 replications. 6.3 Finite sample size and power In this section, we present finite sample size and power evidence of the known break-date statistics Ji.(Tf, m) (i= A, B,C) and the unknown Table 6.5. Critical values for 20m, m = 2,3,4, 10 with A = 0.05 % m =3 mad 5 ™ mat 8 m=9 m=O T= 100 1 (0.000000007050 0.0001 «0.0027, «.0138 00388-00736 0.116217, 0.2281 2.5 0,000000049628 0.0002 «0.0089 0.0286. 0.06S1_ 0.1193 0.1722, 0.2439 0.3100 5 (0.000000230952 0.0005 «0.0119 «0.0469 0.0996 0.1691 0.2409 0.3187 0.4107 10 (01000000983786 0.0013. «(0.0256 «0.0814. «0.1585 0.2467 «3411441208545 15 (0.000002406747 0.0024 --«0.0382-«O.1156 02085-03149 0.4291 0.5571 0.6759 20 (0.000004456770 0.0040 «0.0557 —«O0.1S22-0.2605 «03802-05136 0.6544 «0.7895 T=200 1 ‘.000000001018 = 0.0020 0.0146 0.0396 0.0696. 0.1163 0.1622 0.2146 2.5 0000000010880 0.0002 0.0048 0.0260 0.0658 = 0.1130 0.1718. 0.2336 0.3135 5 0.000000046176 0.0003. «0.0101. 0.0457 0.1027, 0.1657 0.2374 «0.3187 0.4061 to 0,000000216745 0.0010 «0.0229 «0.0809 O1S80 02424 «0342504477557 1s (0.000000582228 00020 -«0.0368.««O.11S1 0.2090 «0.3126 «= 0438105594 0.6844 20 0,000001108451 0.0038 «0.0528 «0.1473 0.2568 «= 0.3824 OS1S1 0.6499 0.7914 T =500 1 10,000000000411 - 0.0021 0.0133. 0.0388. = 0.0740 1211 0.1678 0.2280 2 0,000000002358 0.0001 0.0051 0.0264 0.0645. «0.1145 0.1728 = 0.2380 0.3084 5 {0.000000009863 0.0002 «0.0107 «0.0444. «0.0991 0.1640 0.2389 = 0.3220 0.414 10 0,000000041205 0.0010 «0.0234. «0.0810. 0.1561 02419-03394 0.4483 0.5535, 18 0,000000097669 00022 0.0384 «0.1137 0.2073. «0.3108 04255 0.5467 0.6757 20 0,000000195099 0.0039 -«0.05S1_«0.150S 02590 03754 «S072, 6427 (0.7924 T=0 1 10,000000000048 0.0000 0.0017 0.0134. 0.0406 0.0761 0.1263 0.1630 0.2172 2.5 0,000000000504 0.0001 0.0082 0.0264 0.0693. «0.1174. 0.1736 = 0.2405 0.3097 5 0,000000002083 0.0002 0.0104.» 0.0464 = 0.1034. 0.1691 0.2428 0.3161 0.4042 Yo —_0,000000009482 0.0009 0.0233. «(0.0825 0.1576 0.2499 0348004440 (0.5484 18 0,000000022519 0.0019 «0.0391 0.1160 (02097 0.3229 0.4371 0.5460 0.6787 20 0,000000044560 0.0037 «0.0553. «0.1517 0.2617 0393905146 = 0.6476. «0.7841 5189] 1004 30 MAN’ tot New Unit Root Tests 163 break-date statistics /4!(m) (i = A,B,C) statistics with m = 2,3,4,..., 10. ‘We use the experimental design of Vogelsang and Perron (1998) and generate data according to Ye =ODUS + yDT§ + 2%, t=1,2,....T an where DUS = DU;(T{) and DT{ = DT;(Tj), the correct break-date T{ = 50, the sample size is T = 100, 2, = azy-1 + pAz—-1+er + Vert, e, ~ i.id.N(O, 1), and yo = y-1 = 0. We set a = 1 for the size simulations, and a = 0.8 for the power simulations. We use the following combi- nations of (p, ¥): {(0,0); (0.6, 0); (-0.6,0); (0, 0.5); (0, -0.5)}. For the size simulations, we set @ = y = 0. For the power simulations, we consider the data generating process under (1) the Crash model with a break in the intercept equal to @ = {0, 2,4, 6,8, 10}; (2) the Changing Growth model with a break in the slope equal to y = (0.1,0.2, 0.6, 1.0, 1.5, 2.0}; and (3) the Mixed model with a simultaneous break in the intercept and slope corresponding to all combinations arising from @ = {0,2,4,6,8) and y = (0,0.2,0.6, 1.0, 1.5, 2.0}. For each parameter combination we gener- ated 10,000 replications and calculated the size and power of /j.(m) and Jylom) (i= A,B,C and m = 2,3,...,10) at the 5 per cent nominal signif icance level using the appropriate T = 100 finite sample critical values. For the known break-date case, we calculated the FT, m) (i= A,B,C and m = 2,3,..., 10) statistic using the true break-date T, = 50, and used the critical values corresponding to the true break-fraction 4° = 0.5. For the unknown break-date case, we used the trimming parameter Ag = 0.05 to calculate, Tm), i=A,B,C and m =2,3,...,10. For the known break-date case, the size of J/.(Tf, m) are presented in Table 6.7, Table 6.9, and Table 6.11 for i = A,B,C respectively, and the power of J}(Té, m) are presented in Table 6.8, Table 6.10, and Table 6.12 for i= A,B, C respectively. The size of the unknown break-date statistics Fem are presented in Table 6.13, Table 6.15, and Table 6.17 for i = A,B,C respectively, and the power of J+!(m) are presented in Table 6.14, Table 6.16, and Table 6.18 for i = A,B,C respectively. In what follows, ‘we enumerate the main features that emerge from our simulations: ‘« The size properties of the known break-date statistics Ji.(Tg, m) (i A,B,C) depend on the correlation structure of the data generating process, see Tables 6,7, 6.9, and 6.11. The exact size of these statis- tics are quite close to the nominal size when (p,¥) = (0,0). When (2,#) = (0.6,0) or (0,0.5), the size of J} (Tf, m) (i = A,B,O) is less than the nominal size, and falls with an increase in the largest time 0.2170 0.2179 0.2298 0.2197 0.1434 0.1426 0.1373 0.1368 0.1369 0.1399 0.1397 0.1350 0.1368 0.1342 0.1358 0.1386 0.2103 0.2085 0.1999 0.2045 0.2079 0.2095 0.3987 0.4016 0.3997 0.3907 0.2389 0.2360 0.2328 0.2366 0.2310 0.2327 0.2083 0.2099 0.2108 0.2101 0.2063 0.2040 0.4097 0.4107 0.4060 0.4128 0.4122 oat 0.6316 0.6284 0.6262 0.6246 0.3598 0.3644 0.3601 0.3623 0.3638 0.3632 0.3125 0.3108 0.2943 0.2986 0.3103 0.3013, 0.6594 0.6600 0.6639 0.6643 0.6607 0.6637 0.7216 0.8023 0.7211 0.8066 0.7260 0.8081 0.7234 = 0.8038 @=08, p= -06, ¥ =0.0 0.4586 0.5504 0.4539 «0.5454 0.4476 0.5426 04611 0.5463 0.4561 0.5487 0.4887 0.5497 «=08,p v=0s 0.3627 0.4072 0.3559 0.4000 03446 (0.3933 0.3458 0.3882 0.3586 = 0.4021 0.3486 (0.3954 0.8, p= 0.0, ¥ = -0.5 0.7918 0.8890 0.7976 = 0.8922 0.7947 0.8929 0.8016 0.8980 0.7974 0.8903 0.7991 0.8902 0.8050 8101 0.8109 0.8068 0.5926 0.5881 0.5857 0.5907 o.saal 0.5924 0.4020 0.3996 0.3942 0.3913 0.4004 0.3863 0.9222 0.9223 0.9245 0.9301 0.9192 0.9191 0.8372 0.8310 0.8381 0.8327 0.6878 0.6539 0.6554 0.6670 0.6598 0.6626 0.4137 0.4163 0.4130, 0.4009 0.4188 0.4042 0.9539 0.9550 0.9565 0.9614 0.9576 0.9551 0.8347 0.8337 0.8372 0.8332 0.7091 0.7044 0.7039 0.7165 0.7098 o711s 0.4180 0.4238 0.4237 0.4070 0.4193 0.4155 0.9694 0.9690 0.9731 0.9737 0.9739 0.9730 0.8333 0.8364 0.8404 0.8406 0.7629 0.7572 0.7600 0.7697 0.7657 0.7638 0.4316 0.4252 0.4270 0.4150 0.4266 0.4238 0.9827 0.9843 0.9846 0.9862 0.986 0.9855 SOL s9say 300y aun man Sa5s SaSase 0.0844 0.0916 0.0914 0.0903 oo7is 0.0697 0.0698 0.0646 0.0680 0.0667 0.0659 0.0611 0.0642 0.0630 0.0688 0.0634 0.0951 0.0924 0.0958 0.0959 0.1000 0.0910 0.3173 0.3120 0.3173 0.3247 0.1788 0.1721 0.1682 0.1724 0.1799 0.1715 0.1518 0.1465 0.1507 0.1488 0.1572 0.1489 0.3234 0.3244 0.3167 0.3244 0.3260 0.3227 0.4081 0.4010 0.4080 0.4086 0.2350 0.2283 0.2288 0.2275 0.2250 0.2213 0.1720 0.1717 0.1690 i712 0.1723 0.1695 0.4609 0.4577 0.4512 0.4625 0.4564 0.4534 0.5798 0.5894 05886 © 0.5886 0.5926 0.5907 0.5856 = 0.5945, 08, p= -0.6, ¥ = 0.0 0.3466 0.3961 0.3426 © 0.3839 0.3377 (0.3805 0.3412 0.3833 03441 0.3847 0.3352 0.3739 8,9 =0.0, ¥ = 0. 0.2353 (0.2312 0.2363 (0.2347 0.2358 «0.2291 0.2331 = (0.2273 0.2383 (0.2329 0.2317 0.2246 =08, p= 00, ¥ =-05 0.6747 (0.7397 0.6710 0.7400 0.6609 0.7341 0.6768 = (0.7398 0.6674 0.7295 0.6710 0.7320 0.6596 0.6496 0.6554 0.6548 0.4573 0.4509 0.4506 0.4494 0.4486 0.4468 0.2525 0.2437 0.2467 0.2402 0.2473 0.2429 0.8293 0.8277 0.8225 0.8294 0.8215 0.8273 0.6654 0.6507 06611 0.6851 0.5253 0.5158 0.5188 015244 0.5168 0.5164 0.2621 0.2549 0.2586 0.2560 0.2589 0.2581 0.8821 0.8798 0.8765 0.8845 0.8740 0.8826 0.7002 0.6848 0.6964 0.6941 0.6031 0.6004 0.6025 0.6073 0.s984 0.6003 0.2840 0.2712 0.2768 0.2786 0.2756 0.2747 0.9304 0.9323 0.9298 0.9347 0.9313 0.9289 0.6655 0.6621 0.6675 0.6745 0.6403 0.6385 0.6456 0.6515 0.6517 0.6647 0.2688 0.2630 0.2717 0.2693 0.2691 0.2760 0.9452 0.9473 0.9462 0.9529 0.9837 0.9548 952], 1004 1149) Ma uot Table 6.12. Finite sample power of /S(m) with m= 2,3,4, Mixed DGP: y; = 8 DUL(TE) + yDTVTE) +24, %¢ = az-1 + PAZ =1 +e + Werte et ~ NOD 10,5% nominal size FQ) IQ) Ia) FS) IE) IE) 58) KE) GAO) © =08, p= 00, ¥ =0.0 ° 0.0726 0.1511 0.1950 0.2903 0.2908 0.3263. 0.3359 0.3628 0.3838 ° 0.0727 0.1821 (0.1877 0.2847 0.2849 0.3224 ©3302, 0.3526 0.3743 ° 0.0775 0.1546 (0.1890 0.2860 (0.2866 0.3261 «0.3365 0.3639 0.3883 6 0.0695 0.1580 0.1972 0.2880 = 0.2921 0.3299 0.3354 «0.3656 0.3914 ° 0.0754 0.1807 0.1946 (0.2926 © «02908 ~—«0.3301.- «03410-03669 03927 ° 0.0811 0.1664 0.2021.«««0.2977 0.2934 0.3273. «(0.3403 «0.3646 0.4044 e 0.0706 0.1462,—0.1945 0.2943 0.2984 = 0.3281-—0.3334. 0.3863 (0.3760 @ 0.0760 «0.1513. (0.1892 0.2907 0.2921 (0.3288_-«0.3381 (0.3643 «(0.3890 o 0.0711 0.1882 0.1946 0.2938 = 0.2916 0.3307 0.3412,——0,3652—0.3872 6 0.0668 0.1501 0.1925 0.2864 © 0.2849 0.3284 © 0.3346 0.3579 0.3860 @ 0.0779 0.1644 (0.1933 0.2916 0.2950 0.3410 0.3497 0.3640 0.3975 @ 0.0720 0.1513 0.1913 0.2881 0.2947 «0.3367 0.3393 «0.3593 0.4079 e 0.0719 0.1867 0.1963 0.2897 0.2980 0.3319 «0.3408 0.3642 —0.3849 6 0.0758 —0.1607_-«0.1952 02902-02897 «0.3274 ©3360» 0.3639 0.3827 e 0.0692 0.1579 0.2016 «0.2913 0.2925 —«0.3282-——«0.3353-«—«0.3586 0.3789 @ 0.0738 0.1527 0.1927 0.2881 0.2909 0.3263 0.3325 «0.3624 0.3926 ° 0.0715 0.1558 0.1917 02903-02971 -—«0.3353.«««0.3386 0.3680 (0.3948 a 0.0739 0.1572 0.1924 0.2909 0.2909 0.3277 «03356 «0.3618 0:3986 @ 0.0761 0.1520 0.1866 0.2870 0.2936 «= 0.3296 0.3385 0.3661. 0.3837 8 0.0768 0.1562 0.1943 0.2852 (0.2928 © 0.3266 ©0393» 0.3673. «0.3810 @ 0.0713 0.1529 0.1858 «0.2917 «0.2935 «0.3311 «0.3406 ~=—«0.3689 «0.3931 Continued 180], 3004 1019) MaN 691 4, 0.1002 0.2630 «0.3658 «= O.S41S 0.5447 0.5979 0.5846 0.6008 0.5931 4, 0.1037 0.2632 «0.3640 «0.5384 «0.5451 0.5930 0.5807 0.5958 0.5837 4, 0.0993 0.2677 «0.3688 ~—(0.S341 «0.5358. «0.5918 0.5792 0.5938 0.5842 6 0.1029 0.2647 «0.3610 «0538S OS411 0.5899 0.5838 0.5991 0.5892. 6 0.1017 —0.2692-««0.3681 «S589 0.5555 0.6019-0.8907 0.60100. 886 6, 0.0999 0.2566 «0.3592 «0.5347 0.5419 0.5963 0.5850 0.5975 0.5911 6 0.1086 0.2618 0.3610 «0.5330 0.5399 0.5959 0.5806 0.5923 0.5847 6 0.1054 0.2705 «0.3664 ~—«0.S370 «0.5375 0.5931 0.5847 0.5988 0.5918 6. 0.1007 0.2660 0.3664 «(0.5324 0.5339 «0.5820 0.5719 0.5866 0.5894 o=8, 0.0979 0.2704 ~=—=—«0.37S1—«OS428 0.5460 (0.5942 0.5847 0.6024 0.5863 8, 0.0993 0.2651 0.3642 «0.5358 = O.S411 0.5876 0.5752 0.5986 0.5835, 8, 0.1006 © 0.2627, «0.3569 «0538S 0.5460 «0593105777 0.944 0.5825 8, 0.1046 © 0.2663.« 0.3588 «0540905442 05978 0.5899 0.5985 0.5867 8, 0.1080 0.2648 «0.3602 0.5382 «0.5505 0.6014 0.5820 0.6057 0.5955, 8 0.0984 = 0.2633 «0.3661 «S413 OS454 «0.5943 0.5890 0.5988 0.5958. 0.0778 0.1765 0.2391-«0.3718 0.4144 0.4960 0.5517 0.6151 0.6764 y 0.0799 0.1648 «0.2328 (0.3718 += 0.4189 0.4962 0.5440 0.6109 0.6705 y 0.0781 0.1767 0.2349 0.3708 0.4188 0.4948 «0.5525 0.6159 0.6795 y 0.0781 0.1683-0.2308 «0.3681 0.4141 0.4887 0.8413. 0.6071 0.6774 y 0.0770 0.1737 0.2395 «0.3755 0.4177 «0.026 0.8547 0.6161 0.6950 y 0.0803 © 0.1786 0.2334 «0.3676 «0.4099 0.4946 0.5429 0.6095 0.6965 y 0.0740 0.1639 0.2366 «0.3745 «0.4175 0.4891 0.5470 0.6082 0.6711 y 0.0782 0.1756 0.2420» 0.3711 «0.4161 (0.4967 0.5472 0.6134 0.6735 y 0.0803 0.1726 0.2334 0.3666 © 0.4096« 0.4918 §=— 0.5377 (0.6076 0.6733 iy 0.0731 0.1709 «0.2351 «0.3739 «0.4233 «= 0.S077 0.8869 0.6211 (0.6874. 0.0750 0.1717 0.2343 0.3686 «0.4131 0.4915 0.5435 0.6131 0.6857 ‘Continued 154] 1004 HUN MON, ut 0.0687 0.1356 0.1539 0.2311 0.2202 0.2387 0.2312 0.2373 (0.2460 0.0653 0.1401-—«0.1548 —0.2332-=—«0.2230 0.2421 «0.2367 0.2443 0.2524 0.0681 0.1379 0.1537 0.2309» 0.2207 0.2439 0.2414 0.2465 (0.2600 0.0703 0.1379 «0.1592 0.2336 «0.2219 0.2362 0.2317 0.2393 0.2452. 0.0623 0.1377, «0.1586 0.2337 «0.2212 0.2426 «= 0.2345 0.2426 0.2474 0.0690 0.1411 «0.1587 0.2332 0.2249 0.2420» 0.2313. «0.2405 0.2453. 0.0694 0.1388-—«0.1593 0.2308 0.2217 0.2474 «0.2410 0.2498 0.2562. 0.0633 © 0.1331-«0.1536 0.2306 0.2213 0.2422 0.2349 0.2461 0.2559 0.0715 0.1298 «0.1547 0.2220» 0.2110 0.2346 = 0.2304 «0.2379 0.2499 0.0686 0.1304 0.1535 0.2239 0.2160 0.2363.« 0.2259 0.2318 0.2347 0.0722 © 0.1421.-«0.1619 0.2313 0.2275 «0.2394 «0.2364 = 0.2418 0.2491 0.0672 0.1345 0.1588 0.2265 0.2251 0.2459 0.2414 «0.2484 0.2530 0.0635 0.1329 0.1594 0.2260 0.2196 0.2391 0.2390 0.2430» 0.2479 0.0706 © 0.1413 0.1552 0.2271 0.2198 0.2420» 0.2382 = 0.2421——0.2834 0.0665 0.1338_0.1595 0.2283 0.2260 0.2427 «0.2324 «= 0.2387 0.2514 0.0673 0.1417 0.1599 0.2294 0.2157 0.2379 0.2333 «0.2448 0.2483. 0.0705 0.1392 0.1624 0.2328 0.2214 «0.2430 «0.2380 0.24520.2438 0.0698 0.1379 0.1574 0.2312 0.2202 0.2404 «0.2351 0.2447 0.2505 0.0661 0.1373 «O.N6IS 0.2288 «0.2195 0.2412 0.2321 «—0.2407-—0.2502. 0.0687 0.1329 0.1864 0.2255 0.2144 0.2408 = 0.2360 0.2482 0.2571 0.0706 0.1357 0.1578 0.2330 O.2181 0.2418 0.2393 0.2445 0.2534 0.0710 0.1393 0.1597 0.2335 0.2223 0.2427 «0.2371 0.2451 0.2520 0.0711 0.1345 0.1614 0.2323 0.2197 0.2417 0.2368 «0.2428 (0.2498 0.0661 0.1252 0.1559 0.2230» 0.2131 0.2366 «0.2354 = 0.2451 0.2510 0.0699 0.1410 0.1649 0.2298 0.2197 0.2402 «0.2323 «0.2357 0.2469 0.0693 0.1360 0.1589 0.2267 «0.2148 0.2363 «0.2322 0.2377 0.2498, 0.0698 0.1385 0.1581 0.2268 = O.2178 «0.2445 0.2402 «0.2418 0.2852. ‘Continued 9821, 1004 Nun Mayr e2t 0.1045 0.1028 0.1070 0.1059 0.1063 0.1082 0.1035 0.1042 0.1041 0.1109 0.1037 0.1109 0.1081 0.1100 0.1084 0.2987 0.3003 0.3026 0.3005 0.2900 0.2997 0.2987 0.3004 0.2994 0.3020 0.2921 0.3064 0.3009 0.2976 0.2933 0.4410 0.4516 0.4547 0.4507 0.4472 0.4459 0.4481 0.4532 0.4500 0.4554 0.4477 0.4544 0.4534 0.4542 0.4373 0.6838 0.6801 0.6893 0.6833 0.6817 0.6826 0.6881 0.6903 0.6878 0.6821 0.6829 0.6860 0.6883 0.6879 0.6766 0.7427 0.7488 0.7529 0.7495 0.7487 0.7468 0.7495 0.7540 0.7496 0.7427 0.7470 0.7544 0.7524 0.7533 0.7423 0.8315 0.8314 0.8390 o.gai4 0.8356 0.8338 0.8334 0.8419 0.8375 0.8255 0.8369 0.8379 0.8335 0.8334 0.8296 0.8719 0.8770 0.8808 0.8791 0.8791 0.8770 0.8737 0.8785 0.8779 0.8685 0.8798 0.8803 0.8764 0.8726 0.8726 0.9188 0.9176 0.9208 0921s 0.9197 0.9177 0.9187 0.9222 0.9207 0.9142 0.9187 0.9228 0.9192 0.9200 0.9131 0.9508 0.9536 0.9587 0.9477 0.9453 0.9445 0.9475 0.9544 0.9582 0.9438 0.9451 0.9495 0.9501 0.9530 0.9558 989] 1004 NUN MaN set NS Table 6.13 Finite sample sizeof f;4¢m) with m = 2,3, 4,..., 10,59 nominal size DGP (@ = 1): Ye = Year +P AYe-1 +e + Werte ~ NO.1) FO Fe Fa KO FO FAQ Fe FO Fao 0.0463 0.0471 0.0473 0.0435 0.0432 0.0423 0.0444 0.0445 0.0428, 0.0422 0.0403 0.0309 «0.0257 0.0194 0.0156 0.0129 -—«0.0107_—«0,.0092 0.0516 0.0609 0.0664 0.0776 0.0859 0.1021 0.1204 0.1473 0.1657 0.0403 0.0432 0.0427 (0.0439 0.0412 0.0343 «0.0310 «0.0295 (0.0268 0.0544 0.0684 0.0858 0.1160 0.1501 0.1854. 0.2235. 0.2677 0.3039 v v v v =0. Table 6.14 Finite sample power of with m = 2,3,4,..., 10,5% nominal size Crash DGP: yy = ODUH(T§) + 21,24 = 24-4 + 02-1 +61 + Herter ~ NO,V 2 FO Fa KO HO TO Fe FO — sFAa0 p= 00, ¥ =00 0.1068 0.1883 0.2784 0.3471 0.3852, 4117 0.4336 0.4583 0.4637 0.1075 0.1679 0.2464 0.3122, 0.3559 03728 = 0.4004. «0.4218. ~——04310 Grtel 04088 BSS 0.2839 024802881 0.3420 0808 osrID 01172 0.0664 0.1512 0.2057 0.2736 02746 ~— 03303 O.s4sd os 0.1213 0.0494 0.1495 0.2082 0.2863 «0.2886 «= 0351503709 «oats SS 0.1160 0.0357 0.1544 0.2309 0.3136 0.3384 = 0.4171 «0.4389 0.4894 = @=08, 9 =06,y =00 § 0.1465 03084 0.4732 0.6115 0.6752 069537174 0.7294 ans 0.1429 0.2713 0.4337 0.5647, 0.6342 0.6727 0.6930 0.7055 0.6998 e Continued Table 6.15. Finite sample sizeof /,Pcm) with m = 2,3,4,.-, 10,596 nominal size +p bya ter + Wer er ~ N(O.1) DGP (@ = 1): yr = Ye @ @ (a) 6 FO FM Fe Ie 000465 0.0511 0.0535 0.0505 0.0484 0.0493 0.0496 0.0486 0.0500 0.0472 0.0362 «0.0344 «0.0297 ««0.0221._««0.0178 «0.0147 0.0109 0.0092 0.0551 0.0645 0.0698 0.0772 0.0892 0.1105 0.1236 0.1395 0.1647 00490 0.0454 0.0507 0.0438 0.0390 «0.0402 0.0354 0.0340 0.0307 0.0629 0.0751 0.0863. «0.1068 0.1328 0.1756 0.2006 0.2299 0.2717 Table 6.16 Finite sample power of /;8(m) with m = 2,3,4, ..,10,5% nominal size Changing Growth DGP: y = yDT+(T) + ze, 2¢ = 21-1 + 2M2-1 + 6+ Wert er ~ NOD @ JP QR) @) 6) IPM FP 8) FEO) FF 0) 2 = 08, p=00,¥=00 0.0752 0.1159 0.1642, 0.2030 0.2363 (0.2650 0.2737 0.2793 (0.3013, 0.0416 0.0781-—«O.1211.-—0.165S 0.2052 0.2512 0.2688. 0.2783 (0.3025, 0.0198 = 00335. 0.0761««01133. 0.1486 = 0.1784 0.1993 2111 (0.2402 0.0137 0.0194 0.0680 «0.1028. 0.1417- 0.1648 0.1937 0.2039 (0.2362 00118 0.0118 0SS4_——0.0904 0.1282 O.1SIS 0.1790 0.1883 0.2197 0.0116 0.0095. (0.0462 0.0823. 0.1202, 0.1431 O1717, 0.1772, 0.2219 a= 08, p=06,y =00 y=01 0.1039 0.1956. 0.2950 (0.3980 0.4641 0.5208 0.5404 0.5494 0.5579 y=02 0.0672 01311 -«0.2432«««0.365S 0.4350 0.5002. 0.S142 0.8236 0.5401 Continued LL 5152p 1004 11) MON Table 6.17 Finite sample size of J7(m) with m = 2,3, 4,..., 10,5% nominal size DGP (= Vi: ¥¢ = Yea +P AYE Hee + Werte er ~ NOI) FEQ FFB Fa FES) p=0.0,¥=0.0 0.0495 0.0480 0.0524 0.0498 0.0473 0.0480 0.0495 0.0469 0.0501 p=0.6,%=0.0 0.0412 0.0372 0.0334 0.0283 0.0227 0.0156 0.0130 0.0095 0.0070 =0.6,¥=0.0 0.0566 0.0643 0.0767 0.0891 0.1066 0.1268 0.1439 0.1613. 0.1897 0,¥ =05 0.0488 0.0424 0.0516 0.0446 0.0393 0.0379 0.0351 «0.0301 0.0289 Ow=-05 0.0618 0.0772 0.1030 0.1237 0.1574 0.1970 0.2420 0.2778 0.3251 Ff ao) © TEM FF) 1994 1004 HUN may Ie y=1S 0.0187 0.0430 0.1050 0.1589 0.1835 0.2256 0.2616 = 0.2760 0.3139 y=20 0.0159 0.0399 0.0915 0.1439 0.1633 0.2179 0.2860 0.2774 0.3218 y 0.0368 © 0.0226 «0.0717 O.11S8 0.1331 0.1654 0.1933. 0.2118 0.2233 y=02 0.0367 0.0338.-««0.0765 «0.1142 0.1346 0.1681 0.1963 0.2148 0.2300 y=0.6 0.0242 0.0485 0.1237 0.1384 0.1985 0.1939 0.2431 0.2358 (0.2694 y=10 0.0222 0.0543 0.1379 0.1628 0.2283 «0.2267 «0.2891 (0.2693 0.3100 y=15S 0.0197 0.0500 0.1326 © 0.1808 = 0.2344 «0.2674 = 0.3284 0.3223 0.3718 y=20 0.0152 0.0420» 0.1209 0.1892 0.2281 0.2861 0.3309 0.3475 (0.4036 a= 08, p=06,¥ 0.1344 0.2406 0.3893 0.4838 0.5458 0.5847 0.6104 0.5939 0.5903 0.0719 0.1567 0.2507 0.3387-—0.4142 0.4692 0.5026 0.5087 0.5071 0.0388 —0.0871_-«0.1354 ——0.2065-«0.2482 «0307703413 03486 0.3656 0.0241 0.0725 «0.1241 0.1835 0.2206 «0.2785 0.3088 0.3237 0.3278 0.0202 0.0553 0.1069 0.1734 0.2001.-«0.2614 0.2813. 0.3029 0.3135 0.0190 0.0503 0.1011 (0.1664 0.1915 0.2505 0.2722 0.2901 -—0.3018 0.1235 0.2233» «0.3802 0.4472 «(0.5079 0.5444 «0.5811 0.5669 0.5699 0.0703 © 0.1446 ~—0.2388-0.3231 (0.3939 0.4367 0.4821 «0.4818 0.4807 0.0335 0.0863 0.1428 0.2143 (0.2696 0.3260 0.3615 0.3693 0.3836 0.0275 0.0719 0.1281 0.1969 0.2401 0.2934 0.3268 0.3379 0.3443 0.0229 00572 0.1116 0.1866. 0.2188 0.2721 +0300 «0.3173 0.3198 0.0175 0.0535_-0.1162 0.1808 0.2130 0.2711 (0.2996 0.3178 0.3249 0.0942 0.1583 0.2725 «0.3420 0.4211 «0.4478 0.4923 «0.4882 0.4926 0.0663 0.1258 0.2324 «0.2958 0.3702 O.4157 0.4548 «0.4543 0.4614 0.0340 © 0.0946 0.1728 0.2331 0.2987 0.3371 0.3877 0.3880 0.4028 0.0282 0.0820 0.1817 0.2175 0.2670 «0.3177 0.3586 0.3662 0.3776 0.0230 © 0.0635 O.1311 0.2028» 0.2487 «0.3082 0.3366 = 0.3518 0.3879 Continued g ; eer y y y y y y y y y y y y y y y y y y y 0.0124 0.0848 0.0403 0.0253 0.0178 0.0154 0.0140 0.0391 0.0349 0.0207 0.0195 0.0143 0.0140 0.0316 0.0321 0.0223 0.0192 0.0173 0.0149 0.0929 .0ss4 0.0258 0.0186 0.0182 0.0121 0.0860 0.0283 0.0591 0.0647 0.0629 0.0512 0.0388 0.0355 0.0326 0.0398 0.0611 0.0360 0.0505 0.0407 0.0219 0.0250 0.0453 0.0836 0.0527 0.0471 0.1315 0.0883 0.0513 0.0432 0.0341 0.0345 0.1199 0.0733 0.1165 0.1407 0.1298 0.1028 0.0930 0.0913 0.0811 0.1068 0.1497 0.1368 0.1200 0.1148 0.0665 0.0739 0.1360 0.1696 0.1499 0.1349 0.1873 0.1267 0.0671 0.0569 0.0516 0.0ss4 0.1794 0.1306 0.1700 0.1662 0.1659 0.1640 0.1562 0.1562 0.1351 0.1501 0.1770 0.1932 0.2037 0.2127 0.1339 0.1426 0.1713 0.2155 0.2411 0.2528 0.2131 0.1431 0.0804 0.0764 0.0726 0.079 0.1978 0.1489 0.2175 0.2262 0.2252 0.2115 0.1946 0.1952 0.1777 0.2019 0.2561 0.2778 0.2701 0.2688 0.1748 0.1861 0.2592 0.3145 0.3345 0.3326 os 0.2376 0.1620 0.0820 0.0749 0.0670 0.0754 0.2164 0.2243 0.2615 0.2715 0.2694 0.2801 0.2693 0.2838 0.2400 0.2558 0.2895 0.3334 0.3475 0.3730 0.2807 0.2535 0.2827 0.3335 0.4035 0.4411 0.2438 0.1801 0.0981 0.0907 0.0913 0.0940 0.2251 0.2667 0.3230 0.3451 0.3497 0.3526 0.3320 0.3403 0.3008 0.3258 0.3823 0.4193 0.4315 0.4424 0.3088 0.3169 0.3675 0.4428 0.4958 0.5310 0.2548 0.1966 0.1058 0.0966 0.0920 0.0895 0.2353 0.3320 0.3637 0.3811 0.3796 0.3982 0.3972 0.4102 0.3489 03771 0.4000 0.4415 0.4779 osi12 0.3727 0.3735 0.3989 0.8475 0.5183 0.3836 0.2456 0.1964 0.1077 0.0996 0.0992 0.0948 0.2260 0.4451 0.4230 0.4465 0.4499 0.4722 0.4788 0.s074 0.4051 0.4302 0.4778 0.5258 0.5654 0.5971 0.4240 0.4296 0.4699 0.5339 0.6124 0.6677 0.2440 0.1968 0.1188 0.1011 0.0987 0.0976 0.2229 z z iE > g z seu SRR RRR REESE 0.0202 0.0183 0.0116 0.0120 0.0917 0.0444 0.0252 0.0185 0.0145 0.0134 0.0477 0.0332 0.0243 0.0216 0.0168 o.014s 0.0307 0.0294 0.0221 0.0211 0.0178 0.0149 0.0210 0.0278 0.0197 0.0173 0.0158 0.0146 0.0668 0.0508 0.0438 0.0412 0.1291 0.1164 0.0742 0.0607 0.0s14 0.0427 0.0455 0.0890 0.0853 0.0724 0.0631 0.0532 0.0249 0.0279 0.0651 0.0705 0.0654 0.0898, 0.0212 0.0172 0.0349 0.0607 0.0625 0.0638 0.1272 0.1166 0.1284 0.1158 0.2418 0.2196 0.1626 0.1417 0.1364 0.1340 0.1342 0.1891 0.2090 0.1946 0.1776 0.1633, 0.1173 0.1250 0.2065 0.2279 0.2082 0.1964 0.0937 0.0921 0.1427 0.2103 0.2288 0.2359 0.2306 0.2219 0.2252 0.2192 0.3654 0.3059 0.2662 0.2578 0.2570 0.2558 0.2428 0.2663 0.3118 0.3327 0.3255 0.3146 0.2389 0.2351 0.2894 0.3555 0.3794 0.3923 0.2239 0.2381 0.2499 0.3105 0.4001 0.4446 0.3078 0.2941 0.2998 0.2981 0.4708 0.4284 0.3633 0.3447 0.3423 0.3400 0.3446 0.4005 0.4429 0.4460 0.4257 0.4152 0.3398 0.3505 0.4475 0.4959 0.5082 0.5105 0.3335 0.3442 0.3906 0.4775 0.485 0.5894 0.4298 0.4144 0.4244 0.4231 0.5668 0.5343 0.4918 0.4806 0.4807 0.4799 0.4698 0.4947 0.536 0.5710 0.5658 0.5666 0.4730 0.4705 0.5187 015946 0.6373 0.6697 0.4812 0.4930 0.4967 0.5528 0.6529 0.7250 0.5182 0.5095 05135 0.5094 0.6740 0.6471 0.5949 0.5767 0.5785 0.5679 0.5796 0.6109 0.6565 0.6738 0.6686 0.6618 0.5785 0.5845 0.6415 0.7079 0.7451 0.7565 0.5882 0.5980 0.6132 0.6872 0.7578 0.8068 0.6046 0.5977 0.6048 0.5967 0.7327 0.7116 0.6706 0.6608 0.6596 0.6577 0.6611 0.6692 0.7040 0.7334 0.7414 0.7491 0.6594 0.6661 0.6915 0.7487 0.7963 0.8220 0.6753 0.6879 0.6856 0.7249 0.7962 o.8s42 0.6876 0.6857 0.7228 0.7622 0.8008 0.7837 0.7479 0.7442 0.7735 0.8138 0.7328 0.7455 0.7888 0.8077 0.8213 08441 0.7310 0.7409 0.7723 0.8295 0.8631 0.8826 0.7405 0.7509 0.7625 0.8085 o8s7it 0.9107 5152], 300 jun] May zat ‘New Unit Root Tests 189 real per capita GNP, industrial production, common stock prices, and real wages.© ‘We test for the presence of a unit root in all series contained in the extended Nelson-Plosser data set using the Mixed model /#°(m) statistic when the break-date is considered to be unknown. The highest order of superfluous time trends used is m = 10. We use the logarithm of all series except the interest rate series which we examined in the level form. Our results are presented in Table 6.19. We reject the unit root null for real GNP and real per capita GNP at the 1 per cent level, for employment at the 2.5 per cent level, and for industrial production and common stock prices at the S per cent level. The estimated break-dates for the real GNP and real per capita GNP series is 1938, for industrial production is 1920, for employment is 1902, and for common stock prices is 1953. Table 6.19 Empirical results for the extended Nelson-Plosser data OO Series Period J; (10) 7, a é é % ——— $F Real GNP 1909-1988 0.1563 1938 4.7894 0.2467 0.0160 0.0161 Nominal 1909-1988 1.8606 1938 10.6470 0.0594 0.0289 0.0459 GNP Real Per 1909-1988 0.0206 1938 7.1806 0.2748 0.0032 0.0153 Capita GNP Industrial 1860-1988 0.2831 1920 ~0.1795 -0.3531 0.0498 -0.0079 production Employment 1890-1988 0.2436 1902 9.9386 0.0731 0.0241 -0.0004 GNP deflator 1889-1988 3.8516 1964 2.9953 -0.0423 0.0214 0.0409 Consumer 1860-1988 3.6100 1898 3.6841 -0.2089 -0.0117 0.0382 prices Nominal 1900-1988 1.7581 1930 6.0318 -0.5545 0.0459 0.0100 wages Money stock 1889-1988 1.3011 1929 1.0797 -0.5698 0.0706 -0.0024 Velociy 1869-1988 1.0755 1937 1.6167 0.0079 -0.0188 0.0265 Interest rate 1900-1988 1.8016 1958 4.2826 0.2463 0.0220 0.2923 Common 1871-1988 0.3358 1953 1.3363 0.8105 0.0185 0.0285 stock prices Real wages 1900-1988 1.2370 1958 2.8561 0.2061 0.0187 -0.0136 RR ‘Notes: The small letters In parenthesis that appeat as superscript indicate the significance of ‘these statistics, The letters a’, 'c, and 'd indicates significance with respect to the Anite sample (T = 100) critical values a the 1 per cen, 25 percent, 5 percent, and 10 pet cent significance level respectively, see Tables 4 and 6 forthe erica values. indicates borderline significance atthe 18 percent or 20 percent level L New Unit Root Tests 191 Our results are most comparable to those in Sen (2004) that are based ‘on the minimum-t-statistics discussed by Zivot and Andrews (1992) and Sen (2003). Sen (2004) rejects the unit root null hypothesis for six series: real GNP, nominal GNP, real per capita GNP, industrial produc- tion, employment, and common stock prices. The break-dates implied by the J;°(m) statistic and the minimum-t-statistic are the same except fot industrial production and employment.” Therefore, the /#°(m) statis tic reveals slightly less evidence against the unit-root null hypothesis compared to the use of the Perron-type minimum-t-statistics. 6.4.2. Real per capita GDP for 18 OECD countries In this section, we present empirical evidence regarding real per capita GDP for a group of 18 OECD countries: Australia, Austria, Belgium, Canada, Denmark, Finland, France, Germany, Italy, Japan, The Netherlands, New Zealand, Norway, Spain, Sweden, Switzerland, United Kingdom, and the United States. We use annual data over the period 1870-1998 for all countries except Germany (1950-1998), Japan. (1874-1998), Spain (1900-1998), and Switzerland (1899-1998). Details regarding data sources are discussed in Appendix A. Empirical evidence regarding real per capita GDP can be found in the earlier studies by Ben-David and Papell (1995), and Raj (1992). Ben-David and Papell (1995) present evidence for a group of 16 OECD countries {all countries in our study except New Zealand and Spain) using data over the period 1860-1989 obtained from Maddison (1991). Raj (1992) examined nine countries (Australia, Canada, Denmark, France, Italy, Norway, Sweden, United Kingdom, and United States) using data over the period 1871-1985 from Cogley (1990).8 The results in Ben-David and Papell (1995) show that the unit root null hypothesis is rejected in favor of the trend-break for 11 countries: Austria, Belgium, Canada, Demark, Finland, France, Germany, Japan, Sweden, United Kingdom, and United States. Raj (1992) rejected the unit root null for Canada, Denmark, France, United Kingdom, and United States. ‘Our results for the real per capita GDP series are based on more recent data obtained from Maddison (2001) for 18 OECD countries. Our data covers a longer time horizon compared to Ben-David and Papell (1995) and Raj (1992), except for Germany for which we use data over the period 1950-1998. In Table 6.20, we report the calculated Mixed model statis- tic, J#°(m), for the logarithm of the real per capita GDP series when the break-date is considered unknown. The highest order of superfluous time trends used is m = 10. We reject the unit root null hypothesis for ‘Austria, Belgium, Canada, Denmark, United Kingdom, and the United New Unit Root Tests 193 period 1950-1998 are given in Maddison (2001), Appendix C, Table C1-b, pp.272- 275. Real GDP for the period 1870-1949 were calculated using the real GDP index series in Maddison (1995), Appendix B, Table B-10a, pp. 148-153, and the real GDP in 1950 reported in Maddison (2001). The real GDP Index series were used to backcast the real GDP series in Maddison (2001), as suggested in the note in Maddison (2001), Appendix C, p. 267. Population data for the period 1950- 1998 were obtained from Maddison (2001), Appendix C, Table Cla, pp. 268-271, and population data for the period 1870-1949 were obtained from Maddison (1995), Appendix A, Table A-3a, pp. 104-107. Real per capita GDP is calculated by dividing the real GDP with population. All data are available from the author upon request. Appendix B In this appendix, we briefly discuss the calculation of the J-statistics, specifically, the /¢°(mm) statistic, when the break-date is assumed to be unknown to the prac. titioner. We illustrate the application of the /-statistics within the context of the empirical results pertaining to the extended Nelson-Plosser data set. Consider the time series {y;|/_,. For example, we have data on Real GNP over the sample period 1909-1988 which gives usa sample of T = 80. We need to first specify the trimming parameter (29), So that the break-date (7) is assumed to be in (RoT].(4oT] + 1,..-,T — [gT}}. We use Ao = 0.05 which implies that the Dreak-date lies in the (4,5, ...,76) In order to calculate the /3(m) statistic, we need to estimate regressions (3) and (7) by OLS for each possible break-date (T,). In equation (3), for a given break-date (7), we regress (y;] on a constant, an intercept dummy defined by the chosen break-date (DU;(T) that is equal to 0 for t < Tp and 1 fort > T, +1, a time trend (t that takes on the values 1,2, ..,80), and a slope dummy (DT; (Tp) that is equal to 0 for t < T; and t~ 7, for t > T;, +1). The residual sum of squates calculated from regression (3) is denoted by RSS¢(T,). In equation (7), for a given break-date (T,), we regress {y¢) on a constant, an intercept dummy defined by the chosen break-date, a time trend, a slope dummy defined by the chosen break-date, and ‘m — 1' superfluous time trend variables (2,3, ..,¢") ‘We use m = 10 in our emplcical applications. The residual sum of squares from regression (7) is denoted by RSS (T,). On the basis ofthe residual sum of squares from regressions (3) and (7), we calculate J&(T,,m)-as shown in equation (8) ‘Once the sequence {/6(Ty,m))7, “077, indexed by the break-date is calculated, the /-statistic (J7°(m) is obtained as the minimum of this sequence. The esti- ‘mated break-date isthe time at which the sequence (/£(Tp,m)}7, 07H, attains its ‘minimum. For the Real GNP series, /3°(m) with m = 10 is equal to 0.1563, and the esti- mated break-date is 1938 which corresponds to T, = 30. The J7°(m) statistic 1s significant if the calculated statistic is less than the appropriate finite sample critical value shown in Table 6.6. The finite sample critical value for T = 100 and m = 10 at the 10 per cent significance level Is 0.4666, and so we reject the unit-root null hypothesis for the Real GNP series. How to Deal with Structural Breaks in Practical Cointegration Analysis? Roselyne Joyeux 7.1 Introduction The empirical literature making use of unit root and cointegration tests has been growing over the last two decades. The application of those tests is challenging for many reasons including the treatment of deter- ministic terms (constant and trend) and structural breaks. Franses (2001) addresses the problem of how to deal with intercept and trend in practical cointegration analysis. In this chapter, the same approach is applied to the treatment of structural breaks in VAR models used to test for unit roots and cointegration. Ifa series is stationary around a deterministic trend with a structural break, the null of a unit root is likely to be accepted even if a trend is included in the ADF regression. There is a similar loss of power in the unit root tests, ifthe series presents a shift in intercept. If the breaks are known, the ADF test can be adjusted by including dummy variables in the ADF regression (Perron (1989, 1990, 1994), Banerjee, Lumsdaine and Stock (1992), Zivot and Andrews (1992) among others). Similarly, when testing for cointegration among the variables in a VAR model structural breaks have to be accounted for. A survey of the applied literature using Johansen’s tests for cointegration in a VAR setting would reveal that intervention dummies are usually inappropriately specified. It is the aim of this chapter to show how to specify and include intervention dummies and how to make accessible to applied economists the latest developments in the use of intervention dummies when testing for unit roots and cointegration in a VAR framework. A simple explanation of the specification of intervention dummies is provided. Johansen, Mosconi and Nielsen (2000) propose a cointegration model with piecewise linear trend and known break points. They show that within this model it is 195 How to Deal with Structural Breaks? 197 When [1} < 1, one can say that y; is attracted by uy for t < Ty and by (1 + #2) for £ > Ty. This model can be rewritten as Ye ~ Gey + H2De) = b1 Vea ~ (or + H2Dt-a)) +e a where Dp =O0iftTh. If @) = 1 in equation (1) then (1) becomes Ye= Ye + H(t ~ Dea) +e a 141 is not identified when @; = 1 but the shift in mean x22 is. (1) can be rewritten as: Aye = 1 — Dyt-1 + A ~ oda + wae ~ b1Dy-1) + er @) or Aye = (#1 — Dyn + (1 ~ bay + H2De1) + 2 ADe + 64 (4) If py = 41 ~ 1, then (4) can be rewritten as Aye = piven ~ pris + H2Dr—1) + HZ Dy + ep (5) Since AD; = Oif (<7) orift > T, +1, and AD; = Lift = Ty +1, the effect of 4D; corresponding to the observation yr, 4 is to render the associated residual zero given the initial value in the second sub- sample. The inclusion of AD; does not affect the asymptotic distribution of the £ statistic of the estimated coefficient of yr_1, 61, under the null of a unit root. This representation also illustrates that when testing for a unit root, the test regression should include both the lagged intervention dummy, Dr-1, and the first difference of the intervention dummy, AD;, even though under the null of p, = 0 the lagged dummy disappears. Perron (1990) and Perron and Vogelsang (1992) tabulate the asymptotic distri- bution of the f statistic of which is the estimated coefficient of y;_1, under the null of a unit root. A better test would be to test for the joint significance of the coefficient of y;_1, the intercept and the lagged inter- vention dummy in (5). Versions of such a test are considered in Zivot and Andrews (1992) and Banerjee et al. (1992). Therefore, this chapter considers this joint test in the multivariate section below. How to Deal with Structural Breaks? 199 in the test regression the intercept, a lagged dummy times the intercept, a first difference dummy intercept, the trend and the lagged dummy times the trend. Note that the trend and the lagged dummy times the trend disappear under the null of p = 0. A test of the null of a unit root therefore is a test of the joint significance of the coefficients of y 1 the trend and the lagged dummy times the trend in (8) 7.2.3 Generalization to an AR(k) process In the case where the process follows an AR(k) model with AR coefficients 1y---/0k equation (8) becomes: ket Aye = epVe-1 ~ PASE ~ 2KS2De-Kt +m + Din + DAY E-1 i + Dad iter (9) where Pe = O1 +240 += 1, and the model is formulated conditionally on the first k observations of each sub-sample. This representation shows that the test regression should include both the intervention dummy lagged k periods, the first difference of the intervention dummy and up to k — 1 lags of the first difference of the intervention dummy. It also shows that the intervention dummy lagged k periods should be included both in the intercept and the deterministic trend variable, even though under the null of px =0 the lagged dummy disappears in the trend compo- nent (but not in the intercept part). A unit root test should be a joint test of the joint significance of the coefficients of yr_1, the trend and the dummy lagged k periods times the trend in (9). 7.24 Generalization to the case of more than one shift The previous model can be generalized to q samples periods, 1 = Ty < T, < Tp <-:- < Tq =T. The last observation of the jth sample is Tj, and the first period of the j+ 1)th sample period is Tj + 1, j= 1,...,q. The model is formulated conditionally on the first k observa- tions of each sub-sample, for example, for the jth sub-sample, Yrrtie-so¥Tj.y4k: It is now necessary to define q — 1 intervention How to Deal with Structural Breaks? 201 are required. In this section, equation (2.6) of Johansen et al. (2000) is obtained by expanding the results from Section 2. In what follows, it is assumed that Y; is a p-vector process and that without struc- tural breaks the model can be formulated conditionally on the first k observations by ka AY, =TYy + Mittut DAY i tep, qu) in where ¢},...,e7 are normal, independent and identically distributed px1 vectors with mean 0 and variance @. It is also assumed that although some ot all of the p time series in ¥; may have a time trend, none have a quadratic trend. The hypothesis of cointegration can be reformulated as a reduced rank problem of the I matrix, in which case 11 = aB, where a and f are (p x r) full rank matrices, and ¥; has a quadratic trend. If none of the p time series displays a quadratic trend, it is necessary to assume that 11 where y is a (1 x r) full rank matrix. If q~1 breaks are present (and consequently, there are q sub-samples), conditionally on the first k observations of each sub-sample, the model can be rewritten as q equations: =ay', ka ay; = (nin)) ("t) +0+ Eanes (12) = 1,....g, where 1 and yj are (p x 1) vectors. Under the null of cointegration, the trend is restricted to the cointe- grating relationships to exclude the possibility of quadratic trends in any time series. This means that [lj = a7/. Instead of writing q equations, the following matrices are defined: De= Creer Dg WE Uy rtgd 4= Of rgy! of dimensions (q x 1), (p x q), (q x r) respectively. Equation (12) can then be rewritten in a form similar to (10): OV [ Yen kt kaa ane (’) feel) tPent Frat OD i50 j= (13) where the dummy variables Dj, Dj, and Jj, are defined as in the previous section, and the x); are (p x'1) vectors. How to Deal with Structural Breaks? 203 is referred to section 3.4 of Johansen et al. (2000) for the computation of the critical values depending on the number of non-stationary relations and depending on the location of the break points. 7.3.2, Some or all of the time series follow a trending pattern This model allows the individual time series to have broken trends, while the cointegrating relations may also have broken trends. This model is denoted by Hi(r) in Johansen et al. (2000). It is the most general case and is represented by equation (13). The derivation of the critical values for this model is also given in section 3.4 of Johansen et al. (2000). The application in section 4 falls under this category. 7.3.3. Some or all of the time series follow a trending pattern in each sub-sample and the cointegrating relations are stationary in each sub-sample (with possibly a broken constant level); trend breaks are allowed only in the non-stationary series This model is denoted H)-(r) in Johansen et al. (2000). The asymptotic distribution of the likelihood ratio test depends on nuisance parameters and cannot easily be obtained. kt kel AY =0f'Ye-1 + uxt DIAN-A+ DV eieiter (16) st iy 7.34 Unit root tests In the first two cases, models (13) and (15), Johansen et al. (2000) also show that tests for linear restrictions on B,y and v are asymptotically x?-distributed. Such tests can be used to identify the cointegrating vec- tors. They also make it possible to test whether the individual time series are trend stationary in each sub-sample. 7.4 Empirical illustration: A German money-demand system 7.4.1 Description In this section, the techniques described above are applied to the estimation of a German money demand system of money, income, infla- tion and interest rates using quarterly data from 1980:2 to 1998:43. We choose to start the estimation a year after the start of the European ‘Monetary System (EMS) to the onset of the Euro. This period includes the reunification of Germany in 1990. Neumann and von Hagen (1993) on 0.08. 0.08: 0.04 002 O02 Figure How to Deal with Structural Breaks? 205 Yearly inflation 7.1 Data in levels speculative attacks saw the withdrawal of a number of currencies from the exchange rate mechanism including the British pound and the Italian lira). Note that since these dummies are indicator dummies; they do not affect. the critical values of the usual cointegration tests. ‘The data are shown in Figures 7.1-7.3. These graphs show the presence of trends in real money, real GDP and nominal interest rates, justifying the us ise Of Hy(r) instead of He(r). Indeed model Hj(r) allows for deciding the nature of the trend, once the cointegration rank is determined. 742 The analysis The analysis is conducted with MALCOLM 2.9 (Mosconi, 1998). This program is written in RATS and can estimate all of the models described in Section 7.3. The previous model is implemented in a straightfor- ward manner in MALCOLM. The number of breaks and dates of the last observations in each period have to be specified a priori, If another programme, such as CATS version 1 (Hansen and Juselius, 1995), is ch (+ josen, then the intervention dummy variables times the trend, 1)D;-k41, have to be entered as exogenous variables? This implies that automatically AtD,_4,..., A(t — k + 1)Dp_ey1-(k-1) Will How to Deal with Structural Breaks? 207 Table 7.1 Maximum lag analysis k Akaike Hannan-Quinn Schwartz Godfrey? x2,,(100) 1 31633-30815 -29.578 0.029 2 -32872 -31.676 —-29.868 0.910 3 ~32.608 31.035 - 28.656 0.632 4 ~32462 © 30.485 ~27.483 0.900 5 -32560 30.199 -26.617 0.999 Notes: Sp-values for the Godfrey tests The reader should refer to the Appendix for more details on available softwares such as CATS (versions 1 and 2), Eviews (version 5), Malcolm (version 2.9) and Microfit (version 4.1). ‘The maximum lag k was selected to be 2 on the basis of the usual information criteria (see Table 7.1) and the autocorrelation tests. All of the information criteria agree on a value of k = 2. Itisalso the frst lag for which the Godfrey test for fourth order autocorrelation is insignificant. ‘The normality tests (Table 7.2) show some problem with kurtosis for the money and bond yield equations. There is, however, no problem with skewness in any of the equations. Johansen et al. (2000) point out that all residual-based misspecification tests should be modified to take into account the fact that the first k residuals of each period are set to zero because of the presence of indicator dummies. This could explain some of the problems with kurtosis. Juselius (1996) identifies three cointegrating relationships among those five variables using slightly different data from 1984:1 to 1994:1. She also analyses the model using CATS version 1 and thus specifies the intervention dummy variables as exogenous. The first cointegrating relationship is a money-demand relationship where velocity is related to both interest rates. The second relation is a short-run Phillips curve, and the third is a stationary real bond rate. Therefore, the cointegration rank is expected to be three. ‘The tests for cointegration are reported in Table 7.3. Both the 90 per cent Gamma approximation (Johansen et al., 2000) and the Osterwald- Lenum (1992) critical values are presented. As expected, the Gamma critical values are much larger. Using the Osterwald-Lenum (1992) crit- ical values, we would actually conclude that all variables are stationary. How to Deal with Structural Breaks? 209 Table 7.4 Exclusion from the cointegrating relations tests x2@) (p-values) Trend Trend m-p sdp iy ig Ap period 1 period 2 0.003 0.000 0.000 0.091 0.002 0.000 0.000 Notes: All tests are done under the assumption that r= 3. Table 7.5 Weak exogeneity tests x2(3) (p-values) m-p gdp is ia Ap 0.022 0.000 0.026 0.486 0.005 ‘Notes: All tests are done under the assumption that r= 3. Table 7.6. Stationarity tests x2(3) (p-values) m-p edo ty ja Ae 0.006 0.044 0.066 0.019 0.014 Notes: All testsare done under the assumption that r= 3, Single stationarity tests are reported in Table 7.6. These unit root tests are carried out by testing restrictions on the cointegrating vectors. In all cases, the tests were done under the assumption that the trend breaks were present in the cointegrating relationships (as expected, excluding the trends does decrease the p-values dramatically). In all cases, we reject the null of stationarity at the 10 per cent level; only for the bond yield do we not reject the null at the S per cent level. Since we have three cointegrating relationships, it is necessary to test for identification restrictions. We consider the same three trend stationary linear combinations as in Juselius (1996): 21¢ = me ~ Pe — SdPt + Br3int + Braiat Zat = Sdpr + Bos Spt How to Deal with Structural Breaks? 211 Table 7.7, The long-run model using MALCOLM ‘The cointegrating vectors (standard errors in parentheses) Variable A Ba bs mp 1 0 0 ap “1 1 ° iy ~3.2 ° 1 (0.003) iy 0.005 0 o (0.003) ap 0 ~0.285 1 (0.037) ‘ oi 0.007 0.040 (0.039) (0.0001) (€.013) Ween 0577 =0.003 0.182 (0.018) (0.00006) (0.006) The speed of adjustment coefficients (c-ratios in parentheses)? Variable a a a as m-p -0.038 0.037 0.122 (1915) (0.755) (1.950) stp 0.078 0.630 0.235 (2.901) (9.965) (2.943) iy 1.068 11.369 3.187 (0810) 466) (0.770) ja 1311 2.561 4.002 (1.045) 820) 016) 4p 0.246 0.405 0.769 (6.820) 6.861) (5.807) LR test on the restrictions x2(3) = 7.55, p-value = 0.056 Notes: #The t-ratios on the speed of adjustment parameters are not computed automatically by MALCOLM. rank tests and the subsequently estimated models. This is the case for two important reasons: 1, in the presence of intervention dummies, the standard cointegration rank tests critical values are no longer valid; and 2, how the intervention dummies are included in the model matters. How to Deal with Structural Breaks? 213 7.5 Conclusion In the last decade, applied econometricians have usually treated structural breaks in VAR models in an ad hoc fashion. Intervention dummies have been included, but with little care given to their spec- ification. In this chapter, three models of interest in applications have been considered, and a detailed account of the specification and inclusion of intervention dummies in those cases has been given. Statistical theory for those cases has been developed in Johansen et al. (2000). The empirical application clearly shows that how the interven- tion dummy variables are included does matter. It also illustrates that using the right cointegration rank tests critical values is of the utmost importance. Appendix to Chapter 7 In this appendix, an overview of the capabilities of CATS® (versions 1 and 2), * EViews (version 5), MALCOLM (version 2.9) and Microfit (version 4.1) for han. dling cointegration tests and VECM estimation in the presence of structural breaks is presented. It Is impossible to do justice to these four programs in a short overview. Both EViews and Microfit are general econometrics package which cover a very wide range of econometric techniques. Both these programs are at the cutting edge in many econometric area. It is easier to compare CATS and MALCOLM since they both specialise in the study of VAR models, Both are a collection of RATS procedures, and consequently, itis possible to know exactly what each procedure does and how it does it. Further. ‘more, the programs can be adapted to any particular case one might encounter. EViews and Microft are ‘black boxes’. Therefore, itis impossible to edit the original programs. We have to rely totally on what is presented in the manuals. It should be mentioned that these four programs are all menu driven and easy to use. MAL- COM and Microfit have the inconvenience of having some menus being hidden inside menus, Finding the right menu that will allow you to conduct the test or estimation that you desire is sometimes, cumbersome for a novice user. The man- uals have to be read (which of course one should always do). They are, however, excellent. Table AT summarises the important cointegration related capabilities of the four programs. This list is by no means exhaustive as is discussed in Section 7A. We start this appendix by evaluating, in Section A.1 and A.2, the capabilities of those four programs in the context of the estimation of the H.(r) model: eta +L Daileiter ka +Sonay, Ba)

You might also like