
The economic crisis and the crisis of economics

Title: The necessity for changes in econometric models to make economic crises more predictable
Author: Herwig Urban
Date: July 2011

Abstract: The purpose of this paper is to show how econometric models and methods in the field of operations research need to be developed to serve as good forecasting models for financial and economic crises. First, I give an introduction to some standard econometric models and discuss the problems with these kinds of models. Second, I point out that there are models that are self-organizing systems and genuinely have the ability to learn from historical data. Third, hybrid models are introduced. These models enable us to make better predictions about economies and financial markets in the future. They rest on less restrictive assumptions, so that they can be applied to a wider range of input data. Furthermore, it is discussed which problems with input data can arise and which bad policy decisions can result from them.

1 Introduction

This paper is about the necessity for changes in econometric models in order to make economic crises more predictable. The last decades have shown that economic crises have hardly ever been foreseen. The aim of this paper is to look at various econometric models that are used to describe and predict financial crises and mortgage defaults. It is essential to show why these models have failed to predict economic crises and how this can be changed. Theoretically, there are several different opinions on financial crises. Authors like Ozkan and Sutherland (1995) believe that a family of variables can be used to predict any economic crisis. On the other hand, there is a group of authors who believe that crises can develop without any change in economic fundamentals. The author of the following paper agrees with the first group of authors and therefore discusses models that predict economic crises with respect to different independent variables. These variables can be drawn from broad categories like the external sector, the financial sector, the real sector, the public sector, institutional and structural variables or political variables. Potentially good variables could be the ratio of M2 to reserves or the ratio of the current account to GDP (e.g. Lin et al., 2008, p. 1099ff.). The first section describes some standard econometric models and shows that these models need to be changed to serve as good forecasting models. One statistical technique for predicting bank failures is the discriminant analysis (DA), with its linear, multivariate and quadratic variants. Another method that will be discussed is the logistic regression (logit), which together with the DA was the basic econometric technique for years. Neither model on its own is suitable for predicting economic crises (e.g. Yuliya/Iftekhar, 2010, p. 319). Furthermore, the concept of signaling models will be introduced.
Signaling models look for differences in the independent variables to draw conclusions about the dependent variables and then try to predict forthcoming economic crises. The second section deals with methods that deliver better results and with other possibilities to develop or extend models so that they can predict economic crises in advance. One highly developed class of methods is based on the functioning of the human brain. These intelligence techniques have mathematical and algorithmic elements that model the human nervous system (e.g. Yuliya/Iftekhar, 2010, p. 315). A further subsection discusses how better monetary statistics could have signaled the financial crisis. This section also shows that monetary policy was perhaps more expansionary than believed and so fed the already existing bubbles up to the point of bursting. Furthermore, there is a focus on how various instruments can be combined to achieve better predictive ability. One of the models that combine various methods (including the ones from the first section) is called the IEWS (integrated early warning system) and, compared to applying the methods singly, leads to quite good predictions of bank failures and financial crises. Another combined model is the multivariate adaptive regression splines (MARS) model in combination with fuzzy clustering. This model is quite good at predicting bank failures and economic crises because of its ability to model relationships that are nearly additive.

All in all, the result of the paper shall be that there are models that can predict economic crises to a certain extent. The main point now is to investigate which models can be used in which combination to get results that policymakers can use to stop or dampen future economic crises in advance. A further conclusion to be drawn is the necessity for a variety of methods. Taken on their own, most methods give results that cannot serve as a basis for policy actions; the combination of various methods, however, can provide quite a solid foundation for making policy decisions.

2 Standard econometric models

In this section the focus is on two standard models in operations research. Given the sheer number of econometric models, it is not possible to look at each model separately. To get an idea of different econometric models, Chong (2005) gives a good overview by comparing their performance. Some central results are mentioned briefly in this paper. Some models, like the EWMA (exponentially weighted moving average), which arrives at its predictions by smoothing time series, seem to be quite consistent in their predictions but need both human and computer resources. All the historical models are at the lower end of the spectrum, because these models do not account for the time-varying nature of covariances and variances (Chong, 2005, p. 483). Another result is that even complex models (in implementation and structure) do not give predictions that can be used for political action (e.g. Chong, 2005, p. 482f.). The next two subsections show how such standard econometric models work and where their strengths and weaknesses lie. For this purpose we will look at the discriminant analysis and the logistic regression.

2.1 Discriminant analysis

The discriminant analysis (DA) is an instrument that can classify new observations into different groups. From observations for which the classification is known, parameters for a discriminant function are estimated. Criteria are, for example, the minimization of misclassifications or the maximization of correct classifications. The result is that a new sample is assigned to one of several existing groups (e.g. Toshiyuki, 2006, p. 247). Toshiyuki (2006) develops the mathematical background of the discriminant analysis in great detail. Since the second section is more important to the writer (because it presents methods that are more effective in predicting economic crises), only the weaknesses of the model are discussed here. The most severe weaknesses of these methods are their strong restrictions. The variables to be categorized need to follow a multivariate normal distribution. Moreover, the group dispersion matrices must be equal for all groups, so common variance-covariance matrices are needed. The probabilities of occurrence must be known a priori, and the costs of misclassification must be specified. The problem is that if the normality assumption is violated, the tests of significance and the estimated errors are strongly affected and lead to very poor results. Financial ratios are taken as input data for a discriminant analysis. They were used in previous firm failure studies, where they served to classify existing banks or businesses into either the category of healthy or the category of failing businesses. Ratios well suited for this purpose concern, for example, the liquidity, the profit, the return on investment, the turnover, the leverage, the securities or the capital structure of the investigated businesses. If we regard economic crises, we see that problems arise if the financial ratios are not jointly normally distributed. In such cases either the analysis deteriorates or another method is needed. This has resulted in big discussions about the usefulness of DA in predicting bank failures, mortgage defaults and financial crises. Although the financial ratios are good indicators of company failure and, on a larger scale, of economic failure, the multivariate discriminant analysis can be erroneous, because the significance of the interpretation may be completely lacking. In addition, the probability of misclassification is much higher if the above-mentioned requirements are not met. Another weakness is that each data set to be classified needs its own test of normality. This is a severe problem because it requires a lot of computing time and causes huge costs in making policy decisions. In the worst case the finding could be that the set is not normally distributed at all and the DA has to be discarded (e.g. Karels/Prakash, 1987, p. 573ff.).

2.2 The logistic regression

The logistic regression model (from now on abbreviated as logit model) belongs to the category of limited dependent variable regression models and is primarily used to predict the probabilities of forthcoming crises over the next periods (e.g. Lin et al., 2008, p. 1100). The mathematical development of a simple logit model is quite easy, and its basic steps are shown in this paper. For a deeper understanding and other types of development Berry (2005) provides greater detail. Input to this model is a continuous dependent variable Yc that is not directly observable. For a binary indicator Y the following holds: Y = 1 if Yc ≥ T* and Y = 0 if Yc < T*, where T* is a threshold value. Furthermore, a set of independent variables X has a certain effect on Yc: Yc = Xβ + ε. Because Yc is not observable, this equation cannot be estimated through OLS (ordinary least squares) regression. Instead, maximum likelihood estimation must be used. Under the assumptions that E(ε) = 0 and that ε has a logistic distribution with constant variance, the equation can be estimated, and the probability of a crisis is P(Y = 1 | X) = 1 / (1 + exp(−Xβ)). Logit coefficients provide information about the direction of the effect (positive or negative) that the independent variables have on the dependent variable. One weakness is that the magnitude of the effect remains unknown, because the scale of the latent variable is not known. Logit models are often used (as we will see in section two) for

providing information to other statistics that can give information on the magnitudes of changes (e.g. Berry, 2005, p. 165ff.). For example, logit models can be used to predict the probability of a bank failure. Within this model there are two types of errors, which should be minimized: on the one hand the model can classify banks as healthy although they fail, and on the other hand it can predict that a bank will fail although it is healthy (e.g. Canbas et al., 2005, p. 536). Another limitation of these models is that they cannot identify which variables have abnormal values. As a result they cannot indicate how complex the macroeconomic crisis is (e.g. Lin et al., 2008, p. 1100). The main criticism of logit models is that they can hardly be used on their own. But as we will see in the second section, they are a good instrument when combined with other techniques.

2.3 Signaling Models

The operating principle of signaling models is to find differences in the independent variables between the time before a crisis and the time during a crisis. These differences are then used to examine which variables can be used for predicting crises and which cannot. This is only possible under the assumption that economic crises follow a recurrent systematic pattern. Good examples are that overvaluations of currencies are often seen before a currency crisis, or that declines in asset prices often trigger banking crises. The signals approach is able to optimize a so-called signal-to-noise ratio for different independent variables. Mathematically this ratio looks as follows:

A is the number of times a signal is observed when there really is a crisis, B when there is no crisis. C is the number of times no signal is observed when there is a crisis, D when there is no crisis. The noise-to-signal ratio of an indicator can then be written as

[B / (B + D)] / [A / (A + C)],

i.e. the share of false signals relative to the share of correct signals. Through the optimization of this signal-to-noise ratio the errors of the model are minimized. For a further mathematical development of the model see Lin et al. (2008). Some advantages of this technique are that it can measure the exact contribution of each variable to the crisis prediction and that a summary indicator can be calculated. Points of criticism are that strong correlations between indicators are ignored, that no framework for statistical testing is provided, and the openness to misclassifications, which can strongly influence possible interpretations (e.g. Lin et al., 2008, p. 1100ff.).

3 Intelligence techniques and combined models

This section develops and discusses some models, and combinations of the models already mentioned, to make the economic future more transparent and to possibly foresee economic crises. In a first subsection the concept of neural networks and fuzzy logic is introduced and applied to the field of economics. This method gives quite good prediction results because of the learning ability of the neural networks (see below). Other concepts discussed in this section are the IEWS (the integrated

early warning system), which combines the two concepts of the previous section. In addition, the MARS method (multivariate adaptive regression splines) combined with a fuzzy clustering approach will be introduced.

3.1 Expert systems, fuzzy logic and neural networks

Neural networks (abbreviated NN) belong to the model group of intelligence techniques and have some similarities with the human brain. The relationships between the independent and dependent variables are computed with nonlinear function approximation tools. This so-called connectionist approach connects various network units with a flow of information. This method is more flexible than the ones mentioned above: it does not impose any restrictions on the statistical distributions or properties of the data, which means that the technique is applicable to all kinds of input data. A second advantage of these kinds of models is their nonlinearity. Compared to other models, this approach can also find relationships between variables that do not follow a linear pattern. That is why intelligence techniques provide better model performance and can predict financial crises and defaults earlier and more precisely (e.g. Yuliya/Iftekhar, 2010, p. 315). "The expert system can embed the past experience into the system; fuzzy logic can describe the problem in a way that is close to the human reasoning process and accommodate the inaccuracy and uncertainty associated with the data; the neural network can learn from historical data." (Lin et al., 2008, p. 1099). This quote summarizes the function and interplay of these three methods perfectly. The constraints of this kind of modeling are finding an appropriate data base and the problems of causally explaining the relationships between the variables through the neural network. If it were possible to combine these three methods and bundle their strengths while minimizing their weaknesses, this would be a powerful tool for predicting economic crises and could provide a fundamental basis for policy actions to protect economies from financial crises (e.g. Lin et al., 2008, p. 1099).

3.1.1 Artificial neural network (ANN)

As mentioned above, neural networks are closely related to the functioning of the human brain.
The output values are generated through an implemented function f, which uses the various processing elements (neurons) and their connections. Figure 1 below shows the standard structure of a neural network: an input layer of L nodes (or neurons), a hidden layer of M nodes (or neurons) and an output layer with a single output variable.



Figure 1: Feed forward neural network

Source: Lin et al. (2008) p. 1103.

Each prediction variable is represented by one input neuron. These neurons are connected with those of the hidden layer through the edge weights whij. The outputs of the hidden layer are weighted with wajk and finally transformed into one output variable. The number of input and hidden nodes is closely connected to the complexity of the relationship between the explanatory variables and the variable to be predicted. The special thing about neural networks is their ability to learn, which works by finding the best approximation of the function f, coded in the two sets of weights mentioned above. Two types of learning can be distinguished: supervised and unsupervised learning. The main difference between the two is that in supervised learning the network is given correct input-output data, while in unsupervised learning the network only gets pairs of data but does not know whether the output data is correct. In supervised learning, on which this paper concentrates, the user has to provide the training set. The most widely used algorithm for supervised learning is the backpropagation algorithm. In a first step (the training phase), this algorithm takes the correct input-output pairs and tries to find connections between the neurons such that the calculated output fits the given output. The edge weights are adjusted by minimizing the squared difference between the actual output Yj and the calculated output Oj. The stopping criterion for this process is reaching a threshold value ε:

E = Σj (Yj − Oj)² ≤ ε,

where E is an objective function (e.g. Lin et al., 2008, p. 1104f.). This is an extremely powerful tool because, as Hornik (1989) shows, neural networks are able to approximate any function if there are enough input data and hidden layers with enough neurons. Kim et al. (2004) investigate in their paper the training and testing results of different models. They show that the training error rate of a neural network with a five-node input layer, a seven-node hidden layer and a three-node output layer is only 0.34%. In this training period the artificial neural network was supposed to classify 292 businesses, in stable, unstable and crisis periods, into the categories of bankrupt or healthy businesses. The ANN classified 291 correctly and only 1 business was misclassified, which corresponds to an error rate of 0.34%.
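The training loop just described can be sketched as follows. This is a minimal illustration, not the network of Kim et al. (2004): the toy data, the network size, the learning rate and the stopping threshold are all assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: 2 input indicators -> 1 binary target (illustrative).
X = rng.normal(size=(40, 2))
Y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

L_in, M = 2, 5                              # input and hidden layer sizes
Wh = rng.normal(scale=0.5, size=(L_in, M))  # input-to-hidden weights (whij)
Wa = rng.normal(scale=0.5, size=(M, 1))     # hidden-to-output weights (wajk)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

eta, eps = 0.05, 1e-2                       # learning rate, threshold epsilon
for epoch in range(10000):
    H = sigmoid(X @ Wh)                     # hidden layer activations
    O = sigmoid(H @ Wa)                     # calculated network output
    E = np.sum((Y - O) ** 2)                # objective: squared difference
    if E <= eps:                            # stopping criterion: E below epsilon
        break
    # Backpropagation: adjust both weight matrices along the gradient of E.
    dO = (O - Y) * O * (1 - O)
    dH = (dO @ Wa.T) * H * (1 - H)
    Wa -= eta * (H.T @ dO)
    Wh -= eta * (X.T @ dH)
```

The loop mirrors the description above: forward pass, squared-error objective E, threshold stopping rule, and gradient updates of the two weight matrices.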

The only problem is that the neural network approach suffers from local minima and overfitting problems. A further conclusion is that the artificial neural network used is well suited for predicting the behavior of the Korean economy. The conclusion of the paper regarding neural networks is that these kinds of models are good at predicting economic crises (e.g. Kim et al., 2004, p. 587ff.). Because this example is only illustrative, it has to be taken into account that the conclusions drawn are only valid for this one special case. The reason for mentioning them in this paper is to give an idea of how such networks work and which results they can probably achieve.

3.1.2 Neuro fuzzy

In general, fuzzy logic is about deciding whether an object belongs to a set. For that purpose membership functions are constructed. Instead of mathematical variables the whole model uses linguistic ones. These variables are specified only through ordinary language terms. An example could be the following: IF the current account is high AND GDP is low THEN the chance of a currency crisis is high. A fuzzy logic system is nothing other than a composition of such IF-THEN relations (e.g. Lin et al., 2008, p. 1105). For developing a fuzzy logic it is necessary to clearly specify the linguistic variables and the terms, each of which corresponds to a membership function. These membership functions assign a value to each term. This process is called fuzzification and is the first of three steps in developing a fuzzy logic. An example of different membership functions is shown below.
Figure 2: Membership function

Source: Lin et al. (2008), p. 1105
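Fuzzification with membership functions like those in Figure 2 can be sketched as follows; the triangular shape and the breakpoints are illustrative assumptions, not the functions used by Lin et al. (2008).

```python
def triangular(x, a, b, c):
    """Triangular membership: 0 at a, rising to 1 at b, falling back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Illustrative fuzzification of the linguistic variable "GDP growth" (in %):
# the breakpoints below are assumptions made for this sketch.
mu_low = triangular(1.0, -2.0, 0.0, 2.0)   # degree to which growth of 1.0% is "low"
mu_high = triangular(1.0, 1.0, 3.0, 5.0)   # degree to which it is "high"
```

A crisp input value can thus belong to several terms at once, each to a different degree, which is exactly what the membership functions in Figure 2 express.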

Secondly, the probability of crises and its validity is calculated: "The extent of the validity for the THEN part depends on the extent to which the IF part is satisfied. ... the extent to which the IF part is satisfied is determined as the minimum of the membership function in the IF part." (Lin et al., 2008, p. 1107). The essential purpose of the third step, the defuzzification, is to convert the linguistic variables into numerical values. For that, the proxy values of the individual terms have to be combined, which can be quite difficult because each term gets its own weight. With this so-called gravity method a different influence can be assigned to each influencing variable. To find weights that correctly correspond to reality, the learning ability of the neural network is used (e.g. Lin et al., 2008, p. 1106ff.). These authors also give a good numerical example in their paper, which supports a deeper understanding of the fuzzy logic. Probably the biggest advantage of fuzzy logic is the ability to classify, for example, failures of a bank on a continuous scale and not only on a dichotomous scale like the methods presented in section two. This is possible because the likelihood of failure is predicted using the possibility/probability consistency principle (e.g. Alam et al., 2000, p. 186f.).

3.1.3 Empirical comparison with other models

Evaluating the signals the model gives before crises and during normal periods leads to the conclusion that the neuro fuzzy technique is superior both to the signaling approach and to the logistic regression. Through its ability to learn, the neural network reaches a much higher accuracy rate (about 80 percent) than the other models. A practical advantage of neuro fuzzy networks is the provision of large data bases, which help explain the detailed causal links between the different variables.
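The rule evaluation and defuzzification described in 3.1.2 can be sketched as follows; the membership degrees, the output proxy values and the weighting are illustrative assumptions, since in a neuro fuzzy system the weights would be learned by the network.

```python
# Illustrative neuro-fuzzy inference step. The membership degrees would
# come from fuzzification (step one); the numbers here are assumptions.
mu = {("current_account", "high"): 0.7,
      ("gdp_growth", "low"): 0.4}

# Rule: IF current account is high AND GDP is low THEN chance of crisis is high.
# The AND in the IF part is evaluated with the minimum operator.
firing = min(mu[("current_account", "high")], mu[("gdp_growth", "low")])

# Defuzzification sketch ("gravity method"): each output term has a proxy
# value and a weight, and the weighted average gives a crisp score.
terms = [("crisis_high", 0.9, firing),
         ("crisis_low", 0.1, 1.0 - firing)]
crisis_score = (sum(value * weight for _, value, weight in terms)
                / sum(weight for _, _, weight in terms))
```

The result is a crisis score on a continuous scale, illustrating why fuzzy systems are not limited to a dichotomous healthy/failing classification.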
One point of criticism is the necessity for big data bases in advance. The neural network is only able to learn, and in a second step to deliver good results, if it has enough input data in the training period. In summary, neural networks with fuzzy logic are well suited to dealing with nonlinearity and to predicting financial and economic crises. This kind of method could also be widely used in making policy decisions. Besides warning ahead of economic crises, neural networks also provide information about the causal linkage of the data. The recognition of these causal linkages, too, is only possible if the input data is large and complex enough. But if this is the case, the neural network is able to understand the connections more deeply, and clear decisions regarding the prevention of an economic crisis can be made (e.g. Lin et al., 2008, p. 1116ff.).

3.2 The integrated early warning system (IEWS)

Integrated early warning systems are constructed to predict bank failures in advance. The aim of these predictions is to make regulatory actions possible that can prevent banks from failing or at least minimize the costs of failure. In the IEWS presented below, four parametric models are combined to guarantee good prediction results: discriminant analysis, logit and probit models, and the principal component analysis (PCA). In a first step the PCA helps to discover the financial ratios that influence the banks most and to understand the underlying relationships. Secondly the PCA

calculates factor scores which are used as independent variables in the subsequent parametric models. This IEWS is said to have high predictive ability in determining whether a bank is going bankrupt or not (e.g. Canbas et al., 2005, p. 528ff.).

3.2.1 Principal component analysis

According to Canbas et al. (2005) the main aim of the PCA is to find out which characteristics are important in explaining whether a bank is healthy or not. To use the PCA correctly, some criteria have to be met. There must be a significant relationship between the financial ratios (used to describe the financial condition of the bank) for a data set to be appropriate for a PCA. This relationship can be tested with Bartlett's test of sphericity. If the correlation matrix of the financial ratios were an identity matrix, there would be no correlation in the data and a PCA would not be possible. If the values of the chi-square test for sphericity are large enough and the observed significance levels are small enough, the financial ratios can in a next step be expressed in standardized form, which means that they have a mean of zero and a standard deviation of one. According to Canbas et al. (2005) the variables used in such a model are the following:

Interest Expenses / Average Profitable Assets
Interest Expenses / Average Non-Profitable Assets
(Shareholders' Equity + T. Income) / (Deposits + Non-Deposit Funds)
Interest Income / Interest Expenses
(Shareholders' Equity + T. Income) / (Total Assets + Con. and Com.)
(Shareholders' Equity + T. Income) / Total Assets
Net Working Capital / Total Assets
(Salary and Employee Benefits + Res. for Retire.) / Number of Personnel
Liquid Assets / (Deposits + Non-Deposit Funds)
Interest Expenses / Total Expenses
Liquid Assets / Total Assets
Standard Capital Ratio
These twelve ratios are the most significant for predicting bank failures and are listed from most to least significant. The first three factors extracted from this list explain nearly 80 per cent of the variation in the financial condition of a bank. In the next subsection only the first three factors are used, because they are the only ones with an eigenvalue greater than one, which means that they are the most important dimensions for the explanation.
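The standardization and eigenvalue-based selection described above can be sketched as follows. The random matrix merely stands in for real bank ratios from Canbas et al. (2005), so the retained components carry no economic meaning here; only the mechanics are illustrated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data: 30 banks x 12 financial ratios (random stand-ins).
R = rng.normal(size=(30, 12))
Z = (R - R.mean(axis=0)) / R.std(axis=0)   # standardize: mean 0, std 1 per ratio

C = np.corrcoef(Z, rowvar=False)           # correlation matrix of the ratios
eigvals, eigvecs = np.linalg.eigh(C)       # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]          # sort components largest-first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

keep = eigvals > 1.0                       # keep only eigenvalues greater than one
F = Z @ eigvecs[:, keep]                   # factor scores per bank
```

The columns of F are the factor scores that the IEWS then feeds into the DA, logit and probit models.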

For calculating the factor scores for each bank, the standardized financial ratios are set up as follows:

Zak = (Xak − X̄k) / sk,

with k = 1, ..., 12 and a indexing the investigated banks. For calculating the factor scores from the factor score coefficients the equation below is used:

Faj = Σk wjk · Zak,

where wjk is the factor score coefficient and Faj is the factor score for bank a. The calculated factor scores are now needed as independent inputs for the subsequent analysis with the DA, the logit and the probit models (e.g. Canbas et al., 2005, p. 531ff.).

3.2.2 The discriminant model

"For two populations (failed and healthy banks) it is assumed that the independent variables are distributed within each group according to multivariate normal distribution with different means but equal dispersion matrices." (Canbas et al., 2005, p. 535). The aim of this model is to find a linear combination of the independent variables that maximizes the variance between the populations relative to the variance within the groups. Through the discriminant model, the linear combination of the factor scores derived in the last section provides a D-score for every single bank, for example:

Da = b1·F1a + b2·F2a + b3·F3a,

where Da is the D-score and the F values are the relevant factor scores according to the last section. The coefficients of this model are chosen according to the rule: maximize the ratio of the sum of squares of D-scores between groups to that within groups. To classify a bank into one of the two categories (failed or healthy), some kind of threshold value is needed. In this model this is a so-called cut-off score C:

C = (Na·D̄b + Nb·D̄a) / (Na + Nb),

with C being the cut-off score, Na the number of healthy banks, Nb the number of failed banks, D̄a the average D-score of the healthy banks and D̄b the average D-score of the failed banks. The classification rule is then very simple: if the D-score is larger than the cut-off score, the bank is said to be healthy; if the D-score is smaller than or equal to the cut-off score, the bank is predicted to fail (e.g. Canbas et al., 2005, p. 535f.).

3.2.3 The logit and probit models

The logit and probit models are also constructed to predict whether a bank belongs to the group of failing or of healthy banks. The measures used for this are the financial characteristics of the bank discussed above.
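The D-score rule from the discriminant model can be sketched as follows; the coefficients and the factor scores below are illustrative assumptions, not estimates from Canbas et al. (2005).

```python
# Illustrative D-score classification. The coefficients b and the factor
# scores are made up; real values come from estimating the model.
b = (0.8, -0.5, 0.3)

def d_score(f1, f2, f3):
    return b[0] * f1 + b[1] * f2 + b[2] * f3

healthy = [d_score(1.2, -0.3, 0.5), d_score(0.9, 0.1, 0.2)]    # healthy banks
failed = [d_score(-0.7, 0.8, -0.4), d_score(-1.1, 0.5, 0.0)]   # failed banks

Na, Nb = len(healthy), len(failed)
Da_bar = sum(healthy) / Na      # average D-score of the healthy group
Db_bar = sum(failed) / Nb       # average D-score of the failed group

# Weighted-average cut-off score between the two group means.
C = (Na * Db_bar + Nb * Da_bar) / (Na + Nb)

def classify(d):
    return "healthy" if d > C else "failed"
```

A new bank's factor scores are turned into a D-score and compared against the cut-off C, exactly as described in the classification rule above.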

The equations for predicting the probability of failure of bank a look as follows:

logit: Pa = 1 / (1 + exp(−Za)),

probit: Pa = Φ(Za),

with Za = β1·F1a + β2·F2a + β3·F3a and Φ the standard normal distribution function. The parameters β1 to β3 are found by maximizing the log-likelihood function and are statistically significant. The classification again follows a threshold value, which is in these cases 0.5: if the predicted probability of failure is below that value, the bank is considered healthy; otherwise it is predicted to fail (e.g. Canbas et al., 2005, p. 536f.).

3.2.4 Putting together the IEWS

The IEWS is used as an analytical decision support tool. In this whole model the system parameters are fixed for each test of a single bank; the only things that change are the inputs, the twelve financial ratios mentioned above. One of the advantages of this model is its integration of several single methods, which provides a very solid basis for predicting bank failures. Furthermore, it is a very useful tool because it works with publicly available data, so huge data collection costs can be avoided. Clearly, this tool can be used not only for predicting bank failures but also for failures of other kinds of businesses. Going once more over the functioning of the IEWS, the figure below provides a good overview of the integrated model. In a first step the businesses to be investigated must be chosen. After computing the ratios, their standardized values can be calculated. The PCA then traces the factor scores for each business, which in the next step are used as inputs. In each of the DA, the logit and the probit models some threshold value is calculated and the probability of failing is determined as shown below (e.g. Canbas et al., 2005, p. 537ff.).



Figure 3: The IEWS

Source: Canbas et al. (2005), p. 538.
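The flow in Figure 3 can be condensed into a sketch like the following. All coefficients and the cut-off are illustrative assumptions rather than estimates from Canbas et al. (2005), and the majority vote at the end is a simplification of the paper's combined evaluation.

```python
import math

# Illustrative IEWS step for a single bank: its three factor scores are
# fed to the DA, logit and probit rules and the verdicts are combined.
beta = (0.9, -0.6, 0.4)   # assumed model coefficients (would be estimated)
cutoff_C = 0.0            # assumed DA cut-off score

def iews_verdicts(f1, f2, f3):
    z = beta[0] * f1 + beta[1] * f2 + beta[2] * f3   # common linear score
    da_healthy = z > cutoff_C                        # DA: D-score above cut-off
    p_fail_logit = 1.0 / (1.0 + math.exp(z))         # logit failure probability
    p_fail_probit = 0.5 * (1.0 + math.erf(-z / math.sqrt(2.0)))  # probit (Phi)
    # A failure probability below 0.5 means the model votes "healthy".
    return da_healthy, p_fail_logit < 0.5, p_fail_probit < 0.5

votes = iews_verdicts(1.2, 0.3, 0.5)   # (DA, logit, probit) verdicts
healthy = sum(votes) >= 2              # simple majority as integrated signal
```

The probit probability is obtained from the error function, since Φ(z) = 0.5·(1 + erf(z/√2)); the three single-model verdicts are then merged into one integrated decision.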

3.3 Better monetary statistics

On the one hand, a number of economists claim that monetary policy improved greatly over the last decades and that this is why output volatility has decreased. These economists further argue that central banks have learned to damp the business cycle and so can generate stability. In this paper a contrary opinion is investigated: the case of severe data problems that produced misperceptions of monetary policy. The major problem in this context is the

existence of two different concepts when talking about monetary policy. The two rival concepts are, on the one hand, the monetary aggregate based on microeconomic theory, called divisia, and, on the other hand, the simple summation of monetary aggregates. This section shows that the monetary policy decisions of the central banks were not irrational but were based on bad monetary statistics. In many cases this caused expansionary monetary policies to last for several years, even after the recession they were meant to stop was over. These expansionary policies fed some of the existing bubbles and finally let them burst. The lasting money growth led to speculation and leveraging, and made it possible to get loans with very low collateral values. The reasons for the overly expansionary monetary policy of central banks are discussed below. The main question to clarify is how central banks could fail to see that their monetary policy was causing asset bubbles. The reason why the central banks made decisions that were wrong but not irrational is the existence of the two different kinds of measurement mentioned above. As the figure below shows, the divisia measure of M3 (the green line) has fallen more sharply than the simple-sum measure (the black line). This is the reason why the money supply was higher than the central banks believed: the central banks underestimated the amount of money already in the economy and so continued to fuel the bubbles. In summary, simple-sum aggregation provided bad monetary data, which led to poor decisions by central banks (e.g. Barnett/Chauvet, 2011, p. 6ff.).
Figure 4: Divisia M3 vs. simple-sum M3

Source: Barnett/Chauvet (2011), p. 20.
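For intuition, the difference between the two aggregation concepts can be illustrated with a toy calculation (a minimal sketch; all quantities and interest rates are invented, and the user-cost formula follows the standard Törnqvist-Theil Divisia construction, not a specific series from Barnett/Chauvet):

```python
import math

# Two monetary components, e.g. currency (zero own rate) and an
# interest-bearing deposit. User cost of component i relative to a
# benchmark rate R: pi_i = (R - r_i) / (1 + R).
def user_cost(R, r):
    return (R - r) / (1 + R)

def divisia_growth(m_prev, m_curr, r_prev, r_curr, R_prev, R_curr):
    """Tornqvist-Theil Divisia growth rate of a monetary aggregate."""
    def shares(m, r, R):
        expenditure = [user_cost(R, ri) * mi for mi, ri in zip(m, r)]
        total = sum(expenditure)
        return [e / total for e in expenditure]
    s_prev = shares(m_prev, r_prev, R_prev)
    s_curr = shares(m_curr, r_curr, R_curr)
    # Growth rates of components, weighted by average expenditure shares
    return sum(0.5 * (sp + sc) * math.log(mc / mp)
               for sp, sc, mp, mc in zip(s_prev, s_curr, m_prev, m_curr))

# Only the interest-bearing component (which provides fewer monetary
# services) grows from 100 to 120; the benchmark rate R is constant.
m_prev, m_curr = [100.0, 100.0], [100.0, 120.0]
r_prev, r_curr = [0.00, 0.04], [0.00, 0.04]
R = 0.05

simple_sum_growth = math.log(sum(m_curr) / sum(m_prev))
div_growth = divisia_growth(m_prev, m_curr, r_prev, r_curr, R, R)
print(simple_sum_growth, div_growth)
```

In this toy case the simple-sum growth rate (about 9.5 per cent) overstates the Divisia growth rate (about 3.3 per cent), because all of the growth occurs in the component with a low user cost: the same direction of disagreement as between the two M3 lines in the figure.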

3.4 Fuzzy clustering and multivariate adaptive regression splines (MARS)

This method, like the IEWS presented above, belongs to the group of hybrid approaches. The main characteristic of a hybrid approach is the combination of various methods to achieve better model accuracy. The following hybrid model is based firstly on fuzzy clustering and secondly on a multivariate adaptive regression splines (MARS) method. These techniques are selected for several reasons. Fuzzy clustering fits well for predicting failures because failing businesses often have dissimilar financial properties. MARS, in turn, is good at predicting failures because it is a flexible procedure that can model relationships that are nearly additive: "It essentially builds flexible models by fitting piecewise linear regressions; that is, the non-linearity of a model is approximated through the use of separate regression slopes in different intervals of the variable space" (Andres et al., 2011, p. 1867).

3.4.1 Fuzzy c-means algorithm

The fuzzy c-means algorithm provides a method to assign one data element to two or more clusters, so that an input point can also be placed somewhere between two clusters according to a certain membership probability. To reach this, the following objective function must be minimized:

$J_m = \sum_{i=1}^{N} \sum_{j=1}^{C} u_{ij}^m \, \|x_i - c_j\|^2$,

with $u_{ij}$ being the degree of membership (interpreted as the probability) of the company $x_i$ belonging to the $j$th cluster, $c_j$ the coordinates of the $j$th cluster center, $\|\cdot\|$ the Euclidean norm, and $m > 1$ the fuzziness exponent. The sums are taken from $i = 1$ to $N$, the number of companies, and $j = 1$ to $C$, the number of clusters. Characteristic of fuzzy clustering is an iterative process, in which $u_{ij}$ and $c_j$ are updated according to the following rules:

$u_{ij} = 1 \Big/ \sum_{k=1}^{C} \left( \frac{\|x_i - c_j\|}{\|x_i - c_k\|} \right)^{2/(m-1)}, \qquad c_j = \frac{\sum_{i=1}^{N} u_{ij}^m \, x_i}{\sum_{i=1}^{N} u_{ij}^m}.$
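The update cycle can be sketched in a few lines (a minimal numpy sketch, not Andres et al.'s implementation; the sample "firms" and variable names are invented):

```python
import numpy as np

def fuzzy_c_means(X, C, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns membership matrix U (N x C)
    and cluster centers (C x p)."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    U = rng.random((N, C))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per point
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]        # update c_j
        # Distances of every point to every center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)               # guard against division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)            # update u_ij
    return U, centers

# Two well-separated groups of "firms", each described by two financial ratios
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [1.0, 1.0], [1.1, 1.0], [1.0, 1.1]])
U, centers = fuzzy_c_means(X, C=2)
print(U.round(2))
```

Each row of `U` gives the membership degrees of one firm in the two clusters; for points lying between the groups the memberships would be closer to 0.5 each.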

Each business under investigation is thus assigned to cluster $j$ with probability $u_{ij}$ (e.g. Andres et al., 2011, p. 1868).

3.4.2 Multivariate adaptive regression splines (MARS)

In a MARS model the values of a continuous variable are predicted with a multivariate nonparametric regression technique. A dependent variable $y$, a vector of dimension $(n \times 1)$, is predicted through a set of explanatory variables $x$ of dimension $(n \times p)$ and an error vector $e$ of dimension $(n \times 1)$: $y = f(x) + e$. As a big advantage over the models presented in chapter two, the MARS method does not need to make any statistical assumptions about the relationship between the input and the output variables. This is beneficial because all data can be used, and the risk of screening data, which is quite costly, only to find later that it does not fit the assumptions, is fully eliminated.
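Before turning to the formal construction, the core idea of fitting piecewise linear regressions can be illustrated with a toy example (a simplified single-variable sketch with one fixed knot; it does not implement Andres et al.'s full forward/backward stepwise procedure, and all data are invented):

```python
import numpy as np

def hinge_basis(x, t):
    """Pair of degree-1 truncated (hinge) basis functions around knot t."""
    return np.maximum(t - x, 0.0), np.maximum(x - t, 0.0)

# Invented data: y has a kink at x = 2 (slope 1 below, slope 3 above),
# plus a small amount of noise.
rng = np.random.default_rng(1)
x = np.linspace(0, 4, 200)
y = np.where(x < 2, x, 2 + 3 * (x - 2)) + rng.normal(0, 0.05, x.size)

# Design matrix: intercept plus the two hinge functions at the knot t = 2
left, right = hinge_basis(x, 2.0)
X = np.column_stack([np.ones_like(x), left, right])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares fit of c_0, c_1, c_2
y_hat = X @ coef

rmse = np.sqrt(np.mean((y - y_hat) ** 2))
print(coef.round(2), rmse.round(3))
```

The fit recovers separate slopes on each side of the knot (here roughly $-1$ on the left hinge and $3$ on the right, with intercept about $2$), which is exactly the "separate regression slopes in different intervals of the variable space" that the MARS quotation above describes.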

3.4.3 Constructing the MARS model

The MARS regression model is constructed by fitting basis functions to distinct intervals of the independent variables. Generally, piecewise polynomials, also called splines, have their pieces smoothly connected together; "the joining points of the polynomials are called knots, nodes or breakdown points" (Andres et al., 2011, p. 1868f.). The knots will be named $t$. A spline of degree $q$ has a polynomial function for each segment, built from the following pair of truncated basis functions:

$[-(x - t)]_+^q = \begin{cases} (t - x)^q & \text{if } x < t \\ 0 & \text{otherwise} \end{cases} \qquad [+(x - t)]_+^q = \begin{cases} (x - t)^q & \text{if } x > t \\ 0 & \text{otherwise} \end{cases}$

where $q$ denotes the power to which the splines are raised and defines the degree of smoothness. The MARS model with $M$ basis functions looks like

$f(x) = c_0 + \sum_{m=1}^{M} c_m B_m(x)$,

with $B_m$ being the $m$th basis function and $c_m$ the related coefficient. To construct the final MARS model two further steps are necessary. Firstly, pairs of basis functions are selected through a two-at-a-time forward stepwise procedure. This results in an overfitted model with quite poor prediction ability. To correct this, the nonessential basis functions are removed through a backward stepwise procedure. MARS uses a generalized cross validation (GCV) criterion to decide which basis functions to keep:

$GCV(M) = \frac{\frac{1}{n} \sum_{i=1}^{n} \left[ y_i - f_M(x_i) \right]^2}{\left[ 1 - \frac{C(M)}{n} \right]^2},$

with $C(M)$ being a complexity penalty that increases with the number of basis functions according to $C(M) = (M + 1) + dM$, where $M$ denotes the number of basis functions and $d$ is a penalty for each additional basis function in the model (e.g. Andres et al., 2011, p. 1868f.). Andres et al. (2011) also show how to evaluate the prediction ability of this model with a root mean squared error of cross validation approach.

3.4.4 Applying the MARS model

Firstly a random training database is chosen. In a second step the fuzzy c-means algorithm is run, using the following financial ratios as input: working capital/total assets, retained earnings/total assets, earnings before interest and taxes/total assets, market value of equity/book value of total debt, and sales/total assets. For the final prediction the $C$ clusters are split into two groups, one containing failed businesses and one containing healthy businesses. In a further step the MARS model is trained on the financial ratios, with the failed businesses coded as one and the healthy ones as zero. The MARS model is then applied to the validation data set, and finally a bankruptcy probability is calculated according to

$P_i = \frac{BRC_i}{NBRC_i}$,

with $BRC_i$ being the number of failed businesses for which the output values of the MARS model are lower than for the $i$th business, and $NBRC_i$ being the total number of businesses for which the output value is lower than that of the $i$th business (e.g. Andres et al., 2011, p. 1869ff.).

4 Summary and conclusions

The first section gave a rough overview of the paper and a first impression of the necessity for changes in econometric models. In the second section some standard econometric models were presented and constructed. The main aim of that section was to show which problems occur with such models. On the one hand, these models, considered as independent units, do not provide good predictions of economic or financial crises and bank failures. On the other hand, they rest on strong assumptions about the relationships between the input and output variables, which makes it hard to use them on a wide range of data sets. Moreover, obtaining and testing input data can cause huge costs. In the third section some alternatives and extensions of these models were explained, to show how econometric models and methods in the field of operations research can be used more widely and precisely. There are two main types of models that provide better predictions. Firstly, methods that include some kind of self-organizing system, with the ability to learn from historical data by themselves, provide excellent results in predicting economic crises and bank failures. The second type of well-functioning models are hybrid models, like the IEWS or the MARS model, which combine various simple models to achieve better prediction performance. Furthermore, a short section discussed problems with the input data, which clearly showed that it can be very difficult to find appropriate data for prediction models. Summarizing, there is a necessity for changes in econometric models to prevent economies from failing. The improvements econometric models have to make can take different forms: on the one hand there is the possibility of self-organizing systems and certain kinds of neural networks; on the other hand there is the possibility of combining various existing models. Both of these approaches provide much better results than most existing models. To sum up the arguments, and to echo the title of the paper: there is a strong necessity for changes in econometric models to make economic crises more predictable.

References

Alam, P., Booth, D., Lee, K. and Thordarson, T. (2000), 'The use of fuzzy clustering algorithm and self-organizing neural networks for identifying potentially failing banks: an experimental study', Expert Systems with Applications, vol. 18, p. 185-199.

Andres, D. J., Lorca, P., Cos, J., Francisco, J. de and Sanchez-Lasheras, F. (2011), 'Bankruptcy forecasting: A hybrid approach using Fuzzy c-means clustering and Multivariate Adaptive Regression Splines (MARS)', Expert Systems with Applications, vol. 38, p. 1866-1875.

Barnett, W. A. and Chauvet, M. (2011), 'How better monetary statistics could have signaled the financial crisis', Journal of Econometrics, vol. 161, p. 6-23.

Berry, W. D. (2005), 'Probit/Logit and other binary models', Encyclopedia of Social Measurement, vol. 3, p. 161-169.

Canbas, S., Cabuk, A. and Kilic, S. B. (2005), 'Prediction of commercial bank failure via multivariate statistical analysis of financial structures: The Turkish case', European Journal of Operational Research, vol. 166, p. 528-546.

Chong, J. (2005), 'The forecasting abilities of implied and econometric variance-covariance models across financial measures', Journal of Economics and Business, vol. 57, p. 463-490.

Hornik, K. (1989), 'Multilayer feedforward networks are universal approximators', Neural Networks, vol. 2, p. 359-366.

Karels, G. V. and Prakash, A. (1987), 'Multivariate normality and forecasting of business bankruptcy', Journal of Business Finance and Accounting, vol. 14(4), p. 573-593.

Kim, T. Y., Oh, K. J., Sohn, I. and Hwang, C. (2004), 'Usefulness of artificial neural networks for early warning system of economic crisis', Expert Systems with Applications, vol. 26, p. 583-590.

Lin, C.-S., Khan, H. A., Chang, R.-J. and Wang, J.-C. (2008), 'A new approach to modeling early warning systems for currency crises: Can a machine-learning fuzzy expert system predict the currency crises effectively?', Journal of International Money and Finance, vol. 27, p. 1098-1121.

Ozkan, F. G. and Sutherland, A. (1995), 'Policy measures to avoid a currency crisis', The Economic Journal, vol. 105, p. 510-519.

Toshiyuki, S. (2006), 'DEA-Discriminant Analysis: Methodological comparison among eight discriminant analysis approaches', European Journal of Operational Research, vol. 169, p. 247-272.

Yuliya, D. and Iftekhar, H. (2010), 'Financial crises and bank failures: A review of prediction methods', Omega, vol. 38, p. 315-324.
