1. Multiplier-accelerator model

C_t = a_1 + b_11 R_t + b_12 C_{t-1} + ε_1,
I_t = a_2 + b_21 (R_t - R_{t-1}) + ε_2,
R_t = C_t + I_t,

where C - consumption expenditure;
R - revenue;
I - investment;
t - current period;
t-1 - previous period.
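Because R_t = C_t + I_t while both C_t and I_t themselves depend on R_t, each period the system has to be solved for R_t first. A minimal sketch of the deterministic part, with purely illustrative coefficient values and initial conditions (none of the numbers come from data):

```python
# Simulate the deterministic part of the multiplier-accelerator model.
# All coefficients (a1, b11, b12, a2, b21) and the initial values C0, R0
# are illustrative assumptions, not estimates.

def simulate(T=20, a1=10.0, b11=0.4, b12=0.3, a2=5.0, b21=0.3,
             C0=50.0, R0=100.0):
    C, R = [C0], [R0]
    for t in range(1, T):
        # C_t = a1 + b11*R_t + b12*C_{t-1}
        # I_t = a2 + b21*(R_t - R_{t-1})
        # R_t = C_t + I_t
        # Substituting the first two equations into the identity gives:
        Rt = (a1 + a2 + b12 * C[t-1] - b21 * R[t-1]) / (1 - b11 - b21)
        C.append(a1 + b11 * Rt + b12 * C[t-1])
        R.append(Rt)
    return C, R

C, R = simulate()
```

The identity R_t = C_t + I_t holds exactly at every step, which is a quick way to check the substitution.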

2. Klein's model (simplified version) / macroeconomic model

C_t = a_1 + b_12 Y_t + b_13 T_t + ε_1,
I_t = a_2 + b_21 Y_t + b_24 K_{t-1} + ε_2,
Y_t = C_t + I_t,

where C - consumption expenditure;
Y - income;
I - investment;
T - taxes;
K - capital;
t - current period;
t-1 - previous period.

3. Conjunctural (business-cycle) model

C_t = a_1 + b_11 Y_t + b_12 C_{t-1} + ε_1,
I_t = a_2 + b_21 r_t + b_22 I_{t-1} + ε_2,
r_t = a_3 + b_31 Y_t + b_32 M_t + ε_3,
Y_t = C_t + I_t + G_t,

where C - consumption expenditure;
Y - GDP;
I - investment;
r - interest rate;
M - money supply;
G - government expenditure;
t - current period;
t-1 - previous period.
4. IS-LM model

Structural (initial) form:
C_t = a_0 + a_1 (Y_t - T_t) + ε_1t
I_t = b_0 + b_1 R_t + ε_2t
L_t = c_0 + c_1 R_t + c_2 Y_t + ε_3t
Y_t = C_t + I_t + G_t
L_t = M_t

E(ε_1t) = 0, E(ε_2t) = 0, E(ε_3t) = 0
Var(ε_1t) = const, Var(ε_2t) = const, Var(ε_3t) = const

5. The Samuelson-Hicks econometric model

Structural (initial) form:
C_t = a_0 + a_1 Y_{t-1} + ε_1t
I_t = b_0 + b_1 (Y_{t-1} - Y_{t-2}) + ε_2t
G_t = g_1 G_{t-1} + ε_3t
Y_t = C_t + I_t + G_t

E(ε_1t) = 0, E(ε_2t) = 0, E(ε_3t) = 0
Var(ε_1t) = const, Var(ε_2t) = const, Var(ε_3t) = const

6. Keynesian models for open and closed economies, with and without government intervention

The Keynesian econometric model of a closed economy without government intervention

Structural (initial) form:
C_t = a_0 + a_1 Y_t + a_2 Y_{t-1} + ε_1t
I_t = b_0 + b_1 Y_t + b_2 Y_{t-1} + ε_2t
Y_t = C_t + I_t + G_t

E(ε_1t) = 0, E(ε_2t) = 0
Var(ε_1t) = const, Var(ε_2t) = const

Reduced form:
C_t = α_0 + α_1 Y_{t-1} + α_2 G_t + μ_0
I_t = β_0 + β_1 Y_{t-1} + β_2 G_t + μ_1
Y_t = γ_0 + γ_1 Y_{t-1} + γ_2 G_t + μ_2
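The reduced-form coefficients come from substituting the behavioural equations into the income identity; a short derivation for the Y_t equation (the C_t and I_t equations follow the same way):

```latex
Y_t = C_t + I_t + G_t
    = (a_0 + b_0) + (a_1 + b_1)\,Y_t + (a_2 + b_2)\,Y_{t-1} + G_t
      + (\varepsilon_{1t} + \varepsilon_{2t})
\quad\Rightarrow\quad
Y_t = \frac{a_0 + b_0}{1 - a_1 - b_1}
    + \frac{a_2 + b_2}{1 - a_1 - b_1}\,Y_{t-1}
    + \frac{1}{1 - a_1 - b_1}\,G_t
    + \frac{\varepsilon_{1t} + \varepsilon_{2t}}{1 - a_1 - b_1}
```

so γ_0 = (a_0 + b_0)/(1 - a_1 - b_1), γ_1 = (a_2 + b_2)/(1 - a_1 - b_1), γ_2 = 1/(1 - a_1 - b_1), and μ_2 = (ε_1t + ε_2t)/(1 - a_1 - b_1).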

The Keynesian econometric model of a closed economy with government intervention

Structural (initial) form:
C_t = a_0 + a_1 (Y_t - T_t) + ε_1t
I_t = b_0 + b_1 Y_t + b_2 R_t + ε_2t
Y_t = C_t + I_t + G_t

E(ε_1t) = 0, E(ε_2t) = 0
Var(ε_1t) = const, Var(ε_2t) = const

Reduced form:
C_t = α_0 + α_1 T_t + α_2 R_t + α_3 G_t + μ_0
I_t = β_0 + β_1 T_t + β_2 R_t + β_3 G_t + μ_1
Y_t = γ_0 + γ_1 T_t + γ_2 R_t + γ_3 G_t + μ_2

The Keynesian econometric model of an open economy

Structural (initial) form:
C_t = a_0 + a_1 Y_t + a_2 C_{t-1} + ε_1t
I_t = b_0 + b_1 Y_t + b_2 r_t + ε_2t
r_t = c_0 + c_1 Y_t + c_2 M_t + c_3 r_{t-1} + ε_3t
Y_t = C_t + I_t + G_t

E(ε_1t) = 0, E(ε_2t) = 0, E(ε_3t) = 0
Var(ε_1t) = const, Var(ε_2t) = const, Var(ε_3t) = const

Reduced form:
C_t = α_0 + α_1 C_{t-1} + α_2 M_t + α_3 r_{t-1} + α_4 G_t + μ_0
I_t = β_0 + β_1 C_{t-1} + β_2 M_t + β_3 r_{t-1} + β_4 G_t + μ_1
r_t = γ_0 + γ_1 C_{t-1} + γ_2 M_t + γ_3 r_{t-1} + γ_4 G_t + μ_2
Y_t = δ_0 + δ_1 C_{t-1} + δ_2 M_t + δ_3 r_{t-1} + δ_4 G_t + μ_3

where M_t - money supply.

Menges model
11. Model specifications: structural, reduced, evaluated

1st example:
Structural:
Reduced:

2nd example from classwork

Examples from the textbook (as in our HW):
Structural:
Estimated:
12. Taking into account the principles of the specification when developing the
structural form of the model

15. Construction of a matrix of pairwise correlations. Identification of the type of correlation, analysis of possible multicollinearity in the regression model.
Multicollinearity occurs when independent variables in a regression model
are correlated. This correlation is a problem because independent variables should
be independent. If the degree of correlation between variables is high enough, it
can cause problems when you fit the model and interpret the results.
A key goal of regression analysis is to isolate the relationship between each
independent variable and the dependent variable. The interpretation of a regression
coefficient is that it represents the mean change in the dependent variable for each
1 unit change in an independent variable when you hold all of the other
independent variables constant. That last portion is crucial for our discussion about
multicollinearity.
The idea is that you can change the value of one independent variable and
not the others. However, when independent variables are correlated, it indicates
that changes in one variable are associated with shifts in another variable. The
stronger the correlation, the more difficult it is to change one variable without
changing another. It becomes difficult for the model to estimate the relationship
between each independent variable and the dependent variable independently
because the independent variables tend to change in unison.
There are two basic kinds of multicollinearity:
Structural multicollinearity: This type occurs when we create a model term
using other terms. In other words, it’s a byproduct of the model that we specify
rather than being present in the data itself. For example, if you square a term X to
model curvature, there is clearly a correlation between X and X².
Data multicollinearity: This type of multicollinearity is present in the data
itself rather than being an artifact of our model. Observational experiments are
more likely to exhibit this kind of multicollinearity.
Correlation is a statistical measure that indicates the extent to which two or
more variables move together. A positive correlation indicates that the variables
increase or decrease together. A negative correlation indicates that if one variable
increases, the other decreases, and vice versa.
The correlation coefficient indicates the strength of the linear relationship
that may exist between two variables.

The correlation matrix for the numeric features (shown as a figure in the
source) indicates high correlations of 0.82 between TotalCharges and
contract_age and 0.65 between TotalCharges and MonthlyCharges. This indicates
a possible problem of multicollinearity and the need for further investigation.
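Both the pairwise correlation matrix and variance inflation factors (VIF, a standard follow-up check) can be computed with plain numpy. The sketch below uses made-up data whose columns imitate the example above; all names and numbers are assumptions:

```python
# Pairwise correlation matrix and variance inflation factors (VIF)
# with plain numpy. Column order: [total, age, monthly], imitating
# TotalCharges / contract_age / MonthlyCharges; all data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 200
monthly = rng.normal(70, 10, n)
age = rng.normal(24, 6, n)
total = monthly * age + rng.normal(0, 50, n)  # built to correlate with both

X = np.column_stack([total, age, monthly])
corr = np.corrcoef(X, rowvar=False)           # 3x3 matrix of pairwise r

def vif(X, j):
    """VIF_j = 1 / (1 - R^2) from regressing column j on the others."""
    y = X[:, j]
    Z = np.delete(X, j, axis=1)
    Z = np.column_stack([np.ones(len(y)), Z])  # add an intercept
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    return 1 / (1 - r2)

vifs = [vif(X, j) for j in range(X.shape[1])]
```

A common rule of thumb treats VIF values above 5 or 10 as signs of problematic multicollinearity.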
16. Construction of scatter diagrams. Analysis of residuals to identify
autocorrelation and heteroscedasticity.
According to this assumption there is a linear relationship between the
features and the target, and linear regression captures only linear
relationships. This can be validated by plotting a scatter plot of each
feature against the target.

The first scatter plot, TV vs. Sales, shows that as the money invested in
TV advertising increases, sales increase roughly linearly; the second, Radio
vs. Sales, also shows a partial linear relationship, although not a complete
one.
Autocorrelation occurs when the residual errors are dependent on each
other. The presence of correlation in the error terms drastically reduces the
model's accuracy. It usually occurs in time-series models, where the next
instant depends on the previous one.
Autocorrelation can be tested with the Durbin-Watson test. The null
hypothesis of the test is that there is no serial correlation. The
Durbin-Watson test statistic is defined as

DW = Σ_{t=2..n} (e_t - e_{t-1})² / Σ_{t=1..n} e_t²

The test statistic is approximately equal to 2(1 - r), where r is the sample
autocorrelation of the residuals. Thus, for r = 0, indicating no serial
correlation, the test statistic equals 2. The statistic always lies between
0 and 4: the closer it is to 0, the more evidence for positive serial
correlation; the closer to 4, the more evidence for negative serial
correlation.
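A quick numeric check of the 2(1 - r) approximation, using simulated AR(1) residuals (the sample size and the 0.7 autoregressive coefficient are assumptions for illustration):

```python
# Compute the Durbin-Watson statistic directly from residuals and
# compare it with the 2*(1 - r) approximation. Residuals are simulated
# with positive serial correlation: an AR(1) process with rho = 0.7.
import numpy as np

rng = np.random.default_rng(1)
e = np.zeros(200)
for t in range(1, 200):
    e[t] = 0.7 * e[t-1] + rng.normal()

dw = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)  # Durbin-Watson statistic
r = np.sum(e[1:] * e[:-1]) / np.sum(e ** 2)    # sample autocorrelation
approx = 2 * (1 - r)
```

Positive serial correlation pushes the statistic well below 2, exactly as the text describes.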
Suppose the regression model we want to test for heteroskedasticity is

y_i = β_1 + β_2 x_i2 + ... + β_K x_iK + e_i

The test we are constructing assumes that the variance of the errors is a
function h of a number of regressors z_s, which may or may not be present in
the initial regression model. The general form of the variance function is

var(y_i) = E(e_i²) = h(α_1 + α_2 z_i2 + ... + α_S z_iS)

The variance var(y_i) is constant only if all the coefficients of the
regressors z are zero, which provides the null hypothesis of the
heteroskedasticity test:

H_0: α_2 = α_3 = ... = α_S = 0
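This is the logic of the Breusch-Pagan-type test: run OLS, regress the squared residuals on the z regressors, and use LM = n·R², which under H0 is asymptotically chi-squared with S - 1 degrees of freedom. A numpy-only sketch on simulated heteroskedastic data (all values are assumptions):

```python
# Auxiliary-regression (Breusch-Pagan style) heteroskedasticity test:
# regress squared OLS residuals on z and use LM = n * R^2.
# Data are simulated so that the error sd grows with z.
import numpy as np

rng = np.random.default_rng(2)
n = 500
z = rng.uniform(1, 5, n)                 # variance-driving regressor
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=z)  # heteroskedastic errors

# Step 1: OLS of y on [1, x], then squared residuals.
X = np.column_stack([np.ones(n), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
e2 = (y - X @ b) ** 2

# Step 2: auxiliary regression of e^2 on [1, z].
Z = np.column_stack([np.ones(n), z])
g, *_ = np.linalg.lstsq(Z, e2, rcond=None)
resid = e2 - Z @ g
r2 = 1 - resid @ resid / np.sum((e2 - e2.mean()) ** 2)
lm = n * r2   # compare with the chi2(1) 5% critical value, 3.84
```

With one z regressor, S - 1 = 1, so LM above 3.84 rejects homoskedasticity at the 5% level.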
Since the presence of heteroskedasticity makes the least-squares standard
errors incorrect, another method is needed to calculate them. White robust
standard errors are such a method.
The R function that does this job is hccm(), which is part of the car package
and yields a heteroskedasticity-robust coefficient covariance matrix. This
matrix can then be used with other functions, such as coeftest() (instead of
summary()), waldtest() (instead of anova()), or linearHypothesis(), to perform
hypothesis testing. The function hccm() takes several arguments, among which
are the model for which we want the robust standard errors and the type of
standard errors we wish to calculate. type can be "constant" (the regular
homoskedastic errors), "hc0", "hc1", "hc2", "hc3", or "hc4"; "hc1" is the
default type in some statistical software packages.
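For readers working in Python rather than R, the HC0/HC1 covariance can be sketched directly from the sandwich formula (X'X)⁻¹ X' diag(e²) X (X'X)⁻¹. This is a minimal illustration on simulated data, not a drop-in replacement for hccm():

```python
# White heteroskedasticity-robust covariance via the sandwich formula:
#   HC0 = (X'X)^-1 X' diag(e^2) X (X'X)^-1
#   HC1 = HC0 * n/(n-k)   (small-sample correction)
# Data are simulated with an error sd that depends on x.
import numpy as np

rng = np.random.default_rng(3)
n, k = 300, 2
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=np.abs(x) + 0.5)  # heteroskedastic

X = np.column_stack([np.ones(n), x])
XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y
e = y - X @ b

meat = X.T @ (X * (e ** 2)[:, None])   # X' diag(e^2) X
cov_hc0 = XtX_inv @ meat @ XtX_inv
cov_hc1 = cov_hc0 * n / (n - k)        # the "hc1" correction
se_hc0 = np.sqrt(np.diag(cov_hc0))
se_hc1 = np.sqrt(np.diag(cov_hc1))
```

The HC1 correction simply scales HC0 up by n/(n - k), so HC1 standard errors are always slightly larger.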

21. Confidence intervals when checking the adequacy of the model

Any hypothetical value of β2 that satisfies (3.58) will therefore automatically be
compatible with the estimate b2, that is, will not be rejected by it. The set of all such
values, given by the interval between the lower and upper limits of the inequality, is
known as the confidence interval for β2.

Note that the center of the confidence interval is b2 itself. The limits are equidistant on
either side. Note also that, since the value of tcrit depends upon the choice of significance
level, the limits will also depend on this choice. If the 5 percent significance level is
adopted, the corresponding confidence interval is known as the 95 percent confidence
interval. If the 1 percent level is chosen, one obtains the 99 percent confidence interval,
and so on.

If the predicted values Yp for the control sample are covered by the confidence
interval, the model is considered adequate; otherwise it is subject to revision.
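The interval itself is just b2 ± t_crit · se(b2). A minimal sketch on simulated data (true slope 2 is an assumption; numpy and scipy are assumed available):

```python
# Confidence interval b2 +/- t_crit * se(b2) for a regression slope,
# computed from simulated data with a true slope of 2.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 100
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)

X = np.column_stack([np.ones(n), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ b
s2 = e @ e / (n - 2)                             # residual variance
se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])  # se of the slope b2

t_crit = stats.t.ppf(0.975, df=n - 2)  # 5% significance -> 95% interval
ci = (b[1] - t_crit * se, b[1] + t_crit * se)
```

As the text notes, the interval is centered on b2 itself, and choosing a smaller significance level (a larger t_crit) widens it.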

22. Tests for the significance of the model coefficients.
