
M8_Modelo_MA_Returns.ipynb

Importing the relevant packages

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.graphics.tsaplots as sgt
import statsmodels.tsa.stattools as sts
# from statsmodels.tsa.arima_model import ARMA  # deprecated API; ARIMA from statsmodels.tsa.arima.model is used below
from scipy.stats.distributions import chi2
from math import sqrt
import seaborn as sns
sns.set()

Importing the Data and Pre-processing

raw_csv_data = pd.read_csv("uspopulation.csv")
df_comp = raw_csv_data.copy()
# parse the DATE column as datetimes before using it as the index
df_comp["DATE"] = pd.to_datetime(df_comp.DATE, dayfirst = True)
df_comp.set_index("DATE", inplace=True)
df_comp = df_comp.fillna(method='ffill')  # forward-fill any missing values
df = pd.read_csv('./uspopulation.csv', index_col='DATE', parse_dates=True)
df.index.freq = 'MS'  # monthly (month-start) data
df.head()

PopEst

DATE

2011-01-01 311037

2011-02-01 311189

2011-03-01 311351

2011-04-01 311522

2011-05-01 311699

df_comp['market_value']=df_comp.PopEst

size = int(len(df_comp)*0.8)
df, df_test = df_comp.iloc[:size], df_comp.iloc[size:]


The LLR Test


Used to compare two nested models (a likelihood-ratio test).

def LLR_test(mod_1, mod_2, DF = 1):
    # Likelihood-ratio test for two fitted, nested models
    L1 = mod_1.llf                  # log likelihood of the restricted model
    L2 = mod_2.llf                  # log likelihood of the extended model
    LR = (2*(L2-L1))                # LR statistic ~ chi-squared with DF degrees of freedom
    p = chi2.sf(LR, DF).round(3)    # small p => the extended model fits significantly better
    return p
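For intuition (not in the original notebook), a quick worked example using the log likelihoods reported further down for the MA(1) and MA(2) models:

LR = 2 * (322.900 - 303.457)   # ≈ 38.9
p = chi2.sf(LR, 1)             # essentially zero, matching the "LLR test p-value = 0.0" printed below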

import warnings
warnings.filterwarnings("ignore")

Creating Returns

df['returns'] = df.market_value.pct_change(1)*100
# created to obtain a stationary series

df.head()

PopEst market_value returns

DATE

1/1/2011 311037 311037 NaN

2/1/2011 311189 311189 0.048869

3/1/2011 311351 311351 0.052058

4/1/2011 311522 311522 0.054922

5/1/2011 311699 311699 0.056818
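As a quick sanity check (a sketch, assuming df is loaded as above), pct_change(1) * 100 matches the one-period percentage change computed by hand:

manual = (df.market_value / df.market_value.shift(1) - 1) * 100
print(np.allclose(manual.dropna(), df.returns.dropna()))  # expected: True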

ACF for Returns

# skip the first observation, whose return is NaN
sgt.plot_acf(df.returns[1:], zero = False, lags = 40)
plt.title("ACF for Returns", size=24)
plt.show()


Result: not all coefficients are significant, and there are both positive and negative ones; their magnitudes vary widely. Check whether lag 8 is significant and compare models.

sgt.plot_pacf(df.returns[1:], lags = 35, zero = False, method = ('ols'))
plt.title("PACF Pop Returns", size=24)
plt.show()


Result: today's values often move in the opposite direction from yesterday's.

MA(1) for Returns

# MA model with a single lag
from statsmodels.tsa.arima.model import ARIMA
model_ret_ma_1 = ARIMA(df.returns[1:], order=(0,0,1))
results_ret_ma_1 = model_ret_ma_1.fit()
results_ret_ma_1.summary()

SARIMAX Results
Dep. Variable: returns No. Observations: 75
Model: ARIMA(0, 0, 1) Log Likelihood 303.457
Date: Sat, 22 Jul 2023 AIC -600.914
Time: 14:47:25 BIC -593.961
Sample: 02-01-2011 HQIC -598.138
- 04-01-2017
Covariance Type: opg
coef std err z P>|z| [0.025 0.975]
const 0.0582 0.001 62.496 0.000 0.056 0.060
ma.L1 0.9992 4.356 0.229 0.819 -7.538 9.536
sigma2 1.688e-05 7.3e-05 0.231 0.817 -0.000 0.000
Ljung-Box (L1) (Q): 46.53 Jarque-Bera (JB): 2.10
Prob(Q): 0.00 Prob(JB): 0.35
Heteroskedasticity (H): 1.01 Skew: -0.04
Prob(H) (two-sided): 0.99 Kurtosis: 2.19

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).

The MA(1) coefficient is not significant at the 5% level; we fail to reject H0 that it equals zero.

Let's move to higher-order models.

Higher-Lag MA Models for Returns


Fitting models with more than one lag.

model_ret_ma_2 = ARIMA(df.returns[1:], order=(0,0,2))
results_ret_ma_2 = model_ret_ma_2.fit()
print(results_ret_ma_2.summary())
print("\nLLR test p-value = " + str(LLR_test(results_ret_ma_1, results_ret_ma_2)))

SARIMAX Results
==============================================================================
Dep. Variable: returns No. Observations: 75
Model: ARIMA(0, 0, 2) Log Likelihood 322.900
Date: Sat, 22 Jul 2023 AIC -637.800
Time: 14:49:47 BIC -628.530
Sample: 02-01-2011 HQIC -634.098
- 04-01-2017
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
const 0.0581 0.001 53.178 0.000 0.056 0.060
ma.L1 1.3771 0.110 12.522 0.000 1.162 1.593
ma.L2 0.6608 0.117 5.647 0.000 0.431 0.890
sigma2 1.031e-05 2.47e-06 4.180 0.000 5.48e-06 1.51e-05
===================================================================================
Ljung-Box (L1) (Q): 13.55 Jarque-Bera (JB): 2.72
Prob(Q): 0.00 Prob(JB): 0.26
Heteroskedasticity (H): 0.99 Skew: 0.01
Prob(H) (two-sided): 0.98 Kurtosis: 2.07
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).

LLR test p-value = 0.0

The L2 coefficient is significant and the LLR test shows a clear improvement over the MA(1) model.

model_ret_ma_3 = ARIMA(df.returns[1:], order=(0,0,3))
results_ret_ma_3 = model_ret_ma_3.fit()
print(results_ret_ma_3.summary())
print("\nLLR test p-value = " + str(LLR_test(results_ret_ma_2, results_ret_ma_3)))

SARIMAX Results
==============================================================================
Dep. Variable: returns No. Observations: 75
Model: ARIMA(0, 0, 3) Log Likelihood 340.927
Date: Sat, 22 Jul 2023 AIC -671.854
Time: 14:55:44 BIC -660.266
Sample: 02-01-2011 HQIC -667.227
- 04-01-2017
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]

------------------------------------------------------------------------------
const 0.0578 0.001 42.285 0.000 0.055 0.060
ma.L1 1.5230 0.161 9.488 0.000 1.208 1.838
ma.L2 1.4260 0.127 11.255 0.000 1.178 1.674
ma.L3 0.8612 0.149 5.793 0.000 0.570 1.153
sigma2 6.136e-06 1.45e-06 4.240 0.000 3.3e-06 8.97e-06
===================================================================================
Ljung-Box (L1) (Q): 6.83 Jarque-Bera (JB): 1.36
Prob(Q): 0.01 Prob(JB): 0.51
Heteroskedasticity (H): 1.02 Skew: 0.06
Prob(H) (two-sided): 0.96 Kurtosis: 2.35
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).

LLR test p-value = 0.0

The L3 coefficient is significant and the LLR test shows a clear improvement over the MA(2) model.

model_ret_ma_4 = ARIMA(df.returns[1:], order=[0,0,4])
results_ret_ma_4 = model_ret_ma_4.fit()
print(results_ret_ma_4.summary())
print("\nLLR test p-value = " + str(LLR_test(results_ret_ma_3, results_ret_ma_4)))

SARIMAX Results
==============================================================================
Dep. Variable: returns No. Observations: 75
Model: ARIMA(0, 0, 4) Log Likelihood 332.098
Date: Sat, 22 Jul 2023 AIC -652.196
Time: 14:56:44 BIC -638.291
Sample: 02-01-2011 HQIC -646.644
- 04-01-2017
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
const 0.0580 0.002 38.674 0.000 0.055 0.061
ma.L1 1.9749 0.156 12.619 0.000 1.668 2.282
ma.L2 1.5225 0.323 4.718 0.000 0.890 2.155
ma.L3 0.3218 0.312 1.031 0.303 -0.290 0.934
ma.L4 -0.1772 0.140 -1.265 0.206 -0.452 0.097
sigma2 7.752e-06 1.61e-06 4.816 0.000 4.6e-06 1.09e-05
===================================================================================
Ljung-Box (L1) (Q): 0.31 Jarque-Bera (JB): 1.54
Prob(Q): 0.58 Prob(JB): 0.46
Heteroskedasticity (H): 0.76 Skew: 0.06
Prob(H) (two-sided): 0.50 Kurtosis: 2.31
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).


LLR test p-value = 1.0

The L3 and L4 coefficients are not significant here, and the LLR test (p = 1.0) shows no improvement over the MA(3) model.

model_ret_ma_5 = ARIMA(df.returns[1:], order=[0,0,5])
results_ret_ma_5 = model_ret_ma_5.fit()
print(results_ret_ma_5.summary())
print("\nLLR test p-value = " + str(LLR_test(results_ret_ma_4, results_ret_ma_5)))

SARIMAX Results
==============================================================================
Dep. Variable: returns No. Observations: 75
Model: ARIMA(0, 0, 5) Log Likelihood 323.386
Date: Sat, 22 Jul 2023 AIC -632.772
Time: 14:57:18 BIC -616.549
Sample: 02-01-2011 HQIC -626.294
- 04-01-2017
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
const 0.0589 0.000 180.640 0.000 0.058 0.059
ma.L1 1.3153 0.127 10.334 0.000 1.066 1.565
ma.L2 0.5813 0.192 3.035 0.002 0.206 0.957
ma.L3 -0.5649 0.153 -3.702 0.000 -0.864 -0.266
ma.L4 -1.0381 0.172 -6.052 0.000 -1.374 -0.702
ma.L5 -0.6077 0.126 -4.824 0.000 -0.855 -0.361
sigma2 9.921e-06 2.29e-06 4.329 0.000 5.43e-06 1.44e-05
===================================================================================
Ljung-Box (L1) (Q): 13.59 Jarque-Bera (JB): 1.25
Prob(Q): 0.00 Prob(JB): 0.54
Heteroskedasticity (H): 1.59 Skew: -0.15
Prob(H) (two-sided): 0.25 Kurtosis: 2.45
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).

LLR test p-value = 1.0

The L5 coefficient is significant, but the LLR test (p = 1.0) shows the MA(5) model is not an improvement over the MA(4) model (its log likelihood is actually lower).

model_ret_ma_6 = ARIMA(df.returns[1:], order=[0,0,6])
results_ret_ma_6 = model_ret_ma_6.fit()
print(results_ret_ma_6.summary())
print("\nLLR test p-value = " + str(LLR_test(results_ret_ma_5, results_ret_ma_6)))

SARIMAX Results
==============================================================================
Dep. Variable: returns No. Observations: 75
Model: ARIMA(0, 0, 6) Log Likelihood 354.716
Date: Sat, 22 Jul 2023 AIC -693.433
Time: 14:57:28 BIC -674.893
Sample: 02-01-2011 HQIC -686.030
- 04-01-2017
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
const 0.0578 0.002 24.052 0.000 0.053 0.062
ma.L1 1.6503 0.131 12.597 0.000 1.394 1.907
ma.L2 1.9516 0.213 9.178 0.000 1.535 2.368
ma.L3 2.0209 0.286 7.069 0.000 1.461 2.581
ma.L4 1.5076 0.309 4.876 0.000 0.902 2.114
ma.L5 1.1259 0.244 4.611 0.000 0.647 1.605
ma.L6 0.5841 0.144 4.061 0.000 0.302 0.866
sigma2 4.201e-06 7.77e-07 5.408 0.000 2.68e-06 5.72e-06
===================================================================================
Ljung-Box (L1) (Q): 0.53 Jarque-Bera (JB): 1.92
Prob(Q): 0.47 Prob(JB): 0.38
Heteroskedasticity (H): 0.98 Skew: 0.39
Prob(H) (two-sided): 0.95 Kurtosis: 2.86
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).

LLR test p-value = 0.0

The L6 coefficient is significant and the LLR test shows a clear improvement over the MA(5) model.

model_ret_ma_7 = ARIMA(df.returns[1:], order=[0,0,7])
results_ret_ma_7 = model_ret_ma_7.fit()
print(results_ret_ma_7.summary())
print("\nLLR test p-value = " + str(LLR_test(results_ret_ma_6, results_ret_ma_7)))

SARIMAX Results
==============================================================================
Dep. Variable: returns No. Observations: 75
Model: ARIMA(0, 0, 7) Log Likelihood 345.510
Date: Sat, 22 Jul 2023 AIC -673.020
Time: 14:57:37 BIC -652.162
Sample: 02-01-2011 HQIC -664.691
- 04-01-2017
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
const 0.0590 0.000 247.214 0.000 0.058 0.059
ma.L1 1.4894 0.743 2.006 0.045 0.034 2.945
ma.L2 0.9874 1.947 0.507 0.612 -2.829 4.804
ma.L3 0.4782 2.873 0.166 0.868 -5.153 6.109
ma.L4 -0.5791 3.066 -0.189 0.850 -6.589 5.431

ma.L5 -1.2322 2.563 -0.481 0.631 -6.256 3.792
ma.L6 -1.3842 1.822 -0.760 0.447 -4.955 2.187
ma.L7 -0.7319 0.725 -1.010 0.313 -2.153 0.689
sigma2 5.575e-06 5.27e-06 1.057 0.290 -4.76e-06 1.59e-05
===================================================================================
Ljung-Box (L1) (Q): 4.12 Jarque-Bera (JB): 2.81
Prob(Q): 0.04 Prob(JB): 0.25
Heteroskedasticity (H): 1.54 Skew: -0.28
Prob(H) (two-sided): 0.29 Kurtosis: 2.23
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).

LLR test p-value = 1.0

The L7 coefficient is not significant and the LLR test shows no improvement over the MA(6) model.

model_ret_ma_8 = ARIMA(df.returns[1:], order=[0,0,8])
results_ret_ma_8 = model_ret_ma_8.fit()
print(results_ret_ma_8.summary())
print("\nLLR test p-value = " + str(LLR_test(results_ret_ma_7, results_ret_ma_8)))

SARIMAX Results
==============================================================================
Dep. Variable: returns No. Observations: 75
Model: ARIMA(0, 0, 8) Log Likelihood 356.404
Date: Sat, 22 Jul 2023 AIC -692.809
Time: 14:57:49 BIC -669.634
Sample: 02-01-2011 HQIC -683.555
- 04-01-2017
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
const 0.0580 0.002 36.690 0.000 0.055 0.061
ma.L1 1.6304 0.169 9.676 0.000 1.300 1.961
ma.L2 1.7868 0.274 6.528 0.000 1.250 2.323
ma.L3 1.7277 0.378 4.567 0.000 0.986 2.469
ma.L4 1.0764 0.456 2.358 0.018 0.182 1.971
ma.L5 0.4436 0.458 0.968 0.333 -0.455 1.342
ma.L6 -0.1641 0.403 -0.407 0.684 -0.954 0.626
ma.L7 -0.6076 0.308 -1.975 0.048 -1.211 -0.005
ma.L8 -0.3725 0.170 -2.194 0.028 -0.705 -0.040
sigma2 3.928e-06 8.53e-07 4.606 0.000 2.26e-06 5.6e-06
===================================================================================
Ljung-Box (L1) (Q): 0.41 Jarque-Bera (JB): 0.98
Prob(Q): 0.52 Prob(JB): 0.61
Heteroskedasticity (H): 0.88 Skew: 0.28
Prob(H) (two-sided): 0.76 Kurtosis: 2.96
===================================================================================

Warnings:

[1] Covariance matrix calculated using the outer product of gradients (complex-step).

LLR test p-value = 0.0

The L8 coefficient is significant and the LLR test shows an improvement over the MA(7) model.

That leaves MA(6) or MA(8); let's compare these two models directly.

# DF = 2: the MA(8) model has two more MA terms than the MA(6) model
LLR_test(results_ret_ma_6, results_ret_ma_8, DF = 2)

0.185

p = 0.185 > 0.05: not significant. The MA(8) model is not better than the MA(6) model for returns, so we keep MA(6).
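An optional cross-check (not in the original notebook): fitting all orders in one loop and ranking by AIC leads to the same choice, since the MA(6) model has the lowest AIC among the summaries above (-693.4):

fits = {q: ARIMA(df.returns[1:], order=(0, 0, q)).fit() for q in range(1, 9)}
aic_table = pd.DataFrame({q: {"llf": f.llf, "aic": f.aic} for q, f in fits.items()}).T
print(aic_table.sort_values("aic"))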

Residuals for Returns

df['res_ret_ma_6'] = results_ret_ma_6.resid[1:]

print("The mean of the residuals is " + str(round(df.res_ret_ma_6.mean(),3)) + "\nThe variance of the residuals is " + str(round(df.res_ret_ma_6.var(),3)))

The mean of the residuals is 0.0
The variance of the residuals is 0.0

# standard deviation of the residuals
round(sqrt(df.res_ret_ma_6.var()),3)

0.002

Under the 3-sigma rule, the residuals should fall within roughly ±3 × 0.002 ≈ ±0.006 percentage points of zero, so even in the worst case the model misses the monthly return by only a few thousandths of a percentage point.
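A minimal sketch of that bound, using the standard deviation computed above:

sigma = sqrt(df.res_ret_ma_6.var())
print("3-sigma band: +/- " + str(round(3*sigma, 4)) + " percentage points")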

# do the residuals resemble white noise?
df.res_ret_ma_6[1:].plot(figsize = (20,5))
plt.title("Residuals of Returns", size = 24)
plt.show()


We can conclude that the residuals look fairly random. Let's test them for stationarity.

sts.adfuller(df.res_ret_ma_6[2:])

(-1.2340989648608862,
0.6587579453004929,
12,
61,
{'1%': -3.542412746661615,
'5%': -2.910236235808284,
'10%': -2.5927445767266866},
-599.7205888004464)

p = 0.66 > 0.05: the test is not significant, so we cannot reject the unit-root hypothesis; the residuals are not stationary.
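For readability (a small sketch), the tuple returned by adfuller can be unpacked into named values:

adf_stat, p_value, used_lags, n_obs, crit_vals, icbest = sts.adfuller(df.res_ret_ma_6[2:])
print("ADF statistic:", round(adf_stat, 3), " p-value:", round(p_value, 3))
print("Critical values:", crit_vals)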

# examine the residual autocorrelations; they should be non-significant
sgt.plot_acf(df.res_ret_ma_6[2:], zero = False, lags = 35)
plt.title("ACF Of Residuals for Returns",size=24)
plt.show()


Remember the model is MA(6): up to that lag the residual autocorrelations are not significant (a fairly good model), and only around lag 12 does a significant coefficient appear. Values further in the past lose relevance.
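Not in the original notebook, but a Ljung-Box test can formalize this visual check (a minimal sketch; acorr_ljungbox comes from statsmodels.stats.diagnostic):

from statsmodels.stats.diagnostic import acorr_ljungbox
# joint test that the first 6 (and 12) residual autocorrelations are zero;
# large p-values support treating the residuals as white noise
print(acorr_ljungbox(df.res_ret_ma_6[2:], lags=[6, 12], return_df=True))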

A new time-series data transformation: normalization.

Normalized Prices

# we need a benchmark value (any observation works)
# here we use the first period
benchmark = df.market_value.iloc[0]
df['norm'] = df.market_value.div(benchmark).mul(100)

df.head()

PopEst market_value returns res_ret_ma_6 norm

DATE

1/1/2011 311037 311037 NaN NaN 100.000000

2/1/2011 311189 311189 0.048869 NaN 100.048869

3/1/2011 311351 311351 0.052058 0.002452 100.100953

4/1/2011 311522 311522 0.054922 -0.000081 100.155930

5/1/2011 311699 311699 0.056818 -0.000451 100.212836
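As a quick sanity check (a sketch, not part of the original notebook), the transformation is invertible, so the original prices can be recovered exactly from the normalized series and the benchmark:

recovered = df['norm'].mul(benchmark).div(100)
print(np.allclose(recovered, df.market_value))  # expected: True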

# Dickey-Fuller test for the new variable
sts.adfuller(df['norm'])


(-0.8504944096583017,
0.803838113312104,
12,
63,
{'1%': -3.5386953618719676,
'5%': -2.9086446751210775,
'10%': -2.591896782564878},
-628.2543773965115)

p = 0.80 > 0.05: the series is not stationary.
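This is expected: normalization is just division by a constant, so it should not change the ADF result. As a sketch (not run in the original), the statistic for raw and normalized prices should match almost exactly:

# the ADF regression t-statistic is invariant to rescaling the series by a constant
print(sts.adfuller(df.market_value)[0], sts.adfuller(df['norm'])[0])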

Normalized Returns

bench_ret = df.returns.iloc[1]
df['norm_ret'] = df.returns.div(bench_ret).mul(100)

df.head()

PopEst market_value returns res_ret_ma_6 norm norm_ret

DATE

1/1/2011 311037 311037 NaN NaN 100.000000 NaN

2/1/2011 311189 311189 0.048869 NaN 100.048869 100.000000

3/1/2011 311351 311351 0.052058 0.002452 100.100953 106.526889

4/1/2011 311522 311522 0.054922 -0.000081 100.155930 112.386543

5/1/2011 311699 311699 0.056818 -0.000451 100.212836 116.266075

sts.adfuller(df.norm_ret[1:])

(-1.160474131441898,
0.6903635781615368,
12,
62,
{'1%': -3.540522678829176,
'5%': -2.9094272025108254,
'10%': -2.5923136524453696},
320.91804974071215)

p = 0.69 > 0.05: the series is not stationary.

# autocorrelation function of the normalized returns
sgt.plot_acf(df.norm_ret[1:], zero = False, lags = 40)
plt.title("ACF of Normalized Returns",size=24)
plt.show()

# starting directly with MA(6), the best model found above
model_norm_ret_ma_6 = ARIMA(df.norm_ret[1:], order=(0,0,6))
results_norm_ret_ma_6 = model_norm_ret_ma_6.fit()
results_norm_ret_ma_6.summary()


SARIMAX Results
Dep. Variable: norm_ret No. Observations: 75
Model: ARIMA(0, 0, 6) Log Likelihood -224.027
Date: Sat, 22 Jul 2023 AIC 464.054
Time: 15:42:55 BIC 482.594
Sample: 02-01-2011 HQIC 471.457
- 04-01-2017
Covariance Type: opg
coef std err z P>|z| [0.025 0.975]
const 119.0854 4.090 29.113 0.000 111.068 127.102
ma.L1 1.7551 1.205 1.457 0.145 -0.606 4.116
ma.L2 1.6510 1.346 1.227 0.220 -0.986 4.288
ma.L3 1.6499 1.987 0.830 0.406 -2.244 5.544
ma.L4 1.6618 2.502 0.664 0.507 -3.243 6.567
ma.L5 0.8021 1.527 0.525 0.599 -2.191 3.795
ma.L6 -0.1018 0.305 -0.334 0.739 -0.700 0.496
sigma2 18.6827 33.662 0.555 0.579 -47.293 84.659
Ljung-Box (L1) (Q): 0.00 Jarque-Bera (JB): 2.18
Prob(Q): 0.97 Prob(JB): 0.34
Heteroskedasticity (H): 1.13 Skew: 0.15
Prob(H) (two-sided): 0.76 Kurtosis: 2.22

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).

# creating the residuals of the normalized-returns model
df['res_norm_ret_ma_6'] = results_norm_ret_ma_6.resid[1:]

sts.adfuller(df.res_norm_ret_ma_6[2:])

(-1.5322000144479382,
0.5175266061826727,
11,
62,
{'1%': -3.540522678829176,
'5%': -2.9094272025108254,
'10%': -2.5923136524453696},
330.5280283158778)

p = 0.52 > 0.05: the residuals are not stationary.

df.res_norm_ret_ma_6[1:].plot(figsize=(20,5))
plt.title("Residuals of Normalized Returns",size=24)
plt.show()


# ACF of the residuals of the normalized-returns model
sgt.plot_acf(df.res_norm_ret_ma_6[2:], zero = False, lags = 40)
plt.title("ACF Of Residuals for Normalized Returns",size=24)
plt.show()

MA Models For Prices


MA models for non-stationary data.

sgt.plot_acf(df.market_value, zero = False, lags = 40)
plt.title("ACF for Prices", size=20)
plt.show()


The plot suggests an extremely high-order (effectively infinite) model, but let's start with an MA(1).

model_ma_1 = ARIMA(df.market_value, order=(0,0,1))
results_ma_1 = model_ma_1.fit()
results_ma_1.summary()

SARIMAX Results
Dep. Variable: market_value No. Observations: 76
Model: ARIMA(0, 0, 1) Log Likelihood -691.494
Date: Sat, 22 Jul 2023 AIC 1388.989
Time: 15:49:49 BIC 1395.981
Sample: 01-01-2011 HQIC 1391.783
- 04-01-2017
Covariance Type: opg
coef std err z P>|z| [0.025 0.975]
const 3.18e+05 475.067 669.382 0.000 3.17e+05 3.19e+05
ma.L1 1.0000 0.237 4.219 0.000 0.535 1.464
sigma2 4.503e+06 6.31e-05 7.14e+10 0.000 4.5e+06 4.5e+06
Ljung-Box (L1) (Q): 68.33 Jarque-Bera (JB): 3.32
Prob(Q): 0.00 Prob(JB): 0.19
Heteroskedasticity (H): 1.01 Skew: -0.08
Prob(H) (two-sided): 0.98 Kurtosis: 1.99

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 4.69e+29. Standard errors may be unstable.


We see that both the constant and the L1 coefficient are significant.
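To see why a low-order MA model still falls short for non-stationary prices, one can repeat the residual ACF check used earlier for returns (a sketch, not in the original notebook); the residuals should remain strongly autocorrelated:

sgt.plot_acf(results_ma_1.resid, zero = False, lags = 40)
plt.title("ACF of Residuals for Prices (MA(1))", size=20)
plt.show()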


