
Group 11

ASSET ALLOCATION AND
INVESTMENT STRATEGIES
Assignment 1

Group 11
02519963
02425529
02512457
02469848
02422796

Contents
Question 1
Question 2
Question 3

Question 1

The analysis began by loading several essential packages, including cvxpy, csv, numpy, math, and pandas, which were needed for the data analysis and evaluation. We then imported the data from the given Excel file into Python. After cleaning the data, we computed daily log returns and displayed the results in a table. From these daily log returns we built a dataframe of summary statistics for each index, using functions for the mean, standard deviation (std), skewness, and kurtosis. The first table below presents these statistics for each index. Finally, using the same approach, we constructed the correlation matrix, shown in the second table below.

log_return = np.diff(np.log(stock_price), axis=0)
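The one-line computation above can be placed in context as follows; the file name, sheet layout, and sample prices below are illustrative assumptions, not the report's actual data:

```python
import numpy as np
import pandas as pd

# Hypothetical price table standing in for the Excel file described above.
prices = pd.DataFrame({
    "TSX": [100.0, 101.0, 99.5],
    "SP500": [200.0, 202.0, 201.0],
})
# In the actual analysis this would be something like:
# prices = pd.read_excel("data.xlsx", index_col=0)  # assumed layout

stock_price = prices.to_numpy()
stock_name = list(prices.columns)
# Daily log return: ln(P_t) - ln(P_{t-1}), computed column-wise.
log_return = np.diff(np.log(stock_price), axis=0)
```

With T price observations per index, `log_return` has T-1 rows, one per trading day.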


In [ ]:
# get mean, standard deviation, covariance matrix, skewness, kurtosis
mean = np.mean(log_return, axis=0)
std = np.std(log_return, axis=0)
cov = np.cov(log_return.T)
skew = np.mean(((log_return - mean) / std) ** 3, axis=0)
kurt = np.mean(((log_return - mean) / std) ** 4, axis=0)
autocorr = np.zeros(len(stock_name))
for i in range(len(stock_name)):
    # lag-1 autocorrelation of each index's daily log returns
    autocorr[i] = np.corrcoef(log_return[:-1, i], log_return[1:, i])[0, 1]

df = pd.DataFrame({'mean': mean, 'std': std, 'skew': skew, 'kurt': kurt,
                   'autocorr': autocorr}, index=stock_name)
df = df.round(5)
corr = np.corrcoef(log_return.T)
corr_pd = pd.DataFrame(corr, columns=stock_name, index=stock_name)
corr_pd = corr_pd.round(5)

------------------------ Summary Statistics ----------------------------------

                 mean      std      skew      kurt  autocorr
TSX           0.00012  0.01525  -0.86589  15.28533   0.00723
CAC           0.00002  0.01835   0.37206  20.55424  -0.07581
DAX           0.00021  0.01805   0.41080  19.88828  -0.05845
Eurostoxx50  -0.00004  0.01848   0.28361  19.95257  -0.07720
NIKKEI225     0.00017  0.01616  -0.55218  19.71093  -0.19926
FTSE100      -0.00006  0.01537  -0.27432  13.07999  -0.01960
SP500         0.00037  0.01358  -0.63745  15.00146  -0.15457
IBOVESPA     -0.00006  0.02479  -0.15472  14.05913   0.01295


------------------------ Correlation Matrix ----------------------------------

                 TSX      CAC      DAX  Eurostoxx50  NIKKEI225  FTSE100    SP500  IBOVESPA
TSX          1.00000  0.67723  0.65970      0.66192    0.22234  0.71639  0.77141   0.68563
CAC          0.67723  1.00000  0.95052      0.98484    0.26070  0.84171  0.61242   0.56733
DAX          0.65970  0.95052  1.00000      0.96659    0.24615  0.80985  0.61593   0.54100
Eurostoxx50  0.66192  0.98484  0.96659      1.00000    0.24823  0.83024  0.61119   0.55330
NIKKEI225    0.22234  0.26070  0.24615      0.24823    1.00000  0.34364  0.09931   0.23832
FTSE100      0.71639  0.84171  0.80985      0.83024    0.34364  1.00000  0.61669   0.61785
SP500        0.77141  0.61242  0.61593      0.61119    0.09931  0.61669  1.00000   0.61008
IBOVESPA     0.68563  0.56733  0.54100      0.55330    0.23832  0.61785  0.61008   1.00000

Normality Test

In our analysis, we evaluated whether the return process can reasonably be approximated by an i.i.d. Gaussian distribution. Initial observations already suggested that this approximation would be inappropriate: the skewness values diverge from zero, and the kurtosis values far exceed the Gaussian value of 3. To validate this, we applied the Shapiro-Wilk normality test to each index's daily log returns. The resulting p-values, all substantially below 5%, reinforced our conclusion that approximating the return process as i.i.d. Gaussian is not reasonable at the 95% confidence level. We also plotted each index's log returns against a theoretical normal distribution using the stats.probplot function, with matplotlib for visualization; these probability plots provided a graphical assessment of how closely the log returns follow a normal distribution. The departure from normality was further confirmed by D'Agostino's K-squared test, whose p-values are reported below.

------------------------ D'Agostino's K-squared test ----------------------------------


p value of TSX is 2.8795090867390074e-207
p value of CAC is 3.1112995957597887e-181
p value of DAX is 1.4947405355780612e-181
p value of Eurostoxx50 is 1.538258757483906e-172
p value of NIKKEI225 is 6.902274120858383e-194
p value of FTSE100 is 1.2161088673424176e-134
p value of SP500 is 5.489066603981039e-179
p value of IBOVESPA is 1.8646943278851568e-135
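The normality checks described above can be sketched as follows; the synthetic heavy-tailed returns and index names below are placeholders, not the report's data:

```python
import numpy as np
from scipy import stats

# Placeholder data: Student-t returns mimic the fat tails seen in the report.
rng = np.random.default_rng(0)
log_return = rng.standard_t(df=3, size=(1000, 2)) * 0.01
stock_name = ["IndexA", "IndexB"]

results = {}
for i, name in enumerate(stock_name):
    x = log_return[:, i]
    p_sw = stats.shapiro(x)[1]      # Shapiro-Wilk test p-value
    p_da = stats.normaltest(x)[1]   # D'Agostino's K-squared test p-value
    results[name] = (p_sw, p_da)
    print(f"{name}: Shapiro-Wilk p={p_sw:.3g}, D'Agostino p={p_da:.3g}")
```

For fat-tailed data of this sample size, both tests reject normality at any conventional level, matching the tiny p-values reported above; `stats.probplot(x, dist="norm")` would supply the corresponding graphical check.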

Question 2
Using the annualized daily log returns and libraries such as cvxpy, scipy.optimize, numpy, and pandas, we solved for the optimal portfolio weights in the Markowitz mean-variance framework. This involved finding the portfolio with the minimum variance for a given level of return, as well as the tangency portfolio, which maximizes the Sharpe ratio (the ratio of excess return to the standard deviation of return). We imposed the constraint that the weights sum to 1 (a fully invested portfolio) and, for the no-short-selling case, bounds of (0, 1) restricting each weight to lie between 0 and 1. The table below shows the calculated weights for 'MV' (minimum variance with short selling), 'ME_V' (mean-variance with short selling), 'TP' (tangency portfolio with short selling), 'MV_NS' (minimum variance with no short selling), 'ME_NS' (mean-variance with no short selling), and 'TP_NS' (tangency portfolio with no short selling).

To graph the efficient frontier, we first annualized the mean and standard deviation of returns by multiplying the mean by 252 (the typical number of trading days in a year) and the standard deviation by the square root of 252. A range of target returns between -20% and 50% was created using np.linspace, and the code loops over these targets, solving the optimization problem for each one; the tqdm library provides a progress bar for the loop. We plot the mean-variance efficient frontier first with short selling and then without. Individual indices are plotted as red dots showing their annualized risk and return; the minimum variance portfolio (the portfolio with the lowest standard deviation) is marked with a green dot, and the tangency portfolio (the portfolio that maximizes the Sharpe ratio, calculated with a risk-free rate of 2%) is marked with a yellow dot.

Individual indices generally lie below the efficient frontier, indicating that they are less efficient than portfolios on the frontier, which capture diversification benefits. These plots visually demonstrate the benefit of combining assets into portfolios to reduce risk (standard deviation) while striving for the highest return. Moreover, the no-short-selling constraint results in a tangency portfolio with a much lower annualized return than in the unconstrained case.

In [ ]:
# Get minimum variance portfolio and tangency portfolio in both cases
threshold = 0.000001

# mean variance portfolio with short selling
w = cp.Variable(len(stock_name))
P = cov
q = mean
A = np.ones(len(stock_name)).reshape(1, -1)  # reshape to 1*n row vector
b = np.ones(1)
prob = cp.Problem(cp.Minimize(cp.quad_form(w, P) - q.T @ w),
                  [A @ w == b])
prob.solve(solver=cp.SCS)
mean_var = w.value
mean_var[np.abs(mean_var) < threshold] = 0

# mean variance portfolio without short selling
w = cp.Variable(len(stock_name))
P = cov
q = mean
A = np.ones(len(stock_name)).reshape(1, -1)  # reshape to 1*n row vector
b = np.ones(1)
prob = cp.Problem(cp.Minimize(cp.quad_form(w, P) - q.T @ w),
                  [A @ w == b, w >= 0, w <= 1])
prob.solve(solver=cp.SCS)
mean_var_noshort = w.value
mean_var_noshort[np.abs(mean_var_noshort) < threshold] = 0


# minimum variance portfolio with short selling
w = cp.Variable(len(stock_name))
P = cov
A = np.ones(len(stock_name)).reshape(1, -1)  # reshape to 1*n row vector
b = np.ones(1)
prob = cp.Problem(cp.Minimize(w.T @ P @ w),
                  [A @ w == b])
prob.solve(solver=cp.SCS)
mv = w.value
mv[np.abs(mv) < threshold] = 0

# minimum variance portfolio without short selling
w = cp.Variable(len(stock_name))
P = cov
A = np.ones(len(stock_name)).reshape(1, -1)  # reshape to 1*n row vector
b = np.ones(1)
prob = cp.Problem(cp.Minimize(w.T @ P @ w),
                  [A @ w == b, w >= 0, w <= 1])
prob.solve(solver=cp.SCS)
mv_no_short = w.value
mv_no_short[np.abs(mv_no_short) < threshold] = 0
from scipy.optimize import minimize

def objective(w):
return -np.dot(w, mean) / np.sqrt(np.dot(w, np.dot(P, w)))

def constraint(w):
return np.sum(w) - 1

w0 = np.ones(len(stock_name)) / len(stock_name)
cons = {'type': 'eq', 'fun': constraint}
# bounds = [(0, 1) for i in range(len(stock_name))]
result = minimize(objective, w0, method='SLSQP', constraints=cons,tol=1e-10)
w_TP = result.x
w_TP[np.abs(w_TP) < threshold] = 0

def objective(w):
return -np.dot(w, mean) / np.sqrt(np.dot(w, np.dot(P, w)))

def constraint(w):
return np.sum(w) - 1

w0 = np.ones(len(stock_name)) / len(stock_name)
cons = {'type': 'eq', 'fun': constraint}
bounds = [(0, 1) for i in range(len(stock_name))]
result = minimize(objective, w0, method='SLSQP',
                  bounds=bounds, constraints=cons, tol=1e-10)
w_TP_noshort = result.x

w_TP_noshort[np.abs(w_TP_noshort) < threshold] = 0

# summarize the weights
df_weight = pd.DataFrame({'MV': mv, 'ME_V': mean_var, 'TP': w_TP,
                          'MV_NS': mv_no_short, 'ME_NS': mean_var_noshort,
                          'TP_NS': w_TP_noshort},
                         index=stock_name)
df_weight = df_weight.round(5)

------------------------ Weights ----------------------------------

                  MV      ME_V        TP    MV_NS    ME_NS    TP_NS
TSX          0.07986  -0.67591  -0.48237  0.00000  0.00000  0.00000
CAC         -0.12817   4.18223   3.07838  0.00000  0.00000  0.00000
DAX          0.06316   5.39260   4.02780  0.00000  0.00000  0.00000
Eurostoxx50 -0.02709  -8.97340  -6.68234  0.00000  0.00000  0.00000
NIKKEI225    0.37451   0.47107   0.44634  0.38728  0.14425  0.19131
FTSE100      0.19386  -1.18879  -0.83471  0.05812  0.00000  0.00000
SP500        0.58571   2.12557   1.73122  0.55460  0.85575  0.80869
IBOVESPA    -0.14184  -0.33337  -0.28432  0.00000  0.00000  0.00000
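The tangency objective coded earlier maximizes mean return over volatility, while the text quotes a 2% risk-free rate for the Sharpe ratio. A sketch of a Sharpe objective that subtracts the risk-free rate explicitly follows; all inputs are illustrative placeholders, not the report's data:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical annualized inputs for four assets.
mean_anual = np.array([0.05, 0.07, 0.06, 0.04])
cov_anual = 0.01 * np.ones((4, 4)) + 0.03 * np.eye(4)  # positive definite
rf = 0.02  # 2% risk-free rate, as quoted in the text

def neg_sharpe(w):
    # Negative Sharpe ratio with excess return in the numerator.
    return -(w @ mean_anual - rf) / np.sqrt(w @ cov_anual @ w)

w0 = np.full(4, 0.25)                                   # equal-weight start
cons = {'type': 'eq', 'fun': lambda w: w.sum() - 1}     # fully invested
res = minimize(neg_sharpe, w0, method='SLSQP', constraints=cons, tol=1e-10)
w_tp = res.x
```

With rf = 0 this reduces to the objective used in the code above; a nonzero rf shifts the tangency point along the frontier.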

# graph efficient frontier with short selling under the assumption of rt = 0.02
mean_anual = mean * 252
std_anual = std * np.sqrt(252)
cov_anual = cov * 252

samples = 1000
target_returns = np.linspace(-0.2, 0.5, samples)
weights = np.zeros((samples, len(stock_name)))
variances = np.zeros(samples)
returns = np.zeros(samples)
w = cp.Variable(len(stock_name))
P = cov_anual
q = mean_anual
G = np.ones(len(stock_name)).reshape(1, -1)  # reshape to 1*n row vector
A = mean_anual.copy().reshape(1, -1)  # reshape to 1*n row vector
b = np.ones(1)
# sum w = 1
for i in tqdm.tqdm(range(samples)):
    prob = cp.Problem(cp.Minimize(cp.quad_form(w, P)),
                      [A @ w == target_returns[i], G @ w == b])
    prob.solve(solver=cp.ECOS)
    weights[i, :] = w.value
    variances[i] = cp.quad_form(w, P).value
    returns[i] = target_returns[i]
# get the standard deviation
standard_deviation = np.sqrt(variances)

# plot the efficient frontier without short selling under
# the assumption of rt = 0.02
return_min = min(mean_anual)
return_max = max(mean_anual)
samples = 1000
target_returns = np.linspace(return_min, return_max, samples)
weights = np.zeros((samples, len(stock_name)))
variances = np.zeros(samples)
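The plotting code itself is not reproduced in this document; a minimal matplotlib sketch of the figure described in the text (frontier line, red dots for the indices, a green dot for the minimum variance portfolio, a yellow dot for the tangency portfolio) might look like the following, with stylized placeholder arrays standing in for the solver output:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, so the script runs without a display
import matplotlib.pyplot as plt

# Placeholder frontier data standing in for the solver output above.
returns = np.linspace(-0.2, 0.5, 100)
standard_deviation = 0.1 + 2 * (returns - 0.05) ** 2   # stylized frontier shape
std_anual = np.array([0.20, 0.25, 0.30])               # individual indices
mean_anual = np.array([0.03, 0.08, 0.05])
rf = 0.02

fig, ax = plt.subplots()
ax.plot(standard_deviation, returns, label="Efficient frontier")
ax.scatter(std_anual, mean_anual, color="red", label="Indices")
i_mv = np.argmin(standard_deviation)                   # minimum variance point
ax.scatter(standard_deviation[i_mv], returns[i_mv], color="green", label="MV")
i_tp = np.argmax((returns - rf) / standard_deviation)  # max Sharpe point
ax.scatter(standard_deviation[i_tp], returns[i_tp], color="yellow", label="TP")
ax.set_xlabel("Annualized standard deviation")
ax.set_ylabel("Annualized return")
ax.legend()
fig.savefig("frontier.png")
```

In the actual analysis, `standard_deviation` and `returns` would come from the frontier loop above and `std_anual`/`mean_anual` from the annualized index statistics.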

Question 3

For Question 3 we used a rolling-window approach, re-estimating the portfolios on successive five-year windows of data from 2010-2014 through 2017-2021. This method adjusts the asset mix as market conditions change. The S&P 500 emerged as a consistent choice, receiving a large weight in most of the selected portfolios because of its strong historical performance. By contrast, the Eurostoxx50's role in the portfolios varied substantially across windows, showing how its risk-return profile shifted over time. The results demonstrate that selecting weights based on historical performance and risk can lead to allocations quite different from those implied by market capitalization alone. Our findings highlight the value of flexibility and of carefully evaluating past performance, pointing toward a way of building portfolios that looks beyond the size of the underlying markets.

In [ ]:
cap_weight = np.array([0.0418, 0.0432, 0.0362, 0.091,
                       0.0925, 0.0816, 0.6023, 0.0114])

for i in range(0, 8):
    mask = (date.dt.year >= 2010+i) & (date.dt.year <= 2014+i)
    date_range = date[mask]
    date_index = date_range.index - 1
    log_return_range = log_return[date_index, :]
    mean_range = np.mean(log_return_range, axis=0)
    cov_range = np.cov(log_return_range.T)
    # minimum variance portfolio with short selling
    w = cp.Variable(len(stock_name))
    P = cov_range
    A = np.ones(len(stock_name)).reshape(1, -1)  # reshape to 1*n row vector
    b = np.ones(1)

    w0 = np.ones(len(stock_name)) / len(stock_name)
    cons = {'type': 'eq', 'fun': constraint}
    bounds = [(0, 1) for i in range(len(stock_name))]
    result = minimize(objective, w0, method='SLSQP', bounds=bounds,
                      constraints=cons, tol=1e-10)
    w_TP_noshort = result.x
    w_TP_noshort[np.abs(w_TP_noshort) < threshold] = 0
    mv_no_short = w.value
    mv_no_short[np.abs(mv_no_short) < threshold] = 0

    # summarize the weights
    year = str(2010+i) + '-' + str(2014+i)
    df = pd.DataFrame({'MV': mv, 'TP': w_TP, 'MV_NS': mv_no_short,
                       'TP_NS': w_TP_noshort, 'Cap_weight': cap_weight},
                      index=stock_name)
    df = df.round(5)

------------------------ Weights 2010-2014 ---------------------------------

                  MV        TP    MV_NS    TP_NS  Cap_weight
TSX          0.20713  -0.44829  0.16027  0.00000      0.0418
CAC         -0.39160  -1.03847  0.00000  0.00000      0.0432
DAX         -0.01183   2.19569  0.00000  0.00000      0.0362
Eurostoxx50  0.06474  -1.34570  0.00000  0.00000      0.0910
NIKKEI225    0.30278   0.20979  0.31722  0.10657      0.0925
FTSE100      0.36031   0.25781  0.02104  0.00000      0.0816
SP500        0.52623   1.70722  0.50146  0.89343      0.6023
IBOVESPA    -0.05777  -0.53805  0.00000  0.00000      0.0114


------------------------ Weights 2011-2015 ---------------------------------

                  MV        TP    MV_NS    TP_NS  Cap_weight
TSX          0.28754  -2.60140  0.24173  0.00000      0.0418
CAC         -0.27942   0.22385  0.00000  0.00000      0.0432
DAX         -0.09022   3.18132  0.00000  0.00000      0.0362
Eurostoxx50  0.11381  -3.17403  0.00000  0.00000      0.0910
NIKKEI225    0.26644   0.77054  0.26441  0.10937      0.0925
FTSE100      0.28183   0.21831  0.03078  0.00000      0.0816
SP500        0.48529   3.70705  0.46308  0.89063      0.6023
IBOVESPA    -0.06526  -1.32563  0.00000  0.00000      0.0114

------------------------ Weights 2012-2016 ---------------------------------

                  MV        TP    MV_NS    TP_NS  Cap_weight
TSX          0.24228  -0.73993  0.22639  0.00000      0.0418
CAC          0.09633   2.33678  0.00000  0.00000      0.0432
DAX          0.17945   2.64880  0.00000  0.00000      0.0362
Eurostoxx50 -0.38670  -4.56934  0.00000  0.00000      0.0910
NIKKEI225    0.18888   0.30529  0.18284  0.13164      0.0925
FTSE100      0.10636  -0.52367  0.00000  0.00000      0.0816
SP500        0.61685   1.76822  0.59078  0.86836      0.6023
IBOVESPA    -0.04345  -0.22615  0.00000  0.00000      0.0114


------------------------ Weights 2013-2017 ---------------------------------

                  MV        TP    MV_NS    TP_NS  Cap_weight
TSX          0.23989  -0.55634  0.23432  0.00000      0.0418
CAC          0.24337   2.43148  0.00000  0.00000      0.0432
DAX          0.30381   1.98735  0.00000  0.00000      0.0362
Eurostoxx50 -0.59411  -4.18233  0.00000  0.00000      0.0910
NIKKEI225    0.16219   0.25259  0.16249  0.14278      0.0925
FTSE100      0.06811  -0.31910  0.01262  0.00000      0.0816
SP500        0.60069   1.48806  0.59057  0.85722      0.6023
IBOVESPA    -0.02395  -0.10170  0.00000  0.00000      0.0114


------------------------ Weights 2014-2018 ---------------------------------

                  MV         TP    MV_NS    TP_NS  Cap_weight
TSX          0.25416   -2.48917  0.24352  0.00000      0.0418
CAC          0.17656    9.73515  0.00000  0.00000      0.0432
DAX          0.20637    4.94412  0.00000  0.00000      0.0362
Eurostoxx50 -0.45397  -14.20159  0.00000  0.00000      0.0910
NIKKEI225    0.22228    0.64845  0.21588  0.09485      0.0925
FTSE100      0.13612   -1.11767  0.06900  0.00000      0.0816
SP500        0.48226    3.18028  0.47161  0.90515      0.6023
IBOVESPA    -0.02378    0.30042  0.00000  0.00000      0.0114

------------------------ Weights 2015-2019 ---------------------------------

                  MV        TP    MV_NS    TP_NS  Cap_weight
TSX          0.29862  -1.10116  0.27159  0.00000      0.0418
CAC          0.07739   6.91073  0.00000  0.00000      0.0432
DAX          0.17380   2.98318  0.00000  0.00000      0.0362
Eurostoxx50 -0.34362  -9.41435  0.00000  0.00000      0.0910
NIKKEI225    0.24073   0.54586  0.23232  0.27988      0.0925
FTSE100      0.13076  -0.63221  0.05411  0.00000      0.0816
SP500        0.45677   1.45534  0.44198  0.71147      0.6023
IBOVESPA    -0.03444   0.25262  0.00000  0.00864      0.0114


------------------------ Weights 2016-2020 ---------------------------------

                  MV         TP    MV_NS    TP_NS  Cap_weight
TSX          0.13033   -0.44005  0.06617  0.00000      0.0418
CAC          0.00860    6.16024  0.00000  0.00000      0.0432
DAX         -0.03501    4.92046  0.00000  0.00000      0.0362
Eurostoxx50 -0.09309  -10.43924  0.00000  0.00000      0.0910
NIKKEI225    0.42754    0.83294  0.42405  0.43525      0.0925
FTSE100      0.23854   -1.39582  0.15006  0.00000      0.0816
SP500        0.40289    1.13936  0.35973  0.54473      0.6023
IBOVESPA    -0.07978    0.22212  0.00000  0.02002      0.0114


------------------------ Weights 2017-2021 ---------------------------------

                  MV        TP    MV_NS    TP_NS  Cap_weight
TSX          0.11910  -0.54953  0.06107  0.00000      0.0418
CAC         -0.06248   5.45337  0.00000  0.00000      0.0432
DAX         -0.12686   2.21069  0.00000  0.00000      0.0362
Eurostoxx50  0.08260  -6.92225  0.00000  0.00000      0.0910
NIKKEI225    0.47658   0.63288  0.47580  0.31495      0.0925
FTSE100      0.24948  -1.00600  0.17241  0.00000      0.0816
SP500        0.32334   1.59917  0.29072  0.68505      0.6023
IBOVESPA    -0.06176  -0.41833  0.00000  0.00000      0.0114
