Market Microstructure Term Paper


Market Microstructure and Asset Pricing

Term Paper Group 2

HDB424-0284/2022- Samson Manjuru Mburu

HDB424-0895/2022 Serah W. Muthike

Department of Economics, Accounting and Finance

Jomo Kenyatta University of Agriculture and Technology

PhD Finance

Unit Name: Market Microstructure

Unit Code: HFIN 4103

Dr. James Nyamweya

July 9, 2022

Title: Market Microstructure and Asset Pricing

Abstract

This paper discusses market microstructure and asset pricing. It presents an empirical review of asset pricing models and then examines market microstructure and risk metrics, return volatility (realized volatility), range-based volatility measures, and market microstructure and information disclosure.

Table of Contents

1. Introduction

2. Empirical review

3. Market microstructure and risk metrics

4. Return Volatility: Realized Volatility

5. Range-based volatility measures

6. Market Microstructure and Information Disclosure

1. Introduction

Asset pricing is developed around the concept of a state-price deflator which relates the price of

any asset to its future (risky) dividends and thus incorporates how to adjust for both time and risk

in asset valuation.

2. Empirical review

In all asset pricing models, returns to investors are regarded as the compensation for facing

systematic risks, also called market risks, because they are tied to broader economic factors

(Goyal, 2012). The main difference among the asset pricing models is what variables or

indicators represent the systematic risk in asset pricing models. In an early work, Markowitz

(1959) developed a specific measure of portfolio risk and built a foundation for developing asset

pricing models, named portfolio theory. He argues that the main incentive for a risk-averse

investor who focuses on the mean and variance of the returns of every single asset included in a

portfolio is earning the highest possible profit given the lowest potential risk. Building on the

fact that investors tend to maximize the mean and minimize their investment variance,

Markowitz (1959) proves that investors' optimal choice is a portfolio selected from the efficient

frontier. Later, Tobin (1958), using the separation theorem, showed that combinations of a single efficient portfolio and the risk-free asset dominate any other portfolio combination. The capital asset pricing

model (CAPM), which relates the expected rate of return of an individual asset to a measure of

its systematic risk, is introduced and developed by Sharpe (1964) and Treynor (1961). Lintner

(1969), Mossin (1966), and Black (1972) extended this simple theory such that market portfolio

returns are treated as a single factor whose exposure determines expected returns. In portfolio

theory, an asset's price is exogenously given and cannot be affected by any investor, while

CAPM assumes that an asset's price is determined through an equilibrium of the market.

Generally, and relying on rational expectations, most of the empirical studies examining cross-

sectional asset pricing models assume that investors price assets according to the basic pricing

formula 1 = E_t[M_{t+1} R_{i,t+1}], in which M_{t+1} = β u′(c_{t+1})/u′(c_t) is denoted as the stochastic discount factor (SDF) or pricing kernel, derived from a representative investor's preferences with time-preference parameter β, and R_{i,t+1} is the return of asset i at time t+1, defined as R_{i,t+1} = (P_{i,t+1} + D_{i,t+1})/P_{i,t} (Nagel, 2013). The same pricing identity applies when the investor maximizes the recursive utility function proposed by Epstein and Zin (1991) and Weil (1990).
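As an illustrative sketch (all numbers hypothetical), the pricing identity 1 = E_t[M_{t+1} R_{i,t+1}] can be verified numerically for a two-state economy with a power-utility (CRRA) representative investor:

```python
import numpy as np

# Hypothetical two-state economy: probabilities, consumption, and asset payoff
probs = np.array([0.5, 0.5])                 # state probabilities
c_now = 1.0
c_next = np.array([1.10, 0.95])              # consumption in each future state
payoff = np.array([1.20, 0.90])              # asset payoff P_{t+1} + D_{t+1} in each state
beta, gamma = 0.98, 2.0                      # time preference and risk aversion (CRRA)

# SDF: M_{t+1} = beta * u'(c_{t+1}) / u'(c_t), with u'(c) = c^(-gamma)
M = beta * (c_next / c_now) ** (-gamma)

# The asset's price is P_t = E[M * payoff]; its gross return is payoff / P_t
price = probs @ (M * payoff)
R = payoff / price

# The basic pricing identity then holds: E[M * R] = 1
print(round(probs @ (M * R), 10))  # → 1.0
```

The identity holds by construction here; the empirical content of SDF models lies in whether one M can price many assets simultaneously.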

Jensen et al. (1972), Fama and MacBeth (1973), and Blume and Friend (1973) are the studies

that support the CAPM. However, since the late 1970s, some empirical literature documents the

CAPM's failure in explaining cross-sectional return predictability. Return anomalies are other

examples of CAPM failure in explaining the cross-section variation of stock returns. Basu (1977)

and Ball (1978) discover that higher returns are observed for the firms with lower price-earnings

ratios (a negative relation between price-earnings (𝑃/𝐸) ratio and average return). Miller and

Scholes (1972) find that stocks with lower price gain higher returns. According to Banz (1981),

stocks with a small market capitalization (small firm premium) earn higher returns than large-

capitalization stocks. Bhandari (1988) found that a highly leveraged firm (high ratio of the book

value of debt to the equity market value) earns abnormally high returns. Similarly, Rosenberg et

al., (1985) found that firms with a high book-to-market equity (B/M) ratio earn higher returns

than ones with a low book-to-market ratio. The existing evidence in the literature shows that

such cross-sectional differences in expected returns cannot be explained by CAPM (Goyal,

2012). The value premium concept was introduced by Fama and French (1992), the implication

being that value stocks (those with high book-to-market ratios) earn higher average returns than stocks with low book-to-market ratios

(i.e., growth stocks). They also show that, on average, small (capitalization) stocks earn a higher

return than large stocks. They find that the difference in betas of small vs. large stocks and value

vs. growth stocks is not enough to explain average returns' differences. Fama and French (1992),

however, argue that size (market value of equity) and book-to-market (B/M) ratio jointly explain

the cross-sectional difference in expected stock returns associated with the price-based variables

such as P/E ratio, B/M ratio, size, and leverage. This influential study motivated Fama and

French to introduce the most prominent factor model in the literature. Fama and French (1993)

propose a three-factor model consisting of the market index return, Rm, the return difference between small and large stock portfolios, SMB (small minus big), and the difference in returns between portfolios of high and low B/M stocks, HML (high minus low).
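A minimal sketch of how the three-factor model is typically estimated: a time-series regression of a stock's excess returns on the three factors. The factor and return data below are synthetic (randomly generated with known loadings), not the actual Fama-French factors:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500  # months of synthetic data

# Synthetic factor returns: market excess return, SMB, HML (hypothetical data)
Rm = rng.normal(0.005, 0.04, T)
SMB = rng.normal(0.002, 0.03, T)
HML = rng.normal(0.003, 0.03, T)

# Synthetic excess returns of one stock, generated with known factor loadings
true_betas = np.array([1.1, 0.4, 0.6])
excess_ret = 1.1 * Rm + 0.4 * SMB + 0.6 * HML + rng.normal(0, 0.02, T)

# Time-series regression: excess return on an intercept (alpha) and the three factors
X = np.column_stack([np.ones(T), Rm, SMB, HML])
coef, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
alpha, betas = coef[0], coef[1:]
print(np.round(betas, 2))  # estimated loadings, close to [1.1, 0.4, 0.6]
```

A near-zero alpha indicates the factors fully explain the stock's average return; a significant alpha is evidence of an anomaly relative to the model.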

Numerous studies in the literature, however, show that the Fama-French three-factor model is

incapable of explaining additional anomalies such as momentum-based, investment-based, and

profitability-based anomalies (Cochrane, 2009; Cooper & Maio, 2019; Fama & French, 2015;

Fama & French, 1996; Hou et al., 2015 & Maio, 2013). Jegadeesh and Titman (1993) define the

price momentum anomaly as a pattern in which recently outperformed stocks continue to

outperform, whereas underperforming stocks continue to underperform. Similarly, and based on

Moskowitz and Grinblatt (1999), the pattern by which assets in past winning industries continue

to earn a higher return than stocks in past losing industries is described as industry momentum.

Profitability or Return-on-equity (ROE) anomaly corresponds to the state that stocks with higher

ROE generate higher returns than stocks with lower ROE (Balakrishnan et al., 2010; Fama &

French, 2008; Haugen & Baker, 1996; Jegadeesh & Livnat, 2006 & Novy-Marx, 2013). Cooper

et al. (2008) and Hou et al. (2015) introduce asset growth anomaly as a pattern where stocks of

firms with low asset growth earn higher average returns than stocks of firms with higher asset

growth. The investment anomaly is associated with the state when the average returns earned by

stocks of firms with a higher investment are lower than those of firms with less investment (Fama &

French, 2008 & Xing, 2008). To overcome the incapability of traditional factor models in

explaining the cross-section dispersion and correct the momentum anomaly in the Fama and

French three-factor model, Carhart (1997) introduced a new factor called momentum factor

(UMD). Empirical evidence provides significant evidence about the relationship between stock

returns and profitability and investment after controlling Fama and French's three factors

(Aharoni et al., 2013 & Novy-Marx, 2013 among others). Relying on the present-value valuation

model of Miller and Modigliani (1961), Fama and French (2015, 2016) add profitability (RMW, robust minus weak) and investment (or asset growth, CMA, conservative minus aggressive) into their three-factor model. They propose a five-factor model that can describe investment-based and profitability-based anomalies. They define the profitability (RMW) factor as the difference between the returns of portfolios of stocks with high and low operating profitability. The investment (CMA) factor is defined as the difference in

returns of the portfolio of assets with conservative and aggressive investment. They document

that compared to Fama and French (1993), the five-factor model performs better.

The failure of CAPM in explaining the cross-section returns dispersion also motivated some

researchers to introduce other asset pricing models such as Intertemporal CAPM (ICAPM).

CAPM assumes investors maximize their expected utility of wealth in a single period model. In

contrast, Merton (1973) derives an intertemporal capital asset pricing model, ICAPM, where

investors maximize their expected utility of lifetime consumption and assets are traded

continuously at every point of time. With ICAPM, investors' current demands for portfolios are

influenced by the possibility of unforeseen changes in future investment opportunities. The

ICAPM of Merton (1973) asserts that expected returns are the reward to investors for bearing the

market risk and taking the risk of adverse changes in the investment opportunity set. Therefore,

risky assets with no exposure to the market risk, a beta of zero, may have higher expected returns

than the risk-free rate of return due to the exposure to an adverse change in the state variable.

The static CAPM does not explain such a relation. Therefore, the ICAPM is a linear factor

model, including the market portfolio and additional factors built on the innovations in one (or

more) state variables. In the ICAPM, excess returns compensate investors for bearing systematic risk and for exposure to unexpected changes in the state variables. The intuition is

that in general, investors will pay more for assets that perform well when future investment

opportunities are expected to worsen, which drives those assets' expected returns down. Due to their ability to hedge against adverse changes in investment opportunities, these assets require a smaller risk premium. Investors will, however, demand a higher risk premium for holding assets that do poorly when future investment opportunities deteriorate.

3. Market microstructure and risk metrics

Risk measures are statistical measures that are historical predictors of investment risk

and volatility, and they are also major components of modern portfolio theory (MPT), also known as mean-variance theory or Markowitz portfolio theory. MPT is a standard financial and academic

methodology for assessing the performance of a stock or a stock fund as compared to

its benchmark index.

There are five principal risk measures, and each measure provides a unique way to assess the

risk present in investments that are under consideration. The five measures include

the alpha, beta, R-squared, standard deviation, and Sharpe ratio. Risk measures can be used

individually or together to perform a risk assessment. When comparing two potential

investments, it is wise to compare like for like to determine which investment holds the most

risk. Commonly used risk metrics in finance include the following:

3.1 Profit-at-Risk (PaR) is a risk management quantity most often used for electricity

portfolios that contain some mixture of generation assets, trading contracts and end-user

consumption. It is used to provide a measure of the downside risk to profitability of a

portfolio of physical and financial assets, analyzed by time periods in which the energy is

delivered. For example, the expected profitability and associated downside risk (PaR)

might be calculated and monitored for each of the forward-looking 24 months. The measure considers both price risk and volume risk (e.g. due to uncertainty in electricity generation volumes or consumer demand). Mathematically, the PaR is a quantile of the profit distribution of a portfolio. Since weather-related volume risk drivers can be

represented in the form of historical weather records over many years, a Monte-Carlo

simulation approach is often used.
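A minimal Monte-Carlo sketch of the PaR idea (all distributions and numbers hypothetical): simulate price and volume outcomes for one delivery month, compute the resulting profit distribution, and read off a low quantile as the downside risk.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 100_000

# Hypothetical drivers for one delivery month: electricity price (EUR/MWh)
# and generated volume (MWh), with volume risk from weather-like variation
price = rng.normal(60.0, 15.0, n_sims)
volume = rng.normal(10_000, 1_500, n_sims)
cost_per_mwh = 45.0

# Profit distribution of the generation portfolio across simulations
profit = (price - cost_per_mwh) * volume

# PaR read off as a low quantile of the profit distribution (here the 5% quantile)
expected_profit = profit.mean()
par_5 = np.quantile(profit, 0.05)
print(f"expected profit: {expected_profit:,.0f}, 5% profit quantile: {par_5:,.0f}")
```

A fuller model would draw volumes from historical weather records and correlate price and volume, as the text describes, but the quantile-of-profit logic is the same.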

3.2 Value at Risk (VaR) is a statistic that is used in risk management to predict the greatest possible losses over a specific time frame. VaR is determined by three variables: a

specific time period, a confidence level, and the size of the possible loss. There are three

ways of calculating the VaR:

1. One way of calculating VaR is based on historical data, assuming that future returns

will be roughly similar.

2. The variance-covariance method assumes that returns will follow a normal statistical

distribution.

3. A Monte Carlo simulation creates a simplified model that predicts future stock returns

with random trials.
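The three approaches can be sketched on the same return series; the data here are synthetic and the 95% confidence level is an illustrative assumption:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
returns = rng.normal(0.0005, 0.02, 1000)  # synthetic daily returns
portfolio_value = 1_000_000
alpha = 0.05  # tail probability for a 95% confidence level

# 1. Historical VaR: the empirical 5% quantile of past returns
var_hist = -np.quantile(returns, alpha) * portfolio_value

# 2. Variance-covariance (parametric) VaR: assume normally distributed returns
z = NormalDist().inv_cdf(alpha)  # roughly -1.645
var_param = -(returns.mean() + z * returns.std()) * portfolio_value

# 3. Monte Carlo VaR: simulate returns from a fitted model, then take the quantile
sims = rng.normal(returns.mean(), returns.std(), 100_000)
var_mc = -np.quantile(sims, alpha) * portfolio_value

print(round(var_hist), round(var_param), round(var_mc))
```

Because the synthetic returns really are normal, the three estimates agree closely here; with real, fat-tailed return data the historical and Monte Carlo figures would typically diverge from the parametric one.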

3.3 The economic value of equity (EVE) is a cash flow calculation that takes the present

value of all asset cash flows and subtracts the present value of all liability cash flows.

Unlike earnings at risk and value at risk (VaR), a bank uses the economic value of equity

to manage its assets and liabilities.

Harry Markowitz (1952) considered and rejected the idea that an investor should simply maximize expected wealth without regard to the risk associated with the potential gain. A portfolio is a

collection of different investments held by an investor. He explained that, “the portfolio with the

maximum expected return is not necessarily the one with the minimum variance. There is a rate

at which the investor can gain expected return by taking on variance, or reduce variance by

giving up expected return”. He notes, “In trying to make the variance small it is not enough to invest in many securities. It is necessary to avoid investing in securities with high covariances among themselves”. Covariance is the degree to

which two variables move together relative to their means (average) over time. A positive sign

would indicate a positive covariance in the sense that if one variable is increasing relative to its

mean then the other variable would also increase relative to its mean. A negative covariance

indicates that if one variable is increasing relative to its mean the other would be reducing

relative to its mean. By combining assets with low or negative covariance, the investor is able to diversify his portfolio, reducing risk for a given level of expected return.
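The diversification effect of covariance can be seen directly in the two-asset portfolio variance formula, Var = w1²σ1² + w2²σ2² + 2·w1·w2·Cov(r1, r2). A small sketch with hypothetical numbers:

```python
import numpy as np

# Two hypothetical assets with equal individual risk (std 10%), held 50/50
sigma = 0.10
w1 = w2 = 0.5

def portfolio_std(cov):
    """Standard deviation of the equally weighted two-asset portfolio."""
    var = w1**2 * sigma**2 + w2**2 * sigma**2 + 2 * w1 * w2 * cov
    return np.sqrt(var)

# High positive covariance gives little risk reduction;
# negative covariance can eliminate risk entirely
for rho in (1.0, 0.5, 0.0, -0.5, -1.0):
    cov = rho * sigma * sigma
    print(f"correlation {rho:+.1f}: portfolio std = {portfolio_std(cov):.4f}")
```

At correlation +1 the portfolio is exactly as risky as either asset (std 0.10); at correlation −1 the risks cancel and portfolio std falls to zero.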

Markowitz came up with the efficient frontier, which represents the best attainable combination

of assets for an individual portfolio. The efficient frontier can also be defined as the set of portfolios that offer the maximum possible return for a given level of risk, or the minimum level of risk for a given level of return. All investors would like to hold asset combinations along the frontier to obtain the best trade-off between expected return and risk, so it is economically efficient to limit the choice to portfolios that fall on this frontier.

Markowitz explains that an investor will choose a portfolio at the point of tangency between the

investor’s utility curve and the Markowitz efficient frontier. Roy (1952) reached similar conclusions; his formulation, with risk as the independent variable and expected return as the dependent variable, was adopted as a standard by the finance profession. Markowitz is regarded as the father of portfolio theory.
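Tracing the full efficient frontier requires optimization, but one point on it, the global minimum-variance portfolio, has the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A sketch with a hypothetical three-asset covariance matrix:

```python
import numpy as np

# Hypothetical annualized covariance matrix for three assets
Sigma = np.array([
    [0.040, 0.006, 0.004],
    [0.006, 0.025, 0.005],
    [0.004, 0.005, 0.010],
])
ones = np.ones(3)

# Closed-form global minimum-variance weights: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)
inv = np.linalg.inv(Sigma)
w = inv @ ones / (ones @ inv @ ones)

# Weights sum to one, and no single asset (a corner of the feasible set)
# can have lower variance than this portfolio
print(np.round(w, 3), "minimum variance:", round(float(w @ Sigma @ w), 5))
```

Adding a target-return constraint and varying that target traces out the rest of the frontier; with short sales allowed, that also has a known closed form.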

4. Return Volatility: Realized Volatility

Volatility refers to the rate at which the price of a security increases or decreases for a given set

of returns. Volatility is typically measured as the annualized standard deviation of returns

over a given period of time. It shows the range to which the price of a security may increase or

decrease. Volatility measures the risk of a security. It is used in option pricing formulas to gauge

the fluctuations in the returns of the underlying assets. Volatility indicates the pricing behavior of

the security and helps estimate the fluctuations that may happen in a short period of time. If the

prices of a security fluctuate rapidly in a short time span, it is termed to have high volatility. If

the prices of a security fluctuate slowly in a longer time span, it is termed to have low volatility.

In finance, volatility (usually denoted by σ) is the degree of variation of a trading price series

over time, usually measured by the standard deviation of logarithmic returns.
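A minimal sketch of this definition (prices are hypothetical): realized volatility as the annualized standard deviation of daily logarithmic returns.

```python
import numpy as np

# Hypothetical daily closing prices
closes = np.array([100.0, 101.2, 100.5, 102.3, 101.8, 103.0, 102.1, 104.0])

# Daily logarithmic returns: ln(P_t / P_{t-1})
log_returns = np.diff(np.log(closes))

# Realized volatility: sample std of log returns, annualized with ~252 trading days
daily_vol = log_returns.std(ddof=1)
annualized_vol = daily_vol * np.sqrt(252)
print(f"annualized realized volatility: {annualized_vol:.1%}")
```

The √252 scaling assumes returns are roughly independent across days, so variance grows linearly with time.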

4.1 Types of Volatility

Volatility, typically expressed as a percentage and interpreted as standard deviation of returns,

measures how much a security moves over a certain period. The individual types of volatility

differ mainly in two things:

i. What that period is (historical volatility is for some period in the past, while future or

forecast volatility is forward looking).

ii. How we find or calculate the volatility (implied volatility is calculated from option

prices, while realized volatility is calculated from underlying price changes).

Investors care about volatility for at least eight reasons:

1. The wider the swings in an investment's price, the harder emotionally it is to not worry.

2. Price volatility of a trading instrument can define position sizing in a portfolio.

3. When certain cash flows from selling a security are needed at a specific future date,

higher volatility means a greater chance of a shortfall.

4. Higher volatility of returns while saving for retirement results in a wider distribution of

possible final portfolio values.

5. Higher volatility of return when retired gives withdrawals a larger permanent impact on

the portfolio's value.

6. Price volatility presents opportunities to buy assets cheaply and sell when overpriced.

7. Portfolio volatility has a negative impact on the compound annual growth rate (CAGR)

of that portfolio.

8. Volatility affects pricing of options, being a parameter of the Black–Scholes model.

4.1.1 Implied Volatility Versus Realized Volatility

Volatility is particularly important for option traders, because it affects options prices (or option

values). In general, higher volatility makes options more valuable, and vice versa. The subtle

difference between an option's price and its value is key to understanding the difference between

implied and realized volatility. Price is what you pay. Value is what you get. To make a profit

you must buy high value for low price (or sell low value for high price). This is true when

trading options just as it is when trading stocks (or anything else).

Implied volatility is what you pay – it is the volatility implied (contained or reflected) in an

option's price. Option pricing models such as the Black-Scholes model can calculate the exact option

price for a particular level of volatility (assuming we also know the other factors, such as the

option's strike price, time to expiration, or underlying price). They can also be reversed to find

the exact volatility that is implied in a particular option price. This is how we calculate implied

volatility – from option prices. Therefore, implied volatility is the future volatility expected by

the options market. This expectation may be correct, or it may not.
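The "reversal" of a pricing model can be sketched concretely. Since the Black-Scholes call price is increasing in volatility, a simple bisection recovers the implied volatility from a price (inputs below are hypothetical):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call (no dividends)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
    """Invert Black-Scholes by bisection (call price is increasing in sigma)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round-trip check: price an at-the-money call at 20% volatility,
# then recover that volatility from the price alone
price = bs_call(S=100, K=100, T=1.0, r=0.05, sigma=0.20)
print(round(implied_vol(price, S=100, K=100, T=1.0, r=0.05), 4))  # → 0.2
```

In practice faster Newton-type root finders are common, but bisection makes the "volatility implied in a price" idea transparent.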

Realized volatility is what you get – it is the volatility actually realized in the underlying market.

It can be calculated from underlying price moves (e.g., daily stock price changes). Although

there are various approaches, the most common way is to calculate realized volatility as standard

deviation of daily logarithmic returns. This is why realized volatility is sometimes called

statistical volatility. Option prices don't affect realized volatility in any way. In fact, you can

calculate realized volatility even for securities without any options on them. On the contrary,

there is no implied volatility without options. At the same time, when there are multiple options

listed on the same underlying and same expiration (calls and puts, different strikes), each of these

options can have different implied volatility (this is very common and known as volatility skew

or smile).

4.1.2 Historical Volatility versus Future Volatility

While implied volatility is always forward looking (it is the expected volatility from now until

the option's expiration), realized volatility can relate either to the past (then it is called historical

volatility) or the future (then it is called future realized volatility). Another important

characteristic of realized volatility (historical or future) is the length of the period over which it is

measured. For example, "20-day historical volatility" measures realized volatility over the last 20 days (it is typically calculated as the standard deviation of the last 20 daily logarithmic returns).

4.1.2.1 Future Volatility Versus Forecast Volatility

When talking about future volatility, we must distinguish between our opinion or prediction of

what the future volatility will be – this is often called forecast volatility – and the reality, which

we will only know after it happens – the future realized volatility. Our ability to forecast

volatility is essential for successful option trading – we want our forecast volatility (what we

think will happen) to be as close as possible to the future realized volatility (what actually

happens). If we can do that consistently, we only need to buy options which are underpriced

relative to our expectation (the option's implied volatility is lower than our forecast volatility) or

sell options which are overpriced (implied volatility is higher than our forecast volatility). This

won't guarantee that we make a profit on one particular trade (besides volatility there are other

factors affecting option prices, including particularly underlying price direction, though these can

be hedged). However, it should make us profitable in the long run, over a large number of trades.

In summary, implied volatility is calculated from an option's price. It is the volatility that the buyers and sellers of this particular option expect to be realized in the period from now until the option's expiration. Different options can have different implied volatilities, even when they are on the same underlying and with the same expiration date. Realized volatility is calculated from underlying price changes over a certain period. If this period is in the past, we call it historical volatility. If it is in the future, we call it future realized volatility.

5. Range-based volatility measures

5.1 Introduction

Asset volatility, a measure of risk, plays a crucial role in many areas of finance and economics.

Therefore, volatility modelling and forecasting have become one of the most developed parts

financial econometrics. However, since the volatility is not directly observable, the first problem

which must be dealt with before modelling or forecasting is always a volatility measurement (or,

more precisely, estimation). Consider a stock price over several days. From a statistician’s point of

view, daily relative changes of stock price (stock returns) are almost random. Moreover, even

though daily stock returns are typically of a magnitude of 1% or 2%, they are approximately

equally often positive and negative, making average daily return very close to zero. The most

natural measure for how much stock price changes is the variance of the stock returns. Variance

can be easily calculated and it is a natural measure of the volatility. However, this way we can

get only an average volatility over an investigated time period. This might not be sufficient,

because volatility changes from one day to another. When we have daily closing prices and we

need to estimate volatility on a daily basis, the only estimate we have is squared (demeaned)

daily return. This estimate is very noisy, but since it is often the only one we have, it is

commonly used. In fact, we can look at most of the volatility models (e.g., GARCH class of

models or stochastic volatility models) in such a way that daily volatility is first estimated as

squared returns and consequently processed by applying time series techniques. When not only

daily closing prices, but intraday high frequency data are available too, we can estimate daily

volatility more precisely. However, high frequency data are in many cases not available at all or

available only over a shorter time horizon and costly to obtain and work with. Moreover, due to

market microstructure effects the volatility estimation from high frequency data is rather a

complex issue. However, closing prices are not the only easily available daily data. For most financial assets, daily open, high and low prices are available too. The range, the difference between high and low prices, is a natural candidate for volatility estimation. The assumption

that the stock return follows a Brownian motion with zero drift during the day allows Parkinson

(1980) to formalize this intuition and derive a volatility estimator for the diffusion parameter of

the Brownian motion. This estimator based on the range (the difference between high and low

prices) is much less noisy than squared returns. Garman and Klass (1980) subsequently introduce

an estimator based on open, high, low and close prices, which is even less noisy.

5.2 Range based volatility models

Different range estimators are considered for modelling volatility.

5.2.1 Parkinson

Parkinson (1980) introduced the first advanced volatility estimator based only on high and low

prices, (the current day’s high during the trading interval (Hi) and the current day’s low during

the trading interval (Li)). As the Parkinson measure does not take into account the opening jumps,

the Parkinson volatility estimator tends to underestimate the volatility. On the other hand, since it

does not handle drift [and assumes that drift is zero], in a trendy market it may overestimate the

volatility in the pertinent time interval.

5.2.2 Garman-Klass

Garman and Klass proposed an estimator that is based on all commonly available prices of the

current trading day: open, high, low and close (OHLC). The Garman–Klass estimator includes

opening and closing prices for the current trading day. From this perspective, the estimator

extends and improves the performance offered by the Parkinson estimator. It does not include the

overnight jumps though; therefore, it may underestimate the volatility.
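Both estimators can be sketched in a few lines (OHLC data below are hypothetical). The formulas follow Parkinson (1980), σ² = mean(ln(H/L)²) / (4 ln 2), and Garman-Klass (1980), σ² = mean(0.5·ln(H/L)² − (2 ln 2 − 1)·ln(C/O)²):

```python
import numpy as np

# Hypothetical daily open, high, low, close prices
O = np.array([100.0, 101.0, 102.0, 101.5])
H = np.array([102.0, 103.5, 103.0, 103.2])
L = np.array([ 99.5, 100.2, 100.8, 100.9])
C = np.array([101.0, 102.0, 101.5, 103.0])

# Parkinson (1980): uses only the high-low range (assumes zero drift, no jumps)
park_var = np.mean(np.log(H / L) ** 2) / (4 * np.log(2))

# Garman-Klass (1980): adds open and close prices, reducing estimator noise further
gk_var = np.mean(0.5 * np.log(H / L) ** 2
                 - (2 * np.log(2) - 1) * np.log(C / O) ** 2)

print(f"Parkinson daily vol: {np.sqrt(park_var):.4f}, "
      f"Garman-Klass daily vol: {np.sqrt(gk_var):.4f}")
```

Both return a daily variance; multiplying by 252 (and taking the square root) annualizes it, as with close-to-close volatility.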

5.3 Conclusion

Volatility is not directly observable and must be estimated. An estimator based only on daily closing data is imprecise.

Klass volatility estimator is the best volatility estimator based on daily (open, high, low and

close) data.

6. Market Microstructure and Information Disclosure

Disclosure of information in financial markets is at the forefront of regulatory efforts to improve

financial market quality and stability. Recently, these efforts have been very prominent, with the

Sarbanes–Oxley Act of 2002 and the Dodd–Frank Act of 2010 emphasizing various aspects of

improved disclosure. For example, the Sarbanes–Oxley Act was passed “to protect investors by

improving the accuracy and reliability of corporate disclosures made pursuant to the securities

laws, and for other purposes”.

Disclosure regulation comes in different forms and affects different activities. Over time, firms

have increasingly been required to disclose information about their operations and financial

activities in financial reports to their investors. Similarly, investors are required to disclose

information about their holdings in firms that might pertain to activism, intentions of activism, or

acquisitions intentions that could ultimately affect firm value.

Moreover, improved quality of public information is also achieved by increasing the reliability of

credit ratings and by greater disclosure of macroeconomic and industry-related information.

Recently, following the financial crisis of 2008, governments increased the amount of disclosure

available about banks by conducting annual stress tests and making their results publicly

available.

Disclosure can potentially promote some important goals: By leveling the playing field in

financial markets, it can increase market liquidity and market efficiency and can decrease the

cost of capital for firms. There are potential unintended consequences of disclosure, which occur

because of the crowding out of private information production, the destruction of risk-sharing

opportunities, and the promotion of destabilizing beauty-contest incentives. Given the flow of

new regulations related to disclosure in recent years, researchers have been delving more and

more into the topic, trying to understand the pros and cons and answering key questions, such as:

What is the optimal level of disclosure in terms of promoting market quality and social welfare?

What types of disclosure are most beneficial? In what circumstances is disclosure desirable?

6.1 Market Quality and Disclosure

The effect of disclosure is often understood by examining different measures of market quality:

6.1.1 Market liquidity. Market liquidity refers to a market’s ability to facilitate the purchase or

sale of an asset without drastically affecting the asset’s price. Disclosure improves market

liquidity. Intuitively, more precise public information implies that there is less uncertainty

about the asset value, so rational traders trade more aggressively against liquidity traders. As a

result, changes in liquidity trading are absorbed with a smaller price change.

6.1.2 Market efficiency. Market efficiency, also called price efficiency or informational

efficiency, concerns how informative the prevailing market prices are about the future values of

the traded assets. An underlying reason for promoting market efficiency is that it is believed to

be a good proxy for real efficiency, by which more information in prices about underlying values

improves real investment decisions. Disclosure improves market efficiency: intuitively,

more public information before the price is formed directly injects more fundamental

information into the price through updating traders’ forecasts about the asset payoff. This implies

that the price tracks fundamental value more closely.

6.1.3 Cost of capital. The expected return is often interpreted as the cost of capital on the risky

asset. A lower cost of capital benefits the issuer of the security, as it enables the issuer to sell the

security at a higher price. The cost of capital increases with risk aversion and asset supply: traders are willing to pay a lower price when they are more risk averse and have to hold more of the asset on average, so the risk they have to bear is higher. It is also increasing in the average risk perceived by traders per unit of the security: when the perceived risk goes up, the cost of capital also increases. Disclosure affects the cost of capital only through the perceived risk: a

higher level of disclosure lowers the cost of capital by lowering traders’ average risk.

6.1.4 Return volatility. Return volatility is another measure that attracts attention from

academics and regulators. Disclosure lowers return volatility. This is because more public

information improves market efficiency, which thus brings the asset price closer to the

fundamental value.

6.2 The Effect of Disclosure

6.2.1 The crowding-out effect on private information

Starting from low disclosure, all traders choose to become informed, and an increase in

disclosure reduces the precision of their information. Then, at some point, the fraction of

informed traders starts decreasing as disclosure continues to improve, eventually drying up all

the information produced privately in the market. Hence, public disclosure clearly crowds out

private information.

6.3 Investors’ welfare

Another important issue that has to be considered in evaluating the effects of disclosure is the

implications for the welfare of traders in the financial market.

6.4 Destruction of Trading Opportunities

The well-known Hirshleifer (1971) effect is a powerful argument against public disclosure of

information. The idea is that when traders face idiosyncratic risks arising from, say,

heterogeneous endowment shocks, public disclosure decreases welfare by reducing the risk-

sharing opportunities available to traders.

6.5 Conclusion

The analysis provided in this article demonstrates key insights from the literature on how

information disclosure in financial markets affects market quality, information production, real

efficiency, and traders’ welfare. As the analysis shows, there are many aspects to consider when

evaluating the effects of disclosure and the optimal regulation of the level and form of disclosure.

We have shown here how many of these effects can be manifested in a cohesive analytical

framework that has proven useful in understanding trading and information in financial markets.

As disclosure is being used more and more as a tool by regulators to increase market quality, it is

important to consider its different implications. We hope that our review will be useful for

researchers interested in advancing the theoretical and empirical work in the area and for policy

analysis as regulations of market disclosure continue to evolve.

REFERENCES:

Baitinger, E., & Papenbrock, J. (2017). Interconnectedness risk and active portfolio management. Journal of Investment Strategies.

Brugière, P. (2020). Quantitative Portfolio Management. Springer Texts in Business and Economics.

Garman, M. B., & Klass, M. J. (1980). On the estimation of security price volatilities from historical data. The Journal of Business, 53(1), 67–78.

Hirshleifer, J. (1971). The private and social value of information and the reward to inventive activity. American Economic Review, 61, 561–574.

Nugroho, B. Y. (2020). The effect of book to market ratio, profitability, and investment on stock return. SSRG International Journal of Economics and Management Studies (SSRG-IJEMS), 7(6).

Rustagi, R. P. (2021). Investment Analysis & Portfolio Management. Sultan Chand & Sons.
