
RISE OF THE RATIONAL MARKET HYPOTHESIS

To many young people, the idea of efficient financial markets -- the idea that, in
the words of economist Eugene Fama, “At any point in time, the actual price of a
security will be a good estimate of its intrinsic value” -- probably seems like a
joke. The financial crisis of 2008, the bursting of the housing bubble, and gyrations
in markets from gold to Bitcoin to Chinese stocks have put paid, at least for now,
to the idea that prices are guided by the steady hand of rationality. The theory won
Fama an economics Nobel Prize in 2013, but he shared it with Robert Shiller,
whose research poked significant holes in the idea decades ago.

But believe it or not, there was a time when efficient markets theory occupied a
place of honor in the worldview of economists and financial professionals alike.
This is chronicled in my colleague Justin Fox’s excellent book, “The Myth of the
Rational Market: A History of Risk, Reward, and Delusion on Wall Street.”
Though Fama did empirical research that seemed to support market efficiency, at
its core the idea is based on fairly simple logic -- if people consistently buy assets
for more than their fundamental value, they’ll lose money. If people lose money,
they’ll be pushed out of the market, and only investors who tend to pay the right
price -- often referred to as “rational arbitrageurs” in economics papers -- will
remain. Economist Milton Friedman legendarily stated this idea in 1953 when he
wrote:

People who argue that speculation is destabilizing seldom realize that this is
largely equivalent to saying that speculators lose money, since speculation can be
destabilizing in general only if speculators on the average sell when [an asset] is
low in price and buy when it is high.

This logic is simple and compelling, but it doesn’t have to be true. In the real
world, people who trade based on fundamentals don’t have infinite resources --
short-selling on the expectation that a stock will fall in price is expensive and risky:
borrowing shares to place those bets costs money, and financial backers can
withdraw their money before a trade pays off. Because of these limits, trading
against a mispricing isn’t true arbitrage, because, as economist and investor John
Maynard Keynes is said to have quipped, “The market can stay irrational longer
than you can stay solvent.”
In the wake of the 1987 stock market crash, economists pushed back strongly
against the idea of efficient markets. Keynes’s remark was formalized into
economic theory by J. Bradford DeLong, Andrei Shleifer, Lawrence Summers and
Robert Waldmann in 1989. The idea was that so-called noise traders -- a
coordinated herd of investors who either overpay or underpay for a financial asset --
can act in concert to push prices out of line with fundamentals. With their limited
resources, rational arbitrageurs can’t always risk pushing back against the irrational
tide -- sometimes, like a movie character who throws down his shotgun and runs
when faced with an onrushing gang of zombies, rational traders can end up moving
in the same direction as the speculators. When this happens, the noise traders can
actually make money and remain in the market, defying Friedman’s formulation.

It’s a compelling idea. But it’s very hard to test because it’s hard to observe
rational arbitrageurs in action. Typically, no one knows which trades are placed by
the rational folks, and which are by misinformed speculators.

A recent paper, however, does something along those lines. In “ETF Arbitrage and
Return Predictability,” economists David Brown, Shaun Davies and Matthew
Ringgenberg take advantage of the way exchange-traded funds are structured. An
ETF typically has a designated set of traders called “authorized participants” (APs)
who are able to carry out arbitrage between the fund and its underlying assets,
whether stocks, bonds or commodities. When the price changes, APs respond by
buying and selling the underlying assets, and by either creating or redeeming
shares of the ETF, until the two values come back into line. They are, by design,
rational arbitrageurs.
Generally, an ETF’s APs do a good job of keeping the fund’s value close to the
value of the assets it owns. Many studies confirm this. But Brown et al. find that
APs’ arbitrage coincides with a deviation of asset values from their fundamentals.

When traders other than the APs push around the price, the changes in the prices of
the assets tend to reverse themselves over the subsequent months. Anyone
watching the APs’ arbitrage trades -- which are public record, since they involve
the creation and destruction of ETF shares -- can then bet that the recent rise or fall
in the price of the assets underlying the ETF will be reversed. And make a lot of
money. Under efficient markets theory, that’s not supposed to happen.

Now, the APs’ job is to equalize the price of the ETF and its assets, not to make
sure the asset prices themselves reflect fundamentals. But the fact that they exist
implies that there are at least some rational arbitrageurs in the market. And the fact
that ETF asset prices have predictable reversals implies that rational arbitrageurs
aren’t trying very hard to correct prices that are out of line with long-term
fundamental values.

In other words, even markets with some rational participants can behave
irrationally. Speculation can move prices around for irrational reasons, and rational
traders often either can’t or won’t bother to correct them. However, it’s worth
noting that the effect is less pronounced in 2012-2016 than in 2007-2011,
suggesting the possibility that this particular market inefficiency may have been a
temporary phenomenon.

Efficient markets theory never really fits the facts, but it never quite dies, either.

BRIEF OVERVIEW OF CLASSICAL FINANCIAL THEORIES

1) EXPECTED UTILITY THEORY:

Expected utility is a concept derived from the expected utility hypothesis. This
hypothesis states that under uncertainty, the probability-weighted average of the
utilities of all possible outcomes best represents an agent's preferences at any given
point in time.

Expected utility theory is used as a tool for analyzing situations where individuals
must make a decision without knowing which outcomes may result from that
decision, i.e., decision making under uncertainty. These individuals will choose
the action that will result in the highest expected utility, which is the sum of the
products of probability and utility over all possible outcomes. The decision made
will also depend on the agent’s risk aversion and the utility of other agents.

This theory also notes that the utility of money does not necessarily equate to
its face value. This helps explain why people may take out
insurance policies to cover themselves for a variety of risks. The expected
monetary value of paying for insurance is negative. But the possibility of
large-scale losses could lead to a serious decline in utility because of the
diminishing marginal utility of wealth.
History of the Expected Utility Concept
The concept of expected utility was first posited by Daniel Bernoulli, who used it
as a tool to solve the St. Petersburg Paradox.

The St. Petersburg Paradox can be illustrated as a game of chance in which a coin
is tossed in each play of the game. The stakes start at $2 and double every time
heads appears; the first time tails appears, the game ends and the player wins
whatever is in the pot. Under such rules, the player wins $2 if tails appears on the
first toss, $4 if heads appears on the first toss and tails on the second, $8 if heads
appears on the first two tosses and tails on the third, and so on. Mathematically,
the player wins 2^k dollars, where k equals the number of tosses (k must be a
whole number greater than zero). Assuming the game can continue as long as the
coin toss results in heads, and in particular that the casino has unlimited resources,
this sum grows without bound, and so the expected win from playing is an infinite
amount of money.
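The unbounded expected value can be checked with a short sketch. This is purely illustrative: each possible game length k contributes (1/2)^k × 2^k = $1 to the expected value, so the truncated sum grows by a dollar per term.

```python
import random

def st_petersburg_payoff(rng=random):
    """Play one round: the pot starts at $2 and doubles on each heads;
    the game ends on the first tails."""
    pot = 2
    while rng.random() < 0.5:  # heads with probability 1/2
        pot *= 2
    return pot

# Each game length k contributes (1/2)^k * 2^k = $1 to the expected value,
# so truncating the sum at 30 tosses already gives $30 -- and the full sum
# grows without bound.
truncated_ev = sum((0.5 ** k) * (2 ** k) for k in range(1, 31))
print(truncated_ev)  # 30.0
```

Adding more terms to the sum adds one dollar each, which is why no finite ticket price matches the game's expected value.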

Bernoulli solved the St. Petersburg Paradox by making the distinction between
expected value and expected utility: the latter weights the utility of each outcome
by its probability, instead of weighting the monetary outcomes themselves.

Expected Utility and Marginal Utility


Expected utility is also related to the concept of marginal utility. The marginal
utility of additional reward or wealth decreases when a person is already rich or
has sufficient wealth. In such cases, a person may choose the safer option as
opposed to a riskier one.

For example, consider the case of a lottery ticket with expected winnings of $1
million. Suppose a poor person buys the ticket for $1. A wealthy man offers to buy
the ticket off him for $500,000. Logically, the lottery holder has a 50-50 chance of
profiting from the transaction. It is likely that he will opt for the safer option of
selling the ticket and pocketing the $500,000. This is due to the diminishing
marginal utility of amounts over $500,000 for the ticket holder. In other words, the
gain in utility from moving from $0 to $500,000 is much greater than the gain
from moving from $500,000 to $1 million.

Now consider the same offer made to a rich person, possibly a millionaire. It is
likely that the millionaire will not sell the ticket because he hopes to make another
million from it.
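The ticket holder's choice can be sketched with a simple concave utility function. Square-root utility is an assumption made here purely for illustration; it is one standard way to model diminishing marginal utility.

```python
import math

# Square-root utility: a simple concave function with diminishing
# marginal utility (an assumed functional form, for illustration only).
def utility(wealth):
    return math.sqrt(wealth)

sell = utility(500_000)                             # take the sure $500,000
hold = 0.5 * utility(1_000_000) + 0.5 * utility(0)  # keep the 50-50 ticket

print(sell > hold)  # True: the risk-averse holder prefers to sell
```

Even though both choices have the same expected monetary value, the concave utility function makes the certain $500,000 worth more in utility terms than the gamble.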
A 1999 paper by economist Matthew Rabin argued that the expected utility theory
is implausible over modest stakes. This means that the expected utility theory fails
when the incremental marginal utility amounts are insignificant.

Example of Expected Utility


Decisions involving expected utility are decisions involving uncertain outcomes. In
such events, an individual calculates the probability of each possible outcome and
weighs the resulting utilities before making a decision.

For example, purchasing a lottery ticket represents two possible outcomes for the
buyer. He or she could end up losing the amount they invested in buying the ticket
or they could end up making a smart profit by winning either a portion or the entire
lottery. Assigning probability values to the costs involved (in this case, the nominal
purchase price of a lottery ticket), it is not difficult to see that the expected utility
to be gained from purchasing a lottery ticket is greater than not buying it.

Expected utility is also used to evaluate situations without immediate payback,
such as insurance. When one weighs the expected utility to be gained from making
payments into an insurance product (possible tax breaks and guaranteed income at
the end of a predetermined period) against the expected utility of retaining the
investment amount and spending it on other opportunities and products, insurance
can seem like the better option.

2) Modern Portfolio Theory (MPT)

Modern portfolio theory (MPT) is a theory on how risk-averse investors can
construct portfolios to maximize expected return for a given level of market
risk. Harry Markowitz pioneered this theory in his paper "Portfolio Selection,"
which was published in the Journal of Finance in 1952. He was later awarded a
Nobel Prize for his work on modern portfolio theory.

Modern portfolio theory argues that an investment's risk and return characteristics
should not be viewed alone, but should be evaluated by how the investment affects
the overall portfolio's risk and return. MPT shows that an investor can construct a
portfolio of multiple assets that will maximize returns for a given level of risk.
Likewise, given a desired level of expected return, an investor can construct a
portfolio with the lowest possible risk. Based on statistical measures such
as variance and correlation, an individual investment's performance is less
important than how it impacts the entire portfolio.

MPT assumes that investors are risk-averse, meaning they prefer a less risky
portfolio to a riskier one for a given level of return. As a practical matter, risk
aversion implies that most people should invest in multiple asset classes.

The expected return of the portfolio is calculated as a weighted sum of the
individual assets' returns. If a portfolio contained four equally weighted assets with
expected returns of 4%, 6%, 10%, and 14%, the portfolio's expected return would be:

(4% x 25%) + (6% x 25%) + (10% x 25%) + (14% x 25%) = 8.5%

The portfolio's risk is a complicated function of the variances of each asset and the
correlations of each pair of assets. To calculate the risk of a four-asset portfolio, an
investor needs each of the four assets' variances and six correlation values, since
there are six possible two-asset combinations with four assets. Because of the asset
correlations, the total portfolio risk, or standard deviation, is lower than what
would be calculated by a weighted sum.
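The two calculations above can be sketched together. The expected return uses the figures from the text; the volatilities and the uniform 0.2 pairwise correlation are assumed numbers, added only to show that the portfolio's standard deviation comes out below the naive weighted sum.

```python
import numpy as np

# Expected return of the four-asset, equally weighted portfolio above.
weights = np.array([0.25, 0.25, 0.25, 0.25])
exp_returns = np.array([0.04, 0.06, 0.10, 0.14])
expected = float(weights @ exp_returns)  # 0.085, the 8.5% above

# Portfolio risk needs each asset's variance plus the six pairwise
# correlations. These volatilities and the uniform 0.2 correlation
# are assumptions for illustration, not from the text.
vols = np.array([0.10, 0.12, 0.18, 0.25])
corr = np.full((4, 4), 0.2)
np.fill_diagonal(corr, 1.0)
cov = np.outer(vols, vols) * corr

port_sd = float(np.sqrt(weights @ cov @ weights))
naive_sd = float(weights @ vols)  # plain weighted sum of volatilities

print(round(expected, 3), port_sd < naive_sd)  # 0.085 True
```

Because every pairwise correlation is below 1, the cross terms in the variance shrink, which is exactly why the portfolio's standard deviation is lower than the weighted sum of the individual volatilities.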

Benefits of Modern Portfolio Theory (MPT)


MPT is a useful tool for investors trying to build diversified portfolios. In fact, the
growth of exchange traded funds (ETFs) made MPT more relevant by giving
investors easier access to different asset classes. Stock investors can use MPT to
reduce risk by putting a small portion of their portfolios in government bond ETFs.
The variance of the portfolio will be significantly lower because government bonds
have a negative correlation with stocks. Adding a small investment in Treasuries to
a stock portfolio will not have a large impact on expected returns because of this
loss-reducing effect.

Similarly, MPT can be used to reduce the volatility of a U.S. Treasury portfolio by
putting 10% in a small-cap value index fund or ETF. Although small-cap value
stocks are far riskier than Treasuries on their own, they often do well during
periods of high inflation when bonds do poorly. As a result, the portfolio's overall
volatility is lower than one consisting entirely of government bonds. Furthermore,
the expected returns are higher.

Modern portfolio theory allows investors to construct more efficient portfolios.
Every possible combination of assets can be plotted on a graph, with the
portfolio's risk on the X-axis and the expected return on the Y-axis. This plot
reveals the most desirable portfolios. For example, suppose Portfolio A has an
expected return of 8.5% and a standard deviation of 8%. Further, assume that
Portfolio B has an expected return of 8.5% and a standard deviation of 9.5%.
Portfolio A would be deemed more efficient because it has the same expected
return but lower risk.

It is possible to draw an upward sloping curve to connect all of the most efficient
portfolios. This curve is called the efficient frontier. Investing in a portfolio
underneath the curve is not desirable because it does not maximize returns for a
given level of risk.
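A point on the efficient frontier can be computed directly with the classic two-asset minimum-variance formula. The volatilities and correlation below are assumed for illustration (think of a stock fund paired with a government-bond fund).

```python
import math

# Two-asset minimum-variance portfolio, a standard mean-variance result:
#   w1* = (s2^2 - rho*s1*s2) / (s1^2 + s2^2 - 2*rho*s1*s2)
# The volatilities and correlation are assumed numbers, for illustration.
s1, s2, rho = 0.20, 0.08, -0.10
cov12 = rho * s1 * s2

w1 = (s2 ** 2 - cov12) / (s1 ** 2 + s2 ** 2 - 2 * cov12)

def portfolio_sd(w):
    """Standard deviation of a portfolio with weight w in asset 1."""
    return math.sqrt(w**2 * s1**2 + (1 - w)**2 * s2**2 + 2 * w * (1 - w) * cov12)

# The minimum-variance mix is less risky than holding either asset alone.
print(round(w1, 3), portfolio_sd(w1) < min(s1, s2))  # 0.161 True
```

With a negative correlation, even a modest allocation to the riskier asset lowers total portfolio risk below that of the safer asset held by itself, which is the diversification point the text makes about stocks and Treasuries.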

Criticism of Modern Portfolio Theory (MPT)


Perhaps the most serious criticism of MPT is that it evaluates portfolios based on
variance rather than downside risk. Two portfolios that have the same level of
variance and returns are considered equally desirable under modern portfolio
theory. One portfolio may have that variance because of frequent small losses. In
contrast, the other could have that variance because of rare spectacular declines.
Most investors would prefer frequent small losses, which would be easier to
endure. Post-modern portfolio theory (PMPT) attempts to improve on modern
portfolio theory by minimizing downside risk instead of variance.

3) Capital Asset Pricing Model (CAPM)

No matter how much you diversify your investments, some level of risk will
always exist. So investors naturally seek a rate of return that compensates for that
risk. The capital asset pricing model (CAPM) helps to calculate investment risk
and what return on investment an investor should expect.

Systematic Risk vs. Unsystematic Risk


The capital asset pricing model was developed by the financial economist (and
later, Nobel laureate in economics) William Sharpe and set out in his 1970
book Portfolio Theory and Capital Markets. His model starts with the idea that
an individual investment contains two types of risk:

1. Systematic Risk – These are market risks—that is, general perils of
investing—that cannot be diversified away. Interest rates, recessions, and
wars are examples of systematic risks.
2. Unsystematic Risk – Also known as "specific risk," this risk relates to
individual stocks. In more technical terms, it represents the component of a
stock's return that is not correlated with general market moves.

Modern portfolio theory shows that specific risk can be removed or at least
mitigated through diversification of a portfolio. The trouble is that diversification
still does not solve the problem of systematic risk; even a portfolio holding all the
shares in the stock market can't eliminate that risk. Therefore, when calculating a
deserved return, systematic risk is what most plagues investors.

The CAPM Formula


CAPM evolved as a way to measure this systematic risk. Sharpe found that the
return on an individual stock, or a portfolio of stocks, should equal its cost of
capital. The standard formula remains the CAPM, which describes the relationship
between risk and expected return.

Here is the formula:

Ra = Rrf + βa × (Rm − Rrf)

where:

Ra=Expected return on a security

Rrf=Risk-free rate

Rm=Expected return of the market

βa=The beta of the security

(Rm−Rrf)=Equity market premium

CAPM's starting point is the risk-free rate–typically a 10-year government bond
yield. A premium is added, one that equity investors demand as compensation for
the extra risk they accrue. This equity market premium consists of the expected
return from the market as a whole less the risk-free rate of return. The equity risk
premium is multiplied by a coefficient that Sharpe called "beta."

Beta's Role in CAPM


According to CAPM, beta is the only relevant measure of a stock's risk. It
measures a stock's relative volatility–that is, it shows how much the price of a
particular stock jumps up and down compared with how much the entire stock
market jumps up and down. If a share price moves exactly in line with the market,
then the stock's beta is 1. A stock with a beta of 1.5 would rise by 15% if the
market rose by 10% and fall by 15% if the market fell by 10%.

Beta is found by statistical analysis of individual, daily share price returns in
comparison with the market's daily returns over precisely the same period. In their
classic 1972 study "The Capital Asset Pricing Model: Some Empirical Tests,"
financial economists Fischer Black, Michael C. Jensen, and Myron Scholes
confirmed a linear relationship between the financial returns of stock portfolios and
their betas. They studied the price movements of the stocks on the New York Stock
Exchange between 1931 and 1965.
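The regression behind a beta estimate can be sketched on simulated data. The true beta of 1.5 and the return volatilities here are assumptions for illustration; this is not the Black-Jensen-Scholes dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated daily returns under an assumed true beta of 1.5 --
# illustrative data, not real market history.
market = rng.normal(0.0005, 0.01, 1000)
stock = 1.5 * market + rng.normal(0.0, 0.01, 1000)

# Beta is the regression slope of stock returns on market returns,
# i.e. cov(stock, market) / var(market).
beta = np.cov(stock, market)[0, 1] / np.var(market, ddof=1)
print(beta)  # should land near the assumed 1.5
```

In practice the same covariance-over-variance calculation is run on actual daily returns of the stock and a market index over the same window, which is exactly the procedure the paragraph above describes.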

Beta, compared with the equity risk premium, shows the amount of compensation
equity investors need for taking on additional risk. If the stock's beta is 2.0, the
risk-free rate is 3%, and the market rate of return is 7%, the market's excess return
is 4% (7% - 3%). Accordingly, the stock's excess return is 8% (2 x 4%, multiplying
market return by the beta), and the stock's total required return is 11% (8% + 3%,
the stock's excess return plus the risk-free rate).
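The arithmetic of the worked example can be expressed as a one-line function implementing the CAPM relation from the formula above.

```python
def capm_required_return(risk_free, beta, market_return):
    """CAPM relation: Ra = Rrf + beta * (Rm - Rrf)."""
    return risk_free + beta * (market_return - risk_free)

# The worked example above: beta = 2.0, risk-free rate 3%, market return 7%.
required = capm_required_return(risk_free=0.03, beta=2.0, market_return=0.07)
print(round(required, 2))  # 0.11, i.e. the 11% total required return
```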

What the beta calculation shows is that a riskier investment should earn a premium
over the risk-free rate. The amount over the risk-free rate is calculated by the
equity market premium multiplied by its beta. In other words, it is possible, by
knowing the individual parts of the CAPM, to gauge whether or not the current
price of a stock is consistent with its likely return.

What CAPM Means for Investors


This model presents a simple theory that delivers a simple result. The theory says
that the only reason an investor should earn more, on average, by investing in one
stock rather than another is that one stock is riskier. Not surprisingly, the model
has come to dominate modern financial theory. But does it really work?

It's not entirely clear. The big sticking point is beta. When professors Eugene Fama
and Kenneth French looked at share returns on the New York Stock Exchange,
the American Stock Exchange, and Nasdaq, they found that differences in betas
over a lengthy period did not explain the performance of different stocks. The
linear relationship between beta and individual stock returns also breaks down over
shorter periods of time. These findings seem to suggest that CAPM may be
wrong.

While some studies raise doubts about CAPM's validity, the model is still widely
used in the investment community. Although it is difficult to predict from beta how
individual stocks might react to particular movements, investors can probably
safely deduce that a portfolio of high-beta stocks will move more than the market
in either direction, and a portfolio of low-beta stocks will move less than the
market.

This is important for investors, especially fund managers, because they may be
unwilling to or prevented from holding cash if they feel that the market is likely to
fall. If so, they can hold low-beta stocks instead. Investors can tailor a portfolio to
their specific risk-return requirements, aiming to hold securities with betas in
excess of 1 while the market is rising, and securities with betas of less than 1 when
the market is falling.

Not surprisingly, CAPM contributed to the rise in the use of indexing–assembling
a portfolio of shares to mimic a particular market or asset class–by risk-averse
investors. This is largely due to CAPM's message that it is only possible to
earn higher returns than those of the market as a whole by taking on higher risk
(beta).

The Bottom Line


The capital asset pricing model is by no means a perfect theory. But the spirit of
CAPM is correct. It provides a useful measure that helps investors determine what
return they deserve on an investment, in exchange for putting their money at risk
on it.

4) Efficient Market Hypothesis (EMH)

The efficient market hypothesis (EMH), alternatively known as the efficient
market theory, is a hypothesis that states that share prices reflect all information
and consistent alpha generation is impossible.

According to the EMH, stocks always trade at their fair value on exchanges,
making it impossible for investors to purchase undervalued stocks or sell stocks for
inflated prices. Therefore, it should be impossible to outperform the overall market
through expert stock selection or market timing, and the only way an investor can
obtain higher returns is by purchasing riskier investments.
Although it is a cornerstone of modern financial theory, the EMH is highly
controversial and often disputed. Believers argue it is pointless to search for
undervalued stocks or to try to predict trends in the market through either
fundamental or technical analysis.

Theoretically, neither technical nor fundamental analysis can produce risk-adjusted
excess returns (alpha) consistently, and only inside information can result in
outsized risk-adjusted returns.

While academics point to a large body of evidence in support of EMH, an equal
amount of dissension also exists. For example, investors such as Warren Buffett
have consistently beaten the market over long periods, which by definition is
impossible according to the EMH.

Detractors of the EMH also point to events such as the 1987 stock market crash,
when the Dow Jones Industrial Average (DJIA) fell by over 20 percent in a single
day, and asset bubbles as evidence that stock prices can seriously deviate from
their fair values.

Proponents of the Efficient Market Hypothesis conclude that, because of the
randomness of the market, investors could do better by investing in a low-cost,
passive portfolio.

Data compiled by Morningstar Inc., in its June 2019 Active/Passive Barometer
study, supports the EMH. Morningstar compared active managers’ returns in all
categories against a composite made of related index funds and exchange-traded
funds (ETFs). The study found that over a 10-year period beginning June 2009,
only 23% of active managers were able to outperform their passive peers. Better
success rates were found in foreign equity funds and bond funds. Lower success
rates were found in US large cap funds. In general, investors have fared better by
investing in low-cost index funds or ETFs.

While a percentage of active managers do outperform passive funds at some point,
the challenge for investors is being able to identify which ones will do so over the
long term. Less than 25 percent of the top-performing active managers can
consistently outperform their passive counterparts over time.

5) Modigliani-Miller Theory


According to MM, under a perfect market condition, the dividend policy of the
company is irrelevant and it does not affect the value of the firm. According to the
theory, the value of a firm depends solely on its earnings power resulting from the
investment policy.
“Under conditions of a perfect market, rational investors, absence of tax
discrimination between dividend income and capital appreciation, given the firm’s
investment policy, its dividend policy may have no influence on the market price
of shares”.
When a firm pays out its earnings as dividends, it will have to approach the market
to procure funds to meet a given investment programme. Acquisition of additional
capital will dilute the firm's share capital, which will result in a drop in share
values. Thus, what the stockholders gain in cash dividends they lose in decreased
share values. The market price before and after the payment of a dividend would
be identical, and hence the shareholders would be indifferent between dividends
and retention of earnings. This suggests that the dividend decision is irrelevant.
MM's argument for the irrelevance of dividends remains unchanged whether
external funds are obtained by means of share capital or borrowings. This is
because investors are indifferent between debt and equity with respect to leverage,
and the cost of debt is the same as the real cost of equity.
Finally, even under conditions of uncertainty, the dividend decision is irrelevant
because of the operation of arbitrage. The market values of the shares of two
firms would be the same if they are identical with respect to business risk,
prospective future earnings and investment policies. This follows from the rational
behaviour of investors, who prefer more wealth to less. Differences in current and
future dividend policies cannot influence the share values of the two firms.

Assumptions of MM Theory:

1. Capital markets are perfect: Investors are rational, information is available to
all investors easily and freely, investors are free to buy and sell securities,
there are no transaction costs, and investors can lend and borrow at the
same rate.
2. Dividends, earnings and capital gains are subject to the same tax rates.
3. The firm has a fixed investment policy which will remain unchanged in future.
4. There is no risk in forecasting income, dividends and prices.
Formula

P0 = (D1 + P1) / (1 + k)

Where
P0 = Value of share at the beginning of the period, P1 = Value of share at the end
of the period, D1 = Dividend per share at the end of the period, k = Cost of capital

Value of the company may be ascertained as under

nP0 = ((n + Δn)P1 − I + E) / (1 + k)

where n = Number of outstanding shares, I = Total investment, Δn = Additional
number of shares, E = Earnings of the company
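The two MM relations can be checked numerically. All of the figures below (cost of capital, prices, dividend, investment, earnings) are assumed for illustration; the point is that paying the dividend and issuing new shares to plug the financing gap leaves firm value unchanged.

```python
# A minimal numeric sketch of the MM valuation relations above.
# Every number here is an assumption for illustration only.
k = 0.10        # cost of capital
P1 = 110.0      # expected price per share at the end of the period
D1 = 5.0        # dividend per share paid at the end of the period

# Price of a share today: P0 = (D1 + P1) / (1 + k)
P0 = (D1 + P1) / (1 + k)

# Firm value: n*P0 = ((n + dn)*P1 - I + E) / (1 + k)
n = 1_000_000   # shares outstanding
I = 2_000_000.0 # total investment planned
E = 1_500_000.0 # earnings of the company
dn = (I - (E - n * D1)) / P1  # new shares issued to cover the financing gap
firm_value = ((n + dn) * P1 - I + E) / (1 + k)

# Paying the dividend and issuing shares leaves firm value unchanged.
print(round(P0, 2), abs(firm_value - n * P0) < 1e-3)
```

The dilution from the Δn new shares exactly offsets the cash paid out as dividends, which is the arbitrage argument behind dividend irrelevance.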

Criticism of MM theory

1. The assumption that there are no flotation costs is invalid. In practice,
whenever a fresh issue is made by a company it has to bear flotation costs.
2. Another assumption, that investors are not subject to transaction costs when
buying and selling securities, is also not valid. In reality, transaction costs
affect sale and purchase values.
3. Besides equity financing of a new project, a company may have a number of
other alternatives which can be used for financing.
4. MM's assumption that taxes do not exist is far from reality. Dividends and
capital gains are often taxed at different rates, so shareholders may well
prefer one to the other.

6) Arbitrage Pricing Theory (APT)


Arbitrage pricing theory (APT) is a multi-factor asset pricing model based
on the idea that an asset's returns can be predicted using the linear
relationship between the asset’s expected return and a number of
macroeconomic variables that capture systematic risk. It is a useful tool for
analyzing portfolios from a value investing perspective, in order to identify
securities that may be temporarily mispriced.

The Formula for the Arbitrage Pricing Theory Model Is


E(R)i = Rz + β1 × RP1 + β2 × RP2 + … + βn × RPn

where:

E(R)i = Expected return on the asset

Rz = Risk-free rate of return

βn = Sensitivity of the asset price to macroeconomic factor n

RPn = Risk premium associated with factor n

The beta coefficients in the APT model are estimated by using linear regression. In
general, historical securities returns are regressed on the factor to estimate its beta.

How the Arbitrage Pricing Theory Works


The arbitrage pricing theory was developed by the economist Stephen Ross in
1976, as an alternative to the capital asset pricing model (CAPM). Unlike the
CAPM, which assumes markets are perfectly efficient, APT assumes markets
sometimes misprice securities, before the market eventually corrects and securities
move back to fair value. Using APT, arbitrageurs hope to take advantage of any
deviations from fair market value.

However, this is not a risk-free operation in the classic sense of arbitrage, because
investors are assuming that the model is correct and making directional trades—
rather than locking in risk-free profits.

Mathematical Model for the APT


While APT is more flexible than the CAPM, it is more complex. The CAPM only
takes into account one factor—market risk—while the APT formula has multiple
factors. And it takes a considerable amount of research to determine how sensitive
a security is to various macroeconomic risks.

The factors as well as how many of them are used are subjective choices, which
means investors will have varying results depending on their choice. However,
four or five factors will usually explain most of a security's return.
APT factors represent systematic risks that cannot be reduced by the diversification of
an investment portfolio. The macroeconomic factors that have proven most reliable
as price predictors include unexpected changes in inflation, gross national
product (GNP), corporate bond spreads and shifts in the yield curve. Other
commonly used factors are gross domestic product (GDP), commodities prices,
market indices, and exchange rates.

Example of How Arbitrage Pricing Theory Is Used


For example, suppose the following four factors have been identified as explaining
a stock's return, and the stock's sensitivity to each factor and the risk premium
associated with each factor have been calculated:

- Gross domestic product (GDP) growth: ß = 0.6, RP = 4%
- Inflation rate: ß = 0.8, RP = 2%
- Gold prices: ß = -0.7, RP = 5%
- Standard and Poor's 500 index return: ß = 1.3, RP = 9%
- The risk-free rate is 3%

Using the APT formula, the expected return is calculated as:

 Expected return = 3% + (0.6 x 4%) + (0.8 x 2%) + (-0.7 x 5%) + (1.3 x 9%)
= 15.2%
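The same calculation can be written as a sum over (beta, risk premium) pairs, using the factor values from the example above.

```python
# The APT example above: expected return = risk-free rate plus the sum
# of each factor beta times its risk premium.
risk_free = 0.03
factors = [
    (0.6, 0.04),   # GDP growth
    (0.8, 0.02),   # inflation rate
    (-0.7, 0.05),  # gold prices
    (1.3, 0.09),   # S&P 500 index return
]

expected_return = risk_free + sum(beta * rp for beta, rp in factors)
print(round(expected_return, 3))  # 0.152, matching the 15.2% above
```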

7) Random Walk Theory

Random walk theory suggests that changes in stock prices have the same
distribution and are independent of each other. Therefore, it assumes the past
movement or trend of a stock price or market cannot be used to predict its future
movement. In short, random walk theory proclaims that stocks take a random and
unpredictable path that makes all methods of predicting stock prices futile in the
long run.
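A random walk of this kind can be sketched by simulating independent, identically distributed daily returns. The zero drift and 1% daily volatility are assumed parameters; the point is that one day's return carries no information about the next.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate i.i.d. daily returns (zero drift and an assumed 1% volatility).
returns = rng.normal(0.0, 0.01, size=10_000)

# Under a random walk, today's return says nothing about tomorrow's:
# the lag-1 autocorrelation of the simulated series is close to zero.
lag1 = np.corrcoef(returns[:-1], returns[1:])[0, 1]
print(round(lag1, 3))
```

A near-zero lag-1 autocorrelation is what makes trend-following rules futile in this model: past movements contain no exploitable signal about future ones.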

Random walk theory holds that it's impossible to outperform the market without
assuming additional risk. It considers technical analysis undependable because
chartists only buy or sell a security after an established trend has developed.
Likewise, the theory finds fundamental analysis undependable due to the often-
poor quality of information collected and its ability to be misinterpreted. Critics of
the theory contend that stocks do maintain price trends over time – in other words,
that it is possible to outperform the market by carefully selecting entry and exit
points for equity investments.
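A minimal simulation (standard library only; the 1% daily volatility figure is an arbitrary assumption) illustrates the core claim: on a true random walk, a trend-following rule that extrapolates today's move is right only about half the time.

```python
import random

# A toy random-walk return series: each day's return is an independent
# draw, so by construction the past carries no information about the future.
random.seed(42)
returns = [random.gauss(0, 0.01) for _ in range(10_000)]

# A naive trend-following rule: predict that tomorrow's return will have
# the same sign as today's. On a random walk it succeeds about half the time.
hits = sum((returns[i] > 0) == (returns[i + 1] > 0)
           for i in range(len(returns) - 1))
hit_rate = hits / (len(returns) - 1)
print(f"trend-rule hit rate: {hit_rate:.1%}")  # close to 50%
```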

Efficient Markets are Random


The random walk theory raised many eyebrows in 1973 when author Burton
Malkiel popularized the term in his book "A Random Walk Down Wall Street." The
book also popularized the efficient market hypothesis (EMH), an earlier theory
posed by University of Chicago professor Eugene Fama. The efficient market hypothesis
states that stock prices fully reflect all available information and expectations, so
current prices are the best approximation of a company’s intrinsic value. This
would preclude anyone from exploiting mispriced stocks consistently because
price movements are mostly random and driven by unforeseen events.

Fama and Malkiel concluded that, due to the short-term randomness of returns,
investors would be better off investing in a passively managed, well-
diversified fund. A controversial aspect of Malkiel’s book theorized that "a
blindfolded monkey throwing darts at a newspaper's financial pages could select
a portfolio that would do just as well as one carefully selected by experts."

Random Walk Theory in Action


The most well-known practical example of random walk theory occurred in 1988
when the Wall Street Journal sought to test Malkiel's theory by creating the annual
Wall Street Journal Dartboard Contest, pitting professional investors against darts
for stock-picking supremacy. Wall Street Journal staff members played the role of
the dart-throwing monkeys.

After more than 140 contests, the Wall Street Journal presented the results, which
showed the experts won 87 of the contests and the dart throwers won 55. However,
the experts were only able to beat the Dow Jones Industrial Average (DJIA) in 76
contests. Malkiel commented that the experts' picks benefited from the publicity
jump in the price of a stock that tends to occur when stock experts make a
recommendation. Passive management proponents contend that, because the
experts could only beat the market half the time, investors would be better off
investing in a passive fund that charges far lower management fees.
FINANCIAL MARKET ANOMALIES
In the non-investing world, an anomaly is a strange or unusual occurrence.
In financial markets, anomalies refer to situations when a security or group of
securities performs contrary to the notion of efficient markets, where security
prices are said to reflect all available information at any point in time.

With the constant release and rapid dissemination of new information, sometimes
efficient markets are hard to achieve and even more difficult to maintain. There are
many market anomalies; some occur once and disappear, while others are
continuously observed.

Calendar Effects
Anomalies that are linked to a particular time are called calendar effects. Some of
the most popular calendar effects include the weekend effect, the turn-of-the-
month effect, the turn-of-the-year effect and the January effect.

 Weekend Effect: The weekend effect describes the tendency of stock prices
to decrease on Mondays, meaning that closing prices on Monday are lower
than closing prices on the previous Friday. From 1950 through 2010, returns
on Mondays for the S&P 500 were consistently lower than on every other day
of the week. In fact, Monday was the only weekday with a negative
average rate of return.

 Turn-of-the-Month Effect: The turn-of-the-month effect refers to the
tendency of stock prices to rise on the last trading day of the month and the
first three trading days of the next month.
 Turn-of-the-Year Effect: The turn-of-the-year effect describes a pattern of
increased trading volume and higher stock prices in the last week of
December and the first two weeks of January.
 January Effect: Amid the turn-of-the-year market optimism, there is one
class of securities that consistently outperforms the rest. Small-company
stocks outperform the market and other asset classes during the first two to
three weeks of January. This phenomenon is referred to as the January
effect. Occasionally, the turn-of-the-year effect and the January effect may
be addressed as the same trend, because much of the January effect can be
attributed to the returns of small-company stocks.
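As a sketch of how such a calendar effect might be tested, the snippet below groups daily returns by weekday, the way one would when checking for the weekend effect. The returns here are synthetic stand-ins for real market data, so no genuine Monday effect should appear in them.

```python
import random
from collections import defaultdict
from datetime import date, timedelta

# Synthetic daily returns keyed by date -- stand-ins for real market data.
random.seed(0)
start = date(2020, 1, 1)
daily_returns = {}
for i in range(1400):
    d = start + timedelta(days=i)
    if d.weekday() < 5:              # keep trading days only
        daily_returns[d] = random.gauss(0.0004, 0.01)

# Group returns by weekday; the weekend effect predicts the Monday
# average would be the lowest (and often negative) in real data.
by_weekday = defaultdict(list)
for d, r in daily_returns.items():
    by_weekday[d.strftime("%A")].append(r)

for name in ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]:
    rets = by_weekday[name]
    print(f"{name:<9} mean daily return: {sum(rets) / len(rets):+.4%}")
```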
Why Do Calendar Effects Occur?
So, what's with Mondays? And why are the days around the turn of the month and
year better than others? It has been jokingly suggested that people are happier
heading into the weekend and not so happy heading back to work on Mondays, but
there is no universally accepted reason for the negative returns on Mondays.

Unfortunately, the same is true for many calendar anomalies. The January effect
may have the most plausible explanation. It is often attributed to the turn of the
tax calendar: investors sell stocks at year's end to cash in gains and unload losing
stocks to offset those gains for tax purposes. Once the new year begins, there is a
rush back into the market, particularly into small-cap stocks.

Announcements and Anomalies


Not all anomalies are related to the time of week, month or year. Some are linked
to the announcement of information regarding stock splits, earnings, and mergers
and acquisitions.

 Stock Split Effect: Stock splits increase the number of shares outstanding
and decrease the value of each outstanding share, with a net effect of zero on
the company's market capitalization. However, before and after a company
announces a stock split, the stock price normally rises. The increase in price
is known as the stock split effect. Many companies issue stock splits when
their stock has risen to a price that may be too expensive for the average
investor. As such, stock splits are often viewed by investors as a signal that
the company's stock will continue to rise. Empirical evidence suggests that
the signal is correct.
 Short-Term Price Drift: After an announcement, stock prices react and often
continue to move in the same direction. For example, if a positive earnings
surprise is announced, the stock price may immediately move higher and then
keep drifting upward. This drift occurs because new information may not be
immediately and fully reflected in the stock's price.
 Merger Arbitrage: When companies announce a merger or acquisition, the
value of the company being acquired tends to rise while the value of the
bidding firm tends to fall. Merger arbitrage plays on potential mispricing
after the announcement of a merger or acquisition. The bid submitted for an
acquisition may not be an accurate reflection of the target firm's intrinsic
value; this represents the market anomaly that arbitrageurs aim to exploit.
Arbitrageurs aim to take advantage of the pattern that bidders usually offer
premium rates to purchase target firms.

Superstitious Indicators
Aside from anomalies, there are some nonmarket signals that some people believe
will accurately indicate the direction of the market. Here is a short list of
superstitious market indicators:

 The Super Bowl Indicator: When a team from the old American Football
League wins the game, the market will close lower for the year. When an old
National Football League team wins, the market will end the year higher.
Silly as it may seem, the Super Bowl indicator was correct more than 80% of
the time over a 40-year period ending in 2008. However, the indicator has
one limitation: it contains no allowance for an expansion-team victory.
 The Hemline Indicator: The market rises and falls with the length of skirts.
Sometimes this indicator is referred to as the "bare knees, bull market"
theory. To its merit, the hemline indicator was accurate in 1987, when
designers switched from miniskirts to floor-length skirts just before the
market crashed. A similar change also took place in 1929, but many argue as
to which came first, the crash or the hemline shifts.
 The Aspirin Indicator: Stock prices and aspirin production are inversely
related. This indicator suggests that when the market is rising, fewer people
need aspirin to heal market-induced headaches. Lower aspirin sales should
indicate a rising market.

Why Do Anomalies Persist?


These effects are called anomalies for a reason: they should not occur and they
definitely should not persist. No one knows exactly why anomalies happen. People
have offered several different opinions, but many of the anomalies have no
conclusive explanations. There seems to be a chicken-or-the-egg scenario with
them too - which came first is highly debatable.

Profiting From Anomalies


It is highly unlikely that anyone could consistently profit from exploiting
anomalies. The first problem lies in the need for history to repeat itself. Second,
even if the anomalies recurred like clockwork, once trading costs and taxes are
taken into account, profits could dwindle or disappear. Finally, any returns will
have to be risk-adjusted to determine whether trading on the anomaly allowed an
investor to beat the market.

Conclusion
Anomalies reflect inefficiency within markets. Some anomalies occur once and
disappear, while others occur repeatedly. History is no predictor of future
performance, so you should not expect every Monday to be disastrous and every
January to be great, but there also will be days that will "prove" these anomalies
true!

Technical anomalies
Technical analysis has its origins in the work and theories of Charles Henry
Dow, its fundamental principles being the following:

· Market action discounts everything - the price of a listed financial instrument,
set at the intersection of supply and demand for the security, reflects in its
value the influence of all relevant factors. The purpose of technical analysis is
not to identify the factors that influence the price, but to study the price
movement itself and analyze it over time.

· Configurations exist - technical analysis attempts to model the evolution of
market prices based on historical data, and these configurations offer some
probability that certain outcomes can be expected.

· History tends to repeat itself - the chart configurations proposed by technical
analysis tend to recur over time due to the characteristics of human psychology.

Alongside its many advantages, technical analysis also has many disadvantages
demonstrated over time (Reuters, Introducere în studiul analizei tehnice, 2001,
p. 22). Among the most important positive aspects of technical analysis are the
following:

· It can be used for a wide range of financial instruments listed on any market.
This reflects a very important feature of technical analysis, namely its
flexibility: it can easily be adapted to different traded products or different
types of markets, while the principle remains the same.

· Graphical representations of the evolution of equity prices can be produced for
different periods, from a few hours up to decades of historical data, thanks to
the computing technology now used.

· Over time, the instruments used in technical analysis have developed
considerably; these instruments can be seen as innovations produced by
research in the area.

· The data used by technical analysis are historical, but in recent years
technology has allowed technical analysis to be conducted even on real-time
data (or with only a slight delay).

On the negative side:

· Because technical analysis is performed by analysts, who are human beings,
the subjective factor is not eliminated. In fact, this is the biggest disadvantage
of technical analysis: forecasts based on the data analysis rely heavily on the
subjective factor, and the same data are interpreted differently by various
analysts.

· Technical analysis is based on extrapolating past events and price movements
into the future. This is a matter of probability theory, the future and its events
being unknown; technical analysis thus addresses an ambition long coveted by
mankind, namely knowing the future.

· Given the probabilistic nature of the evolutions it studies, technical analysis
is concerned with determining the probability of stock market quotations, not
with the certainty that they will come true.

· The information used by technical analysis can sometimes be wrong or
inaccurate, which also distorts the result (the forecast).

The main elements technical analysis is based on are: the prices of financial
securities, the repeatability of price trends in the market, and the fact that
prices tend to follow certain trends. Opponents of the accuracy of results
obtained using technical analysis invoke the random walk theory and the theory
of confirmed projection. The random walk theory was first discussed in research
by Jules Regnault, a French broker, in 1863. Later, in 1900, the theory gained
new dimensions through the interpretation of Louis Bachelier in his doctoral
thesis; the subject was then approached by Paul Cootner (1964), Burton Malkiel
(1973) and Eugene Fama (1965). According to the random walk theory, the future
prices of listed financial securities cannot be determined or predicted, because
they evolve randomly around their intrinsic value. The theory of confirmed
projection highlights the subjective interpretation of charts.

Technical analysis tries, using historical prices and related statistics, to
forecast the future prices of listed financial securities. The simplest techniques
and trading strategies are based on classical graphical analysis, which includes
the interpretation of trend lines, continuation or reversal configurations,
support lines, resistance lines, moving averages, and gaps. More complex
techniques include indicators such as the RSI (Relative Strength Index),
stochastic oscillators, moving average convergence-divergence (MACD), and
elements of Elliott wave theory, Fibonacci numbers, and Gann charts.
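As an illustration of the simpler indicators named above, here is a sketch of a moving average and a simplified RSI. Note that Wilder's original RSI uses smoothed averages of gains and losses; this non-smoothed variant over the last `period` changes is illustrative only, and the price series is made up.

```python
def moving_average(prices, window):
    """Simple moving average over a sliding window."""
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

def rsi(prices, period=14):
    """Simplified (non-smoothed) Relative Strength Index over the
    last `period` price changes."""
    changes = [b - a for a, b in zip(prices, prices[1:])]
    recent = changes[-period:]
    gains = sum(c for c in recent if c > 0)
    losses = -sum(c for c in recent if c < 0)
    if losses == 0:
        return 100.0  # all gains: maximum RSI
    rs = gains / losses
    return 100 - 100 / (1 + rs)

# Made-up price series for demonstration only.
prices = [100, 102, 101, 104, 103, 105, 107, 106, 108, 110,
          109, 111, 113, 112, 114]
print(moving_average(prices, 5)[-1])  # 111.8
print(rsi(prices))
```

An RSI reading above 70 is conventionally read as overbought and below 30 as oversold; the steadily rising series here lands near 80.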

Fundamental anomalies
Patel, Yao and Barefoot showed in 2006 that financial securities that offer high
dividend yields have given better performance (from the perspective of market
price) than financial securities offering lower dividends yields (Patel, Yao, &
Barefoot 2006).

There are investors who use an "against the market" investment strategy: they
select for their portfolios the securities most neglected by the market (the
least traded). Werner De Bondt and Richard Thaler demonstrated the technique in
a 1985 study, concluding that the highest performance is achieved by the
market's neglected securities, whose yields exceed the overall market average.

An Introduction to Behavioral Finance

The idea that psychology drives stock market movements flies in the face of
established theories that advocate the notion that financial markets are efficient.
Proponents of the efficient market hypothesis, for instance, claim that any new
information relevant to a company's value is quickly priced by the market. As a
result, future price moves are random because all available (public and some non-
public) information is already discounted in current values.

However, for anyone who has been through the Internet bubble and the subsequent
crash, the efficient market theory is pretty hard to swallow. Behaviorists explain
that, rather than being anomalies, irrational behavior is commonplace. In fact,
researchers have regularly reproduced examples of irrational behavior outside of
finance using very simple experiments.

The Importance of Losses Versus Significance of Gains


Here is one experiment: Offer someone a choice of a sure $50 or, on the flip of a
coin, the possibility of winning $100 or winning nothing. Chances are the person
will pocket the sure thing. Conversely, offer a choice of 1) a sure loss of $50 or 2)
on a flip of a coin, either a loss of $100 or nothing. The person, rather than accept a
$50 loss, will probably pick the second option and flip the coin.

The chance of the coin landing on one side or the other is equivalent in any
scenario, yet people will go for the coin toss to save themselves from a $50 loss
even though the coin flip could mean an even greater loss of $100. That's because
people tend to view the possibility of recouping a loss as more important than the
possibility of greater gain.
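The point of the experiment is that each sure thing has exactly the same expected value as its corresponding coin flip; a few lines of arithmetic make that explicit.

```python
# Expected values of the four choices in the experiment above: each
# sure thing equals its corresponding coin flip, yet people typically
# take the sure gain and gamble on the loss.
sure_gain = 50
gamble_gain = 0.5 * 100 + 0.5 * 0      # flip: win $100 or win nothing

sure_loss = -50
gamble_loss = 0.5 * (-100) + 0.5 * 0   # flip: lose $100 or lose nothing

print(sure_gain, gamble_gain)   # 50 50.0
print(sure_loss, gamble_loss)   # -50 -50.0
```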

The priority of avoiding losses also holds true for investors. Just think of Nortel
Networks shareholders who watched their stock's value plummet from over $100 a
share in early 2000 to less than $2 a few years later. No matter how low the price
drops, investors—believing that the price will eventually come back—often hold
stocks rather than suffer the pain of taking a loss.

The Herd vs. Self


The herd instinct explains why people tend to imitate others. When a market is
moving up or down, investors are subject to a fear that others know more or have
more information. As a consequence, investors feel a strong impulse to do what
others are doing.

Behavioral finance has also found that investors tend to place too much worth on
judgments derived from small samples of data or from single sources. For instance,
investors are known to attribute skill rather than luck to an analyst who picks a
winning stock.
On the other hand, beliefs are not easily shaken. One notion that gripped investors
through the late 1990s, for example, was that any sudden drop in the market is a
buying opportunity. Indeed, this buy-the-dip view still pervades. Investors are
often overconfident in their judgments and tend to pounce on a single "telling"
detail rather than the more obvious average. In doing so, they fail to see the larger
picture by focusing too much on smaller details.

How Practical Is Behavioral Finance?


We can ask ourselves if these studies will help investors beat the market. After all,
rational shortcomings should provide plenty of profitable opportunities for wise
investors. In practice, however, few if any value investors are deploying behavioral
principles to sort out which cheap stocks actually offer returns that are consistently
above the norm.

The impact of behavioral finance research still remains greater in academia than in
practical money management. While theories point to numerous rational
shortcomings, the field offers little in the way of solutions that make money from
market manias.

Robert Shiller, the author of "Irrational Exuberance" (2000), showed that in the late
1990s, the market was in the thick of a bubble. But he couldn't say when the
bubble would pop. Similarly, today's behaviorists can't tell us when the market has
hit a top, just as they could not tell when it would bottom after the 2007-2008
financial crisis. They can, however, describe what an important turning point might
look like.

The Bottom Line


The behavioralists have yet to come up with a coherent model that actually predicts
the future rather than merely explains, with the benefit of hindsight, what the
market did in the past. The big lesson is that theory doesn't tell people how to beat
the market. Instead, it tells us that psychology causes market prices and
fundamental values to diverge for a long time.

Behavioral Finance Concepts


Behavioral finance typically encompasses five main concepts:

 Mental accounting: Mental accounting refers to the propensity for people to
allocate money for specific purposes.
 Herd behavior: Herd behavior states that people tend to mimic the financial
behaviors of the majority of the herd. Herding is notorious in the stock
market as the cause behind dramatic rallies and sell-offs.
 Emotional gap: The emotional gap refers to decision making based on
extreme emotions or emotional strains such as anxiety, anger, fear, or
excitement. Oftentimes, emotions are a key reason why people do not make
rational choices.
 Anchoring: Anchoring refers to attaching a spending level to a certain
reference. Examples may include spending consistently based on a budget
level or rationalizing spending based on different satisfaction utilities.
 Self-attribution: Self-attribution refers to a tendency to make choices based
on a confidence in self-based knowledge. Self-attribution usually stems from
intrinsic confidence of a particular area. Within this category, individuals
tend to rank their knowledge higher than others.

Biases Studied in Behavioral Finance


Breaking down biases further, many individual biases and tendencies have been
identified for behavioral finance analysis, including:

Disposition Bias
Disposition bias refers to when investors sell their winners and hang onto their
losers. Investors' thinking is that they want to realize gains quickly. However,
when an investment is losing money, they'll hold onto it because they want to get
back to even or their initial price. Investors tend to admit they are correct about an
investment quickly (when there's a gain). However, investors are reluctant to admit
when they made an investment mistake (when there's a loss). The flaw in
disposition bias is that the performance of the investment is often tied to the entry
price for the investor. In other words, investors gauge the performance of their
investment based on their individual entry price disregarding fundamentals or
attributes of the investment that may have changed.

Confirmation Bias
Confirmation bias is when investors have a bias toward accepting information that
confirms their already-held belief in an investment. If information surfaces,
investors accept it readily to confirm that they're correct about their investment
decision—even if the information is flawed.

Experiential Bias
An experiential bias occurs when investors' memory of recent events makes them
biased or leads them to believe that the event is far more likely to occur again. For
example, the financial crisis in 2008 and 2009 led many investors to exit the stock
market. Many had a dismal view of the markets and likely expected more
economic hardship in the coming years. The experience of having gone through
such a negative event increased their bias or likelihood that the event could
reoccur. In reality, the economy recovered, and the market bounced back in the
years to follow.

Loss Aversion
Loss aversion occurs when investors place a greater weighting on the concern for
losses than the pleasure from market gains. In other words, they're far more likely
to try to assign a higher priority on avoiding losses than making investment gains.
As a result, some investors might want a higher payout to compensate for losses. If
the high payout isn't likely, they might try to avoid losses altogether even if the
investment's risk is acceptable from a rational standpoint.

Familiarity Bias
The familiarity bias is when investors tend to invest in what they know, such as
domestic companies or locally owned investments. As a result, investors are not
diversified across multiple sectors and types of investments, which increases risk.
Investors tend to go with investments that they have a history with or are
familiar with.

Behavioral Finance in the Stock Market


The efficient market hypothesis (EMH) says that at any given time in a
highly liquid market, stock prices are efficiently valued to reflect all the available
information. However, many studies have documented long-term historical
phenomena in securities markets that contradict the efficient market hypothesis and
cannot be captured plausibly in models based on perfect investor rationality.

The EMH is generally based on the belief that market participants view stock
prices rationally based on all current and future intrinsic and external factors.
When studying the stock market, behavioral finance takes the view that markets
are not fully efficient. This allows for observation of how psychological factors can
influence the buying and selling of stocks.

The understanding and usage of behavioral finance biases is applied to stock and
other trading market movements on a daily basis. Broadly, behavioral finance
theories have also been used to provide clearer explanations of substantial market
anomalies like bubbles and deep recessions. While not a part of EMH, investors
and portfolio managers have a vested interest in understanding behavioral finance
trends. These trends can be used to help analyze market price levels and
fluctuations for speculation as well as decision-making purposes.

Mental Accounting
Mental accounting refers to the different values a person places on the same
amount of money, based on subjective criteria, often with detrimental results.
Mental accounting is a concept in the field of behavioral economics. Developed by
economist Richard H. Thaler, it contends that individuals classify funds differently
and therefore are prone to irrational decision-making in their spending and
investment behavior.

Richard Thaler, currently a professor of economics at the University of
Chicago Booth School of Business, introduced mental accounting in his 1999
paper "Mental Accounting Matters," which appeared in the Journal of Behavioral
Decision Making. He begins with this definition: "Mental accounting is the set of
cognitive operations used by individuals and households to organize, evaluate, and
keep track of financial activities." The paper is rich with examples of how mental
accounting leads to irrational spending and investment behavior.

Underlying the theory is the concept of the fungibility of money. To say money is
fungible means that, regardless of its origins or intended use, all money is the
same. To avoid the mental accounting bias, individuals should treat money as
perfectly fungible when they allocate among different accounts, be it a budget
account (everyday living expenses), a discretionary spending account, or a wealth
account (savings and investments).

They also should value a dollar the same whether it is earned through work or
given to them. However, Thaler observed that people frequently violate the
fungibility principle, especially in a windfall situation. Take a tax refund. Getting a
check from the IRS is generally regarded as "found money," something extra that
the recipient often feels free to spend on a discretionary item. But in fact, the
money rightfully belonged to the individual in the first place, as the word "refund"
implies, and is mainly a restoration of money (in this case, an over-payment of
tax), not a gift. Therefore, it should not be treated as a gift, but rather viewed in
much the same way that the individual would view their regular income.
Example of Mental Accounting
To individuals, the mental accounting line of thinking seems to make sense, but
it is in fact highly illogical. For instance, some people keep a special
“money jar” or similar fund set aside for a vacation or a new home, while at the
same time carrying substantial credit card debt. They are likely to treat the money
in this special fund differently from the money that is being used to pay down debt,
in spite of the fact that diverting funds from the debt repayment process increases
interest payments, thereby reducing their total net worth.

Broken down further, it's illogical (and, in fact, detrimental) to maintain a
savings jar that earns little or no interest while simultaneously holding
credit-card debt that accrues interest at double-digit rates annually. In many
cases, the interest on this debt will
erode any interest you could earn in a savings account. Individuals in this scenario
would be best off using the funds they have saved in the special account to pay off
the expensive debt before it accumulates any further.
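A rough sketch of the cost involved, using illustrative balances and rates (the $5,000 figures and the 1%/20% rates are assumptions for the example, not data from the text):

```python
# One-year cost of the savings-jar-plus-card-debt pattern described
# above; all balances and rates are illustrative assumptions.
savings = 5_000
debt = 5_000
savings_rate = 0.01   # low-interest savings account
card_rate = 0.20      # double-digit credit card APR

interest_earned = savings * savings_rate   # $50
interest_paid = debt * card_rate           # $1,000
net_cost_of_keeping_jar = interest_paid - interest_earned
print(f"${net_cost_of_keeping_jar:,.0f}")  # $950
```

Paying the card off with the jar would avoid that net cost entirely, which is why treating the two accounts as separate mental buckets is so expensive.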

Put in this way, the solution to this problem seems straightforward. Nonetheless,
many people do not behave in this way. The reason has to do with the type of
personal value that individuals place on particular assets. Many people feel, for
example, that money saved for a new house or a child’s college fund is simply “too
important” to relinquish, even if doing so would be the most logical and beneficial
move. So the practice of maintaining money in a low- or no-interest account while
also carrying outstanding debt remains common.

Prospect Theory
Prospect theory assumes that losses and gains are valued differently, and thus
individuals make decisions based on perceived gains instead of perceived losses.
Also known as the "loss-aversion" theory, the general concept is that if two choices
are put before an individual, both equal, with one presented in terms of potential
gains and the other in terms of possible losses, the former option will be chosen.

How the Prospect Theory Works


Prospect theory belongs to the behavioral economics subgroup, describing how
individuals make a choice between probabilistic alternatives where risk is involved
and the probability of different outcomes is unknown. The theory was formulated
in 1979 and further developed in 1992 by Amos Tversky and Daniel Kahneman, who
deemed it more psychologically accurate about how decisions are made than
expected utility theory.
The underlying explanation for an individual’s behavior, under prospect theory, is
that because the choices are independent and singular, the probability of a gain or a
loss is reasonably assumed as being 50/50 instead of the probability that is actually
presented. Essentially, the probability of a gain is generally perceived as greater.

Tversky and Kahneman proposed that losses cause a greater emotional impact on
an individual than does an equivalent amount of gain, so given choices presented
two ways—with both offering the same result—an individual will pick the option
offering perceived gains.

For example, assume that the end result is receiving $25. One option is being given
the straight $25. The other option is gaining $50 and losing $25. The utility of the
$25 is exactly the same in both options. However, individuals are most likely to
choose to receive straight cash because a single gain is generally observed as more
favorable than initially having more cash and then suffering a loss.
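The asymmetry in the example can be captured with the prospect-theory value function. The sketch below uses the parameter estimates (α = 0.88, λ = 2.25) that Tversky and Kahneman reported in their 1992 paper; the specific function shape is their proposal, not something stated in the text above.

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: gains are discounted (alpha < 1)
    and losses loom larger than gains (lam > 1)."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# A $25 loss "hurts" far more than a $25 gain "pleases":
print(round(value(25), 1))    # 17.0
print(round(value(-25), 1))   # -38.2
```

With these parameters a loss is weighted roughly 2.25 times as heavily as an equal gain, which is why "gain $50, lose $25" feels worse than "receive $25" even though the outcomes are identical.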

Types of Prospect Theory


According to Tversky and Kahneman, the certainty effect is exhibited when people
prefer certain outcomes and underweight outcomes that are only probable. The
certainty effect leads to individuals avoiding risk when there is a prospect of a sure
gain. It also contributes to individuals seeking risk when one of their options is a
sure loss.

The isolation effect occurs when people are presented with two options with the
same outcome but different routes to it. In this case, people are likely to
cancel out similar information to lighten the cognitive load, and their conclusions
will vary depending on how the options are framed.

Prospect Theory Example


Consider an investor who is given a pitch for the same mutual fund by two separate
financial advisors. One advisor presents the fund to the investor, highlighting that
it has an average return of 12% over the past three years. The other advisor tells the
investor that the fund has had above-average returns in the past 10 years, but in
recent years it has been declining. Prospect theory assumes that though the investor
was presented with the exact same mutual fund, he is likely to buy the fund from
the first advisor, who expressed the fund's rate of return as an overall gain, rather
than from the advisor who presented the fund as having high returns and losses.
Important Contributors
Like every other branch of finance, the field of behavioral finance has certain
people that have provided major theoretical and empirical contributions. The
following section provides a brief introduction to three of the biggest names
associated with the field.
Daniel Kahneman and Amos Tversky
Cognitive psychologists Daniel Kahneman and Amos Tversky are considered the
fathers of behavioral economics/finance. Since their initial collaborations in the
late 1960s, this duo has published about 200 works, most of which relate to
psychological concepts with implications for behavioral finance. In 2002,
Kahneman received the Nobel Memorial Prize in Economic Sciences for his
contributions to the study of rationality in economics.
Kahneman and Tversky have focused much of their research on the cognitive
biases and heuristics (i.e., approaches to problem solving) that cause people to
engage in unanticipated irrational behavior. Their most popular and notable works
