
Personal Indexes

One day, artificial intelligence will build them for each investor.
By Andrew W. Lo
Illustration by Garrian Manning

One of the main purposes of an index of any kind is to facilitate the extraction and summary of information through an algorithmic process, e.g., averaging. Now typically done by computers, averaging is a simple example of artificial intelligence! Indeed, an important factor in the early popularity of the Dow Jones Industrial Average, first introduced in May 1896, was the "Dow Theory," a collection of heuristics proposed by Charles H. Dow and later expanded by William P. Hamilton in which specific economic meaning is attributed to certain time series patterns in the index.1 The dream of developing automated processes for making better investment decisions is obviously not unique to our times.

But what is unique about our times is the confluence of breakthroughs in financial technology, computer technology, and institutional infrastructure that, for the first time in the history of modern civilization, makes automated personalized investment management a practical possibility. The combination of artificial intelligence and financial technology may one day render general market indexes obsolete: Each investor may have a "personal" index constructed specifically to meet his or her lifetime objectives and risk preferences, and a software agent to actively manage the portfolio accordingly. If this seems more like science fiction than reality, that is precisely the motivation for a review of some basic recent developments in artificial intelligence and their applications to financial technology.

One of the earliest and most enduring models of the behavior of security prices—and stock indexes in particular—is the Random Walk Hypothesis, an idea conceived in the sixteenth century as a model of games of chance.2 As with so many of the ideas of modern economics, the first serious application of the Random Walk Hypothesis to financial markets can be traced back to Paul Samuelson (1965), whose contribution is neatly summarized by the title of his article, "Proof that Properly Anticipated Prices Fluctuate Randomly". Samuelson argued that in financial markets randomness is achieved through the active participation of many investors seeking greater wealth. An army of greedy investors trade aggressively on even the smallest informational advantages at their disposal, and in doing so, they incorporate their information into market prices and quickly eliminate the profit opportunities that gave rise to their trading.

While this argument for randomness is surprisingly compelling, a number of theoretical and empirical studies over the past 20 years have cast serious doubt on both its premises and its implications. For example, from a theoretical perspective, LeRoy (1973), Lucas (1978), and many others have shown in many ways and in many contexts that the Random Walk Hypothesis is neither a necessary nor a sufficient condition for rationally determined security prices. And empirically, numerous researchers have documented departures from the Random Walk Hypothesis in financial data.3 Financial markets are predictable to some degree.

The rejection of the Random Walk Hypothesis opens the door to the possibility of superior long-term investment returns through disciplined active investment management. In much the same way that innovations in biotechnology can garner superior returns for venture capitalists, innovations in financial technology can, in principle, garner superior returns for investors. This is compelling motivation for the application of artificial intelligence in financial contexts.

Artificial Neural Networks

Recent advances in the theory and implementation of (artificial) neural networks have captured the imagination and fancy of the financial community.4 Although they are only one of the many types of statistical tools for modeling nonlinear relationships, neural networks seem to be surrounded by a great deal of mystique and, sometimes, misunderstanding. Because they have their roots in neurophysiology and the cognitive sciences, neural networks are often assumed to have brain-like qualities: learning capacity, problem-solving abilities, and ultimately cognition and self-awareness. Alternatively, neural networks are often viewed as "black boxes" that can yield accurate predictions with little modeling effort.

In fact, neural networks are neither. They are an interesting and potentially powerful modeling technique in some, but surely not all, applications. To develop some basic intuition for neural networks, consider a typical
nerve cell or "neuron." A neuron has dendrites (receptors) at different sites that react to stimulus. This stimulus is transmitted along the axon by an electrical pulse. If the electrical pulse exceeds some threshold level when it hits the nucleus, this triggers the nucleus to react, e.g., to make a particular muscle contract. This basic biological unit is what mathematicians attempt to capture in a neural network model.5

Even though neurobiologists have come to realize that actual nerve cells exhibit considerably more complex and subtle behavior, AI researchers have found great use for the simple on/off or "binary threshold" model in approximating nonlinear relationships efficiently. In particular, neural network models have a very useful feature known as the "universal approximation property": with enough neurons linked together in an appropriate way, a neural network can approximate any nonlinear relationship, no matter how strange.

Viewed as a statistical estimation technique, neural networks are a flexible model of nonlinearities. In this sense, they are just one of many techniques for modeling complex relationships. Examples of other nonlinear estimation techniques include splines, wavelets, kernel regression, projection pursuit, radial basis functions, nearest-neighbor estimators and, perhaps the most powerful of all, human intuition.

Even simple neural networks can capture a variety of nonlinearities. Consider, for example, the sine function plus a random error term:

Yt = Sin(Xt) + 0.5εt     (1)

where εt is a standard normal random variable. Can a neural network extract the sine function from observations (Xt, Yt)?

To answer this question, 500 (Xt, Yt) pairs were randomly generated subject to the nonlinear relationship (1), and a neural network model was estimated using this artificial data (or, in the jargon of this literature, a neural network was trained on this data set).6 The following equation is the result of training a neural network on the data using nonlinear least squares:

Yt = 5.282 – 14.576 Θ(–1.472 + 1.869Xt) – 5.411 Θ(–2.628 + 0.642Xt) – 3.071 Θ(13.288 – 2.347Xt) + 6.320 Θ(–2.009 + 4.009Xt) + 7.892 Θ(–3.316 + 2.474Xt)     (2)

where Θ(x) is the logistic function 1/(1 + exp[–x]). This network has five identical activation functions Θ(x), corresponding to the five nodes in the hidden layer, and a constant term; its only two inputs are Xt and 1.

[Figure 1. MLP Estimator of Y = Sin(X) + .5Z: the data points, the fitted MLP curve, and the true Sin(X) curve plotted against X over [0, 6.28], with the conditional expectation E[Y|X] on the vertical axis.]
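The fitted network in equation (2) is easy to verify numerically. The short Python sketch below (an illustration added here, not part of the original study) evaluates equation (2) with the standard logistic activation Θ(x) = 1/(1 + exp(–x)) and compares it with Sin(x) at a few points on the interval shown in Figure 1:

```python
import math

def theta(x):
    # logistic activation: theta(x) = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + math.exp(-x))

def fitted_network(x):
    # the trained single-hidden-layer network of equation (2):
    # five logistic hidden units plus a constant term
    return (5.282
            - 14.576 * theta(-1.472 + 1.869 * x)
            - 5.411 * theta(-2.628 + 0.642 * x)
            - 3.071 * theta(13.288 - 2.347 * x)
            + 6.320 * theta(-2.009 + 4.009 * x)
            + 7.892 * theta(-3.316 + 2.474 * x))

# compare the fitted network with Sin(x) at a few points
for x in (0.0, math.pi / 2, math.pi, 3 * math.pi / 2, 2 * math.pi):
    print(round(fitted_network(x), 3), round(math.sin(x), 3))
```

The fitted values track Sin(x) to within a few tenths—roughly the accuracy one would expect given the 0.5εt noise in the training data.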
Now (2) looks nothing like the sine function, so in what sense has the neural network "approximated" the nonlinear relation (1)? In Figure 1, the data points (Xt, Yt) are plotted as triangles, the dashed line is the theoretical relation to be estimated (the sine function), and the solid line is the relation as estimated by the neural network (2). The solid line is impressively close to the dashed line, despite the noise that the data clearly contain. Therefore, although the functional form of (2) does not resemble any trigonometric function, its numerical values do. Add more data points, and the approximation would likely get even closer.

Hutchinson, Lo, and Poggio (1994) proposed neural network models for estimating derivative pricing formulas. In particular, they take as inputs the primary economic variables that influence the derivative's price—current underlying asset price, strike price, time-to-maturity, etc.—and define the derivative price to be the output into which the neural network maps the inputs. When properly trained, the network "becomes" the derivative pricing formula, which may be used in the same way that formulas obtained from parametric pricing methods such as the well-known Black-Scholes formula are used: for pricing, delta-hedging, simulation exercises, etc.

These neural network models have several important advantages over the more traditional parametric models. First, since they do not rely on restrictive parametric assumptions such as lognormality or sample-path continuity, they are robust to the specification errors that plague parametric models. (In other words, you don't have to worry about having the right assumptions if you don't have to make any assumptions.) Second, they are adaptive, and respond to structural changes in the data-generating processes in ways that parametric models cannot. Third, they are flexible enough to encompass a wide range of derivative securities and fundamental asset price dynamics, yet relatively simple to implement. And finally, they are easily parallelizable and may be computationally more efficient.

Of course, all these advantages do not come without some cost—the nonparametric pricing method is highly data-intensive, requiring large quantities of historical prices to obtain a sufficiently well-trained network. Therefore, such an approach would be inappropriate for thinly traded derivatives, or newly created derivatives that have no similar counterparts among existing securities.7 Also, if the fundamental asset's price dynamics are well understood and an analytical expression for the derivative's price is available under these dynamics, then the parametric formula will almost always outperform the network formula in pricing and hedging accuracy. Nevertheless, these conditions occur rarely enough that there may still be great practical value in constructing derivative pricing formulas via learning networks. To illustrate the practical relevance of their approach, Hutchinson, Lo, and Poggio (1994) apply it to the pricing and delta-hedging of S&P 500 futures options from 1987 to 1992. They show that neural network models perform well, yielding delta-hedging errors and option prices that are comparable to and, in some cases, better than those of traditional methods like Black-Scholes.

Data Mining and Data Snooping

A substantial portion of the recent literature in artificial intelligence is devoted to a discipline now known as "data mining" or "knowledge discovery in databases" (KDD). The combination of very large scale databases in many research and business contexts and the tremendous growth in computing power over the past several decades has naturally led to the development of computationally intensive methods for systematically sifting through large quantities of data.

In fact, Internet-based search engines are perhaps the most common examples of data-mining applications, and there are many other prominent examples in marketing, financial services, telecommunications, and molecular biology. Perhaps the most challenging issue facing data miners today is a statistical one: how to determine whether the results of a data-mining search are genuine or spurious?
For example, suppose we search a database of mutual funds to find the one with the most successful track record over the past five years, and the process yields XYZ Growth Fund. Does this imply that XYZ is a good fund, or is it possible that XYZ's performance is a fluke?

In searching for the presence of any effect, whether it is superior investment performance or a causal relationship between two characteristics, the dilemma will always be present: If the effect exists, data-mining algorithms will generally detect it; if the effect does not exist, data-mining algorithms will usually still find an "effect" anyway. The latter case is known as a "data-snooping bias." The problem is being aware of which kind of result you have.

To see how serious a problem this dilemma can be, suppose we have a collection of n mutual funds with (random) annual returns R1, R2, …, Rn respectively that are mutually independent and have the same probability distribution function FR(r).8 That is, they have nothing to do with each other.

Now, for concreteness, suppose that these returns are normally distributed with an expected value of 10% per year and a standard deviation of 20% per year, roughly comparable to the historical behavior of the S&P 500. Under these assumptions, what is the probability that the return on fund i exceeds 50%? Because the distribution is normal, we know the probability in any given year is about 2.3%:

Prob(R > 0.50) = 1 – Prob(R ≤ 0.50) = 0.0228.     (3)

But suppose we focus not on any arbitrary fund i, but rather on the fund that has the largest return among all n funds. Although we do not know in advance which fund this will be, we can characterize this best-performing fund in the abstract, in much the same way that college admissions offices can construct the profiles of the applicants with the highest standardized test scores. However, this analogy is not completely accurate because, on average, test scores do seem to bear some relation to subsequent academic performance. In our n-fund example, even though there will always be a "best-performing" fund or "winner," the subsequent performance of this winner will be statistically identical to all the other funds by assumption, i.e., the same expected return, the same volatility, and the same probability law.

This distinction is the essence of the data-snooping problem. There will always be a winner. The question is: Does winning tell us anything about the true nature of the winner? In the case of standardized test scores, the generally accepted answer is yes. In the case of the n independently and identically distributed mutual funds, the answer is no. The larger the sample, the larger (and therefore perhaps more tempting and harder to ignore) the largest score is likely to be.

To quantify this effect, we can derive the probability law of the return R* of the "best-performing" fund:

R* = Max[R1, R2, …, Rn]     (4)

which is given by the following distribution function:

FR*(r) = [FR(r)]^n.     (5)

After data snooping, the probability of observing performance greater than 50% is given by:

Prob(R* > 0.50) = 1 – Prob(R* ≤ 0.50) = 1 – [FR(0.50)]^n = 1 – (0.9772)^n.     (6)

When n = 1, the probability that R* exceeds 50% is the same as the probability that R exceeds 50%: 2.3%. But among a sample of n = 100 securities, the probability that R* exceeds 50% is 1 – (0.9772)^100, or 90.0%! Not surprisingly, the probability that the largest of 100 independent returns exceeds 50% is considerably greater than the probability that any individual fund's return exceeds 50%. However, this has no bearing on future returns, since we have assumed that all n funds have the same mean and variance, and they are statistically independent of each other.

How are the properties of R* related to data-snooping biases in financial analysis? Investors often focus on past performance as a guide to future performance, associating past successes with significant investment skills.
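These magnitudes can be reproduced directly from the normal distribution function. Here is a minimal Python sketch of equations (3) and (6), added for illustration and using only the standard library:

```python
import math

def normal_cdf(r, mean=0.10, sd=0.20):
    # F_R(r) for a normal return with 10% mean and 20% standard deviation
    return 0.5 * (1.0 + math.erf((r - mean) / (sd * math.sqrt(2.0))))

# Equation (3): probability that a single fund's annual return exceeds 50%
p_single = 1.0 - normal_cdf(0.50)     # about 0.0228, i.e., 2.3%

def p_best(n):
    # Equation (6): probability that the best of n independent,
    # identically distributed funds returns more than 50%
    return 1.0 - normal_cdf(0.50) ** n
```

With n = 1, `p_best` reduces to the single-fund probability; with n = 100, it climbs to roughly 90%, exactly as in the text.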
But if superior performance is the unavoidable result of the selection procedure—picking the strategy or manager with the most successful track record, for example—then past performance is not necessarily an accurate indicator of future performance. In other words, the selection procedure may bias our perception of performance, causing us to attribute superior performance to an investment strategy or manager that was merely "lucky."

There are statistical procedures that can partially offset the most obvious types of data-snooping biases,9 but the final arbiter must inevitably be the end-user of the data-mining algorithm. By trading off the cost of one type of error (detecting effects that do not exist) against the other (not detecting effects that do exist), a sensible balance between data mining and data snooping can be struck.10 The necessity of at least some human intervention at the decision-making point may yet prove to be an insurmountable limitation of artificial intelligence.

Pattern Recognition

One of the biggest rifts dividing academic finance and industry practice is the separation between technical analysts and their academic critics. In contrast to fundamental analysis, which was quickly adopted by the scholars of modern quantitative finance, technical analysis has been an orphan from the very start. In some circles, technical analysis is known as "voodoo finance." In his influential book A Random Walk Down Wall Street, Burton Malkiel (1996) concludes that "[u]nder scientific scrutiny, chart-reading must share a pedestal with alchemy."

One explanation for this state of controversy and confusion is the unique and sometimes impenetrable jargon used by technical analysts. Campbell, Lo, and MacKinlay (1997, pp. 43–44) provide a striking example of the linguistic barriers between technical analysts and academic finance by contrasting these two statements:

The presence of clearly identified support and resistance levels, coupled with a one-third retracement parameter when prices lie between them, suggests the presence of strong buying and selling opportunities in the near-term.

The magnitudes and decay pattern of the first 12 autocorrelations and the statistical significance of the Box-Pierce Q-statistic suggest the presence of a high-frequency predictable component in stock returns.

Despite the fact that both statements have the same basic meaning—that past prices contain information for predicting future returns—most academics find the first statement puzzling and the second plausible.

These linguistic barriers underscore an important difference: Technical analysis is primarily visual, while quantitative finance is primarily algebraic and numerical. Technical analysis employs the tools of geometry and pattern recognition, while quantitative finance employs the tools of mathematical analysis and probability and statistics. In the wake of recent breakthroughs in financial engineering, computer technology, and numerical algorithms, it is no wonder that quantitative finance has overtaken technical analysis in popularity: the principles of portfolio optimization are far easier to program into a computer than the basic tenets of technical analysis.

Nevertheless, technical analysis has survived, perhaps because its visual mode of analysis is more conducive to human cognition, and because pattern recognition is one of the few repetitive activities for which computers do not (yet) have an absolute advantage. However, this is changing. Artificial intelligence has made admirable progress in the automation of pattern detection, hence the possibility of automating technical analysis is becoming a reality. In particular, Lo, Mamaysky, and Wang (2000) have proposed an algorithm for detecting technical indicators such as "head-and-shoulders" and "double bottoms," and have applied it to the daily prices of several hundred U.S. stocks over a 30-year period to evaluate the information content of such patterns.
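Incidentally, the Box-Pierce Q-statistic quoted above is itself a simple calculation: it aggregates the first m sample autocorrelations of a return series into a single statistic, Q = n(ρ̂1² + ⋯ + ρ̂m²). A minimal sketch, added here for illustration:

```python
def autocorr(x, k):
    # k-th order sample autocorrelation coefficient of the series x
    n = len(x)
    mu = sum(x) / n
    c0 = sum((v - mu) ** 2 for v in x) / n
    ck = sum((x[t] - mu) * (x[t + k] - mu) for t in range(n - k)) / n
    return ck / c0

def box_pierce_q(x, m):
    # Box-Pierce Q: n times the sum of the first m squared autocorrelations
    n = len(x)
    return n * sum(autocorr(x, k) ** 2 for k in range(1, m + 1))
```

Under the null hypothesis of no serial correlation, Q is asymptotically chi-square distributed with m degrees of freedom, which is what gives its "statistical significance" meaning.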
This process is motivated by the general goal of technical analysis, which is to identify regularities in the time series of prices by extracting nonlinear patterns from noisy data. Implicit in this goal is the recognition that some price movements are significant—they contribute to the formation of a specific pattern—and others are merely random fluctuations to be ignored. In many cases, the human eye can perform this "signal extraction" quickly and accurately, and until recently computer algorithms could not. However, "smoothing" estimators such as kernel regression are ideally suited to this task because they extract nonlinear relations by "averaging out" the noise. Lo, Mamaysky, and Wang (2000) use these estimators to mimic, and in some cases sharpen, the skills of a trained technical analyst in identifying certain patterns in historical price series.

Armed with a mathematical representation of the time series of historical prices from which geometric properties can be characterized in an objective manner, they construct an algorithm for automating the detection of technical patterns consisting of three steps:

1. Define each technical pattern in terms of its geometric properties, e.g., local extrema (maxima and minima), so that algorithms for identifying its occurrence can be developed.
2. Construct a kernel-regression estimator of a given time series of prices so that its extrema can be determined numerically.
3. Analyze the fitted curve of this estimator for occurrences of each technical pattern.

The first step is the most challenging, since this is the heart of the pattern-recognition algorithm and where much of the creativity of human technical analysts comes into play. For example, note that only five consecutive extrema are required to identify a head-and-shoulders pattern (although its completion requires two more, where it initially and finally crosses the "neckline"). This follows from the formalization of the geometry of a head-and-shoulders pattern: three peaks, with the middle peak higher than the other two. Because consecutive extrema must alternate between maxima and minima for smooth functions,11 the three-peaks pattern corresponds to a sequence of five local extrema: maximum, minimum, highest maximum, minimum, and maximum.

Lo, Mamaysky, and Wang (2000) define nine other technical patterns in similar fashion, and once they have been given mathematical precision in this way, the detection of these patterns can be readily automated.

[Figure 2. Head and Shoulders: a daily price series exhibiting the pattern, plotted over a 40-day window.]
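The three steps above can be sketched in miniature. The code below is a simplified illustration, not the Lo-Mamaysky-Wang implementation (their estimator uses a carefully chosen bandwidth and more precise pattern definitions): it smooths a price series with a Nadaraya-Watson kernel regression (step 2), locates the local extrema of the fitted curve, and scans runs of five consecutive extrema for the head-and-shoulders geometry just described (step 3). The bandwidth h is an illustrative assumption.

```python
import math

def kernel_smooth(prices, h):
    # Step 2: Nadaraya-Watson kernel regression with a Gaussian kernel
    n = len(prices)
    fitted = []
    for t in range(n):
        weights = [math.exp(-0.5 * ((t - s) / h) ** 2) for s in range(n)]
        fitted.append(sum(w * p for w, p in zip(weights, prices)) / sum(weights))
    return fitted

def local_extrema(curve):
    # indices where the fitted curve changes direction
    # (extrema alternate between maxima and minima on a smooth curve)
    return [t for t in range(1, len(curve) - 1)
            if (curve[t] - curve[t - 1]) * (curve[t + 1] - curve[t]) < 0]

def has_head_and_shoulders(prices, h=1.5):
    # Step 3: scan runs of five consecutive extrema for the sequence
    # maximum, minimum, highest maximum, minimum, maximum
    curve = kernel_smooth(prices, h)
    ext = local_extrema(curve)
    for i in range(len(ext) - 4):
        e = [curve[j] for j in ext[i:i + 5]]
        alternates = e[0] > e[1] < e[2] > e[3] < e[4]
        if alternates and e[2] > e[0] and e[2] > e[4]:
            return True
    return False
```

Fed a synthetic series with two shoulders flanking a higher head, the detector fires; fed a monotonic trend, it stays silent—the same signal-versus-noise separation the smoothing step is meant to deliver.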
An illustration of their algorithm at work is given in Figure 2. When they apply their algorithm to the daily prices of over 300 U.S. stocks from 1962 to 1996, they find that certain technical indicators do provide incremental information, and that technical indicators tend to be more informative for NASDAQ stocks than for NYSE or AMEX stocks.

While pattern-recognition techniques have been successful in automating a number of tasks that were previously considered to be uniquely human endeavors—fingerprint identification, handwriting analysis, and face recognition, for example—it is possible that no machine algorithm is a perfect substitute for the skills of an experienced technical analyst. However, if an algorithm can provide a reasonable approximation to some of the cognitive abilities of a human analyst, such an algorithm can be used to leverage the skills of any technician. Moreover, if technical analysis is an art form that can be taught, then surely its basic precepts can be quantified and automated to some degree. And as increasingly sophisticated pattern-recognition techniques are developed, a larger fraction of the art will become a science.

Conclusions

In this article, I have only scratched the surface of the many applications of artificial intelligence that will be transforming financial technology over the next few years. Other emerging technologies include artificial markets and agent-based models of financial transactions, electronic market-making, modeling emotional responses as computational algorithms ("affective computing"), the psychophysiology of risk preferences, and financial visualization. Artificial intelligence will undoubtedly play a more central role in active investment management, but this does not imply that indexation will become less relevant for investors. Artificial intelligence and active management are not at odds with indexation; instead, they imply the evolution of a more sophisticated set of indexes and portfolio-management policies for the typical investor, something investors can look forward to, perhaps within the next decade. Imagine a software program that constructs a custom-designed index for each investor according to his or her risk preferences, financial objectives, insurance needs, retirement plans, tax bracket, etc. This "SmartIndex" will serve to guide each investor along a path of long-term financial security—a path that is unique to each investor—so that if an investor's portfolio return differs by more than a certain margin from the return of his or her SmartIndex, this will serve as an "early-warning signal" to change the investment policy and get back on track. Whether or not an investor's portfolio outperforms the S&P 500 in any given year will not be as relevant as whether it outperforms the investor's SmartIndex; hence such an innovation will change the very nature of indexation and the role of index funds and benchmark returns. This concept may well be science fiction today, but the technology for SmartIndexes already exists. As with the transformation of all great ideas from theory into practice, it is only a matter of time.
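As a toy illustration of the early-warning rule described above (the function name and the fixed margin are hypothetical; an actual SmartIndex would be far richer):

```python
def early_warning(portfolio_return, smartindex_return, margin):
    # flag a deviation of the investor's portfolio from his or her
    # personal benchmark by more than the allowed margin
    return abs(portfolio_return - smartindex_return) > margin
```

A portfolio that lags (or leads) its SmartIndex by more than the margin triggers the signal to revisit the investment policy; small deviations do not.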
References

Ait-Sahalia, Y., and A. Lo, 1998, "Nonparametric Estimation of State-Price Densities Implicit in Financial Asset Prices," Journal of Finance 52, 499–548.
Campbell, J., Lo, A., and C. MacKinlay, 1997, The Econometrics of Financial Markets. Princeton, NJ: Princeton University Press.
Hald, A., 1990, A History of Probability and Statistics and Their Applications Before 1750. New York: John Wiley and Sons.
Hamilton, W., 1922, The Stock Market Barometer. New York: Harper and Row (republished in 1998 by John Wiley & Sons).
Hertz, J., A. Krogh, and R. Palmer, 1991, Introduction to the Theory of Neural Computation. Reading, MA: Addison-Wesley Publishing Company.
Hutchinson, J., Lo, A., and T. Poggio, 1994, "A Nonparametric Approach to Pricing and Hedging Derivative Securities via Learning Networks," Journal of Finance 49, 851–889.
Iyengar, S., and J. Greenhouse, 1988, "Selection Models and the File Drawer Problem," Statistical Science 3, 109–135.
Jin, L., and A. Lo, 2001, "How Anomalous Are Financial Anomalies?," unpublished working paper, MIT Sloan School of Management.
Leamer, E., 1978, Specification Searches. New York: John Wiley & Sons.
LeRoy, S., 1973, "Risk Aversion and the Martingale Property of Stock Returns," International Economic Review 14, 436–446.
Lo, A., 1994a, "Neural Networks and Other Nonparametric Techniques in Economics and Finance," in H. Russell Fogler, ed.: Blending Quantitative and Traditional Equity Analysis. Charlottesville, VA: Association for Investment Management and Research.
Lo, A., 1994b, "Data-Snooping Biases in Financial Analysis," in H. Russell Fogler, ed.: Blending Quantitative and Traditional Equity Analysis. Charlottesville, VA: Association for Investment Management and Research.
Lo, A., and C. MacKinlay, 1990, "Data Snooping Biases in Tests of Financial Asset Pricing Models," Review of Financial Studies 3, 431–468.
Lo, A., and C. MacKinlay, 1999, A Non-Random Walk Down Wall Street. Princeton, NJ: Princeton University Press.
Lo, A., Mamaysky, H., and J. Wang, 2000, "Foundations of Technical Analysis: Computational Algorithms, Statistical Inference, and Empirical Implementation," Journal of Finance 55, 1705–1765.
Lucas, R., 1978, "Asset Prices in an Exchange Economy," Econometrica 46, 1429–1446.
Malkiel, B., 1996, A Random Walk Down Wall Street. New York: W.W. Norton.
Samuelson, P., 1965, "Proof that Properly Anticipated Prices Fluctuate Randomly," Industrial Management Review 6, 41–49.
White, H., 1992, Artificial Neural Networks: Approximation and Learning Theory. Cambridge, MA: Blackwell Publishers.
Footnotes

1 See Hamilton (1922) for a thorough exposition of the Dow Theory.
2 See, for example, Hald (1990, Chapter 4).
3 See, for example, Lo and MacKinlay (1988, 1999).
4 The adjective "artificial" is often used to distinguish mathematical models of neural networks from their biological counterparts. For brevity, I shall omit this qualifier for the remainder of this article, although it will be implicit in all of my references to neural networks.
5 Since this is meant to be an overview, I will not hesitate to sacrifice rigor for clarity. A more extensive overview is given in Lo (1994a). More mathematically inclined readers are encouraged to consult Hertz, Krogh, and Palmer (1991) for the general theory of neural computation, White (1992) for the statistics of neural networks, and Ait-Sahalia and Lo (1998) and Hutchinson, Lo, and Poggio (1994) for financial applications.
6 Specifically, the particular neural network is a "single hidden-layer feedforward perceptron" with five nodes. As yet, there are no formal rules for selecting the optimal configuration or "topology" of a neural network, and this is one of the primary drawbacks of neural network models. Currently, experience and heuristics are the only guides we have for specifying the network topology.
7 However, since newly created derivative securities can often be replicated by a combination of existing derivatives, this is not as much of a limitation as it may seem at first.
8 Recall that the distribution function FR(r) of a random variable R is defined as FR(r) = Prob(R ≤ r).
9 See, for example, Leamer (1978), Iyengar and Greenhouse (1988), Lo and MacKinlay (1990), and Lo (1994b).
10 See Jin and Lo (2001) for an example of this kind of analysis in a financial context.
11 After all, for two consecutive maxima to be local maxima, there must be a local minimum in between, and vice versa for two consecutive minima.
