Market Technician No 60
IN THIS ISSUE

Bytes and pieces
Book review (M. Feeny)
Systematic trading: it is more about risk than you may imagine
System building and back testing: opportunities, methods and caveats
Technical trading systems
Rediscovering Gann's Law of Vibration
Honing your trading skills

Contributors: F. Cavasino, S. Downey, D. Linton, J. Smithson, A. Rudolph

Networking
WHO TO CONTACT ON YOUR COMMITTEE

CHAIRMAN Adam Sorab: adam.sorab@cqsm.com
TREASURER Simon Warren: warrens@bupa.com
PROGRAMME ORGANISATION Mark Tennyson-d'Eyncourt: mdeyncourt@csv.org.uk; Axel Rudolph: axel.rudolph@dowjones.com
LIBRARY AND LIAISON Michael Feeny: michaelfeeny@yahoo.co.uk
The Barbican library contains our collection. Michael buys new books for it where appropriate. Any suggestions for new books should be made to him.
EDUCATION John Cameron: jrlcameronta@tiscali.co.uk
IFTA Robin Griffiths: robin.griffiths@rathbones.com
MARKETING Clive Lambert: clive@futurestechs.co.uk; David Sneddon: david.sneddon@csfb.com; Simon Warren: warrens@bupa.com; Karen Jones: karen.jones@commerzbank.com
MEMBERSHIP Simon Warren: warrens@bupa.com
REGIONAL CHAPTERS Alasdair McKinnon: AMcKinnon@sit.co.uk
SECRETARY Mark Tennyson-d'Eyncourt: mdeyncourt@csv.org.uk
STA JOURNAL Editor, Deborah Owen: editorial@irc100.com
WEBSITE David Watts: DWattsUK@aol.com; Simon Warren: warrens@bupa.com; Deborah Owen: editorial@irc100.com

Please keep the articles coming in: the success of the Journal depends on its authors, and we would like to thank all those who have supported us with their high standard of work. The aim is to make the Journal a valuable showcase for members' research as well as to inform and entertain readers.

The Society is not responsible for any material published in The Market Technician, and publication of any material or expression of opinions does not necessarily imply that the Society agrees with them. The Society is not authorised to conduct investment business and does not provide investment advice or recommendations. Articles are published without responsibility on the part of the Society, the editor or the authors for loss occasioned by any person acting or refraining from action as a result of any view expressed therein.
MARKET TECHNICIAN
Book review
By Michael Feeny, MSTA
The area on the chart between j and m is the cloud. Traditionally this area is shaded: one colour is used if j > m and a different colour if j < m. This colourisation dominates the chart, and is what makes an Ichimoku Kinko chart so immediately distinctive. As Elliott notes, the thickness of the cloud is important: the thicker the cloud, the less likely it is that prices will manage a sustained break through it; the thinner the cloud, the better the chance of a break. So cloud is cloud regardless of whether Span A or Span B is on top; the thickness is what matters.
[Chart: Nikkei 225 (.N225) daily candles with Ichimoku (9, 26, 52, 26) overlay, 15 Aug 07 to 15 Jan 08, prices in JPY]
Ichimoku Charts: An Introduction to Ichimoku Kinko Clouds, by Nicole Elliott. Harriman House Ltd, 2007.
If you have studied hard to learn to appreciate Japanese candle charts and now think you know all about Japanese techniques of technical analysis, I have to tell you that there is more to come: there is another name with which to become familiar, Ichimoku Kinko. Essentially, Ichimoku Kinko is a method of overlaying moving averages on a candle chart, but with a time shift. Fortunately, Nicole Elliott has opened up this method of forecasting price over time to Western analysts by writing a clear guide to the methodology. Goichi Hosoda began overlaying moving averages onto candle charts in the 1940s and published his method in 1968 (writing under the pseudonym Ichimoku Sanjin) but, being computationally intensive, the method did not take off until the mid-1990s, with the advent of affordable computers. The revival was triggered by Sasaki (1996), a classic work now in its 18th edition.
The terms used in Ichimoku Kinko may seem forbiddingly unfamiliar, but that is only because they are in Japanese; the concepts themselves are straightforward enough. The system uses a daily candle chart, on which are overlaid two moving averages and three other derived lines. The area on the chart between two of these lines is called a cloud. The cloud is plotted on the chart not concurrently, but with a lead of 26 days, so the latest plot of the cloud lies 26 working days ahead of the daily candle from which it is constructed. A worksheet would contain the following data points:

a. day's date
b. day's open
c. day's high
d. day's low
e. day's close
f. day's close plotted 26 days behind day's date
g. mid-point of the day's range = (c + d)/2
h. 9-day moving average of g, plotted at day's date
i. 26-day moving average of g, plotted at day's date
j. (h + i)/2 plotted 26 days ahead of day's date
k. highest g of the past 52 working days
l. lowest g of the past 52 working days
m. (k + l)/2 plotted 26 days ahead of day's date

From this worksheet, b to e are plotted as the candles, and the other plotted lines have the following names:

f. Chikou Span
h. Tenkan-sen
i. Kijun-sen
j. Senkou Span A
m. Senkou Span B

So the principle is that Ichimoku Kinko is a method of overlaying moving averages on a daily candle chart. Two moving averages are plotted concurrently, but some other derived lines are plotted with a time shift, in the belief that these may then act as support or resistance. The cloud referred to is the area on the chart lying between two of these lines (Senkou Span A and Senkou Span B). If today's candle is above the cloud, the trend is for higher prices: the top of the cloud is the first level of support, and the bottom of the cloud is the second level of support. Conversely, if today's candle is below the cloud, the trend is for lower prices: the bottom of the cloud is the first level of resistance, and the top of the cloud is the second level of resistance. A trend reversal is suggested if the price moves through the cloud and closes beyond it. If the price moves into the cloud but fails to go through it, the daily candles should be watched to see if they give a reversal signal. In principle, though, Ichimoku Kinko is a system for markets that are trending; it is not for sideways markets.

The thicker the cloud, the less likely it is that prices will manage a sustained break through it, regardless of whether Span A or Span B is on top. Software packages strongly distinguish the two cloud types with bright colours, but this is over-emphasis: the colour of the chart leaps out, becoming the most instantly recognisable feature of Ichimoku charts, whereas in reality it is the thickness of a cloud that matters rather than its colour per se. Elliott does not find the crossover point of the cloud lines (and the consequent change of colour of the cloud) to be significant, other than that at that point the cloud is at its thinnest. If a cloud is becoming thicker, that indicates a lessening in the chance of a trend reversal in the period ahead. Thin sections in the cloud suggest times when the market is likely to change trend, and at what price. The clouds indicate price levels that need to be broken for a significant turn, with the move accelerating as prices break through the crossover point. Elliott notes the caveats: support and resistance levels have to be given a little leeway, very often the market seems to fail somewhere in the middle of the cloud, and target dates should be taken to mean three or four days around the central day. A wide distance between the cloud and the current price is not significant, according to Elliott. In a fast-accelerating uptrend, she advises watching the candles for signs of a crash, rather than watching the cloud.
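The worksheet can be sketched in code. This is a minimal illustration following the review's definitions, which use moving averages of the daily mid-point g; note that many charting packages instead use (period high + period low)/2 for the Tenkan-sen and Kijun-sen, so treat this as the book's variant rather than a canonical implementation.

```python
# Sketch of the worksheet: items f-m computed from daily highs, lows, closes.
def ichimoku(highs, lows, closes, tenkan_n=9, kijun_n=26, span_b_n=52, shift=26):
    """Return (tenkan, kijun, span_a, span_b, chikou) as lists, None where undefined."""
    n = len(closes)
    g = [(h + l) / 2 for h, l in zip(highs, lows)]  # g: mid-point of the day's range

    def sma(series, length, i):
        # simple moving average ending at index i, or None if not enough data
        if i + 1 < length:
            return None
        return sum(series[i + 1 - length:i + 1]) / length

    tenkan = [sma(g, tenkan_n, i) for i in range(n)]  # h: 9-day MA of g
    kijun = [sma(g, kijun_n, i) for i in range(n)]    # i: 26-day MA of g

    span_a = [None] * n  # j: (h + i)/2 plotted 26 days ahead
    span_b = [None] * n  # m: (k + l)/2 plotted 26 days ahead
    for i in range(n):
        if tenkan[i] is not None and kijun[i] is not None and i + shift < n:
            span_a[i + shift] = (tenkan[i] + kijun[i]) / 2
        if i + 1 >= span_b_n and i + shift < n:
            window = g[i + 1 - span_b_n:i + 1]  # k and l: 52-day extremes of g
            span_b[i + shift] = (max(window) + min(window)) / 2

    # f: Chikou Span -- today's close plotted 26 days behind
    chikou = [closes[i + shift] if i + shift < n else None for i in range(n)]
    return tenkan, kijun, span_a, span_b, chikou
```

The cloud at each index is the band between span_a and span_b; in a full implementation the spans would also extend 26 days beyond the last candle, which this sketch clamps to the data for simplicity.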
By contrast, if the current price is far above the cloud, that does not in itself create a presumption that the price move is overstretched to unsustainable levels; rather the opposite: it indicates that a powerful trend is in place. In such a situation, Elliott would watch the daily candles closely for warning signs of instability or reversal.
Using further analysis: interlinked principles of waves, price targets and timespan
Consolidation patterns can be divided into small waves and, on breakout, the size of the consolidation will determine the size of the breakout wave. Elliott describes the wave theory of Hosoda (Ichimoku Sanjin) and Sasaki, and discusses their differences from Western wave counts. Sasaki's long-term counts can be of unlimited number, but she concludes: "Personally, I do not feel these are true wave counts, as Westerners understand them. I think Sasaki is merely marking intermediate highs and lows in some way so that these can be pinpointed clearly, prior to further analysis and classification into the different pattern types." Elliott is dismissive of the Ichimoku price target technique, finding it too fiddly and short-term. She also describes the Ichimoku timespan principle, in which approximate compounds of 9 days give the series 9, 17, 26, 33, 42, 65, 76, 129, 172, 257. Counted ahead from an important high or low, these timespan numbers are used to project how many days ahead an interim high or low is likely to occur. The more counts that end at, or close to, the same day in the future, the more likely it is that that day will see a trend end and reverse. Again, all these numbers were discovered through trial and error. Elliott notes the formidable caveats: the need to allow a day or two either side of each number, and the fact that there can be gaps in the count. She feels, and I would agree, that there are simply too many potential turning dates to cope with. [I would extend this criticism to areas of Western technical analysis.] She concludes that in looking for turning points "it is the cloud itself rather than the wave counts or price targets that are the basis of my view; if and when the two coincide the likelihood increases".
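The timespan count described above can be sketched mechanically. This is a hypothetical reconstruction, not Elliott's or Hosoda's exact procedure: project the number series forward from each important high or low (given here as bar indices), count day one inclusively as the Japanese convention does, and tally how many projections cluster on each future bar, with a tolerance reflecting the "day or two either side" caveat.

```python
# Hypothetical sketch: cluster-counting of Ichimoku timespan projections.
from collections import Counter

TIMESPAN = [9, 17, 26, 33, 42, 65, 76, 129, 172, 257]

def turning_candidates(pivot_bars, horizon, tolerance=2):
    """Return {bar_index: hit_count} for bars up to `horizon`, counting every
    pivot + timespan projection that lands within `tolerance` bars."""
    hits = Counter()
    for pivot in pivot_bars:
        for span in TIMESPAN:
            target = pivot + span - 1  # inclusive counting: the pivot is day 1
            for bar in range(target - tolerance, target + tolerance + 1):
                if 0 <= bar <= horizon:
                    hits[bar] += 1
    return dict(hits)
```

Bars where several counts coincide (a hit count above 1) are the candidate turning days; the sheer number of candidates this produces illustrates the reviewer's criticism.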
In very strongly trending markets Elliott does not use wave counts, most price targets, or the Relative Strength Index (RSI), but she would be cognisant of time targets. In sideways markets she would use neither moving averages nor the Ichimoku cloud. Chapter 5 usefully puts the whole technique together in a set of case studies. Chapter 6 introduces the application of options to Ichimoku time and price analysis: buy options on a daily or weekly close through key levels such as the cloud, and be ready to sell out an option which has reached its price target sooner than expected in order to recoup some time premium; grant options whose strikes are just beyond pattern objectives or big clouds. Elliott is strong on options, and this chapter is recommended reading for its examples of options strategies at least as much as for its application of Ichimoku techniques.
Written in a straightforward and conversational style, the book does us a good service in promulgating Ichimoku Kinko. It is essentially an empirical study, and my only disappointment was that the scope of the book did not extend to some rationalisation of the technique, together with back-testing of results and probabilities of success. It is, however, likely to trigger further study, perhaps from a more academic perspective. Being generally little known, at least among Western investors, Ichimoku Kinko adds a new dimension to candlestick charts. As a keen proponent of candle charts, I must confess that I have not, as yet, developed the same rapport with clouds. But who knows? Perhaps, armed with Elliott's very useful book, I will come to see that, yes, every cloud does indeed have a silver lining.

Michael Feeny was for 15 years Chief Economist and Market Analyst at Sumitomo Mitsui Banking Corporation.
Instrument selection
Harry M. Markowitz's portfolio theory showed that positions which are highly positively correlated will increase overall portfolio risk; this is why most professional systematic traders attempt to trade a high number of uncorrelated markets, ranging from soft commodities to equities and grains, and from energy and foreign exchange to short-term interest rates. They are looking for lowly correlated instruments, to which they apply their trading models, to aid their portfolio diversification. As a result, a multi-asset-class portfolio is likely to be more efficient than one focused exclusively on one area. However, even the exercise of diversification is not easy. Correlations are not stable at all: they are much more volatile than returns and volatility. In particular market conditions, such as a flight to quality, correlations will jump, making the portfolio risk measures increase dramatically and unpredictably. Most importantly, the more a portfolio composition is based on the
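The diversification point can be illustrated with a two-asset example (the volatilities and correlations below are illustrative, not drawn from the article): the same two positions carry very different portfolio risk in a quiet regime and in a flight-to-quality regime where correlation jumps.

```python
# Illustrative sketch: portfolio volatility of two equally weighted
# positions as the correlation between them changes.
import math

def portfolio_vol(vol_a, vol_b, corr, w_a=0.5, w_b=0.5):
    """Annualised volatility of a two-asset portfolio."""
    var = ((w_a * vol_a) ** 2 + (w_b * vol_b) ** 2
           + 2 * w_a * w_b * vol_a * vol_b * corr)
    return math.sqrt(var)

low = portfolio_vol(0.20, 0.20, corr=0.1)   # diversified regime
high = portfolio_vol(0.20, 0.20, corr=0.9)  # flight-to-quality regime
```

With correlation at 0.1 the book's volatility is well below the 20% of either leg; as correlation approaches 1 the diversification benefit vanishes entirely, which is exactly the unpredictable jump in portfolio risk described above.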
Risk allocation
Having gone through the process of instrument selection, strategy design and testing, several statistical exercises such as stress-testing and scenario and what-if analysis should be constructed in order to estimate potential future losses and to calibrate risk according to the given mandate. In this business, the correct determination of the targeted risk is a step at least as important as strategy design. It is industry practice to express position sizing as a function of capital and to be aware
of the exposure limits. These limits can be expressed as gross and net exposures, beta exposure, leverage and so on. This way of looking at the portfolio is partially misleading from a risk perspective: being invested in Tesco with 5% of the capital under management does not tell us much about the risk. The only thing measured is the exposure to the stock, not its riskiness. Any sizing done as a function of the capital managed is calibrating risk only as a by-product. Sizing should be done as a function of the fluctuations of the asset traded, i.e. its volatility. Investing 5% of capital in Vodafone carries a different risk from 5% invested in BHP Billiton: the yearly volatility of Vodafone may well be around 25% while BHP's could be as high as 40%. Even without considering the impact of correlations, rescaling positions as a function of volatility should generate a more diversified and efficient portfolio. The exercise of rescaling positions as a function of volatility can be done with a variety of indicators: from the standard deviation of historical returns to Average True Range (ATR), or by computing Value at Risk (VaR), which measures the maximum loss that could be incurred over a set period of time at a certain level of probability. At a single-position level, ATR is probably the most effective measure for the sake of accuracy, because it also considers the intraday swings while the others consider just close-to-close variations. By calibrating the size as a function of the instrument's variability, the strategy will continuously adjust its size, maintaining the targeted risk. In this way one can manage market volatility without being driven by it. Targeting the correct amount of risk, given the risk limits associated with the mandate, is critical.
Let's say there are two traders, Lewis and Fernando, running the same system (called Silver Arrow) on the same instruments and asset classes, with the same amount of capital and the same trading limits, over the same period. Can Lewis end up hitting his portfolio stop-loss while Fernando generates a profit by year end? Certainly he can. Lewis could target an average risk level too high for the given mandate and end up being stopped out before the subsequent run-up. Meanwhile the other trader, by keeping a lower targeted risk, does not trigger the stop-loss and still has the chance to recover the losses, generating a positive return for the period. So, even with an overall profitable strategy, a trader can trigger his stop-loss if he is targeting a risk which is too high, given the mandate. In a way, risk budgeting is more important than the strategy itself, and the above example demonstrates the implications of being too greedy.

At portfolio level, risk metrics are expressed in a variety of ways: year-to-date stop-loss, realised volatility and Value at Risk (VaR) are among the most common. Applied to a directional system, VaR and realised volatility can help determine the actual risk of the portfolio in normal market conditions. The financial literature describes many ways to compute these statistics (especially VaR), each one based on some underlying statistical assumptions. One of the most interesting VaR calculations is the non-parametric, or historical, method: it simply computes the value associated with the n-th percentile of the historical returns distribution of a portfolio, with no underlying assumption about the shape of the distribution. This allows the analyst to capture the overall instruments' volatility, the correlations, and even the sudden changes in correlations which generate the fat tails across the analysed period. Computed on at least a couple of years of daily data, it should provide a fairly
reliable measure of risk, and its implementation is also quite straightforward. Realised annualised volatility is the standard deviation of the realised daily returns of a portfolio, annualised. It is a measure of the realised risk of the book and it is an industry standard; for directional portfolios it is a useful tool to determine the average risk taken by the manager. Realised annualised volatility and parametric VaR are statistically linked. In a nutshell, under the assumptions of normality, stationarity of volatilities and correlations, and independence of portfolio returns, an annualised volatility of 16.00% corresponds to a daily standard deviation of 1.00% and an average daily VaR at 97.5% probability of 2.00%, meaning that a book with 16% annualised volatility should not lose more than 2% in one day with 97.5% probability. By taking advantage of the statistics associated with the normal distribution, we also know that moves of 2 standard deviations or more, on either side, will each happen over time with a probability of about 2.5%. Assuming an Information Ratio of 1.00, a volatility of 16% and the normality and independence of returns, a negative annual return should be realised roughly one year out of seven, and an annual return of -16% or worse one year out of 40. Assuming a year-to-date stop-loss of -16%, an information ratio of 1.00 and a targeted volatility of 16%, statistical simulations demonstrate that the worst drawdowns could be in the region of 30%, i.e. about -2 standard deviations. Given the statistics that many strategies generate when back-tested, these numbers sound much more worrying; or are they just more realistic than our over-optimistic expectations? Especially when considering that the hypotheses of normality, stationarity and independence are quite simplistic and not exactly the most conservative ones!
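The historical (non-parametric) VaR described above can be sketched in a few lines. One assumption to flag: the percentile index convention below (truncating to the nearest lower rank) is only one of several interpolation conventions in use, so production risk systems may return slightly different figures on the same data.

```python
# Sketch of non-parametric (historical) VaR: read the loss threshold
# straight off the empirical distribution of daily returns, with no
# assumption about its shape.
def historical_var(daily_returns, confidence=0.975):
    """Loss (as a positive fraction) exceeded only with probability
    1 - confidence, under the truncating percentile convention."""
    ordered = sorted(daily_returns)               # worst day first
    idx = int((1.0 - confidence) * len(ordered))  # e.g. the 2.5th percentile
    return -ordered[idx]
```

Because it uses the realised return distribution directly, the fat tails generated by correlation jumps are captured automatically; the cost is that it can say nothing about losses worse than anything in the sample.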
Results can be exciting in back-testing, but it is very difficult to move away from those figures. Too often, system designers focus their efforts on the historical results of their system without realising that those performance analytics are just a reflection of a limited data sample drawn from an unknown population. They should be followed as an indication, rather like a map in a treasure hunt, not as the output of a precise navigation system. In order to obtain a more realistic picture of risk, some stress testing should also be applied. Scenario and what-if analysis will help in formulating estimates for market conditions which did not happen in the historical dataset but which could still occur in the future. However, even stress-test design is full of perils, and designing a stress test is more of an art than a science. In essence it is about designing a market scenario which might never happen, or may occur on a handful of occasions, but which is also plausible from an economic point of view. One can increase volatilities, change correlations, create trends and manipulate the data in a million ways, but defining where the fine line between the possible and science fiction lies is not easy, and it is completely arbitrary. Alternatively, it is also possible to let the computer randomly design a huge number of scenarios and then look at the aggregate results. In this case some of the outliers will be truly extreme and should be considered cum grano salis. However, even if some of those scenarios are meaningless, they will give a deeper understanding of the risks the portfolio will be running.
systems can be when placed as part of a portfolio is the sideways system. The entry is just the 5-period Stochastic crossing up or down, provided that the ADX is falling, which indicates a lack of trend. The sensitive nature of the Stochastic means that it acts as a stop-loss, the only other money management being a profit target. This is set at a relatively high percentage so that each time a new trend begins both the trend-following and sideways systems are active.

The next key area in building systems is the element of time and timing. Several questions can be asked:

- What is the time of day that entry is occurring?
- Based on that, should risk and expectation be adjusted?
- Where are you entering in the trend?
- What is your risk profile over the first 5 bars?
- How long should your trade last?
- Did losers turn into winners?
Taking the first two questions first, Figure 2 shows a study called Volatility Time Bands. These compute the normalized range on any timeframe chart and then take a user-defined average for that time of day. When the bar opens, the study plots 1, 2 and 3 standard deviations around that opening price, which provides a fixed view of risk and expectation within the early part of the trade's history. As can be seen on the chart, the bands concertina up and down depending on the time of day. This is critical information that allows not only a firm understanding of what to expect but also, depending on the time of day, allows for changes in volume as absolute risk adjusts. The importance of this when system building (or, for that matter, prop trading) should not be underestimated.
Figure 2: Point A is Australia's opening, points B and C are Japan's and London's openings, whilst D and E are when American statistics are released.
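The idea behind the Volatility Time Bands can be reconstructed in outline. This is a hypothetical sketch of the principle only (the CQG study itself is proprietary): bucket historical bar ranges by time-of-day slot, then project 1, 2 and 3 standard-deviation bands around the current bar's open using that slot's statistics.

```python
# Hypothetical sketch of time-of-day volatility bands.
import statistics
from collections import defaultdict

def band_stats(history):
    """history: list of (slot, bar_range) pairs, e.g. ("09:00", 12.5).
    Returns {slot: (mean_range, stdev_range)}."""
    by_slot = defaultdict(list)
    for slot, rng in history:
        by_slot[slot].append(rng)
    return {slot: (statistics.mean(v), statistics.pstdev(v))
            for slot, v in by_slot.items()}

def bands(open_price, slot, stats):
    """1/2/3 standard-deviation bands around this bar's open, using the
    range statistics for its time-of-day slot."""
    _mean, sd = stats[slot]
    return {k: (open_price - k * sd, open_price + k * sd) for k in (1, 2, 3)}
```

Because each slot has its own statistics, the bands naturally widen around market openings and data releases and tighten in quiet periods, which is the concertina effect visible in Figure 2.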
The next four questions are answered by using the CQG Entry Signal Evaluator. This application allows the trader to input their entry code and then assess its performance 60 bars into the future. There is no money management, so it provides both the best- and worst-case scenarios. Various analytical tools and graphics are included in the results, but one of the key ones is the summary of the profit curve (Figure 3). Running a basic system test may suggest that the system is poor when in fact it is simply that the timing of entry is poor. The Entry
Signal Evaluator will tell you whether your timing of entry is correct. Where you are entering in the trend is determined by where the low point occurs and the peak point before the first reaction. Extensive years of testing reveal that normalized trends peak at 15 to 20 bars from inception before a correction occurs. Therefore, understanding where your peak point is tells you how far the trend must have been running before your system recognized that fact.
Figure 3:
What will be the opening position size of your next trade? Nearly all systems will experience a period of at least 10 losses in a row, and these occurrences are normally associated with the longest drawdown durations and the greatest damage. Therefore, creating code that lowers volume after each loss and then returns to the original volume after a winning trade will often make a huge difference to the system's overall robustness.

The next volume-based consideration is pyramiding. Most pyramiding techniques increase profitability but at the expense of stability. This is usually because most pyramiding techniques enter the trade based on the close. Not surprisingly, in a downtrend closes tend to be near the lows and in uptrends near the highs. This means that risk increases. One way around this is to use a more sophisticated technique that involves identifying levels that are intraday corrections to the trend. Range Deviation Pivots are based on the day's opening and therefore provide a fixed historical reference to both the length and the directional bias. They have an in-built trend identifier, which means that, in contrast to the Volatility Time Bands, they are not symmetrical. The Deviation Pivots are closer together and nearer the opening above the market when the trend is identified as down, and vice versa when the trend is up. This means that the pyramid can be placed as a limit order at the 1st and 2nd deviations. In all the systems I have ever added this method to, profits always increase and, in the vast majority of cases, it has a limited impact on robustness, and can actually improve it. Figures 4 and 5 show the difference between using the pyramids and not on a CQG customer's trend-following system.
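The loss-based volume rule described above can be sketched as follows. The halving factor and the floor of one lot are illustrative assumptions; the text only specifies lowering size after each loss and restoring it after a winner.

```python
# Sketch of size reduction after losses: halve the volume after each
# losing trade, restore the base volume after a winner.
def next_volume(base_volume, last_volume, last_trade_pnl):
    if last_trade_pnl > 0:
        return base_volume           # winner: back to full size
    return max(1, last_volume // 2)  # loser: cut size, keep at least 1 lot

# Walk a losing run, then a winner, then another loser.
vols = []
vol = 8
for pnl in [-1, -1, -1, +1, -1]:
    vol = next_volume(8, vol, pnl)
    vols.append(vol)
# vols is now [4, 2, 1, 8, 4]
```

A 10-loss run at full size is the drawdown killer mentioned above; under this rule the later losses in the run are taken at a fraction of the opening volume, while a single winner re-arms the system immediately.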
Figure 4:
When building trading models, a common error is to focus mainly on entry signals. But exit methodology and sound money management will have far more effect on individual model profitability, while careful portfolio selection (market, trading model or timeframe) can underpin a profitable long-term strategy. A generic exit code should ideally have the following:

- It must be dynamic to volatility or range and to timeframe, especially on commodities, bonds and indices. FX can rely on tick-based concepts.
- Percentage money management over the initial period, or a sensitive technical exit based on the time of day of entry. From there, a wider money management stop.
- A high-level/low-level disaster exit.
- A profit target based on percentage (not for FX) and linked to the time of day as the trade develops.
- Time. The trade can only last so long before it must be exited. (This concept and the reason for its importance are discussed later.)
- A variable look-back test so market timing is determined.
Figure 5:
All of the concepts above reflect money management. When using technical-analysis-based code, the rules are different. A common error is not building an exit code of sufficient robustness. This is an easy trap to fall into, as the code only creates an exit if there has been an entry. This means there may have been many times when an exit was true but is not shown, because some other rule beats it to it, or there simply wasn't a trade that needed to be exited. Therefore the basis of any exit must be tested as though it were an entry. If it stands up to the rigours of the Entry Signal Evaluator then not only is the code valid, it can also potentially form the basis of a contra-trend system. The time of day that a trade is entered can allow for moments of increased volume, but there are other volume-based considerations.
Common traps and pitfalls include curve fitting, optimization abuse, sequences, poor data and synchronization. The first two are by far the most common, as the latter three have become less of an issue as technology and electronic data have ironed out these problems. Curve fitting is a natural human tendency that involves looking at charts and then creating code that captures the major trends without any one-off disasters. This is especially true if one is trying to build systems that are all things to all market conditions. If your entry code has more than six rules and you find yourself adding code in order to stop one-off large losses, or a sequence of losses, you are probably falling into this trap. Whilst curve fitting is delusional and means that it is highly unlikely that future results will match the historical ones, optimization abuse is a one-way ticket to oblivion. Markets continually evolve and change, sometimes drastically (crude oil, for example), so building systems or money management around what was the best combination of parameters in the past is a sure-fire way of guaranteeing that it will not be the best combination in the future. This is especially true for individual stocks, as their value can change dramatically over time, in contrast to futures and FX, which tend to be more mean-reverting. Highlighting the folly of balancing your system on a pin is the statistical fact that if you took the most recent 100 trades, the chance of the next 100 trades following the same buy, sell and profit sequence is approximately 1 in 11 billion. However, in the correct hands, the use of optimization is an integral part of understanding the driving force of any system and, just as importantly, provides a true guide to the robustness of the system itself. This is done via the use of three-dimensional graphics. Nearly all systems, whatever the number of rules and exits, normally have a few components that are the key to the system.
Optimizing all the variables associated with the system reveals what those keys are. This information is essential when the system goes live: if it does not live up to expectations, you must know on which areas of the system to concentrate your efforts. If you don't know this information, you must re-analyze everything from scratch. The building blocks of optimization are as follows:

- The first test is on look-back (entry timing point) and pyramid.
- The second optimization takes just 2 variables and is a short test.
- The third test narrows down the ranges of the first test and adds a 3rd variable. This continues until the final variable.
- Lastly, a large test is done on all the variables at once, remembering that this inevitably means it is creating a compromise solution.
- This whole process is repeated on an out-of-sample basis and compared. Out-of-sample tests are usually worse than the first test (due to subconscious curve fitting), but if they are better this is a good sign.
- The process is repeated once again with buys and sells separately. Uptrends and downtrends do not behave in a symmetrical fashion, so consideration should be given to differences between buy and sell signals, particularly from a time-based point of view.

It is crucial that, when analyzing any two variables in 3-D, a change is made in one of the other variables to assess its impact. Figure 6 shows an ideal graphic whereby any combination of variables provides a relatively flat profitable outcome. This means
there is a far better chance that the system will work in the future. Sudden peaks and troughs are a recipe for disaster, as only a small change in the underlying market's behaviour can cause a huge difference in performance.
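The flat-surface preference behind Figure 6 can be sketched numerically. This is an illustrative approach, not CQG's: score each cell of a two-variable optimization grid by the mean profit of its neighbourhood, so that an isolated spike loses to a flat profitable plateau.

```python
# Illustrative sketch: prefer flat profitable regions of an optimization
# grid over isolated spikes.
def neighbourhood_score(grid, i, j):
    """Mean profit of cell (i, j) and its in-grid neighbours."""
    cells = [grid[a][b]
             for a in range(max(0, i - 1), min(len(grid), i + 2))
             for b in range(max(0, j - 1), min(len(grid[0]), j + 2))]
    return sum(cells) / len(cells)

def most_robust_cell(grid):
    """Parameter pair whose neighbourhood, not just itself, performs best."""
    return max(((i, j) for i in range(len(grid)) for j in range(len(grid[0]))),
               key=lambda ij: neighbourhood_score(grid, *ij))
```

On a grid with a single spectacular cell surrounded by losses and a modest flat plateau elsewhere, the raw maximum picks the spike while the neighbourhood score picks the plateau, which is the combination far more likely to survive a small change in market behaviour.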
Figure 6:
Even if a model appears robust in out-of-sample testing, it may still be overly dependent upon a particular sequence of trades. Monte Carlo simulation can be used to test for this by reordering the trades randomly, and it also provides thresholds for best- and worst-case scenarios. Minimum and maximum excursion analysis will then tell you how much of any particular trend you are actually capturing.

Whilst there are many statistics that can be placed on trading systems, the standard deviation of risk is one of the most important. However, for the statistics to make sense it is essential that various building blocks are in place with regard to how a trade is money-managed. These are stop-loss, profit target and time. The first two seem obvious, but the last is an often overlooked component. By creating a fixed loss and a fixed profit we are putting boundaries on what any one trade can produce. However, by placing a limit on time as well, we are creating a box or cube of expectation, which means that the standard deviation of risk has the opportunity to provide a sound assessment of a system's ability to reflect its historical results.

Most systems' statistics are distorted by what I call elephant trades. Curve fitting usually means that there are no elephant losses, and also that there are some elephant winners. These are often very unusual or one-off events, and the longer the timeframe of the chart, the more dangerous these trades are. If a large win (usually an extended trend) only occurs once every 10 years, you may have to wait another 10 years or more for the next one. Therefore, it is essential that any trend-following system has a limit in time on how long the trade lasts. It then must have sufficient sophistication to identify that the trend is continuing and begin a new trend-following trade with the same time-based exit. The pyramid system shown earlier utilises this concept.
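The Monte Carlo reordering test mentioned above can be sketched directly: shuffle the historical trade list many times and record the maximum drawdown of each shuffled equity curve, turning a single historical path into a distribution of plausible outcomes. The run count and seed below are illustrative.

```python
# Sketch of Monte Carlo trade reordering: a distribution of drawdowns
# instead of the single historical sequence.
import random

def max_drawdown(trade_pnls):
    """Largest peak-to-trough fall of the cumulative equity curve."""
    equity = peak = dd = 0.0
    for pnl in trade_pnls:
        equity += pnl
        peak = max(peak, equity)
        dd = max(dd, peak - equity)
    return dd

def drawdown_distribution(trade_pnls, runs=1000, seed=42):
    rng = random.Random(seed)
    out = []
    for _ in range(runs):
        shuffled = trade_pnls[:]
        rng.shuffle(shuffled)
        out.append(max_drawdown(shuffled))
    return sorted(out)
```

The upper tail of the sorted distribution gives the worst-case threshold referred to above; if the historical drawdown sits near the bottom of the distribution, the back-test result was flattered by a lucky ordering of trades.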
Structured this way, if an extended trend does occur you capture the majority of it through a series of trades rather than one huge one. This means that the standard deviation of risk statistic (and all the other statistics) is not distorted by one-off events. This time-based concept has two approaches. The first is a simple limit on how long the trade can last. As mentioned earlier, extensive years of testing using the Entry Signal Evaluator reveal that the first part of a trend (excluding stocks) typically lasts 15 to 20 bars, whatever the timeframe. The key is whether, once the correction has ended, the trend restarts. This means that in order
to capture a two-tier trend, the time-based exit should be either 20 or 40 bars. The second time-based exit is more sophisticated and adjusts automatically depending on how long the trade has lasted. Most system builders, and systems themselves, have some sort of trailing stop. Whilst there is nothing inherently wrong with this concept, what it guarantees is that any exit is normally a considerable distance from the absolute peak of the trend, whatever the level of sensitivity. The parabolic is a sound and consistent trailing stop method if used with the correct variables, and it has an in-built acceleration based on how long the trend has lasted. Whilst the market will eventually catch the parabolic up and create an exit, it makes far more sense to take this concept and reverse the logic. Taking an uptrend as an example: as the trade develops, a parabolic-based target starts above the market and drifts downwards towards the current price as the trend develops and accelerates. If built correctly, this means those profit exits are limit orders and are far more likely to be filled near the top of the trend, rather than allowing the market to retrace significantly. This theory can be evolved still further by looking at average true range in relation to the length of trend, enabling the creation of exits based on expansion of range, adjusted by the number of bars since the trade was initiated. These can be built on a multitude of levels, so that a ratio links trade length with that expansion. An example would be: the trade is now more than 20 bars old and price has reached two times the long-run average of its true range.

If wanting to take this concept still further and add another level of sophistication, we can return to the time of day at which the trade finds itself. Using the Volatility Time Bands, we have already computed the limit of range within any particular timeframe for that time of day and, if this is linked to the overall length of trend as price hits the 3rd deviation, powerful exit points emerge. So powerful, in fact, that they can form the basis of contra-trend trades in the same fashion as the RSI divergence based patterns mentioned earlier. The key difference is that the Volatility Time based concepts are far more common (as they use short timeframe charts) and are therefore consistently part of the trader's and system builder's armoury.
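The combined time-limit and range-expansion exit can be sketched as follows. This is a minimal illustration for a long trade, using the 20-bar limit and two-times-average-range multiple mentioned in the text; the function, its bar format and the 50-bar averaging window are hypothetical, not the author's implementation:

```python
def true_range(high, low, prev_close):
    """Wilder's true range for a single bar."""
    return max(high - low, abs(high - prev_close), abs(low - prev_close))

def time_and_range_exit(bars, entry_index, max_bars=20, atr_period=50, atr_mult=2.0):
    """Exit a long trade when EITHER the time limit is reached OR price has
    expanded beyond a multiple of the long-run average true range.
    `bars` is a list of (high, low, close) tuples; returns the exit index."""
    entry_price = bars[entry_index][2]
    # true range of bar i is stored at trs[i - 1]
    trs = [true_range(h, l, bars[i - 1][2])
           for i, (h, l, c) in enumerate(bars) if i > 0]
    for i in range(entry_index + 1, len(bars)):
        bars_in_trade = i - entry_index
        start = max(0, i - atr_period)
        avg_range = sum(trs[start:i]) / (i - start)  # long-run average of range
        if bars_in_trade >= max_bars:
            return i                                  # time-based exit
        if bars[i][2] - entry_price >= atr_mult * avg_range:
            return i                                  # range-expansion exit
    return len(bars) - 1
```

The ratio linking trade length with range expansion would then simply vary `atr_mult` as a function of `bars_in_trade`.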
The power of such concepts is apparent when looking at two wildly different markets using the same system: the Euribor and the Dollar/Swiss. Figure 7 shows the statistics of both markets. Superficially, the Dollar/Swiss looks the better bet, as profits are 196%. However, closer scrutiny shows that drawdown is 15% and the standard deviation of risk is above 2. You make lots of money, but you will be in for a wild ride. The Euribor statistics are far more reassuring. Drawdown is only 2%, and was actually below 1% for the vast majority of the test period. More important is the fact that the standard deviation of risk is very low at 0.2%, whilst still returning 20%.
Shaun Downey is Technical Analyst at CQG and writes technical commentaries at www.Ransquawk.com. He has recently written a book, Trading Time (www.trading-time.com).
Systematic trading: it is more about risk than you may imagine
Continued from page 7

Conclusions
This brief discussion is aimed at showing that portfolio composition and risk analysis play a critical part in systematic trading development. They are far more important than defining the optimal length of the RSI or the most effective method of calculating the moving average. Once the mandate and trading limits are defined, and the Information Ratio estimated through the strategy and the time series, targeting the right level of risk becomes critical, just as in the Fernando and Lewis example. There are very few things we can control in trading, as in life generally. It is not possible to control the profitability of the strategy; one can only design it according to a given set of parameters, estimate the profitability on the historical data at one's disposal, and run it. Furthermore, it is not possible to control volatilities and correlations either; one can only estimate them ex-post and react to their sudden changes. However, what one can do is size the bets using historical data, quantitative tools and some common sense, adjust sizing according to volatility changes, and evaluate by how much one could potentially go under water. And believe me, that in itself is a fairly daunting task.
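As a sketch, the Information Ratio defined in the footnotes and a simple volatility-based bet-sizing rule might be computed as follows. The 252-day annualisation, the cap of 2x on scaling up, and the function names are illustrative assumptions, not the author's method:

```python
import math
import statistics

def information_ratio(daily_returns):
    """Average annual return divided by annualised volatility
    (the definition given in the footnotes), assuming 252 trading days."""
    mean_annual = statistics.mean(daily_returns) * 252
    vol_annual = statistics.stdev(daily_returns) * math.sqrt(252)
    return mean_annual / vol_annual

def volatility_scaled_size(base_size, target_vol, realised_vol):
    """Cut the bet when realised volatility exceeds the target,
    and (capped) increase it when volatility falls below it."""
    return base_size * min(target_vol / realised_vol, 2.0)

# Illustrative: scale a 100-lot base position to a 10% volatility target
print(volatility_scaled_size(100, 0.10, 0.20))  # halves the position: 50.0
```

This is exactly the "adjust sizing according to volatility changes" step: the strategy's signals are untouched, only the bet size reacts.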
Footnotes
1. Information Ratio = Average Annual Return / Annualised Volatility.
2. In real life, for a directional systematic diversified portfolio, an Information Ratio of 1.00 is an excellent result.
' Remove the ' in front of COVER (and SHORT) for long/short trading
' COVER
BUY
ELSEIF HASX(RSI(#PERIOD),#SL,DOWN)
SELL
' Remove the ' in front of SHORT for long/short trading
' SHORT
ENDIF
@PLOT=RSI(#PERIOD)
NEXT

The key elements of this short test program start with some comments that allow a description of what the system does. Comments can also be used to comment out commands with a '. Then you set some parameters: in this example, the RSI period and the levels at which to buy and sell. Setting them as parameters makes them easy to change on the fly and allows you to optimise these values for the best profit. Then you set how you want the lines to look on your chart when you have run the test. For the script to be fully flexible, iterative calculations need to be run through the price history; this is done by way of a loop (here FOR ... NEXT), and within this loop we can ask the questions we want. If the RSI crosses up through 30, buy, and if it falls through 70, sell, and we plot the result. You could write the code for an RSI yourself (by the way, the RSI is actually incorrectly calculated on most systems), but in this case you can simply call a prewritten RSI function using the RSI command. Rather than writing all this code from scratch, most traders prefer to draw it from a custom indicator library which contains all the standard technical analysis indicators, as well as ones that have been published in the public domain. Editing and reverse engineering the code behind these can also be easier than starting with a blank sheet.

System testing can be as complex as you like, and once you see the signals of your system in the context of your chart you may then decide to add further conditions to filter out less good signals. For instance, you may be trading the cross of an exponential average (signal line) on a momentum line, but decide you only want buy signals generated if the momentum is less than zero. The sky really is the limit with the flexibility of being able to code anything yourself.

Optimisation

One of the most valuable aspects of system testing is that you can optimise the parameters you set to find the values that have produced the best profits. This lets you question the conventional wisdom surrounding the various technical analysis tools we use. For instance, are 30 and 70 the best levels at which to read an RSI? Which periods produce the best results? Another aspect that optimisation of system tests allows is to pitch different buy and sell criteria against each other for the best trading results. From this we learn that, more often than not, the best results are obtained where the criterion or technique that gets you into a trade is different from the one that gets you out. And, running exhaustive tests, we find that an optimised stop-loss is frequently one of the best tools for exiting trades with the best overall profit results. Here we see how trading an optimised RSI for entry and an optimised stop-loss looks on the chart.

Chart 2
Recently, for those trading the sterling/dollar rate (chart 2), the 19-day RSI rising through 40 has been the best entry, with a 1.9% stop-loss providing the best exit. It is also interesting to note that the stop-loss is a guaranteed exit, as it works on the price, whereas the RSI may not reach 70 to give us a sell signal, as we see in the last trade. Apart from developing your very own system to test and trade, you may do one of the following:
1. System test some basic technical analysis indicators
2. System test some of your own scans
3. System test things that you notice occurring on charts
4. System test indicators you see published.
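By way of illustration (in Python rather than a charting package's own script language), an RSI entry paired with a fixed percentage stop-loss along the lines described above might be sketched as follows. The parameter values echo the sterling/dollar example, but the code is a hypothetical reconstruction, not Updata's implementation:

```python
def wilder_rsi(closes, period=14):
    """Wilder's RSI using smoothed average gains/losses (the 'correct'
    calculation the article alludes to, rather than simple averages)."""
    rsis = [None] * len(closes)
    gains = losses = 0.0
    for i in range(1, len(closes)):
        change = closes[i] - closes[i - 1]
        up, down = max(change, 0.0), max(-change, 0.0)
        if i <= period:
            gains += up
            losses += down
            if i == period:
                gains /= period
                losses /= period
        else:
            gains = (gains * (period - 1) + up) / period
            losses = (losses * (period - 1) + down) / period
        if i >= period:
            rsis[i] = 100.0 if losses == 0 else 100.0 - 100.0 / (1.0 + gains / losses)
    return rsis

def backtest(closes, period=14, entry_level=40, exit_level=70, stop_pct=1.9):
    """Buy when the RSI crosses up through entry_level; exit on a fixed
    percentage stop-loss or when the RSI crosses down through exit_level.
    Returns cumulative percentage profit over all closed trades."""
    rsi = wilder_rsi(closes, period)
    in_trade, entry, pnl = False, 0.0, 0.0
    for i in range(period + 1, len(closes)):
        if not in_trade and rsi[i - 1] < entry_level <= rsi[i]:
            in_trade, entry = True, closes[i]
        elif in_trade:
            stopped = closes[i] <= entry * (1 - stop_pct / 100)
            rsi_exit = rsi[i - 1] > exit_level >= rsi[i]
            if stopped or rsi_exit:
                pnl += (closes[i] - entry) / entry * 100
                in_trade = False
    return pnl
```

Note how the stop-loss fires on price alone, so it remains a guaranteed exit even when the RSI never reaches the exit level.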
These are all very straightforward to do, and may ultimately provide you with better value than attempting to develop the elusive holy grail system.

Stop-loss as a system

We could look at a number of systems, but one quite simple system is the flip-flop stop-loss, which swaps between a long and a short stop. By setting these values as variables, we can optimise for the best stop to use for long trades and the best for short trades. Chart 3 shows the equity line for the system on Euro/dollar, with the trades marked on.
Chart 3
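A minimal sketch of the flip-flop idea, assuming a stop-and-reverse interpretation in which each trailing stop, once hit, flips the position the other way; the two stop percentages are the variables one would optimise:

```python
def flip_flop(closes, long_stop_pct=2.0, short_stop_pct=1.5):
    """Always-in-the-market stop-and-reverse: a long position trails a
    stop below the highest close since entry; when hit, flip short with
    a stop above the lowest close since entry, and so on.
    Returns cumulative percentage profit over all flips."""
    position, entry, extreme, pnl = 1, closes[0], closes[0], 0.0
    for price in closes[1:]:
        if position == 1:
            extreme = max(extreme, price)             # trail under the highs
            if price <= extreme * (1 - long_stop_pct / 100):
                pnl += (price - entry) / entry * 100  # close long, go short
                position, entry, extreme = -1, price, price
        else:
            extreme = min(extreme, price)             # trail over the lows
            if price >= extreme * (1 + short_stop_pct / 100):
                pnl += (entry - price) / entry * 100  # close short, go long
                position, entry, extreme = 1, price, price
    return pnl
```

Optimising `long_stop_pct` and `short_stop_pct` over a grid is then a two-parameter search, which is what the system tester automates.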
So how does the flip-flop match up against more traditional technical analysis indicators as a system to trade? This is one of the real advantages of system testing: you can run tests to compare techniques. Running some optimisations on various technical analysis techniques as a long/short strategy on 25 currency rates, we can see the results above. This is by no means an exhaustive exercise, and only a handful of indicators have been chosen. In the case of exponential moving averages, the average periods were optimised between 5 and 34, thereby covering five Fibonacci numbers. The same range of periods was used for RSI, directional movement and the commodity channel index, using the textbook standard entry and exit levels for flipping from long to short. In the case of Bollinger bands, the standard deviation was kept constant and only the periods were optimised, for a cross back into the bands.

Conclusion

System testing has a value way beyond developing your own trading system. It can be used to assess how well the indicators you use work and what the best settings are for trading them.
One thing you realise when you run lots of different systems is that many strategies do not work. Indeed, it is often tempting to swap the buy and sell criteria around to turn a losing strategy into a winning one. Perhaps the greatest value of system testing for technical analysts is what it can tell us about the standard tools that we use. Try it yourself!

David Linton is chief executive of Updata plc
9. It could be these astrological influences to which Gann was referring when he described market trends as acting in a way that is analogous to radio waves; i.e. they have a specific wavelength, they travel through space and they are received by, and influence, those stocks and commodities that vibrate with a resonant frequency. "It is impossible here to give an adequate idea of the Law of Vibration as I apply it to the markets. However, the layman may be able to grasp some of the principles when I state that the Law of Vibration is the fundamental law upon which wireless telegraphy, wireless telephones and phonographs are based" (Ticker interview).

10. From time to time a stock or commodity will lose its sensitivity (or receptivity) to certain influences (astrological or otherwise). As a consequence it will become inert (i.e. its rate of vibration will fall) and its price will typically enter a downtrend. "Stocks create their own field of action and power; power to attract and repel, which principle explains why certain stocks at times lead the market and turn dead at other times" (Ticker interview).

Thus, if the assumption that astrology plays an important role in Gann's method of forecasting markets is correct, the key technique in applying the Law of Vibration is to accurately identify the major astrological influence driving a particular stock or commodity. It is then necessary to identify the resultant rate of vibration and to forecast how future astrological influences will impact this rate of vibration. "By my method I can determine the vibration of each stock and by also taking certain time values into consideration I can in the majority of cases tell exactly what the stock will do under given conditions" (Ticker interview).

Taken together, these principles of the Law of Vibration constitute a coherent theory of how financial markets work. Indeed, they constitute a new paradigm.
However, as with any new paradigm, it challenges the conventional wisdom and therefore encounters resistance from its practitioners. "It appears to be a fact that Mr Gann has developed an entirely new idea as to the principles governing stock market movements. We have asked Mr Gann for an outline of his work and have secured some remarkable evidence as to the results obtained therefrom. We submit this in full recognition of the fact that in Wall Street a man with a new idea, an idea which violates the traditions and encourages a scientific view of the proposition, is not usually welcomed by the majority, for the reason that he stimulates thought and research. These activities said majority abhors" (Ticker interview).
high and all-time low price of the stock or commodity, because at these price extremes the astrological influences will typically be very strong or very weak, respectively. Thus, under the Law of Vibration, a high price is caused by a high rate of vibration, which in turn is caused by strongly positive astrological influences (and vice versa).
• Identify the general rate of vibration of the uptrend or downtrend. This can be achieved by placing the origin of the Gann angles (i.e. 1 x 8, 1 x 4, 1 x 2, 1 x 1, 2 x 1, 4 x 1, 8 x 1 and their subdivisions) or Gann fan lines at the starting point (in time and price) of the uptrend or downtrend.
• Forecast the approximate date when the predominant astrological influence (or cycle) that is driving the uptrend (and was identified in point 2 above) will end. This can be achieved by consulting an ephemeris and the astrological chart of the start of the uptrend or downtrend.
• Forecast the future price at which the uptrend or downtrend will end. This can be achieved by identifying the intersection of the general rate of vibration of the uptrend or downtrend (point 3 above) and the forecast date that the uptrend or downtrend will end (point 4 above).
• Monitor one's forecast, which comprises all of the above elements. In particular, note that short-term positive astrological influences will increase the rate of vibration and temporarily drive prices above the long-term rate of vibration (i.e. above the long-term Gann angle identified in point 3 above). Conversely, short-term negative astrological influences will decrease the rate of vibration and temporarily drive prices below the long-term rate of vibration (i.e. below the long-term Gann angle). However, when these short-term influences expire, stock or commodity prices will revert to their long-term rate of vibration (i.e. the long-term Gann angle).
Chart 1
Gann forecast that at the end of the futures contract the price would be $1.20. This was based on the starting point of the uptrend (point i above), the long-term rate of vibration (point iii above) and the contract's expiry date of September 30 1909.

vi. In monitoring his forecast, Gann observed that since the beginning of the uptrend on January 26 1909, short-term astrological influences had temporarily driven prices above and below the long-term trend, or rate of vibration. Gann also observed that between July 21 and August 26 1909, stronger short-term negative (or malefic) astrological influences had driven prices down well below the long-term rate of vibration. Moreover, Gann observed that commencing August 26 1909 (i.e. the low point of 96 3/4 cents) these strongly negative short-term influences started to expire, and he forecast that they would fully expire over the next month, when prices would revert to their earlier long-term rate of vibration. Importantly, Gann received corroboration of the low point in August from the fact that a price of 96 3/4 cents on August 26 1909 equates to a rate of vibration of 0.0132 cents per day (based on the starting point of 94 cents on January 26 1909). This rate of vibration is one eighth of the long-term rate of vibration of 0.1053 cents per day. Another perspective is that on August 26 1909 the long-term rate of vibration of this wheat futures contract had halved three times. Thus from August 26 1909, Gann forecast and observed the simultaneous expiration of the short-term negative astrological influences and the doubling three times of the rate of vibration, as the long-term rate of vibration was regained on September 30 1909.

B) United States Steel Stock Price (see chart 2). "He (i.e. Mr Gann) came to me when United States Steel was selling around 50 and said, 'This Steel will run up to 58 but it will not sell at 59. From there it should break 16 points.' We sold it short around 58 3/8 with a stop at 59.
The highest it went was 58 3/4. From there it declined to 41 1/4; a fall of 17 1/2 points" (Ticker article).

i. Gann identified the start of the uptrend in U.S. Steel as a price of 21 7/8 on October 23 1907.
Chart 2
ii. Gann identified the predominant astrological influences driving this uptrend.

iii. Gann identified the long-term rate of vibration of this uptrend, which was 0.0950 cents per day (or 1 cent per 10.5 days).

iv. Gann forecast that the predominant astrological influences driving this uptrend would remain in force until October 1909
and hence in November 1908 he was only forecasting a short-term correction. More specifically, Gann made his forecast "when United States Steel was selling around 50", which was in early November 1908. Gann then forecast that, due to short-term negative astrological influences, a correction would start on November 14 1908 (i.e. within two weeks).

v. Based on the starting point of the uptrend (point i above), the long-term rate of vibration (point iii above) and the starting date of the correction (point iv above), Gann was able to forecast that "Steel will run up to 58 but it will not sell at 59". In fact the price of U.S. Steel peaked at 58 3/4 on November 14 1908.

vi. Gann then forecast that the short-term negative astrological influences that he had identified would remain in force until February 23 1909.

vii. Gann then had to forecast what the rate of vibration would fall to on February 23 1909 (i.e. which Gann angle would provide support before the long-term uptrend was resumed). Importantly, in making this forecast Gann sub-divided the rate of vibration. More specifically, and as the price chart of U.S. Steel shows, Gann forecast that the price of U.S. Steel would fall to the bottom of its current vibratory band and then finally fall three quarters of the band below. Thus Gann firstly forecast that the short-term correction would last until February 23 1909, and secondly that the rate of vibration of U.S. Steel would fall on that day from its long-term rate of 0.0950 cents per day to (1/1.5) x (1.25/2) x 0.0950 = 0.0396 cents per day. This is in fact exactly what happened. More specifically, on February 23 1909 U.S. Steel made a low price of 41 1/4 (which, based on the starting point of 21 7/8 on October 23 1907, equates to a rate of vibration of 0.0396 cents per day). From that point the long-term uptrend of U.S. Steel was resumed.
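The rates of vibration quoted in both examples can be checked with simple arithmetic (price change divided by calendar days elapsed). The dates and prices below are taken from the text; the helper function is purely illustrative:

```python
from datetime import date

def rate_of_vibration(p0, p1, d0, d1):
    """Price change divided by calendar days elapsed."""
    return (p1 - p0) / (d1 - d0).days

# Wheat: 94 cents on Jan 26 1909 rising to $1.20 (120 cents) at expiry
wheat_long = rate_of_vibration(94.0, 120.0, date(1909, 1, 26), date(1909, 9, 30))
print(round(wheat_long, 4))   # 0.1053 cents per day, as quoted

# The August 26 low of 96 3/4 cents, from the same starting point
wheat_low = rate_of_vibration(94.0, 96.75, date(1909, 1, 26), date(1909, 8, 26))
print(round(wheat_low, 4))    # ~0.0130, close to one eighth of 0.1053

# U.S. Steel: 21 7/8 on Oct 23 1907 to the 41 1/4 low on Feb 23 1909
steel = rate_of_vibration(21.875, 41.25, date(1907, 10, 23), date(1909, 2, 23))
print(round(steel, 4))        # 0.0396 cents per day, as quoted
```

Note the August low works out at about 0.0130 per day by straight division, against the quoted 0.0132, which is exactly one eighth of 0.1053; the small discrepancy presumably reflects rounding in the original calculation.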
An important point from this example is that Gann did not merely use his so-called Gann angles as a crude measure of the rate of vibration of stocks and commodities. More specifically, he did not use them simply to measure the doubling and halving of the rate of vibration. Rather, he also discovered and employed sub-shells within a principal energy level, which is analogous to modern quantum theory. Therefore we have discovered another important principle of Gann's Law of Vibration; namely, the rate of vibration of stocks and commodities, as measured by so-called Gann angles, conforms to a series of principal energy levels and sub-shells. As we have seen, an important implication (and practical application) of this is that rates of vibration, as measured by these principal energy levels and sub-shells, constitute support and resistance levels. This therefore clarifies the statement made by Gann: "By knowing the exact vibration of each individual stock I am able to determine at what point each will receive support and at what point the greatest resistance is to be met" (Ticker interview).

Moreover, this principle in turn sheds light upon a somewhat obscure concept that Gann briefly introduced in both his stock market course and his commodities course, namely the concept of lost motion: "As there is lost motion in every kind of machinery, so there is lost motion in the stock market due to momentum, which drives a stock slightly above or below a resistance level. The average lost motion is 1 7/8 points. When a stock is very active and advances or declines fast on heavy volume, it will often go from 1 to 1 7/8 points above a halfway point or other strong resistance level and not go 3 points. The same rule applies on a decline. It will often pass an important resistance point by 1 7/8 points but not go 3 full points beyond it. That is why I advise using a stop-loss order 3 points above a top or 3 points below a bottom" (W. D. Gann Stock Market Course, chapter 10).

In summary, therefore, from examining examples of the practical application of Gann's Law of Vibration, we have identified three further principles of the Law of Vibration, in addition to those stated on pages 15 and 16:

11) The rate of vibration of stocks and commodities conforms to a series of principal energy levels and sub-shells. More specifically, the principal energy levels equate to a doubling and halving of the rate of vibration, and the sub-shells equate to a fourfold division of a principal energy level.

12) These principal energy levels and sub-shells constitute important support and resistance points.

13) When a stock or commodity is very active, momentum will often drive the price very slightly above or below the precise support or resistance point, which is determined by the rate of vibration (in conjunction with astrological influences).
Therefore astrological influences are the most important element in the Law of Vibration. "The time given for tops and bottoms is the most important factor for you to know and watch. It makes no difference about the price a stock is selling at. So long as you know when it will reach low or high levels you can buy or sell and make money. Remember you must buy and sell at the right time regardless of prices. No matter how high stocks are, if they are going higher, you should buy. It makes no difference how low they are; if the trend is down and they are going lower, you must sell short and go with the trend" (1929 Annual Stock Market Forecast).

15) In applying the Law of Vibration to the stock market, it is important not only to assess the astrological influences (i.e. external vibrations) but also the internal vibrations of a stock. For example, assume that during a general stock market uptrend there is a short-term correction (due to negative astrological influences) and most stocks fall in price. If during that period a particular stock merely moves sideways, rather than falls, it indicates that the internal vibrations of that stock are especially strong, and therefore it will subsequently perform strongly when the overall uptrend (i.e. positive astrological influence) is resumed. "The Dow Jones 30 Industrial stocks are representative of the active industrials and most of them will follow the Industrial Curve (i.e. Gann's forecast) very closely. But some of the individual stocks that are in strong or weak position will vary from this Curve and make tops and bottoms at different times. These special stocks and their position will be covered in the supplements each month" (1929 Annual Stock Market Forecast).

16) Stocks and commodities typically do not switch from an uptrend to a downtrend until their rate of vibration has slowed down.
This reduction in the rate of vibration can, of course, be observed as prices move sideways (or down) over time to lower (and slower) Gann angles. "The ones (i.e. the stocks) that make top in the early part of the year and fail to reach higher levels in July or August will be the ones to lead the decline, because they will have had longer time for distribution. Guard against selling short the late movers until they have had time to complete distribution" (1929 Annual Stock Market Forecast).

It is important to note that although for many years Gann produced and sold annual forecasts of the stock and commodity markets, he did not intend his subscribers to rely on these alone. Consequently, Gann issued supplements, or updates, three times each week by means of his Supply And Demand Letter, which he described as follows: "People often write for my opinion on Baldwin, U.S. Steel, General Asphalt or some special stock. I judge the stock by the position of time and volume as it is today. If in a few days I see a large amount of volume, up or down, I change my position, so it is not always what I think of a stock today, but what I am going to think of it later that counts. That is why I issue a tri-weekly letter: because the market changes, I can advise my subscribers to change their position and protect themselves against losses. If the market never reversed its trend, there would be no need of a tri-weekly letter" (Wall Street Stock Selector, 1930, appendix).
Thus, in the practical application of the Law of Vibration, the time factor, or astrological influences, is the most important element. Consequently, in preparing his annual forecasts, Gann was required to assess numerous astrological influences over the subsequent year. Assessing months in advance how strong astrological influences (both positive and negative) will be, and how long they will remain in force, is a complex task and prone to error on occasion. Therefore Gann found it necessary to issue regular letters to his subscribers in order to clarify and correct his annual forecasts when necessary, as well as to identify particular stocks and commodities that were especially strong or weak. Therefore, at a fundamental level, Gann was constantly forecasting and monitoring external vibrations (in the form of astrological influences) and internal vibrations (in the form of assessing the particular strength or weakness of individual stocks and commodities).
Conclusion
This paper has identified, or rediscovered, many of the elements of Gann's Law of Vibration. However, the most important and most difficult element is to identify and assess the past, present and future astrological influences on a stock or commodity. Consequently over the next year, as we approach the centenary on August 8 2008 of Gann's discovery of the Law of Vibration, this should be the principal focus of Gann researchers worldwide. Indeed, correctly identifying the true principles of astrology and the precise astrological influences impacting a particular financial market should perhaps be the focus of technical analysts in general who truly wish to extend the frontiers of technical analysis in the early twenty-first century.

Finally, the centenary of Gann's discovery of the Law of Vibration, August 8 2008, is also the start of the next Olympic Games, which will take place in Beijing, China. The ancient Greeks not only provided the world with the Olympic Games but also an early example of financial forecasting through correctly assessing astrological influences. Aristotle's account of the dealings of the Greek philosopher and mathematician Thales of Miletus (who lived from 625 to 547 B.C.) provides one such example. "There is an anecdote of Thales the Milesian and his financial device, which involves a principle of universal application, but is attributed to him on account of his reputation for wisdom. He was reproached for his poverty, which was supposed to show that philosophy was of no use. According to the story, he knew by his skill in the stars while it was yet winter that there would be a great harvest of olives in the coming year. So, having a little money, he gave deposits for the use of all the olive presses in Chios and Miletus, which he hired at a low price because no one bid against him. When the harvest time came, and many wanted them all at once and of a sudden, he let them out at any rate which he pleased, and made a quantity of money. Thus he showed the world that philosophers can easily be rich if they like" (Aristotle's Politics, book one, chapter eleven). The investment paradigm and methodology that Gann discovered approximately 100 years ago might, in fact, have been discovered and been in use some 2,500 years earlier.
James Smithson is an investor, trader and student of Gann based in London, England. smithsonjames@hotmail.com
Trading Workshop
One such course is Dr Van Tharp's three-day workshop called "Blueprint for Trading Success". As a Dow Jones journalist, I was invited to participate on a complimentary basis in July. The other people attending the course had come not as a result of any hard-sell advertising, but because they had realised that becoming a successful trader entails a lot of hard work on one's beliefs and weaknesses, amongst other things.
Contingency planning
The first day's homework was on "worst-case contingency planning" where everyone had to come up with as many things