The Second Annual Summary of Forex Automaton Research Progress, April 2010

Written by Forex Automaton   
Friday, 02 April 2010 12:06

The Forex Automaton project was launched in April 2008 with the ambitious mission of leveraging certain algorithmic know-how to create a trading signal service useful to institutional and retail forex traders. This report highlights the noteworthy new developments that took place during the project's second year of life, from April 2009 through March 2010. The intent is to help the reader navigate what is becoming a rather complex network of research topics, concepts and results, by providing an overarching logical framework. Links to complete stories are provided.

The previous, Year One report featured non-trivial (non-zero time lag) auto-correlations and inter-market correlations in forex, mostly on the hour scale, and in LIBOR. This year, the research focus shifted from getting familiar with the landscape of forex predictability to actively exploiting that predictability within a positive-expectancy trading system. We began with the one-decision-a-day time scale, which is the least demanding computationally while still offering sufficiently rich data sets.
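For readers who want a concrete picture of what such a measurement involves, the sketch below computes the correlation of an hourly return series with itself at a non-zero lag. It is an illustration only: the function name and the synthetic stand-in data are ours, not the code behind the original studies.

```python
# Illustration only: measuring the autocorrelation of hourly log-returns at a
# non-zero lag. Synthetic data stands in for a real price series; none of this
# is the code behind the original studies.
import numpy as np

def lagged_autocorrelation(returns: np.ndarray, lag: int) -> float:
    """Pearson correlation between the series and itself shifted by `lag` steps."""
    x = returns[:-lag] - returns[:-lag].mean()
    y = returns[lag:] - returns[lag:].mean()
    return float(np.dot(x, y) / np.sqrt(np.dot(x, x) * np.dot(y, y)))

rng = np.random.default_rng(0)
hourly_returns = rng.normal(0.0, 1e-3, size=10_000)   # stand-in for hourly data
for lag in (1, 2, 24):
    print(f"lag {lag:>2}: {lagged_autocorrelation(hourly_returns, lag):+.4f}")
```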

We entered the New Year 2010 with a day-scale demo system named Danica. The system, which is alive and well as of this writing, publishes the "raw" output of the forecasting engine.

Before going any further, it may be in order to restate the fundamental principles of our approach to trading system optimization:

  • Definite time scale. Data acquisition, aggregation and analysis take place in fixed time steps. The algorithms are optimized for their time scale. We make no assumptions of scale independence.

  • Adaptability. "Modeling" is reduced to a single-parameter optimization; that parameter (nicknamed Fred to avoid discussing its nature) determines how the recent past interacts with longer-term accumulated knowledge to form the projection into the future. The system learns continuously and discards outdated knowledge (see the sketch after this list).

  • Causality. Nature enforces this in real life; in simulated trading, one has to enforce it by making sure the decision making has no access to future data.
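To make the principles above concrete, here is a minimal sketch. We deliberately do not disclose what Fred is; the sketch reads it, purely as an assumption, as an exponential forgetting factor, and the walk-forward loop illustrates the causality constraint: the forecast for step t+1 is formed before the price at t+1 is seen. All names below are hypothetical.

```python
# A minimal sketch of the three principles above -- NOT the actual Danica code.
# "fred" is read here as an exponential forgetting factor; that reading, and
# every name below, is an assumption (the parameter's nature is not disclosed).
import numpy as np

def walk_forward(prices: np.ndarray, fred: float) -> list[float]:
    """Walk through the series in fixed time steps (definite time scale),
    blending the most recent change with accumulated knowledge (adaptability),
    never touching data beyond index t when forecasting step t+1 (causality)."""
    state = 0.0                                  # long-term accumulated knowledge
    forecasts = []
    for t in range(1, len(prices)):
        latest = prices[t] - prices[t - 1]       # the recent past, nothing later
        state = fred * state + (1.0 - fred) * latest
        forecasts.append(state)                  # projection for step t + 1
    return forecasts

prices = np.cumsum(np.random.default_rng(1).normal(size=500))
print(walk_forward(prices, fred=0.97)[-3:])
```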

We began with what this report terms a brute force approach to optimization (BFO for short; a special section below is dedicated to it). The BFO produced interesting results, identifying areas of the parameter space representing both good and bad ways of trading. The approach was later found to be too superficial: because it focused on the "bottom line" and risk by simulating entire trading histories, several factors, including extreme features of the historical data (such as the financial panic of 2008), remained tightly entangled, and the approach lacked the analytic depth needed to separate the effects of forecasting quality from those of money management.

[Figure: UML sequence diagram of the trading system]

Fig.1. A UML sequence diagram showing the interaction of the trading system components directly involved in making a trade decision.

Subsequently, the optimization approach was refined by singling out forecasting as a separate component with its own independent optimization and quality assurance process. The trigger, the component charged with accepting or rejecting forecasts on the basis of their critical evaluation, has an independent development process, as does the capital allocation (money management) component. Each piece of the problem is solved separately, the goal being to combine a winning forecasting system with a winning trigger and a winning money management system, with an independent set of quality criteria for each. Much of this document deals with progress in each of these three areas.
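The component interaction of Fig.1 can be summarized as a simple pipeline. The sketch below is a reconstruction from the description above, with hypothetical interface and field names; it is not the project's actual code.

```python
# A hypothetical reconstruction of the three-component decision pipeline of
# Fig.1: forecaster -> trigger -> capital allocator. Interface and field names
# are illustrative; they are not taken from the project's code.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Forecast:
    direction: float      # expected move for the next time step, in price units
    confidence: float     # the forecaster's own quality estimate

class Forecaster(Protocol):
    def forecast(self, history: list[float]) -> Forecast: ...

class Trigger(Protocol):
    def accept(self, forecast: Forecast) -> bool: ...

class Allocator(Protocol):
    def size(self, forecast: Forecast, equity: float) -> float: ...

def decide(history: list[float], equity: float, forecaster: Forecaster,
           trigger: Trigger, allocator: Allocator) -> float:
    """Position size for the next time step; 0.0 means no trade."""
    f = forecaster.forecast(history)
    if not trigger.accept(f):          # critical evaluation of the forecast
        return 0.0
    return allocator.size(f, equity)   # money management sets the exposure
```

The point of keeping the three interfaces separate is exactly the one made above: each component can be optimized and quality-assured against its own criteria, then swapped in without touching the other two.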

In terms of this refined framework, the BFO appears as a special case in which the trigger was elementary, described by a pair of parameters called the entry and exit thresholds. The historical BFO of 2009 did not recognize the delicacy of setting the upper limit on the capital to be risked, nor its connection to the quantitative success of forecasting; instead, a one-size-fits-all allocation was used in all cases.
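The text above names only the two threshold parameters; how the elementary trigger used them is a guess. The sketch below shows one plausible reading, entry and exit thresholds applied to the forecast magnitude, together with the kind of brute-force scan over the two-parameter space that the BFO performed on simulated trading histories.

```python
# A guess at the "elementary" BFO trigger: enter when the forecast magnitude
# clears an entry threshold, stay in until it decays below an exit threshold.
# The threshold semantics are inferred from the text, not documented.
import itertools, random

def threshold_trigger(forecast: float, in_position: bool,
                      entry: float, exit_: float) -> bool:
    """True if a position should be held over the next time step."""
    return abs(forecast) > (exit_ if in_position else entry)

def brute_force_scan(forecasts, returns, entries, exits):
    """One simulated trading history per (entry, exit) pair -- the BFO idea."""
    results = {}
    for entry, exit_ in itertools.product(entries, exits):
        pnl, holding = 0.0, False
        for f, r in zip(forecasts, returns):
            holding = threshold_trigger(f, holding, entry, exit_)
            if holding:
                pnl += r if f > 0 else -r    # trade in the forecast's direction
        results[(entry, exit_)] = pnl
    return results

random.seed(2)
fs = [random.gauss(0.0, 1.0) for _ in range(1000)]     # synthetic forecasts
rs = [random.gauss(0.0, 1e-3) for _ in range(1000)]    # synthetic returns
grid = brute_force_scan(fs, rs, entries=[0.5, 1.0, 1.5], exits=[0.2, 0.5])
print(max(grid, key=grid.get), "is the best (entry, exit) pair on this sample")
```

Mapping the simulated P&L and risk over the (entry, exit) grid is what reveals the good and bad regions of parameter space mentioned earlier.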

The structure of this document is as follows: we will briefly summarize the results of the BFO, then discuss the development progress for each of the three algorithmic components just introduced, and conclude with a discussion of open directions and plans for the future.

The data sample used in the analysis and simulated trading begins on August 20, 2002. The end date varies from study to study, depending on when each study was conducted.



Last Updated (Monday, 04 April 2011 07:38)