I wanted to pass on a tip for those of us who are testing Automated Models.
When developing Automated Models it is very easy to develop "Tunnel Vision".
We all want our models to be profitable so sometimes we unknowingly "curve fit".
Here is an easy solution: take your cue from Drug Companies and how they decide whether a drug has legs in clinical trials.
Applied to your model now:
Test your model by assuming "Entries" are uniformly distributed between longs and shorts, and assume a Poisson Distribution for the timing of trades. This randomness will shed light on whether your Algorithm works as a result of predictive behavior, or whether (I hate when this happens) you curve fit some parameters or used risk management to make a "non predictive" model look good.
Run your back test with the Placebo. If the randomly distributed Entries perform similarly to your actual model (compare something simple like the Sharpe Ratio)... well, "Houston, we have a problem."
Save yourself some aggravation later on: do this before you forward test.
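For anyone who wants to try this, here's a minimal sketch of the placebo in Python. All names and parameters here (`placebo_sharpe`, the wait/hold values) are my own illustration, not a prescribed implementation: each entry's direction is a fair coin flip (uniform long/short), entry times arrive with exponential inter-arrival times (the arrival process of a Poisson distribution), and the result is a simple Sharpe-style ratio over per-trade returns.

```python
import random
import statistics

def placebo_sharpe(prices, n_trades=100, mean_wait=5.0, hold=10, seed=42):
    """Backtest a 'placebo' strategy on a price series.

    Entry times follow a Poisson process (exponential inter-arrival
    times with mean `mean_wait` bars); direction is a fair coin flip.
    Each trade is held for `hold` bars. Returns the Sharpe-style ratio
    (mean / stdev) of the per-trade returns.
    """
    rng = random.Random(seed)  # fixed seed so the placebo run is repeatable
    t = 0
    trade_returns = []
    while len(trade_returns) < n_trades:
        # Exponential waiting time -> Poisson-distributed arrival counts
        t += max(1, int(rng.expovariate(1.0 / mean_wait)))
        exit_t = t + hold
        if exit_t >= len(prices):
            break  # ran out of data
        direction = 1 if rng.random() < 0.5 else -1  # uniform long/short
        r = direction * (prices[exit_t] - prices[t]) / prices[t]
        trade_returns.append(r)
    if len(trade_returns) < 2:
        return 0.0
    mu = statistics.mean(trade_returns)
    sd = statistics.stdev(trade_returns)
    return mu / sd if sd > 0 else 0.0
```

Run this over the same data as your real model and compare its Sharpe Ratio to your model's. If the coin-flip placebo lands in the same neighborhood, your edge is probably the risk management (or the fit), not the signal. Averaging over many seeds gives a more honest placebo than a single run.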
Cheers