© Ed Seykota, 2003 - 2005 ... Write for permission to reprint.

Ed Seykota's

Frequently Asked Questions

Support & Resistance System

Reader Feedback

System Test Runs

October 15, 2005 <== latest

October 9, 2005

October 8, 2005

October 3, 2005

September 25, 2005

Tue, 25 Oct 2005

 

ICAGR Still Not Right ...

Hi Ed,


Thanks for publishing the results for the 120/45 run. I found the bug; it was in my code. But while searching my code for bugs, I noticed something about the ICAGR calculation. If you look at the 140/20 run, the calculation should be:

ratio     = 2585500 / 1000000
          = 2.5855
dateRange = (11228 - 25) / 365.25
          = 30.67214...
icagr     = ln(2.5855) / 30.67214...
          = 0.03097...
          = 0.0310

If I do the calculation without subtracting the days to warm up, I get the right results (but the wrong dateRange):

ratio     = 2.5855
dateRange = 11228 / 365.25
          = 30.74059...
icagr     = ln(2.5855) / 30.74059...
          = 0.0309011...

Doing the same for the 120/45 run:

ratio     = 2.33828
dateRange = 30.67214...
icagr     = 0.027693

Without subtracting the days to warm up:

dateRange = 30.74059...
icagr     = 0.027632


I am not sure whether this is related to some rounding issue, whether my ln function does not have enough precision, or whether I am simply doing the icagr calculation wrong. Can you please comment?

... Thank you for the catch - I am currently checking into this.
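For reference, here is a minimal sketch of the ICAGR computation under discussion (the function name and structure are hypothetical, not Ed's actual code); the open question is whether dateRange subtracts the warm-up days:

#include <cmath>

// ICAGR = ln(endingEquity / startingEquity) / years
double icagr(double startEquity, double endEquity,
             long calendarDays, long warmUpDays, bool subtractWarmUp)
{
    double years = (subtractWarmUp ? calendarDays - warmUpDays
                                   : calendarDays) / 365.25;
    return std::log(endEquity / startEquity) / years;
}

// Basis the 140/20 figures above:
//   icagr(1000000, 2585500, 11228, 25, true)  ==> 0.03097...
//   icagr(1000000, 2585500, 11228, 25, false) ==> 0.03090...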

Fri, 21 Oct 2005

 

Reproduced Results for Support and Resistance System

To the Penny


Dear Ed,

I reproduced your results (140/20 run October 15, 2005 version and 120/45 run) for the Support and Resistance System to the penny.

However I wonder about one small detail:

To reproduce your results I had to program the stops exactly at the support and resistance prices and use the "less" and "greater" operations - not the "less or equal" and "greater or equal" operations - to evaluate whether a stop order is activated.

You write: "When the long-term trend is positive, the system then enters the market with a stop just above the short term resistance and then places a protective stop below the short-term support."

I am also of the opinion that a stop order becomes a market order as soon as the specified stop price is attained; it is not necessary for the price to penetrate it.

This seems to contradict the code which reproduces your results. Am I right to assume that you use the "greater" operation and "a stop exactly at the resistance" as an approximation for "greater or equal" and "a stop just above resistance" respectively?

Greetings!

 

Yes, in this test, I use > and < instead of >= and <=. 
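In code, the distinction looks something like this (a minimal sketch with hypothetical names; the published runs use the strict form):

// Strict comparisons: the stop elects only when price trades
// through the stop level.
bool sellStopElects(double dayLow, double stop)  { return dayLow  < stop; }  // not <=
bool buyStopElects(double dayHigh, double stop)  { return dayHigh > stop; }  // not >=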

Wed, 19 Oct 2005

 

Wants Details for 120/45



Hi Ed,


I tried to match other values in the optimization table but ran into some trouble. As I told you before, my code matches the 20/140 system. It also matches 30/120 and 20/140. But when I try 45/120 I get the following results (cagr / drdn / bliss): 0.0276 / 0.2605 / 0.1058. According to your table the values should be: 0.028 / 0.369 / 0.075.

 

It might be my drdn calculation that is not correct but I am not sure. Would it be possible for you to publish the logs from the 45/120 run?

I get these errors in more places, but I think the 45/120 run alone is enough to find the problem.

 

Yes, SR 120/45 is now on line.

Tue, 18 Oct 2005

 

Order Question


Hello Ed,

I am working on your Support and Resistance Trading system and cannot understand what kind of orders you are using. Any clarification would be greatly appreciated.

You are saying "Signals issue right after the close. The system enters orders before the next open, to trade on stop, the following day." and "When the long-term trend is negative, the system enters the market on a stop just below the short-term support with protection just above the short-term resistance."

First, if today after the close we calculate that the 20-day support was broken by today's lowest price, the order can only execute tomorrow or later.

 

By looking at your Metrics we can see that the first day the 20-day support (739.10) was broken was 75-02-11, with a low of 738.20. But your first trade is also on 75-02-11 - how is that possible?

Second, what kind of order was used to enter the first position? If it was a stop order at 738.20, it wouldn't have gone through, because the price would not be hit until 75-04.


Also, is the protection a stop order as well? If so, shouldn't it be recalculated every day?

Thank you in advance,

 

 

Directly after the close on Monday, your trend (T:-1) is down. You enter a sell stop at 739.10, basis the Fast Metric.

 

On Tuesday, the order fills at 738.65. 

 

The Fast Metric also gives the protective stop. Directly after the close on Tuesday, you enter a protective stop at 749.90. You can see this, graphically, on the chart at the bottom of the run.

 

[Metrics] 75-02-10-M Px:[742.90 743.30 742.40 742.50] [Slow:762.30/735.90 Fast:749.90/739.10 T:-1]


[Metrics] 75-02-11-T Px:[743.20 744.00 738.20 743.70] [Slow:762.30/735.90 Fast:749.90/738.20 T:-1]


[Metrics] 75-02-13-H Px:[747.40 753.60 747.10 752.70] [Slow:762.30/735.90 Fast:753.60/738.20 T:-1]

GC----C -4600 0 75-02-11 738.650
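To make the sequencing concrete, here is a minimal self-contained sketch (the structure is hypothetical; the prices come from the log above):

#include <iostream>

int main()
{
    // Directly after the close on Monday 75-02-10, the trend is down
    // (T:-1), so the system enters a sell stop at the Fast support.
    double sellStop = 739.10;

    // On Tuesday 75-02-11 the market trades down through the stop
    // (low 738.20); the fill skids halfway toward the low.
    double low  = 738.20;
    double fill = sellStop + 0.5 * (low - sellStop);

    std::cout << "fill at " << fill << "\n";   // 738.65, as in the trade log
}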

Oct 15 Test

 

Match

Another exact match. Likely "correct" this time :-)

Sat, 15 Oct 2005

 

Problem on 75-07-29 - Stop Not Electing


If we are short on 75-07-28, and on 75-07-29 price hits a high of 735.10 while the Fast high is 735.10, how come we are not taken out of the trade?

[Metrics] 75-07-29-T Px:[734.70 735.10 731.90 732.00] [Slow:755.90/724.30 Fast:735.10/725.70 T:-1]

I’ve fooled with this for days, too embarrassed to email ...

 

On 7/29 the 735.10 does not elect the 737.60 stop, from 7/28.

After the close on 7/29, the stop falls to 735.10.

On 7/30 the high price of 733.30 does not elect the 735.10 stop.

 

[Metrics] 75-07-28-M Px:[731.40 734.40 731.40 734.10]

 [Slow:755.90/724.30 Fast:737.60/724.30 T:-1]


[Metrics] 75-07-29-T Px:[734.70 735.10 731.90 732.00]

 [Slow:755.90/724.30 Fast:735.10/725.70 T:-1]


[Metrics] 75-07-30-W Px:[733.10 733.30 731.60 732.20]

 [Slow:755.90/724.30 Fast:735.10/725.70 T:-1]

Fri, 14 Oct 2005

 

Math Mystery Solution


Hi Ed,

Referring to our discussion this morning and your question: on doing a typecast, how does the compiler manufacture extra digits, seemingly random, but exactly repeatable on any computer?

Consider the number 0.333333
...
float f;
double d1, d2;
f = 0.333333;
d1 = f;
d2 = (double) 0.333333;
...

The values of f, d1, d2 as set by the program on execution are:
f  = 0.333333;             // 0.333332986
d1 = f;                    // 0.33333298563957214
d2 = (double) 0.333333;    // 0.33333299999999999

Now if you observe the binary data, you see:

1. f and d1 have the same (binary) bits, but as above they have different (decimal) digits. This means d1 just uses the same 23 bit mantissa as f. This explains why the newer digits in d1 are not zero -- the typecast assigns mantissa bits 24-52 to 0, but the old bits 1-23 contribute to the newer decimal digits.

2. The bits in d2 are different from those in d1 -- there are more mantissa bits in double (52 in double vs 23 in float) and the "extra" lower order bits give it more precision, thus the newer decimal digits are more in line with intuition.

An obvious test of this hypothesis: encode 1.125 and see that the extra bits are not nonsensical, and that d1 == d2, since .125 = 1/8 is exact.

In short, to get greater precision, the conversion from decimal needs to be made DIRECTLY to double, not via float. Since the precision loss is a result of the decimal-to-IEEE conversion, the best solution is to not use float at all.

However, if your data is in float already, do the following one-time maintenance operation:

1. Convert all data to decimal and print as a string. If you try to do this step by writing your own function, you get f = 0.333332986 and not what you want (0.333333)! C++ guarantees 6 digits of precision for float, so the system stores the value as 0.333332986 but reports it as 0.333333; use "sprintf" to print exactly 6 digits into a character array/string.

2. From that decimal string, re-encode the number as double using the library function "atof". Voila! "High precision data." This high-precision data prints the same decimal digits as the old data, but the lower mantissa bits are now meaningful.


This works because price data (decimal) has very few significant figures, so both high- and low-precision IEEE formats can store it well.

Note: this only deals with price data precision; you still have to write the equality checks, etc. See "C++ FAQ 36.14" from the prior mail. The advantage of this price data munging is that the floating point checks become easier, e.g., equality can be "agrees to 14 significant places".
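A minimal sketch of this one-time maintenance operation (the helper name is hypothetical):

#include <cstdio>
#include <cstdlib>

// Re-encode a float price as a double by round-tripping through its
// 6-significant-digit decimal representation.
double promote(float f)
{
    char buf[32];
    std::sprintf(buf, "%.6g", f);   // step 1: print exactly 6 significant digits
    return std::atof(buf);          // step 2: re-encode directly as double
}

// promote(0.333333f) yields 0.333333 at double precision, rather than
// the 0.33333298563957214 a plain typecast produces.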

 

 

------

 

 

 

Floating Point Arithmetic


Section 36, FAQ 14 in C++ FAQ.


Background info which you probably know and some references to check, e.g., the scientific computing community does high precision work.

http://www.cs.uu.nl/wais/html/na-dir/C++-faq/part14.html

[36.14] Why doesn't my floating-point comparison work?

Because floating point arithmetic is different from real number arithmetic.

Here's a simple example:

double x = 1.0 / 10.0;
double y = x * 10.0;
if (y != 1.0)
    std::cout << "surprise: " << y << " != 1\n";

The above "surprise" message will appear on some (but not all) compilers/machines. But even if your particular compiler/machine doesn't cause the above "surprise" message (and if you write me telling me whether it does, you'll show you've missed the whole point of this FAQ), floating point will surprise you at some point. So read this FAQ and you'll know what to do.

The reason floating point will surprise you is that float and double values are normally represented using a finite precision binary format. In other words, floating point numbers are not real numbers. For example, in your machine's floating point format it might be impossible to exactly represent the number 0.1. By way of analogy, it's impossible to exactly represent the number one third in decimal format (unless you use an infinite number of digits).

To dig a little deeper, let's examine what the decimal number 0.625 means. This number has a 6 in the "tenths" place, a 2 in the "hundredths" place, and a 5 in the "thousandths" place. In other words, we have a digit for each power of 10. But in binary, we might, depending on the details of your machine's floating point format, have a bit for each power of 2. So the fractional part might have a "halves" place, a "quarters" place, an "eighths" place, a "sixteenths" place, etc., and each of these places has a bit.

Let's pretend your machine represents the fractional part of floating point numbers using the above scheme (it's normally more complicated than that, but if you already know exactly how floating point numbers are stored, chances are you don't need this FAQ to begin with, so look at this as a good starting point). On that pretend machine, the bits of the fractional part of 0.625 would be 101: 1 in the "halves" place, 0 in the "quarters" place, and 1 in the "eighths" place. In other words, 0.625 is 1/2 + 1/8.

But on this pretend machine, 0.1 cannot be represented exactly since it cannot be formed as a sum of (negative) powers of 2 -- at least not without an infinite number of (negative) powers of 2. We can get close, but we can't represent it exactly. In particular we'd have a 0 in the "halves" place, a 0 in the "quarters" place, a 0 in the "eighths" place, and finally a 1 in the "sixteenths" place, leaving a remainder of 1/10 - 1/16 = 3/80. Figuring out the other bits is left as an exercise (hint: look for a repeating bit-pattern).

The message is that some floating point numbers cannot always be represented exactly, so comparisons don't always do what you'd like them to do. In other words, if the computer actually multiplies 10.0 by 1.0/10.0, it might not exactly get 1.0 back.

That's the problem. Now here's the solution: be very careful when comparing floating point numbers for equality (or when doing other things with floating point numbers; e.g., finding the average of two floating point numbers seems simple but to do it right requires an if/else with at least three cases).

Here's the wrong way to do it:

void dubious(double x, double y)
{
    // ...
    if (x == y)   // Dubious!
        foo();
    // ...
}

If what you really want is to make sure they're "very close" to each other (e.g., if variable a contains the value 1.0 / 10.0 and you want to see if (10*a == 1)), you'll probably want to do something fancier than the above:

void smarter(double x, double y)
{
    // ...
    if (isEqual(x, y))   // Smarter!
        foo();
    // ...
}

Here's the isEqual() function:

inline bool isEqual(double x, double y)
{
    // left as an exercise for the reader :-)
    // see one of the references below
    ...
}

For the definition for the above function, check out references such as the following (in random order):


* Knuth, Donald E., The Art of Computer Programming, Volume II: Seminumerical Algorithms, Addison-Wesley, 1969.
* LAPACK -- Linear Algebra Subroutine Library, www.siam.org
* Stoer, J. and Bulirsch, R., Introduction to Numerical Analysis, Springer Verlag, in German.
* Isaacson, E. and Keller, H., Analysis of Numerical Methods, Dover.
* Ralston and Rabinowitz, A First Course in Numerical Analysis: Second Edition, Dover.
* Press et al., Numerical Recipes.
* Kahan, W., http.cs.berkeley.edu/~wkahan/.

Reminder: be sure to check out all the other primitives, such as averages, solutions to the quadratic equation, etc., etc. Do not assume the formulas you learned in High School will work with floating point numbers!

Hope this helps.

 

Very nice !
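For readers who want something concrete, here is one common definition of isEqual along the lines of those references (a sketch; the tolerance choice is an assumption):

#include <cmath>
#include <algorithm>

// True when x and y agree to within a relative tolerance --
// here, roughly 14 significant figures.
inline bool isEqual(double x, double y)
{
    const double epsilon = 1e-14;   // assumed tolerance
    return std::fabs(x - y) <= epsilon * std::max(std::fabs(x), std::fabs(y));
}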

Thu, 13 Oct 2005

 

Fix for Rounding Problem

 

Hi Ed,

I have once more updated the code. This time I get exact results if I modify the trade on 95-11-22. From your metric log you have 11700 contracts short. When I do the calculation I get 11800 contracts short. If I do the calculation by hand I get the following:

1974000 * 0.05 / (470.6 - 462.2) = 11750, which rounds to 11800 contracts

The source code included has a fix for this.

// This is probably the world's most inefficient rounding routine...
 

int rounder(int position)
{
    int test = position % 100;

    if (test < 50)
        position = position - test;
    else
        position = position - test + 100;

    return position;
}

 

Yes, I address this problem in the October 15, 2005 version.
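For what it's worth, a more compact equivalent for nonnegative position sizes (not from the original submission):

// Round a nonnegative position to the nearest 100 contracts.
int rounder(int position)
{
    return ((position + 50) / 100) * 100;
}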

Wed, 12 Oct 2005

 

My results for Simple SR

Ed,

Here is a spreadsheet that includes the equity log and metrics for a
test run that I believe correctly incorporates all of the corrections to
the simple 20:140 SR system so far.

The ending equity is a couple of hundred dollars less than your latest
results post (October 9th), and the metrics reflect the resultant small
changes.

I use 75-01-27 as the start date for calculating rates of return. Your
test uses 75-01-25. That is a Saturday. I figure no money is at risk if
we start trading on a non-trading day, so I use the following Monday as the start date.

I wonder if your results match mine?

 

Yes, with my rounding fix (October 15, 2005 version) I think we now match.

Tue, 11 Oct 2005

 

TSP Project -- Exponential Average Crossover Optimal Heat


I ran a C++ simulation of the Exponential Average Crossover system and my results exactly match those you have published.

I also used the steamroller method to find an approximation for optimal heat. Two charts are included: one showing heat (fractional bet) vs. CAGR, the other illustrating heat vs. Bliss.

The Bliss function uses Lake Ratio rather than PDD in calculation.

Bliss = CAGR/(Lake Ratio)

I find it interesting that fractional bet values above 0.00625 are not that blissful.
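A minimal sketch of one way to compute the Lake Ratio from an equity curve (the exact implementation here is an assumption):

#include <vector>
#include <algorithm>

// Lake Ratio: the area between the running equity peak and the equity
// curve (the "water") divided by the area under the curve (the "land").
double lakeRatio(const std::vector<double>& equity)
{
    double peak = 0, water = 0, land = 0;
    for (double e : equity) {
        peak   = std::max(peak, e);
        water += peak - e;
        land  += e;
    }
    return land > 0 ? water / land : 0;
}

// Bliss = CAGR / lakeRatio(equity)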

Mon, 10 Oct 2005

 

October 9th SR Results Verification

Ed,

I confirm that the small precision error you note cascades to the next trade. When I manipulate my system's position size on trade #158 to match yours, my position size for #160 also matches yours. It appears that trade #158 is the only place our math differs.

My ending equity now matches yours.

 

Yes, as numerous readers notice, I have an error in my computation:

 

Equity Log:

 

95-11-20 1,974,000.00           0 1,974,000.00
95-11-21 1,974,000.00           0 1,974,000.00
95-11-22 1,974,000.00    2,340.00 1,976,340.00
95-11-27 1,974,000.00  -30,420.00 1,943,580.00
95-11-28 1,974,000.00  -65,520.00 1,908,480.00
 

Metrics Log:


[Metrics] 95-11-20-M Px:[466.40 466.60 465.60 466.20]

 [Slow:480.20/461.70 Fast:470.60/462.20 T:-1]
[Metrics] 95-11-21-T Px:[466.10 466.90 464.40 464.50]

 [Slow:480.20/461.70 Fast:470.60/462.20 T:-1]
[Metrics] 95-11-22-W Px:[464.60 465.10 460.60 461.20]

 [Slow:480.20/460.60 Fast:470.60/460.60 T:-1]
[Metrics] 95-11-27-M Px:[462.50 464.70 462.00 464.00]

 [Slow:480.20/460.60 Fast:470.60/460.60 T:-1]
[Metrics] 95-11-28-T Px:[464.20 469.60 464.20 467.00]

 [Slow:480.20/460.60 Fast:470.60/460.60 T:-1]
 

Trade Log:


GC----C -24100 156 95-10-19 462.750

               157 95-11-09 466.750 -96,400.00
GC----C -11700 158 95-11-22 461.400

               159 96-01-02 469.850 -98,865.00
GC----C   5800 160 96-01-22 481.450

               161 96-02-21 474.900 -37,990.00

 

My code runs along these lines:

Risk:         1974000 * .05  = 98700 // five percent of Eq.
Risk Per Lot: 470.6 - 462.2  = 8.4   // box (hi - low)
Lots:         98700 / 8.4    = 11750 // divide 1. by 2.

Rounding:                    = 11800 // to nearest 100

 

Checking my registers for this computation, I find:


ndEnter      =   462.19999999999999
ndExit       =   470.60000000000002
RiskPerLot   =     8.4000000000000341
Heat         =     0.050000000000000003
ndSize       = 11749.999999999953

 

So the size comes out about 4.7 * 10^-12 shy of 11750, and this rounds down to 11700.

 

To address this issue, I round ndSize up to 11750 before rounding it again, this time, to 11800.  See the results in the October 15, 2005 version.
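The fix might look something like this (a sketch with hypothetical names; Ed's actual code may differ): snap the raw size to the nearest whole contract first, then round to the nearest 100-lot.

#include <cmath>

// Two-stage rounding: 11749.999999999953 first becomes 11750,
// which then rounds to the 11800 lot.
long sizeToLots(double ndSize)
{
    long whole = (long)std::floor(ndSize + 0.5);   // nearest whole contract
    return ((whole + 50) / 100) * 100;             // nearest 100-lot
}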

Mon, 10 Oct 2005

 

October 9th SR Results Verification


Ed,

I recently experience this type of precision problem with Mechanica as well. When I report it to the developer, he sends me a "fix" later that same day and now the problem does not manifest. I do not know the details of the fix. Since I am not much of a programmer (outside the Mechanica language) I didn't ask how the fix works; I just verify that it remedies multiple cases of precision errors.

I do know that the software uses primarily double precision numbers. Bob (the developer) tells me that there are some potential problems with converting doubles to floats and vice-versa, so he tries to keep everything in terms of doubles. He says this slows the software down some but makes for better precision. Again, I reiterate that I am not enough of a programmer to evaluate the veracity of this statement. However, Mechanica does seem to get most stuff right.

Sun, 9 Oct 2005

 

Rounding Problem with Trade #158

I recheck my results. When I format my trade log like yours our logs agree byte for byte. My results match yours. We are both wrong.

I see the problem with trade #158 but I am not sure about the best way to fix it.

Evidently our granulators have the same flaw in that they use the binary number system which cannot express tenths exactly. Because of this we calculate an entry risk per lot that is off by ~ 3.4e-14. This gives a position size very slightly less than 11750.

The difference for trade #160 likely cascades from the first problem.

BTW, the source of the 1 cent discrepancy in the Benchmark value is that you are truncating to the nearest penny (via cast to long long) while I am rounding. I'm glad the difference is something I can easily explain.

 

Good catch. The October 15, 2005 version fixes this rounding problem.

Sun, 9 Oct 2005

 

October 9th SR Results - Two Differences

Ed,

I compare my output for the 40:120 SR system with yours, and I find two differences.

There are two trades, #158 and #160, where my position sizing disagrees with yours.

 

For trade #158, I get an exact position size of -11750, which rounds to -11,800. Your position size is -11700.

 

For trade #160 I get an exact position size of 5749.35, which rounds to 5700. Your position size is 5800.

I double-check my results and find that trade # 160 is in order. It is sized differently in my test due to the size discrepancy on the previous trade #158-#159.

I cannot find an explanation for trade #158. I get a non-rounded position size of -11750 even. Your test seems to round this to -11700. My software rounds the position to -11800.

 

Good catch. The October 9, 2005 version has a rounding bug.  The October 15, 2005 version remedies this bug.

Sun, 9 Oct 2005

 

Matches Results with Some Changes


With appropriate changes to my system rules and execution logic I can exactly match the results for each of your test runs.

Interestingly, the October 9th results are actually the first results I obtain with the new Gold data. To match your October 3rd results I had to change the execution logic for gap days and the point at which I measure equity for sizing positions. I agree that the new execution logic is more natural. As for when to measure equity, I think that's more a matter of preference.

Sun, 9 Oct 2005

 

Fills Still Not Right


Ed,

As of the October 8th Simple S/R test results, orders where the market gaps through the stop price are still not handled quite right, I believe.

Note the short entry on 75-06-02. Orders for this date are basis short term support at 729.20. The opening price of 727.30 is through support. If you award a fill at 50% slippage between the open and the low on this day, the fill is 727.05.

You report a fill of 727.75, which is above the opening price. This fill price represents 50% slippage between the daily high and low. I think the most appropriate treatment of this case is to use the open as the order price and fill the order at 727.05.

 

Good Catch. The fill algorithm, in the October 9th run, now fills at a price halfway between the open and the extreme price, for gap-open markets.  This modification changes the results, over the course of the 30-year run, by about 40% of initial equity. Compare the runs for October 9, 2005 and October 8, 2005.

Thu, 6 Oct 2005

 

Contract Sizing


Ed,


I have a question regarding position sizing. I am not clear on how exactly you size your positions for the EA system. For the 150/15 run, your first trade indicates a position size of 6500 units. Does this equal 6500 / ($ per S&P point) = 6500 / 250 = 26 contracts, or does this mean an actual position size of 6500 contracts? I am assuming it is 26 contracts, since this matches up with the equity calculations, but I want to make sure I am clear on this.

 

Yes.  Units are worth $1 per handle. If you are trading a $250 / handle contract, then you have, as you state, 26 contracts.  Using a $1/handle convention avoids the complications of different contract sizes for different futures options.

Thu, 6 Oct 2005

 

Margin Requirements Essential / Exponential off by 1 cent.

 


Ed,

I'm happy to see that you are covering how to handle margin requirements next. Without system rules that acknowledge trading limits, my optimization results are pretty meaningless.

I'm wondering about a (meaningless) 1 cent difference in our 'Benchmark' values that I fail to notice before. From your dump.txt file I see that the same difference occurs in most early values as well. I wonder if you are using a VERY slightly smaller value of 'e' than I am.

The formula I use (after substituting variables and expressing it in the C language),

double benchmark = 1000000 * exp(0.03 * 11228 / 365.25);

yields 2514861.315622 which rounds to 2514861.32.

 

Yes, the system, as it is, has no margin requirements, thus no limit as to how much it can buy or sell and no penalty for adverse market swings.  To maximize gains, simply bet more.  Yes, including margin requirements in the model enables optimization of heat, as too much heat then leads to stubbing out during market corrections.

 

Good catch on the exponential.

 

Meaningless differences sometimes lead to major discoveries.  A number of readers, below, pick up on a "rounding error" from 3700 to 3800.  Investigating this reveals a better way to sequence the order entry computation.

 

The benchmark exponential I use does not enter the trading system computation. It is an independent computation, for visual comparison only.  I am saving investigation of the inner workings of the MS exp function for a rainy-day project.  Inaccuracies might arise during some of my type conversions. I am currently using:

 

long long ComputeExponential()
{
    double ndICAGR, ndYears;
    long long nllExp;

    // Instantaneous Compound Annual Growth Rate
    ndICAGR = cPreferences.sPreferences.ndExponentialRate; // from dialog box

    // start counting at the first possible trading day ...
    // ... the first data point + the days to warm up (from dialog box)
    ndYears = (nusNow - nusFirstData - cPreferences.sPreferences.nusDaysToWarmUp) / 365.25;

    if (ndYears < 0) ndYears = 0; // flat before the growth starts

    nllExp = (long long) ((double)N_STARTING_EQUITY_PENNIES * exp(ndYears * ndICAGR));

    return nllExp;
}

Wed, 5 Oct 2005

 

TSP: Support and Resistance

Hi Ed,

In the trade log, should the 75-06-30 units be -3700 instead of -3800?

-3700 = -100 * (long) (928940 * 0.05 / (739.3 - 726.8) / 100 + 0.5)
-3800 = 100 * (long) (928940 * 0.05 / (726.8 - 739.3) / 100 - 0.5)

Good Catch !

 

The latest version (October 8) agrees with your result.  The previous version sizes positions basis one-day-old equity.

Tue, 4 Oct 2005

 

Match, with Adjustments

Dear Ed,

Using Excel, I was able to match your results to the penny after making some adjustments.

1. On 75-06-30, my system sells 3,700 units; your trade log shows a sale of 3,800 units. I can't explain the difference. My position sizes match yours for every other trade.

2. I notice that when the opening price is higher than the previous day's fast resistance for buys (or lower than the previous day's fast support for sells) your system sometimes buys or sells at the average of today's high and today's low. What is the algorithm you use?

3. When the trend changes direction and the short-term signal already points in the direction of the new trend, your system waits until the next penetration of fast support or resistance. I had my system buy or sell immediately at the change of trend direction.

Thank you for sharing your knowledge and experience.

 

1. The latest version, October 8, above, now agrees with your results.

 

2. The algorithm awards prices half-way between the day high and low.  Your way makes more sense.

 

3. In this system, the long-term indicator sets the posture and the short-term indicator then does the trading, starting the next day.  You might try your alternative formulation to see if it produces substantially different results.

Tue, 4 Oct 2005

 

Visual C++ 2003

Hi Ed,


Ed Says:  "I am using MDE 7.1 and the option to change Consistency seems unavailable."


If you mean Visual C++ 2003, try this way: open the project --> menu Project: Properties --> Floating Point Consistency ...

 

Yes, on my version, the option is in light grey and I am unable to access it. 

Tue, 4 Oct 2005

 

SR Results Verification - Problems with Fills & Sizing

Ed,

I now have results for the 20:140 SR system that match yours exactly. See the spreadsheet for evidence.

For fills on bars where the entire daily range is beyond the order price, you assess slippage as 50% of the daily range. For fills on bars where the open is beyond the order price but the daily range contains the order price, you award fills 50% of the distance from the order price to the most adverse price of the day. In both cases, I believe that it is more realistic to award fills basis the difference between the open and the most adverse price of the day.

The logic is that stop orders become market orders if the price trades through the stop level. In any instance where the first price of the day is through the stop price, the order acts like a market order at the opening price. Since we assess slippage as a percentage of the distance between the order price and the most adverse price of the day, it makes sense to use the open as the order price any time there is a gap through the stop price.

Also, I notice one position sizing anomaly.

Look at Pos #242 on 75-06-30. The order that incepts this trade generates on Friday 06-27. The entry-to-exit risk for the trade as of 06-27 is 12.5 points (Fast:739.30 to 726.80). The ending equity on 06-27 is $928,940. The sizing math is: (928940 * .05) / 12.5 = 3715.76. Rounding this number to the nearest round lot of 100 results in a position size of 3700. In order to obtain your position size of 3800, I must use the previous day's equity value of $950,180 in the sizing calculation.

I check to see if there is something post-dictive about using the end of day equity of $928,940 for 75-06-27 in order to size the trade that enters during the next session. I do not think there is. I think that we can tally up the daily P/L for today, adjust the end of day equity number accordingly, and use that number to size orders for the next session.

Can you confirm whether you size trades on today's end of day equity or on yesterday's end of day equity?

This is the only case in this test run where this potential bug manifests since it is the only instance where a trade exits on one day and enters on the next, before the equity has a chance to stabilize for more than one bar.

 

Good calls. 

 

1. In the latest run (October 8, 2005), I now award fills, for markets that gap open above stops, at a price halfway between the open and the high of the day.

 

2. In the latest run, I now figure equity at the close and then enter orders, for the next morning, basis the closing equity.
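A sketch of the revised fill rule for a sell stop (hypothetical names; values from the 75-06-02 example above):

// 50% skid toward the most adverse price. If the market gaps open
// through the stop, the open replaces the stop as the order price.
double sellStopFill(double stop, double open, double low)
{
    double orderPrice = (open < stop) ? open : stop;
    return orderPrice + 0.5 * (low - orderPrice);
}

// sellStopFill(729.20, 727.30, 726.80) ==> 727.05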

Tue, 4 Oct 2005

 

Exact Match


I cleaned up the code and got a match down to the penny. Looking forward to your next assignment.

Good Work !

Mon, 3 Oct 2005

Exact Match

Ed,

I have an exact match.

 

Good Work !

Mon, 3 Oct 2005

Metrics Log Snafu


Ed,

There appears to be a problem with your new metrics log on or about Bastille Day 2000.

[Metrics] 00-07-13-H Prices:[317.20 318.30 316.20 316.40] [Slow:364.20 to 307.30 Fast:330.30 to 316.10 T:-1]

 

[Metrics] 00-07-14-F Pric ... 91.60]

[Slow:320.10 to 290.10 Fast:297.60 to 290.10 T:-1]

 

[Metrics] 01-02-07-W Prices:[291.10 291.80 290.70 291.10] [Slow:318.70 to 290.10 Fast:297.40 to 290.10 T:-1]

Regards,

 

Thank you for the catch. Data seems absent from 7/14/2000 - 2/7/01.  The source file is OK - indicating some uploading glitch.  The file now stands correctly.

Mon, 3 Oct 2005 (by Ed)

 

Revision Run

 

I am now posting a revision run, reflecting comments, corrections and suggestions from readers.

 

Revision Run: SYS_SR-05-10-03-08-44 (October 3)

 

Original Run: SYS_SR-140-20 (September 25) This run is the basis for reader comments below this point.

Sat, 1 Oct 2005

 

Problem with Heat


Dear Ed,

Please clarify how the system can generate a trade as early as 75-02-11 if it takes 140 days to calculate long-term support and resistance levels.

Also, I could not match your position sizes. Please clarify if the following is correct. I calculate the size of a short position as follows:

Size = Equity x Heat / (Stop Price - Entry Price)

where

Stop Price = Previous day's short-term resistance

If this is correct, I can't explain the difference between your position size and mine. For example, on 1975-11-14 the system enters a short at 701.50 with a stop of 709.10. Ending equity on 1975-11-13 is 1,001,090. I calculate the position size as follows:

1,001,090 x 0.05 / (709.10 - 701.50) = 6,600

However, your trade log shows a position size of 3,500.

The system initializes support and resistance by back-posting the first high and low.

 

The heat in the run is actually 2.5%, not the 5% I claim.  I am preparing a revision run.
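With the corrected heat, the sizing formula in code might read (a minimal sketch; names and the rounding convention are assumptions, with rounding to the nearest 100-lot per the logs):

// Short position size: equity * heat / (stop - entry),
// rounded to the nearest 100-lot.
long shortSize(double equity, double heat, double stop, double entry)
{
    double lots = equity * heat / (stop - entry);
    return (((long)(lots + 0.5) + 50) / 100) * 100;
}

// shortSize(1001090, 0.025, 709.10, 701.50) ==> 3300
// (versus the reader's 6,600 at the claimed 5% heat)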

Thu, 29 Sep 2005

 

No Data for Pos # 0


Could you please explain how you arrived at the 738.65 entry price? My data doesn’t have an OHLC for 2/12/75.

Thanks,

 

Thank you for the catch.

The fill is for 2/11/75: The date is out of alignment.

The data for 2/11/75: 743.20  744.00  738.20  743.70

Wed, 28 Sep 2005

 

Precision Issue

Ed,

There were a few postings on newsgroups regarding floating point accuracy in Visual Studio, and someone also mentioned setting the /Op compiler option to resolve the problem. As I think you mentioned, this is located in:

C/C++->Optimization->Floating-Point Consistency

Set to “Improve Consistency /Op”

Unfortunately the optimization compiler option is not available in C# and I haven’t been able to find an equivalent. I don’t have any C++ code to reproduce the problem and test the results, so I have no way of knowing if it works. I am still looking for a similar option on the C# side. From what you were saying, it sounds like you did set it to “Improve Consistency” with no apparent results. Is this true?

 

I am using MDE 7.1 and the option to change Consistency seems unavailable. 

Wed, 28 Sep 2005

 

Continuing with Continuous

see Sample Data


Ed,

I am examining your latest Panama Gold (sounds like some wild drug
strain) continuous contract. I find that 18 of the zero-range days occur
in a cluster during June-July 1993. My chart shows plenty of ranging
days in that period. Other than those 18, most of your other zero-range
days coincide with the ones on my chart.

You might want to take a look at what your roll strategy is doing during
that time.

Wed, 28 Sep 2005

 

VC++ /Op Switch



Ed,

It's not obvious to the casual observer but your screen shot shows /Op as off. To turn it on, change 'Default Consistency' to 'Improved Consistency'.

It rarely matters much and apparently does not matter in this case considering your other e-mail.

I don't usually bother setting it unless it's important for my numerical programs to produce identical output from version to version.

 

 

Screenshot of VC++ Visual Studio

 

Wed, 28 Sep 2005

 

Simple S&R System Comparison:

Problems with Skid, Heat


Hi Ed,

I attempt to verify your results for the 140:20 Simple S/R System using the latest continuous contract data that you post on the resources page of your website. My results do not match yours 100%. Here are the
differences I notice so far:

Your system seems to have no entry stops at the long term support and resistance breakouts. Thus, it does not enter on the bar where the long term trend changes. Rather, it waits for the next short term breakout subsequent to the bar where the trend changes.

 


Originally, I program the system to enter entry stops at trend change inflection points. This method is consistent with trading with the long term trend and entering on a short term breakout.

I duplicate your exact entry and exit dates by removing the code that enters on a stop at the trend change inflection point. Now the soonest I can enter is on a short term breakout the day after a trend change. This
matches your output.

Is this how you intend the system to work?

 

Yes. The long-term trend sets up the condition for the short-term trend to issue the signal.  You might experiment with an alternate formulation and see if it works better.

Also, I notice that your system awards a fill at the slippage adjusted order price even on days where price gaps beyond the order price on the open. Here is an example trade:

Inst    Units  Entry                     Exit                      P&L
GC----C -1800  Pos # 4 75-06-02 728.000  Pos # 5 75-06-27 736.200  -14,760.00

On 75-06-02 the opening price is 727.30. The sell stop short entry order for that session is at 729.20. Your fill of 728.00 represents slippage of 50% of the distance between the order price and the most adverse price of the day, 726.80. My system fills this order at 727.05, 50% slippage from the opening price to the most adverse price of the day.

I suggest that the way my system currently awards a fill on this trade is more realistic than using the actual order price on days where price gaps beyond the order price.

I also notice that the position sizes seem to indicate 2.5% heat trade sizing rather than 5%. The round lot size is 100 contracts if each contract changes $1 for each big point move. Can you confirm this?

 

Yes, the heat is actually 2.5%.

Wed, 28 Sep 2005

 

Rounding Fix

Ed,


I wonder whether you expect negative input data and how you might want to round those values.

A safe but SLOW method that ought to always match someone reading in your CSV files in double precision is to print the data into a buffer using the same format specifier and then read it back in from the buffer as a double.

Perhaps a better way is with the following function.

// round the input value to the specified precision
// ex: round(32.125, 0.01)  ==> 32.13
//     round(32.13, 0.125)  ==> 32.125
double round(double in, double precision)
{
    if (in >= 0)
        return (long)(in / precision + 0.5) * precision;
    else
        return -(long)(-in / precision + 0.5) * precision;
}

then the block of code in your loop below might collapse to one line and look something like this...


pcDataBuf->ndPrice[k] = round(pRec->fData[k], 0.001);

 


Good catch! 

 

This works if you know the precision you want.

 

If not, the precision of floats is also a function of the absolute value of the float.  To acknowledge value-specific precision, I use:

 

// forward declarations, so the one-argument overload compiles
double getPrecision(double ndIn);
double rounder(double ndIn, double ndPrecision);

double rounder(double ndIn)
{
    return rounder(ndIn, getPrecision(ndIn));
}

double getPrecision(double ndIn)
{
    if (ndIn > 100000.0) return .5;
    if (ndIn > 10000.0)  return .05;
    if (ndIn > 1000.0)   return .005;
    if (ndIn > 100.0)    return .0005;
    if (ndIn > 10.0)     return .00005;
    if (ndIn > 1.0)      return .000005;
    return .0000005;
}

double rounder(double ndIn, double ndPrecision)
{
    // round positive numbers up, negative numbers down
    if (ndIn >= 0)
        return (double)((long)(ndIn / ndPrecision + 0.5)) * ndPrecision;
    else
        return (double)((long)(ndIn / ndPrecision - 0.5)) * ndPrecision;
}

Wed, 28 Sep 2005

 

Exact Match to the Penny - with Some Modifications



OK. It's a moving target but with some tweaks I can now hit it to the penny.

Metrics log:

matches exactly (and no longer requires seeding resistance with a mysterious value as did the first simulation)

Trade log:

to obtain an exact match I make 2 changes to my simulation:

1- budget only half my equity for determining the position size
2- skid my final trade exit price instead of using the closing price

Equity log:

I have an entry for the final day of the simulation which does not appear in your file:

05-09-26 1,846,420.00 13,770.00 1,860,190.00
05-09-27 1,846,420.00 7,830.00 1,854,250.00

It seems that your indicated ending value is for 9/26 instead of 9/27, which matches my result for that day to the penny.

 

Good catch.  Yes, the run uses 2.5% heat, not 5% as I claim.

Wed, 28 Sep 2005

 

January 29, 1993 Trade Discrepancy

Hi,


I did confirm (with some modifications) your results yesterday. I include the exe.

 

 

Good catch.

Tue, 27 Sep 2005

 

GC Sample Data

see: Continuous Contracts


Ed,

This data has many fewer zero-range days. I count 32 of them. This seems much more plausible than the previous data.

I like your theory about rolling into the contract with the widest range. The reduction in the number of zero-range days indicates it works pretty well.

Mechanica is the software that defaults to no trading on zero-range days. I can likely find a way to tell it to do something different if I wish. It is generally pretty flexible.

Concerning CSI, I think that their "close old contract, close new contract" method is identical to their "close - open with new gap reintroduced" method. Both methods seem to preserve the price relationship between the close of the new contract on the day before a splice and the open of the new contract on the day of a splice. I don't completely understand their description of the methods, but when I compare the two methods on a chart the results are identical.

Tue, 27 Sep 2005

 

Continuous Contracts

see: Thin Sections


Ed,

Glad to help. Here is what CSI has to say about the different splicing methods:


· Close old contract, close new contract - Here the close of the new lead contract is compared with the same-day close of the former lead contract and the price difference is applied to all historical data such that the new prices are seamlessly appended to earlier data without a gap.

· Open old contract, open new contract - Same as close-to-close (above), except that the same-day open prices are used.

· Close old contract, open new contract - This formula compares the open price of the new lead contract with the previous day's close price of the former lead contract and applies the difference to all historical data, such that new data can be added without a gap. A more refined close-to-open adjustment is available through the last two options.

· Close - open with old gap reintroduced - With this option, past (from-contract) history is adjusted so that the old contract close equates to the old contract open, such that the close-to-open gap of successive days of the old contract are preserved. The result accurately reflects the old contract when bridging the two-day period from contract to contract.

· Close - open with new gap reintroduced (preferred) - With this option, past (from-contract) history is adjusted so that the new contract close equates to the new contract open, such that the close-to-open gap of successive days of the new contract are preserved. The result accurately reflects the new contract when bridging the two-day period from contract to contract. This is the preferred choice because it represents the best future contract to emulate.



* * * * *



It seems the theory behind method A is to preserve, in the Panama contract, the price relationship one sees between the roll day and the previous day when examining a chart of the front month contract. In reality, I notice no difference between the two methods.

Upon giving this some thought, I can't understand how methods A and B could produce different prices if B equalizes the closes the day prior to the first bar of new contract data. If that is the case, then the "new gap" is always present in both methods. This jibes with what I see when I create two charts of the same contract, one using method A and one using method B. The charts are identical across the board.

If you can see an error in my logic, please let me know, otherwise it looks like CSI has at least one superfluous splicing method. I haven't done the chart comparison on the other methods yet.

I think the simpler plan B is the way to go.

 

Following your observation, I am now picking a roll target, basis the delivery with the greatest ten-day sum of (high - low) - on the theory that I want the delivery with the biggest trading ranges - to eliminate the thin ones. It turns out my data base does not carry individual delivery volumes for gold, so my former volume-seeking algorithm defaults to the nearby, often very thin market.

 

I find the language describing the CSI algorithm a bit ponderous, even after diagramming out the sentence.

I wonder what software you use that does not allow an override to trade zero-range days.

Tue, 27 Sep 2005

 

Thin Sections of Continuous Contract


Ed,

I count 1978 instances where the daily high equals the low in the GC sample data file. That is close to 8 years of trading history. This seems excessive when I compare it to a Panama style data file of my own construction. Mine contains 15 H=L bars. I think the large number of H=L days might be due to your contract using data from very illiquid contract months at times.

My software interprets a bar where the high and low are the same price to be a locked limit day, and it does not allow a trade on such a bar. I can look into circumventing this behavior, but I generally believe in erring to the conservative side in matters like this.

I am including my Panama/last true contract for your perusal. Note that the last field in the file indicates which delivery the price data comes from. The algorithm I use to create this contract is to roll on the 22nd day (or the next trading day if the 22nd is on a non-trading day) the month prior to contract expiration. This data file includes the following deliveries: February, April, June, August, and December.

The splicing method on the roll day is what CSI calls, "Close to Open with New Gap Reintroduced." This works by calculating the distance between the open of roll day and the close of the previous day in the new contract. This distance is the "roll gap." The close of the old contract, and all the other prices in the data file, are then offset by the roll gap from the open of the new contract. This method preserves the close to open relationship of the new contract between roll day and roll day - 1. Price data from roll day - 1 is from the old contract. Price data from roll day forward is from the new front month contract.

 

Good catch.  I see no difference between the "Close to Open with New Gap Reintroduced" and the simple Panama Splice.
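A minimal sketch of the simple Panama splice (the structure and names are hypothetical):

#include <vector>

struct Bar { double open, high, low, close; };

// Offset all old-contract history by the difference between the new
// and old contracts at the splice, so the series joins without an
// artificial gap.
void panamaSplice(std::vector<Bar>& oldHistory,
                  double newPriceAtSplice, double oldPriceAtSplice)
{
    double offset = newPriceAtSplice - oldPriceAtSplice;
    for (Bar& b : oldHistory) {
        b.open  += offset;
        b.high  += offset;
        b.low   += offset;
        b.close += offset;
    }
}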

Tue, 27 Sep 2005

 

Problems with Precision


While there are some small equity differences early on (due to your use of single-precision prices?), we begin a more serious divergence on the short trade initiated on 92-12-22.

// from my Trade log
GC----C -47000 92-10-12 413.750 92-12-16 405.500 387,750.00
GC----C -41200 92-12-22 400.400 93-02-10 400.850 -18,540.00
GC----C -43800 93-02-22 395.750 93-03-22 399.900 -181,770.00


While investigating this, I notice an inconsistency in the way your system treats stops.

Your 92-12-22 position stops out on 93-01-29 at 400.40 when the high of the day just touches short term resistance. Your 75-06-02 position does not stop out on 75-07-01 at 436.90 when the high of the day touches short term resistance.

Here are the results I get with consistent stop treatment:

Slow=140 Fast=20 Heat=0.05 Bliss=0.1565 ICAGR=0.0626 Dr-Dn=0.3997 Final=6832510.00

when the system stops out only when price trades through the stop; and

Slow=140 Fast=20 Heat=0.05 Bliss=0.1385 ICAGR=0.0540 Dr-Dn=0.3901 Final=5257160.00

when the system stops out when the price touches the stop.

 

Good catch!  I suspect this issue relates to the precision issue below.

Mon, 26 Sep 2005

 

Close and Not Exact

Dear Ed,

If I initialize both resistances to 474.90 (I snarfed it from your metrics log - not sure where it comes from) then I can duplicate your metrics log exactly. The dates of the trades are the same but the equity values and some of the later position sizes are not quite the same since you are apparently using lower precision than I am.

For example, you sell 4200 units on 75-03-20 at 445.75. You report equity at the close to be 1,000,629.46. The precise value of equity is 1,000,630.00 based on the closing price of 445.60.

 

Good catch. I trace the imprecision to the way MS VC++ converts floats to doubles. I am now implementing a workaround.

Mon, 26 Sep 2005

 

How to Define the Trend


Hi Ed,


In the description of the support/resistance system you said:
"This system uses two sets of S-R lines: (1) long-term support and resistance to define the long-term trend and (2) short-term support and resistance to define the short-term trend. When the long-term trend is positive, the system enters the market with a stop just above the short term resistance and then places a protective stop below the short-term support."

Questions:


- What is the definition of the long term trend being positive (or negative)?


- How is it computed in terms of the relationship between the long and short term support/resistance lines and price?

Thanks,

 

When the price trades above the resistance, it defines the trend as up.  The trend stays up until the price trades below the support.  This defines the trend as down. 
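As a sketch, the trend flag behaves like a simple state machine (hypothetical names; strict comparisons per the note earlier on this page):

// The trend holds its last state until price trades through the
// opposite long-term line.
int updateTrend(int trend, double high, double low,
                double slowResistance, double slowSupport)
{
    if (high > slowResistance) return +1;  // trades above resistance: up
    if (low  < slowSupport)    return -1;  // trades below support: down
    return trend;                          // otherwise unchanged
}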

Mon, 26 Sep 2005

 

Unrealistic Fills on Gap Openings


Ed,

I think there may be a bug in your code for this system that produces unrealistic fills on days where the opening price gaps through the stop order price.

An example occurs upon the exit of the first trade in the 140:20 trade
log.

A short trade is in progress with a protective exit stop to buy at 440.10. On 05/14/1975 the opening price is 440.30. Your system awards a fill of 440.20. This is based on 50% slippage between the order price of 440.10 and the high price of the day of 440.30.

In my opinion, when the open is through the order price it is more realistic to treat the open price as the order price. In this case the result is a fill of 440.30, since the opening and high price of the day are the same. If the high is not equal to the opening price, the fill is slipped the appropriate amount from the opening price.

 

Good catch.


* * * * *


I also want to share with you my opinion on how to treat the start of trading with a system like this. I know the priming method for the startup of any system is arbitrary, so there is likely no single correct method. However, there may be methods that work better for a given purpose.

In your system, when the lookback period for a trend indicator is greater than the available number of bars, the indicator uses however many bars are available.

This means that for the first part of the test the system uses shorter term lookbacks than for the rest of the test. I see a potential problem with this. It means that we are seeing results from a relatively shorter term system mixed in with results from our intended system. As we test longer lookback periods, a greater portion of the overall trades are generated by shorter than intended trend indicators.

One property of this method is that all system lookbacks share a certain set of common trades at the beginning of the test period. When we move to the optimization phase I hypothesize that this could produce results that tend to make systems of different lookbacks appear more similar than they are in actual trading experience.

As an alternative, I suggest taking no trades until the indicator with the longest lookback has enough data to calculate. When we go to the optimization phase I suggest comparing trades of all systems over a common trading period determined by the system with the longest lookback.

A potential problem with this method is that for very long lookbacks a good bit of the available history is not used. In this case we seem to have plenty of data (going back to 1975), and my opinion is that the problem of inconsistent lookback periods within a single test and across tests of different parameter values is more serious than the problem of a shorter test history.

As you see, there are several opinions here. This perhaps illustrates the ultimately discretionary nature of systematic trading.

Thanks for leading us through this exercise!

 

I plan to address the issues surrounding waiting while metrics warm up in more detail when we take up optimization.  For now, I am concentrating on getting various systems to work.