Support & Resistance System
Back to S&R System
System Test Runs
October 15, 2005 <== latest
October 9, 2005
October 8, 2005
October 3, 2005
September 25, 2005
25 Oct 2005
ICAGR Still Not Right
Thanks for publishing the results for the 120/45 run. I found the
bug; it was in my code. But when I searched my code for bugs I did
notice something about the ICAGR calculation. If you look at the
140/20 run the calculation should be:
ratio = 2585500 / 1000000
dateRange = (11228 - 25) / 365.25
icagr = ln(2.5855) / 30.67214...
If I do the calculation without subtracting the days to warm up I get
the right results (but the wrong dateRange):
ratio = 2.5855
dateRange = 11228 / 365.25
icagr = ln(2.5855) / 30.74059...
Doing the same for the 120/45 run:
ratio = 2.33828
dateRange = 30.67214...
icagr = 0.027693
Without days to warm up:
dateRange = 30.74059...
icagr = 0.027632
I am not sure whether this relates to some rounding issue, or to my ln
function not having enough precision, or simply to my
doing the icagr calculation wrong. Can you please comment?
... Thank you for the catch - I am currently
checking into this.
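For reference, here is a minimal sketch of the ICAGR computation as the reader describes it; the names are illustrative, not Ed's actual code:

#include <cmath>

// ICAGR: natural log of the equity ratio over the test length in
// years; whether to subtract the warm-up days is the question above.
double icagr(double endEquity, double startEquity,
             int nDays, int nDaysToWarmUp)
{
    double ratio = endEquity / startEquity;           // 2585500 / 1000000
    double years = (nDays - nDaysToWarmUp) / 365.25;  // (11228 - 25) / 365.25
    return std::log(ratio) / years;                   // ln(2.5855) / 30.672...
}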
Reproduced Results for Support and Resistance System
I reproduced your results (140/20 run, October 15, 2005 version) for
the Support and Resistance System to the penny.
However I wonder about one small detail:
To reproduce your results I had to program the stops exactly at the
support and resistance prices and use the "less" and "greater"
operations - not the "less or equal" and "greater or equal"
operations - to evaluate if an order is activated.
You write: "When the long-term trend is positive, the system enters the
market with a stop just above the short term resistance and then places a
protective stop below the short-term support."
I am also of the opinion that a stop order already becomes a market order
once the specified stop price is attained. Thus it is not necessary that
the price is penetrated.
This seems to contradict the code which reproduces your results. Am I
to assume that you use the "greater" operation and "a stop exactly at
resistance" as an approximation for "greater or equal" and "a stop just
above resistance" respectively?
Yes, in this test, I use > and <
instead of >= and <=.
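In code, the distinction is simply strict inequality (a sketch; the names are illustrative):

// A buy stop elects on strictly greater, a sell stop on strictly
// less; not on greater-or-equal / less-or-equal.
bool buyStopElects(double high, double stop) { return high > stop; }
bool sellStopElects(double low, double stop) { return low < stop; }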
Details for 120/45
I tried to match other values in the optimization table but ran into some
trouble. As I told you before, my code matches the 20/140 system. It
also matches 30/120 and 20/140. But if I try 45/120 I get the
following result cagr / drdn / bliss: 0.0276 / 0.2605 / 0.1058.
According to your table the values should be: 0.028 / 0.369 / 0.075. It may
be my drdn calculation that is not correct but I am not sure. Would
it be possible for you to publish the logs from the 45/120 run?
I get these errors in more places but I think I only need the 45/120
run to find out the problem.
Yes, SR 120/45 is now posted above.
18 Oct 2005
I am working on your Support and Resistance Trading system and
cannot understand what kind of orders you are using. Any
clarification would be greatly appreciated.
You are saying "Signals issue right after the
close. The system enters orders before the next open, to trade on
stop, the following day." and "When the long-term trend is
negative, the system enters the market on a stop just below the
short-term support with protection just above the short-term resistance."
First, if today, after the close, we calculated that the 20-day support
was broken by today's lowest price, the order can only be executed
tomorrow or later. By looking at your Metrics we can see that the first
day when the 20-day support (739.10) was broken was 75-02-11, with a value
of 738.20. But your first trade is also on 75-02-11. How is that possible?
Second, what kind of order was used to enter the first position?
If it was a stop order at 738.20, it wouldn't have gone through,
because the price would not be hit until 75-04.
Also is protection a stop order as well? If so, shouldn't it be
recalculated every day?
Thank you in advance,
Directly after the close on Monday,
your trend (T:-1) is down. You enter a sell stop at 739.10, basis
the Fast Metric.
On Tuesday, the order fills at 738.65.
The Fast Metric also gives the
protective stop. Directly after the close on Tuesday, you enter a
protective stop at 749.90. You can see this, graphically, on the
chart at the bottom of the run.
[Metrics] 75-02-10-M Px:[742.90 743.30 742.40 742.50] [Slow:762.30/735.90 Fast:749.90/739.10 T:-1]
[Metrics] 75-02-11-T Px:[743.20 744.00 738.20 743.70]
[Slow:762.30/735.90 Fast:749.90/738.20 T:-1]
[Metrics] 75-02-13-H Px:[747.40 753.60 747.10 752.70]
[Slow:762.30/735.90 Fast:753.60/738.20 T:-1]
GC----C -4600 0 75-02-11 738.650
Oct 15 Test
Another exact match. Likely "correct" this time :-)
Stop Not Electing on 75-07-29
If we are short on 75-07-28, and on 75-07-29 price hits a high of 735.10,
and the High Fast is 735.10, how come we are not taken out of the position?
[Metrics] 75-07-29-T Px:[734.70 735.10 731.90 732.00]
[Slow:755.90/724.30 Fast:735.10/725.70 T:-1]
I’ve fooled with this for days and am too embarrassed to email …
On 7/29 the 735.10 does not elect the 737.60 stop, from 7/28.
After the close on 7/29, the stop falls to 735.10.
On 7/30 the high price of 733.30 does not elect the 735.10 stop.
[Metrics] 75-07-28-M Px:[731.40 734.40 731.40 734.10]
[Metrics] 75-07-29-T Px:[734.70 735.10 731.90 732.00]
[Metrics] 75-07-30-W Px:[733.10 733.30 731.60 732.20]
Refer to our discussion this morning and your question: on doing a
typecast, how does the compiler manufacture extra digits, seemingly
random, but exactly repeatable on any computer?
Consider the number 0.333333:

float f;
double d1, d2;
f = 0.333333;
d1 = f;
d2 = (double) 0.333333;

The values of f, d1, d2 as set by the program on execution are:

f  = 0.333333;        // prints as 0.333333 (6-digit float precision)
d1 = 0.333332986...;  // the float's bits, widened to double
d2 = 0.333333000...;  // direct decimal-to-double conversion
Now if you observe the binary data, you see:
1. f and d1 have the same (binary) bits, but as above they have
different (decimal) digits. This means d1 just uses the same 23 bit
mantissa as f. This explains why the newer digits in d1 are not zero
-- the typecast assigns mantissa bits 24-52 to 0, but the old bits
1-23 contribute to the newer decimal digits.
2. The bits in d2 are different from those in d1 -- there are more
mantissa bits in double (52 in double vs 23 in float) and the
"extra" lower order bits give it more precision, thus the newer
decimal digits are more in line with intuition.
An obvious test of this hypothesis: encode 1.125 and see that the extra
bits are not nonsensical, i.e. that d1 == d2, since .125 = 1/8 is exact
in binary.
In short, to get greater precision, the conversion from decimal
needs to be made DIRECTLY to double, not via float. Seeing as how
precision loss is a result of the Decimal->IEEE conversion, the best
solution is to not use float at all.
However, if your data is in float already, do the following one-time conversion:
1. Convert all data to decimal and print as a string. If you try to
do this step by writing your own function, then you get
f1=0.333332986 and not what you want (0.333333)! C++ guarantees 6
digit precision for float, so the system stores the value as
0.333332986 but reports it as 0.333333, so use "sprintf" to print
exactly 6 digits into a character array/string.
2. From that decimal string, re-encode the number as double using
the library function "atof". Voila! "high precision data". This high
precision data prints the same decimal digits as the old data, but
the lower mantissa bits are now meaningful.
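A minimal sketch of this one-time upgrade (the function name is illustrative):

#include <stdio.h>
#include <stdlib.h>

// Print the float with 6 significant decimal digits, then re-encode
// the string directly as a double.
double upgradePrecision(float f)
{
    char acBuf[32];
    sprintf(acBuf, "%.6g", f);  // e.g. 0.333332986f -> "0.333333"
    return atof(acBuf);         // decimal -> double, skipping float
}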
This works because price data (decimal) has very few significant
figures, and high as well as low precision IEEE formats can store it well.
Note, this only deals with the price data precision; you still have
to write the equality checks etc. See "C++ FAQ 36.14" from the prior mail.
The advantage of doing this price data munging is that the floating
point operation checks become easier, e.g. equality can be "agrees to 14
digits". See Section 36, FAQ 14 in the C++ FAQ.
Background info, which you probably know, and some references
to check; e.g., the scientific computing community does
high precision work.
[36.14] Why doesn't my floating-point comparison work?
Because floating point arithmetic is different from real number arithmetic.
Here's a simple example:
double x = 1.0 / 10.0;
double y = x * 10.0;
if (y != 1.0)
  std::cout << "surprise: " << y << " != 1\n";
The above "surprise" message will appear on some (but not all)
compilers/machines. But even if your particular compiler/machine
doesn't cause the above "surprise" message (and if you write me telling
me whether it does, you'll show you've missed the whole point of this
FAQ), floating point will surprise you at some point. So read this FAQ
and you'll know what to do.
The reason floating point will surprise you is that float and double
values are normally represented using a finite precision binary
format. In other words, floating point numbers are not real numbers.
For example, in your machine's floating point format it might be
impossible to exactly represent the number 0.1. By way of analogy,
it's impossible to exactly represent the number one third in decimal
format (unless you use an infinite number of digits).
To dig a little deeper, let's examine what the decimal number 0.625
means. This number has a 6 in the "tenths" place, a 2 in the
"hundredths" place, and a 5 in the "thousandths" place. In other words,
we have a digit for each power of 10. But in binary, we might,
depending on the details of the floating point format, have a bit for
each power of 2. So the fractional part might have a "halves" place, a
"quarters" place, an "eighths" place, a "sixteenths" place, etc., and
each of these places has a bit.
Let's pretend your machine represents the fractional part of floating
point numbers using the above scheme (it's normally more complicated
than that, but if you already know exactly how floating point numbers
are stored, chances are you don't need this FAQ to begin with, so look
at this as a starting point). On that pretend machine, the bits of the
fractional part of 0.625 would be 101: 1 in the "halves" place, 0 in
the "quarters" place, and 1 in the "eighths" place. In other words,
0.625 is 1/2 + 1/8.
But on this pretend machine, 0.1 cannot be represented exactly since it
cannot be formed as a sum of (negative) powers of 2 -- at least not
without an infinite number of (negative) powers of 2. We can get close,
but we can't represent it exactly. In particular we'd have a 0 in the
"halves" place, a 0 in the "quarters" place, a 0 in the "eighths"
place, and finally a 1 in the "sixteenths" place, leaving a remainder
of 1/10 - 1/16 = 3/80. Figuring out the other bits is left as an
exercise (hint: look for a pattern).
The message is that some floating point numbers cannot always be
represented exactly, so comparisons don't always do what you'd like
them to do. In other words, if the computer actually multiplies 10.0
by 1.0/10.0, it might not exactly get 1.0 back.
That's the problem. Now here's the solution: be very careful when
comparing floating point numbers for equality (or when doing other
things with floating point numbers; e.g., finding the average of two
floating point numbers seems simple but to do it right requires an
if/else with at least three cases).
Here's the wrong way to do it:

void dubious(double x, double y)
{
  ...
  if (x == y)  // Dubious!
    ...
}
If what you really want is to make sure they're "very close" to each
other (e.g., if variable a contains the value 1.0 / 10.0 and you want
to see if (10*a == 1)), you'll probably want to do something fancier
than the above:

void smarter(double x, double y)
{
  ...
  if (isEqual(x, y))  // Smarter!
    ...
}
Here's the isEqual() function:

inline bool isEqual(double x, double y)
{
  // left as an exercise for the reader :-)
  // see one of the references below
}
For the definition of the above function, check out references such as
the following (in random order):
* Knuth, Donald E., The Art of Computer Programming, Volume II:
Seminumerical Algorithms, Addison-Wesley, 1969.
* LAPACK -- Linear Algebra Subroutine Library.
* Stoer, J. and Bulirsch, R., Introduction to Numerical Analysis,
Springer-Verlag, in German.
* Isaacson, E. and Keller, H., Analysis of Numerical Methods.
* Ralston and Rabinowitz, A First Course in Numerical Analysis,
Second Edition, Dover.
* Press et al., Numerical Recipes.
* Kahan, W., http.cs.berkeley.edu/~wkahan/.
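For illustration, one common formulation runs along the lines of Knuth's relative-tolerance test (a sketch, not the FAQ's official answer):

#include <cmath>
#include <algorithm>

// Equal-within-tolerance: scale the allowable difference by the
// operands' magnitude; the choice of epsilon is a judgment call.
inline bool isEqual(double x, double y)
{
    const double epsilon = 1e-10;
    return std::fabs(x - y) <= epsilon * std::max(std::fabs(x), std::fabs(y));
}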
Reminder: be sure to check out all the other primitives, such as
averages, solutions to the quadratic equation, etc., etc. Do not assume
the formulas you learned in High School will work with floating point
numbers!
Hope this helps.
Very nice !
I have once more updated the code. This time I get exact results if
I modify the trade on 95-11-22. From your metric log you have 11700
contracts short. When I do the calculation I get 11800 contracts
short. If I do the calculation by hand I get the following:
1974000 * 0.05 / (470.6 - 462.2) = 11750, which rounds to 11800 contracts.
The source code included has a fix for this.
// This is probably the world's most inefficient rounding routine...
int test = position % 100;
if( test < 50 )
    position = position - test;
else
    position = position - test + 100;
Yes, I address this problem
in the October 15, 2005 version.
My results for Simple SR
Here is a spreadsheet that includes the equity log and metrics for a
test run that I believe correctly incorporates all of the corrections to
the simple 20:140 SR system so far.
The ending equity is a couple of hundred dollars less than your
results post (October 9th), and the metrics reflect the resultant
difference.
I use 75-01-27 as the start date for calculating rates of return. Your
test uses 75-01-25. That is a Saturday. I figure no money is at risk
since we can't start trading on a non-trading day, so I use the
following Monday as the start date.
I wonder if your results match mine?
Yes, with my rounding fix (October
15, 2005 version) I think we now match.
TSP Project -- Exponential Average Crossover Optimal Heat
I ran a simulation, using C++, of the Exponential Average Crossover
system and my results exactly match those you have published.
I also used the steamroller method to find an approximation for
optimal heat. Two charts are included: one showing heat (Fractional
Bet) vs. CAGR, the other illustrating heat vs. Bliss.
The Bliss function uses Lake Ratio rather than PDD in the calculation:
Bliss = CAGR / (Lake Ratio)
I find it interesting that fractional bet values above 0.00625 are
not that blissful.
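In code, this reader's Bliss variant is a one-liner (illustrative only):

// Bliss, per this reader: CAGR divided by the Lake Ratio rather
// than by the peak-to-valley drawdown (PDD).
double bliss(double cagr, double lakeRatio) { return cagr / lakeRatio; }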
October 9th SR
I confirm that the small precision error you note cascades to the
next trade. When I manipulate my system's position size on trade #158
to match yours, my position size for #160 also matches yours. It
appears that trade #158 is the only place our math differs.
My ending equity now matches yours.
Yes, as numerous readers notice, I
have an error in my computation:
95-11-22 1,974,000.00 2,340.00 1,976,340.00
95-11-27 1,974,000.00 -30,420.00 1,943,580.00
95-11-28 1,974,000.00 -65,520.00 1,908,480.00
[Metrics] 95-11-20-M Px:[466.40 466.60 465.60 466.20]
[Metrics] 95-11-21-T Px:[466.10 466.90 464.40 464.50]
[Metrics] 95-11-22-W Px:[464.60 465.10 460.60 461.20]
[Metrics] 95-11-27-M Px:[462.50 464.70 462.00 464.00]
[Metrics] 95-11-28-T Px:[464.20 469.60 464.20 467.00]
GC----C -24100 156 95-10-19 462.750 157 95-11-09 466.750 -96,400.00
GC----C -11700 158 95-11-22 461.400 159 96-01-02 469.850 -98,865.00
GC----C 5800 160 96-01-22 481.450 161 96-02-21 474.900 -37,990.00
My code runs along these lines:
Risk: 1974000 * .05 = 98700         // five percent of Eq.
Risk Per Lot: 470.6 - 462.2 = 8.4   // box (hi - lo)
Lots: 98700 / 8.4 = 11750           // divide 1. by 2.
    = 11800                         // round to nearest 100
Examining the registers for this computation, I find:
RiskPerLot = 8.4000000000000341
ndSize = 11749.999999999953
So the size comes out about 4.7 * 10^-11 shy of 11750, and this rounds
down to 11700.
To address this issue, I round ndSize up to 11750 before rounding it
again, this time to 11800. See the results in the October 15, 2005 version.
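A sketch of this two-stage rounding, for positive sizes (illustrative names, not Ed's actual code):

#include <math.h>

// First absorb the ~1e-11 floating-point noise by rounding to the
// nearest whole unit, then round to the nearest 100-lot.
long roundLots(double ndSize)
{
    long nUnits = (long) floor(ndSize + 0.5);  // 11749.99... -> 11750
    return ((nUnits + 50) / 100) * 100;        // 11750 -> 11800
}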
October 9th SR
I recently experience this type of precision problem with Mechanica
as well. When I report it to the developer he sends me a "fix" later
that same day and now the problem does not manifest. I do not
know the details of the fix. Since I am not much of a programmer
(outside the Mechanica language) I don't ask how the fix works; I
just verify that it remedies multiple cases of precision errors.
I do know that the software uses primarily double precision numbers.
Bob (the developer) tells me that there are some potential problems
with converting doubles to floats and vice-versa, so he tries to
keep everything in terms of doubles. He says this slows the software
down some but makes for better precision. Again, I reiterate that I
am not enough of a programmer to evaluate the veracity of this
statement. However, Mechanica does seem to get most stuff right.
Sun, 9 Oct
Rounding Problem with Trade #158
I recheck my results. When I format my trade log like yours our logs
agree byte for byte. My results match yours. We are both wrong.
I see the problem with trade #158 but I am not sure about the best
way to fix it.
Evidently our granulators have the same flaw in that they use the
binary number system which cannot express tenths exactly. Because of
this we calculate an entry risk per lot that is off by ~ 3.4e-14.
This gives a position size very slightly less than 11750.
The difference for trade #160 likely cascades from the difference on trade #158.
BTW, the source of the 1 cent discrepancy in the Benchmark value is
that you are truncating to the nearest penny (via cast to long long)
while I am rounding. I'm glad the difference is something I can explain.
Good catch. The October 15, 2005
version fixes this rounding problem.
Sun, 9 Oct
October 9th SR Results - Two Differences
I compare my output for the 140:20 SR system with yours, and I find
two trades, #158 and #160, where my position sizing differs.
For trade #158, I get an exact position size of -11750,
which rounds to -11,800. Your position size is -11700. For trade #160, I
get an exact position size of 5749.35, which rounds to 5700. Your
position size is 5800.
I double-check my results and find that trade # 160 is in order. It
is sized differently in my test due to the size discrepancy on the
previous trade #158-#159.
I cannot find an explanation for trade #158. I get a non-rounded
position size of -11750 even. Your test seems to round this to
-11700. My software rounds the position to -11800.
Good catch. The October 9, 2005
version has a rounding bug. The October 15, 2005 version
remedies this bug.
Sun, 9 Oct
Results with Some Changes
With appropriate changes to my system rules and execution logic I
match the results for each of your test runs.
Interestingly, the October 9th results are actually the first results
with the new Gold data. To match your October 3rd results I had to change
my execution logic for gap days and the point at which I measure equity
for sizing positions. I agree that the new execution logic is more
natural. As to where to measure equity, I think that's more a matter of
preference.
Sun, 9 Oct 2005
Still Not Right
As of the October 8th Simple S/R test results, orders where the
market gaps through the stop price are still not handled quite
right, I believe.
Note the short entry on 75-06-02. Orders for this date are basis
short term support at 729.20. The opening price of 727.30 is through
support. If you award a fill at 50% slippage between the open and
the low on this day, the fill is 727.05.
You report a fill of 727.75, which is above the opening price. This
fill price represents 50% slippage between the daily high and low. I
think the most appropriate treatment of this case is to use the open
as the order price and fill the order at 727.05.
Good Catch. The fill algorithm, in
the October 9th run, now
fills at a price halfway between the open and the extreme price, for gap-open
markets. This modification changes the results, over the
course of the 30-year run, by about 40% of initial equity. Compare
the runs for October 9, 2005 and October 8, 2005.
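A sketch of the gap-aware fill rule for the sell-stop case (names illustrative; the buy side mirrors it):

// If the market opens through the stop, the order acts like a market
// order: fill halfway from the open to the day's low; otherwise fill
// halfway from the stop price to the low.
double fillSellStop(double stop, double open, double low)
{
    double basis = (open < stop) ? open : stop;
    return (basis + low) / 2.0;  // e.g. (727.30 + 726.80) / 2 = 727.05
}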
Thu, 6 Oct 2005
I have a question regarding position sizing. I am not clear on how
exactly you size your positions for the EA system. For the 150/15
run, your first trade indicates a position size of 6500 units. Does
this equal $6500 per S&P point, i.e. 6500/250 = 26 contracts, or does
this mean an actual position size of 6500 contracts? I am assuming it
is 26 contracts, since this matches up with the equity calculations,
but I want to make sure I am clear on this.
Yes. Units are worth $1 per
handle. If you are trading a $250 / handle contract, then you have,
as you state, 26 contracts. Using a $1/handle convention
avoids the complications of different contract sizes for different markets.
Thu, 6 Oct
Margin Requirements Essential / Exponential off by 1 cent.
I'm happy to see that you are covering how to handle margin
requirements. Without system rules that acknowledge trading limits, my
results are pretty meaningless.
I'm wondering about a (meaningless) 1 cent difference in our benchmarks
that I fail to notice before. From your dump.txt file I see that the
difference occurs in most early values as well. I wonder if you are
using a VERY slightly smaller value of 'e' than I am.
The formula I use (after substituting variables and expressing in C++) is:
double benchmark = 1000000 * exp(0.03 * 11228 / 365.25);
It yields 2514861.315622, which rounds to 2514861.32.
Yes, the system, as it is, has no
margin requirements, thus no limit as to how much it can buy or sell
and no penalty for adverse market swings. To maximize gains,
simply bet more. Yes, including margin requirements in the
model enables optimization of heat, as too much heat then leads to
stubbing out during market corrections.
Good catch on the exponential.
Meaningless differences sometimes
lead to major discoveries. A number of readers, below, pick up
on a "rounding error" from 3700 to 3800. Investigating this
reveals a better way to sequence the order entry computation.
The benchmark exponential I use does
not enter the trading system computation. It is an independent
computation, for visual comparison only. I am saving investigation of
the inner workings of the MS exp function for a rainy day project.
Inaccuracies might arise during some of my type conversions. I am
currently using:

long long ComputeExponential()
{
    double ndICAGR, ndYears;
    long long nllExp;

    // Instantaneous Compound Annual Growth Rate (from dialog box)
    ndICAGR = cPreferences.sPreferences.ndExponentialRate;

    // start counting at the first possible trading day ...
    // ... the first data point + the days to warm up (from dialog box)
    ndYears = (nusNow - nusFirstData -
        cPreferences.sPreferences.nusDaysToWarmUp) / 365.25;
    if (ndYears < 0) ndYears = 0;  // flat before the growth starts

    nllExp = (long long) ((double) N_STARTING_EQUITY_PENNIES *
        exp(ndYears * ndICAGR));
    return nllExp;  // assumed return; the original snippet ends here
}
Wed, 5 Oct
TSP: Support and Resistance
In trade log, should 75-06-30 units be -3700 instead of -3800?
-3700 = -100 * (long) (928940 * 0.05 / (739.3 - 726.8) / 100 + 0.5)
-3800 = 100 * (long) (928940 * 0.05 / (726.8 - 739.3) / 100 - 0.5)
Good Catch !
The latest version (October 8) agrees
with your result. The previous version sizes positions basis the
previous day's equity.
Tue, 4 Oct
Using Excel, I was able to match your result to the penny after
resolving three items:
1. On 75-06-30, my system sells 3,700 units; your trade log shows a sale
of 3,800 units. I can't explain the difference. My position sizes match
yours for every other trade.
2. I notice that when the opening price is higher than the previous day's
fast resistance for buys (or lower than the previous day's fast support
for sells) your system sometimes buys or sells at the average of today's
high and today's low. What is the algorithm you use?
3. When the trend changes direction and the short-term signal already
points in the direction of the new trend, your system waits until the
next penetration of fast support or resistance. I had my system buy or
sell immediately at the change of trend direction.
Thank you for sharing your knowledge and experience.
1. The latest version, October 8,
above, now agrees with your results.
2. The algorithm awards prices
half-way between the day high and low. Your way makes more sense.
3. In this system, the long-term
indicator sets the posture and the short-term indicator then does
the trading, starting the next day. You might try your
alternative formulation to see if it produces substantially
different results.
Tue, 4 Oct
Ed Says: "I am using MDE 7.1 and the option to change Consistency
seems unavailable."
If you mean Visual C++ 2003, try this way: open the
project --> menu Project: Properties --> Floating Point Consistency.
Yes, on my version, the option is in light
grey and I am unable to access it.
Tue, 4 Oct
Verification - Problems with Fills & Sizing
I now have results for the 20:140 SR system that match yours exactly.
See the spreadsheet for evidence.
For fills on bars where the entire daily range is beyond the order
price, you assess slippage as 50% of the daily range. For fills on bars
where the open is beyond the order price but the daily range includes
the order price, you award fills 50% of the distance from the order
price to the most adverse price of the day. In both cases, I believe
that it is more realistic to award fills basis the difference between
the open and the most adverse price of the day.
The logic is that stop orders become market orders if the price gaps
through the stop level. In any instance where the first price of the day
is through the stop price, the order acts like a market order at the
opening price. Since we assess slippage as a percentage of the distance
between the order price and the most adverse price of the day, it makes
sense to use the open as the order price any time there is a gap through
the stop price.
Also, I notice one position sizing anomaly.
Look at Pos #242 on 75-06-30. The order that incepts this trade
generates on Friday 06-27. The entry-to-exit risk for the trade as of
06-27 is 12.5 points (Fast:739.30 to 726.80). The ending equity on 06-27
is $928,940. The sizing math is: (928940 * .05) / 12.5 = 3715.76.
Rounding this number to the nearest round lot of 100 results in a
position size of 3700. In order to obtain your position size of 3800,
one must use the previous day's equity value of $950,180 in the sizing
math.
I check to see if there is something post-dictive about using the same-
day equity of $928,940 for 75-06-27 in order to size the trade that
enters during the next session. I do not think there is. I think we
can tally up the daily P/L for today, adjust the end of day equity
number accordingly, and use that number to size orders for the next day.
Can you confirm whether you size trades on today's end of day equity or
on yesterday's end of day equity?
This is the only case in this test run where this potential bug
manifests since it is the only instance where a trade exits on one day
and enters on the next, before the equity has a chance to stabilize for
more than one bar.
1. In the latest run (October
8, 2005), I now award fills, for
markets that gap open above stops, at a price halfway between the
open and the high of the day.
2. In the latest run, I now figure
equity at the close and then enter orders, for the next morning,
basis the closing equity.
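A sketch of this sequence (hypothetical names, not Ed's actual code):

#include <math.h>

// Size the next morning's order basis tonight's closing equity.
long sizeOrder(double closingEquity, double heat,
               double entryStop, double protectiveStop)
{
    double ndRiskPerLot = fabs(entryStop - protectiveStop);
    double ndLots = closingEquity * heat / ndRiskPerLot;  // 928940 * .05 / 12.5 = 3715.76
    return (long) ((ndLots + 50.0) / 100.0) * 100;        // nearest 100-lot: 3700
}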
Tue, 4 Oct
I cleaned up the code and got a match down to the penny. Looking
forward to your next assignment.
Good Work !
Mon, 3 Oct 2005
I have an exact match.
Good Work !
Mon, 3 Oct
Metrics Log Snafu
There appears to be a problem with your new metrics log on or about
Bastille Day 2000.
[Metrics] 00-07-13-H Prices:[317.20 318.30 316.20 316.40] [Slow:364.20 to 307.30 Fast:330.30 to 316.10 T:-1]
[Metrics] 00-07-14-F Pric ... 91.60] [Slow:320.10 to 290.10 Fast:297.60 to 290.10 T:-1]
[Metrics] 01-02-07-W Prices:[291.10 291.80 290.70 291.10] [Slow:318.70 to 290.10 Fast:297.40 to 290.10 T:-1]
Thank you for the catch. Data seems
absent from 7/14/2000 - 2/7/01. The source file is OK -
indicating some uploading glitch. The file now stands corrected.
Mon, 3 Oct 2005 (by Ed)
I am now posting a revision run, reflecting comments, corrections and
suggestions from readers.
SYS_SR-05-10-03-08-44 (October 3)
SYS_SR-140-20 (September 25) This run is the basis for
reader comments below this point.
Sat, 1 Oct
Please clarify how the system can generate a trade as early as
75-02-11 if it takes 140 days to calculate long-term support and
resistance.
Also, I could not match your position sizes. Please clarify if the
following is correct. I calculate the size of a short position as
Size = Equity x Heat / (Stop Price - Entry Price)
Stop Price = Previous day's short-term resistance
If this is correct, I can't explain the difference between your
position size and mine. For example, on 1975-11-14 the system enters
a short at 701.50 with a stop of 709.10. Ending equity on 1975-11-13
is 1,001,090. I calculate the position size as follows:
1,001,090 x 0.05 / (709.10 - 701.50) = 6,600
However, your trade log shows a position size of 3,500.
The system initializes support and resistance by back-posting the
first high and low.
The heat in the run is actually 2.5%,
not the 5% I claim. I am preparing a revision run.
Entry Price for Pos # 0
Could you please explain how you arrived at the 738.65 entry price.
My data doesn't have an OHLC for 2/12/75.
Thank you for the catch.
The fill is for 2/11/75; the date is out of alignment.
The data for 2/11/75: 743.20 744.00 738.20 743.70
There were a few postings on newsgroups regarding floating point
accuracy in Visual Studio and someone also mentioned setting the /Op
compiler option to resolve the problem. As I think you had
mentioned, this is located in the project properties:
Set to "Improve Consistency /Op".
Unfortunately the optimization compiler option is not available in
C# and I haven't been able to find an equivalent. I don't have any
C++ code to reproduce the problem and test the results, so I have no
way of knowing if it works. I am still looking for a similar option
on the C# side. From what you were saying, it sounds like you did
set it to "Improve Consistency" with no apparent results. Is this
correct?
I am using MDE 7.1 and the option to
change Consistency seems unavailable.
I am examining your latest Panama Gold (sounds like some wild drug
strain) continuous contract. I find that 18 of the zero-range days fall
in a cluster during June-July 1993. My chart shows plenty of ranging
days in that period. Other than those 18, most of your other zero-range
days coincide with the ones on my chart.
You might want to take a look at what your roll strategy is doing in
that period.
It's not obvious to the casual observer but your screen shot shows
/Op as off. To turn it on, change 'Default Consistency' to 'Improved
Consistency'.
It rarely matters much and apparently does not matter in this case,
considering your other e-mail.
I don't usually bother setting it unless it's important for my
numerical programs to produce identical output from version to version.
Screenshot of VC++ Visual Studio
S&R System Comparison: Problems with Skid, Heat
I attempt to verify your results for the 140:20 Simple S/R System using
the latest continuous contract data that you post on the resources page
of your website. My results do not match yours 100%. Here are the
differences I notice so far:
Your system seems to have no entry stops at the long term support and
resistance breakouts. Thus, it does not enter on the bar where the long
term trend changes. Rather, it waits for the next short term breakout
subsequent to the bar where the trend changes.
Originally, I program the system to enter entry stops at trend
inflection points. This method is consistent with trading with the long
term trend and entering on a short term breakout.
I duplicate your exact entry and exit dates by removing the code that
enters on a stop at the trend change inflection point. Now the earliest
the system can enter is on a short term breakout the day after a trend
change. This matches your output.
Is this how you intend the system to work?
Yes. The long-term trend sets up the
condition for the short-term trend to issue the signal. You
might experiment with an alternate formulation and see if it works better.
Also, I notice that your system awards a fill at 50% slippage from the
order price even on days where price gaps beyond the order price on the
open. Here is an example trade:
Inst Units Entry Exit
GC----C -1800 Pos # 4 75-06-02 728.000 Pos # 5 75-06-27
On 75-06-02 the opening price is 727.30. The sell stop short entry order
for that session is at 729.20. Your fill of 728.00 represents slippage
of 50% of the distance between the order price and the most adverse
price of the day, 726.80. My system fills this order at 727.05, 50%
slippage from the opening price to the most adverse price of the day.
I suggest that the way my system currently awards a fill on this day
is more realistic than using the actual order price on days where price
gaps beyond the order price.
I also notice that the position sizes seem to indicate 2.5% heat
sizing rather than 5%. The round lot size is 100 contracts if each
contract changes $1 for each big point move. Can you confirm this?
Yes, the heat is actually 2.5%.
I wonder whether you expect negative input data and how you might
want to round those values.
A safe but SLOW method that ought to always match someone reading in
your CSV files in double precision is to print the data into a
buffer using the same format specifier and then read it back in
from the buffer as a double.
Perhaps a better way is with the following function:

// round the input value to the specified precision
// ex: round(32.125, 0.01) ==> 32.13
//     round(32.13, 0.125) ==> 32.125
double round(double in, double precision)
{
    if ( in >= 0 )
        return (long)(in / precision + 0.5) * precision;
    return -(long)(-in / precision + 0.5) * precision;
}
then the block of code in your loop below might collapse to one line
and look something like this...
pcDataBuf->ndPrice[k] = round(pRec->fData[k], 0.001);
This works if you know the precision in advance.
If not, the precision of floats is
also a function of the absolute value of the float. To
acknowledge value-specific precision, I use:

double getPrecision(double ndIn)
{
    if(ndIn > 100000.0) return .5;
    if(ndIn > 10000.0) return .05;
    if(ndIn > 1000.0) return .005;
    if(ndIn > 100.0) return .0005;
    if(ndIn > 10.0) return .00005;
    if(ndIn > 1.0) return .000005;
    return .0000005;  // assumed default for values <= 1.0
}

double rounder(double ndIn, double ndPrecision)
{
    // round positive numbers up, negative numbers down
    if ( ndIn >= 0 )
        return (double)((long)( ndIn / ndPrecision + 0.5 )) * ndPrecision;
    return (double)((long)( ndIn / ndPrecision - 0.5 )) * ndPrecision;
}

double rounder(double ndIn)
{
    return rounder(ndIn, getPrecision(ndIn));
}
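For example (illustrative numbers): a float price near 443.30 widens to something like 443.29999; it falls in the > 100.0 bracket, so the precision is .0005, and rounder returns 443.30 as a double.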
To the Penny - with Some Modifications
OK. It's a moving target but with some tweaks I can now hit it to the
penny. My run matches exactly (and no longer requires seeding
resistance with a mysterious value as did the first simulation).
To obtain an exact match I make 2 changes to my simulation:
1- budget only half my equity for determining the position size
2- skid my final trade exit price instead of using the closing price
I also have an entry for the final day of the simulation which does
not appear in your file:
05-09-26 1,846,420.00 13,770.00 1,860,190.00
05-09-27 1,846,420.00 7,830.00 1,854,250.00
It seems that your indicated ending value is for 9/26 instead of
9/27 which matches my result for that day to the penny.
Good catch. Yes, the run uses
2.5% heat, not 5% as I claim.
29, 1993 Trade Discrepancy
I did confirm (with some modifications) your results yesterday.
I include the exe.
GC Sample Data
This data has many fewer zero-range days. I count 32 of them. This
seems much more plausible than the previous data.
I like your theory about rolling
into the contract with the widest range. The reduction in the number
of zero-range days indicates it works pretty well.
Mechanica is the software that defaults to no trading on zero-range
days. I can likely find a way to tell it to do something different
if I wish. It is generally pretty flexible.
Concerning CSI, I think that their "close old contract, close new
contract" method is identical to their "close - open with new gap
reintroduced" method. Both methods seem to preserve the price
relationship between the close of the new contract on the day before
a splice and the open of the new contract on the day of a splice. I
don't completely understand their description of the methods, but
when I compare the two methods on a chart the results are identical.
see: Thin Sections
Glad to help. Here is what CSI has to say about the different methods:
· Close old contract, close new contract - Here the close of the new
lead contract is compared with the same-day close of the former lead
contract and the price difference is applied to all historical data
such that the new prices are seamlessly appended to earlier data
without a gap.
· Open old contract, open new contract - Same as close-to-close
(above), except that the same-day open prices are used.
· Close old contract, open new contract - This formula compares the
open price of the new lead contract with the previous day's close
price of the former lead contract and applies the difference to all
historical data, such that new data can be added without a gap. A
more refined close-to-open adjustment is available through the last two
options below.
· Close - open with old gap reintroduced - With this option, past
(from-contract) history is adjusted so that the old contract close
equates to the old contract open, such that the close-to-open gap of
successive days of the old contract are preserved. The result
accurately reflects the old contract when bridging the two-day
period from contract to contract.
· Close - open with new gap reintroduced (preferred) - With this
option, past (from-contract) history is adjusted so that the new
contract close equates to the new contract open, such that the
close-to-open gap of successive days of the new contract are
preserved. The result accurately reflects the new contract when
bridging the two-day period from contract to contract. This is the
preferred choice because it represents the best future contract to trade.
* * * * *
It seems the theory behind method A is to preserve, in the Panama
contract, the price relationship one sees between the roll day and
the previous day when examining a chart of the front month contract.
In reality, I notice no difference between the two methods.
Upon giving this some thought, I can't understand how method A and B
could produce different prices if B equalizes the closes the day
prior to the first bar of new contract data. If that is the case,
then the "new gap" is always present in both methods. This jibes
with what I see when I create two charts of the same contract, one
using method A and one using method B. The charts are identical
across the board.
If you can see an error in my logic, please let me know, otherwise
it looks like CSI has at least one superfluous splicing method. I
haven't done the chart comparison on the other methods yet.
I think the simpler plan B is the way to go.
Following your observation, I am now picking a roll target, basis
the delivery with the greatest ten-day sum of (high - low) - on the
theory that I want the delivery with the biggest trading ranges - to
eliminate the thin ones. It turns out my data base does not carry
individual delivery volumes for gold, so my former volume-seeking
algorithm defaults to the nearby, often very thin market.
I find the language describing the CSI algorithm a bit ponderous,
even after diagramming out the sentence.
I wonder what software you use that does not allow an override to
trade zero-range days.
Thin Sections of Continuous Contract
I count 1978 instances where the daily high equals the low in the GC
sample data file. That is close to 8 years of trading history. This
seems excessive when I compare it to a Panama style data file of my own
construction. Mine contains 15 H=L bars. I think the large number of H=L
days might be due to your contract using data from very illiquid
contract months at times.
My software interprets a bar where the high and low are the same
to be a locked limit day, and it does not allow a trade on such a day. I
can look into circumventing this behavior, but I generally believe in
erring to the conservative side in matters like this.
I am including my Panama/last true contract for your perusal. Note that
the last field in the file indicates which delivery the price data comes
from. The algorithm I use to create this contract is to roll on the 22nd
(or the next trading day if the 22nd is on a non-trading day) of the
month prior to contract expiration. This data file includes the
following deliveries: February, April, June, August, and December.
The splicing method on the roll day is what CSI calls, "Close to Open
with New Gap Reintroduced." This works by calculating the distance
between the open of roll day and the close of the previous day in the
new contract. This distance is the "roll gap." The close of the old
contract, and all the other prices in the data file, are then offset by
the roll gap from the open of the new contract. This method preserves
the close to open relationship of the new contract between roll day and
roll day - 1. Price data from roll day - 1 is from the old contract.
Price data from roll day forward is from the new front month contract.
Good catch. I see no difference
between the "Close to Open with New Gap Reintroduced" and the simple
close-to-close method.
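A sketch of the splice arithmetic as described above; as the reader observes, offsetting the old history so its last close equals the new contract's same-day close is equivalent (all names illustrative):

#include <vector>
#include <cstddef>

struct Bar { double open, high, low, close; };

// Offset all old-contract bars (up to roll day - 1) so the spliced
// series preserves the new contract's close-to-open gap.
void spliceAtRoll(std::vector<Bar>& oldHistory,
                  double newCloseBeforeRoll, double oldCloseBeforeRoll)
{
    double offset = newCloseBeforeRoll - oldCloseBeforeRoll;
    for (std::size_t i = 0; i < oldHistory.size(); ++i) {
        oldHistory[i].open += offset;
        oldHistory[i].high += offset;
        oldHistory[i].low += offset;
        oldHistory[i].close += offset;
    }
}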
Problems with Precision
While there are some small equity differences early on (due to your
single precision prices?) we begin a more serious divergence on the
trade initiated on 92-12-22.
// from my Trade log
GC----C -47000 92-10-12 413.750 92-12-16 405.500 387,750.00
GC----C -41200 92-12-22 400.400 93-02-10 400.850 -18,540.00
GC----C -43800 93-02-22 395.750 93-03-22 399.900 -181,770.00
While investigating this, I notice an inconsistency in the way you
treat stops:
Your 92-12-22 position stops out on 93-01-29 at 400.40 when the high of
the day just touches short term resistance.
Your 75-06-02 position does not stop out on 75-07-01 at 436.90 when the
high of the day touches short term resistance.
Here are the results I get with consistent stop treatment:
Slow=140 Fast=20 Heat=0.05 Bliss=0.1565 ICAGR=0.0626 Dr-dn=0.3997
when the system stops out only when price trades through the stop;
Slow=140 Fast=20 Heat=0.05 Bliss=0.1385 ICAGR=0.0540 Dr-Dn=0.3901
when the system stops out when the price touches the stop.
Good catch! I suspect this
issue relates to the precision issue below.
Exact and Not Exact
If I initialize both resistances to 474.90 (I snarfed it from your
metrics log - not sure where it comes from) then I can duplicate your
metrics log exactly.
The dates of the trades are the same but the equity values and some of
the later position sizes are not quite the same, since you are
apparently using less precision than I am.
For example, you sell 4200 units on 75-03-20 at 445.75. You report
equity at the close to be 1,000,629.46. The precise value of equity,
basis the closing price of 445.60, differs slightly.
Good catch. I trace the imprecision
to the way MS VC++ converts floats to doubles. I am now implementing a fix.
Define the Trend
In the description of the support/resistance system you said:
"This system uses two sets of S-R lines: (1) long-term support and
resistance to define the long-term trend and (2) short-term support
and resistance to define the short-term trend. When the long-term
trend is positive, the system enters the market with a stop just
above the short term resistance and then places a protective stop
below the short-term support."
- What is the definition of the long term trend being positive (or
negative)?
- How is it computed in terms of the relationship between the long
and short term support/resistance lines and price?
When the price trades above the
resistance, it defines the trend as up. The trend stays up until
the price trades below the support. This defines the trend as down.
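In code, the trend rule might look like this (a sketch; names illustrative):

// Price above long-term resistance flips the trend up; price below
// long-term support flips it down; otherwise the trend persists.
void updateTrend(int& nTrend, double price,
                 double slowResistance, double slowSupport)
{
    if (price > slowResistance) nTrend = +1;
    else if (price < slowSupport) nTrend = -1;
}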
Unrealistic Fills on Gap Openings
I think there may be a bug in your code for this system that produces
unrealistic fills on days where the opening price gaps through the stop.
An example occurs upon the exit of the first trade in the 140:20 run.
A short trade is in progress with a protective exit stop to buy at
440.10. On 05/14/1975 the opening price is 440.30. Your system awards a
fill of 440.20. This is based on 50% slippage between the order price of
440.10 and the high price of the day of 440.30.
In my opinion, when the open is through the order price it is more
realistic to treat the open price as the order price. In this case the
result is a fill of 440.30, since the opening and high price of the day
are the same. If the high is not equal to the opening price, the fill is
slipped the appropriate amount from the opening price.
* * * * *
I also want to share with you my opinion on how to treat the start of
trading with a system like this. I know the priming method for the
startup of any system is arbitrary, so there is likely no single best
method. However, there may be methods that work better for a given
system.
In your system, when the lookback period for a trend indicator is
greater than the available number of bars, the indicator uses however
many bars are available.
This means that for the first part of the test the system uses shorter-
term lookbacks than for the rest of the test. I see a potential problem
with this. It means that we are seeing results from a relatively short-
term system mixed in with results from our intended system. As we test
longer lookback periods, a greater portion of the overall trades are
generated by shorter than intended trend indicators.
One property of this method is that all system lookbacks share a
set of common trades at the beginning of the test period. When we get
to the optimization phase I hypothesize that this could produce results
that tend to make systems of different lookbacks appear more similar
than they are in actual trading experience.
As an alternative, I suggest taking no trades until the indicator with
the longest lookback has enough data to calculate. When we go to the
optimization phase I suggest comparing trades of all systems over a
common trading period determined by the system with the longest
lookback.
A potential problem with this method is that for very long lookbacks a
good bit of the available history is not used. In this case we seem to
have plenty of data (going back to 1975), and my opinion is that the
problem of inconsistent lookback periods within a single test and
between tests of different parameter values is more serious than the
problem of a shorter test history.
As you see, there are several opinions here. This perhaps illustrates
the ultimately discretionary nature of systematic trading.
Thanks for leading us through this exercise!
I plan to address the issues
surrounding waiting while metrics warm up when
we take up optimizations in more detail. For now, I am
concentrating on getting various systems to work.