Signal Quality: How We Verify Every Level in the Framework Independently Contributes

There is a failure mode that lives inside almost every multi-level indicator suite on the market. No one talks about it. Most vendors do not even know it exists.
The failure mode is this: one or two levels carry the entire result. The rest are noise.
The aggregate backtest looks fine. The win rate is positive. The expected value is respectable. So the vendor ships it, the marketing copy goes out, and the subscriber starts using all fourteen levels as if they are equally valid.
They are not. Two of the levels are doing all the work. The other twelve are coin flips dressed in colored lines. And the moment market conditions shift and those two levels stop working, the entire framework collapses. The subscriber does not know what happened. The vendor does not either, because they never decomposed the result.
I needed to make sure the Algorithmic Suite did not have this problem.
What level-type decomposition means
The Algorithmic Suite's Midnight Grid indicator produces 14 distinct price levels each trading session. These are not arbitrary. They are computed from overnight price data and published at midnight ET. Eight are structural levels derived from the prior session's range and key reference points. Six are Buy and Sell zone boundaries that define directional bias regions.
There was a fifteenth level — the NY Midnight Open — but I removed it from the framework. It is the ES settlement price. It is not proprietary. Every data vendor publishes it. Including it in a proprietary framework would have been dishonest, so I took it out.
That leaves 14 levels. And the question I needed to answer was whether all 14 contribute independently, or whether the aggregate result is hiding a concentration problem.
Level-type decomposition is the test. You isolate each level. You run the full backtest on that level alone, as if the other 13 did not exist. You look at the win rate, the expected value, the trade count, and the consistency. Then you compare them.
If one level shows 80% and another shows 52%, you do not have a framework. You have one good level and a lot of visual clutter.
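The decomposition itself is mechanically simple. A minimal sketch, assuming signal records carry a level name and a win/loss outcome (the data shape and function name here are illustrative, not the production pipeline):

```python
# Hypothetical sketch of level-type decomposition: run the same win-rate
# computation per level, as if the other levels did not exist, then
# compare the results.
from collections import defaultdict

def decompose_by_level(signals):
    """Group signals by level and compute each level's win rate in isolation.

    `signals` is an iterable of (level_name, won) pairs, where `won` is
    True if that signal's trade was profitable.
    """
    wins = defaultdict(int)
    counts = defaultdict(int)
    for level, won in signals:
        counts[level] += 1
        wins[level] += int(won)
    return {level: wins[level] / counts[level] for level in counts}

# Toy data: two levels, one slightly stronger than the other.
signals = (
    [("max_low_2", True)] * 7 + [("max_low_2", False)] * 3
    + [("max_high_1", True)] * 6 + [("max_high_1", False)] * 4
)
rates = decompose_by_level(signals)
spread = max(rates.values()) - min(rates.values())
print(rates)              # per-level win rates
print(round(spread, 2))   # spread between strongest and weakest level
```

The comparison at the end is the whole point: a small spread means a distributed edge, a large spread means one or two levels are carrying the rest.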
All 14 profitable. 5.3 percentage points of spread.
I ran the decomposition across 89,774 first-visit signals spanning 18 years of ES futures data. Every level was tested independently.
All 14 were profitable.
The strongest level was max_low_2 and the weakest was max_high_1, with 5.3 percentage points of win rate separating them. The other twelve fell inside that band.
That spread is remarkably tight.
To understand why this matters, consider what the alternative looks like. In a typical multi-level indicator, you might see one level at 78%, three levels hovering around 55%, and six levels that are functionally random. The aggregate looks reasonable — maybe 62% — but the edge is concentrated. It is fragile. It depends entirely on specific levels continuing to behave the way they did during the testing period.
A spread of barely 5 percentage points across 14 levels means no single level is dramatically better or worse than the others. The edge is distributed across the entire structure. The framework works as a cohesive system, not as one or two lines that happen to be useful surrounded by decoration.
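The danger of looking only at the aggregate can be shown with invented numbers. Here, a concentrated portfolio of levels and a distributed one produce the identical pooled win rate, and only per-level decomposition tells them apart (level names and figures are illustrative only):

```python
# Hypothetical illustration: the same aggregate win rate can come from a
# distributed edge or from one dominant level. All numbers are invented.

def aggregate_win_rate(levels):
    """levels: dict of level -> (wins, trades). Returns pooled win rate."""
    wins = sum(w for w, _ in levels.values())
    trades = sum(t for _, t in levels.values())
    return wins / trades

concentrated = {"lvl_a": (78, 100), "lvl_b": (55, 100), "lvl_c": (53, 100)}
distributed = {"lvl_a": (64, 100), "lvl_b": (62, 100), "lvl_c": (60, 100)}

print(aggregate_win_rate(concentrated))  # 0.62
print(aggregate_win_rate(distributed))   # 0.62
# Identical aggregates; only decomposition reveals the difference.
```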
No directional bias
One of the things I check in every decomposition is whether the framework has a hidden directional lean.
Many indicator suites — especially those tested primarily during the 2009-2021 bull market — carry a long bias. They generate more bullish signals, or their bullish signals are materially stronger than their bearish ones. The backtest looks great because the market went up for twelve years. When a sustained downtrend arrives, the framework quietly stops working and no one understands why.
The Algorithmic Suite's signal balance across the 18-year dataset: 50.4% bullish, 49.6% bearish.
That is structurally neutral. The framework does not lean long. It does not lean short. It reads the market in both directions with nearly identical frequency.
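The balance check itself is a one-pass count. A minimal sketch, with a toy sample sized to match the reported split (the function name is an assumption, not the production code):

```python
# Hypothetical sketch of a directional-balance check: count bullish vs
# bearish signals and express each side as a share of the total.
from collections import Counter

def signal_balance(directions):
    """directions: iterable of 'bullish' / 'bearish' labels.
    Returns each direction's share of the total signal count."""
    counts = Counter(directions)
    total = sum(counts.values())
    return {d: n / total for d, n in counts.items()}

# Toy sample roughly matching the reported 50.4% / 49.6% split.
sample = ["bullish"] * 504 + ["bearish"] * 496
print(signal_balance(sample))  # {'bullish': 0.504, 'bearish': 0.496}
```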
The directional confluence results reinforce this. When a bullish signal fires on a day the market moves up, the win rate is significantly elevated. When a bearish signal fires on a day the market moves down, the win rate is elevated by a similar margin. The framework performs at its best when direction aligns, and it does so symmetrically. There is no built-in bias waiting to fail when the trend changes.
Signal decay — or the absence of it
Most levels in most indicator frameworks suffer from signal decay. The first time price reaches a level, the reaction is strongest. The second visit is weaker. By the third or fourth visit, the level is spent. The market has already absorbed whatever information that level contained.
I tested for this by tracking visit number across all 14 levels.
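Tracking visit number means tagging each touch of a (session, level) pair with its position in sequence, so win rates can later be grouped by visit. A minimal sketch under that assumption (the data shape is illustrative):

```python
# Hypothetical sketch of visit-number tagging: for each (session, level)
# pair, number the touches in chronological order so results can be
# grouped by visit number.
from collections import defaultdict

def tag_visit_numbers(touches):
    """touches: chronological list of (session, level) events.
    Returns a parallel list of visit numbers (1 = first visit)."""
    seen = defaultdict(int)
    numbers = []
    for session, level in touches:
        seen[(session, level)] += 1
        numbers.append(seen[(session, level)])
    return numbers

touches = [
    ("s1", "max_low_2"),
    ("s1", "max_low_2"),
    ("s1", "max_high_1"),
    ("s1", "max_low_2"),
]
print(tag_visit_numbers(touches))  # [1, 2, 1, 3]
```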
Visit 1 is profitable. Visits 4 and beyond are slightly more profitable, not less.
That is not the expected result. In most frameworks, signal decay is a given. Levels wear out. The conventional explanation is sound: the first interaction contains the most informational value, and each subsequent touch reduces the surprise factor. Participants have already seen the level, already positioned around it, and the reaction weakens.
The Midnight Grid levels do not behave this way. Later visits do not degrade. If anything, they improve. The explanation may be structural — these levels are derived from a fixed mathematical relationship to the prior session, and their validity does not depend on surprise or novelty. They define structural boundaries, and price continues to respect structural boundaries regardless of how many times it has visited them.
For a subscriber, this means something practical. A level that price has already touched is not a used-up level. It remains valid decision support.
Multi-session memory: T-levels
The Midnight Grid recalculates every session. New levels at midnight ET based on the most recent data. But markets do not reset their memory at midnight. Levels from prior sessions often remain relevant — sometimes for days.
I tested this by carrying forward levels from prior sessions as "T-levels." T-1 levels are from the previous session. T-2 from two sessions ago. All the way through T-5.
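Mechanically, the carry-forward means that each session's tradable level set includes not just today's grid but the grids of the previous sessions, each tagged by age. A minimal sketch of that tagging, with invented level values (the function name and data shape are assumptions):

```python
# Hypothetical sketch of T-level carry-forward: attach each prior
# session's levels to the current session, tagged by their age in
# sessions (age 0 = today's grid, age 1 = T-1, and so on).

def carry_forward(session_levels, max_age=5):
    """session_levels: list of per-session level lists, in order.
    Yields (session_index, age, level) tuples."""
    for i, _ in enumerate(session_levels):
        for age in range(min(i, max_age) + 1):
            for level in session_levels[i - age]:
                yield i, age, level

# Three sessions, one toy level each.
sessions = [[100.0], [101.5], [99.75]]
tagged = list(carry_forward(sessions, max_age=2))
print(tagged)
```

Session 2 then carries its own level plus the T-1 and T-2 levels, which is exactly the grouping the T-level test evaluates.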
All T-level groups were profitable and consistent with the overall framework.
This confirms that the framework has multi-session memory. Levels do not expire at midnight. Prior session structure continues to influence price behavior for multiple days. The framework captures this persistence, and the T-level results are strong enough to constitute independent decision support.
For context, many level-based frameworks are strictly intraday. The levels are computed for a single session and discarded. The Algorithmic Suite's levels remain valid across sessions, which means the information density available to the subscriber is not limited to today's computation. It includes the structural context of the prior week.
Zone distribution
One of the subtler tests is zone distribution. The 14 levels define seven zone categories — regions between adjacent levels where price can reside. A healthy framework populates all zones. If price consistently clusters in two or three zones and never visits the others, those untouched zones are theoretical constructs, not useful levels.
All seven zone categories in the Midnight Grid are populated. Price visits every region of the structure across the 18-year dataset. There are no empty buckets. Every zone sees traffic, which means every level that defines a zone boundary is operationally relevant.
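Classifying a price into a zone reduces to finding where it falls among the sorted level prices. A minimal sketch using a binary search, with invented level values (the real zone definitions are more specific than this):

```python
# Hypothetical sketch of zone classification: given the session's sorted
# level prices, the zone index is the number of levels at or below the
# price. Counting these indices across sessions gives the occupancy of
# each zone.
import bisect

def zone_index(levels_sorted, price):
    """Return 0 for below all levels, len(levels_sorted) for above all,
    and k for a price between levels k-1 and k."""
    return bisect.bisect_right(levels_sorted, price)

levels = [4500.0, 4510.0, 4525.0]
print(zone_index(levels, 4495.0))  # 0: below the lowest level
print(zone_index(levels, 4515.0))  # 2: between the 2nd and 3rd levels
print(zone_index(levels, 4530.0))  # 3: above the highest level
```

An empty bucket in the resulting occupancy counts would mean a zone price never visits, which is exactly the failure this test rules out.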
What this means for the framework
Level-type decomposition is not a marketing exercise. It is a structural integrity test.
The Index Futures Research Behind the Algorithmic Suite describes the full scope of the research — the 6.3 million bars, the verification pipelines, the scale of the computation. The subsample stability analysis shows that the edge holds across random slices of the dataset. The machine learning base rate analysis demonstrates that no ML model can outperform the framework's base rate from market structure features alone.
This post answers a different question. Not "does it work?" but "is the work distributed?"
The answer matters because distribution is robustness. A framework where every component contributes independently is a framework that does not depend on any single configuration surviving unchanged. If market conditions shift in a way that weakens one level, the other 13 still carry positive expected value. The edge does not collapse. It narrows slightly and continues.
That is not a guarantee. Nothing in probabilistic decision support is guaranteed. But a distributed edge is structurally different from a concentrated one. It is the difference between a foundation supported by fourteen load-bearing columns and one held up by two.
What I would want to know if I were evaluating this
If I were a subscriber evaluating the Algorithmic Suite, these are the numbers I would want to see.
14 independent levels. All 14 profitable. Win rates spanning a narrow range of barely 5 percentage points. Signal balance: 50.4% bullish, 49.6% bearish. Signal decay: none — later visits are slightly stronger, not weaker. Multi-session carry-forward levels: profitable and consistent with the overall framework. Zone distribution: fully populated, no dead regions.
These numbers come from 89,774 first-visit signals across 18 years of ES futures data. They are not hypothetical. They are not optimized. They are the result of running every level through the same backtest independently and publishing the outcome.
That is the standard I hold this framework to. Every level earns its place on the chart, or it gets removed.
The Algorithmic Suite is available for a 7-day free trial on TradingView. Start your free trial here.
The Algorithmic Suite is decision support for index futures research. It is not financial advice. It does not generate trade signals. Past performance, including all statistics cited in this post, does not guarantee future results. All trading involves risk. See full disclaimer at algorithmic.io.


