
The Research Behind the Algorithmic Suite


I have written before about why I built Algorithmic. About the decade-plus of research. About the frustration with retail noise. About the gap between what most indicators promise and what they actually do under live conditions.

But I have not written about the depth of the research itself.

Not the theory. Not a single session's results. The actual scope of what was built, computed, and verified before any of this ever appeared on a chart.

This post is about that.

What real testing requires

Here is what "testing" looks like in most of the trading indicator space.

Someone builds a tool. They scroll to a chart where it happened to look clean. They screenshot it. They post it. Then they ask for your money.

That is not testing. That is selection bias with a caption.

Real testing requires building independent verification systems. Running the same logic across years of raw market data — not just the days that went well. Comparing outputs across separate pipelines and looking for anything that does not match. And applying the same statistical rigor that any quantitative fund would use before risking real capital.

That is the process the Algorithmic Suite went through.

The data foundation: 18 years, two timeframes, 6.3 million bars

Everything starts with the data.

The research runs on ES futures — the E-mini S&P 500, the most actively traded index futures contract. Building a credible foundation meant starting with granular, uninterrupted data going as far back as the records allow.

The dataset spans January 2008 through early 2026. Nearly two decades.

The raw input: 6,373,158 individual 1-minute OHLC bars from 77 quarterly ES futures contract files, stitched into one continuous series using volume-based roll detection. The framework was also validated against 5-minute data going back to 2008 — a second, independent dataset requiring its own pipeline and its own verification layer.
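As an illustration of how a volume-based roll can work, here is a minimal sketch: the continuous series switches to the next quarterly contract once that contract's volume overtakes the expiring one's. This is a generic version of the technique, not the framework's exact rules; the function name, the confirmation window, and the sample volumes are assumptions for the example.

```python
# Generic sketch of volume-based roll detection for stitching quarterly
# futures contracts into one continuous series. Illustrative only -- not
# the exact methodology used by the Algorithmic research framework.

def detect_roll(front_volume, next_volume, confirm_days=1):
    """Return the index of the first day on which the next contract's
    volume exceeds the front contract's for `confirm_days` days in a row,
    or None if no roll is detected in this window."""
    streak = 0
    for i, (fv, nv) in enumerate(zip(front_volume, next_volume)):
        streak = streak + 1 if nv > fv else 0
        if streak >= confirm_days:
            return i - confirm_days + 1
    return None

# Example: daily volumes for the expiring (front) and next quarterly contract
front = [1_200_000, 1_100_000, 900_000, 400_000, 150_000]
nxt = [100_000, 300_000, 950_000, 1_300_000, 1_500_000]
roll_index = detect_roll(front, nxt)  # index 2: next contract overtakes front
```

From that index onward, bars are taken from the next contract, which is how 77 separate files become one unbroken series.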

Two timeframes. One unbroken dataset. 18 years of raw market data with no cherry-picked starting or ending point.

But the scale of the data matters less than what it contains. Because 18 years of ES futures is not just a large number. It is a complete map of modern market history.

18 years means 18 different markets

This is the part that most indicator testing quietly skips.

A framework tested on the last two years is a framework that was tuned on exactly the conditions it grew up in. The moment those conditions shift, the edge disappears. You find out the hard way.

Testing across 18 years forces the research to confront every market environment that has existed in the modern era.

2008–2009: The financial crisis and its aftermath. The fastest-moving, most violent market conditions in a generation. Circuit breakers triggering. Gaps measured in dozens of points. VIX reaching 80.

2010–2012: The European debt crisis. Multiple flash crashes. A US credit rating downgrade. Repeated spikes of volatility inside a generally recovering market. The kind of environment where patterns that look clean in a backtest start to crack.

2013–2015: The low-volatility QE era. The opposite problem. Compressed ranges, subdued volume, a market that drifted upward with almost no meaningful pullbacks. Tools built for volatility struggle in these conditions.

2016: Brexit and the election shock. Two separate overnight gap events that moved ES by dozens of points in hours. If a framework cannot handle surprise gaps, this is where you find out.

2017: Historically calm. The VIX hit all-time lows. This is the graveyard of strategies that need volatility to work.

2018: The return of volatility. A rate-hike cycle, a Q4 selloff of nearly 20%, and the February vol spike — one of the sharpest single-week drawdowns in years.

2019: The trade war and the pivot. A year defined by headline risk, whipsaw moves on tariff announcements, and then an abrupt Fed pivot and year-end melt-up.

2020: COVID. The fastest bear market in history — 35% in 23 trading days. Followed immediately by a V-shaped recovery unlike anything the modern market had produced. Circuit breakers multiple times in a single week. Then, somehow, new all-time highs by August.

2021: The meme era. Retail participation at all-time highs. Gamma squeezes. Compressed realized volatility in the index despite extraordinary single-stock moves. A unique and difficult environment to model.

2022: The rate hike cycle. The fastest pace of Fed tightening in 40 years. A 27% bear market in the S&P. The highest inflation since 1981. Sustained directional pressure unlike anything seen since 2008.

2023: Banking contagion and the AI emergence. SVB and Credit Suisse collapses. Then an abrupt pivot to risk-on as AI enthusiasm rewrote the narrative mid-year.

2024–2026: Soft landing, rate cuts, election year volatility, and the current regime.

A framework that only works in one of those environments is not a framework. It is a coincidence.

The Algorithmic Suite research was run across all of them.

Three indicators. Three independent verification engines. Three comparison layers.

For each indicator in the Algorithmic Suite — Midnight Grid, Quantum Vision, and Turning Points — a completely independent computation engine was built outside of TradingView.

No shared code with the PineScript. No connection to the TradingView platform. A different language, a different data source, a different timezone and calendar library.

The principle is simple. If an indicator is doing real, deterministic math, that math should produce the same output regardless of where it is run. If results differ between implementations, something is wrong and needs to be found.

Each indicator was verified across three independent layers:

Layer 1 — The live TradingView indicator. What the user sees on the chart. The PineScript implementation running in real time.

Layer 2 — The independent Python engine. A complete rebuild of the indicator logic from raw OHLC data. Nothing shared with Layer 1.

Layer 3 — The database. The stored, computed values powering the Algorithmic platform. A third independent output verified against both.

All three must agree. If they do not, the work is not done.
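The agreement check itself is conceptually simple and can be sketched in a few lines. This is an illustrative version, not the production code; the argument names, the tick-based tolerance, and the sample values are assumptions for the example.

```python
# Sketch of a three-way verification check: the same computed level from
# three independent pipelines must agree before it counts as verified.
# Illustrative only -- the real system's tolerance rules are not published.

TICK = 0.25  # ES futures minimum price increment

def three_way_match(pine_value, python_value, db_value, tol=TICK / 2):
    """True only if all three independently computed values agree
    to within the tolerance."""
    values = [pine_value, python_value, db_value]
    return max(values) - min(values) <= tol

# A level computed identically by all three layers passes:
assert three_way_match(4501.25, 4501.25, 4501.25)
# Any disagreement fails the check and flags the point for investigation:
assert not three_way_match(4501.25, 4501.50, 4501.25)
```

Run across every stored level, signal, and marking, a check like this is what turns "the indicator looks right" into a verifiable claim.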

Across over 350,000 individual verified data points — levels, signals, and markings spanning 18 years of data — all three layers agreed on every indicator.

The permutation space: from millions of trade evaluations to tens of billions of analytical combinations

This is where the scale of the research becomes genuinely difficult to communicate.

The backtesting framework processes every instance where the indicators in the Algorithmic Suite converge — where a key level and a reversal signal appear at the same location. For each instance, it simulates what happens next.

Across the full 18-year ES 1-minute dataset, the framework identified 89,774 qualifying first-visit signal interactions across 4,721 trading sessions.

Each of those 89,774 interactions was then evaluated under 45 different target and stop combinations — varying the take-profit from 3 points to 15 points and the stop-loss from 2 points to 6 points. Every combination. Every signal.

That is over 4 million individual trade evaluations from a single model run on a single timeframe.
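The core of that evaluation loop can be sketched simply. The exact spacing of the 45 combinations is not published; the grid below (9 targets by 5 stops) is one plausible layout that yields 45, and the bar-walking logic and tie-break rule are assumptions for the example.

```python
# Sketch of a target/stop grid evaluation: for one signal, walk forward
# bar by bar and record which boundary is hit first. Illustrative only --
# the actual framework's grid spacing and tie-break rules are assumptions.

def simulate_trade(entry, highs, lows, direction, tp_pts, sl_pts):
    """Walk forward bar by bar. Return +tp_pts if the target is hit,
    -sl_pts if the stop is hit, or None if neither resolves.
    `direction` is +1 for long, -1 for short."""
    target = entry + direction * tp_pts
    stop = entry - direction * sl_pts
    for hi, lo in zip(highs, lows):
        hit_tp = hi >= target if direction == 1 else lo <= target
        hit_sl = lo <= stop if direction == 1 else hi >= stop
        if hit_sl:          # conservative tie-break: count the stop first
            return -sl_pts
        if hit_tp:
            return tp_pts
    return None

# One plausible 45-combination grid: 9 targets x 5 stops
targets = [3 + 1.5 * i for i in range(9)]   # 3.0 .. 15.0 points
stops = [2, 3, 4, 5, 6]                     # 2 .. 6 points
grid = [(tp, sl) for tp in targets for sl in stops]
assert len(grid) == 45
```

Resolving the stop first when both boundaries fall inside the same bar is the conservative choice: it biases the simulation against the strategy rather than for it.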

But the evaluation did not stop there.

The framework then cross-examines every one of those interactions across every analytical dimension we could construct from the data:

  • 15 Midnight Grid level types — each with its own position in the session structure

  • 2 Turning Points signal directions — bullish and bearish

  • 7 Quantum Vision signal configurations — including triangle types and level markings

  • 5 days of the trading week

  • 12 months of the year

  • 8 distinct session time windows — from the Asian session through the European open, pre-market, the RTH open, midday, RTH close, and after-hours

  • 7 individual hours within regular trading hours

  • 2 bar timeframes — 1-minute and 5-minute

  • 5 volatility regimes — from historically calm to extreme

  • 6 level-age categories — today's levels through levels carried forward from the previous five sessions

  • 2 volume states — above and below the 20-period moving average

  • 3 market direction states — trending up, trending down, moving flat

  • 5 proximity distance bands — how close price was to the level at signal time

When you multiply those dimensions together and cross them with the 45 target and stop combinations, the full analytical permutation space the framework is built to interrogate exceeds 50 billion unique combinations.
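The arithmetic behind that figure can be checked directly. The dimension counts below are taken from the list above; crossing them with the 45 target/stop combinations described earlier is what carries the total past 50 billion.

```python
# Checking the permutation arithmetic: the thirteen analytical dimensions
# listed above, crossed with the 45 target/stop combinations.

from math import prod

dimensions = {
    "grid_level_types": 15, "signal_directions": 2, "qv_configurations": 7,
    "weekdays": 5, "months": 12, "session_windows": 8, "rth_hours": 7,
    "bar_timeframes": 2, "volatility_regimes": 5, "level_ages": 6,
    "volume_states": 2, "direction_states": 3, "proximity_bands": 5,
}

analytical = prod(dimensions.values())  # product of the 13 dimensions
full_space = analytical * 45            # crossed with 45 target/stop combos
print(f"{analytical:,} x 45 = {full_space:,}")
# -> 1,270,080,000 x 45 = 57,153,600,000
```

The thirteen dimensions alone multiply to about 1.27 billion; with the target/stop grid included, the space is roughly 57 billion.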

Not all 50 billion produce statistically meaningful sample sizes. But the framework was designed to answer questions across all of them — not to cherry-pick the slices that looked good and present them as the whole.

That is the difference between research and marketing.

The metrics we track

If you have spent time around quantitative trading research, you already know that win rate alone is not a meaningful statistic. It needs context. It needs error bounds. It needs to hold under adversarial testing conditions.

Every result from the Algorithmic Suite research framework is evaluated across a full battery of institutional-grade quantitative metrics:

Return metrics
Win Rate, Profit Factor, Expected Value per trade (in points and dollars), Net Expected Value after all friction (commission and slippage), annual PnL, PnL per trading day.

Risk-adjusted metrics
Sharpe Ratio, Sortino Ratio, Maximum Drawdown, Maximum Consecutive Losses, Profitable Days percentage.

Statistical validation
Monte Carlo permutation testing (2,000 permutations), session-level bootstrap confidence intervals (5,000 resamples), 10-fold time-series cross-validation, walk-forward out-of-sample testing, Bonferroni multiple-comparison correction across all 45 target/stop combinations.

Robustness tests
Subsample stability testing (dropping 60% of data at random and measuring win rate variance), roll boundary contamination check, same-bar tie-break impact measurement, regime-conditional analysis across 5 volatility environments, market direction neutrality test across trending and flat markets.

Statistical integrity checks
Augmented Dickey-Fuller stationarity test (equity curve and returns), Autocorrelation Function analysis (Ljung-Box and Runs tests), Random Forest and XGBoost cross-validation against the base rate, SHAP feature importance analysis, equity curve R² and slope regression.

Signal quality metrics
Maximum Adverse Excursion measured at the exact bar of trade resolution, Maximum Favorable Excursion, signal decay analysis across session position, level-type decomposition across all 15 Midnight Grid levels, T-level performance by age (T-1 through T-5), volume decile breakdown across 10 buckets.

That is not a partial list. Every result produced by the framework goes through all of it.
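To make one of those tests concrete, here is a sketch of a Monte Carlo permutation test of the kind listed above, written as a sign-randomization test of whether mean PnL beats chance. The data are synthetic and the implementation is illustrative, not the framework's actual code.

```python
# Sketch of a Monte Carlo permutation test on trade PnLs: randomly flip
# the sign of each outcome to simulate a no-edge null, and measure how
# often chance alone matches the observed mean. Illustrative only.

import random

def permutation_pvalue(trade_pnls, n_permutations=2000, seed=42):
    """P-value: fraction of sign-randomized resamples whose mean PnL
    meets or exceeds the observed mean."""
    rng = random.Random(seed)
    observed = sum(trade_pnls) / len(trade_pnls)
    hits = 0
    for _ in range(n_permutations):
        flipped = [p if rng.random() < 0.5 else -p for p in trade_pnls]
        if sum(flipped) / len(flipped) >= observed:
            hits += 1
    return hits / n_permutations

# Synthetic example: 500 trades with a mild positive edge
rng = random.Random(0)
pnls = [rng.gauss(0.5, 3.0) for _ in range(500)]
p_value = permutation_pvalue(pnls)
```

A small p-value says the observed expected value is very unlikely under a no-edge null; the research framework reports a p-value below 0.0005 on its own data.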

What the research showed

Across 18 years, every market regime, and tens of billions of analytical permutations, the conclusion is consistent.

The Algorithmic Suite works.

Midnight Grid levels are active. Price interacts with them in statistically consistent, measurable ways across all 18 years of data — in bull markets and bear markets, in calm periods and in crisis conditions, in trending environments and in flat ones. Every level in the system contributes positively. The edge is distributed, not concentrated in one or two lucky configurations.

Turning Points signals, when they appear near Midnight Grid levels, identify genuine inflection points. The win rate holds across every market regime, every day of the week, every month of the year, and every session hour tested. Machine learning models trained on all available features — time, direction, volume, proximity, level type, session position — cannot outperform the base rate by a single percentage point. The edge is structural. It is in the signal itself, not in any metadata filter applied after the fact.

Quantum Vision markings track real-time session structure and reproduce exactly when independently computed.

The 10-fold time-series cross-validation produced positive results in all 10 folds. The walk-forward out-of-sample test showed the edge holding in every year used as the test period. The Monte Carlo permutation test returned a p-value below 0.0005. The session-level bootstrap showed a 100% probability that expected value is positive.

The framework was stress-tested in every way we could construct. None of the stress tests broke the edge.

Why this matters for you

Most indicator vendors do not publish their research process. Some do not have one. They trust their PineScript, hope it works, and move on to the next marketing cycle.

Testing at this scale is not fast. It is not glamorous. It occasionally reveals things that require starting over. It is exactly the kind of work that is easy to skip when you are in a hurry to go to market.

I did not skip it.

Not to build a brand story. Because I was not willing to put something in front of traders unless I was confident in what it was doing and why.

If you are a serious futures trader, you should ask this question of every tool you consider: how was this tested? Over what timeframe? Across what market conditions? With what methodology? Can the results be reproduced independently?

If the answer is screenshots from a good week, that is your answer.

If the answer is 6.3 million price bars, three independent verification pipelines, over 4 million individual trade evaluations, and a validation framework that covers tens of billions of analytical permutations across 18 years of every market environment the modern era has produced — that is a different kind of answer.

The Algorithmic Suite

Midnight Grid. Quantum Vision. Turning Points.

Three indicators. One framework. Built on research that earned the right to exist.

Available on your TradingView charts today.

Algorithmic is charting software for decision support on TradingView. It is not financial advice. Trading involves risk. Outcomes depend on your rules, risk management, and execution. Past performance does not guarantee future results.