Trap & Structure Coach internals.
Lesson 16 introduced the 10-source confluence merger as a black box: ten candidate levels in, fewer merged levels out, weight-3+ clusters drive the framework's structural reads. This lesson opens the box. The actual algorithm in structure_geometry.py is straightforward — pairwise merging until convergence — but the calibration is the interesting part. Why 0.15× ATR specifically? Why those ten sources and not eight or fifteen? And what does the trap detector actually look for inside the merged level set? The answers shape every trade the framework approves or refuses.
The merger algorithm in code
The merger runs in three passes:
```python
# Pass 1: gather all candidate levels (some sources produce multiple
# values, so the raw list runs ~10-15 entries from the 10 sources)
candidates = [pivot_R1, pivot_R2, pivot_S1, pivot_S2,
              prev_day_high, prev_day_low,
              prev_week_high, prev_week_low,
              sma_20, sma_50, sma_200,
              vwap_session, vwap_5d,
              *hvn_top3, *lvn_top3,
              *gap_edges_recent, *swept_levels,
              *round_numbers_in_range]

# Pass 2: pairwise merge under 0.15x ATR, one sorted sweep
band = 0.15 * atr_14
clusters = []
for level in sorted(candidates):
    if not clusters or level - clusters[-1].avg_price > band:
        clusters.append(Cluster(level))
    else:
        clusters[-1].add(level)

# Pass 3: re-merge -- clusters whose centers drifted within band
# during Pass 2 sometimes overlap after avg recalculation
for c1, c2 in adjacent_pairs(clusters):
    if abs(c1.avg_price - c2.avg_price) <= band:
        merge(c1, c2)
```
Output: each surviving cluster has an avg_price (weighted average of contributing source prices) and a weight (count of sources). The framework's structural reads operate on weight ≥ 3 clusters; weight-2 are stored but not surfaced; weight-1 are computed but discarded after merging.
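The three passes can be made concrete in a short, self-contained sketch. The `Cluster` class and `merge_levels` helper below are assumptions for illustration, not the actual `structure_geometry.py` implementation; Pass 3 is written as a linear re-sweep over adjacent clusters:

```python
class Cluster:
    """One merged level: contributing prices, running average, source count."""
    def __init__(self, price):
        self.prices = [price]

    def add(self, price):
        self.prices.append(price)

    @property
    def avg_price(self):
        return sum(self.prices) / len(self.prices)

    @property
    def weight(self):
        return len(self.prices)


def merge_levels(candidates, atr_14, band_mult=0.15):
    band = band_mult * atr_14

    # Pass 2: single sorted sweep, merging levels within the band
    clusters = []
    for level in sorted(candidates):
        if not clusters or level - clusters[-1].avg_price > band:
            clusters.append(Cluster(level))
        else:
            clusters[-1].add(level)

    # Pass 3: re-merge adjacent clusters whose recalculated
    # averages drifted back within the band
    merged = [clusters[0]] if clusters else []
    for c in clusters[1:]:
        if abs(c.avg_price - merged[-1].avg_price) <= band:
            merged[-1].prices.extend(c.prices)
        else:
            merged.append(c)
    return merged


clusters = merge_levels([100.0, 100.1, 100.15, 103.0, 103.05, 110.0],
                        atr_14=2.0)
structural = [c for c in clusters if c.weight >= 3]  # weight-3+ only
```

On this synthetic input (band = 0.3), the six candidates collapse to three clusters of weight 3, 2, and 1, and only the weight-3 cluster survives the structural filter, mirroring the weight ≥ 3 rule above.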
Why 0.15× ATR specifically
This was empirical — backtested across 200 large-cap names over 5 years, varying the merge band from 0.05× to 0.40× ATR. Three measurements per band setting:
- False-merge rate: distinct levels collapsed into one cluster when they shouldn't have been. Higher band = more false merges.
- Missed-confluence rate: genuine clusters of 3+ sources where one source landed just outside the band, leaving the cluster at weight-2. Lower band = more missed confluences.
- Predictive value: probability that the next 3-day price action respected the merged level (within 0.5× ATR).
0.15× was the maximum-predictive-value point. Below 0.10×, missed-confluence rate climbed sharply (band too tight, clusters fragmented). Above 0.20×, false-merge rate rose without offsetting predictive gain. 0.15× is the local optimum — and it's been stable across recalibrations because the underlying microstructure (retail order-book clustering at "approximately the same price") is itself stable.
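The missed-confluence failure mode is easy to reproduce on synthetic prices. In this toy sketch (simplified single-pass merge, illustrative prices only, not backtest data), three sources landing "approximately" at $250 fragment into weight-1/weight-2 pieces under a tight 0.08× band but merge into one weight-3 cluster at 0.15×:

```python
def cluster_weights(levels, band):
    """Single-pass sorted merge; returns the source count per cluster."""
    clusters = []  # each cluster is a list of contributing prices
    for lv in sorted(levels):
        if clusters and lv - sum(clusters[-1]) / len(clusters[-1]) <= band:
            clusters[-1].append(lv)   # within band of the running average
        else:
            clusters.append([lv])     # too far: start a new cluster
    return [len(c) for c in clusters]

atr_14 = 2.0
# Three sources landing near $250: 249.80 / 250.00 / 250.05
levels = [249.80, 250.00, 250.05]

tight = cluster_weights(levels, 0.08 * atr_14)  # band = 0.16 -> [1, 2]
tuned = cluster_weights(levels, 0.15 * atr_14)  # band = 0.30 -> [3]
```

At 0.08× the genuine confluence never reaches weight 3, which is exactly the missed-confluence rate the calibration measured; widening past ~0.20× starts gluing genuinely distinct levels together instead.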
Why those 10 sources, not 8 or 15
Each source category contributes a different kind of information. Adding more sources within a category piles on correlation without adding information; dropping a category loses an information type outright.
| Source category | Information type | Examples in the set |
|---|---|---|
| Mechanical | Computed from prior session — same for every trader | Pivot points, prev-day H/L, prev-week H/L |
| Smoothed price | Time-averaged consensus levels | SMA 20/50/200 |
| Volume-weighted | Where volume actually transacted | VWAP, HVN/LVN from volume profile |
| Event-driven | Discrete points of recent flow | Gap edges, swept levels |
| Psychological | Round-number / cognitive anchoring | $100, $250, $500, etc. |
Five categories, two sources from each, ten sources in total. Adding a sixth category (e.g., Fibonacci levels — purely mathematical, no real-flow grounding) adds correlation without adding a new information type. Removing a category (e.g., dropping volume-weighted) loses real flow data. The 10-source set is the local optimum on information diversity per unit of computational cost.
The trap detector's specific patterns
The Trap & Structure Coach doesn't just identify confluence — it specifically looks for traps: setups that look structurally clean but have a hidden tell that flips them from tradeable to dangerous. The patterns it explicitly checks:
- Bull-stack distribution — price rising with HT score < 4 and OBV falling. The cluster is real but the buyers aren't. Lesson 17.
- Gravestone at confluence resistance — long upper wick on volume at a weight-3+ resistance level. Sellers aggressive at the level, even though the candle "looked OK" mid-day.
- Sweep aftermath conflict — sweep_clean = false (Lesson 18) but the proposed entry is at the swept level. Stops cleared, the level no longer reliable.
- Negative gamma at the level — heavily-optioned name with the proposed entry just below a negative gamma wall (Lesson 27). The wall predicts amplification on a break, which means the stop is structurally too tight.
When any pattern fires, the audit card surfaces a trap chip naming the specific pattern. An override exists; the journal records each use.
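The four checks can be sketched as a single pass over pre-computed signals. Every field name and threshold below is an assumption for illustration (the framework's real signal names and cutoffs are not shown in this lesson); the logic just mirrors the four patterns listed above:

```python
def detect_traps(setup):
    """Run the four trap checks against a proposed entry.

    `setup` is a dict of pre-computed signals; field names and
    thresholds here are illustrative, not the framework's actual API.
    """
    traps = []

    # 1. Bull-stack distribution: price rising, HT score weak, OBV falling
    if setup["price_rising"] and setup["ht_score"] < 4 and setup["obv_falling"]:
        traps.append("bull-stack distribution")

    # 2. Gravestone at confluence resistance: long upper wick on volume
    #    at a weight-3+ resistance cluster
    if (setup["at_weight3_resistance"]
            and setup["upper_wick_ratio"] > 2.0      # wick >= 2x body (assumed cutoff)
            and setup["volume_above_avg"]):
        traps.append("gravestone at resistance")

    # 3. Sweep aftermath conflict: entry sits at a level whose stops
    #    were just cleared by a non-clean sweep
    if not setup["sweep_clean"] and setup["entry_at_swept_level"]:
        traps.append("sweep aftermath conflict")

    # 4. Negative gamma at the level: entry just below a negative gamma
    #    wall, so a break would be amplified and the stop sits too tight
    if setup["neg_gamma_wall_above"] and setup["entry_near_wall"]:
        traps.append("negative gamma at the level")

    return traps
```

A setup with rising price, HT score 3, and falling OBV fires exactly one chip, "bull-stack distribution"; raise the HT score to 6 and the same setup passes clean.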
What changes when you read the internals
Two practical shifts for the trader:
- You stop over-trusting weight-2 clusters. They are not the framework's structural reads: the dashboard sometimes still shows them, but that doesn't make them tradeable. The audit reads only weight-3+ clusters.
- You read trap chips as specific patterns, not generic warnings. "Trap: bull-stack distribution" means a specific HT/OBV signature; "trap: gravestone at resistance" means a specific candle anatomy. Knowing the patterns lets you read other charts manually for the same signature.
The real lesson
The confluence merger is mechanically simple, but its calibration carries the framework's empirical edge. 0.15× ATR isn't arbitrary: it's the local optimum on predictive value across 5 years and 200 names. The 10-source set covers five distinct information categories, and adding correlated sources doesn't help. The trap detector's specific pattern names tell you what the framework refuses and why, and let you read other charts manually for the same signatures. The whole machinery exists to keep weight-1 noise from polluting decisions and to surface the specific trap signatures that look clean in retail chart-reading but consistently fail in disciplined backtests.
Related: L16 — confluence merger · L17 — hidden tape · L18 — sweep detection