◆ AI SURFACES (V5.7)

The six AI surfaces in Swing Deck

v5.7 ships six per-ticker AI surfaces. Each fires on a specific framework moment, narrates in the framework's voice, and stays out of the way until it's needed. There is no chat box and no free-text prompt by design. For the philosophy behind this decision, read Six AI surfaces, zero chat boxes. This page is the reference for what each surface does, when it appears, and what data it sees.

Surface-bound principle

Every AI surface in Swing Deck obeys five rules:

  1. Triggered, not summoned. The user doesn't choose when AI appears. The framework does.
  2. Bounded inputs. Each surface receives only the data relevant to the moment it's narrating. There is no global context window of "the user's entire trading history."
  3. Bounded outputs. Every response has a known structure (paragraph length, point count, mandatory anchors). No drift, no advice-sounding language, no speculation.
  4. Auditable. Every output traces to defined inputs. The same inputs produce a similar output. Cache fingerprints invalidate only when inputs materially change.
  5. Logged. Every successful generation appends a record to a single append-only JSONL log (ai_thesis_log.jsonl) with a kind discriminator. One source of truth, queryable from the dashboard's history modal.
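A minimal sketch of an append to that log. Only the documented fields (a kind discriminator plus ticker, timestamp, score, state, and the narration text) are grounded; the exact key names are assumptions:

```python
import json
import time

LOG_PATH = "ai_thesis_log.jsonl"  # the single append-only log named in the docs

def log_generation(kind: str, ticker: str, score: float, state: str, text: str) -> None:
    """Append one generation record; 'kind' discriminates the six surfaces."""
    record = {
        "kind": kind,        # e.g. "thesis", "pillar_coach", "devils_advocate", ...
        "ticker": ticker,
        "ts": time.time(),   # timestamp key name is an assumption
        "score": score,
        "state": state,
        "text": text,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Append-only JSONL keeps every write cheap (one line per record) and makes the history modal's per-kind filtering a simple scan.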

All six surfaces below derive from a single backend module (ai_thesis.py) with a registry-driven coach pattern. Adding a seventh AI surface to v5.8 or v6.0 will be a registry entry, not a new file.
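A registry-driven coach pattern might look like the sketch below. `CoachSpec`, `register`, and the field set are hypothetical illustrations, not the actual ai_thesis.py API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class CoachSpec:
    kind: str                             # discriminator written to the JSONL log
    color: str                            # panel color in the dashboard
    build_prompt: Callable[[dict], str]   # assembles the bounded inputs for this surface

# Hypothetical registry; the real entries live in ai_thesis.py.
COACH_REGISTRY: dict[str, CoachSpec] = {}

def register(spec: CoachSpec) -> None:
    COACH_REGISTRY[spec.kind] = spec

register(CoachSpec("pillar_coach", "red", lambda ctx: f"Narrate vetoes for {ctx['ticker']}"))
register(CoachSpec("exit_coach", "amber", lambda ctx: f"Narrate TP/stop for {ctx['ticker']}"))
```

Under this shape, a seventh surface really is one `register(...)` call: the dispatch, logging, and caching code never changes.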

AI Thesis

✨ AI THESIS — the framework's interpretation, in your style

Reveal condition: Always available on every ticker card
Color: Gold
Cache key: {ticker}:{state}-{score}-{trigger_count}
Inputs: Ticker score / grade / state / triggers / pillar violations / 30d hit rate / macro context (VIX, oil, regime) / consensus (if licensed) / user preferences (length, focus weights, tone)
Output shape: Single paragraph (40-220 words depending on user preference). No bullet structure.
Personalization: Six feedback pills: 👍 / 👎 / shorter / longer / +macro / +price. Each click updates user preferences in ai_thesis_prefs.json. Future generations honor those preferences.
Endpoint: GET /thesis/ai/generate?ticker=X&force=0|1

Unlike the coaches, the AI Thesis is styled to the user. The pills let the user nudge the writing toward their preferred density and emphasis. This is the only AI surface in Swing Deck that personalizes. The others are framework voice and cannot be tuned by user feedback — that's a feature, not a limitation.
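The documented cache behavior, using the AI Thesis key shape, can be sketched as follows; the `get_thesis`/`generate` names are illustrative, not the real module's API:

```python
def thesis_cache_key(ticker: str, state: str, score: int, trigger_count: int) -> str:
    """Fingerprint matching the documented shape {ticker}:{state}-{score}-{trigger_count}."""
    return f"{ticker}:{state}-{score}-{trigger_count}"

_cache: dict[str, str] = {}

def get_thesis(ticker, state, score, trigger_count, generate, force=False):
    key = thesis_cache_key(ticker, state, score, trigger_count)
    if not force and key in _cache:
        return _cache[key]      # cache hit: no LLM call, no cost
    _cache[key] = generate()    # generate() is where the LLM call would happen
    return _cache[key]
```

Because the key embeds state, score, and trigger count, re-opening a panel on an unchanged ticker is free, while a state flip or new trigger naturally busts the entry.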

Devil's Advocate

⚖ DEVIL'S ADVOCATE — argues the opposite of the framework's conclusion

Reveal condition: Always available. Auto-pairs with AI Thesis — clicking either header expands both panels and triggers both LLM calls in parallel. The discipline play: every framework conclusion gets a contrarian voice automatically, no opt-in.
Color: Purple
Direction logic:
  EXIT → argue why exit may be premature (bull case)
  TIGHTEN → argue why tightening is overcaution
  ARMED → argue why entering is risky (bear case)
  HOLD → argue against complacency — what could break
  WATCH → pick the side the data tilts toward, or stand aside
  (unknown) → fall back to indicator bias as the direction signal
Cache key: {bias}-{state}-{score}-{trigger_count} — busts on state flip too
Header sublabel: Shows the framework's actual conclusion: · framework: EXIT
Output shape: 3 numbered points: incomplete-read reason · surprise data point · action if the counter-view is right.
Endpoint: GET /coach/advocate/generate?ticker=X&force=0|1

State drives the direction, not indicator bias. The position-level conclusion (state) is the actionable signal the trader sees; bias is the indicator-level reading. These can disagree (a ticker can be in EXIT state while indicators are still bullish). The Devil's Advocate counters what the trader will act on, not what the indicators say.
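The direction table above reduces to a small mapping. This is a sketch of the documented logic, not the actual implementation:

```python
def advocate_direction(state: str, indicator_bias: str) -> str:
    """Map the position state to the contrarian direction (per the table above)."""
    directions = {
        "EXIT": "bull",     # argue the exit may be premature
        "TIGHTEN": "bull",  # argue tightening is overcaution
        "ARMED": "bear",    # argue entering is risky
        "HOLD": "bear",     # argue against complacency
    }
    if state in directions:
        return directions[state]
    if state == "WATCH":
        return "data-tilt"  # pick the side the data tilts toward, or stand aside
    # Unknown state: fall back to indicator bias, arguing the opposite side of it.
    return "bull" if indicator_bias == "bearish" else "bear"
```

Note that `indicator_bias` is consulted only on the fallback path, which is exactly the state-over-bias priority the paragraph above describes.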

Pillar Coach

⚠ PILLAR COACH — narrates active vetoes

Reveal condition: State ∈ {EXIT, TIGHTEN, WATCH} OR pillar_violations is non-empty
Color: Red
Header sublabel: Surfaces violated pillars at a glance: · P6 P7 or · TIGHTEN
Inputs: Pillar IDs + framework's terse reason + ticker state + score + sleeve + active triggers
Output shape: 3-sentence structure: framework's conclusion · the data that triggered it · framework's recommendation. 60-100 words.
System prompt forbids: "Consult an advisor" · "do your own research" · disclaimer language · speculation beyond data
Endpoint: GET /coach/pillar/generate?ticker=X&force=0|1

The Pillar Coach maps pillar IDs to human names internally (P3 → Pre-Market Firewall, P7 → Stop Breach, etc.) so the LLM sees both the framework code and the plain-English meaning. When multiple pillars fire, the coach addresses the dominant cause (the primary field from the position-state classifier) and mentions secondary violations as supporting context.
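A sketch of the pillar-name mapping and dominant-cause handling. Only the P3 and P7 names are documented here; `describe_pillars` is a hypothetical helper, not the module's API:

```python
# Documented subset; the real, complete mapping lives inside ai_thesis.py.
PILLAR_NAMES = {
    "P3": "Pre-Market Firewall",
    "P7": "Stop Breach",
}

def describe_pillars(primary: str, violations: list[str]) -> str:
    """Dominant cause first, secondary violations as supporting context."""
    def name(pid: str) -> str:
        return f"{pid} ({PILLAR_NAMES.get(pid, 'unnamed pillar')})"
    secondary = [v for v in violations if v != primary]
    line = f"Primary: {name(primary)}"
    if secondary:
        line += "; secondary: " + ", ".join(name(v) for v in secondary)
    return line
```

Giving the LLM both the code and the plain-English name means the narration can say "Stop Breach" while staying traceable to P7 in the log.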

Exit Coach

◆ EXIT COACH — TP/stop-breach narrator

Reveal condition: Owned position (shares > 0) AND (price ≥ any TP rung OR price < chandelier stop)
Color: Amber
Header sublabel: · TP1 HIT / · TP2 HIT / · STOP BREACHED
Inputs: Ticker price / score / state / sleeve / TP ladder (TP1, TP2, TP3 with scale-out percentages) / chandelier stop / which rung was hit / triggers
Output shape: 3 sentences anchored on dollar amounts: what just happened (TP rung or stop) · ladder math (40/40/20 scaling) · price reference vs entry.
Endpoint: GET /coach/exit/generate?ticker=X&force=0|1

The chandelier stop (22-day high − 3 × ATR) is the framework's reference stop. If the user's manual stop is below chandelier, the position is already too loose — an exit-coach-worthy event in itself. The coach uses chandelier as the reference even when the user's stop hasn't yet been touched, because the framework's stop is what would have been recommended.
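The chandelier formula and the too-loose check can be sketched as follows (function names are illustrative):

```python
def chandelier_stop(highs: list[float], atr: float,
                    lookback: int = 22, mult: float = 3.0) -> float:
    """Chandelier stop as documented: 22-day high minus 3 x ATR."""
    return max(highs[-lookback:]) - mult * atr

def stop_too_loose(user_stop: float, highs: list[float], atr: float) -> bool:
    """A manual stop sitting below the chandelier is flagged as already too loose."""
    return user_stop < chandelier_stop(highs, atr)
```

Because the stop is anchored to the rolling high rather than the entry price, it ratchets upward as the trade works, which is why the framework treats it as the reference even before the user's own stop is touched.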

Entry Coach

▲ ENTRY COACH — armed-trigger narrator

Reveal condition: Unowned position (shares == 0) AND entry_trigger.ready === true
Color: Green
Header sublabel: Setup type + check count: · pullback_to_ma · 4/4 checks
Inputs: Ticker price / score / state / sleeve / entry-trigger setup type / individual check pass-fail list / earnings proximity
Output shape: 3 sentences: qualifying signal · framework size suggestion · caveats (earnings within window, sector tilt, etc.)
System prompt forbids: The word "opportunity" · inflating language · making the trade sound easier than it is
Endpoint: GET /coach/entry/generate?ticker=X&force=0|1

The Entry Coach explicitly says: "the trader has fired the trigger; the coach's role is to confirm or temper, never inflate." If a caveat materially weakens the setup — earnings in 12 sessions, a sector that's already over-allocated — the coach says so plainly.

Position Audit

📊 POSITION AUDIT — the discipline mirror

Reveal condition: Ticker has ≥1 prior AI thesis OR ≥1 trade journal entry
Color: Cyan
Endpoint: POST /coach/audit/generate — CSRF-protected; body includes {ticker, force, journal: [...]}
Phase 1 (5.7.4): Reads server-side data: thesis log filtered to kind=thesis + audit_history snapshots (last 60) + current state. Surfaces score drift, thesis evolution, state stability.
Phase 2 (5.7.5): Adds the user's local trade journal via POST body. Computes win/loss split, win rate, avg P/L %, max win/max loss, avg hold days, avg entry/exit prices, last 5 trades by date. Surfaces fills-vs-thesis divergence ("selling the TP1 rung on two trades, holding a loss on one").
Privacy: Journal data is POSTed to localhost:8001 only. Never persisted server-side — used for prompt assembly, then discarded. Journal storage stays in browser localStorage.
Output shape: 3 observations: clearest pattern · drift signal · framework suggestion. 80-150 words.

This is the local-first AI moat. Cloud-hosted competitors cannot read your fills + your prior theses + your framework state and tell you "low conviction masquerading as activity." They don't have the data. Position Audit makes the trader's behavior the AI's primary input, which is only possible because the journal lives on your machine and nowhere else.

The system prompt explicitly affirms correct restraint when the data shows it. If the trader is appropriately avoiding a low-quality setup, the audit says "restraint (EXIT) is correct." It doesn't manufacture fault.
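The Phase 2 journal statistics can be sketched as below; the `pl_pct` field name is an assumption about the journal entry shape, not the actual schema:

```python
def journal_stats(trades: list[dict]) -> dict:
    """Summary stats of the kind the audit is documented to compute.

    Assumes each trade dict carries a 'pl_pct' realized P/L percentage.
    """
    pls = [t["pl_pct"] for t in trades]
    wins = [p for p in pls if p > 0]
    losses = [p for p in pls if p <= 0]
    return {
        "trades": len(pls),
        "win_rate": len(wins) / len(pls) if pls else 0.0,
        "avg_pl_pct": sum(pls) / len(pls) if pls else 0.0,
        "max_win": max(wins, default=0.0),
        "max_loss": min(losses, default=0.0),
    }
```

The interesting part isn't the arithmetic; it's that these numbers go into the prompt alongside the prior theses, so the LLM can compare what the trader said against what the trader did.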

Catalyst Interpreter

📰 CATALYST INTERPRETER — news signal vs noise

Reveal condition: Ticker has ≥1 news headline today (via the /news data-provider cascade)
Color: Blue
Header sublabel: Live count: · 8 headlines today
Inputs: Up to 12 headlines: title + publisher + date + source tier; ticker context (price, score, state, regime)
Material rules (signal): Earnings results / guidance change / analyst-day reset / M&A / regulatory action / major contract or customer event / insider buying or selling at scale / macro headlines that mechanically affect the ticker's sector
Noise rules: Price-change recaps · generic analyst chatter without a rating change · reblog/aggregator headlines · macro that doesn't mechanically affect this ticker · "every investor should..." clickbait
Output shape: 3 lines: classification (N total, M material, K noise) · most material item with publisher + direction + magnitude · framework action anchored to the current state
Endpoint: GET /coach/catalyst/generate?ticker=X&force=0|1

The interpreter is ruthless about classification by design. On any given day, most headlines for any given ticker are noise. The interpreter's job is to make filtering effortless — the trader reads one paragraph instead of scanning twelve headlines.

Critically, when the framework state and the news disagree (e.g., framework says EXIT but news brought a guidance raise + analyst upgrade), the interpreter surfaces the conflict and says exactly what to verify before re-acting. It doesn't flip the trade for you.
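The real material/noise split is done by the LLM under the rules above; the keyword stand-in below is a crude illustration of the documented count format only:

```python
# Illustrative keyword list, NOT the interpreter's actual rules.
MATERIAL_KEYWORDS = ("earnings", "guidance", "merger", "regulator", "contract", "insider")

def summarize(headlines: list[str]) -> str:
    """Produce the documented first line: 'N total, M material, K noise'."""
    material = [h for h in headlines
                if any(k in h.lower() for k in MATERIAL_KEYWORDS)]
    noise = len(headlines) - len(material)
    return f"{len(headlines)} total, {len(material)} material, {noise} noise"
```

The fixed output shape is the point: however many headlines arrive, the trader always gets the same three-line structure back.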

History modal

Every AI surface generation appends one record to a single append-only log: ai_thesis_log.jsonl. Each record carries a kind discriminator (thesis, pillar_coach, devils_advocate, exit_coach, entry_coach, position_audit, catalyst_interpreter) plus the ticker, timestamp, score, state, and full narration text.

The dashboard's history modal (accessible via the history button on any AI panel) reads this log directly. As of v5.7.6 it opens in unified All view by default. The toggle row at the top of the modal is built dynamically — only kinds that have at least one record in your log show as tabs, with their per-kind counts. Tab colors match the panel colors throughout the app:

TAB        KIND                    BADGE COLOR
All        (no filter)             cyan
Thesis     thesis                  gold
Counter    devils_advocate         purple
Coach      pillar_coach            red
Exit       exit_coach              amber
Entry      entry_coach             green
Audit      position_audit          cyan
Catalyst   catalyst_interpreter    blue

Trim policy: the log retains the last 5,000 records (HISTORY_MAX_LINES in ai_thesis.py). At ~500 bytes per record, that's roughly 2.5 MB — plenty for years of typical use.
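The trim policy amounts to keeping the newest lines. A sketch (the real logic lives in ai_thesis.py):

```python
HISTORY_MAX_LINES = 5000  # matches the documented constant in ai_thesis.py

def trim_log(path: str, max_lines: int = HISTORY_MAX_LINES) -> None:
    """Keep only the newest max_lines records of the append-only JSONL log."""
    with open(path, "r", encoding="utf-8") as f:
        lines = f.readlines()
    if len(lines) > max_lines:
        with open(path, "w", encoding="utf-8") as f:
            f.writelines(lines[-max_lines:])
```

One line per record is what makes this trivial: trimming is a slice, no JSON parsing required.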

Privacy & cost

BYOK (bring your own key)

Every AI surface uses your own Anthropic or OpenAI API key, configured in Settings → Data Sources. Swing Deck does not run a hosted LLM. We don't see your prompts or your outputs. Two implications:

What leaves your machine

For each AI surface fire, the dashboard sends the prompt (system + user payload) to your configured provider. The user payload contains only the bounded inputs listed in that surface's table above, nothing more.

What does not leave your machine: your full portfolio, your account balance, broker tokens, journal entries for any ticker you're not currently asking about, the AI history log itself.

Cost ceiling

Each panel caches by a fingerprint that invalidates only when inputs materially change (state flip, new TP rung hit, new news cycle, new fills appended). Repeatedly opening the same panel without changes returns the cached entry — no LLM call, no cost. A typical active-trading day with 10 portfolio tickers and 2-3 framework events per ticker generates 30-50 LLM calls, well under $0.10 in total at default-model pricing.

Troubleshooting

"Not found" errors after upgrading

The dashboard is plain HTML/CSS/JS and picks up changes on a hard refresh, but the Python control server doesn't hot-reload. After a Python-code change (most v5.7.x releases), click ↻ Restart Server in the topbar (next to ? Glossary). The server re-execs in place via os.execv; the dashboard polls /heartbeat and auto-reloads once the server comes back. Typically 3-5 seconds.

An AI surface panel doesn't appear on a ticker

Each panel has a defined reveal condition (see the table at the top of each section above). If the condition isn't met, the panel is hidden by design — that's the surface-bound principle. Confirm the relevant data actually satisfies that condition (ownership, state, trigger readiness, headline count).
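The reveal checks can be sketched as follows, assuming context field names (the documented conditions in the tables above are the source of truth):

```python
def panel_reveals(ctx: dict) -> dict:
    """Evaluate the documented reveal conditions; field names are assumptions."""
    owned = ctx["shares"] > 0
    return {
        "exit_coach": owned and (ctx["tp_rung_hit"]
                                 or ctx["price"] < ctx["chandelier_stop"]),
        "entry_coach": (not owned) and ctx["entry_trigger_ready"],
        "pillar_coach": ctx["state"] in {"EXIT", "TIGHTEN", "WATCH"}
                        or bool(ctx["pillar_violations"]),
        "catalyst": ctx["headline_count"] >= 1,
    }
```

Walking a ticker's context through a check like this tells you immediately which panel should be visible and why the others aren't.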

"Bring your own key" CTA shows up

Your Anthropic or OpenAI API key isn't configured. Open Settings → Data Sources, paste your key, save. All six AI surfaces light up with the same key — no per-surface configuration.

Dev preview mode

For development or shake-down testing only: open browser DevTools console and run document.body.classList.add('coach-preview') to reveal all panels on every ticker regardless of signal. Endpoints accept ?preview=1 to synthesize plausible framework signals so the LLM has something to narrate. Disable with document.body.classList.remove('coach-preview') or refresh the page. Not for production use — preview narrations are based on synthetic signals, not your real data.

Read the philosophy behind v5.7

For the architectural decisions that produced this design (no chat box, surface-bound prompts, pills + slash palette over NLQ, paired counter-cases), read the v5.7 release blog post.

Read the v5.7 release post →