Complete Spec Bundle — 5 Documents, 3 Repos, Building
CampaignForge operations platform. SvelteKit + Svelte 5 + shadcn. 12 comparative decisions. Agent SDK pipeline execution. Full database schema.
PlatformContent site architecture. Astro 5 + Svelte 5 islands. Zero JS articles, ~5KB tools. 10 tool patterns. 5-6 week migration plan.
ContentForgePlatform trust and account health. Meta HiVA scoring. Google EDU restrictions. 4-week warm-up playbooks. Weekly intel monitoring.
TrustMaster execution timeline. 5 phases, ~150 tasks. BUILD + OPERATE parallel tracks. GATE milestones. Week-by-week timeline.
ExecutionDaily operations guide. 3-4 hrs/day rhythm. Campaign lifecycle. Solo to team-of-3 scaling. Incident response playbooks.
Runbook: SvelteKit 2 + Svelte 5 + Tailwind 4 + Drizzle ORM + shadcn-svelte
Astro 5 + Svelte islands + MDX + Tailwind 4 • 42 pages, 10 tools, Lighthouse 95+
Pivot-with-examples framing resolves taxonomy-expansion decisions ~3x faster than option-menu framing. OQ walkthrough presented each verdict with (a) command-seat's recommended ruling, (b) per-pivot reasoning citing persona v2 where relevant, (c) concrete examples (SNHU direct vs TX State redirect; native-ad advertorial × TOFU traffic; Meta lead form vs tool-embedded lander). Operator ruled 10 OQs at ~30-45 sec each via accept/flip/modify. Option-menu framing ("pick a/b/c from these approaches") extends rulings by 2-3x because operator has to reconstruct per-option tradeoffs mentally. When command-seat has already reasoned through tradeoffs, share BOTH the reasoning AND the recommendation — recommend-and-defend beats neutral-menu when operator trust is calibrated. Codify: multi-decision walkthroughs default to recommend-and-defend framing; fall back to option-menu only when operator explicitly asks for multiple options or command-seat is genuinely unsure of the right pivot.
Scope-doc bugs propagate into worker output unless caught at the scope-understanding checkpoint — not at deliverable audit. Worker A faithfully echoed my "8 schema bumps" miscount from the scope statement into its scope-understanding confirmation; OQ-VERDICTS inventory had 9. Command-seat caught this at the scope-understanding audit (the mandatory 1-paragraph confirmation worker produces before drafting), not at ADR audit. If the check had waited until ADR draft completion, the miscount would have been baked into the Consequences section and the fix would require a second worker turn. Pattern: scope-doc drift is a predictable class of command-seat authoring error. The scope-understanding checkpoint exists precisely to catch it early. Command-seat audit discipline: always cross-check the worker's echoed scope details against the canonical scoping grid BEFORE authorizing draft. A 2-minute checkpoint audit prevents a 30-minute correction cycle.
Schema-read-before-implement discipline is bidirectional — it protects workers from command-seat authoring errors AND protects command-seat from worker blind spots. Worker B applied the memory's discipline (feedback_scope_statement_schema_read.md from 2026-04-20) by reading winners_vault_readback.schema.json v0.4.1 thoroughly before implementing. Caught that my scope spec conflated two orthogonal concepts into one field name — the schema had a frozen stage_r_emission_reason enum with different values (data-presence state) than the retro-verdict taxonomy I specified. Worker proposed separate retro_verdict field + kept schema enum verbatim. Command-seat endorsed + updated scope doc mid-flight. If Worker B had followed scope literally, output would have been schema-invalid. Pattern: discipline memories aren't bureaucratic — they actively prevent both kinds of drift (agent hallucinates vs command-seat miscoded). The "right thing happens" feels identical in both directions; the operational value is the FAILURE MODE IT PREVENTS. Invest in memory-anchored workflow discipline for exactly the reasons it feels obvious in retrospect.
Parallel-delegation 5-precondition rubric (non-overlapping files / no shared mid-session state / contract-bounded outputs / orthogonal decisions / no pending-ADR reconciliation) is the right gate — when all 5 hold, default to parallel; when any fail, run serial. Session #40 codified the rubric explicitly in the command-seat skill Step 4 addendum. Three parallel workers (Risk 1 hotfix + Worker 3.5-A + Worker 3.5-B) fit all 5 preconditions cleanly and merged same-day. A 6th worker (3.5-C) was initially proposed but FAILED precondition #5 (pending ADR 0018 reconciliation) so was correctly HELD. Pattern: the rubric is a sanity check before spawn, not a post-hoc justification. Write the non-overlap one-liner before paste; if you can't, the work isn't parallel-safe. Codify in every command-seat skill call: explicit non-overlap statement ("Worker A owns X. Worker B owns Y. Neither reads the other's target. Both read Z read-only.") is required when proposing parallel.
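The gate above can be sketched as a pre-spawn check; a minimal illustration with hypothetical names (the five keys paraphrase the rubric, `delegation_mode` is not a real tool):

```python
# The 5 preconditions from the rubric, expressed as named checks.
PRECONDITIONS = (
    "non_overlapping_files",
    "no_shared_mid_session_state",
    "contract_bounded_outputs",
    "orthogonal_decisions",
    "no_pending_adr_reconciliation",
)

def delegation_mode(checks: dict) -> tuple:
    """All 5 hold -> parallel; any fail -> serial, with the failures named."""
    failed = [p for p in PRECONDITIONS if not checks.get(p, False)]
    return ("parallel" if not failed else "serial", failed)

ok = dict.fromkeys(PRECONDITIONS, True)
assert delegation_mode(ok) == ("parallel", [])

# The Worker 3.5-C case: everything holds except precondition #5 -> HELD.
held = {**ok, "no_pending_adr_reconciliation": False}
assert delegation_mode(held) == ("serial", ["no_pending_adr_reconciliation"])
```

The point of making the check explicit is the same as the non-overlap one-liner: if you can't fill in the dict honestly, the work isn't parallel-safe.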
Cross-agent open questions route to command-seat, NOT back to the surfacing agent. When a worker or sub-agent surfaces a question ("should X be Y or Z?", "need operator input on...", "ambiguity between A and B"), operator brings the question to command-seat — never back to the agent that raised it. Reason: the surfacing agent has repo-local context only. It can't see what's been ruled in ADRs elsewhere, what's in-flight in other worktrees, what's in persistent memory, or cross-workstream constraints. Command-seat holds all of that. Fragmenting decisions across N agents produces incoherent rulings that conflict and require later reconciliation. Pattern: surfacing agent halts + reports open question → operator brings question to command-seat → command-seat decides with operator using full context → command-seat drafts resolved answer as "resume message" for original agent → operator routes answer back → agent applies modifications + commits. Codified in feedback_command_seat_cross_agent_routing.md. Command-seat skill Step 4 explicitly enforces this.
Simplification-request framing surfaces taxonomy drift that complex-decision-menu framing hides. Operator's mid-session "just tell me the simplest version" request on the monetization enum surfaced that ADR 0013's 3-value enum (affiliate_online | direct_school | content_only) was systematically insufficient — direct_school conflated 3 distinct monetization regimes (aggregator portal / direct online affiliate / ping-post marketplace) plus an orthogonal lander-routing axis (content-site-tool vs archetype-embedded vs archetype-prequalifier). A complex-decision-menu framing would have asked "accept, modify, or extend the existing enum?" and defaulted to accept-and-extend. Asking "what's the simplest model that captures your actual flywheel?" forced a first-principles reconstruction and surfaced 6 paths + 3 modes + Yellow Ribbon-as-attribute. Pattern: when an existing schema feels labored or inconsistent, ask for the simplest restatement first; complexity-justified-by-accumulation is a common failure mode. Codified in project_monetization_taxonomy_expansion.md.
3-parallel-persona-v2-loaded coherence-audit sub-agents + non-overlapping lenses (completeness / schema coherence / strategic-thesis encoding) produces high-signal findings cheaply — cross-lens convergence is the strongest signal. Audit ran in ~1 command-seat turn / ~100 min wall-clock for ~3× serial depth. 4 HIGH findings surfaced; cross-lens convergence on Finding 1 (surfaced on all 3 lenses) and Finding 4 (on 2 lenses). Pattern: cadenced coherence audits during active build phases are cheap insurance. When multiple independent lenses converge on the same finding, the finding is load-bearing; when only one lens surfaces something, it may be an artifact of that lens's framing. Schedule audit passes at phase boundaries (pre-regen, pre-merge, post-shipping) rather than on-demand only. Codify: 3-parallel-persona-v2 read-only sub-agent pattern for all coherence-sensitive audit points.
Description-text schema constraints don't actually constrain anything — JSON Schema allOf/if/then is the enforcement mechanism, and v0.4.0→v0.4.1 proved this in one commit. winners_vault_readback.schema.json v0.4.0 said in description text that tool_id + tool_version were required when stage_r_emission_reason == "populated". This was "documented" but not "enforced." Schema validators don't read description fields. An agent that emitted a populated readback without tool_id would pass validation despite violating the documented invariant. Finding 2 surfaced this; v0.4.1 fixed it via allOf/if/then conditional that actually fails validation when the constraint is violated. Pattern: "the schema says X in description" is not the same as "the schema enforces X." When a constraint matters (especially for signal integrity downstream), express it as enforceable JSON Schema structure, not prose. Audit discipline: when reviewing a schema, ignore description text when checking what's ACTUALLY enforced — only types, required arrays, enums, and conditional blocks carry enforcement weight.
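A minimal sketch of the enforcement difference, assuming the field names from the notes above: the dict mirrors the allOf/if/then shape of the v0.4.1 fix, and the hand-rolled checker stands in for a real JSON Schema validator (which would apply the same conditional and ignore description prose entirely):

```python
# The v0.4.1 conditional: when stage_r_emission_reason == "populated",
# tool_id and tool_version become required. This FAILS validation when
# violated, unlike the v0.4.0 description-text version.
SCHEMA_CONDITIONAL = {
    "allOf": [
        {
            "if": {"properties": {"stage_r_emission_reason": {"const": "populated"}}},
            "then": {"required": ["tool_id", "tool_version"]},
        }
    ]
}

def enforces_populated_invariant(doc: dict) -> bool:
    """Apply each if/then clause manually, the way a validator would."""
    for clause in SCHEMA_CONDITIONAL["allOf"]:
        condition = clause["if"]["properties"]
        triggered = all(doc.get(k) == v["const"] for k, v in condition.items())
        if triggered and not all(r in doc for r in clause["then"]["required"]):
            return False
    return True

# A populated readback without tool_id now fails instead of slipping through.
assert not enforces_populated_invariant({"stage_r_emission_reason": "populated"})
assert enforces_populated_invariant(
    {"stage_r_emission_reason": "populated", "tool_id": "t1", "tool_version": "1.0"}
)
```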
Audit-before-regen discipline catches drift that structural audit at regen-time would miss. Session #39 ran the coherence audit BEFORE the Phase 5.1-5.8 skill regen batch fires. Had the audit run AFTER regen, Finding 1 (compliance_angle_map split-brain causing active drift since 2026-04-20) would have been baked into 8 regenerated skills, multiplying the fix cost. Finding 2 (schema non-enforcement) would have let invalid readbacks land during actual campaign retros. Finding 3 (trigger coverage gap) would have under-gripped veteran + employer cells in Phase 5.4 creative-director regen. Finding 4 (archetype_tag) would have forced Phase 5.8 landing-page regen to re-invent mapping ad-hoc per cell. Pattern: coherence audits in the gap BETWEEN phase-N-shipped and phase-N+1-kickoff are the highest-leverage audit cadence. Phase 5 shipped Session #38; Phase 5.1-5.8 regen batch queued for Sessions #40+; Session #39 is the audit window that prevents drift-baked-into-8-skills. Codify: every N+1-phase regen that consumes N-phase outputs should be gated on an explicit coherence audit of the N-phase delivery BEFORE spawning N+1 workers.
Parallel-worker + command-seat pattern scales from 2 workers to 4 concurrent worktrees when file-based receipts replace conversation-based handoff. Session #38 ran Track A worker (39 commits Phase 5 implementation) + Track B worker (1 commit ContentForge retrofit) + Audit Fork A (structural) + Audit Fork B (strategic) in 4 parallel Claude Code sessions with command-seat as the cross-session coordinator. Critical enabling factor: file-based receipts — each subagent within each worker wrote detailed output to /tmp/<name>-report.md and returned only a one-line verdict + file pointer. Main window context stayed lean (Track A hit ~278k by Phase 2 end, ~340k by Checkpoint D — never triggered compaction) despite 26 sequential subagent dispatches across SKILL regen + schema chain + validator extension + fixture + audit verification. Without the file-based-receipt pattern, Track A would have compacted 2-3 times and degraded SKILL regen quality at exactly the highest-quality-sensitive step. Codify: parallel worker orchestration with state-based coordination (STATUS.md + BR grid + file receipts) and zero conversation-based handoff cost. Command-seat holds the cross-cutting picture; workers stay narrow; handoffs are cheap because state lives in files. Pattern extends beyond 4 workers — the bottleneck is operator attention across sessions, not coordination infrastructure.
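The file-based-receipt handoff can be sketched in a few lines; `run_subagent` and `produce_report` are illustrative names, not a real SDK API. The invariant is that the full report lands on disk and only a one-line verdict plus a file pointer returns to the parent context:

```python
import tempfile
from pathlib import Path

def run_subagent(name: str, produce_report) -> str:
    """Write the detailed output to /tmp/<name>-report.md; return only
    a one-line verdict + pointer so the parent context stays lean."""
    report_path = Path(tempfile.gettempdir()) / f"{name}-report.md"
    report_path.write_text(produce_report())
    return f"SHIP | details: {report_path}"

receipt = run_subagent("track-a-audit", lambda: "# Audit report\n" + "finding\n" * 200)
assert receipt.startswith("SHIP")
assert Path(receipt.split(": ")[1]).exists()  # the artifact, not the message
```

The receipt costs the parent tens of tokens regardless of how detailed the report is, which is what keeps 26 sequential dispatches under the compaction threshold.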
Fresh-session audit (not subagent-in-command-seat, not self-audit by Track A) catches drift that invested eyes miss — test-green ≠ strategically-sound, functional validator ≠ semantic auditor. Auditor B's F1 finding was a latent 3-variable interaction (is_exploratory + monetization_path + 32-cell zero-exploration current state) that neither Track A's 42 validator tests, nor command-seat's BR ruling walkthrough, nor Auditor A's structural audit surfaced. The finding: Step 3A line 394-395 splits exploration_candidates by is_exploratory only — but per ADR 0012 line 77 it should ALSO filter by monetization_path == "affiliate_online" (exploration budget doesn't apply to direct_school / content_only / hybrid paths). Unreachable today because v0.2.4 has zero exploratory cells; tests pass; everything looked green. Latent landmine for Phase 5 v2 when cell reclassification lands. Why fresh-session caught it: (1) compaction is lossy on exactly the cross-cutting reasoning an audit needs (compacted context blurs specific BR verdicts + line numbers + constraint interactions); (2) self-audit bias — same cognitive lens that produced SKILL.md won't question its own Step 3A split logic; (3) persona-v2 loaded into pristine context is the strongest strategic-review voice available. Codify: always fresh-session audit before merge authorization on architectural scope work. Two-parallel-auditor pattern (structural + strategic) is the right decomposition — one covers checklist-completeness (auditor A), the other covers semantic-soundness (auditor B). Neither is sufficient alone.
Cross-track enum-string sync verbatim is a real checkpoint gate, not optional hygiene. Track B hardcoded 7 canonical enum strings directly from ADR 0013 text + BR 13-2b verbatim (affiliate_online, direct_school, content_only, hybrid, traditional, online_partnered, paid_traffic_eligible). Track A authored Phase 5 schemas consuming the same enums. Risk: silent drift (e.g., direct-school vs direct_school, or paid_eligible vs paid_traffic_eligible) would surface as pipeline errors weeks later. Checkpoint B cross-track verification at Phase 2 completion (before schema migration lock-in) caught ZERO drift because ADR text was canonical source for both tracks. Bonus cross-track ruling made during the day: `paid_traffic_eligible` (from BR 13-2b verbatim) wins over `paid_eligible` (from ADR 0013 commitment #4 draft) because BR 13-2b is more specific + narrower name extends cleanly for future axes (paid_email_eligible etc.). Pattern: whenever cross-repo feature work shares a vocabulary, (a) ADR text is canonical source for both tracks; (b) Track that authors first hardcodes verbatim from ADR; (c) Track that authors second conforms to first-Track's hardcoded strings; (d) verbatim-match verification fires as checkpoint gate BEFORE merge, not after.
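The verbatim-match gate reduces to a set comparison; a sketch using the seven canonical strings from the entry above (the checkpoint script itself is hypothetical):

```python
# Canonical vocabulary, hardcoded verbatim from ADR 0013 text + BR 13-2b.
ADR_0013_CANONICAL = {
    "affiliate_online", "direct_school", "content_only", "hybrid",
    "traditional", "online_partnered", "paid_traffic_eligible",
}

def enum_drift(track_strings: set) -> set:
    """Symmetric difference against the ADR text; empty set == verbatim match."""
    return track_strings ^ ADR_0013_CANONICAL

# Checkpoint B outcome: both tracks sourced from ADR text -> zero drift.
assert enum_drift(set(ADR_0013_CANONICAL)) == set()

# A near-miss like paid_eligible surfaces BOTH the stray string and the
# missing canonical one, weeks before it would show up as a pipeline error.
drifted = (ADR_0013_CANONICAL - {"paid_traffic_eligible"}) | {"paid_eligible"}
assert enum_drift(drifted) == {"paid_eligible", "paid_traffic_eligible"}
```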
Subagent-driven execution with atomic-per-version-bump commits is bisectable; operator-specified atomicity lived where bisectability mattered. Command-seat required atomic-per-version-bump commits on cell-angle-rules (3 commits: v0.2.2/v0.2.3/v0.2.4), each carrying one ADR's schema changes + 32-cell migration. The worker over-satisfied on result_personalization (3 commits v0.4.0/v0.5.0/v0.6.0 even though not constrained — good discipline) and under-satisfied on winners_vault_readback (1 commit v0.1.0→v0.4.0), which was accepted because it's a consumer/mirror schema where bisectability matters less than on a source of truth. Pattern: explicit atomic-per-bump constraint on source-of-truth schemas where each bump maps to an independent ADR's changes; looser allowed on consumer/mirror schemas that track upstream ADRs as a package. Bisectability is a revert-time property; when bisecting, you want each commit to match a single architectural decision. Source-of-truth schemas get each-ADR-one-commit; consumer schemas can bundle.
Operator's 500k context cap is correctly calibrated for generative work — raising to 750k would buy marginal headroom at real quality cost. Track A main window hit ~278k by Phase 2 end and ~340k by Checkpoint D across 39 commits + subagent dispatches + audit verification + housekeeping. Never exceeded cap; never triggered forced compaction. Parallel subagent dispatch + file-based receipts + sequential checkpoint reports did the context conservation work. For generative tasks (SKILL regen, schema authoring, cross-cutting compliance patterns), quality degrades noticeably above 300-400k — the cap enforced discipline that kept SKILL.md high-fidelity through 5 sequential regen subtasks. The 1M ceiling exists for retrieval-dominated use cases (one-shot ingestion of a codebase for Q&A, prompt-cached base contexts amortized across rapid-fire calls). It's not an operating point. Codify: 500k cap is the right heuristic for generative work. 750k gives up quality for marginal headroom you won't use well. Context engineering (subagents, file receipts, fresh-session handoffs) dwarfs "raise the cap" as a quality lever.
Persona v2 reload on strategic-decision triggers produces real signal that default-accept would have shipped — 4 BR flips + 1 supersession over 28 BRs caught drift sub-agents alone missed. Operator-requested reloads mid-walkthrough (after BR 12-1, before 13-1, after 13-4, before 13-5) each yielded sharper recommendations than my initial accept-as-drafted pass. BR 12-2 (phase-aware v1/v2 composite) flipped on persona v2 lines 16-17 "lead quality > CPL" reasoning that surfaced only on reload — first pass saw sub-agent's single composite as "reasonable" because 70% of the weight (lead_to_sale + ltv_ratio) is structurally correct IN ISOLATION but has no data source in current white-label phase, making the metric unusable for ~3-6 months. BR 14-7 (vertical-namespaced tool_ids) flipped because persona v2 line 82-87 multi-vertical pattern is per-vertical by construction — tools encode per-vertical regulations, sharing breaks at implementation level, sub-agent's shared-tool-id pattern creates future MAJOR-bump cascading breaking changes. Pattern: when persona v2 reload yields reassessment on >10% of BRs, the first pass was under-scrutinized — not failing but incomplete. The reload is not ceremonial; it's the mechanism that catches local-optimization blind spots that compound across BR accumulation. Codify: on high-BR-count walkthroughs (>15 BRs), schedule persona reloads every ~5-7 BRs, not just on demand.
Phase-aware architectures are load-bearing during monetization-model transitions — shipping single-phase metrics when multiple phases are in flight = shipping unusable metrics until the future phase activates. BR 12-2 flipped to dual composites: v1 white-label (CPL_delta × 0.4 + audience_novelty × 0.4 + compliance_survival × 0.2) vs v2 direct-school-live (lead_to_sale × 0.5 + audience_novelty × 0.3 + ltv_ratio × 0.2) with explicit switch trigger (first verified direct-school partnership with ≥30 attributed leads). Sub-agent's single composite would have shipped with 70% of the weight having no data source in the current $35 white-label-only phase, degrading exploration grading to "audience_novelty × 0.3 only" — burning ~$630/mo in foregone profit for audience-overlap data we can't yet convert into decisions. Pattern: whenever an architectural decision crosses a monetization-model transition (affiliate-only → aggregators → direct-buyer), design for the transition not the endpoint. Version the metric with explicit phase markers. Trigger the switch on data-availability signals, not operator manual toggle. Generalizes beyond exploration budget: any metric that depends on downstream attribution data needs phase-versioning until attribution is continuous.
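A sketch of the phase-versioned composite as ruled in BR 12-2 (weights and the ≥30-attributed-leads trigger are from the entry above; function and key names are illustrative):

```python
def exploration_score(m: dict, direct_school_live: bool) -> float:
    """Phase-versioned composite: v2 only activates once its inputs have data."""
    if direct_school_live:  # v2: lead_to_sale + ltv_ratio now have sources
        return m["lead_to_sale"] * 0.5 + m["audience_novelty"] * 0.3 + m["ltv_ratio"] * 0.2
    # v1 white-label phase: grade only on signals that exist today
    return m["cpl_delta"] * 0.4 + m["audience_novelty"] * 0.4 + m["compliance_survival"] * 0.2

def phase_switch(partnership_verified: bool, attributed_leads: int) -> bool:
    """Switch keyed to data availability, not an operator manual toggle."""
    return partnership_verified and attributed_leads >= 30

m = {"cpl_delta": 0.5, "audience_novelty": 0.5, "compliance_survival": 1.0,
     "lead_to_sale": 0.2, "ltv_ratio": 1.1}
assert not phase_switch(True, 12)               # partnership live but data too thin
assert abs(exploration_score(m, False) - 0.6) < 1e-9   # v1 stays usable today
assert abs(exploration_score(m, True) - 0.47) < 1e-9   # v2 activates later
```

The single-composite alternative would compute the v2 formula unconditionally, with 70% of the weight reading zeros for months.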
Two-schema separation (categorization vs implementation) is a durable architectural pattern that prevents decay of stable metadata as implementation details rot. BR 13-3 walkthrough surfaced this in the direct-school-partners schema: one file (verticals/edu/research/direct-school-partners.json) holds categorization metadata (school_id, name, school_type, paid_traffic_allowed, subject_taxonomy_offered, partnership_status) that Phase 5 uses for routing/matching decisions — stable across partnership lifecycle. A second file per active partner (verticals/edu/config/partner-endpoints/<school_id>.json) holds implementation details (endpoint field-mapping, transformations, TCPA requirements, credentials, auth method) — rots when partner APIs change. If both concerns were in one schema, API changes would force touching routing-decision data too, risking drift on the stable metadata. Pattern also applies to tool_registry (BR 14-6): registry = source of truth for tool metadata; content-brief = source of truth for site-IA referencing tool_id as FK. Generalize: when a data object has fields that update at different cadences, split into categorization-layer (slow-changing) + implementation-layer (fast-changing). The FK linkage between them stays stable.
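A minimal sketch of the split, with `school_id` as the stable FK; the field names follow the entry above, the values and join helper are hypothetical:

```python
# Categorization layer: slow-changing routing metadata (one shared file).
categorization = [{
    "school_id": "snhu",
    "school_type": "online_partnered",
    "paid_traffic_allowed": True,
    "partnership_status": "active",
}]

# Implementation layer: fast-changing endpoint details (one file per partner).
# Rots when the partner API changes; routing data above is never touched.
endpoints = [{
    "school_id": "snhu",
    "auth_method": "api_key",
    "field_mapping": {"first_name": "fname"},
}]

def partner_lookup(school_id: str):
    """Join the layers on the FK; routing decisions read only the first layer."""
    cat = next(r for r in categorization if r["school_id"] == school_id)
    impl = next((r for r in endpoints if r["school_id"] == school_id), None)
    return cat, impl

cat, impl = partner_lookup("snhu")
assert cat["paid_traffic_allowed"] and impl["auth_method"] == "api_key"
```

An API change edits only the second dict; the routing-decision fields never ride along in the same diff.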
Sub-agent recommendations consistently undersell cross-cutting concerns — local-BR correctness doesn't catch how BRs interact with accumulated scope. Three BRs exemplified this: BR 14-7 (shared tool_id with verticals[] array — missed that tools encode per-vertical regulations); BR-A (Layer-3 experimental slot — missed that ADR 0012 exploration budget had already obsoleted it systematically); BR 12-2 (single composite — missed that 70% of weight has no data source in current phase). Each sub-agent was optimizing for local-BR correctness against its own ADR's context window without catching cross-BR + cross-ADR interactions. Persona v2 re-read + operator override were the correction mechanism. Pattern: on multi-BR walkthroughs, the command-seat agent needs to hold cross-BR scope in working memory explicitly — cross-cutting concerns should be surfaced as first-class discussion items, not discovered by flip moments. Practical move: maintain a "cross-cutting concerns" list alongside BR grid that the reviewing agent updates as each BR lands; check new BRs against the list before default-accepting. Sub-agents can't see their own blind spots; the command-seat's job is to be the one-level-up observer.
Command-seat state-driven coordination pattern earning its keep — STATUS.md + BR grid as authoritative source means fresh sessions pick up with full context at zero token-cost. Today's session consumed meaningful context walking all 28 BRs + persona v2 reloads + BR grid amendments. Fresh command-seat tomorrow reads the same files with 100% window available. Worker sessions spawn with embedded scope statements from STATUS Next Priorities — no re-derivation needed. Compare to conversation-driven handoffs where operator would need to summarize context into next session's first message, losing nuance or over-loading context. State-driven = files do the work. Pattern: on long-context orchestration sessions (command-seat, strategic review, multi-phase planning), invest in STATE over CONVERSATION — every ruling gets logged inline in an authoritative coordination doc (BR grid Final Verdicts table), every amendment gets a session-dated section, every future scope statement gets embedded in STATUS. Next session's command-seat reads committed files and is instantly oriented. The pattern only works if the coordination doc is read-friendly enough that a fresh agent can pick it up cold — BR grid amendment discipline (inline tables + session-dated sub-sections + Final Verdicts summary) passes this test.
Completion-audit discipline (map plan tasks → file existence + git log + dashboard + STATUS) catches STATUS drift at ~5 min wall-clock and prevents "work done but untracked" anti-pattern. Session #35 Track B opened with a 28-task completion audit against the ContentForge Phase 1 plan. Immediately surfaced: every task was already DONE (commits e61e84c + 8c081ec + f025a37 + 5510202 on contentforge + 641fd85 on brain, executed in Session #31) — but STATUS.md still said "SPEC + PLAN COMMITTED — EXECUTION PENDING" across 3 lines (workstream row, phase-table row, Next Session Priorities). The "next action" framing in the session brief was wrong — execution wasn't pending, closure was. This generalizes the Phase 4 Session C kickoff-audit pattern (diffing source-of-truth priority_tier distribution against existing cells) from pipeline-phase work to ANY multi-session workstream. Pattern: open every session with an audit pass before touching work; the cost is ~5 min, the save is catching "I thought we finished this, let me redo it" errors that compound across sessions. Codify: STATUS drift is a symptom of missing session-end discipline in the prior session — the fix is both cleaning up current drift AND tightening the session-end protocol to flip workstream rows + Next Session Priorities on execution, not just on spec approval.
Parallel-sub-agent verification against manifest-declared sources is the right QA pattern for operator-gut-check-gated deliverables — 2 agents, <2 min wall-clock, bounded output, no file writes. Before presenting edu.md (45KB, 749 lines) to operator for §11.2 gut-check, Session #35 dispatched 2 general-purpose sub-agents to verify the 12-section synthesis against the 9 brain sources in content-source-manifest.yaml — one for full-refresh (edu.md), one for skeleton-mode (ssd.md). Scope fences: read-only, report-only, <400 word / <300 word bounded outputs with explicit hard-gate verdicts + per-section tables. Both returned SHIP with distinct caveats (edu.md: sections 3/5/7 operator-synthesized with hash-placeholder; ssd.md: alert includes section 6 beyond spec defensibly). This verification pattern — manifest-declared sources as the ground-truth anchor, parallel agents against disjoint sub-artifacts, bounded report-only output — is cheaper than operator-doing-manual-audit AND more thorough than single-agent inline re-read (context budget pressure causes single-agent passes to summarize rather than verify). Pattern: when a generated artifact depends on a declared source set, verify by dispatching one agent per artifact against the same source manifest with explicit hard-gate criteria and report-only instructions. The receipts feed the operator's actual gut-check decision; the operator stays the decider, the agents de-risk the decision.
Aggregate-lead-data is load-bearing when end-user verbatim research is sparse — 142K clean rows beats zero verbatim quotes, and the `[OPERATOR-SYNTHESIZED — LIMITED VERBATIM RESEARCH]` flag preserves the distinction without refusing to ship. Operator gut-check on edu.md identified a persona miscalibration (Marcus "may have a bachelor's already" claim, missing 55+ cohort covering 26.7% of lead volume, under-weighting of Counseling as primary subject for 45-54 + 55+) caught by diffing against historical_leads (n=142,658 clean rows). Without that data cross-check, single-pass expert synthesis shipped a reasonable-sounding persona set that was 25-30% off on cohort coverage + leading-subject-by-age-band. The fix: don't refuse to author personas absent verbatim research, but do calibrate against aggregate lead data when available AND flag the section explicitly so downstream Phase 2 skills know the sections are expert-synthesis-plus-aggregate-data, not end-user interview output. Pattern: aggregate operator-captured data (leads, retention, retro results) is a legitimate second-order calibration source for persona synthesis — NOT a replacement for verbatim research, but a correction mechanism that catches cohort-weight errors single-pass synthesis misses. The [OPERATOR-SYNTHESIZED — LIMITED VERBATIM RESEARCH] flag preserves the distinction without blocking shipping. Anti-pattern: refusing to ship personas until verbatim research lands — that's perfectionism blocking the 80% foundation from being usable while research cycles continue.
Diff-personas-against-real-lead-data should be a codified Phase 2 pre-ship gate, not an operator-caught ad-hoc correction. The Counseling finding (19.9-21.9% of 45-54 + 55+ lead volume as primary subject) was a material persona gap that single-pass expert synthesis would have missed and that expert review alone did not catch — it took operator cross-reference against historical_leads rows to surface. Running this diff manually per-session doesn't scale; the right move is codifying it as a Phase 2 pre-ship gate in the article-writer / content-brief-builder skills. Pattern: whenever a content skill outputs persona-grounded claims (primary subject per cohort, demographics per segment, income-band distributions), the skill must auto-diff the claim against the aggregate-data source of truth and flag any cohort-coverage or leading-subject deviations > some threshold (e.g., ±5 percentage points). Phase 2 spec should name this gate explicitly. Generalizes to any future vertical where operator has lead volume: the aggregate-data-diff is cheap, catches an entire class of miscalibration errors, and produces an auditable trace for downstream skills. Phase 1 closure validates the pattern; Phase 2 skills inherit it.
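The gate itself is a cheap diff; a sketch using the ±5-percentage-point threshold from the entry above (cohort keys and the claimed numbers are illustrative, the actual shares echo the lead-data figures cited):

```python
def cohort_deviations(claimed: dict, actual: dict, threshold_pp: float = 5.0) -> dict:
    """Flag any persona-grounded share claim that deviates from the
    aggregate-data source of truth by more than threshold_pp points."""
    flags = {}
    for cohort, claimed_pct in claimed.items():
        dev = abs(claimed_pct - actual.get(cohort, 0.0))
        if dev > threshold_pp:
            flags[cohort] = round(dev, 1)
    return flags

claimed = {"55_plus_share": 10.0, "counseling_primary_45_54": 8.0}
actual = {"55_plus_share": 26.7, "counseling_primary_45_54": 21.9}  # per historical_leads

# Both miscalibrations surface with an auditable deviation trace.
assert cohort_deviations(claimed, actual) == {
    "55_plus_share": 16.7,
    "counseling_primary_45_54": 13.9,
}
```

A skill that runs this before shipping produces exactly the receipt the pre-ship gate needs: either an empty dict (SHIP) or a named list of cohorts to recalibrate.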
Three parallel sub-agents authoring non-overlapping spec docs in the same directory is a clean parallelization pattern when scope fences are explicit and all agents have read-only access to the same upstream inputs — ~7 min wall-clock for 1,765 lines of spec vs ~25-30 min inline. Session #35 dispatched three general-purpose sub-agents to produce PHASE-5-SPEC.md (458 lines) / SKILL-GAP-ANALYSIS.md (908 lines) / VALIDATOR-EXTENSION-SPEC.md (399 lines) in parallel within the new docs/psychology-engine/phase-5-strategy-engine/ directory. Scope fences by FILE: each agent's prompt explicitly enumerates the OTHER agents' output files + forbids touching them. Logic conflicts were zero because the deps graph is input->output (all three read the same upstream v0.2.1/v0.3.0/v1.1 inputs + write to disjoint output files); no cross-agent state. Pattern: when shipping a multi-document kickoff package where each document covers a distinct angle (contract / diff / coupling) and upstream is read-only, parallel-dispatch beats inline every time. Inline-over-subagents rule still applies when the work is SEQUENTIAL SHARED-FILE mutations — that's a logic dep. Spec doc fan-out with disjoint output is the complementary case. Rule of thumb: if two files never need to be touched in the same commit, spec authoring can fan out.
Persona discipline survives sub-agent dispatch when the dispatch prompt explicitly instructs the agent to load the persona file at start — relying on tone inference is unreliable. All three agents produced spec content in persona voice (blunt, peer-register, opinion-forward, with recommendations not neutral menus) because each prompt contained: "Load the advanced-affiliate-marketer persona from /Users/BeLoving/Dev/campaign-forge/advanced-affiliate-marketer-system-prompt-v2.md... Voice: 15yr media buyer, $10M+ managed, blunt peer-register, no sycophancy." Without that explicit instruction, general-purpose sub-agents drift toward neutral-summary default voice because the parent conversation's persona context doesn't transfer. Pattern: when dispatching sub-agents for strategic-decision or operator-voice work, include the persona-load instruction + a one-line voice directive explicitly. Cost: ~50 tokens per prompt. Benefit: output doesn't dilute on the hand-off. Anti-pattern: assuming the sub-agent will infer persona from prompt tone alone — that's how generic assistant voice leaks into spec docs and then operators read them thinking "this feels off" without being able to name why.
"Spec first, implementation later" pacing within a phase protects operator leverage on block-review asks — extends the "complete spec before build" feedback memory from across-phase to within-phase. Session #35's scope could have included regenerating the SKILL.md inline after the specs shipped. That would have saved ~1 wall-clock hour on the next Track A session but would have arrived at operator block-review with the implementation already done — creating sunk-cost pressure to accept-as-shipped rather than rework when BR-A..BR-F land as actual modifications. Shipping only specs preserves operator leverage: each block-review ask can land as "flip this one, keep the rest" without rework thrash. Pattern: when a phase has ≥3 operator-judgment-call decision points, ship spec alone in session N, implement in session N+1. Applies to phase kickoffs where the architectural decision space is wide. Feedback memory feedback_spec_before_build.md already codifies this across-phase; Session #35 validates it within-phase when judgment-call surface area is high. Inverse case: when block-review asks are limited to 1 or 2 and scope is bounded, spec-and-implement in one session is fine.
Sub-agent return-message length should be capped strictly (<150 words) — the real output is the files they wrote, not the message they return; verify via direct file inspection not agent self-report. All three Session #35 agents returned tight receipts (<150 words each) naming exactly what their spec covered without embellishment. Longer-form receipts would have pulled context budget without improving the composite view — the parent agent needs to SYNTHESIZE the receipts into a status update, not re-read embellished descriptions of work that's in a file 10 lines below. Pattern: on parallel-agent work where the main agent composes results, cap agent return length explicitly ("Return a single paragraph under 150 words"). The receipt is telemetry; the spec is the artifact. Parent agent verifies via ls -la + wc -l + targeted Read on the shipped files. This is the "trust but verify" principle in the Claude Code system prompt applied concretely: agents can confidently claim shipping without the parent having to re-read every line, but the parent must sanity-check file sizes + existence + spot-check content.
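The parent's sanity check can be sketched in a few lines — paths, the line floor, and marker strings below are illustrative, not the session's actual checks:

```python
import os

def verify_shipped(files: dict[str, str], min_lines: int = 20) -> list[str]:
    """Spot-check shipped artifacts: existence, non-trivial size, and one
    targeted content marker per file. Returns a list of failure strings."""
    failures = []
    for path, marker in files.items():
        if not os.path.exists(path):
            failures.append(f"{path}: missing")
            continue
        with open(path, encoding="utf-8") as fh:
            text = fh.read()
        lines = text.count("\n")
        if lines < min_lines:
            failures.append(f"{path}: only {lines} lines")  # suspiciously short spec
        if marker not in text:
            failures.append(f"{path}: marker {marker!r} absent")
    return failures
```

An empty return list means the receipts check out; anything else means the agent's self-report and the filesystem disagree.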
Persona v2 reload-reassessment is bidirectional discipline — 4-flipped/1-held is as valid as 4-held/1-flipped, and what matters is the honest measurement of first-pass-vs-persona-aligned gap, not matching some target ratio. Session A's operator-invoked reassessment yielded 4-held/1-flipped — first pass had already been persona-aligned on most decisions. Session C's reassessment yielded the inverse: 4-flipped/1-held. This initially felt concerning ("did I do a worse first pass?") but on reflection isn't — Session C's block-review defaults were written DURING the session when the work was fresh + the friction of going back to rework felt expensive. Persona scrutiny surfaced that "expensive to rework" wasn't actually high when measured against the downside of shipping Phase 4 DoD with silent validator failures (BR-5) or unit-economics auto-flip on N=1 CPL data (BR-4). Pattern: when reassessment yield is high in either direction, the first-pass was under-scrutinized — not failing but incomplete. Codify: don't target a held/flipped ratio. Target honest measurement. If reload produces reassessment, act on it.
Nomenclature drift on pre-gen compliance hard-blocks is a silent failure mode that the Phase 3 closing audit alone doesn't catch — need the phase-where-it-triggers test, not the phase-where-it's-identified test. Phase 3 closing audit (session #30) correctly identified the drift (_scan_S4 ambiguity, _scan_S5 undefined, 4 novel anti-patterns unformalized) and deferred reconciliation as "non-blocking for Phase 4." Technically correct at Phase 4 authoring time (result_personalization reads copy_blueprint, not compliance_hard_blocks directly). Wrong at Phase 5 kickoff time (rule_10_compliance_hard_blocks_pre_gen silently fails when FK names don't resolve to compliance-angle-map scan categories). "Non-blocking for THIS phase" is not the same as "non-blocking overall." Pattern: when surfacing a deferred fix, identify the phase where the silent failure triggers (rule_10 runs at Phase 5), not just the phase where the issue was identified (Phase 3). Fix before the trigger-phase, not just before the current-phase. BR-5 flipped on exactly this reasoning — Phase 5 kickoff was the trigger-phase for nomenclature, so the reconciliation had to ship BEFORE Phase 5, not WITH Phase 5.
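A minimal sketch of the trigger-phase test, assuming a hypothetical deferred-fix ledger shape (the phase numbers mirror the nomenclature example above):

```python
# Hypothetical ledger: each deferred fix records the phase where the silent
# failure would TRIGGER, not just the phase where it was identified.
DEFERRED_FIXES = [
    {"id": "nomenclature_reconciliation", "identified_phase": 3, "trigger_phase": 5},
]

def blocking_fixes(kickoff_phase: int) -> list[str]:
    """Fixes that must ship BEFORE this phase kicks off."""
    return [f["id"] for f in DEFERRED_FIXES if f["trigger_phase"] <= kickoff_phase]
```

Run against the kickoff phase, not the current phase: a fix that is non-blocking for Phase 4 still blocks Phase 5.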
Inlining ADR rules into deterministic input files is non-negotiable when agents consume them at runtime — ADR 0009 input-layer-rigor thesis makes this explicit, and BR-3 flipped on exactly this boundary. ADR 0011 originally specified rule_7a-7d in the ADR markdown file with a note that cell-angle-rules.json rule_7 "is extended in spirit by the ADR." That framing treats the ADR as a runtime dependency — Phase 5 agent would have to either (a) read ADR markdown at runtime (fragility; markdown isn't the input-layer contract), or (b) have rules re-specified in agent prompts (duplication risk; prompt drifts from ADR). ADR 0009 thesis: "the pipeline's output capacity is bounded by input-layer rigor." Input files carry HOW. ADRs document WHY. When a rule crosses the threshold from "reference doc" to "runtime dependency," the rule belongs in the input file — inlined, canonical, FK-resolvable. The ADR stays as rationale doc (WHY this decision was made) but loses its runtime role (HOW to execute). Pattern: at any ADR addition that touches agent-consumed input files, ask explicitly "does an agent consume these rules at runtime?" If yes, the rules must ship inlined in the input file. If no, the ADR is the canonical home. BR-3 flip validates this as an explicit pattern — future ADRs touching agent-runtime rules should skip the "ADR-only" intermediate state and ship inlined from day one.
A single reconciliation session can compress multiple block-review flips when they share a version bump. BR-3 (inline rule_7a-7d) + BR-4 (tighten rule_7d) + BR-5 (nomenclature reconciliation) all hit cell-angle-rules.json. Initial routing thought: "schedule 3 separate sessions, one per flip." Actual routing: "one v0.2.1 bump carries all three." Output: 10 renames + 4 new rules + metadata additions + companion compliance-angle-map v1.1 + ADR 0011 update + PHASE-4-DOD closure — one session, one file-version bump, atomic commit. Pattern: when multiple approved changes touch the same file, bundle them into one version bump rather than sequencing. Atomicity preserves reader-integrity (anyone reading cell-angle-rules.json v0.2.1 sees all three flips applied coherently; no half-state). Cost: single-session work. Benefit: downstream consumers (Phase 5 kickoff) read one consistent input layer. Rule of thumb: same-file version bumps should bundle; different-file changes can sequence.
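The same-file bundling rule is mechanical enough to sketch (field names are assumptions):

```python
from collections import defaultdict

def bundle_by_file(approved: list[dict]) -> dict[str, list[str]]:
    """Group approved block-review flips by the file they touch: same-file
    changes share one atomic version bump; different files can sequence."""
    bundles: dict[str, list[str]] = defaultdict(list)
    for change in approved:
        bundles[change["file"]].append(change["id"])
    return dict(bundles)
```

Each bundle becomes one version bump and one commit; a singleton bundle is just a normal sequenced change.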
Kickoff-audit discipline (comparing source-of-truth priority_tier distribution against existing personalization cells) catches scope-miscounts at 30 seconds vs catching them at DoD sign-off. Session C's first action was diffing cell-angle-rules.json v0.2.0 tier distribution (P1_unclaimed 7 + P1_reclaim 5 + P2 15 + P3 5 = 32) against result_personalization.json v0.2.0 existing 10 cells. Immediately surfaced: Session B claimed "all 10 P1 cells complete" but shipped only 3 of 5 P1_reclaim cells (S1.F2 + S2.F6 missed). Actual P1 total was 12. Extended Session C scope to 22 cells (20 P2/P3 per operator + 2 P1 backfill) vs catching the gap at DoD sign-off and having to schedule a Phase 4-B. Net cost: ~30 seconds of audit + ~2 cells of additional authoring. Net save: one full future session. Pattern: at every multi-session-phase kickoff, diff the source-of-truth structural distribution against the actual work-product distribution BEFORE authoring any new content. The 30-second audit is insurance against 1-hour-of-future-work errors. Codify this as standard Phase N kickoff discipline going forward — every DoD-gated phase starts with this audit.
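The 30-second audit reduces to a distribution diff; a sketch under the assumption that both sides are summarized as tier-to-count maps:

```python
def kickoff_audit(source_of_truth: dict[str, int], shipped: dict[str, int]) -> list[str]:
    """Diff the canonical tier distribution against the shipped work product;
    any shortfall is a scope-miscount to resolve BEFORE authoring new cells."""
    gaps = []
    for tier, expected in source_of_truth.items():
        have = shipped.get(tier, 0)
        if have < expected:
            gaps.append(f"{tier}: {have}/{expected} shipped")
    return gaps
```

The Session C case would surface as a P1_reclaim shortfall before a single new cell was authored.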
Variant density is the joint function of priority_tier AND activity_level, not priority_tier alone. P3_gold_standard_parity cells typically deserve minimum variant density (2/2/2/2) because the strategic posture is authority discipline, not multi-segment routing. BUT primary-activity P3 cells (S6.F8 full veteran picture Ch 33 + MHA + Yellow Ribbon + VR&E + TEB; S7.F1 dual-§127 household employer tuition stack) demand wide segmentation because their primary activity_level means they carry significant campaign volume + varied user profiles. These P3 cells earn P1-equivalent density (4-5 rav / 2 dlo / 3-4 lpv / 3-4 trr). Without the joint rule, applying priority_tier minimum to S6.F8 would have under-personalized the highest-volume veteran cell in the system. Pattern: density decisions need the joint axis, not the individual axes. Priority_tier alone misses volume-driven segmentation demand; activity_level alone misses strategic-moat-criticality. The joint rule captures both signals. Rule-of-thumb: primary-activity P3 = P1-equivalent density; secondary-activity P3 = minimum density.
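A sketch of the joint rule — tier labels are shorthand, and only the P1/P3 endpoints named above are encoded, so P2 falls through to the activity test:

```python
def variant_density(priority_tier: str, activity_level: str) -> str:
    """Joint rule: tier sets the floor, activity level can override it upward."""
    if priority_tier.startswith("P1"):
        return "P1-equivalent"   # e.g. 4-5 rav / 2 dlo / 3-4 lpv / 3-4 trr
    if activity_level == "primary":
        return "P1-equivalent"   # volume-driven segmentation demand
    return "minimum"             # e.g. 2/2/2/2
```

Either axis alone would under-personalize S6.F8; the joint lookup catches it.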
Typed winners_vault_readback schema (ADR 0011) resolves null-vs-populated-vs-partial ambiguity before it compounds at Phase 5. Without an explicit schema, Phase 5 would have had to reason at runtime about: (1) is this cell null because it was never bought, or because Stage R errored out; (2) is this partial population legitimate or schema-violation; (3) does a populated-but-below-threshold readback count as signal or fallback-to-null. ADR 0011 makes all three cases schema-enforceable via the stage_r_emission_reason enum (populated / null_never_bought / null_bought_below_volume_threshold / null_stage_r_error) + the minimum-field-set-when-populated rule + the variant-attribution threshold gate (~200 leads/cell). Phase 5 reader logic becomes defensive-at-boundary + trust-inside: validate schema once at ingestion, then trust the shape downstream. Pattern: when two phases of a pipeline exchange structured signal via a single field, invest in the schema BEFORE the consumer phase lands. Schema is cheap at definition time; ambiguity-resolution is expensive at runtime when Phase 5 production code has to handle every partial/null/error combinatorial case.
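The enum plus boundary validation might look like this in Python — field names beyond the stage_r_emission_reason enum values are assumptions:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class StageREmissionReason(Enum):
    POPULATED = "populated"
    NULL_NEVER_BOUGHT = "null_never_bought"
    NULL_BELOW_VOLUME = "null_bought_below_volume_threshold"
    NULL_STAGE_R_ERROR = "null_stage_r_error"

LEAD_THRESHOLD = 200  # variant-attribution gate, ~200 leads/cell

@dataclass
class WinnersVaultReadback:
    reason: StageREmissionReason
    leads: int = 0
    winning_variant: Optional[str] = None

    def validate(self) -> None:
        """Defensive-at-boundary: enforce minimum-field-set-when-populated
        once at ingestion; downstream readers then trust the shape."""
        if self.reason is StageREmissionReason.POPULATED:
            if self.winning_variant is None or self.leads < LEAD_THRESHOLD:
                raise ValueError("populated readback below threshold or missing fields")
        elif self.winning_variant is not None:
            raise ValueError("null readback must not carry variant data")
```

Phase 5 calls validate() once per cell at ingestion instead of re-deriving the null/partial/error combinatorics in production code.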
ADR index hygiene: audit the index at any ADR addition, not just at session-end housekeeping. While appending the ADR 0011 entry to docs/adr/README.md, discovered ADR 0010 (content skills in ContentForge, session #31) was silently missing from the index. ADR file existed in the directory but the human-readable index didn't reference it. Index hygiene matters because it's the only view most future-sessions will ever get of the ADR catalog — if the index is stale, the ADR is invisible. Pattern: every ADR addition should audit the index for drift, not just append. Takes 5 extra seconds; prevents silent ADR orphaning. Could be automated as a pre-commit hook later but habit first, automation second. The general lesson: indexes are the most-read artifact but the least-audited; automate or codify audit habits around them.
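The audit is cheap to automate when the habit sticks; a sketch assuming the NNNN-title.md filename convention:

```python
import os
import re

def orphaned_adrs(adr_dir: str, index_path: str) -> list[str]:
    """ADR files on disk that the human-readable index never mentions."""
    with open(index_path, encoding="utf-8") as fh:
        index_text = fh.read()
    return sorted(
        name for name in os.listdir(adr_dir)
        if re.match(r"\d{4}-.+\.md$", name) and name not in index_text
    )
```

Wired into a pre-commit hook, a non-empty return blocks the commit; run manually, it is the 5-second drift check.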
Session A's template + closure-notes discipline reduces Session B authoring load by a measurable fraction — pattern-copying works when the pattern carries rationale per variant. Session A was 4 cells in what felt like a dense single-session push; Session B landed 6 cells in a short continuation at identical quality because the authoring pattern was fully codified in Session A's session_a_closure_notes + schema_notes + phase_5_runtime_contract. Each cell in Session B was authored as a pattern-fill operation over (axes × band-specific variants × reclaim-or-unclaimed framing) with no re-derivation of format, FK convention, fallback convention, or null-readback contract. Pattern: when Session N defines the template, Session N+1 should not re-invent it — closure-notes pay off exactly when the follow-up session reads them before touching output. Investment-in-codifying-the-pattern returns 1.5x+ throughput on the same quality bar.
Reclaim-vs-unclaimed dlo discipline is a real threat-model distinction that wasn't obvious until Session B made all three reclaim cells adjacent to all three unclaimed cells. Session A was 4 unclaimed cells (all S8), so reclaim discipline didn't surface. Session B forced the distinction: reclaim cells (S1.F6 × Yorkshire, S2.F3 × Degree SNAP, S4.F5 × Learn Grant Writing) require dlo overlays that specifically counter the adjacent hard-violation archetype — anti-establishment framing, $6K+laptop bait, $15K-in-5hrs income lure — by name. Unclaimed cells (S1.F5, S4.F6, S4.F8) require dlo overlays that counter generic affiliate-drift in adjacent ad space — private refinance stripping federal protections, paid debt-relief scams targeting PSLF-denial users, career-coach income promises. Both are tightening overlays but with different threat models. Pattern: when authoring compliance overlays across priority tiers, tier the language-suppression by what competitors ARE doing in that exact ad space, not a generic persona-drift blacklist. Reclaim cells get named hard-violation archetypes as disallowed-context; unclaimed cells get named soft-drift affiliate patterns as disallowed-context. The taxonomy lives in session_b_closure_notes.reclaim_slot_discipline + unclaimed_lane_construction.
Service-commitment conversion risk belongs inline in rav + lpv, not footer-disclosed. Session B's S4.F5 + S4.F8 cells surfaced three scholarship-for-service archetypes with real downside risks at execution: TEACH Grant ($4K/yr converts-to-unsub-loan if 4-yr shortage-area service fails), NHSC/HRSA (HPSA contract + conversion penalty at market-plus-penalty rate), AmeriCorps Segal (1,700 hrs = ~10-11 months full-time). Cheaper affiliate framing omits the conversion math entirely or footer-discloses it. Session B variants surface the conversion risk in the result-card subhead + preempt text — visible at the decision moment, not buried. Persona: "match promise to delivery" requires the delivery-risk to be visible where users can act on it. Pattern: when a scholarship/grant is conditional on service, the conversion-to-loan math is load-bearing disclosure, not footer compliance. Lead-to-sale quality depends on users who sign knowing the downside — those users convert and complete the service commitment; users who sign and learn the downside later churn. Transparency at the decision moment protects downstream economics that naive CPL optimization misses.
Retroactive-qualifying-payment recovery for never-certified-at-qualifying-employer is the biggest real unlock in the S4.F6 cell — and it needs to be the rav.01 variant, not a footnote. When authoring S4.F6 (PSLF employer-qualification audit), the axes matrix surfaced a specific high-value user segment: confirmed_qualifying_501c3 employer × never_certified past_pslf status. These users have potentially years of retroactive qualifying payments available under the existing ECF statutory mechanism that most affiliates don't even surface, let alone lead with. Making this the rav.01 variant (first priority in axes-match resolution) flips the cell's dominant outcome from "find out you might qualify in the future" to "discover retroactive months you already earned." Pattern: when axes-matrix surfaces a specific subsegment with asymmetric upside and an existing statutory mechanism, that subsegment's rav.01 slot should lead with the statutory recovery explicitly, not treat it as an edge case. The downside is users who aren't in that subsegment skip past the rav.01 copy; the upside is users who ARE get the maximum-value pathway surfaced first. Axes-first authoring respects this automatically.
Persona-reload-reassessment during block-review isn't ceremonial — it catches decisions that pass as "reasonable" on first pass but collapse under persona attention weight. Operator requested persona v2 reload mid-review with "internalize. Then reassess decisions. If the same then great. If different explain why." 4 of 5 recommendations held but with stronger conviction (J15/J16/J17/J19). 1 flipped (J18 S8.F5 loan_scenario_band thresholds zero/$10K/$30K → zero/$20K/$40K) because on reread persona's "strong opinions formed by spending real money" explicitly argues against deferring known-wrong thresholds to Stage R validation. 4-held-1-flipped is the operator validation that reload produces real signal — if every recommendation flipped, first pass was sloppy; if zero flipped, reload was ceremonial. The 80% hold + 20% sharpen ratio is what a healthy persona-reload looks like in practice.
Fourth-dimension tool_result_offer_routes is the load-bearing addition that prevents segment-driven lead-quality flattening at the offer gate. Initial Phase 4 scope draft had 3 personalization-deliverable categories (result_archetype_variants + disallowed_language_overlay + lander_preempt_variants). Persona reload surfaced the missing category: given the user's specific tool result + segment, which offer/lander path maximizes lead-to-sale (not CPL)? An S8.F1 user who sees $18K combined aid routes DIFFERENTLY than an S1.F1 user who sees $2K after SAI adjustment — same tool, same number-shape, different objection stack at the offer gate, different lead-to-sale conversion path. Without segment-specific routing, all segments route to the same offer and the $18K-eligibility-specific-LTV signal flattens. Pattern: when designing a personalization layer, the question "does everyone in this segment get the same next step?" is where hidden coupling hides. If the answer is yes, the personalization is incomplete — you've personalized the copy but homogenized the funnel.
Scope-defer on state + employer_name axes is the persona-aligned anti-hallucination move even though it feels like incompleteness. Svelte sessionStorage schema from adaptive-question-flow-spec-v2 includes both. Including them with state: UNPOPULATED placeholders would have let Phase 5 read them as valid variant matchers — EXACTLY the hallucination surface the persona forbids in anti-hallucination rule 6 ("tool-derived numbers are NOT hallucinations — but always frame as what the tool shows"). Placeholder-as-data is not anti-data; it's fake-data. Defer until authority-data-cache is populated in Session C / Phase 8. Pattern: "we'll add more later" is acceptable when the scaffold cost of adding later is low. "We'll populate later" is acceptable when the scaffold exists. "We'll pretend the axis exists and populate later" creates hallucination surface area in the meantime. Choose the first two.
Phase-4 S8.F5 loan-scenario threshold correction ($10K/$30K → $20K/$40K) is specifically where 15-year media-buying experience beats a first-pass gut call. First pass picked thresholds by bucket-rounding ("small" = under $10K, "medium" $10-30K, "large" $30K+). Reassessment surfaced that Standard 10-yr monthly at $10K residual is ~$110/mo — below anyone's IDR-vs-Standard decision threshold. Real pivot is where Standard starts squeezing the budget: ~$220/mo (i.e. residual $20K) for typical middle-income returners; aggressive IDR + OBBBA-cap-biting starts above $40K residual where Standard exceeds $450/mo. The persona's "strong opinions formed by spending real money" IS this kind of threshold calibration. Pattern: when drafting user-input-band cutoffs, ground them in the downstream decision they trigger, not in bucket-rounding convenience. The whole point of bands is to produce different behavior per band — if both adjacent bands produce the same behavior, the threshold is in the wrong place.
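The calibration can be cross-checked against the standard amortization formula; the 6.5% rate below is an assumed illustrative rate, not from the session:

```python
def standard_monthly(balance: float, annual_rate: float = 0.065, years: int = 10) -> float:
    """Standard 10-yr amortized monthly payment for a residual balance."""
    r = annual_rate / 12
    n = years * 12
    return balance * r / (1 - (1 + r) ** -n)
```

At this rate, a $10K residual comes out near $113/mo, $20K near $227/mo, and $40K near $454/mo — consistent with the ~$110 "below anyone's decision threshold", ~$220 "Standard starts squeezing", and $450+ "cap-biting" breakpoints that drove the $20K/$40K thresholds.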
Inline-executing a skill's documented logic produces artifacts that are indistinguishable from a real invocation — as long as the skill spec is complete. The 28-task plan declared that acceptance tests 3–6 (/context-refresh edu dry-run + write, section-12 refresh, ssd skeleton) would be operator-run in a ContentForge-rooted Claude Code session. But the campaign-forge root session that built the skills had all the same inputs: brain path resolved (current working dir), v2 persona loaded, references written, manifest YAML parseable, every brain source file readable. Executing the documented /context-refresh flow inline — load manifest, load each source, synthesize 12 sections through persona lens, run hard-gate checks — produced edu.md (755 lines, 16 angles, 4/4 hard gates passing) and ssd.md (skeleton per manifest skeleton_mode: true) without the skill directory being the active cwd. Pattern: complete skill specs are executable specifications — if the SKILL.md can't be walked inline to produce the artifact, the spec has gaps. Every new skill should pass the "can another agent execute this inline from the markdown alone" test before being marked built.
Gitignore defaults can silently block cross-repo skill deployment — check before committing skill files into a repo that was cloned from a template. ContentForge's .gitignore had .claude/ as a catch-all (default posture inherited from the original scaffold). First git add .claude/ failed with "paths are ignored." Fixed by splitting the ignore: .claude/worktrees/, .claude/settings.local.json, .claude/sessions/ stay ignored; skills, references, configs, vertical contexts are tracked. Pattern: when a repo ships skills that need to be versioned across environments, audit the gitignore for blanket .claude/ rules and split them before first commit. The default "ignore all claude state" is right for single-operator worktrees and wrong for shared-skill repos.
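The split ignore might look like this (only the three ignored paths come from the fix described above):

```gitignore
# Operator-local Claude state stays ignored...
.claude/worktrees/
.claude/settings.local.json
.claude/sessions/
# ...but no blanket .claude/ rule: skills, references, configs,
# and vertical contexts under .claude/ stay versioned.
```

The key change is deleting the catch-all `.claude/` line; narrower rules then only exclude per-operator state.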
The manifest pattern paid off on its very first run — compliance file is config/compliance/education.json, not edu.json, and archived buyer-intelligence lives under _archived-leadamplify-port/. Both are real brain-side path conventions that would have broken any hardcoded path in the /context-refresh skill. Session #28's pre-emptive move to a brain-owned YAML manifest (docs/content-source-manifest.yaml) absorbed both quirks cleanly: the compliance path points to config/compliance/education.json with a commented note; the buyer-intelligence entry is marked required: false with a note that it's expected to relocate post-Psychology-Engine. Manifest rewrites in one PR when the path moves; zero skill code changes. Pattern: the payoff of externalizing unstable-producer coupling to a manifest shows up immediately, not at Phase N+1. Track every manifest entry that has a note or a required: false flag as a technical-debt ledger entry — when the producer stabilizes, the manifest can shed the workaround.
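An illustrative shape for the manifest — the paths and the required flag come from the note, but the key names are assumptions:

```yaml
# docs/content-source-manifest.yaml (illustrative shape, not the real file)
sources:
  compliance:
    path: config/compliance/education.json   # note: education.json, not edu.json
    required: true
  buyer_intelligence:
    path: _archived-leadamplify-port/buyer-intelligence.md
    required: false   # expected to relocate post-Psychology-Engine
```

Every commented note or `required: false` entry doubles as a technical-debt ledger line; when the producer stabilizes, the workaround gets deleted from one file.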
Inline execution on sequential shared-file work is dramatically faster than subagent chains — operator memory rule validated in practice. feedback_inline_over_subagents.md says: "For sequential work on shared files, execute inline rather than dispatching slow subagent chains." The 28 tasks were all single-threaded — every task modified or created files that later tasks would reference, commit, or depend on. Spawning even one subagent per task would have introduced cold-start overhead and lost shared-context on every handoff; a 3-repo build would have blown well past an hour. Inline execution ran brain commits first (3 files, 1 commit), ContentForge scaffold + references (9 files, 1 commit), skills (2 files, 1 commit), CLAUDE.md + ADR (2 files, 1 commit), context docs (2 files, 1 commit), specs dashboard (2 files, 1 commit) — sequential because each commit's metadata referenced earlier artifacts. Pattern: subagents are for independent parallel work. Sequential-shared-file work is inline. Check the task-dependency graph before dispatching — if every task reads the previous task's output, don't dispatch.
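The dispatch decision reduces to a dependency-graph check; a sketch where each task maps to the set of earlier tasks whose output it reads:

```python
def should_inline(tasks: dict[str, set[str]]) -> bool:
    """If every task after the first depends on some earlier task's output,
    the work is sequential-shared-file: execute inline, don't dispatch."""
    seen: set[str] = set()
    for i, task in enumerate(tasks):
        if i > 0 and not tasks[task] & seen:
            return False   # found an independent task -> parallelizable
        seen.add(task)
    return True
```

A False result means at least one task has no upstream dependency and could go to a subagent; True means a handoff would only add cold-start cost.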
The block-review pattern established session #27 scales beyond Phase 2 synthesis JSONs to Phase 3 operationalization artifacts — same cadence closed Phase 3 in one session after a prior-session draft. Session #27 used per-artifact summary + 3–6 explicit judgment calls + operator verdict against 4 Phase 2 JSONs (situation_family_map / desire_family_matrix / reality_map / trigger_reality_matrix) all machine-verified with programmatic coverage claims. Session #30 applied the identical cadence to Phase 3's 4 operationalization artifacts (cell-angle-rules.json v0.1.0 + tool-blueprint-patches.md + adaptive-question-flow-spec-v2.md + PHASE-3-NOTES.md) with different content characteristics — structured rule encodings, tool-spec diffs, priority-tier re-rankings, operator-decision queues. Cadence held: 5 judgment calls on artifact #1, 5 on #2, 4 on #3, 1 resolution on #4. Total operator engagement ~45 min matching session #27's time budget. Pattern: per-artifact + judgment-call surfacing is a process artifact worth preserving across phase transitions — don't invent a new review mode when Phase N+1 output has different schema than Phase N output. The meta-pattern (structured questions on load-bearing decisions) transfers cleanly.
Persona v2 reload at the strategic-decision trigger produces operator-voice recommendations with actionable teeth — without it, drift toward neutral summary is invisible in real time. Operator explicitly asked "Then strategically answer J1-J5 as the operator" after presenting the J1-J5 judgment calls for artifact #1. Session #30 had persona loaded at start and again at the strategic-review trigger. Output: operator-voice recommendations with explicit modifications ("Option B with teeth" on unit-economics flag, "Affirm with one caveat" on pre-gen compliance cut, named budget caps, named validator rules, declared tradeoffs). Persona-loaded agents don't produce "here are both sides, your choice" summaries — they produce "here's my call and here's why, with fallback if you push back." That's the actionable input operators can route around or accept, not a neutral wishy-washy option matrix. Pattern: if operator reviews feel like a neutral summary menu, the persona isn't loaded. Re-load explicitly before any judgment-call surfacing where "what would the $50M media-buyer say?" is the unlock.
Nomenclature drift on large cross-referenced artifacts is inevitable in single-session authoring — spot-audit at close catches drift cheaply, full-file validator automation can wait. Session #29 authored cell-angle-rules.json with 32 cells each carrying a compliance_hard_blocks[] array that references scan-pattern IDs from compliance-angle-map.md Part III. The compliance-angle-map defines S1–S4 scan categories. Session #29 authoring introduced _scan_S5 suffixes on absolute-promise entries and 4 novel anti-pattern entries not in Part III. Full-file FK-validation would have been ~30 min of scripting in session #29. Session #30 did a 5-cell spot-audit instead (1 cell from each priority tier + S8 lead), found the drift categories, documented nomenclature-reconciliation recommendations for Phase 5 kickoff, and confirmed every entry was conceptually fail-class (no false-positive risk). Total time: ~10 min. Pattern: when authoring output that cross-references a living document under revision, expect nomenclature drift. Spot-audit on close > full-file validator in mid-session. The full-file validator is a Phase N+2 investment, not a Phase N blocker. Accept the drift, document it explicitly, queue reconciliation for the next natural break.
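The spot-audit can be sketched as follows; the `_scan_<category>` suffix convention and the entry names are assumptions for illustration:

```python
import random

SCAN_CATEGORIES = {"S1", "S2", "S3", "S4"}  # defined in compliance-angle-map Part III

def spot_audit(cells: dict[str, list[str]], sample_size: int = 5) -> dict[str, list[str]]:
    """Sample a handful of cells and report hard-block entries whose
    scan suffix doesn't resolve to a defined scan category."""
    picked = random.sample(sorted(cells), min(sample_size, len(cells)))
    drift: dict[str, list[str]] = {}
    for cell in picked:
        bad = [fk for fk in cells[cell] if fk.rsplit("_", 1)[-1] not in SCAN_CATEGORIES]
        if bad:
            drift[cell] = bad
    return drift
```

The same loop over all 32 cells is the Phase N+2 full-file validator; the sampled version is the 10-minute close-of-session check.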
"APPROVED AS-DRAFTED" default-accept on queued decisions is efficient when the drafted options carry the operator's expected defaults. Phase 3 had 10 queued operator decisions across artifacts #2 + #3 + #4. Operator approved #2 and #3 "AS-DRAFTED" — default options stand on all 9 relevant decisions, documented in PHASE-3-NOTES. Only Decision #10 (S8 rollout sequencing) was explicitly selected (option B). Total decision-selection time: one-liner per artifact. Alternative would have been asking the operator 10 sequential binary choices, which is cognitively taxing and often produces decisions by fatigue rather than judgment. Default-accept works because the drafted options were authored in-persona with operator's strategic preferences as the north star — the defaults represent a senior-ops reasoning best-guess, not arbitrary picks. Pattern: when surfacing N queued decisions to an operator, author the defaults in the operator's voice + strategic frame. Then "approve as-drafted" is a reasonable response, not a failure mode. If defaults feel arbitrary, re-draft before review.
Priority-tier encoding ({unclaimed-lane / hard-violation-reclaim / competitor-occupied-differentiated / gold-standard-parity}) is more useful for Phase 5 queue ordering than activity-level (primary/secondary). Phase 2's reality_map.json tagged each cell with activity_level indicating whether the cell is load-bearing for angle generation. That's necessary but insufficient for ordering — Phase 5 needs to know which cells represent structural moat vs parity play. S8.F3 is secondary in activity-level but P1_unclaimed_lane in priority-tier because zero competitor density + VoC-validated demand = strongest category-1 opportunity even at secondary load-bearing. Without the tier axis, Phase 5 would burn budget on high-density cells that merely require differentiation rather than unclaimed cells that require presence. Pattern: when a schema has to serve both "is this cell active" and "which active cell should Phase 5 prioritize", split the concerns — activity-level answers the first, priority-tier the second. Mixing them produces a ranking that Phase 5 can't act on deterministically.
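The split-concerns rule as a queue builder — the expanded tier key names are illustrative (the note abbreviates them), and the "inactive" level is hypothetical:

```python
TIER_ORDER = {
    "P1_unclaimed_lane": 0,
    "P1_hard_violation_reclaim": 1,
    "P2_competitor_occupied_differentiated": 2,
    "P3_gold_standard_parity": 3,
}

def phase5_queue(cells: list[dict]) -> list[str]:
    """activity_level gates queue membership; priority_tier orders it."""
    active = [c for c in cells if c["activity_level"] in ("primary", "secondary")]
    return [c["id"] for c in sorted(active, key=lambda c: TIER_ORDER[c["priority_tier"]])]
```

Under this ordering a secondary-activity P1_unclaimed_lane cell (the S8.F3 case) outranks a primary-activity P2 cell, which is exactly what activity-level alone can't express.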
Under-deployed trigger concentration is a fifth axis adaptive-question prioritization needs — four axes missed the moat. v1 of adaptive-question-flow-spec.md scored branch points on conversion impact × compliance impact × data requirement × implementation complexity. That's the right four axes for a tool team. It missed that some HIGH-conversion-impact branches activate triggers the competitor cohort has already saturated (diminishing returns on differentiation) while other HIGH-conversion-impact branches activate under-deployed Tier-2/3 triggers (structural moat activation). Adding the trigger-deployment axis re-ranked: Aid Letter Decoder PLUS-heavy red flag jumped from HIGH to PRIORITY-1 (activates demonstration_beats_claim + test_dont_guess_proof, both zero-competitor); occupation disambiguation dropped to PRIORITY-4 (activates only saturated cognitive_ease_fluency). Pattern: adding an axis feels like complexity but it's actually the axis that makes the existing axes actionable. Without it, two branches with identical conversion/compliance/complexity scores look equivalent; with it, one is moat activation and one is parity maintenance.
Tool-spec gaps surface cleanest when mapping cells, not when auditing tools. Session #25's annotation-pass surfaced the PLA-modeling gap in Time-to-Degree and the employer-credential gap in Career Salary Explorer while annotating competitor-teardown clean references (WGU competency-based + Coursera × Google). Phase 3 cell mapping then confirmed them — cells S5.F3, S6.F8, S7.F1 all load-bearing on PLA; cells S3.F6, S4.F4, S7.F4 all load-bearing on employer-credential surfacing. Auditing a tool in isolation ("is this tool complete?") produces generic "we could add more features" answers. Asking "what does this cell need from its primary tool to instantiate the archetype?" produces specific "this tool is missing THIS feature to serve THIS cell" answers. Pattern: tool-spec refinement should be consumer-driven (cell rules demand X) not producer-driven (tool audit identifies Y). The consumer-driven gap list is smaller, more specific, and blocked on real usage downstream — the producer-driven list has no ship-priority signal.
S8 4-of-4 unclaimed-lane concentration is atypical and should be treated as Phase 5 validation budget, not scale budget. Normal situation distribution has 0–2 unclaimed-lane cells (S1 has 1, S4 has 2). S8 has 4 — every active cell is unclaimed. That's a strong signal of structural moat AND a strong signal of "no baseline data to estimate CPL / CVR against." situation_family_map.json honestly flagged S8 unit economics as "speculative; no competitor baseline." First S8 campaign at Phase 7 should be treated as validation (not scale) — small budget, fast-read cadence, feeding Stage R learnings back into cell-angle-rules.json → winners_vault_readback. Pattern: when a situation is fully unclaimed across its active cells, it's an opportunity concentrated enough that your first campaign is more about belief-update than revenue. Size the budget accordingly.
The superpowers brainstorm → spec → plan workflow enforces the "complete spec before build" rule as a process, not a hope. Operator's stated preference is polished first-time builds over MVP iteration, and "never build until vision is 100% locked." Doing this by memory works for small features but breaks on multi-session architectural work like ContentForge Phase 1 Foundation. The superpowers chain imposes structure: brainstorming skill FORCES clarifying-questions-one-at-a-time + 2-3 architectural approaches with tradeoffs + section-by-section design approval + self-review + operator review before ANY plan is written. Writing-plans then produces 28 bite-sized tasks with full file contents inline (no placeholders). Total output was a 524-line spec + 2,371-line plan + 3 saved memories before a single line of skill code was written. Feels slow — is exactly the operator-stated pace. Pattern: when the operator explicitly values polished builds over iteration, the workflow that produces polish is the superpowers chain, not ad-hoc design. Use it whenever the scope spans multiple files + multiple sessions.
Pattern synthesis from 3 external repos produced higher-quality design than inventing from scratch. Three repos were analyzed this session via operator-surfaced Obsidian breakdowns: addyosmani/web-quality-skills (CWV budgets + WCAG POUR + 75th-percentile rule), coreyhaines31/marketingskills (shared product-marketing-context foundation + angle-first generation + JTBD Four Forces + freetool scorecard), AgriciDaniel/claude-seo (orchestrator fan-out + progressive disclosure + graduated quality gates + schema deprecations). Every hard-gate rule in the final design traces to one of these proven sources + the v2 persona. Inventing equivalents from scratch would have produced weaker rules with no prior validation. Pattern: before designing a skill bundle, find 2-3 high-star community repos doing something close; steal the patterns with attribution. The ones with 1,700+, 5,000+, 21,000+ GitHub stars earned that by shipping what works.
Manifest-driven source loading is mandatory when research pipeline is mid-flight. First draft of the design hardcoded brain source paths inside the /context-refresh skill. Operator flagged: "we won't know exact references until the research pipeline is complete, which it almost is." True. Path-reality check confirmed mismatches: compliance is at config/compliance/education.json (not edu.json); buyer-intelligence.md is under _archived-leadamplify-port/ awaiting Psychology Engine promotion. Hardcoding those would have broken as soon as the next research deliverable landed. Externalized to docs/content-source-manifest.yaml (brain-owned). Research updates touch one YAML; skills keep working. Pattern: when a consumer depends on an unstable producer, put the coupling in a single versioned manifest document the producer owns. Both sides stay simple; the coupling is explicit.
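The decoupling is small enough to sketch. A minimal resolver, assuming the manifest has been loaded into a dict (keys and paths here are illustrative; the real manifest lives at docs/content-source-manifest.yaml and would be yaml.safe_load()-ed, not inlined):

```python
# Illustrative manifest contents; in production this dict is parsed from the
# brain-owned YAML file, never hardcoded inside a skill.
MANIFEST = {
    "compliance": "config/compliance/education.json",
    "buyer_intelligence": "_archived-leadamplify-port/buyer-intelligence.md",
}

def resolve_source(key: str) -> str:
    """Skills ask for a logical name; only the manifest knows the physical path."""
    try:
        return MANIFEST[key]
    except KeyError:
        raise KeyError(
            f"source '{key}' is not in the manifest; update the manifest, not the skill"
        ) from None

print(resolve_source("compliance"))  # → config/compliance/education.json
```

When research moves a file, the YAML changes and every consumer keeps working; a missing key fails loudly at the manifest boundary instead of as a silent broken path inside a skill.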
Content writers and copywriters sharing ONE persona is a structural rule, not stylistic guidance. Initial architectural instinct was two persona tracks: v2 for pipeline copy skills, something-else for ContentForge content skills. Operator rejected during clarifying-questions: "The writers of this content need to have the persona of the affiliate. Our copywriting team has to be in sync with our content writing team. They work together — it's got to convert." One persona loaded by every skill across both repos. Articles funnel to tools, tools are honest-by-design, research-backed AND converts (not either/or). Saved to memory as feedback_content_copy_persona_unity.md. Pattern: when an operator with 15 years of media-buying experience says "these are the same thing," believe it. The instinct to separate content and copy comes from SaaS org charts; operator-run affiliate shops are one voice by necessity.
SSD replacing auto-insurance as #2 vertical is an operator-grade strategic decision future-Claude sessions must preserve. Auto-insurance was the planned #2 per earlier memory (project_expanded_verticals.md). Operator surfaced mid-session: "auto insurance is already so competitive. SSD on the other hand — I'll bet we can find a lot of valuable data outside of whatever the common things are. And I have buyers." Three decision factors: competitive landscape saturation, existing buyer relationships, alignment with gov-fine-print research moat. project_go_to_market.md updated with full reasoning chain. Pattern: when the operator pivots a strategic decision, capture not just the new decision but the reasoning that produced it — future sessions need to evaluate future pivots against the same criteria. "SSD is #2" is a fact; "SSD is #2 because of X, Y, Z decision factors" is a reusable framework.
Block-review with structured judgment-call surfacing is more efficient than line-by-line walkthrough for artifacts where synthesis rigor already passed programmatic validation. Session #26's 4 Phase 2 JSONs carried 134/134 trigger coverage, 8 situations, 8 families, 32 active cells — all machine-verified. Bringing those to operator review as a schema walkthrough would have burned an hour on plumbing the operator already trusted. Instead, per-JSON summary + 3–6 explicit judgment calls (S8 priority, TikTok exclusion, CPL bands, F7 retention, universal moat trigger, anti-pattern hard-block) let the operator interrogate the strategic bets and skip the structural verification. Approval landed in one round-trip. Pattern: when an artifact has already passed programmatic validation, review should index on strategic judgment, not structural correctness. Surface the load-bearing decisions explicitly, let the reviewer approve or edit by section, skip sections that don't need eyes.
The moat thesis is the single load-bearing claim Phase 5 angle generation leans hardest on — operator pressure-test on it is a bigger unlock than any individual cell approval. The claim "interactive_tool_conversion_lift universal across our cells + absent across competitor cohort" is the one-line structural moat the entire Phase 2 output depends on. Every cell in reality_map.json activates it; every angle Phase 5 generates rides on it. If the thesis is wrong, 32 cells rebuild. Getting operator pressure-test on that single claim during review was higher-leverage than reviewing any individual cell's tool-match justification. Pattern: every phase has one load-bearing claim the rest of the structure rests on. Identify it explicitly and stress-test it during review — don't let it pass silently while reviewer attention is on lower-stakes details. The cheapest time to catch a load-bearing-claim error is before downstream work builds on it.
Phase 2 proper synthesis was truly mechanical per scope-doc prediction — the 8 Gate 1–3 deliverables carried every input needed, and session time went into structuring + cross-referencing rather than inventing. Writing the 4 JSONs (situation_family_map, desire_family_matrix, reality_map, trigger_reality_matrix) was a combinatorial exercise against the compliance-angle-map (F1–F8 definitions) + floor-multiplier-map (per-tool multiplier stories) + voc-themes (desire-family VoC validation) + competitor-teardown + compliance-line-crossing-inventory (competitor deployment evidence). The scope doc predicted this: "combine inputs → emit four JSONs → validate against cross-cutting principles → ship. No invention, no memory-pull, no blind spots." It held exactly. Pattern: when a research sprint genuinely does the work of scoping its own downstream synthesis, the synthesis phase stops requiring creativity and becomes operational. The payoff justifies the upfront investment in rigorous research deliverables.
S8 unclaimed-lane situation surfaces only when P1 decision-rule forces discipline. Meta corpus had ZERO advertisers targeting "middle-income returner delayed by cost" — the VoC-validated self-disqualification persona (quotes 8, 9, 11 "too much money to qualify"). Without the P1 principle ("only categories 1 and 3 ship"), S8 might have been cut for lack of competitor-demand validation. With P1, S8 is identified as the strongest category-1 strategic opportunity in the entire situation map — tool-native dismantling of a belief with zero competitor contest. Pattern: discipline around decision rules surfaces moats that pure market-validation signals miss, because the strongest moats have empty competitor cells for a reason. Where the signal is "no one is doing this," the question is whether the empty cell has a structural barrier (tool-requirement, data-requirement, compliance-requirement) or a market-signal deficit — the former is moat, the latter is trap.
Tier-1 interactive_tool_conversion_lift mapping across 100% of reality-map cells + 0% of competitor cells is the structural moat in one line. Every active cell (32 of 32) activates this trigger. Competitor corpus shows zero school-direct + zero aggressive-affiliate deployment (per competitor-teardown.md "calculator-led creative: zero schools deploying"). The moat is not "CampaignForge has some tool-led angles" — it's "the entire reality map is tool-native by construction while competitors are tool-absent by construction." Pattern: when a trigger library entry maps across 100% of your cells and 0% of competitor cells, the advantage is structural, not differentiation. Structural advantages compound because competitors can't copy them without building equivalent infrastructure; differentiation advantages converge because copy is cheap.
Documenting inactive cells with explicit rationale preserves optionality for Stage R. Of the 64 possible cells (8 situations × 8 families), 32 are primary/secondary active and 32 are inactive-this-pass. Rather than omit the inactive half, reality_map.json documents each inactive cell with rationale (e.g., "S1 × F3/F4/F7/F8 — SAI-blindsided families are already post-qualification and don't need effortless-qualification or urgency"). Stage R campaign signal can later elevate an inactive cell based on unexpected conversion patterns. Pattern: mark the map complete, don't leave it sparse. Future-you reviewing the map after campaigns run needs to distinguish "we decided this cell doesn't matter" from "we forgot this cell existed." Explicit inactivation ≠ omission.
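The distinction is mechanical once inactive cells are records rather than gaps. A sketch of the convention (field names illustrative, not the actual reality_map.json schema):

```python
# An active cell and an explicitly inactivated cell sit in the same list.
cell_active = {"cell": "S1.F6", "status": "primary", "tool": "EFC Calculator"}
cell_inactive = {
    "cell": "S1.F3",
    "status": "inactive_this_pass",
    "rationale": "SAI-blindsided families are post-qualification; F3 doesn't apply",
}

def was_considered(reality_map: list[dict], cell_id: str) -> bool:
    """Stage R review question: did we decide about this cell, or forget it?"""
    return any(c["cell"] == cell_id for c in reality_map)

print(was_considered([cell_active, cell_inactive], "S1.F3"))  # → True
```

An omitted cell returns False here, which is exactly the "we forgot this existed" signal the complete-map rule is designed to eliminate.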
Compliance-line-crossing inventory (session #25) + reality_map cells produce a tight one-to-one mapping on first overlay. Every Yorkshire violation (A1–A8) traces cleanly to S1.F6 (high-SAI institutional-legitimacy). Every Degree SNAP variant (A10) traces to S2.F3 + S2.F1. Every Learn Grant Writing violation (A12) traces to S4.F5 + S4.F4. The inventory is functioning as a competitor-pattern-to-reality-cell dictionary exactly as session #25's Lessons Learned predicted. Pattern: when two artifacts from different sessions produce tight structural mapping on first overlay, the underlying taxonomy is sound — don't revise. Taxonomy-validation-via-cross-referencing is one of the cheapest quality signals available, and a mismatch would have been the clearest signal that the taxonomy needed rework.
Programmatic validation catches transcription errors that semantic review misses. A single Python script (json.load + set comparison) verified 134/134 trigger coverage + JSON parse validity + structural cell count in <2 seconds. During drafting I'd written "detailed_entries_written: 135" in the trigger-reality-matrix metadata block; the script caught the drift — actual count was 134. No semantic-review pass would have caught that. Pattern: machine-verifiable invariants catch the errors human review invariably misses. Every structured research output should have a one-command validation script, and the script should run before "shipped" is claimed.
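The whole check is a dozen lines. A minimal sketch of the invariant script (key names and file layout are illustrative, operating on in-memory dicts here so the bug class is visible):

```python
def validate(matrix: dict, reality: dict) -> list[str]:
    """Return human-readable errors; an empty list means the invariants hold."""
    errors = []
    # Set comparison: every trigger a cell references must have a matrix entry.
    referenced = {t for cell in reality["cells"] for t in cell["triggers"]}
    defined = {e["trigger_id"] for e in matrix["entries"]}
    if referenced - defined:
        errors.append(f"uncovered triggers: {sorted(referenced - defined)}")
    # Metadata counts must match reality, not intention.
    claimed = matrix["metadata"]["detailed_entries_written"]
    if claimed != len(matrix["entries"]):
        errors.append(f"metadata claims {claimed} entries, file has {len(matrix['entries'])}")
    return errors

# Tiny corpus reproducing the session's actual bug class: a drifted count.
matrix = {"metadata": {"detailed_entries_written": 3},
          "entries": [{"trigger_id": "T1"}, {"trigger_id": "T2"}]}
reality = {"cells": [{"triggers": ["T1", "T2"]}]}
print(validate(matrix, reality))  # → ['metadata claims 3 entries, file has 2']
```

In the real script the two dicts come from json.load, so a parse failure is itself a useful check before any invariant runs.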
Pre-flagged collection-pass observations are the single highest-leverage synthesis throughput multiplier. Session #24's Meta agent SUMMARY.md pre-identified the top 5 line-crossing patterns (Yorkshire / Degree SNAP / Scholarship System / Prestige Health / Learn Grant Writing) plus the moat-validation-by-omission finding ("financial aid calculator" returned 1 ad). When session #25 opened, the case-study skeleton was already built — synthesis time went into trigger annotations + tool-backed translations + cross-cohort patterns rather than re-discovering structure from raw text. Pattern: every collection agent should be tasked with producing a SUMMARY that pre-identifies 5–10 highest-signal cases for the synthesis pass, not just dumping raw data. The collector knows the corpus better than the synthesizer ever will after a one-pass read — capture that knowledge upfront.
Compliance heuristic gaps surface during synthesis, not collection. The simple regex/keyword classifier in _parse.py missed 4 Yorkshire variants that the synthesis pass flagged as violations: A3 "$72,000 lie" (no dollar-anchor regex match), A11 "don't claim your child as a dependent" (insider-knowledge without explicit "free money" token), A12 "never submit your FAFSA same day" (cliffhanger-curiosity-gap with no aggressive trigger word), A15 "$5,500 federal loan = no real help" (anti-establishment framing weaponizing federal-aid distinction). Heuristic was tuned for explicit-claim violations; missed framing-violations entirely. Phase 2 v2 pass needs: cliffhanger-curiosity-gap detection + anti-establishment-framing detection + N=1-case-as-method detection. Pattern: heuristics built from prior-known violations don't catch novel framing patterns; always run a manual-review pass on a sample to surface what the regex isn't seeing.
SimilarWeb engagement metrics give measurable UX benchmarks for tool pages. SNHU's 21.25% bounce + 7.30 pages/visit + 7:35 duration is a concrete reference standard for any CampaignForge tool page that gets paid traffic; Strayer's 11.77 pages/visit is the high-engagement ceiling reference. ASU Online subdomain's 70.81% bounce / 1.61 pages/visit / 1:14 duration is the failure-mode reference (paid landing without next-click logic). Phase 6 lander-archetype specs should reference these benchmarks explicitly rather than rely on internal-only quality conventions. Engagement metrics from SimilarWeb are not an "interesting data point" — they're the empirical floor your landing experience has to clear (the SNHU standard) and the demonstrated ceiling of what competitors have validated as feasible at scale (the Strayer reference).
Operator-requested mid-session persona reload validates the "Strategic-decision triggers" rule from CLAUDE.md. Operator surfaced advanced-affiliate-marketer-system-prompt-v2.md mid-session before the synthesis pass. Re-load preceded both deliverables; both drafted with fresh attention weight on the structural-edge thesis (tool-backed proof > aggressive claim) and the compliance-as-angle-input frame. Without the reload, synthesis would likely have collapsed the school-direct vs aggressive-affiliate cohort distinction into "competitor analysis" rather than treating it as the empirical foundation of the moat thesis. Pattern: re-loads at strategic-decision points are not optional; the compounding cost of weakened persona attention across a multi-hour synthesis pass is invisible until the output looks generic.
The "calculator-led creative" cross-cohort pattern table cell is empty across all 11 schools. Of 7 patterns mapped (floor-number / competency-based / military-discount / transfer-credit / tuition-guarantee / creator-testimonial / calculator-led), six have multiple school-direct deployers; the seventh has zero. That's not a coincidence — school-direct advertisers don't have multi-school calculators because they only know their own pricing/aid model. Aggressive affiliates don't have calculators because building a real EFC calculator is engineering investment they refuse to make. The CampaignForge moat is structurally locked — not because it's clever, but because no one else can build it from where they sit. Pattern: when a competitive teardown surfaces an empty cell that nobody is filling, that's a moat opportunity if and only if there's a structural reason no one fills it. Empty cells with structural barriers compound; empty cells without barriers get filled by the next entrant.
"Tool-backed compliant translation" is the most reusable per-case annotation in the entire inventory. Every Section A (hard violations) case got tagged with the specific CampaignForge tool that captures the same psychological payload through proof rather than claim. That tagging is the direct seed for Phase 5+ Strategy Engine angle-generation: when the agent reads "Yorkshire's $100K windfall claim violates F6 institutional-legitimacy", it now has the explicit "EFC Calculator with transparent asset-treatment disclosure" route to the same audience. The line-crossing inventory is functioning as a competitor-pattern-to-tool-mapping dictionary, not just a violations catalog. Pattern: every research-input artifact should carry direct-actionable tagging for downstream agent consumption, not just observations — the translation work happens during research, not during generation.
OBBBA-aware tool integration mapping in clean-reference cases (C-section) reveals tool-coverage gaps. While annotating the WGU competency-based reference, surfaced that Time-to-Degree Calculator needs a PLA (prior learning assessment) modeling layer to compete on competency-based pathway comparison — current scope doesn't include this. While annotating Coursera × Google Career Certificate reference, surfaced that Career Salary Explorer should surface employer-recognized credentials as part of the pathway, not just degrees — current scope is degree-centric. Both gaps now flagged for Phase 3 tool blueprint refinement. Pattern: clean-reference annotation in a competitive teardown is the cheapest possible scope-validation pass for your own tool roadmap. If your tool can't beat the clean reference, you have a tool-spec problem before you have a copy problem.
Ad-platform scraping from the operator IP is a permanent ban risk — not a per-session judgment call. Operator runs Meta, Google, TikTok ad accounts from this machine. Any scraping of those platforms’ ad-intelligence properties (Ad Library, Transparency Center, Creative Center) or competitor landers that those platforms’ fraud-detection systems might fingerprint = correlation-to-banned-account-behavior risk. Codified today as durable auto-memory (feedback_firecrawl_exclusive_scraping.md) + hard-locked at the top of every agent brief in the session. Federal authority domains (ed.gov, bls.gov, studentaid.gov, va.gov, irs.gov) carved out as lower-risk for direct operator-initiated bulk-federal-data scripts. Litmus test: “If the ad platform’s fraud team saw this IP request tomorrow, would it look like browsing or scraping?”
Google Ads Transparency Center is structurally unscrapeable via unauthenticated Firecrawl. 15 min of retry exhausted the surface: SPA hydration barrier (post-bootstrap internal RPC; 7s wait-for insufficient for virtualized card mount; 2.7 MB raw HTML captured with zero creative IDs / dates / text), advertiser-detail pages auth-wall with a sign-in prompt, interact -c code sandbox has a ctx-already-declared collision. Typeahead confirmed advertisers exist but detail pages inaccessible. Same verdict for TikTok Creative Center (JS-gated + geo-gated + anonymized + no landing URLs). Escape hatches: (1) commercial ad-intel tool ($79–$299/mo SpyFu, Semrush, AdClarity, or BigSpy), (2) operator-authenticated separate-IP browser session, (3) firecrawl interact -c --python which avoids the Node sandbox collision (2–3h engineering, no auth-wall guarantee). Pattern to remember: if a platform’s whole value is behind SPA hydration + auth walls, Firecrawl alone is the wrong tool; don’t grind. The first Google agent burned 939s and produced zero output trying to brute-force this. The retry with tight time-boxed scope produced the same finding gracefully in 8 min. Tight-scope agent briefs with explicit stop-triggers beat broad-scope grinds every time.
Authority-data-cache is quality-over-quantity — 101 more FSA archive files would have delayed Gate 3 close for marginal angle value. The seductive move after getting 31 CSVs of FAFSA data converted was to go back and pull the 7-year archive (2018-19 through 2026-27, 101 queued files, blocked by VPN DNS sinkhole). Resisted. Current cache already satisfies ad-relevant use cases: current cycle snapshot + just-completed cycle baseline + in-progress cycle momentum + 2023-24 national demographics (federal TAM validation) + per-school specificity. OBBBA is the structural break — pre-OBBBA data collapses to one baseline comparison regardless of how many years deep. Ads need current amounts + current status + one YoY comparison + demographic validation + per-school specificity, not a historical trends dashboard. Defensibility is a function of source URL + retrieval date + authority tier on each claim, not archive depth. Skipping 101 files of marginal signal is the right call when the cost is ingest overhead + quarterly refresh burden vs zero new angles. Pattern to remember: more data without a specific downstream use case is bloat, not rigor.
Parent PLUS $20K/$65K cap + $257,500 lifetime max + consolidation 3-month buffer closed today = high-urgency current-borrower ad signal. OBBBA record synthesis surfaced a timing detail buried in the FSA big-updates page: consolidate at least 3 months before 2026-07-01 to guarantee disbursement before cutoff, or lose access to IBR/ICR/PAYE permanently. As of today (2026-04-17) the 3-month buffer has closed — any consolidation application from now forward carries elevated risk of disbursement landing post-cutoff. That’s a concrete urgency mechanism for Loan Repayment Calculator and Financial Aid Quiz that wasn’t obvious from the high-level OBBBA summary. Mine the operational guidance under the headline numbers; the timing details are often the sharpest ad levers.
Agent stream-watchdog (600s no-progress) is a useful failure mode, not a lost session. Landers agent stalled at 600s without writing its structured JSONL or SUMMARY, but left 102 usable raw files on disk (20 affiliate landers + 9 schools × 6 pages + 11 SimilarWeb reports). Inline structuring during synthesis is higher-quality than a second agent pass because the synthesizer can tag-as-they-read and the taxonomy (compliance posture, desire family, tool-backed translation) stays internally consistent across all cases. Agent stalls are not failures when the raw data survives; they’re just phase-boundary signals that the synthesizer should pick up where the collector stopped.
Inference bugs hide in date math — always clamp-to-today on release-date estimates for in-progress cycles. FAFSA ingest initially emitted data_release_date: 2027-06-30 for the 2026-27 Q1 opening cycle — the Q7 end-of-cycle inference. A release date is definitionally in the past; future-dated release = bug. Caught on spot-check, fixed with a clamp-to-today guard. Pattern: any inference that could produce future dates for records about past events deserves an invariant check. Cost of clamp = 3 LOC; cost of future auditor finding future-dated authority-data = credibility hit.
Federal public-domain data is the cleanest license in the stack. 17 USC § 105 excludes all federal government works from copyright protection. No usage fees, no attribution required, no restrictions on derivative or commercial use. FSA FAFSA Application Volume + Demographics releases meet this bar. Every record we cite with federal source_url + retrieval_date is a regulator-defensible claim. This makes the authority-data-cache's primary_gov tier not just a taxonomy label — it's the legal foundation of the whole compliant-ads thesis.
Federal primary source beats aggregator visualization every time. Operator surfaced NCAN’s Bill DeBaun FAFSA Tracker (Tableau Public). Scout investigation confirmed NCAN aggregates from the same FSA .xls files we can pull directly. Skipping NCAN saves a Tableau-scraping project (accordions don’t expand in Firecrawl’s Playwright; agent-mode hallucinated values when blocked), avoids accredited_private tier dilution, and delivers cleaner upstream. Pattern: every time an aggregator appears promising, ask “what federal source is this built on” first.
Dependent vs Independent split is the adult-learner TAM at federal level. 2023-24 national demographics: 52% Independent applicants, 41.8% age 25+, 47% first-gen. Per-school pilot data confirms the concentration: WGU 93.6% independent, SNHU 92%, UoP 95%, Capella 98%, DeVry 97%, Strayer 98%. CampaignForge’s “online EDU = adult learner” thesis isn’t just a perspective — it’s now quantified with federal authoritative data. Biosphere narratives citing these numbers are regulator-armored.
Multi-cycle YoY trajectory unlocks narrative depth single-cycle data can’t. Three cycles of per-school Q1–Q7 data lets us say “SNHU processed 282,010 FAFSAs in the botched 2024-25 cycle (93% independent); recovering at 208,290 through Q3 of 2025-26; ASU already at 119,930 in Q1 of 2026-27 alone — half of full cycle 2024-25 in one quarter.” Regulator-defensible claim material for Stage 3 Copy Factory state × cycle-recency social-proof anchors. Single-cycle data is a table; multi-cycle data is a story.
Completion-time data quantitatively validates tool-discovery framing. 2023-24 demographics reports dependent-filer full-form completion at 50:58 minutes; independent-filer EZ form at 16:42. The friction we’ve been claiming our tools reduce isn’t a vibes argument — FSA itself publishes the number. Calculator reduces a 51-minute form to a 2-minute qualification check. Friction-reduction is federally quantified, not narrative.
DNS sinkhole pattern: read the resolved IP. Two VPN exits both failed to reach studentaid.gov. Looked like VPN blocking, retry loops, or HTTP/2 issues. Real answer surfaced from one command: host studentaid.gov returned 198.18.8.39 — an address in the reserved 198.18.0.0/15 benchmarking range (RFC 2544), never valid for a public site. VPN’s DNS resolver was sinkholing studentaid.gov specifically while passing bls.gov and data.gov through. Pattern: when a curl fails silently (no 403, no 5xx, just timeout), always check DNS resolution first. Reserved- or private-range IP returned for a public domain = sinkhole. Cheap diagnostic; saves an hour of chasing firewall hypotheses.
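The diagnostic is scriptable with the stdlib, so it can run as a pre-flight check before any bulk pull. A minimal sketch:

```python
import ipaddress
import socket

def resolves_to_public_ip(host: str) -> bool:
    """Sinkhole diagnostic: a public domain resolving to a private, reserved,
    or benchmarking range is a DNS sinkhole, not a server-side error."""
    ip = ipaddress.ip_address(socket.gethostbyname(host))
    return ip.is_global  # False for private, loopback, link-local, reserved ranges

# 198.18.8.39 sits in 198.18.0.0/15 (RFC 2544 benchmarking); ipaddress flags it:
print(ipaddress.ip_address("198.18.8.39").is_global)  # → False
```

Wiring `resolves_to_public_ip` in front of a scraping batch turns the "hour of firewall hypotheses" into a one-line log entry.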
Reusable tooling pays compounding dividends. Built scripts/convert-fafsa-xls-to-csv.py to handle one 11-file XLS batch from the operator; same script later converted a single demographics file, and will run unchanged on any future FSA batch of arbitrary size. Idempotent skip logic + multi-sheet auto-detection + engine selection (openpyxl vs xlrd) means it’s a “drop-and-go” interface for the operator. Each time I write infrastructure rather than one-shot code, the next batch is free.
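The shape of the script is worth recording. A hedged sketch of the converter's three load-bearing behaviors (naming and output convention are illustrative, not the actual scripts/convert-fafsa-xls-to-csv.py internals):

```python
from pathlib import Path

def pick_engine(path: Path) -> str:
    """Engine selection: legacy .xls needs xlrd; modern .xlsx needs openpyxl."""
    return "xlrd" if path.suffix.lower() == ".xls" else "openpyxl"

def convert_batch(src_dir: Path, out_dir: Path) -> int:
    """Drop-and-go interface: every sheet of every workbook in src_dir becomes
    <stem>__<sheet>.csv; existing outputs are skipped so re-runs are idempotent."""
    import pandas as pd  # deferred so pick_engine stays importable without pandas
    written = 0
    out_dir.mkdir(parents=True, exist_ok=True)
    for xls in sorted(src_dir.glob("*.xls*")):
        # sheet_name=None reads ALL sheets as {name: DataFrame} — the
        # multi-sheet auto-detection that makes arbitrary FSA batches free.
        sheets = pd.read_excel(xls, sheet_name=None, engine=pick_engine(xls))
        for name, frame in sheets.items():
            target = out_dir / f"{xls.stem}__{name}.csv"
            if target.exists():  # idempotent skip logic
                continue
            frame.to_csv(target, index=False)
            written += 1
    return written
```

Each of the three behaviors (engine pick, all-sheets read, skip-if-exists) is what separates infrastructure from one-shot code: the next batch runs unchanged.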
Adult-learner application concentration: 85% of non-freshmen list only 1 school. 2023-24 demographics detail: freshmen list 1 school 44% of the time, trailing to 4% for 10-school filers — classic shopping distribution. Non-freshmen collapse that entirely: 85% list only 1 school. Adult returners aren’t comparison-shopping; they’ve pre-decided. Copy implication: adult-learner ads should be school-specific, not “compare your options” framing. Different psychology from first-time freshmen filers; need different angle generation paths per persona.
Parallel orchestration works cleanly when output paths don’t collide. Four research agents dispatched from a single orchestrator turn — voc-collector, school-data-puller, bls-extender, state-aid-resolver — each writing to a distinct subdirectory tree. Zero file collisions, zero coordination overhead, zero need to consult each other’s in-flight state. Wall-clock compressed from ~4h serial estimate to ~30min (longest pole). The discipline is upfront: confirm writes land in non-overlapping paths before launching, not after. Can’t parallelize when agents need to share a stateful file or each other’s intermediate conclusions; can parallelize when each agent’s output stands alone.
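The upfront discipline is cheap to automate. A sketch of a pre-launch collision check (function name illustrative):

```python
from pathlib import Path

def assert_disjoint_output_trees(roots: list[Path]) -> None:
    """Pre-launch guard: no agent's output root may equal or contain another's.
    Run BEFORE dispatching parallel agents, not after files collide."""
    resolved = [r.resolve() for r in roots]
    for i, a in enumerate(resolved):
        for b in resolved[i + 1:]:
            if a == b or a in b.parents or b in a.parents:
                raise ValueError(f"output trees overlap: {a} vs {b}")

# The four-agent dispatch passes: four disjoint subtrees under one parent.
assert_disjoint_output_trees(
    [Path("out/voc"), Path("out/schools"), Path("out/bls"), Path("out/state-aid")]
)
```

A shared parent directory is fine (siblings don't collide); only equality or nesting trips the guard, which is exactly the condition under which agents would overwrite or interleave each other's files.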
Self-contained briefs beat shared-context agents. Each agent brief included: project context files to read (absolute paths), exact task scope, output schema, guardrails (compliance + anti-hallucination + firecrawl-only), success checklist, report format + word cap. Agents came back with tight, actionable reports rather than rambling transcripts. Pattern reuses: if you can write the brief as if the agent just walked into the room cold, you don’t need to babysit.
Pell-refund discovery provokes fear before delight. VoC Reddit + Trustpilot signal that first-time Pell recipients default to “I’ll get in trouble” + tax-penalty anxiety, not windfall delight. Our current EFC Calculator output flow leads with the dollar amount then context. The real sequence should be permission-slip first (“this is legitimately yours”) then specificity (“here’s $7,395”). This inverts a framing assumption baked into Phase 2 tool specs. Flagged for Phase 6 lander archetype re-work.
Peer-insider-knowledge is the primary Reddit conversion mechanism. Reddit buyers convert each other not through copy but by surfacing concrete administrative mechanisms — unusual circumstances override, 90-day SAVE recertification, §127 structural requirements, SAP appeal pathways. The tools CampaignForge builds need to package peer-insider quality natively, not just present math. That means output screens that sound like “here’s the specific administrative lever you can pull” rather than “here’s your calculated number.” Phase 3 tool blueprint implication.
Middle-income squeeze is a distinct adult-learner identity. Not a sub-segment of traditional undergrad — a named persona. The narrative: parents “made too much to qualify” for need-based aid but had no savings → student dropped out → returned 10 years later. Our 8 desire families (F1–F8) partially address this but don’t name it. Candidate F9 or cross-cutting persona in Phase 2 reality_map.json.
DEMO_KEY rate limits don’t match the docs. api.data.gov DEMO_KEY documented at 30 req/hour, actual behavior 10 req/hour. Single batched 10-school Scorecard query exhausted budget; subsequent calls returned HTTP 429 retry-after 12096s (3.3h). Operator-key path is unblocking and should be the default for any production data pull; DEMO_KEY is genuinely demo-only, not “light production.” Document upfront. 5 minutes of operator reCAPTCHA saves 3.3h of wait.
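"Document upfront" can be a fail-fast guard rather than a doc note. A sketch of the key-selection rule (env var name is an assumption, not the project's actual config):

```python
import os

def api_key_or_fail() -> str:
    """Operator key is the default for any production pull; DEMO_KEY's real
    budget (~10 req/hour observed) dies on a single batched query, and a 429
    can carry a retry-after measured in hours. Never fall back silently."""
    key = os.environ.get("DATA_GOV_API_KEY")  # hypothetical variable name
    if not key:
        raise RuntimeError(
            "set DATA_GOV_API_KEY; DEMO_KEY budgets won't survive a batch pull"
        )
    return key
```

Failing at startup costs the operator 5 minutes of reCAPTCHA; failing mid-batch costs the 3.3-hour retry-after window.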
Government agency web is deteriorating; Firecrawl catches it, PDFs survive it. MT primary URL 404’d (tourism content returned); OR redesigned its entire student-aid portal; UT USHE site Wordfence-503-blocks all scraping; CT statute-database URL replaced by knowledge-base article; WY admin URL is the administering-institution’s (UW-SFA) page, not the state’s. Seven of the 14 re-pulled records required URL replacement. The authority-data-cache “source_url” discipline is load-bearing — when these URLs drift, our claims stop being traceable. For UT specifically: legislative appropriations + USHE PDF downloads survive Wordfence; HTML scraping does not. Next-pass planning needs to internalize that PDF + API survive where HTML decays.
Port verification is a one-time debt that prevents an infinite-sized future debt. Before this session, ~4,500 LOC of production JS had been reimplemented in TypeScript with only 3 trivial unit tests covering one helper function. No automated check existed that the TS produced identical outputs to the JS running on degreesources.com. Ship any design change or dep upgrade on top of that, and a silent formula drift would go undetected into production. 291 parity tests later, the drift window is closed — every tool's calculation branch has golden cases, and a regression fails the suite in milliseconds. The cost was one session. The cost of finding out about drift from a user's email about "wrong EFC number" is unbounded.
Diff line-by-line before writing tests, not after. For each tool the workflow was: (1) read source JS calculation block, (2) read port .logic.ts, (3) diff constants and formulas mentally, (4) only then write tests. Finding a match lets the tests lock in known-good behavior. Finding a mismatch would have meant fixing the port before writing the test — but none appeared, which is itself a finding: the port author stayed faithful to the source. Parity tests that pass first-run are not anti-climactic; they're the goal.
"Tools must work just like the current site" is more testable than it sounds. The operator's stated requirement reads like a vague UX promise. But tool behavior = pure functions from inputs to outputs, which is the most testable thing in software. Once restated as "for every documented input combination, TS output equals JS output," it becomes a bounded, finite verification task. Always convert fuzzy "just work like" requirements into explicit input/output golden-case assertions.
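The restated requirement maps directly onto a golden-case harness. A hedged sketch (the calculation function and numbers below are illustrative stand-ins, not the real EFC formula or the repo's test layout):

```python
# Input combinations captured from the live JS tool; expected outputs recorded
# verbatim from its behavior, never recomputed by hand.
GOLDEN_CASES = [
    ({"agi": 45_000, "family_size": 3, "in_college": 1}, 2_150),
    ({"agi": 30_000, "family_size": 4, "in_college": 2}, 0),
]

def ported_calc(agi: int, family_size: int, in_college: int) -> int:
    """Stand-in for the ported .logic.ts function under test; a placeholder
    lookup so this sketch executes."""
    return {45_000: 2_150, 30_000: 0}[agi]

def check_parity() -> None:
    for inputs, expected in GOLDEN_CASES:
        got = ported_calc(**inputs)
        assert got == expected, f"{inputs} -> {got}, golden says {expected}"

check_parity()  # silent pass = drift window closed for these cases
```

The key design choice is that golden outputs come from the known-good implementation, not from re-deriving the formula: the suite asserts "same behavior", which is the operator's actual requirement.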
Session-end script: git add -u silently skips untracked new files; -A is the fix. Session #20 commit missed Deliverable B artifacts because -u only stages modified tracked files. Switched to -A; safety is preserved because the per-repo confirm step shows git status --short (including ?? entries) before staging. Also caught ContentForge branch drift (main not master) in the config. Tooling bugs compound across sessions; fix them when you notice them, not later.
A 2026-current biosphere study is the compliance-angle-map's external-environment twin. Deliverable A mapped the compliance-restriction list to psychological-desire translations; Deliverable C maps the 12 major 2026 market forces (enrollment cliff, AI displacement, layoff waves, SAVE end, FAFSA recovery, Gen-Z skepticism, OBBBA, Workforce Pell, §127 permanence, Meta Ad Library, platform benchmarks) to what’s converting RIGHT NOW. Five forces structurally favor our tool-backed-proof thesis (not coincidentally — the pipeline was designed for this environment). The enrollment cliff isn’t a problem to solve; it’s the rationale for the vertical-shifting strategy. The wage-premium plateau is the single most important datapoint because it empirically validates Gen-Z’s skepticism — so ROI Calculator's honest-verdict framing becomes a trust-building asset, not a conversion liability. Competitors still selling “guaranteed ROI” are fighting 2020 research with 2020 claims in a 2026 market.
One Firecrawl batch keeps paying dividends if the cache layer is schema-disciplined. Session #20 emitted 88 Firecrawl scrape files into .firecrawl/gate3-b/. Session #21 emitted 37 additional state-aid records, 8 BLS SOCs, 5 IPEDS pilots, and source-cited all 12 biosphere-study sections — zero new Firecrawl spend, because Session #20’s scrapes already contained the authoritative content. Every emitted record carries schema_version + authority_tier + cache_refresh_date; 14 [VERIFY] flags are queued where specific 2026-27 dollar amounts need re-pull. The cache doesn’t need to be perfect at seed; it needs to be correctly structured and honestly flagged.
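The record discipline can be sketched as a small structure (the dataclass shape is illustrative, not the shipped schema; field names follow the notes above):

```python
# Sketch of a schema-disciplined cache record: versioned, tiered, dated,
# and honestly flagged. Illustrative only — not the production schema.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class CacheRecord:
    schema_version: str
    authority_tier: str                 # e.g. "primary_gov"
    cache_refresh_date: date
    payload: dict = field(default_factory=dict)
    verify_flags: list[str] = field(default_factory=list)  # [VERIFY] queue

    def is_stale(self, max_age_days: int = 365) -> bool:
        """Honestly flagged beats silently stale: staleness is computable."""
        return date.today() - self.cache_refresh_date > timedelta(days=max_age_days)

rec = CacheRecord("1.0.0", "primary_gov", date(2026, 4, 1),
                  verify_flags=["[VERIFY] 2026-27 dollar amount needs re-pull"])
```

Because staleness and verification state live on the record, a downstream consumer can halt on flagged data instead of silently reusing it.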
“Full 50-state coverage” and “full 10-school pilot” are orthogonal scope questions. Operator C6 lock requires 50-state coverage at Phase 2 seed (agent-driven Copy Factory per ADR 0009 has no phased-copywriter constraint). That’s binary — 50 states covered or not. IPEDS pilot is different: 10 schools is a partner-negotiation scope that can stretch across sessions without blocking Phase 3. Solution: ship 50-state primary-program coverage this session (achieved); ship 5 of 10 IPEDS pilots with [PENDING_API_PULL] markers for Scorecard metrics; queue remaining 5 + Scorecard API pull for next session. Distinguish “coverage mandate” from “depth target” up front or the scope swims.
Research sprints compound when batched in parallel with high-yield searches. 88 authoritative pages scraped across 6 parallel Firecrawl batches (federal, state × 7 sub-batches, military+tax+employer, niche-private, ranking-data APIs, verify-pass). Each firecrawl search --scrape returns multiple full authoritative pages in one call; batching them in background frees main context for structured-data emission. The real throughput multiplier isn't "scrape more" — it's "scrape once, emit structured records that feed every downstream deliverable." B.1 (75 records), B.2 (10 floor/multiplier stacks), B.3 (10 methodologies), B.4 (53 cache records) all sourced from the same 88 scrapes.
Truth is the moat, but only if every claim is resolvable to its source. Every record in B.1 + B.4 carries source_url + retrieval_date + recency_confidence + authority_tier. Any claim that couldn't be confirmed from Firecrawl content shipped with a [VERIFY] tag and operator-verification-pass queue — never a speculative value. Operator verify pass resolved all 6 flagged items via targeted Firecrawl scrapes (loan rates 6.39%/7.94%/8.94%, HHS 2026 poverty guidelines, Ch 30 MGIB-AD $2,518, §127 student-loan-payment made PERMANENT by OBBBA). Zero [VERIFY] markers remain in final shipped state.
Aggregator sources excluded by construction, not by review. authority_tier enum hard-codes {primary_gov, secondary_gov, accredited_private} — third_party_aggregator isn't a value that ships. This isn't a taste preference; it's regulator-defense architecture. When a challenge comes (Meta account review, FTC inquiry, competitor legal), every claim in copy traces to a cache record that traces to a primary-gov URL with release date. 54 primary_gov + 19 accredited_private + 2 secondary_gov (reciprocity compacts), 0 aggregators across 75 records.
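"Excluded by construction" can be sketched as an enum with no aggregator member (tier values are from the notes; the Python wrapper itself is illustrative):

```python
# The tier enum simply has no aggregator member, so no record can carry one:
# construction fails loudly instead of review catching it later.
from enum import Enum

class AuthorityTier(Enum):
    PRIMARY_GOV = "primary_gov"
    SECONDARY_GOV = "secondary_gov"
    ACCREDITED_PRIVATE = "accredited_private"
    # no THIRD_PARTY_AGGREGATOR — the value cannot exist, so it cannot ship

def parse_tier(raw: str) -> AuthorityTier:
    """Raises ValueError for any non-member string, e.g. an aggregator tier."""
    return AuthorityTier(raw)

print(parse_tier("primary_gov"))  # → AuthorityTier.PRIMARY_GOV
```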
Institution-first rankings compound; program-level is Phase 8 expansion. Ship 10 institution-level proprietary rankings immediately from Scorecard + IPEDS data (operator-locked). Program-level (4-digit CIP × institution joins) is genuinely higher-moat but requires more data engineering — Phase 8 scope. Three methodologies (Best Veteran-Friendly, Best GI Bill Value Max, Best Employer Partner Schools) are institution-level by nature and stay that way permanently. The other seven carry phase_8_expansion_target: "institution_and_program" so the Phase 5 agent knows what's coming.
Floor-to-multiplier ratios quantify the moat numerically. 10 tools × per-tool floor anchor vs. tool-proven multiplier: ratios range 2x (Financial Aid Quiz "FAFSA" → 3–5 aid categories) to 40x+ (Scholarship Finder "unclaimed billions" → $60K specific). The big numbers aren't marketing — they're what the math produces when you stack compliant resources for a qualifying profile. Ad copy uses typical case; tool output shows the user's specific number. Competitors can't follow because they don't have the tool to produce the math.
OBBBA changed §127 from temporary to permanent. The One Big Beautiful Bill Act made the student-loan-repayment inclusion in Section 127 educational assistance permanent (previously scheduled to expire 12/31/2025). This is load-bearing for the employer-benefit angle — no more sunset-clock framing on the $5,250 employer loan-repayment benefit. Every Employer Tuition Checker output can now cite permanence. Find the legislative changes that quietly enable new angles; they're the research edge.
Sync discipline: ClickUp is primary, not just STATUS/SESSION-LOG. Operator reminded at session #20 close — tracker sync at session end must include ClickUp (per ADR 0006) when tracker-relevant work happened. Saved as feedback_clickup_sync.md in memory. If STATUS.md moves forward and ClickUp doesn't, the operator-facing view diverges from the working view.
Research-sprint deliverables are agent inputs, not copywriter briefs. The breakthrough reframe this session: Stage 3 Copy Factory is a skill-based Claude agent, not a human-copywriter production line. Every schema field, every authority-data-cache record, every compliance rule, every tool-multiplier story becomes a parameter the agent reads at generation time. This inverts the constraint model — bandwidth is no longer the bottleneck, research-input rigor is. Moat is two-layered: research quality competitors can't match AND agent throughput competitors can't match at human-copywriter pace. ADR 0009 locks this thesis in.
Full 50-state coverage is sustainable because production is agent-driven. Initially scoped state coverage as phased copywriter depth (top-15 deep at launch, remaining 35 in Phase 5). Operator flagged the agent-driven nature of Copy Factory — all phased gating dissolved. authority-data-cache/state-aid/ covers all 50 states at Phase 2 seed; agent generates state × angle × platform × format matrix at every campaign run. Constraint shifts from hours to authority-data-cache completeness.
Channel the persona deliberately for judgment calls. When operator asked "answer from your 15-year perspective" on Gate 2 decisions, re-reading the v2 persona and deliberately channeling it produced substantively different answers than the default — categorical rather than hedge-y, strict rather than permissive (e.g., "universal skip YouTube Bumper when disclaimer won't fit", "strict 30-conv PMax bar, no lean seed", "PLUS-heavy red flag is factual"). The re-load triggers in CLAUDE.md aren't just cost-savings, they're judgment-quality gates.
Typical-case in ad copy, upper-bound as tool output. Distilled media-buyer discipline on multiplier framing: ad copy references the typical case ("most adults see $12–20K combined aid"); upper-bound numbers ($30K+, $65K/yr) only appear as tool output for users whose inputs produce them. Mixing the two in ad copy is the aggressive-affiliate pattern that burns accounts AND creates downstream lead-quality disasters (users expecting $30K, receiving $12K, churning through enrollment funnel, damaging school-partner relationships).
Compliance-and-CTR simultaneity is tool-discovery framing's signature. TikTok's restricted-industry scrutiny is aggressive for aid-claim copy. But tool-discovery framing ("take this 2-minute quiz to see what you qualify for") is BOTH the safer compliance posture AND the higher-CTR framing. Ad carries no standalone claim; tool on the lander carries substantiation. The moat pattern — tools-that-prove-claims — isn't just a moat, it's also the copywriting pattern that converts across every platform.
Plugin hooks can accumulate hidden tax. claude-mem's PreToolUse:Read hook was truncating file reads to line 1 when prior observations existed — a token-saving optimization that inverted into a capability regression for deep synthesis work (persona re-loads, spec reviews, multi-file research). Removed only that one hook (via jq 'del(.hooks.PreToolUse)'); kept every other claude-mem feature. Idempotent re-apply script because plugin updates will overwrite the patch. Lesson: "capability regression" can live inside what looks like a working plugin.
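A Python equivalent of the jq patch, written to be idempotent so re-applying after a plugin update can't double-break anything (the file-path handling is a sketch; point it at the real plugin settings file):

```python
# Equivalent of jq 'del(.hooks.PreToolUse)', safe to re-apply any number of
# times. Only the one offending hook is removed; everything else is preserved.
import json
from pathlib import Path

def strip_pretooluse_hook(settings: dict) -> dict:
    """Remove only hooks.PreToolUse; leave every other feature intact."""
    hooks = settings.get("hooks")
    if isinstance(hooks, dict):
        hooks.pop("PreToolUse", None)  # no-op if already removed (idempotent)
    return settings

def patch_file(path: Path) -> None:
    """Load, patch, and rewrite a settings JSON file in place."""
    settings = json.loads(path.read_text())
    path.write_text(json.dumps(strip_pretooluse_hook(settings), indent=2))

cfg = {"hooks": {"PreToolUse": ["truncate-reads"], "PostToolUse": ["ok"]}}
# Applying twice proves idempotence.
print(strip_pretooluse_hook(strip_pretooluse_hook(cfg)))
# → {'hooks': {'PostToolUse': ['ok']}}
```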
Operator's compounding-moat framing re-centers the work. The directive to "accumulate factual, true data across all possibilities that truly can benefit others, then condense that into useful, easy-to-use tools that output truthful quality results with a path for every consumer" is the project thesis in one sentence. Every research gap becomes a ceiling on output quality; every layer of rigor becomes throughput at output. Keep this framing front of mind for every future agent — research is not overhead, it is the moat made tangible.
Cloudflare Access cannot pre-gate a Pages custom domain during initial SSL provisioning. The ACME HTTP-01 challenge to /.well-known/acme-challenge/* is intercepted by the Access gate, blocking Google CA cert issuance. Workaround: delete the Access app → let cert issue (~15s once unblocked) → recreate the Access app. Brief public window is acceptable when the URL isn't advertised yet. Order-of-operations trap — gate pre-seeding doesn't work end-to-end for brand-new hostnames.
Pages.dev hostnames can't have direct Access apps. They belong to Cloudflare's shared zone owned by Cloudflare itself, not your account. Error 12130 "domain does not belong to zone". Always bind a custom subdomain from a zone in the same account as the Pages project, then gate the custom subdomain. The raw .pages.dev URL stays public and needs a separate noindex mechanism.
GitHub repo is locked to one CF account at a time. Disconnecting the CF Pages GitHub App from a repo does NOT clear CF's internal account-repo binding — only deleting the Pages project in the other account fully frees the repo. Error 8000093 means: "delete the conflicting project." Don't try to disconnect the GitHub App as a shortcut; check which account currently owns the binding and do cleanup there first.
CF doesn't support account merging. Manual migration: zones move cleanly via "Move Domain" (preserves all DNS records, no nameserver change); Pages/Workers/R2/KV/D1 must be recreated in target (destructive, lose history). Pick master by which account holds the most infrastructure, not the most domains. Moving a zone that has a bound Pages custom domain BREAKS that binding until the Pages project is re-homed to match — coordinate those two migrations together.
Canonical tag is advisory, not authoritative. Google may still crawl and index .pages.dev URLs despite <link rel="canonical"> pointing to production. Host-scoped X-Robots-Tag: noindex via Pages public/_headers is the authoritative signal. Don't rely on canonical alone for indexing control.
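A minimal `public/_headers` sketch of the host-scoped signal (the `:project` placeholder follows Cloudflare Pages' `_headers` URL-matching syntax; the exact hostname pattern is an assumption to adapt per project):

```
# public/_headers — authoritative noindex for the raw .pages.dev hostname only;
# the production custom domain gets no such header.
https://:project.pages.dev/*
  X-Robots-Tag: noindex
```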
DNS on a prod zone is safe for new subdomains — but verify no wildcards first. Adding a new subdomain CNAME is isolated from root/www/MX records. But if the zone has a wildcard CNAME (*.example.com), an explicit new record overrides the wildcard for that host — could surprise downstream systems. Always grep DNS list for wildcards before committing new records to a prod zone.
After session compaction, restate the intended next action before producing substantive artifacts. Session #18 opened with the system-start context pointing at a Phase 2 Session 2 bootstrap prompt; agent assumed that meant "execute now" and produced a 260-line canonical tool spec. Operator was mid-Cloudflare-work from the prior session and confused by the pivot. Better pattern: after compact, briefly surface the ambiguous thread ("X was queued before compact, Y was just referenced — which one?") and await explicit confirmation before producing substantive artifacts. The work wasn't wasted (the EFC spec becomes the template for remaining 9), but the redirect cost a full round-trip of context.
Compliance is an angle-generation input, not a guardrail. The restricted-claims list is literally the cheat sheet for what converts — each restriction exists because the underlying psychological desire converts hard. Map every restriction to its desire family, the Layer-1 triggers that deliver that desire, and the tool-backed compliant framing that lands harder than the non-compliant original.
Disclaimers are conditional on ad-claim content, not platform/format. Tool-discovery framing ("See what you could qualify for") makes no standalone claim — the tool + 2–3K-word article lander carries all required disclosures with cited sources and freshness dates. In-ad disclaimers only trigger when the ad itself makes a specific claim (dollar amount, named government program, income claim). Tool-discovery framing is simultaneously the best-CTR AND most-compliant framing — not a coincidence.
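The conditional can be sketched as a claim-content check (the detection patterns are illustrative, not the production scan list):

```python
# Disclaimer trigger sketch: the ruling keys off ad-claim CONTENT, never
# platform or format. Patterns below are illustrative examples only.
import re

SPECIFIC_CLAIM_PATTERNS = [
    r"\$\d",                             # a dollar amount
    r"\b(FAFSA|Pell Grant|GI Bill)\b",   # a named government program
    r"\bincome\b",                       # an income claim
]

def needs_in_ad_disclaimer(ad_copy: str) -> bool:
    """True only when the ad itself makes a specific claim."""
    return any(re.search(p, ad_copy, re.IGNORECASE)
               for p in SPECIFIC_CLAIM_PATTERNS)

print(needs_in_ad_disclaimer("See what you could qualify for"))  # → False
print(needs_in_ad_disclaimer("Get $5,250 from your employer"))   # → True
```

Tool-discovery copy passes cleanly because it makes no standalone claim; the lander carries the substantiation.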
Proprietary rankings from public gov data beat a third-party whitelist. College Scorecard + IPEDS + BLS + VA data let us build our own defensible ranking system with full methodology transparency. No usage-rights gating, no publication bias, no expiration. Essentially "US News for online education" built from public gov data — an angle space competitors can't replicate.
Pattern-detect meta-specs. Three distinct questions (IPEDS cache, proprietary rankings, BLS wage refresh) had the same structural answer: authority-tier data cache with freshness tracking, cron-refreshed, read by Phase 5 with automated staleness halts. Consolidating into one authority-data-cache infrastructure spec beats three separate specs that repeat the same logic.
Persona re-load has empirical basis. Not superstition — output quality degrades every 4–5 turns as attention weight on the persona decays relative to accumulated context. Re-reading restores signal strength via token recency + repetition. CLAUDE.md now codifies: reload on strategic triggers + every ~5 turns in long sessions + every natural phase boundary. NOT every turn (wasteful, dilutes attention on actual work).
Tool-discovery framing is the workflow cheat-code. The same pattern that solves compliance (route substantiation to lander) also solves CTR (no in-ad disclaimer drag) and lead quality (friction-as-feature qualifies users). Three optimization goals aligned in one architectural choice.
Diagnose before adding. Session surfaced that Firecrawl skill wasn't firing, continuous-learning-v2 looked dormant, session-end was manual. Diagnosis revealed: Firecrawl was CLI-skill-vs-MCP-mismatch (fixable by rewiring deep-research); continuous-learning-v2 had 7.7MB of observations accumulated but observer.enabled: false was blocking the analysis phase (one config flag fix); session-end was genuinely manual (built scripts/session-end.sh safe multi-repo helper). Don't install — diagnose first. Most "missing" capabilities are broken capabilities.
Agents weight early rules more heavily: Adding HARD RULES blocks at the top of each skill file puts non-negotiable constraints in the "hot zone" of agent attention. Rules buried mid-document get diluted by accumulated context.
Decision trees > prose guidance: Converting "consider X when Y" into "IF X THEN Y, OTHERWISE Z" produces more reliable agent behavior. Agents follow explicit branches; they interpret flexible guidance flexibly (which means inconsistently).
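The IF/THEN shape, sketched with one ruling from this project (the function name is illustrative; the 30-conversion bar is the Gate 2 ruling noted earlier):

```python
# Decision-tree shape: an explicit branch an agent follows mechanically,
# versus prose it would interpret. Function name is illustrative.
def pmax_gate(conversions_last_30d: int) -> str:
    # IF ≥30 attributed conversions THEN enable PMax, OTHERWISE keep seeding.
    # No "lean seed" middle case — flexible guidance gets interpreted flexibly.
    return "enable_pmax" if conversions_last_30d >= 30 else "keep_manual_seeding"

print(pmax_gate(29))  # → keep_manual_seeding
```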
Anti-patterns are as powerful as patterns: Showing agents what NOT to produce (with concrete BAD examples and WHY explanations) creates hard boundaries. Without anti-patterns, agents gravitate toward "safe" generic output that passes no rules but also creates no value.
Subagent permissions are session-scoped, not inherited: Adding permissions to settings.local.json or project settings doesn't reliably propagate to subagents in don't-ask mode. Python/Bash fallback for file writes is the workaround. Some agents succeed, some don't — the behavior is inconsistent and needs investigation.
Sweet spot is 3-4 parallel agents: Beyond 4, prompt quality drops and merge review gets sloppy. The real constraint is file isolation, not compute. With 3 repos, the ceiling is ~5 agents if file boundaries are clean.
Feature branches prevent same-repo conflicts: Agents C and D in campaignforge-app worked on separate branches (feat/cost-session-manager, feat/pipeline-orchestrator), then merged sequentially. Zero conflicts.
Independent agents can converge on identical fixes: Both the a11y and Lighthouse agents independently identified and fixed the same CSS cascade issue (@layer base wrapping, :where() scoping) with byte-identical diffs. Merge was clean because Git detected identical changes.
CAPI Worker type errors are IDE-only: Cloudflare Workers have their own tsconfig with @cloudflare/workers-types. The IDE picks up the root tsconfig which doesn't know about Request/Response/fetch globals. Not real errors.
MDX body_html requires JSX-safe formatting: Raw HTML from JSON has nested block elements on same lines, bare <br> tags, and ~ chars parsed as strikethrough. The conversion script needed multi-pass formatting: self-close void tags, escape tildes to \~, and split every block element onto its own line.
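A sketch of that multi-pass conversion (the regexes, tag lists, and pass order are illustrative; the real script handles more edge cases):

```python
# Three-pass JSX-safe cleanup sketch for raw HTML headed into MDX.
import re

VOID_TAGS = "br|hr|img"
BLOCK_TAGS = "table|thead|tbody|tr|td|th|ul|ol|li|p|div|blockquote"

def jsx_safe(html: str) -> str:
    # Pass 1: self-close bare void tags (<br> → <br />) so JSX can parse them.
    html = re.sub(rf"<({VOID_TAGS})\b(\s[^/>]*)?>", r"<\1\2 />", html)
    # Pass 2: escape tildes so MDX doesn't parse them as strikethrough.
    html = html.replace("~", r"\~")
    # Pass 3: split every block element onto its own line.
    html = re.sub(rf"(</?(?:{BLOCK_TAGS})\b[^>]*>)", r"\n\1\n", html)
    # Collapse the newline runs the splitting introduced.
    return re.sub(r"\n{2,}", "\n", html).strip()

print(jsx_safe("<p>a<br>b ~5K</p>"))
```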
Astro dynamic components need static imports: client:visible hydration fails with dynamic component references (NoMatchingImport). Must use conditional static rendering: {tc === 'EFCCalculator' && <EFCCalculator client:visible />}.
Parallel agents for tool implementations: 4 agents dispatched simultaneously, each handling 2-3 tools. All completed in ~17 minutes. Logic/data separation (*.logic.ts, *.data.ts) enabled clean parallelization with zero merge conflicts.
Astro 5 Content Layer API: entry.render() is gone. Use import { render } from 'astro:content' then render(entry). Content collections need glob() loader from astro/loaders.
shadcn-svelte + Tailwind v4: @apply border-border fails — Tailwind v4 doesn't know custom vars via @apply. Use @theme inline to declare all HSL color vars, then use raw CSS instead of @apply for base styles.
shadcn-svelte components need WithElementRef: The utils module that exports cn must also export the WithElementRef and WithoutChildrenOrChild types for sidebar/rail components.
MDX in Content Collections: HTML (tables, callouts, step-lists) works inline in MDX. No need to convert to custom components yet — the prose CSS styles handle it.
Astro 5 Content Collections: Uses src/content.config.ts (not src/content/config.ts). The z import from astro:content shows deprecation warnings.
Tailwind v4: No tailwind.config.ts needed. Uses @tailwindcss/vite plugin. The @astrojs/tailwind integration conflicts — use the Vite plugin directly.
Remote agents failed: Overnight scaffold agents couldn't auth with GitHub. Local execution worked first try.
Session flow: OQ walkthrough anchored on CampaignForge-specs/campaign-logic-chain.html → 2 parallel non-overlapping workers → command-seat audit + merge authorization. Operator surfaced campus-vs-online modality insight during briefing: ADR 0013's direct_school conflated two regimes (online direct SNHU/Purdue Global = paid-affiliate-eligible today; campus direct TX State/Harvard = organic brand-play with 18-24 month partnership-play horizon). Command-seat proposed 3 pre-walkthrough pivots accepted by operator: (1) hybrid_dynamic is a routing strategy not a peer path; (2) school attributes get their own shared sub-schema; (3) campus redirect is measurable brand investment. Locked 2 renames: organic_state_school_redirect→organic_campus_school_redirect; hybrid enum value→hybrid_dynamic. Walked 10 OQs: 9 ACCEPT-with-specifics + 1 MODIFY (OQ-7 command-seat pushed back on page's null-lander_mode approach; added 4th content_site_article_direct value + enforced 6×4 compatibility matrix). Authored docs/adr/ADR-0018-OQ-VERDICTS.md scoping grid + PHASE-5-6-CAMPAIGN-RETRO-REGEN-SCOPE.md scope spec. Spawned 2 parallel workers.
Worker A (ADR 0018 authoring): branch claude/elated-liskov-58d6f0 (commit e2642ef), merged bce6b9d. 3 files: 962-line ADR + README index + 0013 supersession header. Command-seat audit caught scope-statement drift ("8 schema bumps" vs correct 9) at scope-understanding checkpoint; correction injected before draft began. 6 alternatives considered (scope required 4+). All 10 OQ rulings mapped (44 references); 3 pivots present (20 references); halt-marker inventory exact match (7 tier-1 + 1 tier-2).
Worker B (Phase 5.6 campaign-retro SKILL regen): branch claude/heuristic-bhabha-45e393 (commit bed6fc2), merged ea5bd5f. 24 files: 1372-line SKILL regen + 3 fixture scenarios (20 files) + 447-line pytest (52 pass + 4 intentional skips) + PHASE-5-6-DoD.md + scope doc copy. 12 HARD RULES (R1-R12) with symmetry to strategy-engine; persona v2 five-signal hierarchy encoded as rule_7c weighting (composite formula 1.0×s1 + 0.8×s2 + 0.3×s3 + 0.5×s4 + 0.8×s5 with signal-5 tiebreaker); 11-step processing algorithm. Worker B caught a command-seat scope-doc bug: my spec conflated stage_r_emission_reason (schema-frozen 6-value enum per v0.4.1) with retro verdict taxonomy. Worker proposed separate retro_verdict field; command-seat endorsed + updated scope doc mid-flight. Negative-test test_populated_readbacks_fail_validation_without_tool_id proves allOf/if/then conditional enforcement is live.
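The rule_7c composite, sketched numerically (the weights and the signal-5 tiebreaker are from the notes; which concrete metric maps to each sN follows the persona-v2 five-signal hierarchy and is assumed here):

```python
# rule_7c composite sketch: score = 1.0·s1 + 0.8·s2 + 0.3·s3 + 0.5·s4 + 0.8·s5,
# with signal 5 breaking ties. Signal ordering is assumed per persona v2.
WEIGHTS = (1.0, 0.8, 0.3, 0.5, 0.8)

def composite(signals: tuple) -> tuple:
    score = round(sum(w * s for w, s in zip(WEIGHTS, signals)), 9)
    # Returning (score, s5) lets a plain max() apply the signal-5 tiebreaker.
    return (score, signals[4])

a = (1.0, 0.0, 0.0, 0.0, 0.5)   # weighted score 1.4
b = (0.6, 0.0, 0.0, 0.0, 1.0)   # weighted score 1.4 — tie broken by s5
print(max([a, b], key=composite))  # → (0.6, 0.0, 0.0, 0.0, 1.0)
```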
Specs cleanup side-quest (morning spec-repo worker): cherry-picked campaign-logic-chain.html from feature branch to specs master (commit 630b841) + deferred premature Session Log + Lessons Learned entries. Live at specs.fourthright.io/campaign-logic-chain.html. Cloudflare Access gate blocked curl; operator browser-verified.
Cross-cutting outputs: 9 schema bumps catalogued (4 breaking major + 2 minor additive + 3 new at v0.1.0); 8 halt markers (7 tier-1 + 1 tier-2); 8 downstream-skills amendment matrix with dependency ordering; Worker 3.5-C expanded scope (10 tasks, 2-3 hrs).
Files touched: campaign-forge (0018 ADR + OQ-VERDICTS scoping grid + 0013 supersession + README + PHASE-5-6-SCOPE + PHASE-5-6-DoD + campaign-retro SKILL regen + 3 fixture scenarios + pytest + STATUS + PE-STATUS + SESSION-LOG) + CampaignForge-specs (this entry + Lessons Learned Session 41 + campaign-logic-chain.html via side-quest) + 2 new durable memories (OQ walkthrough pattern + Cloudflare Access gotcha). ADR 0018 Accepted + Phase 5.6 SHIPPED; Worker 3.5-C UNBLOCKED.
Copy-factory Step 2.5 rewired to read the canonical compliance-angle-map.md (Parts I+II) + per-cell compliance_hard_blocks[] via cell_id FK; silent no-op path eliminated; tier-1 non-overridable on unresolved cell_id / Part III.5 leakage; merge 7b67f92. Worker 3.5-A winners_vault_readback.schema.json v0.4.0→v0.4.1 — added allOf/if/then conditional enforcement so tool_id + tool_version are schema-REQUIRED when stage_r_emission_reason == "populated" (v0.4.0 described the constraint in text only; v0.4.1 makes it enforceable); mirrors strategy.schema.json:558-574 pattern; merge c6dd46f. Worker 3.5-B trigger library 134→139 — 5 new tier-2 triggers for S6 veteran + S7 employer §127 cohorts, all VoC-anchored to real corpus quotes (voc_reddit_college_00036/37/38 + voc_reddit_studentloans_00021-24); 6th proposed trigger (§127_spouse_stacking_household_multiplier) halted for insufficient practitioner VoC — queued in PROPOSED-TRIGGERS.md with 3 unblock paths; merge f487de9.
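The v0.4.1 enforcement pattern, sketched as a JSON Schema fragment (field names are from the notes; this shows the shape of the allOf/if/then conditional, not the shipped schema file):

```json
{
  "allOf": [
    {
      "if": {
        "properties": { "stage_r_emission_reason": { "const": "populated" } },
        "required": ["stage_r_emission_reason"]
      },
      "then": { "required": ["tool_id", "tool_version"] }
    }
  ]
}
```

A description-only constraint validates nothing; the if/then form makes the same sentence machine-enforceable, which is what lets the negative test prove the halt marker is actually emittable.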
Operator clarifications materially expanded monetization taxonomy: 6 monetization paths (affiliate_aggregator_portal / affiliate_direct_online_school / aggregator_marketplace_ping_post / organic_state_school_redirect / content_only / hybrid_dynamic) + 3 orthogonal lander modes (content_site_tool_direct / archetype_tool_embedded / archetype_prequalifier). Yellow Ribbon clarified as school attribute, not monetization path. ADR 0013's 3-value enum judged insufficient. Worker 3.5-C HELD pending ADR 0018 authoring against locked taxonomy. Visual mapping session spawned in CampaignForge-specs authoring campaign-logic-chain.html — halted with 10 open questions queued for next command-seat resolution.
4 durable memories saved: feedback_command_seat_cross_agent_routing.md, feedback_scope_statement_schema_read.md, project_monetization_taxonomy_expansion.md, project_audit_2026_04_21_findings.md.
Files touched: campaign-forge (copy-factory SKILL §2.5 rewrite + winners_vault_readback v0.4.1 + trigger-library 134→139 + PROPOSED-TRIGGERS.md + STATUS + BACKLOG) + user-level command-seat skill (Step 4 addendum) + CampaignForge-specs (campaign-logic-chain.html authored on feature branch, not merged at session end) + 4 new durable memories. 3 of 4 HIGH audit findings CLOSED same-day.
ADR 0015 drafted on branch claude/adr-0015-stage-r-signal-hierarchy (commit bcfaab1), closing Gap 2 from the persona-v2 5-gap strategic review. Defines the canonical signal weighting when CPL + lead_to_sale + LTV + fatigue + compliance_violation are all populated.
3-parallel-persona-v2 coherence audit (before Phase 5.1-5.8 skill-regen fires) — 3 read-only sub-agents against the Phase 5 research layer from non-overlapping lenses (completeness / schema coherence / strategic-thesis encoding). 4 HIGH findings with cross-lens convergence: (1) Risk 1 compliance_angle_map split-brain — copy-factory Step 2.5 reading a deprecated flat-array path while canonical Parts I+II of compliance-angle-map.md sit unread; ACTIVE DRIFT since 2026-04-20. (2) winners_vault_readback.schema.json v0.4.0 describes tool_id + tool_version as required-when-populated in description text only — non-enforceable by JSON Schema; [READBACK_SCHEMA_VIOLATION: cell_id] halt marker not actually emittable. (3) Trigger library veteran/employer coverage gap — S6 veteran + S7 employer §127 cohorts undersampled. (4) Zero cells carry archetype_tag field — 6 landing-page archetypes exist in templates/landing-page-archetypes/ but no cells tag them; Stage 7 landing-page skill forced to re-invent per run.
TR1 deferred pending operator Google rep call (fresh MCC under Click Send Inc). 2 durable feedback memories saved: feedback_worktree_workflow.md (Claude Code app owns worktree creation; never pre-create via git worktree add) + feedback_persona_v2_universal_load.md (every worker + every sub-agent reads persona v2 at start regardless of task type; supersedes tiered-injection policy).
Findings disposition: Findings 1/2/3 routed to 3 parallel workers for Session #40 same-day closure; Finding 4 (archetype_tag) rolled into upcoming ADR 0018 monetization taxonomy work. Full audit diagnosis at docs/post-phase-5-coherence-audit-2026-04-21.md.
Files touched: campaign-forge (ADR 0015 draft + post-phase-5-coherence-audit doc + STATUS + BACKLOG + PE-STATUS) + 2 new durable feedback memories. Worktree nostalgic-shockley-606e91 alive for BR walkthrough.
Phase 1 (compliance reconciliation: compliance-angle-map.md v1.1→v1.2 + compliance-scan-patterns.json v1.0.0→v1.1.0 + cell-angle-rules.json v0.2.1→v0.2.1.1 with 20 renames + test fixtures, 4 atomic commits); Phase 2 (schema chain atomic per-version-bump — cell-angle-rules v0.2.2→v0.2.3→v0.2.4 + result_personalization v0.4.0→v0.5.0→v0.6.0 + winners_vault_readback v0.4.0 + tool_registry.json v0.1.0 + exploration_readback.schema.json v0.1.0 + scaffolds, 7 commits); Phase 3 (strategy.schema.json v0.2.0 + validation-rules.json stage_2 block, 2 parallel agents); Phase 4 (validator extension — 4 FK resolvers + monetization_path + exploration_budget gates, 3 commits, sequential same-file); Phase 5 SKILL regen (5 sequential subagents per disposition category 20A-preserve / 20B-extend / 20C-replace / 20D-remove / 20E-new + 20F coherence-fix + HALT-MARKERS.md v1.0.0, 8 commits, ~1652 lines); Checkpoint C interlude (vulnerable-segments.json v0.1.0 + strategy.schema v0.2.1 per rulings 1/2/3, 3 commits); Phase 6 (fixture 15 files + Tests A/B/C parallel dispatch + DoD verification, 2 commits). Checkpoint B cross-track sync VERBATIM MATCH on all 7 canonical enum strings against ContentForge 6440126. Checkpoint D reported 9/10 DoD green + 1/10 PARTIAL (validator min_angles coverage gap from S8-rollout gating).
Track B delivery: 3 files, 103 insertions, 1 commit on claude/contentforge-phase-1-retrofit. edu.md Persona B retrofit (5 new fields + rationale + metadata Retrofit-log); context-refresh/SKILL.md Section 3 extended (5 persona-level fields + hard-gate rule + Step 7.5 consumer pattern stubs for tool_registry/cell-angle-rules/content_brief_directives); content/SKILL.md Step 5 path-aware routing gate (8-subcommand × 5-path matrix + two-gate paid-eligibility resolution per BR 13-2b + organic-only stub annotation). 2-parallel-agent verification (Agent A structural 6/6 + Agent B §11.2 gut-check 6/6 with senior-affiliate voice). Session #35 protections preserved verbatim.
2-parallel-fresh-session audit (forked from Phase 5 tip, full working tree): Auditor A (structural, Explore sub-agent): DoD 7/10 green + 3/10 partial (items 1/4/5 downgraded — fixture authored but driver not run yet; cleared by housekeeping). BR rulings 32/34 traceable, 2/34 ambiguous (13-3b scoped-out, 13-4 operational). Schema chain atomic. FK resolution PASS across all 4 axes. Cross-track VERBATIM. Verdict: SHIP. Auditor B (strategic persona-v2, persona-v2-loaded Explore sub-agent): SKILL voice = senior-affiliate peer register (9+ blunt-register quotes). Moat thesis = STRUCTURAL (HARD RULES 4+8 bind every angle to tool_registry FK). Advantage+ thesis = PRESERVED. Compliance-as-angle-guide = PRESERVED. Anti-hallucination = CLEAN. Cross-cutting gap F1 REAL (latent): Step 3A line 394-395 splits exploration_candidates by is_exploratory only — should ALSO filter by monetization_path == "affiliate_online" per ADR 0012 line 77. Unreachable today (v0.2.4 has zero exploratory cells) but landmine before v2. F2 latent (operator-lane). Verdict: SHIP with F1 rework.
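The F1 fix, sketched as the two-gate filter (cell dicts and the list contents are illustrative; the field names and the "affiliate_online" value follow the ADR 0012 citation above):

```python
# F1 sketch: exploration candidates must pass BOTH gates, not just one.
# Illustrative cells — real cells live in cell-angle-rules.
cells = [
    {"cell_id": "c1", "is_exploratory": True,  "monetization_path": "affiliate_online"},
    {"cell_id": "c2", "is_exploratory": True,  "monetization_path": "content_only"},
    {"cell_id": "c3", "is_exploratory": False, "monetization_path": "affiliate_online"},
]

# Latent pre-fix split: is_exploratory alone lets non-affiliate c2 through.
buggy = [c for c in cells if c["is_exploratory"]]

# Fixed split: exploratory cells must also be on the affiliate path; anything
# else halts with [EXPLORATORY_CELL_NON_AFFILIATE_PATH] instead of passing.
fixed = [c for c in cells
         if c["is_exploratory"] and c["monetization_path"] == "affiliate_online"]

print([c["cell_id"] for c in buggy])  # → ['c1', 'c2']
print([c["cell_id"] for c in fixed])  # → ['c1']
```

Unreachable today with zero exploratory cells in v0.2.4, which is exactly why a structural audit, not runtime behavior, had to surface it.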
Consolidated housekeeping (1 subagent, persona-v2-loaded, 5 atomic commits): (1) HARD RULE 11 + [EXPLORATORY_CELL_NON_AFFILIATE_PATH] tier-1 marker + HALT-MARKERS.md v1.0.0→v1.0.1 (F1 fix); (2) min_angles → min_selected_cells rename; (3) strategy_b_expected.json updated to match §5.4 binding spec; (4) batched cosmetics (stale compliance_angle_map warning removal + SKILL §5.1 4-namespace halt-marker split + PHASE-5-SPEC §10 pointer fix); (5) Task 24 driver run artifacts captured. Post-housekeeping integration Tests A/B/C all pass. Final DoD 10/10 GREEN.
Merges + pushes: campaign-forge 03ffe11→7003bc2 via --no-ff merge of 39 commits (40 files, 13,617 insertions). contentforge bf6abe7→20f5e03 via --no-ff merge of 1 commit. Both pushed to origin. Stale .claude/worktrees/contentforge-phase-1/ (session #36 residue) cleaned up.
4 Phase 2 ContentForge brainstorm scoping inputs captured from Track B report (content scale + uniqueness governance / cross-offer routing [ADR 0016 dependency] / subject-taxonomy mapping [Extension 3A] / cell→persona aggregation rule) — routed to STATUS Next Session Priorities Path B.
Phase 5 status transition: ADRs ACCEPTED (session #37) → implementation worker spawned + shipped + audited + housekept + merged + pushed. Phase 6 now blocked only by ADR 0015 authoring (Stage R Signal Hierarchy + Tool-Driven Weighting — Gap 2 closure, spec-only session ~2-4 hours, Path A for next session).
Files touched: campaign-forge (40 files across 39 Phase 5 commits + 1 merge commit + rollup: schemas [cell-angle-rules / result_personalization / winners_vault_readback / tool_registry / strategy.schema / exploration_readback / vulnerable-segments] + SKILL regenerated + validator extended + generator new + test fixtures + HALT-MARKERS + compliance reconciliation + STATUS + psychology-engine STATUS + SESSION-LOG #38) + contentforge (3 files across 1 retrofit commit + 1 merge commit: edu.md Persona B + context-refresh/SKILL + content/SKILL) + CampaignForge-specs (this entry + Lessons Learned Session 38). Phase 5 ✅ COMPLETE + SHIPPED + MERGED + PUSHED.
CPL_delta × 0.4 + audience_novelty × 0.4 + compliance_survival × 0.2 vs direct-school-live lead_to_sale × 0.5 + audience_novelty × 0.3 + ltv_ratio × 0.2 — switches on first verified direct-school partnership ≥30 attributed leads); BR 12-4 2-mechanism design (is_exploratory + exploration_metadata only, no priority_tier enum extension); BR 13-7 Phase 1 retrofit timing (not Phase 2 refresh, prevents persona-definition source drift); BR 14-7 vertical-namespaced tool_ids (edu_*, ssd_*, medicare_*, xvert_ escape hatch + shared_architecture_pattern field — rejects shared-tool-id-with-verticals-array; tools encode per-vertical regulations).
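The two weighting regimes read more clearly as a single scoring function that switches once the first verified direct-school partnership reaches 30 attributed leads. A sketch — the metric names come from the ruling; the function shape is illustrative:

```python
def exploratory_cell_score(m, direct_school_attributed_leads=0):
    """Pre-switch: CPL-delta-led weighting. Post-switch (first verified
    direct-school partnership with >= 30 attributed leads): lead_to_sale-led."""
    if direct_school_attributed_leads >= 30:
        return (m["lead_to_sale"] * 0.5
                + m["audience_novelty"] * 0.3
                + m["ltv_ratio"] * 0.2)
    return (m["CPL_delta"] * 0.4
            + m["audience_novelty"] * 0.4
            + m["compliance_survival"] * 0.2)
```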
BR-A superseded entirely — Layer-3 experimental slot concept obsoleted by ADR 0012 exploration budget architecture. Current SKILL.md's Layer-3 section gets REMOVED during Phase 5 regen.
3 ADR status transitions executed: ADR 0012/0013/0014 all flipped Proposed → Accepted 2026-04-19 with BR grid Final Verdicts resolution pointer + flip/extension notes on each.
3 extensions logged to BR grid (non-blocking additive scope for worker sessions): Extension 1 (ADR 0014 amendment BR 14-10 cross_offer_routes[] per tool for disqualifier-triggered cross-vertical routing; addendum: ActiveCampaign email-capture infrastructure already built + tested + feature-flagged off per operator clarification — MCP access + per-tool-answer AC tagging schema + email templates on 2 tools); Extension 2 (ADR 0016 flagged future — Cross-Vertical Offer-Wall as Closed-Loop Routing, before ContentForge Phase 2 brainstorm); Extension 3 (Tool-as-Lander + Subject-Taxonomy Routing — 3A subject_taxonomy_mapping on tools + ~12-subject shared enum aligned to white-label portal subject-tailored landers, 3B Phase 5 split-test-destination default [every tool-backed cell emits destination_tool_first + destination_lander_first as split-test pair], 3C Stage 7 archetype regen widens to include tool-as-lander + consent-form-as-last-page archetypes).
Session amendments logged inline in BR grid for 13-2a (school_type disambiguation: traditional OU/UT/UCLA/MIT/UF vs online_partnered WGU/SNHU/Purdue Global), 13-2b (two-gate paid-eligibility resolution persona.paid_traffic_eligible AND (if direct_school: school.paid_traffic_allowed)), 13-3a (per-partner implementation schema at verticals/edu/config/partner-endpoints/<school_id>.json), 13-3b (consent-form-as-last-page archetype when direct-buyer partnerships go active: tool → our-hosted consent form [name + phone + TCPA + trust verification] → partner endpoint), 13-3c (ADR 0017 flagged future — Direct-Buyer Routing via On-Site Consent-Form Architecture).
3 future ADRs flagged: 0015 Stage R Signal Hierarchy + Tool-Driven Weighting (Gap 2 closure, before Phase 6 kickoff), 0016 Cross-Vertical Offer-Wall as Closed-Loop Routing (before ContentForge Phase 2 brainstorm), 0017 Direct-Buyer Routing via On-Site Consent-Form Architecture (before Phase 4 direct-buyer-active infrastructure).
1 durable feedback memory saved: feedback_split_test_discipline.md — every strategy cell with tool-backing emits tool-first + lander-first destination pair as default; validates new tool-as-lander model against 15 years of traditional-flow data.
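The discipline is mechanical enough to encode: any tool-backed cell fans out into both destination variants by default. A sketch using the destination names from the memory file (the cell shape and tool id are hypothetical):

```python
def emit_destination_pair(cell):
    """Tool-backed cells emit tool-first + lander-first as a split-test
    pair by default; cells without tool backing pass through unchanged."""
    if not cell.get("tool_id"):
        return [cell]
    return [
        {**cell, "destination": "destination_tool_first"},
        {**cell, "destination": "destination_lander_first"},
    ]
```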
Key operator-vision re-anchors surfaced during the walkthrough (none flips a BR; all shape implementation): Persona B = organic-only traffic regardless of destination; "direct school" disambiguated (traditional = Persona B organic; online_partnered = adult-returner paid OK); tool-as-lander replaces traditional Stage 7 archetype concept; $35 portal has ~12 subject-tailored landers driven by tool subject choice; free-traffic-to-inactive-partner-schools strategy already exists (tracker live); ActiveCampaign built + feature-flagged off; direct-buyer future state = tool → our-hosted consent form → partner endpoint (categorization vs implementation schema split); cross-offer / cross-vertical disqualifier routing is THE monetization flywheel.
Phase 5 state transition: KICKOFF SPEC + INPUT-LAYER TRIO ADRs DRAFTED → ADRs ACCEPTED — Phase 5 implementation worker session + ContentForge Phase 1 retrofit worker session READY TO SPAWN in parallel worktrees. Both worker scope statements pre-drafted in STATUS Next Session Priorities for fresh command-seat paste-and-go (no re-derivation needed).
Files touched: campaign-forge (INPUT-LAYER-TRIO-BR-GRID.md +180 lines with Extensions + amendments + Final Verdicts table; 3 ADR status flips with resolution pointers; STATUS.md with embedded worker scope statements; psychology-engine SESSION-LOG.md #37 entry) + user-level memory (feedback_split_test_discipline.md + MEMORY.md index) + CampaignForge-specs (this entry + Lessons Learned Session 37). No files touched in docs/psychology-engine/phase-5-strategy-engine/** (worker scope), contentforge/** (Track B scope), .claude/skills/** (worker scope), pipeline artifacts — command-seat lane discipline held.
claude/crazy-jones-2ec565 substrate worktree, committed uncommitted Session #33 Phase 4 closure + Session #35 Phase 5 specs that wrote to master instead of worktree, fixed node simdjson dyld via brew reinstall node, closed 2 stale worktrees, pushed 5 campaign-forge + 1 contentforge commits to origin).
Directus REST + Metabase programmatic dashboard build. Accessed historical_leads via Directus REST API. 443,136 rows; 142,658 clean after filtering nulls + DSM template-placeholder tokens. Sub-agent built live EDU Lead Intelligence dashboard at bi.fourthright.io/dashboard/8 via programmatic Metabase API (key in ~/.zshrc) — 18 cards across 6 sections. Rebuild script idempotent at docs/metabase/build_lead_intelligence_dashboard.py. Surfaced Counseling 19.9-21.9% share in 45-54/55+ (undersold in edu.md personas); Marcus "prior bachelor's" claim wrong (75%+ starts Some College/Associate's); Persona B not reachable via online-school affiliate offers.
ContentForge Phase 1 operator gut-check: parallel sub-agent verification → PASS-CONDITIONAL → 3 persona recalibrations shipped as contentforge/bf6abe7 (Marcus recalibration, new Persona F "Patricia, 58, 55+ Experienced Returner" covering 26.7% lead volume, Counseling elevated for 45-54/55+, [OPERATOR-SYNTHESIZED — LIMITED VERBATIM RESEARCH] flags on §3+§5).
Persona v2 strategic review — 5 structural gaps surfaced: (1) tools not first-class in input layer; (2) Stage R signal hierarchy not encoded in selection weights; (3) monetization topology ambiguous; (4) creative fatigue not operationalized; (5) vertical replication not systematized.
Authored 3 input-layer-discipline ADRs via persona-v2-loaded sub-agents (tiered-injection policy: strategic + creative mandatory full v2 read, engineering reference-only, research/ops skip). ADR 0012 Exploration Budget (~560 lines, 6 BRs, cell-angle-rules v0.2.1→v0.2.2 — creative-bias discipline). ADR 0013 Paid-Traffic Eligibility Gate (~560 lines, 8 BRs, v0.2.2→v0.2.3 — 4-path enum affiliate_online | direct_school | content_only | hybrid; Persona B→direct_school). ADR 0014 Tool Registry as First-Class Input (~580 lines, 9 BRs, v0.2.3→v0.2.4 — enforces persona v2 proof-mechanism doctrine; docs/schemas/tool_registry.json; proof_mechanism_id required; hard-fail validator; strict semver; multi-vertical via verticals[]; 90-day grace on deprecated bindings).
Consolidated 22-BR grid at docs/psychology-engine/INPUT-LAYER-TRIO-BR-GRID.md — next-session operator rules all 22 BRs in one pass.
Command-seat slash command at ~/.claude/commands/command-seat.md — discovery-based daily audit seat. Reads STATUS + follows pointers, asks 3 qualitative questions, surfaces operator-outstanding separately. Invoked via /command-seat or Wispr voice.
Revised plan sequencing locked: 28 BR rulings (22 trio + 6 Phase 5 spec) → Phase 5 implementation (Track A) + ContentForge Phase 1 retrofit (Track B, parallel) → Phase 5 validation → Phase 2 ContentForge brainstorm (with content scale + uniqueness governance as scoping input — 40-50 → several hundred articles target) → Phase 6 + ADR 0015 (Gap 2) → Gap 4 fatigue monitor non-blocking → Gap 5 vertical namespacing deferred to SSD post-EDU Phase 7.
Files touched: campaign-forge (ADRs 0012/0013/0014 + adr/README.md + INPUT-LAYER-TRIO-BR-GRID.md + docs/metabase/build_lead_intelligence_dashboard.py + psychology-engine STATUS + SESSION-LOG + main STATUS) + ~/.claude/commands/command-seat.md (user-level) + CampaignForge-specs (this entry + Lessons Learned Session 36). No pipeline skills, no pipeline runs, all schema changes await rulings.
docs/superpowers/plans/2026-04-17-contentforge-phase-1-foundation.md — mapped each task to DONE/PARTIAL/NOT STARTED against file existence + git log + STATUS + dashboard. Surfaced that Phase 1 was actually executed in Session #31 (commits e61e84c + 8c081ec + f025a37 + 5510202 on contentforge + 641fd85 on brain) but STATUS.md had drifted — "SPEC + PLAN COMMITTED — EXECUTION PENDING" still present in 3 lines (workstream row, phase-roadmap row, Next Session Priorities item #1). 22 of 28 tasks DONE, 5 acceptance tests DEFERRED to operator-run-from-ContentForge-cwd, 1 task (#28 sign-off) NOT DONE.
Option B executed: dispatched 2 general-purpose sub-agents against edu.md + ssd.md with bounded read-only scope fences (per-artifact + <400/<300 word output contracts). Agent A (edu.md vs 9 brain sources): SHIP with 5 noted gaps. Per-section table 12 rows: sections 1/6/8/9/10/11/12 PASS; sections 3/5 WARN-MED (operator-synthesized; archived buyer-intelligence.md is 5-buyer payout table, zero end-user persona data); section 4 WARN-MED (no source research); section 7 WARN-MED (competitor word-counts unverified). Hard-gate verdicts: anti-hallucination PASS (uncited stats flagged [CITATION REQUIRED]), verbatim quotes PASS-via-flag (section 6 marks every persona [VERBATIM RESEARCH REQUIRED]), compliance reversal PASS (all 7 restricted_claims from education.json mapped 1:1), distinctness PASS (swap-test fails cleanly for SSD/Medicare/auto). Agent B (ssd.md skeleton): SHIP with one defensible ⚠-alert deviation. All 12 headers + [RESEARCH TO BE GATHERED] bodies + research-gap checklist + metadata header; no fabricated content. The alert: the skeleton includes Section 6 even though spec §6.4 excludes it (verbatim quotes genuinely need research — a defensible widening).
File-level acceptance tests 3/4/5/6/8: all pass with one caveat (edu.md metadata header contains literal [GENERATED_AT_REFRESH_TIME] placeholder instead of real sha256 — research-backlog). Tests 1/2/7/9/10/11 remain deferred to operator-run-from-ContentForge-cwd per Session #31 scope. Operator §11.2 gut-check: PASS-CONDITIONAL against n=142,658 historical_leads diff. Three edits ordered + shipped: (1) Marcus recalibration — removed "may have a bachelor's already" claim; 75%+ of 25-54 cohort starts from Some College or Associate's, not prior-degree holders; reframed as stalled learner / first-time career-changer. (2) Added Persona F "Patricia, 58, Experienced Returner (55+)" covering 26.7% of lead volume previously absent from persona set; Counseling as primary subject (19.9-21.9% of cohort); full JTBD Four Forces block in Section 5 covering age-discrimination + stamina + retirement-earning-window anxiety. (3) Counseling elevated to primary subject for 45-54 slice of Marcus's cohort with same 19.9-21.9% stat calibration. [OPERATOR-SYNTHESIZED — LIMITED VERBATIM RESEARCH] flags added at top of Sections 3 and 5 so downstream Phase 2 skills know these sections are expert synthesis + aggregate lead data, not end-user interviews.
Non-blocking gaps routed to research backlog: hash placeholder regeneration, Section 7 competitor word-count spot-check, spec §7.3 stale example path (edu.json vs education.json), ssd.md alert Section-6 inclusion, verbatim-research gathering (r/AdultEducation / r/careeradvice / AARP encore-career / r/GIBill). Phase 2 brainstorm queued — article-writer + tool-page-builder + quiz-builder + content-brief-builder; spec must reconcile with existing brain-side content-writer + content-site skills (fate deferred per Phase 1 spec §13) + address Phase 5 strategy-engine output consumption pattern without violating brain-reads-at-runtime boundary from ADR 0010.
Files touched: campaign-forge (STATUS.md — header + workstream row + track-table + phase-table + Next Session Priorities + Recent Changes) + contentforge (.claude/vertical-contexts/edu.md — Sections 3/5 flags + Persona C recalibration + new Persona F with full JTBD block) + CampaignForge-specs (this entry + Lessons Learned Session 35 Track B). Phase 1 Foundation ✅ CLOSED.
docs/psychology-engine/phase-5-strategy-engine/ directory + dispatched three general-purpose sub-agents with tight non-overlapping file scope.
Agent A → PHASE-5-SPEC.md (458 lines): 11-section kickoff spec — mandate (regenerated skill reads deterministic input layer, emits strategy.json deterministically, halts with markers instead of hallucinating); canonical input layer (cell-angle-rules v0.2.1 + result_personalization v0.3.0 + compliance-angle-map v1.1 + winners_vault_readback.schema v0.1.0 + Phase 2 lookups + per-campaign + global config); 9-step runtime algorithm (cell selection → priority-tier ordering → S8 F1-first gating per rule_8 → variant resolution by axes match → compliance-merge-rule UNION overlay → null-vault-readback fallback per rule_7a-7d → S8 auto-flip per rule_7d N≥2 + quality gate); strategy.json v0.2.0 output contract with full FK trail; delta from current SKILL.md; validator coupling; non-goals; R1–R5 risk register; 10-item DoD; halt-marker vocabulary ([STALE_AUTHORITY_DATA], [MISSING: <slot>], [READBACK_SCHEMA_VIOLATION], [UNRESOLVED_COMPLIANCE_FK], [NO_CELL_MATCH]). 6 block-review asks BR-A..BR-F: (A) preserve narrow Layer-3 experimental slot vs cell-only — rec narrow preserve; (B) S8 rollout F1-only vs F1+F3 vs per-brief override — rec F1-only default; (C) variant tiebreak when axes-match equal: readback signal / lex / author-declared priority — rec author-declared variant_priority field + readback layered; (D) [STALE_AUTHORITY_DATA] halt policy: full halt / tier-split by decision_rule_category / loose partial — rec tier-split; (E) ship skill + validator same-session vs staged — rec same session; (F) manual gate on first S8 auto-flip per vertical — rec one-time ack file per vertical.
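The halt-with-markers mandate is the load-bearing contract in the spec: when an input slot cannot be resolved, the skill emits a typed marker a downstream audit can grep for, instead of inventing content. A minimal sketch over the marker vocabulary above (the resolver itself is hypothetical):

```python
def resolve_slot(slot, inputs, stale_slots=frozenset()):
    """Return the slot value, or the appropriate halt marker —
    never a fabricated value."""
    if slot in stale_slots:
        return "[STALE_AUTHORITY_DATA]"
    if slot not in inputs:
        return f"[MISSING: {slot}]"
    return inputs[slot]
```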
Agent B → SKILL-GAP-ANALYSIS.md (908 lines): section-by-section disposition of current 575-line SKILL.md. Rollup: 2 KEEP / 9 EXTEND / 6 REPLACE / 2 REMOVE + 6 net-new sections. Major REPLACE: Three-Layer Knowledge Model → priority_tier (P1_unclaimed / P1_reclaim / P2_differentiated / P3_parity) + vault_provenance overlay; Step 2.5 Compliance-Informed Angle Ideation → wholesale replacement (cell-angle-rules pre-computes compliance_hard_blocks; every angle MUST FK-resolve to a cell per ADR 0009); Step 3 three-pass angle generation → cell selection + variant resolution. REMOVE: Step 1.5 Swipe File Analysis (superseded by pre-authored cell copy_blueprint). 6 net-new sections: cell-selection algorithm, variant resolution, rule_7a-7d null-readback handling, rule_7d S8 auto-flip per ADR 0011, [STALE_AUTHORITY_DATA] halt emission, Phase 5 selection-audit trail block. Estimated new SKILL.md ~690-740 lines. 5 open questions BR-1..BR-5 mostly defer to PHASE-5-SPEC ownership.
Agent C → VALIDATOR-EXTENSION-SPEC.md (399 lines): implementation-ready spec. Option C recommended: ship scripts/extract-compliance-scan-patterns.py generator that reads compliance-angle-map.md v1.1 + writes docs/psychology-engine/phase-2-edu-reality-map/compliance-scan-patterns.json machine-readable derivative; validator reads JSON only; markdown stays authoritative; pre-commit hook + CI guard re-run generator on markdown edits so drift is impossible. 4 new validator functions: load_compliance_scan_patterns(), resolve_compliance_hard_block() (with difflib similarity-match typo suggestions), validate_cell_angle_rules_compliance_fks(), validate_strategy_json_compliance_fks() — wired into validate_stage_2 as pre-flight + output checks. Hard-fail exit 1 on FK miss per ADR 0009 input-layer-rigor. 4 operator decisions pending: A/B/C extraction path (rec C), hard-fail vs warning (rec hard-fail), generator execution surface (rec both pre-commit + CI), JSON location (rec phase-2 dir).
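The difflib typo-suggestion behavior specced for resolve_compliance_hard_block() is a few lines. A sketch, assuming the generated JSON maps pattern ids to their definitions (function and file names follow the spec text; shapes are simplified):

```python
import difflib

def resolve_compliance_hard_block(pattern_id, scan_patterns):
    """FK-resolve a compliance_hard_blocks entry against
    compliance-scan-patterns.json; on a miss, hard-fail with
    near-miss suggestions instead of silently skipping (ADR 0009)."""
    if pattern_id in scan_patterns:
        return scan_patterns[pattern_id]
    suggestions = difflib.get_close_matches(pattern_id, scan_patterns, n=3, cutoff=0.6)
    raise SystemExit(f"FK miss: {pattern_id!r}; did you mean {suggestions}?")
```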
Files touched: campaign-forge (3 new phase-5 specs + psychology-engine STATUS + SESSION-LOG + this main STATUS) + CampaignForge-specs (this entry + Lessons Learned session 35). Phase 4 unchanged (CLOSED previously). Phase 5 implementation queued for next Track A session on operator block-review approval.
cell-angle-rules.json v0.2.1 per ADR 0009 input-layer-rigor — ADRs document WHY, input files carry HOW). BR-4 A→B with tighter spec (S8 auto-flip requires campaigns_observed_count ≥ 2 + quality gate via lead_to_sale_rate floor OR lead_quality_tier_observed match — persona "lead quality > CPL" prohibits N=1 CPL-within-band as sufficient). BR-5 1→2 (ship nomenclature reconciliation this session — rule_10 pre-gen compliance cut has silent failure mode while FK names drift). Operator approved all flips.
Reconciliation addendum shipped: cell-angle-rules.json v0.2.0→v0.2.1 — 10 scan-pattern renames (government_affiliation_scan_S4 → _without_disclaimer_scan_S4 across 8 cells; all *_scan_S5 → *_scan_S1 across 9 distinct absolute-promise patterns; save_plan_status_* duplicates unified) + rule_7a (null fallback) + rule_7b (null-with-reason fallback via stage_r_emission_reason enum) + rule_7c (sample-size × recency-decay weighting with compliance-violation downgrade) + rule_7d (S8 auto-flip tightened per BR-4). compliance-angle-map.md v1.0→v1.1 — new Part III.5 "Named Anti-Patterns" section formalizes 4 novel fail-class patterns (anti_establishment_framing_moral_liberty_oppression, systematic_undercount_by_omission, misleading_time_to_income_scan, save_plan_status_without_litigation_banner) + naming-convention summary table + S4 hard-block fail-case clarification. ADR 0011 updated (rule integration section rewritten to reflect canonical inlining + rule_7d tightening). PHASE-4-DOD.md closed with BR verdicts + sign-off.
JSON integrity: cell-angle-rules.json v0.2.1 valid (32 cells, 14 rules — was 10, added 7a/7b/7c/7d). result_personalization.json v0.3.0 valid. winners_vault_readback.schema.json v0.1.0 valid. FK 32/32 match. Validator script extension deferred to Phase 5 kickoff per session scope discipline.
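The 32/32 FK check is a two-way set comparison. A sketch over simplified in-memory shapes (the real files carry full per-cell blocks):

```python
def check_fk_coverage(personalization, rules):
    """Both directions: every referenced cell_id must be defined,
    and unreferenced definitions are worth surfacing too."""
    referenced = {c["cell_id"] for c in personalization["cells"]}
    defined = {c["cell_id"] for c in rules["cells"]}
    return {"unresolved": referenced - defined,
            "unreferenced": defined - referenced}
```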
Files touched: campaign-forge (cell-angle-rules.json v0.2.1 + compliance-angle-map.md v1.1 + ADR 0011 update + result_personalization.json metadata + PHASE-4-DOD.md close + psychology-engine STATUS + SESSION-LOG addendum + main STATUS) + CampaignForge-specs (this entry + Lessons Learned addendum). PHASE 4 ✅ CLOSED. Phase 5 strategy-engine regeneration UNBLOCKED.
result_personalization.json v0.2.0 → v0.3.0 (~350 KB). FK 32/32 to cell-angle-rules.json v0.2.0. Python JSON parse validation passes.
Shipped companion artifacts: (1) docs/adr/0011-winners-vault-readback-schema.md Accepted 2026-04-18 — typed schema for Stage R readback emission: null-is-first-class, populated-requires-minimum-set, emission-reason enum (populated / null_never_bought / null_bought_below_volume_threshold / null_stage_r_error), compliance-violation tracking independent of CPL, variant-level attribution threshold-gated at ~200 leads/cell, S8 unit_economics_unverified auto-flip rule_7d. (2) docs/schemas/winners_vault_readback.schema.json v0.1.0 — JSON Schema Draft 2020-12 oneOf null | populated-object. (3) docs/psychology-engine/phase-4-result-personalization/PHASE-4-DOD.md — 9-item DoD checklist + 5 block-review asks (BR-1 audit correction / BR-2 variant density / BR-3 schema contract / BR-4 S8 auto-flip rule hardening / BR-5 next-session routing). Also audit-patched docs/adr/README.md index entries for ADR 0010 (previously silently missing) + ADR 0011.
Files touched: campaign-forge (result_personalization.json v0.3.0 + PHASE-4-DOD.md + ADR 0011 + winners_vault_readback.schema.json + adr/README.md + psychology-engine STATUS + SESSION-LOG + main STATUS) + CampaignForge-specs (this entry + Lessons Learned session 33-C). Phase 4 closure + Phase 5 strategy-engine regeneration unblock gated on operator BR-1..BR-5 sign-off.
result_personalization.json v0.2.0 (~147 KB, 10 cells total). Priority split: 3 P1_hard_violation_reclaim + 3 P1_unclaimed_lane outside S8 + the 4 S8 P1_unclaimed_lane cells carried from v0.1.1.
Reclaim cells: S1.F6 (5 rav / 4 dlo / 5 lpv / 5 trr · Yorkshire anti-establishment reclaim) — axes household_income_band × sai_result_band (<$5K / $5-15K / $15-30K / $30K+) × asset_complexity × dependent_count; rav.01 reverse_blindside for low-SAI users (WAS eligible); rav.03 blindside_confirmed_categorical_pivot is the core reclaim slot; dlo.02 zero-tolerance on asset-hiding for business-owner/real-estate-investor; dlo.04 names post-2024 sibling-allowance removal explicitly; non-gov-affiliation disclosure mandatory in every lpv. · S2.F3 (6/4/5/5 · Degree SNAP $6K+laptop bait reclaim) — axes age_band × employment_status × atb_pathway_match (5 routes: six_college_credits / state_atb_test / CPEP / GED_first / advisor_routing) × parent_status; each ATB-pathway variant carries its own 34 CFR 668.156(a) citation + named programs + transparent cost/timeline; dlo.01 zero-tolerance on skip-GED framing; lpv.01 names the bait directly ("you've seen the ‘no diploma? here's $6K + free laptop' ads. This isn't that."). · S4.F5 (6/4/5/5 · Learn Grant Writing income-lure reclaim) — axes current_debt_band (zero / <$50K / $50-150K / $150K+) × target_sector (6 sectors) × pivot_stage × current_salary_band; rav.04 $150K+ renders SAVE litigation banner + OBBBA cap overlay inline; rav.05 K-12 Teach Grant with conversion-to-unsub-loan risk surfaced; lpv.04 year-1 PSLF ECF filing-cadence warning; dlo.03 zero-tolerance on $15K-in-5hrs grant-writing-side-income language.
Unclaimed cells (outside S8): S1.F5 (5/3/4/4 · PLUS 8.94% vs Direct unsub 6.39% 10-yr differential) — axes annual_borrowing_band × years_remaining × parent_credit_tier × student_fafsa_status; rav.04 names IDR eligibility gap (PLUS → ICR only; Direct unsub → full IDR spectrum) as the biggest hidden cost of PLUS-at-scale; lpv.03 adverse-credit pathway framed as statutory mechanism per 20 USC §1078-2 ("not a loophole; it's how the statute is written"); dlo.01 zero-tolerance on private-refinance language that strips federal protections. · S4.F6 (6/4/5/5 · PSLF employer-qualification audit) — axes employer_pslf_status (5 states including confirmed_qualifying_501c3, likely_qualifying_unverified, confirmed_not_qualifying, self_employed) × past_pslf_certification_status × loan_type_status × current_repayment_plan; rav.01 retroactive-qualifying-payment recovery for never-certified-at-qualifying-employer is the biggest real unlock in the cell; rav.04 prior-denial remediation with specific cause diagnosis; dlo.03 zero-tolerance on paid-debt-relief-service language targeting denied users; lpv.01 reframes ECF as no-risk verification. · S4.F8 (6/4/5/6 · mission-identity scholarship stack) — axes mission_identity (7 tracks: k12_teacher / nonprofit_direct_service / federal_civil_service / state_local_gov / healthcare_underserved_service / environmental_conservation / faith_based_ministry) × credential_target × prior_sector × pivot_readiness; service-commitment conversion risk surfaced inline across Teach Grant ($4K/yr, 4-yr shortage-area) + NHSC/HRSA (HPSA contract + penalty) + Segal (1700 hrs commitment); trr.05 veteran-pivoter route to GI Bill + Yellow Ribbon + mission-identity max-stack schools.
Session-B scale: 116 variants (34 rav + 23 dlo + 29 lpv + 30 trr). Cumulative Phase 4: 10 cells totaling 181 variants across all P1 tiers. FK 10/10 resolve to cell-angle-rules.json v0.2.0. JSON integrity validated. All winners_vault_readback: null per rule_7. Every variant carries rationale; every rav/lpv/trr section carries fallback.
Files touched: campaign-forge (result_personalization.json v0.2.0 + psychology-engine STATUS + SESSION-LOG + main STATUS) + CampaignForge-specs (this entry + Lessons Learned session 33). Phase 4 Session C (20 P2_competitor_occupied_differentiated + 5 P3_gold_standard_parity + ADR 0011 winners_vault_readback schema + Phase 4 DoD + block-review close) queued. On Phase 4 DoD → Phase 5 strategy-engine regeneration unblocks.
tool_result_offer_routes fourth dimension to prevent segment-driven lead-quality flattening at the offer gate (Stage R signal rule #5 congruence). Q2 revised from S8.F1-only to all 4 S8 cells in Session A (rollout-gating is a budget concern, authoring is a disk concern — so when F1 validates, F3/F5/F8 ship in 24hr from existing artifacts). Q3 revised from fold-vault-schema-into-Phase-4 to spin own ADR (vault is cross-cell signal aggregation, not cell-level).
Shipped docs/psychology-engine/phase-4-result-personalization/result_personalization.json v0.1.1 (~59 KB): S8.F1 (4 rav / 3 dlo / 5 lpv / 5 trr) · S8.F3 (4/4/4/5) · S8.F5 (5/3/4/5) · S8.F8 (4/3/4/5). 65 total variants, every one with rationale, every section with fallback. FK 4/4 to cell-angle-rules.json v0.2.0.
Notable design choices: S8.F1 rav.01 recategorize_not_disqualify pivots low-result users to categorical AOTC+§127 · S8.F1 lpv.02 "cross out what doesn't match" radical-transparency frame for 18K+ band · S8.F3 dlo.04 self-employed users suppress "ask HR" language, route to §221/§222 Schedule C · S8.F5 rav.04 SAVE litigation banner mandatory inline · S8.F5 lpv.01 Parent PLUS carriers get past-kids-math vs current-self-math separation · S8.F8 dlo.03 + lpv.02 + trr.01 trades users get full academic-language separation.
Phase_5_runtime_contract encoded: variant selection by strict axes match + match-specificity tiebreak · disallowed_language merge = UNION (overlay only tightens cell baseline, never weakens) · offer_slot is Phase-8-infrastructure binding (URLs resolve at Stage-6 export via brief.yaml offer registry) · null_readback inheritance from cell-angle-rules rule_7.
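The contract's first two clauses can be sketched directly — strict axes match with a specificity tiebreak, and a UNION-only merge for disallowed_language. Variant shapes are simplified; the tiebreak here counts declared axes, which is one reasonable reading of "match-specificity":

```python
def select_variant(user_axes, variants):
    """Strict match: a variant is eligible only if every axis it declares
    matches the user; tiebreak by specificity (more declared axes wins)."""
    eligible = [v for v in variants
                if all(user_axes.get(k) == val for k, val in v["axes"].items())]
    if not eligible:
        return None  # falls through to null_readback inheritance (rule_7)
    return max(eligible, key=lambda v: len(v["axes"]))

def merged_disallowed_language(cell_baseline, variant_overlay):
    """UNION merge: an overlay only tightens the cell baseline, never weakens it."""
    return set(cell_baseline) | set(variant_overlay)
```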
Block-review judgment calls (J15–J19): J15 AFFIRM slot-based binding (no inline URLs). J16 AFFIRM (A) SAVE litigation banner in config/compliance/education.json. J17 AFFIRM UNION (persona asymmetric-compliance-risk argues against REPLACE). J19 AFFIRM (A) defer state + employer_name axes pending authority-data-cache (placeholder-as-data IS hallucination). J18 REVISED after operator-requested persona reload: S8.F5 loan_scenario_band bumped from zero/$10K/$30K to zero/$20K/$40K. Rationale: original $10K split put low-stress loans (Standard 10-yr ~$110/mo at $10K) into plan-comparison territory they don't belong in — real IDR-vs-Standard decision point is residual >$20K where Standard exceeds ~$220/mo; $40K matches aggressive-IDR + OBBBA-cap-biting. Migration applied across rav.02/03/04 + dlo.02 + trr.02/03/04. File v0.1.0 → v0.1.1.
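The J18 band math follows from standard 10-year amortization, payment = P·r / (1 − (1+r)^−n). A quick check at the 6.39% Direct-unsub rate quoted elsewhere in this log (the ruling's ~$110/~$220 figures land in the same range; the exact rate behind them isn't recorded):

```python
def standard_10yr_payment(principal, annual_rate=0.0639):
    """Monthly payment on a standard 10-year (120-payment) schedule."""
    r, n = annual_rate / 12, 120
    return principal * r / (1 - (1 + r) ** -n)

# ~$113/mo at a $10K residual — little plan-comparison pressure;
# ~$226/mo at a $20K residual — the real IDR-vs-Standard decision point.
```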
Files touched: campaign-forge (result_personalization.json v0.1.1 + psychology-engine STATUS + SESSION-LOG + main STATUS) + CampaignForge-specs (this entry + Lessons Learned session 32). Phase 4 Sessions B (6 remaining P1 cells) + C (20 P2/P3 + Winners Vault ADR + DoD) queued. On Phase 4 DoD → Phase 5 strategy-engine regeneration unblocks.
docs/adr/0010-content-skills-in-contentforge.md (governance ADR + 5 hard gates: anti-hallucination, compliant reversal, distinctness, proof mechanism, brain path resolvable) + docs/content-source-manifest.yaml (brain-owned contract declaring which files feed which vertical-context sections — EDU full + SSD skeleton-mode) + docs/psychology-engine/CROSS-REPO.md Content workflow section appended (data flow map + three-tier brain-path resolution + manifest-evolution rules).
ContentForge: CLAUDE.md + docs/adr/0001-content-skills-local-adr.md + .claude/config.json (brain-path override scaffold) + .gitignore split (.claude/worktrees/ + settings.local.json + sessions/ stay ignored; skills and contexts tracked) + 7-file reference library in .claude/skills/content/references/ (3 active HARD GATES: anti-hallucination, angle-taxonomy, proof-mechanism-map; 4 reserved for Phase 2/3: cwv-budgets, wcag-contrast, schema-deprecations, freetool-scorecard) + /content orchestrator SKILL.md (routes Phase 1 built / Phase 2-4 stubs) + /context-refresh SKILL.md (manifest-driven 12-section synthesis) + generated .claude/vertical-contexts/edu.md (755 lines, 16 canonical angles across all 8 taxonomy categories, 7/7 restricted-claim compliant reversals with named proof tools, 4 hard gates validated PASS with 4 [CITATION REQUIRED] flags in Section 9 pending canonical buyer-intelligence landing) + ssd.md (skeleton per manifest, research-gap checklist populated, all 12 sections scaffolded).
Acceptance tests: tests 3–6 satisfied inline by committed artifacts (edu.md full synthesis + ssd.md skeleton). Tests 1–2 (help commands), 7 (Phase 2 stub), 8 (missing-source hard fail), 9–11 (path resolution + OPERATOR-EDITED marker) deferred to operator verification in Claude Code session with ContentForge as cwd.
Files touched: brain 3 commits (ADR 0010 + manifest + CROSS-REPO), contentforge 4 commits (scaffold, skills, CLAUDE.md, vertical-contexts), specs 1 commit (this entry + Lessons Learned Session 31 + architecture.html status table: Content skills moved from “None in ContentForge” to “Phase 1 Foundation shipped”). Phase 2 (article-writer + tool-page-builder + quiz-builder + content-brief-builder) unblocked.
cell-angle-rules.json APPROVED pending 3 modifications (J2 S8 unit-economics flag, J4 floor-cluster rationale, J5 phase_5_selection_rules extensions); artifacts #2 (tool-blueprint-patches.md) + #3 (adaptive-question-flow-spec-v2.md) APPROVED AS-DRAFTED (J6–J14 operator defaults stand); artifact #4 PHASE-3-NOTES.md — Decision #10 resolved to option B (S8.F1 primary first, F3/F5/F8 gated on F1 signal).
Modifications applied to cell-angle-rules.json (bumped to v0.2.0): unit_economics_unverified: true + unverified_rationale on all 4 S8 cells (validation-budget discipline — Phase 5 auto-caps initial S8 budget at learning_minimum × 1 ad set until Stage R populates winners_vault_readback) · cluster_size_rationale field on S6.F4 + S6.F6 (floor-cluster at 5 triggers — veteran F4 income transformation + F6 authority are audience-specific segments where additional triggers dilute) · phase_5_selection_rules extended: rule_7 (null vault readback = no-signal contract, Phase 5 falls back to floor_multiplier + archetype_template + emits telemetry for cells that stay null after N campaigns), rule_8 (S8 rollout gating F1 → F3/F5/F8 on signal with brief.yaml operator-override escape hatch), rule_9 (cluster_size_rationale required when triggers < 6), rule_10 (compliance_hard_blocks pre-gen cut) · rule_2_s8_first reconciled with rule_8 for queue-priority vs rollout-sequencing distinction.
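rule_7's no-signal contract can be sketched as a resolver that treats a null readback as a first-class state rather than an error (N and the telemetry shape are illustrative; the real values are config-owned):

```python
def resolve_cell_signal(cell, campaigns_run, telemetry):
    """Null winners_vault_readback = no-signal, not error: fall back to the
    authored floor_multiplier + archetype_template, and emit telemetry for
    cells still null after N campaigns."""
    N_CAMPAIGNS_BEFORE_FLAG = 3  # illustrative
    if cell.get("winners_vault_readback") is None:
        if campaigns_run >= N_CAMPAIGNS_BEFORE_FLAG:
            telemetry.append({"cell_id": cell["cell_id"], "event": "readback_still_null"})
        return {"floor_multiplier": cell["floor_multiplier"],
                "archetype_template": cell["archetype_template"]}
    return cell["winners_vault_readback"]
```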
J3 5-cell compliance spot-audit (S8.F1 + S1.F6 + S3.F4 + S6.F8 + S4.F5) documented in PHASE-3-CLOSING-AUDIT.md. Findings: all entries conceptually fail-class (no false-positive cuts), but 3 nomenclature-drift categories surfaced — _scan_S4 ambiguity (S4 is action=flag not fail; should rename to _without_disclaimer_scan_S4), _scan_S5 suffix undefined in compliance-angle-map Part III (should rename to _scan_S1 since these are absolute-promise violations), 4 novel anti-pattern entries (anti_establishment_framing_moral_liberty_oppression, systematic_undercount_by_omission, misleading_time_to_income_scan, save_plan_status_claim_without_litigation_banner) not formalized. Reconciliation queued for Phase 5 kickoff as non-blocking cleanup.
All 10 queued operator decisions resolved — see psychology-engine/STATUS.md §Phase 3 queued operator decisions.
Files touched: campaign-forge (cell-angle-rules.json v0.2.0 + new PHASE-3-CLOSING-AUDIT.md + psychology-engine STATUS + SESSION-LOG + main STATUS) + CampaignForge-specs (this entry + Lessons Learned session 30). Phase 4 Result Personalization now unblocked — next Track A session reads cell-angle-rules.json v0.2.0 copy_blueprint + compliance_hard_blocks + disallowed_language per cell to produce per-cell result_personalization.json.
docs/psychology-engine/phase-3-operationalization/cell-angle-rules.json (~75 KB, 32 cells): per-cell {trigger_cluster (3–7 triggers) + archetype_template + proof_mechanism (primary tool + supporting tools + public formula + authority tier) + copy_blueprint (hook + body + CTA + lander above-fold preempt + disallowed language) + platform_fit + unit_economics_band + compliance_hard_blocks + floor_multiplier + decision_rule_category + winners_vault_readback stub}. Priority tiers: P1_unclaimed_lane (7 cells) + P1_hard_violation_reclaim (5) + P2_competitor_occupied_differentiated (15) + P3_gold_standard_parity (5). All 4 S8 cells tagged P1_unclaimed_lane per operator directive. Phase 5 selection rules encoded (S8-first when brief allows, compliance-first cut, 3–7 trigger-cluster size bounds, archetype slot resolution against brief+cache). Validated 32/32 FK match to reality_map.json.
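One cell record might look like the following (field names come from the schema above; every value here is an invented placeholder, not a real ruling):

```python
# Illustrative shape of a single cell-angle-rules.json entry. Values are
# placeholders; only the field names mirror the documented per-cell schema.
example_cell = {
    "cell_id": "S8.F1",
    "priority_tier": "P1_unclaimed_lane",
    "trigger_cluster": ["interactive_tool_conversion_lift",
                        "demonstration_beats_claim",
                        "black_swans_hidden_criteria"],   # 3-7 triggers allowed
    "archetype_template": "calculator_led_discovery",
    "proof_mechanism": {"primary_tool": "efc-calculator",
                        "supporting_tools": ["financial-aid-quiz"],
                        "authority_tier": "primary_gov"},
    "copy_blueprint": {"hook": "...", "body": "...", "cta": "...",
                       "lander_above_fold_preempt": "...",
                       "disallowed_language": ["guaranteed"]},
    "platform_fit": {"meta": True, "tiktok": False},
    "unit_economics_band": {"cpl_low": 15, "cpl_high": 75},
    "unit_economics_unverified": True,
    "compliance_hard_blocks": ["government_card_imagery_compliance_trap"],
    "floor_multiplier": 1.5,
    "decision_rule_category": 1,
    "winners_vault_readback": None,   # stub until Stage R populates it
}
```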
tool-blueprint-patches.md (~500 lines): two structural tool-spec patches — Time-to-Degree Calculator gets first-class PLA modeling (competency-category sub-flow + JST×PLA disambiguation + partner-school PLA policy overlay + ikea_effect_ownership / generation_effect_retrieval / black_swans_hidden_criteria activation), Career Salary Explorer gets employer-credential overlay (SOC-keyed credential cache + cross-tool persistence from Employer Tuition Checker + demonstration_beats_claim / authority_credentialed activation). Per-patch trigger-activation before/after tables + data-dependency flags + cell dependencies.
adaptive-question-flow-spec-v2.md (~200 lines, supersedes v1): adds trigger-deployment as a 5th prioritization axis (alongside v1's conversion × compliance × data × complexity). Re-ranks to 4 priority tiers: PRIORITY-1 (9 branches ship first) — public-data-only + activate ≥2 under-deployed moat triggers; PRIORITY-2 tool-blueprint-patch branches; PRIORITY-3 partner-data-dependent; PRIORITY-4 competitor-saturated-trigger-only deferred. Session-storage schema for cross-tool persistence specified. Recommended build order: Aid Letter Decoder PLUS-heavy red flag → EFC Calc adult-path permission-slip → EFC + Quiz S8 "you may be wrong about not qualifying" branch → Loan Repayment PSLF roadmap → ROI low-ROI warning → Financial Aid Quiz ATB pathway → EFC+Employer Tuition cross-link → Scholarship Finder VA branch → GI Bill entitlement-conservation.
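The 5-axis re-rank reduces to a scoring sort; a minimal sketch assuming equal axis weights and pre-normalized scores (the spec's actual weighting is not reproduced here):

```python
# Sketch of the v2 prioritization: v1's four axes plus trigger-deployment as
# a fifth. Equal weights assumed; complexity is assumed pre-inverted so that
# higher is always better on every axis.
AXES = ("conversion", "compliance", "data", "complexity", "trigger_deployment")

def rank_branches(branches: list[dict]) -> list[dict]:
    def score(branch: dict) -> float:
        return sum(branch["scores"][axis] for axis in AXES)
    return sorted(branches, key=score, reverse=True)
```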
PHASE-3-NOTES.md: decision audit trail (7 in-session decisions without operator input) + 10 queued operator decisions + explicit non-shipped scope (copy templates + authority-data-cache population + content-site article stubs) + Phase 3 DoD proposed.
Files touched: 4 new files under docs/psychology-engine/phase-3-operationalization/ + updated psychology-engine STATUS/SESSION-LOG + updated main STATUS + this dashboard. Phase 3 operator-review queued (expected 30–45 min block-review per session #27 cadence). On approval → Phase 4 (Result Personalization) unblocks.
docs/superpowers/specs/2026-04-17-contentforge-phase-1-foundation-design.md (524 lines) — 12-section vertical-context doc structure, /content orchestrator + /context-refresh skill contracts, 7-file reference library (3 active hard gates + 4 reserved Phase 2/3), brain-owned source manifest pattern, 5 hard gates (anti-hallucination, compliant reversal, distinctness, proof mechanism, brain path resolvable), testing/rollout checklists. (2) docs/superpowers/plans/2026-04-17-contentforge-phase-1-foundation.md (2,371 lines) — 28-task plan covering brain ADR 0010 + manifest + CROSS-REPO update, ContentForge .claude scaffold + 7 references + 2 skills, CLAUDE.md + local ADR 0001, 11 acceptance tests, specs dashboard update, final sign-off. (3) architecture.html — visual 3-repo flow diagram (brain / ContentForge / app), skills distribution map, 4 data flow paths, vertical expansion pattern, team operator journey, current-vs-target state table. Nav link added to 9 dashboard pages.
(1) docs/superpowers/specs/2026-04-17-contentforge-phase-1-foundation-design.md (524 lines) — 12-section vertical-context doc structure, /content orchestrator + /context-refresh skill contracts, 7-file reference library (3 active hard gates + 4 reserved Phase 2/3), brain-owned source manifest pattern, 5 hard gates (anti-hallucination, compliant reversal, distinctness, proof mechanism, brain path resolvable), testing/rollout checklists. (2) docs/superpowers/plans/2026-04-17-contentforge-phase-1-foundation.md (2,371 lines) — 28-task plan covering brain ADR 0010 + manifest + CROSS-REPO update, ContentForge .claude scaffold + 7 references + 2 skills, CLAUDE.md + local ADR 0001, 11 acceptance tests, specs dashboard update, final sign-off. (3) architecture.html — visual 3-repo flow diagram (brain / ContentForge / app), skills distribution map, 4 data flow paths, vertical expansion pattern, team operator journey, current-vs-target state table. Nav link added to 9 dashboard pages.
Strategic decisions captured as memories: (a) Content + copy persona unity — ContentForge content skills share the v2 affiliate-marketer persona with pipeline copy skills; articles must funnel to tools, tools honest-by-design, research-backed AND converts. (b) Quality over token economy — every content skill loads the full vertical context, no slicing. Content is the moat; reducing context to save tokens is rejected on sight. Prompt caching makes repeat reads cheap anyway. (c) SSD (Social Security Disability) replaces auto-insurance as #2 vertical — operator has buyers; gov fine-print research advantage applies; auto is saturated. Medicare + auto deferred until EDU generates. (d) Manifest-driven source loading — brain-owned YAML declares which files feed which context sections; research pipeline can evolve without breaking skills.
Execution pending: next Track-B session runs through the 28-task plan. Expected to produce ADR 0010, source manifest, ContentForge .claude scaffold, generated edu.md + ssd.md context docs, and the final sign-off commit.
Files touched: campaign-forge (spec + plan + memory updates) · campaignforge-specs (architecture.html + nav links across 9 pages + this entry). No Psychology Engine files touched — this is a distinct parallel track from the research path. Phase 3 of Psychology Engine remains unblocked and ready for the next Track-A session.
Four Phase 2 JSONs (situation_family_map.json · desire_family_matrix.json · reality_map.json · trigger_reality_matrix.json) presented as per-JSON summary + 3–6 high-stakes judgment calls per artifact rather than line-by-line schema walkthrough (since synthesis had already passed programmatic validation). Operator approved all 4 JSONs in block.
Judgment calls sanity-checked & accepted: (a) S8 middle-income returner confirmed as strongest P1 category-1 bet — 4-for-4 unclaimed-lane cells, VoC-validated "too much money to qualify" self-disqualification narrative, zero Meta corpus competitor footprint, should lead Phase 5 angle-generation budget over high-density cells · (b) TikTok exclusion from 5 of 8 situations accepted (S1, S4, S6, S7, S8) · (c) CPL bands $15–$75 across 8 situations accepted · (d) F7 legitimate-urgency retained as distinct family · (e) moat thesis (interactive_tool_conversion_lift universal across our cells + absent across competitor cohort) pressure-tested and accepted as the single load-bearing claim Phase 5 angle generation leans hardest on · (f) anti-pattern government_card_imagery_compliance_trap accepted as creative-review hard-block.
Phase 3 next-session deliverables queued: (a) per-cell angle-generation rules for Phase 5 Strategy Engine ingestion (deterministic cell → trigger cluster → archetype → proof mechanism → copy blueprint mapping) · (b) tool spec refinements for reality_map gaps: PLA-modeling layer in Time-to-Degree Calculator, employer-credential surfacing in Career Salary Explorer, adaptive-question branch-point priorities · (c) adaptive question-flow v2 informed by trigger deployment evidence.
Files touched: docs/psychology-engine/STATUS.md + SESSION-LOG.md + main STATUS.md + this dashboard. Phase 2 status: SHIPPED DRAFT → APPROVED & CLOSED. Non-blocking deferred: commercial ad-intel tool for Google/TikTok supplement (defer post-Phase-7) + [VERIFY] authority URL audit (operator action item).
situation_family_map.json (~24 KB, 8 situations): S1 High-SAI family blindsided (Yorkshire 12-ad competitor demand, F6 primary) · S2 No-HS-diploma adult re-entry (Degree SNAP 13 ads, F3 primary, ATB pathway unlock) · S3 Allied-health career transition (Coursera+WGU+Prestige Health 59+ ads, F4 primary) · S4 Nonprofit mission career-change (Learn Grant Writing adjacent, F5 primary, PSLF-backed) · S5 Working adult with transfer credits (Strayer/Capella/WGU/UoPhx 113 ads, F3 primary, multi-school comparison as differentiation) · S6 Military/veteran/first-responder (Liberty/SNHU/ASU/Phoenix 40+ ads, F8 primary, GI Bill full-picture) · S7 Employer tuition benefit worker (Coursera+WGU+Capella 60+ ads, F1 primary, §127 + top-up stack) · S8 Middle-income returner (VoC-validated + ZERO competitor = unclaimed lane), F1 primary. Each situation carries demographic/motivational profile + VoC verbatim examples + floor/multiplier + proof-mechanism tools + compliance risk + unit economics + platform fit + P4 source citations + P6 winners-vault-readback stub.
desire_family_matrix.json (~21 KB, F1–F8 × 3 postures): Per family compliant trigger cluster + archetype template + floor/multiplier + proof-mechanism tools. Per-family 3 postures: competitor_occupied + gray_zone + campaignforge_defensible. Gold-standard references captured (Strayer Graduation Fund F1, WGU competency-based F2, Strayer Transfer Scholarship F3, Coursera × Google Career Certificate F4+F8, ACM May 6 deadline F7, JWU Pledge F6, Liberty First Responder F8). Hard-violation references captured (Yorkshire A1–A8 F6+F1+F2, Degree SNAP A10 F3+F1, Prestige Health A11 F4+F7, Learn Grant Writing A12 F4+F5). VoC counter-intuitive patterns noted: Pell refund provokes fear not delight; §127 framed as foregone-income not windfall.
reality_map.json (~30 KB, 32 active cells): 8 situations × 4 primary+secondary families each. Per cell: competitor_density (Meta corpus count + advertiser examples + archetype) + compliance_line_crossing_risk (school-direct posture + aggressive-affiliate posture + observed violations) + campaignforge_tool_match (primary tool + supporting tools + multiplier story + decision_rule_category) + proof_mechanism + unit_economics_band + platform_fit + source_citations + winners_vault_readback stub. 6 unclaimed-lane cells concentrate in S8 (all 4) + S1.F5 + S4.F6/F8. Highest-density cells: S5.F3 (113 ads), S3.F4 (59 ads), S7.F1 (60 ads), S6.F8 (40 ads). 32 inactive cells documented with rationale (Stage R can elevate).
trigger_reality_matrix.json (~43 KB, 134 triggers): 100% library coverage verified programmatically (134/134 match vs trigger-library.json). Per trigger: applicable_families + applicable_situations + empirical_deployment (school_direct × aggressive_affiliate × campaignforge_opportunity) + primary_tool_mechanic + restrictions + phase_5_priority. 12 under-deployed high-opportunity triggers surfaced as strategic moat: interactive_tool_conversion_lift + demonstration_beats_claim + test_dont_guess_proof + tool_as_proof_mechanism + results_in_advance_value_first + accusation_audit + labeling_emotions + shame_reframing + black_swans_hidden_criteria + ikea_effect_ownership + generation_effect_retrieval + others_deserve_it_more_objection. Anti-pattern government_card_imagery_compliance_trap confirmed as creative-review hard-block.
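The 134/134 programmatic coverage check is essentially a two-way set diff between the trigger library and the matrix; a sketch with file loading stubbed out as in-memory sets:

```python
# Sketch of the coverage verification: every trigger in trigger-library.json
# must appear in trigger_reality_matrix.json and vice versa. Full coverage
# means both diffs are empty.
def verify_coverage(library_ids: set, matrix_ids: set) -> dict:
    return {
        "missing_from_matrix": sorted(library_ids - matrix_ids),
        "unknown_in_matrix": sorted(matrix_ids - library_ids),
        "full_coverage": library_ids == matrix_ids,
    }
```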
Files shipped: 4 Phase 2 JSONs + updated psychology-engine STATUS + SESSION-LOG + main STATUS + this dashboard. Phase 2 closes after operator approval → Phase 3 begins (Trigger × Reality Matrix operationalization + Tool Blueprint refinements informed by reality_map gaps).
competitor-teardown.md (403 lines, ~5,200 words): 5 sections covering executive read of two-market structure (school-direct disciplined cohort vs aggressive-affiliate cohort) · per-school SimilarWeb traffic-quality reads (Coursera 43.1M / WGU 11.3M / SNHU 9.5M leads cohort scale; SNHU best-engagement-combo 21.25% bounce + 7.30 pages/visit + 7:35 duration sets reference standard; ASU Online subdomain 70.81% bounce as cautionary tale for tool-page UX) · 11 school-direct teardowns (compliance posture + hook patterns + landing pattern + what to borrow + what to beat per school) · 5 aggressive-affiliate teardowns + cross-cohort patterns table (7 patterns including calculator-led unclaimed lane = zero schools deploying) · 11 aggregator/lead-gen lander analyses (CollegeRaptor as closest structural CampaignForge analog; DegreePros likely diploma-mill territory) · Google + TikTok data-gap section with operator-decision matrix on commercial ad-intel tools — recommendation: defer until post-Phase-7 validation campaign · Phase 2 reality_map.json implications (7 distinct adult-learner situations validated by competitor demand).
compliance-line-crossing-inventory.md (518 lines, ~7,100 words, 41 case studies): Section A 15 hard violations including 8 Yorkshire variants (cleanest case A1 “Don’t report this on your FAFSA — save $100,000” violates all 4 axes — government-affiliation implication + insider-anti-establishment framing + specific-large-dollar windfall + bait-and-switch funnel architecture) + Scholarship System $343,155 windfall N=1-as-method + Degree SNAP free-laptop+$6K-grant incentive-stack lure + Prestige Health Pell+manufactured-urgency Calendly bypass + Learn Grant Writing $15K/5hr income claim + Choice Point + Inside-Track CDL Pell coordinated content-funnel network (9 ads) + Get Online Class Takers contract-cheating Meta-policy violation. Section B 4 gray-zone/heuristic flags (UoPhx + Liberty FPs confirmed). Section C 12 clean references: JWU Pledge gold-standard institutional 100% + Wisconsin Nurse Educator Program gold-standard F6/F8 state-funded + UoPhx Tuition Guarantee mechanic-backed F2 + ACM full-tuition France with real May 6 deadline gold-standard F7 + WGU competency-based + Capella FlexPath + Purdue Global 3-week trial + Strayer Transfer Scholarship + Coursera × Google Career Certificate + Liberty May 18 term start. Section D 5 aggregator landers. Section E 5 sub-vertical adjacencies (NCSA athletic recruiting + CDL-Pell vocational + K-12 ESA + nonprofit grant-writing + Canadian StudentAid — all flagged for Phase 8). Per-case structure: advertiser + ad_library_id + sweep + verbatim quote + heuristic-vs-true posture + desire family (F1–F8) + triggers activated (per Phase 1 trigger library) + line-crossing axes + tool-backed compliant translation.
Heuristic refinement targets identified for Phase 2 v2 pass: cliffhanger-curiosity-gap detection + anti-establishment-framing detection + N=1-case-as-method detection (4 Yorkshire violations missed by simple regex).
CampaignForge structural defense per ADR 0009: every angle in Phase 5+ Strategy Engine output references its proof mechanism — aggressive-affiliate angles cannot pass that gate (no proof, only downstream sales call).
Files shipped: 2 deliverables + STATUS + SESSION-LOG + main STATUS + this dashboard. Gate 3 closes after operator approval → Phase 2 proper begins (4 Phase 2 JSONs: situation_family_map, desire_family_matrix, reality_map, trigger_reality_matrix).
ads-structured.jsonl + SUMMARY.md. Line-crossing case studies flagged: Yorkshire College Planning “Don’t report this on your FAFSA — save $100,000” (12 ads, F6 institutional-legitimacy violation + insider-angle; cleanest case study in corpus) · Degree SNAP FREE-Laptop+$6K-Grant+Online-HS (13 ads, highest-volume aggressive-affiliate play) · Prestige Health Academy Pell+urgency barber-training (F7 urgency-fabrication) · The Scholarship System $343,155 windfall · Learn Grant Writing career-transition. Moat-validation-by-omission: “financial aid calculator” keyword returned 1 EDU-relevant ad — zero competitive shadow on EFC Calculator angle.
Landers + SimilarWeb agent collected 102 raw files before a 600s stream-watchdog stall (20 affiliate + 9 schools × 6 pages + 11 SimilarWeb reports); no structured JSONL/SUMMARY written — structuring moves inline to next session's synthesis.
Google Ads Transparency + TikTok agent — first run 939s stream timeout (broad scope); retry with tighter 5-advertiser + 3-keyword scope confirmed Google Ads Transparency Center structurally unscrapeable via unauthenticated Firecrawl (SPA hydration barrier + advertiser-detail auth wall + virtualized DOM; typeahead confirmed 3 of 5 advertisers exist but detail pages gated; all 3 keyword queries typeahead-zero). TikTok Creative Center deferred (JS+geo-gated+anonymized+no landing URLs). Both require commercial ad-intel tool (SpyFu / Semrush / AdClarity / BigSpy $79–$299/mo) or operator-authenticated separate-IP session — flagged for operator decision.
FAFSA authority-data-cache module shipped — 44 structured JSON records: 3 per-cycle national aggregates (2024-25 Q7 final, 2025-26 Q5, 2026-27 Q1 opening) + 1 national demographics 2023-24 + 30 per-pilot × cycle records (10 pilots × 3 cycles) + 10 per-pilot trajectory rollups. Idempotent ingest at scripts/ingest-fafsa-application-volume.py — zero external network (parses session #23 local CSVs); release-date inference clamped to today for in-progress cycles (caught future-dated inference bug on spot-check).
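The future-dated-inference bug and its clamp can be sketched as follows (function shape invented; only the clamp-to-today behavior comes from the text):

```python
# Sketch of the release-date clamp: inferring a weekly release date for an
# in-progress cycle can land in the future, so the inferred date is clamped
# to today before the record is emitted.
from datetime import date, timedelta

def infer_release_date(cycle_start: date, week_number: int, today: date) -> date:
    inferred = cycle_start + timedelta(weeks=week_number)
    return min(inferred, today)  # never emit a future-dated release
```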
OBBBA record shipped at student-aid-rates/obbba_implementation_2026_04.json — PL 119-75 structural shift: Parent PLUS $20K/yr + $65K aggregate per student · Graduate $100K aggregate · Professional $50K/yr + $200K aggregate · $257,500 lifetime max · RAP+Tiered Standard as post-2026-07-01 plan menu · PAYE+ICR sunset 2028-07-01 · IBR retained for pre-2026-07-01 cohort · grandfathering 2028-29 (grad) or 2029-30 (parent PLUS) · consolidation 3-month FSA-recommended buffer closed as of 2026-04-17 = high-urgency current-borrower ad signal. Tool integration points mapped (Loan Repayment Calculator cohort switch + lifetime cap + $20K/$65K Parent PLUS distinct category, Aid Letter Decoder cap-exceeded flags, Financial Aid Quiz consolidation-decision routing). Compliance-safe framings + violations-to-avoid catalogued. Follow-up scrapes queued for FSA definitional pages.
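The Aid Letter Decoder cap-exceeded integration point might reduce to a check like this (cap figures from the record above; function and flag names are invented):

```python
# Illustrative cap-exceeded flags for Parent PLUS under OBBBA (PL 119-75):
# $20K/year and $65K aggregate per student.
def parent_plus_flags(annual_borrowed: float, aggregate_borrowed: float) -> list:
    flags = []
    if annual_borrowed > 20_000:
        flags.append("parent_plus_annual_cap_exceeded")
    if aggregate_borrowed > 65_000:
        flags.append("parent_plus_aggregate_cap_exceeded")
    return flags
```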
Durable feedback saved: feedback_firecrawl_exclusive_scraping.md — codifies ad-platform + competitor-lander scrapes as Firecrawl-only, federal domains carved out for direct pull.
Decision: skip bulk 101-file FSA archive pull — quality-over-quantity judgment; current authority-data-cache satisfies ad-relevant use cases; OBBBA structural break collapses pre-OBBBA to one baseline regardless of depth; VPN DNS sinkhole on studentaid.gov deprioritized. Idempotent scripts/fetch-fsa-archive.sh remains queued for future if downstream demand emerges.
~48+ files shipped. Deliverable E collection complete; synthesis next session closes Gate 3 and unblocks Phase 2 proper (four Phase 2 JSONs: situation_family_map, desire_family_matrix, reality_map, trigger_reality_matrix).
FAFSA XLS-to-CSV conversion shipped (scripts/convert-fafsa-xls-to-csv.py, openpyxl+xlrd, idempotent, multi-sheet). Pilot-school YoY trajectories federally traceable: SNHU 2024-25 cycle-final 282,010 FAFSAs (92% independent), WGU 291,890 (93.6% ind), ASU 233,500, 7 other pilots complete. Federal authoritative validation of adult-learner TAM.
FAFSA Application Demographics 2023-24 — national pre-rollout baseline: 17.9M applicants × 8 dimensions × 7 quarters. 41.8% age 25+, 52% Independent, 49% Pell Eligible, 47% first-gen. 50:58 avg full-form dependent completion time (16:42 for independent EZ form) — quantitative federal validation of tool-discovery friction-reduction thesis.
StudentAid Big Updates page (2026-04-15) — 648 lines Firecrawl-scraped covering OBBBA implementation: Parent PLUS $20K/$65K caps, graduate $100K + professional $50K/yr + $200K aggregate, $257,500 lifetime max, RAP payment-qualification rules, consolidation 3-month deadline to preserve IBR/ICR, ICR + PAYE elimination. Queued for next-session ingest.
FSA archive enumerated — 241 downloadable files going back to 2006-07 (20 cycles). Batch fetcher (scripts/fetch-fsa-archive.sh) built for HTTP/1.1 + retry + parallel + idempotent skip.
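The fetcher itself is a shell script; an equivalent stdlib-only Python sketch of its idempotent-skip + retry behavior:

```python
# Sketch of the batch fetcher's core loop: skip files already complete on
# disk (idempotent), download to a .part temp name so partial files never
# shadow the destination, retry with exponential backoff.
import os
import time
import urllib.request

def fetch_with_retry(url: str, dest: str, retries: int = 3) -> str:
    if os.path.exists(dest) and os.path.getsize(dest) > 0:
        return "skipped"          # idempotent: complete file already present
    for attempt in range(retries):
        try:
            tmp = dest + ".part"  # partial download never shadows dest
            urllib.request.urlretrieve(url, tmp)
            os.replace(tmp, dest)
            return "fetched"
        except OSError:
            time.sleep(2 ** attempt)
    return "failed"
```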
101-file batch download BLOCKED — operator VPN DNS sinkholes studentaid.gov specifically (resolves to 198.18.8.39, inside the reserved RFC 2544 benchmarking range 198.18.0.0/15; bls.gov + data.gov resolve normally). Fetcher killed cleanly; zero partial/corrupt files.
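A pre-flight guard for this failure class — resolve the host, then flag answers inside reserved ranges such as 198.18.0.0/15 (which contains the observed 198.18.8.39) — might look like this (sketch, not the actual tooling):

```python
# Sketch of a DNS-sinkhole pre-flight check: an answer inside a reserved or
# private range is almost certainly a sinkhole, not the real host.
import ipaddress
import socket

SINKHOLE_NETS = [ipaddress.ip_network("198.18.0.0/15")]  # RFC 2544 benchmark block

def looks_sinkholed(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in SINKHOLE_NETS) or addr.is_private

def resolve_and_check(host: str) -> bool:
    return looks_sinkholed(socket.gethostbyname(host))
```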
NCAN Tableau extraction KILLED — scout investigation confirmed federal primary source beats accredited_private aggregator.
Secondary FSA source discovered: fafsabyhs/<STATE>.xls series (50 states, weekly refresh). Scout POC pulled CA/TX/FL/NY.
License clarity — 17 USC § 105 confirms FSA data is federal public domain.
Addendum total: ~50 new files (13 XLS + 31 CSVs + 2 scripts + venv + 4 Firecrawl caches + 2 scout artifacts + STATUS updates). Combined session #23 total: ~107 files. Next: VPN DNS fix → complete archive pull → build fafsa-application-volume/ module → ingest OBBBA page → Gate 3 Deliverable E.
voc-corpus.json — 110 verbatim quotes tagged by desire family (F1–F8), emotional register, and trigger affinity resolving against Phase 1’s 134-entry library. Source distribution: Reddit 61 · Niche 24 · Trustpilot 14 · YouTube 11. Guardrail: research-input only, never reproduced in ads. Companion voc-themes.md with per-family themes + emotional-register patterns per source + cross-cutting buyer language + trigger-affinity heatmap. 3 surprising themes flagged for Phase 3 downstream: (1) Pell-refund discovery provokes fear-first not delight — EFC Calculator output flow reorder; (2) peer-insider-knowledge is the primary Reddit conversion mechanism, not copy — Phase 3 tool blueprint must package peer-insider quality natively; (3) middle-income squeeze is a distinct adult-learner identity — candidate F9 or cross-cutting persona in reality_map.json.
B.4 continuation ALL closed:
BLS OEWS +10 SOCs (total 20/50): 15-1211 · 15-1231 · 15-1244 · 15-2051 (reclassified from 15-1211.01) · 15-1212 · 13-1082 · 13-1161 · 11-3031 · 11-2022 · 27-3031. BLS discontinued per-SOC HTML pages after May 2023 — May 2024 data pulled via Public API v2 (6 batched POSTs × 25 series); schema now has source_url_national_wages (API) + source_url_industries_states (2023 HTML) + cross_check_may_2023_annual audit block. BLS-suppressed annual_p90 for Financial Managers + Sales Managers (wage ≥ $239,200/yr); 2 SOC reclassifications documented.
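The 6 × 25-series batching against the BLS Public API v2 timeseries endpoint reduces to payload chunking; a sketch that only builds the request bodies (no network call shown, and the 25-per-batch size matches the session's batching rather than the API's documented maximum):

```python
# Sketch of the batched BLS Public API v2 pull: chunk series IDs into groups
# of 25 and build one JSON POST payload per chunk for
# https://api.bls.gov/publicAPI/v2/timeseries/data/.
def build_bls_batches(series_ids: list, batch_size: int = 25,
                      start_year: str = "2024", end_year: str = "2024") -> list:
    batches = []
    for i in range(0, len(series_ids), batch_size):
        batches.append({
            "seriesid": series_ids[i:i + batch_size],
            "startyear": start_year,
            "endyear": end_year,
        })
    return batches
```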
IPEDS pilot completed (10/10): 5 new seeds — Grand Canyon 104717, Capella 413413, Strayer 131803 (DC flagship, multi-campus aggregation flagged), DeVry 482477 (HLC accreditation “ended 06/30/2019” flagged), ASU Online 483124. UoP + Purdue Global IPEDS vs Scorecard UNITID divergence documented.
Scorecard API pull (10/10 seeded): 3 full + 7 partial; blocker — api.data.gov DEMO_KEY rate-limit is 10 req/hour (not the documented 30). Operator action: a free api.data.gov key at https://api.data.gov/signup/ unblocks the 3 remaining metrics.
State-aid [VERIFY] resolution (14/14, zero flags remain): 10 fully resolved with 2025-26 or 2026-27 authoritative figures (OR, WY SF0047 signed 2026-02-27, NE CCPE 2024-25, MS MTAG APA Part 611, AL ASGP, MO A+, VA VTAG full matrix, CT Roberta B. Willis FY-26, CO COF, ND Career Builders); 3 partial (MT portfolio restructure, MN OHE 2023-24, WI companion programs); 1 blocked (UT USHE Wordfence-503, substitute budget.utah.gov). 5 program-structure changes flagged for Phase 3: UT 2021 Regents+New Century merger into Opportunity Scholarship · MT MHEG retired · CT Governor’s → Roberta B. Willis rebrand · NE State Grant → Nebraska Opportunity Grant · WI HEAB track split.
Total session output: 43 new/updated files across 5 directories. Only Gate 3 Deliverable E (Competitor Teardown + Compliance-Line-Crossing Inventory) remains for Gate 3 close.
All 10 tools ported as .astro pages per spec §8. 1 intentional slug rename (quiz-financial-aid → financial-aid-quiz).
URL structure change documented — old site served tools at root (/efc-calculator/); port moves tools under /tools/<slug>/. Added public/_redirects (12 entries) preserving SEO equity — 9 root→/tools/ + 1 quiz rename + 2 legacy indexed paths.
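The redirect file follows the Netlify/Cloudflare Pages `_redirects` format (`<from> <to> <status>`, one rule per line, `#` comments). A two-entry sample covering the two moves named above — the remaining 10 entries are not reproduced here:

```
# public/_redirects — preserve SEO equity for the /tools/ move
/efc-calculator/        /tools/efc-calculator/        301
/quiz-financial-aid/    /tools/financial-aid-quiz/    301
```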
294 parity tests passing (291 new + 3 pre-existing) — 10 Vitest files under tests/unit/*.parity.test.ts. Every constant (Pell tables, BLS salaries, BAH rates, poverty lines, loan rates, scholarship and employer databases, state-COL multipliers) diffed byte-identical vs source. Every branch covered: EFC progressive brackets + auto-zero + 175% poverty; Quiz independence + flat-30% SAI + program composition; Scholarship Finder filters + sort + aggregates; ROI tenYearROI + break-even; Loan Repayment SAVE/PAYE/PSLF + overpayment log formula; Career Salary per-state COL + nextLevel; Time-to-Degree pace overrides + military 15-credit bonus; GI Bill 5 chapters + half-time BAH=0; Aid Letter Decoder letter grade A-F + 4 red flags; Employer Tuition 18-record lookup + $-parsing + Pell stack.
Reconciliation report: ContentForge/docs/port-reconciliation.md. Also fixed scripts/session-end.sh (-u → -A + ContentForge branch master → main) and gitignored .superpowers/ + cmux.json (campaign-forge) + .claude/ (ContentForge).
Operator requirement met: "all tools work just like they do on the current site." No formula drift. Next: design refinement via DESIGN.md, GrowthBook wiring, prod cutover.
biosphere-market-2026.md — 3,653 words, 12 topical sections, P4 metadata on every claim (source_url + retrieval_date + recency_confidence + authority_tier). Topics: enrollment cliff (WICHE 2025 peak, 38 states declining), AI displacement + layoff waves (BLS projections, 2026 YTD 928 layoffs/day), SAVE plan ended (Eighth Circuit March 10 2026, 7.5M borrowers in transition, RAP live July 1 2026), FAFSA 2024-25 aftermath → 2025-26 +15.7% completions recovery, Gen-Z ROI skepticism (46% say college isn't worth it, trade schools +5%), community college renaissance (+3% fall 2025), OBBBA permanent §127 + SECURE 2.0 SLP matching, tuition-vs-wage plateau (Minneapolis Fed), 2026-27 Pell $7,395 + Workforce Pell live July 1 2026, Meta Ad Library signals, 2026 platform benchmarks (education CPL $19.27 2025 average → $21.57 by December, +44% YoY). Strategic-implications section maps all 12 forces to the CampaignForge tool-backed-proof thesis.
B.4 continuation — 37 additional state-aid records emitted from existing Session #20 Firecrawl cache (AK, AL, AR, AZ, CO, CT, DE, HI, IA, ID, KS, LA, MA, MD, ME, MI, MN, MO, MS, MT, NC, ND, NE, NH, NJ, NM, NV, OR, RI, SC, SD, UT, VA, VT, WI, WV, WY). State-aid cache now 56 records — 50-state primary-program coverage achieved per operator C6 lock. 14 [VERIFY] flags tracked in manifest.
BLS OEWS expansion — 8 priority SOCs added (29-1071 PAs, 29-2061 LPNs, 29-1171 NPs, 13-2011 Accountants, 25-2021 Elementary Teachers, 47-2111 Electricians, 49-9021 HVAC, 15-1232 Computer User Support). Cache at 10/50 target SOCs.
IPEDS pilot seed — 5 institutions (SNHU, WGU, UoP, Purdue Global, Liberty) with core IPEDS demographics; Scorecard-specific metrics marked [PENDING_API_PULL] for next session.
Full YRP directory deferred to Phase 8 cron with documented task spec (~2,000 institutions, 90-day refresh cadence).
Total session output: 55 new/updated files. Next session: Deliverable D (VOC Corpus) + Deliverable E (Competitor Teardown + Compliance-Line-Crossing Inventory). Gate 3 closes after D + E land.
resource-candidates.json — 75 records (54 primary_gov + 19 accredited_private + 2 secondary_gov, zero aggregators, all decision_rule_category=1, all 18 required fields populated, all unique resource_ids). Covers federal grants + loans + debt-relief + administrative pathways + military benefits + tax-code + 8 employer programs + 15 state programs + reciprocity compacts + 12 niche private scholarships + 5 data-source records (Scorecard, IPEDS, BLS OEWS, BLS Projections, VA GI Bill Tool, HHS).
B.2 floor-multiplier-map.md — floor anchor + typical-case stack (ad copy) + upper-bound stack ([MODELED], tool-output only) for each of the 10 tools. Floor-to-multiplier ratios 2x–40x+.
B.3 proprietary-rankings-research.md — 10 methodology candidates built from primary_gov data (Best Earnings / Value Working Adults / Completion Working Adults / Lowest Debt / Best Repayment / Veteran-Friendly / Pell-Recipient Outcomes / Credit Transfer / GI Bill Value Max / Employer Partner Schools). Each with data sources + inclusion/exclusion + weights (sum to 100) + refresh cadence + compliance framing + competitive moat + tool integration. Displaces third-party rankings entirely.
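The weights-sum-to-100 invariant is a one-line validation; a sketch (field name invented):

```python
# Sketch of the per-methodology invariant: component weights must sum to 100
# before a ranking methodology spec is accepted.
def validate_weights(methodology: dict) -> bool:
    return sum(methodology["weights"].values()) == 100
```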
B.4 verticals/edu/research/authority-data-cache/ — scaffolded with 3 shared schemas (cached-record.schema.json v0.1.0 + provenance.schema.json + staleness-rules.json) + 9 sub-module manifests + 18 seed records + 10 ranking methodology spec JSONs = 53 JSON files across 8 sub-modules. Seed records: Pell 2026-27 $7,395 (PL 119-75), Direct Sub/Unsub undergrad 6.39%, Unsub grad 7.94%, PLUS 8.94%, VA Ch 33 national cap $29,920.95, Ch 30 MGIB-AD $2,518/mo, Ch 35 DEA $1,536/mo, IRS §127 $5,250 (student-loan-repayment made PERMANENT by OBBBA), HHS 2026 poverty guidelines full tables, + 15 state-aid programs + 2 reciprocity compacts (WICHE WUE + MSEP) + 2 pilot BLS OEWS SOC records (RN + Software Developers 2024 percentiles).
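staleness-rules.json plausibly drives refresh decisions along these lines (rule keys, day counts, and record fields are invented; only the file names and the retrieval_date concept come from the text):

```python
# Sketch of a staleness check: each cached record carries a retrieval_date;
# the staleness rules map a sub-module to a maximum age in days.
from datetime import date

def needs_refresh(record: dict, staleness_days: dict, today: date) -> bool:
    retrieved = date.fromisoformat(record["retrieval_date"])
    max_age = staleness_days.get(record["sub_module"], 90)  # assumed default
    return (today - retrieved).days > max_age
```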
Operator verify-pass resolved all [VERIFY] flags via targeted Firecrawl scrapes. Operator-locked all 5 B.2 + all 7 B.3 open questions from v2 persona lens (institution-first rankings, Phase 8 program-level, 100-completer threshold, -15%/20% Parent PLUS penalty, dual YR treatment, two-tier employer-partner visibility, degreesources.com methodology hosting, hold at 10 methodologies for Phase 2).
Deliverable B ready for Gate 3 operator review alongside C/D/E (pending). Next session begins Deliverable C (2026 EDU Biosphere Market Study) + continuation work on B.4 state-aid (35 remaining states) + IPEDS/Scorecard pilot seeds + BLS OEWS top-50 expansion.
verticals/edu/tool-specs/. Shipped tool-trigger-audit.json (10 tools × 134 triggers, all 8 desire families F1–F8 covered, 2 new-tool gaps flagged for Phase 8), tool-multiplier-stories.md (per-tool floor/multiplier/source/desire-family + P5 cross-vertical pattern), adaptive-question-flow-spec.md (branch points + priority matrix), unit-economics-monetization-rules.md (Deliverable G — monetization rails + CPL bands + lead tiers + LTV + consolidated authority-data-cache infrastructure), platform-fit-rules.md (Deliverable H — 8 platforms × 3 concerns + conditional in-ad disclaimer render spec).
Applied P0/P1 patches: [VERIFY: authority-data-cache refresh] tags on federal loan rates + PLUS rate + Pell max + FAFSA deadline + OBBBA Public Law citation + CFPB specific report URL. 150% vs 225% poverty-line distinction for IDR plans (SAVE vs IBR/PAYE/ICR). Spouse §127 "both employers must have adopted plan" caveat. HR talking-points constrained to template-based placeholder-substitution only (no free-form LLM generation). SCAM CHECK 5-item exclusion pattern + scoring weights for Scholarship Finder.
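The 150% vs 225% distinction reduces to a per-plan poverty-line multiplier (SAVE shields 225% of the guideline, IBR/PAYE shield 150%; ICR differs and is omitted here); a minimal sketch with a placeholder guideline figure, not the real HHS table value:

```python
# Sketch of the IDR discretionary-income split patched above: the shielded
# amount is the poverty guideline times a plan-specific multiplier.
IDR_POVERTY_MULTIPLIER = {"SAVE": 2.25, "IBR": 1.50, "PAYE": 1.50}

def discretionary_income(agi: float, poverty_line: float, plan: str) -> float:
    shielded = poverty_line * IDR_POVERTY_MULTIPLIER[plan]
    return max(0.0, agi - shielded)
```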
Operator locked 12 Tier A + 10 Tier B + 8 Tier C + 10 Tier D Gate 2 decisions from 15-year media-buyer lens. C6 full-50-state revision applied after operator clarified Stage 3 Copy Factory is an agent-driven skill — all phased copywriter-depth scoping removed across 6 files; authority-data-cache/state-aid/ covers all 50 states at Phase 2 seed.
ADR 0009 written and accepted at docs/adr/0009-agent-driven-copy-factory-input-layer-thesis.md — formalizes agent-driven Copy Factory + research-rigor-as-throughput thesis. Affects campaign-forge + ContentForge + campaignforge-app.
claude-mem PreToolUse:Read hook patched — removed the single file-context hook that was truncating Read output to line 1 when files had prior observations. All other claude-mem functionality preserved. Idempotent re-apply script at ~/.claude-mem/disable-pretool-read-hook.sh. Takes effect next session.
Gate 2 closed. Next session begins Gate 3 — Deliverable B (resource excavation + floor/multiplier + proprietary rankings research, Firecrawl-heavy, 4 parallel batches).
verticals/edu/tool-specs/efc-calculator.md (~260 lines; structural template for the 9 remaining tool specs in Deliverable F — purpose, desire families, question set with dependencies, calculation pseudocode, data sources with authority tier, output format, compliance posture, ContentForge Svelte vs campaign-forge vanilla JS drift-check, mobile-load targets, adaptive-flow branch points). Session paused mid-Deliverable-F to resume a carried-over Cloudflare infrastructure task from the prior session.
Mapped 3-account CF architecture: infra hub (fourthright.io + project subdomains for CampaignForge/ContentForge/Plane/Directus/Metabase/Coolify + contentforge-1ei.pages.dev Pages project) chosen as long-term master; Keith's account (degreesources.com production + doinsilence.com side blog) + third fourthright account held for a future consolidation session.
Stood up a Cloudflare Access gate on ds.fourthright.io — bound as a custom domain to the existing ContentForge Pages project in the infra hub, OTP IdP, 24h session, 3 owner emails allow-listed. Resolved an ACME HTTP-01 challenge being intercepted by Access via a delete-app → let-cert-issue (~15s) → recreate-app workaround.
Pushed public/_headers to RationalSeer/contentforge main (commit 935a020): host-scoped X-Robots-Tag: noindex, nofollow for contentforge-1ei.pages.dev only (Access-gated preview and future production hostnames unaffected). Cleaned up wasted Zero Trust org from Keith's account (earlier false start).
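For reference, a host-scoped Cloudflare Pages `_headers` rule takes this shape — a full-URL pattern pins the header to one hostname, so the Access-gated preview and future production hostnames on the same project are untouched:

```
https://contentforge-1ei.pages.dev/*
  X-Robots-Tag: noindex, nofollow
```

Any hostname not matching the pattern serves no `X-Robots-Tag` at all.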
4 CF API tokens leaked in conversation — flagged for operator revocation.
Preview gated and indexing-protected. Next session resumes Phase 2 Session 2 Deliverable F (9 specs + audit + stories + adaptive flow) + Deliverables G + H.
compliance-angle-map.md — 9 parts translating every restricted claim / word / scan pattern into compliant tool-backed angles via 8 desire families (F1–F8). Part VII: disclaimers conditional on ad-claim content, not platform/format — tool-discovery framing routes compliance to the lander.
Gate 1 resolved 5 open questions with 3 strategic extensions: super tools via adaptive question-flow, proprietary rankings from public gov data (displacing third-party whitelist), consolidated authority-data-cache infrastructure spec (IPEDS + BLS + rankings + VA + Scorecard unified).
Workflow infrastructure hardened: (1) CLAUDE.md +3 workflow rules — Firecrawl as the research default, parallel tool calls, persona re-load every ~5 turns; (2) scripts/session-end.sh multi-repo helper built with interactive per-repo confirm + safe staging; (3) deep-research skill rewired to the Firecrawl CLI (it had been looking for MCP tools that weren't installed); (4) continuous-learning-v2 observer enabled — 7.7MB of observations had accumulated, but analysis was blocked by observer.enabled: false; (5) session-end protocol now mandates Lessons Learned entries (restoring a cadence that lapsed after session #9). CLAUDE.md trimmed 41.5k → 39.8k chars.
Gate 1 closed. Session 2 bootstrap prepared. Infrastructure ready for long-haul sessions. Deliverables F + G + H next (pure synthesis, no external research).
trigger-library.json (134 entries, schema v0.2.0). All of Phase 1 research shipped.
Merges applied: 5 Paper 3 formal confirmations (1 triple-merge on specificity_concrete_number), 9 Paper 2 confirmations into P1 parents (operator-approved; tool_as_proof_mechanism + mental_accounting_horizon_alignment kept standalone).
Deadline tension resolved: 3 sources merged into canonical deadline_mechanic with deadline_reality: "real" | "manufactured" selector set at angle generation. Three source IDs retained as alias stubs.
Tier distribution: Tier 1: 8 (triple-confirmed families) · Tier 2: 106 · Tier 3: 19 (replication-contested or register-sensitive) · Anti-pattern: 1 (government_card_imagery_compliance_trap).
Schema v0.2.0 formalizes P2/P3 extension fields (proof_mechanism_required, shortcut_risk, replication_status) and adds tier, see_also, variant_of, merged_from. Removed paper_N_status — provenance carries it.
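Under that schema, a v0.2.0 entry might look like the sketch below — every field value here, including the merged_from source IDs, is an illustrative placeholder rather than a quote from the shipped trigger-library.json:

```json
{
  "id": "deadline_mechanic",
  "tier": 2,
  "proof_mechanism_required": true,
  "shortcut_risk": "low",
  "replication_status": "confirmed",
  "deadline_reality": { "selector": ["real", "manufactured"], "set_at": "angle_generation" },
  "merged_from": ["deadline_source_a", "deadline_source_b", "deadline_source_c"],
  "variant_of": null,
  "see_also": ["scarcity_time", "deadline_deweaponization"],
  "provenance": ["P1", "P2", "P3"]
}
```

Note how provenance carries the per-paper status that the removed paper_N_status field used to hold, and the alias stubs survive only as merged_from pointers.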
Phase 1 complete. Phase 2 (EDU Reality Map) unblocked. Canonical vocabulary locked for all downstream phases.
Tension surfaced for Phase 1d: deadline_deweaponization (P3) vs scarcity_time (P1) — planned conditional-trigger resolution.
Voss's 7-38-55 / Mehrabian rule deliberately NOT elevated (replication caveat documented).
All three Layer-1 research papers locked. Phase 1d synthesis unblocked.
government_card_imagery_compliance_trap (CMS-prohibited Medicare pattern).
Both operator-approved same session.
| Repository | Purpose | Stack | Deploys To | Status |
|---|---|---|---|---|
| campaignforge-app | CampaignForge ops platform (workflow UI, pipeline execution, dashboards) | SvelteKit 2, Svelte 5, shadcn-svelte, Drizzle + Kysely, Postgres | DigitalOcean via Coolify | Deploy-Ready |
| contentforge | Content sites (degreesources.com + future verticals) | Astro 5, Svelte 5 islands, MDX, Tailwind 4 | Cloudflare Pages | W1 Done • Deployed |
| campaign-forge | Pipeline brain: skills, config, specs, vertical data, research | Python scripts, YAML/JSON config, Claude skills | Local / CLI | Active |
Meta / Google / TikTok ad → user clicks → lands on content site tool page
User uses EFC Calculator / Quiz / Finder → genuine value delivered
User clicks "Explore Programs" → routed via offer URL with tracking params
User fills form on partner portal → lead captured → Everflow attributes
$35 CPL per qualified lead → Winners Vault updated → next campaign informed
| Phase | Focus | Tasks |
|---|---|---|
| This Week | Spec review, social profiles, Meta Verified, entity resolution, pre-warming | 22 |
| Weeks 1-3 | Tracking live, platform MVP, ad accounts, organic posting, campaign structures | 35 |
| Weeks 4-6 | Full workflow UI, account warm-up, first live campaign, Winners Vault seeded | 26 |
| Weeks 7-9 | Lead intelligence, email capture, social automation, competitive intel, platform health | 28 |
| Weeks 10+ | Ad platform APIs, CAPI firing, retro automated, ping/post routing, multi-vertical | 15+ |