APPROVED 2026-04-13 • FEATURE FREEZE • BUILD STARTED 2026-04-14

CampaignForge + ContentForge

Complete Spec Bundle — 5 Documents, 3 Repos, Building

- 5 Spec Documents
- 17 Decisions
- 3 Repos Live
- 815+ Files Shipped
- 134 Triggers (canonical)
- 75 Resources cataloged
- 10 Proprietary rankings
- 50 States covered (aid)
- 294 Tool parity tests
Spec Documents
Platform Spec (Ops App)
ContentForge (Content Sites)
Trust Framework (Accounts)
Execution Plan (Timeline)
Operator Runbook (Daily Ops)
Build Progress
Repos scaffolded 2026-04-14. Feature freeze in effect — building from locked specs.

campaignforge-app

PIPELINE UI DONE

SvelteKit 2 + Svelte 5 + Tailwind 4 + Drizzle ORM + shadcn-svelte

✓ App shell + auth + dark mode
✓ shadcn-svelte (9 components)
✓ PostgreSQL + Drizzle + migrations
✓ Agent Executor + SSE streaming
✓ Cost tracker (per-model pricing)
✓ Session manager (state machine)
✓ Pipeline orchestrator (stage sequencing)
✓ Brief editor (7-step form → YAML)
✓ Pipeline execution UI (SSE + timeline)
✓ Approval panels (angles + copy)
✓ 3 pipeline API routes wired
◆ Validation results display
◆ Deploy to Coolify
◆ GATE: Pipeline runnable from UI

contentforge

QUALITY GATES DONE

Astro 5 + Svelte islands + MDX + Tailwind 4 • 42 pages, 10 tools, Lighthouse 95+

✓ 20 articles + 10 tools + 4 E-E-A-T
✓ 8 shared + 10 tool Svelte islands
✓ JSON-LD schema (all page types)
✓ XML sitemaps (priority + lastmod)
✓ Zaraz events (5 × 10 tools)
✓ Playwright E2E (22 tests)
✓ a11y audit (zero critical/serious)
✓ Lighthouse CI (all pages 95+)
✓ Skip-to-content + ARIA roles
✓ Color contrast WCAG AA
◆ URL redirect plan + A/B test
◆ CF Pages production cutover
◆ GATE: ContentForge live on Astro

Lessons Learned

Session 26 — Phase 2 Proper Is Mechanical When Research Sprint Does Its Job + Decision-Rule Discipline Surfaces Unclaimed Lanes + Tier-1 Trigger Mapping Proves Universal Moat (2026-04-17)

Phase 2 proper synthesis was truly mechanical per scope-doc prediction — the 8 Gate 1–3 deliverables carried every input needed, and session time went into structuring + cross-referencing rather than inventing. Writing the 4 JSONs (situation_family_map, desire_family_matrix, reality_map, trigger_reality_matrix) was a combinatorial exercise against the compliance-angle-map (F1–F8 definitions) + floor-multiplier-map (per-tool multiplier stories) + voc-themes (desire-family VoC validation) + competitor-teardown + compliance-line-crossing-inventory (competitor deployment evidence). The scope doc predicted this: "combine inputs → emit four JSONs → validate against cross-cutting principles → ship. No invention, no memory-pull, no blind spots." It held exactly. Pattern: when a research sprint genuinely does the work of scoping its own downstream synthesis, the synthesis phase stops requiring creativity and becomes operational. The payoff justifies the upfront investment in rigorous research deliverables.

S8 unclaimed-lane situation surfaces only when P1 decision-rule forces discipline. Meta corpus had ZERO advertisers targeting "middle-income returner delayed by cost" — the VoC-validated self-disqualification persona (quotes 8, 9, 11 "too much money to qualify"). Without the P1 principle ("only categories 1 and 3 ship"), S8 might have been cut for lack of competitor-demand validation. With P1, S8 is identified as the strongest category-1 strategic opportunity in the entire situation map — tool-native dismantling of a belief with zero competitor contest. Pattern: discipline around decision rules surfaces moats that pure market-validation signals miss, because the strongest moats have empty competitor cells for a reason. Where the signal is "no one is doing this," the question is whether the empty cell has a structural barrier (tool-requirement, data-requirement, compliance-requirement) or a market-signal deficit — the former is moat, the latter is trap.

Tier-1 interactive_tool_conversion_lift mapping across 100% of reality-map cells + 0% of competitor cells is the structural moat in one line. Every active cell (32 of 32) activates this trigger. Competitor corpus shows zero school-direct + zero aggressive-affiliate deployment (per competitor-teardown.md "calculator-led creative: zero schools deploying"). The moat is not "CampaignForge has some tool-led angles" — it's "the entire reality map is tool-native by construction while competitors are tool-absent by construction." Pattern: when a trigger library entry maps across 100% of your cells and 0% of competitor cells, the advantage is structural, not differentiation. Structural advantages compound because competitors can't copy them without building equivalent infrastructure; differentiation advantages converge because copy is cheap.

Documenting inactive cells with explicit rationale preserves optionality for Stage R. Of the 64 possible cells (8 situations × 8 families), 32 are primary/secondary active and 32 are inactive-this-pass. Rather than omit the inactive half, reality_map.json documents each inactive cell with rationale (e.g., "S1 × F3/F4/F7/F8 — SAI-blindsided families are already post-qualification and don't need effortless-qualification or urgency"). Stage R campaign signal can later elevate an inactive cell based on unexpected conversion patterns. Pattern: mark the map complete, don't leave it sparse. Future-you reviewing the map after campaigns run needs to distinguish "we decided this cell doesn't matter" from "we forgot this cell existed." Explicit inactivation ≠ omission.
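The "complete, not sparse" invariant is cheap to machine-check. A minimal sketch, assuming a flat cell dictionary keyed "S1.F1" with status and rationale fields (the real reality_map.json schema may differ):

```python
# Hypothetical completeness check for reality_map.json. The "S1.F1" key
# format and the status/rationale field names are assumptions.
SITUATIONS = [f"S{i}" for i in range(1, 9)]   # 8 situations
FAMILIES = [f"F{i}" for i in range(1, 9)]     # 8 desire families

def check_reality_map(cells: dict) -> list[str]:
    """Flag missing cells and inactive cells lacking an explicit rationale."""
    problems = []
    for s in SITUATIONS:
        for fam in FAMILIES:
            key = f"{s}.{fam}"
            cell = cells.get(key)
            if cell is None:
                problems.append(f"{key}: missing (omission, not inactivation)")
            elif cell.get("status") == "inactive" and not cell.get("rationale"):
                problems.append(f"{key}: inactive without rationale")
    return problems
```

A missing cell and an unexplained inactive cell fail differently, which is exactly the "we decided" vs "we forgot" distinction the map needs to preserve.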

Compliance-line-crossing inventory (session #25) + reality_map cells produce a tight one-to-one mapping on first overlay. Every Yorkshire violation (A1–A8) traces cleanly to S1.F6 (high-SAI institutional-legitimacy). Every Degree SNAP variant (A10) traces to S2.F3 + S2.F1. Every Learn Grant Writing violation (A12) traces to S4.F5 + S4.F4. The inventory is functioning as a competitor-pattern-to-reality-cell dictionary exactly as session #25's Lessons Learned predicted. Pattern: when two artifacts from different sessions produce tight structural mapping on first overlay, the underlying taxonomy is sound — don't revise. Taxonomy-validation-via-cross-referencing is one of the cheapest quality signals available, and a mismatch would have been the clearest signal that the taxonomy needed rework.

Programmatic validation catches transcription errors that semantic review misses. A single Python script (json.load + set comparison) verified 134/134 trigger coverage + JSON parse validity + structural cell count in <2 seconds. During drafting I'd written "detailed_entries_written: 135" in the trigger-reality-matrix metadata block; the script caught the drift — actual count was 134. No semantic-review pass would have flagged that. Pattern: machine-verifiable invariants catch the errors human review invariably misses. Every structured research output should have a one-command validation script, and the script should run before "shipped" is claimed.
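A validator in that spirit, as a minimal sketch; the file layout and key names (triggers/id, entries/trigger_id, metadata.detailed_entries_written) are assumptions, not the shipped schema:

```python
import json

# Sketch of the one-command validator: parse validity, coverage by set
# comparison, and metadata-vs-actual count drift. Key names are assumptions.
def validate_matrix(matrix_path: str, library_path: str) -> list[str]:
    errors = []
    with open(matrix_path) as f:
        matrix = json.load(f)    # a parse failure raises immediately
    with open(library_path) as f:
        library = json.load(f)
    lib_ids = {t["id"] for t in library["triggers"]}
    covered = {e["trigger_id"] for e in matrix["entries"]}
    if lib_ids - covered:
        errors.append(f"uncovered triggers: {sorted(lib_ids - covered)}")
    claimed = matrix["metadata"]["detailed_entries_written"]
    actual = len(matrix["entries"])
    if claimed != actual:
        errors.append(f"metadata claims {claimed}, actual {actual}")
    return errors
```

An empty return list is the "shipped" precondition; anything else blocks the claim.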

Session 25 — Pre-Flagged Collection Notes Compress Synthesis + Heuristic Gaps Surface At Use + SimilarWeb As Tool-Page UX Benchmark (2026-04-17)

Pre-flagged collection-pass observations are the single highest-leverage synthesis throughput multiplier. Session #24's Meta agent SUMMARY.md pre-identified the top 5 line-crossing patterns (Yorkshire / Degree SNAP / Scholarship System / Prestige Health / Learn Grant Writing) plus the moat-validation-by-omission finding ("financial aid calculator" returned 1 ad). When session #25 opened, the case-study skeleton was already built — synthesis time went into trigger annotations + tool-backed translations + cross-cohort patterns rather than re-discovering structure from raw text. Pattern: every collection agent should be tasked with producing a SUMMARY that pre-identifies 5–10 highest-signal cases for the synthesis pass, not just dumping raw data. The collector knows the corpus better than the synthesizer ever will after a one-pass read — capture that knowledge upfront.

Compliance heuristic gaps surface during synthesis, not collection. The simple regex/keyword classifier in _parse.py missed 4 Yorkshire variants that the synthesis pass flagged as violations: A3 "$72,000 lie" (no dollar-anchor regex match), A11 "don't claim your child as a dependent" (insider-knowledge without explicit "free money" token), A12 "never submit your FAFSA same day" (cliffhanger-curiosity-gap with no aggressive trigger word), A15 "$5,500 federal loan = no real help" (anti-establishment framing weaponizing the federal-aid distinction). The heuristic was tuned for explicit-claim violations and missed framing violations entirely. Phase 2 v2 pass needs: cliffhanger-curiosity-gap detection + anti-establishment-framing detection + N=1-case-as-method detection. Pattern: heuristics built from prior-known violations don't catch novel framing patterns; always run a manual-review pass on a sample to surface what the regex isn't seeing.
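The missing detectors could be bolted on as framing-pattern classes next to the explicit-claim keywords. A hedged sketch; these regexes are illustrative guesses at the patterns, not the real _parse.py rules:

```python
import re

# Illustrative framing-pattern detectors sketched from the gaps above.
# These are NOT the production _parse.py rules, just plausible shapes.
FRAMING_PATTERNS = {
    "cliffhanger_curiosity_gap": re.compile(
        r"\bnever\b.*\b(submit|file|claim)\b", re.IGNORECASE),
    "anti_establishment": re.compile(
        r"\b(lie|scam|no real help|they don't want you to know)\b",
        re.IGNORECASE),
    "insider_knowledge": re.compile(
        r"\bdon't claim\b|\bone weird\b|\binsider\b", re.IGNORECASE),
}

def flag_framing(text: str) -> list[str]:
    """Return the framing-violation labels whose pattern matches the ad text."""
    return [name for name, pat in FRAMING_PATTERNS.items() if pat.search(text)]
```

The manual-review pass stays mandatory either way; these patterns only encode the framings already seen once.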

SimilarWeb engagement metrics give measurable UX benchmarks for tool pages. SNHU's 21.25% bounce + 7.30 pages/visit + 7:35 duration is a concrete reference standard for any CampaignForge tool page that gets paid traffic; Strayer's 11.77 pages/visit is the high-engagement ceiling reference. ASU Online subdomain's 70.81% bounce / 1.61 pages/visit / 1:14 duration is the failure-mode reference (paid landing without next-click logic). Phase 6 lander-archetype specs should reference these benchmarks explicitly rather than rely on internal-only quality conventions. SimilarWeb engagement metrics aren't just interesting data points — they're the empirical floor your competitive landing experience has to clear, and the ceiling of what competitors have validated as feasible at scale.

Operator-requested mid-session persona reload validates the "Strategic-decision triggers" rule from CLAUDE.md. Operator surfaced advanced-affiliate-marketer-system-prompt-v2.md mid-session before the synthesis pass. Re-load preceded both deliverables; both drafted with fresh attention weight on the structural-edge thesis (tool-backed proof > aggressive claim) and the compliance-as-angle-input frame. Without the reload, synthesis would likely have collapsed the school-direct vs aggressive-affiliate cohort distinction into "competitor analysis" rather than treating it as the empirical foundation of the moat thesis. Pattern: re-loads at strategic-decision points are not optional; the compounding cost of weakened persona attention across a multi-hour synthesis pass is invisible until the output looks generic.

The "calculator-led creative" cross-cohort pattern table cell is empty across all 11 schools. Of 7 patterns mapped (floor-number / competency-based / military-discount / transfer-credit / tuition-guarantee / creator-testimonial / calculator-led), six have multiple school-direct deployers; the seventh has zero. That's not a coincidence — school-direct advertisers don't have multi-school calculators because they only know their own pricing/aid model. Aggressive affiliates don't have calculators because building a real EFC calculator is engineering investment they refuse to make. The CampaignForge moat is structurally locked — not because it's clever, but because no one else can build it from where they sit. Pattern: when a competitive teardown surfaces an empty cell that nobody is filling, that's a moat opportunity if and only if there's a structural reason no one fills it. Empty cells with structural barriers compound; empty cells without barriers get filled by the next entrant.

"Tool-backed compliant translation" is the most reusable per-case annotation in the entire inventory. Every Section A (hard violations) case got tagged with the specific CampaignForge tool that captures the same psychological payload through proof rather than claim. That tagging is the direct seed for Phase 5+ Strategy Engine angle-generation: when the agent reads "Yorkshire's $100K windfall claim violates F6 institutional-legitimacy", it now has the explicit "EFC Calculator with transparent asset-treatment disclosure" route to the same audience. The line-crossing inventory is functioning as a competitor-pattern-to-tool-mapping dictionary, not just a violations catalog. Pattern: every research-input artifact should carry direct-actionable tagging for downstream agent consumption, not just observations — the translation work happens during research, not during generation.

OBBBA-aware tool integration mapping in clean-reference cases (C-section) reveals tool-coverage gaps. While annotating the WGU competency-based reference, surfaced that Time-to-Degree Calculator needs a PLA (prior learning assessment) modeling layer to compete on competency-based pathway comparison — current scope doesn't include this. While annotating Coursera × Google Career Certificate reference, surfaced that Career Salary Explorer should surface employer-recognized credentials as part of the pathway, not just degrees — current scope is degree-centric. Both gaps now flagged for Phase 3 tool blueprint refinement. Pattern: clean-reference annotation in a competitive teardown is the cheapest possible scope-validation pass for your own tool roadmap. If your tool can't beat the clean reference, you have a tool-spec problem before you have a copy problem.

Session 24 — Firecrawl-Exclusive Scraping + Google Transparency Unscrapeable + Quality-Over-Quantity on Archive Depth (2026-04-17)

Ad-platform scraping from the operator IP is a permanent ban risk — not a per-session judgment call. Operator runs Meta, Google, TikTok ad accounts from this machine. Any scraping of those platforms’ ad-intelligence properties (Ad Library, Transparency Center, Creative Center) or competitor landers that those platforms’ fraud-detection systems might fingerprint = correlation-to-banned-account-behavior risk. Codified today as durable auto-memory (feedback_firecrawl_exclusive_scraping.md) + hard-locked at the top of every agent brief in the session. Federal authority domains (ed.gov, bls.gov, studentaid.gov, va.gov, irs.gov) carved out as lower-risk for direct operator-initiated bulk-federal-data scripts. Litmus test: “If the ad platform’s fraud team saw this IP request tomorrow, would it look like browsing or scraping?”

Google Ads Transparency Center is structurally unscrapeable via unauthenticated Firecrawl. 15 min of retry exhausted the surface: SPA hydration barrier (post-bootstrap internal RPC; 7s wait-for insufficient for virtualized card mount; 2.7 MB raw HTML captured with zero creative IDs / dates / text), advertiser-detail pages auth-wall with a sign-in prompt, interact -c code sandbox has a ctx-already-declared collision. Typeahead confirmed advertisers exist but detail pages inaccessible. Same verdict for TikTok Creative Center (JS-gated + geo-gated + anonymized + no landing URLs). Escape hatches: (1) commercial ad-intel tool ($79–$299/mo SpyFu, Semrush, AdClarity, or BigSpy), (2) operator-authenticated separate-IP browser session, (3) firecrawl interact -c --python which avoids the Node sandbox collision (2–3h engineering, no auth-wall guarantee). Pattern to remember: if a platform’s whole value is behind SPA hydration + auth walls, Firecrawl alone is the wrong tool; don’t grind. The first Google agent burned 939s and produced zero output trying to brute-force this. The retry with tight time-boxed scope produced the same finding gracefully in 8 min. Tight-scope agent briefs with explicit stop-triggers beat broad-scope grinds every time.

Authority-data-cache is quality-over-quantity — 101 more FSA archive files would have delayed Gate 3 close for marginal angle value. The seductive move after getting 31 CSVs of FAFSA data converted was to go back and pull the 7-year archive (2018-19 through 2026-27, 101 queued files, blocked by VPN DNS sinkhole). Resisted. Current cache already satisfies ad-relevant use cases: current cycle snapshot + just-completed cycle baseline + in-progress cycle momentum + 2023-24 national demographics (federal TAM validation) + per-school specificity. OBBBA is the structural break — pre-OBBBA data collapses to one baseline comparison regardless of how many years deep. Ads need current amounts + current status + one YoY comparison + demographic validation + per-school specificity, not a historical trends dashboard. Defensibility is a function of source URL + retrieval date + authority tier on each claim, not archive depth. Skipping 101 files of marginal signal is the right call when the cost is ingest overhead + quarterly refresh burden vs zero new angles. Pattern to remember: more data without a specific downstream use case is bloat, not rigor.

Parent PLUS $20K/$65K cap + $257,500 lifetime max + consolidation 3-month buffer closed today = high-urgency current-borrower ad signal. OBBBA record synthesis surfaced a timing detail buried in the FSA big-updates page: consolidate at least 3 months before 2026-07-01 to guarantee disbursement before cutoff, or lose access to IBR/ICR/PAYE permanently. As of today (2026-04-17) the 3-month buffer has closed — any consolidation application from now forward carries elevated risk of disbursement landing post-cutoff. That’s a concrete urgency mechanism for Loan Repayment Calculator and Financial Aid Quiz that wasn’t obvious from the high-level OBBBA summary. Mine the operational guidance under the headline numbers; the timing details are often the sharpest ad levers.

Agent stream-watchdog (600s no-progress) is a useful failure mode, not a lost session. Landers agent stalled at 600s without writing its structured JSONL or SUMMARY, but left 102 usable raw files on disk (20 affiliate landers + 9 schools × 6 pages + 11 SimilarWeb reports). Inline structuring during synthesis is higher-quality than a second agent pass because the synthesizer can tag-as-they-read and the taxonomy (compliance posture, desire family, tool-backed translation) stays internally consistent across all cases. Agent stalls are not failures when the raw data survives; they’re just phase-boundary signals that the synthesizer should pick up where the collector stopped.

Inference bugs hide in date math — always clamp-to-today on release-date estimates for in-progress cycles. FAFSA ingest initially emitted data_release_date: 2027-06-30 for the 2026-27 Q1 opening cycle — the Q7 end-of-cycle inference. A release date is definitionally in the past; future-dated release = bug. Caught on spot-check, fixed with a clamp-to-today guard. Pattern: any inference that could produce future dates for records about past events deserves an invariant check. Cost of clamp = 3 LOC; cost of future auditor finding future-dated authority-data = credibility hit.

Session 23 Addendum — FAFSA Data Pipeline + VPN DNS Sinkhole Diagnostic (2026-04-17)

Federal public-domain data is the cleanest license in the stack. 17 USC § 105 excludes all federal government works from copyright protection. No usage fees, no attribution required, no restrictions on derivative or commercial use. FSA FAFSA Application Volume + Demographics releases meet this bar. Every record we cite with federal source_url + retrieval_date is a regulator-defensible claim. This makes the authority-data-cache's primary_gov tier not just a taxonomy label — it's the legal foundation of the whole compliant-ads thesis.

Federal primary source beats aggregator visualization every time. Operator surfaced NCAN’s Bill DeBaun FAFSA Tracker (Tableau Public). Scout investigation confirmed NCAN aggregates from the same FSA .xls files we can pull directly. Skipping NCAN saves a Tableau-scraping project (accordions don’t expand in Firecrawl’s Playwright; agent-mode hallucinated values when blocked), avoids accredited_private tier dilution, and delivers cleaner upstream. Pattern: every time an aggregator appears promising, ask “what federal source is this built on” first.

Dependent vs Independent split is the adult-learner TAM at federal level. 2023-24 national demographics: 52% Independent applicants, 41.8% age 25+, 47% first-gen. Per-school pilot data confirms the concentration: WGU 93.6% independent, SNHU 92%, UoP 95%, Capella 98%, DeVry 97%, Strayer 98%. CampaignForge’s “online EDU = adult learner” thesis isn’t just a perspective — it’s now quantified with federal authoritative data. Biosphere narratives citing these numbers are regulator-armored.

Multi-cycle YoY trajectory unlocks narrative depth single-cycle data can’t. Three cycles of per-school Q1–Q7 data lets us say “SNHU processed 282,010 FAFSAs in the botched 2024-25 cycle (93% independent); recovering at 208,290 through Q3 of 2025-26; ASU already at 119,930 in Q1 of 2026-27 alone — half of full cycle 2024-25 in one quarter.” Regulator-defensible claim material for Stage 3 Copy Factory state × cycle-recency social-proof anchors. Single-cycle data is a table; multi-cycle data is a story.

Completion-time data quantitatively validates tool-discovery framing. 2023-24 demographics reports dependent-filer full-form completion at 50:58 minutes; independent-filer EZ form at 16:42. The friction we’ve been claiming our tools reduce isn’t a vibes argument — FSA itself publishes the number. Calculator reduces a 51-minute form to a 2-minute qualification check. Friction-reduction is federally quantified, not narrative.

DNS sinkhole pattern: read the resolved IP. Two VPN exits both failed to reach studentaid.gov. Looked like VPN blocking, retry loops, or HTTP/2 issues. Real answer surfaced from one command: host studentaid.gov returned 198.18.8.39 — an address in 198.18.0.0/15, the reserved RFC 2544 benchmarking range. The VPN’s DNS resolver was sinkholing studentaid.gov specifically while passing bls.gov and data.gov through. Pattern: when a curl fails silently (no 403, no 5xx, just timeout), always check DNS resolution first. A reserved-range IP returned for a public domain = sinkhole. Cheap diagnostic; saves an hour of chasing firewall hypotheses.
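The "is this IP plausible for a public domain" half of the diagnostic is automatable with the stdlib ipaddress module; the function name here is mine:

```python
import ipaddress

# A public hostname resolving to a reserved / non-global address means the
# resolver is sinkholing it, not that the remote host is blocking you.
def looks_sinkholed(resolved_ip: str) -> bool:
    return not ipaddress.ip_address(resolved_ip).is_global
```

Pair it with whatever did the resolution (e.g. `socket.gethostbyname`) and run it before blaming firewalls or HTTP/2.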

Reusable tooling pays compounding dividends. Built scripts/convert-fafsa-xls-to-csv.py to handle one 11-file XLS batch from the operator; same script later converted a single demographics file, and will run unchanged on any future FSA batch of arbitrary size. Idempotent skip logic + multi-sheet auto-detection + engine selection (openpyxl vs xlrd) means it’s a “drop-and-go” interface for the operator. Each time I write infrastructure rather than one-shot code, the next batch is free.
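The two per-file decisions the script makes (engine selection, idempotent skip) can be sketched as pure helpers; the names are illustrative, not the real internals of scripts/convert-fafsa-xls-to-csv.py:

```python
from pathlib import Path

# Illustrative versions of the script's two per-file decisions.
def choose_engine(xls_path: str) -> str:
    """Modern .xlsx workbooks need openpyxl; legacy .xls needs xlrd."""
    return "openpyxl" if xls_path.lower().endswith(".xlsx") else "xlrd"

def should_skip(src: Path, dst: Path) -> bool:
    """Idempotent drop-and-go: skip when the CSV exists and is up to date."""
    return dst.exists() and dst.stat().st_mtime >= src.stat().st_mtime
```

With both decisions made per file, rerunning on an arbitrarily sized FSA batch converts only what's new, which is what makes the interface drop-and-go for the operator.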

Adult-learner application concentration: 85% of non-freshmen list only 1 school. 2023-24 demographics detail: freshmen list 1 school 44% of the time, trailing to 4% for 10-school filers — classic shopping distribution. Non-freshmen collapse that entirely: 85% list only 1 school. Adult returners aren’t comparison-shopping; they’ve pre-decided. Copy implication: adult-learner ads should be school-specific, not “compare your options” framing. Different psychology from first-time freshmen filers; need different angle generation paths per persona.

Session 23 — 4-Agent Parallel Orchestration for Gate 3 Deliverable D + B.4 Continuation (2026-04-17)

Parallel orchestration works cleanly when output paths don’t collide. Four research agents dispatched from a single orchestrator turn — voc-collector, school-data-puller, bls-extender, state-aid-resolver — each writing to a distinct subdirectory tree. Zero file collisions, zero coordination overhead, zero need to consult each other’s in-flight state. Wall-clock compressed from ~4h serial estimate to ~30min (longest pole). The discipline is upfront: confirm writes land in non-overlapping paths before launching, not after. Can’t parallelize when agents need to share a stateful file or each other’s intermediate conclusions; can parallelize when each agent’s output stands alone.
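The "confirm non-overlapping paths before launching" step can be a pre-flight check rather than a convention. A sketch, with illustrative output paths:

```python
from pathlib import PurePosixPath

# Pre-launch guard: no agent's output tree may equal or contain another's.
def paths_collide(output_dirs: list[str]) -> list[tuple[str, str]]:
    """Return pairs of output trees that overlap."""
    collisions = []
    norm = [PurePosixPath(p) for p in output_dirs]
    for i, a in enumerate(norm):
        for b in norm[i + 1:]:
            if a == b or a in b.parents or b in a.parents:
                collisions.append((str(a), str(b)))
    return collisions
```

An empty result is the green light to dispatch all four agents in one orchestrator turn.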

Self-contained briefs beat shared-context agents. Each agent brief included: project context files to read (absolute paths), exact task scope, output schema, guardrails (compliance + anti-hallucination + firecrawl-only), success checklist, report format + word cap. Agents came back with tight, actionable reports rather than rambling transcripts. Pattern reuses: if you can write the brief as if the agent just walked into the room cold, you don’t need to babysit.

Pell-refund discovery provokes fear before delight. VoC Reddit + Trustpilot signal that first-time Pell recipients default to “I’ll get in trouble” + tax-penalty anxiety, not windfall delight. Our current EFC Calculator output flow leads with the dollar amount then context. The real sequence should be permission-slip first (“this is legitimately yours”) then specificity (“here’s $7,395”). This inverts a framing assumption baked into Phase 2 tool specs. Flagged for Phase 6 lander archetype re-work.

Peer-insider-knowledge is the primary Reddit conversion mechanism. Reddit buyers convert each other not through copy but by surfacing concrete administrative mechanisms — unusual circumstances override, 90-day SAVE recertification, §127 structural requirements, SAP appeal pathways. The tools CampaignForge builds need to package peer-insider quality natively, not just present math. That means output screens that sound like “here’s the specific administrative lever you can pull” rather than “here’s your calculated number.” Phase 3 tool blueprint implication.

Middle-income squeeze is a distinct adult-learner identity. Not a sub-segment of traditional undergrad — a named persona. The narrative: parents “made too much to qualify” for need-based aid but had no savings → student dropped out → returned 10 years later. Our 8 desire families (F1–F8) partially address this but don’t name it. Candidate F9 or cross-cutting persona in Phase 2 reality_map.json.

DEMO_KEY rate limits don’t match the docs. api.data.gov DEMO_KEY documented at 30 req/hour, actual behavior 10 req/hour. Single batched 10-school Scorecard query exhausted budget; subsequent calls returned HTTP 429 retry-after 12096s (3.3h). Operator-key path is unblocking and should be the default for any production data pull; DEMO_KEY is genuinely demo-only, not “light production.” Document upfront. 5 minutes of operator reCAPTCHA saves 3.3h of wait.
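A pre-flight guard keyed to the observed limit prevents the 3.3h lockout; the 10 req/hour constant reflects the behavior measured in-session, not any documented quota:

```python
# Guard built on observed (not documented) DEMO_KEY behavior.
OBSERVED_DEMO_LIMIT_PER_HOUR = 10  # measured in-session; docs say 30

def can_issue_request(api_key: str, requests_this_hour: int) -> bool:
    """Refuse to burn the DEMO_KEY budget; operator keys pass through."""
    if api_key == "DEMO_KEY":
        return requests_this_hour < OBSERVED_DEMO_LIMIT_PER_HOUR
    return True
```

The guard makes "operator key by default" enforceable: any production pull on DEMO_KEY hits the refusal before it hits the 429.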

Government-agency websites are deteriorating; Firecrawl catches it, PDFs survive it. MT primary URL 404’d (tourism content returned); OR redesigned its entire student-aid portal; UT USHE site Wordfence-503-blocks all scraping; CT statute-database URL replaced by knowledge-base article; WY admin URL is the administering-institution’s (UW-SFA) page, not the state’s. Seven of the 14 re-pulled records required URL replacement. The authority-data-cache “source_url” discipline is load-bearing — when these URLs drift, our claims stop being traceable. For UT specifically: legislative appropriations + USHE PDF downloads survive Wordfence; HTML scraping does not. Next-pass planning needs to internalize that PDF + API survive where HTML decays.

Session 22 — ContentForge Port Reconciliation + Tool Parity Verified (2026-04-17)

Port verification is a one-time debt that prevents an infinite-sized future debt. Before this session, ~4,500 LOC of production JS had been reimplemented in TypeScript with only 3 trivial unit tests covering one helper function. No automated check existed that the TS produced identical outputs to the JS running on degreesources.com. Ship any design change or dep upgrade on top of that, and a silent formula drift would go undetected into production. 291 parity tests later, the drift window is closed — every tool's calculation branch has golden cases, and a regression fails the suite in milliseconds. The cost was one session. The cost of finding out about drift from a user's email about "wrong EFC number" is unbounded.

Diff line-by-line before writing tests, not after. For each tool the workflow was: (1) read source JS calculation block, (2) read port .logic.ts, (3) diff constants and formulas mentally, (4) only then write tests. Finding a match lets the tests lock in known-good behavior. Finding a mismatch would have meant fixing the port before writing the test — but none appeared, which is itself a finding: the port author stayed faithful to the source. Parity tests that pass first-run are not anti-climactic; they're the goal.

"Tools must work just like the current site" is more testable than it sounds. The operator's stated requirement reads like a vague UX promise. But tool behavior = pure functions from inputs to outputs, which is the most testable thing in software. Once restated as "for every documented input combination, TS output equals JS output," it becomes a bounded, finite verification task. Always convert fuzzy "just work like" requirements into explicit input/output golden-case assertions.
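The restated requirement reduces to a generic golden-case harness. A sketch; the toy function in the test is illustrative, and real cases would be captured by running the production JS, never invented:

```python
# Generic golden-case harness: for every documented input combination,
# the port's output must equal the source implementation's output.
def run_golden_cases(fn, cases):
    """cases: iterable of (kwargs, expected). Returns the list of failures."""
    failures = []
    for kwargs, expected in cases:
        got = fn(**kwargs)
        if got != expected:
            failures.append((kwargs, expected, got))
    return failures
```

Per tool, each calculation branch gets its own golden-case table; any drift shows up as a non-empty failure list in milliseconds.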

Session-end script: git add -u silently skips new files; -A is the fix. Session #20's commit missed Deliverable B artifacts because -u only stages modifications and deletions of already-tracked files; untracked files never enter the index. Switched to -A; safety is preserved because the per-repo confirm step shows git status --short (including ?? entries) before staging. Also caught ContentForge branch drift (main not master) in the config. Tooling bugs compound across sessions; fix them when you notice them, not later.

Session 21 — Deliverable C Shipped + 50-State Coverage Achieved (2026-04-17)

A 2026-current biosphere study is the compliance-angle-map's external-environment twin. Deliverable A mapped the compliance-restriction list to psychological-desire translations; Deliverable C maps the 12 major 2026 market forces (enrollment cliff, AI displacement, layoff waves, SAVE end, FAFSA recovery, Gen-Z skepticism, OBBBA, Workforce Pell, §127 permanence, Meta Ad Library, platform benchmarks) to what’s converting RIGHT NOW. Five forces structurally favor our tool-backed-proof thesis (not coincidentally — the pipeline was designed for this environment). The enrollment cliff isn’t a problem to solve; it’s the rationale for the vertical-shifting strategy. The wage-premium plateau is the single most important datapoint because it empirically validates Gen-Z’s skepticism — so ROI Calculator's honest-verdict framing becomes a trust-building asset, not a conversion liability. Competitors still selling “guaranteed ROI” are fighting 2020 research with 2020 claims in a 2026 market.

One Firecrawl batch keeps paying dividends if the cache layer is schema-disciplined. Session #20 emitted 88 Firecrawl scrape files into .firecrawl/gate3-b/. Session #21 emitted 37 additional state-aid records, 8 BLS SOCs, 5 IPEDS pilots, and source-cited all 12 biosphere-study sections — zero new Firecrawl spend, because Session #20’s scrapes already contained the authoritative content. Every emitted record carries schema_version + authority_tier + cache_refresh_date; 14 [VERIFY] flags are queued where specific 2026-27 dollar amounts need re-pull. The cache doesn’t need to be perfect at seed; it needs to be correctly structured and honestly flagged.

“Full 50-state coverage” and “full 10-school pilot” are orthogonal scope questions. Operator C6 lock requires 50-state coverage at Phase 2 seed (agent-driven Copy Factory per ADR 0009 has no phased-copywriter constraint). That’s binary — 50 states covered or not. IPEDS pilot is different: 10 schools is a partner-negotiation scope that can stretch across sessions without blocking Phase 3. Solution: ship 50-state primary-program coverage this session (achieved); ship 5 of 10 IPEDS pilots with [PENDING_API_PULL] markers for Scorecard metrics; queue remaining 5 + Scorecard API pull for next session. Distinguish “coverage mandate” from “depth target” up front or the scope swims.

Session 20 — Phase 2 Gate 3 Deliverable B Shipped (2026-04-17)

Research sprints compound when batched in parallel with high-yield searches. 88 authoritative pages scraped across 6 parallel Firecrawl batches (federal, state × 7 sub-batches, military+tax+employer, niche-private, ranking-data APIs, verify-pass). Each firecrawl search --scrape returns multiple full authoritative pages in one call; batching them in background frees main context for structured-data emission. The real throughput multiplier isn't "scrape more" — it's "scrape once, emit structured records that feed every downstream deliverable." B.1 (75 records), B.2 (10 floor/multiplier stacks), B.3 (10 methodologies), B.4 (53 cache records) all sourced from the same 88 scrapes.

Truth is the moat, but only if every claim is resolvable to its source. Every record in B.1 + B.4 carries source_url + retrieval_date + recency_confidence + authority_tier. Any claim that couldn't be confirmed from Firecrawl content shipped with a [VERIFY] tag and operator-verification-pass queue — never a speculative value. Operator verify pass resolved all 6 flagged items via targeted Firecrawl scrapes (loan rates 6.39%/7.94%/8.94%, HHS 2026 poverty guidelines, Ch 30 MGIB-AD $2,518, §127 student-loan-payment made PERMANENT by OBBBA). Zero [VERIFY] markers remain in final shipped state.

Aggregator sources excluded by construction, not by review. authority_tier enum hard-codes {primary_gov, secondary_gov, accredited_private} — third_party_aggregator isn't a value that ships. This isn't a taste preference; it's regulator-defense architecture. When a challenge comes (Meta account review, FTC inquiry, competitor legal), every claim in copy traces to a cache record that traces to a primary-gov URL with release date. 54 primary_gov + 19 accredited_private + 2 secondary_gov (reciprocity compacts), 0 aggregators across 75 records.
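The by-construction exclusion can be expressed as a closed union plus a runtime guard; the tier names are the ones listed above, the guard itself is a sketch:

```typescript
// "Excluded by construction": the union simply has no aggregator member,
// so a record citing one fails the type check / runtime guard rather than
// depending on a review pass.
const ALLOWED_TIERS = ["primary_gov", "secondary_gov", "accredited_private"] as const;
type AuthorityTier = (typeof ALLOWED_TIERS)[number];

function isAllowedTier(tier: string): tier is AuthorityTier {
  return (ALLOWED_TIERS as readonly string[]).includes(tier);
}
```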

Institution-first rankings compound; program-level is Phase 8 expansion. Ship 10 institution-level proprietary rankings immediately from Scorecard + IPEDS data (operator-locked). Program-level (4-digit CIP × institution joins) is genuinely higher-moat but requires more data engineering — Phase 8 scope. Three methodologies (Best Veteran-Friendly, Best GI Bill Value Max, Best Employer Partner Schools) are institution-level by nature and stay that way permanently. The other seven carry phase_8_expansion_target: "institution_and_program" so the Phase 5 agent knows what's coming.

Floor-to-multiplier ratios quantify the moat numerically. 10 tools × per-tool floor anchor vs. tool-proven multiplier: ratios range from 2x (Financial Aid Quiz "FAFSA" → 3–5 aid categories) to 40x+ (Scholarship Finder "unclaimed billions" → $60K specific). The big numbers aren't marketing — they're what the math produces when you stack compliant resources for a qualifying profile. Ad copy uses typical case; tool output shows the user's specific number. Competitors can't follow because they don't have the tool to produce the math.
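The ratio arithmetic itself is one division; the $60K figure is quoted above, while the ~$1,500 floor used in the example is an assumed illustrative anchor:

```typescript
// Floor-to-multiplier ratio: tool-proven stacked value divided by the floor
// anchor the ad is allowed to reference. The 1,500 floor below is an
// illustrative assumption, not a cataloged value.
function floorToMultiplierRatio(floorValue: number, toolProvenValue: number): number {
  if (floorValue <= 0) throw new Error("floor anchor must be positive");
  return toolProvenValue / floorValue;
}
```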

OBBBA changed §127 from temporary to permanent. The One Big Beautiful Bill Act made the student-loan-repayment inclusion in Section 127 educational assistance permanent (previously scheduled to expire 12/31/2025). This is load-bearing for the employer-benefit angle — no more sunset-clock framing on the $5,250 employer loan-repayment benefit. Every Employer Tuition Checker output can now cite permanence. Find the legislative changes that quietly enable new angles; they're the research edge.

Sync discipline: ClickUp is primary, not just STATUS/SESSION-LOG. Operator reminded at session #20 close — tracker sync at session end must include ClickUp (per ADR 0006) when tracker-relevant work happened. Saved as feedback_clickup_sync.md in memory. If STATUS.md moves forward and ClickUp doesn't, the operator-facing view diverges from the working view.

Session 19 — Phase 2 Gate 2 Closed + ADR 0009 + claude-mem Hook Patch (2026-04-17)

Research-sprint deliverables are agent inputs, not copywriter briefs. The breakthrough reframe this session: Stage 3 Copy Factory is a skill-based Claude agent, not a human-copywriter production line. Every schema field, every authority-data-cache record, every compliance rule, every tool-multiplier story becomes a parameter the agent reads at generation time. This inverts the constraint model — bandwidth is no longer the bottleneck, research-input rigor is. Moat is two-layered: research quality competitors can't match AND agent throughput competitors can't match at human-copywriter pace. ADR 0009 locks this thesis in.

Full 50-state coverage is sustainable because production is agent-driven. Initially scoped state coverage as phased copywriter depth (top-15 deep at launch, remaining 35 in Phase 5). Operator flagged the agent-driven nature of Copy Factory — all phased gating dissolved. authority-data-cache/state-aid/ covers all 50 states at Phase 2 seed; agent generates state × angle × platform × format matrix at every campaign run. Constraint shifts from hours to authority-data-cache completeness.

Channel the persona deliberately for judgment calls. When operator asked "answer from your 15-year perspective" on Gate 2 decisions, re-reading the v2 persona and deliberately channeling it produced substantively different answers than the default — categorical rather than hedge-y, strict rather than permissive (e.g., "universal skip YouTube Bumper when disclaimer won't fit", "strict 30-conv PMax bar, no lean seed", "PLUS-heavy red flag is factual"). The re-load triggers in CLAUDE.md aren't just cost-savings, they're judgment-quality gates.

Typical-case in ad copy, upper-bound as tool output. Distilled media-buyer discipline on multiplier framing: ad copy references the typical case ("most adults see $12–20K combined aid"); upper-bound numbers ($30K+, $65K/yr) only appear as tool output for users whose inputs produce them. Mixing the two in ad copy is the aggressive-affiliate pattern that burns accounts AND creates downstream lead-quality disasters (users expecting $30K, receiving $12K, churning through enrollment funnel, damaging school-partner relationships).

Compliance-and-CTR simultaneity is tool-discovery framing's signature. TikTok's restricted-industry scrutiny is aggressive for aid-claim copy. But tool-discovery framing ("take this 2-minute quiz to see what you qualify for") is BOTH the safer compliance posture AND the higher-CTR framing. Ad carries no standalone claim; tool on the lander carries substantiation. The moat pattern — tools-that-prove-claims — isn't just a moat, it's also the copywriting pattern that converts across every platform.

Plugin hooks can accumulate hidden tax. claude-mem's PreToolUse:Read hook was truncating file reads to line 1 when prior observations existed — a token-saving optimization that inverted into a capability regression for deep synthesis work (persona re-loads, spec reviews, multi-file research). Removed only that one hook (via jq 'del(.hooks.PreToolUse)'); kept every other claude-mem feature. Idempotent re-apply script because plugin updates will overwrite the patch. Lesson: "capability regression" can live inside what looks like a working plugin.
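An idempotent TypeScript equivalent of that jq patch (the settings shape is an assumption; only hooks.PreToolUse is touched, so re-running after a plugin update overwrites the patch cleanly):

```typescript
// Same effect as jq 'del(.hooks.PreToolUse)', written to be safely
// re-runnable: if the hook is already absent, the input is returned as-is.
type Settings = { hooks?: Record<string, unknown>; [k: string]: unknown };

function removePreToolUseHook(settings: Settings): Settings {
  if (!settings.hooks || !("PreToolUse" in settings.hooks)) {
    return settings; // already patched: no-op
  }
  const hooks = { ...settings.hooks };
  delete hooks["PreToolUse"];
  return { ...settings, hooks };
}
```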

Operator's compounding-moat framing re-centers the work. The directive to "accumulate factual, true data across all possibilities that truly can benefit others, then condense that into useful, easy-to-use tools that output truthful quality results with a path for every consumer" is the project thesis in one sentence. Every research gap becomes a ceiling on output quality; every layer of rigor becomes throughput at output. Keep this framing front of mind for every future agent — research is not overhead, it is the moat made tangible.

Session 18 — Cloudflare Preview Gating + Phase 2 Session 2 Kickoff (Paused) (2026-04-17)

Cloudflare Access cannot pre-gate a Pages custom domain during initial SSL provisioning. The ACME HTTP-01 challenge to /.well-known/acme-challenge/* is intercepted by the Access gate, blocking Google CA cert issuance. Workaround: delete the Access app → let cert issue (~15s once unblocked) → recreate the Access app. Brief public window is acceptable when the URL isn't advertised yet. Order-of-operations trap — gate pre-seeding doesn't work end-to-end for brand-new hostnames.

Pages.dev hostnames can't have direct Access apps. They belong to Cloudflare's shared zone owned by Cloudflare itself, not your account. Error 12130 "domain does not belong to zone". Always bind a custom subdomain from a zone in the same account as the Pages project, then gate the custom subdomain. The raw .pages.dev URL stays public and needs a separate noindex mechanism.

GitHub repo is locked to one CF account at a time. Disconnecting the CF Pages GitHub App from a repo does NOT clear CF's internal account-repo binding — only deleting the Pages project in the other account fully frees the repo. Error 8000093 means: "delete the conflicting project." Don't try to disconnect the GitHub App as a shortcut; check which account currently owns the binding and do cleanup there first.

CF doesn't support account merging. Manual migration: zones move cleanly via "Move Domain" (preserves all DNS records, no nameserver change); Pages/Workers/R2/KV/D1 must be recreated in target (destructive, lose history). Pick master by which account holds the most infrastructure, not the most domains. Moving a zone that has a bound Pages custom domain BREAKS that binding until the Pages project is re-homed to match — coordinate those two migrations together.

Canonical tag is advisory, not authoritative. Google may still crawl and index .pages.dev URLs despite <link rel="canonical"> pointing to production. Host-scoped X-Robots-Tag: noindex via Pages public/_headers is the authoritative signal. Don't rely on canonical alone for indexing control.
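Cloudflare Pages' public/_headers supports host-scoped rules, so the noindex can target only the shared pages.dev host while the production custom domain stays indexable; a minimal fragment using the documented :project placeholder:

```
# public/_headers
# Applies only on the shared pages.dev host, not the custom domain.
https://:project.pages.dev/*
  X-Robots-Tag: noindex
```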

DNS on a prod zone is safe for new subdomains — but verify no wildcards first. Adding a new subdomain CNAME is isolated from root/www/MX records. But if the zone has a wildcard CNAME (*.example.com), an explicit new record overrides the wildcard for that host — could surprise downstream systems. Always grep DNS list for wildcards before committing new records to a prod zone.
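The wildcard pre-flight can be sketched as a pure check; the record shape is simplified, and a real zone listing would come from the DNS provider's API:

```typescript
// Flags wildcard records that a proposed new explicit host would override.
// Simplified record shape; content/type are carried only for reporting.
interface DnsRecord {
  name: string;    // e.g. "*.example.com" or "www.example.com"
  type: string;    // e.g. "CNAME"
  content: string;
}

function shadowedWildcards(zone: DnsRecord[], newHost: string): DnsRecord[] {
  return zone.filter((r) => {
    if (!r.name.startsWith("*.")) return false;
    const suffix = r.name.slice(1); // "*.example.com" -> ".example.com"
    return newHost.endsWith(suffix);
  });
}
```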

After session compaction, restate the intended next action before producing substantive artifacts. Session #18 opened with the system-start context pointing at a Phase 2 Session 2 bootstrap prompt; agent assumed that meant "execute now" and produced a 260-line canonical tool spec. Operator was mid-Cloudflare-work from the prior session and confused by the pivot. Better pattern: after compact, briefly surface the ambiguous thread ("X was queued before compact, Y was just referenced — which one?") and await explicit confirmation before producing substantive artifacts. The work wasn't wasted (the EFC spec becomes the template for remaining 9), but the redirect cost a full round-trip of context.

Session 17 — Phase 2 Pre-Sprint + Workflow Infrastructure Hardening (2026-04-17)

Compliance is an angle-generation input, not a guardrail. The restricted-claims list is literally the cheat sheet for what converts — each restriction exists because the underlying psychological desire converts hard. Map every restriction to its desire family, the Layer-1 triggers that deliver that desire, and the tool-backed compliant framing that lands harder than the non-compliant original.

Disclaimers are conditional on ad-claim content, not platform/format. Tool-discovery framing ("See what you could qualify for") makes no standalone claim — the tool + 2–3K-word article lander carries all required disclosures with cited sources and freshness dates. In-ad disclaimers only trigger when the ad itself makes a specific claim (dollar amount, named government program, income claim). Tool-discovery framing is simultaneously the best-CTR AND most-compliant framing — not a coincidence.
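A sketch of the conditional-disclaimer rule; the three detectors mirror the claim types named above (dollar amount, named government program, income claim), but the patterns are illustrative stand-ins, not production compliance logic:

```typescript
// In-ad disclaimer is required only when the copy itself makes a specific
// claim. Pattern list is a small illustrative sample.
const CLAIM_PATTERNS: RegExp[] = [
  /\$\s?\d[\d,]*/,                                           // specific dollar amount
  /\b(Pell Grant|FAFSA|GI Bill)\b/i,                         // named government program
  /\b(earn|make)\b.*\b(per (hour|year|month)|\/yr|\/hr)\b/i, // income claim
];

function requiresInAdDisclaimer(adCopy: string): boolean {
  return CLAIM_PATTERNS.some((p) => p.test(adCopy));
}
```

Tool-discovery copy like "See what you could qualify for" passes clean; the substantiation lives on the lander.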

Proprietary rankings from public gov data beat a third-party whitelist. College Scorecard + IPEDS + BLS + VA data let us build our own defensible ranking system with full methodology transparency. No usage-rights gating, no publication bias, no expiration. Essentially "US News for online education" built from public gov data — an angle space competitors can't replicate.

Pattern-detect meta-specs. Three distinct questions (IPEDS cache, proprietary rankings, BLS wage refresh) had the same structural answer: authority-tier data cache with freshness tracking, cron-refreshed, read by Phase 5 with automated staleness halts. Consolidating into one authority-data-cache infrastructure spec beats three separate specs that repeat the same logic.

Persona re-load has an empirical basis. Not superstition — output quality degrades every 4–5 turns as attention weight on the persona decays relative to accumulated context. Re-reading restores signal strength via token recency + repetition. CLAUDE.md now codifies: reload on strategic triggers + every ~5 turns in long sessions + every natural phase boundary. NOT every turn (wasteful, dilutes attention on actual work).
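The codified cadence reduces to a small predicate; the inputs are simplified and the 5-turn constant is the one from this entry:

```typescript
// Reload on strategic triggers, at phase boundaries, or every ~5 turns.
// Deliberately NOT every turn.
const RELOAD_EVERY_N_TURNS = 5;

function shouldReloadPersona(state: {
  turnsSinceReload: number;
  strategicTrigger: boolean;
  phaseBoundary: boolean;
}): boolean {
  return (
    state.strategicTrigger ||
    state.phaseBoundary ||
    state.turnsSinceReload >= RELOAD_EVERY_N_TURNS
  );
}
```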

Tool-discovery framing is the workflow cheat-code. The same pattern that solves compliance (route substantiation to lander) also solves CTR (no in-ad disclaimer drag) and lead quality (friction-as-feature qualifies users). Three optimization goals aligned in one architectural choice.

Diagnose before adding. Session surfaced that Firecrawl skill wasn't firing, continuous-learning-v2 looked dormant, session-end was manual. Diagnosis revealed: Firecrawl was CLI-skill-vs-MCP-mismatch (fixable by rewiring deep-research); continuous-learning-v2 had 7.7MB of observations accumulated but observer.enabled: false was blocking the analysis phase (one config flag fix); session-end was genuinely manual (built scripts/session-end.sh safe multi-repo helper). Don't install — diagnose first. Most "missing" capabilities are broken capabilities.

Session 9 — Pipeline Brain Upgrade (2026-04-15)

Agents process early rules more heavily: Adding HARD RULES blocks at the top of each skill file ensures non-negotiable constraints are in the "hot zone" of agent attention. Rules buried mid-document get diluted by accumulated context.

Decision trees > prose guidance: Converting "consider X when Y" into "IF X THEN Y, OTHERWISE Z" produces more reliable agent behavior. Agents follow explicit branches; they interpret flexible guidance flexibly (which means inconsistently).

Anti-patterns are as powerful as patterns: Showing agents what NOT to produce (with concrete BAD examples and WHY explanations) creates hard boundaries. Without anti-patterns, agents gravitate toward "safe" generic output that passes no rules but also creates no value.

Subagent permissions are session-scoped, not inherited: Adding permissions to settings.local.json or project settings doesn't reliably propagate to subagents in don't-ask mode. Python/Bash fallback for file writes is the workaround. Some agents succeed, some don't — the behavior is inconsistent and needs investigation.

Session 8 — Parallel Agent Scaling (2026-04-15)

Sweet spot is 3-4 parallel agents: Beyond 4, prompt quality drops and merge review gets sloppy. The real constraint is file isolation, not compute. With 3 repos, the ceiling is ~5 agents if file boundaries are clean.

Feature branches prevent same-repo conflicts: Agents C and D in campaignforge-app worked on separate branches (feat/cost-session-manager, feat/pipeline-orchestrator), then merged sequentially. Zero conflicts.

Independent agents can converge on identical fixes: Both the a11y and Lighthouse agents independently identified and fixed the same CSS cascade issue (@layer base wrapping, :where() scoping) with byte-identical diffs. Merge was clean because Git detected identical changes.

CAPI Worker type errors are IDE-only: Cloudflare Workers have their own tsconfig with @cloudflare/workers-types. The IDE picks up the root tsconfig which doesn't know about Request/Response/fetch globals. Not real errors.

Session 3 — Content Migration + Tool Islands (2026-04-14)

MDX body_html requires JSX-safe formatting: Raw HTML from JSON has nested block elements on same lines, bare <br> tags, and ~ chars parsed as strikethrough. Conversion script needed multi-pass formatting: self-close void tags, escape tildes to &#126;, and split every block element onto its own line.
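The multi-pass conversion can be sketched with three string transforms; the regexes are simplified stand-ins for the real script, not the script itself:

```typescript
// Pass 1: self-close void tags so JSX parses them.
// Pass 2: escape tildes so MDX doesn't read ~~ as strikethrough.
// Pass 3: split adjacent tags onto separate lines (simplification of
//         "every block element on its own line").
function toJsxSafeHtml(html: string): string {
  return html
    .replace(/<(br|hr|img[^>]*)>/g, "<$1 />")
    .replace(/~/g, "&#126;")
    .replace(/></g, ">\n<");
}
```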

Astro dynamic components need static imports: client:visible hydration fails with dynamic component references (NoMatchingImport). Must use conditional static rendering: {tc === 'EFCCalculator' && <EFCCalculator client:visible />}.

Parallel agents for tool implementations: 4 agents dispatched simultaneously, each handling 2-3 tools. All completed in ~17 minutes. Logic/data separation (*.logic.ts, *.data.ts) enabled clean parallelization with zero merge conflicts.

Session 2 — W1 Build (2026-04-14)

Astro 5 Content Layer API: entry.render() is gone. Use import { render } from 'astro:content' then render(entry). Content collections need glob() loader from astro/loaders.

shadcn-svelte + Tailwind v4: @apply border-border fails — Tailwind v4 doesn't know custom vars via @apply. Use @theme inline to declare all HSL color vars, then use raw CSS instead of @apply for base styles.

shadcn-svelte components need WithElementRef: The cn utility must also export WithElementRef and WithoutChildrenOrChild types for sidebar/rail components.

MDX in Content Collections: HTML (tables, callouts, step-lists) works inline in MDX. No need to convert to custom components yet — the prose CSS styles handle it.

Session 1 — Scaffold (2026-04-14)

Astro 5 Content Collections: Uses src/content.config.ts (not src/content/config.ts). The z import from astro:content shows deprecation warnings.

Tailwind v4: No tailwind.config.ts needed. Uses @tailwindcss/vite plugin. The @astrojs/tailwind integration conflicts — use the Vite plugin directly.

Remote agents failed: Overnight scaffold agents couldn't auth with GitHub. Local execution worked first try.

Session Log

2026-04-17 #26
Phase 2 proper COMPLETE — all 4 Phase 2 JSONs shipped → Phase 2 ready-to-close pending operator review — Mechanical synthesis from all 8 Gate 1–3 approved inputs produced the structured-input layer Phase 5 Strategy Engine will read to generate angles deterministically. situation_family_map.json (~24 KB, 8 situations): S1 High-SAI family blindsided (Yorkshire 12-ad competitor demand, F6 primary) · S2 No-HS-diploma adult re-entry (Degree SNAP 13 ads, F3 primary, ATB pathway unlock) · S3 Allied-health career transition (Coursera+WGU+Prestige Health 59+ ads, F4 primary) · S4 Nonprofit mission career-change (Learn Grant Writing adjacent, F5 primary, PSLF-backed) · S5 Working adult with transfer credits (Strayer/Capella/WGU/UoPhx 113 ads, F3 primary, multi-school comparison as differentiation) · S6 Military/veteran/first-responder (Liberty/SNHU/ASU/Phoenix 40+ ads, F8 primary, GI Bill full-picture) · S7 Employer tuition benefit worker (Coursera+WGU+Capella 60+ ads, F1 primary, §127 + top-up stack) · S8 Middle-income returner (VoC-validated + ZERO competitor = unclaimed lane), F1 primary. Each situation carries demographic/motivational profile + VoC verbatim examples + floor/multiplier + proof-mechanism tools + compliance risk + unit economics + platform fit + P4 source citations + P6 winners-vault-readback stub. desire_family_matrix.json (~21 KB, F1–F8 × 3 postures): Per family compliant trigger cluster + archetype template + floor/multiplier + proof-mechanism tools. Per-family 3 postures: competitor_occupied + gray_zone + campaignforge_defensible. Gold-standard references captured (Strayer Graduation Fund F1, WGU competency-based F2, Strayer Transfer Scholarship F3, Coursera × Google Career Certificate F4+F8, ACM May 6 deadline F7, JWU Pledge F6, Liberty First Responder F8). Hard-violation references captured (Yorkshire A1–A8 F6+F1+F2, Degree SNAP A10 F3+F1, Prestige Health A11 F4+F7, Learn Grant Writing A12 F4+F5). 
VoC counter-intuitive patterns noted: Pell refund provokes fear not delight; §127 framed as foregone-income not windfall. reality_map.json (~30 KB, 32 active cells): 8 situations × 4 primary+secondary families each. Per cell: competitor_density (Meta corpus count + advertiser examples + archetype) + compliance_line_crossing_risk (school-direct posture + aggressive-affiliate posture + observed violations) + campaignforge_tool_match (primary tool + supporting tools + multiplier story + decision_rule_category) + proof_mechanism + unit_economics_band + platform_fit + source_citations + winners_vault_readback stub. 6 unclaimed-lane cells concentrate in S8 (all 4) + S1.F5 + S4.F6/F8. Highest-density cells: S5.F3 (113 ads), S3.F4 (59 ads), S7.F1 (60 ads), S6.F8 (40 ads). 32 inactive cells documented with rationale (Stage R can elevate). trigger_reality_matrix.json (~43 KB, 134 triggers): 100% library coverage verified programmatically (134/134 match vs trigger-library.json). Per trigger: applicable_families + applicable_situations + empirical_deployment (school_direct × aggressive_affiliate × campaignforge_opportunity) + primary_tool_mechanic + restrictions + phase_5_priority. 12 under-deployed high-opportunity triggers surfaced as strategic moat: interactive_tool_conversion_lift + demonstration_beats_claim + test_dont_guess_proof + tool_as_proof_mechanism + results_in_advance_value_first + accusation_audit + labeling_emotions + shame_reframing + black_swans_hidden_criteria + ikea_effect_ownership + generation_effect_retrieval + others_deserve_it_more_objection. Anti-pattern government_card_imagery_compliance_trap confirmed as creative-review hard-block. Files shipped: 4 Phase 2 JSONs + updated psychology-engine STATUS + SESSION-LOG + main STATUS + this dashboard. Phase 2 closes after operator approval → Phase 3 begins (Trigger × Reality Matrix operationalization + Tool Blueprint refinements informed by reality_map gaps).
2026-04-17 #25
Phase 2 Gate 3 Deliverable E — SYNTHESIS COMPLETE → Gate 3 ready-to-close pending operator review — Both companion documents shipped from session #24's collection corpus (Meta JSONL 463 ads + Landers raw 102 files + SimilarWeb 11 schools + persona v2 reload mid-session per operator request). competitor-teardown.md (403 lines, ~5,200 words): 5 sections covering executive read of two-market structure (school-direct disciplined cohort vs aggressive-affiliate cohort) · per-school SimilarWeb traffic-quality reads (Coursera 43.1M / WGU 11.3M / SNHU 9.5M leads cohort scale; SNHU best-engagement-combo 21.25% bounce + 7.30 pages/visit + 7:35 duration sets reference standard; ASU Online subdomain 70.81% bounce as cautionary tale for tool-page UX) · 11 school-direct teardowns (compliance posture + hook patterns + landing pattern + what to borrow + what to beat per school) · 5 aggressive-affiliate teardowns + cross-cohort patterns table (7 patterns including calculator-led unclaimed lane = zero schools deploying) · 11 aggregator/lead-gen lander analyses (CollegeRaptor as closest structural CampaignForge analog; DegreePros likely diploma-mill territory) · Google + TikTok data-gap section with operator-decision matrix on commercial ad-intel tools — recommendation: defer until post-Phase-7 validation campaign · Phase 2 reality_map.json implications (7 distinct adult-learner situations validated by competitor demand). 
compliance-line-crossing-inventory.md (518 lines, ~7,100 words, 41 case studies): Section A 15 hard violations including 8 Yorkshire variants (cleanest case A1 “Don’t report this on your FAFSA — save $100,000” violates all 4 axes — government-affiliation implication + insider-anti-establishment framing + specific-large-dollar windfall + bait-and-switch funnel architecture) + Scholarship System $343,155 windfall N=1-as-method + Degree SNAP free-laptop+$6K-grant incentive-stack lure + Prestige Health Pell+manufactured-urgency Calendly bypass + Learn Grant Writing $15K/5hr income claim + Choice Point + Inside-Track CDL Pell coordinated content-funnel network (9 ads) + Get Online Class Takers contract-cheating Meta-policy violation. Section B 4 gray-zone/heuristic flags (UoPhx + Liberty FPs confirmed). Section C 12 clean references: JWU Pledge gold-standard institutional 100% + Wisconsin Nurse Educator Program gold-standard F6/F8 state-funded + UoPhx Tuition Guarantee mechanic-backed F2 + ACM full-tuition France with real May 6 deadline gold-standard F7 + WGU competency-based + Capella FlexPath + Purdue Global 3-week trial + Strayer Transfer Scholarship + Coursera × Google Career Certificate + Liberty May 18 term start. Section D 5 aggregator landers. Section E 5 sub-vertical adjacencies (NCSA athletic recruiting + CDL-Pell vocational + K-12 ESA + nonprofit grant-writing + Canadian StudentAid — all flagged for Phase 8). Per-case structure: advertiser + ad_library_id + sweep + verbatim quote + heuristic-vs-true posture + desire family (F1–F8) + triggers activated (per Phase 1 trigger library) + line-crossing axes + tool-backed compliant translation. Heuristic refinement targets identified for Phase 2 v2 pass: cliffhanger-curiosity-gap detection + anti-establishment-framing detection + N=1-case-as-method detection (4 Yorkshire violations missed by simple regex). 
CampaignForge structural defense per ADR 0009: every angle in Phase 5+ Strategy Engine output references its proof mechanism — aggressive-affiliate angles cannot pass that gate (no proof, only downstream sales call). Files shipped: 2 deliverables + STATUS + SESSION-LOG + main STATUS + this dashboard. Gate 3 closes after operator approval → Phase 2 proper begins (4 Phase 2 JSONs: situation_family_map, desire_family_matrix, reality_map, trigger_reality_matrix).
2026-04-17 #24
Phase 2 Gate 3 Deliverable E — Collection COMPLETE (3-agent parallel Firecrawl) + FAFSA + OBBBA authority-data-cache shipped + Firecrawl-exclusive rule codified — Dispatched 3 parallel background collection agents with Firecrawl-exclusive hard-locked at the top of each brief (operator runs Meta/Google/TikTok ad accounts from this IP — ad-intelligence scraping from the same IP = ban risk, non-negotiable). Meta Ad Library agent shipped — 463 deduped EDU-relevant ads (307 named-advertiser sweep across 11 schools + 185 keyword sweep across 10 queries), 326 unique landing URLs across 123 advertisers, 23 raw scrapes, full ads-structured.jsonl + SUMMARY.md. Line-crossing case studies flagged: Yorkshire College Planning “Don’t report this on your FAFSA — save $100,000” (12 ads, F6 institutional-legitimacy violation + insider-angle; cleanest case study in corpus) · Degree SNAP FREE-Laptop+$6K-Grant+Online-HS (13 ads, highest-volume aggressive-affiliate play) · Prestige Health Academy Pell+urgency barber-training (F7 urgency-fabrication) · The Scholarship System $343,155 windfall · Learn Grant Writing career-transition. Moat-validation-by-omission: “financial aid calculator” keyword returned 1 EDU-relevant ad — zero competitive shadow on EFC Calculator angle. Landers + SimilarWeb agent collected 102 raw files before 600s stream-watchdog stall (20 affiliate + 9 schools × 6 pages + 11 SimilarWeb reports); no structured JSONL/SUMMARY written — inline structuring during synthesis next session. Google Ads Transparency + TikTok agent — first run 939s stream timeout (broad scope); retry with tighter 5-advertiser + 3-keyword scope confirmed Google Ads Transparency Center structurally unscrapeable via unauthenticated Firecrawl (SPA hydration barrier + advertiser-detail auth wall + virtualized DOM; typeahead confirmed 3 of 5 advertisers exist but detail pages gated; all 3 keyword queries typeahead-zero). TikTok Creative Center deferred (JS+geo-gated+anonymized+no landing URLs). 
Both require commercial ad-intel tool (SpyFu / Semrush / AdClarity / BigSpy $79–$299/mo) or operator-authenticated separate-IP session — flagged for operator decision. FAFSA authority-data-cache module shipped — 44 structured JSON records: 3 per-cycle national aggregates (2024-25 Q7 final, 2025-26 Q5, 2026-27 Q1 opening) + 1 national demographics 2023-24 + 30 per-pilot × cycle records (10 pilots × 3 cycles) + 10 per-pilot trajectory rollups. Idempotent ingest at scripts/ingest-fafsa-application-volume.py — zero external network (parses session #23 local CSVs); release-date inference clamped to today for in-progress cycles (caught future-dated inference bug on spot-check). OBBBA record shipped at student-aid-rates/obbba_implementation_2026_04.json — PL 119-75 structural shift: Parent PLUS $20K/yr + $65K aggregate per student · Graduate $100K aggregate · Professional $50K/yr + $200K aggregate · $257,500 lifetime max · RAP+Tiered Standard as post-2026-07-01 plan menu · PAYE+ICR sunset 2028-07-01 · IBR retained for pre-2026-07-01 cohort · grandfathering 2028-29 (grad) or 2029-30 (parent PLUS) · consolidation 3-month FSA-recommended buffer closed as of 2026-04-17 = high-urgency current-borrower ad signal. Tool integration points mapped (Loan Repayment Calculator cohort switch + lifetime cap + $20K/$65K Parent PLUS distinct category, Aid Letter Decoder cap-exceeded flags, Financial Aid Quiz consolidation-decision routing). Compliance-safe framings + violations-to-avoid catalogued. Follow-up scrapes queued for FSA definitional pages. Durable feedback saved: feedback_firecrawl_exclusive_scraping.md — codifies ad-platform + competitor-lander scrapes as Firecrawl-only, federal domains carved out for direct pull. 
Decision: skip bulk 101-file FSA archive pull — quality-over-quantity judgment; current authority-data-cache satisfies ad-relevant use cases; OBBBA structural break collapses pre-OBBBA to one baseline regardless of depth; VPN DNS sinkhole on studentaid.gov deprioritized. Idempotent scripts/fetch-fsa-archive.sh remains queued for future if downstream demand emerges. ~48+ files shipped. Deliverable E collection complete; synthesis next session closes Gate 3 and unblocks Phase 2 proper (four Phase 2 JSONs: situation_family_map, desire_family_matrix, reality_map, trigger_reality_matrix).
2026-04-17 #23+
Session 23 Addendum — FAFSA Data Pipeline Foundation + StudentAid OBBBA Recon — Operator surfaced three public federal data resources mid-session. FAFSA Application Volume by School — 11 XLS (2024-25 Q2–Q7 + 2025-26 Q1–Q5) + 1 CSV (2026-27 Q1) converted to 31 CSVs via new reusable script (scripts/convert-fafsa-xls-to-csv.py, openpyxl+xlrd, idempotent, multi-sheet). Pilot-school YoY trajectories federally traceable: SNHU 2024-25 cycle-final 282,010 FAFSAs (92% independent), WGU 291,890 (93.6% ind), ASU 233,500, 7 other pilots complete. Federal authoritative validation of adult-learner TAM. FAFSA Application Demographics 2023-24 — national pre-rollout baseline: 17.9M applicants × 8 dimensions × 7 quarters. 41.8% age 25+, 52% Independent, 49% Pell Eligible, 47% first-gen. 50:58 avg full-form dependent completion time (16:42 for independent EZ form) — quantitative federal validation of tool-discovery friction-reduction thesis. StudentAid Big Updates page (2026-04-15) — 648 lines Firecrawl-scraped covering OBBBA implementation: Parent PLUS $20K/$65K caps, graduate $100K + professional $50K/yr + $200K aggregate, $257,500 lifetime max, RAP payment-qualification rules, consolidation 3-month deadline to preserve IBR/ICR, ICR + PAYE elimination. Queued for next-session ingest. FSA archive enumerated — 241 downloadable files going back to 2006-07 (20 cycles). Batch fetcher (scripts/fetch-fsa-archive.sh) built for HTTP/1.1 + retry + parallel + idempotent skip. 101-file batch download BLOCKED — operator VPN DNS sinkholes studentaid.gov specifically (resolves to 198.18.8.39 TEST-NET-2 private IP; bls.gov + data.gov resolve normally). Fetcher killed cleanly; zero partial/corrupt files. NCAN Tableau extraction KILLED — scout investigation confirmed federal primary source beats accredited_private aggregator. Secondary FSA source discovered: fafsabyhs/<STATE>.xls series (50 states, weekly refresh). Scout POC pulled CA/TX/FL/NY. 
License clarity — 17 USC § 105 confirms FSA data is federal public domain. Addendum total: ~50 new files (13 XLS + 31 CSVs + 2 scripts + venv + 4 Firecrawl caches + 2 scout artifacts + STATUS updates). Combined session #23 total: ~107 files. Next: VPN DNS fix → complete archive pull → build fafsa-application-volume/ module → ingest OBBBA page → Gate 3 Deliverable E.
2026-04-17 #23
Phase 2 Gate 3 — Deliverable D shipped + B.4 continuation ALL closed via 4-agent parallel orchestration — Four background research agents dispatched from a single orchestrator turn, each with self-contained briefs + non-overlapping output paths. Zero file collisions. Wall-clock: ~30min parallel (longest pole) vs ~4h serial. Deliverable D shipped: voc-corpus.json — 110 verbatim quotes tagged by situation family (F1–F8), emotional register, and trigger affinity resolving against Phase 1’s 134-entry library. Source distribution: Reddit 61 · Niche 24 · Trustpilot 14 · YouTube 11. Guardrail: research-input only, never reproduced in ads. Companion voc-themes.md with per-family themes + emotional-register patterns per source + cross-cutting buyer language + trigger-affinity heatmap. 3 surprising themes flagged for Phase 3 downstream: (1) Pell-refund discovery provokes fear-first not delight — EFC Calculator output flow reorder; (2) peer-insider-knowledge is primary Reddit conversion mechanism, not copy — Phase 3 tool blueprint must package peer-insider quality natively; (3) middle-income squeeze is a distinct adult-learner identity — candidate F9 or cross-cutting persona in reality_map.json. B.4 continuation ALL closed: BLS OEWS +10 SOCs (total 20/50): 15-1211 · 15-1231 · 15-1244 · 15-2051 (reclassified from 15-1211.01) · 15-1212 · 13-1082 · 13-1161 · 11-3031 · 11-2022 · 27-3031. BLS discontinued per-SOC HTML pages after May 2023 — May 2024 data pulled via Public API v2 (6 batched POSTs × 25 series); schema now has source_url_national_wages (API) + source_url_industries_states (2023 HTML) + cross_check_may_2023_annual audit block. BLS-suppressed annual_p90 for Financial Managers + Sales Managers (wage ≥ $239,200/yr); 2 SOC reclassifications documented. 
IPEDS pilot completed (10/10): 5 new seeds — Grand Canyon 104717, Capella 413413, Strayer 131803 (DC flagship, multi-campus aggregation flagged), DeVry 482477 (HLC accreditation “ended 06/30/2019” flagged), ASU Online 483124. UoP + Purdue Global IPEDS vs Scorecard UNITID divergence documented. Scorecard API pull (10/10 seeded): 3 full + 7 partial; blocker — api.data.gov DEMO_KEY rate limit is 10 req/hour (not the documented 30). Operator action: a free api.data.gov key from https://api.data.gov/signup/ unblocks the 3 remaining metrics. State-aid [VERIFY] resolution (14/14, zero flags remain): 10 fully resolved with 2025-26 or 2026-27 authoritative figures (OR, WY SF0047 signed 2026-02-27, NE CCPE 2024-25, MS MTAG APA Part 611, AL ASGP, MO A+, VA VTAG full matrix, CT Roberta B. Willis FY-26, CO COF, ND Career Builders); 3 partial (MT portfolio restructure, MN OHE 2023-24, WI companion programs); 1 blocked (UT USHE Wordfence-503, substitute budget.utah.gov). 5 program-structure changes flagged for Phase 3: UT 2021 Regents + New Century merger into Opportunity Scholarship · MT MHEG retired · CT Governor’s → Roberta B. Willis rebrand · NE State Grant → Nebraska Opportunity Grant · WI HEAB track split. Total session output: 43 new/updated files across 5 directories. Only Gate 3 Deliverable E (Competitor Teardown + Compliance-Line-Crossing Inventory) remains for Gate 3 close.
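The batched Public API v2 pull described above (6 POSTs × 25 series) reduces to a small chunk-and-POST loop. A sketch only, not the shipped script: the endpoint is the public BLS v2 timeseries URL, the 25-series batch size follows the session note, and the series IDs and years here are placeholders.

```python
import json
import urllib.request

BLS_V2_URL = "https://api.bls.gov/publicAPI/v2/timeseries/data/"

def batch_series(series_ids, size=25):
    """Split a flat list of BLS series IDs into POST-sized batches."""
    return [series_ids[i:i + size] for i in range(0, len(series_ids), size)]

def fetch_batch(batch, start_year, end_year):
    """POST one batch to the v2 timeseries endpoint (illustrative helper)."""
    payload = json.dumps({
        "seriesid": batch,
        "startyear": str(start_year),
        "endyear": str(end_year),
    }).encode()
    req = urllib.request.Request(
        BLS_V2_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# e.g. 10 SOCs × ~15 wage series each ≈ 150 series → 6 batches of 25
```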
2026-04-17 #22
ContentForge port reconciliation — all 10 tools verified parity with pre-port production JS — Orthogonal to the Psychology Engine research stream. Focus: lock in tool behavior before any design work. Content diff — 26 articles + 10 school profiles + 10 tool pages 100% accounted for. 3 E-E-A-T pages (about / privacy-policy / terms-of-use) correctly moved to native .astro pages per spec §8. 1 intentional slug rename (quiz-financial-aid → financial-aid-quiz). URL structure change documented — old site served tools at root (/efc-calculator/); port moves tools under /tools/<slug>/. Added public/_redirects (12 entries) preserving SEO equity — 9 root → /tools/ + 1 quiz rename + 2 legacy indexed paths. 294 parity tests written and passing — 10 new Vitest files under tests/unit/*.parity.test.ts (291 new + 3 pre-existing). Every constant (Pell tables, BLS salaries, BAH rates, poverty lines, loan rates, scholarship and employer databases, state-COL multipliers) diffed byte-identical vs source. Every branch covered: EFC progressive brackets + auto-zero + 175% poverty; Quiz independence + flat-30% SAI + program composition; Scholarship Finder filters + sort + aggregates; ROI tenYearROI + break-even; Loan Repayment SAVE/PAYE/PSLF + overpayment log formula; Career Salary per-state COL + nextLevel; Time-to-Degree pace overrides + military 15-credit bonus; GI Bill 5 chapters + half-time BAH=0; Aid Letter Decoder letter grade A-F + 4 red flags; Employer Tuition 18-record lookup + $-parsing + Pell stack. Reconciliation report: ContentForge/docs/port-reconciliation.md. Also fixed scripts/session-end.sh (-u → -A + ContentForge branch master → main) and gitignored .superpowers/ + cmux.json (campaign-forge) + .claude/ (ContentForge). Operator requirement met: "all tools work just like they do on the current site." No formula drift. Next: design refinement via DESIGN.md, GrowthBook wiring, prod cutover.
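For reference, Cloudflare Pages `_redirects` entries take the form `source destination status`. A few illustrative lines matching the moves described above (the shipped file carries all 12 entries):

```
# root → /tools/ moves (9 of these in the shipped file)
/efc-calculator/      /tools/efc-calculator/      301
/roi-calculator/      /tools/roi-calculator/      301

# intentional slug rename
/quiz-financial-aid/  /tools/financial-aid-quiz/  301
```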
2026-04-17 #21
Phase 2 Gate 3 — Deliverable C shipped + B.4 50-state coverage achieved + BLS/IPEDS expansion — Deliverable C biosphere-market-2026.md — 3,653 words, 12 topical sections, P4 metadata on every claim (source_url + retrieval_date + recency_confidence + authority_tier). Topics: enrollment cliff (WICHE 2025 peak, 38 states declining), AI displacement + layoff waves (BLS projections, 2026 YTD 928 layoffs/day), SAVE plan ended (Eighth Circuit March 10 2026, 7.5M borrowers in transition, RAP live July 1 2026), FAFSA 2024-25 aftermath → 2025-26 +15.7% completions recovery, Gen-Z ROI skepticism (46% say college not worth it, trade schools +5%), community college renaissance (+3% fall 2025), OBBBA permanent §127 + SECURE 2.0 SLP matching, tuition-vs-wage plateau (Minneapolis Fed), 2026-27 Pell $7,395 + Workforce Pell live July 1 2026, Meta Ad Library signals, 2026 platform benchmarks (education CPL $19.27 avg 2025 → $21.57 Dec end, +44% YoY). Strategic implications mapped all 12 forces to the CampaignForge tool-backed-proof thesis. B.4 continuation — 37 additional state-aid records emitted from the existing Session #20 Firecrawl cache (AK, AL, AR, AZ, CO, CT, DE, HI, IA, ID, KS, LA, MA, MD, ME, MI, MN, MO, MS, MT, NC, ND, NE, NH, NJ, NM, NV, OR, RI, SC, SD, UT, VA, VT, WI, WV, WY). State-aid cache now 56 records — 50-state primary-program coverage achieved per operator C6 lock. 14 [VERIFY] flags tracked in manifest. BLS OEWS expansion — 8 priority SOCs added (29-1071 PAs, 29-2061 LPNs, 29-1171 NPs, 13-2011 Accountants, 25-2021 Elementary Teachers, 47-2111 Electricians, 49-9021 HVAC, 15-1232 Computer User Support). Cache at 10/50 target SOCs. IPEDS pilot seed — 5 institutions (SNHU, WGU, UoP, Purdue Global, Liberty) with core IPEDS demographics; Scorecard-specific metrics marked [PENDING_API_PULL] for next session. Full YRP directory deferred to Phase 8 cron with documented task spec (~2,000 institutions, 90-day refresh cadence). Total session output: 55 new/updated files.
Next session: Deliverable D (VOC Corpus) + Deliverable E (Competitor Teardown + Compliance-Line-Crossing Inventory). Gate 3 closes after D + E land.
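When the [PENDING_API_PULL] Scorecard metrics are fetched with a real api.data.gov key, the request is plain query-param composition against the v1 schools endpoint. A hedged sketch (the helper name and field paths are illustrative; `id`, `fields`, and `api_key` are documented Scorecard/api.data.gov parameters):

```python
from urllib.parse import urlencode

SCORECARD_URL = "https://api.data.gov/ed/collegescorecard/v1/schools"

def scorecard_query(unitid, fields, api_key):
    """Build a College Scorecard request URL for one institution.

    `fields` trims the response to just the metrics being seeded;
    field paths follow the Scorecard data dictionary naming.
    """
    params = {
        "id": unitid,
        "fields": ",".join(fields),
        "api_key": api_key,
    }
    return f"{SCORECARD_URL}?{urlencode(params)}"
```

With the DEMO_KEY limit at 10 req/hour, the 10-seed pilot is exactly one hour of quota per metric pass, which is why the free key matters.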
2026-04-17 #20
Phase 2 Gate 3 — Deliverable B shipped (all four sub-deliverables) — Firecrawl-heavy research session. 88 authoritative pages scraped across 6 parallel batches (federal ED + VA + DoD + DoL + HHS; state 7-part; military + tax + employer; niche-private + debt-relief; ranking-data APIs; verify-pass). B.1 resource-candidates.json — 75 records (54 primary_gov + 19 accredited_private + 2 secondary_gov, zero aggregators, all decision_rule_category=1, all 18 required fields populated, all unique resource_ids). Covers federal grants + loans + debt-relief + administrative pathways + military benefits + tax-code + 8 employer programs + 15 state programs + reciprocity compacts + 12 niche private scholarships + 5 data-source records (Scorecard, IPEDS, BLS OEWS, BLS Projections, VA GI Bill Tool, HHS). B.2 floor-multiplier-map.md — per-tool (10) floor anchor + typical-case stack (ad copy) + upper-bound stack ([MODELED], tool-output only). Floor-to-multiplier ratios 2x–40x+. B.3 proprietary-rankings-research.md — 10 methodology candidates built from primary_gov data (Best Earnings / Value Working Adults / Completion Working Adults / Lowest Debt / Best Repayment / Veteran-Friendly / Pell-Recipient Outcomes / Credit Transfer / GI Bill Value Max / Employer Partner Schools). Each with data sources + inclusion/exclusion + weights (sum to 100) + refresh cadence + compliance framing + competitive moat + tool integration. Displaces third-party rankings entirely. B.4 verticals/edu/research/authority-data-cache/ — scaffolded with 3 shared schemas (cached-record.schema.json v0.1.0 + provenance.schema.json + staleness-rules.json) + 9 sub-module manifests + 18 seed records + 10 ranking methodology spec JSONs = 53 JSON files across 8 sub-modules. 
Seed records: Pell 2026-27 $7,395 (PL 119-75), Direct Sub/Unsub undergrad 6.39%, Unsub grad 7.94%, PLUS 8.94%, VA Ch 33 national cap $29,920.95, Ch 30 MGIB-AD $2,518/mo, Ch 35 DEA $1,536/mo, IRS §127 $5,250 (student-loan-repayment made PERMANENT by OBBBA), HHS 2026 poverty guidelines full tables, + 15 state-aid programs + 2 reciprocity compacts (WICHE WUE + MSEP) + 2 pilot BLS OEWS SOC records (RN + Software Developers 2024 percentiles). Operator verify-pass resolved all [VERIFY] flags via targeted Firecrawl scrapes. Operator-locked all 5 B.2 + all 7 B.3 open questions from v2 persona lens (institution-first rankings, Phase 8 program-level, 100-completer threshold, -15%/20% Parent PLUS penalty, dual YR treatment, two-tier employer-partner visibility, degreesources.com methodology hosting, hold at 10 methodologies for Phase 2). Deliverable B ready for Gate 3 operator review alongside C/D/E (pending). Next session begins Deliverable C (2026 EDU Biosphere Market Study) + continuation work on B.4 state-aid (35 remaining states) + IPEDS/Scorecard pilot seeds + BLS OEWS top-50 expansion.
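Shape of one seed record under cached-record.schema.json, for orientation. The metadata fields (source_url, retrieval_date, recency_confidence, authority_tier) are the P4 contract; record_id, unit, and the URL are placeholders, while the Pell figure and citation come from the seed list above:

```json
{
  "record_id": "pell-max-2026-27",
  "value": 7395,
  "unit": "USD/year",
  "citation": "PL 119-75",
  "source_url": "https://example.ed.gov/pell-2026-27",
  "retrieval_date": "2026-04-17",
  "recency_confidence": "high",
  "authority_tier": "primary_gov"
}
```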
2026-04-17 #19
Phase 2 Gate 2 CLOSED — Deliverables F + G + H + ADR 0009 + claude-mem hook patch — Resumed Phase 2 Session 2. Shipped 9 remaining canonical tool specs (Financial Aid Quiz, Scholarship Finder, ROI Calculator, Employer Tuition Checker, Loan Repayment Calculator, Career Salary Explorer, Time-to-Degree Calculator, GI Bill Calculator, Aid Letter Decoder) at verticals/edu/tool-specs/. Shipped tool-trigger-audit.json (10 tools × 134 triggers, all 8 desire families F1–F8 covered, 2 new-tool gaps flagged for Phase 8), tool-multiplier-stories.md (per-tool floor/multiplier/source/desire-family + P5 cross-vertical pattern), adaptive-question-flow-spec.md (branch points + priority matrix), unit-economics-monetization-rules.md (Deliverable G — monetization rails + CPL bands + lead tiers + LTV + consolidated authority-data-cache infrastructure), platform-fit-rules.md (Deliverable H — 8 platforms × 3 concerns + conditional in-ad disclaimer render spec). Applied P0/P1 patches: [VERIFY: authority-data-cache refresh] tags on federal loan rates + PLUS rate + Pell max + FAFSA deadline + OBBBA Public Law citation + CFPB specific report URL. 150% vs 225% poverty-line distinction for IDR plans (SAVE vs IBR/PAYE/ICR). Spouse §127 "both employers must have adopted plan" caveat. HR talking-points constrained to template-based placeholder-substitution only (no free-form LLM generation). SCAM CHECK 5-item exclusion pattern + scoring weights for Scholarship Finder. Operator locked 12 Tier A + 10 Tier B + 8 Tier C + 10 Tier D Gate 2 decisions from 15-year media-buyer lens. C6 full-50-state revision applied after operator clarified Stage 3 Copy Factory is an agent-driven skill — all phased copywriter-depth scoping removed across 6 files; authority-data-cache/state-aid/ covers all 50 states at Phase 2 seed. ADR 0009 written and accepted at docs/adr/0009-agent-driven-copy-factory-input-layer-thesis.md — formalizes agent-driven Copy Factory + research-rigor-as-throughput thesis. 
Affects campaign-forge + ContentForge + campaignforge-app. claude-mem PreToolUse:Read hook patched — removed the single file-context hook that was truncating Read output to line 1 when files had prior observations. All other claude-mem functionality preserved. Idempotent re-apply script at ~/.claude-mem/disable-pretool-read-hook.sh. Takes effect next session. Gate 2 closed. Next session begins Gate 3 — Deliverable B (resource excavation + floor/multiplier + proprietary rankings research, Firecrawl-heavy, 4 parallel batches).
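The 150% vs 225% poverty-line patch above is the kind of branch the tool specs now encode explicitly. A minimal sketch (names illustrative; plan grouping per the patch note, SAVE at 225% vs IBR/PAYE/ICR at 150%):

```python
# Poverty-guideline income protection by IDR plan, per the 150%/225%
# distinction recorded in the Gate 2 patch. Multipliers are fractions
# of the HHS poverty guideline shielded from "discretionary income".
IDR_POVERTY_MULTIPLIER = {
    "SAVE": 2.25,
    "IBR": 1.50,
    "PAYE": 1.50,
    "ICR": 1.50,
}

def protected_income(plan, poverty_guideline):
    """Income shielded from the discretionary-income calculation."""
    return IDR_POVERTY_MULTIPLIER[plan] * poverty_guideline
```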
2026-04-17 #18
Cloudflare Preview Gating + Phase 2 Session 2 Kickoff (Paused) — Hybrid session. Started Phase 2 Session 2 by shipping EFC Calculator canonical tool spec at verticals/edu/tool-specs/efc-calculator.md (~260 lines; structural template for the 9 remaining tool specs in Deliverable F — purpose, desire families, question set with dependencies, calculation pseudocode, data sources with authority tier, output format, compliance posture, ContentForge Svelte vs campaign-forge vanilla JS drift-check, mobile-load targets, adaptive-flow branch points). Session paused mid-Deliverable-F to resume a carried-over Cloudflare infrastructure task from the prior session. Mapped 3-account CF architecture: infra hub (fourthright.io + project subdomains for CampaignForge/ContentForge/Plane/Directus/Metabase/Coolify + contentforge-1ei.pages.dev Pages project) chosen as long-term master; Keith's account (degreesources.com production + doinsilence.com side blog) + third fourthright account held for a future consolidation session. Stood up Cloudflare Access gate on ds.fourthright.io — bound as custom domain to existing ContentForge Pages project in infra hub, OTP IDP, 24h session, 3 owner emails allow-listed. Resolved ACME-HTTP-01-intercepted-by-Access issue via delete-app → let-cert-issue (~15s) → recreate-app workaround. Pushed public/_headers to RationalSeer/contentforge main (commit 935a020): host-scoped X-Robots-Tag: noindex, nofollow for contentforge-1ei.pages.dev only (Access-gated preview and future production hostnames unaffected). Cleaned up wasted Zero Trust org from Keith's account (earlier false start). 4 CF API tokens leaked in conversation — flagged for operator revocation. Preview gated and indexing-protected. Next session resumes Phase 2 Session 2 Deliverable F (9 specs + audit + stories + adaptive flow) + Deliverables G + H.
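The host-scoped rule in public/_headers is a two-line Cloudflare Pages pattern; absolute-URL scoping keeps the noindex on the *.pages.dev preview hostname only, leaving the Access-gated custom domain and future production hostnames untouched:

```
https://contentforge-1ei.pages.dev/*
  X-Robots-Tag: noindex, nofollow
```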
2026-04-17 #17
Psychology Engine Phase 2 Pre-Sprint + Deliverable A (Gate 1 Closed) + Workflow Infrastructure Hardening — Scoped the Phase 2 EDU Reality Map sprint into 8 research deliverables with 3-gate structure (A alone; F+G+H; B+C+D+E). v2 persona-driven revision added 6 cross-cutting principles (Decision Rule for Angles, floor/multiplier, proof-mechanism gate, anti-hallucination rigor, cross-vertical reusability, Stage R readback hook). Shipped Deliverable A: compliance-angle-map.md — 9 parts translating every restricted claim / word / scan pattern into compliant tool-backed angles via 8 desire families (F1–F8). Part VII: disclaimers conditional on ad-claim content, not platform/format — tool-discovery framing routes compliance to the lander. Gate 1 resolved 5 open questions with 3 strategic extensions: super tools via adaptive question-flow, proprietary rankings from public gov data (displacing third-party whitelist), consolidated authority-data-cache infrastructure spec (IPEDS + BLS + rankings + VA + Scorecard unified). Workflow infrastructure hardened: (1) CLAUDE.md +3 workflow rules — Firecrawl-default for research, parallel tool calls, persona re-load every ~5 turns; (2) scripts/session-end.sh multi-repo helper built with interactive per-repo confirm + safe staging; (3) deep-research skill rewired to Firecrawl CLI (was looking for MCP tools that weren't installed); (4) continuous-learning-v2 observer enabled — had 7.7MB of observations accumulated but analysis was blocked by observer.enabled: false; (5) session-end protocol mandates Lessons Learned entries (restoring cadence that lapsed after session #9). CLAUDE.md tuned 41.5k → 39.8k chars. Gate 1 closed. Session 2 bootstrap prepared. Infrastructure ready for long-haul sessions. Deliverables F + G + H next (pure synthesis, no external research).
2026-04-17 #16
Psychology Engine Phase 1d Synthesis COMPLETE — Canonical Trigger Library Locked — Papers 1/2/3 synthesized into trigger-library.json (134 entries, schema v0.2.0). All of Phase 1 research shipped. Merges applied: 5 Paper 3 formal confirmations (1 triple-merge on specificity_concrete_number), 9 Paper 2 confirmations into P1 parents (operator-approved; tool_as_proof_mechanism + mental_accounting_horizon_alignment kept standalone). Deadline tension resolved: 3 sources merged into canonical deadline_mechanic with deadline_reality: "real" | "manufactured" selector set at angle generation. Three source IDs retained as alias stubs. Tier distribution: Tier 1: 8 (triple-confirmed families) · Tier 2: 106 · Tier 3: 19 (replication-contested or register-sensitive) · Anti-pattern: 1 (government_card_imagery_compliance_trap). Schema v0.2.0 formalizes P2/P3 extension fields (proof_mechanism_required, shortcut_risk, replication_status) and adds tier, see_also, variant_of, merged_from. Removed paper_N_status — provenance carries it. Phase 1 complete. Phase 2 (EDU Reality Map) unblocked. Canonical vocabulary locked for all downstream phases.
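An illustrative schema v0.2.0 entry for the merged deadline trigger. The field names (tier, see_also, variant_of, merged_from, proof_mechanism_required, shortcut_risk, replication_status) are the ones the schema formalizes above; tier, shortcut_risk, and the third merged_from ID are placeholders, since only two of the three merged source IDs appear in these notes:

```json
{
  "trigger_id": "deadline_mechanic",
  "tier": 2,
  "merged_from": ["scarcity_time", "deadline_deweaponization", "deadline_source_3"],
  "variant_of": null,
  "see_also": ["loss_aversion"],
  "proof_mechanism_required": false,
  "shortcut_risk": "medium",
  "replication_status": "confirmed",
  "selectors": { "deadline_reality": ["real", "manufactured"] }
}
```

The `deadline_reality` selector is the conditional-trigger resolution noted above: angle generation must declare whether a deadline is real or manufactured before the trigger can be deployed.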
2026-04-17 #15
Psychology Engine Paper 3 Shipped — Phase 1 Research Complete — 34 triggers extracted from Voss Never Split the Difference (Cluster 15 dialogic persuasion, 20 triggers) and Berger Magic Words (Cluster 16 word-level NLP, 14 triggers). 29 new / 5 formally confirmed / 3 indirect-extension relationships. Running library: 148 distinct trigger IDs across three papers (Paper 1: 69, Paper 2: 45, Paper 3: 34), 0 ID collisions. Five Tier-1 triple-confirmed trigger families identified: concrete specificity, reason-why/because, similarity matching, anchoring-with-walk-away, loss-aversion dialogic. 7 triggers tagged proof_mechanism_required: true. Tension surfaced for Phase 1d: deadline_deweaponization (P3) vs scarcity_time (P1) — planned conditional-trigger resolution. Voss's 7-38-55 / Mehrabian rule deliberately NOT elevated (replication caveat documented). All three Layer-1 research papers locked. Phase 1d synthesis unblocked.
2026-04-17 #14
Psychology Engine Papers 1 + 2 Shipped and Approved — Paper 1: 69 triggers from the LLM training canon (Cialdini, Schwartz, Hopkins, Halbert, Kennedy, Whitman, Haidt, Fogg, Kahneman, Shotton, Sutherland) across 9 clusters. Paper 2: 45 triggers from deep research (29 new + 16 confirmed) across 5 new clusters — contemporary copywriter methodology (Georgi RMBC, Sultanic NHB, Brown E5, Kern RIA, Wiebe VoC, Furr 26-proof), administrative burden psychology (Moynihan 3-cost model), platform-native empirical mechanics, ethics/compliance-as-credibility, vertical-specific psychology (Medicare/SSDI/auto). Replication updates: priming weakened, loss aversion magnitude contested (Brown 2024 vs Walasek 2024), endowment + default robust. Anti-trigger catalogued: government_card_imagery_compliance_trap (CMS-prohibited Medicare pattern). Both operator-approved same session.
2026-04-16 #13
Psychology Engine Project — Phase 0 Planning Shipped — Strategic pivot: stopped trying to patch copy quality and instead designed a foundational Psychology-to-Angle Mapping Engine. 5-layer architecture where angles are derived from (trigger × reality × tool), not invented. Content site is the architectural moat — every ad routes through it. Deliverables: 2 whiteboards (Process Map + End Result with mermaid diagrams), 8 ADRs (content-site-is-moat, Layer 4 split, monetization routing, URL collision rule, shared schemas, ClickUp primary tracker, LeadAmplify archived, fourthright.io deferred), 12 JSON schema stubs, monetization model doc, ClickUp workspace restructure (6 new folders + 5 mirrored docs), session + cross-repo tracking infra, CLAUDE.md protocol updated. Also: Stage 6 + Stage 3B validator bugs fixed (pipeline now passes clean). C1 entity resolution (Click Send Inc FL active). P1.2 Plane task unblocked (Blocked→Ready). ~25 new files shipped. 12-17 session research roadmap locked. Phase 1a ready.
2026-04-15 #10
Parallel Orchestration: 9 Agents, 2 Batches, 3 Repos — Batch 1 (5 agents): URL redirect plan (309 lines, phased cutover), Zaraz config spec + guide (1,322 lines, found 7 tracking gaps including critical cta_type mismatch), pipeline v2 dry run (Stages 1-2 passed, found proof_mechanism validator gap), BUILD-P validation results UI (941 lines, 5 files), ContentForge backfill (12 pages: 10 school profiles with College Scorecard data + degree-match quiz + terms-of-use). Batch 2 (4 agents): BUILD-P Coolify deploy prep (Dockerfile, health check, deploy guide), end-to-end tracking verification audit (full chain status + 7-phase operator checklist), Winners Vault seeded (4 tool-backed winners + 5 blacklisted from swipe vault history), ContentForge sitemaps + school profile routing + 18 stale URL fixes. Inline: proof_mechanism added to programmatic validator (config + script), Financial Aid Quiz URL mismatch fixed (32 refs), claude-mem permissions unblocked. 9 agents, ~35 files, ~4,000 lines across 3 repos. Winners Vault live. BUILD-P deploy-ready.
2026-04-15 #9
Pipeline Brain Upgrade: v2 System Prompt + 19 Skills Restructured — Rewrote the affiliate marketer system prompt (v2) with strategic thesis: tool-backed proof > generic claims, Advantage+ black box reality, multi-vertical expansion pattern. Restructured all 19 pipeline skills across 5 batches: added HARD RULES blocks, VERIFY gates between steps, anti-pattern examples (GOOD vs BAD with WHY), IF/THEN decision trees replacing prose, tool-as-proof integration throughout. New data model additions: proof_mechanism field on angles, tool_backed on vault entries, tool_angle_insights in retro reports, tool-backed specificity bonus in copy scoring. Deep strategic discussion: Advantage+ angle flooding strategy, tool-as-moat thesis, multi-vertical gov research pattern, free-traffic-to-direct-school outreach play, win-win-win-win value architecture. 19 skills restructured, ~1,600 lines added. v2 system prompt created. All pushed.
2026-04-15 #8
Parallel Builds #3-5: 10 Tasks, 3 Batches, 12 Agents — Batch 3 (3 agents): XML sitemaps, cost tracker + session manager, pipeline orchestrator. Batch 4 (3 agents): brief editor (7-step form), Zaraz event taxonomy (5 events × 10 tools), Playwright E2E tests (22 tests). Batch 5 (4 agents): pipeline execution UI (SSE streaming + timeline + approvals), a11y audit (zero critical/serious), Lighthouse CI (all pages 95+), CAPI Worker (Zaraz → Meta CAPI). Both contentforge a11y and Lighthouse agents independently arrived at identical CSS cascade + favicon fixes — zero merge conflicts. +48 files, ~6,000 lines across 3 repos. 10 tasks completed.
2026-04-15 #5
Parallel Build #2: Schema Markup + Agent Executor — Agent team. Agent 1: JSON-LD structured data on all 42 contentforge pages (BreadcrumbList, Article, WebApplication, Organization via @graph arrays). Agent 2: full Agent Executor in campaignforge-app — Anthropic SDK streaming, cost tracker with per-model pricing, session manager, POST /api/pipeline/execute with auth guard and SSE response. +8 files, 1,623 insertions across 2 repos. 1 Plane task Done.
2026-04-15 #4
Parallel Build: E-E-A-T + PostgreSQL — Agent team session. Agent 1: built 4 substantive E-E-A-T pages in contentforge (about, editorial-policy, methodology, contact) with real data sources (Federal Methodology, BLS, IPEDS, College Scorecard), tool-by-tool methodology, affiliate disclosure, AI disclosure. Agent 2: set up PostgreSQL + Drizzle in campaignforge-app — docker-compose.yml (PG 16, port 5433), drizzle.config.ts, 6 db npm scripts, .env.example, Coolify production setup docs. +8 files across 2 repos. 2 Plane tasks marked Done.
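The local Postgres service from that session boils down to a compose fragment like this (service name and credentials are placeholders; PG 16 on host port 5433 is per the entry above):

```yaml
services:
  db:
    image: postgres:16
    ports:
      - "5433:5432"   # host 5433 → container 5432, avoids a clash with any local PG on 5432
    environment:
      POSTGRES_DB: campaignforge
      POSTGRES_PASSWORD: changeme
```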
2026-04-14 #3
BUILD-C Content + Tools Complete — Bulk converted 28 article/tool JSONs to MDX (wrote Python converter with MDX-safe HTML formatting). Built 4 category index pages (/category/financial-aid/ etc.). Polished 8 shared Svelte components (StepWizard, ButtonGroup, ResultCard, InputField, ProgressBar, MethodologyDisclosure, SourceCitation, CTAButton). Implemented all 10 interactive Svelte tool islands (EFC Calculator, Financial Aid Quiz, Scholarship Finder, ROI Calculator, Loan Repayment, Career Salary Explorer, Employer Tuition Checker, Time-to-Degree, GI Bill Calculator, Aid Letter Decoder). Wired tool components with client:visible hydration. 41 pages build in 4s. +31 files, 10,452 insertions in tool commit alone. 12 Plane tasks marked Done.
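The MDX-safe formatting step matters because `{` opens a JSX expression in MDX, so raw braces in article prose break the build. A minimal sketch of the escaping idea (function name illustrative, not the shipped converter):

```python
def mdx_escape(text: str) -> str:
    """Entity-escape braces that MDX would otherwise parse as JSX expressions."""
    return text.replace("{", "&#123;").replace("}", "&#125;")
```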
2026-04-14 #2
BUILD-C W1 + BUILD-P W1 — Design tokens migrated, full header/footer/layouts built, homepage + tools index polished, FAFSA article + EFC calculator converted to MDX, Content Layer routing working (9 pages). shadcn-svelte initialized, app shell with sidebar nav + dark mode + auth. CF Pages deployed. +86 files across 2 repos.
2026-04-14 #1
Scaffolds — Both repos created, pushed to GitHub. contentforge: Astro 5 + Svelte + Tailwind 4 (60 files). campaignforge-app: SvelteKit 2 + Drizzle (99 files). 30 Plane tasks created.
1 3-Repo Architecture
Each concern lives in its own repository. Shared config, independent deploys.
Repository · Purpose · Stack · Deploys To · Status
campaignforge-app · CampaignForge ops platform (workflow UI, pipeline execution, dashboards) · SvelteKit 2, Svelte 5, shadcn-svelte, Drizzle + Kysely, Postgres · DigitalOcean via Coolify · Deploy-Ready
contentforge · Content sites (degreesources.com + future verticals) · Astro 5, Svelte 5 islands, MDX, Tailwind 4 · Cloudflare Pages · W1 Done • Deployed
campaign-forge · Pipeline brain: skills, config, specs, vertical data, research · Python scripts, YAML/JSON config, Claude skills · Local / CLI · Active
2 Revenue Flow
Ad spend → Content site → Tool engagement → Offer conversion → Revenue
1. Ad Click: Meta / Google / TikTok ad → user clicks → lands on content site tool page
2. Tool Engagement: user uses EFC Calculator / Quiz / Finder → genuine value delivered
3. CTA Click: user clicks "Explore Programs" → routed via offer URL with tracking params
4. Lead Submit: user fills form on partner portal → lead captured → Everflow attributes
5. Revenue: $35 CPL per qualified lead → Winners Vault updated → next campaign informed
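The offer-URL routing in the CTA step is plain query-param composition. A sketch under stated assumptions: the param names (sub1, sub2, transaction_id) are common Everflow-style tracking slots, but the real param contract is whatever the network config specifies.

```python
from urllib.parse import urlencode

def offer_url(base, click_id, source, campaign):
    """Append tracking params so the partner-portal lead can be attributed back."""
    params = {"sub1": campaign, "sub2": source, "transaction_id": click_id}
    return f"{base}?{urlencode(params)}"
```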

3 Execution Phases
BUILD (Claude builds) + OPERATE (Operator executes) run simultaneously across 5 phases.

Phase 0 (This Week): Spec review, social profiles, Meta Verified, entity resolution, pre-warming · 22 tasks
Phase 1 (Weeks 1-3): Tracking live, platform MVP, ad accounts, organic posting, campaign structures · 35 tasks
Phase 2 (Weeks 4-6): Full workflow UI, account warm-up, first live campaign, Winners Vault seeded · 26 tasks
Phase 3 (Weeks 7-9): Lead intelligence, email capture, social automation, competitive intel, platform health · 28 tasks
Phase 4-5 (Weeks 10+): Ad platform APIs, CAPI firing, retro automated, ping/post routing, multi-vertical · 15+ tasks