Complete Spec Bundle — 5 Documents, 3 Repos, Building
CampaignForge operations platform. SvelteKit + Svelte 5 + shadcn. 12 comparative decisions. Agent SDK pipeline execution. Full database schema.
PlatformContent site architecture. Astro 5 + Svelte 5 islands. Zero JS articles, ~5KB tools. 10 tool patterns. 5-6 week migration plan.
ContentForgePlatform trust and account health. Meta HiVA scoring. Google EDU restrictions. 4-week warm-up playbooks. Weekly intel monitoring.
TrustMaster execution timeline. 5 phases, ~150 tasks. BUILD + OPERATE parallel tracks. GATE milestones. Week-by-week timeline.
ExecutionDaily operations guide. 3-4 hrs/day rhythm. Campaign lifecycle. Solo to team-of-3 scaling. Incident response playbooks.
Runbook: SvelteKit 2 + Svelte 5 + Tailwind 4 + Drizzle ORM + shadcn-svelte
Astro 5 + Svelte islands + MDX + Tailwind 4 • 42 pages, 10 tools, Lighthouse 95+
Compliance is an angle-generation input, not a guardrail. The restricted-claims list is literally the cheat sheet for what converts — each restriction exists because the underlying psychological desire converts hard. Map every restriction to its desire family, the Layer-1 triggers that deliver that desire, and the tool-backed compliant framing that lands harder than the non-compliant original.
Disclaimers are conditional on ad-claim content, not platform/format. Tool-discovery framing ("See what you could qualify for") makes no standalone claim — the tool + 2–3K-word article lander carries all required disclosures with cited sources and freshness dates. In-ad disclaimers only trigger when the ad itself makes a specific claim (dollar amount, named government program, income claim). Tool-discovery framing is simultaneously the best-CTR AND most-compliant framing — not a coincidence.
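The claim-conditional rule above can be sketched as a simple check; the patterns, thresholds, and function name below are illustrative assumptions, not the production rule set:

```typescript
// Hypothetical sketch: does the ad copy ITSELF make a specific claim
// that triggers an in-ad disclaimer? Patterns are illustrative only.
const CLAIM_PATTERNS: RegExp[] = [
  /\$\s?\d[\d,]*/,                    // specific dollar amounts ("$7,395")
  /\b(pell grant|fafsa|gi bill)\b/i,  // named government programs (sample list)
  /\b(earn|make|income of)\b/i,       // income claims (sample list)
];

function needsInAdDisclaimer(adCopy: string): boolean {
  return CLAIM_PATTERNS.some((p) => p.test(adCopy));
}
```

Under this sketch, tool-discovery copy like "See what you could qualify for" trips no pattern, so all disclosure weight stays on the lander.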
Proprietary rankings from public gov data beat a third-party whitelist. College Scorecard + IPEDS + BLS + VA data let us build our own defensible ranking system with full methodology transparency. No usage-rights gating, no publication bias, no expiration. Essentially "US News for online education" built from public gov data — an angle space competitors can't replicate.
Pattern-detect meta-specs. Three distinct questions (IPEDS cache, proprietary rankings, BLS wage refresh) had the same structural answer: authority-tier data cache with freshness tracking, cron-refreshed, read by Phase 5 with automated staleness halts. Consolidating into one authority-data-cache infrastructure spec beats three separate specs that repeat the same logic.
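The shared structure reads naturally as one freshness contract. A minimal sketch of the staleness-halt idea (source names, field names, and TTLs are assumptions, not the spec):

```typescript
// One cache contract for all authority-tier sources (IPEDS, BLS, rankings...).
interface CachedSource {
  name: string;        // e.g. "ipeds", "bls_wages" (illustrative names)
  fetchedAt: Date;     // last successful cron refresh
  maxAgeDays: number;  // freshness budget per source (assumed unit)
}

function isStale(src: CachedSource, now: Date = new Date()): boolean {
  const ageDays = (now.getTime() - src.fetchedAt.getTime()) / 86_400_000;
  return ageDays > src.maxAgeDays;
}

// Phase 5 reads would halt when any required source is stale.
function staleSources(sources: CachedSource[], now?: Date): string[] {
  return sources.filter((s) => isStale(s, now)).map((s) => s.name);
}
```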
Persona re-load has empirical basis. Not superstition — output quality degrades every 4–5 turns as attention weight on the persona decays relative to accumulated context. Re-reading restores signal strength via token recency + repetition. CLAUDE.md now codifies: reload on strategic triggers + every ~5 turns in long sessions + every natural phase boundary. NOT every turn (wasteful, dilutes attention on actual work).
Tool-discovery framing is the workflow cheat-code. The same pattern that solves compliance (route substantiation to lander) also solves CTR (no in-ad disclaimer drag) and lead quality (friction-as-feature qualifies users). Three optimization goals aligned in one architectural choice.
Diagnose before adding. Session surfaced that Firecrawl skill wasn't firing, continuous-learning-v2 looked dormant, session-end was manual. Diagnosis revealed: Firecrawl was CLI-skill-vs-MCP-mismatch (fixable by rewiring deep-research); continuous-learning-v2 had 7.7MB of observations accumulated but observer.enabled: false was blocking the analysis phase (one config flag fix); session-end was genuinely manual (built scripts/session-end.sh safe multi-repo helper). Don't install — diagnose first. Most "missing" capabilities are broken capabilities.
Agents process early rules more heavily: Adding HARD RULES blocks at the top of each skill file ensures non-negotiable constraints are in the "hot zone" of agent attention. Rules buried mid-document get diluted by accumulated context.
Decision trees > prose guidance: Converting "consider X when Y" into "IF X THEN Y, OTHERWISE Z" produces more reliable agent behavior. Agents follow explicit branches; they interpret flexible guidance flexibly (which means inconsistently).
Anti-patterns are as powerful as patterns: Showing agents what NOT to produce (with concrete BAD examples and WHY explanations) creates hard boundaries. Without anti-patterns, agents gravitate toward "safe" generic output that passes no rules but also creates no value.
Subagent permissions are session-scoped, not inherited: Adding permissions to settings.local.json or project settings doesn't reliably propagate to subagents in don't-ask mode. Python/Bash fallback for file writes is the workaround. Some agents succeed, some don't — the behavior is inconsistent and needs investigation.
Sweet spot is 3-4 parallel agents: Beyond 4, prompt quality drops and merge review gets sloppy. The real constraint is file isolation, not compute. With 3 repos, the ceiling is ~5 agents if file boundaries are clean.
Feature branches prevent same-repo conflicts: Agents C and D in campaignforge-app worked on separate branches (feat/cost-session-manager, feat/pipeline-orchestrator), then merged sequentially. Zero conflicts.
Independent agents can converge on identical fixes: Both the a11y and Lighthouse agents independently identified and fixed the same CSS cascade issue (@layer base wrapping, :where() scoping) with byte-identical diffs. Merge was clean because Git detected identical changes.
CAPI Worker type errors are IDE-only: Cloudflare Workers have their own tsconfig with @cloudflare/workers-types. The IDE picks up the root tsconfig which doesn't know about Request/Response/fetch globals. Not real errors.
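One way to make the IDE agree is a Worker-local tsconfig (a sketch, assuming `@cloudflare/workers-types` is installed as a dev dependency; exact compiler options may differ per project):

```json
{
  "compilerOptions": {
    "types": ["@cloudflare/workers-types"],
    "lib": ["ESNext"],
    "target": "ESNext",
    "module": "ESNext",
    "moduleResolution": "Bundler"
  }
}
```

Placed inside the Worker directory, it shadows the root tsconfig so `Request`/`Response`/`fetch` resolve to the Workers globals.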
MDX body_html requires JSX-safe formatting: Raw HTML from JSON has nested block elements on same lines, bare <br> tags, and ~ chars parsed as strikethrough. Conversion script needed multi-pass formatting: self-close void tags, escape tildes to \~, and split every block element onto its own line.
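The multi-pass conversion can be sketched roughly as below; tag lists and the function name are illustrative, not the actual script:

```typescript
// Hypothetical sketch of the three passes: self-close void tags,
// escape bare tildes, split block elements onto their own lines.
const VOID_TAGS = ["br", "hr", "img", "input"];
const BLOCK_TAGS = ["p", "div", "table", "tr", "td", "ul", "ol", "li", "h2", "h3"];

function toMdxSafe(html: string): string {
  let out = html;
  // Pass 1: <br> -> <br /> so JSX parses void elements.
  for (const t of VOID_TAGS) {
    out = out.replace(new RegExp(`<${t}(\\s[^>]*)?>`, "gi"), `<${t}$1 />`);
  }
  // Pass 2: bare ~ would be parsed as strikethrough in MDX.
  out = out.replace(/~/g, "\\~");
  // Pass 3: each block element starts/ends its own line.
  for (const t of BLOCK_TAGS) {
    out = out
      .replace(new RegExp(`<${t}(\\s[^>]*)?>`, "gi"), `\n<${t}$1>`)
      .replace(new RegExp(`</${t}>`, "gi"), `</${t}>\n`);
  }
  return out.trim();
}
```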
Astro dynamic components need static imports: client:visible hydration fails with dynamic component references (NoMatchingImport). Must use conditional static rendering: {tc === 'EFCCalculator' && <EFCCalculator client:visible />}.
Parallel agents for tool implementations: 4 agents dispatched simultaneously, each handling 2-3 tools. All completed in ~17 minutes. Logic/data separation (*.logic.ts, *.data.ts) enabled clean parallelization with zero merge conflicts.
Astro 5 Content Layer API: entry.render() is gone. Use import { render } from 'astro:content' then render(entry). Content collections need glob() loader from astro/loaders.
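The migration in a nutshell (a sketch; the `articles` collection name and glob pattern are illustrative):

```typescript
// src/content.config.ts (Astro 5)
import { defineCollection } from 'astro:content';
import { glob } from 'astro/loaders';

const articles = defineCollection({
  loader: glob({ pattern: '**/*.mdx', base: './src/content/articles' }),
});
export const collections = { articles };

// In a page: render is a named import now, not a method on the entry.
import { getCollection, render } from 'astro:content';
const entries = await getCollection('articles');
const { Content } = await render(entries[0]); // was: entries[0].render()
```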
shadcn-svelte + Tailwind v4: @apply border-border fails — Tailwind v4 doesn't know custom vars via @apply. Use @theme inline to declare all HSL color vars, then use raw CSS instead of @apply for base styles.
shadcn-svelte components need WithElementRef: The utils module that exports cn must also export the WithElementRef and WithoutChildrenOrChild types for sidebar/rail components.
MDX in Content Collections: HTML (tables, callouts, step-lists) works inline in MDX. No need to convert to custom components yet — the prose CSS styles handle it.
Astro 5 Content Collections: Uses src/content.config.ts (not src/content/config.ts). The z import from astro:content shows deprecation warnings.
Tailwind v4: No tailwind.config.ts needed. Uses @tailwindcss/vite plugin. The @astrojs/tailwind integration conflicts — use the Vite plugin directly.
Remote agents failed: Overnight scaffold agents couldn't auth with GitHub. Local execution worked first try.
compliance-angle-map.md — 9 parts translating every restricted claim / word / scan pattern into compliant tool-backed angles via 8 desire families (F1–F8). Part VII: disclaimers conditional on ad-claim content, not platform/format — tool-discovery framing routes compliance to the lander.
Gate 1 resolved 5 open questions with 3 strategic extensions: super tools via adaptive question-flow, proprietary rankings from public gov data (displacing third-party whitelist), consolidated authority-data-cache infrastructure spec (IPEDS + BLS + rankings + VA + Scorecard unified).
Workflow infrastructure hardened:
1. CLAUDE.md +3 workflow rules: Firecrawl-default for research, parallel tool calls, persona re-load every ~5 turns.
2. scripts/session-end.sh multi-repo helper built with interactive per-repo confirm + safe staging.
3. deep-research skill rewired to Firecrawl CLI (was looking for MCP tools that weren't installed).
4. continuous-learning-v2 observer enabled: 7.7MB of observations had accumulated but analysis was blocked by observer.enabled: false.
5. session-end protocol mandates Lessons Learned entries (restoring the cadence that lapsed after session #9).
CLAUDE.md tuned 41.5k → 39.8k chars.
Gate 1 closed. Session 2 bootstrap prepared. Infrastructure ready for long-haul sessions. Deliverables F + G + H next (pure synthesis, no external research).
trigger-library.json (134 entries, schema v0.2.0). All of Phase 1 research shipped.
Merges applied: 5 Paper 3 formal confirmations (1 triple-merge on specificity_concrete_number), 9 Paper 2 confirmations into P1 parents (operator-approved; tool_as_proof_mechanism + mental_accounting_horizon_alignment kept standalone).
Deadline tension resolved: 3 sources merged into canonical deadline_mechanic with deadline_reality: "real" | "manufactured" selector set at angle generation. Three source IDs retained as alias stubs.
Tier distribution: Tier 1: 8 (triple-confirmed families) · Tier 2: 106 · Tier 3: 19 (replication-contested or register-sensitive) · Anti-pattern: 1 (government_card_imagery_compliance_trap).
Schema v0.2.0 formalizes P2/P3 extension fields (proof_mechanism_required, shortcut_risk, replication_status) and adds tier, see_also, variant_of, merged_from. Removed paper_N_status — provenance carries it.
Phase 1 complete. Phase 2 (EDU Reality Map) unblocked. Canonical vocabulary locked for all downstream phases.
proof_mechanism_required: true.
Tension surfaced for Phase 1d: deadline_deweaponization (P3) vs scarcity_time (P1) — planned conditional-trigger resolution.
Voss's 7-38-55 / Mehrabian rule deliberately NOT elevated (replication caveat documented).
All three Layer-1 research papers locked. Phase 1d synthesis unblocked.
government_card_imagery_compliance_trap (CMS-prohibited Medicare pattern).
Both operator-approved same session.
| Repository | Purpose | Stack | Deploys To | Status |
|---|---|---|---|---|
| campaignforge-app | CampaignForge ops platform (workflow UI, pipeline execution, dashboards) | SvelteKit 2, Svelte 5, shadcn-svelte, Drizzle + Kysely, Postgres | DigitalOcean via Coolify | Deploy-Ready |
| contentforge | Content sites (degreesources.com + future verticals) | Astro 5, Svelte 5 islands, MDX, Tailwind 4 | Cloudflare Pages | W1 Done • Deployed |
| campaign-forge | Pipeline brain: skills, config, specs, vertical data, research | Python scripts, YAML/JSON config, Claude skills | Local / CLI | Active |
Meta / Google / TikTok ad → user clicks → lands on content site tool page
User uses EFC Calculator / Quiz / Finder → genuine value delivered
User clicks "Explore Programs" → routed via offer URL with tracking params
User fills form on partner portal → lead captured → Everflow attributes
$35 CPL per qualified lead → Winners Vault updated → next campaign informed
| Phase | Focus | Tasks |
|---|---|---|
| This Week | Spec review, social profiles, Meta Verified, entity resolution, pre-warming | 22 |
| Weeks 1-3 | Tracking live, platform MVP, ad accounts, organic posting, campaign structures | 35 |
| Weeks 4-6 | Full workflow UI, account warm-up, first live campaign, Winners Vault seeded | 26 |
| Weeks 7-9 | Lead intelligence, email capture, social automation, competitive intel, platform health | 28 |
| Weeks 10+ | Ad platform APIs, CAPI firing, retro automated, ping/post routing, multi-vertical | 15+ |