Complete Spec Bundle — 5 Documents, 3 Repos (Status: Building)
Platform: CampaignForge operations platform. SvelteKit + Svelte 5 + shadcn. 12 comparative decisions. Agent SDK pipeline execution. Full database schema.
ContentForge: Content site architecture. Astro 5 + Svelte 5 islands. Zero-JS articles, ~5KB tools. 10 tool patterns. 5-6 week migration plan.
Trust: Platform trust and account health. Meta HiVA scoring. Google EDU restrictions. 4-week warm-up playbooks. Weekly intel monitoring.
Execution: Master execution timeline. 5 phases, ~150 tasks. BUILD + OPERATE parallel tracks. GATE milestones. Week-by-week timeline.
Runbook: Daily operations guide. 3-4 hrs/day rhythm. Campaign lifecycle. Solo to team-of-3 scaling. Incident response playbooks.
SvelteKit 2 + Svelte 5 + Tailwind 4 + Drizzle ORM + shadcn-svelte
Astro 5 + Svelte islands + MDX + Tailwind 4 • 42 pages, 10 tools, Lighthouse 95+
Compliance is an angle-generation input, not a guardrail. The restricted-claims list is literally the cheat sheet for what converts — each restriction exists because the underlying psychological desire converts hard. Map every restriction to its desire family, the Layer-1 triggers that deliver that desire, and the tool-backed compliant framing that lands harder than the non-compliant original. Phase 5 then reads this map as its primary input.
Disclaimers are conditional on ad-claim content, not platform/format. Tool-discovery framing ("See what you could qualify for") makes no standalone claim — the tool + article lander carries all required disclosures. In-ad disclaimers only trigger when the ad itself makes a claim that requires one (specific dollar amount, named government program, income claim). This preserves CTR on discovery-framed ads while compliance routes to the lander where substantiation actually happens. Tool-discovery framing is simultaneously the best-CTR AND most-compliant framing — not a coincidence.
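As a sketch, the rule reduces to a predicate over the ad copy itself (the claim patterns below are illustrative stand-ins, not the real restricted-claims list):

```typescript
// Illustrative claim detectors; the real list comes from the compliance map.
const CLAIM_PATTERNS: RegExp[] = [
  /\$\s?\d[\d,]*/,                          // specific dollar amount ("$6,895")
  /\b(?:pell grant|fafsa|gi bill)\b/i,      // named government program (sample names)
  /\b(?:earn|income|salary)\b.*\bup to\b/i, // income claim
];

// In-ad disclaimer fires only when the ad copy itself makes a claim;
// discovery framing routes all disclosures to the tool + article lander.
function requiresInAdDisclaimer(adCopy: string): boolean {
  return CLAIM_PATTERNS.some((p) => p.test(adCopy));
}

requiresInAdDisclaimer("See what you could qualify for"); // false: no standalone claim
requiresInAdDisclaimer("Claim your $6,895 grant today");  // true: specific dollar amount
```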
Proprietary rankings from public gov data beat a third-party whitelist. College Scorecard + IPEDS + BLS + VA data let us build our own defensible ranking system ("Best Earnings Outcomes," "Best Value," "Best Completion for Working Adults") with full methodology transparency. No usage-rights gating, no publication bias, no expiration, and a unique angle space competitors can't replicate. Essentially "US News for online education" built from public gov data.
Pattern-detect the meta-spec. Three questions (IPEDS cache, proprietary rankings, BLS wage refresh) all had the same structural answer: authority-tier data cache with freshness tracking, cron-refreshed, read by Phase 5 with automated staleness halts. Consolidating into one authority-data-cache infrastructure spec beats three separate specs that repeat the same logic.
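A minimal sketch of that shared shape, assuming per-source freshness budgets and a hard halt on stale reads (all names hypothetical):

```typescript
// Hypothetical shape of the consolidated authority-data cache:
// one freshness rule shared by the IPEDS, rankings, and BLS refreshes.

interface CacheEntry {
  source: string;     // "ipeds" | "scorecard" | "bls" | "va" (illustrative)
  fetchedAt: Date;    // written by the cron refresh
  maxAgeDays: number; // per-source freshness budget
}

function isFresh(entry: CacheEntry, now: Date = new Date()): boolean {
  const ageMs = now.getTime() - entry.fetchedAt.getTime();
  return ageMs <= entry.maxAgeDays * 86_400_000; // 86,400,000 ms per day
}

// Automated staleness halt: Phase 5 reads throw rather than serve stale data.
function readAuthorityData(entry: CacheEntry, now?: Date): CacheEntry {
  if (!isFresh(entry, now)) {
    throw new Error(`${entry.source} cache is stale; halting Phase 5 read`);
  }
  return entry;
}
```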
Load the v2 persona before strategic review. Reviewed my own scoping doc first without the persona, then with it. The persona-loaded review caught 10 missing elements (Decision Rule for Angles filter, floor/multiplier analysis, Tool Multiplier Stories, strategic platform fit vs. just compliance fit, proof-mechanism gate, Stage R readback hook). Without the persona the review stayed at shallow research framing; with it, deep strategic framing. Persona load is the gate.
Agents process early rules more heavily: Adding HARD RULES blocks at the top of each skill file ensures non-negotiable constraints are in the "hot zone" of agent attention. Rules buried mid-document get diluted by accumulated context.
Decision trees > prose guidance: Converting "consider X when Y" into "IF X THEN Y, OTHERWISE Z" produces more reliable agent behavior. Agents follow explicit branches; they interpret flexible guidance flexibly (which means inconsistently).
Anti-patterns are as powerful as patterns: Showing agents what NOT to produce (with concrete BAD examples and WHY explanations) creates hard boundaries. Without anti-patterns, agents gravitate toward "safe" generic output that passes no rules but also creates no value.
Subagent permissions are session-scoped, not inherited: Adding permissions to settings.local.json or project settings doesn't reliably propagate to subagents in don't-ask mode. Python/Bash fallback for file writes is the workaround. Some agents succeed, some don't — the behavior is inconsistent and needs investigation.
Sweet spot is 3-4 parallel agents: Beyond 4, prompt quality drops and merge review gets sloppy. The real constraint is file isolation, not compute. With 3 repos, the ceiling is ~5 agents if file boundaries are clean.
Feature branches prevent same-repo conflicts: Agents C and D in campaignforge-app worked on separate branches (feat/cost-session-manager, feat/pipeline-orchestrator), then merged sequentially. Zero conflicts.
Independent agents can converge on identical fixes: Both the a11y and Lighthouse agents independently identified and fixed the same CSS cascade issue (@layer base wrapping, :where() scoping) with byte-identical diffs. Merge was clean because Git detected identical changes.
CAPI Worker type errors are IDE-only: Cloudflare Workers have their own tsconfig with @cloudflare/workers-types. The IDE picks up the root tsconfig which doesn't know about Request/Response/fetch globals. Not real errors.
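A common workaround is a nested tsconfig inside the Worker directory so the editor resolves the Workers globals there; a minimal sketch (options illustrative):

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "ES2022",
    "moduleResolution": "Bundler",
    "types": ["@cloudflare/workers-types"],
    "strict": true
  },
  "include": ["**/*.ts"]
}
```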
MDX body_html requires JSX-safe formatting: Raw HTML from JSON has nested block elements on the same line, bare <br> tags, and ~ chars that MDX parses as strikethrough. The conversion script needed multi-pass formatting: self-close void tags, escape tildes to \~, and split every block element onto its own line.
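The passes can be sketched roughly like this (regexes simplified; the real conversion script handles more cases):

```typescript
// Self-close bare void tags, but skip ones that are already self-closed.
const VOID_TAGS = /<(br|hr|img)([^>]*)(?<!\/)>/g;
// Opening block-element tags that must start their own line for the JSX parser.
const BLOCK_OPEN = /<(p|div|table|tr|td|ul|ol|li|h[1-6])\b/g;

function toJsxSafeHtml(raw: string): string {
  let html = raw;
  // Pass 1: <br> → <br />, <img ...> → <img ... /> so JSX accepts them.
  html = html.replace(VOID_TAGS, "<$1$2 />");
  // Pass 2: escape tildes so "~" is not parsed as strikethrough.
  html = html.replace(/~/g, "\\~");
  // Pass 3: put every block element on its own line.
  html = html.replace(BLOCK_OPEN, "\n<$1");
  return html.trim();
}
```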
Astro dynamic components need static imports: client:visible hydration fails with dynamic component references (NoMatchingImport). Must use conditional static rendering: {tc === 'EFCCalculator' && <EFCCalculator client:visible />}.
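A sketch of the working pattern, assuming the components live alongside the page (paths and the second component name are illustrative):

```astro
---
// Static imports: Astro can only hydrate components it can see at build time.
import EFCCalculator from "../components/EFCCalculator.svelte";
import LoanPayoffTool from "../components/LoanPayoffTool.svelte";
const { tc } = Astro.props;
---
<!-- Conditional static rendering instead of a dynamic component reference -->
{tc === 'EFCCalculator' && <EFCCalculator client:visible />}
{tc === 'LoanPayoffTool' && <LoanPayoffTool client:visible />}
```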
Parallel agents for tool implementations: 4 agents dispatched simultaneously, each handling 2-3 tools. All completed in ~17 minutes. Logic/data separation (*.logic.ts, *.data.ts) enabled clean parallelization with zero merge conflicts.
Astro 5 Content Layer API: entry.render() is gone. Use import { render } from 'astro:content' then render(entry). Content collections need glob() loader from astro/loaders.
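A minimal sketch of the new shape (collection name, glob pattern, and schema illustrative):

```ts
// src/content.config.ts
import { defineCollection, z } from "astro:content"; // the z import here currently warns
import { glob } from "astro/loaders";

const articles = defineCollection({
  loader: glob({ pattern: "**/*.mdx", base: "./src/content/articles" }),
  schema: z.object({ title: z.string() }),
});

export const collections = { articles };

// In a page, instead of entry.render():
// import { render } from "astro:content";
// const { Content } = await render(entry);
```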
shadcn-svelte + Tailwind v4: @apply border-border fails — Tailwind v4 doesn't know custom vars via @apply. Use @theme inline to declare all HSL color vars, then use raw CSS instead of @apply for base styles.
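A sketch of the shape, using the default shadcn HSL variables (list abbreviated):

```css
@import "tailwindcss";

:root {
  --border: 214.3 31.8% 91.4%;
  --background: 0 0% 100%;
}

/* Declare the custom vars so Tailwind v4 knows them as colors */
@theme inline {
  --color-border: hsl(var(--border));
  --color-background: hsl(var(--background));
}

/* Raw CSS for base styles instead of `@apply border-border` */
* {
  border-color: var(--color-border);
}
body {
  background-color: var(--color-background);
}
```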
shadcn-svelte components need WithElementRef: The utils module that exports cn must also export the WithElementRef and WithoutChildrenOrChild types for sidebar/rail components.
MDX in Content Collections: HTML (tables, callouts, step-lists) works inline in MDX. No need to convert to custom components yet — the prose CSS styles handle it.
Astro 5 Content Collections: Uses src/content.config.ts (not src/content/config.ts). The z import from astro:content shows deprecation warnings.
Tailwind v4: No tailwind.config.ts needed. Uses @tailwindcss/vite plugin. The @astrojs/tailwind integration conflicts — use the Vite plugin directly.
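A minimal sketch of the working setup (astro.config.mjs):

```js
import { defineConfig } from "astro/config";
import tailwindcss from "@tailwindcss/vite";

export default defineConfig({
  // Tailwind v4 via the Vite plugin; no tailwind.config.ts, no @astrojs/tailwind
  vite: { plugins: [tailwindcss()] },
});
```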
Remote agents failed: Overnight scaffold agents couldn't auth with GitHub. Local execution worked first try.
compliance-angle-map.md — 9 parts translating every restricted claim / word / scan pattern into compliant tool-backed angles. 8 desire families (F1–F8) consolidate the 7 restricted claims + 5 restricted words into trigger clusters + compliant archetypes. Part VII refined: disclaimers are conditional on ad-claim content, not platform/format — tool-discovery framing routes compliance to the lander.
Gate 1 resolved 5 open questions with strategic extensions: (1) super tools via adaptive question-flow audit, (2) IPEDS hybrid cache, (3) conditional disclaimer rule, (4) proprietary rankings from College Scorecard + IPEDS + BLS + VA data, (5) BLS hybrid cache, (meta) consolidated authority-data-cache infrastructure spec.
Session-end protocol updated in CLAUDE.md to mandate Lessons Learned entry on dashboard. CLAUDE.md trimmed from 41.5k → 37.5k chars (GitNexus section condensed).
Gate 1 closed. Session 2 bootstrap prepared. Deliverables F + G + H next (pure synthesis, no external research).
trigger-library.json (134 entries, schema v0.2.0). All of Phase 1 research shipped.
Merges applied: 5 Paper 3 formal confirmations (1 triple-merge on specificity_concrete_number), 9 Paper 2 confirmations into P1 parents (operator-approved; tool_as_proof_mechanism + mental_accounting_horizon_alignment kept standalone).
Deadline tension resolved: 3 sources merged into canonical deadline_mechanic with deadline_reality: "real" | "manufactured" selector set at angle generation. Three source IDs retained as alias stubs.
Tier distribution: Tier 1: 8 (triple-confirmed families) · Tier 2: 106 · Tier 3: 19 (replication-contested or register-sensitive) · Anti-pattern: 1 (government_card_imagery_compliance_trap).
Schema v0.2.0 formalizes P2/P3 extension fields (proof_mechanism_required, shortcut_risk, replication_status) and adds tier, see_also, variant_of, merged_from. Removed paper_N_status — provenance carries it.
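An illustrative entry under schema v0.2.0 (field names come from the schema; every value and ID below is invented for illustration):

```json
{
  "id": "deadline_mechanic",
  "tier": 2,
  "proof_mechanism_required": false,
  "shortcut_risk": "medium",
  "replication_status": "replicated",
  "see_also": ["scarcity_time"],
  "variant_of": null,
  "merged_from": ["example_source_a", "example_source_b", "example_source_c"]
}
```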
Phase 1 complete. Phase 2 (EDU Reality Map) unblocked. Canonical vocabulary locked for all downstream phases.
Tension surfaced for Phase 1d: deadline_deweaponization (P3) vs scarcity_time (P1) — planned conditional-trigger resolution.
Voss's 7-38-55 / Mehrabian rule deliberately NOT elevated (replication caveat documented).
All three Layer-1 research papers locked. Phase 1d synthesis unblocked.
Anti-pattern entry: government_card_imagery_compliance_trap (CMS-prohibited Medicare imagery pattern).
Both operator-approved same session.
| Repository | Purpose | Stack | Deploys To | Status |
|---|---|---|---|---|
| campaignforge-app | CampaignForge ops platform (workflow UI, pipeline execution, dashboards) | SvelteKit 2, Svelte 5, shadcn-svelte, Drizzle + Kysely, Postgres | DigitalOcean via Coolify | Deploy-Ready |
| contentforge | Content sites (degreesources.com + future verticals) | Astro 5, Svelte 5 islands, MDX, Tailwind 4 | Cloudflare Pages | W1 Done • Deployed |
| campaign-forge | Pipeline brain: skills, config, specs, vertical data, research | Python scripts, YAML/JSON config, Claude skills | Local / CLI | Active |
Meta / Google / TikTok ad → user clicks → lands on content site tool page
User uses EFC Calculator / Quiz / Finder → genuine value delivered
User clicks "Explore Programs" → routed via offer URL with tracking params
User fills form on partner portal → lead captured → Everflow attributes
$35 CPL per qualified lead → Winners Vault updated → next campaign informed
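The routing step above can be sketched as a tiny URL builder (parameter names are illustrative, not confirmed Everflow fields):

```typescript
// Append tracking params to the partner offer URL before redirecting the click.
function buildOfferUrl(base: string, params: Record<string, string>): string {
  const url = new URL(base);
  for (const [key, value] of Object.entries(params)) {
    url.searchParams.set(key, value);
  }
  return url.toString();
}

// e.g. routing an "Explore Programs" click from the EFC Calculator:
buildOfferUrl("https://partner.example.com/apply", {
  sub1: "efc-calculator",
  click_id: "abc123",
});
```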
This Week (22 tasks): Spec review, social profiles, Meta Verified, entity resolution, pre-warming
Weeks 1-3 (35 tasks): Tracking live, platform MVP, ad accounts, organic posting, campaign structures
Weeks 4-6 (26 tasks): Full workflow UI, account warm-up, first live campaign, Winners Vault seeded
Weeks 7-9 (28 tasks): Lead intelligence, email capture, social automation, competitive intel, platform health
Weeks 10+ (15+ tasks): Ad platform APIs, CAPI firing, retro automated, ping/post routing, multi-vertical