Complete Spec Bundle — 5 Documents, 3 Repos, Building
CampaignForge: operations platform. SvelteKit + Svelte 5 + shadcn. 12 comparative decisions. Agent SDK pipeline execution. Full database schema.
PlatformContent: site architecture. Astro 5 + Svelte 5 islands. Zero-JS articles, ~5KB tools. 10 tool patterns. 5-6 week migration plan.
ContentForgePlatform: trust and account health. Meta HiVA scoring. Google EDU restrictions. 4-week warm-up playbooks. Weekly intel monitoring.
TrustMaster: execution timeline. 5 phases, ~150 tasks. BUILD + OPERATE parallel tracks. GATE milestones. Week-by-week timeline.
ExecutionDaily: operations guide. 3-4 hrs/day rhythm. Campaign lifecycle. Solo to team-of-3 scaling. Incident response playbooks.
Runbook: SvelteKit 2 + Svelte 5 + Tailwind 4 + Drizzle ORM + shadcn-svelte
Astro 5 + Svelte islands + MDX + Tailwind 4 • 42 pages, 10 tools, Lighthouse 95+
Research sprints compound when batched in parallel with high-yield searches. 88 authoritative pages scraped across 6 parallel Firecrawl batches (federal, state × 7 sub-batches, military+tax+employer, niche-private, ranking-data APIs, verify-pass). Each firecrawl search --scrape returns multiple full authoritative pages in one call; batching them in background frees main context for structured-data emission. The real throughput multiplier isn't "scrape more" — it's "scrape once, emit structured records that feed every downstream deliverable." B.1 (75 records), B.2 (10 floor/multiplier stacks), B.3 (10 methodologies), B.4 (53 cache records) all sourced from the same 88 scrapes.
Truth is the moat, but only if every claim is resolvable to its source. Every record in B.1 + B.4 carries source_url + retrieval_date + recency_confidence + authority_tier. Any claim that couldn't be confirmed from Firecrawl content shipped with a [VERIFY] tag and operator-verification-pass queue — never a speculative value. Operator verify pass resolved all 6 flagged items via targeted Firecrawl scrapes (loan rates 6.39%/7.94%/8.94%, HHS 2026 poverty guidelines, Ch 30 MGIB-AD $2,518, §127 student-loan-payment made PERMANENT by OBBBA). Zero [VERIFY] markers remain in final shipped state.
Aggregator sources excluded by construction, not by review. authority_tier enum hard-codes {primary_gov, secondary_gov, accredited_private} — third_party_aggregator isn't a value that ships. This isn't a taste preference; it's regulator-defense architecture. When a challenge comes (Meta account review, FTC inquiry, competitor legal), every claim in copy traces to a cache record that traces to a primary-gov URL with release date. 54 primary_gov + 19 accredited_private + 2 secondary_gov (reciprocity compacts), 0 aggregators across 75 records.
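A minimal sketch of "excluded by construction" (field names match the record schema; the validation helper is illustrative, not the shipped code):

```python
from enum import Enum


class AuthorityTier(str, Enum):
    """third_party_aggregator is deliberately NOT a member:
    a record citing an aggregator fails validation at parse time."""
    PRIMARY_GOV = "primary_gov"
    SECONDARY_GOV = "secondary_gov"
    ACCREDITED_PRIVATE = "accredited_private"


def validate_record(record: dict) -> dict:
    # Coercing through the enum raises ValueError for any tier that isn't allowed to ship.
    record["authority_tier"] = AuthorityTier(record["authority_tier"])
    return record
```

Review never has to catch an aggregator, because the type system already refused it.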
Institution-first rankings compound; program-level is Phase 8 expansion. Ship 10 institution-level proprietary rankings immediately from Scorecard + IPEDS data (operator-locked). Program-level (4-digit CIP × institution joins) is genuinely higher-moat but requires more data engineering — Phase 8 scope. Three methodologies (Best Veteran-Friendly, Best GI Bill Value Max, Best Employer Partner Schools) are institution-level by nature and stay that way permanently. The other seven carry phase_8_expansion_target: "institution_and_program" so the Phase 5 agent knows what's coming.
Floor-to-multiplier ratios quantify the moat numerically. 10 tools × per-tool floor anchor vs. tool-proven multiplier: ratios range 2x (Financial Aid Quiz "FAFSA" → 3–5 aid categories) to 40x+ (Scholarship Finder "unclaimed billions" → $60K specific). The big numbers aren't marketing — they're what the math produces when you stack compliant resources for a qualifying profile. Ad copy uses typical case; tool output shows the user's specific number. Competitors can't follow because they don't have the tool to produce the math.
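The ratio itself is trivial arithmetic; the moat is having a tool-proven upper bound to feed it. A sketch, using a hypothetical $1,500 floor for the Scholarship Finder (the $60K figure is from the map; the floor here is illustrative):

```python
def moat_ratio(floor: float, tool_proven_upper: float) -> float:
    """Floor-to-multiplier ratio: how far the tool-proven stack
    exceeds the publicly known floor anchor for a tool."""
    return tool_proven_upper / floor


# Hypothetical floor vs. the $60K specific from the Scholarship Finder stack
ratio = moat_ratio(1_500, 60_000)  # 40x territory
```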
OBBBA changed §127 from temporary to permanent. The One Big Beautiful Bill Act made the student-loan-repayment inclusion in Section 127 educational assistance permanent (previously scheduled to expire 12/31/2025). This is load-bearing for the employer-benefit angle — no more sunset-clock framing on the $5,250 employer loan-repayment benefit. Every Employer Tuition Checker output can now cite permanence. Find the legislative changes that quietly enable new angles; they're the research edge.
Sync discipline: ClickUp is primary, not just STATUS/SESSION-LOG. Operator reminded at session #20 close — tracker sync at session end must include ClickUp (per ADR 0006) when tracker-relevant work happened. Saved as feedback_clickup_sync.md in memory. If STATUS.md moves forward and ClickUp doesn't, the operator-facing view diverges from the working view.
Research-sprint deliverables are agent inputs, not copywriter briefs. The breakthrough reframe this session: Stage 3 Copy Factory is a skill-based Claude agent, not a human-copywriter production line. Every schema field, every authority-data-cache record, every compliance rule, every tool-multiplier story becomes a parameter the agent reads at generation time. This inverts the constraint model — bandwidth is no longer the bottleneck, research-input rigor is. Moat is two-layered: research quality competitors can't match AND agent throughput competitors can't match at human-copywriter pace. ADR 0009 locks this thesis in.
Full 50-state coverage is sustainable because production is agent-driven. Initially scoped state coverage as phased copywriter depth (top-15 deep at launch, remaining 35 in Phase 5). Operator flagged the agent-driven nature of Copy Factory — all phased gating dissolved. authority-data-cache/state-aid/ covers all 50 states at Phase 2 seed; agent generates state × angle × platform × format matrix at every campaign run. Constraint shifts from hours to authority-data-cache completeness.
Channel the persona deliberately for judgment calls. When operator asked "answer from your 15-year perspective" on Gate 2 decisions, re-reading the v2 persona and deliberately channeling it produced substantively different answers than the default — categorical rather than hedge-y, strict rather than permissive (e.g., "universal skip YouTube Bumper when disclaimer won't fit", "strict 30-conv PMax bar, no lean seed", "PLUS-heavy red flag is factual"). The re-load triggers in CLAUDE.md aren't just cost-savings, they're judgment-quality gates.
Typical-case in ad copy, upper-bound as tool output. Distilled media-buyer discipline on multiplier framing: ad copy references the typical case ("most adults see $12–20K combined aid"); upper-bound numbers ($30K+, $65K/yr) only appear as tool output for users whose inputs produce them. Mixing the two in ad copy is the aggressive-affiliate pattern that burns accounts AND creates downstream lead-quality disasters (users expecting $30K, receiving $12K, churning through enrollment funnel, damaging school-partner relationships).
Compliance-and-CTR simultaneity is tool-discovery framing's signature. TikTok's restricted-industry scrutiny is aggressive for aid-claim copy. But tool-discovery framing ("take this 2-minute quiz to see what you qualify for") is BOTH the safer compliance posture AND the higher-CTR framing. Ad carries no standalone claim; tool on the lander carries substantiation. The moat pattern — tools-that-prove-claims — isn't just a moat, it's also the copywriting pattern that converts across every platform.
Plugin hooks can accumulate hidden tax. claude-mem's PreToolUse:Read hook was truncating file reads to line 1 when prior observations existed — a token-saving optimization that inverted into a capability regression for deep synthesis work (persona re-loads, spec reviews, multi-file research). Removed only that one hook (via jq 'del(.hooks.PreToolUse)'); kept every other claude-mem feature. Idempotent re-apply script because plugin updates will overwrite the patch. Lesson: "capability regression" can live inside what looks like a working plugin.
Operator's compounding-moat framing re-centers the work. The directive to "accumulate factual, true data across all possibilities that truly can benefit others, then condense that into useful, easy-to-use tools that output truthful quality results with a path for every consumer" is the project thesis in one sentence. Every research gap becomes a ceiling on output quality; every layer of rigor becomes throughput at output. Keep this framing front of mind for every future agent — research is not overhead, it is the moat made tangible.
Cloudflare Access cannot pre-gate a Pages custom domain during initial SSL provisioning. The ACME HTTP-01 challenge to /.well-known/acme-challenge/* is intercepted by the Access gate, blocking Google CA cert issuance. Workaround: delete the Access app → let cert issue (~15s once unblocked) → recreate the Access app. Brief public window is acceptable when the URL isn't advertised yet. Order-of-operations trap — gate pre-seeding doesn't work end-to-end for brand-new hostnames.
Pages.dev hostnames can't have direct Access apps. They belong to Cloudflare's shared zone owned by Cloudflare itself, not your account. Error 12130 "domain does not belong to zone". Always bind a custom subdomain from a zone in the same account as the Pages project, then gate the custom subdomain. The raw .pages.dev URL stays public and needs a separate noindex mechanism.
GitHub repo is locked to one CF account at a time. Disconnecting the CF Pages GitHub App from a repo does NOT clear CF's internal account-repo binding — only deleting the Pages project in the other account fully frees the repo. Error 8000093 means: "delete the conflicting project." Don't try to disconnect the GitHub App as a shortcut; check which account currently owns the binding and do cleanup there first.
CF doesn't support account merging. Manual migration: zones move cleanly via "Move Domain" (preserves all DNS records, no nameserver change); Pages/Workers/R2/KV/D1 must be recreated in target (destructive, lose history). Pick master by which account holds the most infrastructure, not the most domains. Moving a zone that has a bound Pages custom domain BREAKS that binding until the Pages project is re-homed to match — coordinate those two migrations together.
Canonical tag is advisory, not authoritative. Google may still crawl and index .pages.dev URLs despite <link rel="canonical"> pointing to production. Host-scoped X-Robots-Tag: noindex via Pages public/_headers is the authoritative signal. Don't rely on canonical alone for indexing control.
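A host-scoped rule in Pages `public/_headers` looks roughly like this (hostname from this project's preview; the shipped file may differ in exact directives):

```
https://contentforge-1ei.pages.dev/*
  X-Robots-Tag: noindex, nofollow
```

The absolute-URL matcher keeps the rule off the Access-gated custom domain and any future production hostname.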
DNS on a prod zone is safe for new subdomains — but verify no wildcards first. Adding a new subdomain CNAME is isolated from root/www/MX records. But if the zone has a wildcard CNAME (*.example.com), an explicit new record overrides the wildcard for that host — could surprise downstream systems. Always grep DNS list for wildcards before committing new records to a prod zone.
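The pre-commit check reduces to a scan over the zone's record list (record shape assumed for illustration, not the CF API response format):

```python
def wildcard_shadows(records: list[dict], new_name: str) -> list[str]:
    """Return any wildcard records an explicit new record would override.
    Non-empty result means downstream systems relying on the wildcard
    may be surprised by the new explicit host."""
    shadows = []
    for r in records:
        if r["name"].startswith("*.") and new_name.endswith(r["name"][1:]):
            shadows.append(r["name"])
    return shadows
```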
After session compaction, restate the intended next action before producing substantive artifacts. Session #18 opened with the system-start context pointing at a Phase 2 Session 2 bootstrap prompt; agent assumed that meant "execute now" and produced a 260-line canonical tool spec. Operator was mid-Cloudflare-work from the prior session and confused by the pivot. Better pattern: after compact, briefly surface the ambiguous thread ("X was queued before compact, Y was just referenced — which one?") and await explicit confirmation before producing substantive artifacts. The work wasn't wasted (the EFC spec becomes the template for remaining 9), but the redirect cost a full round-trip of context.
Compliance is an angle-generation input, not a guardrail. The restricted-claims list is literally the cheat sheet for what converts — each restriction exists because the underlying psychological desire converts hard. Map every restriction to its desire family, the Layer-1 triggers that deliver that desire, and the tool-backed compliant framing that lands harder than the non-compliant original.
Disclaimers are conditional on ad-claim content, not platform/format. Tool-discovery framing ("See what you could qualify for") makes no standalone claim — the tool + 2–3K-word article lander carries all required disclosures with cited sources and freshness dates. In-ad disclaimers only trigger when the ad itself makes a specific claim (dollar amount, named government program, income claim). Tool-discovery framing is simultaneously the best-CTR AND most-compliant framing — not a coincidence.
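A sketch of the conditional, with a toy claim detector (the regex and program list are illustrative, not the real compliance ruleset — income-claim detection is omitted):

```python
import re

DOLLAR_CLAIM = re.compile(r"\$\d")
NAMED_PROGRAMS = ("pell grant", "fafsa", "gi bill")  # illustrative subset


def needs_inline_disclaimer(ad_copy: str) -> bool:
    """Disclaimer keys off the ad's claim content, never platform or format.
    Tool-discovery framing makes no standalone claim, so it passes clean."""
    text = ad_copy.lower()
    return bool(DOLLAR_CLAIM.search(ad_copy)) or any(p in text for p in NAMED_PROGRAMS)
```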
Proprietary rankings from public gov data beat a third-party whitelist. College Scorecard + IPEDS + BLS + VA data let us build our own defensible ranking system with full methodology transparency. No usage-rights gating, no publication bias, no expiration. Essentially "US News for online education" built from public gov data — an angle space competitors can't replicate.
Pattern-detect meta-specs. Three distinct questions (IPEDS cache, proprietary rankings, BLS wage refresh) had the same structural answer: authority-tier data cache with freshness tracking, cron-refreshed, read by Phase 5 with automated staleness halts. Consolidating into one authority-data-cache infrastructure spec beats three separate specs that repeat the same logic.
Persona re-load has empirical basis. Not superstition — output quality degrades every 4–5 turns as attention weight on the persona decays relative to accumulated context. Re-reading restores signal strength via token recency + repetition. CLAUDE.md now codifies: reload on strategic triggers + every ~5 turns in long sessions + every natural phase boundary. NOT every turn (wasteful, dilutes attention on actual work).
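The cadence reduces to one predicate (a sketch of the CLAUDE.md rule, not literal agent code):

```python
def should_reload_persona(turn: int, last_reload_turn: int,
                          strategic_trigger: bool, phase_boundary: bool) -> bool:
    """Reload on strategic triggers, at phase boundaries, or every ~5 turns.
    Never every turn — that wastes tokens and dilutes attention on the work."""
    return strategic_trigger or phase_boundary or (turn - last_reload_turn >= 5)
```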
Tool-discovery framing is the workflow cheat-code. The same pattern that solves compliance (route substantiation to lander) also solves CTR (no in-ad disclaimer drag) and lead quality (friction-as-feature qualifies users). Three optimization goals aligned in one architectural choice.
Diagnose before adding. Session surfaced that Firecrawl skill wasn't firing, continuous-learning-v2 looked dormant, session-end was manual. Diagnosis revealed: Firecrawl was CLI-skill-vs-MCP-mismatch (fixable by rewiring deep-research); continuous-learning-v2 had 7.7MB of observations accumulated but observer.enabled: false was blocking the analysis phase (one config flag fix); session-end was genuinely manual (built scripts/session-end.sh safe multi-repo helper). Don't install — diagnose first. Most "missing" capabilities are broken capabilities.
Agents process early rules more heavily: Adding HARD RULES blocks at the top of each skill file ensures non-negotiable constraints are in the "hot zone" of agent attention. Rules buried mid-document get diluted by accumulated context.
Decision trees > prose guidance: Converting "consider X when Y" into "IF X THEN Y, OTHERWISE Z" produces more reliable agent behavior. Agents follow explicit branches; they interpret flexible guidance flexibly (which means inconsistently).
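A sketch of the conversion, using the Gate 2 PMax bar as the branch (the 30-conversion threshold is from the session notes; the function name is hypothetical):

```python
def pmax_decision(conversions_30d: int) -> str:
    # Prose guidance: "consider PMax when you have enough conversion data."
    # Decision tree: an explicit branch the agent can't reinterpret.
    if conversions_30d >= 30:
        return "LAUNCH_PMAX"
    return "STAY_ON_SEARCH"  # strict bar, no lean seed
```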
Anti-patterns are as powerful as patterns: Showing agents what NOT to produce (with concrete BAD examples and WHY explanations) creates hard boundaries. Without anti-patterns, agents gravitate toward "safe" generic output that passes no rules but also creates no value.
Subagent permissions are session-scoped, not inherited: Adding permissions to settings.local.json or project settings doesn't reliably propagate to subagents in don't-ask mode. Python/Bash fallback for file writes is the workaround. Some agents succeed, some don't — the behavior is inconsistent and needs investigation.
Sweet spot is 3-4 parallel agents: Beyond 4, prompt quality drops and merge review gets sloppy. The real constraint is file isolation, not compute. With 3 repos, the ceiling is ~5 agents if file boundaries are clean.
Feature branches prevent same-repo conflicts: Agents C and D in campaignforge-app worked on separate branches (feat/cost-session-manager, feat/pipeline-orchestrator), then merged sequentially. Zero conflicts.
Independent agents can converge on identical fixes: Both the a11y and Lighthouse agents independently identified and fixed the same CSS cascade issue (@layer base wrapping, :where() scoping) with byte-identical diffs. Merge was clean because Git detected identical changes.
CAPI Worker type errors are IDE-only: Cloudflare Workers have their own tsconfig with @cloudflare/workers-types. The IDE picks up the root tsconfig which doesn't know about Request/Response/fetch globals. Not real errors.
MDX body_html requires JSX-safe formatting: Raw HTML from JSON has nested block elements on same lines, bare <br> tags, and ~ chars parsed as strikethrough. Conversion script needed multi-pass formatting: self-close void tags, escape tilde characters so they aren't read as strikethrough markers, and split every block element onto its own line.
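The multi-pass shape looks roughly like this (tag lists and the `\~` escape are assumptions about the actual script, which isn't reproduced here):

```python
import re

VOID_TAGS = ("br", "hr", "img")
BLOCK_TAGS = ("p", "div", "table", "ul", "li", "h2", "h3")


def jsx_safe(html: str) -> str:
    """Multi-pass cleanup so raw HTML from JSON survives the MDX compiler."""
    # Pass 1: self-close bare void tags (<br> -> <br />).
    for tag in VOID_TAGS:
        html = re.sub(rf"<{tag}(\s[^>/]*)?>",
                      lambda m: f"<{tag}{m.group(1) or ''} />", html)
    # Pass 2: escape tildes so MDX doesn't read them as strikethrough.
    html = html.replace("~", r"\~")
    # Pass 3: put every block-element open tag on its own line.
    for tag in BLOCK_TAGS:
        html = re.sub(rf"(?<!\n)<{tag}\b", f"\n<{tag}", html)
    return html
```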
Astro dynamic components need static imports: client:visible hydration fails with dynamic component references (NoMatchingImport). Must use conditional static rendering: {tc === 'EFCCalculator' && <EFCCalculator client:visible />}.
Parallel agents for tool implementations: 4 agents dispatched simultaneously, each handling 2-3 tools. All completed in ~17 minutes. Logic/data separation (*.logic.ts, *.data.ts) enabled clean parallelization with zero merge conflicts.
Astro 5 Content Layer API: entry.render() is gone. Use import { render } from 'astro:content' then render(entry). Content collections need glob() loader from astro/loaders.
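A config sketch of the Astro 5 shape (collection name and paths are hypothetical):

```ts
// src/content.config.ts — note the new location; Astro 5 reads this file
import { defineCollection } from 'astro:content';
import { glob } from 'astro/loaders';

export const collections = {
  articles: defineCollection({
    loader: glob({ pattern: '**/*.mdx', base: './src/content/articles' }),
  }),
};
```

In a page, `import { render } from 'astro:content'` and `const { Content } = await render(entry)` replace the removed `entry.render()`.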
shadcn-svelte + Tailwind v4: @apply border-border fails — Tailwind v4 doesn't know custom vars via @apply. Use @theme inline to declare all HSL color vars, then use raw CSS instead of @apply for base styles.
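A stylesheet sketch of the workaround (variable names follow shadcn defaults; values are illustrative):

```css
@import "tailwindcss";

:root {
  --border: 240 5.9% 90%;   /* shadcn-style HSL channels */
  --background: 0 0% 100%;
}

/* Declare the color vars so Tailwind v4 knows them as theme tokens */
@theme inline {
  --color-border: hsl(var(--border));
  --color-background: hsl(var(--background));
}

/* Raw CSS instead of `@apply border-border` in the base layer */
@layer base {
  * { border-color: var(--color-border); }
  body { background-color: var(--color-background); }
}
```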
shadcn-svelte components need WithElementRef: The cn utility must also export WithElementRef and WithoutChildrenOrChild types for sidebar/rail components.
MDX in Content Collections: HTML (tables, callouts, step-lists) works inline in MDX. No need to convert to custom components yet — the prose CSS styles handle it.
Astro 5 Content Collections: Uses src/content.config.ts (not src/content/config.ts). The z import from astro:content shows deprecation warnings.
Tailwind v4: No tailwind.config.ts needed. Uses @tailwindcss/vite plugin. The @astrojs/tailwind integration conflicts — use the Vite plugin directly.
Remote agents failed: Overnight scaffold agents couldn't auth with GitHub. Local execution worked first try.
resource-candidates.json — 75 records (54 primary_gov + 19 accredited_private + 2 secondary_gov, zero aggregators, all decision_rule_category=1, all 18 required fields populated, all unique resource_ids). Covers federal grants + loans + debt-relief + administrative pathways + military benefits + tax-code + 8 employer programs + 15 state programs + reciprocity compacts + 12 niche private scholarships + 5 data-source records (Scorecard, IPEDS, BLS OEWS, BLS Projections, VA GI Bill Tool, HHS).
B.2 floor-multiplier-map.md — per-tool (10) floor anchor + typical-case stack (ad copy) + upper-bound stack ([MODELED], tool-output only). Floor-to-multiplier ratios 2x–40x+.
B.3 proprietary-rankings-research.md — 10 methodology candidates built from primary_gov data (Best Earnings / Value Working Adults / Completion Working Adults / Lowest Debt / Best Repayment / Veteran-Friendly / Pell-Recipient Outcomes / Credit Transfer / GI Bill Value Max / Employer Partner Schools). Each with data sources + inclusion/exclusion + weights (sum to 100) + refresh cadence + compliance framing + competitive moat + tool integration. Displaces third-party rankings entirely.
B.4 verticals/edu/research/authority-data-cache/ — scaffolded with 3 shared schemas (cached-record.schema.json v0.1.0 + provenance.schema.json + staleness-rules.json) + 9 sub-module manifests + 18 seed records + 10 ranking methodology spec JSONs = 53 JSON files across 8 sub-modules. Seed records: Pell 2026-27 $7,395 (PL 119-75), Direct Sub/Unsub undergrad 6.39%, Unsub grad 7.94%, PLUS 8.94%, VA Ch 33 national cap $29,920.95, Ch 30 MGIB-AD $2,518/mo, Ch 35 DEA $1,536/mo, IRS §127 $5,250 (student-loan-repayment made PERMANENT by OBBBA), HHS 2026 poverty guidelines full tables, + 15 state-aid programs + 2 reciprocity compacts (WICHE WUE + MSEP) + 2 pilot BLS OEWS SOC records (RN + Software Developers 2024 percentiles).
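The staleness halt reduces to a date comparison against per-category windows (the windows shown are illustrative, not the shipped staleness-rules.json):

```python
from datetime import date

# Illustrative max-age windows per record category (days)
STALENESS_RULES = {"loan_rates": 365, "poverty_guidelines": 400}


def is_stale(record: dict, today: date) -> bool:
    """A cached record past its category's max age halts generation
    until a refresh scrape lands."""
    retrieved = date.fromisoformat(record["retrieval_date"])
    return (today - retrieved).days > STALENESS_RULES[record["category"]]
```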
Operator verify-pass resolved all [VERIFY] flags via targeted Firecrawl scrapes. Operator-locked all 5 B.2 + all 7 B.3 open questions from v2 persona lens (institution-first rankings, Phase 8 program-level, 100-completer threshold, -15%/20% Parent PLUS penalty, dual YR treatment, two-tier employer-partner visibility, degreesources.com methodology hosting, hold at 10 methodologies for Phase 2).
Deliverable B ready for Gate 3 operator review alongside C/D/E (pending). Next session begins Deliverable C (2026 EDU Biosphere Market Study) + continuation work on B.4 state-aid (35 remaining states) + IPEDS/Scorecard pilot seeds + BLS OEWS top-50 expansion.
verticals/edu/tool-specs/. Shipped tool-trigger-audit.json (10 tools × 134 triggers, all 8 desire families F1–F8 covered, 2 new-tool gaps flagged for Phase 8), tool-multiplier-stories.md (per-tool floor/multiplier/source/desire-family + P5 cross-vertical pattern), adaptive-question-flow-spec.md (branch points + priority matrix), unit-economics-monetization-rules.md (Deliverable G — monetization rails + CPL bands + lead tiers + LTV + consolidated authority-data-cache infrastructure), platform-fit-rules.md (Deliverable H — 8 platforms × 3 concerns + conditional in-ad disclaimer render spec).
Applied P0/P1 patches: [VERIFY: authority-data-cache refresh] tags on federal loan rates + PLUS rate + Pell max + FAFSA deadline + OBBBA Public Law citation + CFPB specific report URL. 150% vs 225% poverty-line distinction for IDR plans (SAVE vs IBR/PAYE/ICR). Spouse §127 "both employers must have adopted plan" caveat. HR talking-points constrained to template-based placeholder-substitution only (no free-form LLM generation). SCAM CHECK 5-item exclusion pattern + scoring weights for Scholarship Finder.
Operator locked 12 Tier A + 10 Tier B + 8 Tier C + 10 Tier D Gate 2 decisions from 15-year media-buyer lens. C6 full-50-state revision applied after operator clarified Stage 3 Copy Factory is an agent-driven skill — all phased copywriter-depth scoping removed across 6 files; authority-data-cache/state-aid/ covers all 50 states at Phase 2 seed.
ADR 0009 written and accepted at docs/adr/0009-agent-driven-copy-factory-input-layer-thesis.md — formalizes agent-driven Copy Factory + research-rigor-as-throughput thesis. Affects campaign-forge + ContentForge + campaignforge-app.
claude-mem PreToolUse:Read hook patched — removed the single file-context hook that was truncating Read output to line 1 when files had prior observations. All other claude-mem functionality preserved. Idempotent re-apply script at ~/.claude-mem/disable-pretool-read-hook.sh. Takes effect next session.
Gate 2 closed. Next session begins Gate 3 — Deliverable B (resource excavation + floor/multiplier + proprietary rankings research, Firecrawl-heavy, 4 parallel batches).
verticals/edu/tool-specs/efc-calculator.md (~260 lines; structural template for the 9 remaining tool specs in Deliverable F — purpose, desire families, question set with dependencies, calculation pseudocode, data sources with authority tier, output format, compliance posture, ContentForge Svelte vs campaign-forge vanilla JS drift-check, mobile-load targets, adaptive-flow branch points). Session paused mid-Deliverable-F to resume a carried-over Cloudflare infrastructure task from the prior session.
Mapped 3-account CF architecture: infra hub (fourthright.io + project subdomains for CampaignForge/ContentForge/Plane/Directus/Metabase/Coolify + contentforge-1ei.pages.dev Pages project) chosen as long-term master; Keith's account (degreesources.com production + doinsilence.com side blog) + third fourthright account held for a future consolidation session.
Stood up Cloudflare Access gate on ds.fourthright.io — bound as custom domain to existing ContentForge Pages project in infra hub, OTP IDP, 24h session, 3 owner emails allow-listed. Resolved ACME-HTTP-01-intercepted-by-Access issue via delete-app → let-cert-issue (~15s) → recreate-app workaround.
Pushed public/_headers to RationalSeer/contentforge main (commit 935a020): host-scoped X-Robots-Tag: noindex, nofollow for contentforge-1ei.pages.dev only (Access-gated preview and future production hostnames unaffected). Cleaned up wasted Zero Trust org from Keith's account (earlier false start).
4 CF API tokens leaked in conversation — flagged for operator revocation.
Preview gated and indexing-protected. Next session resumes Phase 2 Session 2 Deliverable F (9 specs + audit + stories + adaptive flow) + Deliverables G + H.
compliance-angle-map.md — 9 parts translating every restricted claim / word / scan pattern into compliant tool-backed angles via 8 desire families (F1–F8). Part VII: disclaimers conditional on ad-claim content, not platform/format — tool-discovery framing routes compliance to the lander.
Gate 1 resolved 5 open questions with 3 strategic extensions: super tools via adaptive question-flow, proprietary rankings from public gov data (displacing third-party whitelist), consolidated authority-data-cache infrastructure spec (IPEDS + BLS + rankings + VA + Scorecard unified).
Workflow infrastructure hardened: (1) CLAUDE.md +3 workflow rules — Firecrawl-default for research, parallel tool calls, persona re-load every ~5 turns; (2) scripts/session-end.sh multi-repo helper built with interactive per-repo confirm + safe staging; (3) deep-research skill rewired to Firecrawl CLI (was looking for MCP tools that weren't installed); (4) continuous-learning-v2 observer enabled — had 7.7MB of observations accumulated but analysis was blocked by observer.enabled: false; (5) session-end protocol mandates Lessons Learned entries (restoring cadence that lapsed after session #9). CLAUDE.md tuned 41.5k → 39.8k chars.
Gate 1 closed. Session 2 bootstrap prepared. Infrastructure ready for long-haul sessions. Deliverables F + G + H next (pure synthesis, no external research).
trigger-library.json (134 entries, schema v0.2.0). All of Phase 1 research shipped.
Merges applied: 5 Paper 3 formal confirmations (1 triple-merge on specificity_concrete_number), 9 Paper 2 confirmations into P1 parents (operator-approved; tool_as_proof_mechanism + mental_accounting_horizon_alignment kept standalone).
Deadline tension resolved: 3 sources merged into canonical deadline_mechanic with deadline_reality: "real" | "manufactured" selector set at angle generation. Three source IDs retained as alias stubs.
Tier distribution: Tier 1: 8 (triple-confirmed families) · Tier 2: 106 · Tier 3: 19 (replication-contested or register-sensitive) · Anti-pattern: 1 (government_card_imagery_compliance_trap).
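The tier counts partition the 134-entry library exactly — a one-line consistency check makes that concrete:

```python
# Tier distribution from trigger-library.json (schema v0.2.0)
tiers = {"tier_1": 8, "tier_2": 106, "tier_3": 19, "anti_pattern": 1}

# Every entry carries exactly one tier, so the counts must sum to the library size.
total = sum(tiers.values())
```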
Schema v0.2.0 formalizes P2/P3 extension fields (proof_mechanism_required, shortcut_risk, replication_status) and adds tier, see_also, variant_of, merged_from. Removed paper_N_status — provenance carries it.
Phase 1 complete. Phase 2 (EDU Reality Map) unblocked. Canonical vocabulary locked for all downstream phases.
proof_mechanism_required: true.
Tension surfaced for Phase 1d: deadline_deweaponization (P3) vs scarcity_time (P1) — planned conditional-trigger resolution.
Voss's 7-38-55 / Mehrabian rule deliberately NOT elevated (replication caveat documented).
All three Layer-1 research papers locked. Phase 1d synthesis unblocked.
government_card_imagery_compliance_trap (CMS-prohibited Medicare pattern).
Both operator-approved same session.
| Repository | Purpose | Stack | Deploys To | Status |
|---|---|---|---|---|
| campaignforge-app | CampaignForge ops platform (workflow UI, pipeline execution, dashboards) | SvelteKit 2, Svelte 5, shadcn-svelte, Drizzle + Kysely, Postgres | DigitalOcean via Coolify | Deploy-Ready |
| contentforge | Content sites (degreesources.com + future verticals) | Astro 5, Svelte 5 islands, MDX, Tailwind 4 | Cloudflare Pages | W1 Done • Deployed |
| campaign-forge | Pipeline brain: skills, config, specs, vertical data, research | Python scripts, YAML/JSON config, Claude skills | Local / CLI | Active |
Meta / Google / TikTok ad → user clicks → lands on content site tool page
User uses EFC Calculator / Quiz / Finder → genuine value delivered
User clicks "Explore Programs" → routed via offer URL with tracking params
User fills form on partner portal → lead captured → Everflow attributes
$35 CPL per qualified lead → Winners Vault updated → next campaign informed
| Phase | Tasks | Focus |
|---|---|---|
| This Week | 22 | Spec review, social profiles, Meta Verified, entity resolution, pre-warming |
| Weeks 1-3 | 35 | Tracking live, platform MVP, ad accounts, organic posting, campaign structures |
| Weeks 4-6 | 26 | Full workflow UI, account warm-up, first live campaign, Winners Vault seeded |
| Weeks 7-9 | 28 | Lead intelligence, email capture, social automation, competitive intel, platform health |
| Weeks 10+ | 15+ | Ad platform APIs, CAPI firing, retro automated, ping/post routing, multi-vertical |