Pass 1.5 document 04 · 2026-05-07

Context-integration revision

Three claims from Dania's AEC codesign thesis folded into the Pass 1 spine — plus a deep-dive-grounded Intuit battle card and the explicit 9-vs-10 moat decision.

What this document is

Pass 1 produced the spine — thesis, framework, source plan, cast list, and 5 working battle cards. Pass 2 will produce the full theory chapters, the master matchup matrix, and the industry war games. Pass 1.5 sits in between: a targeted revision pass that integrates three load-bearing claims from Dania's AEC codesign thesis (next-gen-ai-codesign-abstract.md + the supporting manifesto) into the cast, archetypes, and moat-spine design. It also upgrades one of the five working battle cards (Intuit) from generic talking points to deep-dive-grounded specifics.

Three things changed structurally:

  1. Speckle reframed from file-format / data-translation protocol to API / data-gravity protocol — project-graph as system of record (Archetype B). The mechanism is no longer "translate Revit ↔ IFC ↔ DWG" — it's "make the project graph the system of record, reduce file formats to a serialization detail."
  2. Tenth moat added. Evaluator Judgment Power joins the spine as a peer to Process Power, Data Flywheel, and Agentic Workflow Lock-in. The 9-power spine is now a 10-power spine. The argument is below.
  3. Federated agent-translator network replaces "verticalized stack" framing for the AI-native generative-agent archetype (Archetype 4 · Decagon + Crescendo + Clay). The moat sits in the specialist node, not in the integration.

Plus one deliverable upgrade: the Intuit battle card was generic in Pass 1 ("80% SMB share, GenOS, Enterprise Suite push"). The Pass 1.5 version is grounded in the Intuit deep-dive folder (Companies/Intuit/) — see document 03 for the live card and the bottom of this document for the inline diff.

Refinement 1 — Speckle reframed (codesign claim #2)

Source: "File formats are no longer the moat. APIs, data gravity, and platform terms are — and agents are on a trajectory to route around those too, inside a 24-month window."

What changed in Pass 1 docs

Reasoning

The codesign-thesis read is that the format moat is decaying along three vectors simultaneously: regulatory mandates (IFC 4.3 ISO standardization 2024; 20+ countries with formal openBIM mandates), 2D/3D fidelity reconciliation (the 2D drawing isn't going away because it's the legally stamped artifact — and AI works across fidelity levels), and agent-mediated routing (Revit MCP servers, ODA SDK, DataDrivenConstruction extraction tooling). The format becomes a transport layer, not a knowledge layer.

That changes the Speckle attack story materially. Speckle isn't winning by giving you a translator ("now your Revit file opens in Rhino") — it's winning by making the live graph authoritative across tools so the file format question dissolves. The lock-in surface that's actually being defended is the API perimeter and data terms, which is exactly where Autodesk has started fortifying (tiered APS pricing in December 2025). Counter-positioning still holds as the primary moat — Autodesk literally cannot ship Speckle's product without converting Revit-license customers into shoppable accounts — but the mechanism is API/data-gravity, not file translation.

What flips at H1 / H2

Refinement 2 — Evaluator Judgment Power (codesign claim #3)

Source: "Generation is cheap. Judgment is the scarce resource. The moats are in the 'No,' not the 'Yes.'"

The decision: 10 moats, not 9

The choice is between (a) sharpening Process Power's definition to absorb judgment-as-moat or (b) introducing a 10th moat — Evaluator Judgment Power — as a peer to Process Power, Data Flywheel, and Agentic Workflow Lock-in.

Verdict: 10 moats. Adding Evaluator Judgment Power.

Why a peer, not a child of Process Power

Process Power (Helmer) is about replicable organizational know-how that can't be copied even if revealed. Toyota's production system is the canonical case. The moat is in the system's irreproducibility, even when the system is documented and visible.

Evaluator Judgment Power is about something structurally different: the institutional license to be wrong on someone else's behalf. Three things distinguish it from Process Power:

  1. Pricing dynamics are distinct. Process Power monetizes through cost-leadership margin (Toyota) or price premium for quality (TSMC). Evaluator Power monetizes through share-of-savings (the ESCO precedent), metered evaluation (per-eval pricing on judgment quality), and insurance-linked risk reduction (Smartvid.io / Newmetrix → workers' comp reduction; Tesla auto insurance using vehicle telemetry; cyber insurance underwriting integrated with SOC 2 platforms like Vanta). These are fundamentally different pricing surfaces: they require the vendor, not just the customer, to take risk on the outcome. That's a moat dimension Process Power doesn't address.
  2. The accountability surface is distinct. A plan reviewer's stamp, an engineer's E&O policy, a doctor's malpractice exposure, a bookkeeper's CPA license — these are licenses to refuse. The moat sits in who has the credible institutional standing to say "no" with weight, not in who can replicate the workflow. AI generation is becoming free; the right to be accountable for the answer isn't.
  3. The defensibility test is distinct. Process Power survives revelation — you can publish Toyota's playbook and still not become Toyota. Evaluator Power survives delegation — you can hand the customer a frontier model that generates the same answer 95% of the time, and the customer still pays for the judgment-bearing entity to certify the 5%. Different test, different moat.

What sharper Process Power would have looked like (and why it doesn't do the work)

The "absorb into Process Power" move would extend Helmer's definition to include "the embedded judgment layer of an organization, not just its replicable know-how." That's a real broadening, but it loses two things: the pricing distinction (share-of-savings, metered evaluation, insurance-linked) and the accountability/license dimension. Both are load-bearing for the AEC chapter (Chapter 11) and the AI-native vertical-co-pilot archetype (Harvey, Sierra, Pilot). The cost of clarity is one more chapter; the benefit is two extra analytical surfaces (pricing model design and accountability/license) that the paper otherwise can't address cleanly. Worth it.

What changes downstream

Trunk Tools — leave bundled or elevate now?

Decision: leave bundled inside Archetype C for Pass 1.5; elevate to standalone in Pass 2 with explicit Evaluator-Power stat dimension. Reason: building a sixth working battle card mid-revision violates the "Pass 1.5 is context-integration, not new construction" budget. The cast-list bullet for Archetype C now flags this explicitly so the elevation is queued, not forgotten. The deep-dive at Companies/Trunk Tools/ is rich enough to support a standalone card when Pass 2 builds it (TrunkSubmittal's 72% non-compliance rate at Gilbane, the Procore API revocation as platform-risk signal, the custom-LLM defensibility bet).

What flips at H1 / H2

Refinement 3 — Federated agent-translator network (codesign claim #5)

Source: "100 Davids, 1,000 translators." The verticalized end-to-end stack is a strawman; the real archetype is many small specialist agents + many translation/coordination layers between them.

What I found in the audit

Pass 1 documents do not contain explicit "verticalized stack" or "vertical integration" language as a framing for AI-native attackers. The assumption was implicit, however, in how Archetype 4 (generative agentic execution layer · Decagon + Crescendo + Clay) was described: the framing suggested these companies win by being end-to-end agent platforms rather than deep specialist nodes in a federated network.

What changed

Why this matters for moat analysis

Under the verticalized-stack frame, you'd analyze Decagon as a soup-to-nuts CX agent platform competing with Sierra and incumbent contact-center suites. Under the federated-network frame, you analyze Decagon as a specialist node whose moat is depth at one job (CX agent escalation calibration) plus the data flywheel that comes from running that one job at volume. The competitive question shifts from "can Decagon beat Salesforce end-to-end" to "can Decagon stay deeper at the escalation-calibration node than the surrounding agent-translator network can route around." That's a much sharper analytical question — and it's a Data-Flywheel + Evaluator-Power question, not a Switching-Costs question.

What flips at H1 / H2

Intuit battle card — Pass 1.5 upgrade

The full revised card is live in document 03. Inline summary of the diff below.

What was generic in Pass 1

Pass 1 leaned on directional talking points: 80% SMB share, "GenOS" as a black box, "Enterprise Suite push" with no mechanism, generic Brand + Data + Network moat tagging. The card was format-correct but analytically thin — it didn't carry deep-dive specifics that a reader could pressure-test.

What's deep-dive-grounded in Pass 1.5

Three specifics from Companies/Intuit/company-profile.md and Companies/Intuit/Research Briefs/competitive-brief-ies-vs-netsuite-manufacturing.md now anchor the card:

  1. IES Construction Edition is the only shipped vertical edition (Feb 11 2026, open beta). Manufacturing, healthcare, nonprofit, field services have dashboard-level KPI support only. The manufacturing edition is 12–18 months of platform engineering away — IES today has no BOMs, work orders, WIP accounting, serial/lot tracking, or barcode scanning. This is now in the card's stat block ("Vertical-Edition Depth (mfg): 2.5") and called out in the deep-dive footnote.
  2. The Enterprise Suite move is a counter-position at NetSuite, not "moving up-market." NetSuite has a structural 73% implementation-budget-overrun rate (Gartner-cited), $5–50M migration cost benchmark, $80–200K Year-1 mid-market total, and a known shop-floor adoption failure (production workers revert to spreadsheets). IES counter-positions on 30-day deploy, consumer-grade UX, and Anthropic-powered no-code agent SDK on the Intuit transaction graph. This is a Counter-Positioning moat, not a Brand moat (the Pass 1 card had Brand as primary; that was wrong).
  3. The Anthropic partnership is genuinely distinctive — but the window is closing. Multi-year, announced February 24 2026, Claude Agent SDK on IES, spring 2026 rollout. What makes the GenOS strategy distinctive is that fine-tuned financial agents ride on Intuit's transaction data graph (QuickBooks Payments, Payroll, Mailchimp, Credit Karma adjacencies) — that data shape is Intuit's, not Anthropic's. NetSuite can sign its own foundation-model partnership; it can't conjure Intuit's transaction flywheel. What's table-stakes: "fine-tuned financial LLM" alone is no longer a moat claim — Acumatica 2026 R1 shipped AI Studio (no-code LLM-to-screen) in March 2026.

Other changes

Quality-gate self-check

Gate · Status
Speckle no longer frames primary attack as file-format translation · Pass — reframed to API/data-gravity / project-graph-as-system-of-record across 00 + 03
9 OR 10 moats — explicit choice with explicit argument · Pass — 10 moats, Evaluator Judgment Power added; argument above
"Verticalized stack" audited and replaced where present · Pass — explicit Pass 1 language not present; implicit framing in Archetype 4 reframed to "node in a federated agent-translator network"
Intuit battle card references ≥3 specific facts from Companies/Intuit/ deep-dive · Pass — IES Construction-only / Feb 2026 open beta; +40% YoY IES growth; +50% QoQ new contracts; Anthropic partnership Feb 24 2026 + spring 2026 rollout; Acumatica 2026 R1 AI Studio competitive ceiling; INTU $369 / P/E 23x / IRS Direct File 25 states (≥6 specifics)
3–5 paper-wide asterisks discipline preserved · Pass — still 4 paper-wide (Glean vs MS Copilot; NetSuite vs AI-native ERP; Speckle vs Autodesk reframed; Suffolk vs owner-led IPD). Intuit vs Brex/Ramp/Puzzle remains the only Intuit-card asterisk.
H1 = 12mo, H2 = 3yr — no 5-year language · Pass — verified across edits
Revised HTML parses against existing shared/styles.css · Pass — uses existing battle-card / moat-badge / details.tradeoff / stats classes; new .moat-badge.evaluator token added to shared/styles.css for Pass 2 cards

Noticed but not actioned (for Dania)

One additional refinement surfaced in the codesign thesis that I'm flagging rather than silently expanding scope:

Codesign claim #4 — "Digital twins as currently built are frozen at handover. What AEC needs are Systems of Intelligence — knowledge graphs that update as construction conditions change." This claim has a real moat implication: it suggests the AEC field-capture archetype (Buildots + OpenSpace) is moat-thin if the data stops compounding at handover, and moat-thick if the data continues to feed a live System of Intelligence through operations. It also opens a possible fifth AEC archetype — "Construction-end living-graph" — distinct from Archetype B (project graph as system of record at design) and Archetype D (field-capture data flywheel during construction). I didn't act on this because (a) Pass 2 already has the Buildots + OpenSpace card on the docket and the codesign-thesis nuance can land in that card's Special Move and Battles table, and (b) adding a 5th AEC archetype would re-open the cast-list count and Dania has 14 cards locked. Flag for Dania: if the Pass 2 Buildots + OpenSpace card can't carry the live-graph distinction inside its Special Move, we should revisit whether to elevate it.

Files revised in Pass 1.5: 00-thesis-and-outline.html, 03-cast-list-with-card-sketches.html, shared/styles.css (added --moat-evaluator token + .moat-badge.evaluator class for Pass 2). Files untouched: 01-framework-recommendation.html (framework locked), 02-source-plan.html (sources unchanged at this pass).

Hand-off for Pass 2: the four matchups whose H2 calls remain genuinely close (the 4 paper-wide asterisks) survive Pass 1.5 unchanged. The new Evaluator-Power dimension lands as a 10th theory chapter in Part I and as a row + column in the 10×10 master matrix in Part II. The four cards that need explicit Evaluator-Power stat dimensions in Pass 2 are: Harvey + Sierra (legal-judgment + CX-escalation calibration), Pilot (managed-bookkeeping accountability margin), and Trunk Tools (now elevated to standalone — TrunkSubmittal's correct-refusal rate is the cleanest cast-wide Evaluator-Power exemplar).