Pass 1 → Pass 1.5 revisions (document 00)

Thesis and outline

Pass 1.5 (2026-05-07) integrated three claims from the codesign thesis: Speckle reframed from file-format translation to API/data-gravity (Archetype B); a 10th moat — Evaluator Judgment Power — added to the spine; and the AI-native generative-agent archetype reframed from "verticalized stack" to "node in a federated agent-translator network." Change-log and reasoning live in document 04.

Thesis

For thirty years, seven moats have explained most of what makes a company hard to dislodge: Network Economies, Scale Economies, Counter-Positioning, Switching Costs, Branding, Cornered Resource, Process Power. Different frameworks slice them differently (Helmer's 7 Powers, Porter's Five Forces, Greenwald's barriers); the primitives converge. If a company had real, durable competitive advantage, it almost always traced back to one or two of these.

AI breaks some of them. As A16z has argued, defensibility isn't inherent to data itself: the more data you collect, the more the cost of acquiring the next slice goes up, not down, and data plus scale doesn't on its own create a network effect. Several incumbents who thought they had a “data moat” actually had a static asset. The most direct casualty among the seven is Process Power: AI lowers the cost of copying organizational know-how, which is the thing Process Power says can't be copied even when shown. Procedural switching costs (training, UX habits) get compressed too; if an AI-native rebuild of the core loop is good enough, retraining pays for itself in weeks. The other classical moats bend rather than break, but the bends matter.

If incumbents keep playing on the same seven moats, AI-native entrants can win by differentiating along four new axes the seven don't capture:

  1. Data Flywheel. Each customer use makes the product measurably better at the next one. The moat is cumulative; a competitor needs both the same data and the same loops running in production, accumulating at the same rate. Cold-start losses compound, so the gap widens with every quarter the leader keeps spinning.
  2. Agentic Workflow Lock-in. The agent fits the customer's specific work so well that adoption becomes inevitable, and ripping it out means giving up the outcomes — faster turnaround, lower cost, less human effort. The lock-in is outcome dependency, not classical switching cost, and the gap between “with the agent” and “without it” widens with use.
  3. Evaluator Judgment Power. Once generation becomes a commodity, the scarce resource isn't the “yes.” It's a calibrated, accountable “no.” The moat sits in who has the authority and accountability to make a consequential judgment a customer is legally bound to, and the pricing surfaces that come with it (share-of-savings, metered evaluation, insurance-linked risk reduction).
  4. AI-Native Land-Grab. Nearly every industry has a blue-ocean AI-native opportunity with a closing window in the next three years. Whoever captures distribution first — a foundation model with reach, or a firm built AI-operational from day one — sets the terms. The second mover inherits higher CAC, a narrower remaining market, and a leader whose other three moats have already started spinning.
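The pricing surfaces named under Evaluator Judgment Power can be made concrete with a toy share-of-savings calculation. This is a hedged sketch: the function name, the dollar figures, and the 25% share are all illustrative assumptions, not market data or any vendor's actual terms.

```python
# Hedged sketch of a share-of-savings pricing surface for an evaluator.
# Every figure below (and the 25% share) is a made-up illustration.

def share_of_savings_fee(baseline_cost, evaluated_cost, share=0.25):
    """Vendor fee = a fixed share of verified savings; zero if no savings,
    so the customer always nets at least (1 - share) of what was saved."""
    savings = max(0.0, baseline_cost - evaluated_cost)
    return share * savings

# Hypothetical contract-review workload: $400k/yr of senior-review time
# replaced by $80k/yr of metered evaluation plus human escalations.
fee = share_of_savings_fee(400_000, 80_000)
customer_net = (400_000 - 80_000) - fee
assert fee == 80_000.0 and customer_net == 240_000.0
```

The point of the surface is alignment: the evaluator only earns when a consequential "no" (or a safe "yes") demonstrably saves the customer money, which is why it pairs naturally with carrying accountability for being wrong.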

The strategic question for the next three years: which of the seven moats is each incumbent leaning on, how fast is AI cutting into it, and can they grow one of the four new ones before a challenger does?

The non-obvious claim, and the one this paper defends, is that the moats most often celebrated as “AI-proof” (brand, scale, distribution) are only AI-proof when paired with a Data Flywheel or Agentic Lock-in mechanism that converts the static asset into a compounding one. A brand without a flywheel is a melting ice cube. Distribution without learned agentic context is a delivery channel for a commodity. The moats that survive are the ones that spin.

Outline

Part I — Theory: 7 prior moats and 4 new ones

One chapter per moat. Each chapter has the same five-part structure, so the reader can compare across moats without re-orienting. (Evaluator Judgment Power was added as the third AI-era moat in Pass 1.5; AI-Native Land-Grab was added as the fourth on 2026-05-08. See document 04 for the Pass 1.5 argument.)

Classical Moats: Helmer's 7

Each of Helmer's seven names a structural reason an incumbent stays ahead: a challenger either can't close the gap, or can't close it profitably.

  1. Scale Economies. A leader spreads fixed costs over a volume a smaller competitor can't match. Per-unit costs stay lower than the challenger's, so the price the leader can profitably charge is one the challenger can't survive.
  2. Network Economies. Each additional user makes the product more valuable to every other user. A challenger has to coordinate a critical mass of users switching at once, because no individual user gains from leaving alone.
  3. Counter-Positioning. A challenger adopts a business model the incumbent can't copy without destroying its own profit pool. The incumbent stays put because cannibalizing itself costs more than ceding the lane.
  4. Switching Costs. Customers face real costs (money, time, retraining, lost data) when they leave. A challenger has to deliver enough value to clear the switching cost, not just the headline price gap.
  5. Branding. Customers pay more because they trust what the brand stands for, not just what the product does. That trust took years to build, and a challenger can't shortcut it with marketing spend.
  6. Cornered Resource. The company controls something competitors literally can't get: an exclusive license, a long-term contract, a specific piece of land, a small group of essential people. There's no second copy of the resource, and no way to manufacture access.
  7. Process Power. A way of working that delivers measurable advantage and resists copying even when competitors can see how it's done. The know-how lives in habits, decisions, and organizational structure rather than documents, which is why Toyota's production system has been visible for decades and copied by no one.

Emergent Moats: 4 new in the AI era

AI commoditizes the cognitive labor that used to make incumbents expensive to compete with, so durable advantage moves to what cheap cognition can't generate on its own: compounding loops between use and improvement, customer outcomes good enough that ripping the system out is unthinkable, codified judgment that bears its own accountability, and speed in capturing AI-native distribution before the window closes.

  1. Data Flywheel. Data and scale combine into a moat when each customer use generates training signal that improves the product on the next use, and that improvement is visible enough to attract more use. The compounding lives in the closed loop, not in dataset size: a static archive of 10x more rows doesn't help, but a live loop where the next user's edge case hardens the system's response to similar edge cases does. A challenger can't catch up by acquiring more data, because the signal that matters lives in the leader's history of use, not in raw rows that can be bought or scraped.
  2. Agentic Workflow Lock-in. The agent fits the customer's specific work so well that adoption becomes inevitable: the outputs are good enough, the human effort is low enough, and the operating cost is low enough that going back to the prior way of working is unacceptable. The lock-in isn't a classical switching cost; it's outcome dependency. The customer would lose measurable value (turnaround, headcount, error rate, quality) by ripping the agent out, and as the agent accumulates customer-specific context, the gap between “with the agent” and “without it” widens. The cost of leaving rises with use, not with calendar time.
  3. Evaluator Judgment Power. Most of the value in expert work lives inside the heads of senior professionals: the partner-level lawyer who flags an unusual indemnification clause, the engineer who overrides a generative design that's structurally borderline, the underwriter who refuses a loan that scores “approve.” The moat sits in turning that judgment into something a system can apply at machine scale while still carrying the accountability for being wrong. The barrier has two parts: the codified judgment itself (the rules that took years of cases to refine, the calibrated thresholds, the escalation patterns) and the institutional license to make consequential calls a customer is legally bound to. Competitors can buy compute and hire engineers; they can't shortcut the case history that taught the system when to refuse.
  4. AI-Native Land-Grab. Nearly every industry has a blue-ocean AI-native opportunity that didn't exist three years ago and won't be open much longer. The race in the next three years is to capture distribution before the lane fills, and there are two kinds of winners: a foundation model with broad-enough reach to fan out into vertical workflows (OpenAI sliding into legal research without ever calling itself a legal company), or a vertical-specific firm built to operate AI-natively from day one (Harvey in legal, Sierra in CX). The moat compounds along two dimensions: distribution captured during the race, and the organizational know-how of running an AI-operational company, which doesn't retrofit into one built for human-only execution. Once a category is decided, the second mover faces a higher CAC, a narrower remaining market, and a competitor whose other moats (Data Flywheel, Agentic Lock-in) have already started spinning.
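The Data Flywheel's compounding claim can be sketched as a toy model. All parameters here are illustrative assumptions, not benchmarks: the leader runs a live use → signal → quality loop, while a rival buys a one-time static archive holding 10x the leader's starting data.

```python
# Toy model of the Data Flywheel claim (illustrative assumptions only):
# each quarter's usage adds training signal, signal lifts quality, and
# quality attracts more usage. The rival's one-time 10x data purchase
# sets quality once and never moves again.

def flywheel_gap(quarters=10, lift_per_use=0.004, base_users=100, growth=0.05):
    """Return the leader's quality lead over the static-archive rival,
    quarter by quarter (negative = leader still behind)."""
    quality, users = 1.0, float(base_users)
    static_quality = 1.0 + 10 * base_users * lift_per_use  # one-time 10x dump
    gaps = []
    for _ in range(quarters):
        quality += users * lift_per_use        # live use accrues signal
        users *= 1 + growth * (quality - 1.0)  # better product attracts use
        gaps.append(quality - static_quality)
    return gaps

gaps = flywheel_gap()
# Starts behind the 10x archive, but the closed loop compounds and
# eventually overtakes it.
assert gaps[0] < 0 < gaps[-1]
```

The exact numbers are arbitrary; the structural point is that the gap series is monotonically rising, which is what "the gap widens with every quarter the leader keeps spinning" means when the improvement loop, not the dataset size, is the asset.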

Part II — AEC software war game

The single battleground the paper carries to depth. Battle-card stack and an industry-level “so what.”

Three other industry war games (AEC general contractors, Accounting, AI-native horizontal) were extracted as future-expansion lanes. See archive of future expansion chapters.

Part III — Synthesis

Appendix A — The master matchup matrix

Moved out of the main spine on 2026-05-08 because the AEC war game is what carries the paper's analytical weight. Kept here as supporting apparatus for readers who want the moat-vs-moat fundamentals.

Decisions locked (Dania, 2026-05-07)

Five decisions locked from Dania's 2026-05-07 ruling. Listed here for the audit trail; the rest of the document is updated to reflect them.

  1. Contrast probe = Autodesk. Bentley and Adobe dropped. The argument hangs on the file-format + seat-licensing moat under attack from generative design and open-format AI tools, with cannibalization risk on an AI-native re-platform.
  2. Scope narrowed to AEC software (2026-05-08). The original plan war-gamed three other industries (AEC general contractors, Accounting, AI-native horizontal); those were extracted on 2026-05-08 as future-expansion lanes (see archive). The paper now carries one battleground — AEC software — to depth.
  3. AEC software = 4 attack archetypes. Higharc and Speckle pre-pinned. Final 4 selected — see below.
  4. Horizons: H1 = 12 months, H2 = 3 years. Strike all 5-year language; tighten H2 throughout.
  5. Asterisk discipline: 3–5 across the entire paper. Reserved for the genuinely close, load-bearing tradeoffs. Most matchups resolve cleanly to WIN/LOSE/TIE.

Archetype selections (locked for Pass 2)

AEC software — four attack archetypes

  1. Contract / commercial-relationship disruption. Canonical: Higharc. Attack vector: redefine the homebuilder ↔ buyer ↔ contractor commercial structure, not just the design tool.
  2. API / data-gravity protocol — project-graph as system of record. Canonical: Speckle. Adjacent: open-format generative tools (Snaptrude). Attack vector: not file-format translation. The mechanism is to make the project graph (rooms, walls, systems, versions, contributors) the system of record across tools, so the file format collapses to a serialization detail. The lock-in surface that's actually being attacked isn't RVT-the-binary; it's API access, data terms, and whose graph the project commits to. Re: codesign-thesis claim #2 — "file formats are no longer the moat. APIs, data gravity, and platform terms are."
  3. Document / spec / risk AI agent. Trunk Tools + Document Crunch, bundled. Attack vector: AI agents parsing specs, RFIs, contracts, schedule — extracting intent and automating the document-bound knowledge work.
  4. Field-capture data flywheel. Buildots + OpenSpace, bundled. Attack vector: jobsite computer vision + sensor data compounding into a proprietary construction-progress dataset. (Picked because it's the only AEC archetype that builds a Data Flywheel moat from physical-world capture — orthogonal to the other three, and gives the chapter a flywheel exemplar.)

New open questions for Dania (discovered during research)

Open questions surfaced while building the archetype framing. Each has a recommendation; respond “yes, do it” or “no, pick the alternative.” (A third question about Cursor / AI-native UX rebuild was extracted on 2026-05-08 with the AI-native horizontal chapter — see archive.)

  1. Q1. Autodesk technical-defensibility question — handoff to Priya before Pass 2?

    The Autodesk argument depends on a technical claim I should not assert without Priya's view: is generative design (text/sketch → BIM) likely to clear the "production-ready" bar within H2 = 3 years, or does it stall at concept design? That answer changes whether Autodesk's Revit moat decays linearly (commodity attack) or non-linearly (structural collapse). Recommendation: ping Priya before Pass 2 with this exact question. If she says "stalls at concept," the Autodesk argument softens; if "production-ready by H2," the argument hardens.

  2. Q2. Source-gap flag — AEC field-capture economics.

    Buildots and OpenSpace publish customer counts and a few case studies, but I can't find a credible third-party benchmark on (a) how much proprietary data their flywheels actually accumulate per customer per year, and (b) whether that data has decreasing or increasing marginal value past year 1. The flywheel thesis depends on this. Recommendation: in Pass 2, treat the field-capture flywheel as "thesis-grade, not benchmarked" — argue the mechanism, flag the data gap, give the asterisk an explicit "I'd update toward [X] if I saw [Y]" note. Cleaner than asserting a flywheel that may or may not have escape velocity.

Hand-off note for Priya: three technical questions I'm flagging rather than asserting on. (a) Is fine-tuning technically defensible past the 12-month horizon, given continued base-model capability gains? (b) For agentic lock-in: how durable is "agent memory" as a moat once MCP-style portability standards mature? (c) For Autodesk: does generative design (text/sketch → BIM) clear "production-ready" by H2 = 3 years, or stall at concept design? The third shapes the AEC software chapter materially; the first two shape Chapters 8 and 9. I want her view before Pass 2.