Pass 2 · superseded Part V · updated analysis · chapter 13

The new fights happen through three new dimensions

Working draft of the synthesis. The canonical version — the “It's an AI land-grab” section of the consolidated paper — replaces this chapter as the framework spine.


The shape of the new fights

The seven classical moats from chapter 08 still describe most fights. But they describe none of the three specific kinds of fight that AI-era attackers now run by default. Each new fight has a primary mechanism that does not collapse into any of the seven classical types: a compounding loop on workflow data that runs through a model rather than between users; a lock-in that lives in an agent's learned calibration rather than in procedural retraining; a wedge of pricing power that comes from the institutional license to refuse rather than from reputation. Chapter 09 argued why each of these is a peer to a classical moat rather than a sub-type. This chapter shows what the resulting fights look like.

The structure below is parallel for each AI-era moat. What new attack does it enable? What classical moat does it cut through? Where has it played out in real fights? Then, four claims that capture how the picture is now structurally different from the one chapter 08 described.

(a) Data Flywheel — the loop on decision-bearing data

An attacker who captures workflow decisions at high velocity — what users accepted, rejected, refused, edited — and runs them through a model loop the foundation-model layer can't absorb ends up with a quality gap that compounds with time in deployment, not with capital deployed.

What new attack this enables

The new attacker doesn't need a network in the classical sense. They need a workflow they can sit inside and the right to capture what their users decided as a byproduct of using the product. Each cycle — user makes a decision, model learns from it, product gets better, more users come — widens the leader's quality gap against any challenger. Below a certain threshold of accumulated usage (chapter 10 calls it the good-enough threshold), the loop is fragile and a well-funded competitor can match it. Above that threshold, the loop's feedback velocity outruns the kind of capital deployment any challenger can mount, and catching up requires more money than challengers usually have.
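The threshold dynamic above can be sketched as a toy simulation. Every number here is an illustrative assumption, not a measurement: the leader's quality grows with accumulated usage, usage grows with quality (the loop), and the challenger buys a flat quality improvement per period with capital.

```python
def simulate(periods: int, leader_usage: float, threshold: float) -> list[tuple[float, float]]:
    """Return (leader_quality, challenger_quality) per period. Toy model only."""
    leader_q, challenger_q = 0.5, 0.5
    usage = leader_usage
    history = []
    for _ in range(periods):
        # Below the good-enough threshold the loop barely moves quality;
        # above it, each period's accumulated usage compounds into quality.
        gain = 0.01 if usage < threshold else 0.01 * (usage / threshold)
        leader_q += gain
        usage *= 1 + leader_q * 0.1      # better product -> more usage next period
        challenger_q += 0.015            # capital buys a fixed improvement per period
        history.append((leader_q, challenger_q))
    return history

below = simulate(periods=20, leader_usage=50, threshold=1000)    # loop never ignites
above = simulate(periods=20, leader_usage=2000, threshold=1000)  # loop compounds
```

Under these assumed parameters, the well-funded challenger ends ahead in the below-threshold run and hopelessly behind in the above-threshold run, which is the inversion the paragraph describes.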

"Decision-bearing" is the load-bearing word. A corpus of events — clicks, page views, sensor exhaust — is the substrate, not the moat. A corpus that records what an expert user chose in a real workflow is the moat. The distinction is what makes this different from generic data-gravity arguments, and what makes the loop privileged from foundation-model absorption: the foundation-model layer can absorb any text, but it cannot absorb a corpus of expert choices it never saw.

What classical moat it cuts through

It cuts through the data-network sub-type of Network Economies, which Helmer's framework grouped with two-sided and direct networks under one mechanism. Once foundation models can extract value from any reasonably-sized proprietary corpus, the data-network case is no longer the smallest member of the network family — it is its own moat with its own threshold dynamics. It also cuts through Process Power on the production side. In domains where the know-how decomposes into model-trainable patterns, what stays scarce is whatever the model loop cannot absorb. Generation gets cheap; the moat moves to the inputs the model never sees.

Worked example: Higharc in tract residential

Higharc sells production homebuilders a configurator-driven design-to-sales tool: the home model, the sales option set, the contract pricing, and the construction documents are all written into a single per-home record. Every time a builder configures a home, prices an option, locks a buyer in at signing, or pushes a change order, that decision is captured in a corpus that no foundation-model lab can absorb because it never gets to see it. Once the corpus crosses the good-enough threshold for tract residential, Autodesk's Revit moat cannot hold the typology — not because Revit got worse, but because the per-home decision corpus is privileged to Higharc and the unit economics of homebuilding now run through it. Buildots and OpenSpace are running a parallel field-capture flywheel: tens of thousands of jobsite-image-to-BIM comparisons, accumulated as a byproduct of weekly walks, that no general computer-vision model has access to.

The MECE distinction matters here. Higharc's flywheel is a data property moat — the corpus records expert choices and compounds them. It is not the same as the institutional standing to refuse with weight. A company can have one without the other; the strongest cases have both. Section (c) below covers the second.

(b) Agentic Workflow Lock-in — the moat in the calibration

An AI-native attacker whose product accumulates tool-graph, translation memory, and escalation calibration in a customer's daily workflow ends up with a lock-in surface that is portable in name only. A competing agent runtime can read the tool-graph; it cannot read the calibration.

What new attack this enables

The agent's lock-in is dynamic where classical Switching Costs are static. Classical switching is paid once at migration — the customer eats a procedural retraining cost and amortizes it. Agentic lock-in compounds with every interaction: which tools the agent has learned to call, in what sequence, against what calibration of when to escalate, when to refuse, when to chain. (Tool-graph means the recorded set of downstream APIs and tools the agent learned to use, in what order. MCP-style protocols — Anthropic's Model Context Protocol and similar — let one agent runtime read another's tool list, but the calibration is opaque to them.) An attacker who wins the daily-default position for the first six months walks away with a calibration depth that a competing agent has to rebuild from zero, in production, while running.

The cost of leaving in month 24 is materially higher than in month 6, because the agent has accumulated more calibration. This changes both the defender's investment math and the attacker's timing.
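A minimal back-of-envelope sketch of that timing claim, with invented parameters: a paid-once migration fee (the classical switching cost) plus a calibration-rebuild cost that scales with the interactions the agent has accumulated.

```python
MIGRATION_FEE = 50_000                 # classical, paid-once switching cost (assumed)
INTERACTIONS_PER_MONTH = 10_000        # assumed daily-default usage
REBUILD_COST_PER_INTERACTION = 0.50    # assumed cost to re-learn one decision's calibration

def cost_to_leave(months_in: int) -> float:
    """Total cost of switching away after `months_in` months of agent calibration."""
    accumulated = months_in * INTERACTIONS_PER_MONTH
    return MIGRATION_FEE + accumulated * REBUILD_COST_PER_INTERACTION

month_6 = cost_to_leave(6)    # 50_000 + 60_000 * 0.50  = 80_000
month_24 = cost_to_leave(24)  # 50_000 + 240_000 * 0.50 = 170_000
```

The classical component is flat; the calibration component is the moving part, which is why the attacker's timing window closes and the defender's investment math improves with every month of daily-default use.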

What classical moat it cuts through

The naïve read of AI-era enterprise software says agentic attackers eat embedded SaaS — classical Switching Costs collapse against AI-rebuilt UX. The moat-mechanic analysis says the opposite where the embedded SaaS sits in regulated or deeply integrated domains. Agentic Lock-in composes multiplicatively with classical Switching Costs in those domains: the customer pays the integration cost once for the embedded platform, and then the agent compounds calibration on top of paid-once integration work. Embedded SaaS plus agentic calibration on top is more defensible than either alone. This is the most important analytical finding of the original 10×10 grid this chapter replaces; the matchup that the naïve read flagged as Attacker (Agentic) flips to Defender (Switching plus Agentic) once the composition is taken seriously.

Worked example: Decagon, Crescendo, Clay as federated specialist nodes

Decagon in customer experience, Crescendo in support automation, and Clay in go-to-market data each operate as a specialist node in a federated network of agent-translators (the chapter 09 reframe). They are not vertically integrated stacks; they are deep at one job, stitched together with other agents through translation layers. The lock-in lives in the node's calibration — how Decagon learned this customer's escalation patterns, how Crescendo learned which complaints to auto-resolve and which to route, how Clay learned which prospect signals matter for this account team. Cursor, in developer tooling, runs the contested-defender version of the same play: every accept and reject of a code completion is calibration data, and at $2B ARR by Q1 2026, Cursor has accumulated more daily-default position than any challenger can recover at parity model quality. The contested part is whether GitHub Copilot's distribution can prevent Cursor from reaching the calibration depth that locks the position in.

(c) Evaluator Judgment Power — the institutional license to refuse

Generation is becoming free. Judgment isn't. The new moat is the institutional standing to certify, refuse, or stand behind an output a customer relies on — backed by licensure, malpractice exposure, or an errors-and-omissions policy.

What new attack this enables

An attacker (or incumbent) with the institutional standing to refuse with weight controls the moat that survives delegation. Hand the customer a frontier model that produces the same output ninety-five percent of the time, and they still pay the judgment-bearing entity to certify the five percent — because the consequence of being wrong on that five percent is severe enough that the customer wants someone else contractually accountable for it. The pricing surface that goes with this moat is qualitatively distinct from the rest of the framework. Where Process Power monetizes through margin on cost of production, and Brand monetizes through willingness-to-pay premium, Evaluator Power monetizes through share-of-savings (the energy-services-company precedent — vendor takes a cut of measured savings), metered evaluation (per-evaluation pricing on judgment quality), and insurance-linked risk reduction (the vendor's telemetry feeds the insurer's underwriting). Each of these requires the vendor to take outcome risk, not just feature risk.
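The three pricing surfaces can be sketched as toy revenue functions. All figures, shares, and rates below are invented for illustration; the structural point is that each surface prices the customer's outcome risk rather than a seat or a feature.

```python
def share_of_savings(measured_savings: float, vendor_share: float) -> float:
    # ESCO-style: vendor takes a cut of savings it can measure.
    return measured_savings * vendor_share

def metered_evaluation(evaluations: int, price_per_eval: float) -> float:
    # Per-evaluation pricing on judgment quality.
    return evaluations * price_per_eval

def insurance_linked(premium: float, risk_reduction: float, vendor_share: float) -> float:
    # Vendor telemetry reduces the insurer's priced risk;
    # vendor takes a share of the premium reduction.
    return premium * risk_reduction * vendor_share

revenue = (
    share_of_savings(1_000_000, 0.25)         # 250_000
    + metered_evaluation(5_000, 12.0)         # 60_000
    + insurance_linked(400_000, 0.125, 0.5)   # 25_000
)                                             # 335_000 total
```

None of these terms can be written without the vendor being contractually on the hook for the outcome, which is the sense in which the moat monetizes outcome risk rather than feature risk.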

What classical moat it cuts through

Process Power survives revelation but not delegation. Evaluator Power survives delegation: that is the test that makes it a peer moat to Process, not a sub-type. It also cuts through Brand. Both Brand and Evaluator Power are trust-as-asset, but they are mechanically different. Brand is trust-as-asset on the customer-perception side: the customer's shortcut around information asymmetry at the moment of purchase. Evaluator Power is trust-as-license on the regulatory-and-liability side: the institutional standing to be contractually accountable for being wrong on someone else's behalf. A trusted brand without an evaluator surface cannot price share-of-savings or insurance-linked reduction, because it cannot transfer liability. Big-name accounting firms have both; pure evaluator-power firms (specialty plan reviewers, insurance-linked safety vendors) have only the second; pure-brand firms (luxury durables, consumer media) have only the first. The defensibility geometry is different.

Worked example: Trunk Tools in submittal review

Trunk Tools' submittal-review product, deployed at Gilbane, found that 72% of submittals were non-compliant with the project specification. The product's value isn't the 72% it processed — it's the 72% it correctly refused. The customer is buying calibrated refusal backed by accountability, not generation. Pilot in accounting runs the same pattern in a different domain: managed-bookkeeping margin is already accountability-bearing, and the customer pays for the judgment-bearing review, not for the AI doing the books underneath. The same shape shows up wherever the consequence of being wrong is severe: Harvey in legal, Sierra in customer experience, the Big Four audit firms in financial certification.

Four loud claims that capture the reshape

Claim 1 — Data Flywheel beats Speed once spinning.

The classical "speed-as-moat" reading dies once a flywheel reaches escape velocity. Speed is a relevant variable below the good-enough threshold — the leader is racing to cross it, and a faster competitor can cross first. Above the threshold, the dynamic inverts: the leader's feedback velocity is the speed that matters, and it is the product of the loop, not the team. A scale-leader incumbent with deep capital but no privileged corpus loses to the flywheel attacker that has crossed, because capital cannot buy the cycles of expert decisions it never captured. The Glean-versus-Microsoft-Copilot fight at the three-year horizon is the cleanest test in front of us: Microsoft has the scale, distribution, and capital; Glean has the twelve-to-eighteen-month knowledge-graph maturation curve. If the curve is real, Glean's flywheel beats Microsoft's scale on the daily-default segment; Microsoft retains the seat-licensing tail. Speed-as-moat would predict the opposite.

Claim 2 — Switching Costs are amplified, not replaced, by Agentic Workflow Lock-in.

This is the multiplicative-composition claim, and it is the most consequential structural finding in this chapter. In regulated or embedded domains — ERP, EHR, banking, deeply integrated AEC platforms — the classical financial migration cost and the agentic calibration cost compound rather than substitute. The customer pays for the integration once; the agent then compounds calibration on top, against that integration, every interaction. A unit of switching-cost investment buys more durable lock-in once an agent is calibrating against the integration than it ever did against a feature surface. This is why Plaid plus AI agents is the canonical projected example: Plaid's Cornered-Resource-plus-Switching-Cost stack is already structurally resistant to AI-native attack, and an agent runtime that sits on top of the regulated integrations amplifies the embedded moat rather than displacing it. The naïve read says AI-native attackers eat embedded SaaS; the moat-mechanic reading says embedded SaaS plus agentic calibration on top is more defensible than either alone.
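The compound-rather-than-substitute arithmetic can be sketched with invented units of "lock-in": calibration accrues against the integration surface, so the deeper the paid-once integration, the more each month of agent calibration is worth.

```python
def composed_lockin(months: int, integration: float, calibration_rate: float) -> float:
    # Calibration compounds *against* the integration, not beside it:
    # the integration surface multiplies every month of calibration.
    return integration * (1 + calibration_rate * months)

embedded_only = composed_lockin(24, integration=100.0, calibration_rate=0.0)    # 100.0
agent_only    = composed_lockin(24, integration=10.0,  calibration_rate=0.125)  # 40.0
composed      = composed_lockin(24, integration=100.0, calibration_rate=0.125)  # 400.0
```

With these assumed parameters the composed position is worth far more than the sum of the two stand-alone positions, which is the multiplicative rather than additive composition the claim names.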

Claim 3 — Counter-Positioning strengthens in proportion to incumbent AI investment.

This is the unintuitive one. Classical theory has Counter-Positioning as a one-shot bind: the incumbent either responds and breaks the bind, or doesn't and loses. The AI-era reading is sharper. Every dollar the incumbent spends on AI capability lands on top of the legacy pricing model and accelerates the cannibalization the incumbent already could not afford. Autodesk's tiered API pricing in December 2025 is the tell — the incumbent fortifies the API perimeter (the new lock-in surface) without re-pricing the seat license (the legacy bind). The bind tightens. Higharc's per-home pricing in tract residential, Joist AI's per-outcome pricing in field execution, and any consumption-based pricing surface in a seat-licensing incumbent's segment all live on the same dynamic. AI investment by the incumbent is a force multiplier for the counter-positioner, not a defensive move. This is the rare moat the AI era amplifies rather than compresses.

Claim 4 — Evaluator Power beats Generation Power in any high-stakes professional-services domain.

Where the consequence of being wrong is severe — life-safety, fiduciary, regulatory, insurable risk — the customer pays for the calibrated "no" and the institutional accountability that goes with it, not for the "yes" the model produced. This is the cleanest test of why Evaluator Power is a peer moat rather than a Brand variation: the customer's willingness-to-pay surface is the contractual transfer of liability, not the trust shortcut. Harvey, Sierra, Trunk Tools, and Pilot all sit on this dynamic in their respective domains. The structural prediction the classical 7×7 grid couldn't make: brand without an accountability surface is a melting ice cube against AI-era volume; brand with an accountability surface is a defensible Evaluator-Power business in disguise. The codesign-thesis claim — generation cheap, judgment scarce — is sharpened here into a matchup verdict.

What this sets up

The seven classical chapters set the baseline. The three new dimensions reshape it. Together they are the framework the rest of the paper applies. The next part of the paper takes that framework into AEC software, where the new fights play out through real attackers — Higharc in tract residential, Speckle on the project-graph layer, Trunk Tools in submittal review, Buildots and OpenSpace in field capture, Snaptrude in design — plus archetype-borrowed plays running on horizontal AI-native DNA. Part VI walks the battle cards. Pass 2b will run the AEC software war games on top of those cards.

Sources: synthesis original; supporting evidence drawn from chapters 10–12 (the three AI-era moats) and the cast files in Pass 1 document 03 and Pass 2 chapter 14. The Trunk Tools / Gilbane 72%-correct-refusal rate is from Trunk Tools' published case material; Cursor's $2B ARR (Q1 2026) from The Information; Higharc and Buildots / OpenSpace company facts from the Pass 2 battle cards.