1. Abstract
The seven classical moats (Network Effects, Scale Economies, Counter-Positioning, Switching Costs, Branding, Cornered Resource, Process Power) won't get you to a defensible position in a post-AI world. You need four new ones: Data Flywheel, Agentic Workflow Lock-in, Evaluator Judgment Power, and AI-Native Land-Grab. These aren't subcategories of the classical seven; they're distinct, emergent strategic moats for a world of AI. Incumbents can lean on the classical seven for a while, but only a while: the new four generate adoption, customer outcomes, compounding learning, and category capture faster with AI than any of the older powers can. AI compresses moats whose barrier is human-encoded process or learned habit; it amplifies those built on proprietary workflow data, agent memory, the authority and accountability to make a consequential judgment a customer is legally bound to, and the speed to capture an AI-native category before the window closes.
2. Theory — 7 prior moats and 4 new ones
One section per moat. Each has the same two-part structure — what it is and how it holds — so the reader can compare across moats without re-orienting.
Classical Moats: Helmer's 7
Each of Helmer's seven names a structural reason an incumbent stays ahead: a challenger either can't close the gap, or can't close it profitably.
- Scale Economies. A leader spreads fixed costs over a volume a smaller competitor can't match. Per-unit costs stay lower than the challenger's, so the price the leader can profitably charge is one the challenger can't survive.
- Network Economies. Each additional user makes the product more valuable to every other user. A challenger has to coordinate a critical mass of users switching at once, because no individual user gains from leaving alone.
- Counter-Positioning. A challenger adopts a business model the incumbent can't copy without destroying its own profit pool. The incumbent stays put because cannibalizing itself costs more than ceding the lane.
- Switching Costs. Customers face real costs (money, time, retraining, lost data) when they leave. A challenger has to deliver enough value to clear the switching cost, not just the headline price gap.
- Branding. Customers pay more because they trust what the brand stands for, not just what the product does. That trust took years to build, and a challenger can't shortcut it with marketing spend.
- Cornered Resource. The company controls something competitors literally can't get: an exclusive license, a long-term contract, a specific piece of land, a small group of essential people. There's no second copy of the resource, and no way to manufacture access.
- Process Power. A way of working that delivers measurable advantage and resists copying even when competitors can see how it's done. The know-how lives in habits, decisions, and organizational structure rather than documents, which is why Toyota's production system has been visible for decades and never successfully copied.
Emergent Moats: 4 new in the AI era
AI commoditizes the cognitive labor that used to make incumbents expensive to compete with, so durable advantage moves to what cheap cognition can't generate on its own: compounding loops between use and improvement, customer outcomes good enough that ripping the system out is unthinkable, codified judgment that bears its own accountability, and speed in capturing AI-native distribution before the window closes.
- Data Flywheel. Data and scale combine into a moat when each customer use generates training signal that improves the product on the next use, and that improvement is visible enough to attract more use. The compounding lives in the closed loop, not in dataset size: a static archive of 10x more rows doesn't help, but a live loop where the next user's edge case hardens the system's response to similar edge cases does. A challenger can't catch up by acquiring more data, because the signal that matters lives in the leader's history of use, not in raw rows that can be bought or scraped.
- Agentic Workflow Lock-in. The agent fits the customer's specific work so well that adoption becomes inevitable: the outputs are good enough, the human effort is low enough, and the operating cost is low enough that going back to the prior way of working is unacceptable. The lock-in isn't a classical switching cost; it's outcome dependency. The customer would lose measurable value (turnaround, headcount, error rate, quality) by ripping the agent out, and as the agent accumulates customer-specific context, the gap between “with the agent” and “without it” widens. The cost of leaving rises with use, not with calendar time.
- Evaluator Judgment Power. Most of the value in expert work lives inside the heads of senior professionals: the partner-level lawyer who flags an unusual indemnification clause, the engineer who overrides a generative design that's structurally borderline, the underwriter who refuses a loan that scores “approve.” The moat sits in turning that judgment into something a system can apply at machine scale while still carrying the accountability for being wrong. The barrier has two parts: the codified judgment itself (the rules that took years of cases to refine, the calibrated thresholds, the escalation patterns) and the institutional license to make consequential calls a customer is legally bound to. Competitors can buy compute and hire engineers; they can't shortcut the case history that taught the system when to refuse.
- AI-Native Land-Grab. Nearly every industry has a blue-ocean AI-native opportunity that didn't exist three years ago and won't be open much longer. The race in the next three years is to capture distribution before the lane fills, and there are two kinds of winners: a foundation-model company with broad enough reach to fan out into vertical workflows (OpenAI sliding into legal research without ever calling itself a legal company), or a vertical-specific firm built to operate AI-natively from day one (Harvey in legal, Sierra in CX). The moat compounds along two dimensions: distribution captured during the race, and the organizational know-how of running an AI-operational company — which doesn't retrofit into one built for human-only execution. Once a category is decided, the second mover faces a higher CAC, a narrower remaining market, and a competitor whose other moats (Data Flywheel, Agentic Lock-in) have already started spinning.
3. It's an AI land-grab
The next few years are a blue ocean. Virtually every industry holds specialized expertise that stays locked in human heads until AI comes in and learns it — the codified judgment of a senior underwriter, the workflow patterns of a top-tier project manager, the tacit knowledge of a master estimator, the case history of a senior partner. AI compresses the cost of that learning. The firm that captures distribution in a category before the AI has finished learning it — before the category is decided — converts that head start into compounding moats that close the door behind it. The race is the moat-formation moment, and the surface area is enormous: AI is in the early innings of commoditizing knowledge work across legal, medical, financial, operational, and creative domains, and each industry presents the same race shape.
This is why the AI-native cohort is working 996. The race is real, the window is closing, and the strategic prize for being first is large. With AI, the rate-limit on speed is no longer machine compute — it is human judgment. As complexity rises, the gap between “the model generates an answer” and “a qualified human can evaluate it” widens, and that gap is where the moat lives. The race is accelerating along the axis of complexity, and the firms that are AI-native from day one have an organizational advantage no incumbent can retrofit.
The seven classical moats from section 2 still describe most fights. They describe none of the three specific kinds of fight that AI-era attackers now run by default. Each new fight has a primary mechanism that does not collapse into any of the seven classical types: a compounding loop on workflow data that runs through a model rather than between users; a lock-in that lives in an agent's learned calibration rather than in procedural retraining; a wedge of pricing power that comes from the institutional license to refuse rather than from reputation. Combined with go-to-market execution, the three drive the speed at which a customer becomes unmovable. Speed of customer lock-in, in a category being remade by AI, is what wins the land-grab.
Each of the three product moats drives speed of lock-in in its own way. The Data Flywheel accelerates quality improvement, which accelerates customer commitment. Agentic Workflow Lock-in accelerates context accumulation, which accelerates behavioral lock-in. Evaluator Judgment Power accelerates trust accumulation, which accelerates certification lock-in. Combined, the three extend the leader's lead every quarter the customer stays. The fourth input is go-to-market execution — an organization built to operate AI-natively from day one, distribution captured before competitors can scale, capital deployed to compound the lead through the closing window, and a business-model innovation traditional firms can't copy. AI-native cost structure makes a set of pricing models economically viable that legacy firms cannot match: share-of-savings, pay-per-outcome, per-resolution pricing, insurance-linked offerings, marketable accuracy guarantees. The pricing weapon is the second thing the land-grab buys, alongside the speed of lock-in.
Specialization is the deepening moat
The moat does not decay at one rate everywhere. It decays at a rate that depends on how specialized the work is. Place AI products on a spectrum from general to specialized: foundation models on the left, thin wrappers on top of them next — where the foundation layer absorbs whatever judgment the wrapper encoded, often within a single release — then the three AI-era moats themselves, each more specialized than the last. The further right on the spectrum, the more specialized the work, the fewer people can tell whether the result is good, and the longer the moat survives the foundation layer's downward pressure.
The maturation mechanism
The three product moats are not three coequal moats. They mature in a sequence, and each level deepens the moat the prior level created. The Data Flywheel runs first and produces raw quality — the basic loop where usage generates signal that improves the model. Agentic Workflow Lock-in builds on the flywheel because the calibration the agent accumulates is itself the kind of data that compounds the loop, and because the agent locks in the current human workflow while enabling the next-generation agentic version of it. Evaluator Judgment Power builds on both: specialized judgment, written down as evals, is the most refined form of training signal — it tells the system not just what the right answers look like but what kinds of mistakes matter, sharpening the flywheel further and refining the agent's calibration further. Each level requires the prior level to function fully.
The two diagrams describe the same thing from different angles. Specialization deepens through maturation; maturation produces specialization. As a firm steps through — building the Data Flywheel, then getting agents that lock in the current human workflow (and enable the future agentic version of it), then codifying judgment into Evaluator Judgment Power — the moat deepens and the lock-in compounds. Combined with go-to-market execution, this is what allows a firm to win the race for the blue-ocean AI category before the window closes. The land-grab is what falls out the bottom; the three product moats plus go-to-market are how it's built.
Emergent Moats: 4 new in the AI era
The framework names four new moats: Data Flywheel, Agentic Workflow Lock-in, Evaluator Judgment Power, and the synthesis the first three produce — AI-Native Land-Grab, the umbrella this whole section has been describing. The three product moats are detailed below; their full chapter treatments are in 10 · Data Flywheel, 11 · Agentic Workflow Lock-in, and 12 · Evaluator Judgment Power.
▸ a. Data Flywheel
A flywheel exists when usage produces proprietary data that materially improves the model, and the improvement is observable enough that more usage follows. Peer of Network Economies; the AI-era moat with a sharp threshold and a brutal failure mode.
How it works
The mechanism has a sharp threshold — the good-enough gap. Below it, the flywheel is fragile and a competitor with sufficient capital can match it. Above it, the leader's feedback velocity outruns any plausible challenger's capital deployment. The load-bearing input is privileged access: the leader's data is not also accessible to the foundation-model layer that any competitor can buy from. Decision-bearing data — what an expert user chose in a real workflow — is the substrate that matters. The foundation-model layer can absorb any text it gets to see; it cannot absorb a corpus of expert choices it never saw.
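The threshold dynamic above can be made concrete with a toy model. This is purely illustrative — every parameter (the 0.6 threshold, the improvement rates) is invented for the sketch, not taken from the paper. It contrasts a leader whose quality feeds back through usage against a challenger who buys improvement linearly with capital:

```python
# Toy model of the good-enough gap (all parameters invented, illustrative only).
# Above the threshold, usage generates signal that improves quality, which
# generates more usage; below it, adoption stays weak and the loop barely turns.

def flywheel(quality, threshold=0.6, quarters=12):
    """Leader's quality trajectory with a usage -> signal -> quality loop."""
    history = [quality]
    for _ in range(quarters):
        # Weak adoption below the good-enough threshold; full adoption above it.
        usage = quality if quality >= threshold else 0.1 * quality
        # Usage-derived signal improves the product; quality is capped at 1.0.
        quality = min(1.0, quality + 0.05 * usage)
        history.append(quality)
    return history

def capital_challenger(quality, spend_gain=0.02, quarters=12):
    """Challenger buys improvement linearly with capital; no usage loop."""
    return [min(1.0, quality + spend_gain * q) for q in range(quarters + 1)]
```

Run with a starting quality above the threshold and the compounding loop outruns the linear spender within the horizon; start below it and the capital-funded challenger keeps pace — which is the "fragile below, unmatchable above" shape the mechanism describes.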
Canonical example
Higharc sells production homebuilders a configurator-driven design-to-sales tool. Every time a builder configures a home, prices an option, locks a buyer in at signing, or pushes a change order, that decision is captured in a corpus that no foundation-model lab can absorb because it never gets to see it. Once the corpus crosses the good-enough threshold for tract residential, the flywheel produces a quality gap that compounds with deployment time, not with capital.
How it fails
- Marginal data goes stale — the first 10,000 users contribute novel signal; the next 100,000 don't.
- Foundation-layer absorption from below: a frontier release closes the quality gap and the privileged corpus stops mattering.
- The threshold is never crossed; capital exhausts; a challenger catches up.
- Privacy or regulatory friction breaks the loop; the corpus stays valuable but stops compounding.
- A capital-rich incumbent with embedded distribution ships a “good-enough” version that prevents the flywheel from reaching scale.
Key insight — the AEC platform-incumbent pattern
The two AEC software incumbents most often called data-flywheel candidates — Autodesk and Procore — show in mirror image why privileged access is what breaks the mechanism. Both have flywheel-shaped artifacts (corpus, compute, model architecture, distribution). Both tightened their API perimeters within months of each other (Autodesk APS December 2025, Procore September 2025). Both face the same load-bearing assumption: customer-data training rights at scale. Neither has resolved it. The diagnostic for any AEC data-flywheel claim is identical: show the privileged-access mechanism, dated, with the contractual or technical evidence. Without that, the flywheel is conditional — an artifact, not a moat. Most AEC vendors that look like they have data flywheels (OpenSpace, Buildots, Trunk Tools) are running the same artifact-without-mechanism play.
▸ b. Agentic Workflow Lock-in
An agent embedded in a workflow accumulates memory, tool-graph, learned calibration, and trust over time. Peer of Switching Costs; the cost of leaving compounds with use, and outcome dependency replaces classical migration cost.
How it works
The lock-in surface is dynamic, not static. Each interaction the agent runs widens the gap between “with the agent” and “without it.” The customer's cost of leaving in month 24 is materially higher than in month 6, because the agent has accumulated more calibration. A protocol-compatible runtime can read another agent's tool-graph; it cannot read the calibration. Calibration — the learned rules about when to escalate, when to refuse, when to chain — is the residual moat. The flywheel of calibration only spins when the agent is the daily default; an agent that is a side option to a primary tool accumulates curiosity, not lock-in.
Canonical example
Joist AI ingests historical proposals, CRM data, project images, and resumes from AEC business-development teams, building a per-firm institutional memory for proposal generation. Every response adds data; the graph compounds with use. Mortenson — a top-25 ENR general contractor — said it plainly: “If we tried to get rid of Joist AI now, we'd have a revolt.” That's outcome dependency, not switching cost. The per-firm graph isn't portable; rebuilding it from zero would mean losing the institutional memory the team now relies on every Friday.
How it fails
- Protocol portability matures faster than calibration depth; agent runtimes become interchangeable.
- Calibration turns out to be reproducible from logs — a competitor with the logs trains a pre-calibrated agent.
- A distribution incumbent ships a “good-enough” agent first (Microsoft 365 Copilot, Salesforce Einstein) and the specialist never reaches calibration depth.
- Foundation-layer absorption: frontier labs ship agent-runtime capabilities that close the gap from below.
- The customer fragments across multiple agents; none reaches calibration depth; all become commodities.
Key insight — Procore Agent Studio as the AEC test
The strongest current cases (Cursor, Sierra) are horizontal SaaS with digital-native users. The vertical AEC test is in flight on Procore Agent Studio, the no-code custom-agent builder Procore announced alongside Procore Assist and Procore Agents. Procore wins the daily-default position at top-200 GCs (95% gross retention, 17,623+ organic customers); Datagrid's reasoning engine (acquired January 2026 for $168M) provides the agentic backbone. The historical base rate is unfriendly — no-code AI builders in horizontal enterprise SaaS have consistently underperformed. If Procore's adoption disappoints at Groundbreak 2026, the agentic lock-in archetype is below threshold for vertical enterprise SaaS by counterexample. That is a paper-shaping observation, not a Procore-specific one.
▸ c. Evaluator Judgment Power
Generation is collapsing toward free. The bottleneck moves to a different question: who has the right to say it is good enough? Peer of Process Power; distinct from Brand. The moat is the institutional standing to certify an AI-generated output against a regulatory or professional standard of care, and to be contractually accountable when the certification is wrong.
How it works
Four mechanics, in order. (1) The domain has a defined standard of care — building code, engineering practice, standard accounting rules, medical standard of care, legal duty of competent representation, financial fiduciary duty. (2) The AI system can reliably produce output that meets the standard. (3) The right to certify against the standard is gatekept — by professional licensure, regulatory clearance, or institutional standing that capital cannot manufacture. (4) An accountable party carries the consequence: the licensed professional, the firm with the liability insurance, the cleared vendor, the insurer. The moat is steps 3 and 4 acting together. A vendor who can produce a passing answer but cannot certify it and will not be named in the suit is selling a tool, not holding a moat.
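The four mechanics read naturally as a diagnostic checklist, which can be encoded as a predicate. This is my sketch, not the author's tooling; the field names are invented labels for the four mechanics above:

```python
# Toy diagnostic for an Evaluator Judgment Power claim (field names invented).
from dataclasses import dataclass

@dataclass
class EvaluatorClaim:
    defined_standard_of_care: bool   # (1) building code, fiduciary duty, ...
    output_meets_standard: bool      # (2) the AI reliably passes the standard
    right_to_certify_gatekept: bool  # (3) licensure/clearance capital can't buy
    accountable_party_named: bool    # (4) someone carries the consequence

def holds_moat(c: EvaluatorClaim) -> bool:
    """All four mechanics must hold; (3) and (4) acting together are the moat."""
    return (c.defined_standard_of_care and c.output_meets_standard
            and c.right_to_certify_gatekept and c.accountable_party_named)

def selling_a_tool(c: EvaluatorClaim) -> bool:
    """Produces a passing answer but can't certify it or won't be named in the suit."""
    return c.output_meets_standard and not (
        c.right_to_certify_gatekept and c.accountable_party_named)
```

A vendor scoring true on (1) and (2) but false on either (3) or (4) lands in `selling_a_tool` — the exact failure the section names.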
Canonical example
Intuit recognized that the TurboTax software was fine but that the company lacked the institutional credibility to stand behind a return at an audit-grade standard of care. The fix was TurboTax Live: a network of credentialed tax professionals whose review and stamp now back the software output. The accuracy guarantee on the box is the marketable surface; the network of certifiers is the moat. The system is the AI plus the gatekeeping cohort, working together.
How it fails
- Liability does not transfer to the vendor; pricing collapses to per-seat and the surface looks like Brand.
- Licensure regime broadens or commoditizes; the right to certify ceases to be scarce.
- Foundation-layer absorption: a frontier model gets close enough that customers self-insure rather than pay for stamped certification.
- Brand-only collapse: the vendor stays trusted but loses calibration depth as the underlying model commoditizes.
- Cohort capture by a competitor: a rival firm hires or acquires a more credible network of gatekeepers, and the moat migrates with them.
Key insight — the AEC phase differential
This moat is phase-specific in AEC, not industry-wide. Design has the cleanest professional-stamp regime: a licensed architect or engineer personally seals the drawings issued for permit and construction; all four mechanics are present and load-bearing. Construction is structurally different — there is no equivalent stamp-bearing professional cohort whose seal certifies the in-place work to the same code-bound standard of care. General contractors operate on contracts, surety bonds, lien rights, liquidated damages — an accountability stack that allocates risk but does not concentrate the right-to-certify in a credentialed individual. The Intuit-style manufactured-gatekeeper move is also harder to translate: there is no construction-side cohort to acquire. Construction-side plays should be scored on Agentic Workflow Lock-in and Data Flywheel mechanics, not credited with an evaluator moat the phase's accountability stack does not actually support.
4. AEC software war game — Moatfight Arena
Why I built it. Theory is one thing. I wanted a fun way to watch the (7+4) framework fight itself — to run probabilistic outcomes, mine many trials for patterns, and let the empirical residue argue with the prose. The arena lives at /arena.
The basic game. Each attacker and defender has a battle card with a special move. Players deploy strategies to beat the opponent across a turn-based quarterly investment cycle, trying to outfox the other.
- Battle card: three moat types, a stat block, a special move, a deck of strategies.
- Special move: the one trick that card uniquely does (e.g., Speckle's counter-positioning bind tightens every time Autodesk fortifies the wrong perimeter).
- Phase: the fight happens inside one of five AEC phases — Permitting, Design, Preconstruction, Construction, Operations — or a cross-phase domain. Phase is terrain.
- Horizon: H1 (12 months / 4 quarters) or H2 (3 years / 12 quarters).
- Capital ($): each card has starting cash, expected funding tranches, and a quarterly burn (private cards burn ~10%/Q; public cards replenish). Run dry and you're forced into acquisition or extinction.
- A turn: spend cash, deploy a strategy, roll for success, watch market share migrate.
- Outcomes: independent, acquired, or extinct.
How it works. In a turn-based Monte Carlo simulation, companies deploy realistic strategies based on their own market position and capital — attackers can open-source and build cheaply, but have limited cash to invest in complex or expensive strategies; defenders have more options, including acquiring the attacker and partnering with a distributor. If an attacker gets acquired, we capture the acquisition price. Both parties compete for market share over two time horizons — 1 year and 3 years. May the best company win!
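The turn structure above can be sketched in miniature. This is a drastic simplification of the arena — the real stat blocks, strategy decks, type-chart multipliers, and acquisition pricing are hand-curated there, and every number below (burn rates, win probabilities, share migration) is an invented placeholder:

```python
# Minimal sketch of one arena trial plus Monte Carlo aggregation.
# All stat values are invented placeholders, not the arena's curated numbers.
import random
from dataclasses import dataclass

@dataclass
class Card:
    name: str
    cash: float       # capital on hand, $M
    burn: float       # fraction of cash burned per quarter (~0.10 for private cards)
    share: float      # market share, 0..1
    win_prob: float   # chance this quarter's deployed strategy lands

def run_trial(attacker, defender, quarters=12, rng=None):
    """One H2-horizon trial: burn cash, roll strategies, migrate share,
    check whether the attacker runs dry and is forced out."""
    rng = rng or random.Random()
    for _ in range(quarters):
        for side, other in ((attacker, defender), (defender, attacker)):
            side.cash *= 1.0 - side.burn              # quarterly burn
            if rng.random() < side.win_prob:          # strategy roll succeeds
                moved = min(0.05, other.share)        # share migrates to the winner
                side.share += moved
                other.share -= moved
                side.cash += 20.0 * side.share        # revenue follows share
        if attacker.cash < 1.0:                       # run dry: forced exit
            return "acquired" if rng.random() < 0.5 else "extinct"
    return "independent"

def monte_carlo(n=1000, seed=7):
    """Tally outcomes across n trials; the pattern lives in the aggregate."""
    rng = random.Random(seed)
    tally = {"independent": 0, "acquired": 0, "extinct": 0}
    for _ in range(n):
        a = Card("Attacker", cash=3.0, burn=0.10, share=0.05, win_prob=0.35)
        d = Card("Defender", cash=500.0, burn=0.02, share=0.60, win_prob=0.25)
        tally[run_trial(a, d, rng=rng)] += 1
    return tally
```

One trial is a story; `monte_carlo` is the pattern-mining step the next paragraph describes.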
Run one trial and you see a story. Run a thousand and a pattern emerges. Keep a log across enough runs and a theory builds itself — the loud claims that survived enough rounds to be worth writing down live in their full chapters at 10 · Data Flywheel, 11 · Agentic Workflow Lock-in, and 12 · Evaluator Judgment Power.
Known limitations. Stats, type-chart multipliers, and the Data Flywheel threshold are all hand-curated calls — this is a thinking tool to make simulating strategies in a new world more fun and educational, not a forecast.
Battle-card stack at pass-2/14; worked matchups at pass-2/15. Earlier renderings: v4 · v3.
Play the game
The cast lives on the AEC AI competitive landscape. The arena is where the moats fight — pick sides and run the trials yourself.
The cast — AEC AI competitive landscape
Companies mapped by AEC phase and total funding (representative top tier shown).
Click through to the live landscape HTML; cross-reference the cast against the paper's battle cards via the coverage matrix.
A stochastic war game on the 11-moat framework. Pick two AEC software cards (real companies or fictional archetypes), choose strategies, run trials, and watch the dimensions decide. Each side has three possible outcomes at horizon: independent, acquired, or extinct. Run 100–1000 trials to mine for empirical patterns.
5. Synthesis — predictions for AEC software in 2026
[Section to write.] What the (7+4) framework predicts about AEC software through H2 (3 years): the survival prediction table, pricing-as-counter-positioning (Bricsys lineage + AI-era extensions), three predictive AEC-native attacker archetypes (Evaluator-Power play with insurer underwriting; federated agent network on consumption pricing; counter-positioning at the API perimeter on per-project pricing), and four forward-tracking asterisks. Current draft at pass-2/16 (~2,900 words; clean structure against the 10-moat framework, needs 4th-moat additions to predictions).
6. Appendix — master matchup matrix
The (7+4) × (7+4) moat-vs-moat matrix — with each cell defended in prose — lives at Appendix A of the planning hub. The interesting fights are in the cross-quadrants, where a classical moat squares off against an AI-era one. For the AI-era moats themselves, see the canonical chapters in Pass 2: 10 · Data Flywheel (peer of Network Economies), 11 · Agentic Workflow Lock-in (peer of Switching Costs), and 12 · Evaluator Judgment Power (peer of Process Power; distinct from Brand).
7. Source list
Real, citable sources only. Every quantitative claim gets a primary or near-primary source (10-K, S-1, vendor press release, industry analyst report, dated newsletter). Every framework claim gets the canonical author. Gaps flagged where original analysis carries the load.