
Archive: Future expansion chapters

Material extracted from the main outline on 2026-05-08 to narrow the paper's focus to AEC software. The three chapters below (AEC general contractors, Accounting, AI-native horizontal) are preserved here for potential future expansion. Same goes for the AI-native horizontal archetype selections and the Q1 Cursor question.

Why these were archived

The paper's analytical depth is in AEC software — that's where the battle cards exist (Autodesk, Speckle, Higharc, Trunk Tools, Buildots/OpenSpace, Snaptrude, Joist, plus the cross-cutting co-AEC thought experiments for Figma, Glean, and Harvey). The other three chapters started as proof points that the framework generalized; in practice, they pulled focus and didn't get the same depth. Pulling them out lets the AEC software chapter carry the paper, and leaves the others as clean expansion lanes for Pass 3 if the framework holds up.

Chapter 10 — AEC general contractors (archived)

The unsexy moat. What do Suffolk, Turner, and DPR actually own? (Answer preview: bonding capacity, relationship density, and project-data exhaust that nobody is collecting yet.) What happens when AI commoditizes preconstruction estimating?

Note for future expansion: this chapter would extend the AEC analysis from software into the services tier. The moats are different in shape (bonding capacity is a Cornered Resource; relationship density is a kind of Process Power; project-data exhaust is a latent Data Flywheel that nobody has activated yet). The most interesting question is whether a GC can build a Data Flywheel on its own project-data exhaust before an AI-native attacker captures the surface from the software side.

Chapter 12 — Accounting (archived)

NetSuite (the complexity whale) vs. Intuit (the SMB volume + flywheel play) vs. the AI-native attackers (Pilot, Puzzle, Brex + Ramp). The interesting question: which segment gets reshaped first, and which moat decays first under AI pressure?

Note for future expansion: the accounting chapter is the cleanest test of Evaluator Judgment Power as a pricing-surface moat — Pilot's managed-bookkeeping margin is earned on accountability-bearing review, not on the AI doing the books underneath. References to Pilot survive in the Pass 2 chapters on Evaluator Power as illustrative examples; the full war game can be rebuilt from those threads if needed.

Chapter 13 — AI-native horizontal (archived)

Four attack archetypes, not four companies. (1) Horizontal enterprise knowledge / retrieval (Glean canonical). (2) Vertical workflow co-pilot for high-stakes professional services (Harvey + Sierra) — also where Evaluator Judgment Power matters most as a tested moat. (3) AI-native UX rebuild of an embedded workflow (Cursor). (4) Generative agentic execution layer (Decagon + Crescendo + Clay) — framed as nodes in a federated agent-translator network, not vertically integrated stacks; the moat sits in the specialist node, not the integration. Are they building durable moats or running an arbitrage on incumbent slowness?

AI-native horizontal — four attack archetypes (archived)

  1. Horizontal enterprise knowledge / retrieval layer. Canonical: Glean. Variant inside the same archetype: Hebbia (specialized document-retrieval flywheel). Attack vector: index-everything-with-permission-aware retrieval, building a data flywheel of organizational context.
  2. Vertical workflow co-pilot for high-stakes professional services. Harvey (legal) + Sierra (CX), bundled. Attack vector: domain-specific data + evals + workflow integration that compounds with usage.
  3. AI-native UX rebuild of an embedded workflow. Canonical: Cursor. Attack vector: compress procedural switching costs by being so much better at the core loop that re-training pays for itself in weeks.
  4. Generative agentic execution layer — node in a federated agent-translator network. Decagon + Crescendo + Clay, bundled. Attack vector: agents that take action on behalf of the user, accumulating workflow lock-in via tool-graph memory, escalation patterns, and learned outcome calibration. Re: codesign-thesis claim #5 — these are not verticalized end-to-end stacks; they're specialist nodes that win by being deeper at one job than the surrounding agent-translator network can route around. The moat sits in the node (its data, its evaluator, its escalation calibration), not in the integration. The translator/coordination layer commoditizes on top of MCP-style protocols.

Open question Q1 — Cursor as its own archetype (archived)

Originally posed while framing the AI-native horizontal chapter; preserved here in case that chapter is reactivated in Pass 3.

Q1. Should “AI-native UX rebuild” be its own archetype, or fold into “vertical workflow co-pilot”?

Cursor's attack vector is mechanically distinct from Harvey/Sierra — it's not about domain-specific data + evals (Cursor's data flywheel is shallow vs. its UX moat); it's about being so much better at the core loop that procedural switching costs collapse. Recommendation: keep Cursor as its own archetype. The “switching-cost compression by 10x UX” attack vector deserves its own treatment: it's the single archetype most commonly conflated with technical differentiation, and the one whose defensibility is most often misjudged. (If you'd rather collapse it, the fallback is to add a 5th archetype: “specialized data-acquisition flywheel” via Hebbia.)