Moat Today, Gone Tomorrow

Written by Bhupendra Sheoran | September 9, 2025

For many years, “What’s your moat?” was the first, and often the last, question investors asked startups. In the pre-AI era it made sense: defensible advantages such as patents, proprietary data, network effects, and switching costs could hold for years. But AI moves on a different timescale: foundation models update frequently, sometimes in weeks; capabilities diffuse in days; and what looked like a differentiating feature yesterday can ship as a checkbox in a hyperscaler release tomorrow.

In this newish world, the moat isn’t gone, but the kind that matters has changed. Static moats (the castle-and-water kind) erode fast. Dynamic moats (the speed, adaptability, and compounding-systems kind) are what actually separate the survivors from the soon-to-be commoditized.

Below is the case for why “moat” has become a less relevant evaluation lens for AI startups and what investors should evaluate instead.

How the Classic Moat Dries Up in AI

  1. Core capabilities commoditize at model speed: As the underlying platforms get dramatically better (and cheaper) every quarter, feature-based differentiation evaporates. If your value prop is “we can summarize, extract, classify, translate, convert slides, write emails…,” that’s table stakes until the next model update makes it free, faster, or built-in. “Feature verticals” are especially at risk.

  2. APIs collapse switching costs: AI stacks are modular. Most customers expect “bring your own model” and would rather swap LLMs on the fly than start over with every switch. If a startup’s advantage depends on keeping customers locked into one model or one provider, it’s fighting market gravity.

  3. Open ecosystems erase capability gaps: Open-weight models, prompt libraries, and reference agents shrink the time from idea to parity. Yesterday’s defensible technique becomes today’s GitHub repo and tomorrow’s baseline in a notebook. The half-life of technique-moats is measured in weeks.

  4. “Proprietary data” is misunderstood: Owning a pile of data isn’t a moat if (a) the data rights are murky, (b) it’s not uniquely valuable for the task, (c) it’s stale, or (d) you can’t turn it into a learning loop. The defensible part isn’t having data; it’s the closed-loop rights + instrumentation + feedback that continuously improve outcomes safely and legally.

  5. Distribution outpaces patents: Speed to credible enterprise deployment (security, compliance, procurement, integrations, admin controls) beats cleverness. Patents and clever tricks don’t close deals; working change-management, integrations, and measured outcomes do.

What to Evaluate Instead: A Dynamic-Moat Scorecard

Use these points as lenses, not checkboxes, to gauge whether a startup will compound advantage as models evolve.

1) Model-agnostic architecture (optionality by default)

  • Can they hot-swap models by task? (A minimal routing sketch follows this list.)
  • Do they use adapters, evaluators, and tool use to capture gains from new models without rewrites?
  • What’s their model-switch latency (minutes/hours/days, not weeks)?
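
To make hot-swapping concrete, here is a minimal sketch of task-keyed model routing. Everything in it (the registry, the task names, the call_model stub) is a hypothetical illustration rather than any particular vendor’s API; the point is that switching models becomes a configuration change measured in minutes, not a rewrite.

  # Minimal sketch of task-level model routing (all names illustrative).
  # Swapping the model behind one task is a config edit, not a rewrite.
  MODEL_REGISTRY = {
      "summarize": "provider-a/large-v3",
      "classify": "provider-b/small-v2",  # cheaper model where quality allows
      "extract": "provider-a/large-v3",
  }

  def call_model(model_id: str, prompt: str) -> str:
      """Stand-in for a provider-agnostic client; a real system would
      dispatch to the matching SDK here."""
      return f"[{model_id}] response to: {prompt[:40]}"

  def run_task(task: str, prompt: str) -> str:
      model_id = MODEL_REGISTRY[task]  # the hot-swap point
      return call_model(model_id, prompt)

  print(run_task("classify", "Route this support ticket to the right queue."))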

2) Upgrade velocity & experimentation muscle

  • Cadence of experiments, gated rollouts, and offline/online evals.
  • “Upgrade half-life”: time from new capability → measurable improvement in production KPIs (a gating sketch follows this list).
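
One hedged way to instrument that half-life: evaluate the incumbent and the candidate model on a fixed offline eval set, and gate promotion on a measurable margin. The eval items, toy scorer, and threshold below are assumptions for illustration, not a production harness.

  # Illustrative gated-rollout check: promote a candidate model only if it
  # beats the incumbent on a fixed offline eval set (all names hypothetical).
  EVAL_SET = [
      {"prompt": "Summarize: Q3 revenue rose 12% ...", "expected": "revenue rose 12%"},
      {"prompt": "Classify ticket: app crashes on login", "expected": "bug"},
  ]

  def score(output: str, expected: str) -> float:
      """Toy scorer: 1.0 if the expected answer appears in the output.
      A real harness would use task-specific metrics or model graders."""
      return 1.0 if expected.lower() in output.lower() else 0.0

  def eval_model(call, eval_set) -> float:
      return sum(score(call(e["prompt"]), e["expected"]) for e in eval_set) / len(eval_set)

  def should_promote(incumbent, candidate, margin: float = 0.02) -> bool:
      return eval_model(candidate, EVAL_SET) >= eval_model(incumbent, EVAL_SET) + margin

  # Dummy callables standing in for real model clients.
  incumbent = lambda p: "revenue rose 12%" if "Summarize" in p else "feature request"
  candidate = lambda p: "revenue rose 12%" if "Summarize" in p else "bug"
  print(should_promote(incumbent, candidate))  # True: candidate fixes the regression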

3) Data rights + feedback loops (the real flywheel)

  • Clear, durable rights to use and learn from data to make each agent better (not to train the overall models).
  • Protection of user-uploaded knowledge assets and data, used only to train that customer’s secure agents.
  • Active learning that actually improves outcomes (a feedback-loop sketch follows this list).
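
A minimal sketch of what such a loop could look like, assuming explicit accept/reject feedback per response: accepted interactions are recycled into that agent’s own few-shot context, and nothing crosses into a shared base model. The schema and names are hypothetical.

  # Illustrative per-agent feedback loop: accepted outputs become few-shot
  # examples for that agent only; nothing trains a shared base model.
  from collections import defaultdict

  FEEDBACK_LOG = defaultdict(list)  # agent_id -> list of interaction records

  def record_feedback(agent_id: str, prompt: str, output: str, accepted: bool) -> None:
      FEEDBACK_LOG[agent_id].append(
          {"prompt": prompt, "output": output, "accepted": accepted}
      )

  def few_shot_examples(agent_id: str, k: int = 3) -> list:
      """The most recent accepted interactions feed back into this agent's prompt."""
      accepted = [f for f in FEEDBACK_LOG[agent_id] if f["accepted"]]
      return accepted[-k:]

  record_feedback("acme-support", "Reset password steps?", "1) Open settings ...", True)
  record_feedback("acme-support", "Refund policy?", "We never refund.", False)
  print(few_shot_examples("acme-support"))  # only the accepted example returns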

4) Platform cost curve control

  • Unit economics at the task level (input tokens, tool calls, retries, guardrails) and a roadmap to reduce them.
  • Ability to downshift to smaller/cheaper models when quality allows, and to prove it with metrics (see the cost sketch after this list).
  • Caching, distillation, retrieval, and batching strategies that get better with scale.
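
A back-of-the-envelope sketch of task-level unit economics with a cache and a downshift rule. The prices, token counts, model names, and quality threshold are invented placeholders; the point is to account for every token, tool call, and retry per task.

  # Illustrative per-task cost model with caching and a downshift rule.
  # Prices, token counts, and model names are placeholders, not rate cards.
  PRICE_PER_1K = {"large-model": 0.010, "small-model": 0.001}  # USD per 1K tokens
  CACHE = {}

  def task_cost(model, in_tokens, out_tokens, tool_calls=0, retries=0):
      token_cost = (in_tokens + out_tokens) / 1000 * PRICE_PER_1K[model]
      return (token_cost + 0.0005 * tool_calls) * (1 + retries)

  def choose_model(small_model_eval, threshold=0.95):
      """Downshift to the cheaper model once its eval score clears the bar."""
      return "small-model" if small_model_eval >= threshold else "large-model"

  def run(prompt, small_model_eval):
      if prompt in CACHE:  # cache hit: near-zero marginal cost
          return CACHE[prompt], 0.0
      model = choose_model(small_model_eval)
      answer = f"[{model}] answer"
      CACHE[prompt] = answer
      return answer, task_cost(model, in_tokens=800, out_tokens=200)

  print(run("Summarize this contract.", small_model_eval=0.97))  # small model
  print(run("Summarize this contract.", small_model_eval=0.97))  # cached, $0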

5) Trust, safety, and governance maturity

  • Enterprise guardrails: PII handling, audit trails, policy enforcement, red-teaming (one such guardrail is sketched after this list).
  • Evidence they can pass security reviews without halting the sales cycle.
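
For flavor, a toy sketch of one guardrail from that list: pattern-based PII redaction paired with an audit-trail entry. Real deployments would use vetted detectors, policy engines, and tamper-evident logs; the patterns and log shape here are simplified assumptions.

  # Toy PII redaction + audit trail (patterns simplified; real deployments
  # would use vetted detectors and a tamper-evident log).
  import re
  from datetime import datetime, timezone

  PII_PATTERNS = {
      "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
      "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
  }
  AUDIT_LOG = []

  def redact(text: str, user: str) -> str:
      hits = []
      for label, pattern in PII_PATTERNS.items():
          text, n = pattern.subn(f"[{label.upper()} REDACTED]", text)
          if n:
              hits.append((label, n))
      AUDIT_LOG.append({"ts": datetime.now(timezone.utc).isoformat(),
                        "user": user, "redactions": hits})
      return text

  print(redact("Contact jane@example.com, SSN 123-45-6789.", user="analyst-7"))
  print(AUDIT_LOG[-1])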

6) Integration surface area

  • Connectors to the systems where the work actually happens (CRM, EHR, ticketing, ERP, doc repos).
  • Admin controls, SSO/SCIM, observability, and analytics that make admins heroes.

7) Go-to-market repeatability

  • Repeatable use cases and referenceable outcomes.
  • Sales cycle learning: land → expand patterns, partner motion, implementation timelines.

8) Team topology and culture of iteration

  • Product × engineering × domain expertise tightly coupled.
  • A culture of ruthless measurement, rapid rollback, and learning, not shipping clever prompts and calling it done.

The Investor Questions That Matter Now

  • If a hyperscaler shipped your top three features next quarter, what still makes you win?
  • How quickly can you adopt a new model class without a rewrite?
  • What’s your evaluation harness? Which tasks regress when you swap models? (A regression-report sketch follows this list.)
  • Which signals make your system smarter each week? Who owns the rights to those signals?
  • Which integrations create real switching costs? (Because they embed you in the workflow, not because you’re proprietary.)
  • What breaks when the customer doubles usage? Show operational readiness: monitoring, incidents, rollback.
  • What’s your change-management playbook? Who trains users, rewrites SOPs, and owns the outcome?
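
To make the regression question concrete, here is a sketch of a per-task regression report for a model swap, assuming per-task eval scores already exist for both models. The scores, task names, and tolerance are invented.

  # Illustrative per-task regression report for a model swap.
  # Scores are invented; a real harness would produce them from eval runs.
  OLD = {"summarize": 0.91, "classify": 0.88, "extract": 0.95}
  NEW = {"summarize": 0.94, "classify": 0.80, "extract": 0.95}

  def regressions(old, new, tol=0.02):
      return {t: round(new[t] - old[t], 3) for t in old if new[t] < old[t] - tol}

  print(regressions(OLD, NEW))  # {'classify': -0.08}: gate the swap until fixed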

What Replaces the Moat: Compounding Systems

Instead of asking “What’s your moat?”, ask “What compounds as you grow?”

  • Learning compounds (evals, feedback, adapters).
  • Distribution compounds (integrations, partners, reference architectures).
  • Economics compound (cost per task drops with scale and sophistication).
  • Trust compounds (governance, reliability, security, compliance).
  • Workflow ownership compounds (from assist → automate → autonomize with controls).

That’s the durable advantage in AI: not a wall around a castle, but a system that gets better, cheaper, and safer the more it’s used, and can ride the model wave instead of being wrecked by it.

Bottom Line

“Moat Today, Gone Tomorrow” isn’t a cynical take; it’s an operating principle. In AI, static defenses age out quickly. The startups that endure don’t rely on features the platforms will subsume; they build architectures, loops, and go-to-market machines that compound with every customer, every task, and every model upgrade.

If you’re a founder, design for optionality, speed, and learning. If you’re an investor, underwrite adaptability over artifacts. Because in this market, the question isn’t “Do you have a moat?”—it’s “Will you still have an edge after the next model release?”