Simon Oster
GenAI · Platform Engineering · Scaling · AI Strategy

From Use-Case Roulette to Platform Thinking

"Let's start with a few use cases and see what happens" sounds reasonable. In reality, it often leads to isolated pilots that never scale. Here is a better approach.

December 2025 · 8 min read

Use-Case Thinking: Why It Gets You Stuck

"Let's start with a few use cases and see what happens" sounds reasonable. In reality, it often leads to a handful of isolated pilots that never scale, different tech stacks per team with little reuse, and no clear way to track cost, value, or risk across the board. In fact, the “State of AI in Business 2025” study found that only about 5% of pilots reach production with measurable value.

Each new feature becomes a bespoke project: new integrations, new data plumbing, new security reviews. It feels like progress, but structurally it is fragile. One leadership change, one budget cut, and your "AI strategy" disappears with the next slide deck.

That's why use-case-only thinking is so risky: every bet is independent. If one pays off, great. If not, you learned something, but built nothing reusable.

Platform Thinking: Stack the Odds in Your Favor

A platform mindset flips the script. Instead of asking, "Which use case should we build next?" you ask, "What shared capabilities can many use cases plug into?"

A solid AI platform typically includes:

  • Core AI services: chat/completion, retrieval, search, classification, summarization
  • Data layer: unified access to documents, APIs, and events, plus governance and lineage
  • Gateway: routing, logging, rate limiting, cost attribution, and policy enforcement
  • Developer layer: SDKs, templates, and CI/CD pipelines tied into your hosting stack
  • Governance & risk layer: approved data sources, model approvals, deployment guardrails, ongoing audit and compliance support

With that in place, each new use case is not a new system—it is simply another composition of existing services. Every experiment stacks the odds in your favor: it reuses what’s already there, improves shared components, and leaves behind something others can build on.
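To make the gateway layer concrete, here is a minimal sketch of the routing, cost-attribution, and policy-enforcement ideas in Python. All names, prices, and the token estimate are illustrative assumptions, not any real platform's API; a production gateway would call provider endpoints and a proper tokenizer.

```python
from dataclasses import dataclass, field

@dataclass
class Gateway:
    """Toy governed gateway: routes to approved models, attributes cost per team."""
    prices: dict                                  # model -> price per 1K tokens (illustrative)
    usage: dict = field(default_factory=dict)     # team  -> accumulated cost

    def complete(self, team: str, model: str, prompt: str) -> str:
        # policy enforcement: only approved (priced) models may be used
        if model not in self.prices:
            raise ValueError(f"model {model!r} is not approved")
        tokens = len(prompt.split())              # crude token estimate for the sketch
        cost = tokens / 1000 * self.prices[model]
        # cost attribution: every call is booked against the calling team
        self.usage[team] = self.usage.get(team, 0.0) + cost
        # routing and logging would happen here; we return a stub response
        return f"[{model}] response to {tokens}-token prompt"
```

The point is not the twenty lines of code but the shape: every use case goes through one choke point where approval, logging, and chargeback live, so adding a use case adds traffic, not infrastructure.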

Why "Platform First" Beats "Use-Case Roulette"

Think of it like this:

Use-case roulette

  • Every spin = a new stack.
  • If it fails, you walk away with a slide and a lesson.
  • If it works, scaling it is painful and slow.

Platform first

  • Every spin reuses the same underlying rails.
  • If it fails, the platform still got better (more logs, patterns, guardrails).
  • If it works, scaling is just "more traffic on the same rails."

You don't avoid use cases—you feed them through a consistent substrate. You stop gambling and start compounding.

The AI Stack

To escape use-case roulette, organizations must align two capabilities, the developer platform and the AI platform, into a single system. A modern developer platform provides standardized environments for building, deploying, and operating applications with embedded security, identity, observability, and CI/CD—enabling teams to ship reliably without rebuilding infrastructure each time.

On top of this foundation, a shared AI platform centralizes access to models, embeddings, search, and orchestration, with a governed gateway for policy enforcement, cost management, and auditability. Built on platforms such as Azure AI Studio/Foundry, Google Vertex AI, or AWS Bedrock, it becomes a reusable enterprise capability rather than a series of bespoke implementations.

For CIOs, the test is speed and consistency. Can teams move from idea to a running experiment in days, not weeks? Can new models or data sources be introduced quickly and safely? If not, the constraint is not AI ambition—it is platform readiness.

Facing 2026 Challenges Head-On

As organizations look toward 2026, an AI roadmap composed of disconnected use cases will not scale. Without a clear AI platform strategy, predictable challenges emerge: the absence of self-service environments to rapidly launch AI-powered services; a lack of shared platform capabilities such as embeddings, vector stores, and LLM gateways; and insufficient guardrails governing approved data sources, models, deployment patterns, cost controls, and chargeback mechanisms.
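Guardrails of this kind are usually declarative policy rather than application code. As a hedged sketch, the check below encodes approved models, approved data sources, and a per-team budget cap in a plain Python dict; every name and number is a made-up placeholder, and a real platform would express the same rules in gateway configuration or a policy engine.

```python
# Illustrative guardrail policy: approved models/sources and monthly budgets.
POLICY = {
    "approved_models": {"model-small", "model-large"},
    "approved_sources": {"wiki", "crm"},
    "monthly_budget_eur": {"team-a": 500.0},
}

def check_request(team: str, model: str, source: str, spend_so_far: float) -> list:
    """Return a list of policy violations; an empty list means the request may proceed."""
    violations = []
    if model not in POLICY["approved_models"]:
        violations.append(f"model {model} not approved")
    if source not in POLICY["approved_sources"]:
        violations.append(f"data source {source} not approved")
    budget = POLICY["monthly_budget_eur"].get(team)
    if budget is not None and spend_so_far >= budget:
        violations.append(f"{team} over monthly budget")
    return violations
```

Because the same check runs for every team, cost controls and chargeback stop being per-project negotiations and become a property of the platform.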

The Strategic Choice: Build vs. Buy vs. Hybrid

  • Buy offers the fastest path to value through mature, pre-built capabilities, but often comes with reduced flexibility and increased vendor lock-in.
  • Build provides maximum control and alignment to specific organizational needs, at the cost of longer delivery timelines and sustained engineering and operational overhead.
  • Hybrid balances speed and control by combining commercial platforms with internal capabilities, though it introduces integration complexity and the risk of fragmented ways of working.

The takeaway for executives: invest in an AI platform and operating model that spans the full stack, from core services and data foundations to governance and financial controls. Individual use cases should sit on reusable foundations, not amount to a high-stakes game of AI roulette.

If You're Still Chasing Use Cases…

If your roadmap is a list of disconnected "AI use cases" and not a diagram of an AI platform, you're gambling. You might hit a few wins, but you won't scale.

Let's talk about building rails, not just rolling the wheel.

Don't just build use cases. Build a platform. Otherwise, you're basically playing AI roulette.

Let's continue the conversation

Have thoughts on this article? I'd love to discuss over a coffee.