Reflections on WEF Davos – From Experimentation in 2025 to ROI in 2026
What began as curiosity and experimentation has evolved into a question of infrastructure, economics, and control. In hindsight, the rapid pace of change reveals a clear pattern from late 2024 to early 2026.
Act I – Acceleration Phase (Late 2024 – Early 2025)
At the end of 2024, something subtle but important happened: reasoning models entered the scene. Until then, most tools were impressive but limited—good at generating content, less capable of structured thinking.
That changed quickly.
By January 2025, AI had already become a core topic in workforce discussions. The "Future of Jobs" narrative placed AI and data literacy at the center of essential skills. At the same time, a major assumption was challenged: that only a handful of tech giants could build powerful models.
The DeepSeek moment proved otherwise.
Then regulation caught up. The EU formally began enforcing restrictions on high-risk AI systems, while also introducing a new expectation: organizations must ensure a baseline level of AI literacy across their workforce. This marked the shift from experimentation to accountability.
Meanwhile, infrastructure took a leap forward. New hardware drastically reduced the cost and energy required to train massive models. At the same time, model innovation accelerated, and early "agentic" capabilities began to take shape—systems that could not just respond, but act. Frontier organizations were among the first to adopt standardized orchestration patterns, such as the Model Context Protocol (MCP), introduced six months earlier by Anthropic and later supported by platforms such as ChatGPT and Gemini. MCP lets models call external tools, operate across systems, and make bounded decisions under human oversight.
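To make the orchestration pattern concrete, here is a minimal sketch of the message shape MCP builds on: JSON-RPC 2.0 requests such as `tools/call`, which a client sends to a server to invoke a tool on the model's behalf. The tool name and arguments below are hypothetical, and a real client would also handle transport, capability negotiation, and responses.

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> dict:
    """Build a JSON-RPC 2.0 'tools/call' request, the shape MCP clients
    use to ask a server to run a tool for the model."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical example: the model asks an MCP server to search documents.
req = make_tool_call(1, "search_documents", {"query": "Q3 revenue"})
print(json.dumps(req, indent=2))
```

The point is the boundary: the model proposes a structured, inspectable request, and the surrounding system decides whether and how to execute it, which is what makes oversight possible.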
Act II – The Experimentation Peak (Spring–Summer 2025)
By mid-2025, AI was everywhere. Nearly every organization was experimenting with chatbots, knowledge retrieval, document summarization, early agent workflows—what I've called use-case roulette.
And yet, beneath the surface, something didn't add up. Most AI initiatives were failing to deliver real value. This gap between informal usage and formal adoption became one of the defining tensions of the year.
Interestingly, the broader economy remained resilient. Confidence grew, investment surged, and AI became a central pillar of growth strategies. This was reflected in massive partnerships and compute deals:
- OpenAI and NVIDIA committed to building 10 GW of AI infrastructure, backed by up to $100 billion.
- Microsoft, Oracle, SoftBank, and others joined forces to expand national-scale AI data centers.
Act III – The Reality Check (Autumn 2025)
Then came the pushback.
By October, serious questions were being raised about the sustainability of AI investment. One argument stood out: much of the growth appeared "circular"—companies investing in each other within the same ecosystem, without clear evidence of long-term value creation.
Not everyone agreed. Others pointed out that, unlike previous hype cycles, AI companies were already generating real revenue and cash flow. Yet evidence of execution challenges surfaced. A headline-grabbing study from MIT's Project NANDA, published in mid-2025, reported that 95% of enterprise AI initiatives were failing to deliver on expectations. Interestingly, the same study found that nearly every organization (around 90%) was using AI in some form, highlighting a disconnect between adoption and meaningful results.
At the same time, governance complexity increased. Regulatory frameworks evolved rapidly, forcing organizations to constantly adapt their compliance strategies. Many teams found themselves reworking plans they had only just finalized.
By the end of the year, a pattern was clear:
- Technology had advanced rapidly
- But scaling it responsibly—and profitably—remained unresolved
Act IV – Davos 2026: The Shift from Experimentation to ROI
This is the context in which the World Economic Forum in Davos took place in January 2026. And notably, the conversation changed. Less hype. More direction. Davos cut through the noise and focused on what actually matters next.
1. From Pilots to Platforms
The era of isolated AI experiments is over.
Running disconnected demos—chatbots here, copilots there—is no longer enough. The real shift is toward AI-native operating models, where AI is embedded across entire workflows, not layered on top.
This is not a tooling decision. It's an organizational redesign.
2. The Energy–Compute Constraint
One of the clearest insights: AI is no longer just a software problem.
It is an infrastructure problem.
Compute requires energy. At scale, energy becomes the bottleneck. This changes how we think about AI strategy entirely—from cloud decisions to physical infrastructure and geographic positioning.
In other words: power is now a strategic dependency.
3. Humans as Orchestrators
For years, we talked about "human in the loop."
That framing is evolving.
The emerging model is human in the lead:
- Humans define intent, constraints, and trade-offs
- AI executes at scale
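The division of labor above can be sketched in code: the human lead sets the intent, the allowed actions, and a spend limit up front, and the automated step may only act inside that envelope. All names here (`Policy`, `execute`, the actions and limits) are illustrative, not a reference to any real framework.

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    """Intent, constraints, and trade-offs set by the human lead."""
    intent: str
    allowed_actions: set = field(default_factory=set)
    max_spend_eur: float = 0.0

def execute(action: str, cost_eur: float, policy: Policy) -> str:
    """AI executes at scale, but only inside the human-defined envelope;
    anything outside the mandate is escalated back to a person."""
    if action not in policy.allowed_actions:
        return "escalate: action outside mandate"
    if cost_eur > policy.max_spend_eur:
        return "escalate: exceeds spend limit"
    return f"executed {action} under intent '{policy.intent}'"

policy = Policy(intent="reduce ticket backlog",
                allowed_actions={"draft_reply", "categorize"},
                max_spend_eur=50.0)
print(execute("draft_reply", 0.02, policy))   # inside the mandate
print(execute("send_refund", 20.0, policy))   # escalated to a human
```

The design choice is that the constraints live outside the model: changing what the system may do is a policy edit by the human lead, not a prompt change.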
4. Sovereignty as a Competitive Advantage
Perhaps the most important shift: organizations are starting to recognize the risk of becoming dependent on generic AI.
If everyone uses the same models, differentiation disappears.
The response? Build proprietary intelligence.
Not necessarily from scratch—but through:
- Private data
- Internal logic
- Customized models
What This Means Going Forward
If 2024 was about discovery, and 2025 was about experimentation, then 2026 is about decisions.
The key question is no longer "What can AI do?" The questions now are:
- Where do we embed it?
- How do we scale it?
- What do we control vs. outsource?
- And how do we make it economically viable?
Sources & References
Open-Source AI Innovation: What DeepSeek Means for Global AI Development
Analysis of how DeepSeek challenged the assumption that only a handful of tech giants could build powerful AI models.
AI Talent, Skills and Literacy
EU policy on AI literacy requirements under the AI Act, marking the shift from experimentation to accountability.
Introducing the Model Context Protocol
Announcement of MCP, a standardized protocol allowing AI models to call external tools and operate across systems.
OpenAI and NVIDIA Systems Partnership
OpenAI and NVIDIA commit to building 10 GW of AI infrastructure, backed by up to $100 billion.