First real use cases are emerging — from voice AI to intelligent engagement — but separating innovation from hype is getting harder.
Walking the halls at Mobile World Congress this week, one thing was impossible to miss:
AI is everywhere.
AI in the network.
AI in the radio layer.
AI in customer experience.
AI in developer platforms.
AI in voice.
Every booth had it. Every strategy slide mentioned it.
And to be fair, that makes sense. Telcos should absolutely be working on AI.
But after dozens of conversations during the week, one question kept coming back:
Where are the industries?
MWC still feels like a bit of an echo chamber. Telcos talking to telcos. Vendors talking to telcos. Infrastructure companies talking about infrastructure.
The industries we ultimately want to support — retail, airlines, healthcare, finance — are largely absent.
That’s the real gap.
Where AI actually makes sense for telcos
There are several areas where AI genuinely makes sense for telecom operators:
* Inside the enterprise – improving operations, productivity, and customer experience like any large organization.
* Cleaning up the data mess – extracting insights from decades of fragmented systems and legacy IT.
* Inside the network – optimizing radio networks, automating operations, and running infrastructure more efficiently.
* Sovereign AI infrastructure – providing trusted local platforms where enterprises can run AI close to their data.
All of that is real.
But walking the show floor, it was also clear that AI messaging is running ahead of real use cases.
In other words: the hype machine is in full swing.
Sometimes it’s genuine innovation. Sometimes it’s automation with a new label. That’s normal in any technology wave — but it makes it harder to see what actually matters.
Some real infrastructure signals
Among the hype, a few things stood out as genuinely interesting.
BT’s Global Fabric is a good example. The idea is to build a network that is far more dynamic, programmable, and secure by design — effectively creating an AI-ready network fabric for enterprise workloads and distributed AI systems.
Another example came from Intel, which was showcasing real-time AI inference running inside telecom environments. Their message was simple but powerful: “Inference in live networks, right now.” In other words, the AI stack is moving closer to the network edge.
This matters because AI workloads are very different from human workloads. As agentic systems emerge, networks will increasingly carry machine-to-machine interactions, automation signals, and distributed AI processes.
That changes the architecture of telecom networks themselves.
CPaaS is becoming the AI platform layer
One of the most interesting shifts happening right now is the evolution of CPaaS platforms into AI platforms.
Companies like Sinch and Infobip are moving beyond communications channels and APIs toward intelligent engagement platforms, where AI orchestrates interactions across voice, messaging, and digital channels.
At the Sinch booth in Barcelona, the phrase “Intelligent Engagement” was literally on the wall.
That’s encouraging, because it reflects the direction we’ve been outlining in the State of CPaaS research: communications platforms becoming the layer where AI, communications, and enterprise workflows come together.
Other announcements pointed in the same direction.
Radisys announced a partnership with Rakuten Mobile to deploy its Engage Digital Platform as a foundation for AI-enabled communication services.
Deutsche Telekom demonstrated its Magenta AI Call Assistant, embedding AI directly into voice calls — including real-time translation.
These examples point to a bigger shift:
communications platforms are becoming the orchestration layer for AI-driven interaction.
Network APIs: the supporting actor in the AI story
Another theme that came up repeatedly in Barcelona was Agentic AI — systems where AI agents interact with other agents, services, and platforms to complete tasks.
If that vision becomes reality, it raises an important question:
Who provides the trust layer?
Agentic systems will require identity verification, authentication, and clear consent frameworks that define when AI systems can act on behalf of users or organizations.
That’s exactly what some of the Network API discussions at MWC were about — including frameworks for consent capture, policy enforcement, and proof at runtime.
Capabilities exposed through Network APIs — identity, authentication, location, device signals — could provide the trusted infrastructure that AI systems need to operate safely.
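The steps above can be sketched in code. This is an illustrative toy, not any real Network API: all names here (`ConsentRecord`, `may_act`, the scopes) are hypothetical, standing in for the kind of identity-plus-consent gate that capabilities like CAMARA-style network APIs could enforce before an agent is allowed to act.

```python
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    subject: str   # the user the agent acts on behalf of
    scope: str     # e.g. "send-sms", "read-location"
    granted: bool

def may_act(agent_verified: bool, consents: list[ConsentRecord],
            subject: str, scope: str) -> bool:
    """Gate an agent action: require verified identity AND explicit consent
    for this subject and scope. Deny by default."""
    if not agent_verified:
        return False
    return any(c.subject == subject and c.scope == scope and c.granted
               for c in consents)

consents = [ConsentRecord("alice", "send-sms", True)]
print(may_act(True, consents, "alice", "send-sms"))       # True
print(may_act(True, consents, "alice", "read-location"))  # False: no consent on record
```

The point of the sketch is the default-deny shape: identity, consent, and scope are checked at runtime, per action, which is exactly the kind of policy enforcement the Network API discussions were about.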
I wrote about this last year: Network APIs may not be the star of the AI story, but they could become a crucial supporting actor.
The industry conversation is finally starting to move in that direction.
Developers are changing too
AI is also changing how developers interact with platforms.
For years, CPaaS was about APIs. Developers integrated communications capabilities directly into applications.
Now we’re seeing AI orchestration layers and agent frameworks sitting between applications and APIs.
Model Context Protocol wrappers.
AI agents calling APIs.
Automation layers coordinating workflows.
Developers are still essential — but increasingly they are building AI systems that interact with platforms, rather than coding every interaction themselves.
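To make that shift concrete, here is a minimal sketch of such an orchestration layer. Everything in it is hypothetical: the tool names and functions stand in for real CPaaS SDK calls (e.g. a Sinch or Vonage client), and the dictionary-shaped tool call stands in for what an agent framework or MCP server would emit.

```python
def send_sms(to: str, body: str) -> str:
    # Stand-in for a real CPaaS SDK call.
    return f"sms queued to {to}: {body}"

def start_call(to: str) -> str:
    # Stand-in for a real voice API call.
    return f"call started to {to}"

# The developer registers platform capabilities once...
TOOLS = {"send_sms": send_sms, "start_call": start_call}

def dispatch(tool_call: dict) -> str:
    """Execute a structured tool call emitted by an agent,
    instead of the developer hand-coding each integration."""
    fn = TOOLS[tool_call["name"]]
    return fn(**tool_call["arguments"])

# ...and the agent drives the interaction with structured calls.
print(dispatch({"name": "send_sms",
                "arguments": {"to": "+15551234567",
                              "body": "Your code is 123456"}}))
```

The design choice worth noticing is the indirection: the developer's job moves from wiring individual API calls into an application to registering capabilities that an AI system invokes on demand.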
That makes the role of CPaaS even more important.
These platforms remain the bridge between telecom infrastructure, AI systems, and real-world applications.
From MWC to Enterprise Connect
Which brings us to the transition from MWC to Enterprise Connect next week.
MWC is still largely about telecom infrastructure.
Enterprise Connect is much closer to enterprise applications and real use cases.
AI will be everywhere there as well.
And somewhere between those two worlds sits an increasingly important category:
Voice AI.
Part of it lives in telecom infrastructure.
Part of it lives in the application layer.
Next week the CPaaS Acceleration Alliance will publish our new Voice AI research paper, and I’ll be moderating a panel at Enterprise Connect with:
- Mike Stowe (RingCentral)
- Paolina White (Speechmatics)
- Jacques Klick (Sinch)
The goal is simple:
to bridge the conversation between telecom infrastructure and real enterprise applications.
No time to waste
What MWC showed very clearly is that AI is moving incredibly fast.
Startups are experimenting.
Cloud platforms are evolving quickly.
Enterprises are already deploying AI across their operations.
Telcos are experimenting too — almost every operator is running pilots, trials, and internal AI initiatives.
The next step is moving from experimentation to production.
Telcos have real opportunities in the AI ecosystem:
* providing sovereign AI infrastructure
* operating AI-ready networks like BT’s Global Fabric
* exposing trusted capabilities through Network APIs
* enabling intelligent engagement platforms that connect businesses and customers
But none of this will happen in isolation.
The AI stack is too complex, and the opportunity too large, for any single company to build it alone. Real progress will come through ecosystems and partnerships.
That means collaboration across the stack — operators, platform providers, and technology partners such as BT, Radisys, Sinch, Vonage, Intel, and others.
And it means working much closer with customers and developers to figure out what actually creates value.
Because pilots alone won’t move the industry forward.
The real challenge now is scaling these ideas into production services and real business models.
The window is open.
But it won’t stay open forever.
In the AI era, the companies that learn fastest will win.
And learning only happens when things are built, deployed, and used in the real world.
That’s also the thinking behind the work we’re doing with Sandbox — creating an environment where telcos, platforms, and startups can learn faster, experiment together, and scale the ideas that actually work.

