Mobile World Congress 2025: the year of AI

  • March 14, 2025
  • 3 min read
As Mobile World Congress wrapped up for another year, it was apparent that only one topic was on everyone’s minds: artificial intelligence.

Major players such as Google, Ericsson, and Deutsche Telekom showcased the ways in which they're piloting AI applications, from operations to infrastructure management and customer interactions. It's clear there is a strong desire to see AI move from the research lab into the real world, where it can make a tangible difference to people's everyday lives. The days of theoretical projects and gimmicky robots seem to be behind us: this year, it was all about real-world applications.

MWC has long been an event for telecommunications companies to launch their latest innovations, and this year was no different. Telco companies demonstrated how AI is now essential in managing network performance, reducing operational downtime, and driving significant cost savings. The industry consensus is that AI is no longer experimental but a critical component of modern telecommunications. While many of the applications showcased were early-stage pilots and stakeholders are still figuring out what wide-scale, real-time AI means in practice, the ambition to innovate and move forward on adoption is clear.

Here are three of the most exciting AI developments that caught our eye in Barcelona:

Conversational AI

Chatbots were arguably the flagship telco application at MWC, with use cases ranging from contact centers and in-field repairs to personal assistants that transcribe calls, book taxis, and make restaurant reservations, through to emergency responders using intelligent assistants to manage critical incidents. The easy-to-use, conversational nature of chatbots makes them an attractive way to deploy AI across functions, since users need no prior hands-on machine learning expertise.

AI for first responders

Emergency responders often rely on telco partners to access novel, technology-enabled solutions to address their challenges. One such example is the collaboration between telcos and large language model (LLM) companies to deliver emergency-response chatbots. These tailored chatbots integrate various decision-making models, enabling them to quickly parse vast data streams and suggest actionable steps for human operators in real time.

This collaboration not only speeds up response times during critical situations but also enhances the overall effectiveness of emergency services, ensuring that support reaches those in need faster.

Another interesting example in this field was the Deutsche Telekom drone with an integrated LTE base station, which can be deployed in emergencies to deliver temporary coverage to an affected area or extend the service footprint during sports events and festivals, for example.

Enhancing Radio Access Networks (RAN)

Telecommunication companies are increasingly turning to advanced applications to manage the growing complexity of their networks and provide high-quality, uninterrupted service for their customers.

By leveraging artificial intelligence, these applications can proactively monitor network performance, detect anomalies in real time, and automatically implement corrective measures. This not only enhances network reliability but also reduces operational costs and minimizes downtime, paving the way for more efficient, agile, and customer-focused network management.
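The monitor-detect-correct loop described above can be sketched in miniature. The example below is purely illustrative, not any vendor's implementation: it flags a network metric sample as anomalous when it deviates from a rolling baseline by more than a threshold number of standard deviations, which is one simple way real-time anomaly detection is often bootstrapped.

```python
from collections import deque
from statistics import mean, stdev

class AnomalyDetector:
    """Illustrative sketch: rolling z-score anomaly detection on a metric stream."""

    def __init__(self, window: int = 20, threshold: float = 3.0):
        self.samples = deque(maxlen=window)  # recent healthy readings
        self.threshold = threshold           # z-score cutoff

    def observe(self, value: float) -> bool:
        """Return True if `value` is anomalous relative to recent history."""
        if len(self.samples) >= 5:  # need a minimal baseline first
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                return True  # anomalous: don't pollute the baseline
        self.samples.append(value)
        return False

detector = AnomalyDetector()
readings = [12.0, 11.8, 12.1, 12.3, 11.9, 12.0, 95.0]  # latency samples in ms
flags = [detector.observe(r) for r in readings]
print(flags)  # only the 95 ms spike is flagged
```

In a production system the `observe` hook would be wired to live telemetry and a flagged sample would trigger a corrective action (rerouting, scaling) rather than just a boolean.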

One notable example was the Deutsche Telekom and Google Cloud collaboration: RAN Guardian. Built using Gemini 2.0, this agent analyzes network behavior, identifies performance issues, and takes corrective measures to boost reliability, lower operational costs, and improve customer experience.

As telecom networks become more complex, conventional rule-based automation struggles to handle real-time challenges. In contrast, agentic AI employs large language models (LLMs) and sophisticated reasoning frameworks to create intelligent systems capable of independent thought, action, and learning.
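To make the contrast with rule-based automation concrete, here is a minimal, hypothetical sketch of an agentic loop: a model (stubbed with a plain function here, standing in for an LLM call) observes the network state, chooses an action from a fixed toolset, and the loop applies that action until the state is healthy. All names and tools are illustrative assumptions, not RAN Guardian's actual design.

```python
def stub_llm(state: dict) -> str:
    """Stand-in for an LLM call: map the observed state to an action name."""
    if state["packet_loss"] > 0.05:
        return "reroute_traffic"
    if state["cpu_load"] > 0.9:
        return "scale_out"
    return "done"

# Toolset the agent may invoke; each tool returns the updated state.
TOOLS = {
    "reroute_traffic": lambda s: {**s, "packet_loss": 0.0},
    "scale_out": lambda s: {**s, "cpu_load": 0.5},
}

def agent_loop(state: dict, max_steps: int = 5) -> dict:
    """Observe -> reason -> act until the model reports 'done' or steps run out."""
    for _ in range(max_steps):
        action = stub_llm(state)      # reason: choose the next action
        if action == "done":
            break
        state = TOOLS[action](state)  # act: apply the chosen tool
    return state

result = agent_loop({"packet_loss": 0.2, "cpu_load": 0.95})
print(result)  # → {'packet_loss': 0.0, 'cpu_load': 0.5}
```

The key difference from rule-based automation is that the decision step is a model call rather than a hand-written rule table, so the same loop can generalize to situations the rules never anticipated.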

What’s next in the world of AI?

The innovation on show at MWC 2025 confirms that AI is rapidly transitioning from a research topic to a fundamental component of telecom and enterprise operations. Wide-scale AI adoption is, however, a balancing act between cost, benefit, and risk management.

Telcos are global by design, operating in multiple regions with varying business needs and local regulations. Ensuring service continuity and a good return on investment from AI-driven applications while carefully navigating regional laws around data privacy and security is no mean feat.

If you want to learn more about incorporating AI into your business operations, we can help.

Gcore Everywhere Inference simplifies large-scale AI deployments with an easy-to-use serverless inference tool that abstracts away the complexity of AI hardware, letting users deploy and manage AI inference globally in just a few clicks. It enables fully automated, auto-scalable deployment of inference workloads across multiple geographic locations, making it easier to handle fluctuating demand while simplifying deployment and maintenance.

Learn more about Gcore Everywhere Inference

