Leonardo AI delivers high-speed, global content creation with Gcore AI services
- July 29, 2025
- 3 min read
Leonardo.Ai helps creators turn ideas into stunning AI-generated content in seconds. Headquartered in Australia and now part of Canva, the company gives game developers, designers, and marketers powerful tools to generate and refine images, videos, and creative assets in real time.
As James Stewart, DevOps Engineering Manager at Leonardo.Ai, explains, the team's top priority is speed. Their north-star value is "go fast": taking ideas to prototype and release at an impressive pace. But delivering that kind of speed at scale takes serious GPU infrastructure and deep orchestration expertise.
Seeking speed, scale, and infrastructure maturity under pressure
Delivering AI speed at scale for customers worldwide requires powerful, on-demand GPU inference infrastructure. Early on, Leonardo found that limited GPU availability and high cost were bottlenecks.
GPUs make up a significant part of our operating costs, so competitive pricing and availability are crucial for us.
James Stewart, DevOps Engineering Manager, Leonardo.Ai
With big growth goals ahead, Leonardo needed an efficient, flexible GPU provider to support its plans for speed and scale. They evaluated options ranging from global hyperscalers to local GPU services. Some providers looked promising but had no capacity available. Others offered low prices or easy access with no long-term commitment, but were missing essential features like private networking, infrastructure-as-code, or 24/7 support.
Cheap GPUs alone weren’t enough for us. We needed a mature platform with Terraform support, private networking, and reliable support. Otherwise, deployment and maintenance become really painful for our engineers at scale.
James Stewart, DevOps Engineering Manager, Leonardo.Ai
Fortunately, they found what they were looking for in Gcore: solid GPU availability thanks to its Northern Data partnership, a fully featured cloud platform, and genuinely helpful technical support.
We chose Gcore for overall platform integration, features, and support. Compared to some of the less capable GPU providers we've utilized, when using Gcore our engineers don't need to battle with manual infrastructure deployment or performance issues, which means they can focus on the part of the job that they love: actually building.
James Stewart, DevOps Engineering Manager, Leonardo.Ai
Finding a flexible provider that can meet Leonardo’s need for speed
Leonardo.Ai needed infrastructure that wouldn't slow innovation or momentum. With Gcore, it found a fast, flexible, and reliable AI platform able to match its speed of development and ambition. Leonardo chose to run its inference on Gcore GPU Cloud with Bare Metal, which provides the isolation, power, and flexibility its AI workloads demand. Those demanding inference workloads run on NVIDIA H100 and A100 GPUs with zero virtualization overhead, so the company's image and video generation services deliver fast, high-res output with no lag or slowdowns, even under the heaviest loads.
On-demand pricing lets Leonardo.Ai scale GPU usage based on traffic, product cycles, or model testing needs. There's no overprovisioning or unnecessary spending. Leonardo gets a lean, responsive setup that adapts to the business's scale, coupled with tailored support so the team can get the most out of the infrastructure.
We push our infrastructure hard and Gcore handles it with ease. The combination of raw GPU power, availability, fast and easy provisioning, and flexible scaling lets us move as fast as we need to. What really sets Gcore apart though, is the hands-on, personalized support. Their team really understands our setup and helps us to optimize it to our specific needs.
James Stewart, DevOps Engineering Manager, Leonardo.Ai
Delivering real-time creation with top-tier AI infrastructure
Partnering with Gcore helps Leonardo maintain its famously rapid pace of development and consistently deliver innovative new features to Leonardo.Ai users.
With Gcore, we can spin up GPU nodes instantly and trust that they’ll work reliably and consistently. Knowing that Gcore has the capacity that we need, when we need it, allows us to quickly and confidently develop new, cutting-edge features for Leonardo customers without worrying whether or not we’ll have the GPUs available to power them.
James Stewart, DevOps Engineering Manager, Leonardo.Ai
The team now uses Terraform to provision GPUs on demand, and containerized workflows to "go fast" when deploying the suite of Gcore AI services powering Leonardo.Ai.
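To make the Terraform-based workflow concrete, here is a minimal sketch of what declaring an on-demand GPU node could look like. This is an illustrative assumption, not Leonardo.Ai's actual configuration: the resource name, flavor string, and variables below are hypothetical placeholders chosen for the example.

```hcl
# Hypothetical sketch only: resource, flavor, and variable names are
# illustrative placeholders, not Leonardo.Ai's real configuration.
terraform {
  required_providers {
    gcore = {
      source = "G-Core/gcore"
    }
  }
}

# Declare a bare-metal GPU server that can be created and destroyed
# on demand as traffic, product cycles, or model testing require.
resource "gcore_baremetal" "inference_node" {
  name       = "inference-gpu-01"
  flavor_id  = "bm-ai-1xh100"   # placeholder flavor name
  image_id   = var.gpu_image_id
  region_id  = var.region_id
  project_id = var.project_id
}
```

With infrastructure declared this way, scaling up for a launch is a `terraform apply` and scaling back down is equally declarative, which is what keeps the setup lean rather than overprovisioned.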
Powering global AI creativity
Gcore GPU Cloud has become part of the backbone of Leonardo.Ai's infrastructure. By offloading infrastructure complexity to Gcore, the Leonardo.Ai team can stay focused on their customers and innovation.
Our partnership with Gcore gives us the flexibility and performance to innovate without limits. We can scale our AI workloads globally and keep our customers creating.
James Stewart, DevOps Engineering Manager, Leonardo.Ai
Ready to scale your AI workloads globally? Discover how Gcore’s AI services can power your next-generation applications. Find out more about GPU Cloud and Everywhere Inference, see how easy it is to deploy with just three clicks, or get in touch with our AI team for a personalized consultation.