Receive a 75% discount on your first month.
Valid for any configuration in any location!
AI Infrastructure as a Service
We bring together Graphcore IPUs and Gcore Cloud services to build AI IPU infrastructure under a unified UI and API for ML acceleration.
Get started quickly, save on computing costs, and scale seamlessly to massive IPU compute on demand.
Graphcore IPU cloud services are now available, with free trials and a range of pricing options enabling innovators everywhere to make new breakthroughs in machine intelligence.

Why have we chosen Graphcore IPUs?
Massive Performance Leap
World-leading performance for natural language processing, computer vision and graph networks
Unique architecture for differentiated results
Low latency inference
Much More Flexible
Designed for training and inference
Support for a wide range of ML models
Make new breakthroughs for competitive advantage
Easy to Use
Support from AI experts
Extensive documentation, tutorials, and ready-made models
Popular ML framework support
Exclusive solution pack
Gcore's IPU-based AI cloud is a Graphcore Bow IPU-POD scale-out cluster, offering an effortless way to add state-of-the-art machine intelligence compute on demand, without deploying on-premises hardware or building AI infrastructure from scratch.
The IPU is an entirely new kind of massively parallel processor, co-designed from the ground up with the Poplar® SDK, to accelerate machine intelligence. Cloud IPU’s robust performance and low cost make it ideal for machine learning teams looking to iterate quickly and frequently on their solutions.

Leading the way in AI inference
The Graphcore Bow IPU-POD outperforms the popular NVIDIA DGX A100 when tested on three industry-standard AI models in the cloud. The Bow IPU-POD delivers higher throughput and a better price-performance ratio.

Suspension mode for Cloud virtual vPODs
Suspension mode provides a cost- and resource-efficient solution for temporarily pausing a virtual private cloud environment when it is not in use. By utilizing this feature, you can reduce expenses while preserving the integrity of your data and configurations.
- Only storage and Floating IP (if active) are charged when a cluster is suspended
- Cluster can be easily reactivated with the same configuration
- The network configuration and cluster data are stored on external block storage (ephemeral storage is not preserved). This lets you modify the configuration and expand the cluster as required, providing greater flexibility
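As a rough illustration of the billing rule above, the sketch below compares a suspended cluster's hourly charge with an active one. All rates are invented for the example (the real prices are on the pricing page), and amounts are kept in euro cents to avoid floating-point rounding:

```python
# Hypothetical illustration of suspension-mode billing: while a cluster
# is suspended, only storage and an active Floating IP keep accruing
# charges; compute is not billed. All rates below are made up.

def hourly_charge_cents(compute, storage, floating_ip,
                        suspended, floating_ip_active):
    """Return the hourly charge in euro cents for a cluster."""
    charge = storage                  # storage is always billed
    if floating_ip_active:
        charge += floating_ip         # Floating IP billed while attached
    if not suspended:
        charge += compute             # compute billed only while running
    return charge

# Invented rates: 12.00 EUR/h compute, 0.05 EUR/h storage, 0.01 EUR/h Floating IP.
active = hourly_charge_cents(1200, 5, 1, suspended=False, floating_ip_active=True)
paused = hourly_charge_cents(1200, 5, 1, suspended=True, floating_ip_active=True)
print(active)  # 1206 cents/h while running
print(paused)  # 6 cents/h while suspended
```

The point of the sketch: suspending the cluster drops the bill to the storage and Floating IP line items only, while the saved configuration lets you reactivate later without rebuilding.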
Want to try other AI accelerators?
View our AI GPU Cloud offerings!
Try Gcore bare metal servers and virtual machines powered by NVIDIA A100 and H100 GPUs.
Both are powerful and versatile accelerators ideal for AI and high-performance computing workloads.
Features and advantages
World-class performance for natural language processing
Build, train, and deploy ready-to-use ML models via dashboard, API, or Terraform
Dataset management and integration with S3/NFS storage
Version control: hardware, code, dataset
Secure, trusted cloud platform
Free egress traffic for Virtual vPOD
SLA 99.9% guaranteed uptime
Highly skilled technical support 24/7
Made in the EU
AI full lifecycle tools and integrations
ML and AI solutions:
- TensorFlow
- Keras
- PyTorch
- PaddlePaddle
- ONNX
- Hugging Face

Receiving and processing data:
- Storm
- Spark
- Kafka
- PySpark
- MS SQL
- Oracle
- MongoDB

Development tools:
- Visual Studio Code
- PyCharm
- Jupyter
- GitLab
- GitHub
- RStudio
- Xcode
- Airflow

Exploration and visualization tools:
- Seaborn
- Matplotlib
- TensorBoard

Programming languages:
- JavaScript
- R
- Swift
- Python

Data platforms:
- PostgreSQL
- Hadoop
- Spark
- Vertica

Accelerate ML with ready-made AI Infrastructure
With AI Infrastructure, you can easily train and compare models or run your own training code, with all models stored in one central model repository. From there, models can be deployed to endpoints on Gcore AI Infrastructure.
Gcore's IPU-based AI cloud is designed to help businesses across various fields, including finance, healthcare, manufacturing, and scientific research. It is built to support every stage of their AI adoption journey, from building proof of concept to training and deployment.
- AI model development
- ML models: face recognition, object detection
- AI training and hyperparameter tuning

Locations
IPU-POD256 is available in Amsterdam. It allows customers to explore AI compute at supercomputing scale. Designed to accelerate large and demanding machine learning models, IPU-POD256 gives you the AI resources of a tech giant.

Prices do not include VAT.
Try out vPOD4 for free for 24 hours! Contact our sales team to get the offer!
We stand for the digital sovereignty of the European Union
With IPU-based AI infrastructure solutions, we are realizing Luxembourg's HPC ambitions and turning it into the heart of Europe's AI hub. Thanks to Graphcore hardware and the Gcore edge cloud, the new AI infrastructure can be consumed fully as a service.