
GPU Cloud infrastructure

Gcore GPU Cloud provides high-performance compute clusters designed for machine learning tasks.

AI GPU infrastructure

Train your ML models with the latest NVIDIA GPUs. We offer a wide range of Bare Metal servers and Virtual Machines powered by NVIDIA A100, H100, and L40S GPUs.

Pick the configuration and reservation plan that best fits your computing requirements.

H100 with InfiniBand
Characteristics: 8x NVIDIA H100 80GB, 2x Intel Xeon 8480+, 2 TB RAM, 2x 960 GB, 8x 3.84 TB NVMe, 3.2 Tbit/s InfiniBand, 2x 100 Gbit/s Ethernet
Use case: Optimized for distributed training of Large Language Models.
Performance: Ultimate performance for compute-intensive tasks that require a significant exchange of data over the network.

A100 with InfiniBand
Characteristics: 8x NVIDIA A100 80GB, 2x Intel Xeon 8468, 2 TB RAM, 2x 960 GB SSD, 8x 3.84 TB NVMe, 800 Gbit/s InfiniBand
Use case: Distributed training for ML models and a broad range of HPC workloads.
Performance: Well-balanced in performance and price.

A100 without InfiniBand
Characteristics: 8x NVIDIA A100 80GB, 2x Intel Xeon 8468, 2 TB RAM, 2x 960 GB SSD, 8x 3.84 TB NVMe, 2x 100 Gbit/s Ethernet
Use case: Training and fine-tuning of models on single nodes; inference for large models; multi-user HPC clusters.
Performance: The best solution for inference models that require more than 48 GB of vRAM.

L40S
Characteristics: 8x NVIDIA L40S, 2x Intel Xeon 8468, 2 TB RAM, 4x 7.68 TB NVMe SSD, 2x 25 Gbit/s Ethernet
Use case: Model inference; fine-tuning for small and medium-size models.
Performance: The best solution for inference models that require less than 48 GB of vRAM.
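
A rough way to judge which configuration an inference model needs is to estimate the memory required just to hold its weights. The sketch below is a back-of-the-envelope calculation; the parameter counts are illustrative, and activations, KV cache, and runtime overhead add to the result:

```python
def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Approximate memory (GB) needed to hold model weights.

    bytes_per_param defaults to 2 (FP16). Treat the result as a lower
    bound: activations and KV cache are not included.
    """
    return num_params * bytes_per_param / 1e9

# A hypothetical 33B-parameter model in FP16 needs ~66 GB just for weights,
# so it exceeds the 48 GB threshold and calls for the A100 80GB configurations,
# while a 7B-parameter model (~14 GB) fits comfortably on an L40S.
print(f"{weight_memory_gb(33e9):.0f} GB")  # 66 GB
print(f"{weight_memory_gb(7e9):.0f} GB")   # 14 GB
```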

Explore our competitive pricing on the AI GPU Cloud infrastructure pricing page.

AI IPU infrastructure

Beyond our GPU offerings, we also provide IPUs. Our Graphcore infrastructure consists of three entities:

  • Poplar server manages all the other servers in the cluster. You have full access to this server via SSH and can work with it directly to manage the infrastructure and run your model (a scripted SSH example follows this list).

  • M2000 or Bow-2000 server performs the calculations made while training your model. You don’t have access to it; this server receives commands from the Poplar server. The available server type is location-dependent.

  • vIPU controller (virtual Intelligence Processing Unit) is a service that configures the M2000/Bow-2000 servers of your AI infrastructure into a cluster. It's involved while the cluster is being created and while you’re changing its configuration (e.g., resizing partitions). You have access to the vIPU controller via the API and can rebuild the cluster if needed.
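
Because you have full SSH access to the Poplar server, routine checks can be scripted from your own machine. Below is a minimal sketch using Python with Paramiko; the hostname, username, and key path are placeholders, and gc-monitor is assumed to be available on the server as part of the Poplar SDK tooling:

```python
import os
import paramiko

# Placeholder connection details for your Poplar server.
POPLAR_HOST = "poplar.example.internal"
POPLAR_USER = "ubuntu"
KEY_PATH = os.path.expanduser("~/.ssh/id_rsa")

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(POPLAR_HOST, username=POPLAR_USER, key_filename=KEY_PATH)

# Run Graphcore's gc-monitor utility to list attached IPUs and their utilization.
_, stdout, stderr = client.exec_command("gc-monitor")
print(stdout.read().decode())
print(stderr.read().decode())

client.close()
```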

For dataset storage, you can use the Poplar server's disk space, external S3 storage, or Gcore Object Storage.
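
For example, a dataset kept in an S3-compatible bucket (external S3 storage or Gcore Object Storage) can be pulled onto the Poplar server with any standard S3 client. A minimal sketch using boto3, where the endpoint, credentials, bucket, and object key are placeholders:

```python
import boto3

# Placeholder endpoint and credentials for an S3-compatible bucket.
s3 = boto3.client(
    "s3",
    endpoint_url="https://your-object-storage-endpoint.example",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Download a dataset archive to local disk on the Poplar server.
s3.download_file("my-datasets", "imagenet/train.tar", "/mnt/data/train.tar")
```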

Figure: AI Infrastructure scheme

Server specifications and performance

We provide two types of Graphcore servers: M2000 and Bow-2000. M2000 is a second-generation machine and Bow-2000 is a third-generation one. Server types are location-dependent.

Bow-2000 specifications

IPU processors: 4x Bow IPU processors (IPU frequency 1.85 GHz); 5,888 IPU-Cores™ with independent code execution on 35,328 worker threads
AI compute: 1.394 petaFLOPS AI (FP16.16) compute; 0.349 petaFLOPS FP32 compute
Memory: up to ~260 GB (3.6 GB In-Processor Memory™ plus up to 256 GB Streaming Memory™); 261 TB/s memory bandwidth
Streaming Memory: 2x DDR4-2400 DIMM; DRAM options: 2x 64 GB (default SKU in Bow-2000 Founder’s Edition) or 2x 128 GB (contact sales)
IPU-Gateway: 1x IPU-Gateway chip with integrated Arm Cortex quad-core A-series SoC
Host connectivity: RoCEv2 NIC (1x PCIe Gen4 x16 FH¾L slot); standard QSFP ports
Mechanical: 1U 19-inch chassis (Open Compute compliant); 440 mm (width) x 728 mm (depth) x 1U (height); weight: 16.395 kg (36.14 lbs)
Lights-out management: OpenBMC AST2520

M2000 specifications

IPU processors: 4x Colossus™ GC200 IPU processors (IPU frequency 1.325 GHz); 5,888 IPU-Cores™ with independent code execution on 35,328 worker threads
AI compute: 1 petaFLOPS AI compute; 0.25 petaFLOPS FP32 compute
IPU-Fabric: 8x IPU-Links supporting 2 Tbps bi-directional bandwidth, 8x OSFP ports; switch-less scalability with up to 8 M2000s in directly connected stacked systems or up to 16 M2000s in IPU-POD systems; 2x IPU-GW-Links (IPU-Link extension over 100GbE) with 2x QSFP28 ports; switch or switch-less scalability supporting 400 Gbps bi-directional bandwidth, up to 1,024 IPU-M2000s connected
IPU-Gateway: 1x IPU-Gateway with integrated Arm Cortex quad-core A-series SoC
Streaming Memory: 2x DDR4-2400 DIMM DRAM; DRAM options: 2x 64 GB (default SKU in IPU-M2000 Founder’s Edition), 2x 128 GB, or 2x 256 GB (contact sales)
Internal SSD: 32 GB eMMC, 1 TB M.2 SSD
Mechanical: 1U 19-inch chassis (Open Compute compliant); 440 mm (width) x 728 mm (depth) x 1U (height); weight: 16.395 kg (36.14 lbs)
Lights-out management: OpenBMC AST2520; 2x 1GbE RJ45 management ports

Tools supported by Gcore GPU Cloud

Framework: TensorFlow, Keras, PyTorch, PaddlePaddle, ONNX, Hugging Face. Your model must use one of these frameworks to work correctly.
Data platforms: PostgreSQL, Hadoop, Spark, Vertica. You can set up a connection between our cluster and your data platforms of these types to make them work together.
Programming languages: JavaScript, R, Swift, Python. Your model must be written in one of these languages to work correctly.
Resources for receiving and processing data: Storm, Spark, Kafka, PySpark, MS SQL, Oracle, MongoDB. You can set up a connection between our cluster and your resources of these types to make them work together.
Exploration and visualization tools: Seaborn, Matplotlib, TensorBoard. You can connect our cluster to these tools to visualize your model.
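
To give a concrete picture of how the listed tools fit together, here is a minimal, self-contained sketch of a PyTorch training loop that logs its loss to TensorBoard; the toy model, synthetic data, and log directory are illustrative only:

```python
import torch
from torch import nn
from torch.utils.tensorboard import SummaryWriter

device = "cuda" if torch.cuda.is_available() else "cpu"

# Toy model and synthetic data, purely to illustrate the supported tooling.
model = nn.Linear(16, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()
writer = SummaryWriter(log_dir="runs/demo")  # TensorBoard reads from this directory

x = torch.randn(256, 16, device=device)
y = torch.randn(256, 1, device=device)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    # View the curve with: tensorboard --logdir runs
    writer.add_scalar("train/loss", loss.item(), step)

writer.close()
```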
