GPU Cloud provides dedicated compute infrastructure for machine learning workloads. Use GPU clusters to train models, run inference, and process large-scale AI tasks.

What is a GPU cluster

A GPU cluster is a group of interconnected servers, each equipped with multiple high-performance GPUs. Clusters are designed for workloads that require massive parallel processing power, such as training large language models (LLMs), fine-tuning foundation models, running inference at scale, and high-performance computing (HPC) tasks.
[Image: GPU Cloud create cluster page showing region selection, cluster type, and GPU configuration options]
All nodes in a cluster share the same configuration: operating system image, network settings, and storage mounts. This ensures consistent behavior across the cluster.

Cluster types

Gcore offers two types of GPU clusters:
| Type | Description | Best for |
|---|---|---|
| Bare Metal GPU | Dedicated physical servers with guaranteed resources and no virtualization overhead | Production workloads, long-running training jobs, and latency-sensitive inference |
| Spot Bare Metal GPU | Same hardware as Bare Metal at a reduced price (up to a 50% discount); instances can be preempted with 24 hours' notice when capacity is needed | Fault-tolerant training with checkpointing, batch processing, development, and testing |
Spot instances are ideal for workloads that can handle interruptions. When a Spot cluster is reclaimed, you receive an email notification 24 hours before deletion. Use this time to save critical data to file shares or object storage.
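For example, a training loop on a Spot cluster can checkpoint to a persistent file share and resume after preemption. The following is a minimal sketch assuming PyTorch and a file share mounted at a hypothetical /mnt/share; adjust the path to your own mount point:

```python
import os
import torch

CKPT_DIR = "/mnt/share/checkpoints"   # hypothetical file-share mount point
CKPT = os.path.join(CKPT_DIR, "latest.pt")

def save_checkpoint(model, optimizer, epoch):
    """Write the checkpoint atomically so a mid-write preemption can't corrupt it."""
    os.makedirs(CKPT_DIR, exist_ok=True)
    tmp = CKPT + ".tmp"
    torch.save({"model": model.state_dict(),
                "optimizer": optimizer.state_dict(),
                "epoch": epoch}, tmp)
    os.replace(tmp, CKPT)  # atomic rename on the same filesystem

def load_checkpoint(model, optimizer):
    """Resume from the last checkpoint if one exists; otherwise start at epoch 0."""
    if os.path.exists(CKPT):
        state = torch.load(CKPT, map_location="cpu")
        model.load_state_dict(state["model"])
        optimizer.load_state_dict(state["optimizer"])
        return state["epoch"] + 1
    return 0
```

Calling load_checkpoint at startup and save_checkpoint every epoch means a preempted job loses at most one epoch of work when it is rescheduled.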
Clusters can scale to hundreds of nodes. Production deployments with 250+ nodes in a single cluster are supported, limited only by regional stock availability.

Available configurations

Select a configuration based on your workload requirements:
| Configuration | GPUs | Interconnect | RAM | Storage | Use case |
|---|---|---|---|---|---|
| H100 with InfiniBand | 8x NVIDIA H100 80GB | 3.2 Tbit/s InfiniBand | 2TB | 8x 3.84TB NVMe | Distributed LLM training requiring high-speed inter-node communication |
| H100 (bm3-ai-ndp) | 8x NVIDIA H100 80GB | 3.2 Tbit/s InfiniBand | 2TB | 6x 3.84TB NVMe | Distributed training and latency-sensitive inference at scale |
| A100 with InfiniBand | 8x NVIDIA A100 80GB | 800 Gbit/s InfiniBand | 2TB | 8x 3.84TB NVMe | Multi-node ML training and HPC workloads |
| A100 without InfiniBand | 8x NVIDIA A100 80GB | 2x 100 Gbit/s Ethernet | 2TB | 8x 3.84TB NVMe | Single-node training; inference for large models requiring more than 48GB VRAM |
| L40S | 8x NVIDIA L40S | 2x 25 Gbit/s Ethernet | 2TB | 4x 7.68TB NVMe | Inference; fine-tuning small to medium models requiring less than 48GB VRAM |
Outbound data transfer (egress) from GPU clusters is free. For pricing details, see GPU Cloud billing.
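Once a node is provisioned, a quick sanity check can confirm that the GPUs it exposes match the configuration you selected. This sketch assumes PyTorch with CUDA support is installed on the node:

```python
import torch

assert torch.cuda.is_available(), "No CUDA devices visible on this node"
count = torch.cuda.device_count()
for i in range(count):
    props = torch.cuda.get_device_properties(i)
    # Report the device name and VRAM, e.g. "NVIDIA H100 80GB HBM3, 80 GB"
    print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GB")
print(f"GPUs on this node: {count}")  # expect 8 for the configurations above
```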

InfiniBand networking

InfiniBand is a high-bandwidth, low-latency interconnect technology used for communication between nodes in a cluster. It is configured automatically when you create a cluster: if the selected configuration includes InfiniBand network cards, all nodes are placed in the same InfiniBand domain with no manual setup required. H100 configurations typically have 8 InfiniBand ports per node, each creating a dedicated network interface.

InfiniBand matters most for distributed training, where models that don't fit on a single node require frequent gradient synchronization between GPUs. The same applies to multi-node inference when large models are split across servers. In these cases, InfiniBand reduces communication overhead significantly compared to Ethernet.

For single-node workloads or independent batch jobs that don't require node-to-node communication, InfiniBand provides no benefit. Standard Ethernet configurations work equally well and may be more cost-effective.
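As an illustration, a minimal multi-node PyTorch DistributedDataParallel setup with the NCCL backend looks like the sketch below. NCCL uses InfiniBand (RDMA) automatically when IB interfaces are present; the torchrun invocation and the tiny stand-in model are assumptions for illustration, not Gcore-specific settings:

```python
# Launch on every node with torchrun, e.g.:
#   torchrun --nnodes=<N> --nproc_per_node=8 \
#            --rdzv_backend=c10d --rdzv_endpoint=<head-node-ip>:29500 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")     # NCCL picks up InfiniBand when available
local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(4096, 4096).cuda(local_rank)  # stand-in for a real model
model = DDP(model, device_ids=[local_rank])

# Each backward() now triggers a gradient all-reduce across every GPU in the
# cluster; on IB configurations that traffic goes over InfiniBand.
dist.destroy_process_group()
```

Setting NCCL_IB_DISABLE=1 in the environment forces NCCL back onto TCP, which can be useful when debugging interconnect issues.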

Storage options

GPU clusters support two storage types:
| Storage type | Persistence | Performance | Use case |
|---|---|---|---|
| Local NVMe | Temporary (deleted with the cluster) | Highest IOPS, lowest latency | Training data cache, checkpoints during training |
| File shares | Persistent (independent of the cluster) | Network-attached; lower latency than object storage | Datasets, model weights, shared checkpoints |
Learn more about configuring file shares for persistent storage and sharing data between nodes.
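A common pattern combines the two tiers: write checkpoints to fast local NVMe every epoch and mirror them to the persistent file share less frequently. The mount paths below (/mnt/nvme, /mnt/share) are assumptions, so substitute your actual mount points:

```python
import shutil
import torch

LOCAL_CKPT = "/mnt/nvme/ckpt.pt"    # fast local NVMe; deleted with the cluster
SHARED_CKPT = "/mnt/share/ckpt.pt"  # file share; survives cluster deletion

def checkpoint(model, epoch, mirror_every=5):
    # Save every epoch at NVMe speed...
    torch.save({"model": model.state_dict(), "epoch": epoch}, LOCAL_CKPT)
    # ...and pay the slower network copy only occasionally.
    if epoch % mirror_every == 0:
        shutil.copy2(LOCAL_CKPT, SHARED_CKPT)
```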

Cluster lifecycle

Create --> Configure --> Run workloads --> Resize (optional) --> Delete
  1. Create: Select region, GPU type, number of nodes, image, and network settings when creating a Bare Metal GPU cluster.
  2. Configure: Connect via SSH to each node, install required dependencies, and mount file shares to prepare the environment for workloads (see the sketch after this list).
  3. Run workloads: Execute training jobs, run inference services, process data.
  4. Resize: Add or remove nodes based on demand. New nodes inherit the cluster configuration, which you can manage in the Bare Metal GPU cluster details.
  5. Delete: Remove the cluster when no longer needed. Local storage is erased; file shares remain.
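The sketch below illustrates the Configure step by running the same setup commands on every node over SSH. It assumes the paramiko library is installed; the node addresses, username, key path, and mount command are all placeholders to adapt to your cluster:

```python
import os
import paramiko

NODE_IPS = ["192.0.2.10", "192.0.2.11"]  # placeholder addresses for your nodes
SETUP_CMDS = [
    "sudo apt-get update && sudo apt-get install -y nfs-common",
    "sudo mkdir -p /mnt/share",
    "sudo mount -t nfs <file-share-endpoint>:/share /mnt/share",  # placeholder endpoint
]

for ip in NODE_IPS:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(ip, username="ubuntu",
                   key_filename=os.path.expanduser("~/.ssh/id_rsa"))
    for cmd in SETUP_CMDS:
        _, stdout, _ = client.exec_command(cmd)
        # recv_exit_status() blocks until the command finishes on the node
        print(f"{ip}: {cmd!r} exited {stdout.channel.recv_exit_status()}")
    client.close()
```

Because all nodes share the same image and network settings, running an identical command list on each node keeps the environment consistent across the cluster.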
GPU clusters may take 15–40 minutes to provision, and their configuration (image, network, and storage) is fixed at creation. Local NVMe storage is temporary, so critical data should be saved to persistent file shares. Spot clusters can be interrupted with 24 hours' notice, and cluster size is limited by available regional stock.
Hardware firewall support is available on servers equipped with BlueField network cards, enhancing network security for GPU clusters.