
GPU Cloud infrastructure

Gcore GPU Cloud provides high-performance compute clusters designed for machine learning tasks.

AI GPU infrastructure

Train your ML models with the latest NVIDIA GPUs. We offer a wide range of Bare Metal servers and Virtual Machines powered by NVIDIA A100, H100, and L40S GPUs.

Pick the configuration and reservation plan that best fits your computing requirements.

| Specification | Characteristics | Use case | Performance |
|---|---|---|---|
| H100 with InfiniBand | 8x NVIDIA H100 80GB, 2x Intel Xeon 8480+, 2 TB RAM, 2x 960 GB, 8x 3.84 TB NVMe, 3.2 Tbit/s InfiniBand, 2x 100 Gbit/s Ethernet | Optimized for distributed training of Large Language Models. | Ultimate performance for compute-intensive tasks that require significant data exchange over the network. |
| A100 with InfiniBand | 8x NVIDIA A100 80GB, 2x Intel Xeon 8468, 2 TB RAM, 2x 960 GB SSD, 8x 3.84 TB NVMe, 800 Gbit/s InfiniBand | Distributed training of ML models and a broad range of HPC workloads. | Well balanced in performance and price. |
| A100 without InfiniBand | 8x NVIDIA A100 80GB, 2x Intel Xeon 8468, 2 TB RAM, 2x 960 GB SSD, 8x 3.84 TB NVMe, 2x 100 Gbit/s Ethernet | Training and fine-tuning of models on single nodes. Inference for large models. Multi-user HPC cluster. | The best solution for inference of models that require more than 48 GB of VRAM. |
| L40S | 8x NVIDIA L40S, 2x Intel Xeon 8468, 2 TB RAM, 4x 7.68 TB NVMe SSD, 2x 25 Gbit/s Ethernet | Model inference. Fine-tuning of small and medium-size models. | The best solution for inference of models that require less than 48 GB of VRAM. |
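After a node with one of these configurations is provisioned, you can quickly confirm that all GPUs are visible to your framework. The snippet below is a minimal sketch using PyTorch (one of the supported frameworks listed later in this article); it is not Gcore-specific tooling and assumes PyTorch with CUDA support is already installed on the node.

```python
import torch

# Fail fast if no CUDA-capable GPU is visible on the node.
assert torch.cuda.is_available(), "No CUDA-capable GPU detected"

count = torch.cuda.device_count()
print(f"Visible GPUs: {count}")  # expect 8 on the configurations listed above

for i in range(count):
    props = torch.cuda.get_device_properties(i)
    # total_memory is reported in bytes; convert it to GiB for readability.
    print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GiB")
```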

Explore our competitive pricing on the AI GPU Cloud infrastructure pricing page.

Tools supported by Gcore GPU Cloud

| Tool class | List of tools | Explanation |
|---|---|---|
| Frameworks | TensorFlow, Keras, PyTorch, PaddlePaddle, ONNX, Hugging Face | Your model must be built with one of these frameworks to run correctly (see the sketch after this table). |
| Data platforms | PostgreSQL, Hadoop, Spark, Vertica | You can connect our cluster to data platforms of these types so that they work together. |
| Programming languages | JavaScript, R, Swift, Python | Your model must be written in one of these languages to run correctly. |
| Resources for receiving and processing data | Storm, Spark, Kafka, PySpark, MS SQL, Oracle, MongoDB | You can connect our cluster to resources of these types so that they work together. |
| Exploration and visualization tools | Seaborn, Matplotlib, TensorBoard | You can connect our cluster to these tools to visualize your model. |
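As an illustration of how the framework and visualization rows fit together, the sketch below runs a minimal PyTorch training loop on a GPU and logs the loss to TensorBoard. The model, batch data, and log directory are placeholders chosen for the example, not part of the Gcore setup.

```python
import torch
from torch import nn
from torch.utils.tensorboard import SummaryWriter

device = torch.device("cuda")           # train on the first GPU of the node
model = nn.Linear(128, 10).to(device)   # placeholder model for illustration
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
writer = SummaryWriter(log_dir="runs/demo")  # hypothetical TensorBoard log directory

for step in range(100):
    # Random batch stands in for data read from one of the supported data platforms.
    x = torch.randn(32, 128, device=device)
    y = torch.randint(0, 10, (32,), device=device)

    loss = nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    writer.add_scalar("train/loss", loss.item(), step)  # view with TensorBoard

writer.close()
```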
