We’re delighted to announce the upcoming availability of NVIDIA H100 GPU accelerators with high-performance NVIDIA ConnectX-7 cards in our Luxembourg data center. The ConnectX-7 cards deliver an impressive 3.2 Tbps of network bandwidth. Paired with these smart adapters, NVIDIA’s most powerful GPUs provide a premier architecture for training, deploying, and serving AI/ML models at scale.
Use Cases for ConnectX-7 Cards
H100 GPUs with ConnectX-7 cards will be part of our Generative AI Cluster, which we launched in September 2023. The cluster comprises dozens of servers powered by NVIDIA A100 and H100 GPUs. It provides the significant performance boost required for training and inference of large AI/ML models, including those for generative AI.
ConnectX-7 cards allow you to connect multiple GPU instances to achieve exceptional performance and scale. This is particularly useful for training and deploying large models, as well as for building scalable multi-GPU clusters that serve inference under unpredictable workloads.
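In practice, a multi-node training job relies on a collective-communication backend such as NCCL, which can use the InfiniBand fabric transparently. The sketch below is a minimal illustration assuming PyTorch with the NCCL backend and a launcher such as torchrun; the hostnames, ports, and toy model are placeholders rather than Gcore-specific settings.

```python
# Minimal multi-node data-parallel training sketch (assumes PyTorch + NCCL).
# Launch on every node, for example:
#   torchrun --nnodes=2 --nproc_per_node=8 \
#            --rdzv_backend=c10d --rdzv_endpoint=<head-node>:29500 train.py
# Hostnames, ports, and the toy model below are illustrative placeholders.

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # NCCL handles inter-node GPU communication; over an InfiniBand fabric
    # it can use RDMA so gradient exchange bypasses the host CPU path.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(4096, 4096).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):  # placeholder training loop
        x = torch.randn(32, 4096, device=local_rank)
        loss = model(x).square().mean()
        optimizer.zero_grad()
        loss.backward()  # gradients are all-reduced across all GPUs and nodes
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

The same script scales from a single server to many nodes; the interconnect bandwidth, not the training code, largely determines how well the gradient all-reduce keeps pace with computation.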
NVIDIA ConnectX-7 InfiniBand Adapter
The NVIDIA ConnectX-7 smart host channel adapter provides the highest networking performance for GPU-intensive workloads such as natural language processing (NLP) applications and supercomputing. Based on the NVIDIA Quantum-2 InfiniBand architecture, ConnectX-7 offers ultra-low latency, 3.2 Tbps throughput, and NVIDIA In-Network Computing acceleration engines that further improve workload scalability.
Key ConnectX-7 features:
- Accelerated software-defined networking
- Enhanced security with inline encryption and decryption of TLS, IPsec, and MACsec
- High storage performance with RDMA/RoCE, GPUDirect Storage, and hardware-based NVMe-oF offload engines (see the configuration sketch after this list)
- Accurate time synchronization for data-center applications and timing-sensitive infrastructures
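Applications generally pick up capabilities like RDMA and GPUDirect without code changes, because communication libraries such as NCCL detect the InfiniBand adapters at runtime. As a rough illustration, the snippet below shows how a job might steer NCCL toward the InfiniBand HCAs and allow GPUDirect RDMA before initializing distributed training; the environment variables are standard NCCL settings, but the values shown are placeholders that depend on the actual server configuration.

```python
# Sketch: steering NCCL toward the InfiniBand adapters before initializing
# distributed training. Values are illustrative placeholders.
import os

os.environ.setdefault("NCCL_IB_DISABLE", "0")       # keep the InfiniBand transport enabled
os.environ.setdefault("NCCL_IB_HCA", "mlx5")        # prefer Mellanox/NVIDIA HCAs (placeholder prefix)
os.environ.setdefault("NCCL_NET_GDR_LEVEL", "SYS")  # allow GPUDirect RDMA system-wide
os.environ.setdefault("NCCL_DEBUG", "INFO")         # log which transport and NICs NCCL selects

# ...then initialize the process group as usual, for example:
# torch.distributed.init_process_group(backend="nccl")
```

Setting NCCL_DEBUG to INFO is a convenient way to confirm in the logs which transport and network interfaces were actually selected.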
Get Started Now
NVIDIA GPUs power our Virtual Instances, Bare Metal servers, and Managed Kubernetes worker nodes based on Virtual and Bare Metal Instances. Gcore AI infrastructure integrates seamlessly with MLOps platforms like UbiOps, streamlining machine learning workflows from experimentation to deployment. You can try our AI GPU infrastructure for free to experience its power for yourself. To get your free trial, fill out this form and our sales team will contact you to discuss the details.
To learn more about AI computing and Gcore’s contribution to the industry, check out our AI blog posts.