AI · Cloud · News

New NVIDIA H100 GPUs Now Available in Gcore Luxembourg GenAI Cluster

  • January 30, 2024
  • 2 min read

We’re pleased to announce the addition of more than one hundred new NVIDIA H100 GPUs with InfiniBand interconnect to our generative AI (GenAI) cluster in Luxembourg. These are the most advanced NVIDIA GPUs currently available. The new GPUs power Gcore Virtual Instances and Gcore Bare Metal Servers and are available with a six-month commitment. Read on to learn more about the H100, and if you’re interested in benefiting from Gcore AI GPU Infrastructure, please contact our sales team.

About Gcore H100 GPU Infrastructure

We offer the H100 within our AI GPU Infrastructure lineup, which includes NVIDIA A100 and L40S GPUs. The H100 provides impressive capabilities:

  • Peak performance. With up to 7,916 teraFLOPS of performance, the NVIDIA H100 GPU delivers unparalleled processing power, making it ideal for training advanced generative AI and large language models (LLMs).
  • Ultra-fast networking. With InfiniBand interconnect, you can experience up to 3,200 Gbps of data transfer speed—the key to handling complex AI operations such as training LLMs, high-performance computing, and low-latency embedded I/O applications.
  • Variety of configurations. We offer Virtual Instances with multiple GPU configurations for different AI/ML workloads. Bare Metal Servers are equipped with 8x H100 GPUs, 2x Intel Xeon 8468 CPUs, 2 TB RAM, 8x 3.84 TB NVMe SSD, and 3,200 Gbps InfiniBand.
  • Scalability. With the new H100 and support for multi-GPU clusters, our infrastructure is highly adaptable to any ML workflow. We’re ready to handle any load and peak traffic spikes to maintain the level of performance required for your AI/ML applications.
  • Kubernetes GPU worker nodes support. In addition to Virtual Instances and Bare Metal Servers, NVIDIA H100 GPUs power Gcore Managed Kubernetes worker nodes. Run your containerized AI/ML workloads using the best GPU on the market.
  • MLOps platform support. Gcore AI Infrastructure GPUs, including the H100, integrate seamlessly with MLOps platforms like UbiOps, streamlining ML workflows from data preparation to deployment.
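
To illustrate the Managed Kubernetes option above, here is a minimal, hedged sketch of a pod spec that requests a GPU from a GPU worker node. The `nvidia.com/gpu` resource name is the standard convention used by the NVIDIA device plugin for Kubernetes; the pod name, container image, and command are illustrative placeholders, not Gcore-specific values.

```yaml
# Hypothetical pod spec: schedules a training container onto a GPU worker node.
apiVersion: v1
kind: Pod
metadata:
  name: h100-training-job   # placeholder name
spec:
  restartPolicy: Never
  containers:
    - name: trainer
      image: nvcr.io/nvidia/pytorch:24.01-py3   # example CUDA-enabled image
      command: ["python", "train.py"]           # placeholder training script
      resources:
        limits:
          nvidia.com/gpu: 1   # request one GPU; the scheduler picks a node with a free GPU
```

Requesting GPUs through `resources.limits` lets the Kubernetes scheduler place the pod only on worker nodes that expose GPU capacity, so no node selectors are strictly required for a basic single-GPU workload.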

How the H100 Stacks Up Against NVIDIA’s Other Options

To learn more about the H100’s capabilities and how they differ from other powerful GPUs, check out our article comparing the H100 to the NVIDIA A100, L40S, and H200 GPUs.

Conclusion

As a leading cloud AI service provider, we’re constantly evolving to offer the best technology on the market. Our new H100 GPUs with InfiniBand interconnect reaffirm our commitment to providing world-class AI/ML development resources.

To try Gcore AI Infrastructure GPUs for yourself, fill out this form, and our sales team will be in touch.


