AI IPU Infrastructure

Exclusive AI cloud infrastructure to accelerate machine learning. Proudly made in Europe.

AI Infrastructure as a service

We bring together Graphcore IPUs and Gcore Cloud services to build AI IPU infrastructure under a unified UI and API for ML acceleration.

Get started quickly, save on computing costs, and seamlessly scale to massive IPU compute on demand.

Graphcore IPU cloud services are now available, with free trials and a range of pricing options enabling innovators everywhere to make new breakthroughs in machine intelligence.

Why have we chosen Graphcore IPUs?

    Massive Performance Leap

    • World-leading performance for natural language processing, computer vision and graph networks
    • Unique architecture for differentiated results
    • Low-latency inference

    Much More Flexible

    • Designed for both training and inference
    • Support for a wide range of ML models
    • Make new breakthroughs for competitive advantage

    Easy to Use

    • Support from AI experts
    • Extensive documentation, tutorials and pre-built models
    • Popular ML framework support

Exclusive solution pack

Gcore's IPU-based AI cloud is a Graphcore Bow IPU-POD scale-out cluster, offering an effortless way to add state-of-the-art machine intelligence compute on demand, without deploying on-premises hardware or building AI infrastructure from scratch.

The IPU is an entirely new kind of massively parallel processor, co-designed from the ground up with the Poplar® SDK, to accelerate machine intelligence. Cloud IPU’s robust performance and low cost make it ideal for machine learning teams looking to iterate quickly and frequently on their solutions.
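
To make the programming model concrete, below is a minimal sketch of compiling and running a PyTorch model for IPU inference with PopTorch, the PyTorch interface that ships with the Poplar SDK. The model, batch sizes and device-iteration count are illustrative placeholders, and the available options vary by SDK version.

```python
# Minimal sketch (not an official Gcore/Graphcore sample): compiling and
# running a small PyTorch model for inference on an IPU with PopTorch.
import torch
import poptorch


class TinyClassifier(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(128, 256),
            torch.nn.ReLU(),
            torch.nn.Linear(256, 10),
        )

    def forward(self, x):
        return self.net(x)


opts = poptorch.Options()
opts.deviceIterations(16)  # run 16 micro-batches per host/device interaction

model = TinyClassifier().eval()
ipu_model = poptorch.inferenceModel(model, opts)  # compiles for the IPU on first call

# First dimension = micro-batch size (2) * device iterations (16)
batch = torch.randn(32, 128)
logits = ipu_model(batch)
print(logits.shape)  # -> torch.Size([32, 10])
```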


Suspension mode for cloud vPODs

Suspension mode provides a cost-effective and resource-efficient solution for temporarily pausing a virtual private cloud environment when it is not in use. By utilizing this feature, customers can effectively reduce expenses while preserving the integrity of the data and configurations.

  • Only storage and Floating IP (if active) are charged when a cluster is suspended
  • Cluster can be easily reactivated with the same configuration
  • The network configuration and cluster data are stored on external block storage (ephemeral storage is excluded), so you can modify the configuration and expand the cluster as required for greater flexibility

Features and advantages

    • World-class performance for natural language processing
    • Build, train and deploy ready-to-use ML models via dashboard, API, or Terraform
    • Dataset management and integration with S3/NFS storage (see the sketch after this list)
    • Version control: hardware, code, dataset
    • Secure Trusted Cloud platform
    • Free egress traffic (for public or hybrid solutions)
    • 99.9% uptime SLA
    • Highly skilled 24/7 technical support
    • Made in the EU
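
As a hedged illustration of the S3 integration above, the sketch below stages a dataset in S3-compatible object storage with boto3; the endpoint URL, bucket, object keys and credentials are placeholders rather than actual Gcore values.

```python
# Hedged example: staging a dataset in S3-compatible object storage with
# boto3 so training jobs can pull it. The endpoint URL, bucket name, object
# keys and credentials are placeholders, not actual Gcore values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-storage.example",  # placeholder endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Upload a dataset archive into a bucket/prefix used by the training jobs
s3.upload_file("train.tar.gz", "my-datasets", "imagenet/train.tar.gz")

# Confirm the upload by listing the prefix
response = s3.list_objects_v2(Bucket="my-datasets", Prefix="imagenet/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```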

     

Full AI lifecycle tools and integrations

    ML and AI solutions:

    • TensorFlow
    • Keras
    • PyTorch
    • PaddlePaddle
    • ONNX
    • Hugging Face

    Receiving and processing data:

    • Storm
    • Spark
    • Kafka
    • PySpark
    • MS SQL
    • Oracle
    • MongoDB

    Development tools:

    • Visual Studio Code
    • PyCharm
    • Jupyter
    • GitLab
    • GitHub
    • RStudio
    • Xcode
    • Airflow

    Exploration and visualization tools:

    • Seaborn
    • Matplotlib
    • TensorBoard

    Programming languages:

    • JavaScript
    • R
    • Swift
    • Python

    Data platforms:

    • PostgreSQL
    • Hadoop
    • Spark
    • Vertica
    Also supported:

    • Lightning
    • Slurm
    • Kubernetes
    • Prometheus
    • Grafana
    • OpenBMC
    • Redfish
    • OpenStack
    • VMware
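
As one example of how the framework support listed above can be combined, the sketch below fine-tunes a Hugging Face Transformers model on IPUs using the optimum-graphcore package; the model name, dataset, published IPU config and hyperparameters are illustrative, and exact class or argument names may differ between releases.

```python
# Hedged sketch: fine-tuning a Hugging Face Transformers model on IPUs via
# the optimum-graphcore package. The model, dataset, IPU config name and
# hyperparameters are illustrative; exact argument names may differ between
# optimum-graphcore releases.
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.graphcore import IPUConfig, IPUTrainer, IPUTrainingArguments

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tokenize a standard text-classification dataset
dataset = load_dataset("glue", "sst2")


def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True, max_length=128)


dataset = dataset.map(tokenize, batched=True)

# IPU execution config published by Graphcore on the Hugging Face Hub
ipu_config = IPUConfig.from_pretrained("Graphcore/bert-base-ipu")

args = IPUTrainingArguments(
    output_dir="./outputs",
    per_device_train_batch_size=8,
    num_train_epochs=1,
)

trainer = IPUTrainer(
    model=model,
    ipu_config=ipu_config,
    args=args,
    train_dataset=dataset["train"],
    tokenizer=tokenizer,
)
trainer.train()
```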

Accelerate ML with ready-made AI Infrastructure

With Gcore AI Infrastructure, customers can easily train and compare ready-made models or run custom training code, and all models are stored in one central model repository. From there, models can be deployed to endpoints on the same Gcore AI Infrastructure.

Gcore's IPU-based AI cloud is designed to help businesses across various fields, including finance, healthcare, manufacturing, and scientific research. It is built to support every stage of their AI adoption journey, from building proof of concept to training and deployment.

  • AI model development
  • ML models: Face recognition, Object detection
  • AI training and hyperparameter tuning
ML Model delivery and deployment pipelines

Locations

Luxembourg
Amsterdam

IPU-POD systems

Ready to order in Luxembourg in June 2022

IPU-POD systems let you break through barriers to unleash entirely new capabilities in machine intelligence with real business impact. Get ready for production with IPU-POD64 and take advantage of a new approach to operationalizing your AI projects.

IPU-POD64 delivers ultimate flexibility to maximize all available space and power, no matter how it is provisioned. It provides 16 petaFLOPS of AI compute for both training and inference, so you can develop and deploy on the same powerful system.


Pricing

Depending on the location, servers have different traffic options. Detailed information is provided below:

Product | Server config | IPUs | Price
BOW-vPOD4 | 60 vCPU / 116GB RAM / 1100GB NVMe (ephemeral) / 100Gbit/s interconnect | 4 | Order
BOW-vPOD16 | 120 vCPU / 232GB RAM / 2200GB NVMe (ephemeral) / 100Gbit/s interconnect | 16 | Order
BOW-vPOD16 | 240 vCPU / 464GB RAM / 4400GB NVMe (ephemeral) / 100Gbit/s interconnect | 16 | Order
BOW-vPOD64 | 240 vCPU / 464GB RAM / 4400GB NVMe (ephemeral) / 100Gbit/s interconnect | 64 | Order
Bow Pod4 | 2x7763 / 512GB RAM / 2x450 SATA + 7x1.8TB NVMe / 2x100G | 4 | Order
Bow Pod16 | 2x7763 / 512GB RAM / 2x450 SATA + 7x1.8TB NVMe / 2x100G | 16 | Order
Bow Pod64 | 2x7763 / 512GB RAM / 2x450 SATA + 7x1.8TB NVMe / 2x100G | 64 | Order
Bow Pod128 | 2x7763 / 512GB RAM / 2x450 SATA + 7x1.8TB NVMe / 2x100G | 128 | Order
Bow Pod256 | 2x7763 / 512GB RAM / 2x450 SATA + 7x1.8TB NVMe / 2x100G | 256 | Order
Bow Pod1024 | 2x7763 / 512GB RAM / 2x450 SATA + 7x1.8TB NVMe / 2x100G | 1024 | Order

Prices do not include VAT.

Try out vPOD4 for free for 24 hours! Contact our sales team to get the offer!

We stand for the digital sovereignty of the European Union

With the help of IPU-based AI infrastructure solutions, we are realizing Luxembourg's HPC ambitions and turning it into the heart of Europe's AI hub. Thanks to Graphcore hardware and the Gcore edge cloud, the new AI infrastructure can be used entirely as a service.

Nigel Toon, Co-founder and CEO of Graphcore
“Graphcore and Gcore solution is perfect for AI. It will make the power and flexibility of the IPU available to anyone who wants to accelerate their current workloads or to explore the use of next generation ML models.”

Andre Reitenbach, CEO of Gcore
“Gcore is the first European provider to partner with Graphcore to bring innovations to a rapidly changing cloud market. To meet their changing AI needs, users are looking for trusted technologies that are highly efficient, easily accessible, and highly flexible.”

Christophe Brighi, Head of Economic and Commercial Affairs
“This partnership between Luxembourg-based cloud and edge solutions provider Gcore and the UK IPU producer Graphcore illustrates not only the vast opportunities that arise for trade and cooperation between the two countries, but it also confirms Luxembourg’s position as a leading data economy in the EU.”

Request access to ready-to-use AI Infrastructure

Contact us to get a personalized offer

Tell us about the challenges of your business, and we’ll help you grow in any country in the world.