
A guide to running an ML model that makes diagnoses based on chest X-rays

  • By Gcore
  • February 16, 2023
  • 5 min read

Our partners at Graphcore have run the vit-base-patch16-224-in21k ML model and trained it on 30,000 chest X-rays. Now the model can diagnose chest diseases. In this article, we’ll tell you how to repeat Graphcore’s success with our AI infrastructure: how to run this model on our equipment and how much it costs.

Introduction

Today’s experiment will be conducted by Mikhail Khlystun, Head of Cloud Operation, responsible for AI at Gcore. We’ll follow him step by step through running a medical machine learning (ML) model that can make diagnoses from X-rays. We’ll set up the cluster, download the dataset and the model, and train it. Next, we’ll test the result by checking how well the model diagnoses test X-rays. Finally, we’ll compare the price of running the model on different flavors and find out which cluster is the most cost-effective to run it on.

How to run the ML model

1. Create an AI cluster to run the model. This can be done through the Gcore Control panel. Select the region, flavor, OS image, and network settings, and generate an SSH key. Click Create cluster, and we’re all set.

2. Access the cluster.

ssh ubuntu@193.57.88.205

3. Create a localdata directory for the dataset and move to it.

mkdir localdata; cd localdata

4. Create a directory in which the model is expected to find its dataset.

mkdir chest-xray-nihcc; cd chest-xray-nihcc

5. Next, we need a dataset of 112,120 chest X-rays from 30,805 patients. The dataset is available at https://nihcc.app.box.com/v/ChestXray-NIHCC. The images directory contains parts of the 42 GB archive that must be downloaded and extracted. The batch_download_zips.py script can be used to make downloading easy.

wget "https://nihcc.app.box.com/index.php?rm=box_download_shared_file&vanity_name=ChestXray-NIHCC&file_id=f_371647823217" -O batch_download_zips.py

6. Download the dataset.

python batch_download_zips.py

7. Extract the downloaded parts of the archive.

for i in images_*; do tar xvf "${i}"; done

8. Download the metadata file. This file contains the “correct answers”, i.e., the doctors’ diagnoses for each dataset scan. We’ll use them to train and validate the model.

Our metadata contains the following fields: image file name, detected findings, age, gender, etc. The distribution of diagnoses across the images in the archive is as follows (source: https://nihcc.app.box.com/v/ChestXray-NIHCC).

wget "https://nihcc.app.box.com/index.php?rm=box_download_shared_file&vanity_name=ChestXray-NIHCC&file_id=f_219760887468" -O Data_Entry_2017_v2020.csv

9. Check the file contents.

head Data_Entry_2017_v2020.csv

Result:

All the data is ready. Now we can move on to running the model.
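As an aside, the distribution of diagnoses mentioned above can be computed from this CSV. Here is a minimal sketch using Python’s standard library; the column name "Finding Labels" is an assumption based on the NIH dataset and should be verified against the head output:

```python
# Sketch: count how often each diagnosis appears in the metadata CSV.
# The "Finding Labels" column name is assumed; check it against the file header.
import csv
import io
from collections import Counter

# Inline sample standing in for Data_Entry_2017_v2020.csv; in practice use:
# reader = csv.DictReader(open("Data_Entry_2017_v2020.csv"))
sample = """Image Index,Finding Labels,Patient Age,Patient Gender
00000001_000.png,Cardiomegaly,58,M
00000002_000.png,No Finding,81,M
00000003_000.png,Cardiomegaly|Emphysema,74,F
"""

counts = Counter()
for row in csv.DictReader(io.StringIO(sample)):
    # Multi-label scans separate diagnoses with "|", so split before counting.
    counts.update(row["Finding Labels"].split("|"))

print(counts.most_common())
```

Running the same loop over the full CSV reproduces the diagnosis distribution published on the dataset page.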

10. Let’s move to the directory with the model code.

cd ~/graphcore/tutorials/tutorials/pytorch/vit_model_training/

11. Run the Docker image with Jupyter and the PyTorch library that our model will work with.

gc-docker -- -it --rm -v ${PWD}:/host_shared_dir -v /home/ubuntu/localdata/:/dataset graphcore/pytorch-jupyter:3.0.0-ubuntu-20.04-20221025

After executing the command, we find ourselves inside the container. All subsequent commands have to be executed there unless clearly specified otherwise.

12. Go to the directory mounted from the host machine, where you’ll see the following contents:

For everything to work correctly, we need to replace sklearn with scikit-learn in the requirements.txt file. This can be done with any text editor or with the following command:

sed -i 's/sklearn/scikit-learn/' requirements.txt

13. Set an environment variable with the path to the dataset.

export DATASET_DIR=/dataset

14. Now we can run jupyter.

jupyter-notebook --no-browser --port 8890 --allow-root --NotebookApp.password='' --NotebookApp.token=''

This command starts the jupyter-notebook web server without authentication, listening only on localhost. Accessing it will require an SSH tunnel, but this eliminates the risk of unauthorized access.

15. Set up a new SSH session with our AI cluster, enabling the tunnel for port 8890. In a separate window, execute the following command:

ssh -NL 8890:localhost:8890 ubuntu@193.57.88.205

16. In the browser, follow the link:

http://127.0.0.1:8890/notebooks/walkthrough.ipynb

We should see the following:

We’re one step away from starting the model training. Just click “Run All”.

In 15 minutes, you can see the results. Here are a few examples downloaded from the dataset:

Here are the loss function and learning_rate graphs for the training set.

The following results were obtained in the evaluation set:

Model testing

We trained the model on 112,120 chest scans. It should now be able to make the correct diagnosis with high probability when processing a new image. We’ll check this by giving it nine new scans whose diagnoses are unknown to the model.

To do this, we’ll add more cells to the notebook with the code.

1. Let’s save the results of our model training.

trainer.save_model()
trainer.save_state()

2. Import the necessary modules.

from transformers import AutoModelForImageClassification, AutoFeatureExtractor
from PIL import Image
import torch

3. Load the feature_extractor and model from the saved files.

feature_extractor = AutoFeatureExtractor.from_pretrained('./results')
model = AutoModelForImageClassification.from_pretrained('./results')

4. Let’s loop over scans from the dataset for “diagnosing”, display the model’s prediction as the header of each image, and add the correct value for comparison.

fig = plt.figure(figsize=(20, 15))
unique_labels = np.array(unique_labels)
convert_image_to_float = transforms.ConvertImageDtype(dtype=torch.float32)

for i, data_dict in enumerate(dataset["validation"]):
    if i == 9:
        break

    image = data_dict["pixel_values"]
    image = convert_image_to_float(image)
    label = data_dict["labels"]

    image_img = Image.open(data_dict["image"].filename)
    encoding = feature_extractor(image_img.convert("RGB"), return_tensors="pt")

    with torch.no_grad():
        outputs = model(**encoding)
        logits = outputs.logits
    predicted_class_idx = logits.argmax(-1).item()

    ax = plt.subplot(3, 3, i + 1)
    ax.set_title("Prediction: " + id2label[predicted_class_idx] + "|Real: " + ", ".join(unique_labels[np.argwhere(label).flatten()]))
    plt.imshow(image[0])

fig.tight_layout()

The pictures above show the model’s predictions alongside the actual diagnoses. Not all results are accurate, but the inaccuracies are consistent with the evaluation metrics (AUC-ROC = 0.7681, loss = 0.1875). Most of the predictions fully or partially coincide with the diagnosis made by the doctor.

Please note that, regardless of the quality of the model, an ML model is no substitute for a doctor. The model can only be a good aid in giving a preliminary result that then needs to be validated.

How much it costs

Let’s calculate how much it would cost to get involved with science and reproduce this experiment. For comparison, we’ll take the cost of running the model on three flavors:

  • Minimum configuration: vPOD4 + Poplar server on a CPU60 virtual machine
  • Medium configuration: vPOD4 + bare metal Poplar server
  • High configuration: vPOD16 + bare metal Poplar server

The names of the flavors and their cost per hour of use are given in the table.

We have divided the process of starting an ML model into two parts: preparation and training. Each measurement was repeated 10 times. The tables below show the average of the 10 attempts.

Preparation

Preparation includes downloading the dataset, extracting the images, and running the commands from the guide above. The execution time doesn’t depend on which AI cluster you rent; it’s routine work that takes the same time everywhere. The price differs, however, because the flavors are priced differently.

Model training includes all the computation performed by the AI cluster. Its duration depends on the flavor: the more powerful the AI cluster, the faster it processes the data and yields results. We calculated net training and validation times using internal monitoring metrics. You can also measure them with the %%time magic command, added as the first line of the desired cell in Jupyter.
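Outside a notebook, the same wall-clock measurement can be sketched with time.perf_counter(); the sum() call below is just a stand-in for the training step:

```python
import time

start = time.perf_counter()
result = sum(i * i for i in range(100_000))  # stand-in for trainer.train()
elapsed = time.perf_counter() - start
print(f"elapsed: {elapsed:.3f} s")
```

In Jupyter, prefixing the cell with %%time produces the same wall-clock figure without any extra code.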

The figures show that increasing the cluster size significantly reduces the time it takes to train the model. vPOD16 was twice as fast as vPOD4.

Let’s summarize the figures to make it easy to assess the difference.

It’s worth noting that with large datasets, the time/money ratio will vary. Think of small and large clusters as a truck and a Boeing. If cargo needs to be transported a short distance, a giant Boeing doesn’t have enough time to gain altitude and speed before it needs to land, so a truck can easily outrun it. But over really long distances (large datasets), the Boeing has the advantage. If your datasets are measured in hundreds or thousands of gigabytes, vPOD16, vPOD64, and vPOD128 will show a considerable speed advantage.

If we increase our dataset 500-fold, the time to train the ML model will increase linearly. In this case, we’d get the following figures:

As you can see, the time/money ratio is very different: with a large dataset, a high configuration offers a clear advantage.
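As a back-of-the-envelope check, the linear extrapolation works like this. All figures below are hypothetical placeholders, not Gcore’s published prices or the measured times from the tables above:

```python
# Hypothetical placeholder figures, for illustration of the extrapolation only.
flavors = {
    "vPOD4":  {"train_hours": 0.25,  "price_per_hour": 4.00},
    "vPOD16": {"train_hours": 0.125, "price_per_hour": 12.00},
}
SCALE = 500  # dataset growth factor; training time is assumed to scale linearly

estimates = {}
for name, f in flavors.items():
    hours = f["train_hours"] * SCALE  # extrapolated training time
    estimates[name] = {"hours": hours, "cost": hours * f["price_per_hour"]}
    print(f"{name}: ~{hours:g} h, ~${estimates[name]['cost']:g}")
```

With the real per-hour prices and measured times substituted in, the same arithmetic reproduces the comparison in the tables.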

Conclusion

ML models are reshaping our lives. They help create pictures and game assets, write code, create text, and make medical diagnoses.

Today, we saw how you can run and train an open-source model to analyze chest X-rays in just two hours. We ran it on Gcore AI Infrastructure, which is designed to deliver ML results quickly. Thanks to it, training took only 6 to 15 minutes (depending on the selected cluster configuration), and most of the time was spent downloading the model and working with the code.

A comparison of AI clusters showed that working with a small dataset is most cost-effective on a cluster with a minimal configuration. For small tasks, it’s almost as fast as a high configuration and far cheaper (we spent only $17). But the situation changes with a large dataset: if you increase the dataset 500-fold, the powerful cluster gains a huge speed advantage, handling the task twice as fast as its smaller counterpart.

Related articles

Edge AI is your next competitive advantage: highlights from Seva Vayner’s webinar

Edge AI isn’t just a technical milestone. It’s a strategic lever for businesses aiming to gain a competitive advantage with AI.As AI deployments grow more complex and more global, central cloud infrastructure is hitting real-world limits: compliance barriers, latency bottlenecks, and runaway operational costs. The question for businesses isn’t whether they’ll adopt edge AI, but how soon.In a recent webinar with Mobile World Live, Seva Vayner, Gcore’s Product Director of Edge Cloud and AI, made the business case for edge inference as a competitive differentiator. He outlined what it takes to stay ahead in a world where speed, locality, and control define AI success.Scroll on to watch Seva explain why your infrastructure choices now shape your market position later.Location is everything: edge over cloudAI is no longer something globally operating businesses can afford to run from a central location. Regional regulations and growing user expectations mean models must be served as close to the user as possible. This reduces latency, but perhaps more importantly is essential for compliance with local laws.Edge AI also keeps costs down by avoiding costly international traffic routes. When your users are global but your infrastructure isn’t, every request becomes an expensive, high-latency journey across the internet.Edge inference solves three problems at once in an increasingly regionally fragmented AI landscape:Keeps compute near users for low latencyCuts down on international transit for reduced costsHelps companies stay compliant with local lawsPrivate edge: control over convenienceMany businesses started their AI journey by experimenting with public APIs like OpenAI’s. But as companies and their AI use cases mature, that’s not good enough anymore. They need full control over data residency, model access, and deployment architecture, especially in regulated industries or high-sensitivity environments.That’s where private edge deployments come in. 
Instead of relying on public endpoints and shared infrastructure, organizations can fully isolate their AI environments, keeping data secure and models proprietary.This approach is ideal for healthcare, finance, government, and any sector where data sovereignty and operational security are critical.Optimizing edge AI: precision over powerDeploying AI at the edge requires right-sizing your infrastructure for the models and tasks at hand. That’s both technically smarter and far more cost-effective than throwing maximum power and size at every use case.Making smart trade-offs allows businesses to scale edge AI sustainably by using the right hardware for each use case.AI at the edge helps businesses deliver the experience without the excess. With the control that the edge brings, hardware costs can be cut by using exactly what each device or location requires, reducing financial waste.Final takeawayAs Seva put it, AI infrastructure decisions are no longer just financial; they’re part of serious business strategy. From regulatory compliance to operational cost to long-term scalability, edge inference is already a necessity for businesses that plan to serve AI at scale and get ahead in the market.Gcore offers a full suite of public and private edge deployment options across six continents, integrated with local telco infrastructure and optimized for real-time performance. Learn more about Everywhere Inference, our edge AI solution, or get in touch to see how we can help tailor a deployment model to your needs.Ready to get started? Deploy a model in just three clicks with Gcore Everywhere Inference.Discover Everywhere Inference

From budget strain to AI gain: Watch how studios are building smarter with AI

Game development is in a pressure cooker. Budgets are ballooning, infrastructure and labor costs are rising, and players expect more complexity and polish with every release. All studios, from the major AAAs to smaller indies, are feeling the strain.But there is a way forward. In a recent webinar, Sean Hammond, Territory Manager for the UK and Nordics at Gcore, explained how AI is reshaping game development workflows and how the right infrastructure strategy can reduce costs, speed up production, and create better player experiences.Scroll on to watch key moments from Sean's talk and explore how studios can make AI work for them.Rising costs are threatening game developmentGame revenue has slowed, but development costs continue to rise. Some AAA titles now surpass $100 million in development budgets. The complexity of modern games demands more powerful servers, scalable infrastructure, and larger teams, making the industry increasingly unsustainable.Personnel and infrastructure costs are also climbing. Developers, artists, and QA testers with specialized skills are in high demand, as are technologies like VR, AR, and AI. Studios are also having to invest more in cybersecurity to protect player data, detect cheating, and safeguard in-game economies.AI is revolutionizing GameDev, even without a perfect use caseWhile the perfect use case for AI in gaming may not have been found yet, it’s already transforming how games are built, tested, and personalized.Sean highlighted emerging applications, including:Smarter QA testingAI-driven player personalizationReal-time motion and animationAccelerated environment and character designMultilingual localizationAdaptive game balancingStudios are already applying these technologies to reduce production timelines and improve immersion.The challenge of secure, scalable AI adoptionOf course, AI adoption doesn’t come without its challenges. Chief among them is security. 
Public models pose risks: no studio wants their proprietary assets to end up training a competitor’s model.The solution? Deploy AI models on infrastructure you trust so you’re in complete control. That’s where Gcore comes in.Gcore Everywhere Inference reduces compute costs and infrastructure bloat by allowing you to deploy only what you need, where you need it.The future of gaming is AI at scaleTo power real-time player experiences, your studio needs to deploy AI globally, close to your users.Gcore Everywhere Inference lets you deploy models worldwide at the edge with minimal latency because data is not routed back to central servers. This means fast, responsive gameplay and a new generation of real-time, AI-driven features.As a company originally built by gamers, we’ve developed AI solutions with gaming studios in mind. Here’s what we offer:Global edge inference for real-time gameplay: Deploy your AI models close to players worldwide, enabling fast, responsive player experiences without routing data to central servers.Full control over AI model deployment and IP protection: Avoid public APIs and retain full ownership of your assets with on-prem options, preventing your proprietary data from being available to competitors.Scalable, cost-efficient infrastructure tailored to gaming workloads: Deploy only what you need to avoid overprovisioning and reduce compute costs without sacrificing performance.Enhanced player retention through AI-driven personalization and matchmaking: Real-time inference powers smarter NPCs and dynamic matchmaking, improving engagement and keeping players coming back for more.Deploy models in 3 clicks and under 10 seconds: Our developer-friendly platform lets you go from trained model to global deployment in seconds. No complex DevOps setup required.Final takeawayAI is advancing game development fast, but only if it’s deployed right. 
Gcore offers scalable, secure, and cost-efficient AI infrastructure that helps studios create smarter, faster, and more immersive games.Want to see how it works? Deploy your first model in just a few clicks.Check out our blog on how AI is transforming gaming in 2025

How AI-enhanced content moderation is powering safe and compliant streaming

How AI-enhanced content moderation is powering safe and compliant streaming

As streaming experiences a global boom across platforms, regions, and industries, providers face a growing challenge: how to deliver safe, respectful, and compliant content delivery at scale. Viewer expectations have never been higher, likewise the regulatory demands and reputational risks.Live content in particular leaves little room for error. A single offensive comment, inappropriate image, or misinformation segment can cause long-term damage in seconds.Moderation has always been part of the streaming conversation, but tools and strategies are evolving rapidly. AI-powered content moderation is helping providers meet their safety obligations while preserving viewer experience and platform performance.In this article, we explore how AI content moderation works, where it delivers value, and why streaming platforms are adopting it to stay ahead of both audience expectations and regulatory pressures.Real-time problems require real-time solutionsHuman moderators can provide accuracy and context, but they can’t match the scale or speed of modern streaming environments. Live streams often involve thousands of viewers interacting at once, with content being generated every second through audio, video, chat, or on-screen graphics.Manual review systems struggle to keep up with this pace. In some cases, content can go viral before it is flagged, like deepfakes that circulated on Facebook leading up to the 2025 Canadian election. In others, delays in moderation result in regulatory penalties or customer churn, like X’s 2025 fine under the EU Digital Services Act for shortcomings in content moderation and algorithm transparency. This has created a demand for scalable solutions that act instantly, with minimal human intervention.AI-enhanced content moderation platforms address this gap. These systems are trained to identify and filter harmful or non-compliant material as it is being streamed or uploaded. 
They operate across multiple modalities—video frames, audio tracks, text inputs—and can flag or remove content within milliseconds of detection. The result is a safer environment for end users.How AI moderation systems workModern AI moderation platforms are powered by machine learning algorithms trained on extensive datasets. These datasets include a wide variety of content types, languages, accents, dialects, and contexts. By analyzing this data, the system learns to identify content that violates platform policies or legal regulations.The process typically involves three stages:Input capture: The system monitors live or uploaded content across audio, video, and text layers.Pattern recognition: It uses models to identify offensive content, including nudity, violence, hate speech, misinformation, or abusive language.Contextual decision-making: Based on confidence thresholds and platform rules, the system flags, blocks, or escalates the content for review.This process is continuous and self-improving. As the system receives more inputs and feedback, it adapts to new forms of expression, regional trends, and platform-specific norms.What makes this especially valuable for streaming platforms is its low latency. Content can be flagged and removed in real time, often before viewers even notice. This is critical in high-stakes environments like esports, corporate webinars, or public broadcasts.Multi-language moderation and global streamingStreaming audiences today are truly global. Content crosses borders faster than ever, but moderation standards and cultural norms do not. What’s considered acceptable in one region may be flagged as offensive in another. A word that is considered inappropriate in one language might be completely neutral in another. A piece of nudity in an educational context may be acceptable, while the same image in another setting may not be. 
Without the ability to understand nuance, AI systems risk either over-filtering or letting harmful content through.That’s why high-quality moderation platforms are designed to incorporate context into their models. This includes:Understanding tone, not just keywordsRecognizing culturally specific gestures or idiomsAdapting to evolving slang or coded languageApplying different standards depending on content type or target audienceThis enables more accurate detection of harmful material and avoids false positives caused by mistranslation.Training AI models for multi-language support involves:Gathering large, representative datasets in each languageTeaching the model to detect content-specific risks (e.g., slurs or threats) in the right cultural contextContinuously updating the model as language evolvesThis capability is especially important for platforms that operate in multiple markets or support user-generated content. It enables a more respectful experience for global audiences while providing consistent enforcement of safety standards.Use cases across the streaming ecosystemAI moderation isn’t just a concern for social platforms. It plays a growing role in nearly every streaming vertical, including the following:Live sports: Real-time content scanning helps block offensive chants, gestures, or pitch-side incidents before they reach a wide audience. Fast filtering protects the viewer experience and helps meet broadcast standards.Esports: With millions of viewers and high emotional stakes, esports platforms rely on AI to remove hate speech and adult content from chat, visuals, and commentary. This creates a more inclusive environment for fans and sponsors alike.Corporate live events: From earnings calls to virtual town halls, organizations use AI moderation to help ensure compliance with internal communication guidelines and protect their reputation.Online learning: EdTech platforms use AI to keep classrooms safe and focused. 
Moderation helps filter distractions, harassment, and inappropriate material in both live and recorded sessions.On-demand entertainment: Even outside of live broadcasts, moderation helps streaming providers meet content standards and licensing obligations across global markets. It also ensures user-submitted content (like comments or video uploads) meets platform guidelines.In each case, the shared goal is to provide a safe and trusted streaming environment for users, advertisers, and creators.Balancing automation with human oversightAI moderation is a powerful tool, but it shouldn’t be the only one. The best systems combine automation with clear review workflows, configurable thresholds, and human input.False positives and edge cases are inevitable. Giving moderators the ability to review, override, or explain decisions is important for both quality control and user trust.Likewise, giving users a way to appeal moderation decisions or report issues ensures that moderation doesn’t become a black box. Transparency and user empowerment are increasingly seen as part of good platform governance.Looking ahead: what’s next for AI moderationAs streaming becomes more interactive and immersive, moderation will need to evolve. AI systems will be expected to handle not only traditional video and chat, but also spatial audio, avatars, and real-time user inputs in virtual environments.We can also expect increased demand for:Personalization, where viewers can set their own content preferencesIntegration with platform APIs for programmatic content governanceCross-platform consistency to support syndicated content across partnersAs these changes unfold, AI moderation will remain central to the success of modern streaming. 
Platforms that adopt scalable, adaptive moderation systems now will be better positioned to meet the next generation of content challenges without compromising on speed, safety, or user experience.Keep your streaming content safe and compliant with GcoreGcore Video Streaming offers AI Content Moderation that satisfies today’s digital safety concerns while streamlining the human moderation process.To explore how Gcore AI Content Moderation can transform your digital platform, we invite you to contact our streaming team for a demonstration. Our docs provide guidance for using our intuitive Gcore Customer Portal to manage your streaming content. We also provide a clear pricing comparison so you can assess the value for yourself.Embrace the future of content moderation and deliver a safer, more compliant digital space for all your users.Try AI Content Moderation for freeTry AI Content Moderation for free

Deploy GPT-OSS-120B privately on Gcore

OpenAI’s release of GPT-OSS-120B is a turning point for LLM developers. It’s a 120B parameter model trained from scratch, licensed for commercial use, and available with open weights. This is a serious asset for serious builders.Gcore now supports private GPT-OSS-120B deployments via our Everywhere Inference platform. That means you can stand up your own endpoint in minutes, run inference at scale, and control the full stack, without API limits, vendor lock-in, or hidden usage fees. Just fast, secure, controlled deployment on your terms. Deploy now in three clicks or read on to learn more.Why GPT-OSS-120B is big news for buildersThis model changes the game for anyone developing AI apps, platforms, or infrastructure. It brings GPT-3-level reasoning to the open-source ecosystem and frees developers from closed APIs.With GPT-OSS-120B, you get:Full access to model weights and architectureSelf-hosting for maximum data control and privacySupport for fine-tuning and model editingOffline deployment for secure or air-gapped useMassive cost savings at scaleYou can deploy in any Gcore region (or leverage Gcore’s three-click serverless inference on your own infrastructure), route traffic through your own stack, and fully control load, latency, and logs. This is LLM deployment for real-world apps, not just playground prompts.How to deploy GPT-OSS-120B with Gcore Everywhere InferenceGcore Everywhere Inference gives you a clean path from open model to production endpoint. You can spin up a dedicated deployment in just three clicks. We offer configuration options to suit your business needs:Choose your location (cloud or on-prem)Integrate via standard APIs (OpenAI-compatible)Control usage, autoscale, and costsDeploying GPT-OSS-120B on Gcore takes just three clicks in the Gcore Customer Portal.There are no shared endpoints. 
You get dedicated compute, low-latency routing, and full control and observability.You can also bring your own trained variant if you’ve fine-tuned GPT-OSS-120B elsewhere. We’ll help you host it reliably, close to your users.Use cases: where GPT-OSS-120B fits bestCommercial GPTs still outperform OSS models on some general tasks, but GPT-OSS-120B gives you control, portability, and flexibility where it counts. Most importantly, it gives you the ability to build privacy-sensitive applications.Great fits include:Internal dev tools and copilotsRetrieval-augmented generation (RAG) pipelinesSecure, private enterprise assistantsData-sensitive, on-prem AI workloadsModels requiring full customization or fine-tuningIt’s especially relevant for finance, healthcare, government, and legal teams operating under strict compliance rules.Deploy GPT-OSS-120B todayWant to learn more about GPT-OSS-120B and why Gcore is an ideal provider for deployment? Get all the information you need on our dedicated page.And if you’re ready to deploy in just three clicks, head on over to the Gcore Customer Portal. GPT-OSS-120B is waiting for you in the Application Catalog.Learn more about deploying GPT-OSS-120B on Gcore

Announcing new tools, apps, and regions for your real-world AI use cases

Three updates, one shared goal: helping builders move faster with AI. Our latest releases for Gcore Edge AI bring real-world AI deployments within reach, whether you’re a developer integrating genAI into a workflow, an MLOps team scaling inference workloads, or a business that simply needs access to performant GPUs in the UK.

MCP: make AI do more

Gcore’s MCP server implementation is now live on GitHub. The Model Context Protocol (MCP) is an open standard, originally developed by Anthropic, that turns AI models into agents that can carry out real-world tasks. It allows you to plug genAI models into everyday tools like Slack, email, Jira, and databases, so your genAI can read, write, and reason directly across systems. Think of it as a way to turn “give me a summary” into “send that summary to the right person and log the action.”

“AI needs to be useful, not just impressive. MCP is a critical step toward building AI systems that drive desirable business outcomes, like automating workflows, integrating with enterprise tools, and operating reliably at scale. At Gcore, we’re focused on delivering that kind of production-grade AI through developer-friendly services and top-of-the-range infrastructure that make real-world deployment fast and easy.” — Seva Vayner, Product Director of Edge Cloud and AI, Gcore

To get started, clone the repo, explore the toolsets, and test your own automations.

Gcore Application Catalog: inference without overhead

We’ve upgraded the Gcore Model Catalog into something even more powerful: an Application Catalog for AI inference. You can still deploy the latest open models in three clicks, but now you can also tune, share, and scale them like real applications.

We’ve re-architected our inference solution so you can:

- Run prefill and decode stages in parallel
- Share KV cache across pods, rather than tying it to individual GPUs (from August 2025)
- Toggle the WebUI and secure API independently (from August 2025)

These changes cut GPU memory usage, make deployments more flexible, and reduce time to first token, especially at scale. And because everything is application-based, you’ll soon be able to optimize for specific business goals like cost, latency, or throughput.

Here’s who benefits:

- ML engineers can deploy high-throughput workloads without worrying about memory overhead
- Backend developers get a secure API with no infrastructure setup needed
- Product teams can launch demos instantly with the WebUI toggle
- Innovation labs can move from prototype to production without reconfiguring
- Platform engineers get centralized caching and predictable scaling

The new Application Catalog is available now through the Gcore Customer Portal.

Chester data center: NVIDIA H200 capacity in the UK

Gcore’s newest AI cloud region is now live in Chester, UK. This marks our first UK location, launched in partnership with Northern Data. Chester offers 2,000 NVIDIA H200 GPUs with BlueField-3 DPUs for secure, high-throughput compute on Gcore GPU Cloud, serving your training and inference workloads. You can reserve an H200 GPU immediately via the Gcore Customer Portal.

This launch solves a growing problem: UK-based companies building with AI often face regional capacity shortages, long wait times, or poor performance when routing inference to overseas data centers. Chester fixes that with immediate availability on performant GPUs. Whether you’re training LLMs or deploying inference for UK and European users, Chester offers local capacity, low latency, and high availability.

Next steps

- Explore the MCP server and start building agentic workflows
- Try the new Application Catalog via the Gcore Customer Portal
- Deploy your workloads in Chester for high-performance UK-based compute

Deploy your AI workload in three clicks today!
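To make the MCP integration described above more concrete: MCP messages are JSON-RPC 2.0 requests, and a client invokes a server-side tool with the `tools/call` method. The sketch below builds such a request envelope in Python. The tool name `slack_post_message` and its arguments are hypothetical examples, not part of the Gcore MCP server.

```python
import json


def make_tool_call(request_id, tool_name, arguments):
    """Build an MCP tools/call request (a JSON-RPC 2.0 envelope)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }


# Hypothetical tool: ask an MCP server to post a summary to Slack.
req = make_tool_call(
    1,
    "slack_post_message",
    {"channel": "#reports", "text": "Weekly summary: all systems nominal."},
)
print(json.dumps(req, indent=2))
```

An MCP client sends this payload to the server over its transport (typically stdio or HTTP) and receives a JSON-RPC response with the tool’s result; the host application then feeds that result back to the model.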

Gcore recognized as a Leader in the 2025 GigaOm Radar for AI Infrastructure

We’re proud to share that Gcore has been named a Leader in the 2025 GigaOm Radar for AI Infrastructure, the only European provider to earn a top-tier spot. GigaOm’s rigorous evaluation highlights our leadership in platform capability and innovation, and our expertise in delivering secure, scalable AI infrastructure.

Inside the GigaOm Radar: what’s behind the Leader status

The GigaOm Radar report is a respected industry analysis that evaluates top vendors in critical technology spaces. In this year’s edition, GigaOm assessed 14 of the world’s leading AI infrastructure providers, measuring their strengths across key technical and business metrics. It ranks providers based on factors such as scalability and performance, deployment flexibility, security and compliance, and interoperability.

Alongside the ranking, the report offers valuable insights into the evolving AI infrastructure landscape, including the rise of hybrid AI architectures, advances in accelerated computing, and the increasing adoption of edge deployment to bring AI closer to where data is generated. It also offers strategic takeaways for organizations seeking to build scalable, secure, and sovereign AI capabilities.

Why was Gcore named a top provider?

Gcore stood out and earned its Leader status in the following areas:

- A comprehensive AI platform offering Everywhere Inference and GPU Cloud solutions that support scalable AI from model development to production
- High performance powered by state-of-the-art NVIDIA A100, H100, H200, and GB200 GPUs, and a global private network ensuring ultra-low latency
- An extensive model catalog with flexible deployment options across cloud, on-premises, hybrid, and edge environments, enabling tailored global AI solutions
- Extensive capacity of cutting-edge GPUs and technical support in Europe, supporting European sovereign AI initiatives

“Choosing Gcore AI is a strategic move for organizations prioritizing ultra-low latency, high performance, and flexible deployment options across cloud, on-premises, hybrid, and edge environments. Gcore’s global private network ensures low-latency processing for real-time AI applications, which is a key advantage for businesses with a global footprint.” — GigaOm Radar, 2025

Discover more about the AI infrastructure landscape

At Gcore, we’re dedicated to driving innovation in AI infrastructure. GPU Cloud and Everywhere Inference empower organizations to deploy AI efficiently and securely, on their terms.

If you’re planning your AI infrastructure roadmap or rethinking your current one, this report is a must-read. Explore it to discover how Gcore can support high-performance AI at scale and help you stay ahead in an AI-driven world.

Download the full report
