
Orchestrating AI: Event-Driven Architectures for Complex AI Workflows

  • By Gcore
  • 7 min read

This article was originally published on The New Stack. It’s written by Georgina Tryfou, a machine learning engineer at Gcore and an AI expert with more than 15 years of experience in machine learning and speech recognition.


In the current environment of AI frenzy, implementing complex AI workflows is becoming increasingly popular among companies that want to enhance their offerings with AI capabilities. In this article, I’ll share a behind-the-scenes look at how we implement event-driven architecture (EDA) in complex AI workflows at Gcore. I’ll walk you through the initial challenges, the architectural decisions we made, and the outcomes of employing EDA in a dynamic, real-world scenario, showing how it enhances system responsiveness, scalability, and flexibility for AI-driven tasks like subtitle generation for video content.

Why Event-Driven Architecture Matters for AI

Event-driven architecture (EDA) is a design pattern centered on the production, detection, consumption of, and reaction to events rather than static, predefined operations. An event is any significant change in state or update that occurs within the system. EDA allows different parts of a system to communicate and operate independently, driven by the occurrence of these events, which can be anything from a user action to a completed process.

The adoption of EDA in AI workflow management marks a significant evolution from traditional architectures, such as monolithic, service-oriented, or polling-based architectures. Its principles of asynchronous communication, decoupling, and dynamic scalability align perfectly with the demands of modern AI applications, with three key benefits:

  • The architecture’s modularity makes it easier to scale specific components independently, such as scaling up language processing during high-demand periods in customer service applications without affecting other parts of the system.
  • EDA’s modular design simplifies the process of updating or replacing models with newer versions, as seen in health tech environments where predictive algorithms are frequently refined and deployed to keep pace with medical advancements or newer data.
  • The flexible nature of EDA allows for the seamless integration of various models to realize a complex AI workflow, such as combining image recognition with predictive maintenance in manufacturing, enhancing system robustness and operational efficiency.

These benefits, observed across different sectors, enhance not only the scalability and responsiveness of AI systems but also their robustness and adaptability, making EDA indispensable for managing complex, multi-model AI workflows across industries and use cases.

Implementing Event-Driven Architecture in AI: A Practical Case Study

At Gcore, we’ve implemented EDA within Gcore Video Streaming AI features. Today, I’ll share with you how we apply EDA for AI subtitle generation for video.

This project began with the goal of improving the efficiency, latency, scalability, and reliability of subtitle generation in multiple languages from raw video content. The process involves several complex steps:

  1. Video decompression: The video file is either decompressed or transcoded into a format suitable for processing.
  2. Speech detection: Segments of the video where speech occurs are identified and distinguished from background noise or silence using specialized ML models.
  3. Speech-to-text conversion (transcription): The detected speech is converted into text. This step requires inference using complex speech recognition models capable of handling a range of languages, accents, and dialects.
  4. Text post-processing: Transcription errors, punctuation, and grammar are corrected. The text is formatted to match the video’s timing; for example, it could be broken into timed subtitles.
  5. Translation (optional): If subtitles are required in multiple languages, the transcribed text may be translated into one or more target languages, again via inference using specialized machine-translation models.
  6. Subtitle synchronization: Subtitle display is timed to match the speech in the video, ensuring that the subtitles appear on screen precisely when the corresponding speech is heard.

Each of these steps requires specialized AI models or algorithms and may require data processing in real- or near-real time, especially in live-streaming scenarios. The result? Serious complexity.

The complexity arises not only from the technical challenges associated with each task, but also from the need to efficiently manage the flow of data between steps, handle errors or exceptions, and scale resources dynamically based on demand.

In our pursuit of orchestrating such sophisticated and demanding AI workflows, we designed an AI system that functions with precision and agility through a well-defined EDA. The architecture of this platform, outlined in the figure below, addresses all stages of AI-driven tasks, facilitates communication between components, and ensures that each task can be dynamically scaled and autonomously handled.

Four core components underlie the Gcore Streaming AI platform backend. All these components are versatile and essential to a wide range of AI applications.

Django: API Service

At the front of the architecture lies the API service, which uses the robust Django framework. This is the primary interface for user interactions and processes incoming requests for various services, including transcription and content moderation (such as nudity detection). This layer validates and parses incoming requests, triggering a cascade of subsequent tasks in the workflow, as represented on the far left of the diagram above, where a user initiates a transcription request to the API service.
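
To make this concrete, here’s a minimal sketch of what such an endpoint might look like. The view and task names (`transcription_request`, `start_transcription_pipeline`) are hypothetical, not Gcore’s actual code; the point is that the API layer validates input, emits the initial event by enqueuing a Celery task, and returns immediately instead of blocking on inference.

```python
import json

from django.http import JsonResponse
from django.views.decorators.http import require_POST

from .tasks import start_transcription_pipeline  # hypothetical Celery entrypoint


@require_POST
def transcription_request(request):
    payload = json.loads(request.body)
    video_url = payload.get("video_url")
    if not video_url:
        return JsonResponse({"error": "video_url is required"}, status=400)

    # Enqueuing the task is the "event" that kicks off the workflow;
    # the API responds with 202 Accepted rather than waiting for results.
    result = start_transcription_pipeline.delay(
        video_url, payload.get("languages", ["en"])
    )
    return JsonResponse({"task_id": result.id}, status=202)
```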

Celery: Processing Engine and Task Orchestration

Diving deeper into the backend, we leverage Celery, an asynchronous task queue that acts as a robust background processing engine. Celery is tasked with managing AI processes, such as transcribing audio to text or analyzing content for nudity, as well as other standalone processes, such as synchronizing transcribed content into subtitles. Celery, in combination with Redis, which acts as a message broker, orchestrates these tasks and ensures that the initiation and completion of each task are driven by the occurrence of predefined events.

Celery’s ability to handle AI workflows is enhanced by a suite of advanced orchestration features: groups, chains, and chords. These tools allow for the decomposition of high-level, complex AI tasks into granular subtasks, handling of their dependencies, and aggregation of their results.
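
As an illustration, here’s a minimal sketch of how these primitives can compose a subtitle workflow. The task names and the `celery_app` module are assumptions for the example, not Gcore’s actual code: a chain runs voice activity detection first, and a chord fans transcription out over the detected segments, firing the merge callback only once every segment is done.

```python
from celery import chain, chord

from .celery_app import app  # hypothetical module exposing the Celery instance
from .tasks import detect_speech, transcribe_segment, merge_transcripts  # hypothetical stage tasks


def start_pipeline(video_url: str):
    # chain: run VAD first, then hand its output (the speech segments)
    # to the dispatcher task below.
    return chain(detect_speech.s(video_url), dispatch_transcription.s()).apply_async()


@app.task
def dispatch_transcription(segments):
    # chord: a group of parallel transcription tasks plus a callback
    # that fires only after every member of the group has completed.
    chord(
        (transcribe_segment.s(seg) for seg in segments),
        merge_transcripts.s(),
    ).apply_async()
```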

Redis: Broker and Mediator Pattern

Redis plays a crucial role in our system as the broker and mediator, managing the distribution and coordination of tasks across the backend. It utilizes its fast, in-memory data structure store to handle the task queue efficiently. Within our architecture, task signatures and chains act as the mediators controlling the flow and logic of task execution. This mediation is based on event signals indicating task completion.

Redis’ ability to process these signals quickly is vital for maintaining a dynamic and responsive workflow, as shown in the diagram above: tasks are received by the Redis broker, directed to the appropriate processing containers, and their results are collected post-inference for seamless task transitions and data integrity.
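
Wiring Celery to Redis takes only a few lines of configuration. Here’s a minimal sketch, assuming a `streaming_ai` application and placeholder Redis URLs; the option values shown are common choices for long-running inference tasks, not Gcore’s exact settings.

```python
from celery import Celery

app = Celery(
    "streaming_ai",
    broker="redis://redis:6379/0",   # task queue: where task messages (events) are published
    backend="redis://redis:6379/1",  # result store: where task outcomes land
)

app.conf.update(
    task_acks_late=True,             # re-deliver a task if its worker dies mid-run
    worker_prefetch_multiplier=1,    # don't let one worker hoard long-running AI tasks
    result_expires=3600,             # keep results long enough for chords to join on them
)
```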

AI Celery Workers: Dedicated AI Task Handling

Each AI Celery worker is dedicated to a specific AI task, deploying and managing AI models such as Whisper for transcription and Pyannote for voice activity detection (VAD). These workers operate in isolated environments to ensure that each task is processed in a controlled and secure manner, minimizing the risk of interference between tasks. This setup enhances the scalability of our system by allowing each worker to scale independently based on task demands while simultaneously ensuring high reliability and efficiency in AI model execution.
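
In Celery, this isolation is naturally expressed with per-model queues. A sketch under the same illustrative assumptions as above: each task type is routed to its own queue, and each worker container consumes only the queue for its model.

```python
# Route each model's tasks to a dedicated queue so worker pools stay isolated.
app.conf.task_routes = {
    "tasks.detect_speech": {"queue": "pyannote"},
    "tasks.transcribe_segment": {"queue": "whisper"},
}

# Each container then consumes only its own queue, for example:
#   celery -A streaming_ai worker -Q pyannote --concurrency=1
#   celery -A streaming_ai worker -Q whisper --concurrency=1
```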

System Requirements Unveiled: Scaling, Reliability, and Latency

The Gcore backend I just described produces three major benefits that are particularly important for AI workflows: scaling, reliability, and latency reduction.

Scaling

The platform scales to handle varying demand by dynamically allocating cloud resources and leveraging GPU acceleration for intensive ML tasks. This results in seamless scaling, avoiding the performance bottlenecks and high costs typical of traditional systems. By adapting computing power in real time, our system efficiently manages workloads during both peak and off-peak times without compromising performance.

Reliability

AI features within Gcore Video Streaming are designed for high reliability with robust fault tolerance and sophisticated error handling. Strategies like data replication and automatic recovery mechanisms promote system continuity even during failures. In video transcription, if a segment of audio is corrupted, our system can either skip that segment or retry processing it, rather than discarding or reprocessing the entire audio track.
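
Celery makes this kind of segment-level fault tolerance straightforward to express. A minimal sketch, assuming a hypothetical `run_whisper` model call: the task retries a failed segment a few times with exponential backoff, then returns an empty, flagged result so the rest of the track still merges instead of the whole job aborting.

```python
from .celery_app import app  # hypothetical Celery instance


@app.task(bind=True, max_retries=3)
def transcribe_segment(self, segment):
    try:
        return run_whisper(segment)  # hypothetical model call
    except Exception as exc:
        if self.request.retries >= self.max_retries:
            # Final failure: emit an empty result so downstream merging
            # continues instead of aborting the entire video.
            return {"segment": segment, "text": "", "skipped": True}
        raise self.retry(exc=exc, countdown=2 ** self.request.retries)
```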

Latency Reduction

System latency for AI elements is reduced by minimizing idle times and enhancing the transition speed between tasks. We employ three key strategies:

  • Segmenting large tasks into smaller parts for parallel processing across multiple GPUs
  • Optimizing workflows for immediate task transitions
  • Smartly scheduling resources to keep computational assets fully engaged

In video transcription, rather than processing the entire video at once, we break it into segments for concurrent processing. This approach shortens transcription times and makes efficient use of resources, boosting overall system responsiveness.
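
For illustration, here’s one simple way to implement the segmentation step with the ffmpeg CLI. The fixed 60-second window is a placeholder; a real pipeline might instead cut on the VAD-detected speech boundaries.

```python
import subprocess

SEGMENT_SECONDS = 60  # placeholder window size


def split_audio(path: str, total_seconds: int) -> list[str]:
    """Cut the audio into fixed-length parts so each one can be
    transcribed in parallel on a different GPU worker."""
    parts = []
    for i, start in enumerate(range(0, total_seconds, SEGMENT_SECONDS)):
        out = f"{path}.part{i}.wav"
        subprocess.run(
            ["ffmpeg", "-y", "-ss", str(start), "-i", path,
             "-t", str(SEGMENT_SECONDS), out],
            check=True,
        )
        parts.append(out)
    return parts
```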

Concrete Benefits: Our EDA Success Story

Adopting this system revolutionized the management of complex AI workflows within the Gcore Video Streaming backend. Specifically, the EDA enabled us to reduce analysis time, parallelize AI tasks, scale AI workers independently, and ensure system flexibility.

  • Reduce analysis time: By utilizing EDA, we dramatically decreased the time required to analyze a single video with a set of pre-trained models. This means faster processing of videos for tasks like subtitle generation and content moderation.
  • Parallelize AI tasks: Parallel processing of AI tasks means breaking down complex processes into smaller, manageable tasks that could be executed concurrently. This approach sped up the overall process and optimized the use of computational resources.
  • Scale AI workers independently: Understanding the diverse demands of different AI tasks, our architecture scales AI workers based on the specific requirements of each task. For instance, a single request for subtitle generation might trigger one task for Pyannote (for voice activity detection) and potentially 100 tasks for Whisper (for speech-to-text), with only the latter requiring dynamic scaling due to higher demand; see the sketch after this list.
  • Ensure system flexibility: We aimed to create a highly flexible system capable of quickly adapting to any new AI request. This required the ability to load models in an ad-hoc manner, ensuring our system could immediately respond to and serve new or evolving AI demands without significant reconfiguration.
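
One way to drive that scaling decision is straight from the broker itself. A minimal sketch, reusing the illustrative `whisper` queue from earlier (in production an autoscaler or orchestrator typically plays this role): since Celery’s Redis broker stores each queue as a Redis list, queue depth translates directly into a target worker count.

```python
import redis

r = redis.Redis(host="redis", port=6379, db=0)  # placeholder connection


def desired_whisper_workers(max_workers: int = 16) -> int:
    backlog = r.llen("whisper")  # number of queued transcription tasks
    return min(max_workers, max(1, backlog // 10))  # aim for ~10 queued tasks per worker
```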

We Made These Mistakes So You Don’t Have To

Sharing is caring: Here are three things to keep in mind when setting up your own EDA for AI workflows to get the best results right away.

  • Avoid common pitfalls: Design the system with fault tolerance in mind from the outset. Anticipate potential failures in individual components and ensure that the architecture can gracefully handle these incidents without disrupting the overall workflow. Effective error handling and retry mechanisms are essential.
  • Choose the correct topology: Implementing a mediator pattern topology can significantly simplify the implementation of business logic and improve the modularity and reusability of AI models. Initially employing a broker topology, we encountered limitations in managing complex AI tasks due to its linear communication model. To address these challenges and improve our system’s scalability and modularity, we transitioned to a mediator topology. This change introduced a central mediator to manage AI business logic and orchestrate events, allowing components to operate independently and more efficiently. The shift streamlined the development process and significantly enhanced the system’s adaptability and robustness; the sketch after this list shows the idea.
  • Plan for rapid integration: Flexibility is key in any architecture designed for AI workflows. Allow for the quick addition and integration of new models into end services, essential in this fast-evolving field, where the ability to swiftly adopt and deploy new models can provide a significant competitive advantage.
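
To make the mediator idea concrete, here’s a minimal sketch under the same illustrative assumptions as the earlier examples. Completed steps report events to one central task that owns the business logic and decides what runs next, instead of each worker hard-coding its successor.

```python
from celery import chord

from .celery_app import app  # hypothetical Celery instance
from .tasks import (  # hypothetical stage tasks
    detect_speech, transcribe_segment, merge_transcripts, synchronize_subtitles,
)


@app.task
def on_event(event: dict):
    # Central mediator: all workflow logic lives here, so adding or
    # swapping a model only changes this routing, not the workers.
    if event["type"] == "video_uploaded":
        detect_speech.delay(event["video_url"])
    elif event["type"] == "speech_detected":
        chord(
            (transcribe_segment.s(seg) for seg in event["segments"]),
            merge_transcripts.s(),
        ).apply_async()
    elif event["type"] == "transcript_ready":
        synchronize_subtitles.delay(event["transcript"])


# Workers emit events back with on_event.delay({"type": ..., ...}) when they finish.
```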

Future Directions in Event-Driven AI Architectures

We’re always looking to the future and continuously innovating our EDA-based AI systems at Gcore. Two future directions look particularly promising.

Continuous Learning and Adaptation

Incorporating mechanisms for continuous learning and model adaptation requires periodically updating models with new data and, less obviously, dynamically adjusting workflows and processes based on real-time performance metrics and feedback loops. As AI models continue to grow in complexity and capability, developing robust systems for continuous evaluation and deployment becomes critical. This includes automated performance monitoring, version control, and seamless deployment of updated models without disrupting service.

Embracing LLMs and GAI

Our architecture needs to adapt as AI changes. While the rise of large language models (LLMs) and generative AI (GAI) might suggest that traditional AI inference workflows could become obsolete, the reality is that our proposed architecture supports critical areas of AI deployment, such as continuous model learning and evaluation. Our event-driven system’s flexibility makes it well-suited to integrate LLMs for enhanced decision-making processes and to adapt workflows in response to the capabilities of GAI, where multiple specialized AI models may increasingly be replaced by a single, more powerful one.

Conclusion

We’ve found that adopting an EDA for workflow processing offers significant benefits for scalability, reliability, and efficiency in managing complex AI systems in cloud and streaming environments. This approach addresses critical challenges, including dynamic scaling of large ML models, system robustness, and latency reduction. EDA is already proving itself essential for the evolution of scalable and efficient AI systems.

To experience the end product for yourself, check out Gcore Video Streaming and its impressive AI features, including transcription, translation, content moderation, and object recognition.

Try Gcore Video Streaming free for 14 days
