
Exploring the Benefits of Cloud Development

  • By Gcore
  • October 31, 2023
  • 9 min read

Cloud development allows you to write, debug, and run code directly in the cloud infrastructure, rather than working in the local environment and subsequently uploading to the cloud. This streamlines the development process, allowing you to deploy and update applications faster. This article explains what cloud development is, what tools can help you with it, and how you can use cloud development to develop and update your application to meet your customers’ needs.

What Is Cloud Development?

Cloud development is the practice of creating and managing applications that run on remote servers, allowing users to access them over the internet.

Every application is made up of different types of services, such as backend services, frontend services, and monitoring services. Normally, without cloud development, creating a new service or updating an existing one means writing and running your code in the local environment first. After ensuring the service works as expected, you push your code to the cloud environment and run it there. Finally, you publish the service and integrate it with the app. This process is time-consuming and requires a local machine with enough computing power to run the service.

This is when cloud development proves its value. With cloud development, you write your code directly in the cloud infrastructure. After you have finished writing your code, publishing your service takes just one click.

Diagram comparing the difference in steps between traditional development vs. cloud development

Useful Cloud Development Tools

To apply cloud development to your projects, several tools are required to build and manage the code efficiently:

  • Code editor to help you write code more efficiently
  • Version control tool to manage the code changes
  • Compiler/interpreter to run your code
  • Publisher to allow public use of your application

Let’s learn about each of the tools in more detail.

Code Editor

A code editor is a software tool that supports code highlighting, easy navigation within the codebase, test execution, and debugging capabilities. When you’re working on applications that are hosted in a cloud environment, it’s essential for your code editor to support remote access because the code you’re working on is stored on a remote cloud server.

Remote access support enables you to establish an SSH connection between your local code editor and the remote server. You can then use your code editor to create, view, and modify code files as if they were local.

Popular code editors, like Visual Studio Code and the JetBrains IDEs, have features that support remote development. For example, with Visual Studio Code, you can install Microsoft’s “Remote – SSH” extension to enable remote code access via SSH. Once the extension is installed, you can connect to the remote server by entering its IP address, username, and password, and work on your cloud-based projects just as easily as you would local ones.
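
The Remote – SSH extension reads connection details from your SSH configuration file, so a common setup is to define the remote server there first. Below is a minimal sketch of an ~/.ssh/config entry; the host alias, IP address, username, and key path are all hypothetical:

Host cloud-dev
    HostName 203.0.113.10            # public IP of the cloud server (placeholder)
    User developer                   # remote username
    IdentityFile ~/.ssh/id_ed25519   # private key used for authentication

With an entry like this in place, you can run the “Remote-SSH: Connect to Host…” command in Visual Studio Code and pick cloud-dev from the list.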

Below is an example of using Visual Studio Code to access code via a remote machine.

Example of using Visual Studio Code to access code via a remote machine

Version Control

In software development, it’s common for many people to work on the same code base. A version control tool lets you see who changed which lines of code, and when, so that you can trace any problem back to its source. It also lets you revert the code to an earlier version if new changes introduce bugs.

There are several version control tools out there, such as Git, SVN, and Mercurial; Git is currently the most popular. Git is an open-source version control system for managing changes to code. It is distributed, meaning that you work with a full copy of the repository on your local machine: you create new branches, add files, commit, and merge locally. When your code is ready to ship, you push it to the Git repository on the server.
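
As an illustration, a typical Git session for the workflow described above might look like the following sketch; the repository URL, branch name, and file name are hypothetical:

git clone https://git.example.com/team/app.git   # create a local copy of the repository
cd app
git checkout -b feature/search        # work on a new branch locally
git add search.js                     # stage the changed file
git commit -m "Add search endpoint"   # record the change in local history
git push origin feature/search        # publish the branch to the server when ready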

Compiler/Interpreter

Beyond tools that help you write and track changes to code, the next essential tool for running code in the cloud is a compiler or interpreter. You need one or the other to translate your code into machine code, depending on the programming language or runtime you are working with; they allow the computer to understand and execute your instructions. Generally speaking, the compiler or interpreter builds your code into executable form. Let’s look at each in turn to understand their differences.

Compiler

Before a service actually runs, a compiler translates the high-level code you have written into low-level code. For example, a Java compiler first compiles source code to bytecode, and then the Java Virtual Machine interprets and converts the bytecode into machine code for execution. As a result, compilers need time to analyze the source code up front. In return, they surface syntax errors at build time, so you don’t need to spend a lot of time debugging your service when it’s running.

Languages that rely on compilers include Java, C#, and Go.

Interpreter

Unlike a compiler, an interpreter only translates written code into machine code when the service runs. As a result, the interpreter does not take time to compile the code. Instead, the code is executed right away. However, an application using an interpreter is often slower than one using a compiler because the interpreter executes the code line by line.

Languages that rely on interpreters include Python, JavaScript, and Ruby.
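
To make the difference concrete, here is a rough sketch of how each toolchain is invoked from the command line; the file names are hypothetical:

# Compiled (Java): an explicit build step produces bytecode before anything runs
javac Main.java    # compile Main.java to Main.class
java Main          # the JVM executes the compiled bytecode

# Interpreted (Python): no separate build step; execution starts immediately
python3 main.py    # the interpreter translates and runs the code as it executes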

Publisher

To allow other users to access your service, you need a publisher tool. This manages the following key aspects of your service:

  • Configuring the network
  • Creating the domain name
  • Managing the scalability

Network Configuration

To allow users to access your service, the network configuration is crucial. The method for making your service available online varies based on your technology stack. For instance, if you use the Next.js framework to build your web application, you can choose Vercel to deploy your application code.
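
For example, with the Vercel CLI, deploying from the project directory takes just a couple of commands (this sketch assumes a Vercel account is already set up):

npm install -g vercel   # install the Vercel CLI
vercel                  # deploy a preview build of the current project
vercel --prod           # promote the project to a production deployment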

You can also customize the behavior of your application with network configuration. Here’s an example of how to use the vercel.json file to redirect requests from one path to another:

{  "redirects": [    { "source": "/book", "destination": "/api/book", "statusCode": 301 }  ]}

Domain Setting

Every service requires a URL so that applications can interact with it. However, using raw IP addresses as URLs is complex and unwieldy, so it’s advisable to assign a domain name to your service, like www.servicedomain.com. Various platforms, such as GoDaddy or Squarespace, offer domain registration services for this purpose.
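
Once the domain is registered, you point it at your service by creating DNS records with your registrar. Below is an illustrative zone-file snippet; the IP address and load-balancer hostname are hypothetical:

; route the domain from the example above to the service's public IP
www.servicedomain.com.   300   IN   A       203.0.113.10

; alternatively, alias it to a load balancer's hostname
www.servicedomain.com.   300   IN   CNAME   lb.example-cloud.net.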

Scalability

To allow your service to handle more requests from your users, you need to define a scalability mechanism for your services. This way, your service will automatically scale according to the workload. Scalability also keeps costs in check; you pay for what you use, rather than wasting money by allocating resources based on peak usage.

Below is an example definition file for applying autoscaling to your service, using Kubernetes HorizontalPodAutoscaler.

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: appdeploy
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70

How to Run Code in the Cloud

Now that you are familiar with the tools you need for cloud development, let’s learn about how to run code in the cloud. There are two ways to run code in the cloud: using virtual machines or using containers. We explain the difference in depth in our dedicated article, but let’s review their relevance to cloud development here.

Virtual Machines

A virtual machine (VM) is like a computer that runs within a computer. It mimics a standalone system, with its own virtual hardware and software. Since a VM is separate from its host computer, you can pick the VM operating system that suits your needs without affecting your host’s system. Plus, its isolation offers an extra layer of security: if one VM is compromised, the others remain unaffected.

Architecture of a VM, which includes a guest OS

While VMs offer versatility in terms of OS choices for cloud development, scaling applications on VMs tends to be more challenging and costly compared to using containers. This is because each VM runs a full operating system, leading to higher resource consumption and longer boot-up times. Containers, on the other hand, share the host OS and isolate only the application environment, making them more lightweight and quicker to scale up or down.

Containers

A container is a software unit that contains a set of software packages and other dependencies. Since it uses the host operating system’s kernel and hardware, it doesn’t possess its own dedicated resources as a virtual machine does. As a result, it’s more lightweight and takes less time to start up. For instance, an e-commerce application can have thousands of containers for its backend and frontend services. This allows the application to easily scale out when needed by increasing the number of containers for its services.

Architecture of a container, which is more lightweight than VM architecture due to the lack of guest OS

Using containers for cloud code enables efficient resource optimization and ease of scaling due to their lightweight nature. However, you have limited flexibility in choosing the operating system, as most containers are Linux-based.

Cloud Development Guide

We’ve addressed cloud development tools and ways to run code in the cloud. In this section, we offer a step-by-step guide to using cloud development for your project.

Check Computing Resources

Before anything else, you’ll need the right computing resources to power your service. This includes deciding between virtual machines and containers. If your service tends to receive a steady number of user requests every day, or it needs a specific operating system like macOS or Windows in order to run, go with virtual machines. If you expect the number of user requests to vary widely and you want the service to scale in order to optimize operational costs, go with containers.

After choosing between virtual machines and containers, you need to allocate computing resources for use. The important resources that you need to consider are CPUs, RAM, disk volumes, and GPUs. The specifications for these resources can vary significantly depending on the service you’re developing. For instance, if you’re building a monitoring service with a one-year data retention plan, you’ll need to allocate disk volumes of approximately 100GB to store all generated logs and metrics. If you’re building a service to apply deep learning models to large datasets, you’ll require not only a powerful CPU and ample RAM, but also a GPU.

Install Software Packages and Dependencies

After preparing the computing resources, you’ll next install the necessary software and dependencies. The installation process varies depending on whether you’re using virtual machines or containers.

As a best practice, you should set up the mechanism to install the required dependencies automatically upon initialization of the virtual machine or container. This ensures that your service has all the necessary dependencies to operate immediately upon deployment. Additionally, it facilitates easy redeployment to a different virtual machine or container, if necessary. For example, if you want to install software packages and dependencies for an Ubuntu virtual machine to host a Node.js service, you can configure cloud-init scripts for the deployed virtual machine as below:

#cloud-config
...
apt:
  sources:
    docker.list:
      source: deb [signed-by=$KEY_FILE] https://deb.nodesource.com/node_18.x $RELEASE main
      keyid: 9FD3B784BC1C6FC31A8A0A1C1655A0AB68576280
package_update: true
package_upgrade: true
packages:
  - apt-transport-https
  - ca-certificates
  - gnupg-agent
  - software-properties-common
  - gnupg
  - nodejs
power_state:
  mode: reboot
  timeout: 30
  condition: True

To set up a web service on containers using Node.js, you’ll need to install Node along with the required dependencies. Below is a Dockerfile example for doing so:

# Pull the Node.js image version 18 as a base image
FROM node:18

# Set the service directory
WORKDIR /usr/src/app
COPY package*.json ./

# Install service dependencies
RUN npm install
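
Once the Dockerfile is complete (the example above is truncated; a full version would also copy the application source and define a start command), you can build and run the container. The image name and port below are hypothetical:

docker build -t node-service .              # build an image from the Dockerfile
docker run -d -p 3000:3000 node-service     # start a container, exposing port 3000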

Write Code

When you’ve installed the necessary software packages and dependencies, you can begin the fun part: writing the code. You can use a code editor that supports remote access to write the code for your service directly in the cloud. Built-in debugging tools in these editors help you identify issues as you write.

Below is an example of using IntelliJ to debug a Go service for managing playlists.

Using IntelliJ to debug a Go service

Test Your Service

After you finish writing your code, it’s crucial to test your service. As a best practice, start with unit tests to ensure that individual components work, followed by integration tests to see how your service interacts with existing application services, and finally E2E (end-to-end) tests to assess the overall user experience and system behavior.
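
For instance, assuming a Node.js service like the earlier examples, a unit test can be written with Node’s built-in test runner (introduced in Node.js 18); the addItem function here is a hypothetical service component:

// test/cart.test.js
const test = require('node:test');
const assert = require('node:assert');
const { addItem } = require('../src/cart'); // hypothetical module under test

test('addItem appends a product to the cart', () => {
  const cart = addItem([], { id: 1, name: 'Book' });
  assert.strictEqual(cart.length, 1);   // one item in the cart
  assert.strictEqual(cart[0].id, 1);    // and it is the product we added
});

You can then run the suite with node --test.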

Below is a test pyramid that gives a structured overview of each test type’s coverage. It can help you allocate your testing effort efficiently across unit, integration, and E2E tests for your service.

Test pyramid demonstrates the proportion that should be allocated to each test

Configure Network Settings

To make your service available to users, you need to configure its network settings. This might involve configuring the rules for inbound and outbound data, creating a domain name for your service, or setting a static IP address for your service.

Here is an example of using cloud-init configuration to set a static IP for a virtual machine that hosts your service:

#cloud-config
...
write_files:
  - content: |
      network:
        version: 2
        renderer: networkd
        ethernets:
          enp3s0:
            addresses:
            - 192.170.1.25/24
            - 2020:1::1/64
            nameservers:
              addresses:
              - 8.8.8.8
              - 8.8.4.4
    path: /etc/netplan/00-add-static-ip.yaml
    permissions: 0644
power_state:
  mode: reboot
  timeout: 30
  condition: True

Add Autoscaling Mechanism

With everything in place, it’s time to add an autoscaling mechanism. This adjusts resources based on demand, which will save costs during quiet times and boost performance during busy periods.

Assuming that you use Kubernetes to manage the containers of your service, the following is an example of using Gcore Managed Kubernetes to set the autoscaling mechanism for your Kubernetes cluster:

Configuring a Gcore Managed Kubernetes cluster to enable cluster autoscaling

Set Up Security Enhancement

Finally, ensure your service is secure. Enhancements can range from setting up robust authentication measures to using tools like API gateways to safeguard your service. You can even set up a mechanism to protect your service from malicious activities such as DDoS attacks.

Below is an example of how to apply security protection to your service by creating a resource for your service URL using Gcore Web Security:

Create a web security resource for the service domain to protect it from attacks

Gcore Cloud Development

Accelerating feature delivery through cloud development can offer a competitive edge. However, the initial setup of tools and environments can be daunting—and mistakes in this phase can undermine the benefits.

Here at Gcore, we recognize these obstacles and offer Gcore Function as a Service (FaaS) as a solution. Gcore FaaS eliminates the complexities of setup, allowing you to dive straight into coding without worrying about configuring code editors, compilers, debuggers, or deployment tools. Ideally suited for straightforward services that require seamless integration with existing applications, Gcore FaaS excels in the following use cases:

  • Real-time stream processing
  • Third-party service integration
  • Monitoring and analytics services

Conclusion

Cloud development allows you to deliver your service to users immediately after you’ve finished the coding and testing phases. You can resolve production issues and implement features faster to better satisfy your customers. However, setting up cloud infrastructure can be time-intensive, and building and maintaining it ideally requires a team of experienced system administrators.

With Gcore FaaS, you don’t have to take on that challenge yourself. You can focus on writing code, and we’ll handle the rest, from configuring pods and networking to implementing autoscaling. Plus, you are billed only for the time your customers actually use your app, ensuring cost-effective operation.

Want to try out Gcore FaaS to see how it works? Get started for free.
