
Query your cloud with natural language: A developer’s guide to Gcore MCP

  • By Gcore
  • September 10, 2025
  • 2 min read

What if you could ask your infrastructure questions and get real answers?

With Gcore’s open-source implementation of the Model Context Protocol (MCP), now you can. MCP turns generative AI into an agent that understands your infrastructure, responds to your queries, and takes action when you need it to.

In this post, we’ll demo how to use MCP to explore and inspect your Gcore environment just by prompting: listing resources, checking audit logs, and generating cost reports. We’ll also walk through a fun bonus use case: provisioning infrastructure and exporting it to Terraform.

What is MCP and why do devs love it?

Originally developed by Anthropic, the Model Context Protocol (MCP) is an open standard that turns language models into agents that interact with structured tools: APIs, CLIs, or internal systems. Gcore’s implementation makes this protocol real for our customers.
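
Under the hood, MCP is JSON-RPC: an AI client discovers the tools a server exposes, then invokes them with structured arguments. A tool invocation on the wire looks roughly like this (the tool name and empty arguments are illustrative, not Gcore’s actual schema):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "regions_list",
    "arguments": {}
  }
}

The server executes the call (here, hitting the Gcore API) and returns the result as the model’s tool output.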

With MCP, you can:

  • Ask questions about your infrastructure
  • List, inspect, or filter cloud resources
  • View cost data, audit logs, or deployment metadata
  • Export configs to Terraform
  • Chain multi-step operations via natural language

Gcore MCP removes friction from interacting with your infrastructure. Instead of wiring together scripts or context-switching across dashboards and CLIs, you can just…ask.

That means:

  • Faster debugging and audits
  • More accessible infra visibility
  • Fewer repetitive setup tasks
  • Better team collaboration

Because it’s open source and backed by the Gcore Python SDK, you can plug it into other APIs, extend tool definitions, or even create internal agents tailored to your stack. Explore the GitHub repo for yourself.

What can you do with it?

This isn’t just a cute chatbot. Gcore MCP connects your cloud to real-time insights. Here are some practical prompts you can use right away.

Infrastructure inspection

  • “List all VMs running in the Frankfurt region”
  • “Which projects have over 80% GPU utilization?”
  • “Show all volumes not attached to any instance”

Audit and cost analysis

  • “Get me the API usage for the last 24 hours”
  • “Which users deployed resources in the last 7 days?”
  • “Give a cost breakdown by region for this month”

Security and governance

  • “Show me firewall rules with open ports”
  • “List all active API tokens and their scopes”

Experimental automation

  • “Create a secure network in Tokyo, export to Terraform, then delete it”

We’ll walk through that last one in the full demo below.

Full video demo

Watch Gcore’s AI Software Engineer, Algis Dumbris, walk through setting up MCP on your machine and show off some use cases. If you prefer reading, we’ve broken down the process step-by-step below.

Step-by-step walkthrough

This section maps to the video and shows exactly how to replicate the workflow locally.

1. Install MCP locally (0:00–1:28)
We use uv to isolate the environment and pull the project directly from GitHub.

# Install uv, which provides the uvx runner
curl -Ls https://astral.sh/uv/install.sh | sh
# Launch the MCP server straight from the GitHub repo
uvx --from git+https://github.com/G-Core/gcore-mcp-server gcore-mcp-server

Requirements:

  • Python
  • Gcore account + API key
  • Tool config file (from the repo)

2. Set up your environment (1:28–2:47)
Configure two environment variables:

  • GCORE_API_KEY for auth
  • GCORE_TOOLS to define what the agent can access (e.g., regions, instances, costs)

Soon, tool selection will be automatic, but today you can define your toolset in YAML or JSON.
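
A minimal shell setup might look like this (the GCORE_TOOLS value and its comma-separated format are illustrative; check the repo’s config examples for the exact syntax):

# API key generated in the Gcore dashboard
export GCORE_API_KEY="your-api-key"
# Which tools the agent may use; value shown is illustrative
export GCORE_TOOLS="regions,instances,costs"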

3. Run a basic query (3:19–4:11)
Prompt:
“Find the Gcore region closest to Antalya.”

The agent maps this to a regions.list call and returns: Istanbul
No need to dig through docs or write an API request.
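
For reference, the request the agent issues on your behalf looks roughly like this; the endpoint path and APIKey auth scheme are assumptions based on Gcore’s public cloud API, so check the API docs for specifics:

curl -s -H "Authorization: APIKey $GCORE_API_KEY" \
  https://api.gcore.com/cloud/v1/regions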

4. Provision, export, and clean up (4:19–5:32)
This one’s powerful if you’re experimenting with CI/CD or infrastructure-as-code.

Prompt:
“Create a secure network in Tokyo. Export to Terraform. Then clean up.”

The agent:

  • Provisions the network
  • Exports it to Terraform format
  • Destroys the resources afterward

You get usable .tf output with no manual scripting. Perfect for testing, prototyping, or onboarding.
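
If you want to sanity-check the exported file before reusing it, standard Terraform tooling applies (the filename here is illustrative):

terraform fmt network.tf    # normalize formatting
terraform init              # fetch the Gcore provider
terraform validate          # catch syntax and schema errors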

Gcore: always building for developers

Try it now:

  • Clone the repo (command below)
  • Install uv and configure your environment
  • Start prompting your infrastructure
  • Open issues, contribute tools, or share your use cases
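
Cloning is one command:

git clone https://github.com/G-Core/gcore-mcp-server
cd gcore-mcp-server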

This is early-stage software, and we’re just getting started. Expect more tools, better UX, and deeper integrations soon.
