Data stored in a Kubernetes container’s file system is temporary and not intended for long-term retention. When you replace or terminate a container, all information within it is deleted and can’t be restored.
To prevent data loss, you can connect your Managed Kubernetes cluster to the NFS Container Storage Interface (CSI) driver. This driver connects Kubernetes and Gcore File Shares, allowing Kubernetes clusters to dynamically provision, attach, and manage NFS volumes as persistent volumes.
Kubernetes integration with the NFS driver is enabled by default, and the driver is installed in each cluster.
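If you want to confirm that the driver is registered in your cluster, you can list the CSI drivers Kubernetes is aware of. This is a standard Kubernetes check; the exact driver name in the output depends on the installation (the upstream NFS CSI driver usually registers as nfs.csi.k8s.io), so treat the name as illustrative:

kubectl get csidrivers

If the NFS driver appears in the list, no additional installation steps are needed.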
To provision persistent NFS volumes, ensure that a Managed Kubernetes cluster and File Share are created in the same project and connected to the same network and subnetwork.
Create a cluster by following the instructions from our guide: Create a Managed Kubernetes cluster.
If you already have a Kubernetes cluster created, proceed to the next step.
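Before continuing, you may also want to verify that kubectl is configured for the intended cluster. A quick sanity check is to list the worker nodes and confirm they are in the Ready state:

kubectl get nodes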
Create a File Share in the same project and region as your Kubernetes cluster:
1. In the Gcore Customer Portal, navigate to Storage > File Shares.
2. Click Create File Share.
3. Configure File Share settings:
Basic settings: Enter the name of the File Share and specify its size and protocol.
File Share network settings: Select the private network and subnetwork that you will use for file sharing. Ensure that they match those selected in your Kubernetes cluster.
Access: Click the Add rule button and specify the IP addresses of machines that should have access to the File Share, along with their access modes.
(optional) Add tags: Add metadata to your File Share.
The File Share should be set up within a few minutes.
Run the following command to retrieve information about the storage classes configured in your Kubernetes cluster:
kubectl get storageclass
If everything is set up correctly, the gcore-nfs storage class should be listed among the other storage classes.
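With the gcore-nfs storage class available, you can request an NFS-backed volume through a standard PersistentVolumeClaim. The manifest below is a minimal sketch: the claim name, size, and ReadWriteMany access mode are illustrative assumptions; only the gcore-nfs storage class name comes from the output above.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-example-pvc          # hypothetical name, choose your own
spec:
  accessModes:
    - ReadWriteMany              # NFS volumes are typically shared across nodes; adjust if needed
  storageClassName: gcore-nfs    # storage class provided by the NFS CSI driver
  resources:
    requests:
      storage: 10Gi              # example size

Apply the manifest with kubectl apply -f pvc.yaml and check its status with kubectl get pvc. Once the claim reports Bound, you can mount it in a pod through a persistentVolumeClaim volume source, for example:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-example-pod          # hypothetical name
spec:
  containers:
    - name: app
      image: nginx               # example image
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: nfs-example-pvc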