
Docker Swarm and Shared Storage Volumes

  • By Gcore
  • March 25, 2023
  • 3 min read

As we know, volumes provide a flexible and powerful way to add persistent storage to Docker containers, but what should we do if we want to share storage volumes across several Docker hosts, for instance the nodes of a Swarm cluster? In this article, we will look at a simple method of creating shared volumes usable across several Swarm nodes using the sshfs volume driver.

How it works

Since most applications are data oriented, containers often need to share files and directories with each other, and those files also need to be persisted and distributed across hosts.
We will use the vieux/sshfs plugin, which lets us easily mount a remote folder into our containers using SSHFS. Please note that this plugin supports volume drivers only. Now, let's walk through a simple example of how to work with the plugin.

  • First step, you need to install the plugin:
docker plugin install vieux/sshfs

Accept the plugin's privilege requests by typing y

  • Now, check whether the plugin was successfully installed:
docker plugin ls

You are good to go if you see true in the ENABLED column

  • Next, try to create a volume using the installed plugin:
docker volume create -d vieux/sshfs -o sshcmd=username@hostname:/some/path sshfsvolume
  • Verify that the plugin created the volume:
docker volume ls
  • Great, now you can use the created volume:
docker run -it --rm -v sshfsvolume:/path busybox ls /path

You will see the content of hostname:/some/path on the remote machine
Note: You can stop and/or remove the plugin with the following commands:

docker plugin disable vieux/sshfs
docker plugin rm vieux/sshfs
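If you manage several such volumes, it can help to assemble the docker volume create invocation from its parts before running it. The sketch below is a minimal illustration, not part of the plugin: the make_sshfs_volume_cmd helper and the storage.example.com host are hypothetical names chosen for the example.

```shell
# Hypothetical helper: assemble the `docker volume create` command for a
# given SSH user, host, remote path, and volume name.
make_sshfs_volume_cmd() {
  user=$1; host=$2; path=$3; name=$4
  echo "docker volume create -d vieux/sshfs -o sshcmd=${user}@${host}:${path} ${name}"
}

# Print the command instead of running it, so it can be reviewed first:
make_sshfs_volume_cmd swarmuser storage.example.com /some/path sshfsvolume
```

Piping the printed command into sh (or dropping the echo) would actually create the volume; printing first keeps the sketch safe to run anywhere.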

Objective

Let’s say you want to run an Apache service in a Docker Swarm cluster with several replicas, but you want these containers to share a customized Apache configuration file. This file is used by all of the replicas, and you want to store it in a single location so that the Apache configuration can be changed without re-creating the running containers.

To do this, you have to create a shared storage volume housed on a storage server. This volume must be accessible to all containers in the cluster, regardless of which node they run on. It contains the Apache configuration file and is mounted into each of the container replicas.

Prerequisites

  • Swarm Manager (SM) with /etc/httpd/conf directory
  • Swarm Worker (SW) with /etc/httpd/conf directory
  • Storage Server (SS) with /etc/docker/shared directory
  • Volume Storage (VS) will be created using vieux/sshfs driver
  • swarmuser on SM, SW and SS

Setup

  1. To set up the external storage location, create the shared directory /etc/docker/shared on the Storage Server. Please make sure swarmuser can read and write to the created directory:
sudo mkdir -p /etc/docker/shared
sudo chown swarmuser:swarmuser /etc/docker/shared
  2. Next, copy the Apache configuration file to the created directory on the Storage Server:
cp /etc/httpd/conf/httpd.conf /etc/docker/shared
  3. Now, install the vieux/sshfs Docker plugin on the Swarm Manager and Swarm Worker nodes:
docker plugin install --grant-all-permissions vieux/sshfs
  4. The last step is to create an Apache service called apache-web from the apache:latest image with N replicas. Mount the shared apache-vol volume into the containers at /etc/httpd/conf/ and publish port 9773 in the containers to port 8080 on the nodes. Please run the following command on the Swarm Manager:
docker service create -d \
  --replicas=<N> \
  --name apache-web \
  -p 8080:9773 \
  --mount volume-driver=vieux/sshfs,source=apache-vol,target=/etc/httpd/conf/,volume-opt=sshcmd=swarmuser@<Storage Server IP>:/etc/docker/shared,volume-opt=password=<swarmuser password> \
  apache:latest

Please note that the apache-vol Docker volume is created with the vieux/sshfs driver, which stores its data in /etc/docker/shared/ on the Storage Server, accessed as swarmuser.
It is worth mentioning that you should create the volume via the docker service create command so that the volume is configured automatically on every Swarm Worker that executes the service’s tasks.
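To check that all replicas actually came up, you can count the tasks that docker service ps reports as running. The sketch below is an illustration under assumptions: the printf output stands in for the result of `docker service ps apache-web --format '{{.CurrentState}}'`, which you would pipe into the same filter on a real manager node.

```shell
# count_running: count lines that report a task in the Running state.
count_running() {
  grep -c '^Running'
}

# Sample input standing in for real `docker service ps` output on a manager:
printf 'Running 2 minutes ago\nRunning 2 minutes ago\nShutdown 1 minute ago\n' \
  | count_running
# → 2
```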

Now you can verify that the service is working properly:

curl localhost:8080

Or by checking the Apache default web page on localhost.

Potential problems and risks

There are several problems you may face during setup or mount operations, such as:

  • Error while mounting the volume:
docker: Error response from daemon: error while mounting volume '/var/lib/docker/plugins/<ID>/mnt/volumes/<VolumeID>': VolumeDriver.Mount: exit status 2
ERRO[0000] error getting events from daemon: request canceled

In this case, try creating the volume using vieux/sshfs:next instead of vieux/sshfs

  • Error while trying to run:
docker: Error response from daemon: error while mounting volume '/var/lib/docker/plugins/<ID>/mnt/volumes/<VolumeID>': VolumeDriver.Mount: sshfs command execute failed: exit status 1 (read: Connection reset by peer)

That might be a permission issue. Please try running it as the root user, or check your key pair:

docker plugin install vieux/sshfs DEBUG=1 sshkey.source=/<username>/.ssh/

or

ssh-copy-id -i /<username>/.ssh/id_rsa.pub <username>@<hostname>

Note:
It is worth mentioning that using SSHFS carries several potential risks:

  • SSHFS will wait forever on a dead connection once the server becomes unreachable
  • Creating volumes with vieux/sshfs in a Compose file may lead to zombie process accumulation
  • SSHFS is limited to remote mounting and is available only on systems with FUSE support
  • SSHFS is best suited to cases where only a limited number of users or services access the filesystem at the same time
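For the dead-connection problem in particular, sshfs itself accepts reconnect and SSH keep-alive options. A hedged sketch, assuming your version of the vieux/sshfs plugin passes extra -o options through to sshfs as described in its README:

```shell
# Assumption: -o options other than sshcmd are forwarded to sshfs.
# reconnect plus SSH keep-alives makes a dead server fail fast instead of
# hanging the mount forever.
docker volume create -d vieux/sshfs \
  -o sshcmd=swarmuser@<Storage Server IP>:/etc/docker/shared \
  -o reconnect \
  -o ServerAliveInterval=15 \
  -o ServerAliveCountMax=3 \
  apache-vol
```

With these options, the SSH layer declares the server dead after roughly 45 seconds of silence (15 s interval × 3 missed replies) and sshfs attempts to reconnect rather than blocking I/O indefinitely.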

