
Install and Setup Docker Using Ansible on Ubuntu 18.04 (Part 2)

  • By Gcore
  • March 29, 2023
  • 3 min read

In the last guide, you learned how to install, set up, and configure Ansible on Ubuntu 18.04. Now, you will use Ansible to install and set up Docker on a remote machine. To begin this guide, you need the following:

  • One Ansible Control Node: A machine with Ansible installed and configured.
  • One or more Ansible Hosts: At least one remote host running Ubuntu 18.04, with a user that has sudo permissions.

Please make sure that your Ansible control node can connect to your Ansible remote machines. To test the connection, you can use the ansible all -m ping command.
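For example, assuming your inventory from the previous guide lists the remote host under the alias remote1 (your alias and connection details will differ), a successful test looks similar to this:

$ ansible all -m ping
remote1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}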

Creating a Playbook for Operations

You will be using an Ansible playbook to perform a set of actions on your Ansible remote machine, which are as follows:

  1. Install aptitude, the package manager Ansible prefers over the default apt.
  2. Install required system packages such as python3-pip and curl.
  3. Add the Docker GPG APT key to the system and add the official Docker repository to the apt sources.
  4. Install Docker on the remote machine.
  5. Install the Python Docker module via pip.
  6. Pull an image from the Docker Registry.

Once you are through with this guide, you will be running a defined number of containers on your remote host. Let’s begin.

Create an Ansible Playbook:

First, create a working directory where all your files will reside:

$ mkdir docker_server && cd $_
$ mkdir vars && cd $_ && touch default.yml
$ cd .. && touch main.yml

The directory layout should look like:

docker_server/
|-- main.yml
`-- vars
    `-- default.yml

1 directory, 2 files

Let’s see what each of these files is for:

  1. docker_server: The project root directory, containing all variable files and the main playbook.
  2. vars/default.yml: The variable file, residing in the vars directory, through which you customize the playbook settings.
  3. main.yml: The playbook in which you define the tasks that will execute on the remote server.

vars/default.yml

Begin with the playbook’s variable file, which is where you customize your Docker setup. Open vars/default.yml in your editor of choice:

$ cd docker_server && nano vars/default.yml

Copy the lines below and paste them into vars/default.yml:

---
containers: 2
container_name: docker_ubuntu
container_image: ubuntu:18.04
container_command: sleep 1d

A brief explanation of each of these variables:

  • containers: Defines how many containers you want to launch. Just make sure that your remote system has enough resources to run them smoothly.
  • container_name: The base name used for the running containers.
  • container_image: The image used when creating the containers.
  • container_command: The command that will run inside the new containers.
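If you’d rather not edit this file for every run, note that Ansible also lets you override these values on the command line with the -e (--extra-vars) flag; for example, to launch four containers instead of two (the value shown is illustrative):

$ ansible-playbook main.yml -e "containers=4"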

main.yml

In this file, you define all the tasks, the group of servers that should be targeted, and the privilege escalation (sudo). You also load the vars/default.yml variable file you created previously. Paste the following lines, making sure the file follows YAML formatting standards:

---
- hosts: all
  become: true
  vars_files:
    - vars/default.yml

  tasks:
    - name: Install aptitude using apt
      apt: name=aptitude state=latest update_cache=yes force_apt_get=yes

    - name: Install required system packages
      apt: name={{ item }} state=latest update_cache=yes
      loop: [ 'apt-transport-https', 'ca-certificates', 'curl', 'software-properties-common', 'python3-pip', 'virtualenv', 'python3-setuptools' ]

    - name: Add Docker GPG apt Key
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present

    - name: Add Docker Repository
      apt_repository:
        repo: deb https://download.docker.com/linux/ubuntu bionic stable
        state: present

    - name: Update apt and install docker-ce
      apt: update_cache=yes name=docker-ce state=latest

    - name: Install Docker Module for Python
      pip:
        name: docker

    - name: Pull default Docker image
      docker_image:
        name: "{{ container_image }}"
        source: pull

    - name: Create default containers
      docker_container:
        name: "{{ container_name }}{{ item }}"
        image: "{{ container_image }}"
        command: "{{ container_command }}"
        state: present
      with_sequence: count={{ containers }}
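Before executing the playbook against a real host, you can optionally confirm that it parses correctly with Ansible’s built-in syntax check, which reads the file without running any tasks:

$ ansible-playbook main.yml --syntax-check
playbook: main.yml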

Execute the Ansible Playbook:

Now, execute the playbook you created previously. For example, if your remote host is named remote1 in your inventory and you are going to connect to it as the root user, use the following command:

$ ansible-playbook main.yml -l remote1 -u root
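If you connect as a regular user with sudo privileges rather than as root, add the -K (--ask-become-pass) flag so that Ansible prompts you for the sudo password (the username below is a placeholder):

$ ansible-playbook main.yml -l remote1 -u youruser -K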

You will see output similar to the following:

...
TASK [Add Docker GPG apt Key] **************************************************************************************
changed: [remote1]

TASK [Add Docker Repository] **************************************************************************************
changed: [remote1]

TASK [Update apt and install docker-ce] **************************************************************************************
changed: [remote1]

TASK [Install Docker Module for Python] **************************************************************************************
changed: [remote1]

TASK [Pull default Docker image] **************************************************************************************
changed: [remote1]

TASK [Create default containers] **************************************************************************************
changed: [remote1] => (item=1)
changed: [remote1] => (item=2)

PLAY RECAP **************************************************************************************
remote1  : ok=8  changed=7  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0

Once your playbook has finished running, log in to your remote server via SSH and confirm that the Docker containers were created successfully:

$ ssh -i remote1-key.pem -p 4576 remote1@youripaddresshere
$ sudo docker ps -a

The -i flag includes your private key, and -p specifies the port number SSH is listening on.

You should see output similar to the following:

CONTAINER ID   IMAGE     COMMAND      CREATED         STATUS    PORTS   NAMES
t3gejb7o82dy   ubuntu    "sleep 1d"   3 minutes ago   Created           docker_ubuntu1
9df96gced2fg   ubuntu    "sleep 1d"   3 minutes ago   Created           docker_ubuntu2
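Note that the STATUS column reads Created rather than Up: the state: present setting in the last task creates the containers but does not start them. If you want them running, change state: present to state: started in the Create default containers task, or start a container manually:

$ sudo docker start docker_ubuntu1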

Conclusion

In this guide, you used Ansible to automate the process of installing and setting up Docker on a remote server. You can modify the playbook to suit your needs and workflow; it is also recommended that you visit the Ansible user guide for the docker_container module.
