Setting up Docker Swarm on ‘multipass’ sandbox

ChittaBlog · Published in Towards Dev · 4 min read · Feb 28, 2022

Previously, for all my proof of concept (POC) work, I used to spin up throwaway virtual machines on Oracle VirtualBox. Though there is no doubt that VirtualBox is still popular, I recently came across multipass, a virtualization tool that manages Ubuntu VMs on the fly. Although I had no experience with multipass before this POC, I wanted to give it a try and see whether it will become part of my toolset for all my tech experiments.

The objective of my POC is to set up a Docker Swarm cluster to test a container-based batch workflow (more details in the next blog post). Although I would ideally test this kind of workflow on a Kubernetes platform, the POC is meant to prove that the same workflow can also be implemented on-prem with a simple Docker cluster (one manager and two workers). I am sure you can find many blog posts explaining how to set up a Docker Swarm cluster; this post is all about how easy (or difficult) it is to set one up in a multipass environment.

Target State

Step 1: Installation of multipass

Based on the operating system you are using, the installation is fairly easy. For my POC I used a Mac, and you can find the installation process here. Installing on Windows should be fairly easy as well.
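For reference, on macOS the whole install can be a one-liner if you use Homebrew (the official installer package from the multipass site works just as well):

$ brew install --cask multipass
$ multipass version   # confirm the CLI is on the PATH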

Step 2: Launching ubuntu VMs on multipass

For my little experiment I need a minimal Docker Swarm setup; as per my understanding, one manager and two worker nodes will be sufficient, along with one extra server for NFS shared storage.

$ multipass launch -n manager 
$ multipass launch -n worker1
$ multipass launch -n worker2
$ multipass launch -n nfsserver
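I went with the defaults here, which at the time of writing give each VM 1 CPU, 1 GB of memory and a 5 GB disk. If you need beefier nodes, launch accepts sizing flags; a hypothetical example:

$ multipass launch -n manager -c 2 -m 2G -d 10G   # 2 vCPUs, 2 GB RAM, 10 GB disk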

Running the above commands with the defaults created the required VMs for this POC.

$ multipass list
Name         State    IPv4           Image
manager      Running  192.168.64.5   Ubuntu 20.04 LTS
nfsserver    Running  192.168.64.4   Ubuntu 20.04 LTS
worker1      Running  192.168.64.6   Ubuntu 20.04 LTS
worker2      Running  192.168.64.7   Ubuntu 20.04 LTS
Note: the output shows the active VMs in my multipass instance
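If you want a closer look at any single VM (image, load, disk and memory usage, mounts), multipass info has you covered:

$ multipass info manager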

Installing Docker CE on the manager and worker nodes is pretty straightforward; run the below commands as the root user on each node.
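To get a root shell on each VM, multipass shell followed by sudo does the trick:

$ multipass shell manager
ubuntu@manager:~$ sudo -i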

# refresh the package index and remove any older Docker packages
$ apt-get update && apt-get upgrade -y
$ apt-get remove docker docker-engine -y
$ apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common python3-setuptools -y
# add Docker's GPG key and apt repository
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
$ add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
$ apt-get update
# install and enable the Docker engine
$ apt-get install docker-ce -y
$ systemctl enable docker
$ systemctl restart docker
# install docker-compose via pip and let the default user run docker without sudo
$ apt install python3-pip -y
$ pip3 install docker-compose
$ usermod -aG docker ubuntu
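One small gotcha: the usermod group change only applies to new logins, so leave the VM shell and come back in before running docker as the ubuntu user:

ubuntu@manager:~$ exit
$ multipass shell manager
ubuntu@manager:~$ docker ps   # should now work without sudo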

Log in to the manager, worker1 and worker2 shells and verify that Docker is properly installed by executing the below command on each VM.

ubuntu@manager:~$ docker --version
Docker version 20.10.8, build 3967b7d

Note: there seems to be a known issue where the VMs can't locate each other by DNS name, and entries added to the Linux hosts file are not preserved either. I am still trying to find an alternative solution for this; in the meantime, the commands below use the VMs' IP addresses directly.

Initialize the Swarm on the Docker manager instance

$ docker swarm init --advertise-addr 192.168.64.5
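swarm init prints a ready-made join command containing the worker token. If you lose it, you can print it again from the manager at any time:

$ docker swarm join-token worker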

Run the join command from each worker VM, substituting the token printed by swarm init:

$ docker swarm join --token <enter the token> 192.168.64.5:2377
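Each worker should acknowledge the join with a message along these lines:

This node joined a swarm as a worker.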

Check whether the setup is working as expected by listing the nodes from the manager:

$ docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
nmjgfn0t79gy8dnu45x21m2x5 *   manager    Ready    Active         Leader           20.10.8
zrkx07rc4zw63wufog2gz6ieg     worker1    Ready    Active                          20.10.8
wd4zm15e82uc27ojzqse8r182     worker2    Ready    Active                          20.10.8

Try deploying an nginx web server on this cluster and verify that container deployment is working as expected.

$ docker service create --name my-web --publish 8080:80 --replicas 2 nginx 
$ docker service ls
ID             NAME     MODE         REPLICAS   IMAGE          PORTS
s9eabxqjgu98   my-web   replicated   2/2        nginx:latest   *:8080->80/tcp

The above command deploys the nginx web server with 2 replicas; check which nodes these 2 containers landed on:

ubuntu@manager:~$ docker service ps my-web
ID             NAME       IMAGE          NODE      DESIRED STATE   CURRENT STATE            ERROR     PORTS
i8ujszdpp3mr   my-web.1   nginx:latest   worker2   Running         Running 48 seconds ago
akwxcuqqk5n0   my-web.2   nginx:latest   manager   Running         Running 56 seconds ago
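Swarm spread the two replicas across worker2 and the manager on its own. If you want to experiment a little further, scaling the service up or down is a one-liner (shown purely as an optional extra):

$ docker service scale my-web=3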

A quick test confirms everything looks good.
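That quick test is simply hitting the published port. Thanks to Swarm's routing mesh, port 8080 answers on every node's IP, not just the ones running a replica; for example, from the Mac host (using the manager's IP from the multipass list output above):

$ curl -s http://192.168.64.5:8080 | grep title
<title>Welcome to nginx!</title>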

The next step is to set up the NFS storage and mount a volume that is accessible from all nodes.

To be continued…
