Ansible III: Multiple machines, multiple playbooks

#it automation #configuration as code

Here comes the third post of the Ansible series. This time we set up a scenario that will serve us through the rest of the series. I wondered for a while what it should look like, but since I’m not an expert we will keep it simple: frontend + backend + database. Later you can expand it to include a cache, load balancing, a service registry, a vault, etc. Let’s dive in!

Note: we are working both in PowerShell and in the WSL. Every time you encounter a “bash” code box and you are on Windows, you should run it inside the WSL. Check the first post of the series to learn how to install the WSL.

The components and Vagrantfile

First things first, a line describing each of our elements:

- Frontend: the user-facing part, the web application itself.
- Backend: the API the frontend talks to.
- Database: where the backend stores its data.

So we will create three machines, one per component. For that we need to expand the Vagrantfile to include those machines. If you used the example I gave in the first part of the series, you should have this:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.define "main" do |main|
    main.vm.box = "ubuntu/bionic64" "private_network", ip: "" # example private IP; any free one works
  end
end

Now triplicate the config.vm.define block up to its end clause. Change the main name to frontend, backend and database, and give each machine its own IP.

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.define "frontend" do |frontend|
    frontend.vm.box = "ubuntu/bionic64" "private_network", ip: ""
  end

  config.vm.define "backend" do |backend|
    backend.vm.box = "ubuntu/bionic64" "private_network", ip: ""
  end

  config.vm.define "database" do |database|
    database.vm.box = "ubuntu/bionic64" "private_network", ip: ""
  end
end

Now it is time to run vagrant up and get our machines running!

Connecting to each machine

With Vagrant, to connect to each machine there is vagrant ssh <name>, where <name> is frontend, backend or database in our case:

vagrant ssh frontend
vagrant ssh backend
vagrant ssh database

Each of these attaches us to a console inside the machine, so we can run things there. Try top to view processes, lsb_release -a to check the OS release, or ip address to list the network interfaces.

The next thing we want is to connect directly with SSH, without going through vagrant ssh. For that we will need the SSH private keys, located inside the .vagrant folder. Those keys are generated automatically by Vagrant, each one under a folder named after its machine: in our case .vagrant/machines/frontend, .vagrant/machines/backend and .vagrant/machines/database. We will do all of this inside the WSL, since that is where it matters.
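If you want to see this layout for yourself before the machines exist, here is a small sketch that recreates the folder structure in a temporary directory and lists the keys with find; in your real project, run the same find against your actual .vagrant folder instead:

```shell
# Simulate the key layout Vagrant generates with the VirtualBox provider
# (the real keys live in your project's .vagrant folder)
root=$(mktemp -d)
for m in frontend backend database; do
  mkdir -p "$root/.vagrant/machines/$m/virtualbox"
  touch "$root/.vagrant/machines/$m/virtualbox/private_key"
done
# List every private key, one per machine
find "$root/.vagrant" -name private_key | sort
```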

To connect to the frontend…

$ ssh -i .vagrant/machines/frontend/virtualbox/private_key vagrant@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
SHA256:8J3X1e8cYZcHTxxVRP0rWdgC5q9N7E0pYRLhJjb1XSA.
Please contact your system administrator.
Add correct host key in C:\Users\hector/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in C:\Users\hector/.ssh/known_hosts:14
ECDSA host key for has changed and you have requested strict checking.
Host key verification failed.

This means that the identity of the machine behind that IP has changed. If you followed the previous posts, you created a machine at that IP, then destroyed it, and just now created another one: the frontend. So our beloved SSH is protecting us correctly. If you read the message carefully, it says where the offending entry is, but not how to remove it. We can use ssh-keygen to fix the error:

$ ssh-keygen -R
# Host found: line 11
/home/hector/.ssh/known_hosts updated.
Original contents retained as /home/hector/.ssh/known_hosts.old

Now if you try to connect again…

$ ssh -i .vagrant/machines/frontend/virtualbox/private_key vagrant@
The authenticity of host ' (' can't be established.
ECDSA key fingerprint is SHA256:8J3X1e8cYZcHTxxVRP0rWdgC5q9N7E0pYRLhJjb1XSA.
Are you sure you want to continue connecting (yes/no)?

Here answer yes to trust the machine now and in the following connections. But… boom! Another error:

$ ssh -i .vagrant/machines/frontend/virtualbox/private_key vagrant@
The authenticity of host ' (' can't be established.
ECDSA key fingerprint is SHA256:8J3X1e8cYZcHTxxVRP0rWdgC5q9N7E0pYRLhJjb1XSA.
Are you sure you want to continue connecting (yes/no)? yes
Permissions 0777 for '.vagrant/machines/frontend/virtualbox/private_key' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key ".vagrant/machines/frontend/virtualbox/private_key": bad permissions

This error is self-explanatory too: the permissions are too open, so we should restrict them. However, again no concrete command is given to fix it. We modify the permissions of the private key with the chmod command:

chmod 400 .vagrant/machines/frontend/virtualbox/private_key

The 400 changes the permissions so the file can only be read by us; not even we will be able to edit it. And finally we should be able to connect to the machine:
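If you are curious about what 400 means: each digit sets the permissions for owner, group and others, and 4 is “read”. A quick demo with a scratch file (any file behaves the same way as the key):

```shell
# 400 = owner may read; group and others get nothing
f=$(mktemp)
chmod 400 "$f"
stat -c '%a %A' "$f"   # prints: 400 -r--------
```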

$ ssh -i .vagrant/machines/frontend/virtualbox/private_key vagrant@
Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-65-generic x86_64)

 * Documentation:
 * Management:
 * Support:

  System information as of Fri Oct 18 14:23:26 UTC 2019

  System load:  0.0               Processes:             96
  Usage of /:   10.0% of 9.63GB   Users logged in:       0
  Memory usage: 12%               IP address for enp0s3:
  Swap usage:   0%                IP address for enp0s8:

0 packages can be updated.
0 updates are security updates.

Last login: Fri Oct 18 14:23:16 2019 from

Now repeat the process for the backend and database machines. Each should hit the same permission error, so you will have to run chmod on their keys too.

$ ssh -i .vagrant/machines/backend/virtualbox/private_key vagrant@
$ ssh -i .vagrant/machines/database/virtualbox/private_key vagrant@

Remember to change the folder for the private key and the IP of the machine!
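Since the three commands only differ in the machine name and IP, a small loop can print the exact command to run for each machine. The IPs here (192.168.33.10 to .12) are hypothetical examples; substitute whatever you put in your own Vagrantfile:

```shell
# Print the ssh command for every machine (example IPs; use your own)
i=10
for m in frontend backend database; do
  echo "ssh -i .vagrant/machines/$m/virtualbox/private_key vagrant@192.168.33.$i"
  i=$((i + 1))
done
```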

Lots of playbooks

Now that we have our machines up and running and we can connect to them we will start creating our Ansible files. In a folder anywhere create five files:

# cd $HOME/Projects/ansible-series/ansible-03/
touch main.yml
touch frontend.yml
touch backend.yml
touch database.yml
touch inventory

We will fill the first one, main.yml. We use import_playbook to tell Ansible to pull in the other playbooks and execute them too, so main.yml is basically an aggregator:

# main.yml
- import_playbook: database.yml
- import_playbook: backend.yml
- import_playbook: frontend.yml

Now the other three playbooks. They follow the same template; just change ‘database’ for ‘frontend’ or ‘backend’. By default Vagrant creates the vagrant user and associates it with the generated private key, so we add the appropriate ansible_user and ansible_private_key_file variables to each playbook too. We also add a placeholder task that uses the debug module to print a message:

# database.yml
- name: configure database
  hosts: database
  vars:
    ansible_user: vagrant
    ansible_private_key_file: .vagrant/machines/database/virtualbox/private_key
  tasks:
    - name: Hello!
      debug:
        msg: Hello from database!
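Since the three playbooks only differ in the component name, you can even generate all of them from one template with a small shell loop instead of editing each file by hand. This is just a sketch run from your project folder; writing the files manually works just as well:

```shell
# Generate the three playbooks from one template
for c in database backend frontend; do
  cat > "$c.yml" <<EOF
- name: configure $c
  hosts: $c
  vars:
    ansible_user: vagrant
    ansible_private_key_file: .vagrant/machines/$c/virtualbox/private_key
  tasks:
    - name: Hello!
      debug:
        msg: Hello from $c!
EOF
done
# Quick check that each playbook targets its own group
grep 'hosts:' database.yml backend.yml frontend.yml
```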

Last but not least, the inventory. There we create our host groups:

# inventory

[frontend]

[backend]

[database]
With this inventory, we are good to go. We created three groups for the same reason we have three different playbooks: we want to do different things on the frontend, backend and database machines, so we target each of them separately. Now you should be able to run the main.yml playbook:

$ ansible-playbook -i inventory main.yml

PLAY [configure database]

TASK [Gathering Facts]
ok: []

TASK [Hello!]
ok: [] => {
    "msg": "Hello from database!"
}

PLAY [configure backend]

TASK [Gathering Facts]
ok: []

TASK [Hello!]
ok: [] => {
    "msg": "Hello from backend!"
}

PLAY [configure frontend]

TASK [Gathering Facts]
ok: []

TASK [Hello!]
ok: [] => {
    "msg": "Hello from frontend!"
}

PLAY RECAP : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0 : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0 : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

And this should be the output. We ran two tasks per host: “Gathering Facts” and “Hello!”. The first collects lots of variables (facts) about the host being configured that you can use inside your tasks, while the second is our own dummy task.
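For example, facts such as ansible_distribution can be used like any other variable. A hypothetical extra task (add it under tasks: in any of the playbooks) could look like this:

```yaml
# hypothetical task using gathered facts
- name: Show the gathered OS facts
  debug:
    msg: "This host runs {{ ansible_distribution }} {{ ansible_distribution_version }}"
```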


Easy? Probably not, I know. I may even be making it harder with my writing skills (I’m trying to improve, I swear). If any problem comes up and you need some help, write a comment below or reach me on social media.

Continue reading

Was it useful? Have you done something similar? Any feedback?