Docker simulation of the Varilink Computing Ltd server estate for testing Ansible playbooks that use roles defined in the libraries-ansible repository.
If I were to add an additional Docker client tool that implemented Ansible, that would remove a client/desktop dependency for using this repo: you would then only need Docker installed on the client/desktop, not Ansible, and you'd still be able to use this repo.
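A minimal sketch of what such a client tool could look like, as an extra Compose service. The build path, mount point, and service name here are all assumptions for illustration, not this repo's actual configuration:

```yaml
# Hypothetical "ansible" client service for docker-compose.yml
services:
  ansible:
    build: ./docker/ansible     # assumed Dockerfile that installs Ansible
    volumes:
      - .:/repo                 # mount the repo into the container
    working_dir: /repo
    entrypoint: ["ansible-playbook"]
```

With something like this in place, `docker-compose run --rm ansible site.yml` (playbook name assumed) would run Ansible without it being installed on the desktop.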
When you run raise-hosts including the backup service, the service is configured for all known backed-up hosts, which will probably include hosts that weren't included in the raise-hosts scope. Can we make it so that only the hosts in the raise-hosts scope are configured, without polluting the Libraries - Ansible repository in doing so?
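One way this might be done on the playbook side, keeping the shared role generic, is to intersect the full list of backed-up hosts with the hosts in scope before passing it to the role. This is a sketch only: `backed_up_hosts`, the `backed_up` group, and `raised_hosts` are all assumed names, though `intersect` is a real Ansible filter:

```yaml
# Sketch: narrow the role's host list in this repo's playbook so the
# libraries-ansible role itself stays untouched. Names are placeholders.
- hosts: backup
  roles:
    - role: backup
      vars:
        backed_up_hosts: "{{ groups['backed_up'] | intersect(raised_hosts) }}"
```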
I am running MariaDB within containers using mysqld_safe, since according to the documentation "mysqld_safe is the recommended way to start mysqld on Linux and Unix distributions that do not support systemd." However, this approach does not currently append log output to /var/log/services.log as all the other services do, which my logging approach requires. I need to find a way to correct this.
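One possible fix, assuming the containers start MariaDB from an entrypoint script: mysqld_safe sends mysqld's output to its own error log by default, so point it at the shared log instead. This is an untested sketch against this setup, not a confirmed solution:

```sh
#!/bin/sh
# Entrypoint sketch: direct mysqld_safe / mysqld output into the shared
# services log rather than the default error log location.
exec mysqld_safe --skip-syslog --log-error=/var/log/services.log
```

An alternative, if the option route doesn't behave as expected, would be plain stream redirection (`mysqld_safe ... >> /var/log/services.log 2>&1`), matching however the other services append to that file.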
The Docker Compose playbook service runs Ansible playbooks. It allows you to pass arguments to it that docker-compose run supports; for example --start-at-task. Invariably, the task name that follows --start-at-task has spaces in it. This is handled correctly, but the "What I am about to run" report doesn't put that task name in quotes as it should.
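A small sketch of the quoting fix for the report. The function name and the way arguments reach it are assumptions, not the repo's actual helper code:

```shell
# Sketch: when echoing the "What I am about to run" report, wrap any
# argument containing spaces in double quotes so the report reads as a
# command you could paste back into a shell.
report_command() {
  out=""
  for arg in "$@"; do
    case "$arg" in
      *" "*) out="$out \"$arg\"" ;;   # quote args with embedded spaces
      *)     out="$out $arg" ;;
    esac
  done
  printf '%s\n' "${out# }"            # trim the leading space
}

report_command ansible-playbook --start-at-task "Configure the backup service"
# → ansible-playbook --start-at-task "Configure the backup service"
```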
This repo currently uses bash helper scripts, which makes it dependent on bash support on the client/desktop. That currently limits it to my Debian desktops, so I couldn't use it on my Windows desktops.
I think that this dependency could be eliminated by wrapping the helper scripts themselves within Docker containers.
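For example, a thin image could carry bash and the helper scripts, so the desktop only needs Docker. The base image, script paths, and image name below are assumptions for illustration:

```dockerfile
# Hypothetical wrapper image for the bash helper scripts
FROM debian:bookworm-slim
COPY *.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/*.sh
ENTRYPOINT ["bash"]
```

Invocation would then look something like `docker run --rm -v "$PWD":/work -w /work helpers raise-hosts.sh`. One caveat: if the helper scripts themselves invoke docker or docker-compose, the host's Docker socket would also need mounting into the container (`-v /var/run/docker.sock:/var/run/docker.sock`).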
We still have intermittent failures to start processes in the roles in my-roles/ when running playbooks. When this happens you can simply rerun the playbook, and if the role includes a start task between the install and configure tasks (i.e. we're not relying on handlers alone to start the process) then everything works okay. This seems to confirm that the cause of these failures is transient.
I want to add two things to mitigate this:
1. Some guidance in the output when this happens, stating that all you need to do is rerun the playbook, perhaps with additional instructions if they're needed.
2. A way to ensure that, where there isn't an explicit start task between the install and configure tasks, we do attempt to start the process on the rerun.
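For the second item, a hedged sketch of what such a safety-net task could look like. The variable name is a placeholder, though ansible.builtin.service is the standard module for this:

```yaml
# Sketch: always attempt to start the process on a (re)run, rather than
# relying solely on notify handlers that may not fire the second time.
- name: Ensure the service is running even when no handler fired
  ansible.builtin.service:
    name: "{{ service_name }}"   # placeholder variable name
    state: started
```

Because `state: started` is idempotent, adding a task like this between install and configure would be harmless on runs where the process started normally.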