nickjj / ansible-docker
Install / Configure Docker and Docker Compose using Ansible.
License: MIT License
First of all, thanks for the role!
A question about the cron. I got it created without issue:
TASK [nickjj.docker : Create Docker related cron jobs] ***************************************************************
ok: [localhost] => (item={u'job': u'docker system prune -af > /dev/null 2>&1', u'cron_file': u'docker-disk-clean-up', u'user': u'root', u'name': u'Docker disk clean up', u'schedule': [u'0', u'0', u'*', u'*', u'0']})
But when I check the cron list with `crontab -l`, I get:
no crontab for root
How can I check if the cron is correctly installed?
For context here is the playbook:
---
- hosts: all
roles:
- role: "nickjj.docker"
tags: ["docker"]
I run this against Debian 10 in Docker: https://github.com/geerlingguy/docker-debian10-ansible
❯ docker exec --tty debian-ansible env TERM=xterm ansible-playbook /etc/ansible/playbooks/docker.yml
...
TASK [nickjj.docker : Create Docker related cron jobs] ***************************************************************
ok: [localhost] => (item={u'job': u'docker system prune -af > /dev/null 2>&1', u'cron_file': u'docker-disk-clean-up', u'user': u'root', u'name': u'Docker disk clean up', u'schedule': [u'0', u'0', u'*', u'*', u'0']})
...
❯ docker exec -it debian-ansible bash
root@0257f15480e0:/# crontab -l
no crontab for root
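For reference, jobs created through Ansible's cron module with a cron_file parameter are written to /etc/cron.d/<cron_file> rather than to the invoking user's personal crontab, which is why crontab -l reports nothing. A quick ad-hoc check could look like this (a sketch; the file name comes from the cron_file value in the log above):

```yaml
- hosts: all
  tasks:
    - name: Read the generated cron.d entry
      command: cat /etc/cron.d/docker-disk-clean-up
      register: cron_out
      changed_when: false

    - name: Show its contents
      debug:
        var: cron_out.stdout_lines
```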
If I launch the installation with the following two variables set:
docker_edition: "ce"
docker_channel: "stable"
I get the following error:
fatal: [host]: FAILED! => {"cache_update_time": 1524049656, "cache_updated": false, "changed": false, "msg": "'/usr/bin/apt-get -y -o \"Dpkg::Options::=--force-confdef\" -o \"Dpkg::Options::=--force-confold\" install 'docker-ce=18.04.0~ce~3-0~ubuntu' -o APT::Install-Recommends=no' failed: E: Version '18.04.0~ce~3-0~ubuntu' for 'docker-ce' was not found\n", "rc": 100, "stderr": "E: Version '18.04.0~ce~3-0~ubuntu' for 'docker-ce' was not found\n", "stderr_lines": ["E: Version '18.04.0~ce~3-0~ubuntu' for 'docker-ce' was not found"], "stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\n", "stdout_lines": ["Reading package lists...", "Building dependency tree...", "Reading state information..."]}
Note that I also get an error if I set the version to 18.03.0:
docker_version: "18.03.0"
I get:
fatal: [host]: FAILED! => {"cache_update_time": 1524049427, "cache_updated": false, "changed": false, "msg": "'/usr/bin/apt-get -y -o \"Dpkg::Options::=--force-confdef\" -o \"Dpkg::Options::=--force-confold\" install 'docker-ce=18.03.0~ce~3-0~ubuntu' -o APT::Install-Recommends=no' failed: E: Version '18.03.0~ce~3-0~ubuntu' for 'docker-ce' was not found\n", "rc": 100, "stderr": "E: Version '18.03.0~ce~3-0~ubuntu' for 'docker-ce' was not found\n", "stderr_lines": ["E: Version '18.03.0~ce~3-0~ubuntu' for 'docker-ce' was not found"], "stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\n", "stdout_lines": ["Reading package lists...", "Building dependency tree...", "Reading state information..."]}
and if I set the version to:
docker_version: "18.03.0-ce"
I get:
fatal: [host]: FAILED! => {"cache_update_time": 1524049574, "cache_updated": false, "changed": false, "msg": "'/usr/bin/apt-get -y -o \"Dpkg::Options::=--force-confdef\" -o \"Dpkg::Options::=--force-confold\" install 'docker-ce=18.03.0-ce~ce~3-0~ubuntu' -o APT::Install-Recommends=no' failed: E: Version '18.03.0-ce~ce~3-0~ubuntu' for 'docker-ce' was not found\n", "rc": 100, "stderr": "E: Version '18.03.0-ce~ce~3-0~ubuntu' for 'docker-ce' was not found\n", "stderr_lines": ["E: Version '18.03.0-ce~ce~3-0~ubuntu' for 'docker-ce' was not found"], "stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\n", "stdout_lines": ["Reading package lists...", "Building dependency tree...", "Reading state information..."]}
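Before pinning docker_version, it can help to list the versions the apt repository actually provides, since the suffix appended to the package version (e.g. ~ce~3-0~ubuntu) changes between releases. A diagnostic task sketch:

```yaml
- name: List docker-ce versions available from apt
  command: apt-cache madison docker-ce
  register: madison
  changed_when: false

- name: Show available versions
  debug:
    var: madison.stdout_lines
```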
A playbook like this:
---
- hosts: stage
tasks:
- name: 'Install docker'
become: yes
become_method: sudo
include_role:
name: nickjj.docker
throws this error:
failed: [52.136.244.212] (item=[u'apt-transport-https', u'ca-certificates', u'software-properties-common', u'cron']) => {"changed": false, "cmd": "apt-get update", "item": ["apt-transport-https", "ca-certificates", "software-properties-common", "cron"], "msg": "E: Could not open lock file /var/lib/apt/lists/lock - open (13: Permission denied)\nE: Unable to lock directory /var/lib/apt/lists/\nW: Problem unlinking the file /var/cache/apt/pkgcache.bin - RemoveCaches (13: Permission denied)\nW: Problem unlinking the file /var/cache/apt/srcpkgcache.bin - RemoveCaches (13: Permission denied)", "rc": 100, "stderr": "E: Could not open lock file /var/lib/apt/lists/lock - open (13: Permission denied)\nE: Unable to lock directory /var/lib/apt/lists/\nW: Problem unlinking the file /var/cache/apt/pkgcache.bin - RemoveCaches (13: Permission denied)\nW: Problem unlinking the file /var/cache/apt/srcpkgcache.bin - RemoveCaches (13: Permission denied)\n", "stderr_lines": ["E: Could not open lock file /var/lib/apt/lists/lock - open (13: Permission denied)", "E: Unable to lock directory /var/lib/apt/lists/", "W: Problem unlinking the file /var/cache/apt/pkgcache.bin - RemoveCaches (13: Permission denied)", "W: Problem unlinking the file /var/cache/apt/srcpkgcache.bin - RemoveCaches (13: Permission denied)"], "stdout": "Reading package lists...\n", "stdout_lines": ["Reading package lists..."]}```
The task contains the become flag; I would be thankful for any hints.
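One workaround, assuming the connecting user has sudo rights, is to elevate at the play level so every task in the included role runs privileged (a sketch, not the role author's recommendation). Note that on older Ansible versions, become set on the task doing include_role may not propagate into the role's own tasks; play-level become sidesteps that.

```yaml
---
- hosts: stage
  become: true
  roles:
    - role: nickjj.docker
```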
Hi,
Maybe I'm doing something wrong (I'm new to Ansible), but I get an issue with the task named "Get upstream APT GPG key", which fails every time on a Debian Stretch managed host with the error below:
TASK [ansible-docker : Get upstream APT GPG key] ******************************************************************************************************************************************************************************************************************************
fatal: [192.168.22.136]: FAILED! => {"changed": false, "cmd": "/usr/bin/apt-key adv --keyserver hkp://pool.sks-keyservers.net --recv 9DC858229FC7DD38854AE2D88D81803C0EBFCD88", "msg": "Error fetching key 9DC858229FC7DD38854AE2D88D81803C0EBFCD88 from keyserver: hkp://pool.sks-keyservers.net", "rc": 2, "stderr": "Warning: apt-key output should not be parsed (stdout is not a terminal)\ngpg: keyserver receive failed: Invalid object\n", "stderr_lines": ["Warning: apt-key output should not be parsed (stdout is not a terminal)", "gpg: keyserver receive failed: Invalid object"], "stdout": "Executing: /tmp/apt-key-gpghome.sjlsvljxVm/gpg.1.sh --keyserver hkp://pool.sks-keyservers.net --recv 9DC858229FC7DD38854AE2D88D81803C0EBFCD88\n", "stdout_lines": ["Executing: /tmp/apt-key-gpghome.sjlsvljxVm/gpg.1.sh --keyserver hkp://pool.sks-keyservers.net --recv 9DC858229FC7DD38854AE2D88D81803C0EBFCD88"]}
I fixed it by modifying the task in your role and explicitly adding the port number ":80" at the end of the URI:
- name: Get upstream APT GPG key
apt_key:
id: "{{ docker_apt_key }}"
keyserver: "{{ ansible_local.core.keyserver
if (ansible_local|d() and ansible_local.core|d() and
ansible_local.core.keyserver)
else 'hkp://pool.sks-keyservers.net:80' }}"
state: "present"
It worked perfectly afterwards:
TASK [ansible-docker : Get upstream APT GPG key] ******************************************************************************************************************************************************************************************************************************
changed: [192.168.22.136]
Installation on Ubuntu 18.04 fails with the following error:
failed: [X.X.X.X] (item=gnupg2) => {"changed": false, "item": "gnupg2", "msg": "No package matching 'gnupg2' is available"}
ok: [X.X.X.X] => (item=cron)
On Ubuntu 18.04 the package providing gnupg >= 2 is named simply 'gnupg'.
Dependencies can be overridden (thanks for that), but it is inconvenient.
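A possible override for Ubuntu 18.04, assuming the role exposes its dependency list as a variable (the name below is illustrative; check the role's defaults/main.yml for the real one):

```yaml
# Variable name is a placeholder -- confirm it against defaults/main.yml.
docker__package_dependencies:
  - apt-transport-https
  - ca-certificates
  - software-properties-common
  - gnupg  # Ubuntu 18.04 ships "gnupg", not "gnupg2"
  - cron
```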
By default, Docker will use `/var/lib/docker` for its files. However, we tend to use `/srv/docker`, as our `/srv/` partitions are where we store the application files and data, so they tend to be much bigger than the root partition.
Telling Docker to use a different directory is achieved by setting the `graph` value in `/etc/docker/daemon.json`. If the file exists before Docker is installed, then Docker uses it straight away and nothing gets created in `/var/lib`.
However, when I try to set `graph` via the playbook configuration, everything installs but the service then fails to start. I think this is happening because Docker is installed first and `/etc/docker/daemon.json` is created afterwards, which means that Docker has already set up `/var/lib/docker`.
Would it be possible to move the initialization of `/etc/docker/daemon.json` to before you install Docker, so that it picks up the values before it first runs?
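For reference, newer Docker releases deprecate graph in favour of data-root, so a configuration aiming at /srv/docker might look like this (a sketch only; the variable name is illustrative and may differ by role version, and whether the role templates daemon.json before or after the package install is exactly the question raised above):

```yaml
# Illustrative contents to be templated into /etc/docker/daemon.json;
# confirm the variable name against the role's defaults/main.yml.
docker__daemon_json: |
  "data-root": "/srv/docker"
```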
Running v1.1.0:
- nickjj.docker, v1.1.0
with this configuration:
- hosts: all
become: True
roles:
- common
- { role: "nickjj.docker", tags: "docker" }
vars:
docker__daemon_options: "\"dns\": [\"{{ dns_servers | join('\", \"')}}\"]"
docker__registries:
- registry_url: "redacted"
username: "redacted"
password: "redacted"
state: present
but if I `cat /etc/docker/daemon.json`:
{ "log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "1000"
}
}
and the registry is not logged in either.
Any idea how else to debug this?
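One thing worth checking, as an assumption rather than a confirmed fix: the daemon-options variable may expect a different name or shape in the role version installed, in which case a pre-escaped JSON string would be silently ignored. A structured sketch using a hypothetical variable shape:

```yaml
# Hypothetical shape -- verify the variable name and format against the
# defaults/main.yml of the exact role version you have installed (v1.1.0).
docker__daemon_json: |
  "dns": {{ dns_servers | to_json }}
```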
I am writing a playbook and running it several times, so Docker is already installed on my target machine.
In the playbook, other than ansible-docker, I have another role that updates packages. So Docker (I am using the `ce` edition) has been installed and automatically upgraded to 18.03.1 from the stable channel:
$ dpkg -l 'docker*'
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-==============-============-============-=================================
ii docker-ce 18.03.1~ce-0 amd64 Docker: the open-source applicati
This is my playbook configuration:
docker_edition: "ce"
docker_channel: "stable"
docker_version: "18.03.0"
When I execute the playbook I get the following error:
TASK [ansible-docker : Install Docker] *****************************************
fatal: [host]: FAILED! => {"cache_update_time": 1524950924, "cache_updated": false, "changed": false, "msg": "'/usr/bin/apt-get -y -o \"Dpkg::Options::=--force-confdef\" -o \"Dpkg::Options::=--force-confold\" install 'docker-ce=18.03.0~ce-0~ubuntu' -o APT::Install-Recommends=no' failed: E: Packages were downgraded and -y was used without --allow-downgrades.\n", "rc": 100, "stderr": "E: Packages were downgraded and -y was used without --allow-downgrades.\n", "stderr_lines": ["E: Packages were downgraded and -y was used without --allow-downgrades."], "stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\nRecommended packages:\n aufs-tools cgroupfs-mount | cgroup-lite pigz\nThe following packages will be DOWNGRADED:\n docker-ce\n0 upgraded, 0 newly installed, 1 downgraded, 0 to remove and 0 not upgraded.\n", "stdout_lines": ["Reading package lists...", "Building dependency tree...", "Reading state information...", "Recommended packages:", " aufs-tools cgroupfs-mount | cgroup-lite pigz", "The following packages will be DOWNGRADED:", " docker-ce", "0 upgraded, 0 newly installed, 1 downgraded, 0 to remove and 0 not upgraded."]}
I am not even sure this is a proper bug of this Ansible role, and I am not sure what the most reasonable expected behavior would be in this case. I am reporting it just to point out this behavior.
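If the intent is to keep Docker pinned while another role upgrades packages, one approach is to place the package on hold so routine upgrades skip it (a sketch using Ansible's dpkg_selections module):

```yaml
- name: Hold docker-ce so apt upgrades do not move it
  become: true
  dpkg_selections:
    name: docker-ce
    selection: hold
```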
Hi,
Nice work here. I was trying to install docker-compose the way you do here (and the way Docker recommends) with `curl`. But I kept running into errors about the `compose` module not being available, which I finally solved by installing docker-compose with pip.
How did you avoid that problem?
Thanks!
This change causes my runs to miss the first and last quote.
Hi! I am trying to use the role to install docker on both Ubuntu 16.04 and Debian 9.
My playbook failed with the following error:
TASK [nickjj.docker : Install Docker] ***********************************************************************************************************************************************
fatal: [debian]: FAILED! => {"cache_update_time": 1512539489, "cache_updated": false, "changed": false, "failed": true, "msg": "'/usr/bin/apt-get -y -o \"Dpkg::Options::=--force-confdef\" -o \"Dpkg::Options::=--force-confold\" install 'docker-engine=17.05.0-0~stretch'' failed: E: Version '17.05.0-0~stretch' for 'docker-engine' was not found\n", "rc": 100, "stderr": "E: Version '17.05.0-0~stretch' for 'docker-engine' was not found\n", "stderr_lines": ["E: Version '17.05.0-0~stretch' for 'docker-engine' was not found"], "stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\n", "stdout_lines": ["Reading package lists...", "Building dependency tree...", "Reading state information..."]}
fatal: [ubuntu]: FAILED! => {"cache_update_time": 1512543344, "cache_updated": false, "changed": false, "failed": true, "msg": "'/usr/bin/apt-get -y -o \"Dpkg::Options::=--force-confdef\" -o \"Dpkg::Options::=--force-confold\" install 'docker-engine=17.05.0-0~xenial'' failed: E: Version '17.05.0-0~xenial' for 'docker-engine' was not found\n", "rc": 100, "stderr": "E: Version '17.05.0-0~xenial' for 'docker-engine' was not found\n", "stderr_lines": ["E: Version '17.05.0-0~xenial' for 'docker-engine' was not found"], "stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\n", "stdout_lines": ["Reading package lists...", "Building dependency tree...", "Reading state information..."]}
So I dug into the apt cache on my test machines and saw that the repositories name the packages `docker-engine=17.05.0~ce-0~debian-stretch` and `docker-engine=17.05.0~ce-0~ubuntu-xenial`.
It looks like an invalid version name is generated on this line
Apt repo has:
Package: docker-ce
Architecture: amd64
Version: 18.04.0~ce~3-0~debian
I'm trying to update with the option docker_version: "18.04.0"
Error:
Version '18.04.0~ce-0~debian' for 'docker-ce' was not found
The problem is on line 36
Really love what you did; I was looking for something like this.
What I miss, though, is an example project with the Ansible root and related setup.
I also miss a role, or a mention in the README, for adding the Docker files and/or directories to the remote host. When you provision, the Dockerfiles really should be part of it. The application itself you can deploy using deployer or another playbook, but for this provisioning package, a role that adds your Docker setup would be really useful.
Perhaps with something like ansible/roles/site_name/tasks/main.yml
:
---
- name: Create Docker base directory
file: path={{ work_dir }} state=directory
- name: Copy docker-compose file
template: src=docker-compose.j2 dest={{ work_dir }}/docker-compose.yml
- name: Setup ufw
ufw: rule=allow port=80 proto=tcp
- name: Open up SSL ufw
ufw: rule=allow port=443 proto=tcp
- name: create the logrotate conf for docker
copy: src=logrotate_docker dest=/etc/logrotate.d/docker
- name: copy the backup script
copy: src=site-backup dest={{ work_dir }}/site-backup mode=755
tags:
- prod
- name: install s3cmd
apt: name=s3cmd state=present update_cache=yes
tags:
- prod
- name: install s3cfg
template: src=s3cfg dest=/root/.s3cfg
tags:
- prod
- name: schedule backup to run weekly
cron: name="site backup" minute="0" hour="2" weekday="1" job="{{ work_dir }}/site-backup" user="root"
tags:
- prod
- name: Copy over site-upgrade
copy: src=site-upgrade dest={{ work_dir }}/ mode=755
- name: Copy over site-normal
copy: src=site-normal dest={{ work_dir }}/ mode=755
--
and
---
- name: Create {{ service }} directory
file: path={{ work_dir }} state=directory owner=root group=root
- name: Load {{ service }}
synchronize: src={{ service }} dest={{ work_dir }} group=no owner=no rsync_path='sudo rsync'
- name: Make files executable
file: path={{ work_dir }}/{{ service }}/{{ item }} mode=755
with_items:
As mentioned at https://ejosh.co/de/2015/05/ansible-for-server-provisioning/ and in the repo https://github.com/johanan/Ansible-and-Docker
When running a playbook that uses the role with the following settings:
docker__edition: "ce"
docker__channel: "stable"
docker__install_docker_compose: true
docker__users: ["ubuntu"]
on a clean install of Ubuntu 18.04.1, I get the following error:
FAILED! => {"changed": false, "msg": "Unable to find any of pip3 to use. pip needs to be installed."}
However, when I check, pip is installed and can be found at /usr/bin/pip.
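The error message refers to pip3 specifically (while /usr/bin/pip alone is the Python 2 pip), so a pre-task that guarantees python3-pip exists before the role runs may help (a sketch):

```yaml
- hosts: all
  become: true
  pre_tasks:
    - name: Ensure pip3 is present before the role runs
      apt:
        name: python3-pip
        update_cache: true
  roles:
    - { role: "nickjj.docker", tags: ["docker"] }
```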
Hi,
I use ELK on Docker Swarm.
When I install Docker manually, the stack works.
But when I use your role, Elasticsearch replicas fail to discover each other.
Is there anything in your role that could change the default network behaviour when using Swarm on top of it?
Here's what I use:
---
- hosts: swarm_test:swarm_prod
roles:
# https://github.com/nickjj/ansible-docker
- role: nickjj.docker
tags: docker
vars:
docker__edition: "ce"
docker__channel: ["stable"]
docker__version: "19.03"
docker__state: "present"
docker__compose_version: ""
# `a` removes unused images (useful in production).
# `f` forces it to happen without prompting you to agree.
docker__cron_jobs_prune_flags: "af"
# Control the schedule of the docker system prune.
docker__cron_jobs_prune_schedule: ["0", "0", "*", "*", "0"]
docker__cron_jobs:
- name: "Docker disk clean up"
job: "docker system prune -{{ docker__cron_jobs_prune_flags }} > /dev/null 2>&1"
schedule: "{{ docker__cron_jobs_prune_schedule }}"
cron_file: "docker-disk-clean-up"
#user: "{{ (docker__users | first) | d('root') }}"
#state: "present"
docker__pip_virtualenv: "/usr/local/lib/docker/virtualenv"
docker__pip_docker_state: "present"
docker__pip_docker_compose_state: "present"
# https://docs.ansible.com/ansible/latest/modules/docker_swarm_module.html
- role: docker.swarm
The docker.swarm role:
---
- name: install docker pip dep
apt:
name: python-docker
state: present
- name: Create swarm on manager node
docker_swarm:
state: present
advertise_addr: "{{ ansible_default_ipv4.address }}"
register: swarm
when: "'managers' in group_names"
- name: Gather network facts
setup:
gather_subset:
- network
- name: debug managers
debug:
msg: "Join workers at {{ hostvars[groups['managers_test'][0]]['ansible_facts']['default_ipv4']['address'] }}:2377 with token {{ swarm['swarm_facts']['JoinTokens']['Worker'] }}"
when: "'managers' in group_names"
- name: set facts token
set_fact:
token : "{{ swarm['swarm_facts']['JoinTokens']['Worker'] }}"
when: "'managers' in group_names"
- name: Add other nodes to swarm_test
docker_swarm:
state: join
advertise_addr: "{{ ansible_default_ipv4.address }}"
join_token: "{{ hostvars[groups['managers_test'][0]]['token'] }}"
remote_addrs:
- "{{ hostvars[groups['managers_test'][0]]['ansible_facts']['default_ipv4']['address'] }}:2377"
when: "'workers' in group_names"
Hi,
a few changes are required:
Host vars:
docker__apt_repository: >
deb [arch=armhf]
https://download.docker.com/linux/raspbian
{{ ansible_distribution_release }} {{ docker__channel | join (' ') }}
And add `install_recommends: no`:
- name: Install Docker
apt:
name: "docker-{{ docker__edition }}"
state: "{{ docker__state }}"
install_recommends: no
Later it fails in "Install Python packages".
I saw some warnings about Python 2 deprecation, but this is the error:
ERROR: Failed building wheel for cryptography\nERROR: Could not build wheels for cryptography which use PEP 517 and cannot be installed directly
Any ideas?
Edit: Solution see below.
It would be nice to have cron_tasks.cron_file as an option, which would place the Docker cleanup job in a cron.d file, allowing easier administration for those who prefer this construct.
Compared to Ubuntu, boot2docker is more lightweight.
Error:
Unable to load docker-compose. Try `pip install docker-compose`. Error: No module named compose
playbook:
roles:
# Install most docker_service dependencies
- { role: "nickjj.docker", tags: ["docker"] }
tasks:
- name: test docker-compose
docker_service:
project_src: test
Stack trace:
The full traceback is:
WARNING: The below traceback may *not* be related to the actual failure.
File "/tmp/ansible_docker_service_payload_U2NKy6/__main__.py", line 456, in <module>
from compose import __version__ as compose_version
fatal: [138.xx.xx.183]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"api_version": "auto",
"build": false,
"cacert_path": null,
"cert_path": null,
"debug": false,
"definition": null,
"dependencies": true,
"docker_host": "unix://var/run/docker.sock",
"files": null,
"hostname_check": false,
"key_path": null,
"nocache": false,
"project_name": null,
"project_src": "../<...>",
"pull": false,
"recreate": "smart",
"remove_images": null,
"remove_orphans": false,
"remove_volumes": false,
"restarted": false,
"scale": null,
"services": null,
"ssl_version": null,
"state": "present",
"stopped": false,
"timeout": 10,
"tls": false,
"tls_hostname": "localhost",
"tls_verify": false
}
},
"msg": "Unable to load docker-compose. Try `pip install docker-compose`. Error: No module named compose"
}
Running `pip install docker-compose` on the server managed by Ansible fixed the issue.
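The same manual fix can be expressed as a task so it is not forgotten on new hosts (a sketch; it installs into the system Python, which is where the docker_service module was importing from here):

```yaml
- name: Install docker-compose for the docker_service module
  become: true
  pip:
    name: docker-compose
    state: present
```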
galaxy info:
Role: nickjj.docker
description: Install Docker and optionally Docker Compose.
active: True
commit: d498cc15f393312ae9e104e13cc3ac7d07365c8d
commit_message: Update README to link to the latest release
commit_url: https://api.github.com/repos/nickjj/ansible-docker/git/commits/d498cc15f393312ae9e104e13cc3ac7d07365c8d
company:
created: 2016-10-08T23:13:50.040014Z
dependencies: []
download_count: 2838
forks_count: 69
galaxy_info:
author: Nick Janetakis
galaxy_tags: ['containers', 'compose', 'docker', 'packaging', 'system']
license: MIT
min_ansible_version: 2.5
platforms: [{'name': 'Ubuntu', 'versions': ['xenial', 'bionic']}, {'name': 'Debian', 'versions': ['jessie', 'stretch']}]
role_name: docker
github_branch: master
github_repo: ansible-docker
github_user: nickjj
id: 12606
imported: 2018-11-08T21:18:19.792556-05:00
install_date: Thu Nov 15 16:08:42 2018
intalled_version: v1.5.0
is_valid: True
issue_tracker_url: https://github.com/nickjj/ansible-docker/issues
license: MIT
min_ansible_version: 2.5
modified: 2018-11-09T02:18:19.792718Z
open_issues_count: 0
path: ['/home/xxx/.ansible/roles', '/usr/share/ansible/roles', '/etc/ansible/roles']
role_type: ANS
stargazers_count: 195
travis_status_url:
ansible version:
ansible --version
ansible 2.7.1
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/olivier/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.1 (default, Oct 22 2018, 10:41:28) [GCC 8.2.1 20180831]
Hey, thanks for this role! I get an error when running the role several times on the same server. Do you have any idea why this is happening?
fatal: [test]: FAILED! => {"cache_update_time": 1530223408, "cache_updated": false, "changed": false, "msg": "'/usr/bin/apt-get -y -o \"Dpkg::Options::=--force-confdef\" -o \"Dpkg::Options::=--force-confold\" install 'docker-ce=18.04.0~ce~3-0~debian' -o APT::Install-Recommends=no' failed: E: Packages were downgraded and -y was used without --allow-downgrades.\n", "rc": 100, "stderr": "E: Packages were downgraded and -y was used without --allow-downgrades.\n", "stderr_lines": ["E: Packages were downgraded and -y was used without --allow-downgrades."], "stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\nRecommended packages:\n aufs-tools cgroupfs-mount | cgroup-lite pigz\nThe following packages will be DOWNGRADED:\n docker-ce\n0 upgraded, 0 newly installed, 1 downgraded, 0 to remove and 0 not upgraded.\n", "stdout_lines": ["Reading package lists...", "Building dependency tree...", "Reading state information...", "Recommended packages:", " aufs-tools cgroupfs-mount | cgroup-lite pigz", "The following packages will be DOWNGRADED:", " docker-ce", "0 upgraded, 0 newly installed, 1 downgraded, 0 to remove and 0 not upgraded."]}
Hey,
I find this solution very useful, so first of all, thank you.
My problem starts when I try to change configs that would normally be located under daemon.json. For example:
{
"experimental": true,
"hosts": ["fd://", "tcp://0.0.0.0:2375"]
}
I tried changing the 'docker__systemd_override' and even the 'docker__daemon_json' variables under defaults/main.yml with no success (I didn't see daemon.json being created, nor an override file for systemd).
I tried to follow the README instructions. What am I missing?
Sorry for my newbie question, but I am not sure if I can change my config to get rid of these warnings, or if these warnings are okay for now and need to be fixed in this role.
In my playbook I try to deactivate the default cron job like this:
vars:
docker__cron_jobs: []
then I get this warning:
TASK [nickjj.docker : Configure Docker daemon environment variables] ***************************************************
[DEPRECATION WARNING]: evaluating [] as a bare variable, this behaviour will go away and you might need to add |bool to
the expression in the future. Also see CONDITIONAL_BARE_VARS configuration toggle.. This feature will be removed in
version 2.12. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
There is also another warning like this, which seems not related to my config:
TASK [nickjj.docker : Configure Docker daemon options (flags)] *********************************************************
[DEPRECATION WARNING]: evaluating [u'-H unix://'] as a bare variable, this behaviour will go away and you might need to
add |bool to the expression in the future. Also see CONDITIONAL_BARE_VARS configuration toggle.. This feature will be
removed in version 2.12. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
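For context, the warning targets conditionals that evaluate a list as a bare variable inside the role; the forward-compatible form tests the list explicitly (a sketch of the kind of change needed in the role's tasks, not in your playbook):

```yaml
# Instead of `when: docker__cron_jobs`:
when: docker__cron_jobs | length > 0
```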
Given this Vagrantfile:
Vagrant.configure("2") do |config|
config.vm.box = "generic/ubuntu1804"
config.vm.define "vagrant"
config.vm.provision "deploy", type: 'ansible' do |ansible|
ansible.compatibility_mode = "2.0"
ansible.playbook = "ansible/deploy.yml"
ansible.groups = {
"staging" => ["vagrant"]
}
end
end
and this playbook:
- name: Deploy
hosts: vagrant
tasks:
- name: Update and upgrade apt packages
become: yes
apt:
upgrade: 'yes'
update_cache: yes
cache_valid_time: 3600
# Reference: https://github.com/ansible/ansible/issues/56832
force_apt_get: yes
- name: Install Docker & Docker compose
include_role:
name: "nickjj.docker"
apply:
become: yes
tags:
- docker
- name: Pip install docker for Ansible's docker_* modules
pip:
name:
- docker
- "docker-compose"
- name: Save services
local_action:
module: docker_image
archive_path: /tmp/{{ item }}.tar
build:
path: ../{{ item }}
pull: yes
name: "{{ item }}"
tag: latest
force_source: yes
source: build
with_items:
- database
- server
- client
- documents
- name: Upload services
copy:
src: /tmp/{{ item }}.tar
dest: "{{ base_path }}/{{ item }}.tar"
with_items:
- database
- server
- client
- documents
- name: Load services
become: yes
docker_image:
load_path: "{{ base_path }}/{{ item }}.tar"
name: "{{ item }}"
tag: latest
source: load
with_items:
- database
- server
- client
- documents
vars:
ansible_python_interpreter: "/usr/bin/env python-docker"
I get:
failed: [vagrant] (item=database) => {"ansible_loop_var": "item", "changed": false, "item": "database", "msg": "Failed to import the required Python library (Docker SDK for Python: docker (Python >= 2.7) or docker-py (Python 2.6)) on ubuntu1804.localdomain's Python /usr/local/bin/python-docker. Please read module documentation and install in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter, for example via `pip install docker` or `pip install docker-py` (Python 2.6). The error was: No module named 'docker'"}
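The interpreter at /usr/local/bin/python-docker points into the role's virtualenv, which only contains what was installed into it. One sketch of a fix, assuming the role's default virtualenv path, is to install the SDK into that same venv:

```yaml
- name: Install the Docker SDK into the role's virtualenv
  become: true
  pip:
    name: docker
    virtualenv: /usr/local/lib/docker/virtualenv  # role default; adjust if changed
```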
I'm having trouble using `docker__users`. I've tried a number of methods, and the most promising so far has been:
- hosts: dockerhosts
roles:
- name: "Docker Install and Configure"
tags: ["docker"]
become: yes
become_method: sudo
role: nickjj.docker
docker__users: ["jdoe", "sdoe"]
However, I haven't been able to get any of it working. How do you implement the variables when specifying the role?
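One arrangement that should scope the variable to the role is the vars keyword on the role entry (a sketch; inline role parameters as in the snippet above are also valid in recent Ansible, so either form may work depending on version):

```yaml
- hosts: dockerhosts
  become: true
  roles:
    - role: nickjj.docker
      tags: ["docker"]
      vars:
        docker__users: ["jdoe", "sdoe"]
```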
I use the `docker_service` Ansible module to start services based on a `docker-compose.yml` file. This module depends on the `docker-compose` Python module, which itself depends on the `docker` Python module. But ansible-docker installs the `docker-py` Python module, and it's not possible to install both the `docker` and `docker-py` Python modules at the same time.
More info: https://docs.ansible.com/ansible/latest/modules/docker_service_module.html#requirements
I'm going to try changing my copy of ansible-docker so that it installs `docker` instead of `docker-py` and see how it goes.
Hi Nick, sorry to drop in; this is the first time I've used your role, installing onto an RPi 3B+ with Buster Lite on it. I probably don't understand your instructions properly. I want to install CE. I have not configured my Docker ID anywhere. I used the playbook example in your README:
- name: Docker to Host
hosts: "all"
become: true
roles:
- role: "nickjj.docker"
tags: ["docker"]
The roles before the one below installed fine, but the one below failed. Any pointers?
TASK [nickjj.docker : Install Docker] *********************************************************************************************
fatal: [rh02test]: FAILED! => {"changed": false, "msg": "No package matching 'docker-ce' is available"}
I'm running into problems installing docker-compose in the virtualenv, because my Ubuntu 18.04 installation uses Python 2 for creating the virtualenv. I fixed it with the option "virtualenv_python: python3".
I might make a pull request if that's welcome.
Or is there any reason to keep supporting Python 2?
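The fix described above would look roughly like this inside the role's pip task (a sketch; the surrounding task details are simplified and the virtualenv path is the role's default):

```yaml
- name: Install Python packages into the virtualenv with python3
  pip:
    name: docker-compose
    virtualenv: /usr/local/lib/docker/virtualenv
    virtualenv_python: python3  # build the venv on Python 3
```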
Installing the role:
roles:
- { role: "nickjj.docker", tags: ["docker"] }
Output:
TASK [nickjj.docker : Install Python packages] *********************************
ok: [host] => (item={u'state': u'present', u'name': u'docker'})
failed: [host] (item={u'path': u'/usr/local/bin/docker-compose', u'state': u'present', u'version': u'latest', u'name': u'docker-compose', u'src': u'/usr/local/lib/docker/virtualenv/bin/docker-compose'}) => {"changed": false, "cmd": "/usr/local/lib/docker/virtualenv/bin/pip2 install docker-compose==latest", "item": {"name": "docker-compose", "path": "/usr/local/bin/docker-compose", "src": "/usr/local/lib/docker/virtualenv/bin/docker-compose", "state": "present", "version": "latest"}, "msg": "stdout: Collecting docker-compose==latest\n\n:stderr: Could not find a version that satisfies the requirement docker-compose==latest (from versions: 1.1.0rc1, 1.1.0rc2, 1.1.0, 1.2.0rc1, 1.2.0rc2, 1.2.0rc3, 1.2.0rc4, 1.2.0, 1.3.0rc1, 1.3.0rc2, 1.3.0rc3, 1.3.0, 1.3.1, 1.3.2, 1.3.3, 1.4.0rc1, 1.4.0rc2, 1.4.0rc3, 1.4.0, 1.4.1, 1.4.2, 1.5.0rc1, 1.5.0rc2, 1.5.0rc3, 1.5.0, 1.5.1, 1.5.2, 1.6.0rc1, 1.6.0, 1.6.1, 1.6.2, 1.7.0rc1, 1.7.0rc2, 1.7.0, 1.7.1, 1.8.0rc1, 1.8.0rc2, 1.8.0, 1.8.1, 1.9.0rc1, 1.9.0rc2, 1.9.0rc3, 1.9.0rc4, 1.9.0, 1.10.0rc1, 1.10.0rc2, 1.10.0, 1.10.1, 1.11.0rc1, 1.11.0, 1.11.1, 1.11.2, 1.12.0rc1, 1.12.0rc2, 1.12.0, 1.13.0rc1, 1.13.0, 1.14.0rc1, 1.14.0rc2, 1.14.0, 1.15.0rc1, 1.15.0, 1.16.0rc1, 1.16.0rc2, 1.16.0, 1.16.1, 1.17.0rc1, 1.17.0, 1.17.1, 1.18.0rc1, 1.18.0rc2, 1.18.0, 1.19.0rc1, 1.19.0rc2, 1.19.0rc3, 1.19.0, 1.20.0rc1, 1.20.0rc2, 1.20.0, 1.20.1, 1.21.0rc1, 1.21.0, 1.21.1, 1.21.2, 1.22.0rc1, 1.22.0rc2, 1.22.0, 1.23.0rc1, 1.23.0rc2, 1.23.0rc3, 1.23.0, 1.23.1, 1.23.2)\nNo matching distribution found for docker-compose==latest\n"}
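pip has no "latest" version specifier, so a workaround is to pin a real release until the role maps "latest" to an unpinned install (the variable name below is illustrative; check the role's defaults for the knob that feeds the docker-compose version):

```yaml
# Placeholder variable name -- confirm against the role's defaults/main.yml.
docker__compose_version: "1.23.2"
```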
Using the following playbook:
---
- name: Docker Server
  hosts: "all"
  become: true
  roles:
    - role: "nickjj.docker"
      tags: ["docker"]
I got the result:
$ ansible-playbook sciencedesk-docker.yml
PLAY [Docker Server] ****************************************************************
TASK [Gathering Facts] ************************************
ok: [sandbox.sciencedesk.net]
TASK [nickjj.docker : Disable pinned Docker version] ***************************************************************
ok: [sandbox.sciencedesk.net]
TASK [nickjj.docker : Enable pinned Docker version] ***************************************************************
skipping: [sandbox.sciencedesk.net]
TASK [nickjj.docker : Install Docker's dependencies] ***************************************************************
fatal: [sandbox.sciencedesk.net]: FAILED! => {"changed": false, "msg": "No package matching 'gnupg2' is available"}
PLAY RECAP ***************************************************************
sandbox.sciencedesk.net : ok=2 changed=0 unreachable=0 failed=1 skipped=1 rescued=0 ignored=0
Hi,
I've got two servers:
When running the very same Ansible playbook, I run into issues on 20.04 while trying to configure private Docker registries (no earlier step makes use of ansible_python_interpreter: "{{ '/usr/bin/env python-docker' }}", but all steps which do suffer from the same issue):
TASK [nickjj.docker : Manage Docker registry login credentials] *********************************************************************************************************************************************************************failed: [infra1] (item={u'username': u'***', u'password': u'***', u'registry_url': u'****'}) => {"ansible_loop_var": "item", "changed": false, "item": {"password": "***", "registry_url": "****", "username": "***"}, "msg": "Failed to import the required Python library (Docker SDK for Python: docker (Python >= 2.7) or docker-py (Python 2.6)) on infra1's Python /usr/local/bin/python-docker. Please read module documentation and install in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter, for example via `pip install docker` or `pip install docker-py` (Python 2.6). The error was: No module named 'docker'"}
changed: [infra2] => (item={u'username': u'***', u'password': u'***', u'registry_url': u'****'})
failed: [infra1] (item={u'username': u'***', u'password': u'***', u'config_path': u'***/.docker/config.json', u'registry_url': u'****'}) => {"ansible_loop_var": "item", "changed": false, "item": {"config_path": "***/.docker/config.json", "password": "***", "registry_url": "****", "username": "***"}, "msg": "Failed to import the required Python library (Docker SDK for Python: docker (Python >= 2.7) or docker-py (Python 2.6)) on infra1's Python /usr/local/bin/python-docker. Please read module documentation and install in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter, for example via `pip install docker` or `pip install docker-py` (Python 2.6). The error was: No module named 'docker'"}
changed: [infra2] => (item={u'username': u'***', u'password': u'***', u'config_path': u'***/.docker/config.json', u'registry_url': u'****'})
This made me curious and I wanted to dig deeper:
$ ll /usr/local/bin/python-docker
lrwxrwxrwx 1 root root 43 May 11 22:32 /usr/local/bin/python-docker -> /usr/local/lib/docker/virtualenv/bin/python*
$ ll /usr/local/lib/docker/virtualenv/bin/python
lrwxrwxrwx 1 root root 7 May 11 22:31 /usr/local/lib/docker/virtualenv/bin/python -> python2*
$ python-docker
Python 2.7.17 (default, Apr 15 2020, 17:20:14)
[GCC 7.5.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> print(sys.path)
['', '/usr/local/lib/docker/virtualenv/lib/python2.7', '/usr/local/lib/docker/virtualenv/lib/python2.7/plat-x86_64-linux-gnu', '/usr/local/lib/docker/virtualenv/lib/python2.7/lib-tk', '/usr/local/lib/docker/virtualenv/lib/python2.7/lib-old', '/usr/local/lib/docker/virtualenv/lib/python2.7/lib-dynload', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/local/lib/docker/virtualenv/local/lib/python2.7/site-packages', '/usr/local/lib/docker/virtualenv/lib/python2.7/site-packages']
>>> import docker
>>>
$ which python-docker
/usr/local/bin/python-docker
$ ll /usr/local/bin/python-docker
lrwxrwxrwx 1 root root 43 May 11 22:32 /usr/local/bin/python-docker -> /usr/local/lib/docker/virtualenv/bin/python*
$ ll /usr/local/lib/docker/virtualenv/bin/python
lrwxrwxrwx 1 root root 16 May 11 22:31 /usr/local/lib/docker/virtualenv/bin/python -> /usr/bin/python3*
$ python-docker
Python 3.8.2 (default, Apr 27 2020, 15:53:34)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> print(sys.path)
['', '/usr/lib/python38.zip', '/usr/lib/python3.8', '/usr/lib/python3.8/lib-dynload', '/usr/local/lib/python3.8/dist-packages', '/usr/lib/python3/dist-packages']
>>> import docker
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'docker'
>>> exit()
$ source /usr/local/lib/docker/virtualenv/bin/activate
(virtualenv) $ python
Python 3.8.2 (default, Apr 27 2020, 15:53:34)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import docker
>>> import sys
>>> print(sys.path)
['', '/usr/lib/python38.zip', '/usr/lib/python3.8', '/usr/lib/python3.8/lib-dynload', '/usr/local/lib/docker/virtualenv/lib/python3.8/site-packages']
So it seems that either the switch to Python 3 or Ubuntu 20.04 changed the way paths are resolved in virtual environments: /usr/local/lib/docker/virtualenv/lib/python3.8/site-packages is only included when I activate the venv directly.
I know it's not necessarily an issue with this Ansible role, but have you experienced something similar before? (I'll also continue searching of course, but not tonight :D)
Thanks!
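In case it helps others hitting this, one workaround sketch (paths taken from the session above; the approach itself is my guess, untested): replace the python-docker symlink with a tiny wrapper script so the interpreter is launched from inside the virtualenv and picks up its site-packages:

```yaml
# Untested sketch: swap the symlink chain for a wrapper script.
- name: Install python-docker as a wrapper instead of a symlink
  copy:
    dest: "/usr/local/bin/python-docker"
    mode: "0755"
    content: |
      #!/bin/sh
      exec /usr/local/lib/docker/virtualenv/bin/python "$@"
```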
I'm in the process of upgrading this role and I'm struggling to figure out how I can deal with extra hosts. Until now I was using this:
docker_daemon_options:
  - "-H 0.0.0.0:4444"
  - "-H fd://"
  - "--tlsverify"
  - "--tlscacert /etc/docker/certs/ca.pem"
  - "--tlscert /etc/docker/certs/server-cert.pem"
  - "--tlskey /etc/docker/certs/server-key.pem"
From what I understand, it's no longer possible to add flags to the systemd unit file, as all daemon configuration is now done via docker__daemon_options, which ends up in /etc/docker/daemon.json.
The problem is that, as stated by Docker documentation:
Note: You cannot set options in daemon.json that have already been set on daemon startup as a flag. On systems that use systemd to start the Docker daemon, -H is already set, so you cannot use the hosts key in daemon.json to add listening addresses. See https://docs.docker.com/engine/admin/systemd/#custom-docker-daemon-options for how to accomplish this task with a systemd drop-in file.
And indeed, if I try this:
docker__daemon_options: |
  "tlsverify": true,
  "tlscacert": "/etc/docker/certs/ca.pem",
  "tlscert": "/etc/docker/certs/server-cert.pem",
  "tlskey": "/etc/docker/certs/server-key.pem",
  "hosts": ["0.0.0.0:4444"]
Docker cannot start:
unable to configure the Docker daemon with file /etc/docker/daemon.json:
the following directives are specified both as a flag and in the configuration file:
hosts: (from flag: [fd://], from file: [0.0.0.0:4444])
As pointed out by Docker documentation, I can overwrite the systemd unit file myself, but from my point of view it seems that there might be a regression in this Ansible role, unless I'm missing something.
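For completeness, the drop-in workaround the Docker docs describe could be expressed as an Ansible task like this (a sketch under my assumptions: the drop-in file name and the notified handler are made up, not part of the role):

```yaml
# Sketch: clear the packaged ExecStart (which carries -H fd://) so that
# "hosts" can live in /etc/docker/daemon.json instead.
- name: Override docker.service so dockerd starts without -H
  copy:
    dest: "/etc/systemd/system/docker.service.d/override.conf"
    content: |
      [Service]
      ExecStart=
      ExecStart=/usr/bin/dockerd
  notify: "Restart Docker"  # hypothetical handler doing daemon-reload + restart
```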
Thank you for your time reading my report! Any thoughts on this?
First, thanks for this role! :-)
One "little" security issue here:
Example:
TASK [nickjj.docker : Manage Docker registry login credentials] *********************************************************************************************************************
changed: [] => (item={u'username': u'', u'reauthorize': True, u'state': u'present', u'password': u'', u'email': u'', u'registry_url': u''})
Of course I could set "no_log: true" in the playbook, but then I don't get any log of the task at all, which is also not optimal.
Please set "no_log: true" on the task itself. Thanks!
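Something like this is what I mean (a sketch, not the role's actual task; docker_login is Ansible's stock module, and the loop variable name is my guess at the role's):

```yaml
- name: Manage Docker registry login credentials
  docker_login:
    registry_url: "{{ item.registry_url }}"
    username: "{{ item.username }}"
    password: "{{ item.password }}"
  loop: "{{ docker__registries }}"
  no_log: true  # keeps the passwords out of the play output
```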
It is my host_vars:
docker__registries:
  - registry_url: "..."
    username: "...."
    password: "....."
I see this error:
TASK [nickjj.docker : Manage login credentials for 1 or more Docker registries] ***
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: 'NoneType' object is not iterable
fatal: [192.168.142.5]: FAILED! => {"msg": "Unexpected failure during module execution.", "stdout": ""}
Ansible version:
ansible 2.7.1
config file = None
configured module search path = [u'/home/claud/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.6 (default, Oct 26 2016, 20:32:47) [GCC 4.8.4]
After the role has run (on the first deploy) I need to run reset_connection to be able to connect to Docker with the user defined in docker__users. Should this be part of the role itself? Alternatively, the task could notify the playbook that user groups were updated, so the playbook can do it only when needed.
One possible downside of relying on reset_connection is that it seems broken in Ansible < 2.5.8 (see https://stackoverflow.com/a/44753457)
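For reference, this is roughly what I do in the playbook today (a sketch):

```yaml
- hosts: all
  roles:
    - role: "nickjj.docker"

# New group memberships only apply to new sessions, so drop the connection
# before any later task talks to the Docker socket as a docker__users member.
- hosts: all
  tasks:
    - meta: reset_connection
```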
Hi,
As mentioned on the official Docker page, it's recommended to pass Docker daemon options using a daemon.json file instead of passing them on the command line.
I can create a PR which copies a daemon.json file from the "files" directory to /etc/docker/daemon.json and provides a variable include_docker_daemon_file: True or False in defaults/main.yml. The user can just enter all daemon options in that file.
For example:
{ "insecure-registries": ["example.com:5000"], "live-restore": true }
If the variable's value is True, the role would include the file, and if False, the role would remove it.
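Roughly, the tasks I have in mind (a sketch using the variable name suggested above):

```yaml
- name: Copy daemon.json into place
  copy:
    src: "daemon.json"
    dest: "/etc/docker/daemon.json"
  when: include_docker_daemon_file | bool

- name: Remove daemon.json
  file:
    path: "/etc/docker/daemon.json"
    state: absent
  when: not include_docker_daemon_file | bool
```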
Would this be a PR of interest to you?
For your reference:
https://docs.docker.com/config/containers/live-restore/#enable-live-restore
If you prefer, you can start the dockerd process manually with the --live-restore flag. This approach is not recommended because it does not set up the environment that systemd or another process manager would use when starting the Docker process. This can cause unexpected behavior.
Best Regards,
Ritesh Puj
This is my playbook:
- name: Deploy
  hosts: all
  become: true
  tasks:
    - name: "Install Docker and Docker-Compose"
      include_role:
        name: "nickjj.docker"
      vars:
        docker__edition: "ce"
        docker__channel: "stable"
        docker__install_docker_compose: true
        docker__users: ["ubuntu"]
      tags: ["docker"]
run with a Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.box = "generic/ubuntu1804"
  config.vm.define "machine1"
  config.vm.provision :ansible do |ansible|
    ansible.compatibility_mode = "2.0"
    ansible.playbook = "devops/deploy.yml"
    ansible.groups = {
      "vagrant" => ["machine1"]
    }
  end
end
it fails on TASK [nickjj.docker : Install Docker's dependencies] with the following message:
fatal: [machine1]: FAILED! => {"cache_update_time": 1574398993, "cache_updated": false, "changed": false, "msg": "'/usr/bin/apt-get -y -o \"Dpkg::Options::=--force-confdef\" -o \"Dpkg::Options::=--force-confold\" install 'apt-transport-https' 'gnupg2' 'python-setuptools' 'python3-pip'' failed: E: Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/l/linux/linux-libc-dev_4.15.0-70.79_amd64.deb 404 Not Found [IP: 91.189.91.14 80]\nE: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?\n", "rc": 100, "stderr": "E: Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/l/linux/linux-libc-dev_4.15.0-70.79_amd64.deb 404 Not Found [IP: 91.189.91.14 80]\nE: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?\n", "stderr_lines": ["E: Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/l/linux/linux-libc-dev_4.15.0-70.79_amd64.deb 404 Not Found [IP: 91.189.91.14 80]", "E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?"], "stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\nThe following additional packages will be installed:\n binutils binutils-x86-64-linux-gnu build-essential cpp cpp-7 dh-python\n dpkg-dev fakeroot g++ g++-7 gcc gcc-7 gcc-7-base libalgorithm-diff-perl\n libalgorithm-diff-xs-perl libalgorithm-merge-perl libasan4 libatomic1\n libc-dev-bin libc6-dev libcc1-0 libcilkrts5 libdpkg-perl libexpat1-dev\n libfakeroot libfile-fcntllock-perl libgcc-7-dev libgomp1 libisl19 libitm1\n liblsan0 libmpc3 libmpx2 libpython-stdlib libpython2.7-minimal\n libpython2.7-stdlib libpython3-dev libpython3.6-dev libquadmath0\n libstdc++-7-dev libtsan0 libubsan0 linux-libc-dev make manpages-dev python\n python-minimal python-pip-whl python-pkg-resources python2.7\n python2.7-minimal python3-crypto python3-dev python3-distutils\n python3-keyring python3-keyrings.alt python3-lib2to3 python3-secretstorage\n python3-setuptools 
python3-wheel python3-xdg python3.6-dev\nSuggested packages:\n binutils-doc cpp-doc gcc-7-locales debian-keyring g++-multilib\n g++-7-multilib gcc-7-doc libstdc++6-7-dbg gcc-multilib autoconf automake\n libtool flex bison gdb gcc-doc gcc-7-multilib libgcc1-dbg libgomp1-dbg\n libitm1-dbg libatomic1-dbg libasan4-dbg liblsan0-dbg libtsan0-dbg\n libubsan0-dbg libcilkrts5-dbg libmpx2-dbg libquadmath0-dbg glibc-doc bzr\n libstdc++-7-doc make-doc python-doc python-tk python-setuptools-doc\n python2.7-doc binfmt-support python-crypto-doc gnome-keyring\n libkf5wallet-bin gir1.2-gnomekeyring-1.0 python-secretstorage-doc\nThe following NEW packages will be installed:\n apt-transport-https binutils binutils-x86-64-linux-gnu build-essential cpp\n cpp-7 dh-python dpkg-dev fakeroot g++ g++-7 gcc gcc-7 gcc-7-base gnupg2\n libalgorithm-diff-perl libalgorithm-diff-xs-perl libalgorithm-merge-perl\n libasan4 libatomic1 libc-dev-bin libc6-dev libcc1-0 libcilkrts5 libdpkg-perl\n libexpat1-dev libfakeroot libfile-fcntllock-perl libgcc-7-dev libgomp1\n libisl19 libitm1 liblsan0 libmpc3 libmpx2 libpython-stdlib\n libpython2.7-minimal libpython2.7-stdlib libpython3-dev libpython3.6-dev\n libquadmath0 libstdc++-7-dev libtsan0 libubsan0 linux-libc-dev make\n manpages-dev python python-minimal python-pip-whl python-pkg-resources\n python-setuptools python2.7 python2.7-minimal python3-crypto python3-dev\n python3-distutils python3-keyring python3-keyrings.alt python3-lib2to3\n python3-pip python3-secretstorage python3-setuptools python3-wheel\n python3-xdg python3.6-dev\n0 upgraded, 66 newly installed, 0 to remove and 0 not upgraded.\nNeed to get 1079 kB/89.1 MB of archives.\nAfter this operation, 263 MB of additional disk space will be used.\nIgn:1 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 linux-libc-dev amd64 4.15.0-70.79\nErr:1 http://security.ubuntu.com/ubuntu bionic-updates/main amd64 linux-libc-dev amd64 4.15.0-70.79\n 404 Not Found [IP: 91.189.91.14 80]\n", 
"stdout_lines": ["Reading package lists...", "Building dependency tree...", "Reading state information...", "The following additional packages will be installed:", " binutils binutils-x86-64-linux-gnu build-essential cpp cpp-7 dh-python", " dpkg-dev fakeroot g++ g++-7 gcc gcc-7 gcc-7-base libalgorithm-diff-perl", " libalgorithm-diff-xs-perl libalgorithm-merge-perl libasan4 libatomic1", " libc-dev-bin libc6-dev libcc1-0 libcilkrts5 libdpkg-perl libexpat1-dev", " libfakeroot libfile-fcntllock-perl libgcc-7-dev libgomp1 libisl19 libitm1", " liblsan0 libmpc3 libmpx2 libpython-stdlib libpython2.7-minimal", " libpython2.7-stdlib libpython3-dev libpython3.6-dev libquadmath0", " libstdc++-7-dev libtsan0 libubsan0 linux-libc-dev make manpages-dev python", " python-minimal python-pip-whl python-pkg-resources python2.7", " python2.7-minimal python3-crypto python3-dev python3-distutils", " python3-keyring python3-keyrings.alt python3-lib2to3 python3-secretstorage", " python3-setuptools python3-wheel python3-xdg python3.6-dev", "Suggested packages:", " binutils-doc cpp-doc gcc-7-locales debian-keyring g++-multilib", " g++-7-multilib gcc-7-doc libstdc++6-7-dbg gcc-multilib autoconf automake", " libtool flex bison gdb gcc-doc gcc-7-multilib libgcc1-dbg libgomp1-dbg", " libitm1-dbg libatomic1-dbg libasan4-dbg liblsan0-dbg libtsan0-dbg", " libubsan0-dbg libcilkrts5-dbg libmpx2-dbg libquadmath0-dbg glibc-doc bzr", " libstdc++-7-doc make-doc python-doc python-tk python-setuptools-doc", " python2.7-doc binfmt-support python-crypto-doc gnome-keyring", " libkf5wallet-bin gir1.2-gnomekeyring-1.0 python-secretstorage-doc", "The following NEW packages will be installed:", " apt-transport-https binutils binutils-x86-64-linux-gnu build-essential cpp", " cpp-7 dh-python dpkg-dev fakeroot g++ g++-7 gcc gcc-7 gcc-7-base gnupg2", " libalgorithm-diff-perl libalgorithm-diff-xs-perl libalgorithm-merge-perl", " libasan4 libatomic1 libc-dev-bin libc6-dev libcc1-0 libcilkrts5 libdpkg-perl", " 
libexpat1-dev libfakeroot libfile-fcntllock-perl libgcc-7-dev libgomp1", " libisl19 libitm1 liblsan0 libmpc3 libmpx2 libpython-stdlib", " libpython2.7-minimal libpython2.7-stdlib libpython3-dev libpython3.6-dev", " libquadmath0 libstdc++-7-dev libtsan0 libubsan0 linux-libc-dev make", " manpages-dev python python-minimal python-pip-whl python-pkg-resources", " python-setuptools python2.7 python2.7-minimal python3-crypto python3-dev", " python3-distutils python3-keyring python3-keyrings.alt python3-lib2to3", " python3-pip python3-secretstorage python3-setuptools python3-wheel", " python3-xdg python3.6-dev", "0 upgraded, 66 newly installed, 0 to remove and 0 not upgraded.", "Need to get 1079 kB/89.1 MB of archives.", "After this operation, 263 MB of additional disk space will be used.", "Ign:1 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 linux-libc-dev amd64 4.15.0-70.79", "Err:1 http://security.ubuntu.com/ubuntu bionic-updates/main amd64 linux-libc-dev amd64 4.15.0-70.79", " 404 Not Found [IP: 91.189.91.14 80]"]}
What am I doing wrong?
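(For anyone else hitting this: the 404 usually means the box's apt package index is stale and the pinned .deb was superseded by a newer security update, which matches the "maybe run apt-get update" hint in the stderr. A hedged fix sketch is to refresh the cache before the role runs:)

```yaml
- name: Deploy
  hosts: all
  become: true
  pre_tasks:
    - name: Refresh the apt cache first
      apt:
        update_cache: true
        cache_valid_time: 3600  # skip the refresh if the cache is recent
  tasks:
    - include_role:
        name: "nickjj.docker"
```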
@nickjj thanks for your hard work on this role; I think you are aware that it doesn't support 20.04 LTS yet because Docker isn't publishing public focal packages yet [1].
My question is whether you are thinking about supporting it anyway (for example by using bionic instead, as outlined in [2]), or whether you want to wait for Docker to publish public packages?
Right now it fails at this step:
TASK [nickjj.docker : Configure Docker's upstream APT repository] ***************************************************************************************************************************task path: /home/helli/.ansible/roles/nickjj.docker/tasks/main.yml:27
The full traceback is:
File "/tmp/ansible_apt_repository_payload_mjo5emml/ansible_apt_repository_payload.zip/ansible/modules/packaging/os/apt_repository.py", line 548, in main
File "/usr/lib/python3/dist-packages/apt/cache.py", line 591, in update
raise FetchFailedException(e)
fatal: [#removed#]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"codename": null,
"filename": null,
"install_python_apt": true,
"mode": null,
"repo": "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable\n",
"state": "present",
"update_cache": true,
"validate_certs": true
}
},
"msg": "apt cache update failed"
}
Thanks!
Edit: additional info, typos
[1] https://download.docker.com/linux/ubuntu/dists/
[2] https://askubuntu.com/a/1230190
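If waiting isn't an option, the bionic workaround from [2] could presumably be wired in like this (a sketch: I'm assuming the role exposes the APT repository line as a variable, and the variable name here is hypothetical):

```yaml
# Hypothetical variable name: point the role at the bionic packages while on focal.
docker__apt_repository: "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
```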
Hello
I'm getting the below error when I try to run the role using:
- include_role:
    name: nickjj.docker
  become: true
  vars:
    docker__version: "18.09"
    docker__daemon_json: "{ \"insecure-registries\": [\"172.30.0.0/16\"] }"
TASK [nickjj.docker : Enable pinned Docker version] **********************************************************************************************************************************
fatal: [127.0.0.1]: FAILED! => {"changed": false, "checksum": "9e991811a558ae7ed14939f514a28e38044e58ca", "msg": "Destination /etc/apt/preferences.d not writable"}
NAME="Ubuntu"
VERSION="19.04 (Disco Dingo)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 19.04"
VERSION_ID="19.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=disco
UBUNTU_CODENAME=disco
ansible 2.7.9
config file = None
configured module search path = [u'/home/anton/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.16 (default, Apr 6 2019, 01:42:57) [GCC 8.3.0]
When doing an installation on localhost using the following playbook:
---
- hosts: all  # realsense-docker-hosts
  tasks:
    - include_role:
        name: nickjj.docker
      tags:
        - docker
      become: true
      become_method: sudo
      vars:
        docker_edition: 'ce'
        docker_channel: 'edge'
        docker_install_docker_compose: true
executed with:
ansible-playbook -i "localhost," -c local -K playbook.yaml
The installation fails with:
TASK [nickjj.docker : Install Docker] ***********************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "No package matching 'docker-engine' is available"}
When trying to install it manually using:
apt-get install docker-engine
it mentions a rename:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package docker-engine is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
docker-ce
Hi Nick, I hope you're well.
I have an issue with the new docker__daemon_flags. Here is a playbook:
---
- hosts: all
  become: true
  tasks:
    - name: Install required packages
      apt:
        state: present
        update_cache: true
        name:
          - python-pip
          - python-setuptools

- hosts: all
  become: true
  vars:
    docker__daemon_flags:
      - "-H unix://"
      - "-H tcp://0.0.0.0:5432"
  roles:
    - role: nickjj.docker
When I run this on a fresh Debian 9, I find that dockerd has been started without the extra flag:
debian@playground:~$ ps ax|grep dockerd
4080 ? Ssl 0:00 /usr/bin/dockerd -H unix://
4414 pts/0 S+ 0:00 grep dockerd
However the flag has been written properly to the systemd file:
debian@playground:~$ cat /etc/systemd/system/docker.service.d/options.conf
# Ansible managed
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:// -H tcp://0.0.0.0:5432
So I restart and this time dockerd does get started with the extra flag:
debian@playground:~$ sudo systemctl restart docker.service
debian@playground:~$ ps ax|grep dockerd
4420 ? Ssl 0:00 /usr/bin/dockerd -H unix:// -H tcp://0.0.0.0:5432
4537 pts/0 S+ 0:00 grep dockerd
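My guess at the missing piece is a handler that is notified by the drop-in task and restarts Docker with a daemon reload, so the flag takes effect on the first run too (a sketch; the task and handler names are mine, not the role's):

```yaml
- name: Write the systemd drop-in with the extra daemon flags
  template:
    src: "options.conf.j2"
    dest: "/etc/systemd/system/docker.service.d/options.conf"
  notify: "Restart Docker"

# handlers:
- name: Restart Docker
  systemd:
    name: "docker"
    state: restarted
    daemon_reload: true  # pick up the changed unit before restarting
```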
Hi,
I'd like to request a change to the cron job that prunes unused Docker data. We run this role on several servers and receive a mail each time the command gets executed via cron.
How about this?
docker_cron_tasks:
  - job: docker system prune -f > /dev/null 2>&1
    name: "Docker clean up"
    schedule: ["0", "0", "*", "*", "0"]
A more complex solution would be a switch allowing the role user to select a behaviour.
Hey, just wanted to mention that Galaxy will by default hand you the highest and not the latest uploaded version, so you'll end up with 17.12 (where it's still docker_opts; that took me a while to debug :)).
Unfortunately I have no suggestion what to do here, since I've never published a Galaxy package.
Hi,
I'm trying to set up Docker with custom daemon options, but when Ansible writes the file, only the default option ends up in it. The cron job is also written for root, and the user is not added to the docker group.
I don't understand what I am doing wrong.
### Begin nickjj.docker ###
docker__edition: "ce"
docker__channel: ["stable"]
docker__version: ""
docker__state: "present"
docker__users: ["generic"]
docker__daemon_json: |
  "bip": "172.1.1.0/24"
  "data-root": "/data/docker"
  "log-driver": "journald"
  "storage-driver": "overlay2"
  "log-opts": { "max-size": "10m", "max-file": "5" }
  "hosts": [ "unix:///var/run/docker.sock", "tcp://0.0.0.0:2375" ]
docker__cron_jobs_prune_flags: "af"
docker__cron_jobs_prune_schedule: ["0", "0", "*", "*", "0"]
docker__cron_jobs:
  - name: "Docker disk clean up"
    job: "docker system prune -{{ docker__cron_jobs_prune_flags }} > /dev/null 2>&1"
    schedule: "{{ docker__cron_jobs_prune_schedule }}"
    cron_file: "docker-disk-clean-up"
    user: "generic"
    state: "present"
### End nickjj.docker ###
Thanks.
Hi,
Would it be possible to add support for the latest Ubuntu LTS release?
Thanks.
It took me about an hour to find out why my group_vars were getting ignored ... the most recent commit, where docker_users became docker__users, has not found its way to Ansible Galaxy yet.
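Until the release lands on Galaxy, pinning the role to the git repository in requirements.yml is one workaround (a sketch; pin a tag or commit instead of master if you want reproducibility):

```yaml
- src: "git+https://github.com/nickjj/ansible-docker.git"
  name: "nickjj.docker"
  version: "master"
```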
I think we need to add predefined default variables so users can kick-start the installation without any additional configuration.
I have a problem with the first run of the role.
The following task fails, probably because it's usually run with a superuser account or so.
TASK [nickjj.docker : Get upstream APT GPG key] ********************************
fatal: [default]: FAILED! => {"changed": false, "cmd": "/usr/bin/apt-key adv --keyserver hkp://pool.sks-keyservers.net --recv 9DC858229FC7DD38854AE2D88D81803C0EBFCD88", "msg": "Error fetching key 9DC858229FC7DD38854AE2D88D81803C0EBFCD88 from keyserver: hkp://pool.sks-keyservers.net", "rc": 1, "stderr": "gpg: requesting key 0EBFCD88 from hkp server pool.sks-keyservers.net\ngpg: key 0EBFCD88: public key \"Docker Release (CE deb) <[email protected]>\" imported\ngpg: Total number processed: 1\ngpg: imported: 1 (RSA: 1)\ngpg: no writable keyring found: eof\ngpg: error reading `[stdin]': general error\ngpg: import from `[stdin]' failed: general error\ngpg: Total number processed: 0\n", "stderr_lines": ["gpg: requesting key 0EBFCD88 from hkp server pool.sks-keyservers.net", "gpg: key 0EBFCD88: public key \"Docker Release (CE deb) <[email protected]>\" imported", "gpg: Total number processed: 1", "gpg: imported: 1 (RSA: 1)", "gpg: no writable keyring found: eof", "gpg: error reading `[stdin]': general error", "gpg: import from `[stdin]' failed: general error", "gpg: Total number processed: 0"], "stdout": "Executing: /tmp/tmp.UfbnPw8o65/gpg.1.sh --keyserver\nhkp://pool.sks-keyservers.net\n--recv\n9DC858229FC7DD38854AE2D88D81803C0EBFCD88\n", "stdout_lines": ["Executing: /tmp/tmp.UfbnPw8o65/gpg.1.sh --keyserver", "hkp://pool.sks-keyservers.net", "--recv", "9DC858229FC7DD38854AE2D88D81803C0EBFCD88"]}
which is, formatted for human digestion:
TASK [nickjj.docker : Get upstream APT GPG key] ********************************
fatal: [default]: FAILED! => {"changed": false, "cmd": "/usr/bin/apt-key adv --keyserver hkp://pool.sks-keyservers.net --recv 9DC858229FC7DD38854AE2D88D81803C0EBFCD88", "msg": "Error fetching key 9DC858229FC7DD38854AE2D88D81803C0EBFCD88 from keyserver: hkp://pool.sks-keyservers.net", "rc": 1, "stderr": "gpg: requesting key 0EBFCD88 from hkp server pool.sks-keyservers.net
gpg: key 0EBFCD88: public key \"Docker Release (CE deb) <[email protected]>\" imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
gpg: no writable keyring found: eof
gpg: error reading `[stdin]': general error
gpg: import from `[stdin]' failed: general error
gpg: Total number processed: 0
", "stderr_lines": ["gpg: requesting key 0EBFCD88 from hkp server pool.sks-keyservers.net", "gpg: key 0EBFCD88: public key \"Docker Release (CE deb) <[email protected]>\" imported", "gpg: Total number processed: 1", "gpg: imported: 1 (RSA: 1)", "gpg: no writable keyring found: eof", "gpg: error reading `[stdin]': general error", "gpg: import from `[stdin]' failed: general error", "gpg: Total number processed: 0"], "stdout": "Executing: /tmp/tmp.UfbnPw8o65/gpg.1.sh --keyserver
hkp://pool.sks-keyservers.net
--recv
9DC858229FC7DD38854AE2D88D81803C0EBFCD88
", "stdout_lines": ["Executing: /tmp/tmp.UfbnPw8o65/gpg.1.sh --keyserver", "hkp://pool.sks-keyservers.net", "--recv", "9DC858229FC7DD38854AE2D88D81803C0EBFCD88"]}
I'm running Vagrant (2.0.2) + Ansible (2.4.3.0) + Python 3.6 (in a pipenv shell), with the following provision setup in the Vagrantfile:
...
  config.vm.provision "ansible" do |ansible|
    ansible.compatibility_mode = "2.0"
    ansible.playbook = "playbook.yml"
    ansible.extra_vars = { ansible_python_interpreter: "/usr/bin/python3" }
  end
end
Is there anything I forgot to do or read?
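One thing worth double-checking, going by the "no writable keyring found" line (apt-key needs root, so this is a guess at the cause): that the play escalates privileges, e.g.:

```yaml
- hosts: all
  become: true
  roles:
    - role: "nickjj.docker"
```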
Currently the role doesn't have support for CentOS 7/8.
Would it be possible for you to add this functionality?
Include of role:
- name: "Install Docker"
  include_role:
    name: nickjj.docker
Error:
TASK [nickjj.docker : Install Docker] *****************************************************************************************************************************************************************
fatal: [dev-local-ansible-1]: FAILED! => {
"cache_update_time": 1518626749,
"cache_updated": false,
"changed": false,
"rc": 100
}
STDOUT:
Reading package lists...
Building dependency tree...
Reading state information...
The following package was automatically installed and is no longer required:
pigz
Use 'sudo apt autoremove' to remove it.
Recommended packages:
aufs-tools cgroupfs-mount | cgroup-lite
The following packages will be DOWNGRADED:
docker-ce
0 upgraded, 0 newly installed, 1 downgraded, 0 to remove and 0 not upgraded.
STDERR:
E: Packages were downgraded and -y was used without --allow-downgrades.
MSG:
'/usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" install 'docker-ce=18.01.0~ce-0~ubuntu' -o APT::Install-Recommends=no' failed: E: Packages were downgraded and -y was used without --allow-downgrades.
version that gets installed on run 1:
dpkg -l |grep docker-ce
ii docker-ce 18.02.0~ce-0~ubuntu amd64 Docker: the open-source application container engine
Ubuntu version:
cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.3 LTS"
All packages required for docker are installed.
TASK [docker : Install Docker] ***************************************************************************************************************************************************************************************************************
fatal: [example.com]: FAILED! => {"changed": false, "msg": "No package matching 'docker-ce' is available"}
root@manager-test:~# apt install docker-ce
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package docker-ce is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
docker-ce-cli:amd64
E: Package 'docker-ce' has no installation candidate
apt install docker-ce-cli
Host OS: Ubuntu 16.04.6 LTS (GNU/Linux 4.4.0-154-generic i686)