hashicorp / packer-plugin-ansible
Packer plugin for Ansible Provisioner
Home Page: https://www.packer.io/docs/provisioners/ansible
License: Mozilla Public License 2.0
This issue was originally opened by @mwhooker as hashicorp/packer#6174. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
Reporting for a user on the mailing list.
$ packer --version
1.2.2
$ ansible --version
ansible 2.5.0
config file = None
configured module search path = [u'/home/loren/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.14+ (default, Feb 6 2018, 19:12:18) [GCC 7.3.0]
$ PACKER_LOG=1 packer build -debug buildbot.json
<snip>
2018/04/22 18:28:58 [INFO] (telemetry) Starting provisioner ansible
2018/04/22 18:28:58 ui: ==> digitalocean: Provisioning with Ansible...
==> digitalocean: Provisioning with Ansible...
2018/04/22 18:28:59 packer: 2018/04/22 18:28:59 SSH proxy: serving on 127.0.0.1:35145
2018/04/22 18:28:59 ui: ==> digitalocean: Executing Ansible: ansible-playbook <REDACTED>
digitalocean:
2018/04/22 18:29:00 ui: digitalocean:
2018/04/22 18:29:00 ui: digitalocean: PLAY [Install python2 so that Ansible can run] *********************************
digitalocean: PLAY [Install python2 so that Ansible can run] *********************************
2018/04/22 18:29:01 ui: digitalocean:
digitalocean:
2018/04/22 18:29:31 ui: digitalocean: TASK [Install python2] *********************************************************
digitalocean: TASK [Install python2] *********************************************************
2018/04/22 18:29:31 packer: 2018/04/22 18:29:31 SSH proxy: accepted connection
2018/04/22 18:29:31 packer: 2018/04/22 18:29:31 authentication attempt from 127.0.0.1:49448 to 127.0.0.1:35145 as loren using none
2018/04/22 18:29:31 packer: 2018/04/22 18:29:31 authentication attempt from 127.0.0.1:49448 to 127.0.0.1:35145 as loren using publickey
2018/04/22 18:29:31 packer: ==> digitalocean: Pausing before cleanup of step 'StepConnect'. Press enter to continue.
panic: runtime error: invalid memory address or nil pointer dereference
2018/04/22 18:29:31 packer: [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x99b2fc]
2018/04/22 18:29:31 packer:
2018/04/22 18:29:31 packer: goroutine 96 [running]:
2018/04/22 18:29:31 [INFO] (telemetry) ending ansible
2018/04/22 18:29:31 packer: github.com/hashicorp/packer/vendor/golang.org/x/crypto/ssh.(*CertChecker).CheckCert(0xc42019e4b0, 0xc4201a63c8, 0x5, 0xc42037c2c0, 0xc420272158, 0x5)
2018/04/22 18:29:31 packer: /Users/phinze/go/src/github.com/hashicorp/packer/vendor/golang.org/x/crypto/ssh/certs.go:373 +0x57c
2018/04/22 18:29:31 packer: github.com/hashicorp/packer/vendor/golang.org/x/crypto/ssh.(*CertChecker).Authenticate(0xc42019e4b0, 0x22fba00, 0xc42058c080, 0x22f4ba0, 0xc42037c2c0, 0x22e23c0, 0xc42007cae0, 0x0)
2018/04/22 18:29:31 packer: /Users/phinze/go/src/github.com/hashicorp/packer/vendor/golang.org/x/crypto/ssh/certs.go:320 +0x85
2018/04/22 18:29:31 packer: github.com/hashicorp/packer/vendor/golang.org/x/crypto/ssh.(*CertChecker).Authenticate-fm(0x22fba00, 0xc42058c080, 0x22f4ba0, 0xc42037c2c0, 0x657, 0x657, 0x0)
2018/04/22 18:29:31 packer: /Users/phinze/go/src/github.com/hashicorp/packer/provisioner/ansible/provisioner.go:221 +0x52
2018/04/22 18:29:31 packer: github.com/hashicorp/packer/vendor/golang.org/x/crypto/ssh.(*connection).serverAuthenticate(0xc42058c080, 0xc42058a0c0, 0x11, 0x40, 0x0)
2018/04/22 18:29:31 packer: /Users/phinze/go/src/github.com/hashicorp/packer/vendor/golang.org/x/crypto/ssh/server.go:381 +0x17c1
2018/04/22 18:29:31 packer: github.com/hashicorp/packer/vendor/golang.org/x/crypto/ssh.(*connection).serverHandshake(0xc42058c080, 0xc42058a0c0, 0xc42014cee8, 0x4d744c, 0xc4200a80f0)
2018/04/22 18:29:31 packer: /Users/phinze/go/src/github.com/hashicorp/packer/vendor/golang.org/x/crypto/ssh/server.go:228 +0x519
2018/04/22 18:29:31 packer: github.com/hashicorp/packer/vendor/golang.org/x/crypto/ssh.NewServerConn(0x22fdfe0, 0xc4201b21a8, 0xc42036a0c0, 0x0, 0x0, 0xc42048d9e0, 0x89609e, 0x1)
2018/04/22 18:29:31 packer: /Users/phinze/go/src/github.com/hashicorp/packer/vendor/golang.org/x/crypto/ssh/server.go:159 +0xe7
2018/04/22 18:29:31 packer: github.com/hashicorp/packer/provisioner/ansible.(*adapter).Handle(0xc42019e500, 0x22fdfe0, 0xc4201b21a8, 0x22f8820, 0xc4203511e0, 0xc4201bb2c0, 0x431dc8)
2018/04/22 18:29:31 packer: /Users/phinze/go/src/github.com/hashicorp/packer/provisioner/ansible/adapter.go:67 +0x93
2018/04/22 18:29:31 packer: github.com/hashicorp/packer/provisioner/ansible.(*adapter).Serve.func1(0xc42019e500, 0x22fdfe0, 0xc4201b21a8)
2018/04/22 18:29:31 packer: /Users/phinze/go/src/github.com/hashicorp/packer/provisioner/ansible/adapter.go:57 +0x51
2018/04/22 18:29:31 packer: created by github.com/hashicorp/packer/provisioner/ansible.(*adapter).Serve
2018/04/22 18:29:31 packer: /Users/phinze/go/src/github.com/hashicorp/packer/provisioner/ansible/adapter.go:56 +0x1d0
2018/04/22 18:29:31 /usr/local/bin/packer: plugin process exited
2018/04/22 18:29:31 ui: ask: ==> digitalocean: Pausing before cleanup of step 'StepConnect'. Press enter to continue.
2018/04/22 18:30:18 ui: ask: ==> digitalocean: Pausing before cleanup of step 'stepDropletInfo'. Press enter to continue.
==> digitalocean: Pausing before cleanup of step 'stepDropletInfo'. Press enter to continue.
2018/04/22 18:30:34 ui: ask: ==> digitalocean: Pausing before cleanup of step 'stepCreateDroplet'. Press enter to continue.
==> digitalocean: Pausing before cleanup of step 'stepCreateDroplet'. Press enter to continue.
2018/04/22 18:30:35 ui: ==> digitalocean: Destroying droplet...
==> digitalocean: Destroying droplet...
2018/04/22 18:30:35 ui: ask: ==> digitalocean: Pausing before cleanup of step 'stepCreateSSHKey'. Press enter to continue.
==> digitalocean: Pausing before cleanup of step 'stepCreateSSHKey'. Press enter to continue.
2018/04/22 18:30:37 ui: ==> digitalocean: Deleting temporary ssh key...
==> digitalocean: Deleting temporary ssh key...
<snip>
This issue was originally opened by @Stavroswd as hashicorp/packer#9034. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
Hi guys, I'm trying to build an image for LXD with Packer, using Ansible for provisioning.
During the build, the process gets stuck at the gathering-facts task for some reason.
Even after disabling this step, it starts on the playbook tasks but still can't continue.
Run the build command with the provided config files.
sudo packer build build-config.json
packer => 1.5.5
ansible => 2.9.6
lxd => 3.0.3
lxc => client: 3.0.3 , server 3.0.3
build-config.json:
{
"builders": [
{
"type": "lxd",
"name": "lxd-image",
"image": "ubuntu:18.04",
"output_image": "lxd-image",
"publish_properties": {
"description": "Building and provision image."
}
}
],
"provisioners": [
{
"type": "ansible",
"playbook_file": "build/modules/vagrant-box-commandcenter/provision.yml",
"user": "lxd-image",
"ansible_env_vars": [ "ANSIBLE_HOST_KEY_CHECKING=False", "ANSIBLE_SSH_ARGS='-o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s'", "ANSIBLE_NOCOLOR=True" ],
"extra_arguments": [ "-vvvv" ]
}
]
}
Ansible playbook :
- hosts: default
become: yes
gather_facts: no
roles:
- role: install-ruby
Packer verbose build output:
==> lxd-image: Creating container...
==> lxd-image: Provisioning with Ansible...
==> lxd-image: Executing Ansible: ansible-playbook --extra-vars packer_build_name=lxd-image packer_builder_type=lxd -o IdentitiesOnly=yes -i /tmp/packer-provisioner-ansible670992089 /devops/repo/namespaces/3-operations/build/modules/vagrant-box-commandcenter/provision.yml -e ansible_ssh_private_key_file=/tmp/ansible-key831963874 -vvvv
lxd-image: ansible-playbook 2.9.6
lxd-image: config file = /devops/repo/namespaces/3-operations/ansible.cfg
lxd-image: configured module search path = [u'/home/vagrant/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
lxd-image: ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
lxd-image: executable location = /usr/local/bin/ansible-playbook
lxd-image: python version = 2.7.17 (default, Nov 7 2019, 10:07:09) [GCC 7.4.0]
lxd-image: Using /devops/repo/namespaces/3-operations/ansible.cfg as config file
lxd-image: setting up inventory plugins
lxd-image: host_list declined parsing /tmp/packer-provisioner-ansible670992089 as it did not pass its verify_file() method
lxd-image: script declined parsing /tmp/packer-provisioner-ansible670992089 as it did not pass its verify_file() method
lxd-image: auto declined parsing /tmp/packer-provisioner-ansible670992089 as it did not pass its verify_file() method
lxd-image: Parsed /tmp/packer-provisioner-ansible670992089 inventory source with ini plugin
lxd-image: Loading callback plugin yaml of type stdout, v2.0 from /usr/local/lib/python2.7/dist-packages/ansible/plugins/callback/yaml.pyc
lxd-image:
lxd-image: PLAYBOOK: provision.yml ********************************************************
lxd-image: Positional arguments: /devops/repo/namespaces/3-operations/build/modules/vagrant-box-commandcenter/provision.yml
lxd-image: become_method: sudo
lxd-image: inventory: (u'/tmp/packer-provisioner-ansible670992089',)
lxd-image: forks: 10
lxd-image: tags: (u'all',)
lxd-image: extra_vars: (u'packer_build_name=lxd-image packer_builder_type=lxd -o IdentitiesOnly=yes', u'ansible_ssh_private_key_file=/tmp/ansible-key831963874')
lxd-image: verbosity: 4
lxd-image: connection: smart
lxd-image: timeout: 10
lxd-image: 1 plays in /devops/repo/namespaces/3-operations/build/modules/vagrant-box-commandcenter/provision.yml
lxd-image:
lxd-image: PLAY [default] *****************************************************************
lxd-image: META: ran handlers
lxd-image:
lxd-image: TASK [0-tools/ruby/modules/install-ruby/roles/ruby-equipped-user : Install build tools] ***
lxd-image: task path: /devops/repo/namespaces/0-tools/ruby/modules/install-ruby/roles/ruby-equipped-user/tasks/main.yml:1
lxd-image: <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: lxd-image
lxd-image: <127.0.0.1> SSH: EXEC ssh -vvv -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=37521 -o 'IdentityFile="/tmp/ansible-key831963874"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="lxd-image"' -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/e3ac3f2d4b 127.0.0.1 '/bin/sh -c '"'"'echo ~lxd-image && sleep 0'"'"''
OS: Ubuntu 18.04 generic.
Simply running my ansible playbook against a running container works perfectly, so it must be something with the lxd builder.
Has anyone had the same issues with the lxd builder and can help?
This issue was originally opened by @sanvila as hashicorp/packer#8860. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
When using the ansible provisioner with either googlecompute, digitalocean or hetzner builders, final images contain an unwanted directory like this:
/root/~user
where "user" is the local user being used to run packer.
Unlike /root/.ansible, I think this should not happen, because it does not happen when using ansible without packer (i.e. using root as the remote user). Because of this, I have to add an extra step to remove the junk.
I also think this is bad because by default the contents of the resulting image should ideally not depend on the user running packer (i.e. it should be as reproducible as possible).
I can reproduce this effect every time, as long as I use at least one ansible provisioner in the packer JSON file.
(Not including a minimal JSON to reproduce because this happens always to me, but I will be more than happy to provide one if required).
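The mechanism behind the stray directory can be sketched in a shell session (the username below is made up for illustration; this is not packer's actual code). When the remote shell cannot expand the tilde because the local username does not exist there, the literal string survives, and a later mkdir creates a junk directory named after it:

```shell
# Simulate the failed tilde expansion in a scratch directory.
workdir=$(mktemp -d)
cd "$workdir"

# The local username does not exist on the "remote" system, so the
# shell leaves the tilde expression unexpanded.
home=$(sh -c 'echo ~no_such_user_xyz')
echo "$home"    # ~no_such_user_xyz (literal string, not a path)

# The staging directory is then created under that literal name,
# which is how a directory like /root/~user ends up in the image.
mkdir -p "$home/.ansible/tmp"
ls -d './~no_such_user_xyz'
```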
I'm using packer v1.5.4 from packer_1.5.4_linux_amd64.zip and ansible version 2.7.7+dfsg-1 as distributed by Debian 10.
Thanks.
Currently this plugin generates RSA keys:
However, RSA/SHA-1 was deprecated in the latest OpenSSH release (changelog) and my builds fail with the following error:
qemu.example-vm: fatal: [default]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Unable to negotiate with 127.0.0.1 port 37115: no matching host key type found. Their offer: ssh-rsa", "unreachable": true}
Please consider changing the algorithm to something newer.
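Until the generated key type changes, one commonly used workaround (an assumption on my part, not an official fix from this plugin) is to re-enable the deprecated ssh-rsa algorithms for the build only, via Ansible's SSH arguments:

```shell
# Hypothetical temporary workaround: allow ssh-rsa again for this build.
# Option names are standard OpenSSH ssh_config options; on very recent
# clients, PubkeyAcceptedAlgorithms replaces PubkeyAcceptedKeyTypes.
export ANSIBLE_SSH_ARGS='-o HostKeyAlgorithms=+ssh-rsa -o PubkeyAcceptedKeyTypes=+ssh-rsa'
```

The same value can be passed through the provisioner's ansible_env_vars setting instead of the environment.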
This issue was originally opened by @Helcaraxan as hashicorp/packer#8420. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
When running the Ansible provisioner on a host provided by the Google Compute builder, it appears to be impossible to run task commands with become and become_user if the username in question (created via Ansible) is identical to the user that is running the packer build. Instead the requested command is executed by the user specified via ssh_username in the build file.
I have attempted to reproduce this with only Ansible, using manual machine provisioning combined with Ansible's own remote SSH capabilities but this did not result in the same behaviour.
My test setup can be found in this repo. This repro's run.sh script is intended to be run from a GCE instance provisioned with a GCE service-account playing the role of the controller.
Outside of this minimal repro I have observed this issue in a more involved setup where the command that is being run is actually docker-credential-gcr configure-docker. In this other setup there is no unreachable error due to lacking permissions. However, the resulting $HOME/.docker/config.json ends up in the wrong directory.
Examples:
User running packer build: controller
Configured ssh_username user: ansible
User created by Ansible on target: controller
-> File ends up as /home/ansible/.docker/config.json instead of /home/controller/.docker/config.json.
User running packer build: controller
Configured ssh_username user: root
User created by Ansible on target: controller
-> File ends up as /root/.docker/config.json instead of /home/controller/.docker/config.json.
Packer versions 1.3.5, 1.4.3 and 1.4.5.
Copy-pasted from the repo linked above.
{
"builders": [
{
"type": "googlecompute",
"project_id": "{{user `project_id`}}",
"zone": "{{user `zone`}}",
"instance_name": "packer-{{uuid}}",
"machine_type": "n1-standard-1",
"preemptible": true,
"source_image_family": "ubuntu-1804-lts",
"communicator": "ssh",
"ssh_username": "{{user `packer_user`}}"
}
],
"provisioners": [
{
"type": "ansible",
"ansible_env_vars": [
"ANSIBLE_DIFF_ALWAYS=1",
"ANSIBLE_FORCE_COLOR=1"
],
"extra_arguments": [
"-vvv",
"--extra-vars",
"target_user={{user `target_user`}}"
],
"playbook_file": "ansible/playbook.yaml"
}
]
}
Tested and reproduced with all combinations of Packer versions 1.3.5, 1.4.3 and 1.4.5 and Ansible versions 2.9.0, 2.9.1 and latest 2.10.0.dev0 (ansible/ansible@6ae01d4).
Identical for both the controller and the target: Ubuntu 18.04 LTS - GCE instance, with only the minimal software installs required to run the repro scenario.
I have not attempted to reproduce this with builders other than the Google Compute one.
Logs of both a failed and a successful build can be found here.
This issue was originally opened by @snesbittsea as hashicorp/packer#6052. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
When provisioning a LXD container, the current Ansible provisioner fails to make group variables available to the Ansible run.
In debugging this I dumped the Ansible hostvars and saw that there are two hosts defined - a "default" host and a host corresponding to the builder "name" parameter with a "packer-" prefix. The expected group vars are found in the "default" host but not the "packer-" host. Attempting to force the provisioner to use the default host with the ansible-playbook -l option fails because there is no LXD container named "default" available and the host address is set to 127.0.0.1.
Setting the host_alias to 'packer-consulserver' does result in the group vars being available, but provisioning then fails because it can't reach the container.
The core issue I believe is the generation of the temporary inventory file. The problem is that as part of the creation of the temporary inventory file the ansible_host is set to 127.0.0.1 (hardcoded). I believe this means that for any host which activates the temporary host file the connection settings will be wrong for LXD containers.
As a quick and dirty check I replaced the existing code with the following, removing the setting of the ansible_host variable:
host := fmt.Sprintf("%s ansible_user=%s ansible_port=%s\n",
p.config.HostAlias, p.config.User, p.config.LocalPort)
This works and I have both group vars available and can access the LXD container.
Summary - my analysis suggests that the hard coding of the ansible_host to 127.0.0.1 when generating the inventory file will cause the provisioner to be unable to successfully connect to the LXD container.
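The difference between the current and the proposed inventory line can be sketched with the same format strings (the alias, user and port values below are made up for illustration):

```shell
# Current behaviour: ansible_host is hardcoded to 127.0.0.1, which an
# LXD connection plugin cannot use as a container name.
printf '%s ansible_host=%s ansible_user=%s ansible_port=%s\n' \
  'packer-consulserver' '127.0.0.1' 'root' '35145'

# Proposed behaviour: omit ansible_host, so the host alias itself
# (the container name) is used for the connection.
printf '%s ansible_user=%s ansible_port=%s\n' \
  'packer-consulserver' 'root' '35145'
```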
As a final note, I can work around the issue by defining the group variables in the template. This is very non-DRY and hacky.
I think there are at least three issues that need attention here:
The provisioner documentation needs to be updated to describe how to get the provisioner to work with LXD containers. This includes documenting that the container reference is the builder name parameter with a packer- prefix, that host_alias needs to be set to this concatenated name, and that the -l (and possibly -i) arguments need to be set in extra_arguments.
The creation of the temporary inventory file needs to be changed so that ansible_host value is set correctly.
Rethinking the temporary host file to allow the user to explicitly set the host file to be used. I might be missing something here, but why shouldn't I be able to pass in one of my existing Ansible hosts files? This would DRY things out.
Here are my work products:
packer version: 1.2.2-dev
host platform: Ubuntu 17.10
More detail on the debug process can be found in the packer google group discussion: Can't get remote ansible provisioner to assign to existing group during provisioning
packer command:
PACKER_LOG=1 $GOPATH/src/github.com/hashicorp/packer/bin/packer build -debug -only consulserver template.json
packer template:
{
"builders": [
{
"type": "lxd",
"name": "consulserver",
"image": "AWTAlpine37",
"output_image": "consulserver",
"publish_properties": {
"description": "Consul Server"
}
}
],
"provisioners": [
{
"type": "shell",
"inline": [ "sleep 10; apk update && apk add python2" ]
},
{
"type": "ansible",
"groups": ["consul_instances"],
"host_alias": "packer-consulserver",
"ansible_env_vars": [ "ANSIBLE_CONFIG=/home/FUZZBUTT/snesbitt/projects/ansible/fuzzbutt.awt_ansible/ansible.cfg" ],
"inventory_directory": "/home/FUZZBUTT/snesbitt/projects/ansible/fuzzbutt.awt_ansible/inventories/prod",
"playbook_file": "/home/FUZZBUTT/snesbitt/projects/ansible/fuzzbutt.awt_ansible/inventories/prod/domain-server.yml",
"extra_arguments": [ "-c", "lxd", "-l", "packer-consulserver", "-i", "packer-consulserver,"]
}
]
}
Playbook:
---
- hosts: all
tasks:
- name: Play hosts
debug: msg="play_hosts={{play_hosts}}"
- name: Dump consul_instance group
debug: msg="consul instances {{ groups['consul_instances'] | to_nice_yaml }}"
- name: Dump inventory_hostname
debug: var=inventory_hostname
- name: Dump workstations group
debug: var=groups['workstations']
- name: Dump consul_instances group var
debug: var=consul_node_role
- name: Dump consul_instances group var
debug: var=hostvars.default.consul_node_role
- name: Dump consul_instances group var
debug: var=consul_node_role
- name: Hello world
command: cat "/etc/fstab"
register: fstab
- name: Dump fstab
debug: var=fstab
provisioner.go
provisioner.go.zip
This issue was originally opened by @adamdoupe as hashicorp/packer#10374. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
Similar to the problem of hashicorp/packer#9118 (where ~<local_username> directories are created), however packer will fail if the local username exists on the remote system.
Follow the steps in hashicorp/packer#9118, however run packer as a user that exists on the remote system (such as root) but is not the user specified in ssh_username.
When packer attempts to create the temporary directory to hold the ansible files to run, it executes echo ~<local_username> on the remote machine. If <local_username> does not exist, bash will return ~<local_username> unchanged (which gives rise to the behavior in hashicorp/packer#9118). However, if <local_username> does exist on the remote machine, then bash will expand it to that user's home directory; for instance, echo ~root will typically expand to /root. In the next step, packer will try to create a temporary directory in that directory, and this will fail in the case of /root (because the ssh user does not have permissions to that directory).
This issue was originally opened by @jesusch as hashicorp/packer#9540. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
I have multiple inventory files per environment:
ansible git:(master) ✗ ls hosts
hosts.dev hosts.old hosts.prod staging
hosts.ds hosts.poc
hosts.inf
Each of those inventories has group_vars for, e.g., frontend servers.
my packer.json file looks like:
"provisioners": [
{
"type": "ansible",
"playbook_file": "packer/playbooks/frontend.yml",
"groups": [ "frontends" ],
"inventory_directory": "hosts/hosts.{{ user `env` }}",
"extra_arguments": [
"--ssh-extra-args", "-o IdentitiesOnly=yes"
]
}
],
While running this task, packer always creates a dynamic inventory which does not include my group_vars:
ansible-playbook -e ... -i hosts/hosts.preprev/packer-provisioner-ansible441481275 ..ansible/packer/playbooks/frontend.yml
How do I add already defined inventory variables to such a packer host?
This issue was originally opened by @pjnagel in hashicorp/packer#5412 and has been migrated to this repository. The original issue description is below.
When the Ansible provisioner executes ansible, it prints a line like the following:
Executing Ansible: ansible-playbook --extra-vars packer_build_name=virtualbox-ovf packer_builder_type=virtualbox-ovf -i /tmp/packer-provisioner-ansible432653744 /home/r2d2/setup/lautus_ansible/tau.yml --private-key /tmp/ansible-key857554005 --limit tau-desktop-image.lautus.net
The above command-line is not suitable for pasting into a shell and re-running, due to the embedded spaces not being shell escaped, or quoted. The actual command that was executed is (note the quotes):
ansible-playbook --extra-vars "packer_build_name=virtualbox-ovf packer_builder_type=virtualbox-ovf" -i /tmp/packer-provisioner-ansible432653744 /home/r2d2/setup/lautus_ansible/tau.yml --private-key /tmp/ansible-key857554005 --limit tau-desktop-image.lautus.net
This makes it difficult to debug failing builds. One strategy for debugging builds is to arrange for the build to pause, and then manually re-run the ansible command.
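The effect of the missing quotes can be demonstrated directly: without them, the shell splits the two extra-vars into separate arguments, so ansible-playbook would see the second one as a positional argument. The count_args helper below is purely illustrative:

```shell
# Count how many arguments the shell actually delivers.
count_args() { echo $#; }

# As printed by Packer (no quotes): the shell splits on the space,
# producing three separate arguments.
count_args --extra-vars packer_build_name=x packer_builder_type=y    # 3

# As actually executed (quoted): one option plus one value.
count_args --extra-vars "packer_build_name=x packer_builder_type=y"  # 2
```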
I am not sure whether this issue is specific to ansible builds, or whether this bug affects printing out of shell command-lines in general.
Packer version 1.1.0
Ubuntu 16.04.3 LTS
This issue was originally opened by @JKetelaar as hashicorp/packer#10477. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
Whenever I am creating a parallel build, the Ansible provisioner uses only one build's host in each of the tasks.
Run Packer with an Ansible provisioner and two builders.
It will run the provisioner on only one builder.
Packer v1.6.6
Build file
{
"variables": {
"hcloud_key": "{{ env `HCLOUD_KEY` }}"
},
"builders": [
{
"name": "common",
"type": "hcloud",
"token": "{{ user `hcloud_key` }}",
"image": "ubuntu-20.04",
"location": "nbg1",
"server_type": "cpx11",
"ssh_username": "root",
"snapshot_name": "test-1"
},
{
"name": "gocd",
"type": "hcloud",
"token": "{{ user `hcloud_key` }}",
"image": "ubuntu-20.04",
"location": "nbg1",
"server_type": "cpx11",
"ssh_username": "root",
"snapshot_name": "test-2"
}
],
"provisioners": [
{
"type": "shell",
"inline": [
"/usr/bin/cloud-init status --wait"
]
},
{
"type": "ansible",
"playbook_file": "./playbooks/packer-debug.yml",
"keep_inventory_file": true
}
]
}
Ansible playbook (./playbooks/packer-debug.yml
)
---
- hosts: all
remote_user: root
become: true
tasks:
- debug: var=ansible_all_ipv4_addresses
- debug: var=ansible_default_ipv4.address
As you can see in the log output on line 5 and line 7, it shows two different IPs (116.203.213.239 and 116.203.51.13).
If you then look at line 28, it shows that the gocd builder indeed has the right IP (116.203.213.239), though looking at line 67 it shows that the common builder also uses the gocd builder's IP address.
Reproduced on both Ubuntu 20 and macOS.
https://gist.github.com/JKetelaar/6cfab554d0f24f0b4f1b653c82aadf84
The documentation (here) shows host_alias, however in practice the HCL file isn't valid. I think either the documentation is incorrect or the code is, as the host_alias var doesn't seem to exist for ansible-local (however golang is pretty unknown to me). I imagine it is supposed to exist, although a workaround is to add an inventory_group.
Steps to reproduce this issue.
From packer version:
Packer = 1.7.3
Ansible plugin = 1.0.0
packer {
required_plugins {
vagrant = {
version = ">= 1.0.0"
source = "github.com/hashicorp/vagrant"
}
}
required_plugins {
ansible = {
version = ">= 1.0.0"
source = "github.com/hashicorp/ansible"
}
}
}
source "vagrant" "my-box" {
communicator = "ssh"
source_path = "geerlingguy/centos8"
provider = "virtualbox"
output_dir = "build"
}
build {
sources = [
"source.vagrant.my-box"]
provisioner "shell" {
inline = [
"sudo yum install epel-release -y",
"sudo yum install ansible -y",
]
}
provisioner "ansible-local" {
host_alias = "my-box"
playbook_file = "./playbook.yml"
galaxy_file = "./requirements.yml"
}
}
When an ansible task fails, it seems to escape line breaks (\n) and display them literally instead of creating a line break. This makes debugging issues difficult. I'm not sure if this is purely a fault of packer or ansible, but even if it is ansible, packer could unescape line breaks if they exist in the output.
all packer/ansible debugging
Below is a (bad) example of a stacktrace from a failed ansible task. Yellow highlighting is mine (from iTerm).
The fix here would be to unescape these line breaks and show the stacktrace appropriately.
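A minimal sketch of the kind of unescaping meant here, using printf '%b' to turn literal \n sequences back into real line breaks (illustrative only, not the plugin's code):

```shell
# A stack trace as it currently appears: one long line containing
# literal backslash-n sequences instead of newlines.
escaped='Traceback (most recent call last):\n  File "x.py", line 1\nValueError'

# printf %b interprets backslash escapes, restoring the line breaks
# so the trace is readable again.
printf '%b\n' "$escaped"
```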
The ansible_env_vars argument for the ansible-local provisioner is not supported.
To reproduce: configure the ansible-local provisioner with ansible_env_vars.
Packer v1.7.2
required_plugins {
amazon = {
version = "~> 1.0"
source = "github.com/hashicorp/amazon"
}
ansible = {
version = ">= 1.0.0"
source = "github.com/hashicorp/ansible"
}
}
provisioner "ansible-local" {
playbook_dir = abspath("./provisioners/ansible")
playbook_file = abspath("./provisioners/ansible/playbook.yml")
ansible_env_vars = ["ANSIBLE_HOST_KEY_CHECKING=False"]
}
macOS Big Sur 11.4
Error: Failed preparing provisioner-block "ansible-local" ""
on build.pkr.hcl line 27:
(source code not available)
build.pkr.hcl:31,5-21: Unsupported argument; An argument named
"ansible_env_vars" is not expected here.
This issue was originally opened by @Doni7722 as hashicorp/packer#10592. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
I'm using qemu as the builder and ansible as the provisioner. The templates are created from ISO, and packer creates a temporary SSH key which Ansible uses to connect. That no longer works. First issue: I can see that the generated key in /tmp/ansible-keyXXXX is empty. Second issue: I can't see any difference between "use_proxy": true and "use_proxy": false.
1: packer build with qemu
2: create a template from ISO where you connect over username / password
3: ansible as provisioner who should use temporary ssh key
4: ansible is unable to connect (permission denied)
1.6.6
{
"builders": [
{
"accelerator": "kvm",
"boot_command": [
"<up><tab> inst.text inst.ks=hd:fd0:/CentOS-7-x86_64-cloud.cfg <enter><wait>"
],
"boot_wait": "20s",
"communicator": "ssh",
"cpus": 1,
"disk_interface": "virtio-scsi",
"disk_size": "20480M",
"floppy_files": [
"templates/ks/CentOS/7/CentOS-7-x86_64-cloud.cfg"
],
"format": "qcow2",
"headless": false,
"iso_checksum": "{{user `iso_checksum_type`}}:{{user `iso_centos7_checksum`}}",
"iso_url": "{{user `iso_centos7_url`}}",
"memory": 2048,
"net_device": "virtio-net",
"output_directory": "templates/kvm/centos7/template",
"shutdown_command": "shutdown --poweroff now",
"ssh_password": "{{user `vm_root_pw`}}",
"ssh_timeout": "15m",
"ssh_username": "root",
"ssh_clear_authorized_keys": true,
"type": "qemu",
"vm_name": "packer_kvm_centos7"
}
],
"provisioners": [
{
"host_alias": "packer-template",
"playbook_file": "templates/kvm/centos7/playbooks/main.yml",
"type": "ansible",
"use_proxy": false,
"extra_arguments": [ "-vvvv" ]
},
{
"expect_disconnect": true,
"inline": [
"reboot"
],
"start_retry_timeout": "30m",
"type": "shell"
}
]
}
building machine: fedora 33 with packer 1.6.6 & ansible 2.9.17
building template: CentOS 7 from ISO
here the logs:
https://gist.github.com/Doni7722/666afd5fa7fd364850c0be2835d8d3ae
I have a packer file containing the following:
[....]
source "docker" "Test" {
image = "centos:7"
export_path = "test.tar"
}
[....]
build {
sources = ["source.docker.Test"]
provisioner "shell" {
inline = ["echo 'proxy=http://<proxy_url>' >> /etc/yum.conf", "rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7", "yum install -y python3"]
}
provisioner "ansible" {
extra_arguments = [
"-e", "proxy_url='http://<proxy_url>'",
"-e", "ansible_connection=docker"
]
playbook_file = "playbooks/GoldenImage.yml"
user = "root"
}
}
Ansible fails the first time it tries to connect to the docker container (when doing the initial host scan, called "fact gathering" in Ansible). I suspect this is because the contents of the inventory file look like:
default ansible_host=127.0.0.1 ansible_user=root ansible_port=42779
I think, although I am not sure, that 'default' should actually be the ID of the docker container. I have checked in hashicorp/packer-plugin-ansible/provisioner/ansible/provisioner.go, lines 265-267 and 'default' is the value of HostAlias if nothing else is set.
I guess setting the host_alias parameter to the container ID in my packer section would be enough, but I do not see how I can get the container ID from the docker builder in packer. Is this a bug, or a configuration mistake on my side?
UPDATE: If I set the host_alias parameter in the packer config file, the inventory file gets updated as expected. However, I do not understand how I can access the state variable instance_id (called like this in packer-plugin-docker, to be used by the provisioners) from the config file.
packer version: 1.7.4
I do not know how to get the plugin versions
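Per the UPDATE above, the fix reduces to setting host_alias so the generated inventory alias matches what the docker connection plugin can resolve. A minimal sketch; the alias value below is a placeholder you would have to supply yourself, since exposing the builder's container ID is exactly the open question in this issue:

```hcl
provisioner "ansible" {
  playbook_file = "playbooks/GoldenImage.yml"
  user          = "root"
  # Placeholder: must match the actual running container,
  # which the docker builder does not currently expose.
  host_alias = "my-build-container"
  extra_arguments = [
    "-e", "ansible_connection=docker",
  ]
}
```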
This issue was originally opened by @gamethis as hashicorp/packer#9300. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
Add a variable for Ansible extra vars to the payload.
A written overview of the feature:
Add an option in the payload for ansible_extra_vars.
This would separate the extra vars out and simplify the format.
Today you have to do something like this:
"extra_arguments": ["-vv",
"-T 1200",
"--extra-vars",
"ansible_host_key_checking=False ansible_scp_if_ssh=True ansible_python_interpreter=/usr/bin/python3.6 ansible_ssh_retries=20 ansible_ssh_common_args='-C -o ControlMaster=auto -o ControlPersist=180s -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' ansible_shell_type=powershell ansible_user={{user `ssh_username`}} ansible_password={{user `ssh_password`}} ansible_become_method=runas ansible_become_user=System proxy_addr=thd-svr-proxy.homedepot.com:9090 proxy_bypass=retail.net ansible_shell_executable=None"
],
The idea would be to allow the extra vars to be separated, to make for a clearer payload:
"extra_arguments": ["-vv",
"-T 1200",
],
"ansible_extra_vars":[
"ansible_host_key_checking=False",
"ansible_scp_if_ssh=True"
"ansible_python_interpreter=/usr/bin/python3.6"
]
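Until such an option exists, one way to keep the payload readable with the existing extra_arguments field is to repeat -e once per variable, which ansible-playbook accepts and merges. A sketch of the current workaround (playbook name is illustrative), not the proposed feature:

```hcl
provisioner "ansible" {
  playbook_file = "playbook.yml" # hypothetical
  extra_arguments = [
    "-vv",
    # One -e flag per variable instead of a single long quoted string.
    "-e", "ansible_host_key_checking=False",
    "-e", "ansible_scp_if_ssh=True",
    "-e", "ansible_python_interpreter=/usr/bin/python3.6",
  ]
}
```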
This issue was originally opened by @mbrancato as hashicorp/packer#8827. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
When using multiple builders with the ansible provisioner and a galaxy_file, there seems to be a race condition where multiple builders may try to download the same role at the same time. This results in an error like the following:
==> azure-arm-two: Provisioning with Ansible...
==> azure-arm-one: Connected to SSH!
==> azure-arm-one: Provisioning with Ansible...
azure-arm-one: Executing Ansible Galaxy
azure-arm-two: Executing Ansible Galaxy
azure-arm-one: - extracting <role name> to /root/.ansible/roles/<role name>
azure-arm-one: - <role name> (<hash>) was installed successfully
azure-arm-two: - extracting <role name> to /root/.ansible/roles/<role name>
azure-arm-two: [WARNING]: - <role name> was NOT installed successfully: the specified role
azure-arm-two: <role name> appears to already exist. Use --force to replace it.
azure-arm-two: ERROR! - you can use --ignore-errors to skip failed roles and finish processing the list.
==> azure-arm-two: Provisioning step had errors: Running the cleanup provisioner, if present...
Interestingly, I think this only happens if both processes are extracting the role at the same time. If the builders are not near each other in the steps being performed, the second builder doesn't seem to care that the role already exists.
Note that the error actually comes from Ansible. It seems Packer would need to coordinate the use of ansible-galaxy between builders more tightly and not invoke the same request more than once at a time.
https://github.com/ansible/ansible/blob/c64202a49563fefb35bd8de59bceb0b3b2fa5fa1/lib/ansible/galaxy/role.py#L309
Use two builders with an ansible galaxy file.
Packer v1.5.4
n/a
Linux, amd64
n/a
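Until Packer serializes ansible-galaxy runs, one workaround is to give each builder its own roles directory so parallel builds never extract into the same path. A sketch assuming an HCL2 template, where the ${source.name} contextual variable is available inside build blocks; the directory layout is illustrative:

```hcl
provisioner "ansible" {
  playbook_file = "playbook.yml"     # hypothetical
  galaxy_file   = "requirements.yml" # hypothetical
  # Per-builder roles path so azure-arm-one and azure-arm-two
  # do not race on the same galaxy roles directory.
  roles_path = "${path.root}/galaxy_roles/${source.name}"
}
```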
This issue was originally opened by @Stavroswd as hashicorp/packer#9034. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
Hi guys, I'm trying to build an image for LXD with Packer, using Ansible for provisioning.
During the build, for some reason the process gets stuck at the "Gathering Facts" task.
Even after disabling that step, it starts running the playbook tasks but can't continue.
Run the build command with the provided config files.
sudo packer build build-config.json
packer => 1.5.5
ansible => 2.9.6
lxd => 3.0.3
lxc => client: 3.0.3 , server 3.0.3
build-config.json:
{
"builders": [
{
"type": "lxd",
"name": "lxd-image",
"image": "ubuntu:18.04",
"output_image": "lxd-image",
"publish_properties": {
"description": "Building and provision image."
}
}
],
"provisioners": [
{
"type": "ansible",
"playbook_file": "build/modules/vagrant-box-commandcenter/provision.yml",
"user": "lxd-image",
"ansible_env_vars": [ "ANSIBLE_HOST_KEY_CHECKING=False", "ANSIBLE_SSH_ARGS='-o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s'", "ANSIBLE_NOCOLOR=True" ],
"extra_arguments": [ "-vvvv" ]
}
]
}
Ansible playbook :
- hosts: default
  become: yes
  gather_facts: no
  roles:
    - role: install-ruby
Packer verbose build output:
==> lxd-image: Creating container...
==> lxd-image: Provisioning with Ansible...
==> lxd-image: Executing Ansible: ansible-playbook --extra-vars packer_build_name=lxd-image packer_builder_type=lxd -o IdentitiesOnly=yes -i /tmp/packer-provisioner-ansible670992089 /devops/repo/namespaces/3-operations/build/modules/vagrant-box-commandcenter/provision.yml -e ansible_ssh_private_key_file=/tmp/ansible-key831963874 -vvvv
lxd-image: ansible-playbook 2.9.6
lxd-image: config file = /devops/repo/namespaces/3-operations/ansible.cfg
lxd-image: configured module search path = [u'/home/vagrant/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
lxd-image: ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
lxd-image: executable location = /usr/local/bin/ansible-playbook
lxd-image: python version = 2.7.17 (default, Nov 7 2019, 10:07:09) [GCC 7.4.0]
lxd-image: Using /devops/repo/namespaces/3-operations/ansible.cfg as config file
lxd-image: setting up inventory plugins
lxd-image: host_list declined parsing /tmp/packer-provisioner-ansible670992089 as it did not pass its verify_file() method
lxd-image: script declined parsing /tmp/packer-provisioner-ansible670992089 as it did not pass its verify_file() method
lxd-image: auto declined parsing /tmp/packer-provisioner-ansible670992089 as it did not pass its verify_file() method
lxd-image: Parsed /tmp/packer-provisioner-ansible670992089 inventory source with ini plugin
lxd-image: Loading callback plugin yaml of type stdout, v2.0 from /usr/local/lib/python2.7/dist-packages/ansible/plugins/callback/yaml.pyc
lxd-image:
lxd-image: PLAYBOOK: provision.yml ********************************************************
lxd-image: Positional arguments: /devops/repo/namespaces/3-operations/build/modules/vagrant-box-commandcenter/provision.yml
lxd-image: become_method: sudo
lxd-image: inventory: (u'/tmp/packer-provisioner-ansible670992089',)
lxd-image: forks: 10
lxd-image: tags: (u'all',)
lxd-image: extra_vars: (u'packer_build_name=lxd-image packer_builder_type=lxd -o IdentitiesOnly=yes', u'ansible_ssh_private_key_file=/tmp/ansible-key831963874')
lxd-image: verbosity: 4
lxd-image: connection: smart
lxd-image: timeout: 10
lxd-image: 1 plays in /devops/repo/namespaces/3-operations/build/modules/vagrant-box-commandcenter/provision.yml
lxd-image:
lxd-image: PLAY [default] *****************************************************************
lxd-image: META: ran handlers
lxd-image:
lxd-image: TASK [0-tools/ruby/modules/install-ruby/roles/ruby-equipped-user : Install build tools] ***
lxd-image: task path: /devops/repo/namespaces/0-tools/ruby/modules/install-ruby/roles/ruby-equipped-user/tasks/main.yml:1
lxd-image: <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: lxd-image
lxd-image: <127.0.0.1> SSH: EXEC ssh -vvv -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=37521 -o 'IdentityFile="/tmp/ansible-key831963874"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="lxd-image"' -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/e3ac3f2d4b 127.0.0.1 '/bin/sh -c '"'"'echo ~lxd-image && sleep 0'"'"''
OS: Ubuntu 18.04 generic.
Simply running my Ansible playbook against a running container works perfectly, so it must be something with the lxd builder.
Has anyone had the same issue with the lxd builder and can help?
This issue was originally opened by @queglay as hashicorp/packer#11058. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
If a playbook is run from within a collection, Packer will error, since it has not yet installed the Galaxy requirements.
Try to run a playbook from within a collection.
v1.7.2
example:
provisioner "ansible" {
playbook_file = "./ansible/collections/ansible_collections/firehawkvfx/fsx/fsx_packages.yaml"
extra_arguments = [
"-vv",
"--extra-vars",
"variable_user=deadlineuser resourcetier=${var.resourcetier} variable_host=default user_deadlineuser_pw='' package_python_interpreter=/usr/bin/python2.7"
]
collections_path = "./ansible/collections"
roles_path = "./ansible/roles"
ansible_env_vars = ["ANSIBLE_CONFIG=ansible/ansible.cfg"]
galaxy_file = "./requirements.yml"
only = [
"amazon-ebs.centos7-rendernode-ami",
]
}
Amazon linux 2
Error: Failed preparing provisioner-block "ansible" ""
on /home/ec2-user/environment/firehawk/deploy/packer-firehawk-amis/modules/firehawk-ami/firehawk-ami.pkr.hcl line 814:
(source code not available)
1 error(s) occurred:
* playbook_file:
./ansible/collections/ansible_collections/firehawkvfx/fsx/fsx_packages.yaml is
invalid: stat
./ansible/collections/ansible_collections/firehawkvfx/fsx/fsx_packages.yaml: no
such file or directory
This issue was originally opened by @pun-ky in hashicorp/packer#11123 and has been migrated to this repository. The original issue description is below.
I suggest adding to the documentation another error message which might occur; in my case it was:
2021-06-28 09:03:22,348 p=1938 u=root n=ansible | Using /xxx/env/ansible/ansible.cfg as config file
2021-06-28 09:03:23,968 p=1938 u=root n=ansible | PLAY [aem] *********************************************************************
2021-06-28 09:03:24,278 p=1938 u=root n=ansible | Monday 28 June 2021 09:03:24 +0000 (0:00:00.345) 0:00:00.345 ***********
2021-06-28 09:03:24,283 p=1938 u=root n=ansible | [started TASK: Gathering Facts on aem]
2021-06-28 09:03:24,469 p=1978 u=root n=paramiko.transport | Connected (version 2.0, client Go)
2021-06-28 09:03:24,495 p=1978 u=root n=paramiko.transport | Authentication (publickey) successful!
2021-06-28 09:03:26,057 p=1938 u=root n=ansible | TASK [Gathering Facts] *********************************************************
2021-06-28 09:03:26,069 p=1938 u=root n=ansible | fatal: [aem]: FAILED! => {}
MSG:
failed to open a SFTP connection (EOF during negotiation)
2021-06-28 09:03:26,087 p=1938 u=root n=ansible | PLAY RECAP *********************************************************************
2021-06-28 09:03:26,095 p=1938 u=root n=ansible | aem : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
2021-06-28 09:03:26,098 p=1938 u=root n=ansible | Monday 28 June 2021 09:03:26 +0000 (0:00:01.819) 0:00:02.165 ***********
2021-06-28 09:03:26,105 p=1938 u=root n=ansible | ===============================================================================
2021-06-28 09:03:26,107 p=1938 u=root n=ansible | Gathering Facts --------------------------------------------------------- 1.83s
https://www.packer.io/docs/provisioners/ansible/ansible#redhat-centos
The documentation at the above link is good (I finally found it after n days), but it does not say that it helps with "failed to open a SFTP connection" on the Ansible side. That is not at all trivial to figure out, because in my case Ansible worked fine when called directly; only when called by Packer does Packer's default sftp_command value introduce this non-trivial problem.
Use a RHEL 8.3 image with the azure-arm builder, then use the Ansible remote provisioner without specifying sftp_command.
1.7.3
Alpine / Packer run from inside Docker container.
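For readers hitting the same wall: the linked RedHat/CentOS docs section comes down to overriding Packer's default sftp_command with the sftp-server path used on RHEL-family images. A sketch under that assumption; verify the binary location on your own image before relying on it:

```hcl
provisioner "ansible" {
  playbook_file = "playbook.yml" # hypothetical
  # Usual sftp-server location on RHEL/CentOS; confirm on the
  # target image, e.g. with `find / -name sftp-server`.
  sftp_command = "/usr/libexec/openssh/sftp-server -e"
}
```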
This issue was originally opened by @duijf as hashicorp/packer#7241. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
This is a feature request.
Summary: I would like the "command" key for the ansible provisioner to accept lists of strings.
Background: Internally, we have a wrapper around ansible-playbook which ensures that our Ansible playbooks run in the environment they expect. It is responsible for passing the right flags to ansible-playbook for common scenarios, checking preconditions, and some setup. (FWIW, I believe we're not unique in this: I've talked to quite a few people about the way they set up Ansible, and this is quite common from what I've heard.)
The wrapper has a subcommand-based CLI, so there is our-ansible-wrapper foo, our-ansible-wrapper bar, etc. Because of this, I would like Packer to call our-ansible-wrapper packer-provision PLAYBOOK. This is currently not possible without an intermediate shell script.
What I've tried: This does not work:
{
"type": "ansible",
"command": "our-ansible-wrapper packer-provision",
"playbook_file": "image.yml"
}
This is because Packer tries to find a binary whose file name is the full command key, including the space and packer-provision.
Adding packer-provision to extra_arguments also does not work:
{
"type": "ansible",
"command": "our-ansible-wrapper",
"playbook_file": "playbooks/image-base.yml",
"extra_arguments": [
"packer-provision"
]
}
This is because those extra arguments are appended to the command after other arguments generated by Packer, so Packer generates something like our-ansible-wrapper /path/to/playbook -i /tmp/inventory packer-provision.
More control over the order of arguments would be nice.
Request: I would like this to work:
{
"type": "ansible",
"command": ["our-ansible-wrapper", "packer-provision"],
"playbook_file": "playbooks/image-base.yml"
}
I currently work around this by having a separate shell script where the only purpose is to pass that additional argument to our wrapper. I'd prefer not to need it.
This request might generalize to other provisioners, but I haven't tried those yet.
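For reference, the intermediate-shell-script workaround mentioned above can be wired in through the existing command option. A sketch; the shim path and its contents are illustrative:

```hcl
provisioner "ansible" {
  # Hypothetical one-line shell shim whose body is:
  #   exec our-ansible-wrapper packer-provision "$@"
  command       = "./scripts/packer-ansible-shim.sh"
  playbook_file = "playbooks/image-base.yml"
}
```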
This issue was originally opened by @queglay as hashicorp/packer#10348. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
Packer suppresses ANSI color codes. When using provisioners like Ansible, very long logs can be produced, and without the color it's difficult to determine where something went wrong.
To reproduce: build anything with Ansible. Any colors that would normally be output are not visible.
1.6.4
The URL in the GitHub repo description leads to a Not Found page:
https://www.packer.io/plugins/provisioners/ansible
This issue was originally opened by @SteveTalbot as hashicorp/packer#6347. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
Affected version: Packer 1.2.4
Host platform: Ubuntu 16.04 LTS, Jenkins slave
Builder: amazon-chroot
Provisioner: ansible-local with Ansible v2.5.4.0
This was originally raised as issue hashicorp/packer#5335. A software bug relating to escaping of special characters was fixed as a result of hashicorp/packer#5335, and this ticket captures the remaining incompatibility between amazon-chroot and ansible-local. The problem is entirely with documentation, not the Packer software itself.
The following Packer configuration causes Ansible to fail with the error "provided hosts list is empty, only localhost is available" when used with the Amazon chroot builder.
{
"builders": [
{
"type": "amazon-chroot",
"name": "amazon-chroot-ansible-local",
"ami_name": "packer-bug-demo/amazon-chroot/ansible-local",
"ami_description": "Demo to reproduce Packer issue; based on Ubuntu 16.04 LTS (HVM)",
"source_ami": "ami-58d7e821",
"chroot_mounts": [
["proc", "proc", "/proc"],
["sysfs", "sysfs", "/sys"],
["bind", "/dev", "/dev"],
["bind", "/dev/pts", "/dev/pts"],
["bind", "/dev/shm", "/dev/shm"],
["binfmt_misc", "binfmt_misc", "/proc/sys/fs/binfmt_misc"]
],
"command_wrapper": "sudo {{.Command}}",
"copy_files": [
"/etc/resolv.conf"
],
"ena_support": true,
"force_deregister": true,
"sriov_support": true
}
],
"provisioners": [
{
"type": "shell",
"execute_command": "chmod +x {{.Path}}; {{.Vars}} sudo -E sh \"{{.Path}}\"",
"script": "{{template_dir}}/chroot/pre.sh"
},
{
"type": "ansible-local",
"command": "ANSIBLE_FORCE_COLOR=1 PYTHONUNBUFFERED=1 ANSIBLE_LOCAL_TEMP=/tmp/ansible ANSIBLE_REMOTE_TEMP=/tmp/ansible-managed ANSIBLE_ROLES_PATH=/tmp/packer-provisioner-ansible-local/galaxy_roles:/etc/ansible/roles ansible-playbook",
"playbook_dir": "{{template_dir}}/ansible",
"playbook_file": "{{template_dir}}/ansible/playbook.yml",
"staging_directory": "/tmp/packer-provisioner-ansible-local",
"extra_arguments": [
"--tags=install,package",
"-vvv"
]
},
{
"type": "shell",
"execute_command": "chmod +x {{.Path}}; {{.Vars}} sudo -E sh \"{{.Path}}\"",
"script": "{{template_dir}}/chroot/post.sh"
},
{
"type": "shell-local",
"command": "sleep 5"
}
]
}
The pre.sh script prevents services from being started within the chroot and installs Ansible and its dependencies. The post.sh script cleans up afterwards. Otherwise you don't need to worry about what they're doing for the purposes of this example.
Packer runs a command that looks like the following:
cd /tmp/packer-provisioner-ansible-local && ANSIBLE_FORCE_COLOR=1 PYTHONUNBUFFERED=1 ANSIBLE_LOCAL_TEMP=/tmp/ansible ANSIBLE_REMOTE_TEMP=/tmp/ansible-managed ANSIBLE_ROLES_PATH=/tmp/packer-provisioner-ansible-local/galaxy_roles:/etc/ansible/roles ansible-playbook /tmp/packer-provisioner-ansible-local/playbook.yml --extra-vars "packer_build_name=amazon-chroot-ansible-local packer_builder_type=amazon-chroot packer_http_addr=" --tags=install,package -vvv -c local -i /tmp/packer-provisioner-ansible-local/packer-provisioner-ansible-local117135750
The reason for the failure is that Packer's ansible-local provisioner builds an inventory file containing localhost (an "implicit localhost"), rather than specifying -i localhost,
on the ansible-playbook command line (an "explicit localhost"). Most recent versions of Ansible do not include an implicit localhost in the "all" hosts group, so none of the playbook tasks get run.
Firstly, the Packer documentation does say "Building within a chroot (e.g. amazon-chroot) requires changing the Ansible connection to chroot", but it says this on the documentation page for the Ansible remote provisioner. It would be helpful if this were included with the "gotchas" documentation for the amazon-chroot builder and also with the documentation for the Ansible local provisioner.
Secondly, for the benefit of anyone who might stumble across this ticket in future, it is possible to use Ansible in local mode with a chroot, but you need to copy your own inventory file to /etc/ansible/hosts in the chroot. So an example working configuration might look like:
"provisioners": [
{
"type": "shell",
"execute_command": "chmod +x {{.Path}}; {{.Vars}} sudo -E sh \"{{.Path}}\"",
"script": "{{template_dir}}/chroot/pre.sh"
},
{
"type": "file",
"source": "{{template_dir}}/ansible",
"destination": "/etc/ansible/local-playbook"
},
{
"type": "file",
"source": "{{template_dir}}/ansible/inventories/local.ini",
"destination": "/etc/ansible/hosts"
},
{
"type": "shell",
"inline": [
"sudo -E ANSIBLE_FORCE_COLOR=1 PYTHONUNBUFFERED=1 ANSIBLE_LOCAL_TEMP=/tmp/ansible ANSIBLE_REMOTE_TEMP=/tmp/ansible-managed ANSIBLE_ROLES_PATH=/etc/ansible/local-playbook/galaxy_roles:/etc/ansible/roles ansible-playbook --extra-vars \"packer_build_name={{build_name}} packer_builder_type={{build_type}}\" --tags=install,package -vvv /etc/ansible/local-playbook/playbook.yml"
]
}
]
We have often found it necessary to specify the ANSIBLE_LOCAL_TEMP and ANSIBLE_REMOTE_TEMP environment variables as part of the ansible-playbook command, but your mileage may vary.
Theoretically it would be possible to add an option to the ansible-local provisioner to specify the location of the generated inventory file within the chroot (it would have to be /etc/ansible/hosts), but as per the documentation, it is probably better to use the ansible provisioner in chroot connection mode instead.
So in summary, a couple of additions to the amazon-chroot and ansible-local documentation would help stop people getting stuck with the same problem I did.
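As the quoted documentation suggests, the alternative to all of the above is the remote ansible provisioner with the connection switched to chroot. A minimal sketch; the mount path is illustrative of where the amazon-chroot builder mounts the volume, not taken from this report:

```hcl
provisioner "ansible" {
  playbook_file = "ansible/playbook.yml" # hypothetical
  extra_arguments = [
    "--connection=chroot",
    # Illustrative chroot mount point created by the amazon-chroot builder.
    "-e", "ansible_host=/mnt/packer-amazon-chroot-volumes/xvdf",
  ]
}
```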
This issue was originally opened by @joubbi as hashicorp/packer#10427. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
When running a playbook containing the ansible.builtin.template module with a validation command, Ansible fails to find the file generated by the template when run from Packer.
It is the %s variable that breaks: it seems to me that %s points to the wrong path (see part of the debug output at the bottom).
- name: "Configure aide"
  template:
    src: "aide.conf.j2"
    dest: "{{ aide_conf_path }}"
    validate: "{{ aide_path.stdout }} -D -c %s"
The same playbook runs fine with ansible-playbook from the same machine against the same image, once the image has been created without the offending playbook.
Install https://galaxy.ansible.com/ahuffman/aide and invoke that role from Packer. This will fail due to the validate row in the main.yml task.
Comment out the row containing validate and the build will succeed.
Build an image without the role, run the image in a virtual machine, and run the role against the machine; it will work as expected.
Packer v1.6.6
"provisioners": [
{
"type": "ansible",
"playbook_file": "../ansible/aide.yml",
"extra_arguments": [ "-vvvv" ]
}
]
$ cat ../ansible/aide.yml
- name: "Install and configure AIDE"
hosts: "default"
roles:
- "ahuffman.aide"
$ cat /etc/lsb-release
DISTRIB_ID=LinuxMint
DISTRIB_RELEASE=20
DISTRIB_CODENAME=ulyana
DISTRIB_DESCRIPTION="Linux Mint 20 Ulyana"
$ kvm --version
QEMU emulator version 4.2.1 (Debian 1:4.2-3ubuntu6.10)
Copyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers
$ ansible --version
ansible 2.10.4
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/farid/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/farid/.local/lib/python3.8/site-packages/ansible
executable location = /home/farid/.local/bin/ansible
python version = 3.8.5 (default, Jul 28 2020, 12:59:40) [GCC 9.3.0]
qemu: fatal: [default]: FAILED! => {
qemu: "changed": false,
qemu: "checksum": "402ef66e0f34f01c23ba31f8ea8f7ed64416d27e",
qemu: "diff": [],
qemu: "exit_status": 17,
qemu: "invocation": {
qemu: "module_args": {
qemu: "_original_basename": "aide.conf.j2",
qemu: "attributes": null,
qemu: "backup": false,
qemu: "checksum": "402ef66e0f34f01c23ba31f8ea8f7ed64416d27e",
qemu: "content": null,
qemu: "dest": "/etc/aide.conf",
qemu: "directory_mode": null,
qemu: "follow": false,
qemu: "force": true,
qemu: "group": null,
qemu: "local_follow": null,
qemu: "mode": null,
qemu: "owner": null,
qemu: "remote_src": null,
qemu: "selevel": null,
qemu: "serole": null,
qemu: "setype": null,
qemu: "seuser": null,
qemu: "src": "~farid/.ansible/tmp/ansible-tmp-1609189480.6933067-48840-116779293865971/source",
qemu: "unsafe_writes": false,
qemu: "validate": "/usr/sbin/aide -D -c %s"
qemu: }
qemu: },
qemu: "msg": "failed to validate",
qemu: "stderr": "Cannot access config file:/rootfarid/.ansible/tmp/ansible-tmp-1609189480.6933067-48840-116779293865971/source:No such file or directory\nNo config defined\nConfiguration error\n",
qemu: "stderr_lines": [
qemu: "Cannot access config file:/rootfarid/.ansible/tmp/ansible-tmp-1609189480.6933067-48840-116779293865971/source:No such file or directory",
qemu: "No config defined",
qemu: "Configuration error"
Note: my local user running Packer is farid. The paths in the above log look strange/wrong to me: ~farid/.ansible/...
and /rootfarid/.ansible/tmp/ansible-tmp-...
This issue was originally opened by @nWmCZ as hashicorp/packer#7993. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
The roles_path parameter is not taken into account by the ansible provisioner on azure-arm.
Gist with the related content (forgive me, I don't know how to quote it yet):
https://gist.github.com/nWmCZ/70441b91ee64f3cef0fc14b1e8ec09b5
EDIT:
I already found a workaround:
"ansible_env_vars": [ "ANSIBLE_ROLES_PATH=/git/roles" ]
Currently there is no option for using a custom SSH private key (actually, no option could change it).
For those who want to use a custom key pair in both cloud-init and Ansible, it would be useful to be able to use one key pair without generating a new one every time.
Also, there is no output at all about the one-time Ansible key pair, so even if I wanted to use the one-time key pair, I could do nothing.
This issue was originally opened by @ppennanen as hashicorp/packer#6146. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
When used with the LXD builder, the ansible-local provisioner extra_arguments do not work as expected. See a full example in this gist. Given this provisioner:
{
"type": "ansible-local",
"clean_staging_directory": true,
"playbook_file": "playbook.yml",
"extra_arguments": [
"--extra-vars \"test_variable={{user `test_variable`}}\"",
"-vvv"
]
}
It looks like extra_arguments are added to the command when Ansible is called:
test-image: Executing Ansible: cd /tmp/packer-provisioner-ansible-local/5ad5b523-7407-ba59-c9c3-9c717f25eaa7 && ANSIBLE_FORCE_COLOR=1 PYTHONUNBUFFERED=1 ansible-playbook /tmp/packer-provisioner-ansible-local/5ad5b523-7407-ba59-c9c3-9c717f25eaa7/playbook.yml --extra-vars "packer_build_name=test-image packer_builder_type=lxd packer_http_addr=" --extra-vars "test_variable=cli-test" -vvv -c local -i /tmp/packer-provisioner-ansible-local/5ad5b523-7407-ba59-c9c3-9c717f25eaa7/packer-provisioner-ansible-local031086414
However, the extra variable is not defined and Ansible does not run in extra-verbose -vvv mode:
test-image:
test-image: TASK [debug] *******************************************************************
test-image: ok: [localhost] => {
test-image: "test_variable": "VARIABLE IS NOT DEFINED!"
test-image: }
Packer version: 1.2.2
Host platform: Ubuntu 16.04
Gist: https://gist.github.com/ppennanen/77e64f55fa1d7218e10ecd8b1ae1f2e8
This issue was originally opened by @emansom as hashicorp/packer#10666. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
Providing the ansible-local provisioner with a galaxy_file results in no Ansible Collections being installed.
The upstream Ansible project is aware of this issue and has documented when the behavior of ansible-galaxy results in installation of collections and roles, and when not. The following table, created by @jborean93, elaborates on it further:
| Command | Installed roles? | Installed collections? | Change in behaviour |
| --- | --- | --- | --- |
| ansible-galaxy install -r requirements.yml | Yes | Yes | Will install collections as well as roles |
| ansible-galaxy role install -r requirements.yml | Yes | No (warning in -vvv) | Adds msg in -vvv that collections are skipped |
| ansible-galaxy collection install -r requirements.yml | No (warning in -vvv) | Yes | Adds msg in -vvv that roles are skipped |
| ansible-galaxy install -r requirements.yml -p ./ | Yes | No (warning in default) | Adds warning saying collections are skipped |
Removing the -p argument from the ansible-galaxy command invocation results in the desired behavior of both roles and collections being installed. I locally use a forked version of Packer with this patch applied, with success.
There's a PR containing this patch to be applied, before or after further rework.
To reproduce: attempt to install an Ansible Collection using the ansible-local provisioner; see the provided files below. Modify the builder type to your needs; the builder type is not a prerequisite.
Packer version: 1.6.5 [go1.15.5 linux amd64]
centos8-nginx-lxd.json
{
"provisioners": [
{
"type": "shell",
"inline": "dnf makecache && dnf -y install epel-release && dnf makecache && dnf install -y python39 libsodium python3-bcrypt python3-paramiko python3-pynacl sshpass python3-pyyaml python3-jinja2 platform-python-pip python3-pip && alternatives --set python /usr/bin/python3.9 && pip3 install ansible"
},
{
"type": "ansible-local",
"galaxy_file": "requirements.yml",
"playbook_file": "playbook.yml",
"extra_arguments": [
"ansible_python_interpreter=/usr/bin/python3.9"
]
}
],
"builders": [
{
"type": "lxd",
"name": "centos-nginx",
"image": "images:centos/8-Stream",
"output_image": "centos-8-nginx-test"
}
]
}
playbook.yml
---
- hosts: 127.0.0.1
  connection: local
  collections:
    - nginxinc.nginx_core
  roles:
    - role: nginx
requirements.yml
---
collections:
  - name: nginxinc.nginx_core
    version: 0.3.0
Workstation running Linux, latest stable Packer and LXD.
See following gist. Excerpt where it goes haywire:
2021/02/19 21:19:39 packer-builder-lxd plugin: Executing lxc exec: /bin/sh []string{"/bin/sh", "-c", "lxc exec packer-acme-focal -- /bin/sh -c \"cd /tmp/packer-provisioner-ansible-local/60301cdb-5fcb-3f4b-cc3f-515856a077d2 && ANSIBLE_FORCE_COLOR=1 PYTHONUNBUFFERED=1 ansible-playbook /tmp/packer-provisioner-ansible-local/60301cdb-5fcb-3f4b-cc3f-515856a077d2/playbook.yml --extra-vars \"packer_build_name=acme-focal packer_builder_type=lxd packer_http_addr=ERR_HTTP_ADDR_NOT_IMPLEMENTED_BY_BUILDER -o IdentitiesOnly=yes\" -vv ansible_python_interpreter=/usr/bin/python3.9 -c local -i /tmp/packer-provisioner-ansible-local/60301cdb-5fcb-3f4b-cc3f-515856a077d2/packer-provisioner-ansible-local240743931\""}
==> acme-focal: [WARNING]: No inventory was parsed, only implicit localhost is available
==> acme-focal: [WARNING]: provided hosts list is empty, only localhost is available. Note that
==> acme-focal: the implicit localhost does not match 'all'
==> acme-focal: ERROR! the role 'nginx' was not found in nginxinc.nginx_core:ansible.legacy:/tmp/packer-provisioner-ansible-local/60301cdb-5fcb-3f4b-cc3f-515856a077d2/roles:/root/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles:/tmp/packer-provisioner-ansible-local/60301cdb-5fcb-3f4b-cc3f-515856a077d2
==> acme-focal:
==> acme-focal: The error appears to be in '/tmp/packer-provisioner-ansible-local/60301cdb-5fcb-3f4b-cc3f-515856a077d2/playbook.yml': line 7, column 7, but may
==> acme-focal: be elsewhere in the file depending on the exact syntax problem.
==> acme-focal:
==> acme-focal: The offending line appears to be:
==> acme-focal:
==> acme-focal: roles:
==> acme-focal: - role: nginx
==> acme-focal: ^ here
2021/02/19 21:19:39 packer-builder-lxd plugin: lxc exec execution exited with '1': 'cd /tmp/packer-provisioner-ansible-local/60301cdb-5fcb-3f4b-cc3f-515856a077d2 && ANSIBLE_FORCE_COLOR=1 PYTHONUNBUFFERED=1 ansible-playbook /tmp/packer-provisioner-ansible-local/60301cdb-5fcb-3f4b-cc3f-515856a077d2/playbook.yml --extra-vars "packer_build_name=acme-focal packer_builder_type=lxd packer_http_addr=ERR_HTTP_ADDR_NOT_IMPLEMENTED_BY_BUILDER -o IdentitiesOnly=yes" -vv ansible_python_interpreter=/usr/bin/python3.9 -c local -i /tmp/packer-provisioner-ansible-local/60301cdb-5fcb-3f4b-cc3f-515856a077d2/packer-provisioner-ansible-local240743931'
Looking at how this is implemented right now, the plugin will always resolve an absolute path to the playbook.
Maybe a new input could be added to the plugin that accepts, as input, a playbook that exists in a Galaxy collection.
Example:
provisioner "ansible" {
playbook = "namespace.collection.playbook"
galaxy_file = "requirements.yml"
}
This issue was originally opened by @queglay as hashicorp/packer#10314. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
Packer appears to have a bug in replicating Ansible's standard relative-path behaviour with local collections. Playbooks that use a local (unpublished) collection and would normally work are not working with Packer for me.
I have a packer template in a base folder:
/home/ec2-user/environment/firehawk-main-rollout/firehawk-main/modules/deadline-db-ami
...
provisioner "ansible" {
  playbook_file = "./ansible/deadline-db-install.yaml"
  extra_arguments = [
    "--extra-vars", "user_deadlineuser_name=ubuntu"
  ]
  collections_path = "./ansible/"
  roles_path       = "./ansible/roles"
}
...
Following Geerling's blog (https://www.jeffgeerling.com/blog/2020/ansible-best-practices-using-project-local-collections-and-roles), I would normally apply these settings in ansible.cfg. In this case they go in the HCL build template:
collections_path = "./ansible/"
roles_path = "./ansible/roles"
We do that to utilise a role located in:
/home/ec2-user/environment/firehawk-main-rollout/firehawk-main/modules/deadline-db-ami/ansible/ansible_collections/firehawkvfx/core/roles/s3_bucket_shared
...so a playbook can reference this with:
roles:
  - role: firehawkvfx.core.s3_bucket_shared
This behaviour works correctly for me when I use Ansible directly, but not when I run it through Packer.
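For comparison, the plain-Ansible setup that makes this FQCN role resolution work is usually an ansible.cfg along these lines (a sketch based on the linked Geerling post; the paths are assumptions matching this project layout, not taken from the issue):

```ini
; ansible.cfg in the project root (sketch)
[defaults]
; Ansible expects <collections_paths>/ansible_collections/<namespace>/<collection>
collections_paths = ./ansible/
roles_path        = ./ansible/roles
```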
It produces this error:
==> amazon-ebs.ubuntu18-ami: Executing Ansible: ansible-playbook -e packer_build_name="ubuntu18-ami" -e packer_builder_type=amazon-ebs --ssh-extra-args '-o IdentitiesOnly=yes' --extra-vars user_deadlineuser_name=ubuntu -e ansible_ssh_private_key_file=/tmp/ansible-key434307918 -i /tmp/packer-provisioner-ansible225252949 /home/ec2-user/environment/firehawk-main-rollout/firehawk-main/modules/deadline-db-ami/ansible/deadline-db-install.yaml
amazon-ebs.ubuntu18-ami: ERROR! the role 'firehawkvfx.core.s3_bucket_shared' was not found in /home/ec2-user/environment/firehawk-main-rollout/firehawk-main/modules/deadline-db-ami/ansible/roles:/home/ec2-user/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles:/home/ec2-user/environment/firehawk-main-rollout/firehawk-main/modules/deadline-db-ami/ansible
amazon-ebs.ubuntu18-ami:
amazon-ebs.ubuntu18-ami: The error appears to be in '/home/ec2-user/environment/firehawk-main-rollout/firehawk-main/modules/deadline-db-ami/ansible/deadline-db-install.yaml': line 19, column 7, but may
amazon-ebs.ubuntu18-ami: be elsewhere in the file depending on the exact syntax problem.
amazon-ebs.ubuntu18-ami:
amazon-ebs.ubuntu18-ami: The offending line appears to be:
amazon-ebs.ubuntu18-ami:
amazon-ebs.ubuntu18-ami: roles:
amazon-ebs.ubuntu18-ami: - role: firehawkvfx.core.s3_bucket_shared
amazon-ebs.ubuntu18-ami: ^ here
It appears to only be searching for the role in the specified roles_path, when normally it should be considering the collection namespace.
Also, to be thorough about a potential point of ambiguity, I tried the following as well. It didn't work (and shouldn't, but Ansible's relative paths can be confusing, so I tried it anyway):
collections_path = "./ansible/ansible_collections/"
roles_path = "./ansible/roles"
Packer version: 1.6.4
build {
  sources = [
    "source.amazon-ebs.ubuntu18-ami"
  ]
  provisioner "ansible" {
    playbook_file = "./ansible/deadline-db-install.yaml"
    extra_arguments = [
      "--extra-vars", "user_deadlineuser_name=ubuntu"
    ]
    collections_path = "./ansible/"
    roles_path       = "./ansible/roles"
  }
  post-processor "manifest" {
    output     = "${local.template_dir}/manifest.json"
    strip_path = true
    custom_data = {
      timestamp = "${local.timestamp}"
    }
  }
}
Cloud 9, Amazon Linux 2.
2020/11/28 11:27:27 packer-provisioner-ansible plugin: Creating inventory file for Ansible run...
2020/11/28 11:27:27 ui: ==> amazon-ebs.ubuntu18-ami: Executing Ansible: ansible-playbook -e packer_build_name="ubuntu18-ami" -e packer_builder_type=amazon-ebs --ssh-extra-args '-o IdentitiesOnly=yes' --extra-vars user_deadlineuser_name=ubuntu -e ansible_ssh_private_key_file=/tmp/ansible-key291491776 -i /tmp/packer-provisioner-ansible870890015 /home/ec2-user/environment/firehawk-main-rollout/firehawk-main/modules/deadline-db-ami/ansible/deadline-db-install.yaml
2020/11/28 11:27:27 packer-provisioner-ansible plugin: SSH proxy: serving on 127.0.0.1:45643
2020/11/28 11:27:28 ui: amazon-ebs.ubuntu18-ami: ERROR! the role 'firehawkvfx.core.s3_bucket_shared' was not found in /home/ec2-user/environment/firehawk-main-rollout/firehawk-main/modules/deadline-db-ami/ansible/roles:/home/ec2-user/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles:/home/ec2-user/environment/firehawk-main-rollout/firehawk-main/modules/deadline-db-ami/ansible
2020/11/28 11:27:28 ui: amazon-ebs.ubuntu18-ami:
2020/11/28 11:27:28 ui: amazon-ebs.ubuntu18-ami: The error appears to be in '/home/ec2-user/environment/firehawk-main-rollout/firehawk-main/modules/deadline-db-ami/ansible/deadline-db-install.yaml': line 19, column 7, but may
2020/11/28 11:27:28 ui: amazon-ebs.ubuntu18-ami: be elsewhere in the file depending on the exact syntax problem.
2020/11/28 11:27:28 ui: amazon-ebs.ubuntu18-ami:
2020/11/28 11:27:28 ui: amazon-ebs.ubuntu18-ami: The offending line appears to be:
2020/11/28 11:27:28 ui: amazon-ebs.ubuntu18-ami:
2020/11/28 11:27:28 ui: amazon-ebs.ubuntu18-ami: roles:
2020/11/28 11:27:28 ui: amazon-ebs.ubuntu18-ami: - role: firehawkvfx.core.s3_bucket_shared
2020/11/28 11:27:28 ui: amazon-ebs.ubuntu18-ami: ^ here
2020/11/28 11:27:28 [INFO] (telemetry) ending ansible
I have migrated my Packer files from JSON to HCL2. The Ansible provisioner no longer recognizes the "{{ .WinRMPassword }}" template for adding ansible_password as an extra argument. I checked the plugin's source code and found that when ansible_password is not present in extra_arguments, the connection is winrm, and use_proxy is false, the password gets added to the command line by default and everything works. Maybe this should be added to the documentation somewhere?
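For reference, a sketch of passing the password explicitly under HCL2, assuming the `build.Password` contextual variable carries the communicator password (these names are assumptions for illustration, not taken from the issue):

```hcl
provisioner "ansible" {
  playbook_file = "./playbook.yml"
  use_proxy     = false
  extra_arguments = [
    # Sketch: HCL2 replaces the JSON-era {{ .WinRMPassword }} template
    # with build variables such as build.Password.
    "-e", "ansible_password=${build.Password}",
  ]
}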
Ansible provisioner fails when run by Packer but runs successfully on its own. Same environment, same hosts, same inventory, same vars.
packer build -on-error ask -force -var-file kali-repo/packer/2021.1.json kali-repo/packer/kali-base.json
Running the ansible-playbook command Packer specifies in the output by hand succeeds without failure.
Packer version: 1.7.2
Provisioner setup:
...
"provisioners": [
  {
    "type": "ansible",
    "user": "kali",
    "use_proxy": false,
    "keep_inventory_file": true,
    "playbook_file": "kali-repo/ansible/base-playbook.yml",
    "extra_arguments": [
      "--extra-vars",
      "\"ansible_user=kali ansible_password=kali ansible_sudo_pass=kali ansible_become_method=sudo ansible_become_user=root ansible_host={{ build `Host` }} ansible_port=22 ansible_ssh_common_args='-o StrictHostKeyChecking=no' ansible_password=kali\""
    ],
    "ansible_env_vars": [
      "ANSIBLE_HOST_KEY_CHECKING=False"
    ]
  }
],
...
64 bit Alpine Linux container
Packer:
==> vsphere-clone: Provisioning with Ansible...
vsphere-clone: Not using Proxy adapter for Ansible run:
vsphere-clone: Using ssh keys from Packer communicator...
==> vsphere-clone: Executing Ansible: ansible-playbook -e packer_build_name="vsphere-clone" -e packer_builder_type=vsphere-clone -e packer_http_addr=10.80.0.18:0 --ssh-extra-args '-o IdentitiesOnly=yes' --extra-vars "ansible_user=kali ansible_password=kali ansible_sudo_pass=kali ansible_become_method=sudo ansible_become_user=root ansible_host=172.16.50.19 ansible_port=22 ansible_ssh_common_args='-o StrictHostKeyChecking=no' ansible_password=kali" -e ansible_ssh_private_key_file=/tmp/ansible-key061932775 -i /tmp/packer-provisioner-ansible235206426 /tmp/build/put/kali-repo/ansible/base-playbook.yml
vsphere-clone:
vsphere-clone: PLAY [configure Kali base] *****************************************************
vsphere-clone:
vsphere-clone: TASK [Gathering Facts] *********************************************************
vsphere-clone: fatal: [default]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '172.16.50.19' (ECDSA) to the list of known hosts.\r\nLoad key \"/tmp/ansible-key061932775\": invalid format\r\[email protected]: Permission denied (publickey,password).", "unreachable": true}
vsphere-clone: to retry, use: --limit @/tmp/build/put/kali-repo/ansible/base-playbook.retry
vsphere-clone:
vsphere-clone: PLAY RECAP *********************************************************************
vsphere-clone: default : ok=0 changed=0 unreachable=1 failed=0
vsphere-clone:
==> vsphere-clone: Error executing Ansible: Non-zero exit status: exit status 4
Ansible:
ansible-playbook -e packer_build_name="vsphere-clone" -e packer_builder_type=vsphere-clone -e packer_http_addr=10.80.0.18:0 --ssh-extra-args '-o IdentitiesOnly=yes' --extra-vars "ansible_user=kali ansible_password=kali ansible_sudo_pass=kali ansible_become_method=sudo ansible_become_user=root ansible_host=172.16.50.19 ansible_port=22 ansible_ssh_common_args='-o StrictHostKeyChecking=no' ansible_password=kali" -e ansible_ssh_private_key_file=/tmp/ansible-key061932775 -i /tmp/packer-provisioner-ansible235206426 /tmp/build/put/kali-repo/ansible/base-playbook.yml
PLAY [configure Kali base] ***************************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] *******************************************************************************************************************************************************************************************************************************************************
[WARNING]: Module invocation had junk after the JSON data: AttributeError("module 'platform' has no attribute 'dist'") KeyError('ansible_os_family')
ok: [default]
TASK [Pause for 1 minute] ****************************************************************************************************************************************************************************************************************************************************
Pausing for 5 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [default]
TASK [Install xrdp] **********************************************************************************************************************************************************************************************************************************************************
changed: [default]
This issue was originally opened by @SwampDragons as hashicorp/packer#9436. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
When the ansible provisioner runs, it leaves behind a .ansible folder in the ansible user's homedir. It would be cool if we could clean that directory up so that the image is left in a more pristine state.
See comment hashicorp/packer#9118 (comment) for more details.
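Until the plugin does this itself, a workaround is a trailing shell provisioner; a minimal sketch (assumes the default remote user's home directory):

```hcl
# Manual cleanup sketch, not built-in plugin behavior:
# remove Ansible's leftover state after the playbook has run.
provisioner "shell" {
  inline = ["rm -rf ~/.ansible"]
}
```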
This issue was originally opened by @pjgoodall as hashicorp/packer#5740. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
> lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.3 LTS
Release: 16.04
Codename: xenial
> packer --version
1.1.3
> ansible --version
ansible 2.4.2.0
> lxd --version
2.21
N.B. Ansible was installed using pip
I am trying to provision an LXD container template on Ubuntu with Packer, using the lxd builder and the ansible remote provisioner. Provisioning freezes while Ansible is trying to gather facts. To me, this looks like a regression of hashicorp/packer#5155
source files -> packer-ansible-lxd.zip
gist with output of packer -debug
This issue was originally opened by @shadowink as hashicorp/packer#10264. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
I've hit a bug in packer using the WinRM communicator with the vsphere-clone builder. When Packer spins up the VM it cannot connect via WinRM. If I create an ansible inventory with the VM IP and credentials, then ansible can directly connect to the VM packer created.
The VM I am using as a template in vcenter has WinRM setup and configured correctly for our domain. WinRM was setup by running
ConfigureRemotingForAnsible.ps1 -EnableCredSSP -DisableBasicAuth
and disabling HTTP with
winrm delete winrm/config/Listener?Address=*+Transport=HTTP
2020/11/16 15:20:04 packer-builder-vsphere-clone plugin: [INFO] Attempting WinRM connection...
2020/11/16 15:20:04 packer-builder-vsphere-clone plugin: [DEBUG] connecting to remote shell using WinRM
2020/11/16 15:20:04 packer-builder-vsphere-clone plugin: [ERROR] connection error: http response error: 401 - invalid content type
2020/11/16 15:20:04 packer-builder-vsphere-clone plugin: [ERROR] WinRM connection err: http response error: 401 - invalid content type
While packer repeatedly tries and fails to connect to the VM, create an ansible inventory file targeting the VM:
all:
  hosts:
    10.17.3.2:
      ansible_user: VMADMINACCT
      ansible_password: VMADMINPASS
      ansible_connection: winrm
      ansible_winrm_transport: credssp
      ansible_winrm_server_cert_validation: ignore
Then connect to the VM via Ansible:
ansible -i mypacker.yml -m setup all -vvvv
https://gist.github.com/shadowink/a0aeee534418a0ce7baee859059e76d1#file-mypacker-yml
The packer build will continue to loop through failed connections with the 401 error until the build times out. During this time, ansible can connect to the machine directly and will return the json of a successful setup run.
Packer version: 1.6.5
https://gist.github.com/shadowink/a0aeee534418a0ce7baee859059e76d1#file-mypacker-json
VM is Windows 10 Pro. Packer and ansible are being run on MacOS Catalina 10.15.7
Ansible is v2.10.1
==> vsphere-clone: Waiting for WinRM to become available...
2020/11/16 15:18:54 packer-builder-vsphere-clone plugin: [INFO] Attempting WinRM connection...
2020/11/16 15:18:54 packer-builder-vsphere-clone plugin: [DEBUG] connecting to remote shell using WinRM
2020/11/16 15:19:24 packer-builder-vsphere-clone plugin: [ERROR] connection error: unknown error Post "https://10.17.3.234:5986/wsman": dial tcp 10.17.3.234:5986: i/o timeout
2020/11/16 15:19:24 packer-builder-vsphere-clone plugin: [ERROR] WinRM connection err: unknown error Post "https://10.17.3.234:5986/wsman": dial tcp 10.17.3.234:5986: i/o timeout
2020/11/16 15:19:29 packer-builder-vsphere-clone plugin: [INFO] Attempting WinRM connection...
2020/11/16 15:19:29 packer-builder-vsphere-clone plugin: [DEBUG] connecting to remote shell using WinRM
2020/11/16 15:19:59 packer-builder-vsphere-clone plugin: [ERROR] connection error: unknown error Post "https://10.17.3.234:5986/wsman": dial tcp 10.17.3.234:5986: i/o timeout
2020/11/16 15:19:59 packer-builder-vsphere-clone plugin: [ERROR] WinRM connection err: unknown error Post "https://10.17.3.234:5986/wsman": dial tcp 10.17.3.234:5986: i/o timeout
2020/11/16 15:20:04 packer-builder-vsphere-clone plugin: [INFO] Attempting WinRM connection...
2020/11/16 15:20:04 packer-builder-vsphere-clone plugin: [DEBUG] connecting to remote shell using WinRM
2020/11/16 15:20:04 packer-builder-vsphere-clone plugin: [ERROR] connection error: http response error: 401 - invalid content type
2020/11/16 15:20:04 packer-builder-vsphere-clone plugin: [ERROR] WinRM connection err: http response error: 401 - invalid content type
2020/11/16 15:20:09 packer-builder-vsphere-clone plugin: [INFO] Attempting WinRM connection...
2020/11/16 15:20:09 packer-builder-vsphere-clone plugin: [DEBUG] connecting to remote shell using WinRM
2020/11/16 15:20:10 packer-builder-vsphere-clone plugin: [ERROR] connection error: http response error: 401 - invalid content type
2020/11/16 15:20:10 packer-builder-vsphere-clone plugin: [ERROR] WinRM connection err: http response error: 401 - invalid content type
This issue was originally opened by @timblaktu as hashicorp/packer#11086. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
Here the documentation tells users that they need to use an = between key and value in extra_arguments, but it's mysterious why until you look at one of the many past Packer issues related to this, e.g. here or here. I believe the latter link indicates that Packer does word splitting on whitespace before passing these arguments to Ansible, which is why the arguments come out wrong and the Ansible call never gets off the ground.
I believe the description should explain this, and also suggest that, because whitespace is the problem, users should take care to quote values containing whitespace.
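To illustrate, a hedged sketch of the two forms (the variable names are invented for the example):

```hcl
# Each array element becomes one argument; key=value keeps the pair intact.
extra_arguments = ["--extra-vars", "foo=bar"]

# A value containing whitespace must be quoted, or the words are split
# apart before they reach ansible-playbook (per the behaviour described above).
extra_arguments = ["--extra-vars", "greeting='hello world'"]
```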
I am trying to connect to a Windows VM using OpenSSH. I was able to set up the configuration so that Packer connects properly, but Ansible fails with errors.
packer 1.7.8
source "googlecompute" "windows-ssh-ansible" {
  project_id              = var.project_id
  source_image_project_id = ["windows-cloud"]
  source_image_family     = "windows-2019"
  zone                    = "us-east4-a"
  disk_size               = 50
  machine_type            = "n1-standard-8"
  communicator            = "ssh"
  ssh_username            = var.packer_username
  ssh_private_key_file    = var.ssh_key_file_path
  ssh_timeout             = "1h"
  tags                    = ["packer"]
  preemptible             = true
  image_name              = "gcp-win-2019-full-baseline"
  image_description       = "GCP Windows 2019 Base Image"
  image_labels = {
    server_type = "windows-2019"
  }
  metadata = {
    windows-startup-script-cmd = "net user ${var.packer_username} \"${var.packer_user_password}\" /add /y & wmic UserAccount where Name=\"${var.packer_username}\" set PasswordExpires=False & net localgroup administrators ${var.packer_username} /add & powershell Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0 & echo ${var.ssh_pub_key} > C:\\ProgramData\\ssh\\administrators_authorized_keys & icacls.exe \"C:\\ProgramData\\ssh\\administrators_authorized_keys\" /inheritance:r /grant \"Administrators:F\" /grant \"SYSTEM:F\" & New-ItemProperty -Path \"HKLM:\\SOFTWARE\\OpenSSH\" -Name DefaultShell -Value \"C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe\" -PropertyType String -Force & powershell Start-Service sshd & powershell Set-Service -Name sshd -StartupType 'Automatic' & powershell New-NetFirewallRule -Name 'OpenSSH-Server-In-TCP' -DisplayName 'OpenSSH Server (sshd)' -Enabled True -Direction Inbound -Protocol TCP -Action Allow -LocalPort 22 & powershell.exe -NoProfile -ExecutionPolicy Bypass -Command \"Set-ExecutionPolicy -ExecutionPolicy bypass -Force\""
  }
  account_file = var.account_file_path
}
build {
  sources = ["sources.googlecompute.windows-ssh-ansible"]
  provisioner "ansible" {
    playbook_file           = "./playbooks/playbook.yml"
    use_proxy               = false
    ansible_ssh_extra_args  = ["-o StrictHostKeyChecking=no -o IdentitiesOnly=yes"]
    ssh_authorized_key_file = "/Users/user1/.ssh/packer_gcp_key.pub"
    extra_arguments = [
      "-e", "win_packages=${var.win_packages}",
      "-e", "ansible_shell_type=powershell",
      "-e", "ansible_shell_executable=None"
    ]
    user = var.packer_username
  }
}
21/11/30 20:34:36 packer-plugin-googlecompute_v1.0.6_x5.0_darwin_amd64 plugin: 2021/11/30 20:34:36 [DEBUG] TCP connection to SSH ip/port failed: dial tcp 34.86.165.202:22: i/o timeout
2021/11/30 20:34:56 packer-plugin-googlecompute_v1.0.6_x5.0_darwin_amd64 plugin: 2021/11/30 20:34:56 [DEBUG] TCP connection to SSH ip/port failed: dial tcp 34.86.165.202:22: i/o timeout
2021/11/30 20:35:01 packer-plugin-googlecompute_v1.0.6_x5.0_darwin_amd64 plugin: 2021/11/30 20:35:01 [INFO] Attempting SSH connection to 34.86.165.202:22...
2021/11/30 20:35:01 packer-plugin-googlecompute_v1.0.6_x5.0_darwin_amd64 plugin: 2021/11/30 20:35:01 [DEBUG] reconnecting to TCP connection for SSH
2021/11/30 20:35:01 packer-plugin-googlecompute_v1.0.6_x5.0_darwin_amd64 plugin: 2021/11/30 20:35:01 [DEBUG] handshaking with SSH
2021/11/30 20:35:02 packer-plugin-googlecompute_v1.0.6_x5.0_darwin_amd64 plugin: 2021/11/30 20:35:02 [DEBUG] handshake complete!
2021/11/30 20:35:02 packer-plugin-googlecompute_v1.0.6_x5.0_darwin_amd64 plugin: 2021/11/30 20:35:02 [DEBUG] Opening new ssh session
2021/11/30 20:35:04 packer-plugin-googlecompute_v1.0.6_x5.0_darwin_amd64 plugin: 2021/11/30 20:35:04 [ERROR] RequestAgentForwarding: &errors.errorString{s:"forwarding request denied"}
==> googlecompute.windows-ssh-ansible: Connected to SSH!
2021/11/30 20:35:04 packer-plugin-googlecompute_v1.0.6_x5.0_darwin_amd64 plugin: 2021/11/30 20:35:04 Running the provision hook
2021/11/30 20:35:04 [INFO] (telemetry) Starting provisioner ansible
2021/11/30 20:35:05 packer-provisioner-ansible plugin: ansible-playbook version: 2.11.6
==> googlecompute.windows-ssh-ansible: Provisioning with Ansible...
googlecompute.windows-ssh-ansible: Using ssh keys from Packer communicator...
googlecompute.windows-ssh-ansible: Not using Proxy adapter for Ansible run:
googlecompute.windows-ssh-ansible: Using ssh keys from Packer communicator...
2021/11/30 20:35:05 packer-provisioner-ansible plugin: Creating inventory file for Ansible run...
==> googlecompute.windows-ssh-ansible: Executing Ansible: ansible-playbook -e packer_build_name="windows-ssh-ansible" -e packer_builder_type=googlecompute --ssh-extra-args '-o StrictHostKeyChecking=no -o IdentitiesOnly=yes' -e win_packages=git,notepadplusplus,python3 -e ansible_shell_type=powershell -e ansible_shell_executable=None -e ansible_ssh_private_key_file=/Users/lmayorga/.ssh/packer_gcp_key -i /var/folders/y_/16kgz8gd39nb9q7376qr6ly00000gn/T/packer-provisioner-ansible91967104 /Users/lmayorga/repos/packer/win/ansible/playbooks/playbook.yml
googlecompute.windows-ssh-ansible:
googlecompute.windows-ssh-ansible: PLAY [Default Playbook] ********************************************************
googlecompute.windows-ssh-ansible:
googlecompute.windows-ssh-ansible: TASK [Gathering Facts] *********************************************************
googlecompute.windows-ssh-ansible: fatal: [default]: FAILED! => {"ansible_facts": {}, "changed": false, "failed_modules": {"ansible.legacy.setup": {"failed": true, "module_stderr": "Warning: Permanently added '34.86.165.202' (ED25519) to the list of known hosts.\r\nParameter format not correct - ;\r\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}}, "msg": "The following modules failed to execute: ansible.legacy.setup\n"}
googlecompute.windows-ssh-ansible:
googlecompute.windows-ssh-ansible: PLAY RECAP *********************************************************************
googlecompute.windows-ssh-ansible: default : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
googlecompute.windows-ssh-ansible:
2021/11/30 20:35:08 [INFO] (telemetry) ending ansible
==> googlecompute.windows-ssh-ansible: Provisioning step had errors: Running the cleanup provisioner, if present...
2021/11/30 20:35:08 packer-plugin-googlecompute_v1.0.6_x5.0_darwin_amd64 plugin: 2021/11/30 20:35:08 Skipping cleanup of IAP tunnel; "iap" is false.
==> googlecompute.windows-ssh-ansible: Deleting instance...
2021/11/30 20:35:09 packer-plugin-googlecompute_v1.0.6_x5.0_darwin_amd64 plugin: 2021/11/30 20:35:09 Retryable error: retrying for state DONE, got RUNNING
2021/11/30 20:35:12 packer-plugin-googlecompute_v1.0.6_x5.0_darwin_amd64 plugin: 2021/11/30 20:35:12 Retryable error: retrying for state DONE, got RUNNING
2021/11/30 20:35:15 packer-plugin-googlecompute_v1.0.6_x5.0_darwin_amd64 plugin: 2021/11/30 20:35:15 Retryable error: retrying for state DONE, got RUNNING
2021/11/30 20:35:17 packer-plugin-googlecompute_v1.0.6_x5.0_darwin_amd64 plugin: 2021/11/30 20:35:17 Retryable error: retrying for state DONE, got RUNNING
2021/11/30 20:35:19 packer-plugin-googlecompute_v1.0.6_x5.0_darwin_amd64 plugin: 2021/11/30 20:35:19 Retryable error: retrying for state DONE, got RUNNING
2021/11/30 20:35:22 packer-plugin-googlecompute_v1.0.6_x5.0_darwin_amd64 plugin: 2021/11/30 20:35:22 Retryable error: retrying for state DONE, got RUNNING
2021/11/30 20:35:24 packer-plugin-googlecompute_v1.0.6_x5.0_darwin_amd64 plugin: 2021/11/30 20:35:24 Retryable error: retrying for state DONE, got RUNNING
googlecompute.windows-ssh-ansible: Instance has been deleted!
==> googlecompute.windows-ssh-ansible: Deleting disk...
googlecompute.windows-ssh-ansible: Disk has been deleted!
2021/11/30 20:35:29 [INFO] (telemetry) ending googlecompute.windows-ssh-ansible
==> Wait completed after 7 minutes 25 seconds
2021/11/30 20:35:29 machine readable: error-count []string{"1"}
==> Some builds didn't complete successfully and had errors:
2021/11/30 20:35:29 machine readable: googlecompute.windows-ssh-ansible,error []string{"Error executing Ansible: Non-zero exit status: exit status 2"}
==> Builds finished but no artifacts were created.
This issue was originally opened by @npearson72 as hashicorp/packer#10485. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
Using multi-line arguments in extra_arguments does not work.
I have the following Ansible provisioner block. When the --extra-vars arguments are arranged as a single line, all works as expected:
...
{
  "type": "ansible",
  "playbook_file": "ansible/playbook.yml",
  "extra_arguments": [
    "--extra-vars",
    "ssh_user={{user `ssh_user`}} ssh_port={{user `ssh_port`}} pg_data_dir={{user `pg_data_dir`}}",
    "--tags",
    "{{ user `tags`}}"
  ]
}
...
When I arrange the --extra-vars as multi-line (valid JSON syntax), Ansible rejects the command and the script fails to run:
...
"provisioners": [
  {
    "type": "ansible",
    "playbook_file": "ansible/playbook.yml",
    "extra_arguments": [
      "--extra-vars",
      "ssh_user={{user `ssh_user`}}",
      "ssh_port={{user `ssh_port`}}",
      "pg_data_dir={{user `pg_data_dir`}}",
      "--tags",
      "{{ user `tags`}}"
    ]
  }
]
...
Interestingly, the terminal output shows the same command in both forms, yet the second is rejected.
==> digitalocean: Executing Ansible: ansible-playbook -e packer_build_name="digitalocean" -e packer_builder_type=digitalocean --ssh-extra-args '-o IdentitiesOnly=yes' --extra-vars ssh_user=deployer ssh_port=1111 pg_data_dir=/mnt/pg_data --tags database -e ansible_ssh_private_key_file=/var/folders/jk/_4zr0chs4h13kpkzrfr_gkm40000gn/T/ansible-key233643224 -i /var/folders/jk/_4zr0chs4h13kpkzrfr_gkm40000gn/T/packer-provisioner-ansible381681239 /Users/me/Dev/project/devops/infrastructure/packer/ansible/playbook.yml
==> digitalocean: Executing Ansible: ansible-playbook -e packer_build_name="digitalocean" -e packer_builder_type=digitalocean --ssh-extra-args '-o IdentitiesOnly=yes' --extra-vars ssh_user=deployer ssh_port=1111 pg_data_dir=/mnt/pg_data --tags database -e ansible_ssh_private_key_file=/var/folders/jk/_4zr0chs4h13kpkzrfr_gkm40000gn/T/ansible-key108274220 -i /var/folders/jk/_4zr0chs4h13kpkzrfr_gkm40000gn/T/packer-provisioner-ansible576775579 /Users/me/Dev/project/devops/infrastructure/packer/ansible/playbook.yml
They appear identical, but the second throws a non-zero exit, with Ansible complaining that the arguments were not correct.
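Given the behaviour above, one workaround is to give each variable its own -e flag, so that no single element relies on whitespace splitting (a sketch, not taken from the issue):

```json
"extra_arguments": [
  "-e", "ssh_user={{user `ssh_user`}}",
  "-e", "ssh_port={{user `ssh_port`}}",
  "-e", "pg_data_dir={{user `pg_data_dir`}}",
  "--tags", "{{ user `tags`}}"
]
```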
Packer version: 1.6.6
https://gist.github.com/npearson72/a363b559a72f703905a643d4d4ae0450
MacOS 11.1
ansible 2.10.3
https://gist.github.com/npearson72/b998db9cac3af5a7b1b72692f30b1e3d
This issue was originally opened by @petr-tichy as hashicorp/packer#10049. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
The Ansible provisioner with the Azure chroot builder (azure-chroot) hangs indefinitely at the first Ansible task. Using Ansible 2.7.7 with Packer 1.6.4.
packer build
azure-chroot: Provisioning with Ansible...
Packer v1.6.4
"builders": [
  {
    "type": "azure-chroot",
    "subscription_id": "{{user `subscription_id`}}",
    "image_resource_id": "/subscriptions/{{vm `subscription_id`}}/resourceGroups/{{vm `resource_group`}}/providers/Microsoft.Compute/images/{{user `managed_image_name`}}",
    "source": "{{user `image_publisher`}}:{{user `image_offer`}}:{{user `image_sku`}}:latest"
  }
],
"provisioners": [
  {
    "type": "ansible",
    "playbook_file": "playbook.yml",
    "extra_arguments": "-vvvv"
  }
]
Debian GNU/Linux 10 (buster) on Azure
==> azure-chroot: Provisioning with Ansible...
azure-chroot: Setting up proxy adapter for Ansible....
2020/10/06 09:53:11 packer-provisioner-ansible plugin: Creating inventory file for Ansible run...
==> azure-chroot: Executing Ansible: ansible-playbook -e packer_build_name="azure-chroot" -e packer_builder_type=azure-chroot -vvvv -e ansible_ssh_private_key_file=/tmp/ansible-key153810056 -i /tmp/packer-provisioner-ansible739888199 /playbook.yml
2020/10/06 09:53:11 packer-provisioner-ansible plugin: SSH proxy: serving on 127.0.0.1:35267
azure-chroot: ansible-playbook 2.7.7
azure-chroot: config file = /etc/ansible/ansible.cfg
azure-chroot: configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
azure-chroot: ansible python module location = /usr/lib/python3/dist-packages/ansible
azure-chroot: executable location = /usr/bin/ansible-playbook
azure-chroot: python version = 3.7.3 (default, Jul 25 2020, 13:03:44) [GCC 8.3.0]
azure-chroot: Using /etc/ansible/ansible.cfg as config file
azure-chroot: setting up inventory plugins
azure-chroot: /tmp/packer-provisioner-ansible739888199 did not meet host_list requirements, check plugin documentation if this is unexpected
azure-chroot: /tmp/packer-provisioner-ansible739888199 did not meet script requirements, check plugin documentation if this is unexpected
azure-chroot: Parsed /tmp/packer-provisioner-ansible739888199 inventory source with ini plugin
azure-chroot: Loading callback plugin default of type stdout, v2.0 from /usr/lib/python3/dist-packages/ansible/plugins/callback/default.py
azure-chroot:
azure-chroot: PLAYBOOK: playbook.yml *********************************************************
azure-chroot: 1 plays in /playbook.yml
azure-chroot:
azure-chroot: PLAY [test] ********************************************************************
azure-chroot: META: ran handlers
azure-chroot:
azure-chroot: TASK [install packages] ********************************************************
azure-chroot: task path: /playbook.yml:6
azure-chroot: <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: root
azure-chroot: <127.0.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=35267 -o 'IdentityFile="/tmp/ansible-key153810056"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/608cf842cc 127.0.0.1 '/bin/sh -c '"'"'echo ~root && sleep 0'"'"''
2020/10/06 09:53:12 packer-provisioner-ansible plugin: SSH proxy: accepted connection
2020/10/06 09:53:12 packer-provisioner-ansible plugin: authentication attempt from 127.0.0.1:57744 to 127.0.0.1:35267 as root using none
2020/10/06 09:53:12 packer-provisioner-ansible plugin: authentication attempt from 127.0.0.1:57744 to 127.0.0.1:35267 as root using publickey
2020/10/06 09:53:12 packer-provisioner-ansible plugin: new exec request: /bin/sh -c 'echo ~root && sleep 0'
2020/10/06 09:53:12 packer-builder-azure-chroot plugin: Executing: /bin/sh []string{"/bin/sh", "-c", "chroot /mnt/packer-azure-chroot-disks/sdd /bin/sh -c \"/bin/sh -c 'echo ~root && sleep 0'\""}
When I run the ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=35267 -o 'IdentityFile="/tmp/ansible-key153810056"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/608cf842cc 127.0.0.1 '/bin/sh -c '"'"'echo ~root && sleep 0'"'"'' command from another terminal, I get the expected output /root and the channel stays open. Closing it with Ctrl-D or redirecting input from /dev/null completes the command as expected:
First, connecting with SSH:
root@vm:/# ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=40559 -o 'IdentityFile="/tmp/ansible-key526524123"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ff86607c8d 127.0.0.1 '/bin/sh -c '"'"'echo ~root && sleep 0'"'"''
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1d 10 Sep 2019
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: resolve_canonicalize: hostname 127.0.0.1 is address
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 3233
debug3: mux_client_request_session: session request sent
/root
Packer outputs:
2020/10/06 10:11:45 packer-provisioner-ansible plugin: new exec request: /bin/sh -c 'echo ~root && sleep 0'
2020/10/06 10:11:45 packer-builder-azure-chroot plugin: Executing: /bin/sh []string{"/bin/sh", "-c", "chroot /mnt/packer-azure-chroot-disks/sdd /bin/sh -c \"/bin/sh -c 'echo ~root && sleep 0'\""}
and after sending Ctrl-D from the client:
2020/10/06 10:14:54 [INFO] 0 bytes written for 'stdin'
2020/10/06 10:14:54 packer-provisioner-ansible plugin: [INFO] 0 bytes written for 'stdin'
2020/10/06 10:14:54 [INFO] 6 bytes written for 'stdout'
2020/10/06 10:14:54 [INFO] 0 bytes written for 'stderr'
2020/10/06 10:14:54 packer-builder-azure-chroot plugin: Chroot execution exited with '0': '"/bin/sh -c 'echo ~root && sleep 0'"'
2020/10/06 10:14:54 packer-builder-azure-chroot plugin: [INFO] RPC endpoint: Communicator ended with: 0
2020/10/06 10:14:54 [INFO] RPC client: Communicator ended with: 0
2020/10/06 10:14:54 [INFO] RPC endpoint: Communicator ended with: 0
2020/10/06 10:14:54 packer-provisioner-ansible plugin: [INFO] 6 bytes written for 'stdout'
2020/10/06 10:14:54 packer-provisioner-ansible plugin: [INFO] 0 bytes written for 'stderr'
2020/10/06 10:14:54 packer-provisioner-ansible plugin: [INFO] RPC client: Communicator ended with: 0
I'm creating an Ubuntu 21.10 Raspberry Pi image using packer-builder-arm and Ansible 2.9.6. My Ansible playbook is very simple (so far), but it hangs when it tries to run echo ~root && sleep 0, which, AFAICT, Ansible runs to test the connection. When I connect via SSH to the Packer SSH communicator and run a command, the command output returns but the connection never closes, which is what I think Ansible is waiting on.
% sudo PACKER_DEBUG=1 packer build packer/ubuntu_server_21.10_arm64.json.pkr.hcl
v1.7.8
source "arm" "raspberry_pi_k8s" {
file_checksum_type = "sha256"
file_checksum_url = "http://cdimage.ubuntu.com/releases/21.10/release/SHA256SUMS"
file_target_extension = "xz"
file_unarchive_cmd = ["xz", "--decompress", "$ARCHIVE_PATH"]
file_urls = ["http://cdimage.ubuntu.com/releases/21.10/release/ubuntu-21.10-preinstalled-server-arm64+raspi.img.xz"]
image_build_method = "reuse"
image_chroot_env = ["PATH=/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin"]
image_partitions {
filesystem = "fat"
mountpoint = "/boot/firmware"
name = "boot"
size = "256M"
start_sector = "2048"
type = "c"
}
image_partitions {
filesystem = "ext4"
mountpoint = "/"
name = "root"
size = "2.8G"
start_sector = "526336"
type = "83"
}
image_path = "ubuntu-21.10.img"
image_size = "3.1G"
image_type = "dos"
qemu_binary_destination_path = "/usr/bin/qemu-aarch64-static"
qemu_binary_source_path = "/usr/bin/qemu-aarch64-static"
}
build {
sources = ["source.arm.raspberry_pi_k8s"]
provisioner "ansible" {
playbook_file = "ansible/playbook.yml"
extra_arguments = [ "-vvvv" ]
}
}
---
- name: 'Provision image'
hosts: default
become: true
gather_facts: no
vars_files:
- vars.yml
tasks:
- name: Debug
debug:
msg: "Testing 1 2 3"
- name: Create user group
group:
name: "{{ username }}"
state: present
Ubuntu 20.04 on x86_64
% sudo ssh -v 127.0.0.1 -o Port=34669 -o 'IdentityFile="/tmp/ansible-key544648103"' 'pwd'
OpenSSH_8.2p1 Ubuntu-4ubuntu0.4, OpenSSL 1.1.1f 31 Mar 2020
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug1: Connecting to 127.0.0.1 [127.0.0.1] port 34669.
debug1: Connection established.
debug1: identity file /tmp/ansible-key544648103 type -1
debug1: identity file /tmp/ansible-key544648103-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_8.2p1 Ubuntu-4ubuntu0.4
debug1: Remote protocol version 2.0, remote software version Go
debug1: no match: Go
debug1: Authenticating to 127.0.0.1:34669 as 'root'
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: algorithm: [email protected]
debug1: kex: host key algorithm: ssh-rsa
debug1: kex: server->client cipher: [email protected] MAC: <implicit> compression: none
debug1: kex: client->server cipher: [email protected] MAC: <implicit> compression: none
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ssh-rsa SHA256:9ZU07UhwYA8vXs1g6QILOxwnzOYM4peD+AX1ZoB1tks
debug1: Host '[127.0.0.1]:34669' is known and matches the RSA host key.
debug1: Found key in /root/.ssh/known_hosts:16
debug1: rekey out after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: rekey in after 134217728 blocks
debug1: Will attempt key: /tmp/ansible-key544648103 explicit
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey
debug1: Next authentication method: publickey
debug1: Trying private key: /tmp/ansible-key544648103
debug1: Authentication succeeded (publickey).
Authenticated to 127.0.0.1 ([127.0.0.1]:34669).
debug1: channel 0: new [client-session]
debug1: Entering interactive session.
debug1: pledge: network
debug1: Sending environment.
debug1: Sending env LANG = en_US.UTF-8
debug1: Sending command: pwd
/
<Command hangs here>
^Cdebug1: channel 0: free: client-session, nchannels 1
Killed by signal 2.
When setting the ssh_authorized_key_file property, it seems to be ignored. The temporary key is still generated (and is empty, per another issue) and is passed to Ansible via the ansible_ssh_private_key_file parameter.
provisioner "ansible" {
playbook_file = "main.yml"
use_proxy = false
ansible_env_vars = ["ANSIBLE_CONFIG=ansible.cfg"]
ssh_authorized_key_file = "privatekey.file"
}
packer 1.7.0
ansible 2.9.10
This happens with Windows and RHEL.
Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request.
If you are interested in working on this issue or have submitted a pull request, please leave a comment.
There is currently no option to easily pass --force-with-deps to ansible-galaxy, although there is an option for --force.
I am currently running into an issue where two different Ansible playbooks require two versions of the same role, but that role is a dependency. As a result, --force will not install the different version, and I am forced to manually delete the role so that it gets refreshed each time. Adding this option would resolve the problem.
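Until such an option exists, the manual refresh described above can be scripted as a pre-build step; the role name and roles path below are placeholders, not from the original report:

```shell
# hypothetical pre-build step: drop the stale copy so --force reinstalls it
rm -rf roles/namespace.shared_role
ansible-galaxy install -r requirements.yml --roles-path roles --force
```

This is only a workaround; --force-with-deps would let ansible-galaxy refresh the dependency itself.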
This issue was originally opened by @gamethis as hashicorp/packer#9298. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
I would like a use_port option added to the ansible (remote) provisioner: a variable that adds ansible_port either to the command line or to the inventory file. This would allow the dynamic port set up by Packer to be used with a static inventory file rather than the dynamic one.
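For context, a static inventory can already carry a fixed port via the ansible_port host variable; the request is for Packer to inject its dynamic port the same way. A minimal hand-written sketch, where host, port, and user are placeholders:

```ini
; hypothetical static inventory entry; all values are placeholders
default ansible_host=127.0.0.1 ansible_port=2222 ansible_user=packer
```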
This issue was originally opened by @lmayorga1980 in hashicorp/packer#11412 and has been migrated to this repository. The original issue description is below.
Allow a variable of type list to be used in extra_arguments in the ansible-local provisioner.
variable "packages" {
type = list(string)
description = "List of Packages"
default = ["vim","net-tools"]
}
source "googlecompute" "ubuntu-ansible" {
project_id = var.project_id
source_image_project_id = ["ubuntu-os-pro-cloud"]
source_image_family = "ubuntu-pro-1804-lts"
ssh_username = "ubuntu"
machine_type = "g1-small"
zone = "us-east4-b"
image_name = "ubuntu-custom-2"
image_description = "custom ubuntu image"
disk_size = 10
preemptible = true
tags = ["packer","image"]
image_labels = {
built_by = "packer"
}
account_file = var.account_file_path
}
build {
name = "ubuntu-custom-2"
sources = [
"sources.googlecompute.ubuntu-ansible"
]
provisioner "ansible-local" {
playbook_file = "./playbooks/playbook.yml"
extra_arguments = ["--extra-vars","packages=${var.packages}"]
}
}
1.7.8
Error: Failed preparing provisioner-block "ansible-local" ""
on gcp-ubuntu-ansible.pkr.hcl line 34:
(source code not available)
gcp-ubuntu-ansible.pkr.hcl:36,51-63: Invalid template interpolation value;
Cannot include the given value in a string template: string required.
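A workaround that avoids this error is to flatten the list into a string before interpolation, for example with join() (or jsonencode() if the playbook expects a JSON list). A sketch against the configuration above, assuming the playbook can split a comma-separated string:

```hcl
provisioner "ansible-local" {
  playbook_file = "./playbooks/playbook.yml"
  # join() yields "vim,net-tools", which interpolates cleanly into the template;
  # the playbook must then split the string (or use jsonencode(var.packages) instead)
  extra_arguments = ["--extra-vars", "packages=${join(",", var.packages)}"]
}
```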
When using the plugin on Fedora 33+, Ansible can't log in to the VM, and the build fails with:
==> amazon-ebs.debian: failed to handshake
amazon-ebs.debian: fatal: [default]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '[127.0.0.1]:38133' (RSA) to the list of known hosts.\r\nsign_and_send_pubkey: no mutual signature supported\r\[email protected]: Permission denied (publickey).", "unreachable": true}
I have found that sign_and_send_pubkey: no mutual signature supported seems to be caused by the temporary keys generated for Ansible, which use the DSA format, now considered insecure and no longer supported by default.
A temporary solution is to authorize DSA keys on the client, which has worked for me:
echo "PubkeyAcceptedKeyTypes +ssh-dss" >> ~/.ssh/config
chmod 0600 ~/.ssh/config
Packer 1.7.3 and the default Ansible plugin.
Fedora 34, amd64.
amazon-ebs.debian:
amazon-ebs.debian: PLAY [Configure the system] ****************************************************
amazon-ebs.debian:
amazon-ebs.debian: TASK [Gathering Facts] *********************************************************
==> amazon-ebs.debian: failed to handshake
amazon-ebs.debian: fatal: [default]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '[127.0.0.1]:33277' (RSA) to the list of known hosts.\r\nsign_and_send_pubkey: no mutual signature supported\r\[email protected]: Permission denied (publickey).", "unreachable": true}
amazon-ebs.debian:
amazon-ebs.debian: PLAY RECAP *********************************************************************
amazon-ebs.debian: default : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
This issue was originally opened by @Syntax3rror404 as hashicorp/packer#10752. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
Packer is running with the Proxmox builder. The shell provisioner works fine, and the Ansible provisioner runs after the OS is installed, but it fails with the error: invalid format. The same code works with the vmware-iso builder.
To reproduce: run a VM (e.g. Ubuntu 20) with the Ansible provisioner on a Proxmox hypervisor.
1.7.0
"provisioners": [
{
"type": "shell",
"inline": ["[% shell_provision %]"]
}{% if provision_with_ansible %},
{
"type": "ansible",
"playbook_file": "./playbook.yml",
{% if debug is defined %}"extra_arguments": [ "-vvv" ],{% endif %}
"use_proxy": "false"
}
{% endif %}
]
Proxmox hypervisor latest version, Packer template OS Ubuntu 20 with qemu-guest-agent installed
proxmox-iso: Reading state information...
proxmox-iso: machine-id was reset successfully
==> proxmox-iso: Provisioning with Ansible...
proxmox-iso: Not using Proxy adapter for Ansible run:
proxmox-iso: Using ssh keys from Packer communicator...
==> proxmox-iso: Executing Ansible: ansible-playbook -e packer_build_name="proxmox-iso" -e packer_builder_type=proxmox-iso -e packer_http_addr=10.31.104.137:8000 --ssh-extra-args '-o IdentitiesOnly=yes' -e ansible_ssh_private_key_file=/tmp/ansible-key042128868 -i /tmp/packer-provisioner-ansible471370739 /tmp/packer/labul/u20-2021-03-11/playbook.yml
proxmox-iso:
proxmox-iso: PLAY [Provision u20-2021-03-11] ************************************************
proxmox-iso:
proxmox-iso: TASK [Gathering Facts] *********************************************************
proxmox-iso: fatal: [default]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '10.31.101.137' (ECDSA) to the list of known hosts.\r\nLoad key \"/tmp/ansible-key042128868\": invalid format\r\[email protected]: Permission denied (publickey,password).", "unreachable": true}
proxmox-iso:
proxmox-iso: PLAY RECAP *********************************************************************
proxmox-iso: default : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
proxmox-iso:
==> proxmox-iso: Provisioning step had errors: Running the cleanup provisioner, if present...
==> proxmox-iso: Stopping VM
==> proxmox-iso: Deleting VM
Build 'proxmox-iso' errored after 8 minutes 38 seconds: Error executing Ansible: Non-zero exit status: exit status 4
==> Wait completed after 8 minutes 38 seconds
==> Some builds didn't complete successfully and had errors:
--> proxmox-iso: Error executing Ansible: Non-zero exit status: exit status 4
==> Builds finished but no artifacts were created.
This issue was originally opened by @guybarzi as hashicorp/packer#10639. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
Hi,
I'm trying to provision an Ubuntu 18.04.5 machine with the Ansible provisioner after building it with vsphere-iso.
However, Ansible can't SSH to the machine; it fails with the following error:
fatal: [default]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '10.0.0.1' (ECDSA) to the list of known hosts.\r\nLoad key \"/tmp/ansible-key305487509\": invalid format\r\[email protected]: Permission denied (publickey,password).\r\n", "unreachable": true}
After looking into the problem a little, it seems that the key file created is empty. I use ssh_password for the SSH communicator instead of a key file. When I insert ansible_ssh_pass as an extra argument for Ansible, everything works. However, I think it should work automatically, even with ssh_password.
I would appreciate help in fixing this issue or telling me what I did wrong if the problem is on my end.
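The workaround mentioned above (passing ansible_ssh_pass explicitly) can be sketched like this; var.ssh_password is the same communicator password used in the source block:

```hcl
provisioner "ansible" {
  playbook_file = "ubuntu_18_04_5/playbook.yml"
  use_proxy     = false
  # hedged workaround: hand the communicator password to Ansible directly
  extra_arguments = ["--extra-vars", "ansible_ssh_pass=${var.ssh_password}"]
}
```

Note that this exposes the password on the ansible-playbook command line and in Packer's debug logs.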
Here is the configuration associated with the issue:
source "vsphere-iso" "ubuntu_18_04_5" {
CPUs = 1
RAM = 1024
boot_command = ["<enter><wait><f6><wait><esc><wait>",
"<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
"<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
"<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
"<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
"<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
"<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
"<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
"<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
"<bs><bs><bs>",
"/install/vmlinuz",
" initrd=/install/initrd.gz",
" priority=critical",
" locale=en_US",
" file=/media/preseed.cfg",
"<enter>"]
boot_order = "disk,cdrom"
cluster = var.cluster
convert_to_template = true
datacenter = var.datacenter
datastore = var.datastore
disk_controller_type = ["pvscsi"]
floppy_files = [var.preseed_path]
folder = var.vm_folder
guest_os_type = "ubuntu64Guest"
host = var.host
insecure_connection = true
iso_checksum = "sha256:8c5fc24894394035402f66f3824beb7234b757dd2b5531379cb310cedfdf0996"
iso_url = "http://cdimage.ubuntu.com/releases/18.04/release/ubuntu-18.04.5-server-amd64.iso"
network_adapters {
network = var.network
network_card = "vmxnet3"
}
storage {
disk_size = "10240"
disk_thin_provisioned = true
}
vcenter_server = var.vcenter
username = var.vcenter_user
password = var.vcenter_password
ssh_username = var.ssh_user
ssh_password = var.ssh_password
vm_name = "Ubuntu_18_04_5-Packer"
notes = "Packer™ Created"
}
build {
sources = ["source.vsphere-iso.ubuntu_18_04_5"]
provisioner "ansible" {
playbook_file = "ubuntu_18_04_5/playbook.yml"
keep_inventory_file = true
use_proxy = false
ansible_env_vars = ["ANSIBLE_HOST_KEY_CHECKING=False"]
}
}
Thanks in advance!
This issue was originally opened by @Olesp as hashicorp/packer#10838. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
When using the Ansible provisioner, I try to download roles from Ansible Galaxy with a requirements.yml file.
When fetching the roles I get the error <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1123)>.
Adding galaxy_command="ansible-galaxy -c" produces a different error: read |0: file already closed.
Running the Packer template to create an AMI reproduces the issue.
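As an alternative to overriding galaxy_command, certificate checking can be disabled in ansible.cfg instead; this is a sketch, and availability of the ignore_certs option depends on the Ansible version in use:

```ini
; ansible.cfg — roughly equivalent to ansible-galaxy's -c/--ignore-certs flag
[galaxy]
ignore_certs = True
```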
1.7.0
source "amazon-ebs" "example"{
assume_role {
role_arn = "arn:aws:iam::xxxxx:role/xxxxxx"
}
instance_type = "t2.micro"
source_ami = "ami-01e7ca2ef94a0ae86"
region = "us-east-2"
ssh_username = "ubuntu"
}
build {
source "source.amazon-ebs.example" {
ami_name = "packer_generated_ami-{{timestamp}}"
ami_description = "This is the first AMI created by packer"
}
provisioner "ansible" {
playbook_file = "./project_1/playbooks/playbook.yml"
galaxy_file="./project_1/requirements.yml"
roles_path="./project_1/roles"
}
}
Mac OS Big Sur 11.2.3
Python 3.9
Ansible 2.10.7
==> amazon-ebs.example: Provisioning with Ansible...
amazon-ebs.example: Setting up proxy adapter for Ansible....
amazon-ebs.example: Executing Ansible Galaxy
amazon-ebs.example: Starting galaxy role install process
amazon-ebs.example: [WARNING]: - andrewrothstein.miniconda was NOT installed successfully: Unknown
amazon-ebs.example: error when attempting to call Galaxy at 'https://galaxy.ansible.com/api/':
amazon-ebs.example: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed:
amazon-ebs.example: unable to get local issuer certificate (_ssl.c:1123)>
amazon-ebs.example: ERROR! - you can use --ignore-errors to skip failed roles and finish processing the list.
==> amazon-ebs.example: Provisioning step had errors: Running the cleanup provisioner, if present...
==> amazon-ebs.example: Terminating the source AWS instance...
==> amazon-ebs.example: Cleaning up any extra volumes...
==> amazon-ebs.example: No volumes to clean up, skipping
==> amazon-ebs.example: Deleting temporary security group...
==> amazon-ebs.example: Deleting temporary keypair...
Build 'amazon-ebs.example' errored after 1 minute 19 seconds: Error executing Ansible: Error executing Ansible Galaxy: Non-zero exit status: exit status 1
This issue was originally opened by @timblaktu as hashicorp/packer#11099. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.
Using Packer 1.7.2 in a project that builds a Debian 10 machine with vsphere-clone and then provisions it with a sequence of shell and ansible provisioners, provisioning works fine until one point in the sequence, where Ansible uses an unexpected SSH user and the provisioner fails at the SSH authentication level.
This is similar to #9680, in which the user saw Ansible use the wrong SSH identity and the answer was "try use_proxy = false", without an explanation of what was going on. In my case, use_proxy = false does not appear to fix the issue.
The build source in use here is defined in sources.pkr.hcl and specifies the ansible user for SSH, together with the private key file:
source "vsphere-clone" "jenkins" {
ssh_username = "ansible"
ssh_private_key_file = "${var.ansible_private_key_file}"
ssh_wait_timeout = "300s"
boot_wait = "10s"
convert_to_template = false
datacenter = local.vsphere_datacenter
folder = local.vsphere_folder
template = "${local.vsphere_template_to_clone}${var.upstream_scm_suffix}"
remove_cdrom = true
http_directory = "http"
insecure_connection = true
network = local.vsphere_network
shutdown_command = local.shutdown_command
vcenter_server = local.vsphere_vcenter_server
username = local.vsphere_username
password = "${var.ansible_active_directory_password}"
notes = "${var.vsphere_vm_description}"
tools_sync_time = true
}
There's nothing interesting in the usage of this build source, as seen in this excerpt from packer.pkr.hcl:
source "source.vsphere-clone.jenkins" {
name = "jenkins-staging"
# TODO: how to take vm_name from builder name?
vm_name = "jenkins-staging"
cluster = "mycluster"
host = "myhost"
CPUs = 16
RAM = 32768
RAM_reservation = 0
datastore = "mydatastore"
disk_size = local.disk_size_mb_jenkins_staging
}
So, on to the provisioners.
The first two work great, connecting and authenticating to the new jenkins-staging
host just fine, using the user and private key provided in the builder:
provisioner "ansible" {
user = "ansible"
playbook_file = "../../../../ansible/playbooks/rename-host.yml"
# Packer will use 'default' for the inventory_hostname if you do not set host_alias
host_alias = "${source.name}"
extra_arguments = [
"--extra-vars", "new_machine_name=${source.name}",
]
}
provisioner "ansible" {
user = "ansible"
only = ["vsphere-clone.jenkins-staging"]
playbook_file = "../../../../ansible/playbooks/disk-resize.yml"
# Packer will use 'default' for the inventory_hostname if you do not set host_alias
host_alias = "${source.name}"
extra_arguments = [
"--extra-vars", "disk_block_device=${local.disk_block_device}",
"--extra-vars", "root_lv_name=${local.root_lv_name}",
"--extra-vars", "disk_size_gb=${local.disk_size_gb_jenkins_staging}",
]
}
The next provisioner fails UNLESS I explicitly add use_proxy = false AND the --user ansible argument, as shown below:
provisioner "ansible" {
user = "ansible" # **This does not appear to do anything at this point**
# do not set up a localhost proxy adapter for ansible to use to connect to host with
use_proxy = false
only = ["vsphere-clone.jenkins-staging"]
playbook_file = "../../../../ansible/projects/jenkins/playbooks/jenkins-controller.yml"
galaxy_file = "../../../../ansible/projects/jenkins/roles/requirements.yml"
# Packer will use 'default' for the inventory_hostname if you do not set host_alias
host_alias = "${source.name}"
inventory_directory = "../../../../ansible/projects/jenkins/inventories/staging"
inventory_file = "../../../../ansible/projects/jenkins/inventories/staging/hosts"
extra_arguments = [
"--user=ansible", # **added this explicitly to force packer to call ansible with correct user**
"--vault-password-file=/jenkins-vault-id-password",
"--limit=${source.name}",
"-vvvv",
]
}
Running the playbooks with -vvvv, I see the first two provisioners do this:
14:04:36 vsphere-clone.jenkins-staging: <127.0.0.1> **ESTABLISH SSH CONNECTION FOR USER: ansible**
14:04:36 vsphere-clone.jenkins-staging: <127.0.0.1> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o StrictHostKeyChecking=no -o Port=44941 -o 'IdentityFile="/tmp/ansible-key901546086"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no **-o 'User="ansible"'** -o ConnectTimeout=60 -o IdentitiesOnly=yes -o ControlPath=/home/jenkins/.ansible/cp/86388c6a10 127.0.0.1 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/ansible/.ansible/tmp `"&& mkdir "` echo /home/ansible/.ansible/tmp/ansible-tmp-1623790893.0852802-20846-267265745241926 `" && echo ansible-tmp-1623790893.0852802-20846-267265745241926="` echo /home/ansible/.ansible/tmp/ansible-tmp-1623790893.0852802-20846-267265745241926 `" ) && sleep 0'"'"''
But the last provisioner, before I added the explicit --user ansible argument AND use_proxy = false, did this:
13:31:59 vsphere-clone.jenkins-staging: fatal: [jenkins-staging]: UNREACHABLE! => changed=false
13:31:59 vsphere-clone.jenkins-staging: msg: |-
13:31:59 vsphere-clone.jenkins-staging: Failed to connect to the host via ssh: OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1d 10 Sep 2019
13:31:59 vsphere-clone.jenkins-staging: debug1: Reading configuration data /etc/ssh/ssh_config
13:31:59 vsphere-clone.jenkins-staging: debug1: /etc/ssh/ssh_config line 19: Applying options for *
13:31:59 vsphere-clone.jenkins-staging: debug1: auto-mux: Trying existing master
13:31:59 vsphere-clone.jenkins-staging: debug1: Control socket "/home/jenkins/.ansible/cp/e086600817" does not exist
13:31:59 vsphere-clone.jenkins-staging: debug2: resolving "jenkins-staging" port 22
13:31:59 vsphere-clone.jenkins-staging: debug2: ssh_connect_direct
13:31:59 vsphere-clone.jenkins-staging: debug1: Connecting to jenkins-staging [172.16.22.103] port 22.
13:31:59 vsphere-clone.jenkins-staging: debug2: fd 3 setting O_NONBLOCK
13:31:59 vsphere-clone.jenkins-staging: debug1: fd 3 clearing O_NONBLOCK
13:31:59 vsphere-clone.jenkins-staging: debug1: Connection established.
13:31:59 vsphere-clone.jenkins-staging: debug3: timeout: 59995 ms remain after connect
13:31:59 vsphere-clone.jenkins-staging: debug1: identity file id_rsa_ansible type -1
13:31:59 vsphere-clone.jenkins-staging: debug1: identity file id_rsa_ansible-cert type -1
13:31:59 vsphere-clone.jenkins-staging: debug1: Local version string SSH-2.0-OpenSSH_7.9p1 Debian-10+deb10u2
13:31:59 vsphere-clone.jenkins-staging: debug1: Remote protocol version 2.0, remote software version OpenSSH_7.9p1 Debian-10+deb10u2
13:31:59 vsphere-clone.jenkins-staging: debug1: match: OpenSSH_7.9p1 Debian-10+deb10u2 pat OpenSSH* compat 0x04000000
13:31:59 vsphere-clone.jenkins-staging: debug2: fd 3 setting O_NONBLOCK
13:31:59 vsphere-clone.jenkins-staging: debug1: **Authenticating to jenkins-staging:22 as 'jenkins'**
I believe what is happening is that, because the second playbook reboots the host being provisioned, the SSH communicator loses its connection and, when it is re-established, no longer respects the user parameter in the builder. It falls back to the jenkins user, which is the name of the Linux user running the Packer process.
I am looking for a better understanding of what's going on here, and of why my workaround only works when I both set use_proxy = false AND pass --user=ansible to the provisioner explicitly.
I got here (needing to reboot the server) because I was having connection issues after the provisioner changed hostname. This was due to the machine getting a different IP address midstream. This was related to #8528 in which the dhcp client on Debian 10 machines has a default configuration that prevents MAC addr IP reservations from happening. I fixed it (quickly) in my case by just rebooting, which seems to enable Packer to re-establish an ssh connection to the correct IP address (via new DNS record), successfully.