
Ansible provisioner for Terraform

Ansible with Terraform 0.13.x - remote and local provisioners.

General overview

The purpose of the provisioner is to provide an easy method for running Ansible to configure hosts created with Terraform.

This provisioner, however, is not designed to handle all possible Ansible use cases. Let's consider what's possible and what's not possible with this provisioner.

After provisioning, if you use AWS S3 for state storage, you may find the following Ansible module useful: terraform-state-ansible-module.

What's possible

  • compute resource local provisioner

    • configured on a compute resource e.g. aws_instance, ibm_compute_vm_instance
    • runs Ansible installed on the same machine where Terraform is executed
    • the provisioner will create a temporary inventory and execute Ansible only against hosts created by the Terraform resource
    • If count is used with the compute resource and is greater than 1, the provisioner runs after each resource instance is created, passing the host information for that instance only.
    • Ansible Vault password file / Vault ID files can be used
    • the temporary inventory uses ansible_connection=ssh; the host alias is resolved from the resource.connection attribute; it is possible to specify an ansible_host using plays.hosts
  • compute resource remote provisioner

    • configured on a compute resource e.g. aws_instance, ibm_compute_vm_instance
    • runs Ansible on the hosts created by the Terraform resource
    • if Ansible is not installed on the newly created hosts, the provisioner can install it
    • the provisioner will create a temporary inventory and execute Ansible only against hosts created by the Terraform resource
    • playbooks, roles, Vault password file / Vault ID files and the temporary inventory file will be uploaded to each host prior to the Ansible run
    • hosts are provisioned using ansible_connection=local
    • an alias can be provided using hosts; each host will be included in every group provided with groups, but each of them will use ansible_connection=local
  • null_resource local provisioner

    • configured on a null_resource
    • runs Ansible installed on the same machine where Terraform is executed
    • Executes Ansible against the hosts defined by a list of IP addresses passed by interpolation on the plays.hosts attribute (see the sketch after this list). The host group is defined by plays.groups.
    • Executes the Ansible provisioner once against all hosts defined in plays.hosts, triggered by the availability of the interpolated vars.
    • Alternatively, an inventory file (statically defined or dynamically templated) can be passed to Ansible, specifying a list of Terraform-provisioned hosts and groups for Ansible to execute against in a single run.
    • The inventory file can also be used with Ansible dynamic inventory and inventory plugins.
    • The Terraform depends_on attribute can be used to determine when the Ansible provisioner is executed in relation to the provisioning of other Terraform resources.
    • If the Terraform host is on the same network (cloud hosted or VPN) as the provisioned hosts, private IP addresses can be passed, eliminating the requirement for bastion hosts or public SSH access.
    • Ansible Vault password file / Vault ID files can be used
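
A minimal sketch of this pattern, modeled on the configuration example further below; the aws_instance name, user, and playbook path are illustrative:

resource "null_resource" "ansible_run" {
  depends_on = [aws_instance.web]

  connection {
    host = "${aws_instance.web.0.public_ip}"
    user = "centos"
  }

  provisioner "ansible" {
    plays {
      playbook {
        file_path = "./site.yml"
      }
      # interpolated list of private IPs, all placed in the "web" group
      hosts  = ["${aws_instance.web.*.private_ip}"]
      groups = ["web"]
    }
  }
}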

What's not possible

The provisioner by no means attempts to implement all Ansible use cases. It is not intended to be used as a jump host; for example, the remote mode does not allow provisioning hosts other than the one where Ansible is executed. Ansible covers such a wide range of use cases that striving for full support would be a huge undertaking for one person. Using the provisioner with a null_resource provides further options for passing the Ansible inventory, including dynamic inventory, to cover use cases not addressed when used with a compute resource.

If you find yourself in need of executing Ansible against well-specified, complex inventories, either follow the regular process of provisioning hosts via Terraform and executing Ansible against them as a separate step, or initiate the Ansible execution as the last Terraform task using null_resource and depends_on. Of course, pull requests are always welcome!

Installation

Using Docker

$ cd /my-terraform-project
$ docker run -it --rm -v $PWD:$PWD -w $PWD radekg/terraform-ansible:latest init
$ docker run -it --rm -v $PWD:$PWD -w $PWD radekg/terraform-ansible:latest apply

Local Installation

Note that although terraform-provisioner-ansible is in the Terraform registry, it cannot be installed using a module Terraform stanza; such a configuration will not cause Terraform to download the terraform-provisioner-ansible binary.

Prebuilt releases are available on GitHub. Download a release for the version you require and place it in the ~/.terraform.d/plugins directory, as documented here.

Caution: you will need to rename the file to match the pattern recognized by Terraform: terraform-provisioner-ansible_v<version>.
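
For example, assuming a hypothetical 2.5.0 release downloaded for macOS (the version and downloaded file name are illustrative):

mkdir -p ~/.terraform.d/plugins
mv ~/Downloads/terraform-provisioner-ansible-darwin-amd64 \
  ~/.terraform.d/plugins/terraform-provisioner-ansible_v2.5.0
chmod +x ~/.terraform.d/plugins/terraform-provisioner-ansible_v2.5.0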

Alternatively, you can download and deploy an existing release using the following script:

curl -sL \
  https://raw.githubusercontent.com/radekg/terraform-provisioner-ansible/master/bin/deploy-release.sh \
  --output /tmp/deploy-release.sh
chmod +x /tmp/deploy-release.sh
/tmp/deploy-release.sh -v <version number>
rm -rf /tmp/deploy-release.sh

Configuration

Example:

resource "aws_instance" "test_box" {
  # ...
  connection {
    host = "..."
    user = "centos"
  }
  provisioner "ansible" {
    plays {
      playbook {
        file_path = "/path/to/playbook/file.yml"
        roles_path = ["/path1", "/path2"]
        force_handlers = false
        skip_tags = ["list", "of", "tags", "to", "skip"]
        start_at_task = "task-name"
        tags = ["list", "of", "tags"]
      }
      # shared attributes
      enabled = true
      hosts = ["zookeeper"]
      groups = ["consensus"]
      become = false
      become_method = "sudo"
      become_user = "root"
      diff = false
      extra_vars = {
        extra = {
          variables = {
            to = "pass"
          }
        }
      }
      forks = 5
      inventory_file = "/optional/inventory/file/path"
      limit = "limit"
      vault_id = ["/vault/password/file/path"]
      verbose = false
    }
    plays {
      module {
        module = "module-name"
        args = {
          "arbitrary" = "arguments"
        }
        background = 0
        host_pattern = "string host pattern"
        one_line = false
        poll = 15
      }
      # shared attributes
      # enabled = ...
      # ...
    }
    plays {
      galaxy_install {
        force = false
        server = "https://optional.api.server"
        ignore_certs = false
        ignore_errors = false
        keep_scm_meta = false
        no_deps = false
        role_file = "/path/to/role/file"
        roles_path = "/optional/path/to/the/directory/containing/your/roles"
        verbose = false
      }
      # shared attributes other than:
      # enabled = ...
      # are NOT taken into consideration for galaxy_install
    }
    defaults {
      hosts = ["eu-central-1"]
      groups = ["platform"]
      become_method = "sudo"
      become_user = "root"
      extra_vars = {
        extra = {
          variables = {
            to = "pass"
          }
        }
      }
      forks = 5
      inventory_file = "/optional/inventory/file/path"
      limit = "limit"
      vault_id = ["/vault/password/file/path"]
    }
    ansible_ssh_settings {
      connect_timeout_seconds = 10
      connection_attempts = 10
      ssh_keyscan_timeout = 60
      insecure_no_strict_host_key_checking = false
      insecure_bastion_no_strict_host_key_checking = false
      user_known_hosts_file = ""
      bastion_user_known_hosts_file = ""
    }
    remote {
      use_sudo = true
      skip_install = false
      skip_cleanup = false
      install_version = ""
      local_installer_path = ""
      remote_installer_directory = "/tmp"
      bootstrap_directory = "/tmp"
    }
  }
}
resource "aws_instance" "test_box" {
  # ...
}

resource "null_resource" "test_box" {
  depends_on = [aws_instance.test_box]
  connection {
    host = "${aws_instance.test_box.0.public_ip}"
    private_key = "${file("./test_box")}"
  }
  provisioner "ansible" {
    plays {
      playbook {
        file_path = "/path/to/playbook/file.yml"
        roles_path = ["/path1", "/path2"]
        force_handlers = false
        skip_tags = ["list", "of", "tags", "to", "skip"]
        start_at_task = "task-name"
        tags = ["list", "of", "tags"]
      }
      hosts = ["aws_instance.test_box.*.public_ip"]
      groups = ["consensus"]
    }
  }
}

Plays

Selecting what to run

Each plays block must contain exactly one of playbook, module, or galaxy_install. Define multiple plays blocks when more than one Ansible action shall be executed against a host.

Playbook attributes

  • plays.playbook.file_path: full path to the playbook YAML file; remote provisioning: the complete parent directory will be uploaded to the host
  • plays.playbook.roles_path: ansible-playbook --roles-path, list of full paths to directories containing your roles; remote provisioning: all directories will be uploaded to the host; string list, default empty list (not applied)
  • plays.playbook.force_handlers: ansible-playbook --force-handlers, boolean, default false
  • plays.playbook.skip_tags: ansible-playbook --skip-tags, string list, default empty list (not applied)
  • plays.playbook.start_at_task: ansible-playbook --start-at-task, string, default empty string (not applied)
  • plays.playbook.tags: ansible-playbook --tags, string list, default empty list (not applied)

Module attributes

  • plays.module.args: ansible --args, map, default empty map (not applied); values of type list and map will be converted to strings using %+v, avoid using those unless you really know what you are doing
  • plays.module.background: ansible --background, int, default 0 (not applied)
  • plays.module.host_pattern: ansible <host-pattern>, string, default all
  • plays.module.one_line: ansible --one-line, boolean, default false (not applied)
  • plays.module.poll: ansible --poll, int, default 15 (applied only when background > 0); see the sketch after this list
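
For instance, a sketch of an asynchronous module play; the module name and argument are illustrative:

    plays {
      module {
        module     = "shell"
        args = {
          cmd = "yum -y update"
        }
        background = 3600  # run asynchronously for up to an hour
        poll       = 15    # poll interval, applied because background > 0
      }
    }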

Galaxy Install attributes

  • plays.galaxy_install.force: ansible-galaxy install --force, bool, force overwriting an existing role, default false
  • plays.galaxy_install.ignore_certs: ansible-galaxy --ignore-certs, bool, ignore SSL certificate validation errors, default false
  • plays.galaxy_install.ignore_errors: ansible-galaxy install --ignore-errors, bool, ignore errors and continue with the next specified role, default false
  • plays.galaxy_install.keep_scm_meta: ansible-galaxy install --keep-scm-meta, bool, use tar instead of the scm archive option when packaging the role, default false
  • plays.galaxy_install.no_deps: ansible-galaxy install --no-deps, bool, don't download roles listed as dependencies, default false
  • plays.galaxy_install.role_file: ansible-galaxy install --role-file, string, required full path to the requirements file
  • plays.galaxy_install.roles_path: ansible-galaxy install --roles-path, string, the path to the directory containing your roles; the default is the roles_path configured in your ansible.cfg file (/etc/ansible/roles if not configured); for the remote provisioner: if the path starts with the filesystem path separator, the bootstrap directory will not be prepended; if the path does not start with the filesystem path separator, the path will be appended to the bootstrap directory; if the value is empty, the default value of galaxy-roles is used
  • plays.galaxy_install.server: ansible-galaxy install --server, string, optional API server
  • plays.galaxy_install.verbose: ansible-galaxy --verbose, bool, verbose mode, default false

Plays attributes

  • plays.hosts: list of hosts to include in the auto-generated inventory file when inventory_file is not given, string list, default empty list; when used with null_resource, this can be an interpolated list of host IP addresses, public or private; more details below
  • plays.groups: list of groups to include in auto-generated inventory file when inventory_file not given, string list, default empty list; more details below
  • plays.enabled: boolean, default true; set to false to skip execution
  • plays.become: ansible[-playbook] --become, boolean, default false (not applied)
  • plays.become_method: ansible[-playbook] --become-method, string, default sudo, only takes effect when become = true
  • plays.become_user: ansible[-playbook] --become-user, string, default root, only takes effect when become = true
  • plays.diff: ansible[-playbook] --diff, boolean, default false (not applied)
  • plays.extra_vars: ansible[-playbook] --extra-vars, map, default empty map (not applied); will be serialized to a JSON string, supports values of different types, including lists and maps
  • plays.forks: ansible[-playbook] --forks, int, default 5
  • plays.inventory_file: full path to an inventory file, ansible[-playbook] --inventory-file, string, default empty string; if the inventory_file attribute is not given or empty, a temporary inventory using hosts and groups will be generated; when specified, hosts and groups are not used (see the sketch after this list)
  • plays.limit: ansible[-playbook] --limit, string, default empty string (not applied)
  • plays.vault_id: ansible[-playbook] --vault-id, list of full paths to vault password files; remote provisioning: files will be uploaded to the server, string list, default empty list (not applied); takes precedence over plays.vault_password_file
  • plays.vault_password_file: ansible[-playbook] --vault-password-file, full path to the vault password file; remote provisioning: file will be uploaded to the server, string, default empty string (not applied)
  • plays.verbose: ansible[-playbook] --verbose, boolean, default false (not applied)
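
For example, a sketch of a play that runs against a pre-existing static inventory; the paths are illustrative, and hosts and groups are omitted because they would be ignored:

    plays {
      playbook {
        file_path = "/path/to/playbook/file.yml"
      }
      inventory_file = "/path/to/static/inventory"
    }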

Defaults

Some of the plays settings might be common across multiple plays. Such settings can be provided using the defaults attribute. Any setting from the following list can be specified in defaults:

  • defaults.hosts
  • defaults.groups
  • defaults.become_method
  • defaults.become_user
  • defaults.extra_vars
  • defaults.forks
  • defaults.inventory_file
  • defaults.limit
  • defaults.vault_id
  • defaults.vault_password_file

None of the boolean attributes can be specified in defaults. Neither playbook nor module can be specified in defaults.
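
A short sketch of pulling shared settings into defaults; both plays below inherit groups and forks unless they override them (paths and values are illustrative):

    provisioner "ansible" {
      plays {
        playbook {
          file_path = "/path/to/first-playbook.yml"
        }
      }
      plays {
        playbook {
          file_path = "/path/to/second-playbook.yml"
        }
      }
      defaults {
        groups = ["platform"]
        forks  = 10
      }
    }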

Ansible SSH settings

  • ansible_ssh_settings.connect_timeout_seconds: SSH ConnectTimeout, default 10 seconds
  • ansible_ssh_settings.connection_attempts: SSH ConnectionAttempts, default 10
  • ansible_ssh_settings.ssh_keyscan_timeout: when ssh-keyscan is used, how long to try fetching the host key until failing, default 60 seconds

Following settings apply to local provisioning only:

  • ansible_ssh_settings.insecure_no_strict_host_key_checking: if true, host key checking will be disabled when connecting to the target host, default false; when connecting via bastion, the bastion will not execute any SSH keyscan
  • ansible_ssh_settings.insecure_bastion_no_strict_host_key_checking: if true, host key checking will be disabled when connecting to the bastion host, default false
  • ansible_ssh_settings.user_known_hosts_file: used only when ansible_ssh_settings.insecure_no_strict_host_key_checking=false; if set, the provided path will be used instead of an auto-generated known hosts file; when executing via a bastion host, this allows the administrator to provide a known hosts file so that no SSH keyscan is executed on the bastion; default empty string
  • ansible_ssh_settings.bastion_user_known_hosts_file: used only when ansible_ssh_settings.insecure_bastion_no_strict_host_key_checking=false; if set, the provided path will be used instead of an auto-generated known hosts file

Remote

The existence of this block enables remote provisioning. To use the remote provisioner with its default settings, simply add remote {} to your provisioner.

  • remote.use_sudo: should sudo be used for bootstrap commands, boolean, default true; this applies to bootstrap only and has no relevance to the Ansible --sudo flag
  • remote.skip_install: if set to true, Ansible installation on the server will be skipped and Ansible is assumed to be already installed, boolean, default false
  • remote.skip_cleanup: if set to true, Ansible bootstrap data will be left on the server after bootstrap, boolean, default false
  • remote.install_version: Ansible version to install when skip_install = false and the default installer is in use, string, default empty string (latest version available in the respective repositories)
  • remote.local_installer_path: full path to a custom Ansible installer on the local machine, used when skip_install = false, string, default empty string; when empty and skip_install = false, the default installer is used
  • remote.remote_installer_directory: full path to the remote directory where the custom Ansible installer will be deployed to and executed from, used when skip_install = false, string, default /tmp; any intermediate directories will be created; the program will be executed with sh, use a shebang if the program requires a non-shell interpreter; the installer will be saved as tf-ansible-installer under the given directory; for /tmp, the path will be /tmp/tf-ansible-installer (see the sketch after this list)
  • remote.bootstrap_directory: full path to the remote directory where playbooks, roles, password files and such will be uploaded to, used when skip_install = false, string, default /tmp; the final directory will have tf-ansible-bootstrap appended to it; for /tmp, the directory will be /tmp/tf-ansible-bootstrap
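
A sketch of remote provisioning with a custom installer script; all paths here are hypothetical:

    remote {
      skip_install               = false
      local_installer_path       = "/opt/installers/install-ansible.sh"
      remote_installer_directory = "/opt/tf-ansible"
      # the installer is uploaded to /opt/tf-ansible/tf-ansible-installer and executed with sh
      bootstrap_directory        = "/opt/bootstrap"
      # playbooks, roles and password files land under /opt/bootstrap/tf-ansible-bootstrap
    }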

Examples

Working examples can be found in the examples directory of the repository.

Usage

The provisioner does not support passwords. It would be technically possible to add password support for:

  • remote provisioner without bastion: host passwords reside in the inventory file
  • remote provisioner with bastion: host passwords reside in the inventory file, bastion is handled by Terraform, password is never visible
  • local provisioner without bastion: host passwords reside in the inventory file

However, the local provisioner with a bastion currently relies on executing an Ansible command with SSH -o ProxyCommand, which would require putting the password on the terminal. For consistency, assume no password support.

Local provisioner: SSH details

The local provisioner requires the resource.connection block with, at least, the user defined. After the bootstrap, the plugin will inspect the connection info, check that the user and private_key are set, and verify that provisioning indeed succeeded by checking the host (which should be an IP address of the newly created instance). If the connection info does not provide the SSH private key, SSH agent mode is assumed.

In the process of doing so, a temporary inventory will be created for the newly created host, the PEM file will be written to a temporary file, and a temporary known_hosts file will be created. The temporary known_hosts and PEM files are created once per provisioner run; an inventory is created for each plays. Files are cleaned up after the provisioner finishes or fails. The inventory will be removed only if it was not supplied via inventory_file.

Local provisioner: host and bastion host keys

Because the provisioner executes SSH commands outside of itself, via Ansible command line tools, the provisioner must construct a temporary SSH known_hosts file to feed to Ansible. There are two possible scenarios.

Host without a bastion

  1. If connection.host_key is used, the provisioner will use the provided host key to construct the temporary known_hosts file.
  2. If connection.host_key is not given or empty, the provisioner will attempt a connection to the host and retrieve the first host key returned during the handshake (similar to ssh-keyscan but using Golang SSH).

Host with bastion

This is a little bit more involved than the previous case.

  1. If connection.bastion_host_key is provided, the provisioner will use the provided bastion host key for the known_hosts file.
  2. If connection.bastion_host_key is not given or empty, the provisioner will attempt a connection to the bastion host and retrieve the first host key returned during the handshake (similar to ssh-keyscan but using Golang SSH).

However, Ansible must know the host key of the target host where the bootstrap actually happens. If connection.host_key is provided, the provisioner will simply use the provided value. But if no connection.host_key is given (or it is empty), the provisioner will open an SSH connection to the bastion host and perform an ssh-keyscan operation against the target host from the bastion host (see the connection sketch after the list below).

In the ssh-keyscan case, the bastion host must:

  • be a Linux / BSD based system
  • unless bastion_host_key is used:
    • have cat, echo, grep, mkdir, rm, ssh-keyscan commands available on the $PATH for the SSH user
    • have the $HOME environment variable set for the SSH user
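
These requirements can be avoided altogether by supplying the host keys directly on the connection; a sketch, with placeholder addresses and truncated key values:

    connection {
      host             = "10.0.1.10"
      user             = "centos"
      host_key         = "ssh-rsa AAAA...key-of-the-target-host"
      bastion_host     = "203.0.113.10"
      bastion_host_key = "ssh-rsa AAAA...key-of-the-bastion-host"
    }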

Compute resource local provisioner: hosts and groups

The plays.hosts and defaults.hosts attributes can be used with the local provisioner. When used with a compute resource, only the first defined host will be used when generating the inventory file; additional hosts will be ignored. If plays.hosts or defaults.hosts is not specified, the provisioner uses the public IP address of the Terraform-provisioned resource instance. The inventory file is generated in the following format with a single host:

aFirstHost ansible_host=<ip address of the host> ansible_connection=ssh

For each group, an additional ini section will be added, where each section is:

[groupName]
aFirstHost ansible_host=<ip address of the host> ansible_connection=ssh

For a host list ["someHost"] and a group list of ["group1", "group2"], the inventory would be:

someHost ansible_host=<ip> ansible_connection=ssh

[group1]
someHost ansible_host=<ip> ansible_connection=ssh

[group2]
someHost ansible_host=<ip> ansible_connection=ssh

If hosts is an empty list or not given, the resulting generated inventory is:

<ip> ansible_connection=ssh

[group1]
<ip> ansible_connection=ssh

[group2]
<ip> ansible_connection=ssh

Null_resource local provisioner: hosts and groups

The plays.hosts and defaults.hosts attributes can be used with the local provisioner on a null_resource. All passed hosts are used when generating the inventory file. The inventory file is generated in the following format:

<firstHost IP> 
<secondHost IP>

For each group, an additional ini section will be added, where each section is:

[groupName]
<firstHost IP> 
<secondHost IP>

For a host list ["firstHost IP", "secondHost IP"] and a group list of ["group1", "group2"], the inventory would be:

<firstHost IP> 
<secondHost IP>

[group1]
<firstHost IP> 
<secondHost IP>

[group2]
<firstHost IP> 
<secondHost IP>

Remote provisioner: running on hosts created by Terraform

The remote provisioner can be enabled by adding a remote {} block to the provisioner.

resource "aws_instance" "ansible_test" {
  # ...
  connection {
    user = "centos"
    private_key = "${file("${path.module}/keys/centos.pem")}"
  }
  provisioner "ansible" {
    plays {
      # ...
    }
    
    # enable remote provisioner
    remote {}
    
  }
}

Unless remote.skip_install = true, the provisioner will install Ansible on the bootstrapped machine. Next, a temporary inventory file is created and uploaded to the host, and any playbooks, roles and Vault password files are uploaded to the host as well.

Remote provisioning works with a Linux target host only.

Supported Ansible repository layouts

This provisioner supports two main repository layouts.

  1. Roles nested under the playbook directory:

    .
    ├── install-tree.yml
    └── roles
        └── tree
            └── tasks
                └── main.yml

  2. Roles and playbooks directories separate:

    .
    ├── playbooks
    │   └── install-tree.yml
    └── roles
        └── tree
            └── tasks
                └── main.yml

In the second case, to reference the roles, it is necessary to use the plays.playbook.roles_path attribute:

    plays {
      playbook {
        file_path = ".../playbooks/install-tree.yml"
        roles_path = [
            ".../ansible-data/roles"
        ]
      }
    }

In the first case, it is sufficient to use only the plays.playbook.file_path; the roles are nested under the playbook's parent directory, thus available to Ansible:

    plays {
      playbook {
        file_path = ".../install-tree.yml"
      }
    }

Remote provisioning directory upload

A remark regarding remote provisioning: the remote provisioner must upload referenced playbooks and role paths to the remote server. In the case of a playbook, the complete parent directory of the YAML file will be uploaded. The remote provisioner attempts to deduplicate uploads: if multiple plays reference the same playbook, the playbook will be uploaded only once. This is achieved by generating an MD5 hash of the absolute path to the playbook's parent directory and storing your playbooks at ${remote.bootstrap_directory}/${md5-hash} on the remote server.

For the roles path, the complete directory as referenced in roles_path will be uploaded to the remote server. The same deduplication method applies, but the MD5 hash is computed from the roles_path itself.
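
For illustration, assuming a playbook at /projects/ansible/playbooks/site.yml (a hypothetical path), the remote location could be derived like this; a sketch of the scheme, not the provisioner's exact code:

$ echo -n "/projects/ansible/playbooks" | md5sum
<md5-hash>  -
# the playbook's parent directory is uploaded to ${remote.bootstrap_directory}/<md5-hash>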

Tests

Integration tests require ansible and ansible-playbook on the $PATH. To run tests:

make test-verbose

Creating releases

To cut a release, run:

curl -sL https://raw.githubusercontent.com/radekg/git-release/master/git-release --output /tmp/git-release
chmod +x /tmp/git-release
/tmp/git-release --repository-path=$GOPATH/src/github.com/radekg/terraform-provisioner-ansible
rm -rf /tmp/git-release

After the release is cut, build the binaries for the release:

git checkout v${RELEASE_VERSION}
./bin/build-release-binaries.sh

Handle Docker image:

git checkout v${RELEASE_VERSION}
docker build --build-arg TAP_VERSION=$(cat .version) -t radekg/terraform-ansible:$(cat .version) .
docker login --username=radekg
docker tag radekg/terraform-ansible:$(cat .version) radekg/terraform-ansible:latest
docker push radekg/terraform-ansible:$(cat .version)
docker push radekg/terraform-ansible:latest

Note that the version is hardcoded in the Dockerfile. You may wish to update it after release.

terraform-provisioner-ansible's People

Contributors

adamwg, bagel-dawg, cryptobioz, gliptak, killerwhile, mcanevet, nsteinmetz, radekg, sherzberg


terraform-provisioner-ansible's Issues

Interpolation not working in extra_vars

I cannot currently use interpolation to inject values from Terraform into extra_vars.
The provisioner currently throws a false error when I attempt it.

  provisioner "ansible" {
    connection {
      type = "ssh"
      host = "${aws_instance.master.private_ip}"
      user = "ec2-user"
    }

    plays {
      playbook = "${var.ansible_path}/playbooks/greenplum/server.yaml"
      extra_vars {
        variable = "${aws_instance.master.private_ip}"
      }
    }

    hosts = ["${aws_instance.master.private_ip}"]
    local  = "no"
    verbose = "yes"
    use_sudo = "yes"
    skip_install = "no"
    skip_cleanup = "yes"
  }

Error

Warning: module.name.null_resource.ansible-master: nothing to play

Error: module.name.null_resource.ansible-master: file ${var.ansible_path}/playbooks/application/server.yaml does not exist

New release

Could you please prepare a new release with the latest commits?

Change groups to a list

Instead of taking a []string{ ... }, take a list of groups. The functionality will be:

group {
  name = "..."
  inherit_hosts = true
}

group {
  name = "..."
  host {
    ...
  }
}

It must be possible to combine inherit_hosts with additional hosts, and to have an empty group which neither inherits nor provides additional hosts.

Unreachable; "argument must be an int"

Steps to reproduce

Use the local Ansible provisioner, with or without insecure_no_strict_host_key_checking.

Expected behavior

Ansible authenticates via SSH.

Actual behavior

Hosts are "unreachable", but this is not the case for the Ansible remote provisioner.

Configuration

Terraform version: latest stable release

terraform-provisioner-ansible version/SHA: latest release

Terraform run log:

TASK [Gathering Facts] *********************************************************
fatal: [IP redacted]: UNREACHABLE! => {"changed": false, "msg": "argument must be an int, or have a fileno() method", "unreachable": true}

plays: validate arguments based on the run mode

When running a module:

  • host_pattern: currently all is hard coded, new, must be added
  • --args=MODULE_ARGS
  • --background=SECONDS: new, must be added
  • --extra-vars=EXTRA_VARS
  • --forks=FORKS
  • --inventory-file=INVENTORY
  • --limit=SUBSET
  • --module-name=MODULE_NAME
  • --one-line
  • --poll=POLL_INTERVAL: new, must be added, only when --background
  • --vault-password-file=VAULT_PASSWORD_FILE
  • --verbose

When running a playbook:

  • --extra-vars=EXTRA_VARS
  • --force-handlers
  • --forks=FORKS
  • --inventory-file=INVENTORY
  • --limit=SUBSET
  • --skip-tags=SKIP_TAGS
  • --start-at-task=START_AT_TASK
  • --tags=TAGS
  • --vault-password-file=VAULT_PASSWORD_FILE
  • --verbose

As such:


Module only:

  • host_pattern
  • --args=MODULE_ARGS
  • --background=SECONDS
  • --module-name=MODULE_NAME
  • --one-line
  • --poll=POLL_INTERVAL

Playbook only:

  • --force-handlers
  • --skip-tags=SKIP_TAGS
  • --start-at-task=START_AT_TASK
  • --tags=TAGS

Shared:

  • --extra-vars=EXTRA_VARS
  • --forks=FORKS
  • --inventory-file=INVENTORY
  • --limit=SUBSET
  • --vault-password-file=VAULT_PASSWORD_FILE
  • --verbose

unexpected EOF whenever using bastion with user/key

Steps to reproduce

My example:

resource "aws_instance" "main" {
    ami                         = "${data.aws_ami.ami.id}"
    instance_type               = "t3.nano"
    key_name                    = "${var.ssh_key_pair}"
    associate_public_ip_address = false
    subnet_id                   = "${var.subnet_id}"
    vpc_security_group_ids      = ["${var.ssh_security_group_id}"]

    provisioner "ansible" {
        connection {
           agent               = false

            port                = 22
            user                = "ubuntu"
            private_key         = "${base64decode("${var.private_key}")}"

            bastion_host        = "${var.bastion_host_ip}"
            bastion_user        = "${var.bastion_user}"
            bastion_private_key = "${base64decode("${var.bastion_private_key}")}"
        }

        plays {
            playbook      = {
                file_path = "${path.module}/provision.yml"
            }
            become        = true
            become_method = "sudo"
            become_user   = "root"
        }
    }
}

Expected behavior

Ansible correctly provisions.

...

Actual behavior

Get an error: unexpected EOF

...

Configuration

Terraform version: Terraform v0.11.8

terraform-provisioner-ansible version/SHA: v2.0.1

Terraform file / provisioner configuration: Shown above

Terraform run log:

// ...

module.letsencrypt.aws_instance.main: Provisioning with 'ansible'...
2018/10/28 23:44:55 [TRACE] dag/walk: vertex "root", waiting for: "meta.count-boundary (count boundary fixup)"
2018/10/28 23:44:55 [TRACE] dag/walk: vertex "provisioner.ansible (close)", waiting for: "module.letsencrypt.aws_instance.main"
2018/10/28 23:44:55 [TRACE] dag/walk: vertex "provider.aws (close)", waiting for: "module.letsencrypt.aws_instance.main"
2018/10/28 23:44:55 [TRACE] dag/walk: vertex "meta.count-boundary (count boundary fixup)", waiting for: "module.letsencrypt.aws_instance.main"
2018-10-28T23:44:56.069-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: panic: runtime error: invalid memory address or nil pointer dereference
2018-10-28T23:44:56.069-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: [signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x18f8a80]
2018-10-28T23:44:56.069-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: 
2018-10-28T23:44:56.069-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: goroutine 38 [running]:
2018-10-28T23:44:56.070-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: github.com/radekg/terraform-provisioner-ansible/vendor/golang.org/x/crypto/ssh.(*connection).clientAuthenticate(0xc00047e680, 0xc00009ec30, 0x0, 0xa)
2018-10-28T23:44:56.070-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: 	/golang/src/github.com/radekg/terraform-provisioner-ansible/vendor/golang.org/x/crypto/ssh/client_auth.go:54 +0x450
2018-10-28T23:44:56.070-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: github.com/radekg/terraform-provisioner-ansible/vendor/golang.org/x/crypto/ssh.(*connection).clientHandshake(0xc00047e680, 0xc0000fe0f0, 0x50, 0xc00009ec30, 0x0, 0x0)
2018-10-28T23:44:56.070-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: 	/golang/src/github.com/radekg/terraform-provisioner-ansible/vendor/golang.org/x/crypto/ssh/client.go:113 +0x2b4
2018-10-28T23:44:56.070-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: github.com/radekg/terraform-provisioner-ansible/vendor/golang.org/x/crypto/ssh.NewClientConn(0x1cffc40, 0xc000464050, 0xc0000fe0f0, 0x50, 0xc00009e0d0, 0x1cffc40, 0xc000464050, 0x0, 0x0, 0xc0000fe0f0, ...)
2018-10-28T23:44:56.070-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: 	/golang/src/github.com/radekg/terraform-provisioner-ansible/vendor/golang.org/x/crypto/ssh/client.go:83 +0xf8
2018-10-28T23:44:56.070-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: github.com/radekg/terraform-provisioner-ansible/vendor/golang.org/x/crypto/ssh.Dial(0x1b6d06d, 0x3, 0xc0000fe0f0, 0x50, 0xc00009e0d0, 0xc0000fe0f0, 0x50, 0xc0001c6090)
2018-10-28T23:44:56.070-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: 	/golang/src/github.com/radekg/terraform-provisioner-ansible/vendor/golang.org/x/crypto/ssh/client.go:177 +0xb3
2018-10-28T23:44:56.071-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: github.com/radekg/terraform-provisioner-ansible/mode.(*bastionHost).connect(0xc000464018, 0xc000464020, 0x198efc9, 0x1d)
2018-10-28T23:44:56.071-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: 	/golang/src/github.com/radekg/terraform-provisioner-ansible/mode/ssh_bastion_host.go:64 +0x1bc
2018-10-28T23:44:56.071-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: github.com/radekg/terraform-provisioner-ansible/mode.(*LocalMode).Run(0xc00017c240, 0xc00017b0c8, 0x1, 0x1, 0xc00049ab80, 0x0, 0x0)
2018-10-28T23:44:56.071-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: 	/golang/src/github.com/radekg/terraform-provisioner-ansible/mode/mode_local.go:98 +0xc6a
2018-10-28T23:44:56.071-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: main.applyFn(0x1cfa280, 0xc0004b40f0, 0x19cf260, 0xc0004ab040)
2018-10-28T23:44:56.071-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: 	/golang/src/github.com/radekg/terraform-provisioner-ansible/resource_provisioner.go:120 +0x44f
2018-10-28T23:44:56.071-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: github.com/radekg/terraform-provisioner-ansible/vendor/github.com/hashicorp/terraform/helper/schema.(*Provisioner).Apply(0xc000462000, 0x1ced360, 0xc000464008, 0xc00044e7d0, 0xc000449830, 0x0, 0x295d6c0)
2018-10-28T23:44:56.071-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: 	/golang/src/github.com/radekg/terraform-provisioner-ansible/vendor/github.com/hashicorp/terraform/helper/schema/provisioner.go:179 +0x4c2
2018-10-28T23:44:56.071-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: github.com/radekg/terraform-provisioner-ansible/vendor/github.com/hashicorp/terraform/plugin.(*ResourceProvisionerServer).Apply(0xc0003d2040, 0xc0004683c0, 0xc00017ad40, 0x0, 0x0)
2018-10-28T23:44:56.071-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: 	/golang/src/github.com/radekg/terraform-provisioner-ansible/vendor/github.com/hashicorp/terraform/plugin/resource_provisioner.go:142 +0x168
2018-10-28T23:44:56.072-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: reflect.Value.call(0xc000175560, 0xc00000c2a0, 0x13, 0x1b6d338, 0x4, 0xc0000a7f18, 0x3, 0x3, 0xc000173b80, 0x1013087, ...)
2018-10-28T23:44:56.072-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: 	/usr/local/go/src/reflect/value.go:447 +0x449
2018-10-28T23:44:56.072-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: reflect.Value.Call(0xc000175560, 0xc00000c2a0, 0x13, 0xc00018ef18, 0x3, 0x3, 0x12a05f200, 0xc00018ef10, 0xc00018efb8)
2018-10-28T23:44:56.072-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: 	/usr/local/go/src/reflect/value.go:308 +0xa4
2018-10-28T23:44:56.072-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: net/rpc.(*service).call(0xc00007a880, 0xc0003d42d0, 0xc000032368, 0xc000032380, 0xc0003f6200, 0xc00000a800, 0x199f500, 0xc0004683c0, 0x16, 0x199f540, ...)
2018-10-28T23:44:56.072-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: 	/usr/local/go/src/net/rpc/server.go:384 +0x14e
2018-10-28T23:44:56.072-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: created by net/rpc.(*Server).ServeCodec
2018-10-28T23:44:56.072-0700 [DEBUG] plugin.terraform-provisioner-ansible_v2.0.1: 	/usr/local/go/src/net/rpc/server.go:481 +0x47e
2018/10/28 23:44:56 [ERROR] root.letsencrypt: eval: *terraform.EvalApplyProvisioners, err: unexpected EOF
2018/10/28 23:44:56 [ERROR] root.letsencrypt: eval: *terraform.EvalSequence, err: unexpected EOF

// ...

Change hosts to a structure

Instead of taking []string{ ... }, take a set of hosts. This will allow for:

host {
  name = "..."
  ansible_connection = "..."
  ansible_port = ...
  ansible_user = ...
}

This change will eliminate double templating. ansible_port can be read from connInfo.Port.

Specifying multiple host groups in one run of Ansible with unique hosts in each group

I want an inventory file that looks like this

[controller]
10.1.1.10
[infra]
10.1.1.11

Because I have a playbook task that looks like this:

---
# loop through each backend supporting the cinder api proxy
# to ensure they are up.
- name: checking cinder api haproxy backend
  haproxy:
    state: enabled
    host: "controller{{ item[0] }}"
    backend: cinder_api
    wait: yes
    wait_interval: 1
  delegate_to: "{{ item[1] }}"
  with_nested:
    - "{{ range(groups['controller'] | length) | list }}"
    - "{{ groups['infra'] }}"

Notice the task references both controller and infra host group.

So I tried something like this:

resource "openstack_compute_instance_v2" "infra" {
    name = "infra"
    image_name = "ubuntu-18.04"

    network {
        name = "public"
    }
}

resource "openstack_compute_instance_v2" "controller" {
    name = "controller-${random_string.unique.result}"
    image_name = "ubuntu-18.04"

    network {
        name = "public"
    }
}

resource "null_resource" "openstack-playbook" {

    provisioner "ansible" {
        connection {
            user = "ubuntu"
            private_key = "${var.ssh_private_key}"
        }
        plays  {
             groups = ["infra"]
             hosts = ["${openstack_compute_instance_v2.infra.access_ip_v4}"]            
             playbook {
                 file_path = "../playbook.yml"
             }      
        }
        plays  {
             groups = ["controller"]
             hosts = ["${openstack_compute_instance_v2.controller.access_ip_v4}"]        
             playbook {
                 file_path = "../playbook.yml"
             }                      
        }              
    }
}

But this fails with Error: Local mode requires a connection with username and host

It appears each provisioner run instance will only connect to a single Ansible host, regardless of how many plays.hosts are specified.

[2.0.0] Use connection.bastion_host_key and connection.host_key

This is related and potentially fixing #50.

Terraform SSH connection takes bastion_host_key and host_key arguments.

  • when bastion_host_key is defined and non-empty, use the value instead of executing ssh-keyscan for bastion host
  • when host_key is defined and non-empty, use the value instead of executing ssh-keyscan for the host
  • document the solution and ssh-keyscan fallback

Related Terraform PR: hashicorp/terraform#17354

[2.0.0] Restore tests.

While working on migration to 2.0.0 code structure, most of the tests have been commented out. Restore and fix tests.

make build-linux fails with Go 1.6

This does not compile with the golang-1.6 package in the Ubuntu 16.04 release:

mkdir -p ~/.terraform.d/plugins
CGO_ENABLED=0 GOOS=linux installsuffix=cgo go build -o ./terraform-provisioner-ansible-linux
provisioner.go:6:2: cannot find package "context" in any of:
        /home/user/go/src/github.com/radekg/terraform-provisioner-ansible/vendor/context (vendor tree)
        /usr/lib/go-1.6/src/context (from $GOROOT)
        /home/user/go/src/context (from $GOPATH)
Makefile:22: recipe for target 'build-linux' failed
make: *** [build-linux] Error 1

I had installed go1.6.2 linux/amd64 on Ubuntu 16.04.4 LTS running on Windows 10 v.1709.
Now I have manually installed Go 1.10 and it works.

Roles not uploaded with playbook

My current ansible repo has the structure:
├── inventories
│   ├── directory
├── playbooks
│   ├── directory
├── roles
│   ├── directory

Upon copying the playbook I want to run, the roles are not copied.

plays.hosts needs documenting

The plays.hosts argument is not documented. It is unclear how this interacts with the hosts provisioner argument and why the latter is ignored in local mode.

Add local mode

Currently, the plugin runs on a remote host. Preferably, the plugin should allow running in local and remote mode. While implementing #12, changes were made to distinguish between remote and local modes.

Off the top of my head, running in local mode would require the following changes:

  • add local = true to provisioner
  • when local == true, inventory_file=string path would be required

Don't mark the resource as created if there is something to apply and we are in check mode

I have an idea to implement but I don't really know how to do it.
I'd like to have an option to not mark the resource as created if check mode is enabled and the playbook has some changes to apply.
AFAIK, Ansible does not have some kind of --detailed-exitcode; the only way I found to check whether there are changes to apply is to search for the changed=0 string in the output.

With this simple Terraform manifest:

variable "ansible_check" {
  default = "true"
}

resource "aws_instance" "test_box" {
  ...
}

resource "null_resource" "provisioner" {
  # ...
  connection {
    user = "centos"
    host = "${aws_instance.test_box.public_ip}"
  }
  provisioner "ansible" {
    plays {
      playbook = {
        file_path = "/path/to/playbook/file.yml"
        roles_path = ["/path1", "/path2"]
        force_handlers = false
        skip_tags = ["list", "of", "tags", "to", "skip"]
        start_at_task = "task-name"
        tags = ["list", "of", "tags"]
      }
      # shared attributes
      enabled = true
      hosts = ["zookeeper"]
      groups = ["consensus"]
      become = false
      become_method = "sudo"
      become_user = "root"
      diff = false
      check = "${var.ansible_check}"
      extra_vars = {
        extra = {
          variables = {
            to = "pass"
          }
        }
      }
      forks = 5
      inventory_file = "/optional/inventory/file/path"
      limit = "limit"
      vault_id = ["/vault/password/file/path"]
      verbose = false
    }

Currently I have to do something like this:

$ terraform taint null_resource.provisioner
$ terraform apply -target null_resource.provisioner | tee /dev/stderr | grep -q changed=0 || terraform taint null_resource.provisioner # dry-run with tainting again if something has to be applied
$ TF_VAR_ansible_check=false terraform apply

That works as long as I have only one null_resource in my Terraform project, but I usually have 2 (one generic playbook to apply to all my instances, and sometimes one specific module to do some specific stuff on one instance). In that case, dependencies make it hard to know which null_resource has to be marked as tainted again.

It would be easier (and more logical, I think) if the Ansible provisioner returned a failure when there are changes to apply and we are in check mode. In that case, my workflow would be:

$ terraform taint null_resource.provisioner
$ terraform apply -target null_resource.provisioner # dry-run, null_resource will not be marked as created if something has to be applied
$ TF_VAR_ansible_check=false terraform apply

Thus if the ansible playbook hasn't converged yet, it will be re-applied.

Does not work with alicloud provider

This provisioner (as awesome as it is!) does not seem to work with the alicloud provider.

When I try to provision with local = true I get:

alicloud_instance.host[0]: Provisioning with 'ansible'...
...
* alicloud_instance.host[0]: Local mode requires a connection with username and host.

I'm not sure if this should be reported here or with the provider itself, if it should I'll be happy to close this and open an issue there, I'm just not clear where the problem is.

How can I change inventory_hostname?

I know I can set inventory_hostname in extra_vars, but it does not affect the key under which the host metadata exists in the inventory; it's still under an IP. How can I change it so that it uses my given DNS entry instead of the IP?

support host variables

Playbooks that take advantage of host variables like so

http_port: "{{ default_http_port | default(hostvars[groups['Atlanta'][0]]['http_port']) }}"

would need an inventory that has host variables like so

[atlanta]
host1 http_port=80 maxRequestsPerChild=808
host2 http_port=303 maxRequestsPerChild=909

Failed to connect to the host via ssh: invalid format

For some reason when a terraform variable storing the ssh key is passed in via an environment variable, there are some extra bits at the end of the key by the time it reaches terraform-provisioner-ansible.

Steps to reproduce

    connection {
        type = "ssh"
        user = "ubuntu"
        private_key = "${var.ssh_private_key}"
        //private_key = "${file("~/.ssh/id_rsa")}"  WORKS
    }
  1. export TF_VAR_ssh_private_key="$(cat ~/.ssh/id_rsa)"
  2. terraform apply

Expected behavior

Ansible provisioner executes without error

Actual behavior

openstack_compute_instance_v2.infra (ansible): TASK [Gathering Facts] *********************************************************
openstack_compute_instance_v2.infra (ansible): fatal: [(ip redacted)]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Load key \"/private/var/folders/kt/8r8l44m54159f_r1k0qwsmmm0000gp/T/b660abc6-f02b-4210-b36f-72a0a3e3fea1845955416\": invalid format\r\nubuntu@(ip redacted): Permission denied (publickey).\r\n", "unreachable": true}

Configuration

Terraform version:
v0.11.10

In validate function, check types before doing happy cast

If the value is of a wrong type, the plugin craps out like this:

Error: aws_instance.ansible: unexpected EOF


panic: interface conversion: interface {} is bool, not string
2018-02-05T23:24:38.615+0100 [DEBUG] plugin.terraform-provisioner-ansible:
2018-02-05T23:24:38.615+0100 [DEBUG] plugin.terraform-provisioner-ansible: goroutine 10 [running]:
2018-02-05T23:24:38.615+0100 [DEBUG] plugin.terraform-provisioner-ansible: main.validateFn(0xc42029fa10, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
2018-02-05T23:24:38.615+0100 [DEBUG] plugin.terraform-provisioner-ansible: 	/Users/rad/dev/golang/src/github.com/radekg/terraform-provisioner-ansible/provisioner.go:675 +0x2306
2018-02-05T23:24:38.615+0100 [DEBUG] plugin.terraform-provisioner-ansible: github.com/radekg/terraform-provisioner-ansible/vendor/github.com/hashicorp/terraform/helper/schema.(*Provisioner).Validate(0xc4202e6140, 0xc42029fa10, 0xc42005b308, 0xc42036ab28, 0x1010957, 0xc42029fce0, 0x30, 0x28)
2018-02-05T23:24:38.615+0100 [DEBUG] plugin.terraform-provisioner-ansible: 	/Users/rad/dev/golang/src/github.com/radekg/terraform-provisioner-ansible/vendor/github.com/hashicorp/terraform/helper/schema/provisioner.go:198 +0x23d
2018-02-05T23:24:38.615+0100 [DEBUG] plugin.terraform-provisioner-ansible: github.com/radekg/terraform-provisioner-ansible/vendor/github.com/hashicorp/terraform/plugin.(*ResourceProvisionerServer).Validate(0xc4202ea620, 0xc42000cac0, 0xc42029fc20, 0x0, 0x0)
2018-02-05T23:24:38.615+0100 [DEBUG] plugin.terraform-provisioner-ansible: 	/Users/rad/dev/golang/src/github.com/radekg/terraform-provisioner-ansible/vendor/github.com/hashicorp/terraform/plugin/resource_provisioner.go:152 +0x59
2018-02-05T23:24:38.615+0100 [DEBUG] plugin.terraform-provisioner-ansible: reflect.Value.call(0xc42005b2c0, 0xc42000c9b0, 0x13, 0x1af01a7, 0x4, 0xc42036af20, 0x3, 0x3, 0xc420001708, 0xc4200017b8, ...)
2018-02-05T23:24:38.615+0100 [DEBUG] plugin.terraform-provisioner-ansible: 	/usr/local/opt/go/libexec/src/reflect/value.go:434 +0x905
2018-02-05T23:24:38.615+0100 [DEBUG] plugin.terraform-provisioner-ansible: reflect.Value.Call(0xc42005b2c0, 0xc42000c9b0, 0x13, 0xc420036f20, 0x3, 0x3, 0x1054b5c, 0x1, 0xc420036fd0)
2018-02-05T23:24:38.615+0100 [DEBUG] plugin.terraform-provisioner-ansible: 	/usr/local/opt/go/libexec/src/reflect/value.go:302 +0xa4
2018-02-05T23:24:38.615+0100 [DEBUG] plugin.terraform-provisioner-ansible: net/rpc.(*service).call(0xc42016b100, 0xc4200874a0, 0xc420269110, 0xc42012cb00, 0xc420310a60, 0x19381c0, 0xc42000cac0, 0x16, 0x1938200, 0xc42029fc20, ...)
2018-02-05T23:24:38.615+0100 [DEBUG] plugin.terraform-provisioner-ansible: 	/usr/local/opt/go/libexec/src/net/rpc/server.go:381 +0x142
2018-02-05T23:24:38.615+0100 [DEBUG] plugin.terraform-provisioner-ansible: created by net/rpc.(*Server).ServeCodec
2018-02-05T23:24:38.615+0100 [DEBUG] plugin.terraform-provisioner-ansible: 	/usr/local/opt/go/libexec/src/net/rpc/server.go:475 +0x36b
2018/02/05 23:24:38 [ERROR] root: eval: *terraform.EvalValidateProvisioner, err: Warnings: []. Errors: [unexpected EOF]
2018/02/05 23:24:38 [ERROR] root: eval: *terraform.EvalSequence, err: Warnings: []. Errors: [unexpected EOF]
2018/02/05 23:24:38 [TRACE] [walkValidate] Exiting eval tree: aws_instance.ansible
2018/02/05 23:24:38 [TRACE] dag/walk: walking "provider.aws (close)"
2018/02/05 23:24:38 [TRACE] vertex 'root.provider.aws (close)': walking
2018/02/05 23:24:38 [TRACE] dag/walk: walking "meta.count-boundary (count boundary fixup)"
2018/02/05 23:24:38 [TRACE] vertex 'root.meta.count-boundary (count boundary fixup)': walking
2018/02/05 23:24:38 [TRACE] dag/walk: walking "provisioner.ansible (close)"
2018/02/05 23:24:38 [TRACE] vertex 'root.provisioner.ansible (close)': walking
2018/02/05 23:24:38 [TRACE] vertex 'root.provider.aws (close)': evaluating
2018/02/05 23:24:38 [TRACE] vertex 'root.provisioner.ansible (close)': evaluating
2018/02/05 23:24:38 [TRACE] [walkValidate] Entering eval tree: provisioner.ansible (close)
2018/02/05 23:24:38 [TRACE] root: eval: *terraform.EvalCloseProvisioner
2018/02/05 23:24:38 [TRACE] [walkValidate] Exiting eval tree: provisioner.ansible (close)
2018/02/05 23:24:38 [TRACE] [walkValidate] Entering eval tree: provider.aws (close)
2018/02/05 23:24:38 [TRACE] root: eval: *terraform.EvalCloseProvider
2018/02/05 23:24:38 [TRACE] [walkValidate] Exiting eval tree: provider.aws (close)
2018/02/05 23:24:38 [TRACE] vertex 'root.meta.count-boundary (count boundary fixup)': evaluating
2018/02/05 23:24:38 [TRACE] [walkValidate] Entering eval tree: meta.count-boundary (count boundary fixup)
2018/02/05 23:24:38 [TRACE] root: eval: *terraform.EvalCountFixZeroOneBoundaryGlobal
2018/02/05 23:24:38 [TRACE] EvalCountFixZeroOneBoundaryGlobal: count 1, search "aws_security_group.ssh.0", replace "aws_security_group.ssh"
2018/02/05 23:24:38 [TRACE] EvalCountFixZeroOneBoundaryGlobal: count 1, search "aws_instance.ansible.0", replace "aws_instance.ansible"
2018/02/05 23:24:38 [TRACE] [walkValidate] Exiting eval tree: meta.count-boundary (count boundary fixup)
2018-02-05T23:24:38.617+0100 [DEBUG] plugin: plugin process exited: path=/Users/rad/.terraform.d/plugins/terraform-provisioner-ansible
2018/02/05 23:24:38 [TRACE] dag/walk: walking "root"
2018/02/05 23:24:38 [TRACE] vertex 'root.root': walking
2018/02/05 23:24:38 [DEBUG] plugin: waiting for all plugin processes to complete...
2018-02-05T23:24:38.617+0100 [WARN ] plugin: error closing client during Kill: err="connection is shut down"
2018-02-05T23:24:38.619+0100 [DEBUG] plugin: plugin process exited: path=/Users/rad/dev/my/tf-kubernetes/.terraform/plugins/darwin_amd64/terraform-provider-aws_v1.8.0_x4



!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!

Terraform crashed! This is always indicative of a bug within Terraform.
A crash log has been placed at "crash.log" relative to your current
working directory. It would be immensely helpful if you could please
report the crash with Terraform[1] so that we can fix this.

When reporting bugs, please include your terraform version. That
information is available on the first line of crash.log. You can also
get it by running 'terraform --version' on the command line.

[1]: https://github.com/hashicorp/terraform/issues

!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!

Not very nice.

More tests

Add tests. Find out the right way of testing these plugins and provide sufficient tests.

support complex variables

Currently it seems extra_vars only supports maps of string variables, but more complex variable types than just string are needed.

This example is attempting to use a list type, which fails.

            extra_vars = {
                ansible_user = "ubuntu"
                nameservers = ["10.1.1.1"]
            }

This has more to do with a limitation of HCL, and perhaps HCL2 will help. Until then, one could perhaps do something like the chef provisioner with its attributes_json.

  provisioner "chef" {
    attributes_json = <<-EOF
      {
        "key": "value",
        "app": {
          "cluster1": {
            "nodes": [
              "webserver1",
              "webserver2"
            ]
          }
        }
      }
    EOF
  }

Can't build application "exit status 128". Request for pre-built artifacts.

After running make install, the build always fails with:
[ERROR] Export failed for github.com/hashicorp/terraform: Unable to export source: exit status 128
and sometimes the same error appears with other packages like google.golang.org/genproto etc.
I'm not a "Go person" and for people like me it would be cool to just download a package from releases and be happy. Hope it is possible.

"Could not find platform independent libraries"

Hey @radekg 👋

Thank you for this awesome plugin!
Unfortunately, I'm running into an issue with it.

Environment/configuration

Client (terraform & ansible): macOS 10.12.6
Remote (provisioning target): OpenBSD 6.1
Terraform version: v0.11.6
Ansible version: 2.4.2.0

Issue info

I'm trying to provision an OpenBSD system using a playbook that works absolutely fine if I execute it from the shell against the exact machine that's been provisioned by Terraform.
However, if it's done as a provisioning step, using your plugin, these are the results I get:

triton_machine.vpn: Provisioning with 'ansible'...
triton_machine.vpn (ansible): Generating temporary ansible inventory...
triton_machine.vpn (ansible): Writing temporary ansible inventory to '/var/folders/sn/f0p54j5x3l926vy3br_h1c3r0000gn/T/temporary-ansible-inventory182202682'...
triton_machine.vpn (ansible): Ansible inventory written.
triton_machine.vpn (ansible): Executing: ["/bin/sh" "-c" "ssh-keyscan -p 22 172.19.2.26 2>/dev/null | head -n1 > \"/var/folders/sn/f0p54j5x3l926vy3br_h1c3r0000gn/T/7467afe4-f8d1-4222-b7ff-c1fb7fb90c03\""]
triton_machine.vpn (ansible): running local command: ANSIBLE_FORCE_COLOR=true ansible-playbook ../../ansible/plays/ipsec.yml --inventory-file='/var/folders/sn/f0p54j5x3l926vy3br_h1c3r0000gn/T/temporary-ansible-inventory182202682' --extra-vars='{"ansible_bash_interpreter":"/usr/local/bin/bash","ansible_python_interpreter":"/usr/local/bin/python2.7"}' --forks=5 --vault-password-file='../../ansible/pass.sh' --user='root' --ssh-extra-args='-p 22 -o UserKnownHostsFile=/var/folders/sn/f0p54j5x3l926vy3br_h1c3r0000gn/T/7467afe4-f8d1-4222-b7ff-c1fb7fb90c03 -o ConnectTimeout=10 -o ConnectionAttempts=10'
triton_machine.vpn (ansible): Executing: ["/bin/sh" "-c" "ANSIBLE_FORCE_COLOR=true ansible-playbook ../../ansible/plays/ipsec.yml --inventory-file='/var/folders/sn/f0p54j5x3l926vy3br_h1c3r0000gn/T/temporary-ansible-inventory182202682' --extra-vars='{\"ansible_bash_interpreter\":\"/usr/local/bin/bash\",\"ansible_python_interpreter\":\"/usr/local/bin/python2.7\"}' --forks=5 --vault-password-file='../../ansible/pass.sh' --user='root' --ssh-extra-args='-p 22 -o UserKnownHostsFile=/var/folders/sn/f0p54j5x3l926vy3br_h1c3r0000gn/T/7467afe4-f8d1-4222-b7ff-c1fb7fb90c03 -o ConnectTimeout=10 -o ConnectionAttempts=10'"]

triton_machine.vpn (ansible): PLAY [vpn_servers] *************************************************************

triton_machine.vpn (ansible): TASK [Ensure sysctl params are set] ********************************************
triton_machine.vpn (ansible): failed: [172.19.2.26] (item=ip.forwarding) => {"changed": false, "item": "ip.forwarding", "module_stderr": "Shared connection to 172.19.2.26 closed.\r\n", "module_stdout": "Could not find platform independent libraries <prefix>\r\nConsider setting $PYTHONHOME to <prefix>[:<exec_prefix>]\r\nImportError: No module named site\r\n", "msg": "MODULE FAILURE", "rc": 0}
triton_machine.vpn (ansible): failed: [172.19.2.26] (item=esp.enable) => {"changed": false, "item": "esp.enable", "module_stderr": "Shared connection to 172.19.2.26 closed.\r\n", "module_stdout": "Could not find platform independent libraries <prefix>\r\nConsider setting $PYTHONHOME to <prefix>[:<exec_prefix>]\r\nImportError: No module named site\r\n", "msg": "MODULE FAILURE", "rc": 0}
triton_machine.vpn (ansible): failed: [172.19.2.26] (item=ah.enable) => {"changed": false, "item": "ah.enable", "module_stderr": "Shared connection to 172.19.2.26 closed.\r\n", "module_stdout": "Could not find platform independent libraries <prefix>\r\nConsider setting $PYTHONHOME to <prefix>[:<exec_prefix>]\r\nImportError: No module named site\r\n", "msg": "MODULE FAILURE", "rc": 0}
triton_machine.vpn (ansible): failed: [172.19.2.26] (item=ipcomp.enable) => {"changed": false, "item": "ipcomp.enable", "module_stderr": "Shared connection to 172.19.2.26 closed.\r\n", "module_stdout": "Could not find platform independent libraries <prefix>\r\nConsider setting $PYTHONHOME to <prefix>[:<exec_prefix>]\r\nImportError: No module named site\r\n", "msg": "MODULE FAILURE", "rc": 0}
triton_machine.vpn (ansible):   to retry, use: --limit @/Users/cmacrae/code/git/ops/ansible/plays/ipsec.retry

triton_machine.vpn (ansible): PLAY RECAP *********************************************************************
triton_machine.vpn (ansible): 172.19.2.26                : ok=0    changed=0    unreachable=0    failed=1


Error: Error applying plan:

1 error(s) occurred:

* triton_machine.vpn: Error running command 'ANSIBLE_FORCE_COLOR=true ansible-playbook ../../ansible/plays/ipsec.yml --inventory-file='/var/folders/sn/f0p54j5x3l926vy3br_h1c3r0000gn/T/temporary-ansible-inventory182202682' --extra-vars='{"ansible_bash_interpreter":"/usr/local/bin/bash","ansible_python_interpreter":"/usr/local/bin/python2.7"}' --forks=5 --vault-password-file='../../ansible/pass.sh' --user='root' --ssh-extra-args='-p 22 -o UserKnownHostsFile=/var/folders/sn/f0p54j5x3l926vy3br_h1c3r0000gn/T/7467afe4-f8d1-4222-b7ff-c1fb7fb90c03 -o ConnectTimeout=10 -o ConnectionAttempts=10'': exit status 2. Output:
PLAY [vpn_servers] *************************************************************

TASK [Ensure sysctl params are set] ********************************************
failed: [172.19.2.26] (item=ip.forwarding) => {"changed": false, "item": "ip.forwarding", "module_stderr": "Shared connection to 172.19.2.26 closed.\r\n", "module_stdout": "Could not find platform independent libraries <prefix>\r\nConsider setting $PYTHONHOME to <prefix>[:<exec_prefix>]\r\nImportError: No module named site\r\n", "msg": "MODULE FAILURE", "rc": 0}
failed: [172.19.2.26] (item=esp.enable) => {"changed": false, "item": "esp.enable", "module_stderr": "Shared connection to 172.19.2.26 closed.\r\n", "module_stdout": "Could not find platform independent libraries <prefix>\r\nConsider setting $PYTHONHOME to <prefix>[:<exec_prefix>]\r\nImportError: No module named site\r\n", "msg": "MODULE FAILURE", "rc": 0}
failed: [172.19.2.26] (item=ah.enable) => {"changed": false, "item": "ah.enable", "module_stderr": "Shared connection to 172.19.2.26 closed.\r\n", "module_stdout": "Could not find platform independent libraries <prefix>\r\nConsider setting $PYTHONHOME to <prefix>[:<exec_prefix>]\r\nImportError: No module named site\r\n", "msg": "MODULE FAILURE", "rc": 0}
failed: [172.19.2.26] (item=ipcomp.enable) => {"changed": false, "item": "ipcomp.enable", "module_stderr": "Shared connection to 172.19.2.26 closed.\r\n", "module_stdout": "Could not find platform independent libraries <prefix>\r\nConsider setting $PYTHONHOME to <prefix>[:<exec_prefix>]\r\nImportError: No module named site\r\n", "msg": "MODULE FAILURE", "rc": 0}
        to retry, use: --limit @/Users/cmacrae/code/git/ops/ansible/plays/ipsec.retry

PLAY RECAP *********************************************************************
172.19.2.26                : ok=0    changed=0    unreachable=0    failed=1

Provisioner expression being used

  provisioner "ansible" {
    local = "yes"
    vault_password_file = "../../ansible/pass.sh"
    hosts = ["${self.primaryip}"]
    groups = ["vpn_servers"]
    plays {
      playbook = "../../ansible/plays/ipsec.yml"
      extra_vars {
        ansible_python_interpreter = "/usr/local/bin/python2.7"
        ansible_bash_interpreter = "/usr/local/bin/bash"
      }
    }
  }

Conjecture

It seems to be a problem with the local execution of Ansible.
If I run the exact command produced by your provisioner in the shell, it all works fine.
It only fails as above when the execution is invoked by the plugin.
The only thing I can think of that might be interfering is the means by which I have Ansible installed on my client machine: I use Nix for system management on my macOS machines.
The "binaries" provided by the Ansible package are wrapped (as many other packages are) like so:

cmacrae $ cat $(which ansible-playbook)
#! /nix/store/h6815iv0gcn7hq1nk6qc1rzsv7x9gcz7-bash-4.4-p12/bin/bash -e
export PATH='/nix/store/573lgkld523h7hh5djfzfl8p66fcxawc-python-2.7.14/bin:/nix/store/9zr862kirdabm620qcni5rgi4glyfa0y-python2.7-ansible-2.4.2.0/bin:/nix/store/smgs43b3mnrja89f0g5dcvl0haf8pqyg-python2.7-setuptools-38.4.0/bin:/nix/store/yp8y4hfdbxf2dbfvgs8k0qcx67jzjsay-python2.7-boto-2.47.0/bin:/nix/store/nn6kp8jknb0al4yhxa39vp9bhv4ssbvp-python2.7-chardet-3.0.4/bin:/nix/store/lma2f5yk6aak4z6jsm7wvvm6idpx07fm-python2.7-netaddr-0.7.19/bin'${PATH:+':'}$PATH
export PYTHONNOUSERSITE='true'
exec -a "$0" "/nix/store/9zr862kirdabm620qcni5rgi4glyfa0y-python2.7-ansible-2.4.2.0/bin/.ansible-playbook-wrapped"  "${extraFlagsArray[@]}" "$@"

I think PYTHONNOUSERSITE may be important here.
My suspicion is that when ansible-playbook is invoked by the plugin, it somehow bypasses the wrapper script above, so PYTHONNOUSERSITE never gets set.

Thank you for your time. Let me know if there's any more information I can provide to assist with debugging. Any help on this is greatly appreciated!

Reintroduce plays

The usage would become:

resource "aws_instance" "..." {
  provisioner "ansible" {
    plays {
      playbook = "..."
      hosts = [...]
      groups = [...]
      ...
    }
    plays {
      module = "..."
      args {
        arg1 = ...
        arg2 = ...
      }
      hosts = [...]
      groups = [...]
      ...
    }
  }
}
  • plays must conflict with the current top-level playbook attribute
  • plays must be executed in the order defined
  • plays.playbook must conflict with plays.module

Where possible:

  • plays must share uploaded playbooks and vault password files
  • plays must share generated inventories

Arguments:

  • use_sudo, skip_install, skip_cleanup, install_version: provisioner only
  • any other existing argument: per plays with an override from provisioner
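
For illustration, the proposed override semantics might look like the sketch below. This is not an existing schema; forks is just an example of a per-play argument, and the attribute placement follows the rules above:

  provisioner "ansible" {
    use_sudo = true          # provisioner-only argument, applies to all plays
    forks    = 5             # provisioner-level default for every play
    plays {
      playbook = "site.yml"
      forks    = 10          # overrides the provisioner-level default for this play only
    }
  }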

Provisioning using a bastion does not work on a restricted shell

Provisioning using a bastion requires access to the mkdir command because of this.

However, sometimes the sysadmins who set up the bastion allow access only to the ssh command, so that it can be used purely to jump to the destination host.

It would be great if this provisioner could work without trying to store the SSH public keys of the destination host on the bastion.

Generate inventory

Before running the ansible command, an inventory needs to be generated and uploaded to a specified location.
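
For reference, a generated temporary inventory for the remote mode might look roughly like the following (host alias and group name are hypothetical; per the remote mode description, every entry uses ansible_connection=local):

  example-host ansible_connection=local

  [examples]
  example-host ansible_connection=local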

Don't set ANSIBLE_ROLES_PATH when not needed

When neither the ANSIBLE_ROLES_PATH nor the DEFAULT_ROLES_PATH environment variable is set,
ANSIBLE_ROLES_PATH is still set to defaultRolesPath. This prevents the ansible.cfg file from being used, since ANSIBLE_ROLES_PATH takes precedence over the roles_path attribute in ansible.cfg.

The proposal here is to avoid forcing ANSIBLE_ROLES_PATH when no path is provided, deferring to the ansible.cfg or ansible defaults when possible.

The behaviour of the roles_path parameter in the provisioner should remain untouched.
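
In other words, with a configuration like the sketch below (paths are placeholders), ANSIBLE_ROLES_PATH should be exported only because roles_path is explicitly set; omitting roles_path should leave the environment untouched and defer to ansible.cfg or the Ansible defaults:

  provisioner "ansible" {
    plays {
      playbook = {
        file_path  = "./ansible/playbook.yml"
        roles_path = ["./ansible/roles"]   # only this entry should cause ANSIBLE_ROLES_PATH to be set
      }
    }
  }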

aws_volume_attachment not happening.

Steps to reproduce

With the following Terraform file, I never see the example_storage volume attach when the ansible provisioner is included, but it does attach when I strip the ansible provisioner out.

Expected behavior

Storage should attach.

Actual behavior

Doesn't.
...

Configuration

Terraform version: Terraform v0.11.11 + provider.aws v1.56.0

terraform-provisioner-ansible version/SHA: 2.1.1

Terraform file / provisioner configuration:

provider "aws" {
  region = "${var.region}"
}

resource "aws_volume_attachment" "example_storage_attachment" {
  device_name  = "/dev/xvdb"
  volume_id    = "${aws_ebs_volume.example_storage.id}"
  instance_id  = "${aws_instance.example.id}"
}

resource "aws_instance" "example" {
  count           = "1"
  ami             = "${data.aws_ami.selected.id}"
  instance_type   = "${var.instance_type}"
  key_name        = "${var.key_name}"
  subnet_id       = "${element(data.aws_subnet_ids.selected.ids, count.index)}"
  security_groups = [...stripped out]
  depends_on      = ["aws_ebs_volume.example_storage"]

  root_block_device {
    volume_size = 80
  }
  tags {
    Name = "Example"
  }

  connection {
    user = "ec2-user"
    private_key = "${file("~/${var.key_name}.pem")}"
  }

  provisioner "ansible" {
    plays {
      playbook = {
        file_path = "./ansible/playbook.yml"
        roles_path = ["./ansible/roles"]
      }
      groups = ["examples"]
      extra_vars = {
        ansible_user = "ec2-user"
        ansible_user_id = "ec2-user"
      }
    }

    remote { }
  }
}

# Example storage volume
resource "aws_ebs_volume" "example_storage" {
  size = 20
  type = "gp2"
  availability_zone = "${var.volume_region}"
  encrypted = "true"

  tags {
    Name = "Example Storage"
  }
}
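
If the playbook expects the volume to already be attached, one possible workaround is to move the Ansible run onto a null_resource that explicitly depends on the attachment, so Ansible executes only after aws_volume_attachment completes. This is a sketch reusing the names from the report above, and it assumes the local mode applies when remote {} is omitted:

resource "null_resource" "example_provisioning" {
  # ensure the volume is attached before Ansible runs
  depends_on = ["aws_volume_attachment.example_storage_attachment"]

  connection {
    host        = "${aws_instance.example.public_ip}"
    user        = "ec2-user"
    private_key = "${file("~/${var.key_name}.pem")}"
  }

  provisioner "ansible" {
    plays {
      playbook = {
        file_path  = "./ansible/playbook.yml"
        roles_path = ["./ansible/roles"]
      }
      hosts  = ["${aws_instance.example.public_ip}"]
      groups = ["examples"]
    }
    # no remote {} block: Ansible runs on the machine executing Terraform
  }
}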
