terraform-google-gcloud's People

Contributors

aaron-lane, apeabody, bharathkkb, cloud-foundation-bot, dansiviter, dogmatic69, erjohnso, estecker, g-awmalik, github-actions[bot], haizaar, kunalkg11, marcus-foobar, milesmatthias, morgante, naormatania, njculver, release-please[bot], renovate[bot], taylorludwig, tishen25, vlad-ro, yashbhutwala, ymotongpoo, yuryninog


terraform-google-gcloud's Issues

additional_components null resource exits with "not found" - wrong shebang

TL;DR

The null resource for additional components uses a local-exec provisioner, which means the executed command looks like /bin/sh -c <command>.
The script scripts/check_components.sh contains the shebang line #!/bin/bash, so on images without bash but with sh (such as basic Alpine images), the command fails.

Expected behavior

The script is executed successfully.

Observed behavior

The script exits with exit code 127 (command not found).

Terraform Configuration

# pasted from https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/blob/master/dns.tf
module "gcloud_delete_default_kube_dns_configmap" {
  source                      = "terraform-google-modules/gcloud/google//modules/kubectl-wrapper"
  version                     = "~> 2.1.0"
  enabled                     = (local.custom_kube_dns_config || local.upstream_nameservers_config) && !var.skip_provisioners
  cluster_name                = google_container_cluster.primary.name
  cluster_location            = google_container_cluster.primary.location
  project_id                  = var.project_id
  upgrade                     = var.gcloud_upgrade
  impersonate_service_account = var.impersonate_service_account

  kubectl_create_command  = "${path.module}/scripts/delete-default-resource.sh kube-system configmap kube-dns"
  kubectl_destroy_command = ""

  module_depends_on = concat(
    [google_container_cluster.primary.master_version],
    [for pool in google_container_node_pool.pools : pool.name]
  )
}

Terraform Version

➜  terraform version
Terraform v1.1.3

Additional information

No response
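A portable fix would be to restrict the check script to POSIX constructs and declare #!/bin/sh, so it runs under dash/busybox sh as well as bash. This is a sketch, not the module's actual script; the component name is illustrative:

```shell
#!/bin/sh
# POSIX-only: `case` and `[ ]` work under busybox/dash sh (e.g. Alpine),
# unlike bash-isms such as `[[ ]]` or arrays.
component="kubectl"
case ",$component," in
  *,kubectl,*) echo "kubectl requested" ;;
esac
```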

Using service account impersonation for terraform when invoking the kubectl-wrapper module

Related to terraform-google-modules/terraform-google-kubernetes-engine#874

I have a use case where I'm using shared Terraform Cloud Agents, and my TF Cloud workspace is isolated via service account impersonation, i.e. the GSA that the Terraform agent runs as does not have GKE Admin IAM by default. The problem is that since this module uses the kubectl-wrapper module like this, which uses this gcloud command here, it runs with the agent's IAM instead of the impersonated service account's, and is therefore unable to create the GKE cluster. Are there any potential workarounds or ideas for such a setup?

Pipe character in destroy command incorrectly interpreted

I'm using the kubectl-wrapper to install ECK (operator and CRDs). According to the documentation, the way to delete the operator is to run both of these commands:

kubectl get namespaces --no-headers -o custom-columns=:metadata.name | xargs -n1 kubectl delete elastic --all -n 

kubectl delete -f https://download.elastic.co/downloads/eck/1.8.0/operator.yaml

The problem is that when you introduce this as one command in kubectl_destroy_command, Terraform cuts off everything after the pipe character. Any recommendations on how to solve this?

module "deploy_eck_operator" {
  source  = "terraform-google-modules/gcloud/google//modules/kubectl-wrapper"

  project_id              = local.project.project_id
  cluster_name            = module.gke_cluster.name
  cluster_location        = var.region
  kubectl_create_command  = "kubectl apply -f https://download.elastic.co/downloads/eck/1.8.0/operator.yaml"
  kubectl_destroy_command = "kubectl get namespaces --no-headers -o custom-columns=:metadata.name | xargs -n1 kubectl delete elastic --all -n && kubectl delete -f https://download.elastic.co/downloads/eck/1.8.0/operator.yaml"
  skip_download           = true
  upgrade                 = false
}

When I run terraform destroy -auto-approve, it times out. When I look at the output for the kubectl-wrapper, I can see this:

module.deploy_eck_operator.module.gcloud_kubectl.null_resource.run_destroy_command[0] (local-exec): Fetching cluster endpoint and auth data.
module.deploy_eck_operator.module.gcloud_kubectl.null_resource.run_destroy_command[0] (local-exec): kubeconfig entry generated for elastic-search-cluster.
module.deploy_eck_operator.module.gcloud_kubectl.null_resource.run_destroy_command[0] (local-exec): + kubectl get namespaces --no-headers -o custom-columns=:metadata.name
module.deploy_eck_operator.module.gcloud_kubectl.null_resource.run_destroy_command[0]: Still destroying... [id=5726237551751552069, 10s elapsed]
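One workaround for the truncation is to move the piped commands into a standalone script, so kubectl_destroy_command passes a single token to the wrapper instead of a string containing `|`. The script name delete-eck.sh is hypothetical:

```shell
# Write the two-step teardown into its own script; the wrapper then only
# ever sees the script path, never the pipe character.
cat > delete-eck.sh <<'EOF'
#!/bin/sh
kubectl get namespaces --no-headers -o custom-columns=:metadata.name \
  | xargs -n1 kubectl delete elastic --all -n
kubectl delete -f https://download.elastic.co/downloads/eck/1.8.0/operator.yaml
EOF
chmod +x delete-eck.sh
```

In the module, kubectl_destroy_command would then be "${path.module}/delete-eck.sh".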

Real skip_download, download gcloud cli on-demand

We're using the Google project factory in our project and we're creating multiple projects (currently 17). Each project factory module instantiates this module 3 times. The cache folder is 110 MB in size, so we end up with a modules folder of 17 * 3 * 110 MB = 5.6 GB.

I would like to request that the cache/ folder not be included in the artifact and instead be downloaded on demand. We could then rely on the skip_download flag and use the gcloud CLI from our pipeline container image.

The situation has gotten really bad for us because since yesterday we're running into GitHub rate limits.

Repo size is a few Gigs!

Hi,

When cloning this repo I noticed it was very large. Taking a look at the size of the blobs:

git rev-list --objects --all \
| git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' \
| sed -n 's/^blob //p' \
| sort --numeric-sort --key=2 \
| cut -c 1-12,41- \
| $(command -v gnumfmt || echo numfmt) --field=2 --to=iec-i --suffix=B --padding=7 --round=nearest

I see that the largest blobs are:

d363c8a22ba7  844KiB cache/darwin/jq
f48b0ca92144  3.8MiB cache/linux/jq
75fd8ba99519   21MiB cache/darwin/google-cloud-sdk.tar.gz
8c9828a79cc1   21MiB cache/darwin/google-cloud-sdk.tar.gz
0de79388f3ff   22MiB cache/darwin/google-cloud-sdk.tar.gz
8e63c3b02a91   22MiB cache/darwin/google-cloud-sdk.tar.gz
a7e4b6be7829   22MiB cache/darwin/google-cloud-sdk.tar.gz
63002ef2c81d   22MiB cache/darwin/google-cloud-sdk.tar.gz
14171ecf8ace   22MiB cache/darwin/google-cloud-sdk.tar.gz
6afd38c96116   22MiB cache/darwin/google-cloud-sdk.tar.gz
b78769c44c12   22MiB cache/darwin/google-cloud-sdk.tar.gz
cc42739faf5f   28MiB cache/linux/google-cloud-sdk.tar.gz
c5f94c7cf7a2   28MiB cache/linux/google-cloud-sdk.tar.gz
8d0ac84483b2   29MiB cache/linux/google-cloud-sdk.tar.gz
6f3ced3b0c2d   29MiB cache/linux/google-cloud-sdk.tar.gz
2be0be2e1ead   29MiB cache/linux/google-cloud-sdk.tar.gz
3b288442db09   29MiB cache/linux/google-cloud-sdk.tar.gz
d9ae2e931b9e   29MiB cache/linux/google-cloud-sdk.tar.gz
113c1d621257   29MiB cache/linux/google-cloud-sdk.tar.gz
b042b74c3ce9   29MiB cache/linux/google-cloud-sdk.tar.gz
4a7bff9d634c   46MiB cache/darwin/google-cloud-sdk.tar.gz
d7d8e990a635   47MiB cache/darwin/google-cloud-sdk.tar.gz
cbed5f49838e   47MiB cache/darwin/google-cloud-sdk.tar.gz
9e243acd0dce   47MiB cache/darwin/google-cloud-sdk.tar.gz
ea809a943941   47MiB cache/darwin/google-cloud-sdk.tar.gz
d69e3b0586ee   47MiB cache/darwin/google-cloud-sdk.tar.gz
b3a6db909e8a   48MiB cache/darwin/google-cloud-sdk.tar.gz
8215801b7a25   48MiB cache/darwin/google-cloud-sdk.tar.gz
15c46155a4f6   48MiB cache/darwin/google-cloud-sdk.tar.gz
7679495ee5ec   49MiB cache/darwin/google-cloud-sdk.tar.gz
ef4529f27bab   53MiB cache/linux/google-cloud-sdk.tar.gz
16fb075994b2   54MiB cache/linux/google-cloud-sdk.tar.gz
3805942c637d   54MiB cache/linux/google-cloud-sdk.tar.gz
d7a7fe100c77   54MiB cache/linux/google-cloud-sdk.tar.gz
80d8c8e72df4   54MiB cache/linux/google-cloud-sdk.tar.gz
d4efd180bd7e   54MiB cache/linux/google-cloud-sdk.tar.gz
ca7c1d74d521   54MiB cache/linux/google-cloud-sdk.tar.gz
1cc4ef259c43   55MiB cache/linux/google-cloud-sdk.tar.gz
57edbbbbc87f   55MiB cache/linux/google-cloud-sdk.tar.gz
cb72bf30b7ef   56MiB cache/linux/google-cloud-sdk.tar.gz

In the .gitignore I see that the cache directory is ignored; I'm assuming that was added after these files were checked in.

It's pretty easy to remove large files from git history, but it's hard to express in a PR since it requires rewriting the history. Anyway, I'd really appreciate it if a repo owner could do this.

Thank you!

use existing k8s sa fails

Hello,

I'm trying to annotate an existing k8s service account, but I'm getting this:

 Error: local-exec provisioner error
│ 
│   with module.annotate-sa.module.gcloud_kubectl.null_resource.run_destroy_command[0],
│   on .terraform/modules/annotate-sa/main.tf line 258, in resource "null_resource" "run_destroy_command":
│  258:   provisioner "local-exec" {
│ 
│ Error running command 'PATH=/google-cloud-sdk/bin:$PATH
│ .terraform/modules/annotate-sa/modules/kubectl-wrapper/scripts/kubectl_wrapper.sh
│ dev-cluster us-central1  false false kubectl annotate sa -n kube-system
│ external-dns iam.gke.io/gcp-service-account-

Does anyone know how to fix this?

terraform version : 1.0.0
google: 3.53
kubernetes: 2.3.2

use_tf_google_credentials_env_var not chained in kubectl-wrapper

Version: 2.1.0

kubectl-wrapper cannot make use of use_tf_google_credentials_env_var, as it's not chained through. This means it can only authenticate using either a pre-authorised image or via a file path, neither of which is possible on Terraform Cloud.

It is somewhat confusing that use_tf_google_credentials_env_var exists, as I would have hoped it replicated the behaviour described in the provider. However, that does not seem to be the case here, and fixing that would require a much bigger change, so just chaining it through will suffice.

Support using alongside terraform-google-provider 4.x.x

TL;DR

The current constraint on the version of terraform-google-provider prevents use in projects that have switched to the 4.x series of the provider.

Terraform Resources

No response

Detailed design

No response

Additional information

We would like to use this module to invoke gcloud to create resources not yet supported by the google and google-beta providers. These resources have dependencies on other regular resources created with the providers (for instance, a regional network-endpoint-group backed by an api-gateway). I blindly tried this module, but the version constraint prevents it: we have already migrated to 4.0.0 of google and google-beta.

Ambiguous explanations on `create_cmd_triggers` input

At first look at the documentation, I was confused because the description of create_cmd_triggers says "List of any additional triggers for the create command execution." while its type and default are defined as a map. The description itself also wasn't clear to me (what is a "trigger" in this context?).

To better understand the input, I looked through the examples directory for use cases, but I couldn't find any example that uses create_cmd_triggers, so I dug into the source code.

It seems create_cmd_triggers is something that overrides the flags or commands actually invoked in local-exec, rather than a "trigger".

Could you be more explicit in the documentation's description about what this input is for and how it is intended to be used?

platform default to linux

When used indirectly (e.g. when creating a project using terraform-google-project-factory), the platform variable of the gcloud module is not accessible, so its default of linux is problematic when running on a Mac (yes, Docker is better, but still :) ).

Some ideas:

  • skip_download could be renamed to something like skip_download_if_gcloud_found and defaulted to true
  • there could be an attempt to detect the OS if the platform is empty (default "") with something like:
if [[ $(uname -s) == "Darwin" ]]; then echo "darwin" ; else echo "linux" ; fi
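The detection in the second idea could be sketched in POSIX sh (so it also works where bash is absent) as:

```shell
# Map uname output to the module's platform values; anything non-Darwin
# falls back to linux, matching the current default.
platform="$(uname -s)"
case "$platform" in
  Darwin) platform="darwin" ;;
  *)      platform="linux"  ;;
esac
echo "$platform"
```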

Module does not work with Ephemeral Agents/Runners

This module is not designed for ephemeral runners/agents, such as Terraform Cloud remote operations, that cannot have gcloud on the PATH and cannot use custom base images. If anything fails, you have to unwind and re-apply all the changes to force gcloud to be re-downloaded.

Example:

module.asm_install.module.gcloud_kubectl.null_resource.additional_components[0] (local-exec): Executing; ["/bin/sh" "-c" ".terraform/modules/asm_install/scripts/check_components.sh .terraform/modules/asm_isntall/cache/xxxx/google-cloud-skd/bin/gcloud kubectl,kpt,beta.kustomize"]

Error: Error running command '.terraform/modules/asm_install/scripts/check_components.sh .terraform/modules/asm_isntall/cache/xxxx/google-cloud-skd/bin/gcloud kubectl,kpt,beta.kustomize': exit status 127. Output:

In the above example, due to a previously failed run, the repeat run fails to get past this step because it assumes gcloud exists in the cache. However, when it runs, it returns a 127 (command not found) exit code.

Apply fails on mkdir if command has changed

The issue I just ran into was that I changed the value of create_cmd_body after a successful apply and could not run another apply. From what I can tell, this is probably because the prepare_cache null resource was invalidated (due to the changed arguments), but the random id was already in my state file, so it tried to create the same cache directory that I already had.

There are a few ways to fix this, the easiest would probably be just to change the command to mkdir -p here.
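The difference is easy to demonstrate: plain mkdir fails with a non-zero exit when the directory already exists, while mkdir -p is idempotent, which is what a re-run against stale state needs:

```shell
# First creation succeeds either way; only -p tolerates the re-run.
dir="$(mktemp -d)/cache"
mkdir "$dir"                    # first run: ok
mkdir "$dir" 2>/dev/null \
  || echo "plain mkdir fails on re-run"
mkdir -p "$dir" \
  && echo "mkdir -p succeeds on re-run"
```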

I suppose there are other things one could do, but most things that I can think of probably would result in unnecessary downloads of binaries, or would require terraform's fileexists to work with directories.

I solved this by deleting my local cache directory in <root path>/.terraform/modules/<path to this module>/terraform-google-gcloud-x.x.x/cache/<random id> in case anyone winds up running across this in the future.

Toss it on the pile, I guess, since there's an easy workaround. And thanks for this module; it really made my life a lot easier!

This module does not detect drift in underlying kubernetes objects.

Problem Statement

I have an issue when using this module (indirectly, from the https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/tree/master/modules/workload-identity module) where it does not detect drift in the underlying Kubernetes object, nor attempt to correct that drift.

Use Case

Use the Workload Identity module to annotate a KSA that is managed by a helm_release. This should handle the case where I taint the helm_release resource and re-apply (thereby destroying and recreating the KSA). I'd expect the Workload Identity module to detect the drift in the underlying resource, which now has a KSA missing the annotation.

I've opened this against this module because I think more generally we should provide a way for this kubectl wrapper to detect drift in k8s objects.

Ideas

I'm not a terraform expert but perhaps we could:

  1. add some kind of "trigger" or "read" command, a kubectl command to run to determine if the create method should be run again.
  2. expose some module-level "recreate trigger" so I could tie this to my helm release resource, saying "recreate this annotation every time the helm release is modified".

cc my favorite terraform gymnasts for hopefully more creative ideas: @brandonjbjelland @morgante @bharathkkb

Get kubectl command stdout

Is there a way to get the result of a command?

For example, getting the secrets:

   source = "terraform-google-modules/gcloud/google//modules/kubectl-wrapper"
   kubectl_create_command  = "kubectl get secrets"

How do I get the result of this command?
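One common workaround (not part of this module, and assuming kubectl and jq are available on the runner) is the external data source from the hashicorp/external provider, which captures a program's JSON stdout as a Terraform value:

```hcl
# Sketch: run kubectl out-of-band and expose one value of its output.
# The external data source requires the program to print a JSON object
# whose values are all strings.
data "external" "secret_count" {
  program = ["sh", "-c", "kubectl get secrets -o json | jq '{count: (.items | length | tostring)}'"]
}

# data.external.secret_count.result.count can then be referenced elsewhere.
```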

Mysterious change in additional_components_command path

Hi.

I'm using the gcloud module to provision Cloud Firestore in my GCP project:

module "firestore_native_mode" {
  source                = "terraform-google-modules/gcloud/google"
  version               = "2.0.0"
  additional_components = ["alpha"]
  create_cmd_body       = "--quiet --project ${var.project_id} alpha firestore databases create --region=${google_app_engine_application.app-engine-app.location_id}"
  skip_download         = false
  use_tf_google_credentials_env_var = true
}

This module is part of quite a few TF resources and modules that define around 170 different resources, including the new GCP project itself. Everything was working fine for a couple of days; both terraform plan and apply behaved as expected, until today. With no visible change in any TF file, the gcloud module now plans this:

  # module.backend.module.firestore_native_mode.null_resource.additional_components[0] must be replaced
-/+ resource "null_resource" "additional_components" {
      ~ id       = "427440199556097228" -> (known after apply)
      ~ triggers = { # forces replacement
          ~ "additional_components_command" = ".terraform/modules/backend.firestore_native_mode/terraform-google-gcloud-2.0.0/scripts/check_components.sh .terraform/modules/backend.firestore_native_mode/terraform-google-gcloud-2.0.0/cache/694c9ae1/google-cloud-sdk/bin/gcloud alpha" -> ".terraform/modules/backend.firestore_native_mode/scripts/check_components.sh .terraform/modules/backend.firestore_native_mode/cache/694c9ae1/google-cloud-sdk/bin/gcloud alpha"
            "arguments"                     = "005739f89467ea5f1506c12c64b7930a"
            "md5"                           = "7f22bf39aaf5ce5980111c3587bef5b5"
        }

Compare:

BEFORE: ".terraform/modules/backend.firestore_native_mode/terraform-google-gcloud-2.0.0/scripts/check_components.sh .terraform/modules/backend.firestore_native_mode/terraform-google-gcloud-2.0.0/cache/694c9ae1/google-cloud-sdk/bin/gcloud alpha"
AFTER:  ".terraform/modules/backend.firestore_native_mode/                             /scripts/check_components.sh .terraform/modules/backend.firestore_native_mode/                             /cache/694c9ae1/google-cloud-sdk/bin/gcloud alpha"

Actually, it's one of around 10 different resources the plan wants to replace, but the pattern is basically the same. It seems that, for an
unknown reason, terraform-google-gcloud-2.0.0 has disappeared from the additional_components_command path.

My remote state indeed contains terraform-google-gcloud-2.0.0 for this resource:

{
  "module": "module.backend.module.firestore_native_mode",
  "mode": "managed",
  "type": "null_resource",
  "name": "additional_components",
  "each": "list",
  "provider": "provider.null",
  "instances": [
    {
      "index_key": 0,
      "schema_version": 0,
      "attributes": {
        "id": "427440199556097228",
        "triggers": {
          "additional_components_command": ".terraform/modules/backend.firestore_native_mode/terraform-google-gcloud-2.0.0/scripts/check_components.sh .terraform/modules/backend.firestore_native_mode/terraform-google-gcloud-2.0.0/cache/694c9ae1/google-cloud-sdk/bin/gcloud alpha",
          "arguments": "005739f89467ea5f1506c12c64b7930a",
          "md5": "7f22bf39aaf5ce5980111c3587bef5b5"
        }
}

I have already pinned the module version to 2.0.0, so I'd rule out any problem with a silent version upgrade.

I'll appreciate any hints. Currently, I'm blocked from applying any changes to my remote state, since I'm afraid to let TF execute a plan I do not understand.

Configured destroy entrypoint (and body) not being invoked

TL;DR

The configured destroy_cmd_entrypoint and destroy_cmd_body are not being invoked, even though the logs show the destroy was triggered and the resource was removed from the .tfstate.

Expected behavior

On "destroy" (such as when TF needs to do a replace) the configured destroy_cmd_entrypoint and destroy_cmd_body should be invoked, similar to what happens on create, where the create_cmd_entrypoint and create_cmd_body are clearly invoked.

Observed behavior

Creates work as expected, calling the configured entrypoint, but destroys do not. The .tfstate data is indeed created and destroyed as expected, but the custom destroy entrypoint never gets invoked.

Terraform Configuration

terraform {
  required_version = ">= 1.0.0"

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "3.90.0"
    }
  }
}

provider "google" {
  region      = "us-central1"
  project     = "my-xxx"
  credentials = "my-xxx-account.json"
}

# comment out the module below after "create" to get TF to "destroy"
module "my_xxx_tester" {
  source  = "terraform-google-modules/gcloud/google"
  version = "3.1.0"

  create_cmd_entrypoint  = "gcloud"
  create_cmd_body        = "version"
  destroy_cmd_entrypoint = "gcloud"
  destroy_cmd_body       = "version"
}

Terraform Version

Terraform v1.1.5
on darwin_amd64

Debug Output

https://gist.github.com/crandall-chow-bl/b5a427c276f6c2c6be0d402304affe53

Additional information

Steps to Reproduce

  1. terraform apply with the TF file above to "create"
  2. Comment out the module in the TF file
  3. terraform apply with the TF file above to "destroy"
  4. Note that the destroy entrypoint/body do not get invoked

Stop adding binaries to the repository

Current problem

The size of this repository is much larger than it should be.

$ du -sh terraform-google-gcloud
1.3G    terraform-google-gcloud
$ time git clone https://github.com/terraform-google-modules/terraform-google-gcloud.git

Cloning into 'terraform-google-gcloud'...
remote: Enumerating objects: 636, done.
remote: Total 636 (delta 0), reused 0 (delta 0), pack-reused 636
Receiving objects: 100% (636/636), 1.23 GiB | 11.51 MiB/s, done.
Resolving deltas: 100% (292/292), done.
Updating files: 100% (62/62), done.
git clone   10.06s user 9.24s system 17% cpu 1:52.93 total

It takes almost 2 minutes to clone from GitHub over a 150 Mbps connection.

After cloning I investigated the cause and it looks like google-cloud-sdk.tar.gz has been added to the repository in multiple different versions, both for Linux and Darwin.

git rev-list --objects --all \
| git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' \
| sed -n 's/^blob //p' \
| sort --numeric-sort --key=2 \
| cut -c 1-12,41- \
| $(command -v gnumfmt || echo numfmt) --field=2 --to=iec-i --suffix=B --padding=7 --round=nearest | tail -50
6ef2f3411f67  4.9KiB main.tf
398085a14170  5.0KiB main.tf
8e5ef607508e  6.0KiB main.tf
3d845d8691ea  6.0KiB main.tf
4f3606e5e4f3  6.1KiB main.tf
a1e9f2110c7d  6.2KiB main.tf
318d0ea031f7  6.3KiB main.tf
1e560423e5df  6.8KiB main.tf
cc752fe0a126  6.9KiB main.tf
548021d21412  7.4KiB main.tf
6258b131c2d9  7.5KiB main.tf
e83e8397560a  8.9KiB main.tf
261eeb9e9f8b   11KiB LICENSE
d64569567334   11KiB LICENSE
d363c8a22ba7  844KiB cache/darwin/jq
f48b0ca92144  3.8MiB cache/linux/jq
75fd8ba99519   21MiB cache/darwin/google-cloud-sdk.tar.gz
8c9828a79cc1   21MiB cache/darwin/google-cloud-sdk.tar.gz
0de79388f3ff   22MiB cache/darwin/google-cloud-sdk.tar.gz
8e63c3b02a91   22MiB cache/darwin/google-cloud-sdk.tar.gz
a7e4b6be7829   22MiB cache/darwin/google-cloud-sdk.tar.gz
63002ef2c81d   22MiB cache/darwin/google-cloud-sdk.tar.gz
14171ecf8ace   22MiB cache/darwin/google-cloud-sdk.tar.gz
6afd38c96116   22MiB cache/darwin/google-cloud-sdk.tar.gz
b78769c44c12   22MiB cache/darwin/google-cloud-sdk.tar.gz
cc42739faf5f   28MiB cache/linux/google-cloud-sdk.tar.gz
c5f94c7cf7a2   28MiB cache/linux/google-cloud-sdk.tar.gz
8d0ac84483b2   29MiB cache/linux/google-cloud-sdk.tar.gz
6f3ced3b0c2d   29MiB cache/linux/google-cloud-sdk.tar.gz
2be0be2e1ead   29MiB cache/linux/google-cloud-sdk.tar.gz
3b288442db09   29MiB cache/linux/google-cloud-sdk.tar.gz
d9ae2e931b9e   29MiB cache/linux/google-cloud-sdk.tar.gz
113c1d621257   29MiB cache/linux/google-cloud-sdk.tar.gz
b042b74c3ce9   29MiB cache/linux/google-cloud-sdk.tar.gz
4a7bff9d634c   46MiB cache/darwin/google-cloud-sdk.tar.gz
d7d8e990a635   47MiB cache/darwin/google-cloud-sdk.tar.gz
cbed5f49838e   47MiB cache/darwin/google-cloud-sdk.tar.gz
9e243acd0dce   47MiB cache/darwin/google-cloud-sdk.tar.gz
ea809a943941   47MiB cache/darwin/google-cloud-sdk.tar.gz
d69e3b0586ee   47MiB cache/darwin/google-cloud-sdk.tar.gz
b3a6db909e8a   48MiB cache/darwin/google-cloud-sdk.tar.gz
8215801b7a25   48MiB cache/darwin/google-cloud-sdk.tar.gz
ef4529f27bab   53MiB cache/linux/google-cloud-sdk.tar.gz
16fb075994b2   54MiB cache/linux/google-cloud-sdk.tar.gz
3805942c637d   54MiB cache/linux/google-cloud-sdk.tar.gz
d7a7fe100c77   54MiB cache/linux/google-cloud-sdk.tar.gz
80d8c8e72df4   54MiB cache/linux/google-cloud-sdk.tar.gz
d4efd180bd7e   54MiB cache/linux/google-cloud-sdk.tar.gz
ca7c1d74d521   54MiB cache/linux/google-cloud-sdk.tar.gz
1cc4ef259c43   55MiB cache/linux/google-cloud-sdk.tar.gz

Suggested solution

Stop adding the large "cache files" to the repository by adding the cache directory to .gitignore:

echo cache/ >> .gitignore

Fix history

Caution: this will rewrite history, so if this is done, all clones need to be re-cloned.

I'd like to suggest that you might want to consider cleaning up the historic commits as well.

Example using bfg (first get a bare/mirror repo):

$ git clone --mirror https://github.com/terraform-google-modules/terraform-google-gcloud
Cloning into bare repository 'terraform-google-gcloud.git'...
remote: Enumerating objects: 1, done.
remote: Counting objects: 100% (1/1), done.
remote: Total 691 (delta 0), reused 0 (delta 0), pack-reused 690
Receiving objects: 100% (691/691), 1.46 GiB | 11.68 MiB/s, done.
Resolving deltas: 100% (313/313), done.
$ bfg --strip-blobs-bigger-than 1M --no-blob-protection terraform-google-gcloud.git

Using repo : /private/tmp/terraform-google-gcloud.git

Scanning packfile for large blobs: 691
Scanning packfile for large blobs completed in 72 ms.
Found 41 blob ids for large blobs - biggest=57160066 smallest=3953824
Total size (unpacked)=1586445370
Found 0 objects to protect
Found 46 commit-pointing refs : HEAD, refs/heads/bugfix/gcloud_delete_cyclic_dependencies, refs/heads/create-pull-request/patch-gcloud-version, ...

Protected commits
-----------------

You're not protecting any commits, which means the BFG will modify the contents of even *current* commits.

This isn't recommended - ideally, if your current commits are dirty, you should fix up your working copy and commit that, check that your build still works, and only then run the BFG to clean up your history.

Cleaning
--------

Found 140 commits
Cleaning commits:       100% (140/140)
Cleaning commits completed in 1,476 ms.

Updating 45 Refs
----------------

        Ref                                                   Before     After
        -------------------------------------------------------------------------
        refs/heads/bugfix/gcloud_delete_cyclic_dependencies | 8dce5f1b | 2e945858
        refs/heads/create-pull-request/patch-gcloud-version | fb79cf05 | ec76a6e8
        refs/heads/feature/ordering                         | 7863d696 | 6cf49db5
        refs/heads/feature/skip_download                    | 8a12e54e | d2dfb194
        refs/heads/fix/triggers                             | aca620bd | c425121b
        refs/heads/master                                   | f4c9d56c | 99f3206f
        refs/heads/release-v0.5.0                           | 366035af | f55d423a
        refs/heads/release-v0.5.1                           | 893a9512 | 2e7e1aee
        refs/pull/1/head                                    | 7b1ea507 | 19cfb315
        refs/pull/11/head                                   | 9d65bec4 | 4df50447
        refs/pull/12/head                                   | 4d633f38 | 1982a584
        refs/pull/13/head                                   | a21a482c | 2d9c7a89
        refs/pull/14/head                                   | c69ca283 | 91539319
        refs/pull/15/head                                   | fd7feaf4 | 3b21da95
        refs/pull/16/head                                   | 4a824c26 | 61173d26
        ...

Updating references:    100% (45/45)
...Ref update completed in 283 ms.

Commit Tree-Dirt History
------------------------

        Earliest                                              Latest
        |                                                          |
        DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD

        D = dirty commits (file tree fixed)
        m = modified commits (commit message or parents changed)
        . = clean commits (no changes to file tree)

                                Before     After
        -------------------------------------------
        First modified commit | 7b1ea507 | 19cfb315
        Last dirty commit     | 8b6ccc89 | 1c1c069c

Deleted files
-------------

        Filename                  Git id
        ---------------------------------------------------------------------
        google-cloud-sdk.tar.gz | ea809a94 (47.2 MB), b042b74c (29.3 MB), ...
        jq                      | f48b0ca9 (3.8 MB)


In total, 318 object ids were changed. Full details are logged here:

        /private/tmp/terraform-google-gcloud.git.bfg-report/2020-04-07/08-20-16

BFG run is complete! When ready, run: git reflog expire --expire=now --all && git gc --prune=now --aggressive
$ cd terraform-google-gcloud.git

$ git reflog expire --expire=now --all && git gc --prune=now --aggressive

Enumerating objects: 691, done.
Counting objects: 100% (691/691), done.
Delta compression using up to 8 threads
Compressing objects: 100% (597/597), done.
Writing objects: 100% (691/691), done.
Selecting bitmap commits: 131, done.
Building bitmaps: 100% (105/105), done.
Total 691 (delta 330), reused 108 (delta 0), pack-reused 0

The bare repo is now less than 500 KB:

du -sh terraform-google-gcloud.git
496K    terraform-google-gcloud.git

A clone of the cleaned up bare repo shows less than 2 MB:

$ du -sh terraform-google-gcloud.cleaned
1.6M    terraform-google-gcloud.cleaned

"random_id.cache is empty tuple"

Terraform 0.12.28

I have a module that contains two instances of the gcloud module.
Here is the configuration for one of them:

module "ffhq-config" {
  source        = "terraform-google-modules/gcloud/google"
  platform      = "linux"
  enabled       = var.copy_ffhq_config
  skip_download = true
  upgrade       = false

  create_cmd_entrypoint = "gsutil"
  create_cmd_body       = "..." # irrelevant

  destroy_cmd_entrypoint = "gsutil"
  destroy_cmd_body       = "..." # irrelevant

  # Having this runs `gcloud auth activate-service-account` which has the
  # unfortunate side-effect of changing the gcloud auth on the system running Terraform
  service_account_key_file = "..." # irrelevant
}

terraform plan, terraform apply, and terraform refresh all function properly.
However, for some odd, largely unrelated terraform import executions, the following errors show up:

Error: Invalid index

  on .terraform/modules/lab-hugo.base_image/terraform-google-gcloud-1.3.0/main.tf line 19, in locals:
  19:   cache_path           = local.skip_download ? "" : "${path.module}/cache/${random_id.cache[0].hex}"
    |----------------
    | random_id.cache is empty tuple

The given key does not identify an element in this collection value.


Error: Invalid index

  on .terraform/modules/lab-hugo.ffhq-config/terraform-google-gcloud-1.3.0/main.tf line 19, in locals:
  19:   cache_path           = local.skip_download ? "" : "${path.module}/cache/${random_id.cache[0].hex}"
    |----------------
    | random_id.cache is empty tuple

The given key does not identify an element in this collection value.


Error: Invalid index

  on .terraform/modules/lab-test-zor-auto-1.base_image/terraform-google-gcloud-1.3.0/main.tf line 19, in locals:
  19:   cache_path           = local.skip_download ? "" : "${path.module}/cache/${random_id.cache[0].hex}"
    |----------------
    | random_id.cache is empty tuple

The given key does not identify an element in this collection value.


Error: Invalid index

  on .terraform/modules/lab-test-zor-auto-1.ffhq-config/terraform-google-gcloud-1.3.0/main.tf line 19, in locals:
  19:   cache_path           = local.skip_download ? "" : "${path.module}/cache/${random_id.cache[0].hex}"
    |----------------
    | random_id.cache is empty tuple

The given key does not identify an element in this collection value.

skip_download is true. I'm not sure if Terraform evaluates both sides of the conditional expression or if local.skip_download is false in this context for some reason.

I had a similar problem in my own configuration, where I could fix it myself. The only fix I know of is to use try("${path.module}/cache/${random_id.cache[0].hex}", null), which should fix this issue, at the cost of replacing this error with a more cryptic one in the case where skip_download = false and random_id.cache is indeed an empty tuple.
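Sketched against the module's locals block, the suggested fix would look like this (a sketch only, not the module's current code):

```hcl
locals {
  # try() falls back to null instead of erroring when random_id.cache
  # is an empty tuple, which happens when skip_download is true.
  cache_path = local.skip_download ? "" : try("${path.module}/cache/${random_id.cache[0].hex}", null)
}
```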

Add output of the gcloud command in the terraform state

TL;DR

Hello,

I was trying to use this module in a way that consumes the output of the gcloud command, but this isn't possible.
I can see the output in the execution logs, but it is not saved in the state and cannot be retrieved through Terraform.

Using the example with gcloud version, it would be great to access the gcloud output in Terraform and capture it in the state.

module.gcloud.null_resource.run_command[0] (local-exec): Google Cloud SDK 403.0.0
module.gcloud.null_resource.run_command[0] (local-exec): bq 2.0.77
module.gcloud.null_resource.run_command[0] (local-exec): bundled-python3-unix 3.9.12
module.gcloud.null_resource.run_command[0] (local-exec): core 2022.09.20
module.gcloud.null_resource.run_command[0] (local-exec): gcloud-crc32c 1.0.0
module.gcloud.null_resource.run_command[0] (local-exec): gsutil 5.13
module.gcloud.null_resource.run_command[0]: Creation complete after 1s [id=1223161320564946494]

Thanks & Regards,
Romain

Terraform Resources

No response

Detailed design

No response

Additional information

No response

BUG > Ephemeral container runs breaks decompress destroy hook

Description

Ephemeral container runs (like in Terraform Cloud or Enterprise) breaks decompress destroy hook. null_resource.decompress_destroy expects a tgz to be copied by null_resource.copy, but in a new container this cache does not exist and it does not get copied again during the destroy lifecycle hook.

This module seems to be designed with a persistent VM runner (like Jenkins) in mind, which is at odds with modern CI/CD platforms that encourage the use of ephemeral container builders.

Current Behavior

Error: Error running command 'tar -xzf .terraform/modules/project.project.project-factory.gcloud_delete/terraform-google-modules-terraform-google-gcloud-84e31b8/cache/0c610144/google-cloud-sdk.tar.gz
-C .terraform/modules/project.project.project-factory.gcloud_delete/terraform-google-modules-terraform-google-gcloud-84e31b8/cache/0c610144 &&
cp .terraform/modules/project.project.project-factory.gcloud_delete/terraform-google-modules-terraform-google-gcloud-84e31b8/cache/0c610144/jq
.terraform/modules/project.project.project-factory.gcloud_delete/terraform-google-modules-terraform-google-gcloud-84e31b8/cache/0c610144/google-cloud-sdk/bin/': exit status 2.
Output: tar (child): .terraform/modules/project.project.project-factory.gcloud_delete/terraform-google-modules-terraform-google-gcloud-84e31b8/cache/0c610144/google-cloud-sdk.tar.gz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now

Expected Behavior

  • terraform destroy should complete without any issues

Steps to Reproduce

  • Use a CI/CD platform that uses ephemeral containers
  • In the first run, provision a project using this module
  • In the second run, attempt to do a terraform destroy

Additional Details

  • Platform: Terraform Enterprise
  • Terraform Version: v0.12.20
  • Module Version: ~v0.5.0

Issues running sample code in Google Cloud Build

Terraform Version

$ terraform -v
Terraform v1.0.7
on linux_amd64

  • provider registry.terraform.io/hashicorp/google v3.84.0
  • provider registry.terraform.io/hashicorp/google-beta v3.84.0

Terraform Configuration File:

module "gcloud" {
source = "terraform-google-modules/gcloud/google"
version = "~> 2.0"

platform = "linux"
additional_components = ["kubectl", "beta"]

create_cmd_entrypoint = "gcloud"
create_cmd_body = "version"
destroy_cmd_entrypoint = "gcloud"
destroy_cmd_body = "version"
}

Expected Behavior

When I execute this code using Google's Cloud Build, I expect to see the output of the gcloud versions. I confirmed this code is executing successfully in Google's Cloud Shell.

Actual Behavior

I receive the following error in Cloud Build:

module.gcloud.null_resource.additional_components[0]: Creating...
module.gcloud.null_resource.additional_components[0]: Provisioning with 'local-exec'...
module.gcloud.null_resource.additional_components[0] (local-exec): Executing: ["/bin/sh" "-c" ".terraform/modules/gcloud/scripts/check_components.sh gcloud kubectl,beta"]
module.gcloud.null_resource.additional_components[0] (local-exec): /bin/sh: .terraform/modules/gcloud/scripts/check_components.sh: not found

│ Error: local-exec provisioner error

│ with module.gcloud.null_resource.additional_components[0],
│ on .terraform/modules/gcloud/main.tf line 174, in resource "null_resource" "additional_components":
│ 174: provisioner "local-exec" {

│ Error running command
│ '.terraform/modules/gcloud/scripts/check_components.sh gcloud
│ kubectl,beta': exit status 127. Output: /bin/sh:
│ .terraform/modules/gcloud/scripts/check_components.sh: not found

If I comment out the additional_components argument, I receive a different error:

module.gcloud.null_resource.additional_components[0]: Destroying... [id=3683393589026923643]
module.gcloud.null_resource.additional_components[0]: Destruction complete after 0s
module.gcloud.null_resource.run_command[0]: Creating...
module.gcloud.null_resource.run_destroy_command[0]: Creating...
module.gcloud.null_resource.run_destroy_command[0]: Creation complete after 0s [id=3140166281657363380]
module.gcloud.null_resource.run_command[0]: Provisioning with 'local-exec'...
module.gcloud.null_resource.run_command[0] (local-exec): Executing: ["/bin/sh" "-c" "PATH=/google-cloud-sdk/bin:$PATH\ngcloud version\n"]
module.gcloud.null_resource.run_command[0] (local-exec): /bin/sh: gcloud: not found

│ Error: local-exec provisioner error

│ with module.gcloud.null_resource.run_command[0],
│ on .terraform/modules/gcloud/main.tf line 231, in resource "null_resource" "run_command":
│ 231: provisioner "local-exec" {

│ Error running command 'PATH=/google-cloud-sdk/bin:$PATH
│ gcloud version
│ ': exit status 127. Output: /bin/sh: gcloud: not found

tar: Error opening archive: Failed to open '.terraform/modules/bootstrap.seed_project.project-factory.gcloud_disable

Having issue with executing this:

module.bootstrap.module.seed_project.module.project-factory.module.gcloud_disable.null_resource.copy[0]: Destroying... [id=4269937703596108229]
module.bootstrap.module.seed_project.module.project-factory.module.gcloud_disable.null_resource.copy[0]: Destruction complete after 0s
module.bootstrap.module.seed_project.module.project-factory.module.gcloud_disable.null_resource.copy[0]: Creating...
module.bootstrap.module.seed_project.module.project-factory.module.gcloud_disable.null_resource.copy[0]: Provisioning with 'local-exec'...
module.bootstrap.module.seed_project.module.project-factory.module.gcloud_disable.null_resource.copy[0] (local-exec): Executing: ["/bin/sh" "-c" "cp -R .terraform/modules/bootstrap.seed_project.project-factory.gcloud_disable/terraform-google-gcloud-0.5.1/cache/linux .terraform/modules/bootstrap.seed_project.project-factory.gcloud_disable/terraform-google-gcloud-0.5.1/cache/f4989ebd"]
module.bootstrap.module.seed_project.module.project-factory.module.gcloud_disable.null_resource.copy[0]: Creation complete after 0s [id=1751011585625952459]
module.bootstrap.module.seed_project.module.project-factory.module.gcloud_disable.null_resource.decompress[0]: Creating...
module.bootstrap.module.seed_project.module.project-factory.module.gcloud_disable.null_resource.decompress[0]: Provisioning with 'local-exec'...
module.bootstrap.module.seed_project.module.project-factory.module.gcloud_disable.null_resource.decompress[0] (local-exec): Executing: ["/bin/sh" "-c" "tar -xzf .terraform/modules/bootstrap.seed_project.project-factory.gcloud_disable/terraform-google-gcloud-0.5.1/cache/f4989ebd/google-cloud-sdk.tar.gz -C .terraform/modules/bootstrap.seed_project.project-factory.gcloud_disable/terraform-google-gcloud-0.5.1/cache/f4989ebd && cp .terraform/modules/bootstrap.seed_project.project-factory.gcloud_disable/terraform-google-gcloud-0.5.1/cache/f4989ebd/jq .terraform/modules/bootstrap.seed_project.project-factory.gcloud_disable/terraform-google-gcloud-0.5.1/cache/f4989ebd/google-cloud-sdk/bin/"]
module.bootstrap.module.seed_project.module.project-factory.module.gcloud_disable.null_resource.decompress[0] (local-exec): tar: Error opening archive: Failed to open '.terraform/modules/bootstrap.seed_project.project-factory.gcloud_disable/terraform-google-gcloud-0.5.1/cache/f4989ebd/google-cloud-sdk.tar.gz'


Error: Error running command 'tar -xzf .terraform/modules/bootstrap.seed_project.project-factory.gcloud_disable/terraform-google-gcloud-0.5.1/cache/f4989ebd/google-cloud-sdk.tar.gz -C .terraform/modules/bootstrap.seed_project.project-factory.gcloud_disable/terraform-google-gcloud-0.5.1/cache/f4989ebd && cp .terraform/modules/bootstrap.seed_project.project-factory.gcloud_disable/terraform-google-gcloud-0.5.1/cache/f4989ebd/jq .terraform/modules/bootstrap.seed_project.project-factory.gcloud_disable/terraform-google-gcloud-0.5.1/cache/f4989ebd/google-cloud-sdk/bin/': exit status 1. Output: tar: Error opening archive: Failed to open '.terraform/modules/bootstrap.seed_project.project-factory.gcloud_disable/terraform-google-gcloud-0.5.1/cache/f4989ebd/google-cloud-sdk.tar.gz'

When I check the directory

ls -la .terraform/modules/bootstrap.seed_project.project-factory.gcloud_disable/terraform-google-gcloud-0.5.1/cache/420e8d8f/
total 16
drwxr-xr-x  4 blahuser  1533042234  128 Jul 23 10:54 .
drwxr-xr-x  6 blahuser  1533042234  192 Jul 23 10:54 ..
-rw-r--r--  1 blahuser  1533042234   40 Jul 23 10:54 google-cloud-sdk.tar.gz.REMOVED.git-id
-rw-r--r--  1 blahuser  1533042234   40 Jul 23 10:54 jq.REMOVED.git-id

Attempted removing the module and running terraform apply; got the same error message as above.

terraform state rm ' module.bootstrap.module.seed_project.module.project-factory.module.gcloud_disable'
Removed module.bootstrap.module.seed_project.module.project-factory.module.gcloud_disable.null_resource.copy[0]
Removed module.bootstrap.module.seed_project.module.project-factory.module.gcloud_disable.null_resource.decompress[0]
Removed module.bootstrap.module.seed_project.module.project-factory.module.gcloud_disable.null_resource.decompress_destroy[0]
Removed module.bootstrap.module.seed_project.module.project-factory.module.gcloud_disable.null_resource.upgrade_destroy[0]
Removed module.bootstrap.module.seed_project.module.project-factory.module.gcloud_disable.random_id.cache
Successfully removed 5 resource instance(s).

error: /bin/sh: 2: gcloud not found

Hi all,

Terraform v0.12.18

The null_resource "run_command" is generating the follow error:

module.non_prod.module.gcp_workspace_factory.module.gcloud.null_resource.run_command[0]: Destroying... [id=490511438395957534]
module.non_prod.module.gcp_workspace_factory.module.gcloud.null_resource.run_command[0]: Provisioning with 'local-exec'...
module.non_prod.module.gcp_workspace_factory.module.gcloud.null_resource.run_command[0] (local-exec): Executing: ["/bin/sh" "-c" "PATH=/terraform/non-prod/.terraform/modules/non_prod.gcp_workspace_factory.gcloud/terraform-google-modules-terraform-google-gcloud-b3cb8e9/cache/linux/google-cloud-sdk/bin:$PATH\ngcloud iam service-accounts enable projects/project-123456/serviceAccounts/[email protected]\n"]
2019/12/26 17:19:52 [WARN] Errors while provisioning null_resource.run_command[0] with "local-exec", so aborting
2019/12/26 17:19:52 [ERROR] module.non_prod.module.gcp_workspace_factory.module.gcloud: eval: *terraform.EvalApplyPost, err: 1 error occurred:
    * Error running command 'PATH=/terraform/non-prod/.terraform/modules/non_prod.gcp_workspace_factory.gcloud/terraform-google-modules-terraform-google-gcloud-b3cb8e9/cache/linux/google-cloud-sdk/bin:$PATH
gcloud iam service-accounts enable projects/project-123456/serviceAccounts/[email protected]
': exit status 127. Output: /bin/sh: 2: gcloud: not found

The module logic I'm using is below:

module "gcloud" {
  source                            = "terraform-google-modules/gcloud/google"
  version                           = "~> 0.1"
  use_tf_google_credentials_env_var = "true"

  create_cmd_body  = "iam service-accounts disable ${module.project_factory.service_account_name}"
  destroy_cmd_body = "iam service-accounts enable ${module.project_factory.service_account_name}"
}

Make `GOOGLE_CREDENTIALS` easier to use

Currently, using the GOOGLE_CREDENTIALS env variable requires setting the use_tf_google_credentials_env_var variable on this module. This is hard to do on downstream modules (ex. terraform-google-modules/terraform-google-kubernetes-engine#798).

We should consider either:

  1. Detecting GOOGLE_CREDENTIALS and automatically using it if found.
  2. Adding an environment variable (ex. GCLOUD_TF_LOAD_CREDENTIALS) to force loading the variable.

We should probably start with (1) and add (2) as a method to override it if necessary.
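A sketch of combining (1) and (2) in plain shell, assuming a hypothetical GCLOUD_TF_LOAD_CREDENTIALS override variable (the name is illustrative, not an existing module input):

```shell
# Hypothetical sketch: decide whether the module should auto-load credentials.
# GCLOUD_TF_LOAD_CREDENTIALS is an assumed override variable, not a real input.
should_load_credentials() {
  if [ "${GCLOUD_TF_LOAD_CREDENTIALS:-}" = "never" ]; then
    # Explicit opt-out wins even if GOOGLE_CREDENTIALS is set.
    echo "false"
  elif [ "${GCLOUD_TF_LOAD_CREDENTIALS:-}" = "always" ] || [ -n "${GOOGLE_CREDENTIALS:-}" ]; then
    # Explicit opt-in, or auto-detection of GOOGLE_CREDENTIALS.
    echo "true"
  else
    echo "false"
  fi
}
```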

In TF 0.13+ all list elements must have the same type

TL;DR

Historically list(any) was used for

variable "module_depends_on" {
However, subsequent to hashicorp/terraform#26265, all list elements must have the same type. So, for example, this fails:

module_depends_on = [time_sleep.wait_300_seconds, null_resource.test]

with the error: The given value is not suitable as all list elements must have the same type.

@bharathkkb - When you have time, could you confirm/feedback, as this might apply to several modules? The vast majority of module_depends_on usages have only a single element and therefore work as expected. However, we might want to switch to a string or update the documentation to clarify the situation.
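One workaround, assuming the caller controls the list, is to pass homogeneous string attributes instead of whole resource objects (resource names are taken from the failing example above):

```hcl
module "gcloud" {
  source = "terraform-google-modules/gcloud/google"

  # Each element is now a string, so the list satisfies list(any)
  # under the stricter TF 0.13+ type unification.
  module_depends_on = [
    time_sleep.wait_300_seconds.id,
    null_resource.test.id,
  ]
}
```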

Expected behavior

No response

Observed behavior

No response

Terraform Configuration

N/A

Terraform Version

Terraform v1.2.2

Additional information

No response

Prevent reinstalling already installed additional_components

Currently we do not check whether any component in additional_components is already installed. This causes issues in environments like Cloud Shell, where a component may already be installed but the component manager is disabled. The proposed solution is to only install components that are not already installed.
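The skip logic could be sketched in plain shell as a set difference (function and variable names are illustrative; in the real script the installed list would come from gcloud components list):

```shell
# Sketch: given comma-separated installed and requested component lists,
# emit only the components that still need installing.
components_to_install() {
  installed="$1"   # e.g. "kubectl,beta"
  requested="$2"   # e.g. "kubectl,alpha"
  out=""
  for c in $(printf '%s' "$requested" | tr ',' ' '); do
    case ",$installed," in
      *",$c,"*) ;;                    # already installed: skip
      *) out="${out:+$out,}$c" ;;     # not installed: keep
    esac
  done
  printf '%s' "$out"
}
```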

Be able to specify path to gcloud for auth

Use service_account_key_file and skip_download together successfully.

Expected:
Be able to use these two options successfully when gcloud is not in the PATH

Result:
Fails when gcloud is not in PATH

Summary:
If the options service_account_key_file and skip_download are set, then the command used references local.gcloud, whose value is simply gcloud.

Since there is no setting to specify either a global gcloud nor something like authorize_cmd_entrypoint this will fail if gcloud (the executable) is not in the path.

Request:
Either a global gcloud_abs_path input variable (the value of which would become, perhaps, local.gcloud_bin_path), or an authorize_cmd_entrypoint variable to set the location of the gcloud used for auth.
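A sketch of the first option, assuming a hypothetical gcloud_abs_path input variable (neither that variable nor local.gcloud_bin_path exists in the module today):

```hcl
variable "gcloud_abs_path" {
  description = "Hypothetical: absolute path to a pre-installed gcloud directory, used when skip_download is true."
  type        = string
  default     = ""
}

locals {
  # Fall back to whatever `gcloud` resolves to on the PATH when no
  # absolute path is supplied.
  gcloud = var.gcloud_abs_path != "" ? "${var.gcloud_abs_path}/gcloud" : "gcloud"
}
```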

Path change during module upgrade results in error

Error

In a scenario as described below, upgrading from one module version to next results in error.

module "foo" {
  source            = "terraform-google-modules/gcloud/google"
+  version           = "1.4.1"
-  version           = "1.4.0"
....
}

Error:

 Error running command '.terraform/modules/bar.foo/terraform-google-gcloud-1.4.0/scripts/check_components.sh gcloud foo bar': exit status 127. Output: /bin/sh: .terraform/modules/bar.foo/terraform-google-gcloud-1.4.0/scripts/check_components.sh: not found

The reason seems to be that initially the directory structure is

main.tf
.terraform
    - modules > bar.foo > terraform-google-gcloud-1.4.0

but after a tf init with the new version of the module, it becomes

main.tf
.terraform
    - modules > bar.foo > terraform-google-gcloud-1.4.1

hence the file .terraform/modules/bar.foo/terraform-google-gcloud-1.4.0/scripts/check_components.sh no longer exists and has been replaced by .terraform/modules/bar.foo/terraform-google-gcloud-1.4.1/scripts/check_components.sh

Workaround

A possible workaround is to taint the resource, but this is not ideal.

terraform taint module.example.module.bar.foo.null_resource.additional_components_destroy[0]

Invalid count argument error

Currently getting an error when using the module:
Error

Error: Invalid count argument

  on .terraform/modules/gcloud/main.tf line 57, in resource "random_id" "cache":
  57:   count = (! local.skip_download) ? 1 : 0

The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.

Module

module "gcloud" {
  source                = "terraform-google-modules/gcloud/google"
  version               = "~> 2.0"
  gcloud_sdk_version    = "304.0.0"
  platform              = "linux"
  additional_components = ["beta"]
  skip_download         = "false"
  count = 1

  create_cmd_triggers = {
    timestamp = local.timestamp
  }

  service_account_key_file = "secrets/terraform-user-credentials.json"

  create_cmd_entrypoint = "scripts/runGcloudCommands.sh"

  depends_on = [module.composer]
}

Environment

  • Terraform version: 0.13.3
  • hashicorp/external: version = "~> 1.2.0"
  • hashicorp/google: version = "~> 3.41.0"
  • hashicorp/google-beta: version = "~> 3.41.0"
  • hashicorp/null: version = "~> 2.1.2"
  • hashicorp/random: version = "~> 2.3.0"

bin_dir is an object

TL;DR

I'm trying to pass bin_dir as an argument and it's being interpreted as an object rather than a string.

Expected behavior

bin_dir is a string

Observed behavior

bin_dir is an object

Terraform Configuration

# used to download gcloud
module "gcloud" {
  source  = "terraform-google-modules/gcloud/google"
  version = "~> 3.0"

  platform = "linux"
  additional_components = ["beta"]

  # see: https://github.com/terraform-google-modules/terraform-google-gcloud/issues/94
  for_each = {
    timestamp = "${timestamp()}"
  }

  create_cmd_entrypoint  = "gcloud"
  create_cmd_body        = "version"
  destroy_cmd_entrypoint = "gcloud"
  destroy_cmd_body       = "version"
}


data "external" "bastion" {
  program = ["python3", "${path.module}/scripts/create_bastion_proxy.py"]
  query = {
    project  = var.project_id
    zone     = local.bastion_zone
    hostname = module.bastion.hostname
    bin_dir = tostring(module.gcloud.bin_dir)
  }

  depends_on = [module.bastion]
}

Terraform Version

1.3.7 (this is terraform cloud)

Additional information

while I don't think it's germane, the script being run is from here: jenkins-x/terraform-google-jx@647deca#diff-4852de6762e97bf63cf2e2b61d762eb3c100180d17783c2e6cdb5f7592013ebc

Enable overriding create_cmd_triggers completely

I am using this module to deploy a Kubernetes resource foo like so

create_cmd_entrypoint = "${path.module}/scripts/kubectl_wrapper.sh"
create_cmd_body       = "https://${module.cluster.endpoint} ${data.google_client_config.default.access_token} ${module.cluster.ca_certificate} kubectl apply -k foo"

Problem:
However, because data.google_client_config.default.access_token changes between Terraform plans, this causes noise in the diff and also blows away and recreates the foo resource.

Proposal:
Either:

  • let create_cmd_triggers override the triggers completely

  • add a new var create_cmd_triggers_override that, if set, completely overrides the triggers

Open to other suggestions or if I am missing something here
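The second option could be sketched inside the module like this, assuming a hypothetical create_cmd_triggers_override map variable (not an existing input):

```hcl
resource "null_resource" "run_command" {
  # Hypothetical: if the override map is non-empty, use it verbatim so
  # ephemeral values like access tokens never enter the trigger set;
  # otherwise keep the default trigger map.
  triggers = length(var.create_cmd_triggers_override) > 0 ? var.create_cmd_triggers_override : merge(
    {
      create_cmd_entrypoint = var.create_cmd_entrypoint
      create_cmd_body       = var.create_cmd_body
    },
    var.create_cmd_triggers,
  )
}
```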

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Pending Status Checks

These updates await pending status checks. To force their creation now, click the checkbox below.

  • chore(deps): Update Terraform terraform-google-modules/project-factory/google to v16

Detected dependencies

regex
Makefile
  • cft/developer-tools 1.22
build/int.cloudbuild.yaml
  • cft/developer-tools 1.22
build/lint.cloudbuild.yaml
  • cft/developer-tools 1.22
terraform
examples/dependency_example/main.tf
  • terraform-google-modules/gcloud/google ~> 3.0
  • terraform-google-modules/gcloud/google ~> 3.0
  • terraform-google-modules/gcloud/google ~> 3.0
examples/dependency_example/versions.tf
  • hashicorp/terraform >= 0.13
examples/kubectl_wrapper_example/main.tf
  • terraform-google-modules/project-factory/google ~> 15.0
  • terraform-google-modules/kubernetes-engine/google ~> 32.0
  • terraform-google-modules/network/google ~> 9.0
  • terraform-google-modules/kubernetes-engine/google ~> 32.0
  • terraform-google-modules/gcloud/google ~> 3.0
  • terraform-google-modules/gcloud/google ~> 3.0
  • terraform-google-modules/gcloud/google ~> 3.0
  • terraform-google-modules/gcloud/google ~> 3.0
examples/kubectl_wrapper_example/versions.tf
  • hashicorp/terraform >= 0.13
examples/script_example/main.tf
  • terraform-google-modules/gcloud/google ~> 3.0
examples/script_example/versions.tf
  • hashicorp/terraform >= 0.13
examples/simple_example/main.tf
  • terraform-google-modules/gcloud/google ~> 3.0
  • terraform-google-modules/gcloud/google ~> 3.0
examples/simple_example/versions.tf
  • hashicorp/terraform >= 0.13
modules/kubectl-fleet-wrapper/main.tf
modules/kubectl-fleet-wrapper/versions.tf
  • google >= 3.53, < 6
  • hashicorp/terraform >= 0.13
modules/kubectl-wrapper/main.tf
modules/kubectl-wrapper/versions.tf
  • google >= 3.53, < 6
  • hashicorp/terraform >= 0.13
test/fixtures/dependency_example/main.tf
test/fixtures/dependency_example/versions.tf
  • hashicorp/terraform >= 0.12
test/fixtures/kubectl_wrapper_example/main.tf
test/fixtures/override_example/main.tf
test/fixtures/override_example/versions.tf
  • hashicorp/terraform >= 0.12
test/fixtures/script_example/main.tf
test/fixtures/script_example/versions.tf
  • hashicorp/terraform >= 0.12
test/fixtures/simple_example/main.tf
test/fixtures/simple_example/versions.tf
  • hashicorp/terraform >= 0.12
test/setup/main.tf
  • terraform-google-modules/project-factory/google ~> 15.0
test/setup/versions.tf
  • hashicorp/terraform >= 0.13
versions.tf
  • external >= 2.2.2
  • google >= 3.53, < 6
  • null >= 2.1.0
  • random >= 2.1.0
  • hashicorp/terraform >= 0.13

  • Check this box to trigger a request for Renovate to run again on this repository

module path does not match for null_resource local-execs

Getting an error that the module path does not exist. When I look at the .terraform directory, the path of the module is incorrect.

It's looking for

.terraform/modules/project_devbox.gcp_service_project.project-factory.gcloud_disable/terraform-google-modules-terraform-google-gcloud-09a6f10/

However, when I look in the directory, only this exists:

.terraform/modules/project_devbox.gcp_service_project.project-factory.gcloud_disable/terraform-google-gcloud-0.5.1

The specific part is
terraform-google-modules-terraform-google-gcloud-09a6f10/
vs
terraform-google-gcloud-0.5.1

Error: Error running command 'tar -xzf .terraform/modules/project_devbox.gcp_service_project.project-factory.gcloud_disable/terraform-google-modules-terraform-google-gcloud-09a6f10/cache/d9047a7c/google-cloud-sdk.tar.gz -C .terraform/modules/project_devbox.gcp_service_project.project-factory.gcloud_disable/terraform-google-modules-terraform-google-gcloud-09a6f10/cache/d9047a7c && cp .terraform/modules/project_devbox.gcp_service_project.project-factory.gcloud_disable/terraform-google-modules-terraform-google-gcloud-09a6f10/cache/d9047a7c/jq .terraform/modules/project_devbox.gcp_service_project.project-factory.gcloud_disable/terraform-google-modules-terraform-google-gcloud-09a6f10/cache/d9047a7c/google-cloud-sdk/bin/': exit status 1. Output: tar: Error opening archive: Failed to open '.terraform/modules/project_devbox.gcp_service_project.project-factory.gcloud_disable/terraform-google-modules-terraform-google-gcloud-09a6f10/cache/d9047a7c/google-cloud-sdk.tar.gz'

terraform-google-gcloud does not run on Windows

The terraform-google-gcloud module does not run on Windows due to a reliance on shell scripts.

Repro

Run, say, terraform plan for a config which uses terraform-google-gcloud.

Result

Error: failed to execute ".terraform/modules/my_rule/scripts/check_env.sh": fork/exec .terraform/modules/my_rule/scripts/check_env.sh: %1 is not a valid Win32 application.

Expected

Terraform to run the gcloud or other command specified in the Terraform config.

The documentation says the requirements are

  • Terraform v0.12
  • Terraform Provider for GCP plugin v2.0
  • curl

All of these run well on Windows, as does the gcloud command line tool.

Proof of concept

It appears to work with these tweaks to main.tf:

  • Commented out data.external.env_override, set download_override to "never" .
  • Changed the run_command local-exec command to be ${self.triggers.create_cmd_entrypoint} ${self.triggers.create_cmd_body}. That is, remove the 'temporary' path part, which caused the command to fail silently.

This demonstrates that the concept works fine on Windows, though of course these changes are not suitable to be committed.

Possible cross-platform fixes

I am a Terraform novice, so take these with a big grain of salt.

  • Instead of check_env shell script, use an env var called, say, TF_VAR_GCLOUD_TF_DOWNLOAD and read it from within Terraform, e.g. ${var.GCLOUD_TF_DOWNLOAD}
  • Instead of check_components shell script, always run gcloud components install.
  • Instead of modifying the path, use (and possibly rename) gcloud_bin_abs_path . Note that if the install cache isn't being used, this prefix would be the empty string "", to use the version on the existing path.

Another possibility would be to create .cmd files parallel to the .sh files.

Alternate or interim solution

Document that a sh interpreter is required.

No such file or directory issue

TL;DR

Error: External Program Execution Failed

│ with module.build_terraform_image.data.external.env_override[0],
│ on .terraform/modules/build_terraform_image/main.tf line 73, in data "external" "env_override":
│ 73: program = ["${path.module}/scripts/check_env.sh"]

│ The data source received an unexpected error while attempting to execute the program.

│ Program: .terraform/modules/build_terraform_image/scripts/check_env.sh
│ Error: fork/exec .terraform/modules/build_terraform_image/scripts/check_env.sh: no such file or directory

Expected behavior

Getting the error in the remote module while running the script on Ubuntu.

Observed behavior

No response

Terraform Configuration

module "bootstrap_csr_repo" {
  source                = "terraform-google-modules/gcloud/google"
  version               = "~> 3.1.0"
  upgrade               = false
  
  create_cmd_entrypoint = "${path.module}/scripts/push-to-repo.sh"
  create_cmd_body       = "${module.tf_source.cloudbuild_project_id} ${split("/", module.tf_source.csr_repos[local.cloudbuilder_repo].id)[3]} ${path.module}/Dockerfile"
}

Terraform Version

1.3.1

Additional information

No response

Detect additional_components binaries that can be installed without gcloud

The current fix for #52 will not detect binaries like kubectl, kustomize, etc. that were installed without using the gcloud component manager. The proposal is to replace

CURRENTLY_INSTALLED=$($GCLOUD_PATH components list --quiet --format json 2> /dev/null | jq -r '[.[] | select(.state.name!="Not Installed") | .id] | @tsv | gsub("\\t";",")')

with command -v $BINARY to detect whether the binary exists in the PATH.
\cc @cgrant
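A minimal sketch of the proposed fallback check (the function name is illustrative):

```shell
# Sketch: report whether a binary is resolvable on the PATH, regardless of
# whether it was installed through the gcloud component manager.
binary_on_path() {
  if command -v "$1" > /dev/null 2>&1; then
    echo "installed"
  else
    echo "missing"
  fi
}
```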

Module does not work with WIF creds

Hello,

We are using WIF (Workload Identity Federation) to deploy a GKE cluster from an AWS environment. I have successfully completed the following:

  • Create a WIF for an AWS role.
  • Initialize terraform gcp provider with WIF credentials.
  • Terraform is running and creating various resources in GCP like vpc, subnet, cluster.

The problem I am facing is:

module.gke.module.gcloud_delete_default_kube_dns_configmap.module.gcloud_kubectl.null_resource.run_command[0] (local-exec): ERROR: (gcloud.container.clusters.get-credentials) Your current active account [[email protected]] does not have any valid credentials.

Please guide me on how to use WIF credentials with this module.

Thank you

Deprecation warning for external references from destroy provisioners

There is a deprecation warning when this module is run from the project factory for Terraform version 0.12.20

Warning: External references from destroy provisioners are deprecated

  on <module path obfuscated>/terraform-google-modules-terraform-google-gcloud-84e31b8/main.tf line 179, in resource "null_resource" "run_command":
 179:     command = <<-EOT
 180:     PATH=${local.gcloud_bin_abs_path}:$PATH
 181:     ${var.destroy_cmd_entrypoint} ${var.destroy_cmd_body}
 182:     EOT

Destroy-time provisioners and their connection configurations may only
reference attributes of the related resource, via 'self', 'count.index', or
'each.key'.

References to other resources during the destroy phase can cause dependency
cycles and interact poorly with create_before_destroy.

It seems related to https://github.com/terraform-google-modules/terraform-google-gcloud/blob/master/main.tf#L179-L182

provisioner "local-exec" {
    when    = destroy
    command = <<-EOT
    PATH=${local.gcloud_bin_abs_path}:$PATH
    ${var.destroy_cmd_entrypoint} ${var.destroy_cmd_body}
    EOT
  }

Error opening google-cloud-sdk

Hello. I've been using 0.3 of this module without issue for some time, but I started running into an issue both on my local TF and TFC:

Code

module "gcloud" {
  source  = "terraform-google-modules/gcloud/google"
  version = "0.3"
  enabled = true

  create_cmd_body        = "iam service-accounts enable ${module.project.service_account_name}"
  destroy_cmd_body       = ""
  destroy_cmd_entrypoint = ""
}

Output

module.gcloud.null_resource.decompress[0]: Creating...
module.gcloud.null_resource.decompress[0]: Provisioning with 'local-exec'...
module.gcloud.null_resource.decompress[0] (local-exec): Executing: ["/bin/sh" "-c" "tar -xzf .terraform/modules/gcloud/terraform-google-gcloud-0.3.0/cache/linux/google-cloud-sdk.tar.gz -C .terraform/modules/gcloud/terraform-google-gcloud-0.3.0/cache/linux && cp .terraform/modules/gcloud/terraform-google-gcloud-0.3.0/cache/linux/jq .terraform/modules/gcloud/terraform-google-gcloud-0.3.0/cache/linux/google-cloud-sdk/bin/"]
module.gcloud.null_resource.decompress[0] (local-exec): tar (child): .terraform/modules/gcloud/terraform-google-gcloud-0.3.0/cache/linux/google-cloud-sdk.tar.gz: Cannot open: No such file or directory
module.gcloud.null_resource.decompress[0] (local-exec): tar (child): Error is not recoverable: exiting now
module.gcloud.null_resource.decompress[0] (local-exec): tar: Child returned status 2
module.gcloud.null_resource.decompress[0] (local-exec): tar: Error is not recoverable: exiting now


Error: Error running command 'tar -xzf .terraform/modules/gcloud/terraform-google-gcloud-0.3.0/cache/linux/google-cloud-sdk.tar.gz -C .terraform/modules/gcloud/terraform-google-gcloud-0.3.0/cache/linux && cp .terraform/modules/gcloud/terraform-google-gcloud-0.3.0/cache/linux/jq .terraform/modules/gcloud/terraform-google-gcloud-0.3.0/cache/linux/google-cloud-sdk/bin/': exit status 2. Output: tar (child): .terraform/modules/gcloud/terraform-google-gcloud-0.3.0/cache/linux/google-cloud-sdk.tar.gz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now

Terraform v0.12.28
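The error indicates the SDK archive was never downloaded into the module cache before the decompress step ran. A minimal pre-flight check (the path below is taken verbatim from the error output above; this script is a diagnostic suggestion, not part of the module):

```shell
#!/bin/sh
# Verify the cached SDK archive exists before running terraform apply.
SDK_TGZ=".terraform/modules/gcloud/terraform-google-gcloud-0.3.0/cache/linux/google-cloud-sdk.tar.gz"

if [ -f "$SDK_TGZ" ]; then
  echo "SDK archive present: $SDK_TGZ"
else
  echo "SDK archive missing; try re-running 'terraform init' to re-fetch the module" >&2
fi
```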
