
terraform-google-gke's Introduction

Sunset notice

We believe there is an opportunity to create a truly outstanding developer experience for deploying to the cloud; however, developing this vision requires that we temporarily limit our focus to just one cloud. Gruntwork has hundreds of customers currently using AWS, so we have temporarily suspended our maintenance efforts on this repo. Once we have implemented and validated our vision for the developer experience on the cloud, we look forward to picking this back up. In the meantime, you are welcome to use this code in accordance with the open source license, but we will not be responding to GitHub Issues or Pull Requests.

If you wish to be the maintainer for this project, we are open to considering that. Please contact us at [email protected].



Google Kubernetes Engine (GKE) Module

This repo contains a Terraform module for running a Kubernetes cluster on Google Cloud Platform (GCP) using Google Kubernetes Engine (GKE).

Quickstart

If you want to quickly spin up a GKE Public Cluster, you can run the example that is in the root of this repo. Check out the gke-basic-helm example documentation for instructions.
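If you would rather call the underlying submodules directly from your own Terraform code, the shape of that looks roughly like the following sketch (the module inputs and the network module are taken from examples elsewhere on this page; the ref tags, names, and CIDR blocks are illustrative placeholders, and var.project / var.region are assumed input variables, so consult the examples for the authoritative configuration):

module "network" {
  source = "github.com/gruntwork-io/terraform-google-network.git//modules/vpc-network?ref=v0.4.0"

  name_prefix          = "example"
  project              = var.project
  region               = var.region
  cidr_block           = "10.100.0.0/16"
  secondary_cidr_block = "10.101.0.0/16"
}

module "gke_cluster" {
  # Pin the module to a released version via the ref parameter.
  source = "github.com/gruntwork-io/terraform-google-gke.git//modules/gke-cluster?ref=v0.4.3"

  name       = "example-public-cluster"
  project    = var.project
  location   = var.region
  network    = module.network.network
  subnetwork = module.network.public_subnetwork

  # Use the secondary IP range of the public subnetwork for Pods and Services.
  cluster_secondary_range_name = module.network.public_subnetwork_secondary_range_name
}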

What's in this repo

This repo has the following folder structure:

  • root: The root folder contains an example of how to deploy a GKE Public Cluster with an example chart deployed via Helm. See gke-basic-helm for the documentation.

  • modules: This folder contains the main implementation code for this Module, broken down into multiple standalone submodules.

    The primary module is gke-cluster, which deploys the GKE cluster itself.

    There are also several supporting modules that add extra functionality on top of gke-cluster, such as gke-service-account, which creates a custom service account to use with the GKE cluster.

  • examples: This folder contains examples of how to use the submodules.

  • test: Automated tests for the submodules and examples.

What is Kubernetes?

Kubernetes is an open source container management system for deploying, scaling, and managing containerized applications. Kubernetes was built by Google based on its internal proprietary container management systems (Borg and Omega). Kubernetes provides a cloud-agnostic platform for deploying your containerized applications, with built-in support for common operational tasks such as replication, autoscaling, self-healing, and rolling deployments.

You can learn more about Kubernetes from the official documentation.

What is GKE?

Google Kubernetes Engine or "GKE" is a Google-managed Kubernetes environment. GKE is a fully managed experience; it handles the management/upgrading of the Kubernetes cluster master as well as autoscaling of "nodes" through "node pool" templates.

Through GKE, your Kubernetes deployments will have first-class support for GCP IAM identities, built-in configuration of high-availability and secured clusters, as well as native access to GCP's networking features such as load balancers.

How do you run applications on Kubernetes?

There are three different ways you can schedule your application on a Kubernetes cluster. In all three, your application Docker containers are packaged as a Pod, which is the smallest deployable unit in Kubernetes and represents one or more tightly coupled Docker containers. Containers in a Pod share certain elements of the kernel space that are traditionally isolated between containers, such as the network namespace (the containers share an IP, so the available ports are shared), the IPC namespace, and, in some cases, PIDs.

Pods are considered to be relatively ephemeral, disposable entities in the Kubernetes ecosystem. This is because Pods are designed to be mobile across the cluster so that you can design a scalable, fault-tolerant system. As such, Pods are generally scheduled with Controllers that manage the lifecycle of a Pod. Using Controllers, you can schedule your Pods as:

  • Jobs, which are Pods with a controller that will guarantee the Pods run to completion.
  • Deployments behind a Service, which are Pods with a controller that implements lifecycle rules to provide replication and self-healing capabilities. Deployments will automatically reprovision failed Pods, or migrate Pods off of failed nodes onto healthy nodes. A Service provides a consistent endpoint that can be used to access the Deployment (see the sketch after this list).
  • Daemon Sets, which are Pods that are scheduled on all worker nodes. Daemon Sets schedule exactly one instance of a Pod on each node. Like Deployments, Daemon Sets will reprovision failed Pods and schedule new ones automatically on new nodes that join the cluster.
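To make the Deployment-plus-Service pattern concrete, here is a minimal sketch using the Terraform kubernetes provider (purely illustrative and not part of this module; the app name and the nginx image are placeholders):

resource "kubernetes_deployment" "app" {
  metadata {
    name   = "example-app"
    labels = { app = "example-app" }
  }

  spec {
    # The Deployment controller keeps 3 replicas of the Pod running and replaces failed ones.
    replicas = 3

    selector {
      match_labels = { app = "example-app" }
    }

    template {
      metadata {
        labels = { app = "example-app" }
      }

      spec {
        container {
          name  = "app"
          image = "nginx:1.21"

          port {
            container_port = 80
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "app" {
  metadata {
    name = "example-app"
  }

  spec {
    # The Service provides a stable endpoint in front of whatever Pods currently match the selector.
    selector = { app = "example-app" }

    port {
      port        = 80
      target_port = 80
    }
  }
}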

What's a Module?

A Module is a canonical, reusable, best-practices definition for how to run a single piece of infrastructure, such as a database or server cluster. Each Module is written using a combination of Terraform and scripts (mostly bash) and includes automated tests, documentation, and examples. It is maintained both by the open source community and companies that provide commercial support.

Instead of figuring out the details of how to run a piece of infrastructure from scratch, you can reuse existing code that has been proven in production. And instead of maintaining all that infrastructure code yourself, you can leverage the work of the Module community to pick up infrastructure improvements through a version number bump.
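Concretely, you consume a Module by pointing your module source at a Git URL pinned to a release tag, so picking up improvements is usually just a one-line version bump (a sketch; the tags and inputs shown are illustrative):

module "gke_cluster" {
  # Bump the ref (e.g. from v0.2.0 to v0.4.3) to pull in a newer release of the module.
  source = "github.com/gruntwork-io/terraform-google-gke.git//modules/gke-cluster?ref=v0.4.3"

  name     = var.cluster_name
  project  = var.project
  location = var.location
}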

Who maintains this Module?

This Module and its Submodules are maintained by Gruntwork. If you are looking for help or commercial support, send an email to [email protected].

Gruntwork can help with:

  • Setup, customization, and support for this Module.
  • Modules and submodules for other types of infrastructure, such as VPCs, Docker clusters, databases, and continuous integration.
  • Modules and Submodules that meet compliance requirements, such as HIPAA.
  • Consulting & Training on AWS, Terraform, and DevOps.

How do I contribute to this Module?

Contributions are very welcome! Check out the Contribution Guidelines for instructions.

How is this Module versioned?

This Module follows the principles of Semantic Versioning. You can find each new release, along with the changelog, in the Releases Page.

During initial development, the major version will be 0 (e.g., 0.x.y), which indicates the code does not yet have a stable API. Once we hit 1.0.0, we will make every effort to maintain a backwards compatible API and use the MAJOR, MINOR, and PATCH versions on each release to indicate any incompatibilities.

License

Please see LICENSE for how the code in this repo is licensed.

Copyright © 2020 Gruntwork, Inc.

terraform-google-gke's People

Contributors

autero1, bhegazy, bwhaley, craigedmunds, eak12913, etiene, exaldraen, gruntwork-ci, ina-stoyanova, infraredgirl, mensaah, miradorn, rileykarson, robmorgan, sryabkov, stubbi, veggiemonk, vscoder, yorinasub17, zackproser


terraform-google-gke's Issues

What are the benefits of using this module compared to the official GKE module?

I have been referred to this module a while ago and I really like the structure.

I've also been looking into the official Terraform module for GKE, which is very opinionated and straightforward.
https://github.com/terraform-google-modules/terraform-google-kubernetes-engine

Overall, they both seem to support the main features, appear to have been created around the same time (early 2019), and have evolved since then.

The thing that is very appealing to me in this module is the documentation provided by gruntwork, which is very well organized.

The official module seems to be more popular and well maintained, and seems to be a reliable choice in the long term.

Given the above, what are the other benefits of using this module in the long term, compared to the official GKE module, that are not clear until we start using either of them? Is this module still relevant?

Nodes created in a private cluster cannot access the internet or Docker Hub

I am looking to create a basic GKE cluster behind a VPN using your module. However, the nodes created are not able to access Docker Hub. Can you provide guidance on how to create a privately accessible cluster that can download images from Docker Hub? Any suggestions would be appreciated.

Perhaps I misunderstand the usage of "private nodes"

module "network" {
  source = "github.com/gruntwork-io/terraform-google-network.git//modules/vpc-network?ref=v0.4.0"
  name_prefix = "management"
  project = var.project
  region = var.region
  cidr_block = "10.100.0.0/16"
  secondary_cidr_block = "10.101.0.0/16"
}


module "gke_cluster" {

  source = "github.com/gruntwork-io/terraform-google-gke.git//modules/gke-cluster?ref=v0.4.3"

  name = "management"

  project  = var.project
  location = var.region
  network  = module.network.network
  subnetwork = module.network.public_subnetwork

  master_ipv4_cidr_block = "10.102.0.0/28"

  enable_private_nodes = "true"

  # To make testing easier, we keep the public endpoint available. In production, we highly recommend restricting access to only within the network boundary, requiring your users to use a bastion host or VPN.
  disable_public_endpoint = "true"

  # With a private cluster, it is highly recommended to restrict access to the cluster master
  # However, for testing purposes we will allow all inbound traffic.
  master_authorized_networks_config = [
    {
      cidr_blocks = [
        {
          cidr_block   = "10.100.0.0/16"
          display_name = "management 1"
        },
        {
          cidr_block   = "10.101.0.0/16"
          display_name = "management 2"
        },
        {
          cidr_block   = "10.11.0.0/24"
          display_name = "home vpn"
        },
      ]
    },
  ]

  cluster_secondary_range_name = module.network.public_subnetwork_secondary_range_name

  enable_vertical_pod_autoscaling = "true"
}

# ---------------------------------------------------------------------------------------------------------------------
# CREATE A NODE POOL
# ---------------------------------------------------------------------------------------------------------------------

resource "google_container_node_pool" "node_pool" {
  provider = google-beta

  name     = "pool"
  project  = var.project
  location = var.region
  cluster  = module.gke_cluster.name

  initial_node_count = "1"

  autoscaling {
    min_node_count = "1"
    max_node_count = "5"
  }

  management {
    auto_repair  = "true"
    auto_upgrade = "true"
  }

  node_config {
    image_type   = "COS"
    machine_type = "n1-standard-1"

    labels = {
      private-pools-example = "true"
    }

    tags = [
      module.network.public,
    ]

    disk_size_gb = "30"
    disk_type    = "pd-standard"
    preemptible  = false

    service_account = module.gke_service_account.email

    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform",
    ]
  }

  lifecycle {
    ignore_changes = [initial_node_count]
  }

  timeouts {
    create = "30m"
    update = "30m"
    delete = "30m"
  }
}

# ---------------------------------------------------------------------------------------------------------------------
# CREATE A CUSTOM SERVICE ACCOUNT TO USE WITH THE GKE CLUSTER
# ---------------------------------------------------------------------------------------------------------------------

module "gke_service_account" {
  source = "github.com/gruntwork-io/terraform-google-gke.git//modules/gke-service-account?ref=v0.4.3"

  name        = "management-gke"
  project     = var.project
  description = "Management GKE"
}
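For context on the question above: with enable_private_nodes = true the nodes have no public IPs, so egress to the internet (including Docker Hub) generally requires NAT. A hedged sketch of adding Cloud NAT alongside the configuration above (the resource names are illustrative and this is not part of the module):

resource "google_compute_router" "nat_router" {
  name    = "management-nat-router"
  project = var.project
  region  = var.region
  network = module.network.network
}

resource "google_compute_router_nat" "nat" {
  name    = "management-nat"
  project = var.project
  region  = var.region
  router  = google_compute_router.nat_router.name

  # NAT all subnet ranges so private nodes can reach external registries such as Docker Hub.
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}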

Incorrect attribute value type errors

https://github.com/gruntwork-io/terraform-google-gke/tree/master/examples/gke-private-tiller

After running terraform plan I get the following errors:

var.location
  The location (region or zone) of the GKE cluster.

  Enter a value: us-east4

var.project
  The project ID where all resources will be launched.

  Enter a value: be-my-player-two

var.region
  The region for the network. If the cluster is regional, this must be the same region. Otherwise, it should be the region of the zone.

  Enter a value: us-east4


Error: Incorrect attribute value type

  on modules\gke-cluster\main.tf line 51, in resource "google_container_cluster" "cluster":
  51:       disabled = "${var.http_load_balancing ? 0 : 1}"

Inappropriate value for attribute "disabled": bool required.


Error: Incorrect attribute value type

  on modules\gke-cluster\main.tf line 55, in resource "google_container_cluster" "cluster":
  55:       disabled = "${var.horizontal_pod_autoscaling ? 0 : 1}"

Inappropriate value for attribute "disabled": bool required.


Error: Incorrect attribute value type

  on modules\gke-cluster\main.tf line 59, in resource "google_container_cluster" "cluster":
  59:       disabled = "${var.enable_kubernetes_dashboard ? 0 : 1}"

Inappropriate value for attribute "disabled": bool required.


Error: Incorrect attribute value type

  on modules\gke-cluster\main.tf line 63, in resource "google_container_cluster" "cluster":
  63:       disabled = "${var.enable_network_policy ? 0 : 1}"

Inappropriate value for attribute "disabled": bool required.


Error: Unsupported argument

  on modules\gke-cluster\main.tf line 83, in resource "google_container_cluster" "cluster":
  83:   master_authorized_networks_config = "${var.master_authorized_networks_config}"

An argument named "master_authorized_networks_config" is not expected here.
Did you mean to define a block of type "master_authorized_networks_config"?

Trying to create Private K8s Cluster with the example given

Thanks for the Private K8s Cluster Example Code. It is useful
https://github.com/gruntwork-io/terraform-google-gke/blob/master/examples/gke-private-cluster/variables.tf#L47
In the example, the 3 CIDR ranges are: "master_ipv4_cidr_block" = "10.5.0.0/28", "vpc_cidr_block" = "10.3.0.0/16", and "vpc_secondary_cidr_block" = "10.4.0.0/16".

In my case the CIDR block ranges are a little more constrained, and I am still getting an error. My 3 CIDR ranges are: "master_ipv4_cidr_block" = "10.32.98.0/28", "vpc_cidr_block" = "10.32.96.0/24", and "vpc_secondary_cidr_block" = "10.32.97.0/24".

And I am getting the following error

"tierSettings": { "tier": "STANDARD" }, "selfLink": "https://container.googleapis.com/v1beta1/projects/my-sandbox/locations/us-central1/clusters/sb-pvt-cluster", "zone": "us-central1", "endpoint": "34.68.176.42", "initialClusterVersion": "1.14.3-gke.11", "currentMasterVersion": "1.14.3-gke.11", "currentNodeVersion": "1.14.3-gke.11", "createTime": "2019-09-16T18:45:28+00:00", "status": "ERROR", "statusMessage": "\n\t(1) deploy error: Not all instances running in IGM after 12.294175155s. Expect 1. Current errors: [INVALID_FIELD_VALUE]: Instance 'gke-sb-pvt-cluster-default-pool-78b0666e-fjqd' creation failed: Invalid value for field 'resource.networkInterfaces[0].aliasIpRanges[0].ipCidrRange': '/24'. CIDR range for alias IP range must be within the specified subnetwork. (when acting as '[email protected]')\n\t(2) deploy error: Not all instances running in IGM after 12.636626322s. Expect 1. Current errors: [INVALID_FIELD_VALUE]: Instance 'gke-sb-pvt-cluster-default-pool-14f555fd-p8c0' creation failed: Invalid value for field 'resource.networkInterfaces[0].aliasIpRanges[0].ipCidrRange': '/24'. CIDR range for alias IP range must be within the specified subnetwork. (when acting as '[email protected]')\n\t(3) deploy error: Not all instances running in IGM after 14.673528039s. Expect 1. Current errors: [INVALID_FIELD_VALUE]: Instance 'gke-sb-pvt-cluster-default-pool-7f5795dc-zzbs' creation failed: Invalid value for field 'resource.networkInterfaces[0].aliasIpRanges[0].ipCidrRange': '/24'. CIDR range for alias IP range must be within the specified subnetwork. (when acting as '[email protected]').", "servicesIpv4Cidr": "10.32.97.0/28", "instanceGroupUrls": [ "https://www.googleapis.com/compute/v1/projects/my-sandbox/zones/us-central1-b/instanceGroupManagers/gke-sb-pvt-cluster-default-pool-78b0666e-grp", "https://www.googleapis.com/compute/v1/projects/my-sandbox/zones/us-central1-c/instanceGroupManagers/gke-sb-pvt-cluster-default-pool-7f5795dc-grp", "https://www.googleapis.com/compute/v1/projects/my-sandbox/zones/us-central1-a/instanceGroupManagers/gke-sb-pvt-cluster-default-pool-14f555fd-grp" ], "currentNodeCount": 3, "location": "us-central1", "conditions": [ { "message": "[INVALID_FIELD_VALUE]: Instance 'gke-sb-pvt-cluster-default-pool-78b0666e-fjqd' creation failed: Invalid value for field 'resource.networkInterfaces[0].aliasIpRanges[0].ipCidrRange': '/24'. CIDR range for alias IP range must be within the specified subnetwork. (when acting as '[email protected]')" }, { "message": "[INVALID_FIELD_VALUE]: Instance 'gke-sb-pvt-cluster-default-pool-14f555fd-p8c0' creation failed: Invalid value for field 'resource.networkInterfaces[0].aliasIpRanges[0].ipCidrRange': '/24'. CIDR range for alias IP range must be within the specified subnetwork. (when acting as '[email protected]')" }

Support Google Provider 2.1.0

I had to lock the Google provider to 2.0.0. 2.1.0 seems to drop the subnetwork self_link. hashicorp/terraform-provider-google#3040

module.gke_cluster.google_container_cluster.cluster: Resource 'data.google_compute_subnetwork.gke_subnetwork' does not have attribute 'self_link' for variable 'data.google_compute_subnetwork.gke_subnetwork.self_link'

Relevant build: https://circleci.com/gh/gruntwork-io/terraform-google-gke/97

@rileykarson, am I supposed to use another attribute?

Stackdriver support

It would be great to have an option to enable Stackdriver Kubernetes Engine Monitoring and integrate it with Prometheus.
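For reference, a sketch of what that option maps to on the underlying google_container_cluster resource (these attribute values also appear in the plan output further down this page; whether and how the module should expose them is the actual feature request):

resource "google_container_cluster" "cluster" {
  name     = var.cluster_name
  location = var.location

  # Enable Stackdriver Kubernetes Engine Monitoring and Logging for the cluster.
  logging_service    = "logging.googleapis.com/kubernetes"
  monitoring_service = "monitoring.googleapis.com/kubernetes"
}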

Getting error during private cluster installation

I'm getting the error below when trying to create a private cluster in GKE.

I set the following config to true in main.tf:
install_tiller = true

Error: Error running command 'kubergrunt tls gen --ca --namespace kube-system --secret-name kube-system-namespace-tiller-ca-certs --secret-label gruntwork.io/tiller-namespace=kube-system --secret-label gruntwork.io/tiller-credentials=true --secret-label gruntwork.io/tiller-credentials-type=ca --tls-subject-json '{"common_name":"tiller","org":"Gruntwork"}' --tls-private-key-algorithm ECDSA --tls-private-key-ecdsa-curve P256 --kubectl-server-endpoint "$KUBECTL_SERVER_ENDPOINT" --kubectl-certificate-authority "$KUBECTL_CA_DATA" --kubectl-token "$KUBECTL_TOKEN"

kubergrunt tls gen --namespace kube-system --ca-secret-name kube-system-namespace-tiller-ca-certs --ca-namespace kube-system --secret-name tiller-certs --secret-label gruntwork.io/tiller-namespace=kube-system --secret-label gruntwork.io/tiller-credentials=true --secret-label gruntwork.io/tiller-credentials-type=server --tls-subject-json '{"common_name":"tiller","org":"Gruntwork"}' --tls-private-key-algorithm ECDSA --tls-private-key-ecdsa-curve P256 --kubectl-server-endpoint "$KUBECTL_SERVER_ENDPOINT" --kubectl-certificate-authority "$KUBECTL_CA_DATA" --kubectl-token "$KUBECTL_TOKEN"
': exit status 127. Output: /bin/sh: 1: kubergrunt: not found
/bin/sh: 3: kubergrunt: not found

Can anyone help me fix this issue?

Could not find [my-dev1] in [europe-west2]

Issue finding cluster. I'm using terraform.tfvars:

project = "my-team"
location = "europe-west2-a"
region = "europe-west2"
cluster_name = "my-dev1"
cluster_service_account_name = "my-dev1-sa"
cluster_service_account_description = "GKE Cluster Service Account managed by Terraform"
kubectl_config_path = "$HOME/.kube/config"
tls_subject = {
  common_name = "tiller"
  org         = "Me"
  org_unit    = "My Team"
  city        = "Dublin"
  country     = "Ireland"
}
client_tls_subject = {
  common_name = "admin"
  org         = "Me"
  org_unit    = "My Team"
  city        = "Dublin"
  country     = "Ireland"
}

terraform apply Error:

google_container_node_pool.node_pool: Creation complete after 5m46s [id=europe-west2-a/my-dev1/private-pool]
null_resource.configure_kubectl: Creating...
null_resource.configure_kubectl: Provisioning with 'local-exec'...
null_resource.configure_kubectl (local-exec): Executing: ["cmd" "/C" "gcloud beta container clusters get-credentials my-dev1 --region europe-west2 --project my-team"]
null_resource.configure_kubectl (local-exec): Fetching cluster endpoint and auth data.
null_resource.configure_kubectl (local-exec): ERROR: (gcloud.beta.container.clusters.get-credentials) ResponseError: code=404, message=Not found: projects/my-team/locations/europe-west2/clusters/my-dev1.
null_resource.configure_kubectl (local-exec): Could not find [my-dev1] in [europe-west2].
null_resource.configure_kubectl (local-exec): Did you mean [my-dev1] in [europe-west2-a]?


Error: Error running command 'gcloud beta container clusters get-credentials my-dev1 --region europe-west2 --project my-team': exit status 1. Output: Fetching cluster endpoint and auth data.
ERROR: (gcloud.beta.container.clusters.get-credentials) ResponseError: code=404, message=Not found: projects/my-team/locations/europe-west2/clusters/my-dev1.
Could not find [my-dev1] in [europe-west2].
Did you mean [my-dev1] in [europe-west2-a]?




Error: Error running command 'kubergrunt tls gen --ca --namespace kube-system --secret-name kube-system-namespace-tiller-ca-certs --secret-label gruntwork.io/tiller-namespace=kube-system --secret-label gruntwork.io/tiller-credentials=true --secret-label gruntwork.io/tiller-credentials-type=ca --tls-subject-json '{"city":"Dublin","common_name":"tiller","country":"Ireland","org":"Me","org_unit":"My Team"}' --tls-private-key-algorithm ECDSA --tls-private-key-ecdsa-curve P256 --kubectl-server-endpoint "$KUBECTL_SERVER_ENDPOINT" --kubectl-certificate-authority "$KUBECTL_CA_DATA" --kubectl-token "$KUBECTL_TOKEN"

kubergrunt tls gen --namespace kube-system --ca-secret-name kube-system-namespace-tiller-ca-certs --ca-namespace kube-system --secret-name tiller-certs --secret-label gruntwork.io/tiller-namespace=kube-system --secret-label gruntwork.io/tiller-credentials=true --secret-label gruntwork.io/tiller-credentials-type=server --tls-subject-json '{"city":"Dublin","common_name":"tiller","country":"Ireland","org":"Me","org_unit":"My Team"}' --tls-private-key-algorithm ECDSA --tls-private-key-ecdsa-curve P256 --kubectl-server-endpoint "$KUBECTL_SERVER_ENDPOINT" --kubectl-certificate-authority "$KUBECTL_CA_DATA" --kubectl-token "$KUBECTL_TOKEN"
': exit status 1. Output: time="2019-08-06T14:34:36+01:00" level=info msg="No context name provided. Using default." name=kubergrunt
time="2019-08-06T14:34:36+01:00" level=info msg="No kube config path provided. Using default (C:\\Users\\me\\.kube\\config)" name=kubergrunt
ERROR: invalid character '\'' looking for beginning of value

google-beta 3 removed the kubernetes_dashboard

Running on

Terraform v0.12.16

  • provider.google v2.9.1
  • provider.google-beta v3.0.0-beta.1
  • provider.random v2.2.1

We get
"addons_config.0.kubernetes_dashboard": [REMOVED] The Kubernetes Dashboard addon is removed for clusters on GKE.

IpAllocation Policy - Secondary Range Names

Hello everyone,

This week I began receiving this error when I recreate a cluster:
Error: googleapi: Error 400: Pod secondary range name (public-services) should not be the same as service secondary range name., badRequest

In my plan, the module sets the same name for both the cluster and services secondary range names.

+ ip_allocation_policy {
          + cluster_ipv4_cidr_block       = (known after apply)
          + cluster_secondary_range_name  = "public-services"
          + services_ipv4_cidr_block      = (known after apply)
          + services_secondary_range_name = "public-services"
        }

Looking at it in detail, the name comes from the vpc_network module, and it isn't possible to create a new subnetwork with a different name in that module.

I believe Google made a few changes to the API.

Following the installation instructions doesn't work.

https://github.com/gruntwork-io/terraform-google-gke/tree/master/examples/gke-private-tiller

After running the terraform init command, I get the following errors.

Initializing modules...
- gke_cluster in modules\gke-cluster
- gke_service_account in modules\gke-service-account
Downloading git::[email protected]:gruntwork-io/terraform-kubernetes-helm.git?ref=v0.3.0 for tiller...
Downloading git::[email protected]:gruntwork-io/terraform-google-network.git?ref=v0.1.0 for vpc_network...

Error: Failed to download module

Could not download module "tiller" (main.tf:311) source code from
"git::[email protected]:gruntwork-io/terraform-kubernetes-helm.git?ref=v0.3.0":
error downloading
'ssh://[email protected]/gruntwork-io/terraform-kubernetes-helm.git?ref=v0.3.0':
C:\Program Files\Git\cmd\git.exe exited with 128: Cloning into
'.terraform\modules\tiller'...
Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.



Error: Failed to download module

Could not download module "vpc_network" (main.tf:209) source code from
"git::[email protected]:gruntwork-io/terraform-google-network.git?ref=v0.1.0":
error downloading
'ssh://[email protected]/gruntwork-io/terraform-google-network.git?ref=v0.1.0':
C:\Program Files\Git\cmd\git.exe exited with 128: Cloning into
'.terraform\modules\vpc_network'...
Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.



Error: Failed to download module

Could not download module "tiller" (main.tf:311) source code from
"git::[email protected]:gruntwork-io/terraform-kubernetes-helm.git?ref=v0.3.0":
error downloading
'ssh://[email protected]/gruntwork-io/terraform-kubernetes-helm.git?ref=v0.3.0':
C:\Program Files\Git\cmd\git.exe exited with 128: Cloning into
'.terraform\modules\tiller'...
Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.



Error: Failed to download module

Could not download module "tiller" (main.tf:311) source code from
"git::[email protected]:gruntwork-io/terraform-kubernetes-helm.git?ref=v0.3.0":
error downloading
'ssh://[email protected]/gruntwork-io/terraform-kubernetes-helm.git?ref=v0.3.0':
C:\Program Files\Git\cmd\git.exe exited with 128: Cloning into
'.terraform\modules\tiller'...
Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.



c:\bemyplayertwo\terraform\prod-test\terraform-google-gke>terraform init
Initializing modules...
Downloading git::[email protected]:gruntwork-io/terraform-kubernetes-helm.git?ref=v0.3.0 for tiller...
Downloading git::[email protected]:gruntwork-io/terraform-google-network.git?ref=v0.1.0 for vpc_network...

Error: Failed to download module

Could not download module "tiller" (main.tf:311) source code from
"git::[email protected]:gruntwork-io/terraform-kubernetes-helm.git?ref=v0.3.0":
error downloading
'ssh://[email protected]/gruntwork-io/terraform-kubernetes-helm.git?ref=v0.3.0':
C:\Program Files\Git\cmd\git.exe exited with 128: Cloning into
'.terraform\modules\tiller'...
Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.



Error: Failed to download module

Could not download module "vpc_network" (main.tf:209) source code from
"git::[email protected]:gruntwork-io/terraform-google-network.git?ref=v0.1.0":
error downloading
'ssh://[email protected]/gruntwork-io/terraform-google-network.git?ref=v0.1.0':
C:\Program Files\Git\cmd\git.exe exited with 128: Cloning into
'.terraform\modules\vpc_network'...
Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.



Error: Failed to download module

Could not download module "tiller" (main.tf:311) source code from
"git::[email protected]:gruntwork-io/terraform-kubernetes-helm.git?ref=v0.3.0":
error downloading
'ssh://[email protected]/gruntwork-io/terraform-kubernetes-helm.git?ref=v0.3.0':
C:\Program Files\Git\cmd\git.exe exited with 128: Cloning into
'.terraform\modules\tiller'...
Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.



Error: Failed to download module

Could not download module "tiller" (main.tf:311) source code from
"git::[email protected]:gruntwork-io/terraform-kubernetes-helm.git?ref=v0.3.0":
error downloading
'ssh://[email protected]/gruntwork-io/terraform-kubernetes-helm.git?ref=v0.3.0':
C:\Program Files\Git\cmd\git.exe exited with 128: Cloning into
'.terraform\modules\tiller'...
Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.



Error: Failed to download module

Could not download module "vpc_network" (main.tf:209) source code from
"git::[email protected]:gruntwork-io/terraform-google-network.git?ref=v0.1.0":
error downloading
'ssh://[email protected]/gruntwork-io/terraform-google-network.git?ref=v0.1.0':
C:\Program Files\Git\cmd\git.exe exited with 128: Cloning into
'.terraform\modules\vpc_network'...
Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

Following the instructions from your Google blog post, I'm getting errors

https://cloud.google.com/blog/products/devops-sre/deploying-a-production-grade-helm-release-on-gke-with-terraform

I clicked on the first "open in google cloud shell" button which clones this repo and has some config scripts loaded.

Following the instructions on the right side, I get to terraform apply and, from the error messages, it looks like the string values aren't being interpolated?

[Screenshot: terraform apply error output]

This is my first time dabbling in Terraform (and Kubernetes), so I'm completely lost as to what's wrong.

Already existing node pool marked as to be created in plan

Terraform Version

0.12.24

Affected Resource(s)

  • google_container_node_pool

Terraform Configuration Files

Relevant config:

resource "google_container_node_pool" "node_pool" {
  provider = google-beta

  name     = "private-pool"
  project  = var.project
  location = var.location
  cluster  = module.gke_cluster.name

  initial_node_count = var.cluster_nodepool_min_nodes

  autoscaling {
    min_node_count = var.cluster_nodepool_min_nodes
    max_node_count = var.cluster_nodepool_max_nodes
  }

  management {
    auto_repair  = "true"
    auto_upgrade = "true"
  }

  node_config {
    image_type   = "COS"
    machine_type = var.cluster_nodepool_machine_type

    labels = {
      private-pools-example = "true"
    }

    # Add a private tag to the instances. See the network access tier table for full details:
    # https://github.com/gruntwork-io/terraform-google-network/tree/master/modules/vpc-network#access-tier
    tags = [
      module.vpc_network.private,
      "private-pool-example",
    ]

    disk_size_gb = "30"
    disk_type    = "pd-standard"
    preemptible  = false

    service_account = module.gke_service_account.email

    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform",
    ]
  }

  lifecycle {
    ignore_changes = [initial_node_count]
  }

  timeouts {
    create = "30m"
    update = "30m"
    delete = "30m"
  }
}

Debug Output

  + resource "google_container_node_pool" "node_pool" {
      + cluster             = "web-cluster"
      + id                  = (known after apply)
      + initial_node_count  = 2
      + instance_group_urls = (known after apply)
      + location            = "europe-west1"
      + max_pods_per_node   = (known after apply)
      + name                = "private-pool"
      + name_prefix         = (known after apply)
      + node_count          = (known after apply)
      + node_locations      = (known after apply)
      + project             = "redacted-web"
      + region              = (known after apply)
      + version             = (known after apply)
      + zone                = (known after apply)
      + autoscaling {
          + max_node_count = 2
          + min_node_count = 2
        }
      + management {
          + auto_repair  = true
          + auto_upgrade = true
        }
      + node_config {
          + disk_size_gb      = 30
          + disk_type         = "pd-standard"
          + guest_accelerator = (known after apply)
          + image_type        = "COS"
          + labels            = {
              + "private-pools-example" = "true"
            }
          + local_ssd_count   = (known after apply)
          + machine_type      = "n1-highmem-16"
          + metadata          = (known after apply)
          + oauth_scopes      = [
              + "https://www.googleapis.com/auth/cloud-platform",
            ]
          + preemptible       = false
          + service_account   = "[email protected]"
          + tags              = [
              + "private",
              + "private-pool-example",
            ]
          + shielded_instance_config {
              + enable_integrity_monitoring = (known after apply)
              + enable_secure_boot          = (known after apply)
            }
        }
      + timeouts {
          + create = "30m"
          + delete = "30m"
          + update = "30m"
        }
    }

Panic Output

Expected Behavior

No changes to the node pool; it already exists and no config has been changed since the last apply.

Actual Behavior

If applied, the following error is thrown, indicating that the node pool to be created already exists:

  on ../modules/gke-cluster/main.tf line 49, in resource "google_container_node_pool" "node_pool":
  49: resource "google_container_node_pool" "node_pool" {

Steps to Reproduce

  1. terraform plan

Important Factoids

Could it have anything to do with the Kubernetes version of the nodes? We currently have a Kubernetes node upgrade available from 1.15.9-gke.24 (current) to 1.15.9-gke.26. I'm wondering if this could have an impact on Terraform seeing these as different node pools?

Failing to install gruntwork tooling

From PR #136, @ina-stoyanova:

Just noticed the tests are failing to install the gruntwork-tooling. May be worth creating a separate issue to look into that.



Private GKE in Shared VPC env

Are these two steps must to make this work in shared VPC environment?

Add this service project SA on the host project
gcloud projects add-iam-policy-binding project_name
--member serviceAccount:[email protected]
--role roles/container.hostServiceAgentUser

Add this service project SA on the host project to the shared VPC subnet - bindings on host project

Note - I already have a manually created service account (in the service project) with "compute.networkUser" permissions on the subnet in the host project. I am using that same account for Terraform.
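For reference, a sketch of what those two bindings could look like in Terraform rather than gcloud (all variable names are placeholders, and whether both bindings are strictly required is exactly the question above):

# Grant the service project's GKE service agent the Host Service Agent User role on the host project.
resource "google_project_iam_member" "host_service_agent_user" {
  project = var.host_project
  role    = "roles/container.hostServiceAgentUser"
  member  = "serviceAccount:service-${var.service_project_number}@container-engine-robot.iam.gserviceaccount.com"
}

# Grant the Terraform service account Network User on the shared subnet in the host project.
resource "google_compute_subnetwork_iam_member" "network_user" {
  project    = var.host_project
  region     = var.region
  subnetwork = var.shared_subnetwork_name
  role       = "roles/compute.networkUser"
  member     = "serviceAccount:${var.terraform_service_account_email}"
}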

Support setting Release Channels

Release channels provide more control over automatic upgrades of GKE clusters. By default the channel is set to UNSPECIFIED, which presents upgrade options when running Terraform whenever a new GKE master version exists. For production clusters, STABLE is the recommended release channel.
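A sketch of what this could look like on the underlying google_container_cluster resource (the release_channel block is a google provider feature; exposing it through a module variable would be the actual change requested here):

resource "google_container_cluster" "cluster" {
  name     = var.cluster_name
  location = var.location

  # Subscribe the cluster to the STABLE channel so GKE manages upgrades automatically.
  release_channel {
    channel = "STABLE"
  }
}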

Helm example has no files besides the README

The Helm example on the master branch doesn't contain anything besides the README.

Do all the operations of creating a cluster initialize Helm correctly, as specified in the Google Cloud post?

Fix Google Provider 4.0.0 Breaking Changes

I am seeing the following error when upgrading google-provider to 4.0.0. Looks like there are some breaking changes in this release as noted in https://github.com/hashicorp/terraform-provider-google/releases

│ Error: Insufficient client_certificate_config blocks
│ 
│   on .terraform/modules/gke_cluster/modules/gke-cluster/main.tf line 107, in resource "google_container_cluster" "cluster":
│  107:   master_auth {
│ 
│ At least 1 "client_certificate_config" blocks are required.
╷
│ Error: Unsupported argument
│ 
│   on .terraform/modules/gke_cluster/modules/gke-cluster/main.tf line 108, in resource "google_container_cluster" "cluster":
│  108:     username = var.basic_auth_username
│ 
│ An argument named "username" is not expected here.
╵
╷
│ Error: Unsupported argument
│ 
│   on .terraform/modules/gke_cluster/modules/gke-cluster/main.tf line 109, in resource "google_container_cluster" "cluster":
│  109:     password = var.basic_auth_password
│ 
│ An argument named "password" is not expected here.

Is this repo still maintained?

I am having a hard time because none of the pull requests for GKE add-ons are being reviewed and merged by the maintainers.

The reviews seem pretty straightforward; it's just adding a flag. What's taking so long to approve these add-on PRs?

I would like to see these addons added to this module.

  • Config Connector #114
  • Shielded GKE nodes #113
  • Filestore CSI driver (not there yet)
  • node local DNS cache #128

Cluster services/pods secondary subnets overlap causing service IP issues.

Thanks for open-sourcing the modules,

We had a couple of issues with overlapping cluster service IPs / pod IPs (service IPs becoming unreachable) that we traced to having both services and pods share the same secondary range (services_secondary_range_name = var.cluster_secondary_range_name).

As per https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips#secondary_ranges the ranges have to be different, either by letting GKE create them or specifying them.

We're currently looking into tweaking the module to allow GKE to create the secondary ranges, so that we don't have to refactor the VPC module for multiple secondary ranges. Wondering if anyone else has had to deal with this.
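For illustration, the fix amounts to pointing Pods and Services at different secondary ranges, along these lines (a sketch on the underlying resource; the range names are placeholders and both ranges must already exist on the subnetwork):

resource "google_container_cluster" "cluster" {
  name     = var.cluster_name
  location = var.location

  ip_allocation_policy {
    # Pods and Services must use different secondary ranges on the subnetwork.
    cluster_secondary_range_name  = "public-pods"
    services_secondary_range_name = "public-services"
  }
}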

gke cluster module isn't idempotent

I have a template that calls the gke-cluster module. No matter what I put for variables in the template, every time I run terragrunt apply, it tells me it needs to destroy the old cluster and create a new one.

  # module.gke_cluster.google_container_cluster.cluster must be replaced
-/+ resource "google_container_cluster" "cluster" {
      ~ additional_zones            = [] -> (known after apply)
      ~ cluster_autoscaling         = [] -> (known after apply)
      ~ cluster_ipv4_cidr           = "10.4.0.0/16" -> (known after apply)
        description                 = "GKE cluster for mgmt environment"
      + enable_binary_authorization = (known after apply)
        enable_kubernetes_alpha     = false
        enable_legacy_abac          = false
      + enable_tpu                  = (known after apply)
      ~ endpoint                    = "{redacted}" -> (known after apply)
      ~ id                          = "{redacted}" -> (known after apply)
        initial_node_count          = 1
      ~ instance_group_urls         = [
          - "https://www.googleapis.com/compute/v1/projects/{redacted}/zones/us-west1-c/instanceGroups/gke-{redacted}-private-pool-4aab0243-grp",
        ] -> (known after apply)
      ~ ip_allocation_policy        = [
          ~ {
              ~ cluster_ipv4_cidr_block       = "10.4.0.0/16" -> (known after apply)
                cluster_secondary_range_name  = "public-services"
              ~ create_subnetwork             = false -> null
              ~ node_ipv4_cidr_block          = "10.0.0.0/16" -> (known after apply)
              ~ services_ipv4_cidr_block      = "10.4.0.0/16" -> (known after apply)
                services_secondary_range_name = "public-services"
              ~ subnetwork_name               = "" -> null
                use_ip_aliases                = true
            },
        ]
        location                    = "us-west1-c"
        logging_service             = "logging.googleapis.com/kubernetes"
      ~ master_version              = "1.13.7-gke.24" -> (known after apply)
        min_master_version          = "1.13.7-gke.24"
        monitoring_service          = "monitoring.googleapis.com/kubernetes"
        name                        = "{redacted}"
      ~ network                     = "projects/{redacted}/global/networks/mgmt-network" -> "https://www.googleapis.com/compute/v1/{redacted}/global/networks/mgmt-network"
      ~ node_locations              = [] -> (known after apply)
      ~ node_version                = "1.13.7-gke.24" -> (known after apply)
        project                     = "{redacted}"
      + region                      = (known after apply)
        remove_default_node_pool    = true
      - resource_labels             = {} -> null
      ~ subnetwork                  = "projects/{redacted}/regions/us-west1/subnetworks/mgmt-subnetwork-public" -> "https://www.googleapis.com/compute/v1/projects/{redacted}/regions/us-west1/subnetworks/mgmt-subnetwork-public"
      ~ zone                        = "us-west1-c" -> (known after apply)

        addons_config {
            horizontal_pod_autoscaling {
                disabled = false
            }

            http_load_balancing {
                disabled = false
            }

            kubernetes_dashboard {
                disabled = true
            }

            network_policy_config {
                disabled = false
            }
        }

      ~ maintenance_policy {
          ~ daily_maintenance_window {
              ~ duration   = "PT4H0M0S" -> (known after apply)
                start_time = "05:00"
            }
        }

      ~ master_auth {
          + client_certificate     = (known after apply)
          + client_key             = (sensitive value)
          ~ cluster_ca_certificate = "{redacted}" -> (known after apply)

            client_certificate_config {
                issue_client_certificate = false
            }
        }

        master_authorized_networks_config {
            cidr_blocks {
                cidr_block   = "0.0.0.0/0"
                display_name = "all-for-testing"
            }
        }

        network_policy {
            enabled  = true
            provider = "CALICO"
        }

      ~ node_config {
          ~ disk_size_gb      = 10 -> (known after apply)
          ~ disk_type         = "pd-standard" -> (known after apply)
          ~ guest_accelerator = [] -> (known after apply)
          ~ image_type        = "COS" -> (known after apply)
          - labels            = {
              - "private-pool-example" = "true"
            } -> null # forces replacement
          ~ local_ssd_count   = 0 -> (known after apply)
          ~ machine_type      = "n1-standard-1" -> (known after apply)
          ~ metadata          = {
              - "disable-legacy-endpoints" = "true"
            } -> (known after apply)
          ~ oauth_scopes      = [
              - "https://www.googleapis.com/auth/cloud-platform",
            ] -> (known after apply)
            preemptible       = false
            service_account   = "{redacted}@{redacted}.iam.gserviceaccount.com"
          - tags              = [
              - "private",
              - "private-pool-example",
            ] -> null # forces replacement
        }

      ~ node_pool {
          ~ initial_node_count  = 1 -> (known after apply)
          ~ instance_group_urls = [
              - "https://www.googleapis.com/compute/v1/projects/{redacted}/zones/us-west1-c/instanceGroupManagers/gke-{redacted}-private-pool-4aab0243-grp",
            ] -> (known after apply)
          ~ max_pods_per_node   = 0 -> (known after apply)
          ~ name                = "private-pool" -> (known after apply)
          + name_prefix         = (known after apply)
          ~ node_count          = 1 -> (known after apply)
          ~ version             = "1.13.7-gke.24" -> (known after apply)

          ~ autoscaling {
              ~ max_node_count = 5 -> (known after apply)
              ~ min_node_count = 1 -> (known after apply)
            }

          ~ management {
              ~ auto_repair  = true -> (known after apply)
              ~ auto_upgrade = true -> (known after apply)
            }

          ~ node_config {
              ~ disk_size_gb      = 10 -> (known after apply)
              ~ disk_type         = "pd-standard" -> (known after apply)
              ~ guest_accelerator = [] -> (known after apply)
              ~ image_type        = "COS" -> (known after apply)
              ~ labels            = {
                  - "private-pool-example" = "true"
                } -> (known after apply)
              ~ local_ssd_count   = 0 -> (known after apply)
              ~ machine_type      = "n1-standard-1" -> (known after apply)
              ~ metadata          = {
                  - "disable-legacy-endpoints" = "true"
                } -> (known after apply)
              + min_cpu_platform  = (known after apply)
              ~ oauth_scopes      = [
                  - "https://www.googleapis.com/auth/cloud-platform",
                ] -> (known after apply)
              ~ preemptible       = false -> (known after apply)
              ~ service_account   = "{redacted}@{redacted}.iam.gserviceaccount.com" -> (known after apply)
              ~ tags              = [
                  - "private",
                  - "private-pool-example",
                ] -> (known after apply)

              + taint {
                  + effect = (known after apply)
                  + key    = (known after apply)
                  + value  = (known after apply)
                }

              + workload_metadata_config {
                  + node_metadata = (known after apply)
                }
            }
        }

      ~ private_cluster_config {
          - enable_private_endpoint = false -> null
            enable_private_nodes    = true
            master_ipv4_cidr_block  = "10.2.255.0/28"
          ~ private_endpoint        = "10.2.255.2" -> (known after apply)
          ~ public_endpoint         = "{redacted}" -> (known after apply)
        }
    }

It seems, if the plan is to be believed, that the tags on the default node pool must be explicitly set to the same values as the tags in the non-default node pool, so that subsequent runs don't require updates. I haven't tested it via a fork yet.

The module forced a new cluster when I specified no node_pool outside of the module AND when I specified a node_pool outside the module. I haven't tried a node_pool with no tags, because the network won't work for that, since it needs a 'private' tag, at minimum.

Docker container runtimes is not supported

While using the code at https://github.com/gruntwork-io/terraform-google-gke/tree/master/examples/gke-private-cluster with the latest module source = "github.com/gruntwork-io/terraform-google-gke.git//modules/gke-cluster"

Output Reports

Plan: 1 to add, 0 to change, 0 to destroy.
google_container_node_pool.node_pool: Creating...
╷
│ Error: error creating NodePool: googleapi: Error 400: Creation of node pools using node images based on Docker container runtimes is not supported in GKE v1.24+ clusters as Dockershim has been removed in Kubernetes v1.24., badRequest
│
│   with google_container_node_pool.node_pool,
│   on main.tf line 93, in resource "google_container_node_pool" "node_pool":
│   93: resource "google_container_node_pool" "node_pool" {
│
╵
❯
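For context (not part of the original report): GKE v1.24+ removed Dockershim, so node pools must use a containerd-based image type. A hedged sketch of the likely change, assuming the node pool shape from the example above:

resource "google_container_node_pool" "node_pool" {
  name     = "private-pool"
  location = var.location
  cluster  = module.gke_cluster.name

  node_config {
    # "COS" is Docker-based and rejected on GKE v1.24+; use a containerd-based image type instead.
    image_type   = "COS_CONTAINERD"
    machine_type = "n1-standard-1"
  }
}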

Use Helm 3/remove Tiller?

Thanks so much for providing this article and the accompanying repo! Our team needs to deploy a single Helm chart in GKE with as few moving parts as possible to help us with federal compliance, and this is very instructive.

In the time since the article was written, Helm 3 dropped the use of Tiller, further simplifying and integrating with k8s. Do you have any plans to update your repo to make use of Helm 3 and drop the Tiller component?

Error 403: Required 'compute.networks.create'

I'm hitting the following on terraform apply

Error: Error creating Network: googleapi: Error 403: Required 'compute.networks.create' permission for 'projects/my-proj/global/networks/cluster-dev1-network-yl0n-network', forbidden

  on .terraform\modules\vpc_network\modules\vpc-network\main.tf line 12, in resource "google_compute_network" "vpc":
  12: resource "google_compute_network" "vpc" {

I've set up a Service Account key with GOOGLE_APPLICATION_CREDENTIALS="C:\terraform-2baedc723d70.json" and the following roles:
[Screenshot: service account IAM roles]

$ terraform version
Terraform v0.12.5
+ provider.external v1.2.0
+ provider.google v2.7.0
+ provider.google-beta v2.7.0
+ provider.helm v0.10.0
+ provider.kubernetes v1.7.0
+ provider.null v2.1.2
+ provider.random v2.1.2
+ provider.template v2.1.2
+ provider.tls v2.0.1

Git URL ref pin breaking module/gke-cluster bool attributes

Changing from local modules to modules pinned from a Git URL for gke_cluster and gke_service_account, it looks like Terraform can't parse the variables from the module.

module "gke_service_account" {
  # When using these modules in your own templates, you will need to use a Git URL with a ref attribute that pins you
  # to a specific version of the modules, such as the following example:
  source = "github.com/gruntwork-io/terraform-google-gke.git//modules/gke-service-account?ref=v0.2.0"
  #source = "../../modules/gke-service-account"


  name        = var.cluster_service_account_name
  project     = var.project
  description = var.cluster_service_account_description
}
module "gke_cluster" {
  # When using these modules in your own templates, you will need to use a Git URL with a ref attribute that pins you
  # to a specific version of the modules, such as the following example:
  source = "github.com/gruntwork-io/terraform-google-gke.git//modules/gke-cluster?ref=v0.2.0"
  #source = "../../modules/gke-cluster"

  name = var.cluster_name

terraform validate

Error: Incorrect attribute value type

  on .terraform/modules/gke_cluster/modules/gke-cluster/main.tf line 51, in resource "google_container_cluster" "cluster":
  51:       disabled = "${var.http_load_balancing ? 0 : 1}"

Inappropriate value for attribute "disabled": bool required.


Error: Incorrect attribute value type

  on .terraform/modules/gke_cluster/modules/gke-cluster/main.tf line 55, in resource "google_container_cluster" "cluster":
  55:       disabled = "${var.horizontal_pod_autoscaling ? 0 : 1}"

Inappropriate value for attribute "disabled": bool required.


Error: Incorrect attribute value type

  on .terraform/modules/gke_cluster/modules/gke-cluster/main.tf line 59, in resource "google_container_cluster" "cluster":
  59:       disabled = "${var.enable_kubernetes_dashboard ? 0 : 1}"

Inappropriate value for attribute "disabled": bool required.


Error: Incorrect attribute value type

  on .terraform/modules/gke_cluster/modules/gke-cluster/main.tf line 63, in resource "google_container_cluster" "cluster":
  63:       disabled = "${var.enable_network_policy ? 0 : 1}"

Inappropriate value for attribute "disabled": bool required.


Error: Unsupported argument

  on .terraform/modules/gke_cluster/modules/gke-cluster/main.tf line 83, in resource "google_container_cluster" "cluster":
  83:   master_authorized_networks_config = "${var.master_authorized_networks_config}"

An argument named "master_authorized_networks_config" is not expected here.
Did you mean to define a block of type "master_authorized_networks_config"?

Everything else was left as the default from examples/gke-private-cluster.

Anyone noticing similar issues?

terraform -v

Terraform v0.12.12
+ provider.google v2.9.1
+ provider.google-beta v2.9.1
+ provider.random v2.2.1

Issues with provider version and GCS when creating private cluster

I am looking to create a private GKE cluster and manage a Google Cloud Storage bucket with the following configuration:

uniform_bucket_level_access = true

Unfortunately, this property is only available in recent versions of the providers used by the modules, and my deploy fails because of that.

Using version 0.5.0

Variable enable_legacy_abac doesn't work

How to reproduce

In the gke-cluster module the following variable is defined:

variable "enable_legacy_abac" {
description = "Whether to enable legacy Attribute-Based Access Control (ABAC). RBAC has significant security advantages over ABAC."
type = bool
default = false
}

I set the value of this variable to true:

module "gke_cluster" {
  ...
  enable_legacy_abac = "true"
}

Expected behaviour

When I do terraform apply, I expect to see:

  # module.gke_cluster.google_container_cluster.cluster will be created
  + resource "google_container_cluster" "cluster" {
      ...
      + enable_legacy_abac          = true

Real behavior

When I do terraform apply, I see:

  # module.gke_cluster.google_container_cluster.cluster will be created
  + resource "google_container_cluster" "cluster" {
      ...
      + enable_legacy_abac          = false

How to fix

In the file modules/gke-cluster/main.tf, in the resource "google_container_cluster" "cluster" {...} block, add:

resource "google_container_cluster" "cluster" {
  enable_legacy_abac = var.enable_legacy_abac
  ...
}

Terraform code for GCP private cluster not working

I have Terraform code for a GCP private cluster that was working a month ago, but today when I try it, it gives an error.

Error: googleapi: Error 400: Alias IP addresses are required for private cluster, please make sure you enable alias IPs when creating a cluster., badRequest

  on modules/gke-cluster/main.tf line 20, in resource "google_container_cluster" "cluster":
  20: resource "google_container_cluster" "cluster" {

Am I missing something? Help would be appreciated.

Proposal: Change hardcoded module values to variables

I'm currently working on a fork that would change statically defined values (node pool sizes, machine types, naming) in the module into variables.

Is there any reason those are hardcoded at the moment? Am I missing something?

Would this be a reasonable PR to implement?

Current main.tf:

  autoscaling {
    min_node_count = "1"
    max_node_count = "5"
  }

Example of the proposed main.tf:

  autoscaling {
    min_node_count = var.autoscaling_min_node_count
    max_node_count = var.autoscaling_max_node_count
  }
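The proposal would presumably also need matching variable declarations, along these lines (the variable names come from the example above; the descriptions and defaults are assumptions):

variable "autoscaling_min_node_count" {
  description = "Minimum number of nodes the node pool can scale down to."
  type        = number
  default     = 1
}

variable "autoscaling_max_node_count" {
  description = "Maximum number of nodes the node pool can scale up to."
  type        = number
  default     = 5
}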
