
terraform-google-cloud-nat's People

Contributors

4m3ndy, aaron-lane, apeabody, betsy-lichtenberg, bharathkkb, cloud-foundation-bot, coryodaniel, dantheperson, dependabot[bot], erjohnso, g-awmalik, ingwarr, jberlinsky, jeanmorais, kevensen, kopachevsky, ludoo, morgante, nick4fake, nikhilmakhijani, philippe-vandermoere, rafaelromcar, release-please[bot], renovate[bot], robgordon89, tanguynicolas, umairidris


terraform-google-cloud-nat's Issues

Support for Terraform version 0.13.0 for vpc, cloud-router and cloud-nat modules

I have upgraded Terraform from version 0.12 to 0.13. Most of the resources are compatible with 0.13.0, but not the modules below:
"terraform-google-modules/cloud-nat/google"
"terraform-google-modules/cloud-router/google"
"terraform-google-modules/network/google"

It throws the following error:

Error: Unsupported Terraform Core version

  on .terraform/modules/vpc/modules/vpc/versions.tf line 18, in terraform:
  18:   required_version = "~> 0.12.6"

Module module.vpc.module.vpc (from ./modules/vpc) does not support Terraform
version 0.13.0. To proceed, either choose another supported Terraform version
or update this version constraint. Version constraints are normally set for
good reason, so updating the constraint may lead to other errors or unexpected
behavior.

When will support for version 0.13.0 be in place?

High Availability

My apologies if this is covered somewhere else, but is there any configuration to make Cloud NAT highly available?

How to create a NAT with a static IP?

I would like to create a NAT with a static IP using this module but don't see anything about static IPs in this repo. Could you please provide an example of how to do so?
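A minimal sketch of one way to do this, assuming a regional address reserved in the same project and region as the NAT (resource and variable names here are illustrative):

resource "google_compute_address" "nat_ip" {
  name    = "nat-static-ip"
  project = var.project_id
  region  = var.region
}

module "cloud-nat" {
  source     = "terraform-google-modules/cloud-nat/google"
  project_id = var.project_id
  region     = var.region
  router     = var.router_name

  # Supplying nat_ips makes the module derive MANUAL_ONLY allocation,
  # so the NAT keeps using this reserved static IP.
  nat_ips = [google_compute_address.nat_ip.self_link]
}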

Leaving nat_ips undefined causes a Terraform error on > 0.12

Running the basic example with Terraform v0.12.5

module "cloud-nat" {
source = "terraform-google-modules/cloud-nat/google"
project_id = "${var.project_id}"
region = "${var.region}"
router = "${var.router_name}"
}

I see the following error thrown:

Error: Incorrect attribute value type

on .terraform/modules/cloud-nat/terraform-google-modules-terraform-google-cloud-nat-137878d/main.tf line 42, in resource "google_compute_router_nat" "main":
42: nat_ips = ["${var.nat_ips}"]

Inappropriate value for attribute "nat_ips": element 0: string required.
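For context, the error comes from the module wrapping the variable as ["${var.nat_ips}"], which in Terraform 0.12 yields a list whose first element is itself a list. A sketch of the difference (illustrative, not the module's exact code; the practical workaround on the consumer side is to pin a module release that contains the fix):

# Before (0.11-style wrapping; element 0 is a list, not a string):
nat_ips = ["${var.nat_ips}"]
# After (0.12 style; pass the list through unchanged):
nat_ips = var.nat_ips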

Unable To Use Google Compute Address For Cloud NAT

TL;DR

When trying to specify nat_ips, there is a 404 error saying the resource is not found. I am able to manually create the Cloud NAT with the same IP without the error.

Expected behavior

NAT should be created without any issues

Observed behavior

Error: Error creating RouterNat: googleapi: Error 404: The resource 'projects/<projectname>/regions/us-west1/addresses/34.83.95.163' was not found, notFound

Terraform Configuration

locals {
  address      = var.create_address ? join("", google_compute_address.default.*.address) : var.address
}


resource "google_compute_router" "router" {
  name        = var.name
  network     = var.network
#   region      = var.region
#   project     = var.project
  description = var.description
  dynamic "bgp" {
    for_each = var.bgp != null ? [var.bgp] : []
    content {
      asn = var.bgp.asn

      # advertise_mode is intentionally set to CUSTOM to not allow "DEFAULT".
      # This forces the config to explicitly state what subnets and ip ranges
      # to advertise. To advertise the same range as DEFAULT, set
      # `advertise_groups = ["ALL_SUBNETS"]`.
      advertise_mode     = lookup(var.bgp, "advertise_mode", null)
      advertised_groups  = lookup(var.bgp, "advertised_groups", null)
      keepalive_interval = lookup(var.bgp, "keepalive_interval", null)

      dynamic "advertised_ip_ranges" {
        for_each = lookup(var.bgp, "advertised_ip_ranges", [])
        content {
          range       = advertised_ip_ranges.value.range
          description = lookup(advertised_ip_ranges.value, "description", null)
        }
      }
    }
  }
}

# Reserve regional ip address for cloud nat
resource "google_compute_address" "default" {
  count      = var.create_address ? 1 : 0
  # project    = var.project
  name       = "${var.name}-address"
}

resource "google_compute_router_nat" "nats" {
  for_each = {
    for n in var.nats :
    n.name => n
  }

  name                               = each.value.name
#   project                            = google_compute_router.router.project
  router                             = google_compute_router.router.name
#   region                             = google_compute_router.router.region
  nat_ip_allocate_option             = lookup(each.value, "nat_ip_allocate_option", length(lookup(each.value, "nat_ips", [])) > 0 ? "MANUAL_ONLY" : "AUTO_ONLY")
  source_subnetwork_ip_ranges_to_nat = lookup(each.value, "source_subnetwork_ip_ranges_to_nat", "ALL_SUBNETWORKS_ALL_IP_RANGES")

#   nat_ips                             = lookup(each.value, "nat_ips", null)
  nat_ips             = [local.address]
  min_ports_per_vm                    = lookup(each.value, "min_ports_per_vm", null)
  udp_idle_timeout_sec                = lookup(each.value, "udp_idle_timeout_sec", null)
  icmp_idle_timeout_sec               = lookup(each.value, "icmp_idle_timeout_sec", null)
  tcp_established_idle_timeout_sec    = lookup(each.value, "tcp_established_idle_timeout_sec", null)
  tcp_transitory_idle_timeout_sec     = lookup(each.value, "tcp_transitory_idle_timeout_sec", null)
  enable_endpoint_independent_mapping = lookup(each.value, "enable_endpoint_independent_mapping", null)

  log_config {
    enable = false
    filter = lookup(lookup(each.value, "log_config", {}), "filter", "ALL")
  }

  dynamic "subnetwork" {
    for_each = lookup(each.value, "subnetworks", [])
    content {
      name                     = subnetwork.value.name
      source_ip_ranges_to_nat  = subnetwork.value.source_ip_ranges_to_nat
      secondary_ip_range_names = lookup(subnetwork.value, "secondary_ip_range_names", null)
    }
  }
}
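One probable cause, worth checking against the provider docs: nat_ips expects address self_links, so passing the raw IP string from google_compute_address.default.*.address makes the API look it up as a resource name ('addresses/34.83.95.163'), hence the 404. A sketch of the change, assuming a var.region is available for the reserved address:

locals {
  # Use the self_link of the reserved address rather than the literal IP.
  address = var.create_address ? join("", google_compute_address.default.*.self_link) : var.address
}

# Give the reserved address an explicit region matching the NAT's region.
resource "google_compute_address" "default" {
  count  = var.create_address ? 1 : 0
  name   = "${var.name}-address"
  region = var.region
}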


Terraform Version

Terraform v1.2.3
on darwin_arm64
+ provider registry.terraform.io/hashicorp/google v4.27.0

Additional information

No response

Output wait for CloudNAT

I would like to set one of the outputs, region for example, to be set only when the Cloud NAT is available. This would help build a dependency chain for resources such as VMs, so they only spin up once the Cloud NAT is available. Something like:

output "region" {
  description = "Cloud NAT region"
  value       = var.region
  depends_on = [
    google_compute_router_nat.main,
  ]
}

Are there considerations for which I need to account?
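As an illustration of the intended use (the resource names and subnetwork variable below are hypothetical), a VM could reference that output so Terraform only creates it after the NAT exists:

resource "google_compute_instance" "worker" {
  name         = "worker-0"
  machine_type = "e2-medium"
  # Referencing the module output creates the dependency on the NAT.
  zone = "${module.cloud-nat.region}-b"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    subnetwork = var.subnetwork
  }
}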

log_config_enable has no effect on v1.3.0

Relates to the latest release, #35 and #24.

I am using the following configuration:

module "cloud-nat" {
  source                           = "terraform-google-modules/cloud-nat/google"
  version                          = "1.3.0"
  project_id                       = var.project_id
  region                           = var.region
  router                           = "a-cloud-router"
  create_router                    = true
  network                          = "mynetwork"
  tcp_established_idle_timeout_sec = "180"
  log_config_enable                = false

The above always results in a '1 to change' for log_config:

      - log_config {
          - enable = false -> null
          - filter = "ALL" -> null
        }

var.nat_ip_allocate_option has to be set to false to be able to use MANUAL_ONLY

TL;DR

var.nat_ip_allocate_option needing to be a bool doesn't make sense; it should be an enum of MANUAL_ONLY or AUTO_ONLY. If you set var.nat_ip_allocate_option to true, the ternary operator takes the value of var.nat_ip_allocate_option, which is true, making it an invalid value for the compute_router_nat resource.

Expected behavior

Setting nat_ip_allocate_option to "MANUAL_ONLY" should work.

Observed behavior

It errors with an invalid value because the variable needs to be a bool.

Terraform Configuration

module "cloud_nat" {
  source     = "terraform-google-modules/cloud-nat/google"
  version    = "~> 2.2.1"

  name       = "${var.prefix}-nat-gateway"
  project_id = var.project_id

  region     = var.region
  router     = google_compute_router.router.name
  
  nat_ip_allocate_option = "MANUAL_ONLY"
  nat_ips = [ google_compute_address.nat_gw.address ]
  
  source_subnetwork_ip_ranges_to_nat = "LIST_OF_SUBNETWORKS"

  subnetworks = [ for subnet in local.private_subnets: {
    name = subnet.self_link
    source_ip_ranges_to_nat = [ subnet.ip_cidr_range ]
    secondary_ip_range_names = []
  }]

}

Terraform Version

Terraform v1.3.0

Additional information

No response

Cloud NAT module always changes the value back to the default when tcp_established_idle_timeout_sec is set as a number

Hi ✋
Here are the Cloud NAT module properties I'm referring to:

  • tcp_established_idle_timeout_sec
  • icmp_idle_timeout_sec
  • min_ports_per_vm
  • tcp_transitory_idle_timeout_sec
  • udp_idle_timeout_sec

These properties have a string type but their values are numbers, which confused me.
We can still run plan and apply if the properties are set as numbers, like:

variable "nat" {
     name                               = string
     nat_ip_allocate_option             = string
     source_subnetwork_ip_ranges_to_nat = string
-    icmp_idle_timeout_sec              = number
-    min_ports_per_vm                   = number
     tcp_established_idle_timeout_sec   = string
-    tcp_transitory_idle_timeout_sec    = number
-    udp_idle_timeout_sec               = number

In particular, tcp_established_idle_timeout_sec goes back to the default value of 1200 when I set any timeout, such as 60, as a number.

Requests

I suggest that these properties also accept numbers, or that terraform plan generate an error when a number is supplied.
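Until the variable types change, a workaround sketch is to pass the values as strings, either quoted or converted with tostring():

module "cloud-nat" {
  source = "terraform-google-modules/cloud-nat/google"
  # ...other required arguments...

  tcp_established_idle_timeout_sec = "60"          # quoted, matching the declared string type
  udp_idle_timeout_sec             = tostring(30)  # or convert a number explicitly
}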

Thanks for reading.

New release plans?

The README file lists variables that are not available in last November's release:

  • log_config_filter
  • log_config_enable

Thanks!

gke: failed to execute portforward in network

TL;DR

I'm not able to forward ports from the pods. I have these firewall rules:

resource "google_compute_firewall" "<snip>-firewall" {
  project = <snip>
  name    = "allow-http"
  network = google_compute_network.cluster_network.name
  allow {
    protocol = "tcp"
    ports    = ["80", "1194", "443", "7111"]
  }
  source_ranges = ["35.235.240.0/20", "0.0.0.0/0"]
}

(I added 0.0.0.0/0 for testing purposes.)
The forward command itself doesn't raise an error, but when I try to access the forwarded port I see this:

% kubectl port-forward service/pritunl 7111:80
Forwarding from 127.0.0.1:7111 -> 80
Forwarding from [::1]:7111 -> 80
Handling connection for 7111
E1214 16:18:09.698736   78106 portforward.go:400] an error occurred forwarding 7111 -> 80: error forwarding port 80 to pod a2290a0c3fed42c947c3a5a70ee228ed3e42a5ea8a82297786a2fa2fa2af7267, uid : failed to execute portforward in network namespace "/var/run/netns/cni-a732520e-6c93-d975-c7f9-76b8daecab1a": failed to dial 80: dial tcp4 127.0.0.1:80: connect: connection refused

Is this expected?

Expected behavior

Port forward works

Observed behavior

Port forward fails

Terraform Configuration

variable "cluster_node_ips" {
  type    = list(string)
  default = ["1.2.3.4", "5.6.7.8"]
}

terraform {
  backend "gcs" {
    bucket = "123-infra"
    prefix = "terraform/non-prod.tfstate"
  }
}

provider "google" {
  project     = "project-co"
  region      = "europe-west3"
}

resource "google_service_account" "default-service-account" {
  account_id   = "sa-id-non-prod"
  display_name = "project default SA Non-prod"
}

resource "google_compute_network" "cluster_network" {
  name = "non-prod-network"
}

resource "google_container_cluster" "project" {
  name     = "cluster-non-prod"
  location = "europe-west3-b"

  network = google_compute_network.cluster_network.self_link
  remove_default_node_pool = true
  initial_node_count       = 1
}

resource "google_compute_router" "router" {
  project = "project-co"
  name    = "non-prod-nat-router"
  network = google_compute_network.cluster_network.self_link
  region  = "europe-west3"
}

module "cloud-nat" {
  source     = "terraform-google-modules/cloud-nat/google"
  version    = "~> 1.2"
  project_id = "project-co"
  region     = "europe-west3"
  router     = google_compute_router.router.name
}

resource "google_container_node_pool" "project_node_pool" {
  name       = "node-pool-non-prod"
  location   = "europe-west3-b"
  cluster    = google_container_cluster.project.name
  node_count = 1

  node_config {
    machine_type = "e2-medium"

    # Google recommends custom service accounts that have cloud-platform scope and permissions granted via IAM Roles.
    service_account = google_service_account.default-service-account.email
    oauth_scopes    = [
      "https://www.googleapis.com/auth/cloud-platform"
    ]
  }
}

resource "google_container_node_pool" "project_node_pool_vpn" {
  name       = "node-pool-vpn-non-prod"
  location   = "europe-west3-b"
  cluster    = google_container_cluster.project.name
  node_count = 1

  node_config {
    machine_type = "e2-medium"
    taint {
      effect = "NO_SCHEDULE"
      key = "vpnPod"
      value = "true"
    }
    # Google recommends custom service accounts that have cloud-platform scope and permissions granted via IAM Roles.
    service_account = google_service_account.default-service-account.email
    oauth_scopes    = [
      "https://www.googleapis.com/auth/cloud-platform"
    ]
  }
}

resource "google_compute_firewall" "non-prod-firewall" {
  project = "project-co"
  name    = "allow-http"
  network = google_compute_network.cluster_network.name
  allow {
    protocol = "tcp"
    ports    = ["80", "1194", "443", "7111"]
  }
  source_ranges = ["35.235.240.0/20", "0.0.0.0/0"]
}


resource "google_sql_database" "project-db" {
  name     = "project"
  instance = google_sql_database_instance.project-db-instance.name
}

resource "google_sql_database_instance" "project-db-instance" {
  name             = "project-db-non-prod"
  database_version = "POSTGRES_11"
  region = "europe-west3"

  settings {
    # Second-generation instance tiers are based on the machine
    # type. See argument reference below.
    tier = "db-custom-2-12288"

    database_flags {
      name = "temp_file_limit"
      value = "2147483647"
    }

    ip_configuration {

      dynamic "authorized_networks" {
        for_each = var.cluster_node_ips
        iterator = node_ip
        content {
          name = node_ip.value
          value = node_ip.value
        }
      }
    }
  }
}

Terraform Version

% terraform version
Terraform v1.0.11
on linux_amd64
+ provider registry.terraform.io/hashicorp/google v4.3.0
+ provider registry.terraform.io/hashicorp/google-beta v4.4.0
+ provider registry.terraform.io/hashicorp/random v3.1.0

Your version of Terraform is out of date! The latest version
is 1.1.0. You can update by downloading from https://www.terraform.io/downloads.html

Additional information

I'm trying to put my GKE cluster behind a NAT so I can whitelist one single, known IP on my RDS.

compute_router_nat subnetwork always changes for LIST_OF_SUBNETWORKS

I use the module to set up NAT for a specific subnetwork like this:

module "cloud-nat" {
	source		= "terraform-google-modules/cloud-nat/google"
	version		= "~> 1.4"
	# ... 
	source_subnetwork_ip_ranges_to_nat	= "LIST_OF_SUBNETWORKS"
	subnetworks	=	[{
		name					= "test"
		source_ip_ranges_to_nat	= ["10.10.0.0/24"]
		secondary_ip_range_names= []
		}
	]
}

After the initial apply, on the next plan or apply Terraform always shows a change, even though nothing actually changed in the Terraform code or on the resource in GCP. This is the plan:

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # module.cloud-nat.google_compute_router_nat.main will be updated in-place
  ~ resource "google_compute_router_nat" "main" {
        id                                  = ".../.../test-router/test-nat-over-test-router"
        name                                = "test-nat-over-test-router"
        # (13 unchanged attributes hidden)

      - subnetwork {
          - name                     = "https://www.googleapis.com/compute/v1/projects/.../regions/.../subnetworks/test" -> null
          - secondary_ip_range_names = [] -> null
          - source_ip_ranges_to_nat  = [
              - "ALL_IP_RANGES",
            ] -> null
        }
      + subnetwork {
          + name                     = "test"
          + secondary_ip_range_names = []
          + source_ip_ranges_to_nat  = [
              + "10.10.0.0/24",
            ]
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.

The value that appears changed is source_ip_ranges_to_nat, but I'm not sure whether the issue is in the module or the GCP API.

Thank you for looking into this and let me know if more information is needed.

P.S. I have not noticed this behavior with the default value ALL_SUBNETWORKS_ALL_IP_RANGES for the source_subnetwork_ip_ranges_to_nat argument (see Inputs).
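One likely cause, worth checking against the provider docs: source_ip_ranges_to_nat inside the subnetwork block takes the enum values ALL_IP_RANGES, PRIMARY_IP_RANGE, or LIST_OF_SECONDARY_IP_RANGES rather than CIDR ranges, so the API normalizes the CIDR back to ALL_IP_RANGES and the plan never converges. A sketch with an enum value instead:

subnetworks = [{
  name                     = "test"
  source_ip_ranges_to_nat  = ["PRIMARY_IP_RANGE"]
  secondary_ip_range_names = []
}]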

Enabling the option nat_ip_allocate_option = "MANUAL_ONLY" gives the error "Incorrect condition type"

TL;DR

I am trying to create a NAT in GCP and associate it with an existing address using the option "MANUAL_ONLY".
I get this error:

│ Error: Incorrect condition type

│ on .terraform/modules/cloud-nat/main.tf line 29, in locals:
│ 29: nat_ip_allocate_option = var.nat_ip_allocate_option ? var.nat_ip_allocate_option : local.default_nat_ip_allocate_option
│ ├────────────────
│ │ var.nat_ip_allocate_option is "MANUAL_ONLY"

│ The condition expression must be of type bool.

Expected behavior

The Cloud NAT is associated with the existing address.

Observed behavior

│ Error: Incorrect condition type

│ on .terraform/modules/cloud-nat/main.tf line 29, in locals:
│ 29: nat_ip_allocate_option = var.nat_ip_allocate_option ? var.nat_ip_allocate_option : local.default_nat_ip_allocate_option
│ ├────────────────
│ │ var.nat_ip_allocate_option is "MANUAL_ONLY"

│ The condition expression must be of type bool.

Terraform Configuration

module "cloud-nat" {
  source     = "terraform-google-modules/cloud-nat/google"
  version    = "~> 1.2"
  project_id = var.project_id
  region     = var.services_region
  router     = "services-router"
  name       = "services-nat" 

  nat_ip_allocate_option = "MANUAL_ONLY"
  nat_ips                = module.address.self_links
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_PRIMARY_IP_RANGES"

}

Terraform Version

Terraform v1.1.7
on linux_amd64
+ provider registry.terraform.io/hashicorp/google v3.90.1
+ provider registry.terraform.io/hashicorp/google-beta v4.13.0
+ provider registry.terraform.io/hashicorp/null v3.1.0
+ provider registry.terraform.io/hashicorp/random v3.1.0
+ provider registry.terraform.io/hashicorp/template v2.2.0

Additional information

No response

google_compute_router_nat doesn't converge when log_config is missing or disabled

If you don't set log_config, or disable it, the resource never converges:

module "cloud-nat" {
  source  = "terraform-google-modules/cloud-nat/google"
  version = "1.3.0"

  project_id             = <redacted>
  region                 = "us-central1"
  name                   = "k8s-nat"
  router                 = google_compute_router.nat_router.name
  nat_ip_allocate_option = false
  nat_ips                = [google_compute_address.nat_address.self_link]
  log_config_enable      = "false"
  log_config_filter      = "ALL"
}

  # module.cloud-nat.google_compute_router_nat.main will be updated in-place
  ~ resource "google_compute_router_nat" "main" {
        icmp_idle_timeout_sec              = 30
        id                                 = "<redacted>"
        min_ports_per_vm                   = 64
        name                               = "k8s-nat"
        nat_ip_allocate_option             = "MANUAL_ONLY"
        nat_ips                            = [
            "https://www.googleapis.com/compute/v1/projects/<redacted>",
        ]
        project                            = "<redacted>"
        region                             = "us-central1"
        router                             = "k8s-nat-router"
        source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
        tcp_established_idle_timeout_sec   = 1200
        tcp_transitory_idle_timeout_sec    = 30
        udp_idle_timeout_sec               = 30

      - log_config {
          - enable = false -> null
          - filter = "ALL" -> null
        }
    }

or

module "cloud-nat" {
  source  = "terraform-google-modules/cloud-nat/google"
  version = "1.3.0"

  project_id             = <redacted>
  region                 = "us-central1"
  name                   = "k8s-nat"
  router                 = google_compute_router.nat_router.name
  nat_ip_allocate_option = false
  nat_ips                = [google_compute_address.nat_address.self_link]
}

  # module.cloud-nat.google_compute_router_nat.main will be updated in-place
  ~ resource "google_compute_router_nat" "main" {
        icmp_idle_timeout_sec              = 30
        id                                 = "<redacted>"
        min_ports_per_vm                   = 64
        name                               = "k8s-nat"
        nat_ip_allocate_option             = "MANUAL_ONLY"
        nat_ips                            = [
            "https://www.googleapis.com/compute/v1/projects/<redacted>",
        ]
        project                            = "<redacted>"
        region                             = "us-central1"
        router                             = "k8s-nat-router"
        source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
        tcp_established_idle_timeout_sec   = 1200
        tcp_transitory_idle_timeout_sec    = 30
        udp_idle_timeout_sec               = 30

      - log_config {
          - enable = false -> null
          - filter = "ALL" -> null
        }
    }

Enabling log_config allows the resource to converge:

  log_config_enable      = "true"
  log_config_filter      = "ERRORS_ONLY"

Relates to #24.

Readme out of date

  • nat_ips is actually nat_addresses

  • nat_ip_allocate_option needs a better description

NAT gateway with nat_ip_allocate_option=AUTO, no ip address output

Issue summary:
When creating a NAT gateway with nat_ip_allocate_option=AUTO_ONLY, the NAT gateway Terraform module does not output the IP addresses that were assigned to it. Deriving the IP address assigned to the NAT gateway would involve a data resource, which doesn't feel like best practice.
I would expect the IP address to be output in the nat_ips array. The documentation suggests that this field is ignored if AUTO_ONLY is set, but it's ambiguous whether this extends to the output data.
I'd either expect there to be a separate output for the IPs outright, or for the nat_ips array to be populated regardless of the nat_ip_allocate_option value.

Steps to reproduce:
Create a NAT gateway with Terraform. Set "nat_ip_allocate_option" = "AUTO_ONLY". Try to get an IP address out of "nat_ips" = [] and see that it's empty.
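A workaround sketch until such an output exists: reserve the addresses yourself and let the module fall back to MANUAL_ONLY, so the IPs are known from your own google_compute_address resources (names below are illustrative):

resource "google_compute_address" "nat" {
  count  = 2
  name   = "nat-ip-${count.index}"
  region = var.region
}

module "cloud-nat" {
  source     = "terraform-google-modules/cloud-nat/google"
  project_id = var.project_id
  region     = var.region
  router     = var.router
  nat_ips    = google_compute_address.nat[*].self_link
}

output "nat_ips" {
  description = "External IPs used by the NAT"
  value       = google_compute_address.nat[*].address
}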

Support for Dynamic Port Allocation

TL;DR

Google and Terraform support dynamic port allocation, but this module does not yet support it.

The following new variables are needed:

max_ports_per_vm - (Optional) Maximum number of ports allocated to a VM from this NAT. This field can only be set when enableDynamicPortAllocation is enabled.

enable_dynamic_port_allocation - (Optional) Enable Dynamic Port Allocation. If minPortsPerVm is set, minPortsPerVm must be set to a power of two greater than or equal to 32. If minPortsPerVm is not set, a minimum of 32 ports will be allocated to a VM from this NAT config. If maxPortsPerVm is set, maxPortsPerVm must be set to a power of two greater than minPortsPerVm. If maxPortsPerVm is not set, a maximum of 65536 ports will be allocated to a VM from this NAT config. Mutually exclusive with enableEndpointIndependentMapping.

Terraform Resources

https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_router_nat#enable_dynamic_port_allocation

Detailed design

enableDynamicPortAllocation can only be set to true if enableEndpointIndependentMapping is false.
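A rough sketch of how the pass-through might look inside the module (variable names follow this issue; this is not the module's actual code):

variable "enable_dynamic_port_allocation" {
  type        = bool
  description = "Enable Dynamic Port Allocation; mutually exclusive with endpoint-independent mapping"
  default     = false
}

variable "max_ports_per_vm" {
  type        = number
  description = "Maximum ports per VM; only valid when dynamic port allocation is enabled"
  default     = null
}

resource "google_compute_router_nat" "main" {
  # ...existing arguments...
  enable_dynamic_port_allocation = var.enable_dynamic_port_allocation
  max_ports_per_vm               = var.enable_dynamic_port_allocation ? var.max_ports_per_vm : null
}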

Additional information

Google provider version 4.5 or greater

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Detected dependencies

gomod
test/integration/go.mod
  • go 1.21
  • go 1.21.9
  • github.com/GoogleCloudPlatform/cloud-foundation-toolkit/infra/blueprint-test v0.14.0
  • github.com/stretchr/testify v1.9.0
regex
Makefile
  • cft/developer-tools 1.20
build/int.cloudbuild.yaml
  • cft/developer-tools 1.20
build/lint.cloudbuild.yaml
  • cft/developer-tools 1.20
terraform
examples/advanced/main.tf
  • terraform-google-modules/cloud-nat/google ~> 5.0
examples/basic/main.tf
  • terraform-google-modules/cloud-nat/google ~> 5.0
examples/nat_with_compute_engine/main.tf
  • terraform-google-modules/cloud-nat/google ~> 5.0
  • terraform-google-modules/network/google ~> 9.0
examples/nat_with_gke/main.tf
  • terraform-google-modules/cloud-nat/google ~> 5.0
  • terraform-google-modules/network/google ~> 9.0
test/fixtures/advanced/example.tf
test/fixtures/basic/example.tf
test/fixtures/shared/provider.tf
test/fixtures/subnetworks/example.tf
test/setup/main.tf
  • terraform-google-modules/project-factory/google ~> 15.0
test/setup/versions.tf
  • google >= 3.53
  • google-beta >= 3.53
  • hashicorp/terraform >= 0.13
versions.tf
  • google >= 4.51, < 6
  • random ~> 3.0
  • hashicorp/terraform >= 0.13

  • Check this box to trigger a request for Renovate to run again on this repository

Endpoint-Independent Mapping default value should be false (not null)

TL;DR

Google sets the default for the Endpoint-Independent Mapping property to false, but the default value in this module is set to null, which results in Endpoint-Independent Mapping being set to true.

When Endpoint-Independent Mapping is enabled, it can increase the likelihood of Sent Dropped Packets for the Cloud NAT device. Troubleshooting information from Google is available here.
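Until the default changes, a sketch of setting it explicitly in the module call (assuming a module version that exposes the variable):

module "cloud-nat" {
  source = "terraform-google-modules/cloud-nat/google"
  # ...other required arguments...

  enable_endpoint_independent_mapping = false
}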

Terraform Resources

No response

Detailed design

No response

Additional information

No response

new release / version tag after lint-failing file removed

#53 removed a Python script that was failing lint checks. Could a new release be created that contains this fix?

We're currently referencing this module by its latest version tag, v2.0.0, but that version fails our lint checks too due to the aforementioned Python script. Could we request a new tag pointing to current master, or any commit after the PR #53 merge?
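As an interim workaround, the module source can be pinned to a specific git ref instead of a registry version (the commit hash below is a placeholder):

module "cloud-nat" {
  source = "github.com/terraform-google-modules/terraform-google-cloud-nat?ref=<commit-sha>"
  # ...other arguments...
}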

Thanks!

Unable to use a global address for the NAT IP?

Hi

I'm trying to set up a Cloud NAT that uses an external global IP, but it appears to look for the passed-in NAT IP in the same region as the NAT, even though the self-link of the IP points to a global address.

resource "google_compute_router" "router" {
  name = "${var.name}-nat-router"
  network = module.vpc.network_self_link
}

data "google_compute_global_address" "address" {
  name = module.ip.names.0
}

module "ip" {
  source  = "terraform-google-modules/address/google"
  version = "2.0.0"

  names      = ["${var.name}-ip"]
  global     = true
  region     = var.region
  project_id = var.project_id
}

module "cloud-nat" {
  source     = "terraform-google-modules/cloud-nat/google"
  version    = "1.2.0"
  project_id = var.project_id
  region     = var.region

  nat_ips    = [data.google_compute_global_address.address.self_link]
  router     = google_compute_router.router.name
}

If I run this, I get the following error:

Error: Error creating RouterNat: googleapi: Error 404: The resource 'projects/my-project/regions/europe-west4/addresses/my-test-ip' was not found, notFound

  on .terraform/modules/cloud-nat/terraform-google-modules-terraform-google-cloud-nat-86e5f53/main.tf line 45, in resource "google_compute_router_nat" "main":
  45: resource "google_compute_router_nat" "main" {

However, looking at the self-link of the created IP:

output "out" {
  value = data.google_compute_global_address.address.self_link
}

shows that it points to the global resource:

https://www.googleapis.com/compute/v1/projects/my-project/global/addresses/my-test-ip
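For reference, Cloud NAT only accepts regional external addresses in the NAT's region, so a sketch of a working variant reserves a regional (non-global) address and passes its self-link via the address module's self_links output (argument names otherwise follow the original config):

module "ip" {
  source  = "terraform-google-modules/address/google"
  version = "2.0.0"

  names      = ["${var.name}-ip"]
  global     = false   # regional address in the same region as the NAT
  region     = var.region
  project_id = var.project_id
}

module "cloud-nat" {
  source     = "terraform-google-modules/cloud-nat/google"
  version    = "1.2.0"
  project_id = var.project_id
  region     = var.region

  nat_ips = module.ip.self_links
  router  = google_compute_router.router.name
}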

Support specific subnetworks

The google_compute_router_nat resource supports specific subnetworks: it would be great to have this feature in the NAT module.

Example:

resource "google_compute_router_nat" "main" {
  region     = "${google_compute_router.main.region}"	
  router     = "${google_compute_router.main.name}"
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "LIST_OF_SUBNETWORKS"
  subnetwork {
    name                    = "${data.google_compute_subnetwork.main.self_link}"
    source_ip_ranges_to_nat = ["ALL_IP_RANGES"]
  }
}

Support Dynamic Port Allocation

TL;DR

Add support for Cloud Nat Dynamic Port Allocation. The feature is currently in Preview.

Terraform Resources

[google_compute_router_nat](https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_router_nat)

Detailed design

I guess the resource should add support for the two additional parameters: `max_ports_per_vm` and `enable_dynamic_port_allocation`.

Example:

resource "google_compute_router_nat" "nat" {
  name                               = "my-router-nat"
  router                             = google_compute_router.router.name
  region                             = google_compute_router.router.region
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"

  min_ports_per_vm               = 32
  max_ports_per_vm               = 64000
  enable_dynamic_port_allocation = true
}


Additional information

No response

Release log_config

Hello,

Is there a timeline around when the new log_config support will be released?

Currently I'm using source = "github.com/terraform-google-modules/terraform-google-cloud-nat" in order to deploy and use cloud-nat with log_config support

Support Max ports per VM setting

TL;DR

There can be scenarios where it's desirable to set a reasonable limit on the number of sessions that a single VM can create, for example to prevent a load test from exhausting all NAT ports or contain a denial of service condition.

Terraform Resources

https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_router_nat#max_ports_per_vm

Detailed design

This setting can only be used when enable_dynamic_port_allocation = true.

variable "enable_dpa" {
  type = bool
  description = "Specifies whether to enable Dynamic Port Allocation"
  default = true
}
variable "max_ports_per_vm" {
  type        = string
  description = "Max ports per VM (only relevant if DPA is enabled)"
  default     = "32768"
}
resource "google_compute_router_nat" "default" {
  enable_dynamic_port_allocation      = var.enable_dpa
  min_ports_per_vm                    = var.min_ports_per_vm
  max_ports_per_vm                    = var.enable_dpa ? var.max_ports_per_vm : null
}

Additional information

No response

Cannot determine region.

Terraform v0.13.3

Error: Cannot determine region: set in this resource, or set provider-level 'region' or 'zone'.

resource "google_compute_router" "router-edge" {
  name    = "router-edge"
  network = google_compute_network.infra-network.self_link
  bgp {
    asn               = 64514
    advertise_mode    = "CUSTOM"
    advertised_groups = ["ALL_SUBNETS"]
    advertised_ip_ranges {
      range = "1.2.3.4"
    }
    advertised_ip_ranges {
      range = "6.7.0.0/16"
    }
  }
}

module "cloud-nat" {
  source     = "terraform-google-modules/cloud-nat/google"
  version    = "~> 1.3"
  project_id = "my-project-id"
  region     = "europe-west3-a"
  router     = google_compute_router.router-edge.self_link
}
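A likely cause: "europe-west3-a" is a zone rather than a region, and the router resource has no region of its own, so the provider cannot determine one. A sketch of the corrected configuration:

resource "google_compute_router" "router-edge" {
  name    = "router-edge"
  network = google_compute_network.infra-network.self_link
  region  = "europe-west3"
  # ...bgp block as before...
}

module "cloud-nat" {
  source     = "terraform-google-modules/cloud-nat/google"
  version    = "~> 1.3"
  project_id = "my-project-id"
  region     = "europe-west3"
  router     = google_compute_router.router-edge.name
}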

Why does nat_ip_allocation_option exist? Bug?

It's a string that should be either "MANUAL_ONLY" or "AUTO_ONLY" but it is used as a boolean.
It seems to me that this might be a bug or that the variable is entirely unnecessary.

Error: Incorrect condition type

  on main.tf line 29, in locals:
  29:   nat_ip_allocate_option = var.nat_ip_allocate_option ? var.nat_ip_allocate_option : local.default_nat_ip_allocate_option
    |----------------
    | var.nat_ip_allocate_option is "AUTO_ONLY"

The condition expression must be of type bool.

https://github.com/terraform-google-modules/terraform-google-cloud-nat/blob/master/main.tf#L27-L29

locals {
  default_nat_ip_allocate_option = local.nat_ips_length > 0 ? "MANUAL_ONLY" : "AUTO_ONLY"
  nat_ip_allocate_option = var.nat_ip_allocate_option ? var.nat_ip_allocate_option : local.default_nat_ip_allocate_option
}
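Given that local, a workaround sketch is to leave nat_ip_allocate_option unset and rely on the derived default, which becomes MANUAL_ONLY whenever nat_ips is non-empty:

module "cloud-nat" {
  source     = "terraform-google-modules/cloud-nat/google"
  project_id = var.project_id
  region     = var.region
  router     = var.router

  # Omit nat_ip_allocate_option; a non-empty nat_ips list yields MANUAL_ONLY.
  nat_ips = [google_compute_address.nat.self_link]
}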

max_ports_per_vm with dynamic allocation

TL;DR

This pull request added support for dynamic port allocation, but did not add support for max_ports_per_vm.

Terraform Resources

https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_router_nat#max_ports_per_vm

Detailed design

No response

Additional information

No response

max_ports_per_vm not taken into account

TL;DR

Since version 4.27 of the Google provider, my values for max_ports_per_vm are always set to null.

I tried without any value, and with the values 65536 and 512, and I always get null during my terraform plan.

Expected behavior

max_ports_per_vm should take the configured value into account.

Observed behavior

max_ports_per_vm value is always null

Terraform Configuration

module "cloud_router" {
  source  = "terraform-google-modules/cloud-router/google"
  version = "2.0.0"

  name    = "${var.region-eu-west1}-cloud-nat"
  project = var.project_id
  region  = var.region-eu-west1
  network = google_compute_network.vpc_mdm_network.name

  nats = [{
      name : "test"
      source_subnetwork_ip_ranges_to_nat : "LIST_OF_SUBNETWORKS"
      min_ports_per_vm : 32
      max_ports_per_vm : 65536
      enable_dynamic_port_allocation : true
      enable_endpoint_independent_mapping : false
      nat_ip_allocate_option = "MANUAL_ONLY"
      log_config = {
        filter : "ERRORS_ONLY"
      }
      subnetworks = [
        {
          name : google_compute_subnetwork.mysubnetwork.name
          source_ip_ranges_to_nat : [
            "PRIMARY_IP_RANGE"
          ]
        }
      ]
      nat_ips = [
        google_compute_address.my_gateway.self_link,
      ]
    }
]
}

Terraform Version

1.2.3

Additional information

No response
