
terraform-example-foundation

This example repository shows how the CFT Terraform modules can build a secure Google Cloud foundation, following the Google Cloud Enterprise Foundations Blueprint (previously called the Security Foundations Guide). The supplied structure and code are intended to form a starting point for building your own foundation, with pragmatic defaults that you can customize to meet your own requirements. Currently, step 0 is executed manually. From step 1 onwards, the Terraform code is deployed using either Google Cloud Build (the default) or Jenkins. Cloud Build was chosen as the default so that you can get started quickly without having to deploy a CI/CD tool, although the code can easily be executed by your preferred tool.

Overview

This repo contains several distinct Terraform projects, each in its own directory, that must be applied separately but in sequence. Each of these Terraform projects is layered on top of the previous one and run in the following order.

0. bootstrap

This stage executes the CFT Bootstrap module, which bootstraps an existing Google Cloud organization, creating all the required Google Cloud resources and permissions to start using the Cloud Foundation Toolkit (CFT). For the CI/CD pipeline, you can use either Cloud Build (the default) or Jenkins. If you want to use Jenkins instead of Cloud Build, see README-Jenkins for how to use the Jenkins sub-module.

The bootstrap step includes:

  • The prj-b-seed project that contains the following:
    • Terraform state bucket
    • Custom service accounts used by Terraform to create new resources in Google Cloud
  • The prj-b-cicd project that contains the following:
    • A CI/CD Pipeline implemented with either Cloud Build or Jenkins
    • If using Cloud Build, the following items:
      • Cloud Source Repository
      • Artifact Registry
    • If using Jenkins, the following items:
      • A Compute Engine instance configured as a Jenkins Agent
      • Custom service account to run Compute Engine instances for Jenkins Agents
      • VPN connection with on-prem (or wherever your Jenkins Controller is located)

It is a best practice to separate concerns by having two projects here: one for the Terraform state and one for the CI/CD tool.

  • The prj-b-seed project stores Terraform state and has the service accounts that can create or modify infrastructure.
  • The prj-b-cicd project holds the CI/CD tool (either Cloud Build or Jenkins) that coordinates the infrastructure deployment.

To separate concerns at the IAM level as well, a distinct service account is created for each stage. The Terraform custom service accounts are granted the IAM permissions required to build the foundation. If using Cloud Build as the CI/CD tool, these service accounts are used directly in the pipeline to execute the pipeline steps (plan or apply). In this configuration, the baseline permissions of the CI/CD tool are unchanged.

If using Jenkins as the CI/CD tool, the service account of the Jenkins Agent is granted impersonation access so it can generate tokens for the Terraform custom service accounts. In this configuration, the baseline permissions of the CI/CD tool are limited.
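As a hedged illustration of this impersonation pattern (the service account name and provider alias below are hypothetical, not the repository's exact configuration), the agent's own credentials mint short-lived tokens for a Terraform custom service account:

```hcl
# Hypothetical sketch: the CI/CD agent's baseline credentials are used only to
# generate a short-lived access token for the Terraform custom service account.
provider "google" {
  alias = "impersonation"
}

data "google_service_account_access_token" "default" {
  provider               = google.impersonation
  target_service_account = "sa-terraform-org@prj-b-seed.iam.gserviceaccount.com" # hypothetical
  scopes                 = ["cloud-platform"]
  lifetime               = "1200s"
}

# All foundation resources are then created with the impersonated identity.
provider "google" {
  access_token = data.google_service_account_access_token.default.access_token
}
```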

After executing this step, you will have the following structure:

example-organization/
└── fldr-bootstrap
    ├── prj-b-cicd
    └── prj-b-seed

When this step uses the Cloud Build submodule, it sets up the CI/CD project (prj-b-cicd) with Cloud Build and a Cloud Source Repository for each of the stages below. Triggers are configured to run terraform plan for any non-environment branch and terraform apply when changes are merged to an environment branch (development, nonproduction, or production). Usage instructions are available in the 0-bootstrap README.
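The trigger arrangement described above can be sketched with the Google provider's Cloud Build trigger resource (the repository and file names here are illustrative assumptions, not the exact values the module uses):

```hcl
# Illustrative sketch: apply on environment branches, plan everywhere else.
resource "google_cloudbuild_trigger" "apply" {
  name     = "tf-apply"
  filename = "cloudbuild-tf-apply.yaml" # illustrative
  trigger_template {
    repo_name   = "gcp-environments" # illustrative CSR name
    branch_name = "^(development|nonproduction|production)$"
  }
}

resource "google_cloudbuild_trigger" "plan" {
  name     = "tf-plan"
  filename = "cloudbuild-tf-plan.yaml" # illustrative
  trigger_template {
    repo_name    = "gcp-environments"
    branch_name  = "^(development|nonproduction|production)$"
    invert_regex = true # fire on every branch that does NOT match
  }
}
```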

1. org

The purpose of this stage is to set up the common folder used to house projects that contain shared resources such as the Security Command Center notification, Cloud Key Management Service (KMS), org-level secrets, and org-level logging. This stage also sets up the network folder used to house network-related projects such as the DNS hub, Interconnect, network hubs, and the base and restricted projects for each environment (development, nonproduction, and production). It creates the following folder and project structure:

example-organization/
├── fldr-common
│   ├── prj-c-logging
│   ├── prj-c-billing-logs
│   ├── prj-c-scc
│   ├── prj-c-kms
│   └── prj-c-secrets
└── fldr-network
    ├── prj-net-hub-base
    ├── prj-net-hub-restricted
    ├── prj-net-dns
    ├── prj-net-interconnect
    ├── prj-d-shared-base
    ├── prj-d-shared-restricted
    ├── prj-n-shared-base
    ├── prj-n-shared-restricted
    ├── prj-p-shared-base
    └── prj-p-shared-restricted

Logs

Among the five projects created under the common folder, two (prj-c-logging and prj-c-billing-logs) are used for logging. The first is for organization-wide audit logs, and the second is for billing logs. In both cases, the logs are collected into BigQuery datasets which you can then use for general querying, dashboarding, and reporting. Logs are also exported to Pub/Sub, a Cloud Storage bucket, and a log bucket.

Notes:

  • Log export to Cloud Storage bucket has optional object versioning support via log_export_storage_versioning.
  • The various audit log types being captured in BigQuery are retained for 30 days.
  • For billing data, a BigQuery dataset is created with permissions attached; however, you will need to configure a billing export manually, as there is no easy way to automate this at the moment.
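A minimal sketch of the kind of organization-level aggregated sink feeding the logging project's BigQuery dataset (the org ID, sink name, and dataset path are illustrative assumptions):

```hcl
# Illustrative sketch of an org-level audit log sink into BigQuery.
resource "google_logging_organization_sink" "audit_logs" {
  name             = "sk-c-logging-bq" # illustrative
  org_id           = "123456789012"    # illustrative
  include_children = true
  filter           = "logName: \"/logs/cloudaudit.googleapis.com%2Factivity\""
  destination      = "bigquery.googleapis.com/projects/prj-c-logging/datasets/audit_logs"
}
```

Note that the sink's writer identity must separately be granted write access (e.g. BigQuery Data Editor) on the destination dataset.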

Security Command Center notification

Another project created under the common folder. This project will host the Security Command Center notification resources at the organization level. This project will contain a Pub/Sub topic, a Pub/Sub subscription, and a Security Command Center notification configured to send all new findings to the created topic. You can adjust the filter when deploying this step.
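A hedged sketch of what such a notification setup can look like (the project ID, org ID, and resource names are illustrative):

```hcl
# Illustrative sketch: Pub/Sub topic plus an SCC notification config that
# streams all new active findings to it.
resource "google_pubsub_topic" "scc_notification" {
  project = "prj-c-scc" # illustrative
  name    = "top-scc-notification"
}

resource "google_scc_notification_config" "all_findings" {
  config_id    = "scc-notify-active-findings"
  organization = "123456789012" # illustrative
  description  = "Send all new active findings to Pub/Sub"
  pubsub_topic = google_pubsub_topic.scc_notification.id

  streaming_config {
    filter = "state = \"ACTIVE\"" # adjust this filter when deploying
  }
}
```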

KMS

Another project created under the common folder. This project is allocated for Cloud Key Management for KMS resources shared by the organization.

Usage instructions are available for the org step in the README.

Secrets

Another project created under the common folder. This project is allocated for Secret Manager for secrets shared by the organization.

Usage instructions are available for the org step in the README.

DNS hub

This project is created under the network folder. This project will host the DNS hub for the organization.

Interconnect

Another project created under the network folder. This project will host the Dedicated Interconnect connections for the organization. In the case of Partner Interconnect, this project is unused and the VLAN attachments will be placed directly into the corresponding hub projects.

Networking

Under the network folder, two projects, one for the base network and another for the restricted network, are created per environment (development, nonproduction, and production); each is intended to be used as a Shared VPC host project for all projects in that environment. This stage only creates the projects and enables the correct APIs; the subsequent network stages, 3-networks-dual-svpc and 3-networks-hub-and-spoke, create the actual Shared VPC networks.
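The host/service relationship established across these stages can be sketched as follows (project IDs are illustrative):

```hcl
# Illustrative sketch: mark the environment's network project as a Shared VPC
# host, then attach a service project to it.
resource "google_compute_shared_vpc_host_project" "host" {
  project = "prj-d-shared-base" # illustrative
}

resource "google_compute_shared_vpc_service_project" "service" {
  host_project    = google_compute_shared_vpc_host_project.host.project
  service_project = "prj-d-bu1-sample-base" # illustrative
}
```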

2. environments

The purpose of this stage is to set up the environment folders used to house the monitoring, KMS, and secrets projects for each environment. It creates the following folder and project structure:

example-organization/
├── fldr-development
│   ├── prj-d-monitoring
│   ├── prj-d-kms
│   └── prj-d-secrets
├── fldr-nonproduction
│   ├── prj-n-monitoring
│   ├── prj-n-kms
│   └── prj-n-secrets
└── fldr-production
    ├── prj-p-monitoring
    ├── prj-p-kms
    └── prj-p-secrets

Monitoring

Under the environment folder, a project is created per environment (development, nonproduction, and production), which is intended to be used as a Cloud Monitoring workspace for all projects in that environment. Please note that creating the workspace and linking projects can currently only be completed through the Cloud Console. If you have strong IAM requirements for these monitoring workspaces, it is worth considering creating these at a more granular level, such as per business unit or per application.

KMS

Under the environment folder, a project is created per environment (development, nonproduction, and production), which is intended to be used by Cloud Key Management for KMS resources shared by the environment.

Usage instructions are available for the environments step in the README.

Secrets

Under the environment folder, a project is created per environment (development, nonproduction, and production), which is intended to be used by Secret Manager for secrets shared by the environment.

Usage instructions are available for the environments step in the README.

3. networks-dual-svpc

This step focuses on creating a Shared VPC per environment (development, nonproduction, and production) in a standard configuration with a reasonable security baseline. Currently, this includes:

  • (Optional) Example subnets for development, nonproduction, and production inclusive of secondary ranges for those that want to use Google Kubernetes Engine.
  • Hierarchical firewall policy created to allow remote access to VMs through IAP, without needing public IPs.
  • Hierarchical firewall policy created to allow for load balancing health checks.
  • Hierarchical firewall policy created to allow Windows KMS activation.
  • Private service networking configured to enable workload-dependent resources like Cloud SQL.
  • Base Shared VPC with private.googleapis.com configured for base access to googleapis.com and gcr.io. Route added for VIP so no internet access is required to access APIs.
  • Restricted Shared VPC with restricted.googleapis.com configured for restricted access to googleapis.com and gcr.io. Route added for VIP so no internet access is required to access APIs.
  • Default routes to the internet removed, with a tag-based route (egress-internet) required on VMs in order to reach the internet.
  • (Optional) Cloud NAT configured for all subnets with logging and static outbound IPs.
  • Default Cloud DNS policy applied, with DNS logging and inbound query forwarding turned on.
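The tag-gated egress route in the list above can be sketched as follows (the network name is an illustrative assumption):

```hcl
# Illustrative sketch: only VMs carrying the egress-internet network tag
# receive a default route to the internet.
resource "google_compute_route" "egress_internet" {
  name             = "rt-egress-internet"
  network          = "vpc-d-shared-base" # illustrative
  dest_range       = "0.0.0.0/0"
  next_hop_gateway = "default-internet-gateway"
  priority         = 1000
  tags             = ["egress-internet"]
}
```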

Usage instructions are available for the networks step in the README.

3. networks-hub-and-spoke

This step configures the same network resources as step 3-networks-dual-svpc, but it uses an architecture based on the hub-and-spoke reference network model.

Usage instructions are available for the networks step in the README.

4. projects

This step focuses on creating service projects in a standard configuration that are attached to the Shared VPC created in the previous step, along with the application infrastructure pipelines. Running this code as-is should generate the structure shown below:

example-organization/
├── fldr-development
│   ├── fldr-development-bu1
│   │   ├── prj-d-bu1-kms
│   │   ├── prj-d-bu1-sample-floating
│   │   ├── prj-d-bu1-sample-base
│   │   ├── prj-d-bu1-sample-restrict
│   │   └── prj-d-bu1-sample-peering
│   └── fldr-development-bu2
│       ├── prj-d-bu2-kms
│       ├── prj-d-bu2-sample-floating
│       ├── prj-d-bu2-sample-base
│       ├── prj-d-bu2-sample-restrict
│       └── prj-d-bu2-sample-peering
├── fldr-nonproduction
│   ├── fldr-nonproduction-bu1
│   │   ├── prj-n-bu1-kms
│   │   ├── prj-n-bu1-sample-floating
│   │   ├── prj-n-bu1-sample-base
│   │   ├── prj-n-bu1-sample-restrict
│   │   └── prj-n-bu1-sample-peering
│   └── fldr-nonproduction-bu2
│       ├── prj-n-bu2-kms
│       ├── prj-n-bu2-sample-floating
│       ├── prj-n-bu2-sample-base
│       ├── prj-n-bu2-sample-restrict
│       └── prj-n-bu2-sample-peering
├── fldr-production
│   ├── fldr-production-bu1
│   │   ├── prj-p-bu1-kms
│   │   ├── prj-p-bu1-sample-floating
│   │   ├── prj-p-bu1-sample-base
│   │   ├── prj-p-bu1-sample-restrict
│   │   └── prj-p-bu1-sample-peering
│   └── fldr-production-bu2
│       ├── prj-p-bu2-kms
│       ├── prj-p-bu2-sample-floating
│       ├── prj-p-bu2-sample-base
│       ├── prj-p-bu2-sample-restrict
│       └── prj-p-bu2-sample-peering
└── fldr-common
    ├── prj-c-bu1-infra-pipeline
    └── prj-c-bu2-infra-pipeline
The code in this step includes two options for creating projects. The first is the standard projects module which creates a project per environment, and the second creates a standalone project for one environment. If relevant for your use case, there are also two optional submodules which can be used to create a subnet per project, and a dedicated private DNS zone per project.
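A rough sketch of the per-environment pattern (the module path and input names are illustrative, not the repository's exact interface):

```hcl
# Illustrative sketch: the same project module is instantiated per environment,
# varying only the environment-specific inputs.
module "sample_base_project" {
  source = "../modules/single_project" # illustrative path

  org_id          = var.org_id
  billing_account = var.billing_account
  environment     = "development"
  business_code   = "bu1"
  # remaining inputs omitted for brevity
}
```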

Usage instructions are available for the projects step in the README.

5. app-infra

The purpose of this step is to deploy a simple Compute Engine instance in one of the business unit projects using the infra pipeline set up in 4-projects.

Usage instructions are available for the app-infra step in the README.

Final view

After all steps above have been executed, your Google Cloud organization should represent the structure shown below, with projects being the lowest nodes in the tree.

example-organization/
├── fldr-common
│   ├── prj-c-logging
│   ├── prj-c-billing-logs
│   ├── prj-c-scc
│   ├── prj-c-kms
│   ├── prj-c-secrets
│   ├── prj-c-bu1-infra-pipeline
│   └── prj-c-bu2-infra-pipeline
├── fldr-network
│   ├── prj-net-hub-base
│   ├── prj-net-hub-restricted
│   ├── prj-net-dns
│   ├── prj-net-interconnect
│   ├── prj-d-shared-base
│   ├── prj-d-shared-restricted
│   ├── prj-n-shared-base
│   ├── prj-n-shared-restricted
│   ├── prj-p-shared-base
│   └── prj-p-shared-restricted
├── fldr-development
│   ├── prj-d-monitoring
│   ├── prj-d-kms
│   ├── prj-d-secrets
│   ├── fldr-development-bu1
│   │   ├── prj-d-bu1-kms
│   │   ├── prj-d-bu1-sample-floating
│   │   ├── prj-d-bu1-sample-base
│   │   ├── prj-d-bu1-sample-restrict
│   │   └── prj-d-bu1-sample-peering
│   └── fldr-development-bu2
│       ├── prj-d-bu2-kms
│       ├── prj-d-bu2-sample-floating
│       ├── prj-d-bu2-sample-base
│       ├── prj-d-bu2-sample-restrict
│       └── prj-d-bu2-sample-peering
├── fldr-nonproduction
│   ├── prj-n-monitoring
│   ├── prj-n-kms
│   ├── prj-n-secrets
│   ├── fldr-nonproduction-bu1
│   │   ├── prj-n-bu1-kms
│   │   ├── prj-n-bu1-sample-floating
│   │   ├── prj-n-bu1-sample-base
│   │   ├── prj-n-bu1-sample-restrict
│   │   └── prj-n-bu1-sample-peering
│   └── fldr-nonproduction-bu2
│       ├── prj-n-bu2-kms
│       ├── prj-n-bu2-sample-floating
│       ├── prj-n-bu2-sample-base
│       ├── prj-n-bu2-sample-restrict
│       └── prj-n-bu2-sample-peering
├── fldr-production
│   ├── prj-p-monitoring
│   ├── prj-p-kms
│   ├── prj-p-secrets
│   ├── fldr-production-bu1
│   │   ├── prj-p-bu1-kms
│   │   ├── prj-p-bu1-sample-floating
│   │   ├── prj-p-bu1-sample-base
│   │   ├── prj-p-bu1-sample-restrict
│   │   └── prj-p-bu1-sample-peering
│   └── fldr-production-bu2
│       ├── prj-p-bu2-kms
│       ├── prj-p-bu2-sample-floating
│       ├── prj-p-bu2-sample-base
│       ├── prj-p-bu2-sample-restrict
│       └── prj-p-bu2-sample-peering
└── fldr-bootstrap
    ├── prj-b-cicd
    └── prj-b-seed

Branching strategy

There are three main named branches: development, nonproduction, and production, which reflect the corresponding environments. These branches should be protected. When the CI/CD pipeline (Jenkins or Cloud Build) runs on a particular named branch (for instance, development), only the corresponding environment (development) is applied. An exception is the shared environment, which is only applied when triggered on the production branch. This is because any changes in the shared environment may affect resources in other environments and can have adverse effects if not validated correctly.

Development happens on feature and bug fix branches (which can be named feature/new-foo, bugfix/fix-bar, etc.) and when complete, a pull request (PR) or merge request (MR) can be opened targeting the development branch. This will trigger the CI/CD Pipeline to perform a plan and validate against all environments (development, nonproduction, shared, and production). After the code review is complete and changes are validated, this branch can be merged into development. This will trigger a CI/CD Pipeline that applies the latest changes in the development branch on the development environment.

After changes are validated in development, they can be promoted to nonproduction by opening a PR or MR targeting the nonproduction branch and merging it. Similarly, changes can be promoted from nonproduction to production.

Policy validation

This repo uses the terraform-tools component of the gcloud CLI to validate the Terraform plans against a library of Google Cloud policies.

The Scorecard bundle was used to create the policy-library folder with one extra constraint added.

See the policy-library documentation if you need to add more constraints from the samples folder to your configuration, based on your type of workload.

Step 1-org has instructions on the creation of the shared repository to host these policies.

Optional Variables

Some variables used to deploy the steps have default values; check those before deployment to ensure they match your requirements. For more information, there are tables of inputs and outputs for the Terraform modules, each with a detailed description of their variables. Look for variables marked as not required in the Inputs section of these READMEs.

Errata summary

Refer to the errata summary for an overview of the delta between the example foundation repository and the Google Cloud security foundations guide.

Contributing

Refer to the contribution guidelines for information on contributing to this module.


terraform-example-foundation's Issues

Cloud Build for "2-environments" step (develop branch)

@bharathkkb

Suggestion for Cloud Build config - "2-environments" step (terraform-example-foundation develop branch)

  1. Create Cloud Build configuration files that run terraform init, terraform plan, and terraform apply for each of the environments (dev, prod, nonprod)

  2. Within Bootstrap, provision empty gcp-envs CSR (Cloud Source Repository) in Cloud Build Project with proper triggers, variables (see gcp-org repository for reference). This can be used for Cloud Build in "2-environments" step.

  3. Change README documentation to reflect changes

Create a Jenkins pipeline incorporating terraform-validator

Create Jenkins pipeline which does the following

  • terraform init
  • terraform plan -out terraform.tfplan && terraform show -json ./terraform.tfplan > ./terraform.tfplan.json
  • terraform-validator validate --policy-path=${POLICY_PATH} ./terraform.tfplan.json
  • terraform apply ./terraform.tfplan

We can start with this Jenkinsfile which already does init, plan and apply; modify as needed for working with GCE. Jenkins worker will be a GCE VM and we can assume terraform and terraform-validator binaries will be installed on this worker.

Validation:
A simple validation can be

  • Set up policies such that GCE VMs cannot be launched with external IPs, i.e., the terraform-validator stage catches this and fails.

4-projects - variable consistency

In 3-networks, we use access_context_manager_policy_id as the variable name for the ACM Policy, but in 4-projects it appears we're just using policy_id - should we update it to match the name used in 3-networks for consistency?

Change folder hierarchy to split environments

Currently, we use a "flat" hierarchy in each component where all resources for a component are contained in a single Terraform state. This increases the blast radius and makes it challenging to test changes in nonprod before productionizing.

I'd like to split this repo to have folders for each environment (under the components), like the structure described here:

├── foundations
│   ├── folders
│   │   └── shared
│   │       └── folders.tf
│   ├── logging
│   │   ├── nonprod
│   │   │   └── export.tf
│   │   └── prod
│   │       └── export.tf
│   ├── network
│   │   ├── nonprod
│   │   │   └── network.tf
│   │   └── prod
│   │       └── network.tf
│   ├── org-iam
│   │   └── shared
│   │       └── iam.tf
│   ├── org-policies
│   │   └── shared
│   │       └── org-policies.tf
│   ├── Jenkinsfile
│   ├── backend.tf
│   ├── common-providers.tf
│   ├── common-variables.tf
│   ├── main.tf
│   └── output.tf

1-org step Terraform Plan fails on Cloud Build due to missing API Enablements and Roles

On the 1-org Cloud Build phase, received issues on APIs and Missing Roles.

An example of the API errors:

Step #3: Error: Error reading KMSKeyRing "projects/$PROJECT_ID/locations/australia-southeast1/keyRings/tf-keyring": googleapi: Error 403: Cloud Key Management Service (KMS) API has not been used in project $PROJECT_NUMBER before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/cloudkms.googleapis.com/overview?project=$PROJECT_NUMBER then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.

An example of the Roles/Permission errors:

Step #3: Error: Error reading CloudBuildTrigger "projects/cft-cloudbuild-9091/triggers/4a44918f-10d7-48f4-b613-bb9549c05a77": googleapi: Error 403: Cloud Build API has not been used in project 565375199005 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/cloudbuild.googleapis.com/overview?project=565375199005 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.

I was able to resolve this by enabling APIs in the Seed Project:

  • cloudkms.googleapis.com
  • cloudbuild.googleapis.com

and adding Roles to the org-terraform service account on the Cloud Build Project:

  • Cloud Build Editor
  • Cloud KMS Admin
  • Source Repository Administrator
  • Storage Admin

Improve CICD

I have scripts to run any unit with Cloud Build, plus generation of the YAML configs, without needing all the nasty copy/pasting of subfolders. Creating this issue now; I will make a PR later.

2-environments - README updates

  • Need to add steps to copy tf-wrapper.sh
  • Step 5: May not need to edit cloudbuild files with new logic? Or describe how to modify regex to fit env names?
  • Step 9: Should we be pushing to master or prod?

Make project example folder structure and file naming conventions clearer

For the projects repo, it should be clear that the top level folder is a business unit or team. We should rename this to something like:

example_product or example_team etc

Following on from this, we should also use that in the naming convention for the tf files. For example

example_product_sample_project.tf

@ericyz - what do you think?

Clarify use of GCP APIs not in the default bootstrap set

Thanks so much for putting together these examples! I've been having a blast blowing away and rebuilding my personal GCP infra, but I hit a snag on the way to building a GKE cluster:

I naively tried to create a 4-gke directory, copied over and hacked the variables and providers, and added an example from terraform-google-modules/terraform-google-kubernetes-engine, but it seems like impersonating the Terraform service account won't work unless the additional GCP APIs (in this case container.googleapis.com) are enabled in the project associated with that service account. I get an error like:

Error: googleapi: Error 403: Kubernetes Engine API has not been used in project ${SEED_PROJECT_ID} before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/container.googleapis.com/overview?project=${SEED_PROJECT_ID} then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry., accessNotConfigured

Is the idea that the seed project should enable every API that is enabled in every other project, and the GSA should have additional roles? Or should there be a separate Terraform project with a full Cloud Build bootstrap and/or GSA for each different set of higher-level GCP APIs?

Concurrent CI runs may fail due to multiple projects matching filter

Since projects may be left over from failed runs or concurrent runs, they can match this filter (we only select the first matched project)

filter = "labels.application_name=restricted-shared-vpc-host-${local.env} lifecycleState=ACTIVE"

resulting in CI trying to put the project from the wrong build into the service perimeter and can result in errors.
Possible example: https://console.cloud.google.com/cloud-build/builds/c6f63be0-c353-4539-854c-0ba760af56d0?project=cloud-foundation-cicd
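For context, a hedged sketch of the lookup pattern that the filter above feeds (illustrative HCL, not the repository's exact code):

```hcl
# Illustrative sketch of the fragile lookup: the data source returns every
# ACTIVE project matching the label filter, and indexing element [0] can pick
# a leftover project from a failed or concurrent run.
# (local.env is assumed to be defined elsewhere.)
data "google_projects" "restricted_host" {
  filter = "labels.application_name=restricted-shared-vpc-host-${local.env} lifecycleState=ACTIVE"
}

locals {
  restricted_host_project_id = data.google_projects.restricted_host.projects[0].project_id
}
```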

Define and source common variables from one location.

Hi,

Can the terraform.tfvars in the different step folders source common values from the previous step? If the intention is to run these steps in sequence, shouldn't we be defining values in one place? Either way is better than duplicating the values. Best to keep things DRY.

Thanks,
Sal.

Shared VPC Build

Why? To align existing foundations example with the best practice whitepaper guidance.
Updates:

  • Per environment(x?)
    • Shared VPC
      • base
        • subnet region 1
        • subnet region 2
        • cloud router 1(x2)
        • cloud router 2(x2)
        • firewall rules (?)
        • cloud DNS
      • restricted
        • subnet region 1
        • subnet region 2
        • cloud router 1(x2)
        • cloud router 2(x2)
        • firewall rules (?)
        • cloud DNS
  • DNS Hub project(peered cloud DNS)
  • Interconnect Attachment(x?)

1-org step Terraform Plan fails Error 403: The caller does not have permission

Fixed previous permissions issues and now getting error 403.

cft-seed-29c2 master 1-org]$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

module.org_billing_logs.module.project-factory.module.gcloud_deprivilege.random_id.cache: Refreshing state... [id=fP4hHQ]
module.seed_bootstrap.random_id.suffix: Refreshing state... [id=AAU]
module.seed_bootstrap.module.seed_project.module.project-factory.random_id.random_project_id_suffix: Refreshing state... [id=KcI]
module.org_shared_vpc_nonprod.module.project-factory.module.gcloud_delete.random_id.cache: Refreshing state... [id=WUW7Fg]
module.org_shared_vpc_prod.module.project-factory.module.gcloud_delete.random_id.cache: Refreshing state... [id=oN_jhQ]
module.org_billing_logs.module.project-factory.module.gcloud_disable.random_id.cache: Refreshing state... [id=PipnnQ]
module.cloudbuild_bootstrap.module.cloudbuild_project.module.project-factory.random_id.random_project_id_suffix: Refreshing state... [id=U9M]
module.org_shared_vpc_nonprod.module.project-factory.module.gcloud_deprivilege.random_id.cache: Refreshing state... [id=fufKTw]
module.org_audit_logs.module.project-factory.module.gcloud_delete.random_id.cache: Refreshing state... [id=s8ntgg]
module.org_shared_vpc_nonprod.module.project-factory.random_id.random_project_id_suffix: Refreshing state... [id=pUI]
module.org_monitoring_nonprod.module.project-factory.random_id.random_project_id_suffix: Refreshing state... [id=3Xs]
module.org_shared_vpc_nonprod.module.project-factory.module.gcloud_disable.random_id.cache: Refreshing state... [id=-NxuzA]
module.org_billing_logs.module.project-factory.module.gcloud_delete.random_id.cache: Refreshing state... [id=HvN25g]
module.org_monitoring_prod.module.project-factory.module.gcloud_delete.random_id.cache: Refreshing state... [id=7BJwOA]
module.org_monitoring_nonprod.module.project-factory.module.gcloud_deprivilege.random_id.cache: Refreshing state... [id=m6sk3A]
module.org_monitoring_prod.module.project-factory.random_id.random_project_id_suffix: Refreshing state... [id=6pg]
module.org_monitoring_prod.module.project-factory.module.gcloud_deprivilege.random_id.cache: Refreshing state... [id=XEn31w]
module.org_audit_logs.module.project-factory.random_id.random_project_id_suffix: Refreshing state... [id=nLc]
module.org_monitoring_prod.module.project-factory.module.gcloud_disable.random_id.cache: Refreshing state... [id=eqG6Kg]
module.org_audit_logs.module.project-factory.module.gcloud_disable.random_id.cache: Refreshing state... [id=jgWAgQ]
module.org_monitoring_nonprod.module.project-factory.module.gcloud_disable.random_id.cache: Refreshing state... [id=lcZ3VA]
module.org_shared_vpc_prod.module.project-factory.random_id.random_project_id_suffix: Refreshing state... [id=XSM]
module.org_monitoring_nonprod.module.project-factory.module.gcloud_delete.random_id.cache: Refreshing state... [id=aOLbxA]
module.org_billing_logs.module.project-factory.random_id.random_project_id_suffix: Refreshing state... [id=BM8]
module.org_shared_vpc_prod.module.project-factory.module.gcloud_deprivilege.random_id.cache: Refreshing state... [id=7mRcUQ]
module.org_audit_logs.module.project-factory.module.gcloud_deprivilege.random_id.cache: Refreshing state... [id=tIIVKA]
module.cloudbuild_bootstrap.module.cloudbuild_project.module.project-factory.null_resource.preconditions: Refreshing state... [id=5082551985136791431]
module.seed_bootstrap.module.seed_project.module.project-factory.null_resource.preconditions: Refreshing state... [id=8989348742835411969]
module.org_shared_vpc_prod.module.project-factory.module.gcloud_disable.random_id.cache: Refreshing state... [id=OAKzEQ]
data.google_service_account_access_token.default: Refreshing state...

Error: googleapi: Error 403: The caller does not have permission, forbidden

  on providers.tf line 30, in data "google_service_account_access_token" "default":
  30: data "google_service_account_access_token" "default" {

Current permissions associated with my terraform service account:

Billing Account User
Cloud Build Editor
Cloud KMS Admin
Compute Network Admin
Compute Shared VPC Admin
Security Admin
Service Account Admin
Logs Configuration Writer
Organization Policy Administrator
Folder Admin
Organization Viewer
Project Creator
Source Repository Administrator
Storage Admin
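
If the token is minted via service account impersonation, one possible fix (a sketch, assuming you run Terraform as your own user; the SA email and user are placeholders) is to grant your caller the Service Account Token Creator role on the Terraform service account:

```shell
# Placeholder values; substitute your own seed SA email and user account.
SA_EMAIL="terraform@prj-b-seed-0000.iam.gserviceaccount.com"
gcloud iam service-accounts add-iam-policy-binding "$SA_EMAIL" \
  --member="user:you@example.com" \
  --role="roles/iam.serviceAccountTokenCreator"
```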

The folder operation violates display name uniqueness within the parent.

Hi,

I'm running terraform apply in the 1-org dir. I can see that folders are created, but I'm getting a uniqueness error. Under my org I have successfully provisioned the common, /logs, /monitoring, and /networking folders, and they appear to be populated correctly. Is idempotency breaking here? If those folders already exist, shouldn't the apply run without errors? Possibly related to:

hashicorp/terraform-provider-google#1903

Plan: 79 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

google_folder.common: Creating...

Error: Error creating folder 'common' in 'organizations/<REDACTED>': Error waiting for creating folder: Error code 9, message: The folder operation violates display name uniqueness within the parent.

  on folders.tf line 21, in resource "google_folder" "common":
  21: resource "google_folder" "common" {
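
One way to restore idempotency when the folder already exists (a sketch; the org and folder IDs below are placeholders you would look up first) is to import the existing folder into state before re-running apply:

```shell
# Find the existing folder's numeric ID under your org (placeholder org ID).
gcloud resource-manager folders list --organization="123456789012"
# Import it so Terraform manages the existing folder instead of creating one.
terraform import google_folder.common folders/000000000000
```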

2-environments - Project Naming / Labels

  • Need to update env-secrets and monitoring project names
  • Need to remove environment from application_name label values (redundant with environment label value)


0-bootstrap - rename build triggers?

The build triggers created in 0-bootstrap are still named --terraform-apply-on-push-to-master and --terraform-plan-on-all-branches-except-master, which are no longer indicative of their function (apply on push to dev|nonprod|prod, plan on push to anything else).
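
A renamed trigger might look roughly like this (a sketch; the trigger name, repo name, filename, and branch regex are all assumptions, not the repo's actual values):

```hcl
resource "google_cloudbuild_trigger" "main_apply" {
  # Hypothetical name reflecting what the trigger actually does.
  name = "tf-apply-on-push-to-named-branch"

  trigger_template {
    branch_name = "^(dev|nonprod|prod)$" # apply only on these branches
    repo_name   = "gcp-org"              # placeholder repo name
  }

  filename = "cloudbuild-tf-apply.yaml"
}
```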


Storage bucket does not exist.

Hi,

I'm following the instructions for 0-bootstrap, and executing terraform init reports the following error.

Initializing the backend...

Successfully configured the backend "gcs"! Terraform will automatically
use this backend unless the backend configuration changes.

Error: Failed to get existing workspaces: querying Cloud Storage failed: storage: bucket doesn't exist

I tried removing the .terraform dir and running terraform init -reconfigure, but the error persists.

I'm using Terraform v0.12.24.
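
One thing worth checking (a sketch; the bucket name is a placeholder) is that the bucket named in your backend "gcs" block actually exists and is visible to the credentials you are using:

```shell
# Confirm the state bucket exists and you can see it; the name should
# match the bucket configured in your backend "gcs" block.
gsutil ls -b gs://YOUR-TF-STATE-BUCKET
```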

Thanks.

1-org Terraform plan fails due to missing entity.

I managed to resolve many of the service account issues, but now I'm seeing this.

1-org]$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.google_service_account_access_token.default: Refreshing state...

Error: googleapi: Error 404: Requested entity was not found., notFound

  on providers.tf line 30, in data "google_service_account_access_token" "default":
  30: data "google_service_account_access_token" "default" {
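
A 404 here often means the service account email in the provider configuration is wrong, or the account was deleted. A quick check (the email below is a placeholder):

```shell
# Verify the impersonated service account actually exists.
gcloud iam service-accounts describe \
  terraform@prj-b-seed-0000.iam.gserviceaccount.com
```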

3-networks create common terraform.tfvars

Add a terraform.tfvars symlinked to 3-networks/terraform.example.tfvars in each 3-networks/envs/ directory, so that all the needed variables live in one place and are easier to use and update.
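
A minimal sketch of the proposed layout, assuming the standard environment directory names (adjust if yours differ):

```shell
# Link a shared example tfvars into each env directory; mkdir -p keeps
# this runnable even if an env directory has not been created yet.
for env in development non-production production; do
  mkdir -p "3-networks/envs/${env}"
  ln -sf ../../terraform.example.tfvars "3-networks/envs/${env}/terraform.tfvars"
done
```

With this in place, editing the single 3-networks/terraform.example.tfvars file updates the variables for every environment at once.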

UX for new GCP organizations - Project Quota

The default maximum number of projects with a billing account is 6, which is not high enough to follow the examples. It might be good to at least call that out, and maybe have a clear path to working around the limit, especially if some projects don't need a billing account and can be configured not to use one.

This seems to be the same root cause as the issue @morgante filed in hashicorp/terraform-provider-google#1892 -- but that issue is closed, and the errors are still unclear. Are the examples pinned to a version of the provider that doesn't have the fix?

The error case is particularly bad UX because it manifests as inscrutable 400 errors on first apply, and then causes 409 errors afterward unless you either wipe the state completely or clear out the random suffix for the project id.
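
If the fix did land in a newer provider release, one workaround (a sketch; the version floor is hypothetical, so check the provider changelog for the release that actually contains the fix) is to raise the pinned google provider version:

```hcl
provider "google" {
  # Hypothetical floor; pick the release that contains the fix.
  version = ">= 3.30"
}
```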

Error 403: The caller does not have permission, forbidden

Hi,

Following the instructions in the 1-org directory, after making the required changes I get the following error. I'm using the service account created for Terraform in the previous stage.

]$ terraform plan
Acquiring state lock. This may take a few moments...
data.google_service_account_access_token.default: Refreshing state...

Error: googleapi: Error 403: The caller does not have permission, forbidden

  on providers.tf line 30, in data "google_service_account_access_token" "default":
  30: data "google_service_account_access_token" "default" {


Releasing state lock. This may take a few moments...

3-networks - modification to filter for project/folder lookup

I had to update the project filter for the data resources to use element in order for it to run successfully. I'm not sure whether this is an issue isolated to my environment; has anyone else run into this?

New filter: filter = "parent.id:${element(split("/", data.google_active_folder.env.name),1)} labels.application_name=restricted-shared-vpc-host-${local.env} lifecycleState=ACTIVE"
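
In context, the amended data source would look roughly like this (a sketch built around the filter above; the surrounding resource and local names are assumptions based on the repo's conventions):

```hcl
data "google_projects" "restricted_host_project" {
  # element(split(...), 1) extracts the numeric folder ID from the
  # folder resource name, e.g. "folders/123456789" -> "123456789".
  filter = "parent.id:${element(split("/", data.google_active_folder.env.name), 1)} labels.application_name=restricted-shared-vpc-host-${local.env} lifecycleState=ACTIVE"
}
```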

3-networks - DNS Hub - name consistency

(develop)
In 3-networks/env/shared:

  • Rename dns_default_region1 to default_region1
  • Rename dns_default_region2 to default_region2

To keep variables consistent and allow creation of a common terraform.tfvars file for 3-networks/envs.

Add a README with instructions on how to configure terraform-validator

(develop branch)
Add a separate README file with instructions on how to configure terraform-validator in the Cloud Build/Jenkins context.

  • The service account running the build (the Cloud Build service account, if using Cloud Build) needs at least these roles on the parent folder:
    • Browser
    • Security Reviewer
  • Add a link to instructions on how to install the policies repo
  • Explain how to configure the _POLICY_REPO variable in the Cloud Build trigger
    • The value should be /workspace/RELATIVE-PATH-TO-POLICY-FOLDER-INSIDE-REPO
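
For illustration, the substitution could be declared on the trigger roughly like this (a sketch; the policy-library path is a placeholder for wherever the policies repo is checked out inside /workspace):

```yaml
# Hypothetical Cloud Build substitution; the path is relative to /workspace.
substitutions:
  _POLICY_REPO: /workspace/policy-library
```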

Filter data.google_projects further by parent.id to prevent concurrent-run errors

When we use data.google_projects (for example, data "google_projects" "restricted_host_project"), we should filter by filter = "labels.application_name=restricted-shared-vpc-host-${local.env} lifecycleState=ACTIVE parent.id=<ENV_FOLDER_ID>". Otherwise multiple projects can be matched during concurrent CI runs, causing a race condition.

Access levels are not unique and can cause integration tests to fail

@bharathkkb - it looks like access levels are not unique and can cause integration tests to fail when you have two concurrent test runs

Error: Error creating AccessLevel: googleapi: Error 409: Level 'accessPolicies/546966232772/accessLevels/alp_d_shared_restricted_members' already exists and cannot be created.

  on .terraform/modules/dev.restricted_shared_vpc.access_level_members/terraform-google-vpc-service-controls-2.0.0/modules/access_level/main.tf line 21, in resource "google_access_context_manager_access_level" "access_level":
  21: resource "google_access_context_manager_access_level" "access_level" {

Error: Error creating AccessLevel: googleapi: Error 409: Level 'accessPolicies/546966232772/accessLevels/alp_n_shared_restricted_members' already exists and cannot be created.

  on .terraform/modules/nonprod.restricted_shared_vpc.access_level_members/terraform-google-vpc-service-controls-2.0.0/modules/access_level/main.tf line 21, in resource "google_access_context_manager_access_level" "access_level":
  21: resource "google_access_context_manager_access_level" "access_level" {

Error: Error creating AccessLevel: googleapi: Error 409: Level 'accessPolicies/546966232772/accessLevels/alp_p_shared_restricted_members' already exists and cannot be created.

  on .terraform/modules/prod.restricted_shared_vpc.access_level_members/terraform-google-vpc-service-controls-2.0.0/modules/access_level/main.tf line 21, in resource "google_access_context_manager_access_level" "access_level":
  21: resource "google_access_context_manager_access_level" "access_level" {
https://console.cloud.google.com/cloud-build/builds/c6cb37df-9561-4ed8-ab70-8b0f0764f41e?project=2843445864
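
One way to avoid the collision (a sketch, not the module's actual interface; the name pattern and local are assumptions) is to suffix each access level name with a per-run random ID so concurrent runs create distinct levels:

```hcl
# Random per-run suffix so concurrent test runs create distinct access levels.
resource "random_id" "access_level_suffix" {
  byte_length = 2
}

locals {
  # Hypothetical unique name, e.g. alp_d_shared_restricted_members_a1b2,
  # to be passed into the access level module in place of the fixed name.
  access_level_name = "alp_d_shared_restricted_members_${random_id.access_level_suffix.hex}"
}
```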

3-networks - errors implementing dns policy, managed zone

Troubleshooting the following errors

Error: Error creating Policy: googleapi: Error 403: Forbidden, forbidden                          

  on main.tf line 78, in resource "google_dns_policy" "default_policy":
  78: resource "google_dns_policy" "default_policy" {



Error: Error creating ManagedZone: googleapi: Error 403: Forbidden, forbidden                     

  on .terraform/modules/dns-forwarding-zone/terraform-google-cloud-dns-3.0.2/main.tf line 46, in resource "google_dns_managed_zone" "forwarding":
  46: resource "google_dns_managed_zone" "forwarding" {
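
A 403 on both resources usually points at a disabled Cloud DNS API or missing DNS permissions on the hub project; a sketch of the checks (the project ID and SA email are placeholders):

```shell
# Make sure the Cloud DNS API is enabled in the DNS hub project.
gcloud services enable dns.googleapis.com --project=prj-c-dns-hub-0000

# Grant the Terraform service account DNS admin on that project.
gcloud projects add-iam-policy-binding prj-c-dns-hub-0000 \
  --member="serviceAccount:terraform@prj-b-seed-0000.iam.gserviceaccount.com" \
  --role="roles/dns.admin"
```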

0-bootstrap terraform destroy fails on Mac

I get the following error while performing terraform destroy on 0-bootstrap. The cache folder with a random id is not being created (instead there are folders named 'darwin' and 'linux'), causing the provisioner to fail.

module.cloudbuild_bootstrap.module.cloudbuild_project.module.project-factory.module.gcloud_disable.null_resource.upgrade_destroy[0]: Destruction complete after 31s
module.cloudbuild_bootstrap.module.cloudbuild_project.module.project-factory.module.gcloud_disable.random_id.cache: Destroying... [id=qTXcPw]
module.cloudbuild_bootstrap.module.cloudbuild_project.module.project-factory.module.gcloud_disable.random_id.cache: Destruction complete after 0s

Error: Error running command 'tar -xzf .terraform/modules/seed_bootstrap.seed_project.project-factory.gcloud_disable/terraform-google-gcloud-0.5.1/cache/18d111ef/google-cloud-sdk.tar.gz -C .terraform/modules/seed_bootstrap.seed_project.project-factory.gcloud_disable/terraform-google-gcloud-0.5.1/cache/18d111ef && cp .terraform/modules/seed_bootstrap.seed_project.project-factory.gcloud_disable/terraform-google-gcloud-0.5.1/cache/18d111ef/jq .terraform/modules/seed_bootstrap.seed_project.project-factory.gcloud_disable/terraform-google-gcloud-0.5.1/cache/18d111ef/google-cloud-sdk/bin/': exit status 1. Output: tar: Error opening archive: Failed to open '.terraform/modules/seed_bootstrap.seed_project.project-factory.gcloud_disable/terraform-google-gcloud-0.5.1/cache/18d111ef/google-cloud-sdk.tar.gz'

It might be linked to the following issue:
terraform-google-modules/terraform-google-gcloud#26

1-org - Should we move README out of envs/shared?

(this is for the develop branch)

The previous structure of this repo had the README files in the root area of each sub-repo; should we move README back to 1-org/ or leave it in 1-org/envs/shared?
