
claranet-tfwrapper


tfwrapper is a Python wrapper for OpenTofu and legacy Terraform that aims to simplify their usage and enforce best practices.

Note: the term Terraform is used in this documentation when talking about generic concepts like providers, modules, stacks and the HCL based domain specific language.


Features

  • OpenTofu and Terraform behaviour overriding
  • State centralization enforcement
  • Standardized file structure
  • Stack initialization from templates
  • AWS credentials caching
  • Azure credentials loading (either Service Principal or User)
  • GCP and GKE user ADC support
  • Plugins caching
  • Tab completion

Setup Dependencies

  • python3 >= 3.8.1, < 4.0
  • python3-pip
  • python3-venv
  • pipx (recommended)

Runtime Dependencies

  • terraform >= 0.10 (>= 0.15 for fully working Azure backend with isolation due to hashicorp/terraform#25416)
  • azure-cli when using context based Azure authentication

Recommended setup

  • OpenTofu 1.6+ (recommended) or Terraform 1.0+ (warning: Terraform versions 1.6 and above are not open source and may raise licensing issues depending on the context in which you use them).
  • An AWS S3 bucket and DynamoDB table for state centralization in AWS.
  • An Azure Blob Storage container for state centralization in Azure.

Installation

tfwrapper should be installed using pipx (recommended) or pip:

pipx install claranet-tfwrapper
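
Alternatively, with plain pip (a user-level install; pipx remains the recommended method because it isolates the wrapper's dependencies):

pip install --user claranet-tfwrapper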

Setup command-line completion

Add the following to your shell's interactive configuration file, e.g. .bashrc for bash:

eval "$(register-python-argcomplete tfwrapper -e tfwrapper)"

You can then press the completion key (usually Tab ↹) twice to get your partially typed tfwrapper commands completed.

Note: the -e tfwrapper parameter adds a suffix to the defined _python_argcomplete function to avoid clashes with other packages (see kislyuk/argcomplete#310 (comment) for context).

Upgrade from tfwrapper v7 or older

If you used versions of the wrapper older than v8, there is not much to do when upgrading to v8 except a little cleanup: the wrapper is no longer installed as a git submodule of your project, as was previously instructed, and there is no longer any Makefile to activate it.

Just clean up each project by destroying the .wrapper submodule:

git rm -f Makefile
git submodule deinit .wrapper
rm -rf .git/modules/.wrapper
git rm -f .wrapper

Then check the staged changes and commit them.

Required files

tfwrapper expects multiple files and directories at the root of a project.

conf

Stacks configurations are stored in the conf directory.

templates

The templates directory is used to store the state backend configuration template and the Terraform stack templates used to initialize new stacks. Using a git submodule is recommended.
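
For example, shared templates can be tracked as a submodule (the repository URL below is a placeholder):

git submodule add https://example.com/my-org/terraform-templates.git templates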

The following files are required:

  • templates/{provider}/common/state.tf.jinja2: AWS S3 or Azure Blob Storage state backend configuration template.
  • templates/{provider}/basic/main.tf: the default Terraform configuration for new stacks. The whole templates/{provider}/basic directory is copied on stack initialization.

For example with AWS:

mkdir -p templates/aws/common templates/aws/basic

# create state configuration template with AWS backend
cat << 'EOF' > templates/aws/common/state.tf.jinja2
{% if region is not none %}
{% set region = '/' + region + '/' %}
{% else %}
{% set region = '/' %}
{% endif %}

terraform {
  backend "s3" {
    bucket = "my-centralized-terraform-states-bucket"
    key    = "{{ client_name }}/{{ account }}/{{ environment }}{{ region }}{{ stack }}/terraform.state"
    region = "eu-west-1"

    dynamodb_table = "my-terraform-states-lock-table"
  }
}

resource "null_resource" "state-test" {}
EOF

# create a default stack template with support for AWS assume role
cat << 'EOF' > templates/aws/basic/main.tf
provider "aws" {
  region     = var.aws_region
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  token      = var.aws_token
}
EOF

For example with Azure:

mkdir -p templates/azure/common templates/azure/basic

# create state configuration template with Azure backend
cat << 'EOF' > templates/azure/common/state.tf.jinja2
{% if region is not none %}
{% set region = '/' + region + '/' %}
{% else %}
{% set region = '/' %}
{% endif %}

terraform {
  backend "azurerm" {
    subscription_id      = "00000000-0000-0000-0000-000000000000"
    resource_group_name  = "my-resource-group"
    storage_account_name = "my-centralized-terraform-states-account"
    container_name       = "terraform-states"

    key = "{{ client_name }}/{{ account }}/{{ environment }}{{ region }}{{ stack }}/terraform.state"
  }
}
EOF

# create a default stack template with support for Azure credentials
cat << 'EOF' > templates/azure/basic/main.tf
provider "azurerm" {
  subscription_id = var.azure_subscription_id
  tenant_id       = var.azure_tenant_id
}
EOF

.run

The .run directory is used for credentials caching and plan storage.

mkdir .run
cat << 'EOF' > .run/.gitignore
*
!.gitignore
EOF

.gitignore

Adding the following .gitignore at the root of your project is recommended:

cat << 'EOF' > .gitignore
.terraform
terraform.tfstate
terraform.tfstate.backup
terraform.tfvars
EOF

Configuration

tfwrapper uses yaml files stored in the conf directory of the project.

tfwrapper configuration

tfwrapper uses some default behaviors that can be overridden or modified via a config.yml file in the conf directory.

---
always_trigger_init: False # Always trigger `terraform init` first when launching `plan` or `apply` commands
pipe_plan_command: "cat" # Default command used when you're invoking tfwrapper with `--pipe-plan`
use_local_azure_session_directory: False # Set to False to use the current user's Azure configuration in `~/.azure`. By default (True), the wrapper uses an isolated `azure-cli` session and configuration in the local `.run` directory.

Stacks configurations

Stacks configuration files use the following naming convention:

conf/${account}_${environment}_${region}_${stack}.yml
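
For example, a default stack for a hypothetical aws-account-1 account in the production environment and the eu-west-1 region would be configured in:

conf/aws-account-1_production_eu-west-1_default.yml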

Here is an example for an AWS stack configuration:

---
state_configuration_name: "aws" # use "aws" backend state configuration
aws:
  general:
    account: &aws_account "xxxxxxxxxxx" # aws account for this stack
    region: &aws_region eu-west-1 # aws region for this stack
  credentials:
    profile: my-aws-profile # should be configured in .aws/config

terraform:
  legacy: false # Use legacy Terraform instead of OpenTofu, defaults to false. Only taken into account if version >= 1.6.0.
  version: "1.0" # OpenTofu or Terraform version that tfwrapper will use for this stack. Automatically downloaded if it is not available locally. Defaults to 1.0
  vars: # variables passed to terraform
    aws_account: *aws_account
    aws_region: *aws_region
    client_name: my-client-name # arbitrary client name

Here is an example for a stack on Azure configuration using user mode and AWS S3 backend for state storage:

---
state_configuration_name: "aws-demo" # use "aws" backend state configuration
azure:
  general:
    mode: user # Uses personal credentials with MFA
    directory_id: &directory_id "00000000-0000-0000-0000-000000000000" # Azure Tenant/Directory UID
    subscription_id: &subscription_id "11111111-1111-1111-1111-111111111111" # Azure Subscription UID

terraform:
  legacy: false # Use legacy Terraform instead of OpenTofu, defaults to false. Only taken into account if version >= 1.6.0.
  version: "1.0" # OpenTofu or Terraform version that tfwrapper will use for this stack. Automatically downloaded if it is not available locally. Defaults to 1.0
  vars:
    subscription_id: *subscription_id
    directory_id: *directory_id
    client_name: client-name # Replace it with the name of your client

User mode relies on your personal account, which may be linked to a Microsoft account. That account must have access to the Azure subscription for Terraform to work.
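
User mode requires an active azure-cli session; a sketch, assuming the default session isolation in the local .run directory:

AZURE_CONFIG_DIR=.run/azure az login
AZURE_CONFIG_DIR=.run/azure az account set --subscription "11111111-1111-1111-1111-111111111111"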

Here is an example for a stack on Azure configuration using Service Principal mode:

---
azure:
  general:
    mode: service_principal # Uses an Azure tenant Service Principal account
    directory_id: &directory_id "00000000-0000-0000-0000-000000000000" # Azure Tenant/Directory UID
    subscription_id: &subscription_id "11111111-1111-1111-1111-111111111111" # Azure Subscription UID

    credentials:
      profile: customer-profile # To stay coherent, create an AzureRM profile with the same name as the account alias. Please check out the `azurerm_config.yml.sample` file for the configuration structure.

terraform:
  legacy: false # Use legacy Terraform instead of OpenTofu, defaults to false. Only taken into account if version >= 1.6.0.
  version: "1.0" # OpenTofu or Terraform version that tfwrapper will use for this stack. Automatically downloaded if it is not available locally. Defaults to 1.0
  vars:
    subscription_id: *subscription_id
    directory_id: *directory_id
    client_name: client-name # Replace it with the name of your client

The wrapper uses the Service Principal's credentials to connect to the Azure subscription. The given Service Principal must have access to the subscription. The wrapper loads the client_id, client_secret and tenant_id properties from the profile defined in your ~/.azurerm/config.yml file.

~/.azurerm/config.yml file structure example:

---
claranet-sandbox:
  client_id: aaaaaaaa-bbbb-cccc-dddd-zzzzzzzzzzzz
  client_secret: AAbbbCCCzzz==
  tenant_id: 00000000-0000-0000-0000-000000000000

customer-profile:
  client_id: aaaaaaaa-bbbb-cccc-dddd-zzzzzzzzzzzz
  client_secret: AAbbbCCCzzz==
  tenant_id: 00000000-0000-0000-0000-000000000000

Here is an example for a GCP/GKE stack with user ADC and multiple GKE instances:

---
gcp:
  general:
    mode: adc-user
    project: &gcp_project project-name
  gke:
    - name: kubernetes-1
      zone: europe-west1-c
    - name: kubernetes-2
      region: europe-west1

terraform:
  legacy: false # Use legacy Terraform instead of OpenTofu, defaults to false. Only taken into account if version >= 1.6.0.
  version: "1.0" # OpenTofu or Terraform version that tfwrapper will use for this stack. Automatically downloaded if it is not available locally. Defaults to 1.0
  vars:
    gcp_region: europe-west1
    gcp_zone: europe-west1-c
    gcp_project: *gcp_project
    client_name: client-name

You can declare multiple provider configurations; the context is set up accordingly.

⚠ This feature is currently only supported for Azure stacks and only works with Azure authentication isolation.

---
azure:
  general:
    mode: service_principal # Uses an Azure tenant Service Principal account
    directory_id: &directory_id "00000000-0000-0000-0000-000000000000" # Azure Tenant/Directory UID
    subscription_id: &subscription_id "11111111-1111-1111-1111-111111111111" # Azure Subscription UID

    credentials:
      profile: customer-profile # To stay coherent, create an AzureRM profile with the same name as the account alias. Please check out the `azurerm_config.yml.sample` file for the configuration structure.

  alternative:
    mode: service_principal # Uses an Azure tenant Service Principal account
    directory_id: "00000000-0000-0000-0000-000000000000" # Azure Tenant/Directory UID
    subscription_id: "22222222-2222-2222-2222-222222222222" # Azure Subscription UID

    credentials:
      profile: claranet-sandbox # To stay coherent, create an AzureRM profile with the same name as the account alias. Please check out the `azurerm_config.yml.sample` file for the configuration structure.

terraform:
  version: "1.0" # OpenTofu version that tfwrapper will use for this stack. Automatically downloaded if it's not available locally. Defaults to 1.0
  legacy: false # Use legacy version of the tool instead of OpenTofu, defaults to false. Only used if version >= 1.6.0.
  vars:
    subscription_id: *subscription_id
    directory_id: *directory_id
    client_name: client-name # Replace it with the name of your client

This configuration is useful when having various service principals with a dedicated rights scope for each.

The wrapper will generate the following Terraform variables that can be used in your stack:

  • <config_name>_azure_subscription_id with the Azure subscription ID. From the example, the variable is: alternative_azure_subscription_id = "22222222-2222-2222-2222-222222222222"
  • <config_name>_azure_tenant_id with the Azure tenant ID. From the example, the variable is: alternative_azure_tenant_id = "00000000-0000-0000-0000-000000000000"
  • <config_name>_azure_client_id with the Service Principal's client ID. From the example, the variable is: alternative_azure_client_id = "aaaaaaaa-bbbb-cccc-dddd-zzzzzzzzzzzz"
  • <config_name>_azure_client_secret with the Service Principal's client secret. From the example, the variable is: alternative_azure_client_secret = "AAbbbCCCzzz=="

Also, an isolation context is set up in the local .run/azure_<config_name> directory for each configuration.
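
A minimal sketch of how a stack could consume these generated variables through a provider alias (the variable declarations and the alias name are illustrative, not generated by the wrapper):

cat << 'EOF' > providers.tf
# Illustrative sketch: variable names follow the <config_name>_azure_* pattern
# described above; adjust them to what your wrapper version actually generates.
variable "alternative_azure_subscription_id" {}
variable "alternative_azure_tenant_id" {}
variable "alternative_azure_client_id" {}
variable "alternative_azure_client_secret" {}

provider "azurerm" {
  alias           = "alternative"
  subscription_id = var.alternative_azure_subscription_id
  tenant_id       = var.alternative_azure_tenant_id
  client_id       = var.alternative_azure_client_id
  client_secret   = var.alternative_azure_client_secret
  features {}
}
EOF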

States centralization configuration

The conf/state.yml configuration file defines the configuration used to connect to state backends.

These backends can be of AWS S3 and/or AzureRM types.

The resources for these backends are not created by tfwrapper, and thus must exist beforehand:

  • AWS: an S3 bucket (and, optionally but highly recommended, a DynamoDB table for locking). Enabling versioning on the S3 bucket is also recommended.
  • Azure: a Blob storage account.
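
For example, the AWS backend resources could be created beforehand with the AWS CLI (a sketch; the bucket and table names are placeholders that must match your conf/state.yml and state template):

# S3 bucket for states, with versioning enabled
aws s3api create-bucket --bucket my-centralized-terraform-states-bucket \
  --region eu-west-1 --create-bucket-configuration LocationConstraint=eu-west-1
aws s3api put-bucket-versioning --bucket my-centralized-terraform-states-bucket \
  --versioning-configuration Status=Enabled
# DynamoDB lock table: the S3 backend expects a string hash key named LockID
aws dynamodb create-table --table-name my-terraform-states-lock-table \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST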

You can use other backends (e.g. Google GCS or HashiCorp Consul) that are not specifically supported by the wrapper if you manage authentication yourself and either omit the conf/state.yml file or leave it empty:

---

Example configuration with both AWS and Azure backends defined:

---
aws:
  - name: "aws-demo"
    general:
      account: "xxxxxxxxxxx"
      region: eu-west-1
    credentials:
      profile: my-state-aws-profile # should be configured in .aws/config
azure:
  # This backend uses storage keys for authentication
  - name: "azure-backend"
    general:
      subscription_id: "xxxxxxx" # the Azure account to use for state storage
      resource_group_name: "tfstates-xxxxx-rg" # The Azure resource group with state storage
      storage_account_name: "tfstatesxxxxx"
  - name: "azure-alternative"
    general:
      subscription_id: "xxxxxxx" # the Azure account to use for state storage
      resource_group_name: "tfstates-xxxxx-rg" # The Azure resource group with state storage
      storage_account_name: "tfstatesxxxxx"
  # This backend uses Azure AD authentication
  - name: "azure-ad-auth"
    general:
      subscription_id: "xxxxxxx" # the Azure account to use for state storage
      resource_group_name: "tfstates-xxxxx-rg" # The Azure resource group with state storage
      storage_account_name: "tfstatesxxxxx"
      azuread_auth: true

backend_parameters: # Parameters or options which can be used by the `state.tf.jinja2` template file
  state_snapshot: "false" # Example of an Azure storage backend option

Note: the first backend will be the default one for stacks that do not define state_configuration_name.

How to migrate from one backend to another for state centralization

If for example you have both an AWS and Azure state backend configured in your conf/state.yml file, you can migrate your stack state from one backend to another.

Here is a quick howto:

  1. Make sure your stack is clean:

$ cd account/path/env/your_stack
$ tfwrapper init
$ tfwrapper plan
# should return no changes

  2. Change your backend in the stack configuration yaml file:

---
#state_configuration_name: 'aws-demo' # previous backend
state_configuration_name: "azure-alternative" # new backend to use

  3. Back in your stack directory, perform the change:

$ cd account/path/env/your_stack
$ rm -v state.tf # remove the old state backend configuration
$ tfwrapper bootstrap # regenerate the state backend configuration from the stack yaml config file
$ tfwrapper init # Terraform will detect the new backend and offer to migrate the state
$ tfwrapper plan
# should return the same changes diff as before

Stacks file structure

Terraform stacks are organized based on their:

  • account: an account alias which may refer to provider accounts or subscriptions, e.g. project-a-prod, customer-b-dev.
  • environment: production, preproduction, dev, etc., with global as a special case that eliminates the region part.
  • region: eu-west-1, westeurope, etc.
  • stack: defaults to default; other examples: web, admin, tools, etc.

The following file structure is then enforced:

# project root
├── account
│   └── environment
│       └── region
│           └── stack
└── account
    └── _global
        └── stack

A real-life example:

# project root
├── aws-account-1
│   ├── _global
│   │   └── default
│   │       └── main.tf
│   └── production
│       ├── eu-central-1
│       │   └── web
│       │       └── main.tf
│       └── eu-west-1
│           ├── default
│           │   └── main.tf
│           └── tools
│               └── main.tf
└── aws-account-2
    └── backup
        └── eu-west-1
            └── backup
                └── main.tf

Usage

Stack bootstrap

After creating a conf/${account}_${environment}_${region}_${stack}.yml stack configuration file you can bootstrap it.

# you can bootstrap using the templates/{provider}/basic stack
tfwrapper -a ${account} -e ${environment} -r ${region} -s ${stack} bootstrap

# or another stack template, for example: templates/aws/foobar
tfwrapper -a ${account} -e ${environment} -r ${region} -s ${stack} bootstrap aws/foobar

# or from an existing stack, for example: customer/env/region/stack
tfwrapper -a ${account} -e ${environment} -r ${region} -s ${stack} bootstrap mycustomer/dev/eu-west/run

In the special case of a global stack, the configuration file should instead be named conf/${account}_global_${stack}.yml.
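
For example, bootstrapping a global stack (a sketch assuming the -r region argument is simply omitted for global stacks):

tfwrapper -a ${account} -e global -s ${stack} bootstrap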

Working on stacks

You can work on stacks from their directory or from the root of the project.

# working from the root of the project
tfwrapper -a ${account} -e ${environment} -r ${region} -s ${stack} plan

# working from the root of a stack
cd ${account}/${environment}/${region}/${stack}
tfwrapper plan

You can also work on several stacks sequentially with the foreach subcommand, from any directory under the root of the project. By default, foreach selects all stacks under the current directory; called from the root of the project without any filter, it selects all stacks and executes the specified command in each of them, one after another:

# working from the root of the project
tfwrapper foreach -- tfwrapper init

Any combination of the -a, -e, -r and -s arguments can be used to select specific stacks, e.g. all stacks for an account across all environments but in a specific region:

# working from the root of the project
tfwrapper -a ${account} -r ${region} foreach -- tfwrapper plan

The same can be achieved with:

# working from an account directory
cd ${account}
tfwrapper -r ${region} foreach -- tfwrapper plan

Complex commands can be executed in a sub-shell with the -S/--shell argument, e.g.:

# working from an environment directory
cd ${account}/${environment}
tfwrapper foreach -S 'pwd && tfwrapper init >/dev/null 2>&1 && tfwrapper plan 2>/dev/null -- -no-color | grep "^Plan: "'

Passing options

You can pass anything you want to terraform using --.

tfwrapper plan -- -target resource1 -target resource2

Environment

tfwrapper sets the following environment variables.

S3 state backend credentials

The default AWS credentials of the environment are set to point to the S3 state backend. These credentials are acquired from the profile defined in conf/state.yml:

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
  • AWS_SESSION_TOKEN

Azure Service Principal credentials

These AzureRM credentials are loaded only if you are using the Service Principal mode. They are acquired from the profile defined in ~/.azurerm/config.yml:

  • ARM_CLIENT_ID
  • ARM_CLIENT_SECRET
  • ARM_TENANT_ID

Azure authentication isolation

The AZURE_CONFIG_DIR environment variable is set to the local .run/azure directory if the global configuration value use_local_azure_session_directory is set to true, which is the default.

If you have multiple configurations in your stacks, a <CONFIG_NAME>_AZURE_CONFIG_DIR variable is also set to the local .run/azure_<config_name> directory for each configuration.
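
In both cases, you can inspect or renew the wrapper's isolated session by pointing the regular azure-cli at the same directory, e.g. from the project root:

AZURE_CONFIG_DIR=.run/azure az account show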

GCP configuration

Those GCP related variables are available from the environment when using the example configuration:

  • TF_VAR_gcp_region
  • TF_VAR_gcp_zone
  • TF_VAR_gcp_project
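
A stack must declare matching Terraform variables to consume them; a minimal provider sketch, assuming the GCP stack example above:

cat << 'EOF' > main.tf
# Sketch: declarations are illustrative; the wrapper only injects TF_VAR_* values.
variable "gcp_project" {}
variable "gcp_region" {}

provider "google" {
  project = var.gcp_project
  region  = var.gcp_region
}
EOF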

GKE configurations

Each GKE instance has its own kubeconfig; the path to each configuration is available from the environment:

  • TF_VAR_gke_kubeconfig_${gke_cluster_name}

The kubeconfig is automatically fetched by the wrapper (using gcloud) and stored inside the .run directory of your project. It is refreshed on every run to ensure you point at the correct Kubernetes endpoint. You can disable this behaviour by setting refresh_kubeconfig: never in your cluster settings.

---
gcp:
  general:
    mode: adc-user
    project: &gcp_project project-name
  gke:
    - name: kubernetes-1
      zone: europe-west1-c
      refresh_kubeconfig: never

Stack configurations and credentials

The terraform['vars'] dictionary from the stack configuration is accessible as Terraform variables.

The profile defined in the stack configuration is used to acquire credentials accessible from Terraform. Two providers are supported; the variables that get loaded depend on the provider in use.

  • TF_VAR_client_name (if set in .yml stack configuration file)
  • TF_VAR_aws_account
  • TF_VAR_aws_region
  • TF_VAR_aws_access_key
  • TF_VAR_aws_secret_key
  • TF_VAR_aws_token
  • TF_VAR_azurerm_region
  • TF_VAR_azure_region
  • TF_VAR_azure_subscription_id
  • TF_VAR_azure_tenant_id
  • TF_VAR_azure_state_access_key (removed in v11.0.0)
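
These values are only injected as TF_VAR_-prefixed environment variables, so a stack must declare the corresponding Terraform variables to consume them. A sketch for an AWS stack:

cat << 'EOF' > variables.tf
# AWS-flavoured sketch: declare only the variables your stack actually uses.
variable "client_name" {}
variable "aws_account" {}
variable "aws_region" {}
variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "aws_token" {}
EOF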

Stack path

The stack path is passed to Terraform. This is especially useful for resource naming and tagging.

  • TF_VAR_account
  • TF_VAR_environment
  • TF_VAR_region
  • TF_VAR_stack
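
For example, a tagging sketch using these variables (the declarations are shown for completeness):

cat << 'EOF' > locals.tf
# Sketch: expose the stack path as a reusable tag map.
variable "account" {}
variable "environment" {}
variable "region" {}
variable "stack" {}

locals {
  common_tags = {
    account     = var.account
    environment = var.environment
    region      = var.region
    stack       = var.stack
  }
}
EOF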

Development

Tests

All new code contributions should come with unit and/or integration tests.

To run those tests locally, use tox:

poetry run tox -e py

Linters are also used to ensure code respects our standards.

To run those linters locally:

poetry run tox -e lint

Debug command-line completion

You can get verbose debugging information for argcomplete by defining the following environment variable:

export _ARC_DEBUG=1

Python code formatting

Our code is formatted with black.

Make sure to format all your code contributions with black ${filename}.

Hint: enable auto-format on save with black in your favorite IDE.

Checks

To run code and documentation style checks, run tox -e lint.

In addition to black --check, code is also checked with flake8 and its plugins (see the dev dependencies in pyproject.toml).

README TOC

This README's table of contents is formatted with md_toc.

Remember to update it with md_toc --in-place github README.md.

Using OpenTofu development builds

To build and use development versions of OpenTofu, put them in a ~/.terraform.d/versions/X.Y/X.Y.Z-dev/ folder:

# git clone https://github.com/opentofu/opentofu.git ~/go/src/github.com/opentofu/opentofu
# cd ~/go/src/github.com/opentofu/opentofu
# go build ./cmd/tofu/
# ./tofu version
OpenTofu v1.6.0-dev
on linux_amd64
# mkdir -p ~/.terraform.d/versions/1.6/1.6.0-dev
# mv ./tofu ~/.terraform.d/versions/1.6/1.6.0-dev/

git pre-commit hooks

Some git pre-commit hooks are configured in .pre-commit-config.yaml for use with the pre-commit tool.

Using them helps avoid pushing changes that would fail the CI.

They can be installed locally with:

# pre-commit install

If updating hooks configuration, run checks against all files to make sure everything is fine:

# pre-commit run --all-files --show-diff-on-failure

Note: the pre-commit tool itself can be installed with pip or pipx.

Review and merge open Dependabot PRs

Use the scripts/merge-dependabot-mrs.sh script from master branch to:

  • list open Dependabot PRs that are mergeable,
  • review, approve and merge them,
  • pull changes from GitHub and push them to origin.

Just invoke the script without any argument:

# ./scripts/merge-dependabot-mrs.sh

Check the help:

# ./scripts/merge-dependabot-mrs.sh --help

Tagging and publishing new releases to PyPI

Use the scripts/release.sh script from master branch to:

  • bump the version with poetry,
  • update CHANGELOG.md,
  • commit these changes,
  • tag with last CHANGELOG.md item content as annotation,
  • bump the version with poetry again to mark it for development,
  • commit this change,
  • push all commits and tags to all remote repositories.

This will trigger a GitHub Actions job to publish packages to PyPI.

To invoke the script, pass it the desired bump rule, e.g.:

# ./scripts/release.sh minor

For more options, check the help:

# ./scripts/release.sh --help


tfwrapper's Issues

docs: Why template repeats state storage details instead of reading it from conf/state.yml?

Since the States centralization configuration section explains that storage configuration details live in conf/state.yml, why does the templates section instruct you to repeat that same information in the template?

terraform {
  backend "azurerm" {
    subscription_id      = "00000000-0000-0000-0000-000000000000"
    resource_group_name  = "my-resource-group"
    storage_account_name = "my-centralized-terraform-states-account"
    container_name       = "terraform-states"

Why are those values not templated, with tfwrapper reading them from conf/state.yml to fill in the template?
Or maybe that is the case and the documentation just does not explain it.

docs: Missing explanation of account term and relation to config

Stacks configurations says

conf/${account}_${environment}_${region}_${stack}.yml

then ${account} is used in a number of commands that follow.

Stacks file structure says:

account: an account alias which may reference one or multiple providers accounts.

but how does the former ${account} relate to the latter, plain account?

Additionally, there is no explanation of the following:

  • Naming requirements, if any exist
  • Relation to {{ account }} in templates - yes, the reader can figure it out, but an explicit reference would clear up potential questions
  • Correlation, if any, with the account names/identifiers used in the content of conf/${account}_${environment}_${region}_${stack}.yml, e.g. does ${account} need to correlate with the value of aws.general.account or another property in that file.
  • Correlation, if any, with azure.name, aws.name, or another property from conf/state.yml

The plain word account is ambiguous, as it is used for storage account, user account (e.g. Microsoft Account), Service Principal account, etc., so at least ${account} needs to be clarified, IMHO.

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Awaiting Schedule

These updates are awaiting their schedule. Click on a checkbox to get an update now.

  • fix(deps): update dependency boto3 to v1.34.122

Other Branches

These updates are pending. To force PRs open, click the checkbox below.

  • chore(deps): update dependency pre-commit to v3.7.1

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Detected dependencies

asdf
.tool-versions
  • python 3.12.4
github-actions
.github/workflows/main.yml
  • actions/checkout v4
  • actions/setup-python v5.1.0
  • actions/checkout v4
  • actions/setup-python v5.1.0
  • actions/upload-artifact v4
  • actions/checkout v4
  • actions/download-artifact v4
  • py-cov-action/python-coverage-comment-action v3.24
  • actions/upload-artifact v4
.github/workflows/release.yml
  • actions/checkout v4
  • actions/setup-python v5.1.0
  • ncipollo/release-action v1
gitlabci
.gitlab-ci.yml
pep621
pyproject.toml
  • poetry-core >=1.1.0
poetry
pyproject.toml
  • python >=3.8.1,<4.0
  • argcomplete <4.0
  • boto3 ^1.17.94
  • cachecontrol ^0.14.0
  • colorlog >=5.0.1,<7.0.0
  • jinja2 ^3.0.1
  • lockfile ^0.12.2
  • natsort >=7.1.1,<9.0.0
  • packaging >=24,<25
  • pyyaml ^6.0.1
  • requests ^2.25.1
  • schema ^0.7.4
  • semver ^3.0.1
  • termcolor >=1.1,<3.0
  • black *
  • coverage *
  • flake8 >=6.0.0
  • flake8-docstrings *
  • flake8-pyproject *
  • md-toc *
  • mock *
  • pook >=1.0.2
  • pre-commit *
  • pytest *
  • pytest-mock *
  • requests-mock *
  • toml *
  • tox >=4

  • Check this box to trigger a request for Renovate to run again on this repository

Link is not accessible

Hi there,
I see the README says:

It is using the Service Principal's credentials to connect the Azure Subscription. This SP must have access to the subscription. The wrapper loads client_id and client_secret from your config.yml located in ~/.azurem/config.yml. Please check the example here: https://git.fr.clara.net/claranet/cloudnative/projects/terraform/base-template/tree/master/conf

But that link is private. I can not access it.

BTW, this project is really hard to set up. Maybe it would be better to have a samples directory with simple projects for each provider (Azure, AWS, GCP, etc.)? Just my thoughts..

9.2.0 - tfwrapper foreach fails : Stack config xxx.yml has no matching directory at /home/david/terraform/gitlab-repos/set -xeo pipefail ; pwd ; tfwrapper

Hello
Since tfwrapper 9.2.0, the following command fails:

> tfwrapper foreach -c "set -xeo pipefail ; pwd ; tfwrapper --plugin-cache-dir '/tmp/zz' init && tfwrapper validate"
WARNING tfwrapper : Stack config conf/c-xxx_global_gitlab.yml has no matching directory at /home/david/terraform/gitlab-repos/set -xeo pipefail ; pwd ; tfwrapper --plugin-cache-dir '/tmp/c-xxx/_global/gitlab, skipping.

the -c argument is used for :

  • the config file
  • the command to run on each stack

With the foreach argument, tfwrapper treats the -c argument as the config file and not as the command.

David.

wrapper cannot correctly check Azure session since azure-cli `v2.30.0`

Context:

Claranet terraform-wrapper relies on the Python SDK azure-cli-core lib to retrieve a CLI Azure session.
This session is used to:

  • check that the Azure session is valid and has the correct rights to access the target Azure Subscription
  • use this session when the Azure backend is used, to get Azure Storage Account keys (and forward them to Terraform)

terraform-wrapper v8.1.2 depends on azure-cli-core v2.29.0, which uses the ADAL library for Azure authentication. (Session tokens are stored in the $AZURE_CONFIG_DIR/accessTokens.json file.)
With azure-cli and azure-cli-core v2.30.0, Microsoft has introduced a CORE breaking change: they now use the MSAL library for Azure authentication. (See changelog info: https://github.com/MicrosoftDocs/azure-docs-cli/blob/main/docs-ref-conceptual/release-notes-azure-cli.md#core)
Session tokens are now stored in the $AZURE_CONFIG_DIR/msal_token_cache.json file.

Issue description:
First case:

  • You have a valid Azure session, generated via the az login command using azure-cli v2.29 (or an earlier version)
  • You now upgrade to azure-cli v2.30.0 (or more recent)
  • If you trigger a tfwrapper command (like tfwrapper plan):
    • The init phase will succeed: tfwrapper will rely on the Azure session available via azure-cli-core v2.29 and the ADAL lib
    • The plan phase will crash with an error message from Terraform:
Error: building account: getting authenticated object ID: Error parsing json result from the Azure CLI: Error waiting for the Azure CLI: exit status 1: ERROR: User aaaa.bbbb@fr.clara.net does not exist in MSAL token cache. Run `az login`.
│ 
│   with provider["registry.terraform.io/hashicorp/azurerm"],
│   on main.tf line 1, in provider "azurerm":
│    1: provider "azurerm" {

You need to run az login again with azure-cli v2.30+, so that you have both ADAL and MSAL Azure sessions.

Second Case:

  • You're on a fresh install of azure-cli v2.30.0 and you run an az login command
  • Because this has generated an MSAL session, tfwrapper will directly fail with:
ERROR   tfwrapper : Error while getting Azure token, check that you are authorized on this subscription then log yourself in with:

 AZURE_CONFIG_DIR=/home/xxxxxxxxxxxx/.run/azure az login

because tfwrapper cannot find the ADAL session.

Support OpenTofu

This is a public tracking issue to report that support for OpenTofu is already implemented and being tested internally.

search_on_github can't find new release with an older version

Hi there, how are you! ;)

It looks like the tfswitch feature cannot find a release if the releases are not ordered by name:

For example, this will fail with 0.12.30, which was released a week ago, because it searches starting from 0.13.0 to find it.

Here is what I get if I set 0.12.30 in the config file:

DEBUG   tfwrapper : Looking up "https://github.com/hashicorp/terraform/releases?after=v0.13.0" in the cache
DEBUG   tfwrapper : Current age based on date: 998
DEBUG   tfwrapper : Freshness lifetime from expires: 900
DEBUG   tfwrapper : Starting new HTTPS connection (1): github.com:443
DEBUG   tfwrapper : https://github.com:443 "GET /hashicorp/terraform/releases?after=v0.13.0 HTTP/1.1" 200 None
DEBUG   tfwrapper : Updating cache with response from "https://github.com/hashicorp/terraform/releases?after=v0.13.0"
DEBUG   tfwrapper : Caching due to etag
DEBUG   tfwrapper : Release found: "v0.13.0-rc1", checking with "^v0.12.30$"
DEBUG   tfwrapper : Release found: "v0.12.29", checking with "^v0.12.30$"
DEBUG   tfwrapper : Release found: "v0.13.0-beta3", checking with "^v0.12.30$"
DEBUG   tfwrapper : Release found: "v0.12.28", checking with "^v0.12.30$"
DEBUG   tfwrapper : Release found: "v0.12.27", checking with "^v0.12.30$"
DEBUG   tfwrapper : Release found: "v0.13.0-beta2", checking with "^v0.12.30$"
DEBUG   tfwrapper : Release found: "v0.13.0-beta1", checking with "^v0.12.30$"
DEBUG   tfwrapper : Release found: "v0.12.26", checking with "^v0.12.30$"
DEBUG   tfwrapper : Release found: "v0.12.25", checking with "^v0.12.30$"
DEBUG   tfwrapper : Release found: "v0.12.24", checking with "^v0.12.30$"
DEBUG   tfwrapper : Looking up "https://github.com/hashicorp/terraform/releases?after=v0.12.24" in the cache
DEBUG   tfwrapper : Current age based on date: 999
DEBUG   tfwrapper : Freshness lifetime from expires: 900
DEBUG   tfwrapper : https://github.com:443 "GET /hashicorp/terraform/releases?after=v0.12.24 HTTP/1.1" 200 None
DEBUG   tfwrapper : Updating cache with response from "https://github.com/hashicorp/terraform/releases?after=v0.12.24"
DEBUG   tfwrapper : Caching due to etag
DEBUG   tfwrapper : Release found: "v0.12.23", checking with "^v0.12.30$"
DEBUG   tfwrapper : Release found: "v0.12.22", checking with "^v0.12.30$"
DEBUG   tfwrapper : Release found: "v0.12.21", checking with "^v0.12.30$"
DEBUG   tfwrapper : Release found: "v0.12.20", checking with "^v0.12.30$"

The first search, https://github.com/hashicorp/terraform/releases?after=v0.13.0, of course does not include 0.12.30 in its results because it was released recently.

A quick workaround would be to first get the page with no version specified (https://github.com/hashicorp/terraform/releases), then iterate starting from the last result of each page (e.g. https://github.com/hashicorp/terraform/releases?after=v0.14.0-beta2).
This might be a performance killer; you may have a better way to handle it.

feat(aws): awsu compatibility

The issue

tfwrapper is not able to call awsu directly to interact with a YubiKey and avoid entering two MFA codes.

  • The credentials are valid for a limited time range, so you need to refresh them once or twice a day (depending on your security policy)

Feature proposal

  • Add a function that calls awsu via subprocess when running terraform commands

We had implemented it in our former fork (old PR link -> #6). So now is the time to take another shot with the great PyPI deployment you've made 😍 (without a Makefile to update) ;)

Issue using config.yml with Azure Credentials

It's more of a question than an issue, and I'm probably doing something wrong, but it seems that use_local_azure_session_directory isn't read, or something.

Here is my config.yml file, located in the conf directory:

---
always_trigger_init: false
use_local_azure_session_directory: false

I struggled for quite a long time just initializing my stack, with this message on the screen:

→ tfwrapper -a melon -e test -r fr-central -s do bootstrap
INFO    tfwrapper : Failed to load or parse file /home/alex/dev/guilds/poc/.run/azure/azureProfile.json. It will be overridden by default settings.
ERROR   tfwrapper : Error while getting Azure token, check that you are authorized on this subscription then log yourself in with:

 AZURE_CONFIG_DIR=/home/alex/dev/guilds/poc/.run/azure AZURE_ACCESS_TOKEN_FILE=/home/alex/dev/guilds/poc/.run/azure/accessTokens.json az login

I added some debug output to understand the issue:

Debug log
→ tfwrapper -d -a melon -e test -r fr-central -s do bootstrap
DEBUG   tfwrapper : Detected confdir at 'conf'
DEBUG   tfwrapper : Detected rootdir at '/home/alex/dev/guilds/poc' with 0 parents from .
DEBUG   tfwrapper : Detected environment 'test'
DEBUG   tfwrapper : Loading wrapper config from 'conf/config.yml'
DEBUG   tfwrapper : Loading state config from 'conf/state.yml'
DEBUG   tfwrapper : Exporting `AZURE_CONFIG_DIR` set to `/home/alex/dev/guilds/poc/.run/azure` directory
DEBUG   tfwrapper : Exporting `AZURE_ACCESS_TOKEN_FILE` set to `/home/alex/dev/guilds/poc/.run/azure/accessTokens.json`
INFO    tfwrapper : Failed to load or parse file /home/alex/dev/guilds/poc/.run/azure/azureProfile.json. It will be overridden by default settings.
--- Logging error ---
Traceback (most recent call last):
  File "/home/alex/.local/lib/python3.8/site-packages/azure/cli/core/_session.py", line 48, in load
    with codecs_open(self.filename, 'r', encoding=self._encoding) as f:
  File "/usr/lib/python3.8/codecs.py", line 905, in open
    file = builtins.open(filename, mode, buffering)
FileNotFoundError: [Errno 2] No such file or directory: '/home/alex/dev/guilds/poc/.run/azure/azureProfile.json'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/alex/.local/lib/python3.8/site-packages/claranet_tfwrapper/__init__.py", line 396, in _get_azure_token
    profile = get_cli_profile()
  File "/home/alex/.local/lib/python3.8/site-packages/azure/common/credentials.py", line 34, in get_cli_profile
    ACCOUNT.load(os.path.join(azure_folder, 'azureProfile.json'))
  File "/home/alex/.local/lib/python3.8/site-packages/azure/cli/core/_session.py", line 61, in load
    self.save()
  File "/home/alex/.local/lib/python3.8/site-packages/azure/cli/core/_session.py", line 65, in save
    with codecs_open(self.filename, 'w', encoding=self._encoding) as f:
  File "/usr/lib/python3.8/codecs.py", line 905, in open
    file = builtins.open(filename, mode, buffering)
FileNotFoundError: [Errno 2] No such file or directory: '/home/alex/dev/guilds/poc/.run/azure/azureProfile.json'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.8/logging/__init__.py", line 1081, in emit
    msg = self.format(record)
  File "/usr/lib/python3.8/logging/__init__.py", line 925, in format
    return fmt.format(record)
  File "/home/alex/.local/lib/python3.8/site-packages/colorlog/colorlog.py", line 135, in format
    message = super(ColoredFormatter, self).format(record)
  File "/usr/lib/python3.8/logging/__init__.py", line 664, in format
    record.message = record.getMessage()
  File "/usr/lib/python3.8/logging/__init__.py", line 369, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
  File "/home/alex/.local/bin/tfwrapper", line 8, in <module>
    sys.exit(main())
  File "/home/alex/.local/lib/python3.8/site-packages/claranet_tfwrapper/__init__.py", line 1369, in main
    state_session = get_session(
  File "/home/alex/.local/lib/python3.8/site-packages/claranet_tfwrapper/__init__.py", line 482, in get_session
    session = _get_azure_session(
  File "/home/alex/.local/lib/python3.8/site-packages/claranet_tfwrapper/__init__.py", line 449, in _get_azure_session
    if not _check_azure_auth(subscription_id=azure_subscription):
  File "/home/alex/.local/lib/python3.8/site-packages/claranet_tfwrapper/__init__.py", line 385, in _check_azure_auth
    return _get_azure_token(subscription_id, tenant_id) is not None
  File "/home/alex/.local/lib/python3.8/site-packages/claranet_tfwrapper/__init__.py", line 407, in _get_azure_token
    logger.debug("Failed retrieving Azure profile and token: {}", e)
Message: 'Failed retrieving Azure profile and token: {}'
Arguments: (FileNotFoundError(2, 'No such file or directory'),)
ERROR tfwrapper : Error while getting Azure token, check that you are authorized on this subscription then log yourself in with:

AZURE_CONFIG_DIR=/home/alex/dev/guilds/poc/.run/azure AZURE_ACCESS_TOKEN_FILE=/home/alex/dev/guilds/poc/.run/azure/accessTokens.json az login

As we can see, my config.yml is loaded, but the parameter did not impact the init phase.
N.B.: I used both python3.6 and python3.8, but nothing changed.

Then I ran az login locally in my project, ran init, and then tried updating my config.yml file again.

always_trigger_init to true
DESKTOP-7C704V2:  ~/dev/guilds/poc/melon/test/fr-central/do
→ tfwrapper plan
INFO    tfwrapper : Azure state backend initialized.
INFO    tfwrapper : Using Azure user mode
WARNING tfwrapper : Using terraform version 1.0.9
INFO    tfwrapper : Init has been activated in config

Initializing the backend...

Successfully configured the backend "azurerm"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

No changes. Your infrastructure matches the configuration.

always_trigger_init to false
DESKTOP-7C704V2:  ~/dev/guilds/poc/melon/test/fr-central/do
→ tfwrapper plan
INFO    tfwrapper : Azure state backend initialized.
INFO    tfwrapper : Using Azure user mode
WARNING tfwrapper : Using terraform version 1.0.9

│ Error: Backend initialization required, please run "terraform init"

│ Reason: Initial configuration of the requested backend "azurerm"

│ The "backend" is the interface that Terraform uses to store state,
│ perform operations, etc. If this message is showing up, it means that the
│ Terraform configuration you're using is using a custom configuration for
│ the Terraform backend.

│ Changes to backend configurations require reinitialization. This allows
│ Terraform to set up the new configuration, copy existing state, etc. Please run
│ "terraform init" with either the "-reconfigure" or "-migrate-state" flags to
│ use the current configuration.

│ If the change reason above is incorrect, please verify your configuration
│ hasn't changed and try again. At this point, no changes to your existing
│ configuration or state have been made.

As we can see, the always_trigger_init value change is picked up during tfwrapper plan and the value does impact the behaviour of tfwrapper.
Now, if I delete ~/project/.run/azure, I am again facing the issue with the Azure token not being loaded, no matter whether I put true, false, any other word, or delete the line.

Do you have an idea of what I could be doing wrong? Right now, I'm completely stuck.

Thanks,
Regards

docs: Unclear whether bootstrap or init creates Azure storage account

First, I completely followed the Required files section and created the templates

$ tree ./templates/
./templates/
└── azure
    ├── basic
    │   └── main.tf
    └── common
        └── state.tf.jinja2

and the configuration for an Azure-based test project and stack

$ ls -1 ./conf
xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx_dev_uksouth_test.yml
state.yml

Next, I've authenticated Azure CLI

$ AZURE_CONFIG_DIR=/home/mloskot/test-claranet/.run/azure az login
$ AZURE_CONFIG_DIR=/home/mloskot/test-claranet/.run/azure az account set --subscription "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx"

Finally, I bootstrapped the test stack in my project

tfwrapper -a xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx -e dev -r uksouth -s test bootstrap

and I'm trying to initialise it (the Terraform part) via the wrapper:

$ tfwrapper -a xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx -e dev -r uksouth -s test init
INFO    tfwrapper : Using terraform version 1.5.2
INFO    tfwrapper : Azure state backend initialized.
INFO    tfwrapper : Using Azure user mode

Initializing the backend...

│ Error: Failed to get existing workspaces: Error retrieving keys for Storage Account "tfwrapperd41b0183":
storage.AccountsClient#ListKeys: Failure responding to request:
StatusCode=404 -- Original Error: autorest/azure: Service returned an error.
Status=404 Code="ResourceGroupNotFound" Message="Resource group 'rg-tfwrapper' could not be found."

Although the documentation is unclear about it, it looks like users are supposed to create

  • resource group
  • storage account

before running tfwrapper. Is that correct?

TBH, I think it is quite 'natural' to expect tfwrapper to create those automatically, since it is provided with all the authentication and authorisation via Azure CLI and the configuration files.

However, the documentation seems incomplete here, as I cannot find the prerequisite of a pre-existing resource group and storage account mentioned anywhere.

docs: Missing reference to Claranet Terraform modules

Since tfwrapper is Claranet-oriented, it is reasonable to expect it to support the use of Claranet's production-ready Terraform modules.

I think the wrapper documentation, especially Stacks file structure, should touch on best practices for using the modules in a project:

  • Does tfwrapper offer any features to reference/link/import the modules?
  • If not, where is it suggested to put them?
  • Share them between account/environment/stack, or keep them as local as possible?
  • Host them as a copy or via a Git submodule?
    ...

docs: Stack mixing Azure and AWS S3?

The current README.md, in the Stacks configurations section, shows an example described this way:

Here is an example for a stack on Azure configuration using user mode and AWS S3 backend for state storage:

  • Is this correct? There seems to be no actual reference to AWS S3.
  • Why blend Azure and AWS in a single stack configuration?

clang failing on MacOS

Hi,

I'm trying to use tfwrapper on my Mac with macOS Big Sur, and it fails when running make, at the task "Running setup.py install for cffi ... error".

Have you already tried and successfully used it in this environment?

On my side, it ends with the following stack trace:

    Installing collected packages: cffi, certifi, requests, pyjwt, oauthlib, cryptography, requests-oauthlib, python-dateutil, isodate, azure-nspkg, tabulate, pyyaml, pynacl, pygments, portalocker, msrest, msal, markupsafe, jmespath, colorama, bcrypt, azure-mgmt-nspkg, azure-core, argcomplete, applicationinsights, adal, vsts, pyopenssl, pkginfo, paramiko, msrestazure, msal-extensions, mock, knack, jinja2, invoke, humanfriendly, botocore, azure-mgmt-datalake-nspkg, azure-mgmt-core, azure-common, azure-cli-telemetry, xmltodict, websocket-client, vsts-cd-manager, sshtunnel, scp, s3transfer, pytz, psutil, msgpack, jsondiff, jsmin, javaproperties, fabric, contextlib2, azure-synapse-spark, azure-synapse-artifacts, azure-synapse-accesscontrol, azure-storage-common, azure-multiapi-storage, azure-mgmt-web, azure-mgmt-trafficmanager, azure-mgmt-synapse, azure-mgmt-storage, azure-mgmt-sqlvirtualmachine, azure-mgmt-sql, azure-mgmt-signalr, azure-mgmt-servicefabric, azure-mgmt-servicebus, azure-mgmt-security, azure-mgmt-search, azure-mgmt-resource, azure-mgmt-reservations, azure-mgmt-relay, azure-mgmt-redis, azure-mgmt-redhatopenshift, azure-mgmt-recoveryservicesbackup, azure-mgmt-recoveryservices, azure-mgmt-rdbms, azure-mgmt-privatedns, azure-mgmt-policyinsights, azure-mgmt-network, azure-mgmt-netapp, azure-mgmt-msi, azure-mgmt-monitor, azure-mgmt-media, azure-mgmt-marketplaceordering, azure-mgmt-maps, azure-mgmt-managementgroups, azure-mgmt-managedservices, azure-mgmt-loganalytics, azure-mgmt-kusto, azure-mgmt-keyvault, azure-mgmt-iothubprovisioningservices, azure-mgmt-iothub, azure-mgmt-iotcentral, azure-mgmt-imagebuilder, azure-mgmt-hdinsight, azure-mgmt-eventhub, azure-mgmt-eventgrid, azure-mgmt-dns, azure-mgmt-devtestlabs, azure-mgmt-deploymentmanager, azure-mgmt-datamigration, azure-mgmt-datalake-store, azure-mgmt-datalake-analytics, azure-mgmt-databoxedge, azure-mgmt-cosmosdb, azure-mgmt-containerservice, azure-mgmt-containerregistry, azure-mgmt-containerinstance, azure-mgmt-consumption, azure-mgmt-compute, azure-mgmt-cognitiveservices, azure-mgmt-cdn, azure-mgmt-botservice, azure-mgmt-billing, azure-mgmt-batchai, azure-mgmt-batch, azure-mgmt-authorization, azure-mgmt-applicationinsights, azure-mgmt-appconfiguration, azure-mgmt-apimanagement, azure-mgmt-advisor, azure-loganalytics, azure-keyvault-administration, azure-keyvault, azure-graphrbac, azure-functions-devops-build, azure-datalake-store, azure-cosmos, azure-cli-core, azure-batch, azure-appconfiguration, antlr4-python3-runtime, termcolor, schema, natsort, lockfile, colorlog, cachecontrol, boto3, azure-cli
    Running setup.py install for cffi ... error
    ERROR: Command errored out with exit status 1:
     command: /Users/olivierdupre/workspace/tfwrapper-stacks-infra/.wrapper/.virtualenv/bin/python3.9 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/cj/k0kxqmyj0fzf2sdtq51szdr40000gn/T/pip-install-77qdkipl/cffi_16a35fb20bc846b99fb44bea1c78791f/setup.py'"'"'; __file__='"'"'/private/var/folders/cj/k0kxqmyj0fzf2sdtq51szdr40000gn/T/pip-install-77qdkipl/cffi_16a35fb20bc846b99fb44bea1c78791f/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/var/folders/cj/k0kxqmyj0fzf2sdtq51szdr40000gn/T/pip-record-7f9ualpo/install-record.txt --single-version-externally-managed --compile --install-headers /Users/olivierdupre/workspace/tfwrapper-stacks-infra/.wrapper/.virtualenv/include/site/python3.9/cffi
         cwd: /private/var/folders/cj/k0kxqmyj0fzf2sdtq51szdr40000gn/T/pip-install-77qdkipl/cffi_16a35fb20bc846b99fb44bea1c78791f/
    Complete output (45 lines):
    running install
    running build
    running build_py
    creating build
    creating build/lib.macosx-11-arm64-3.9
    creating build/lib.macosx-11-arm64-3.9/cffi
    copying cffi/backend_ctypes.py -> build/lib.macosx-11-arm64-3.9/cffi
    copying cffi/error.py -> build/lib.macosx-11-arm64-3.9/cffi
    copying cffi/setuptools_ext.py -> build/lib.macosx-11-arm64-3.9/cffi
    copying cffi/__init__.py -> build/lib.macosx-11-arm64-3.9/cffi
    copying cffi/cffi_opcode.py -> build/lib.macosx-11-arm64-3.9/cffi
    copying cffi/vengine_gen.py -> build/lib.macosx-11-arm64-3.9/cffi
    copying cffi/pkgconfig.py -> build/lib.macosx-11-arm64-3.9/cffi
    copying cffi/model.py -> build/lib.macosx-11-arm64-3.9/cffi
    copying cffi/ffiplatform.py -> build/lib.macosx-11-arm64-3.9/cffi
    copying cffi/api.py -> build/lib.macosx-11-arm64-3.9/cffi
    copying cffi/vengine_cpy.py -> build/lib.macosx-11-arm64-3.9/cffi
    copying cffi/commontypes.py -> build/lib.macosx-11-arm64-3.9/cffi
    copying cffi/lock.py -> build/lib.macosx-11-arm64-3.9/cffi
    copying cffi/recompiler.py -> build/lib.macosx-11-arm64-3.9/cffi
    copying cffi/cparser.py -> build/lib.macosx-11-arm64-3.9/cffi
    copying cffi/verifier.py -> build/lib.macosx-11-arm64-3.9/cffi
    copying cffi/_cffi_include.h -> build/lib.macosx-11-arm64-3.9/cffi
    copying cffi/parse_c_type.h -> build/lib.macosx-11-arm64-3.9/cffi
    copying cffi/_embedding.h -> build/lib.macosx-11-arm64-3.9/cffi
    copying cffi/_cffi_errors.h -> build/lib.macosx-11-arm64-3.9/cffi
    running build_ext
    building '_cffi_backend' extension
    creating build/temp.macosx-11-arm64-3.9
    creating build/temp.macosx-11-arm64-3.9/c
    /usr/bin/clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX11.sdk -DUSE__THREAD -DHAVE_SYNC_SYNCHRONIZE -I/Library/Developer/CommandLineTools/SDKs/MacOSX11.sdk/usr/include/ffi -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/ffi -I/opt/homebrew/include -I/opt/homebrew/opt/[email protected]/include -I/opt/homebrew/opt/sqlite/include -I/Users/olivierdupre/workspace/tfwrapper-stacks-infra/.wrapper/.virtualenv/include -I/opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c c/_cffi_backend.c -o build/temp.macosx-11-arm64-3.9/c/_cffi_backend.o
    c/_cffi_backend.c:6029:5: warning: 'PyEval_InitThreads' is deprecated [-Wdeprecated-declarations]
        PyEval_InitThreads();
        ^
    /opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/ceval.h:130:1: note: 'PyEval_InitThreads' has been explicitly marked deprecated here
    Py_DEPRECATED(3.9) PyAPI_FUNC(void) PyEval_InitThreads(void);
    ^
    /opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
    #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
                                                         ^
    c/_cffi_backend.c:6089:9: error: implicit declaration of function 'ffi_prep_closure' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
        if (ffi_prep_closure(closure, &cif_descr->cif,
            ^
    1 warning and 1 error generated.
    error: command '/usr/bin/clang' failed with exit code 1
    ----------------------------------------
ERROR: Command errored out with exit status 1: /Users/olivierdupre/workspace/tfwrapper-stacks-infra/.wrapper/.virtualenv/bin/python3.9 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/cj/k0kxqmyj0fzf2sdtq51szdr40000gn/T/pip-install-77qdkipl/cffi_16a35fb20bc846b99fb44bea1c78791f/setup.py'"'"'; __file__='"'"'/private/var/folders/cj/k0kxqmyj0fzf2sdtq51szdr40000gn/T/pip-install-77qdkipl/cffi_16a35fb20bc846b99fb44bea1c78791f/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/var/folders/cj/k0kxqmyj0fzf2sdtq51szdr40000gn/T/pip-record-7f9ualpo/install-record.txt --single-version-externally-managed --compile --install-headers /Users/olivierdupre/workspace/tfwrapper-stacks-infra/.wrapper/.virtualenv/include/site/python3.9/cffi Check the logs for full command output.

Cheers,
Olivier

Allow inheriting some environment variables

In many places you call subprocess.run(), passing it a hand-crafted environment (passing env=<some dict> to the function). This prevents the sub-process from inheriting some important variables. For example, we have terraform modules that interact with Kubernetes, yet they cannot see the KUBECONFIG environment variable from the parent shell. This can result in various outcomes, from authentication failures to talking to the wrong cluster and messing up the terraform state.

Would you consider adding a way to provide a list of variables that sub-processes can inherit? Thanks.

Makefile SHELL grep not working

Hi!

If your env contains multiple variables containing the string SHELL=, they are all matched by the Makefile statement, and therefore make breaks.
In my case it was a variable called RBENV_SHELL.

I managed to fix it by just deleting the first line of the Makefile, as make defaults to using the env SHELL variable anyway. (Source)

I was wondering why you used grep and cut instead of relying on the default.

I have a PR with the fix (huge stuff, one line removed 😉) in a fork and I can push it here.
