snyk / driftctl
Detect, track and alert on infrastructure drift
License: Apache License 2.0
Description
Add support for aws_default_security_group
Description
Context: A user reported a crash on Docker latest image.
Our latest released version is v0.1.1 and it's working great (cloudskiff/driftctl:v0.1.1)
But our "latest" tag on Docker Hub includes an older dev release:
$ ./bin/driftctl_linux_amd64 version
v0.1.0-6-gbb52629-dev
Environment
How to reproduce
use the docker hub cloudskiff/driftctl:latest release
Possible Solution
Update the release pipeline for latest to match the real latest
Additional context
Originally a user reported an issue like this one:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "scan": executable file not found in $PATH: unknown.
ERRO[0001] error waiting for container: context canceled
Description
Usually, an AWS account is shared by multiple teams and projects, so I don't think driftctl is useful when it only compares drift against a single TF state file; that doesn't give a useful result. It would be useful if the whole infrastructure were governed by Terraform and driftctl could use multiple TF state files, but such setups are not really practical in my view.
Description
Panic due to invalid memory address while using docker image. I'm not a Go programmer but let me know how I can help.
Environment
How to reproduce
From existing TF dir:
docker run \
-v ~/.aws:/app/.aws:ro \
-v $(pwd)/terraform.tfstate:/app/terraform.tfstate:ro \
-v ~/.driftctl:/app/.driftctl \
-e AWS_PROFILE=$AWS_PROFILE \
cloudskiff/driftctl scan
Downloading AWS terraform provider: terraform-provider-aws_v3.19.0_x5
Scanning AWS on region: us-east-1
ERRO[0015] Unable to scan resources: A runner routine paniced: runtime error: invalid memory address or nil pointer dereference
Usage: driftctl scan [flags]
FLAGS:
--filter string JMESPath expression to filter on
Examples :
- Type == 'aws_s3_bucket' (will filter only s3 buckets)
- Type =='aws_s3_bucket && Id != 'my_bucket' (excludes s3 bucket 'my_bucket')
- Attr.Tags.Terraform == 'true' (include only resources that have Tag Terraform equal to 'true')
-f, --from string IaC source, by default try to find local terraform.tfstate file
Accepted schemes are: tfstate://,tfstate+s3://
(default "tfstate://terraform.tfstate")
-o, --output string Output format, by default it will write to the console
Accepted formats are: console://,json://PATH/TO/FILE.json
(default "console://")
-t, --to string Cloud provider source
Accepted values are: aws+tf
(default "aws+tf")
INHERITED FLAGS:
-h, --help Display help for command
--no-version-check Disable the version check
unable to run driftcl
Possible Solution
???
Additional context
Terraform v0.14.3 in local dir, Terraform state format version 4
There is a typo in the final error message: driftcl should be driftctl
Description
We don't describe how to optionally share the .driftignore file when using driftctl on docker from a repo.
Something like this:
$ docker run -it --rm -v ~/.aws:/app/.aws:ro -e AWS_REGION=eu-west-3 -v $(pwd)/.driftignore:/app/.driftignore -v $(pwd)/terraform.tfstate:/app/terraform.tfstate:ro -v ~/.driftctl:/app/.driftctl -e AWS_PROFILE=cloudskiff-demo cloudskiff/driftctl:v0.1.1 --no-version-check scan
Scanning AWS on region: eu-west-3
Found 4 resource(s)
- 100% coverage
Description
Create a running routine with an exposed channel to allow us to send alerts from any moment of our lifecycle.
Use cases:
We want to be able to query the alert collection when outputting results: an alerts key in the JSON output.
Example
Alerts should be in the analysis struct.
type Alerts map[string][]Alert
type Alert struct {
Message string
ShouldIgnoreResource bool
}
// Alert keys are indexed on resources fqdn
Alerts["aws_s3_bucket.my-bucket"]
// For a s3 403 read permission denied
Alerts["aws_s3_bucket"]
// JSON Output
{
"alerts": {
"aws_s3_bucket.my-bucket": [
{
"message": "Unable to read bucket"
}
],
"aws_s3_bucket": [
{
"message": "Permissions denied while reading bucket list",
"should_ignore_resource": true
},
{
"message": "Another issue"
}
]
}
}
Description
When driftctl does not have enough permissions for a full scan, we should ignore related resources of a given type from analysis and push an alert (alerting related PR #7 )
Example
Single resource failure:
We run driftctl without permission to read a specific S3 bucket; we stop execution.
Resource enumeration failure:
We run driftctl without permission to enumerate S3 buckets; we should skip analysis for all S3 buckets and show an alert.
Description
Add support for aws_route_table
Description
As suggested by @Gary-Armstrong, being able to specify a --from tfstate is good enough, but by reading the relevant Terraform file we could get the same information and build the option automatically for the user:
terraform {
backend "s3" {
bucket = "the-bucket-name"
key = "some-directory-name"
region = "us-east-1"
dynamodb_table = "some-statelock"
}
}
Side note: it should also work dynamically for people using different workspaces. Today, driftctl runs without any need to access the TF HCL code.
Example
Description
Using 0.2.0, all the domains managed in TF are seen as unmanaged resources, along with others that are DNS entries, not domains:
Found unmanaged resources:
aws_route53_record:
- cloudskiff.io
- cloudskiff.io
- _github-challenge-cloudskiff.cloudskiff.com
- cloudskiff.com
- cloudskiff.com
- yu7cdluskcyr.cloudskiff.com
- driftctl.com
- driftctl.com
Same for drifted resources, they seem randomly mixed:
Found drifted resources:
- driftctl.com (aws_route53_record):
~ Records.0: "192.0.78.137" => "10 spool.mail.gandi.net."
~ Records.1: "192.0.78.209" => "50 fb.mail.gandi.net"
~ Ttl: 300 => 10800
~ Type: "A" => "MX"
[...]
- cloudskiff.com (aws_route53_record):
+ Records.2: <nil> => "10 ALT4.ASPMX.L.GOOGLE.COM."
+ Records.3: <nil> => "5 ALT1.ASPMX.L.GOOGLE.COM."
+ Records.4: <nil> => "5 ALT2.ASPMX.L.GOOGLE.COM."
~ Records.0: "192.0.78.137" => "1 ASPMX.L.GOOGLE.COM."
~ Records.1: "192.0.78.209" => "10 ALT3.ASPMX.L.GOOGLE.COM."
~ Type: "A" => "MX"
- driftctl.com (aws_route53_record):
~ Records.0: "10 spool.mail.gandi.net." => "192.0.78.137"
~ Records.1: "50 fb.mail.gandi.net" => "192.0.78.209"
~ Ttl: 10800 => 300
~ Type: "MX" => "A"
- cloudskiff.io (aws_route53_record):
+ Records.1: <nil> => "ns-1540.awsdns-00.co.uk."
+ Records.2: <nil> => "ns-26.awsdns-03.com."
+ Records.3: <nil> => "ns-883.awsdns-46.net."
~ Records.0: "v=spf1 include:_mailcust.gandi.net ?all" => "ns-1186.awsdns-20.org."
~ Ttl: 10800 => 172800
~ Type: "TXT" => "NS"
- cloudskiff.com (aws_route53_record):
- Records.4: "5 ALT2.ASPMX.L.GOOGLE.COM." => <nil>
~ Records.0: "1 ASPMX.L.GOOGLE.COM." => "ns-1414.awsdns-48.org."
~ Records.1: "10 ALT3.ASPMX.L.GOOGLE.COM." => "ns-1615.awsdns-09.co.uk."
~ Records.2: "10 ALT4.ASPMX.L.GOOGLE.COM." => "ns-444.awsdns-55.com."
~ Records.3: "5 ALT1.ASPMX.L.GOOGLE.COM." => "ns-820.awsdns-38.net."
~ Ttl: 300 => 172800
~ Type: "MX" => "NS"
- driftctl.com (aws_route53_record):
+ Records.2: <nil> => "ns-373.awsdns-46.com."
+ Records.3: <nil> => "ns-578.awsdns-08.net."
~ Records.0: "google-site-verification=k4-uf9SRZ-JTX6f4w4dCdMZM6fvkAsVzeu_YTWP0i9o" => "ns-1513.awsdns-61.org."
~ Records.1: "v=spf1 include:_mailcust.gandi.net ?all" => "ns-1911.awsdns-46.co.uk."
~ Ttl: 10800 => 172800
~ Type: "TXT" => "NS"
Environment
How to reproduce
Possible Solution
Additional context
Description
'kind/documentation' tags aren't automatically added for documentation-type issues
Description
Make the formula bump PR inside the homebrew-core repo automatic for each new release!
Description
We should add all FakeResource instances as pointers inside a slice of resources, to be consistent with how we add remote/state resources in deserializers.
Description
Add support for aws_default_route_table
Description
Context: I have a LastModified field that I want to ignore (from aws_lambda_function.my-lambda-name).
If I follow the documentation precisely, it works great: once lowercased, my field becomes lastmodified in the .driftignore (and not LastModified).
It's not really intuitive though: I think keeping the field casing as displayed would be more intuitive for users and for me (aws_lambda_function.driftctl-version.LastModified is good enough! Keep it simple).
Environment
How to reproduce
Possible Solution
Keep it simple and case-insensitive :)
Additional context
Description
Cloning the repository, using the GitHub Desktop app, on Windows 10 is reportedly not working with one of the golden files: aws_s3_bucket_analytics_configuration-bucket-martin-test-drift:Analytics_Bucket.res.golden.json
Cloning into 'C:\Users\sjourdan\src\github.com\cloudskiff\driftctl'...
remote: Enumerating objects: 244, done.
remote: Counting objects: 100% (244/244), done.
remote: Compressing objects: 100% (139/139), done.
remote: Total 1066 (delta 129), reused 161 (delta 92), pack-reused 822
Receiving objects: 100% (1066/1066), 762.77 KiB | 2.12 MiB/s, done.
Resolving deltas: 100% (424/424), done.
error: invalid path 'pkg/remote/aws/test/analytics_inventory_nometrics/aws_s3_bucket_analytics_configuration-bucket-martin-test-drift:Analytics_Bucket.res.golden.json'
fatal: unable to checkout working tree
warning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status'
and retry with 'git restore --source=HEAD :/'
Environment
main is at 546674a
How to reproduce
Possible Solution
Windows probably doesn't support the : character in filenames.
It works fine on other OSes and within WSL2.
Additional context
Description
A user requested support for AWS SSO authentication in driftctl.
While we could replicate a working use case manually in a lab (the AWS CLI v2 has worked well with this since November 2019), it currently can't work with driftctl directly, because it depends on SSO support landing first in the Go SDK and then in the Terraform AWS provider.
In December 2020, priority was high for the Go SDK team (as read in a Terraform AWS provider issue).
Sources:
Description
Some users want to use an IAM Role to authenticate (and not an AWS profile based on IAM keys).
Example
See AWS doc: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html
Description
Add support for aws_subnet
Description
Support the use of named profiles for execution, based on AWS credential documentation.
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html
Example
driftctl scan --profile profile_name
Description
Reported by @Gary-Armstrong on a #73 comment:
After 17 minutes it completed reading AWS, and then it says
Unable to scan resources: Failed to read state file: The state file could not be read: read terraform.tfstate: is a directory
which is news to me :)
Maybe do this check at the start?
Great suggestion; we should indeed check early that everything is in the right place, especially since scanning huge AWS accounts takes a long time.
Environment
How to reproduce
Possible Solution
Early check for the existence of the --from value (or the DCTL_FROM env var).
Additional context
Description
driftctl's latest tag is currently manually set to the latest stable release to fix #44 :
docker tag cloudskiff/driftctl:v0.1.1 cloudskiff/driftctl:latest
docker push cloudskiff/driftctl:latest
The latest tag should point to the latest stable release and not our state of the master branch.
Environment
How to reproduce
Possible Solution
Automatically set latest to the latest stable release
Additional context
cf #44
Description
We want to be able to ignore from drift a single field of a resource.
Example
I create an aws_lambda_function
resource "aws_lambda_function" "foobar" {
function_name = "foobar"
role = "foobar"
filename = "data/lambda/something.zip"
handler = "main"
timeout = 15
runtime = "go1.x"
lifecycle {
ignore_changes = [
environment,
]
}
}
I update my lambda environment using a CI pipeline.
I don't want driftctl to show me environment drifts
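For illustration, the requested per-field ignore could mirror the lifecycle block above with an entry like this in .driftignore (hypothetical syntax, not an existing driftctl feature):

```
aws_lambda_function.foobar.Environment
```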
Description
Building the master branch keeps the current stable version number, which is misleading (I think I'm running a stable 0.1.1 while I'm really running a pre-0.2.0).
Why not simply avoid naming it?
$ time make build; ./bin/driftctl_linux_amd64 version
scripts/build.sh
Bash: 5.0.3(1)-release
+ Building env: dev
ARCH: linux/amd64
+ Removing old binaries ...
+ Building with flags: -X github.com/cloudskiff/driftctl/pkg/version.version=v0.1.1-61-g5405385
Number of parallel builds: 3
--> linux/amd64: github.com/cloudskiff/driftctl
real 0m5.956s
user 0m9.215s
sys 0m1.871s
v0.1.1-61-g5405385-dev
Environment
How to reproduce
Possible Solution
Additional context
Description
ERRO[0001] Unable to scan resources: A runner routine paniced
Environment
How to reproduce
docker run -t --rm -v ~/.aws:/home/.aws:ro -v $(pwd):/tf:ro -v ~/.driftctl:/home/.driftctl -e AWS_PROFILE=$AWS_PROFILE -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID -e AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN cloudskiff/driftctl scan --from tfstate:///tf/users/.terraform/terraform.tfstate
Scanning AWS on region: us-east-1
ERRO[0001] Unable to scan resources: A runner routine paniced: runtime error: invalid memory address or nil pointer dereference
Usage: driftctl scan [flags]
Description
Records containing just a name aren't affected (TXT domain.com, for example), but records containing a subdomain, like TXT _amazonses.cloudskiff.io, are seen as deleted + unmanaged.
Environment
How to reproduce
Some sample terraform like this:
# this one will work
resource "aws_route53_record" "txt" {
zone_id = aws_route53_zone.csio.zone_id
name = "cloudskiff.io"
type = "TXT"
ttl = "10800"
records = ["v=spf1 include:_mailcust.gandi.net ?all"]
}
# this one will always be seen as deleted/unmanaged
resource "aws_route53_record" "amazonses_verification_record" {
zone_id = aws_route53_zone.csio.zone_id
name = "_amazonses.cloudskiff.io"
type = "TXT"
ttl = "600"
records = [aws_ses_domain_identity.cloudskiff_io.verification_token]
}
Description
TestProviderInstallerGetAwsDoesNotExist fails on macOS because we download plugins/linux_amd64 instead of the darwin one.
How to reproduce
$ make test
I know these are early days for this tool, but it looks like it's purely AWS-only for now. Are there plans to expand support to Azure and GCP platforms?
Description
tl;dr: no curl and no sudo in the default ubuntu docker environment.
A user reported the documentation didn't work for him, in a default ubuntu docker environment:
$ docker run -it --rm ubuntu
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
da7391352a9b: Pull complete
14428a6d4bcd: Pull complete
2c2d948710f2: Pull complete
Digest: sha256:c95a8e48bf88e9849f3e0f723d9f49fa12c5a00cfc6e60d2bc99d87555295e4c
Status: Downloaded newer image for ubuntu:latest
root@41960ff4c1ac:/# curl https://github.com/cloudskiff/driftctl/releases/latest/download/driftctl_linux_amd64 | sudo tee /usr/local/bin/driftctl
bash: sudo: command not found
bash: curl: command not found
Fix: apt update && apt install -y curl sudo (even if sudo is useless in a default docker environment).
Anyway, we can facilitate the installation with a simpler sudo curl -L https://github.com/cloudskiff/driftctl/releases/latest/download/driftctl_linux_amd64 -o /usr/local/bin/driftctl
Also, chmod +x /usr/local/bin/driftctl is missing.
Description
My Okta token expires after 60 minutes. When it expires and I use aws cli I see an error:
╰─ aws s3 ls s3://<bucketname>
An error occurred (ExpiredToken) when calling the ListObjectsV2 operation: The provided token has expired.
The container hangs in this case and does not display an error. Ideally it would halt and report an error.
Environment
How to reproduce
Use Okta SSO to generate ~/.aws/credentials
Let credentials expire
docker run -t --rm \
-v ~/.aws:/home/.aws:ro \
-v $(pwd)/terraform.tfstate:/app/terraform.tfstate:ro \
-v ~/.driftctl:/app/.driftctl \
-e AWS_PROFILE=$AWS_PROFILE \
cloudskiff/driftctl scan
Possible Solution
I think simply halting and displaying an error is correct.
Description
During acceptance test runs, random race errors can happen with a "context canceled" message.
It's a race condition due to how we handle Terraform providers.
Two points may introduce issues:
- We call plugin.CleanupClients() after each end of the scan cmd, which leads to Terraform providers closing while another test may have started.
Environment
How to reproduce
make acc
Check for warnings like WARN[0129] Error reading foo-2.com.[aws_route53_zone]: rpc error: code = Canceled desc = context canceled
Possible Solution
We should remove the global provider map and find a better way to handle plugin termination (maybe move plugin cleanup into main to allow tests to reuse a plugin without someone closing it during execution). Maybe we could dig into Terraform plugin management to ensure we only close the plugins of the current command (by keeping a reference to the client and calling its Kill() method).
Additional context
➜ make acc
DRIFTCTL_ACC=true gotestsum --format testname --junitfile unit-tests-acc.xml -- -coverprofile=cover-acc.out -coverpkg=./pkg/... -run=TestAcc_ ./pkg/resource/...
EMPTY pkg/resource
PASS pkg/resource/aws.TestAcc_AwsInstance_WithBlockDevices (54.30s)
=== RUN TestAcc_AwsRoute53Record_WithFQDNAsId
DEBU[0054] Running terraform apply ...
DEBU[0127] Terraform apply done
DEBU[0127] Found existing aws provider path=/home/elie/.driftctl/plugins/linux_amd64/terraform-provider-aws_v3.19.0_x5
DEBU[0127] Starting new provider region=eu-west-3
DEBU[0127] Starting aws provider GRPC client region=eu-west-3
DEBU[0128] Found IAC provider backend= path=testdata/acc/aws_route53_record/terraform.tfstate supplier=tfstate
INFO[0128] Start scanning cloud provider
DEBU[0129] Stopping ParallelRunner
ERRO[0129] Unable to scan resources: rpc error: code = Canceled desc = context canceled
Usage: driftctl scan [flags]
FLAGS:
--filter string JMESPath expression to filter on
Examples :
- Type == 'aws_s3_bucket' (will filter only s3 buckets)
- Type =='aws_s3_bucket && Id != 'my_bucket' (excludes s3 bucket 'my_bucket')
- Attr.Tags.Terraform == 'true' (include only resources that have Tag Terraform equal to 'true')
-f, --from string IaC source, by default try to find local terraform.tfstate file
Accepted schemes are: tfstate://,tfstate+s3://
(default "tfstate://terraform.tfstate")
-o, --output string Output format, by default it will write to the console
Accepted formats are: console://,json://PATH/TO/FILE.json
(default "console://")
-t, --to string Cloud provider source
Accepted values are: aws+tf
(default "aws+tf")
INHERITED FLAGS:
-h, --help Display help for command
--no-version-check Disable the version check
aws_route53_record_test.go:17: unable to run driftctl
DEBU[0129] Running terraform destroy ...
DEBU[0129] Reading aws cloud resource attrs=map[] id=Z0725199YV5IFAKVODYU type=aws_route53_zone
DEBU[0129] Reading aws cloud resource attrs=map[] id=Z07465731S7T7LM5NIJT5 type=aws_route53_zone
DEBU[0129] Reading aws cloud resource attrs=map[] id=Z07465731S7T7LM5NIJT5__test2.foo-2.com_TXT type=aws_route53_record
DEBU[0129] Reading aws cloud resource attrs=map[] id=Z07465731S7T7LM5NIJT5_foo-2.com_SOA type=aws_route53_record
DEBU[0129] Reading aws cloud resource attrs=map[] id=Z07465731S7T7LM5NIJT5_test3.foo-2.com_TXT type=aws_route53_record
DEBU[0129] Reading aws cloud resource attrs=map[] id=Z07465731S7T7LM5NIJT5_foo-2.com_NS type=aws_route53_record
DEBU[0129] Reading aws cloud resource attrs=map[] id=Z07465731S7T7LM5NIJT5_test0.foo-2.com_TXT type=aws_route53_record
DEBU[0129] Reading aws cloud resource attrs=map[] id=Z07465731S7T7LM5NIJT5_test1.foo-2.com_TXT type=aws_route53_record
WARN[0129] Error reading foo-2.com.[aws_route53_zone]: rpc error: code = Canceled desc = context canceled
WARN[0129] Error reading foo-2.com.[aws_route53_zone]: rpc error: code = Canceled desc = context canceled
DEBU[0129] Stopping ParallelRunner
DEBU[0129] Reading aws cloud resource attrs=map[] id=Z0725199YV5IFAKVODYU_foo-2.com_SOA type=aws_route53_record
DEBU[0129] Reading aws cloud resource attrs=map[] id=Z0725199YV5IFAKVODYU_foo-2.com_NS type=aws_route53_record
DEBU[0130] Stopping ParallelRunner
DEBU[0168] Terraform destroy done
--- FAIL: TestAcc_AwsRoute53Record_WithFQDNAsId (114.09s)
FAIL pkg/resource/aws.TestAcc_AwsRoute53Record_WithFQDNAsId (114.09s)
coverage: 55.2% of statements in ./pkg/...
FAIL pkg/resource/aws
EMPTY pkg/resource/aws/deserializer
=== Failed
=== FAIL: pkg/resource/aws TestAcc_AwsRoute53Record_WithFQDNAsId (114.09s)
DEBU[0054] Running terraform apply ...
DEBU[0127] Terraform apply done
DEBU[0127] Found existing aws provider path=/home/elie/.driftctl/plugins/linux_amd64/terraform-provider-aws_v3.19.0_x5
DEBU[0127] Starting new provider region=eu-west-3
DEBU[0127] Starting aws provider GRPC client region=eu-west-3
DEBU[0128] Found IAC provider backend= path=testdata/acc/aws_route53_record/terraform.tfstate supplier=tfstate
INFO[0128] Start scanning cloud provider
DEBU[0129] Stopping ParallelRunner
ERRO[0129] Unable to scan resources: rpc error: code = Canceled desc = context canceled
Usage: driftctl scan [flags]
FLAGS:
--filter string JMESPath expression to filter on
Examples :
- Type == 'aws_s3_bucket' (will filter only s3 buckets)
- Type =='aws_s3_bucket && Id != 'my_bucket' (excludes s3 bucket 'my_bucket')
- Attr.Tags.Terraform == 'true' (include only resources that have Tag Terraform equal to 'true')
-f, --from string IaC source, by default try to find local terraform.tfstate file
Accepted schemes are: tfstate://,tfstate+s3://
(default "tfstate://terraform.tfstate")
-o, --output string Output format, by default it will write to the console
Accepted formats are: console://,json://PATH/TO/FILE.json
(default "console://")
-t, --to string Cloud provider source
Accepted values are: aws+tf
(default "aws+tf")
INHERITED FLAGS:
-h, --help Display help for command
--no-version-check Disable the version check
aws_route53_record_test.go:17: unable to run driftctl
DEBU[0129] Running terraform destroy ...
DEBU[0129] Reading aws cloud resource attrs=map[] id=Z0725199YV5IFAKVODYU type=aws_route53_zone
DEBU[0129] Reading aws cloud resource attrs=map[] id=Z07465731S7T7LM5NIJT5 type=aws_route53_zone
DEBU[0129] Reading aws cloud resource attrs=map[] id=Z07465731S7T7LM5NIJT5__test2.foo-2.com_TXT type=aws_route53_record
DEBU[0129] Reading aws cloud resource attrs=map[] id=Z07465731S7T7LM5NIJT5_foo-2.com_SOA type=aws_route53_record
DEBU[0129] Reading aws cloud resource attrs=map[] id=Z07465731S7T7LM5NIJT5_test3.foo-2.com_TXT type=aws_route53_record
DEBU[0129] Reading aws cloud resource attrs=map[] id=Z07465731S7T7LM5NIJT5_foo-2.com_NS type=aws_route53_record
DEBU[0129] Reading aws cloud resource attrs=map[] id=Z07465731S7T7LM5NIJT5_test0.foo-2.com_TXT type=aws_route53_record
DEBU[0129] Reading aws cloud resource attrs=map[] id=Z07465731S7T7LM5NIJT5_test1.foo-2.com_TXT type=aws_route53_record
WARN[0129] Error reading foo-2.com.[aws_route53_zone]: rpc error: code = Canceled desc = context canceled
WARN[0129] Error reading foo-2.com.[aws_route53_zone]: rpc error: code = Canceled desc = context canceled
DEBU[0129] Stopping ParallelRunner
DEBU[0129] Reading aws cloud resource attrs=map[] id=Z0725199YV5IFAKVODYU_foo-2.com_SOA type=aws_route53_record
DEBU[0129] Reading aws cloud resource attrs=map[] id=Z0725199YV5IFAKVODYU_foo-2.com_NS type=aws_route53_record
DEBU[0130] Stopping ParallelRunner
DEBU[0168] Terraform destroy done
DONE 2 tests, 1 failure in 169.965s
make: *** [Makefile:37: acc] Error 1
Description
Add support for aws_default_subnet
Description
I got this error: ERRO[0009] unsupported attribute "enclave_options"
This is related to a simple EC2 instance resource:
# Create AWS EC2 Instance
resource "aws_instance" "main" {
ami = data.aws_ami.ubuntu.id
instance_type = "t2.nano"
tags = {
Name = var.name
TTL = var.ttl
owner = "${var.name}-guide"
}
}
This is new from 3.22.0 https://github.com/hashicorp/terraform-provider-aws/blob/master/CHANGELOG.md#3220-december-18-2020
Example
I was following the hashicorp tutorial here.
See the tfstate attached
Context:
https://docs.aws.amazon.com/enclaves/latest/user/nitro-enclave.html
AWS Nitro Enclaves is an Amazon EC2 feature that allows you to create isolated execution environments, called enclaves, from Amazon EC2 instances. Enclaves are separate, hardened, and highly constrained virtual machines. They provide only secure local socket connectivity with their parent instance.
It's set to false by default
Description
Support aws_default_vpc
Description
We need to provide a quick flag to allow users to report crashes. Sentry is a pretty good tool, so let's integrate it and only enable it when the --enable-reporting flag is set.
Description
The output for the aws_route53_record is quite raw for a human compared to the route53 zones output:
aws_route53_record:
- ZOS30SFDAFTU9__github-challenge-cloudskiff.cloudskiff.com_TXT
- ZOS30SFDAFTU9_yu7cdluskcyr.cloudskiff.com_CNAME
Example
An example could be:
aws_route53_record:
- _github-challenge-cloudskiff.cloudskiff.com (TXT) (zone: ZOS30SFDAFTU9)
- _yu7cdluskcyr.cloudskiff.com (CNAME) (zone: ZOS30SFDAFTU9)
The Stringer implementation should be able to replace the ID output entirely, not only add human-readable fields after the ID.
Description
To facilitate installation on macOS, we must create a homebrew formula.
Example
This will make our users install driftctl with:
brew install driftctl
Description
For documentation issues, it's weird to choose between a "bug" or a "feature" templated issue.
Let's create a "Documentation" template with the kind/documentation tag already filled!
Example
Description
As we're close to being accepted into homebrew-core, let's add a quick line for our macOS friends:
$ brew install driftctl
Description
Using an aws_security_group resource, I declare rules using the egress {} and ingress {} blocks.
The resulting rules are detected as "unmanaged" by driftctl, which is not true.
Environment
How to reproduce
to_reproduce.tar.gz
Possible Solution
Additional context
$ AWS_REGION="eu-west-3" AWS_PROFILE="cloudskiff-demo" driftctl scan
Scanning AWS on region: eu-west-3
Found unmanaged resources:
[...]
aws_security_group_rule:
- sgrule-3926891421 (Type: egress, SecurityGroup: sg-00cc7a64235d09359, Protocol: All, Ports: All, Destination: 0.0.0.0/0)
- sgrule-3913664463 (Type: ingress, SecurityGroup: sg-00cc7a64235d09359, Protocol: tcp, Ports: 443, Source: 0.0.0.0/0)
[...]
Description
When using the aws_eip resource with aws_eip_association for an aws_instance, you need two terraform apply runs for the state to be updated.
Hence, driftctl detects drift when there isn't any.
The following fields in the state are empty after the first tf apply and get filled only after a second tf apply:
"association_id": "",
"instance": "",
"network_interface": "",
"private_dns": null,
"private_ip": "",
Here is the false positive drift detected:
Found drifted resources:
- eipalloc-0e2894d8ea42851df (aws_eip):
~ AssociationId: "" => "eipassoc-0ee67e1ca759733a2"
~ Instance: "" => "i-004611704837fd09a"
~ NetworkInterface: "" => "eni-0a62972b0471447f6"
~ PrivateDns: <nil> => "ip-172-31-40-4.eu-west-3.compute.internal"
~ PrivateIp: "" => "172.31.40.4"
State, after the first TF apply:
{
"mode": "managed",
"type": "aws_eip",
"name": "eip_test_instance_2",
"provider": "provider[\"registry.terraform.io/hashicorp/aws\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"allocation_id": null,
"associate_with_private_ip": null,
"association_id": "",
"customer_owned_ip": "",
"customer_owned_ipv4_pool": "",
"domain": "vpc",
"id": "eipalloc-0e2894d8ea42851df",
"instance": "",
"network_border_group": "eu-west-3",
"network_interface": "",
"private_dns": null,
"private_ip": "",
"public_dns": "ec2-35-180-56-77.eu-west-3.compute.amazonaws.com",
"public_ip": "35.180.56.77",
"public_ipv4_pool": "amazon",
"tags": {
"Name": "eip_test_instance_2"
},
"timeouts": null,
"vpc": true
},
"sensitive_attributes": [],
"private": "eyJlMmJmYjczMC1lY2FhLTExZTYtOGY4OC0zNDM2M2JjN2M0YzAiOnsiZGVsZXRlIjoxODAwMDAwMDAwMDAsInJlYWQiOjkwMDAwMDAwMDAwMCwidXBkYXRlIjozMDAwMDAwMDAwMDB9fQ=="
}
]
}
State, after the second terraform apply:
{
"mode": "managed",
"type": "aws_eip",
"name": "eip_test_instance_2",
"provider": "provider[\"registry.terraform.io/hashicorp/aws\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"allocation_id": null,
"associate_with_private_ip": null,
"association_id": "eipassoc-0ee67e1ca759733a2",
"customer_owned_ip": "",
"customer_owned_ipv4_pool": "",
"domain": "vpc",
"id": "eipalloc-0e2894d8ea42851df",
"instance": "i-004611704837fd09a",
"network_border_group": "eu-west-3",
"network_interface": "eni-0a62972b0471447f6",
"private_dns": "ip-172-31-40-4.eu-west-3.compute.internal",
"private_ip": "172.31.40.4",
"public_dns": "ec2-35-180-56-77.eu-west-3.compute.amazonaws.com",
"public_ip": "35.180.56.77",
"public_ipv4_pool": "amazon",
"tags": {
"Name": "eip_test_instance_2"
},
"timeouts": null,
"vpc": true
},
"sensitive_attributes": [],
Environment
How to reproduce
A minimal aws_instance, with a minimal aws_eip and a minimal aws_eip_association:
resource "aws_instance" "test_instance_1" {
ami = data.aws_ami.ubuntu.id
instance_type = "t3.micro"
tags = {
Name = "test_instance_1"
}
volume_tags = {
Name = "rootVol"
}
root_block_device {
volume_type = "gp2"
volume_size = 20
delete_on_termination = true
}
}
resource "aws_eip" "eip_test_instance_2" {
tags = {
Name = "eip_test_instance_2"
}
}
resource "aws_eip_association" "eip_assoc_instance_1" {
instance_id = aws_instance.test_instance_1.id
allocation_id = aws_eip.eip_test_instance_2.id
}
Possible Solution
Issue a warning?
Additional context
This relevant Terraform issue explains the root cause, found by @wbeuil: hashicorp/terraform-provider-aws#15093 (comment)
Also similar to #11
Description
Some fields introduced by a provider update are not normalized correctly (it seems the state is not always consistent depending on whether it has been refreshed or not):
- foo (aws_lambda_function):
~ ImageUri: <nil> => ""
~ SigningJobArn: <nil> => ""
~ SigningProfileVersionArn: <nil> => ""
Environment
How to reproduce
The Terraform code in the lambda function supplier test helps reproduce this.
Launch driftctl before and after a terraform refresh. The refreshed run shows a drift.
Possible Solution
We should apply the same normalization to state and remote resources.
Description
Users running on the latest Macs may enjoy a native build.
Golang will support Apple Silicon in its 1.16 release, which is targeted for February 2021.
Description
When adding an aws_eip to an aws_instance, two applies are needed for the state to be updated and for driftctl to stop seeing a drift (while there's none).
It is a Terraform issue and not a driftctl one, but we still report a drift while there's none.
Found 2 differences in managed resource: i-0fe2f966f41521b44 (aws_instance):
~ PublicDns: "ec2-35-180-34-37.eu-west-3.compute.amazonaws.com" => "ec2-15-237-90-99.eu-west-3.compute.amazonaws.com"
~ PublicIp: "35.180.34.37" => "15.237.90.99"
Environment
How to reproduce
Possible Solution
We could ignore these fields from the diff when an aws_eip_association resource is attached to the instance, as it is duplicate drift information.
Cases to test:
Additional context
$ AWS_REGION="eu-west-3" AWS_PROFILE="cloudskiff-demo" driftctl scan
ERRO[2020-10-27T14:38:33+01:00] unsupported attribute "block_device_mappings"
Found 5 unmanaged resources, 0 deleted resources. 2 resources have drifted.
[...]
Found 2 differences in managed resource: i-0fe2f966f41521b44 (aws_instance):
~ PublicDns: "ec2-35-180-34-37.eu-west-3.compute.amazonaws.com" => "ec2-15-237-90-99.eu-west-3.compute.amazonaws.com"
~ PublicIp: "35.180.34.37" => "15.237.90.99"
[...]
Total coverage is 28%
$ AWS_REGION="eu-west-3" AWS_PROFILE="cloudskiff-demo" terraform apply
random_string.prefix: Refreshing state... [id=do4luh]
data.aws_ami.ubuntu: Refreshing state...
aws_instance.test_instance_1: Refreshing state... [id=i-0fe2f966f41521b44]
aws_eip.eip_test_instance_1: Refreshing state... [id=eipalloc-059d78c79c2d54262]
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
$ AWS_REGION="eu-west-3" AWS_PROFILE="cloudskiff-demo" driftctl scan
ERRO[2020-10-27T14:39:40+01:00] unsupported attribute "block_device_mappings"
Found 5 unmanaged resources, 0 deleted resources. 1 resources have drifted.
[...]
Total coverage is 28%
Description
We have multiple tfstate files / Terraform roots in one account, so this tool does not fit our architecture.
Example
account
--- vpc-dev (one terraform for this env)
--- vpc-qual (one terraform for this env)
When we use the tfstate from vpc-qual, the resources of vpc-dev are listed.
When we use the tfstate from vpc-dev, the resources of vpc-qual are listed.
Is it possible to specify multiple tfstate files?
driftctl *.tfstate
Description
Acceptance tests should run with a minimal set of permissions, to catch permissions missing from the documentation and other permission issues.
Since we run a terraform apply to create the required resources on the remote side during acceptance test execution, we cannot use a minimal read-only permission set.
An approach could be to introduce an ACC_AWS_PROFILE variable,
overriding the default AWS_PROFILE,
to be used for Terraform write operations during acceptance tests.
Example
ACC_AWS_PROFILE=read-write-profile AWS_PROFILE=read-only-profile DRIFTCTL_ACC=true make acc
Description
Add support for aws_vpc
Description
Docker method fails: unable to create dir .driftctl
First try, using the Docker instructions from the README:
docker run -t --rm \
-v ~/.aws:/home/.aws:ro \
-v $(pwd):/app:ro \
-v ~/.driftctl:/home/.driftctl \
-e AWS_PROFILE=$AWS_PROFILE \
> cloudskiff/driftctl scan
Unable to find image 'cloudskiff/driftctl:latest' locally
latest: Pulling from cloudskiff/driftctl
be0788dcda67: Pull complete
c3c40d5e8c96: Pull complete
da5f21922be9: Pull complete
2525c1e3b098: Pull complete
Digest: sha256:bd0d310a51bd64223ee33f54b1cef0491cc2f3047615de901e143694daba2df1
Status: Downloaded newer image for cloudskiff/driftctl:latest
Downloading AWS terraform provider: terraform-provider-aws_v3.19.0_x5
Usage: driftctl scan [flags]
FLAGS:
--filter string JMESPath expression to filter on
Examples :
- Type == 'aws_s3_bucket' (will filter only s3 buckets)
- Type =='aws_s3_bucket && Id != 'my_bucket' (excludes s3 bucket 'my_bucket')
- Attr.Tags.Terraform == 'true' (include only resources that have Tag Terraform equal to 'true')
-f, --from string IaC source, by default try to find local terraform.tfstate file
Accepted schemes are: tfstate://,tfstate+s3://
(default "tfstate://terraform.tfstate")
-o, --output string Output format, by default it will write to the console
Accepted formats are: console://,json://PATH/TO/FILE.json
(default "console://")
-t, --to string Cloud provider source
Accepted values are: aws+tf
(default "aws+tf")
INHERITED FLAGS:
-h, --help Display help for command
--no-version-check Disable the version check
mkdir /app/.driftctl: read-only file system
I would prefer to keep PWD mounted read-only since it is my Terraform root. I could probably use -f
along with a writable PWD, but I'm unsure of the best path forward for the tool.
Description
We want an automatically built Docker image available on Docker Hub.
Example
$ docker run cloudskiff/driftctl scan --help
Scan
Usage: driftctl scan [flags]
FLAGS:
--filter string JMESPath expression to filter on
Examples :
- Type == 'aws_s3_bucket' (will filter only s3 buckets)
- Type =='aws_s3_bucket && Id != 'my_bucket' (excludes s3 bucket 'my_bucket')
- Attr.Tags.Terraform == 'true' (include only resources that have Tag Terraform equal to 'true')
-f, --from string IaC source, by default try to find local terraform.tfstate file
Accepted schemes are: tfstate://,tfstate+s3://
(default "tfstate://terraform.tfstate")
-o, --output string Output format, by default it will write to the console
Accepted formats are: console://,json://PATH/TO/FILE.json
(default "console://")
-t, --to string Cloud provider source
Accepted values are: aws+tf
(default "aws+tf")
INHERITED FLAGS:
-h, --help Display help for command
--no-version-check Disable the version check