
terraform-aws-s3-bucket's Introduction

AWS S3 bucket Terraform module

Terraform module which creates an S3 bucket on AWS with all (or almost all) features provided by the Terraform AWS provider.


These features of S3 bucket configurations are supported:

  • static website hosting
  • access logging
  • versioning
  • CORS
  • lifecycle rules
  • server-side encryption
  • object locking
  • Cross-Region Replication (CRR)
  • ELB log delivery bucket policy
  • ALB/NLB log delivery bucket policy

Usage

Private bucket with versioning enabled

module "s3_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"

  bucket = "my-s3-bucket"
  acl    = "private"

  control_object_ownership = true
  object_ownership         = "ObjectWriter"

  versioning = {
    enabled = true
  }
}

Bucket with ELB access log delivery policy attached

module "s3_bucket_for_logs" {
  source = "terraform-aws-modules/s3-bucket/aws"

  bucket = "my-s3-bucket-for-logs"
  acl    = "log-delivery-write"

  # Allow deletion of non-empty bucket
  force_destroy = true

  control_object_ownership = true
  object_ownership         = "ObjectWriter"

  attach_elb_log_delivery_policy = true
}

Bucket with ALB/NLB access log delivery policy attached

module "s3_bucket_for_logs" {
  source = "terraform-aws-modules/s3-bucket/aws"

  bucket = "my-s3-bucket-for-logs"
  acl    = "log-delivery-write"

  # Allow deletion of non-empty bucket
  force_destroy = true

  control_object_ownership = true
  object_ownership         = "ObjectWriter"

  attach_elb_log_delivery_policy = true  # Required for ALB logs
  attach_lb_log_delivery_policy  = true  # Required for ALB/NLB logs
}

Conditional creation

Sometimes you need a way to create S3 resources conditionally, but Terraform does not allow using count inside a module block, so the solution is to specify the create_bucket argument.

# This S3 bucket will not be created
module "s3_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"

  create_bucket = false
  # ... omitted
}

Terragrunt and variable "..." { type = any }

There is a bug (#1211) in Terragrunt related to the way variables of type any are passed to Terraform.

This module solves this issue by supporting jsonencode()-string in addition to the expected type (list or map).

In terragrunt.hcl you can write:

inputs = {
  bucket    = "foobar"            # `bucket` has type `string`, no need to jsonencode()
  cors_rule = jsonencode([...])   # `cors_rule` has type `any`, so `jsonencode()` is required
}

Module wrappers

Users of this Terraform module can create multiple similar resources by using the for_each meta-argument within a module block, which became available in Terraform 0.13.
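
For example, a minimal sketch of one bucket per suffix (the names here are hypothetical):

module "s3_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"

  # Creates one instance of the module per set element (Terraform 0.13+)
  for_each = toset(["assets", "logs"])

  bucket = "my-app-${each.key}"
}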

Users of Terragrunt can achieve similar results by using modules provided in the wrappers directory, if they prefer to reduce the number of configuration files.

Examples:

Requirements

Name Version
terraform >= 1.0
aws >= 5.27

Providers

Name Version
aws >= 5.27

Modules

No modules.

Resources

Name Type
aws_s3_bucket.this resource
aws_s3_bucket_accelerate_configuration.this resource
aws_s3_bucket_acl.this resource
aws_s3_bucket_analytics_configuration.this resource
aws_s3_bucket_cors_configuration.this resource
aws_s3_bucket_intelligent_tiering_configuration.this resource
aws_s3_bucket_inventory.this resource
aws_s3_bucket_lifecycle_configuration.this resource
aws_s3_bucket_logging.this resource
aws_s3_bucket_metric.this resource
aws_s3_bucket_object_lock_configuration.this resource
aws_s3_bucket_ownership_controls.this resource
aws_s3_bucket_policy.this resource
aws_s3_bucket_public_access_block.this resource
aws_s3_bucket_replication_configuration.this resource
aws_s3_bucket_request_payment_configuration.this resource
aws_s3_bucket_server_side_encryption_configuration.this resource
aws_s3_bucket_versioning.this resource
aws_s3_bucket_website_configuration.this resource
aws_caller_identity.current data source
aws_canonical_user_id.this data source
aws_iam_policy_document.access_log_delivery data source
aws_iam_policy_document.combined data source
aws_iam_policy_document.deny_incorrect_encryption_headers data source
aws_iam_policy_document.deny_incorrect_kms_key_sse data source
aws_iam_policy_document.deny_insecure_transport data source
aws_iam_policy_document.deny_unencrypted_object_uploads data source
aws_iam_policy_document.elb_log_delivery data source
aws_iam_policy_document.inventory_and_analytics_destination_policy data source
aws_iam_policy_document.lb_log_delivery data source
aws_iam_policy_document.require_latest_tls data source
aws_partition.current data source
aws_region.current data source

Inputs

Name Description Type Default Required
acceleration_status (Optional) Sets the accelerate configuration of an existing bucket. Can be Enabled or Suspended. string null no
access_log_delivery_policy_source_accounts (Optional) List of AWS Account IDs that should be allowed to deliver access logs to this bucket. list(string) [] no
access_log_delivery_policy_source_buckets (Optional) List of S3 bucket ARNs which should be allowed to deliver access logs to this bucket. list(string) [] no
acl (Optional) The canned ACL to apply. Conflicts with grant string null no
allowed_kms_key_arn The ARN of KMS key which should be allowed in PutObject string null no
analytics_configuration Map containing bucket analytics configuration. any {} no
analytics_self_source_destination Whether or not the analytics source bucket is also the destination bucket. bool false no
analytics_source_account_id The analytics source account id. string null no
analytics_source_bucket_arn The analytics source bucket ARN. string null no
attach_access_log_delivery_policy Controls if S3 bucket should have S3 access log delivery policy attached bool false no
attach_analytics_destination_policy Controls if S3 bucket should have bucket analytics destination policy attached. bool false no
attach_deny_incorrect_encryption_headers Controls if S3 bucket should have the deny incorrect encryption headers policy attached. bool false no
attach_deny_incorrect_kms_key_sse Controls if S3 bucket policy should deny usage of incorrect KMS key SSE. bool false no
attach_deny_insecure_transport_policy Controls if S3 bucket should have deny non-SSL transport policy attached bool false no
attach_deny_unencrypted_object_uploads Controls if S3 bucket should have the deny unencrypted object uploads policy attached. bool false no
attach_elb_log_delivery_policy Controls if S3 bucket should have ELB log delivery policy attached bool false no
attach_inventory_destination_policy Controls if S3 bucket should have bucket inventory destination policy attached. bool false no
attach_lb_log_delivery_policy Controls if S3 bucket should have ALB/NLB log delivery policy attached bool false no
attach_policy Controls if S3 bucket should have bucket policy attached (set to true to use value of policy as bucket policy) bool false no
attach_public_policy Controls if a user defined public bucket policy will be attached (set to false to allow upstream to apply defaults to the bucket) bool true no
attach_require_latest_tls_policy Controls if S3 bucket should require the latest version of TLS bool false no
block_public_acls Whether Amazon S3 should block public ACLs for this bucket. bool true no
block_public_policy Whether Amazon S3 should block public bucket policies for this bucket. bool true no
bucket (Optional, Forces new resource) The name of the bucket. If omitted, Terraform will assign a random, unique name. string null no
bucket_prefix (Optional, Forces new resource) Creates a unique bucket name beginning with the specified prefix. Conflicts with bucket. string null no
control_object_ownership Whether to manage S3 Bucket Ownership Controls on this bucket. bool false no
cors_rule List of maps containing rules for Cross-Origin Resource Sharing. any [] no
create_bucket Controls if S3 bucket should be created bool true no
expected_bucket_owner The account ID of the expected bucket owner string null no
force_destroy (Optional, Default: false) A boolean that indicates all objects should be deleted from the bucket so that the bucket can be destroyed without error. These objects are not recoverable. bool false no
grant An ACL policy grant. Conflicts with acl any [] no
ignore_public_acls Whether Amazon S3 should ignore public ACLs for this bucket. bool true no
intelligent_tiering Map containing intelligent tiering configuration. any {} no
inventory_configuration Map containing S3 inventory configuration. any {} no
inventory_self_source_destination Whether or not the inventory source bucket is also the destination bucket. bool false no
inventory_source_account_id The inventory source account id. string null no
inventory_source_bucket_arn The inventory source bucket ARN. string null no
lifecycle_rule List of maps containing configuration of object lifecycle management. any [] no
logging Map containing access bucket logging configuration. any {} no
metric_configuration Map containing bucket metric configuration. any [] no
object_lock_configuration Map containing S3 object locking configuration. any {} no
object_lock_enabled Whether S3 bucket should have an Object Lock configuration enabled. bool false no
object_ownership Object ownership. Valid values: BucketOwnerEnforced, BucketOwnerPreferred or ObjectWriter. 'BucketOwnerEnforced': ACLs are disabled, and the bucket owner automatically owns and has full control over every object in the bucket. 'BucketOwnerPreferred': Objects uploaded to the bucket change ownership to the bucket owner if the objects are uploaded with the bucket-owner-full-control canned ACL. 'ObjectWriter': The uploading account will own the object if the object is uploaded with the bucket-owner-full-control canned ACL. string "BucketOwnerEnforced" no
owner Bucket owner's display name and ID. Conflicts with acl map(string) {} no
policy (Optional) A valid bucket policy JSON document. Note that if the policy document is not specific enough (but still valid), Terraform may view the policy as constantly changing in a terraform plan. In this case, please make sure you use the verbose/specific version of the policy. For more information about building AWS IAM policy documents with Terraform, see the AWS IAM Policy Document Guide. string null no
putin_khuylo Do you agree that Putin doesn't respect Ukrainian sovereignty and territorial integrity? More info: https://en.wikipedia.org/wiki/Putin_khuylo! bool true no
replication_configuration Map containing cross-region replication configuration. any {} no
request_payer (Optional) Specifies who should bear the cost of Amazon S3 data transfer. Can be either BucketOwner or Requester. By default, the owner of the S3 bucket would incur the costs of any data transfer. See Requester Pays Buckets developer guide for more information. string null no
restrict_public_buckets Whether Amazon S3 should restrict public bucket policies for this bucket. bool true no
server_side_encryption_configuration Map containing server-side encryption configuration. any {} no
tags (Optional) A mapping of tags to assign to the bucket. map(string) {} no
versioning Map containing versioning configuration. map(string) {} no
website Map containing static web-site hosting or redirect configuration. any {} no

Outputs

Name Description
s3_bucket_arn The ARN of the bucket. Will be of format arn:aws:s3:::bucketname.
s3_bucket_bucket_domain_name The bucket domain name. Will be of format bucketname.s3.amazonaws.com.
s3_bucket_bucket_regional_domain_name The bucket region-specific domain name. The bucket domain name including the region name, please refer here for format. Note: The AWS CloudFront allows specifying S3 region-specific endpoint when creating S3 origin, it will prevent redirect issues from CloudFront to S3 Origin URL.
s3_bucket_hosted_zone_id The Route 53 Hosted Zone ID for this bucket's region.
s3_bucket_id The name of the bucket.
s3_bucket_lifecycle_configuration_rules The lifecycle rules of the bucket, if the bucket is configured with lifecycle rules. If not, this will be an empty string.
s3_bucket_policy The policy of the bucket, if the bucket is configured with a policy. If not, this will be an empty string.
s3_bucket_region The AWS region this bucket resides in.
s3_bucket_website_domain The domain of the website endpoint, if the bucket is configured with a website. If not, this will be an empty string. This is used to create Route 53 alias records.
s3_bucket_website_endpoint The website endpoint, if the bucket is configured with a website. If not, this will be an empty string.

Authors

Module is maintained by Anton Babenko with help from these awesome contributors.

License

Apache 2 Licensed. See LICENSE for full details.

Additional information for users from Russia and Belarus

terraform-aws-s3-bucket's Issues

s3 bucket already owned by you error

Hello,

I have no buckets in my s3 account. I am running the following code:

module "s3_buckets" {
  source  = "terraform-aws-modules/s3-bucket/aws"

  for_each                       = toset(["alb-logs", "static-assets"])
  bucket                         = "${var.environment}-${var.name}-${each.key}"
  force_destroy                  = true
  attach_elb_log_delivery_policy = true
  block_public_policy            = true
  block_public_acls              = true
  create_bucket                  = true

  tags = {
    Name           = "${var.environment}-${var.name}-${each.key}"
    environment    = var.environment
    application-id = var.application_id
    service-type   = "s3-${each.key}"
  }
}

The bucket creation is successful when it runs; however, the apply never finishes and errors out halfway through. It throws this error for both buckets:
Error: Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it.

Is this an issue with my code or the module? Even when I delete both buckets by hand and then rerun this I am getting the same error.

terraform version: Terraform v0.13.4

provider "aws" {
region = var.aws-region
version = "3.5"
}

Bug on s3_bucket_id output


Versions

  • Terraform:

Terraform v0.15.0

  • Provider(s):

provider registry.terraform.io/hashicorp/aws v3.37.0

  • Module:
module "s3_static_site" {
 source  = "terraform-aws-modules/s3-bucket/aws"
 version = "~> 2.0"
...
}

Reproduction

Steps to reproduce the behavior:

data "aws_iam_policy_document" "cdn_s3_policy" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["${module.s3_static_site.s3_bucket_arn}/*"]

    principals {
      type        = "AWS"
      identifiers = module.cdn.cloudfront_origin_access_identity_iam_arns
    }
  }
}

resource "aws_s3_bucket_policy" "cdn_bucket_policy" {
  bucket = module.s3_static_site.s3_bucket_id
  policy = data.aws_iam_policy_document.cdn_s3_policy.json
}

The actual error during apply attempt:

│
│   on static_site.tf line 77, in resource "aws_s3_bucket_policy" "cdn_bucket_policy":
│   77:   bucket = module.s3_static_site.s3_bucket_id
│     ├────────────────
│     │ module.s3_static_site.s3_bucket_id is a object, known only after apply
│
│ Inappropriate value for attribute "bucket": string required.

Expected behavior

The name of the bucket should be a string.

Additional context

It should be:

output "s3_bucket_id" {
  description = "The name of the bucket."
  value       = element(concat(aws_s3_bucket.this.*.id, [""]), 0)
}

Create branch tf_0.12

Please, create a branch to keep the module files related to the Terraform 0.12 version.

CORS rules fail if you don't specify all parameters

If you don't specify all fields (even though some of them are optional) for the CORS rule structure, plan fails with:

 terraform plan -var-file main.tfvars -out plan

Error: Invalid value for module argument

  on main.tf line 125, in module "s3-dev":
 125:   cors_rules = [
 126:     {
 127:       allowed_origins = ["http://localhost/"]
 128:       allowed_methods = ["GET", "PUT", "POST"]
 129:       allowed_headers = ["Authorization", "x-amz-date", "x-amz-content-sha256", "content-type", "content-disposition"]
 130:       expose_headers  = ["ETag"]
 131:       max_age_seconds = 3000
 132:     },
 133:     {
 134:       allowed_origins = ["*"]
 135:       allowed_methods = ["GET"]
 136:       max_age_seconds = 3000
 137:     }
 138:   ]

The given value is not suitable for child module variable "cors_rules" defined
at ../../modules/s3/variables.tf:247,1-22: all list elements must have the
same type.

main.tf

locals {
  bucket_name = "s3-bucket-${random_pet.this.id}"
}

data "aws_canonical_user_id" "current" {}

resource "random_pet" "this" {
  length = 2
}

module "s3_bucket" {
  source = "../../"

  bucket        = local.bucket_name
  acl           = "private"
  force_destroy = true

  cors_rule = [
    {
      allowed_methods = ["PUT", "POST"]
      allowed_origins = ["https://modules.tf", "https://terraform-aws-modules.modules.tf"]
      allowed_headers = ["*"]
      expose_headers  = ["ETag"]
      max_age_seconds = 3000
      }, {
      allowed_methods = ["PUT"]
      allowed_origins = ["https://example.com"]
      allowed_headers = ["*"]
      max_age_seconds = 3000
    }
  ]
}

An obvious (a little bit ugly, though) solution would be specifying max_age_seconds as a list and using only its first element:

module/s3/main.tf

...
dynamic "cors_rule" {
    for_each = toset(var.cors_rules)
    content {
      allowed_methods = cors_rule.value.allowed_methods
      allowed_origins = cors_rule.value.allowed_origins
      allowed_headers = lookup(cors_rule.value, "allowed_headers", null)
      expose_headers  = lookup(cors_rule.value, "expose_headers", null)
      max_age_seconds = lookup(cors_rule.value, "max_age_seconds", null)[0]
    }
  }
...

main.tf

cors_rules = [
    {
      allowed_origins = ["https://prototype.mddxtap.com/"]
      allowed_methods = ["GET", "PUT", "POST"]
      allowed_headers = ["Authorization", "x-amz-date", "x-amz-content-sha256", "content-type", "content-disposition"]
      expose_headers  = ["ETag"]
      max_age_seconds = [3000]
    },
    {
      allowed_origins = ["*"]
      allowed_methods = ["GET"]
      max_age_seconds = [3000]
    }
  ]
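
An alternative workaround that needs no module change is to give every list element the same set of keys, since Terraform requires all elements of an any-typed list to share one type; a sketch:

cors_rules = [
    {
      allowed_origins = ["http://localhost/"]
      allowed_methods = ["GET", "PUT", "POST"]
      allowed_headers = ["Authorization", "x-amz-date", "x-amz-content-sha256", "content-type", "content-disposition"]
      expose_headers  = ["ETag"]
      max_age_seconds = 3000
    },
    {
      allowed_origins = ["*"]
      allowed_methods = ["GET"]
      allowed_headers = []   # added so both elements have identical keys
      expose_headers  = []   # added so both elements have identical keys
      max_age_seconds = 3000
    }
  ]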

Inconsistent final plan cty.StringVal

Error: Provider produced inconsistent final plan

When expanding the plan for
module.base.module.ceramic.module.alb_node.aws_lb.this[0] to include new
values learned so far during apply, provider
"registry.terraform.io/hashicorp/aws" produced an invalid new value for
.access_logs[0].bucket: was cty.StringVal(""), but now
cty.StringVal("ceramic-tnet-alb-node.log").

This is a bug in the provider, which should be reported in the provider's own
issue tracker.

Fix notification for new module releases

Is your request related to a new offering from AWS?

No

Is your request related to a problem? Please describe.

Hi. We configured a GitHub custom notification to get an email when a new module version is released. See screenshot:

(screenshot omitted)

And this notification works for the eks module (as I understand) because, when a new version is released, @barryib creates a GitHub release, so it looks like this: https://github.com/terraform-aws-modules/terraform-aws-eks/releases
For other module repos (iam, vpc, cloudfront) there aren't really GitHub releases created, just tags, and that way GitHub doesn't send notifications (as I think). I checked that all repos' notifications are configured exactly as for the eks repo.

Describe the solution you'd like.

My question is mostly to @antonbabenko. Is it possible to create GitHub releases every time you release a new module version, to give the opportunity to get GitHub notifications, please?

Thanks for any explanation of why it works for the eks module and doesn't work for others.

Grants have an issue.

Hi @antonbabenko, while using grants like this, as per the example:

  grant = [{
    type        = "CanonicalUser"
    permissions = ["FULL_CONTROL"]
    id          = data.aws_canonical_user_id.current.id
  }, {
    type        = "Group"
    permissions = ["READ_ACP", "WRITE"]
    uri          = "http://acs.amazonaws.com/groups/s3/LogDelivery"
  }]


I am getting the below error:

  12:   grant = [{
  13:     type        = "CanonicalUser"
  14:     permissions = ["FULL_CONTROL"]
  15:     id          = data.aws_canonical_user_id.current.id
  16:   }, {
  17:     type        = "Group"
  18:     permissions = ["READ_ACP", "WRITE"]
  19:     uri          = "http://acs.amazonaws.com/groups/s3/LogDelivery"
  20:   }]

The given value is not suitable for child module variable "grant" defined at
.terraform/modules/s3.example-infrastructure/variables.tf:97,1-17: all list
elements must have the same type.

Feature request: allow non-AWS S3 endpoints

In the meantime, the AWS S3 interface has also been implemented by other vendors.

That is, an S3 integration should not hardcode the service endpoint.

As it seems from the parametrization specs in the documentation, there is no possibility to use a custom endpoint.

If so, please implement this feature.
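
For reference, a custom endpoint is normally configured on the AWS provider rather than in this module; a hedged sketch (the vendor URL is a placeholder):

provider "aws" {
  # Point the S3 API at an S3-compatible service (placeholder URL)
  endpoints {
    s3 = "https://s3.example-vendor.com"
  }

  # Many S3-compatible services need path-style addressing
  s3_use_path_style = true
}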

Amazon S3 Bucket Policy for CloudTrail

Is your request related to a problem? Please describe.

I'd like to easily attach the policy described here.

Describe the solution you'd like.

An attach_cloudtrail_policy bool var that adds the following policy and attaches it to the bucket.

data "aws_iam_policy_document" "cloudtrail_policy" {
  statement {
    sid    = "AWSCloudTrailAclCheck20150319"
    effect = "Allow"

    actions = [
      "s3:GetBucketAcl",
    ]

    resources = [
      aws_s3_bucket.this.arn,
    ]

    principals {
      type        = "Service"
      identifiers = ["cloudtrail.amazonaws.com"]
    }
  }

  statement {
    sid    = "AWSCloudTrailWrite20150319"
    effect = "Allow"

    actions = [
      "s3:PutObject",
    ]

    resources = [
      "${aws_s3_bucket.this.arn}/AWSLogs/${data.aws_caller_identity.current.account_id}/*",
    ]

    principals {
      type        = "Service"
      identifiers = ["cloudtrail.amazonaws.com"]
    }

    condition {
      test     = "StringEquals"
      variable = "s3:x-amz-acl"
      values = [
        "bucket-owner-full-control"
      ]
    }
  }
}
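
Until such a flag exists, a policy like this can be attached through the module's generic attach_policy/policy inputs (see Inputs above); a minimal sketch:

module "cloudtrail_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"

  bucket = "my-cloudtrail-bucket"

  attach_policy = true
  policy        = data.aws_iam_policy_document.cloudtrail_policy.json
  # Note: outside the module, the policy document above would reference the
  # bucket ARN directly (e.g. arn:aws:s3:::my-cloudtrail-bucket) rather than
  # aws_s3_bucket.this.arn, to avoid a module-level cycle.
}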

Set object locking conditionally

I'm not able to set object locking conditionally (i.e. based on the value of a variable). When disabling object locking, I set the input variable object_lock_configuration to an empty map {}. However, that results in a type error.

Error: Inconsistent conditional result types

  on ../../main.tf line 79, in module "bucket":
  79:   object_lock_configuration = var.enable_object_locking ? {
  80:     object_lock_enabled = "Enabled"
  81:     rule                = {
  82:       default_retention = {
  83:         mode = var.object_retention_mode
  84:         days = 1
  85:       }
  86:     }
  87:   } : {}

The true and false result expressions must have consistent types. The given
expressions are object and object, respectively.

Similar issues have been reported upstream hashicorp/terraform#22405
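
One common workaround (a sketch, not from the module docs) is to push both branches through jsonencode(), so the conditional compares two strings, then jsondecode() the result:

object_lock_configuration = jsondecode(var.enable_object_locking ? jsonencode({
  object_lock_enabled = "Enabled"
  rule = {
    default_retention = {
      mode = var.object_retention_mode
      days = 1
    }
  }
}) : jsonencode({}))  # both branches are strings, so the types are consistent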

Lifecycle rule for all objects does not work

Description

I'm trying to create a bucket with a lifecycle rule to transition all objects after 0 days to Deep Archive. The rule does not work when I create it with Terraform. When I create the rule in the AWS console it has a different Filter output than the one created with Terraform.

Versions

  • Terraform:

Terraform v1.0.4
on darwin_amd64

  • Provider(s): provider registry.terraform.io/hashicorp/aws v3.55.0
  • Module: AWS S3 bucket Terraform module

Reproduction

Steps to reproduce the behavior:
Terraform used to create bucket:

resource "aws_s3_bucket" "my-test-bucketname-for-infra-versions-2" {
  bucket = "my-test-bucketname-for-infra-versions-2"
  lifecycle_rule {
    id      = "Move to deep glacier after upload"
    enabled = true

    transition {
      days          = 0
      storage_class = "DEEP_ARCHIVE"
    }
  }
}

Run apply:
./terraform -chdir=projects/test_bucket apply

View s3 rule:
aws s3api get-bucket-lifecycle-configuration --bucket my-test-bucketname-for-infra-versions-2

Output from that command:

{
    "Rules": [
        {
            "ID": "Move to deep glacier after upload",
            "Filter": {
                "Prefix": ""
            },
            "Status": "Enabled",
            "Transitions": [
                {
                    "Days": 0,
                    "StorageClass": "DEEP_ARCHIVE"
                }
            ]
        }
    ]
}

This rule does not work. Objects in the bucket are not transitioned to Deep Archive storage class. I've had the rule in place for over 48 hours.

I've talked with AWS support and they said that when they create a rule for all objects the Filter field is empty and that the rule then works.

If I create a rule to apply to all objects via the AWS console and then run this command:
aws s3api get-bucket-lifecycle-configuration --bucket my-test-bucketname-for-infra-versions-2

I get different output:

{
    "Rules": [
        {
            "ID": "Move to deep glacier after upload",
            "Filter": {},
            "Status": "Enabled",
            "Transitions": [
                {
                    "Days": 0,
                    "StorageClass": "DEEP_ARCHIVE"
                }
            ]
        }
    ]
}

Even though I have not specified the optional prefix parameter in the Terraform config, it still creates the Prefix field inside Filter, which should be empty like the one created via the console.

Is there a way I should be specifying a lifecycle rule to apply to all objects?

Terraform error: This object does not have an attribute named error but value is there

Hello there,

I am facing a strange issue with Terraform. The error is listed below:

Module 1: (screenshot omitted)

However, the output is clearly called out:

Module 2: (screenshot omitted)

To give a little context, the Module 1 error takes place during the apply stage. Module 1 calls Module 2, and the output value is clearly listed in Module 2.

The output value in Module 1 is as such:

(screenshot omitted)

Any idea what the issue is?

S3 Versioning set to false and object_lock_configuration set to Enabled = bucket creation fail

Description

If versioning is false and object lock is enabled, Terraform/AWS will complain that it cannot change the versioning status because the lock is in place. Commenting out the versioning block fixes the issue.

Versions

  • Terraform: v0.14.7 and Terraform 0.15.1

  • Terragrunt v0.28.7 and Terragrunt 0.29.1

  • AMD64 arch

  • provider registry.terraform.io/hashicorp/aws v3.37.0

  • terraform-aws-modules/s3-bucket/aws 2.1.0

Reproduction

  1. Have a versioning block with an object_lock
╷
│ Error: Error putting S3 versioning: InvalidBucketState: An Object Lock configuration is present on this bucket, so the versioning state cannot be changed.
│ 	status code: 409, request id: 7309TRQPXQRXJZKH, host id: DaKsKeOtji12fJgyyh4Z/CEZAAX2oVIhW4tGbi3G+2mlJWD6I96qmF2WTWHZdE4n48wsMSfVt5k=
│
│   with module.s3_bucket.aws_s3_bucket.this[0],
│   on .terraform/modules/s3_bucket/main.tf line 5, in resource "aws_s3_bucket" "this":
│    5: resource "aws_s3_bucket" "this" {
│

Code Snippet to Reproduce

We use Terragrunt as an interface to Terraform, but I am mainly just passing the variables from Terragrunt to Terraform, so that would be:

  versioning = {                                                                                                                                                                                                      
    enabled = false
  }

  [..]

  object_lock_configuration = {
    object_lock_enabled = "Enabled"
    rule = {
      default_retention = {
        mode = "GOVERNANCE", 
        days = "1" 
      }
    }
  }

However, below is how I called the s3 module, with:

  • var.versioning = false
  • and var.object_lock_configuration like the above
module "s3_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"

  create_bucket = var.create_bucket
  tags          = var.tags
  bucket        = var.bucket
  bucket_prefix = var.bucket_prefix
  acl           = var.acl
  attach_policy = var.attach_policy
  policy        = var.policy

  versioning = {                ### Commenting this block
    enabled = var.versioning    ### allows the S3 bucket
  }                             ### creation to go through.

  lifecycle_rule = var.lifecycle_rule

  # S3 bucket-level Public Access Block configuration
  block_public_acls       = var.block_public_acls
  block_public_policy     = var.block_public_policy
  ignore_public_acls      = var.ignore_public_acls
  restrict_public_buckets = var.restrict_public_buckets

  # Encryption by default
  server_side_encryption_configuration = {
    rule = {
      apply_server_side_encryption_by_default = {
        sse_algorithm = "AES256"
      }
    }
  }
  object_lock_configuration = var.object_lock_configuration
}

Expected behavior

I expect the bucket to be created successfully with versioning set to false (instead of being absent) and obviously the lock enabled.

Actual behavior

Impossible to create a bucket with object_lock_configuration and versioning set to false without an error.

Workaround

Comment out the versioning {} block; but unfortunately we use Terragrunt to reuse this s3 module and feed it variables. We don't want to create separate Terragrunt modules, one with the versioning variable and no object_lock, and one without versioning but with object_lock. It's possible but annoying.
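
Since S3 requires versioning whenever Object Lock is configured, a hedged alternative to commenting the block out is deriving the flag:

  versioning = {
    # Object Lock requires versioning, so force it on when a lock is configured
    enabled = var.versioning || length(keys(var.object_lock_configuration)) > 0
  }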

Additional context

My terragrunt output showing the plan:

$ terragrunt apply

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.s3_bucket.aws_s3_bucket.this[0] will be created
  + resource "aws_s3_bucket" "this" {
      + acceleration_status         = (known after apply)
      + acl                         = "private"
      + arn                         = (known after apply)
      + bucket                      = "da-broken-bucket"
      + bucket_domain_name          = (known after apply)
      + bucket_regional_domain_name = (known after apply)
      + force_destroy               = false
      + hosted_zone_id              = (known after apply)
      + id                          = (known after apply)
      + region                      = (known after apply)
      + request_payer               = (known after apply)
      + tags                        = {
          + "Environment" = "Development"
        }
      + website_domain              = (known after apply)
      + website_endpoint            = (known after apply)

      + lifecycle_rule {
          + abort_incomplete_multipart_upload_days = 7
          + enabled                                = true
          + id                                     = "Delete logs after X days"

          + expiration {
              + days = 30
            }
        }
      + lifecycle_rule {
          + abort_incomplete_multipart_upload_days = 7
          + enabled                                = true
          + id                                     = "Move files to lesser tier after X days"

          + transition {
              + days          = 180
              + storage_class = "GLACIER"
            }
          + transition {
              + days          = 60
              + storage_class = "STANDARD_IA"
            }
        }

      + object_lock_configuration {
          + object_lock_enabled = "Enabled"

          + rule {
              + default_retention {
                  + days = 1
                  + mode = "GOVERNANCE"
                }
            }
        }

      + server_side_encryption_configuration {
          + rule {
              + apply_server_side_encryption_by_default {
                  + sse_algorithm = "AES256"
                }
            }
        }

      + versioning {
          + enabled    = false
          + mfa_delete = false
        }
    }

  # module.s3_bucket.aws_s3_bucket_public_access_block.this[0] will be created
  + resource "aws_s3_bucket_public_access_block" "this" {
      + block_public_acls       = true
      + block_public_policy     = true
      + bucket                  = (known after apply)
      + id                      = (known after apply)
      + ignore_public_acls      = true
      + restrict_public_buckets = true
    }

Plan: 2 to add, 0 to change, 0 to destroy.
Changes to Outputs:
  + s3_bucket_arn  = (known after apply)
  + s3_bucket_name = (known after apply)

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

module.s3_bucket.aws_s3_bucket.this[0]: Creating...
╷
│ Error: Error putting S3 versioning: InvalidBucketState: An Object Lock configuration is present on this bucket, so the versioning state cannot be changed.
│ 	status code: 409, request id: 7309TRQPXQRXJZKH, host id: DaKsKeOtji12fJgyyh4Z/CEZAAX2oVIhW4tGbi3G+2mlJWD6I96qmF2WTWHZdE4n48wsMSfVt5k=
│
│   with module.s3_bucket.aws_s3_bucket.this[0],
│   on .terraform/modules/s3_bucket/main.tf line 5, in resource "aws_s3_bucket" "this":
│    5: resource "aws_s3_bucket" "this" {
│
╵
ERRO[0020] Hit multiple errors:
Hit multiple errors:
exit status 1 

An argument named "source_policy_documents" is not expected here.

Description

Starting with v1.23 and #77, there is an issue with the secure transport policy, as seen below:

Error: Unsupported argument

  on .terraform/modules/dynamodb_backup.secondary_dynamodb_backup_bucket/main.tf line 247, in data "aws_iam_policy_document" "combined":
 247:   source_policy_documents = compact([

An argument named "source_policy_documents" is not expected here.

Versions

  • Terraform:
    0.14.9
  • Provider(s):
    3.36.0
  • Module:
    1.23/1.24

Code Snippet to Reproduce

data "aws_iam_policy_document" "service_reports_artifact_bucket" {
  statement {
    sid    = "DenyNonSecureTransport"
    effect = "Deny"
    actions = [
      "s3:*"
    ]

    resources = [
      module.service_reports_artifact_bucket.this_s3_bucket_arn,
      "${module.service_reports_artifact_bucket.this_s3_bucket_arn}/*",
    ]

    principals {
      type        = "AWS"
      identifiers = ["*"]
    }

    condition {
      test     = "Bool"
      variable = "aws:SecureTransport"
      values = [
        "false"
      ]
    }
  }
}

module "service_reports_artifact_bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "~> 1.22"

  bucket = "service-reports-artifacts-something-random"

  attach_policy = true
  policy        = data.aws_iam_policy_document.service_reports_artifact_bucket.json

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true

  server_side_encryption_configuration = {
    rule = {
      apply_server_side_encryption_by_default = {
        sse_algorithm = "AES256"
      }
    }
  }

  lifecycle_rule = [
    {
      id      = "all"
      enabled = true

      expiration = {
        days = 30
      }

      noncurrent_version_expiration = {
        days = 5
      }
    }
  ]

  tags = module.tags.tags
}

Expected behavior

  • bucket should be provisioned without issue

Actual behavior

  • see above

[question] private is not real private access

I applied the s3 bucket change with this module with the option:

    acl    = "private"

But the bucket's actual access status is "Objects can be public", whereas I want the bucket to be "Buckets and objects not public".

What option should I go with?
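
For what it's worth, the "Buckets and objects not public" status usually comes from the Block Public Access settings rather than the ACL; a minimal sketch using this module's inputs (which default to true in recent versions, see Inputs above):

module "s3_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"

  bucket = "my-private-bucket"
  acl    = "private"

  # S3 Block Public Access settings
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}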

reference:

https://docs.aws.amazon.com/AmazonS3/latest/user-guide/block-public-access.html

Viewing Access Status

The list buckets view shows whether your bucket is publicly accessible. Amazon S3 labels the permissions for a bucket as follows:

Public – Everyone has access to one or more of the following: List objects, Write objects, Read and write permissions.

Objects can be public – The bucket is not public, but anyone with the appropriate permissions can grant public access to objects.

Buckets and objects not public – The bucket and objects do not have any public access.

Only authorized users of this account – Access is isolated to IAM users and roles in this account and AWS service principals because there is a policy that grants public access.

Going through this URL, it seems none of the canned ACLs is suitable:

https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl

Help: How to import with this module.

With this code, how can I import my existing resource...?

module "s3_bucket_example-core-binaries" {
  source = "terraform-aws-modules/s3-bucket/aws"
  version = "~> 1.9.0"

  create_bucket = var.environment == "PRO" ? true : false

  bucket = "example-core-binaries"
  acl    = "private"
}

I tried...

$ terraform.exe import module.s3_bucket_example-core-binaries example-core-binaries

and

$ terraform.exe import module.s3_bucket_example-core-binaries.this[0] example-core-binaries

Thanks
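
For reference, the import address must point at the resource inside the module; a hedged sketch, assuming the module's internal aws_s3_bucket.this resource (see the Resources list above):

$ terraform import 'module.s3_bucket_example-core-binaries.aws_s3_bucket.this[0]' example-core-binaries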

Replication rule -- replicate all content

Hi team!

I was just wondering what the code is if I want to replicate everything in the bucket (not requiring a prefix)?
Right now I have the following setup:
replication_configuration = {
  role = aws_iam_role.replication.arn
  rules = [
    {
      id       = "ReplicateRule"
      status   = "Enabled"
      priority = 10

      source_selection_criteria = {
        sse_kms_encrypted_objects = {
          enabled = true
        }
      }

      destination = {
        bucket             = module.cross_region_replicated_bucket.this_s3_bucket_arn
        storage_class      = "STANDARD"
        replica_kms_key_id = data.aws_kms_key.mykms_key.arn
        account_id         = data.aws_caller_identity.current.account_id
        access_control_translation = {
          owner = "Destination"
        }
      }
    }
  ]
}

However, it does not seem to be replicating to the destination bucket. Is it because I need to specify prefix = ""?
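
One commonly suggested tweak (a sketch, untested here) is giving the rule an explicit empty prefix so it matches every object:

replication_configuration = {
  role = aws_iam_role.replication.arn
  rules = [
    {
      id       = "ReplicateRule"
      status   = "Enabled"
      priority = 10
      prefix   = ""  # assumption: an empty prefix applies the rule to all objects

      destination = {
        bucket        = module.cross_region_replicated_bucket.this_s3_bucket_arn
        storage_class = "STANDARD"
      }
    }
  ]
}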

Issue: "Map has no element for required attribute 'sqs1'"


Versions

  • Terraform: v0.14.9
  • Provider(s):
    aws = ">= 3.28" random = ">= 2.0" null = ">= 2.0"
  • Module: terraform-aws-modules/terraform-aws-s3-bucket

Reproduction

Steps to reproduce the behavior:

git clone on this repo
terraform init in "examples/notification"
terraform plan

Code Snippet to Reproduce

examples/notification/main.tf: (screenshot omitted)
modules/notification/main.tf: (screenshot omitted)

Expected behavior

I expected a working example (I would have had no problem fixing version issues, like with the EKS module).

Actual behavior

It failed when executing the terraform plan command because the example has an empty map in a ternary operator.

Version v0.1.0 removed from terraform registry

Hi!

Yesterday my pipeline crashed due to a deleted tag of this module in the terraform registry.
We've fixed this by using a newer version, but to check if this is a common situation:

Does it happen more often that module tags of this and other terraform-aws-modules repositories are removed from the terraform registry?

No versioning breaks runs

├── provider.aws
├── module.access
│   ├── provider.aws (inherited)
│   ├── module.jenkins
│   │   ├── provider.aws (inherited)
│   │   └── provider.random
│   ├── module.jenkins-alb
│   │   ├── provider.aws (inherited)
│   │   ├── module.alb
│   │   └── provider.aws ~> 2.54
│   │   ├── module.alb_4xx_errors
│   │   └── provider.aws (inherited)
│   │   ├── module.alb_request_count
│   │   └── provider.aws (inherited)
│   │   └── module.alb_response_time
│   │   └── provider.aws (inherited)
│   ├── module.key
│   │   ├── provider.aws (inherited)
│   │   └── provider.tls
│   ├── module.openvpn
│   │   ├── provider.acme
│   │   ├── provider.aws (inherited)
│   │   ├── provider.null
│   │   ├── provider.random
│   │   ├── provider.tls
│   │   ├── module.openvpn_cpu
│   │   └── provider.aws (inherited)
│   │   └── module.openvpn_status
│   │   └── provider.aws (inherited)
│   └── module.vpc
│   └── provider.aws >= 2.57, < 4.0
├── module.config_global
│   ├── provider.aws (inherited)
│   ├── provider.random
│   └── module.config_bucket
│   └── provider.aws >= 3.0, < 4.0
├── module.config_regional
│   └── provider.aws (inherited)
├── module.dev
│   ├── provider.aws (inherited)
│   ├── module.key
│   ├── provider.aws (inherited)
│   └── provider.tls
│   ├── module.rds
│   ├── provider.aws (inherited)
│   ├── provider.random
│   └── module.this
│   ├── provider.aws >= 2.49, < 4.0
│   ├── module.db_instance
│   └── provider.aws (inherited)
│   ├── module.db_option_group
│   └── provider.aws (inherited)
│   ├── module.db_parameter_group
│   └── provider.aws (inherited)
│   └── module.db_subnet_group
│   └── provider.aws (inherited)
│   ├── module.redis
│   └── provider.aws (inherited)
│   ├── module.test-alb
│   ├── provider.aws (inherited)
│   ├── module.alb
│   └── provider.aws ~> 2.54
│   ├── module.alb_4xx_errors
│   └── provider.aws (inherited)
│   ├── module.alb_request_count
│   └── provider.aws (inherited)
│   └── module.alb_response_time
│   └── provider.aws (inherited)
│   ├── module.test-instance
│   ├── provider.aws (inherited)
│   └── module.cloudwatch_asg_cpu
│   └── provider.aws (inherited)
│   ├── module.vpc
│   └── provider.aws >= 2.57, < 4.0
│   ├── module.web-alb
│   ├── provider.aws (inherited)
│   ├── module.alb
│   └── provider.aws ~> 2.54
│   ├── module.alb_4xx_errors
│   └── provider.aws (inherited)
│   ├── module.alb_request_count
│   └── provider.aws (inherited)
│   └── module.alb_response_time
│   └── provider.aws (inherited)
│   └── module.web-instance
│   ├── provider.aws (inherited)
│   └── module.cloudwatch_asg_cpu
│   └── provider.aws (inherited)
├── module.global_infra
│   ├── provider.aws (inherited)
│   ├── module.administrator_group
│   └── provider.aws >= 2.23, < 4.0
│   ├── module.backup
│   └── provider.aws (inherited)
│   ├── module.billing_group
│   └── provider.aws >= 2.23, < 4.0
│   ├── module.cloudtrail
│   ├── provider.archive
│   ├── provider.aws (inherited)
│   ├── provider.template
│   ├── module.cloudtrail
│   └── provider.aws >= 3.0, < 4.0
│   ├── module.root_user_metric_alarm
│   └── provider.aws (inherited)
│   └── module.root_user_metric_filter
│   └── provider.aws (inherited)
│   ├── module.formpiper-env
│   └── provider.aws >= 3.0, < 4.0
│   └── module.read_only_group
│   └── provider.aws >= 2.23, < 4.0
├── module.guardduty_global
│   ├── provider.aws (inherited)
│   └── module.config_bucket
│   └── provider.aws >= 3.0, < 4.0
├── module.guardduty_regional
│   └── provider.aws (inherited)
├── module.prod
│   ├── provider.aws (inherited)
│   ├── module.key
│   ├── provider.aws (inherited)
│   └── provider.tls
│   ├── module.rds
│   ├── provider.aws (inherited)
│   ├── provider.random
│   └── module.this
│   ├── provider.aws >= 2.49, < 4.0
│   ├── module.db_instance
│   └── provider.aws (inherited)
│   ├── module.db_option_group
│   └── provider.aws (inherited)
│   ├── module.db_parameter_group
│   └── provider.aws (inherited)
│   └── module.db_subnet_group
│   └── provider.aws (inherited)
│   ├── module.redis
│   └── provider.aws (inherited)
│   ├── module.vpc
│   └── provider.aws >= 2.57, < 4.0
│   ├── module.web-alb
│   ├── provider.aws (inherited)
│   ├── module.alb
│   └── provider.aws ~> 2.54
│   ├── module.alb_4xx_errors
│   └── provider.aws (inherited)
│   ├── module.alb_request_count
│   └── provider.aws (inherited)
│   └── module.alb_response_time
│   └── provider.aws (inherited)
│   └── module.web-instance
│   ├── provider.aws (inherited)
│   └── module.cloudwatch_asg_cpu
│   └── provider.aws (inherited)
├── module.regional_infra
│   ├── provider.aws (inherited)
│   ├── module.cloudwatch_sns
│   ├── provider.aws (inherited)
│   └── provider.template
│   └── module.ssm
│   └── provider.aws (inherited)
├── module.security_hub
│   └── provider.aws (inherited)
├── module.stage
│   ├── provider.aws (inherited)
│   ├── module.key
│   ├── provider.aws (inherited)
│   └── provider.tls
│   ├── module.rds
│   ├── provider.aws (inherited)
│   ├── provider.random
│   └── module.this
│   ├── provider.aws >= 2.49, < 4.0
│   ├── module.db_instance
│   └── provider.aws (inherited)
│   ├── module.db_option_group
│   └── provider.aws (inherited)
│   ├── module.db_parameter_group
│   └── provider.aws (inherited)
│   └── module.db_subnet_group
│   └── provider.aws (inherited)
│   ├── module.redis
│   └── provider.aws (inherited)
│   ├── module.vpc
│   └── provider.aws >= 2.57, < 4.0
│   ├── module.web-alb
│   ├── provider.aws (inherited)
│   ├── module.alb
│   └── provider.aws ~> 2.54
│   ├── module.alb_4xx_errors
│   └── provider.aws (inherited)
│   ├── module.alb_request_count
│   └── provider.aws (inherited)
│   └── module.alb_response_time
│   └── provider.aws (inherited)
│   └── module.web-instance
│   ├── provider.aws (inherited)
│   └── module.cloudwatch_asg_cpu
│   └── provider.aws (inherited)
└── module.tgw
└── provider.aws (inherited)

The S3 module now requires AWS provider 3.0 or higher. Many other modules aren't ready for 3.0. Since no version suitable for AWS ~2.0 remains selectable, terraform init is breaking. Please allow for a fix for this.

cors_rule problem

Description

I want to create one S3 bucket with cors_rule information, but I am getting an error.

Versions

  • Terraform: Terraform v0.12.7
  • Terragrunt: terragrunt version v0.28.18
  • Provider(s): no idea, because only terraform and terragrunt were installed using brew install
  • Module: source = "git::git@github.com:terraform-aws-modules/terraform-aws-s3-bucket.git?ref=v1.6.0"

Reproduction

Steps to reproduce the behavior:

  1. create an aws test account
  2. create a user for s3 without any roles assigned, with application access
  3. copy the ARN of the user
  4. save my code and replace the user ARN in ${dependency.apple-iam.outputs.s3-user-ui_arn}
  5. try to run terragrunt apply in . with my code

Code Snippet to Reproduce

terraform {
  source = "git::[email protected]:terraform-aws-modules/terraform-aws-s3-bucket.git?ref=v1.6.0"
}

dependencies {
  paths = ["../aws-data" , "../apple-iam"]
}

dependency "apple-iam" {
  config_path = "../apple-iam"
}

include {
  path = find_in_parent_folders()
}

###########################################################
# View all available inputs for this module:
# https://registry.terraform.io/modules/terraform-aws-modules/s3-bucket/aws/1.6.0?tab=inputs
###########################################################
inputs = {
  # (Optional, Forces new resource) The name of the bucket. If omitted, Terraform will assign a random, unique name.
  # type: string
  bucket = "terraform-apple-ui"
  

  # (Optional) If specified, the AWS region this bucket should reside in. Otherwise, the region used by the callee.
  # type: string
  #region = "eu-central-1"

  block_public_acls = false

  block_public_policy  = false

  ignore_public_acls = false

  restrict_public_buckets = false

  attach_policy = true

  #example of json:
  # jsonencode("${variable}/text bla bla bla") 
   policy = jsonencode({
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "AWS": "${dependency.apple-iam.outputs.s3-user-ui_arn}"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::terraform-apple-ui/*",
                "arn:aws:s3:::terraform-apple-ui"
            ]
        },
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": [
                "arn:aws:s3:::terraform-apple-ui/*",
                "arn:aws:s3:::terraform-apple-ui"
            ]
        }
    ]
})

cors_rule = [
    {
      allowed_methods = ["PUT", "POST"]
      allowed_origins = ["https://modules.tf", "https://terraform-aws-modules.modules.tf"]
      allowed_headers = ["*"]
      expose_headers  = ["ETag"]
      max_age_seconds = 3000
      }, {
      allowed_methods = ["PUT"]
      allowed_origins = ["https://example.com"]
      allowed_headers = ["*"]
      expose_headers  = ["ETag"]
      max_age_seconds = 3000
    }
  ]

}

Expected behavior

It should create the S3 bucket with cors_rule filled from the cors_rule field in terragrunt.hcl.

Actual behavior

error on terragrunt apply and/or terragrunt apply-all

Error: Invalid function argument

on main.tf line 25, in resource "aws_s3_bucket" "this":
25: for_each = length(keys(var.cors_rule)) == 0 ? [] : [var.cors_rule]
|----------------
| var.cors_rule is tuple with 2 elements

Invalid value for "inputMap" parameter: must have map or object type.

ERRO[0014] Hit multiple errors:
Hit multiple errors:
exit status 1

Additional context

thanks
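
As a note, newer versions of this module accept a jsonencode()-d string for any-typed inputs coming from Terragrunt (see the Terragrunt section above); a sketch of that shape:

inputs = {
  cors_rule = jsonencode([
    {
      allowed_methods = ["PUT", "POST"]
      allowed_origins = ["https://modules.tf"]
      allowed_headers = ["*"]
      expose_headers  = ["ETag"]
      max_age_seconds = 3000
    }
  ])
}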

Issue with S3 Grants

Hi @antonbabenko, every time I run plan the grants show up as a change that needs to be applied even though they were applied before. Any idea what's breaking here?

      + grant {
          + permissions = [
              + "READ_ACP",
              + "WRITE",
            ]
          + type        = "Group"
          + uri         = "http://acs.amazonaws.com/groups/s3/LogDelivery"
        }
      + grant {
          + id          = "canonical-id"
          + permissions = [
              + "FULL_CONTROL",
            ]
          + type        = "CanonicalUser"
        }

Update existing bucket using this module

The s3 source bucket already exists in AWS.
Now I want to use this module to update the replication rules on it, as well as create a new replica bucket, to achieve s3 bucket replication. Is it possible to achieve this using this module?
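
A hedged sketch: the existing source bucket can usually be brought under the module's management with terraform import first (module and bucket names here are placeholders), after which the module can manage its replication_configuration:

$ terraform import 'module.s3_bucket.aws_s3_bucket.this[0]' my-existing-source-bucket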

Lifecycle rule - Invalid value for "inputMap" parameter: must have map or object type.

"A reproducible test case or series of steps"

  1. Environment: Terraform with AWS provider v3.22.0 and the latest version of terraform-aws-s3-bucket (v1.17.0)
  2. Commented out the dynamic lifecycle_rule section and pasted in the lifecycle_rule example from the main Terraform website
    (https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket)
  3. ran terraform init/plan/apply - bucket created with the example lifecycle rule :-)
  4. terraform console > aws_s3_bucket.this[0].lifecycle_rule to get the attribute structure.
  5. copied the attribute structure into variables.tf "lifecycle_rule" default = []
  6. removed the terraform example lifecycle_rule section, then uncommented lifecycle_rule (returning to the original main.tf state)
  7. ran terraform init/plan - got the following error messages:
Error: Invalid function argument

  on main.tf line 79, in resource "aws_s3_bucket" "this":
  79:         for_each = length(keys(lookup(lifecycle_rule.value, "expiration", {}))) == 0 ? [] : [lookup(lifecycle_rule.value, "expiration", {})]
    |----------------
    | lifecycle_rule.value is object with 9 attributes

Invalid value for "inputMap" parameter: must have map or object type.


Error: Invalid function argument

  on main.tf line 101, in resource "aws_s3_bucket" "this":
 101:         for_each = length(keys(lookup(lifecycle_rule.value, "noncurrent_version_expiration", {}))) == 0 ? [] : [lookup(lifecycle_rule.value, "noncurrent_version_expiration", {})]
    |----------------
    | lifecycle_rule.value is object with 9 attributes

Invalid value for "inputMap" parameter: must have map or object type.

Also including the variable "lifecycle_rule" that generates the error:

variable "lifecycle_rule" {
  description = "List of maps containing configuration of object lifecycle management."
  type        = any
  default     = [
    { "abort_incomplete_multipart_upload_days" = 0
      "enabled" = true
      "expiration" = [      
        { "date" = ""
          "days" = 90
          "expired_object_delete_marker" = false
        },
      ]
      "id" = "log"
      "noncurrent_version_expiration" = []
      "noncurrent_version_transition" = []
      "prefix" = "log/"
      "tags" = {
        "autoclean" = "true"
        "rule" = "log"
      }
      "transition" = [
        { "date" = ""
          "days" = 30
          "storage_class" = "STANDARD_IA"
        },
        { "date" = ""
          "days" = 60
          "storage_class" = "GLACIER"
        },
      ]
    }
  ]
 }

If you could shed some light on how I can improve my configuration in order to make full use of this wonderful module, it would be greatly appreciated.
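A likely fix, judging by the keys(lookup(...)) calls in the errors: this module version expects expiration and noncurrent_version_expiration to be maps, while the structure copied out of terraform console in step 4 renders them as lists. A sketch of the same rule in the shape the module appears to consume (transition stays a list):

variable "lifecycle_rule" {
  description = "List of maps containing configuration of object lifecycle management."
  type        = any
  default = [
    {
      id      = "log"
      enabled = true
      prefix  = "log/"

      tags = {
        autoclean = "true"
        rule      = "log"
      }

      expiration = {
        days = 90 # a map, not a single-element list
      }

      transition = [
        { days = 30, storage_class = "STANDARD_IA" },
        { days = 60, storage_class = "GLACIER" },
      ]
    },
  ]
}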

Feature request: bucket policy to deny insecure transport

If no bucket policy is specified the default bucket policy should deny non-encrypted transport.

From
https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-policy-for-config-rule/

{
  "Id": "ExamplePolicy",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSSLRequestsOnly",
      "Action": "s3:*",
      "Effect": "Deny",
      "Resource": [
        "arn:aws:s3:::awsexamplebucket",
        "arn:aws:s3:::awsexamplebucket/*"
      ],
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      },
      "Principal": "*"
    }
  ]
}
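For what it's worth, recent versions of this module appear to expose exactly this as a toggle, so the policy above does not have to be hand-written (check your module version before relying on it):

module "s3_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"

  bucket = "awsexamplebucket"

  # Attaches an AllowSSLRequestsOnly-style deny statement to the bucket policy
  attach_deny_insecure_transport_policy = true
}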

Remote state error after removing module

Hello,

I have removed the s3_bucket module from my code, however after running apply I get the error:

Error: leftover module module.s3_bucket in state that should have been removed; this is a bug in Terraform and should be reported

Terraform v0.12.19
Module version: 1.5.0

attach_elb_log_delivery_policy also needed for ALB

Description

The documentation implies that only attach_lb_log_delivery_policy = true is needed for ALB logging, but attach_elb_log_delivery_policy = true is actually needed as well. (NLB requires only attach_lb_log_delivery_policy = true.)

Code Snippet to Reproduce

module "s3_logs_lb" {
  source = "terraform-aws-modules/s3-bucket/aws"

  bucket = "test"
  acl    = "log-delivery-write"

  attach_lb_log_delivery_policy = true
}

module "alb" {
  source  = "terraform-aws-modules/alb/aws"
  version = "~> 6.0"
...

  access_logs = {
    bucket = module.s3_logs_lb.s3_bucket_id
    prefix = "alb"
  }
...
}

Expected behavior

Everything works fine.

Actual behavior

Failed in creating the ALB.

Terminal Output Screenshot(s)

Error: failure configuring LB attributes: InvalidConfigurationRequest: Access Denied for bucket: test. Please check S3bucket permission
...
with module.alb.aws_lb.this[0]

Additional context

According to https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-access-logs.html#attach-bucket-policy, arn:aws:iam::elb-account-id:root must be allowed to perform s3:PutObject.

Unable to get abort_incomplete_multipart_upload_days working with lifecycle_rule

Description

I'm using terraform-aws-modules/terraform-aws-s3-bucket to create an S3 bucket, and while defining lifecycle_rule I am passing abort_incomplete_multipart_upload_days = 7. Terraform is unable to process that; I am getting the error below.

Error: Error putting S3 lifecycle: InvalidRequest: AbortIncompleteMultipartUpload cannot be specified with Tags.
status code: 400, request id: GH54RNPMTPKAS8WN, host id: iNTwBVIqtCD0kC6ExcoE7ttr9ApqHPGk9lUmSWJOtAZgHfxqPq7mu0S5oW1zm4kR1rccarqh018=

Per the Terraform docs, it should be defined at the lifecycle_rule level: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket#lifecycle_rule

Please let me know how to fix this.

⚠️ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (ONLY if state is stored remotely, which hopefully you are doing as a best practice!): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check if the issue still persists

Versions

  • Terraform: v0.13.7
  • Module: provider registry.terraform.io/hashicorp/aws v3.56.0

Reproduction

Steps to reproduce the behavior:


terraform init
terraform plan
terraform apply

Code Snippet to Reproduce

   {
     id                                     = "test-id"
     abort_incomplete_multipart_upload_days = 7
     enabled                                = true
     prefix                                 = "model-builder/"

     tags = {
       rule      = "test-lifecycle-rule"
       autoclean = "true"
     }

     transition = [
       {
         days          = 30
         storage_class = "ONEZONE_IA"
       }
     ]

     expiration = {
       days = 60
     }

     noncurrent_version_expiration = {
       days = 60
     }
   }
 ] 
Expected behavior

The lifecycle rule should respect abort_incomplete_multipart_upload_days.

Actual behavior

Getting this error:

Error: Error putting S3 lifecycle: InvalidRequest: AbortIncompleteMultipartUpload cannot be specified with Tags.
   status code: 400, request id: GH54RNPMTPKAS8WN, host id: iNTwBVIqtCD0kC6ExcoE7ttr9ApqHPGk9lUmSWJOtAZgHfxqPq7mu0S5oW1zm4kR1rccarqh018=

Terminal Output Screenshot(s)
![image](https://user-images.githubusercontent.com/6187189/131744755-e2ec9707-dda5-458d-acc1-d5e4de342ea3.png)
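A plausible workaround, since the S3 API rejects AbortIncompleteMultipartUpload combined with tag filters: split the configuration into two rules, one carrying the tag-scoped transitions/expirations and an untagged one carrying the multipart-upload cleanup. A sketch (the second rule's id is illustrative):

lifecycle_rule = [
  {
    id      = "test-id"
    enabled = true
    prefix  = "model-builder/"

    tags = {
      rule      = "test-lifecycle-rule"
      autoclean = "true"
    }

    transition = [
      {
        days          = 30
        storage_class = "ONEZONE_IA"
      }
    ]

    expiration = {
      days = 60
    }
  },
  {
    id      = "abort-incomplete-multipart"
    enabled = true
    prefix  = "model-builder/"

    # No tags on this rule, so S3 accepts the multipart setting
    abort_incomplete_multipart_upload_days = 7
  }
]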

Invalid value for "inputMap" parameter: lookup() requires a map as the first argument.

Hello Team,

I am trying to upgrade my Terraform version to v0.13.5 and to change terraform-aws-modules/s3-bucket/aws from 1.0.0 to 1.16.0. I then tried using the s3_bucket module with the configuration below:

module "s3_bucket" { source = "terraform-aws-modules/s3-bucket/aws" version = "1.16.0" bucket = var.s3_name acl = "private" force_destroy = true tags = local.tags versioning = { enabled = true } website = { index_document = "index.html" error_document = "error.html" routing_rules = jsonencode([{ Condition : { KeyPrefixEquals : "docs/" }, Redirect : { ReplaceKeyPrefixWith : "documents/" } }]) } logging = { target_bucket = module.log_bucket.this_s3_bucket_id target_prefix = "log/" } cors_rule = { allowed_methods = ["PUT", "POST"] allowed_origins = ["https://modules.tf", "https://terraform-aws-modules.modules.tf"] allowed_headers = ["*"] expose_headers = ["ETag"] max_age_seconds = 3000 } .... .... .... ....

I'm getting the errors below when I run terraform plan:

but that form is now deprecated and will be removed in a future version of
Terraform. To silence this warning, remove the quotes around "map" and write
map(string) instead to explicitly indicate that the map elements are strings.


Error: Unsupported attribute

  on .terraform/modules/s3_bucket/main.tf line 27, in resource "aws_s3_bucket" "this":
  27:       allowed_methods = cors_rule.value.allowed_methods
    |----------------
    | cors_rule.value is tuple with 2 elements

This value does not have any attributes.


Error: Invalid function argument

  on .terraform/modules/s3_bucket/main.tf line 29, in resource "aws_s3_bucket" "this":
  29:       allowed_headers = lookup(cors_rule.value, "allowed_headers", null)
    |----------------
    | cors_rule.value is tuple with 1 element

Invalid value for "inputMap" parameter: lookup() requires a map as the first
argument.

(The same two errors repeat for every attribute on main.tf lines 27-31: allowed_methods, allowed_origins, allowed_headers, expose_headers, and max_age_seconds, with cors_rule.value reported in turn as a tuple with 1 element, a tuple with 2 elements, and the number 3000.)
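A likely fix, given that the reported cors_rule.value instances are exactly the attribute values of the cors_rule map above (the tuples of methods, origins, and headers, and the number 3000): module v1.16.0 appears to iterate cors_rule as a list of rule objects, so a single rule passed as a bare map gets its attribute values iterated instead. Wrapping the rule in a list should resolve it:

cors_rule = [
  {
    allowed_methods = ["PUT", "POST"]
    allowed_origins = ["https://modules.tf", "https://terraform-aws-modules.modules.tf"]
    allowed_headers = ["*"]
    expose_headers  = ["ETag"]
    max_age_seconds = 3000
  }
]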

Same bucket policy to multiple buckets (different RESOURCE)

Hello, and thank you very much for this nice module.

After losing quite some time figuring out how to attach the same bucket policy to multiple buckets (using ONE policy document), I decided to ask here. Is there a nice way to attach the same bucket policy to multiple buckets created with this module? Let's say I created two buckets with the module (can I use ONE module statement to create two or more buckets?). At the moment I am creating the buckets like this:

module "s3-1" {`
     source = ".../"
     bucket = "test1"
     policy        = data.aws_iam_policy_document.test.json
}

 module "s3-2" {
     source = ".../"
     bucket = "test2"
     policy        = data.aws_iam_policy_document.test.json
}

data "aws_iam_policy_document" "test" {
   statement {

        sid = "DenyIncorrectEncryptionHeader"
        effect = "Deny"
        principals {
            type = "AWS"
            identifiers = ["*"]
        }
        actions = ["s3:PutObject"]
        resources = [
             "${module.s3-1.this_s3_bucket_arn}/*"
        ]
        condition {
            test = "StringNotEquals"
            variable = "s3:x-amz-server-side-encryption"
            values = ["AES256"]
        }
    }
}

So I want to do something like the above; the problem is that in the policy's Resource section I want/need to have ONLY the bucket ARN of the current bucket. I know I can create multiple policy documents and use them for the different buckets, but is there a better way to do this? I've tried dynamic blocks, for_each, count, etc., but nothing worked for me.

Any help is appreciated :)
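One pattern that may help (a sketch, assuming Terraform 0.13+ where for_each works on module blocks): drive both the buckets and their policy documents from one set, and build each Resource ARN from the bucket name so the policy document does not have to read the module's outputs:

locals {
  buckets = toset(["test1", "test2"])
}

data "aws_iam_policy_document" "deny_unencrypted" {
  for_each = local.buckets

  statement {
    sid     = "DenyIncorrectEncryptionHeader"
    effect  = "Deny"
    actions = ["s3:PutObject"]

    # ARN constructed from the bucket name, avoiding a cycle with the module below
    resources = ["arn:aws:s3:::${each.key}/*"]

    principals {
      type        = "AWS"
      identifiers = ["*"]
    }

    condition {
      test     = "StringNotEquals"
      variable = "s3:x-amz-server-side-encryption"
      values   = ["AES256"]
    }
  }
}

module "s3" {
  source   = "terraform-aws-modules/s3-bucket/aws"
  for_each = local.buckets

  bucket = each.key
  policy = data.aws_iam_policy_document.deny_unencrypted[each.key].json
}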

Adding bucket policy after bucket creation cause conflicting conditional operation

Hi,

Do you see any issue if you put lifecycle create_before_destroy = true for aws_s3_bucket_public_access_block? I think it would solve this and avoid running apply twice.

module.s3_bucket["SOME_NAME"].aws_s3_bucket_policy.this[0]: Creating...
module.s3_bucket["SOME_NAME"].aws_s3_bucket_policy.this[0]: Creation complete after 1s [id=SOME_NAME-eu-central-1-prod]

Error: error deleting S3 Bucket Public Access Block (SOME_NAME-eu-central-1-prod): OperationAborted: A conflicting conditional operation is currently in progress against this resource. Please try again.
        status code: 409, request id: BFAF12CAFC4C676A, host id: F7OYagS8jLWbdJHM7sIJWpRT0OUpPf98h2XcTsqlWS9kv+KcGDRaOBU9xcjMi+yQSrkt7I4BcZU=

Feature-Request Replication Configuration after Bucket Creation

Feature Request: replication_configuration after bucket creation

Scenario: Allow creation of replication_configuration outside of bucket creation. In order to create bi-directional replication both buckets need to exist before replication can be configured. I do not believe there is currently a way to support this scenario.

Potential Solution: Separate out the replication configuration so it can be applied to the bucket after creation, similar to bucket policy.

How to set "ignore_changes" lifecycle rule?

Hello,

I am trying this module as a replacement for various S3 resources I created manually. One use case I have for a static website is to ignore changes on website.routing_rules.

I can write that lifecycle rule on the resource:

resource "aws_s3_bucket" "website" {
  # ...
  website {
    index_document = "index.html"
    error_document = "404.html"
  }

  # ...
  lifecycle {
    ignore_changes = [
      website.0.routing_rules
    ]
  }
}

But I cannot figure out how to migrate that lifecycle pattern to the module.

When I try:

module "website" {
  source = "terraform-aws-modules/s3-bucket/aws"

  # ...
  website = {
    index_document = "index.html"
    error_document = "404.html"
  }

  # ...
  lifecycle_rule = [
    {
      ignore_changes = [
        website.0.routing_rules
      ]
    }
  ]
}

I see the following error:

A reference to a resource type must be followed by at least one attribute
access, specifying the resource name.

Not very self-explanatory, but from some research it seems difficult today to inject ignore_changes lifecycle rules into modules, since lifecycle meta-arguments cannot reference variables.

Unless someone knows a magic trick?

Can't set region other than profile region

Hi everyone!

I can't tell whether the problem is on my side or it's an issue in the module. I'm using the module as part of my own module, and I'm creating a couple of buckets; all of them end up in the us-west-2 region regardless of what I pass as the region variable.

Code in my module ./modules/repo/main.tf:

module "s3-bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "1.9.0"
  region = var.bucket-region

  bucket               = "${local.pfx}-${var.bucket-name}-${var.env}"
  acl                  = "private"
  attach_public_policy = false

  versioning = {
    enabled = true
  }

  tags = local.tags
}

Main code ./main.tf :

module "repo-dev" {
  source = "./modules/repo"
  bucket-region = "us-west-2"   // <---------- that's created as expected
  env    = "dev"
}

module "repo-production" {
  source = "./modules/repo"
  bucket-region = "us-east-1"   // <---------- that's dropped to 'us-west-2'
  env    = "production"
}

The terraform plan output looks fine, but after terraform apply both buckets are in us-west-2.

It must be said that I'm using us-west-2 as an AWS provider setting:

provider "aws" {
  version = "~> 2.35"
  profile = var.profile
  region  = "us-west-2"
}

But the docs say that the bucket region will override the profile region:

region (Optional) If specified, the AWS region this bucket should reside in. Otherwise, the region used by the callee.

Could you please point out what I'm doing wrong?

Thanks!
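For what it's worth, in practice the bucket is created in whatever region the configured AWS provider points at; the resource-level region argument was later deprecated and removed from the provider. The usual fix is one provider alias per region, passed into the module (a sketch; aws.use1 is an illustrative alias name):

provider "aws" {
  alias   = "use1"
  profile = var.profile
  region  = "us-east-1"
}

module "repo-production" {
  source = "./modules/repo"
  providers = {
    aws = aws.use1
  }

  bucket-region = "us-east-1"
  env           = "production"
}

(The inner ./modules/repo module must then forward this provider to the s3-bucket module, or simply inherit it.)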

BUG: Invalid count argument

This change #10 broke my current use of this module.

  on .terraform/modules/example/main.tf line 221, in resource "aws_s3_bucket_policy" "this":
 221:   count = var.create_bucket && (var.attach_elb_log_delivery_policy || var.policy != null) ? 1 : 0

The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.

make: *** [plan-without-prep] Error 1

I believe this is because I'm passing the policy via a data aws_iam_policy_document.

module "bucket" {
  source        = "git@github.com:terraform-aws-modules/terraform-aws-s3-bucket.git?ref=v1.2.0"
  create_bucket = var.enable ? true : false

  bucket = local.bucket_name
  region = var.default_region
  policy = element(
    concat(
      data.aws_iam_policy_document.this.*.json,
      [""],
    ),
    0,
  )
}
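One way this class of error is usually avoided, and, if I read later releases correctly, the approach the module itself adopted: gate the policy resource on a separate boolean that is known at plan time, instead of testing the policy string against null. A sketch using the attach_policy input from newer module versions (illustrative; not available in v1.2.0):

module "bucket" {
  source        = "terraform-aws-modules/s3-bucket/aws"
  create_bucket = var.enable

  bucket        = local.bucket_name
  attach_policy = var.enable # known at plan time, so count is predictable
  policy = element(
    concat(
      data.aws_iam_policy_document.this.*.json,
      [""],
    ),
    0,
  )
}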

incompatible changes when defining a policy

just a small heads up:

The newer minor releases of the last few days introduced incompatible changes:

In 1.0.0, defining policy = ... created a bucket policy. With the latest version, the bucket policy is no longer created even if policy is set; you also have to add an attach_policy = true attribute.

Maybe it would be a good idea to assume a policy should be attached whenever one is defined (not null or "")?

thanks a lot.

Issue with condition in module value

Hi,
I am using for_each in modules, and maybe I am missing something obvious, but for some reason, if I put a condition on the server_side_encryption_configuration value, it throws this error:


  on .terraform/modules/s3_bucket/main.tf line 189, in resource "aws_s3_bucket" "this":
 189:         for_each = length(keys(lookup(server_side_encryption_configuration.value, "rule", {}))) == 0 ? [] : [lookup(server_side_encryption_configuration.value, "rule", {})]

Invalid value for "default" parameter: the default value must have the same
type as the map elements.

If I specify same value directly without condition it works.

  description = "Map of s3-bucket names to configuration."
  type        = any
  default     = {
    first = {
      acl = "private"
      versioning = false
      encryption = true
    },
    second = {
      acl = "private"
      versioning = false
      encryption = true
    }
  }
}
module "s3_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"
  version = "1.15.0"

  for_each = var.s3-buckets

  bucket = "${each.key}-${local.region}-${var.environment}"
  acl    = lookup(each.value, "acl", "private")

  versioning = {
    enabled = lookup(each.value, "versioning", false)
  }

  server_side_encryption_configuration = each.value.encryption ? {rule = {apply_server_side_encryption_by_default = {sse_algorithm = "AES256"}}} : {}
//  server_side_encryption_configuration = try(each.value.encryption, false) ? {rule = {apply_server_side_encryption_by_default = {sse_algorithm = "AES256"}}} : {}
//  server_side_encryption_configuration = {} # This works
//  server_side_encryption_configuration =  {rule = {apply_server_side_encryption_by_default = {sse_algorithm = "AES256"}}} # This works


  tags = merge(
    {
      Environment = var.environment
      Terraform   = "true"
    },
  var.tags)

}

This object does not have an attribute named "function_name".

I tried to submit a PR, but I am not sure why it's not working on my end. The error appears when planning/applying.

Code:

module "all_notifications" {
  source = "git::https://[email protected]/terraform-aws-modules/terraform-aws-s3-bucket.git//modules/notification?ref=v1.16.0"

  bucket = local.name_prefix
  create = true

  lambda_notifications = {
    lambda = {
      function_arn  = module.lambda.this_lambda_function_invoke_arn
      events        = ["s3:ObjectCreated:Put"]
      filter_suffix = ".csv"
    }
  }
}

Error:

Error: Unsupported attribute

  on .terraform/modules/all_notifications/modules/notification/main.tf line 63, in resource "aws_lambda_permission" "allow":
  63:   function_name       = each.value.function_name
    |----------------
    | each.value is object with 3 attributes

This object does not have an attribute named "function_name".

Fix (in modules/notification/main.tf):

# Lambda
resource "aws_lambda_permission" "allow" {
  for_each = var.lambda_notifications

  statement_id_prefix = "AllowLambdaS3BucketNotification-"
  action              = "lambda:InvokeFunction"
  function_name       = each.value.function_name
  qualifier           = lookup(each.value, "qualifier", null)
  principal           = "s3.amazonaws.com"
  source_arn          = local.bucket_arn
}

The fix: function_name = each.value.function_name should be each.value.function_arn. Looking at the code below, function_name does not exist in the notification map; only function_arn does:

  dynamic "lambda_function" {
    for_each = var.lambda_notifications

    content {
      id                  = lambda_function.key
      lambda_function_arn = lambda_function.value.function_arn
      events              = lambda_function.value.events
      filter_prefix       = lookup(lambda_function.value, "filter_prefix", null)
      filter_suffix       = lookup(lambda_function.value, "filter_suffix", null)
    }
  }

Unless I am wrong, you can delete this issue then =)

Ability to manage S3 Bucket Ownership Controls

Is your request related to a new offering from AWS?

Yes: https://docs.aws.amazon.com/AmazonS3/latest/userguide/about-object-ownership.html#enable-object-ownership

I would like to ensure that objects placed into an S3 bucket in one AWS account by a principal from another AWS account are owned by the bucket owner, even if the principal from the other account did not explicitly set the bucket-owner-full-control canned ACL on the object during upload.

This feature has been supported by the AWS provider for Terraform since v3.10.0.

Is your request related to a problem? Please describe.

Creating the aws_s3_bucket_ownership_controls resource outside of this module causes a race condition with aws_s3_bucket_public_access_block resource and possibly aws_s3_bucket_policy, that are optionally created by this module, requiring two-step apply.

Describe the solution you'd like.

I'd like to see an optional aws_s3_bucket_ownership_controls resource added to this module, creation and configuration of which can be controlled by an input variable.

Describe alternatives you've considered.

I have tried creating the aws_s3_bucket_ownership_controls resource outside of this module, but it tends to have a race condition with aws_s3_bucket_public_access_block resource optionally created by this module, requiring two-step apply. Currently there's no way to wait for the aws_s3_bucket_public_access_block or aws_s3_bucket_policy creation, since they are not exposed as output, so I feel adding optional aws_s3_bucket_ownership_controls resource to this module would be a cleaner solution, allowing for explicit dependencies to be specified.

Additional context

Possible module variable:

Name Description Type Default Required
object_ownership (Optional) Object ownership. Valid values: BucketOwnerPreferred or ObjectWriter string null no
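For reference, this has since been implemented: the module now exposes ownership controls directly, along these lines:

module "s3_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"

  bucket = "my-bucket"

  control_object_ownership = true
  object_ownership         = "BucketOwnerPreferred"
}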

Make kms_master_key_id optional

Sample code:

module "s3_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"

  bucket = "sample-bucket-1234567890"
  acl = "private"

  server_side_encryption_configuration = {
    rule = {
      apply_server_side_encryption_by_default = {
        sse_algorithm = "AES256"
      }
    }
  }

}

Expected outcome: Bucket created with SSE-S3 encryption.

Real outcome:

Error: Unsupported attribute

  on .terraform/modules/s3_bucket/terraform-aws-modules-terraform-aws-s3-bucket-f778720/main.tf line 190, in resource "aws_s3_bucket" "this":
 190:               kms_master_key_id = apply_server_side_encryption_by_default.value.kms_master_key_id
    |----------------
    | apply_server_side_encryption_by_default.value is object with 1 attribute "sse_algorithm"

This object does not have an attribute named "kms_master_key_id"
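A plausible fix inside the module, consistent with the lookup() pattern main.tf already uses for optional attributes, would be to default the key to null:

kms_master_key_id = lookup(apply_server_side_encryption_by_default.value, "kms_master_key_id", null)

With that in place, omitting kms_master_key_id for SSE-S3 (AES256) would simply pass null through to the provider.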

Add a warning in the Complete example about the object lock feature

Greetings,
Please annotate or highlight the very dangerous object_lock_configuration option in the complete S3 example, or at least change the default retention mode to GOVERNANCE, so that other people don't end up in the situation I got myself into. I wasn't fully aware of this functionality and forgot to remove it from my configuration, which was based on this example. In short: I'm now stuck with a couple of buckets that no one (not even AWS support) can delete for the next 5 years.
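For anyone copying that example: a less dangerous variant is GOVERNANCE mode, which, unlike COMPLIANCE, can be bypassed and lifted by principals holding s3:BypassGovernanceRetention. A sketch of the input (the exact key layout varies across module versions, so treat this as illustrative):

object_lock_configuration = {
  object_lock_enabled = "Enabled"
  rule = {
    default_retention = {
      mode = "GOVERNANCE" # COMPLIANCE retention cannot be shortened or removed by anyone
      days = 1
    }
  }
}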

Feature Request - S3 ACL Policy Grant

Hi there,

We have a requirement to implement bucket ACLs on a few buckets in S3, and we have been using this module for the other buckets we've created, so we'd like to keep things consistent if possible. It looks like this module doesn't support grants (https://www.terraform.io/docs/providers/aws/r/s3_bucket.html#grant).

Is this unsupported by design? I know there's some weirdness between acl and grant on the S3 resource. I have some time over the weekend and might be able to work on this, but if it's purposely unsupported I wouldn't want to waste my time.
