
terraform-aws-ecs's Introduction

AWS ECS Terraform module

Terraform module which creates ECS (Elastic Container Service) resources on AWS.


Available Features

  • ECS cluster w/ Fargate or EC2 Auto Scaling capacity providers
  • ECS Service w/ task definition, task set, and container definition support
  • Separate sub-modules or integrated module for ECS cluster and service

For more details see the design doc.

Usage

This project supports creating resources through individual sub-modules, or through a single module that creates both the cluster and service resources. See the respective sub-module directory for more details and example usage.

Integrated Cluster w/ Services

module "ecs" {
  source = "terraform-aws-modules/ecs/aws"

  cluster_name = "ecs-integrated"

  cluster_configuration = {
    execute_command_configuration = {
      logging = "OVERRIDE"
      log_configuration = {
        cloud_watch_log_group_name = "/aws/ecs/aws-ec2"
      }
    }
  }

  fargate_capacity_providers = {
    FARGATE = {
      default_capacity_provider_strategy = {
        weight = 50
      }
    }
    FARGATE_SPOT = {
      default_capacity_provider_strategy = {
        weight = 50
      }
    }
  }

  services = {
    ecsdemo-frontend = {
      cpu    = 1024
      memory = 4096

      # Container definition(s)
      container_definitions = {

        fluent-bit = {
          cpu       = 512
          memory    = 1024
          essential = true
          image     = "906394416424.dkr.ecr.us-west-2.amazonaws.com/aws-for-fluent-bit:stable"
          firelens_configuration = {
            type = "fluentbit"
          }
          memory_reservation = 50
        }

        ecs-sample = {
          cpu       = 512
          memory    = 1024
          essential = true
          image     = "public.ecr.aws/aws-containers/ecsdemo-frontend:776fd50"
          port_mappings = [
            {
              name          = "ecs-sample"
              containerPort = 80
              protocol      = "tcp"
            }
          ]

          # Example image used requires access to write to root filesystem
          readonly_root_filesystem = false

          dependencies = [{
            containerName = "fluent-bit"
            condition     = "START"
          }]

          enable_cloudwatch_logging = false
          log_configuration = {
            logDriver = "awsfirelens"
            options = {
              Name                    = "firehose"
              region                  = "eu-west-1"
              delivery_stream         = "my-stream"
              log-driver-buffer-limit = "2097152"
            }
          }
          memory_reservation = 100
        }
      }

      service_connect_configuration = {
        namespace = "example"
        service = {
          client_alias = {
            port     = 80
            dns_name = "ecs-sample"
          }
          port_name      = "ecs-sample"
          discovery_name = "ecs-sample"
        }
      }

      load_balancer = {
        service = {
          target_group_arn = "arn:aws:elasticloadbalancing:eu-west-1:1234567890:targetgroup/bluegreentarget1/209a844cd01825a4"
          container_name   = "ecs-sample"
          container_port   = 80
        }
      }

      subnet_ids = ["subnet-abcde012", "subnet-bcde012a", "subnet-fghi345a"]
      security_group_rules = {
        alb_ingress_3000 = {
          type                     = "ingress"
          from_port                = 80
          to_port                  = 80
          protocol                 = "tcp"
          description              = "Service port"
          source_security_group_id = "sg-12345678"
        }
        egress_all = {
          type        = "egress"
          from_port   = 0
          to_port     = 0
          protocol    = "-1"
          cidr_blocks = ["0.0.0.0/0"]
        }
      }
    }
  }

  tags = {
    Environment = "Development"
    Project     = "Example"
  }
}

Examples

Requirements

Name Version
terraform >= 1.0
aws >= 4.66.1

Providers

No providers.

Modules

Name Source Version
cluster ./modules/cluster n/a
service ./modules/service n/a

Resources

No resources.

Inputs

Name Description Type Default Required
autoscaling_capacity_providers Map of autoscaling capacity provider definitions to create for the cluster any {} no
cloudwatch_log_group_kms_key_id If a KMS Key ARN is set, this key will be used to encrypt the corresponding log group. Please be sure that the KMS Key has an appropriate key policy (https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/encrypt-log-data-kms.html) string null no
cloudwatch_log_group_name Custom name of CloudWatch Log Group for ECS cluster string null no
cloudwatch_log_group_retention_in_days Number of days to retain log events number 90 no
cloudwatch_log_group_tags A map of additional tags to add to the log group created map(string) {} no
cluster_configuration The execute command configuration for the cluster any {} no
cluster_name Name of the cluster (up to 255 letters, numbers, hyphens, and underscores) string "" no
cluster_service_connect_defaults Configures a default Service Connect namespace map(string) {} no
cluster_settings List of configuration block(s) with cluster settings. For example, this can be used to enable CloudWatch Container Insights for a cluster any [{ "name": "containerInsights", "value": "enabled" }] no
cluster_tags A map of additional tags to add to the cluster map(string) {} no
create Determines whether resources will be created (affects all resources) bool true no
create_cloudwatch_log_group Determines whether a log group is created by this module for the cluster logs. If not, AWS will automatically create one if logging is enabled bool true no
create_task_exec_iam_role Determines whether the ECS task definition IAM role should be created bool false no
create_task_exec_policy Determines whether the ECS task definition IAM policy should be created. This includes permissions included in AmazonECSTaskExecutionRolePolicy as well as access to secrets and SSM parameters bool true no
default_capacity_provider_use_fargate Determines whether to use Fargate or autoscaling for default capacity provider strategy bool true no
fargate_capacity_providers Map of Fargate capacity provider definitions to use for the cluster any {} no
services Map of service definitions to create any {} no
tags A map of tags to add to all resources map(string) {} no
task_exec_iam_role_description Description of the role string null no
task_exec_iam_role_name Name to use on IAM role created string null no
task_exec_iam_role_path IAM role path string null no
task_exec_iam_role_permissions_boundary ARN of the policy that is used to set the permissions boundary for the IAM role string null no
task_exec_iam_role_policies Map of IAM role policy ARNs to attach to the IAM role map(string) {} no
task_exec_iam_role_tags A map of additional tags to add to the IAM role created map(string) {} no
task_exec_iam_role_use_name_prefix Determines whether the IAM role name (task_exec_iam_role_name) is used as a prefix bool true no
task_exec_iam_statements A map of IAM policy statements for custom permission usage any {} no
task_exec_secret_arns List of SecretsManager secret ARNs the task execution role will be permitted to get/read list(string) ["arn:aws:secretsmanager:::secret:*"] no
task_exec_ssm_param_arns List of SSM parameter ARNs the task execution role will be permitted to get/read list(string) ["arn:aws:ssm:::parameter/*"] no

Outputs

Name Description
autoscaling_capacity_providers Map of autoscaling capacity providers created and their attributes
cloudwatch_log_group_arn ARN of CloudWatch log group created
cloudwatch_log_group_name Name of CloudWatch log group created
cluster_arn ARN that identifies the cluster
cluster_capacity_providers Map of cluster capacity providers attributes
cluster_id ID that identifies the cluster
cluster_name Name that identifies the cluster
services Map of services created and their attributes
task_exec_iam_role_arn Task execution IAM role ARN
task_exec_iam_role_name Task execution IAM role name
task_exec_iam_role_unique_id Stable and unique string identifying the task execution IAM role

Authors

Module is maintained by Anton Babenko with help from these awesome contributors.

License

Apache-2.0 Licensed. See LICENSE.


terraform-aws-ecs's Issues

ECS Service Linked Role does not exist

Just downloaded and ran the given example code and received this error:
module.ecs_example_complete-ecs.aws_ecs_capacity_provider.prov1: Creating...

Error: error creating capacity provider: ClientException: ECS Service Linked Role does not exist. Please create a Service linked role for ECS and try again.
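
A hedged sketch of the usual fix: the ECS service-linked role has to exist in the account before a capacity provider can be created. It only needs to be created once per account (import it instead if it already exists):

resource "aws_iam_service_linked_role" "ecs" {
  aws_service_name = "ecs.amazonaws.com"
}

Creating it once with aws iam create-service-linked-role --aws-service-name ecs.amazonaws.com achieves the same thing.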

module not outputting task_exec_iam_role_name, task_exec_iam_role_arn

In my Terraform implementation of this module I have passed the following values:

      create_task_exec_iam_role = true
      create_task_exec_policy   = true

and I can confirm that the module is creating both the IAM role and the policy, but when I try to read the role name and ARN from the outputs, they come back blank (null):

output "arn" {
  value = module.ecs.task_exec_iam_role_arn
}

output "name" {
  value = module.ecs.task_exec_iam_role_name
}

As another approach, I tried to pass policy ARNs to the module using the task_exec_iam_role_policies input,
but when passing it I get this error:

│ Error: Invalid for_each argument
│ 
│   on .terraform/modules/ecs/modules/service/main.tf line 781, in resource "aws_iam_role_policy_attachment" "task_exec_additional":
│  781:   for_each = { for k, v in var.task_exec_iam_role_policies : k => v if local.create_task_exec_iam_role }
│     ├────────────────
│     │ local.create_task_exec_iam_role is true
│     │ var.task_exec_iam_role_policies is a map of string, known only after apply
│ 
│ The "for_each" map includes keys derived from resource attributes that cannot be determined until apply, and so Terraform cannot determine the full set of keys that will identify the instances of this
│ resource.
│ 
│ When working with unknown values in for_each, it's better to define the map keys statically in your configuration and place apply-time results only in the map values.
│ 
│ Alternatively, you could use the -target planning option to first apply only the resources that the for_each value depends on, and then apply a second time to fully converge.
╵

This is what the input looks like:

      task_exec_iam_role_policies = {
        "policy1" = aws_iam_policy.secret_manager_get_secret_policy.arn
        "policy2" = aws_iam_policy.allow_ssm_exec_policy.arn
      }

I also tried adding depends_on to the module so that my policies get created before the module, but it has no effect.
Any ideas on how to attach policies to my ECS role successfully, and why the outputs are not showing?

using
terraform : v1.4.2
module version : 5.0.0
AWS Provider : 5.0.1
OS : Mac OSX 13.4 (22F66)

thanks in advance
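
A hedged sketch of one workaround, following the hint in the error message: keep the map keys static in your own configuration and attach the extra policies outside the module (this assumes the module's task_exec_iam_role_name output resolves in your setup):

resource "aws_iam_role_policy_attachment" "task_exec_extra" {
  for_each = {
    secrets_manager = aws_iam_policy.secret_manager_get_secret_policy.arn
    ssm_exec        = aws_iam_policy.allow_ssm_exec_policy.arn
  }

  role       = module.ecs.task_exec_iam_role_name
  policy_arn = each.value
}

Because the keys are literals, only the values are unknown until apply, which for_each allows.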

Update deprecated resource

Is your request related to a new offering from AWS?

No.

Is your request related to a problem? Please describe.

After upgrading the AWS Terraform provider to v4.0.0 a warning shows up with the following message:

│ Warning: Argument is deprecated
│ 
│   with module.ecs_cluster.aws_ecs_cluster.this,
│   on .terraform/modules/ecs_cluster/main.tf line 1, in resource "aws_ecs_cluster" "this":
│    1: resource "aws_ecs_cluster" "this" {
│ 
│ Use the aws_ecs_cluster_capacity_providers resource instead
│ 
│ (and one more similar warning elsewhere)

Describe the solution you'd like.

Update the module to use the aws_ecs_cluster_capacity_providers resource

Additional context

Deprecated resources are listed at the ECS Cluster Terraform documentation web
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecs_cluster

Migration from v4 to v5 tries to destroy and recreate the cluster (upgrade steps undocumented)

Description

I had an ECS cluster declared like this, in a module of my project:

# ./cluster/main.tf

variable "name" {
  type = string
}

variable "instance_type" {
  type = string
}

variable "subnets" {
  type = list(string)
}

variable "security_group" {
  type = string
}

locals {
  # This is the convention we use to know what belongs to each other
  ec2_resources_name = "${var.name}-prod"

  image_data = jsondecode(data.aws_ssm_parameter.aws_ecs_recommended_image.value)
  image_id   = local.image_data["image_id"]
}

data "aws_ssm_parameter" "aws_ecs_recommended_image" {
  name = "/aws/service/ecs/optimized-ami/amazon-linux-2/arm64/recommended"
}

module "asg" {
  source = "terraform-aws-modules/autoscaling/aws"

  name = "${var.name}-asg"

  min_size                  = 2
  max_size                  = 5
  desired_capacity          = 2
  health_check_type         = "EC2"
  health_check_grace_period = 180 # 3 minutes

  vpc_zone_identifier = var.subnets
  security_groups     = [
    var.security_group
  ]

  create_iam_instance_profile = true
  iam_role_name               = "${var.name}-asg-instance-role"
  iam_role_path               = "/ec2/"
  iam_role_description        = "IAM role for ${var.name}-asg"
  iam_role_policies           = {
    AmazonSSMManagedInstanceCore        = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
    AmazonEC2ContainerServiceforEC2Role = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
  }

  image_id      = local.image_id
  instance_type = var.instance_type

  user_data     = base64encode(templatefile("${path.module}/templates/user-data.sh", { cluster_name = var.name }))

  key_name = "xxxxxxxx"

  tags = {
    Cluster = "${var.name}-cluster"
    product = var.name
  }
}

module "ecs" {
  source = "terraform-aws-modules/ecs/aws"
  version = "~> 4.1.3"

  cluster_name = var.name

  autoscaling_capacity_providers = {
    one = {
      name                   = "${var.name}-provider"
      auto_scaling_group_arn = module.asg.autoscaling_group_arn
      weight                 = "1"
      managed_scaling        = {
        status = "ENABLED"
      }
      default_capacity_provider_strategy = {
        name   = "${var.name}-provider-strategy"
        weight = "1"
      }
    }
  }

  tags = {
    Cluster = "${var.name}-cluster"
    product = var.name
  }
}

If I change the version to 5+, Terraform tries to destroy and re-create the cluster from scratch.

Honestly this is not optimal. It tries to destroy this item:

  • module.cluster.module.ecs.aws_ecs_cluster.this

And create this one:

  • module.cluster.module.ecs.module.cluster.aws_ecs_cluster.this

I think that the reason is that you've changed the folder structure of your internal module.

I haven't found any upgrade steps, though I noticed you provided them for migrating from v3 to v4.
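
One hedged way to avoid the replacement is to move the existing cluster to its new address rather than recreating it (addresses taken from the plan output above; confirm the exact addresses, including any [0] index, with terraform state list before running):

terraform state mv \
  'module.cluster.module.ecs.aws_ecs_cluster.this' \
  'module.cluster.module.ecs.module.cluster.aws_ecs_cluster.this'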

Versions

  • Module version [Required]: 4.1.3

  • Terraform version: 1.4.2

  • Provider version(s):

  required_providers {
    aws = "~> 4.67"
  }

Reproduction Code [Required]

See at the beginning of the issue

Steps to reproduce the behavior:

  • Create a cluster with version 4.1.3, apply
  • then switch to the latest (5.2.2) and plan

Expected behavior

No change to the cluster, since nothing has changed.

Actual behavior

It tries to destroy and recreate the cluster.

Arn ASG for capacity provider not found

Description

I use Terragrunt to create an ECS cluster with an ASG capacity provider, and I get the error "The specified capacity provider was not found. Specify a valid capacity provider and try again". I use the ARN of the ASG like this; am I missing something?

Terraform will perform the following actions:

  # aws_ecs_cluster.main will be created
  + resource "aws_ecs_cluster" "main" {
      + arn                = (known after apply)
      + capacity_providers = [
          + "arn:aws:autoscaling:eu-west-3:947226147474:autoScalingGroup:3daf2d5c-a6bc-4c44-8615-c87cf562863f:autoScalingGroupName/ECS-tr-express-20220209120348551000000002",
        ]
      + id                 = (known after apply)
      + name               = "ECS-tr-express-project-tr-express-dev"
      + tags               = {
          + "Application" = "tr-express"
          + "Deployer"    = "terraform"
          + "Environment" = "dev"
          + "Name"        = "ecs-cluster-tr-express-project-tr-express-dev"
          + "Project"     = "tr-express-project"
        }
      + tags_all           = {
          + "Application" = "tr-express"
          + "Deployer"    = "terraform"
          + "Environment" = "dev"
          + "Name"        = "ecs-cluster-tr-express-project-tr-express-dev"
          + "Project"     = "tr-express-project"
        }

      + default_capacity_provider_strategy {
          + base              = (known after apply)
          + capacity_provider = (known after apply)
          + weight            = (known after apply)
        }

      + setting {
          + name  = (known after apply)
          + value = (known after apply)
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + arn  = (known after apply)
  + name = "ECS-tr-express-project-tr-express-dev"

the terminal output below:

│ Error: failed creating ECS Cluster (ECS-tr-express-project-tr-express-dev): InvalidParameterException: The specified capacity provider was not found. Specify a valid capacity provider and try again.
│ 
│   with aws_ecs_cluster.main,
│   on main.tf line 1, in resource "aws_ecs_cluster" "main":
│    1: resource "aws_ecs_cluster" "main" {
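
The list passed to the cluster expects capacity provider names, not Auto Scaling group ARNs; the ASG ARN belongs on an aws_ecs_capacity_provider resource. A hedged sketch (names assumed, and note that newer provider versions move these cluster-level arguments to aws_ecs_cluster_capacity_providers):

resource "aws_ecs_capacity_provider" "asg" {
  name = "tr-express-asg" # assumed name

  auto_scaling_group_provider {
    auto_scaling_group_arn = aws_autoscaling_group.this.arn # your ASG
  }
}

resource "aws_ecs_cluster" "main" {
  name               = "ECS-tr-express-project-tr-express-dev"
  capacity_providers = [aws_ecs_capacity_provider.asg.name]

  default_capacity_provider_strategy {
    capacity_provider = aws_ecs_capacity_provider.asg.name
    weight            = 1
  }
}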

Does not support tasks_iam_role_statements through the single module?

Is your request related to a new offering from AWS?

Is this functionality available in the AWS provider for Terraform? See CHANGELOG.md, too.

  • No 🛑: please wait to file a request until the functionality is available in the AWS provider
  • Yes ✅: please list the AWS provider version which introduced this functionality

Is your request related to a problem? Please describe.

As described in the README, I want to create resources through a single module that creates both the cluster and service resources. But this single module only supports task_exec_iam_statements for the task execution role; it does not support tasks_iam_role_statements for the task role.

Describe the solution you'd like.

Describe alternatives you've considered.

Additional context

Plugin crashed: interface {} is nil

Description

The latest patch has introduced a crash. Maybe the README is incomplete?

⚠️ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (! ONLY if state is stored remotely, which hopefully you are following that best practice!): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check if the issue still persists

Versions

  • Module version [Required]:

  • Terraform version:
    1.1.8

  • Provider version(s):
    provider registry.terraform.io/hashicorp/aws v4.17.1

Reproduction Code [Required]

Steps to reproduce the behavior:

  1. Previously had been using the capacity_providers = ["FARGATE", "FARGATE_SPOT"] and container_insights = true.
  2. Replaced with fargate_capacity_providers attribute per new README
  3. terraform validate passes
  4. terraform apply <- crash happens here

Expected behavior

No crash

Actual behavior

The empty {} object's default value is nil, causing a panic.

Terminal Output Screenshot(s)

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # module.ecs.module.ecs.aws_ecs_cluster.this[0] will be updated in-place
  ~ resource "aws_ecs_cluster" "this" {
        id                 = "arn:aws:ecs:us-east-1:XXXXXXXXXXXXXXX:cluster/dev"
        name               = "dev"
        tags               = {
            "Environment" = "dev"
        }
        # (3 unchanged attributes hidden)

      + configuration {
        }

        # (1 unchanged block hidden)
    }

  # module.ecs.module.ecs.aws_ecs_cluster_capacity_providers.this[0] will be updated in-place
  ~ resource "aws_ecs_cluster_capacity_providers" "this" {
      ~ capacity_providers = [
          - "FARGATE_SPOT",
            # (1 unchanged element hidden)
        ]
        id                 = "dev"
        # (1 unchanged attribute hidden)

      + default_capacity_provider_strategy {
          + base              = 0
          + capacity_provider = "FARGATE"
          + weight            = 100
        }
    }

Plan: 0 to add, 2 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

module.ecs.module.ecs.aws_ecs_cluster.this[0]: Modifying... [id=arn:aws:ecs:us-east-1:XXXXXXXXXXXXXXX:cluster:cluster/dev]
╷
│ Error: Request cancelled
│ 
│   with module.ecs.module.ecs.aws_ecs_cluster.this[0],
│   on .terraform/modules/ecs.ecs/main.tf line 5, in resource "aws_ecs_cluster" "this":
│    5: resource "aws_ecs_cluster" "this" {
│ 
│ The plugin.(*GRPCProvider).ApplyResourceChange request was cancelled.
╵
Releasing state lock. This may take a few moments...

Stack trace from the terraform-provider-aws_v4.17.1_x5 plugin:

panic: interface conversion: interface {} is nil, not map[string]interface {}

goroutine 40 [running]:
github.com/hashicorp/terraform-provider-aws/internal/service/ecs.expandClusterConfiguration({0xc0021d02e0, 0x94ce2e8, 0x2})
	github.com/hashicorp/terraform-provider-aws/internal/service/ecs/cluster.go:557 +0x113
github.com/hashicorp/terraform-provider-aws/internal/service/ecs.resourceClusterUpdate(0xc002ccb500, {0x8027680, 0xc00037f500})
	github.com/hashicorp/terraform-provider-aws/internal/service/ecs/cluster.go:332 +0x2a5
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).update(0xa5a7760, {0xa5a7760, 0xc002cfbb60}, 0xd, {0x8027680, 0xc00037f500})
	github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:729 +0x178
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0xc000d01880, {0xa5a7760, 0xc002cfbb60}, 0xc001692750, 0xc002ccb380, {0x8027680, 0xc00037f500})
	github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:847 +0x9e5
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0xc000296930, {0xa5a76b8, 0xc002c97100}, 0xc002c94d20)
	github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/grpc_provider.go:1021 +0xe3c
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0xc001ae3860, {0xa5a7760, 0xc002cfb380}, 0xc0001b65b0)
	github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:812 +0x56b
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler({0x92321e0, 0xc001ae3860}, {0xa5a7760, 0xc002cfb380}, 0xc002c8d860, 0x0)
	github.com/hashicorp/[email protected]/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:385 +0x170
google.golang.org/grpc.(*Server).processUnaryRPC(0xc00013e700, {0xa6b4510, 0xc000cac1a0}, 0xc002186c60, 0xc001aef830, 0x1030dda0, 0x0)
	google.golang.org/[email protected]/server.go:1282 +0xccf
google.golang.org/grpc.(*Server).handleStream(0xc00013e700, {0xa6b4510, 0xc000cac1a0}, 0xc002186c60, 0x0)
	google.golang.org/[email protected]/server.go:1619 +0xa2a
google.golang.org/grpc.(*Server).serveStreams.func1.2()
	google.golang.org/[email protected]/server.go:921 +0x98
created by google.golang.org/grpc.(*Server).serveStreams.func1
	google.golang.org/[email protected]/server.go:919 +0x294

Error: The terraform-provider-aws_v4.17.1_x5 plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.

Additional context

N/A

Call to function "element" failed

Hi,
here is my AWS ECS Terraform module code:

module "ecs" {
  source  = "terraform-aws-modules/ecs/aws"
  version = "5.0.1"
  cluster_name = "myecscluster"
  

   fargate_capacity_providers = {
    FARGATE = {
      default_capacity_provider_strategy = {
        weight = 50
      }
    }
    FARGATE_SPOT = {
      default_capacity_provider_strategy = {
        weight = 50
      }
    }
  }

  services = {
  container_definitions = {
    myfront_end = {
      cpu       = 256
      memory    = 512
      essential = true
      image     = "650623132949.dkr.ecr.ap-south-1.amazonaws.com/myapp:latest"
      execution_role_arn       = "module.ecs_task_execution_role.arn"
      subnet_ids    = var.subnet_ids[0]
      port_mappings = [
        {
          name          = "myapp"
          containerPort = 8080
          hostPort      = 8080
          protocol      = "tcp"
        }
      ]
      # Example image used requires access to write to root filesystem
      readonly_root_filesystem = false
      memory_reservation = 100
    }
  }
    service_connect_configuration = {
    namespace = "myapp"
    service = {
      client_alias = {
        port     = 8080
        dns_name = "testcase.lmapps.io"
      }
    }
  }
   load_balancer = {
    service = {
      target_group_arn = element(module.alb.target_group_arns, 0)
      container_name   = "myapp"
      container_port   = "8080"
      
    }
  }
  
  }
}

While running terraform validate I had the issue below:

(screenshot of the terraform validate error)

But I have already defined subnet_ids as a variable:

(screenshot of the variable definition)

Can you help?
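
For reference, a hedged sketch (assumed names) of how the integrated module expects services to be structured: each key under services is a service name, and service-level settings such as subnet_ids and load_balancer sit next to container_definitions rather than inside them:

services = {
  myapp = {
    cpu    = 256
    memory = 512

    subnet_ids = var.subnet_ids

    container_definitions = {
      myapp = {
        cpu       = 256
        memory    = 512
        essential = true
        image     = "650623132949.dkr.ecr.ap-south-1.amazonaws.com/myapp:latest"
        port_mappings = [
          {
            name          = "myapp"
            containerPort = 8080
            protocol      = "tcp"
          }
        ]
      }
    }

    load_balancer = {
      service = {
        target_group_arn = element(module.alb.target_group_arns, 0)
        container_name   = "myapp"
        container_port   = 8080
      }
    }
  }
}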

IAM policy attached to the task does not work.

Description

Hi, I'm trying to give my ECS task permission to publish to SNS (sns:Publish).
I was trying to add:

  task_exec_iam_statements = {
    sns = {
      sid       = "SNS"
      effect    = "Allow"
      actions   = ["sns:Publish"]
      resources = ["arn:aws:sns:eu-central-1:XXXXXXXXXXXX:*"] 
    }
  }

into the cluster section. So my code looks like:

module "ecs" {
  source = "terraform-aws-modules/ecs/aws"
  version = "5.2.1"

  task_exec_iam_statements = {
    sns = {
      sid       = "SNS"
      effect    = "Allow"
      actions   = ["sns:Publish"]
      resources = ["arn:aws:sns:eu-central-1:XXXXXXXXXXXX:*"] 
    }
  }

  ...

  services = {
    service_name => {
      ...

      container_definitions = {
        taks_name = {
          ...
        }
      }
    }
  }
}

Unfortunately, when I try to publish a message to SNS I get this error:

User: arn:aws:sts::XXXXXXXXXXXX:assumed-role/service_name-20230802080059751700000006/624dd649a11d44c09f90367ea64a7afa is not authorized to perform: SNS:Publish on resource: arn:aws:sns:eu-central-1:XXXXXXXXXXXX:some-sns-name because no identity-based policy allows the SNS:Publish action

Any idea how to solve the problem?
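
A hedged sketch of the usual fix: task_exec_iam_statements feeds the task execution role (used by the ECS agent to pull images and fetch secrets), while permissions needed by the running application belong on the task role, for example via the service-level tasks_iam_role_statements input (its shape is assumed here to mirror task_exec_iam_statements):

services = {
  service_name = {
    tasks_iam_role_statements = {
      sns = {
        sid       = "SNS"
        effect    = "Allow"
        actions   = ["sns:Publish"]
        resources = ["arn:aws:sns:eu-central-1:XXXXXXXXXXXX:*"]
      }
    }

    # ...
  }
}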

Complete example node cluster join fails

Description

Using examples/complete/ as-is now results in the provisioned EC2 instances being unable to join the ECS cluster.

Toggling enable_nat_gateway from false (the example's current setting) to true resolves the instances' inability to join the cluster.
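
For reference, a minimal sketch of that change (VPC module inputs other than the NAT flags are assumed to stay as in the example):

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  # ... name, cidr, azs, and subnets as in examples/complete

  # Without NAT (or ECS/ECR/S3 VPC endpoints), instances in private subnets
  # cannot reach the ECS control plane to register themselves.
  enable_nat_gateway = true
  single_nat_gateway = true
}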

  • ✋ I have searched the open/closed issues and my issue is not listed.

⚠️ Note

Versions

  • Module version [Required]: v4.0.1
  • Terraform version: 1.2.3 - darwin_arm64
  • Provider version(s): aws == v4.19.0

Reproduction Code [Required]

Steps to reproduce the behavior:

  • cd examples/complete/ && terraform init && terraform apply -auto-approve

Expected behavior

  • Apply terraform
  • Check ECS cluster instances (ec2 instance does join the cluster)
  • Services become healthy (because the tasks are spun up and placed appropriately for the example).

Actual behavior

  • Apply terraform
  • Check ECS cluster instances (ec2 instance doesn't join the cluster)
  • Services don't become healthy (because the tasks can't be placed anywhere for the example).

Reverse documented example in cluster_settings

Description

The cluster_settings input description states "For example, this can be used to enable CloudWatch Container Insights for a cluster". This suggests to me that if I do not provide a value for cluster_settings, CloudWatch Container Insights will not be enabled. That is not the case, since its default value is to enable CloudWatch Container Insights.

The documentation's wording should be changed to "For example, this can be used to disable CloudWatch Container Insights for a cluster".
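
A hedged sketch of explicitly overriding the default to disable Container Insights (using the list form of cluster_settings shown in the Inputs section above; some module versions accept a single map instead):

module "ecs" {
  source = "terraform-aws-modules/ecs/aws"

  cluster_name = "no-insights"

  cluster_settings = [
    {
      name  = "containerInsights"
      value = "disabled"
    }
  ]
}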

  • ✋ I have searched the open/closed issues and my issue is not listed.

Versions

  • Module version [Required]: 4.0.1

  • Terraform version:1.3.4

  • Provider version(s): Irrelevant

Reproduction Code [Required]

N/A

Expected behavior

If cluster_settings is not passed, CloudWatch Container Insights is disabled.

Actual behavior

If cluster_settings is not passed, CloudWatch Container Insights is enabled.

Terminal Output Screenshot(s)

N/A

Hello_world service fails - no capacity providers with a weight value greater than 0

I tried running the complete-ecs example, changed only the locals and the Terraform backend settings, and got this error:

Error: InvalidParameterException: There are no capacity providers with a weight value greater than zero defined in the default capacity provider strategy. At least one capacity provider must have a weight value greater than zero specified. Update the cluster's default capacity provider strategy or specify a valid capacity provider strategy and try again. "hello_world"

root@b49da8a20338:/opt/test/terraform-aws-ecs/examples/complete-ecs# terraform --version
Terraform v0.13.5

  • provider registry.terraform.io/hashicorp/aws v3.22.0
  • provider registry.terraform.io/hashicorp/null v3.0.0
  • provider registry.terraform.io/hashicorp/random v3.0.0
  • provider registry.terraform.io/hashicorp/template v2.2.0
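
A hedged sketch of the usual fix for the interface that example used at the time: give at least one entry in the default capacity provider strategy a non-zero weight.

default_capacity_provider_strategy = [
  {
    capacity_provider = "FARGATE_SPOT"
    weight            = 1
  }
]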

Unreadable module directory

Hi, I am trying to create an ECS cluster and ECS services. Here is my code:

module "ecs_cluster" {
  source = "../../modules/cluster"

  cluster_name = local.name

  # Capacity provider
  fargate_capacity_providers = {
    FARGATE = {
      default_capacity_provider_strategy = {
        weight = 50
        base   = 20
      }
    }
    FARGATE_SPOT = {
      default_capacity_provider_strategy = {
        weight = 50
      }
    }
  }

  tags = local.tags
}

################################################################################
# Service
################################################################################

module "ecs_service" {
  source = "../../modules/service"

  name        = local.name
  cluster_arn = module.ecs_cluster.arn

  cpu    = 256
  memory = 512

  # Container definition(s)
  container_definitions = {

    fluent-bit = {
      cpu       = 256
      memory    = 512
      essential = true
      image     = "650623132949.dkr.ecr.ap-south-1.amazonaws.com/myapp"
      memory_reservation = 50
      user               = "0"
    }

    (local.container_name) = {
      cpu       = 256
      memory    = 512
      essential = true
      image     = "650623132949.dkr.ecr.ap-south-1.amazonaws.com/myapp"
      port_mappings = [
        {
          name          = local.container_name
          containerPort = local.container_port
          hostPort      = local.container_port
          protocol      = "tcp"
        }
      ]

      # Example image used requires access to write to root filesystem
      readonly_root_filesystem = false
      enable_cloudwatch_logging = false  
      memory_reservation = 100
    }
  }

  service_connect_configuration = {
    namespace = aws_service_discovery_http_namespace.this.arn
    service = {
      client_alias = {
        port     = local.container_port
        dns_name = local.container_name
      }
      port_name      = local.container_name
      discovery_name = local.container_name
    }
  }

  load_balancer = {
    service = {
      target_group_arn = element(module.alb.target_group_arns, 0)
      container_name   = local.container_name
      container_port   = local.container_port
    }
  }

  subnet_ids = var.subnets
  security_group_rules = {
    alb_ingress_3000 = {
      type                     = "ingress"
      from_port                = local.container_port
      to_port                  = local.container_port
      protocol                 = "tcp"
      description              = "Service port"
      source_security_group_id = module.albapp_sg.security_group_id
    }
    egress_all = {
      type        = "egress"
      from_port   = 0
      to_port     = 0
      protocol    = "-1"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }

  tags = local.tags
}

While running terraform init, I had the issue below:

(screenshot of the "Unreadable module directory" error from terraform init)

What values should I use to replace the ecs_cluster and ecs_service module source values?

(screenshots)

Can you help?
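
The relative paths ../../modules/cluster and ../../modules/service only work when your code lives inside a checkout of this repository. A hedged sketch of the registry form used elsewhere on this page (the version constraint is illustrative):

module "ecs_cluster" {
  source  = "terraform-aws-modules/ecs/aws//modules/cluster"
  version = "~> 5.0"

  # ... same inputs as above
}

module "ecs_service" {
  source  = "terraform-aws-modules/ecs/aws//modules/service"
  version = "~> 5.0"

  # ... same inputs as above
}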

ECS Services always updating but ignore_task_definition_changes is true

Description

Please provide a clear and concise description of the issue you are encountering, and a reproduction of your configuration (see the examples/* directory for references that you can copy+paste and tailor to match your configs if you are unable to copy your exact configuration). The reproduction MUST be executable by running terraform init && terraform apply without any further changes.

If your request is for a new feature, please use the Feature request template.

  • ✋ I have searched the open/closed issues and my issue is not listed.

⚠️ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (! ONLY if state is stored remotely, which hopefully you are following that best practice!): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check if the issue still persists

Versions

  • Module version [Required]:

  • Terraform version: 1.4.6

  • Provider version(s):
    hashicorp/aws = 4.67.0

Reproduction Code [Required]

module "consumer_service" {
 source                             = "terraform-aws-modules/ecs/aws//modules/service"
  version                            = "5.0.0"
  enable_autoscaling                 = false
  subnet_ids                         = ["subnet-da5", "subnet-0d8"]
  deployment_minimum_healthy_percent = 100
  ignore_task_definition_changes     = true
  for_each                           = var.services
  tasks_iam_role_name                = each.key
  task_exec_iam_role_name            = each.key
  name                               = "${var.environment}-consumer-${each.value.name}"
  cluster_arn                        = "arn:aws:ecs:us-east-1:175effefefefef0:cluster/staging-cluster-fefefe35553"
  cpu                                = 2048
  memory                             = 8192
  # Container definition(s)
  container_definitions = {

    (each.value.name) = {
      cpu                       = 512
      memory                    = 2048
      essential                 = true
      readonly_root_filesystem  = false
      essential                 = true
      enable_cloudwatch_logging = false
      image                     = "${var.registry}:${each.value.name}-${var.container_hash}"
      docker_labels = {
        "com.datadoghq.tags.env" : var.environment,
        "com.datadoghq.tags.service" : each.value.name
      },
    }
    }
}

var.services

variable "services" {
  description = "The list of services to deploy"
  type        = any
  default = {
    "person-worker" = {
      name = "person-worker"

    }
    "person-incentive" = {
      name = "person-incentive"
    }

    "user-index-worker" = {
      name = "user-index-worker"
    }
  }
}

Steps to reproduce the behavior:

Expected behavior

We wanted to deploy about 50 microservices to ECS and did not want to repeat code for each service, so we used for_each over var.services. With ignore_task_definition_changes set to true, subsequent applies should not prompt for changes (something like "No infrastructure changes found"), since we have not updated anything in Terraform.

Actual behavior

Every time we run apply, the task definition says it needs to be replaced, again and again, even though nothing has changed.

Terminal Output Screenshot(s)

 module.consumer_service["user-index-worker"].data.aws_ecs_task_definition.this[0] will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "aws_ecs_task_definition" "this" {
      + arn                  = (known after apply)
      + arn_without_revision = (known after apply)
      + execution_role_arn   = (known after apply)
      + family               = (known after apply)
      + id                   = (known after apply)
      + network_mode         = (known after apply)
      + revision             = (known after apply)
      + status               = (known after apply)
      + task_definition      = "staging-consumer-user-index-worker"
      + task_role_arn        = (known after apply)
    }

  # module.consumer_service["user-index-worker"].aws_ecs_task_definition.this[0] must be replaced
+/- resource "aws_ecs_task_definition" "this" {
      ~ arn                      = "arn:aws:ecs:us-east-1:175effefefefef0:task-definition/staging-consumer-user-index-worker:73" -> (known after apply)
      ~ arn_without_revision     = "arn:aws:ecs:us-east-1:175effefefefef0:task-definition/staging-consumer-user-index-worker" -> (known after apply)
      ~ container_definitions    = jsonencode(
          ~ [
 
              ~ {
                  - mountPoints            = []
                    name                   = "user-index-worker"
                  - portMappings           = []
                  - volumesFrom            = []
                    # (14 unchanged attributes hidden)
                },
            ] # forces replacement
        )
      ~ id                       = "staging-consumer-user-index-worker" -> (known after apply)
      ~ revision                 = 73 -> (known after apply)
        tags                     = {
            "environment" = "staging"
            "managed-by"  = "terraform"
        }
        # (9 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

Additional context

Add a flag to enable ECS exec for debugging purposes

Is your request related to a new offering from AWS?

Is this functionality available in the AWS provider for Terraform? See CHANGELOG.md, too.

  • Yes ✅: based on the following issue, it looks like enable_ecs_execute was released in version 3.34.0

Is your request related to a problem? Please describe.

Turning on ECS exec for debugging issues in a container is especially useful during the initial bootstrap of a service. However, the list of considerations for turning this feature on is long, which leaves more room for error. Additionally, the error message provided by aws ecs execute-command is quite generic, making it difficult to trace problems in the configuration; the AWS team has developed a tool to pinpoint errors in the configuration. Providing a simple way to enable this feature for a container could save time.

Describe the solution you'd like.

Add two flags, one that operates at the task definition level in charge of adding the following statements to the task role:

{
   "Version": "2012-10-17",
   "Statement": [
       {
       "Effect": "Allow",
       "Action": [
            "ssmmessages:CreateControlChannel",
            "ssmmessages:CreateDataChannel",
            "ssmmessages:OpenControlChannel",
            "ssmmessages:OpenDataChannel"
       ],
      "Resource": "*"
      }
   ]
}

And another that operates at the container level to enable the following features:

"linuxParameters": {
  "initProcessEnabled": true
}
"readOnlyFilesystem": false

The interface exposed to the end user would look something like this:

module "ecs_service" {
  ...
  # Enable ECS exec for all containers
  container_definition_defaults = {
    enable_ecs_exec = true
  }

  container_definitions = {
    some_container = {
      ...
      # Enable ECS exec for a specific container
      enable_ecs_exec = true
    }
  }

  # Adds the required statements to the task role and sets `enable_execute_command = true`
  enable_ecs_exec = true
}

Design considerations:

In the event of overlapping configurations, the user's configuration takes precedence; here are a few examples:

If enable_ecs_exec is set to true at the container level and readonly_root_filesystem is set to true the final configuration would look like this:

"linuxParameters": {
  "initProcessEnabled": true
}
"readOnlyFilesystem": true

If enable_ecs_exec is set to true at the container level and linux_parameters is defined, the parameters would be merged:

linux_parameters = {
  capabilities = {
    add = [...]
  }
}

enable_ecs_exec = true

Container configuration:

"linuxParameters": {
  "capabilities": {
    "add": [...]
  }
  "initProcessEnabled": true
}
"readOnlyFilesystem": true

Describe alternatives you've considered.

Adding all of the configurations by hand or creating a wrapper around this module appears to be a workaround when setting up this feature on ECS; however, more people could benefit if this feature were added to a widely adopted module like this one.

Additional context

It is possible to add all the required configurations by hand with the features exposed by this module, however, it is less convenient and more difficult to troubleshoot errors in the configuration.

LimitExceeded for task policy

Description

When trying to apply permissions for my ECS service to have access to some SSM parameters:

task_exec_ssm_param_arns = flatten([
    dependency.shared_ssm.outputs.ssm_parameters_with_ignore_changes_arn,
    dependency.shared_ssm.outputs.ssm_parameters_without_ignore_changes_arn,
    dependency.api_ssm.outputs.ssm_parameters_without_ignore_changes_arn,
    dependency.api_ssm.outputs.ssm_parameters_with_ignore_changes_arn
  ])

But when applying the changes, I get the following error:

aws_iam_policy.task_exec[0]: Modifying... [id=arn:aws:iam::0000000000:policy/staging-ecs-execution-role]
╷
│ Error: updating IAM Policy (arn:aws:iam::0000000000:policy/staging-ecs-execution-role): LimitExceeded: Cannot exceed quota for PolicySize: 6144
│ 	status code: 409, request id: 52ab06e4-c96a-4ea8-a617-14f65667f9c1
│ 
│   with aws_iam_policy.task_exec[0],
│   on main.tf line 875, in resource "aws_iam_policy" "task_exec":
│  875: resource "aws_iam_policy" "task_exec" {

Is there a way to attach additional policies and not put everything under aws_iam_policy.task_exec to escape the limit?
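
A hedged workaround that keeps the generated policy small: grant access by parameter path wildcards instead of enumerating every parameter ARN (the paths below are hypothetical):

task_exec_ssm_param_arns = [
  "arn:aws:ssm:eu-central-1:000000000000:parameter/staging/shared/*",
  "arn:aws:ssm:eu-central-1:000000000000:parameter/staging/api/*",
]

Additional standalone managed policies can also be attached to the execution role outside the module with aws_iam_role_policy_attachment, so nothing new has to be squeezed into aws_iam_policy.task_exec.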

When I call module with create = false, then the terraform apply fails

Description

When I call the module with create = false, terraform apply fails with the error below:

Error: Invalid index

on .terraform/modules/ecs/main.tf line 60, in locals:
60: { for k, v in var.autoscaling_capacity_providers : k => merge(aws_ecs_capacity_provider.this[k], v) }
├────────────────
│ aws_ecs_capacity_provider.this is object with no attributes
│
│ The given key does not identify an element in this collection value.

Versions

  • Module version [Required]: 4.0.2 (latest)
  • Terraform version: hashicorp/terraform:1.2.2
  • Provider version(s): hashicorp/aws v4.19.0

Reproduction Code [Required]

module "ecs" {

source = "terraform-aws-modules/ecs/aws"

create = false

...
}

Expected behavior

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration
and found no differences, so no changes are needed.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Actual behavior

Error: Invalid index

on .terraform/modules/ecs/main.tf line 60, in locals:
60: { for k, v in var.autoscaling_capacity_providers : k => merge(aws_ecs_capacity_provider.this[k], v) }
├────────────────
│ aws_ecs_capacity_provider.this is object with no attributes
│
│ The given key does not identify an element in this collection value.

Terminal Output Screenshot(s)

Additional context

If I change the line below in the module's main.tf (line 60, in locals), then it succeeds.

From

{ for k, v in var.autoscaling_capacity_providers : k => merge(aws_ecs_capacity_provider.this[k], v) }

To

{ for k, v in var.autoscaling_capacity_providers : k => merge(aws_ecs_capacity_provider.this[k], v) if var.create }

Example with EC2 spot instances

Is your request related to a new offering from AWS?

Is this functionality available in the AWS provider for Terraform? See CHANGELOG.md, too.

  • No 🛑: please wait to file a request until the functionality is available in the AWS provider
  • Yes ✅: please list the AWS provider version which introduced this functionality

Is your request related to a problem? Please describe.

This module is great, just wondering if you could provide an example with EC2 spot instances, in addition to the Fargate spot examples.

Describe the solution you'd like.

Basically, what I just described - an example with EC2 spot instances.
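
Not an official example, but a minimal sketch of how EC2 Spot capacity could be wired up, reusing the Auto Scaling module shown earlier on this page together with the autoscaling_capacity_providers input (names are assumed, and the instance_market_options passthrough is an assumption about the autoscaling module):

module "asg_spot" {
  source = "terraform-aws-modules/autoscaling/aws"

  name = "ecs-spot-asg"

  # ... image_id, instance_type, subnets, IAM instance profile, and user_data
  # as in the Auto Scaling module example earlier on this page

  # Request Spot capacity via the launch template
  instance_market_options = {
    market_type = "spot"
  }

  min_size = 0
  max_size = 5
}

module "ecs" {
  source = "terraform-aws-modules/ecs/aws"

  cluster_name = "ec2-spot-example"

  autoscaling_capacity_providers = {
    spot = {
      auto_scaling_group_arn = module.asg_spot.autoscaling_group_arn
      managed_scaling = {
        status = "ENABLED"
      }
      default_capacity_provider_strategy = {
        weight = 1
      }
    }
  }
}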

Describe alternatives you've considered.

Additional context

Example code error - Capacity Provider is invalid index.

Description

The sample module code does not provision correctly, most likely due to an obsolete AWS API call.

Versions

  • Terraform: v1.0.9
  • Provider(s): 3.63.0
  • Module: ECS = v3.4.0 (latest)

Reproduction

Steps to reproduce the behavior:

  1. Create Terraform file from sample code (see below).
  2. Run terraform init
  3. Run terraform plan

Code Snippet to Reproduce

Copied from official documentation page:

module "ecs" {
  source = "terraform-aws-modules/ecs/aws"

  name = "my-ecs"

  container_insights = true

  capacity_providers = ["FARGATE", "FARGATE_SPOT"]

  default_capacity_provider_strategy = [
    {
      capacity_provider = "FARGATE_SPOT"
    }
  ]

  tags = {
    Environment = "Development"
  }
}

Expected behavior

A working sample ECS cluster to be created.

Actual behavior

This is the output with TF_LOG set to DEBUG.

Waiting for the plan to start...

Terraform v1.0.9
on linux_amd64
Configuring remote state backend...
Initializing Terraform configuration...
╷
│ Error: Invalid index
│ 
│   on .terraform/modules/ecs/main.tf line 13, in resource "aws_ecs_cluster" "this":
│   13:       capacity_provider = strategy.value["capacity_provider"]
│     ├────────────────
│     │ strategy.value is map of string with 1 element
│ 
│ The given key does not identify an element in this collection value.

Additional context

It looks like the AWS API has changed and the module's call no longer works. I have tried creating this resource with just FARGATE_SPOT and FARGATE by themselves, but this error does not seem to be based on that.

I am running this on Terraform Cloud.

Let me know if any other info would be helpful.

Terraform to control ECS Service Autoscaling completely - Scaling Action part is missing

Policies are created and working successfully, but ECS service autoscaling fails because Terraform didn't update the value under 'Scaling Action'.

Please take a look at the screenshot if this is not understandable.

(screenshot)

resource "aws_appautoscaling_policy" "ecs_policy-down" {
name = "${aws_ecs_service.service.name}-scale-down"
policy_type = "StepScaling"
resource_id = aws_appautoscaling_target.ecs_target.resource_id
scalable_dimension = aws_appautoscaling_target.ecs_target.scalable_dimension
service_namespace = aws_appautoscaling_target.ecs_target.service_namespace

step_scaling_policy_configuration {
adjustment_type = "ChangeInCapacity"
cooldown = 60
metric_aggregation_type = "Maximum"

step_adjustment {
  scaling_adjustment = -1
  metric_interval_upper_bound = 0
}

}
}
Error: Failed to update scaling policy: ValidationException: There must be a step adjustment with an unspecified upper bound when one step adjustment has a positive upper bound

We do not seem to have a way to handle this option from Terraform. Please help us understand how we can handle this? Thanks!
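
This validation comes from Application Auto Scaling rather than from the module: whenever a step adjustment has a positive upper bound, another step in the same policy must leave its upper bound unspecified. A hedged sketch of a scale-up policy shaped to satisfy it (names reused from the snippet above; the bounds are illustrative):

resource "aws_appautoscaling_policy" "ecs_policy-up" {
  name               = "${aws_ecs_service.service.name}-scale-up"
  policy_type        = "StepScaling"
  resource_id        = aws_appautoscaling_target.ecs_target.resource_id
  scalable_dimension = aws_appautoscaling_target.ecs_target.scalable_dimension
  service_namespace  = aws_appautoscaling_target.ecs_target.service_namespace

  step_scaling_policy_configuration {
    adjustment_type         = "ChangeInCapacity"
    cooldown                = 60
    metric_aggregation_type = "Maximum"

    step_adjustment {
      scaling_adjustment          = 1
      metric_interval_lower_bound = 0
      metric_interval_upper_bound = 10
    }

    # One step must leave the upper bound unspecified (open-ended), which is
    # what the ValidationException is asking for.
    step_adjustment {
      scaling_adjustment          = 2
      metric_interval_lower_bound = 10
    }
  }
}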

Getting an issue with the cluster config. I am getting this error. Please suggest. Thanks.

terraform plan
╷
│ Error: Unsupported argument
│
│ on main.tf line 13, in module "ecs":
│ 13: cluster_name = local.name
│
│ An argument named "cluster_name" is not expected here.
╵
╷
│ Error: Unsupported argument
│
│ on main.tf line 15, in module "ecs":
│ 15: cluster_configuration = {
│
│ An argument named "cluster_configuration" is not expected here.
╵
╷
│ Error: Unsupported argument
│
│ on main.tf line 24, in module "ecs":
│ 24: autoscaling_capacity_providers = {
│
│ An argument named "autoscaling_capacity_providers" is not expected here.
╵
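These arguments were introduced in newer major versions of the module, so errors like this usually mean an older cached or pinned module version is being used. A hedged sketch (the version constraint is illustrative):

module "ecs" {
  source  = "terraform-aws-modules/ecs/aws"
  version = "~> 5.0" # cluster_configuration and autoscaling_capacity_providers require a recent major version

  cluster_name = local.name
  # ...
}

Re-running terraform init -upgrade afterwards refreshes the cached module under .terraform/modules.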

Add support for specifying configuration / log_configuration block

The aws_ecs_cluster resource provides the ability to specify a logging configuration to ship logs to cloud_watch or a s3_bucket. This feature was introduced back in 3.46.0. Is it possible to get this feature added to the module?

Example:

resource "aws_ecs_cluster" "test" {
  name = "example"

  configuration {
    execute_command_configuration {
      kms_key_id = aws_kms_key.example.arn
      logging    = "OVERRIDE"

      log_configuration {
        cloud_watch_encryption_enabled = true
        cloud_watch_log_group_name     = aws_cloudwatch_log_group.example.name
      }
    }
  }
}

Please update this module

I tried to run and test this module with the current Terraform version and got dozens of errors, for example:

Error: Missing required argument

  on main.tf line 56, in data "aws_ami" "amazon_linux_ecs":
  56: data "aws_ami" "amazon_linux_ecs" {

The argument "owners" is required, but no definition was found.


Error: Unsupported block type

  on main.tf line 110, in data "template_file" "user_data":
 110:   vars {

Blocks of type "vars" are not expected here. Did you mean to define argument
"vars"? If so, use the equals sign to assign it a value.

runtime_platform forces replacement with each apply.

Description

Each time I run terraform apply I see a new block in the output:

module.ecs_service.aws_ecs_task_definition.this[0] must be replaced
+/- resource "aws_ecs_task_definition" "this" {
      + runtime_platform { # forces replacement
          + cpu_architecture        = "X86_64" # forces replacement
          + operating_system_family = "LINUX" # forces replacement
        }
}

This forces my task to be replaced even if there are no other changes. I am also not able to work around this issue by adding this block to my configuration.

I am not able to share the complete configuration in this issue report, however I could send it via other means if required.

  • [✅] ✋ I have searched the open/closed issues and my issue is not listed.

⚠️ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (! ONLY if state is stored remotely, which hopefully you are following that best practice!): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check if the issue still persists
    ✅

Versions

  • Module version [Required]:
{
  "Modules": [
    {
      "Key": "",
      "Source": "",
      "Dir": "."
    },
    {
      "Key": "ecs_cluster",
      "Source": "registry.terraform.io/terraform-aws-modules/ecs/aws//modules/cluster",
      "Version": "5.2.1",
      "Dir": ".terraform/modules/ecs_cluster/modules/cluster"
    },
    {
      "Key": "ecs_service",
      "Source": "registry.terraform.io/terraform-aws-modules/ecs/aws//modules/service",
      "Version": "5.2.1",
      "Dir": ".terraform/modules/ecs_service/modules/service"
    },
    {
      "Key": "ecs_service.container_definition",
      "Source": "../container-definition",
      "Dir": ".terraform/modules/ecs_service/modules/container-definition"
    },
    {
      "Key": "efs",
      "Source": "registry.terraform.io/terraform-aws-modules/efs/aws",
      "Version": "1.2.0",
      "Dir": ".terraform/modules/efs"
    }
  ]
}
  • Terraform version:
    Terraform v1.5.3
  • Provider version(s):
    Terraform v1.5.3
    on darwin_amd64
  • provider registry.terraform.io/hashicorp/aws v5.9.0

Reproduction Code [Required]

data "aws_availability_zones" "available" {}

locals {
  region       = "region-1"
  cluster_name = "region-1-cluster"
  name         = "matomo"

  vpc_cidr = "172.16.0.0/18"

  container_name = "matomo"
  container_port = 80

  tags = {
    Name       = local.name
    Repository = "https://github.com/terraform-aws-modules/terraform-aws-ecs"
  }
}

module "ecs_cluster" {
  source = "terraform-aws-modules/ecs/aws//modules/cluster"

  cluster_name = local.cluster_name

  # Capacity provider
  fargate_capacity_providers = {
    FARGATE = {
      default_capacity_provider_strategy = {
        weight = 50
        base   = 20
      }
    }
    FARGATE_SPOT = {
      default_capacity_provider_strategy = {
        weight = 50
      }
    }
  }

  tags = local.tags
}

module "ecs_service" {
  source = "terraform-aws-modules/ecs/aws//modules/service"

  name        = local.name
  cluster_arn = module.ecs_cluster.arn

  cpu    = 512
  memory = 2048

  create_task_exec_iam_role = false
  create_tasks_iam_role     = false
  task_exec_iam_role_arn    = aws_iam_role.ecs_execution_role.arn
  tasks_iam_role_arn        = aws_iam_role.ecs_generic_task_role.arn
  create_security_group     = false
  security_group_ids        = ["${aws_security_group.matomo_task_sg.id}"]

  # Container definition(s)
  container_definitions = {

    fluent-bit = {
      essential = true
      image     = "public.ecr.aws/aws-observability/aws-for-fluent-bit:stable"
      name      = "log_router"
      firelens_configuration = {
        type = "fluentbit"
      }
      health_check = {
        command      = ["CMD-SHELL", "echo '{\"health\": \"check\"}' | nc 127.0.0.1 8877 || exit 1"]
        interval     = 10
        retries      = 2
        start_period = 30
        timeout      = 5
      }
      log_configuration = {
        logDriver = "awslogs"
        options = {
          awslogs-group         = "firelens-container"
          awslogs-region        = local.region
          awslogs-create-group  = "true"
          awslogs-stream-prefix = "firelens"
        }
      }
      memory_reservation = 50
      user               = "0"
    }

    (local.container_name) = {
      cpu       = 256
      memory    = 1024
      essential = true
      image     = "matomo:4.12.1-apache"
      port_mappings = [
        {
          name          = local.container_name
          containerPort = local.container_port
          hostPort      = local.container_port
          protocol      = "tcp"
        }
      ]
      health_check = {
        command      = ["CMD-SHELL", "curl -f http://localhost/ || exit 1"]
        interval     = 10
        retries      = 3
        start_period = 60
        timeout      = 5
      }

      mount_points = [
        {
          containerPath = "/var/www/html"
          sourceVolume  = "matomo-efs"
        }
      ]
      environment = [
        {
          name  = "MATOMO_DATABASE_HOST",
          value = "murasites.craf0zdipzos.us-east-1.rds.amazonaws.com"
        },
        {
          name  = "MATOMO_DATABASE_ADAPTER",
          value = "mysql"
        },
        {
          name  = "MATOMO_DATABASE_DBNAME",
          value = "matomo"
        },
      ]
      #   secrets = [
      #     {
      #       name      = "MATOMO_DATABASE_USERNAME"
      #       valueFrom = data.aws_secretsmanager_secret.matomo_db_username.arn
      #     },
      #     {
      #       name      = "MATOMO_DATABASE_PASSWORD"
      #       valueFrom = data.aws_secretsmanager_secret.matomo_db_password.arn
      #     },
      #   ]
      # Example image used requires access to write to root filesystem
      readonly_root_filesystem = false

      dependencies = [{
        containerName = "log_router"
        condition     = "START"
      }]

      #   enable_cloudwatch_logging = true
      log_configuration = {
        logDriver = "awsfirelens"
        options = {
          Name              = "cloudwatch"
          region            = local.region
          log_group_name    = format("/aws/ecs/containerinsights/%s/application", local.cluster_name)
          auto_create_group = "true"
          log_stream_name   = local.container_name
          retry_limit       = "2"
        }
      }

      memory_reservation = 100
    }
  }

  volume = {
    matomo-efs = {
      name = "matomo-efs"
      efs_volume_configuration = {
        file_system_id          = "fs-1234abcd"
        root_directory          = "/matomo"
        transit_encryption      = "ENABLED"
        transit_encryption_port = 2049
      }
    }
  }

  load_balancer = {
    service = {
      target_group_arn = aws_lb_target_group.target_group.arn
      container_name   = local.container_name
      container_port   = 80
    }
  }

  subnet_ids = [aws_subnet.public1.id, aws_subnet.public2.id, aws_subnet.public3.id, aws_subnet.private1.id, aws_subnet.private2.id, aws_subnet.private3.id]

  depends_on = [module.ecs_cluster, aws_iam_role.ecs_execution_role, aws_iam_role.ecs_generic_task_role, aws_security_group.matomo_alb_sg, aws_security_group.matomo_task_sg]

  tags = local.tags

}

Steps to reproduce the behavior:

  1. Run: terraform apply
  2. Immediately re-run terraform plan OR terraform apply
  3. You should see the service/task being replaced with the output:
      + runtime_platform { # forces replacement
          + cpu_architecture        = "X86_64" # forces replacement
          + operating_system_family = "LINUX" # forces replacement
        }

No

Yes βœ…

N/A

Expected behavior

I expect to see no changes immediately after applying and when I have not made any changes to the configuration.

Actual behavior

After applying and making no changes I see forced replacement on the service/task resource.

Terminal Output Screenshot(s)

Additional context

port_mappings: InvalidParameterException, is used in more than one port mapping for container

Description

When defining a container definition under services, I am defining port_mappings for port 53 tcp and udp. I am getting this error:

InvalidParameterException: Container port 53 is used in more than one port mapping for container coredns-mysql

I can create the same port with both tcp and udp in the AWS GUI without any issues, and the JSON task file reflects the proper ports being opened.

  port_mappings = [
    {
      name          = local.container_name
      containerPort = 53
      hostPort      = 53
      protocol      = "tcp"
    },
    {
      name          = "${local.container_name}-53-udp"
      containerPort = 53
      hostPort      = 53
      protocol      = "udp"
    }
  ]

Versions

  • Module version [Required]:
    5.0.1

  • Terraform version:
    1.4.2

  • Provider version(s):
    hashicorp/aws v4.67.0

Reproduction Code [Required]

  port_mappings = [
    {
      name          = local.container_name
      containerPort = 53
      hostPort      = 53
      protocol      = "tcp"
    },
    {
      name          = "${local.container_name}-53-udp"
      containerPort = 53
      hostPort      = 53
      protocol      = "udp"
    }
  ]

Steps to reproduce the behavior:

Are you using workspaces? No
Have you cleared the local cache (see Notice section above)? Yes
Add the same port to port_mappings using the alternate protocol.

Expected behavior

I should be able to add port 53 tcp and port 53 udp to port_mappings.

Actual behavior

I get this error: InvalidParameterException: Container port 53 is used in more than one port mapping for container coredns-mysql

Terminal Output Screenshot(s)

Additional context

If you manually add the additional protocol through the AWS GUI, it will not allow you to define a "Port name" and will show "Containername-Port-Protocol" greyed out in the "Port name" field instead.
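For reference, here is a minimal sketch of a raw aws_ecs_task_definition carrying both mappings, which is roughly the JSON the GUI produces; the family, image, and sizes are illustrative, and this only demonstrates the shape the module's port_mappings input would need to render:

resource "aws_ecs_task_definition" "coredns" {
  family                   = "coredns-mysql"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256
  memory                   = 512

  container_definitions = jsonencode([
    {
      name      = "coredns-mysql"
      image     = "coredns/coredns:latest" # illustrative image
      essential = true
      portMappings = [
        { containerPort = 53, hostPort = 53, protocol = "tcp" },
        { containerPort = 53, hostPort = 53, protocol = "udp" }
      ]
    }
  ])
}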

Unable to use cloudwatch group in several environments.

name = "/aws/ecs/${var.service}/${var.name}"

Since the service name could be the same across environments, it can cause conflicts when a CloudWatch group already exists while applying to another environment.

If I add the awslogs-create-group = true option to the log driver, then I can't specify the retention period.

The only way to achieve this is to create a CloudWatch group outside of this module.
It would be great if the module could support a variable override for the CloudWatch group name.
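A minimal sketch of that workaround, assuming the plain awslogs driver: create the log group per environment outside the module so its name and retention are explicit, then point the container's log_configuration at it (the naming scheme and variables below are illustrative):

resource "aws_cloudwatch_log_group" "service" {
  name              = "/aws/ecs/${var.environment}/${var.service}/${var.name}" # illustrative per-environment name
  retention_in_days = 30
}

# In the container definition, reference the pre-created group instead of
# asking the awslogs driver to create it.
log_configuration = {
  logDriver = "awslogs"
  options = {
    awslogs-group         = aws_cloudwatch_log_group.service.name
    awslogs-region        = var.region
    awslogs-stream-prefix = var.name
  }
}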

Add support for volumes in task definitions

Is your request related to a new offering from AWS?

Is this functionality available in the AWS provider for Terraform? See CHANGELOG.md, too.
- Yes βœ…: the AWS provider added volume arguments for aws_ecs_task_definition (see the CHANGELOG entry for v3.45.0, June 10, 2021):
https://github.com/hashicorp/terraform-provider-aws/blob/main/CHANGELOG.md#3450-june-10-2021

Is your request related to a problem? Please describe.

Create services with task definitions that support volumes, e.g. EFS or FSx:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecs_task_definition#volume

This is part of finishing the TODO list for the service submodule:
https://github.com/terraform-aws-modules/terraform-aws-ecs/tree/master/modules/service

Describe the solution you'd like.

Create a module input to support the volume arguments of aws_ecs_task_definition; this is already on the TODO list in the README. A sketch of the desired input follows.
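A minimal sketch of the desired input shape (newer module versions expose a volume map on the service submodule, as used in the reproduction earlier on this page; the IDs and paths below are illustrative):

volume = {
  app-data = {
    name = "app-data"
    efs_volume_configuration = {
      file_system_id     = "fs-1234abcd" # illustrative
      root_directory     = "/app"
      transit_encryption = "ENABLED"
    }
  }
}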

Describe alternatives you've considered.

This third party module:
https://github.com/techservicesillinois/terraform-aws-ecs-service#network_configuration

Additional context

The third-party module has too many co-dependencies and is nearly sunset. For example, when creating a service with an LB, the default forward rule can't be used and a host condition is required, adding unnecessary complexity.

Using wait_until_stable_timeout

Description

I tried using wait_until_stable_timeout = "10m" in the service submodule, which the documentation lists as the default value, but it gave me an error. On further investigation, I found that the variable's type is number while the documented default "10m" is a string, hence the error.

variable "wait_until_stable_timeout" {
description = "Wait timeout for task set to reach `STEADY_STATE`. Valid time units include `ns`, `us` (or Β΅s), `ms`, `s`, `m`, and `h`. Default `10m`"
type = number
default = null
}

If your request is for a new feature, please use the Feature request template.

  • [βœ…] βœ‹ I have searched the open/closed issues and my issue is not listed.

Versions

  • Module version [Required]: 5.2.0

  • Terraform version: v1.5.4

  • Provider version(s):
    registry.terraform.io/hashicorp/aws: 5.9.0

Reproduction Code [Required]

locals {
  crewfare_frontend_task_definition_name = format("%s-%s", var.application_environment, var.crewfare_frontend_container_name)
  next_js_frontend_task_definition_name  = format("%s-%s", var.application_environment, var.next_js_frontend_container_name)
  backstage_nginx_task_definition_name   = format("%s-%s", var.application_environment, var.backstage_nginx_container_name)
}

module "ecs_service" {
  source  = "terraform-aws-modules/ecs/aws"
  version = "5.2.0"

  create = true

  cluster_name     = "${var.application_environment}-${var.application_name}"
  cluster_settings = { "name" : "containerInsights", "value" : "enabled" }

  create_task_exec_iam_role = true
  task_exec_iam_role_name   = "${var.application_environment}-${var.application_name}-task-execution"
  create_task_exec_policy   = true

  default_capacity_provider_use_fargate = true
  fargate_capacity_providers = {
    FARGATE      = {}
    FARGATE_SPOT = {}
  }

  create_cloudwatch_log_group            = true
  cloudwatch_log_group_retention_in_days = 90

  services = {
    (local.backstage_nginx_task_definition_name) = {
      cpu    = 512
      memory = 1024

      container_definitions = {
        (var.backstage_nginx_container_name) = {
          cpu       = 512
          memory    = 1024
          essential = true
          image     = var.backstage_nginx_image
          port_mappings = [
            {
              name          = var.backstage_nginx_container_name
              containerPort = var.backstage_nginx_container_port
              hostPort      = var.backstage_nginx_container_port
              protocol      = "tcp"
            }
          ]
          readonly_root_filesystem  = false
          enable_cloudwatch_logging = true
        }
      }

      load_balancer = {
        service = {
          target_group_arn = data.terraform_remote_state.ecs-elb.outputs.backstage_nginx_target_group_arn
          container_name   = var.backstage_nginx_container_name
          container_port   = var.backstage_nginx_container_port
        }
      }

      subnet_ids = data.terraform_remote_state.vpc.outputs.private_subnets
      security_group_rules = {
        alb_ingress = {
          type                     = "ingress"
          from_port                = var.backstage_nginx_container_port
          to_port                  = var.backstage_nginx_container_port
          protocol                 = "tcp"
          description              = "${var.backstage_nginx_container_name} port"
          source_security_group_id = data.terraform_remote_state.ecs-elb.outputs.lb_security_group_id
        }
        egress_all = {
          type        = "egress"
          from_port   = 0
          to_port     = 0
          protocol    = "-1"
          cidr_blocks = ["0.0.0.0/0"]
        }
      }

      force_new_deployment      = true
      wait_for_steady_state     = true
      wait_until_stable         = true
      wait_until_stable_timeout = "10m"
    }
  }
}

Steps to reproduce the behavior:
terraform apply

Are you using workspaces? No
Have you cleared the local cache (see Notice section above)? Yes

Expected behavior

There should be no error.

Actual behavior

ERROR

Error: Invalid value for input variable

  on .terraform/modules/ecs_service/main.tf line 155, in module "service":
 155:   wait_until_stable_timeout = try(each.value.wait_until_stable_timeout, null)

The given value is not suitable for
module.ecs_service.module.service["develop-ecs-backstage-nginx"].var.wait_until_stable_timeout
declared at
.terraform/modules/ecs_service/modules/service/variables.tf:546,1-37: a
number is required.

P.S. Please ignore the line numbers in the output, as I have shared updated code here.

Terminal Output Screenshot(s)

Additional context

ECS module always trigger an in-place update for containerInsights

Description

Please provide a clear and concise description of the issue you are encountering, your current setup, and what steps led up to the issue. If you can provide a reproduction, that will help tremendously.

⚠️ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (! ONLY if state is stored remotely, which hopefully you are following that best practice!): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check if the issue still persists

Versions

  • Terraform: 1.0.5
  • Provider(s): aws (3.56.0)
  • Module: ecs (3.3.0)

Reproduction

Steps to reproduce the behavior:

  1. terraform apply the snippet below.
  2. Set container_insights to true/false, or do not declare it at all.
  3. Run terraform apply a few more times.
  # module.ecs.aws_ecs_cluster.this[0] will be updated in-place
  ~ resource "aws_ecs_cluster" "this" {
        id                 = "arn:aws:ecs:eu-west-1:1234567890:cluster/test-ecs"
        name               = "test-ecs"
        tags               = {}
        # (3 unchanged attributes hidden)


      + setting {
          + name  = "containerInsights"
          + value = "disabled"
        }
        # (1 unchanged block hidden)
    }

Code Snippet to Reproduce

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  region = "eu-west-1"
}

module "ecs" {
  source  = "terraform-aws-modules/ecs/aws"
  version = "~> 3.3"

  name               = "test-ecs"
  capacity_providers = ["FARGATE_SPOT"]

  default_capacity_provider_strategy = [
    {
      capacity_provider = "FARGATE_SPOT"
      weight            = "1"
    }
  ]
}

Expected behavior

No infra changes.

Actual behavior

The module will try to set the containerInsights setting every time.
If I set container_insights = true, it will become

      + setting {
          + name  = "containerInsights"
          + value = "enabled"
        }

and it will still try to apply that setting every time.

Separating service and task level placement constraints

I am working on a project where I need to use the placement constraint type "distinctInstance"; however, the placement_constraints variable applies to both the task definition and the service. The issue is that distinctInstance can only be used in certain contexts (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-placement-constraints.html), specifically: "The distinctInstance constraint places each task in the group on a different instance. It can be specified with the following actions: CreateService, UpdateService, and RunTask."

Describe the solution you'd like.

I see two possible solutions

  1. Use two separate variables task_placement_constraints and service_placement_constraints
  2. In the aws_ecs_task_definition resource, filter the for_each to remove distinctInstance types (sketched below)
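A rough sketch of option 2, assuming placement_constraints is a map of objects with type and expression attributes (the resource and names below are illustrative, not the module's actual code):

locals {
  # Drop distinctInstance for the task definition; it is only valid for
  # CreateService, UpdateService, and RunTask.
  task_placement_constraints = {
    for k, v in var.placement_constraints : k => v
    if v.type != "distinctInstance"
  }
}

resource "aws_ecs_task_definition" "example" {
  family = "example"
  container_definitions = jsonencode([
    { name = "app", image = "public.ecr.aws/docker/library/busybox:latest", essential = true, memory = 128 }
  ])

  dynamic "placement_constraints" {
    for_each = local.task_placement_constraints
    content {
      type       = placement_constraints.value.type
      expression = try(placement_constraints.value.expression, null)
    }
  }
}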

Describe alternatives you've considered.

Creating the task definition manually and then passing its ARN into the module; however, this would take away a lot of the module's features and is thus not preferred.

ECS with EC2 autoscaling cannot use DIND but dockerd daemon starts successfully after enabling privileged = true

Description

I cannot mount /var/run/docker.sock into the task container even though I define mountPoints and volume in the module.

container_definitions

mountPoints = [
  {
    readOnly      = null
    containerPath = "/var/run/docker.sock"
    sourceVolume  = "docker_sock"
  }
]

And in services on the same level as container definitions:

volume = [{
  name = "docker_sock"
  host = {
    sourcePath = "/var/run/docker.sock"
  }
}]
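For comparison, a hedged sketch of how those pieces could fit together on the EC2 launch type: a host-path volume, the matching mount point, and privileged mode on the container. Whether your module version expects host_path or a nested host block in the volume input should be checked against the service submodule's variables; the image is illustrative:

container_definitions = {
  dind = {
    image      = "docker:24-dind" # illustrative
    essential  = true
    privileged = true # bind-mounting /var/run/docker.sock and running DinD needs privileged mode
    mount_points = [
      {
        containerPath = "/var/run/docker.sock"
        sourceVolume  = "docker_sock"
        readOnly      = false
      }
    ]
  }
}

volume = {
  docker_sock = {
    name      = "docker_sock"
    host_path = "/var/run/docker.sock" # may need to be host = { sourcePath = ... } depending on module version
  }
}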

If your request is for a new feature, please use the Feature request template.

  • βœ‹ I have searched the open/closed issues and my issue is not listed.

Versions

  • Module version [Required]: version=5.2.1

Reproduction Code [Required]

Steps to reproduce the behaviour:

When providing the above-mentioned arguments to the resource, the volume is not mounted and the container fails with the following error:

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.

Expected behaviour

I can start dockerd by calling the binary from the entrypoint/command, but there is no other place to execute additional commands.

Actual behavior

Container is not able to run DIND

Terminal Output Screenshot(s)

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.

Additional context

The task role arn is not getting updated.

Description

I am using this module to create an ECS cluster, services, and containers. I also have an RDS database. The containers in the service need to connect to the database using IAM auth. I have created an aws_iam_role resource but am not able to attach this role to the containers. (Please look at the code snippets below for more clarity.)

  • βœ‹ I have searched the open/closed issues and my issue is not listed.

Versions

  • Module version [Required]:

  • Terraform version:
    Terraform v1.5.0

  • Provider version(s):
    provider registry.terraform.io/hashicorp/aws v5.5.0

Reproduction Code [Required]

image

Line 34 in screenshot below is not taking effect.
image

Module defaults are overwriting USER set by docker image with root

Is your request related to a problem? Please describe.

With the changes introduced in this PR, the module now overwrites the user set by the Docker image with the root user by default. I don't think this should be the default behavior of the module.

Describe the solution you'd like.

Revert this line

Describe alternatives you've considered.

Or we could just keep it the way it is. It's no biggie.

Additional context

This unexpectedly broke my GitHub Actions runner until I explicitly set the user.

InvalidParameterException: You must set logging to 'OVERRIDE' when you supply a log configuration

Description

cluster_configuration = {
    execute_command_configuration = {
      logging = "NONE" # NONE DEFAULT, OVERRIDE 
    }
  }

This worked in 4.1.3 but does not in the latest version.

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecs_cluster

The docs are unchanged.

#76

This seems to be the likely culprit.

Is NONE still valid, or should we be doing something different now? I couldn't find anything obvious in the examples. Thanks.
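For comparison, a minimal sketch of the OVERRIDE shape the API expects whenever a log configuration is supplied (the log group name is illustrative); whether the module still accepts plain NONE without any log configuration is exactly the open question here:

cluster_configuration = {
  execute_command_configuration = {
    logging = "OVERRIDE"
    log_configuration = {
      cloud_watch_log_group_name = "/aws/ecs/execute-command" # illustrative
    }
  }
}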

AWS ecs services destroy fails - ResourceNotReady: exceeded wait attempts

Description

Destroying the ECS service fails after 10 minutes with the error:

module.ecs-devops-demo-01.aws_ecs_service.ecs_service: Still destroying... [id=arn:aws:ecs:us-east-2:167392278656:serv...ops-demo-01/devops-demo-01-ecs-service, 9m50s elapsed]

Error: error deleting ECS service (arn:aws:ecs:us-east-2:167392278656:service/devops-demo-01/devops-demo-01-ecs-service): ResourceNotReady: exceeded wait attempts

Versions

  • Terraform: 1.0.2
  • Provider(s): hashicorp/aws
  • Module: 3.46.0

container-definition module empty variable

Description

I was deploying the container-definition module and found out that I had to specify an empty environment variable, otherwise I would get the error below:

╷
│ Error: Variables not allowed
│
│   on <value for var.environment> line 1:
│   (source code not available)
│
│ Variables may not be used here.

If your request is for a new feature, please use the Feature request template.

  • βœ‹ I have searched the open/closed issues and my issue is not listed.

Versions

  • Module version [Required]: 5.2.1

  • Terraform version:

Terraform v1.5.0
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v5.13.1

Reproduction Code [Required]

environment = []

I am not sure whether that is expected or not.
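For context, a minimal sketch of a standalone container-definition submodule call with the explicit empty list workaround (name and image are illustrative):

module "container_definition" {
  source = "terraform-aws-modules/ecs/aws//modules/container-definition"

  name      = "app"
  image     = "public.ecr.aws/docker/library/nginx:latest"
  essential = true

  # Workaround: pass an explicit empty list.
  environment = []
}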

EC2 instances created but not added to ECS cluster

Summary

Ran terraform apply on complete-ecs/main.tf to create an ECS cluster. Confirmed in the AWS console that the ECS cluster gets created. However, the created EC2 instances are not added to the cluster.
image

Steps to reproduce:

  • Set the following values in complete-ecs/main.tf:
provider "aws" {
  region = "us-west-2"
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 2.0"

  name = local.name

  cidr = "10.1.0.0/16"

  azs             = ["us-west-2a", "us-west-2b"]
  private_subnets = ["10.1.1.0/24", "10.1.2.0/24"]
  public_subnets  = ["10.1.11.0/24", "10.1.12.0/24"]

  enable_nat_gateway = false # this is faster, but should be "true" for real

  tags = {
    Environment = local.environment
    Name        = local.name
  }
}
  # Auto scaling group
  asg_name                  = local.ec2_resources_name
  vpc_zone_identifier       = module.vpc.private_subnets
  health_check_type         = "EC2"
  min_size                  = 1
  max_size                  = 3
  desired_capacity          = 1
  wait_for_capacity_timeout = 0
  • Ran terraform init, plan, and apply

Passing Target ID from ECS to ALB Module

Hi support,

I would like to provide the input for target_id in the ALB (Application Load Balancer) module from the ECS (Elastic Container Service) module. Could you please let me know which value I should retrieve from the ECS module and how to retrieve it?

module "ecs" {
  source  = "terraform-aws-modules/ecs/aws"
  version = "5.0.1"
  cluster_name = "myecscluster"
  fargate_capacity_providers = {
    FARGATE = {
      default_capacity_provider_strategy = {
        weight = 50
      }
    }
    FARGATE_SPOT = {
      default_capacity_provider_strategy = {
        weight = 50
      }
    }
  }
  container_definitions = {
    myfront_end = {
      cpu       = 256
      memory    = 512
      essential = true
      image     = "650623132949.dkr.ecr.ap-south-1.amazonaws.com/myapp:latest"
      port_mappings = [
        {
          name          = "myapp"
          containerPort = 8080
          hostPort      = 8080
          protocol      = "tcp"
        }
      ]
      # Example image used requires access to write to root filesystem
      readonly_root_filesystem = false
      memory_reservation = 100
    }
  }
  service_connect_configuration = {
    namespace = "myapp"
    service = {
      client_alias = {
        port     = 8080
        dns_name = "testcase.lmapps.io"
      }
    }
  }
}

##ALB module

module "alb" {
  source  = "terraform-aws-modules/alb/aws"
  version = "8.0"
  name = "my-alb"
  load_balancer_type = "application"
  vpc_id             = "vpc-075fcd4310a0f65c0"
  subnets            = ["subnet-0f06b2cbbb7b9b4bf", "subnet-070fdb57f5e2fef8b"]
  security_groups    = [module.sgroupalb.security_group_id]
  target_groups = [
    {
      name_prefix      = "pref-"
      backend_protocol = "HTTP"
      backend_port     = 80
      target_type      = "ip"
      targets = {
        my_target = {
          target_id = "???????????"
          port = 80
        }       
      }
    }
  ]
  https_listeners = [
    {
      port               = 443
      protocol           = "HTTPS"
      certificate_arn    = "arn:aws:iam::123456789012:server-certificate/test_cert-123456789012"
      target_group_index = 0
    }
  ]
  http_tcp_listeners = [
    {
      port               = 80
      protocol           = "HTTP"
      target_group_index = 0
    }
  ]
  tags = {
    Environment = "Test"
  }
}
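With ECS services you normally do not pass static targets to the ALB target group at all: leave targets empty, keep target_type = "ip", and let the service register its own tasks by passing the target group ARN through the ECS module's service-level load_balancer input, as in the other examples on this page. A hedged sketch (the ALB output name should be checked against the ALB module version in use):

services = {
  myapp = {
    # ... container_definitions as above ...

    load_balancer = {
      service = {
        target_group_arn = module.alb.target_group_arns[0] # check the output name for your ALB module version
        container_name   = "myfront_end"
        container_port   = 8080 # must match the container's port mapping
      }
    }
  }
}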

Mixing ASG and Fargate capacity providers does not work

Description

When trying to use both Fargate and Autoscaling capacity providers I get:

Error: error updating ECS Cluster (dev-main-ecs-cluster) Capacity Providers: InvalidParameterException: A capacity provider strategy cannot contain a mix of capacity providers using Auto Scaling groups and Fargate providers. Specify one or the other and try again.

If your request is for a new feature, please use the Feature request template.

  • βœ‹ I have searched the open/closed issues and my issue is not listed.

Versions

  • Module version [Required]: 4.0.2

  • Terraform version:

1.0.11
  • Provider version(s):
aws = {
    source  = "hashicorp/aws"
    version = "~> 4.0"
}

Reproduction Code [Required]

module "ecs" {
  source  = "terraform-aws-modules/ecs/aws"
  version = "~> 4.0.2"

  cluster_name = local.cluster_name

  fargate_capacity_providers = {
    FARGATE = {
      default_capacity_provider_strategy = {
        weight = var.fargate_weight
      }
    }
    FARGATE_SPOT = {
      default_capacity_provider_strategy = {
        weight = var.fargate_spot_weight
      }
    }
  }

  autoscaling_capacity_providers = {
    linux = {
      auto_scaling_group_arn         = module.asg_linux.autoscaling_group_arn
      managed_termination_protection = "DISABLED"

      managed_scaling = {
        maximum_scaling_step_size = 5
        minimum_scaling_step_size = 1
        status                    = "ENABLED"
        target_capacity           = 90
      }

      default_capacity_provider_strategy = {
        weight = 60
        base   = 20
      }
    }
    windows = {
      auto_scaling_group_arn         = module.asg_windows.autoscaling_group_arn
      managed_termination_protection = "DISABLED"

      managed_scaling = {
        maximum_scaling_step_size = 2
        minimum_scaling_step_size = 1
        status                    = "ENABLED"
        target_capacity           = 90
      }

      default_capacity_provider_strategy = {
        weight = 40
      }
    }
  }

}
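The error message suggests the restriction applies to a single capacity provider strategy (here, the cluster's default strategy) rather than to attaching both kinds of providers to the cluster. A hedged sketch of one way around it: keep only the Fargate providers in the default strategy (drop default_capacity_provider_strategy from the ASG providers) and pin individual services to an Auto Scaling provider explicitly. The resource names and cluster output below are illustrative:

resource "aws_ecs_service" "windows_app" {
  name            = "windows-app"            # illustrative
  cluster         = module.ecs.cluster_id    # output name may differ by module version
  task_definition = aws_ecs_task_definition.windows_app.arn
  desired_count   = 1

  capacity_provider_strategy {
    capacity_provider = "windows" # the ASG capacity provider defined above
    weight            = 100
  }
}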

I'm not using workspaces.

Yes, I've cleared the local cache.

Terminal Output Screenshot(s)

image

ECS Container insights feature is not available in some regions

Is your request related to a new offering from AWS?

Is this functionality available in the AWS provider for Terraform? See CHANGELOG.md, too.

  • Yes βœ…: 3.74

Is your request related to a problem? Please describe.

The ECS Container Insights feature is not available in some regions (eu-south-1, for example); see hashicorp/terraform-provider-aws#22576.

Describe the solution you'd like.

Make ECS cluster settings dynamic

Describe alternatives you've considered.

  • Not using above mentioned regions
  • Not using the module

Additional context

None

Allow the service to NOT have task CPU and Memory value being applied for EC2 launch types when Container CPU and Memory are set

Is your request related to a new offering from AWS?

No

Is your request related to a problem? Please describe.

Currently, the default values for task-size CPU and memory are 1024 and 2048 respectively, which is different from the container CPU and memory values.

What I want is an option to ignore the task definition's task-size CPU and memory because I already set those values at the container level, especially when the launch type is EC2. This is allowed in the UI, but with this module the task-level defaults are applied without even declaring them.

Describe the solution you'd like.

The solution would be: if CPU and memory are set at the container level and the launch type is EC2, then completely ignore task-level CPU and memory unless they are explicitly declared. A sketch of the desired usage follows.
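A hedged sketch of what the requested usage could look like from the caller's side; this is the proposed shape, not current module behavior, and the service submodule variable names are assumed from its documentation:

module "ecs_service" {
  source = "terraform-aws-modules/ecs/aws//modules/service"

  name        = "sized-by-container" # illustrative
  cluster_arn = module.ecs_cluster.arn

  launch_type = "EC2"
  # cpu and memory intentionally omitted: under the proposal they would not
  # default to 1024/2048 when the launch type is EC2 and containers are sized.

  container_definitions = {
    app = {
      cpu       = 256
      memory    = 512
      essential = true
      image     = "public.ecr.aws/docker/library/nginx:latest" # illustrative
    }
  }
}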

Error: Invalid for_each argument

Description

I can build everything from scratch, but when I try to change something, I get this error:

Error: Invalid for_each argument

  on .terraform/modules/ecs/modules/service/main.tf line 525, in module "container_definition":
 525:   for_each = { for k, v in var.container_definitions : k => v if local.create_task_definition }
    │ local.create_task_definition is true
    │ var.container_definitions will be known only after apply

The "for_each" map includes keys derived from resource attributes that cannot be determined until apply, and so Terraform cannot determine the full set of keys that will identify the instances of this resource.

When working with unknown values in for_each, it's better to define the map keys statically in your configuration and place apply-time results only in the map values.

Alternatively, you could use the -target planning option to first apply only the resources that the for_each value depends on, and then apply a second time to fully converge.
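As the message suggests, the usual fix is to keep the container_definitions keys static and move anything computed at apply time into the values only; a minimal sketch (the ECR resource is illustrative):

container_definitions = {
  app = { # static key, known at plan time
    essential = true
    image     = "${aws_ecr_repository.app.repository_url}:latest" # apply-time value is fine here
    cpu       = 256
    memory    = 512
  }
}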
  • βœ‹ I have searched the open/closed issues and my issue is not listed.

Versions

  • Module version [Required]:

  • Terraform version:
    1.4.6 (Using Terraform Cloud)

  • Provider version(s):
    provider registry.terraform.io/hashicorp/aws v4.65.0

Error: creating ECS Capacity Provider (default): ClientException: The specified capacity provider already exists.

Description

Please provide a clear and concise description of the issue you are encountering, and a reproduction of your configuration (see the examples/* directory for references that you can copy+paste and tailor to match your configs if you are unable to copy your exact configuration). The reproduction MUST be executable by running terraform init && terraform apply without any further changes.

If your request is for a new feature, please use the Feature request template.

When trying to provision an ECS cluster using:

  1. ec2 autoscaling
  2. launch template
  3. spot instances

It applies with no issue the first time. However, on applying again to update changes in the configuration, I get the following error:

╷
│ Error: Reference to undeclared module
│
│   on ini_ecs.tf line 23, in module "ecs_cluster":
│   23:   auto_scaling_group_arn = module.asg.autoscaling_group_arn
│
│ No module call named "asg" is declared in the root module.
╵

I have nuked the .terraform directory and tried from a clean state, but the error persists

  • [ X ] βœ‹ I have searched the open/closed issues and my issue is not listed.

Versions

  • Module version [Required]:

  • Terraform version: v1.5.1

  • Provider version(s): v5.5.0

Reproduction Code [Required]

ini_ecs.tf

module "ecs_cluster" {
  source  = "terraform-aws-modules/ecs/aws//modules/cluster"
  # modules/cluster
  version = "5.2.0"

  cluster_name = local.base_name

  create_cloudwatch_log_group = false

  default_capacity_provider_use_fargate = false
  autoscaling_capacity_providers = {
    default = {
      auto_scaling_group_arn         = module.asg.autoscaling_group_arn
      managed_termination_protection = "DISABLED"

      # ecs manages scaling
      managed_scaling = {
        maximum_scaling_step_size = 5
        minimum_scaling_step_size = 1
        status                    = "ENABLED"
        target_capacity           = 50
      }

    }
  }

  tags = local.tags
}

ini_asg.tf

module "asg" {
  source  = "terraform-aws-modules/autoscaling/aws"
  version = "6.10.0"

  # Autoscaling group
  name = local.base_name
  use_name_prefix = true
  ignore_desired_capacity_changes = false

  min_size                  = 0
  max_size                  = 1
  desired_capacity          = 1
  wait_for_capacity_timeout = 0
  health_check_type         = "EC2"
  vpc_zone_identifier       = data.aws_subnets.private.ids

  initial_lifecycle_hooks = [
    {
      name                  = "ExampleStartupLifeCycleHook"
      default_result        = "CONTINUE"
      heartbeat_timeout     = 60
      lifecycle_transition  = "autoscaling:EC2_INSTANCE_LAUNCHING"
      notification_metadata = jsonencode({ "hello" = "world" })
    },
    {
      name                  = "ExampleTerminationLifeCycleHook"
      default_result        = "CONTINUE"
      heartbeat_timeout     = 180
      lifecycle_transition  = "autoscaling:EC2_INSTANCE_TERMINATING"
      notification_metadata = jsonencode({ "goodbye" = "world" })
    }
  ]

  instance_refresh = {
    strategy = "Rolling"
    preferences = {
      #checkpoint_delay       = 600
      #checkpoint_percentages = [35, 70, 100]
      #instance_warmup        = 300 # replace with lifecycle hook
      min_healthy_percentage = 90 # check
    }
    triggers = ["tag"] # check
  }

  # Launch template
  launch_template_name        = "ecs-${local.base_name}"
  launch_template_use_name_prefix = true
  launch_template_description = "ECS launch template"
  update_default_version      = true

  image_id          = local.ami_id
  instance_type     = var.instance_type
  ebs_optimized     = true # check
  enable_monitoring = true

  # IAM role & instance profile
  create_iam_instance_profile = true
  iam_role_name               = "ecs-${local.base_name}"
  iam_role_path               = "/ec2/"
  iam_role_description        = "ECS role for cluster: ${var.product} ${var.env}"
  iam_role_tags = {Name = "ecs-${local.base_name}"}
  iam_role_policies = {
    AmazonSSMManagedInstanceCore = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
  }

  block_device_mappings = [
    {
      # Root volume
      device_name = "/dev/xvda"
      no_device   = 0
      ebs = {
        delete_on_termination = true # check if we can reuse instead?
        encrypted             = true
        volume_size           = 50 # check
        volume_type           = "gp2" # check
      }
    }
  ]

  capacity_reservation_specification = {
    capacity_reservation_preference = "open"
  }

  instance_market_options = {
    market_type = "spot"
    spot_options = {
      instance_interruption_behavior = "terminate" # check
      #max_price                      = "one-time" # check
      spot_instance_type = "one-time"
      #spot_instance_type             = ??? # check - I think this should be disabled as it would then default to the on-demand price
      #valid_until                    = ??? # check - not sure at the mo
    }
  }

  metadata_options = {
    http_endpoint               = "enabled"
    http_tokens                 = "required"
    http_put_response_hop_limit = 32
  }

  network_interfaces = [
    {
      delete_on_termination = true
      description           = "eth0"
      device_index          = 0
      security_groups       = [module.ecs_sg.security_group_id] # TODO: add default
    }
  ]

  placement = {
    availability_zone = var.region
  }

  tags = local.tags
}

ini_security_groups.tf

module "ecs_sg" {
  source  = "terraform-aws-modules/security-group/aws"
  version = "5.1.0"

  name        = local.base_name
  description = "Security group for ECS cluster: ${var.product} ${var.env}"
  vpc_id      = var.vpc_id

  # ingress_cidr_blocks      = ["10.10.0.0/16"]
  # ingress_rules            = ["https-443-tcp"]
  ingress_with_cidr_blocks = [
    {
      from_port   = 8000
      to_port     = 8000
      protocol    = "tcp"
      description = "fill me in"
      cidr_blocks = "0.0.0.0/0" # this seems too open
    }
    ### snipped for conciseness
  ]

}

data.tf

data "aws_vpc" "this" {
  id = var.vpc_id
}

data "aws_security_group" "this" {
  name   = "default"
  vpc_id = data.aws_vpc.this.id
}

data "aws_subnets" "private" {
  filter {
    name   = "vpc-id"
    values = [var.vpc_id]
  }

  tags = {
    Name = "REDACTED*"
  }
}

data "aws_ami" "ecs_optimised" {
    most_recent = true

    filter {
        name   = "name"
        # TODO: see if we need gp2 or gp3 disks? Also check if we need ebs-optimised. how important is HDD?
        values = ["amzn2-ami-ecs-hvm-2.0.202*x86_64-ebs*"]
        # aws ssm get-parameters --names /aws/service/ecs/optimized-ami/amazon-linux-2/recommended --region us-east-1
        # amzn2-ami-ecs-hvm-*-x86_64-ebs
    }

    filter {
        name   = "virtualization-type"
        values = ["hvm"]
    }

    owners = ["amazon"]
}

vars.tf

locals {
  ami_id = data.aws_ami.ecs_optimised.id

  tags = {
    environment = var.env
    product     = var.product
    owner       = "REDACTED"
    created_by  = "terraform"
    ami_id = local.ami_id
  }

  base_name = "ecs-${var.product}-${var.env}-rr"

  # TODO: Move this into separate script file if larger
  user_data = <<-EOT
    #!/bin/bash
    echo "Hello Terraform!"
  EOT
}

variable "vpc_id" {
  description = "AWS VPC ID. e.g. vpc-xxxxx"
  type        = string
}

variable "region" {
  description = "AWS region. e.g. eu-west-1"
  type        = string
}

variable "env" {
  description = "Deployment environment. e.g. dev, uat, prod"
  type        = string
}

variable "product" {
  description = "Product name. e.g. vfdb"
  type        = string
}

variable "instance_type" {
  description = "EC2 Instance type for ECS cluster. e.g. t3a.medium"
  type        = string
}

Steps to reproduce the behavior:

Are you using workspaces? NO
Have you cleared the local cache (see Notice section above)? YES

List steps in order that led up to the issue you encountered

On the 2nd apply, using the standard tf workflow:

rm -rf .terraform .terraform.lock.hcl \
&& terraform_v1.5.1 init \
&& terraform_v1.5.1 apply -var-file=../../envs/dev/dev.tfvars -auto-approve

Expected behavior

The capacity provider was created with Terraform, therefore it should be updated accordingly. Though it is possible I have made a configuration error.

Actual behavior

"The specified capacity provider already exists. To change the configuration of an existing capacity provider, update the capacity provider."

Terminal Output Screenshot(s)

Selection_043

Additional context

Possible misconfiguration, but not sure.

How to connect to the ECS cluster over internet

Hi,

I have adapted the sample to run my cluster with nginx as a reverse proxy. However, I am not sure how to connect to the cluster / EC2 instance over the internet. I have the private IPs, but I don't see a public IP except on the NAT gateway.

I am new to AWS/ECS/Terraform etc. Please advise.

Thanks

Complete ECS Example - IAM Role not configured in ASG

The Complete ECS example doesn't work: it doesn't configure the IAM role in the ASG.

This is because the example uses a launch configuration together with the variable iam_instance_profile_arn. The variable iam_instance_profile_arn should be used with launch templates; with launch configurations, the variable iam_instance_profile_name should be used, as sketched below.
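A minimal sketch of the fix, using the variable names reported above (the instance profile resource is illustrative and the other module arguments are unchanged from the example):

module "this" {
  source = "terraform-aws-modules/autoscaling/aws"

  # ... existing example arguments ...

  # Launch configurations take the instance profile name:
  iam_instance_profile_name = aws_iam_instance_profile.ecs.name

  # A launch template would take the ARN instead:
  # iam_instance_profile_arn = aws_iam_instance_profile.ecs.arn
}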
