AWS Fargate ECS Terraform Module

Terraform module to create Fargate ECS resources on AWS.

Features

How do I use this module?

with ALB integration

see example for details

module "service" {
  source = "registry.terraform.io/stroeer/ecs-fargate/aws"

  cpu                           = 256
  cluster_id                    = "my-ecs-cluster-id"
  container_port                = 8000
  create_ingress_security_group = false
  create_deployment_pipeline    = false
  desired_count                 = 1
  ecr_force_delete              = true
  memory                        = 512
  service_name                  = "my-service"
  vpc_id                        = module.vpc.vpc_id

  // add listener rules that determine how the load balancer routes requests to its registered targets.
  https_listener_rules = [{
    listener_arn = aws_lb_listener.http.arn

    actions = [{
      type               = "forward"
      target_group_index = 0
    }]

    conditions = [{
      path_patterns = ["/"]
    }]
  }]

  // add a target group to route ALB traffic to this service
  target_groups = [
    {
      name              = "my-service"
      backend_protocol  = "HTTP"
      backend_port      = 8000
      load_balancer_arn = "my-lb-arn"
      target_type       = "ip"

      health_check = {
        enabled  = true
        path     = "/"
        protocol = "HTTP"
      }
    }
  ]
}

with autoscaling

module "service" {
  // see above

  appautoscaling_settings = {
    predefined_metric_type = "ECSServiceAverageCPUUtilization"
    target_value           = 30
    max_capacity           = 8
    min_capacity           = 2
    disable_scale_in       = true
    scale_in_cooldown      = 120
    scale_out_cooldown     = 15
  }
}

Use this configuration map to enable and alter the autoscaling settings for this app.

key description
target_value (mandatory) the target value, refers to predefined_metric_type
predefined_metric_type see docs for possible values
max_capacity upper threshold for scale out
min_capacity lower threshold for scale in
disable_scale_in prevent scale in if set to true
scale_in_cooldown delay (in seconds) between scale in events
scale_out_cooldown delay (in seconds) between scale out events
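
Besides CPU, any of the documented predefined metric types can be used. A minimal sketch of a memory-based policy using the same map (the values shown are illustrative):

module "service" {
  // see above

  appautoscaling_settings = {
    predefined_metric_type = "ECSServiceAverageMemoryUtilization" // scale on average memory instead of CPU
    target_value           = 75                                   // target ~75% average memory utilization
    min_capacity           = 2
    max_capacity           = 10
  }
}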

with blue/green deployments

This module can create an automated deployment pipeline for your service (set create_deployment_pipeline to true).

(deployment pipeline diagram)

  • You'll need AWS credentials that allow pushing images into the ECR container registry.
  • Once you push an image tagged production, a CloudWatch Event will trigger the start of a CodePipeline. This tag only triggers the pipeline (the trigger tag is configurable, see the sketch after this list). In addition, you'll need the following tags:
    • container.$CONTAINER_NAME is required to locate the correct container in the service's task-definition.json
    • another tag that is unique and used for the actual deployment and the task-definition.json. A good choice would be git.sha. To be specific, we chose a tag that does not start with container. and is none of ["local", "production", "staging", "infrastructure"]
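
As referenced above, the trigger tag is configurable through the ecr_image_tag input (default "production"). A minimal sketch, with an illustrative tag value:

module "service" {
  // see the ALB example above for the remaining arguments

  create_deployment_pipeline = true
  ecr_image_tag              = "release" // pushing an image with this tag starts the pipeline
}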

That CodePipeline does the heavy lifting (see the deployment flow above):

  1. Pull the full imagedefinitions.json from the ECR registry.
  2. Trigger a CodeBuild job to transform it into an imagedefinitions.json suitable for deployment.
  3. Update the ECS service's task definition by replacing the specified imageUri for the given container name.

Notifications

We will create a notification rule for the pipeline. You can provide the ARN of a notification rule target (e.g. an SNS topic ARN) using codestar_notifications_target_arn. Otherwise, a new SNS topic with the required permissions is created for every service. See aws_codestarnotifications_notification_rule for details.

You can then configure an integration between those notifications and AWS Chatbot, for example.
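
A minimal sketch of routing notifications to an existing SNS topic instead of a per-service one (the topic is illustrative and not part of this module):

resource "aws_sns_topic" "deployments" {
  name = "ecs-deployment-notifications"
}

module "service" {
  // see the ALB example above for the remaining arguments

  create_deployment_pipeline        = true
  codestar_notifications_target_arn = aws_sns_topic.deployments.arn // reuse one topic for all services
}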

Optional shared pipeline resources

  • A shared S3 bucket for storing artifacts from CodePipeline can be used. You can specify it through the variable code_pipeline_artifact_bucket. Otherwise, a new bucket is created for every service.
  • A shared IAM role for CodePipeline and CodeBuild can be used. You can specify those through the variables code_pipeline_role_name and code_build_role_name. Otherwise, new roles are created for every service. For the required permissions, see the module code. A sketch of passing shared resources follows below.
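
A minimal sketch of wiring a service to shared pipeline resources (the bucket and role names are illustrative and assumed to already exist with the permissions described in the module code):

module "service" {
  // see the ALB example above for the remaining arguments

  create_deployment_pipeline    = true
  code_pipeline_artifact_bucket = "my-shared-codepipeline-artifacts" // existing S3 bucket shared by all services
  code_pipeline_role_name       = "my-shared-codepipeline-role"      // existing IAM role for CodePipeline
  code_build_role_name          = "my-shared-codebuild-role"         // existing IAM role for CodeBuild
}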

Examples

  • complete: complete example showcasing ALB integration, autoscaling and task definition configuration

Requirements

Name Version
terraform >= 1.3
aws >= 5.32

Providers

Name Version
aws >= 5.32

Modules

Name Source Version
code_deploy ./modules/deployment n/a
container_definition registry.terraform.io/cloudposse/config/yaml//modules/deepmerge 1.0.2
ecr ./modules/ecr n/a
envoy_container_definition registry.terraform.io/cloudposse/config/yaml//modules/deepmerge 1.0.2
fluentbit_container_definition registry.terraform.io/cloudposse/config/yaml//modules/deepmerge 1.0.2
otel_container_definition registry.terraform.io/cloudposse/config/yaml//modules/deepmerge 1.0.2
sg registry.terraform.io/terraform-aws-modules/security-group/aws ~> 3.0

Resources

Name Type
aws_alb_listener_rule.public resource
aws_alb_target_group.main resource
aws_appautoscaling_policy.ecs resource
aws_appautoscaling_target.ecs resource
aws_cloudwatch_log_group.containers resource
aws_ecs_service.this resource
aws_ecs_task_definition.this resource
aws_iam_policy.acm resource
aws_iam_policy.cloudwatch_logs_policy resource
aws_iam_policy.enable_execute_command resource
aws_iam_policy.fluent_bit_config_access resource
aws_iam_policy.otel resource
aws_iam_role.ecs_task_role resource
aws_iam_role.task_execution_role resource
aws_iam_role_policy.ecs_task_role_policy resource
aws_iam_role_policy_attachment.acm resource
aws_iam_role_policy_attachment.appmesh resource
aws_iam_role_policy_attachment.cloudwatch_logs_policy resource
aws_iam_role_policy_attachment.enable_execute_command resource
aws_iam_role_policy_attachment.fluent_bit_config_access resource
aws_iam_role_policy_attachment.otel resource
aws_security_group_rule.trusted_egress_attachment resource
aws_service_discovery_service.this resource
aws_caller_identity.current data source
aws_ecs_task_definition.this data source
aws_iam_policy.appmesh data source
aws_iam_policy.ecs_task_execution_policy data source
aws_iam_policy_document.acm data source
aws_iam_policy_document.cloudwatch_logs_policy data source
aws_iam_policy_document.ecs_task_assume_role_policy data source
aws_iam_policy_document.enable_execute_command data source
aws_iam_policy_document.fluent_bit_config_access data source
aws_iam_policy_document.otel data source
aws_iam_policy_document.task_execution_role data source
aws_lb.public data source
aws_region.current data source
aws_subnets.selected data source

Inputs

Name Description Type Default Required
additional_container_definitions Additional container definitions added to the task definition of this service, see https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html for allowed parameters. list(any) [] no
app_mesh Configuration of optional AWS App Mesh integration using an Envoy sidecar.
object({
container_definition = optional(any, {})
container_name = optional(string, "envoy")
enabled = optional(bool, false)
mesh_name = optional(string, "apps")

tls = optional(object({
acm_certificate_arn = optional(string)
root_ca_arn = optional(string)
}), {})
})
{} no
appautoscaling_settings Autoscaling configuration for this service. map(any) null no
assign_public_ip Assign a public IP address to the ENI of this service. bool false no
capacity_provider_strategy Capacity provider strategies to use for the service. Can be one or more.
list(object({
capacity_provider = string
weight = string
base = optional(string, null)
}))
null no
cloudwatch_logs CloudWatch logs configuration for the containers of this service. CloudWatch logs will be used as the default log configuration if Firelens is disabled and for the fluentbit and otel containers.
object({
enabled = optional(bool, true)
name = optional(string, "")
retention_in_days = optional(number, 7)
})
{} no
cluster_id The ECS cluster id that should run this service string n/a yes
code_build_environment_compute_type Information about the compute resources the CodeBuild stage of the deployment pipeline will use. string "BUILD_LAMBDA_1GB" no
code_build_environment_image Docker image to use for the CodeBuild stage of the deployment pipeline. The image needs to include python. string "aws/codebuild/amazonlinux-aarch64-lambda-standard:python3.12" no
code_build_environment_type Type of build environment for the CodeBuild stage of the deployment pipeline. string "ARM_LAMBDA_CONTAINER" no
code_build_log_retention_in_days Log retention in days of the CodeBuild CloudWatch log group. number 7 no
code_build_role_name Use an existing role for codebuild permissions that can be reused for multiple services. Otherwise a separate role for this service will be created. string "" no
code_pipeline_artifact_bucket Use an existing bucket for codepipeline artifacts that can be reused for multiple services. Otherwise a separate bucket for each service will be created. string "" no
code_pipeline_artifact_bucket_sse AWS KMS master key id for server-side encryption. any {} no
code_pipeline_role_name Use an existing role for codepipeline permissions that can be reused for multiple services. Otherwise a separate role for this service will be created. string "" no
code_pipeline_type Type of the CodePipeline. Possible values are: V1 and V2. string "V1" no
code_pipeline_variables CodePipeline variables. Valid only when codepipeline_type is V2.
list(object({
name = string
default_value = optional(string)
description = optional(string)
}))
[] no
codestar_notifications_detail_type The level of detail to include in the notifications for this resource. Possible values are BASIC and FULL. string "BASIC" no
codestar_notifications_event_type_ids A list of event types associated with this notification rule. For list of allowed events see https://docs.aws.amazon.com/dtconsole/latest/userguide/concepts.html#concepts-api. list(string)
[
"codepipeline-pipeline-pipeline-execution-succeeded",
"codepipeline-pipeline-pipeline-execution-failed"
]
no
codestar_notifications_kms_master_key_id AWS KMS master key id for server-side encryption. string null no
codestar_notifications_target_arn Use an existing ARN for a notification rule target (for example, an SNS topic ARN). Otherwise a separate SNS topic for this service will be created. string "" no
container_definition_overwrites Additional container definition parameters or overwrites of defaults for your service, see https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html for allowed parameters. any {} no
container_name Defaults to var.service_name, can be overridden if it differs. Used as a target for LB. string "" no
container_port The port used by the app within the container. number n/a yes
cpu Amount of CPU required by this service. 1024 == 1 vCPU number 256 no
cpu_architecture Must be set to either X86_64 or ARM64, see https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#runtime-platform. string "X86_64" no
create_deployment_pipeline Creates a deployment pipeline triggered from ECR if create_ecr_repository == true. bool true no
create_ecr_repository Create an ECR repository for this service. bool true no
create_ingress_security_group Create a security group allowing ingress from target groups to the application ports. Disable this for target groups attached to a Network Load Balancer. bool true no
deployment_circuit_breaker Deployment circuit breaker configuration.
object({
enable = bool
rollback = bool
})
{
"enable": false,
"rollback": false
}
no
deployment_failure_detection_alarms CloudWatch alarms used to detect deployment failures.
object({
enable = bool
rollback = bool
alarm_names = list(string)
})
{
"alarm_names": [],
"enable": false,
"rollback": false
}
no
deployment_maximum_percent Upper limit (as a percentage of the service's desiredCount) of the number of running tasks that can be running in a service during a deployment. Not valid when using the DAEMON scheduling strategy. number 200 no
deployment_minimum_healthy_percent Lower limit (as a percentage of the service's desiredCount) of the number of running tasks that must remain running and healthy in a service during a deployment. number 100 no
desired_count Desired count of services to be started/running. number 0 no
ecr_custom_lifecycle_policy JSON formatted ECR lifecycle policy used for this repository (disables the default lifecycle policy), see https://docs.aws.amazon.com/AmazonECR/latest/userguide/LifecyclePolicies.html#lifecycle_policy_parameters for details. string null no
ecr_enable_default_lifecycle_policy Enables an ECR lifecycle policy for this repository which expires all images except for the last 30. bool true no
ecr_force_delete If true, will delete this repository even if it contains images. bool false no
ecr_image_scanning_configuration n/a map(any)
{
"scan_on_push": true
}
no
ecr_image_tag Tag of the new image pushed to the Amazon ECR repository to trigger the deployment pipeline. string "production" no
ecr_image_tag_mutability n/a string "MUTABLE" no
ecr_repository_name Existing repo to register to use with this service module, e.g. creating deployment pipelines. string "" no
efs_volumes Configuration block for EFS volumes. any [] no
enable_execute_command Specifies whether to enable Amazon ECS Exec for the tasks within the service. bool false no
firelens Configuration for optional custom log routing using FireLens over fluentbit sidecar. Enable attach_init_config_s3_policy to attach an IAM policy granting access to the init config files on S3.
object({
attach_init_config_s3_policy = optional(bool, false)
container_name = optional(string, "fluentbit")
container_definition = optional(any, {})
enabled = optional(bool, false)
init_config_files = optional(list(string), [])
log_level = optional(string, "info")
opensearch_host = optional(string, "")
})
{} no
force_new_deployment Enable to force a new task deployment of the service. This can be used to update tasks to use a newer Docker image with same image/tag combination (e.g. myimage:latest), roll Fargate tasks onto a newer platform version, or immediately deploy ordered_placement_strategy and placement_constraints updates. bool false no
health_check_grace_period_seconds Seconds to ignore failing load balancer health checks on newly instantiated tasks to prevent premature shutdown, up to 2147483647. Only valid for services configured to use load balancers. number 0 no
https_listener_rules A list of maps describing the Listener Rules for this ALB. Required key/values: actions, conditions. Optional key/values: priority, https_listener_index (default to https_listeners[count.index]) any [] no
memory Amount of memory [MB] required by this service. number 512 no
operating_system_family If the requires_compatibilities is FARGATE this field is required. Must be set to a valid option from https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#runtime-platform. string "LINUX" no
otel Configuration for the (optional) AWS Distro for OpenTelemetry sidecar.
object({
container_definition = optional(any, {})
enabled = optional(bool, false)
})
{} no
platform_version The platform version on which to run your service. Defaults to LATEST. string "LATEST" no
policy_document AWS Policy JSON describing the permissions required for this service. string "" no
requires_compatibilities The launch type the task is using. This enables a check to ensure that all of the parameters used in the task definition meet the requirements of the launch type. set(string)
[
"EC2",
"FARGATE"
]
no
security_groups A list of security group ids that will be attached additionally to the ecs deployment. list(string) [] no
service_discovery_dns_namespace The ID of a Service Discovery private DNS namespace. If provided, the module will create a Route 53 Auto Naming Service to enable service discovery using Cloud Map. string "" no
service_name The service name. Will also be used as Route53 DNS entry. string n/a yes
subnet_tags Map of tags to identify the subnets associated with this service. Each pair must exactly match a pair on the desired subnet. Defaults to { Tier = public } for services with assign_public_ip == true and { Tier = private } otherwise. map(string) null no
tags Additional tags (_e.g._ { map-migrated : d-example-443255fsf }) map(string) {} no
target_groups A list of maps containing key/value pairs that define the target groups to be created. Order of these maps is important and the index of these are to be referenced in listener definitions. Required key/values: name, backend_protocol, backend_port any [] no
task_execution_role_arn ARN of the task execution role that the Amazon ECS container agent and the Docker daemon can assume. If not provided, a default role will be created and used. string "" no
task_role_arn ARN of the IAM role that allows your Amazon ECS container task to make calls to other AWS services. If not specified, the default ECS task role created in this module will be used. string "" no
vpc_id VPC id where the load balancer and other resources will be deployed. string n/a yes

Outputs

Name Description
alb_target_group_arn_suffixes ARN suffixes of the created target groups.
alb_target_group_arns ARNs of the created target groups.
autoscaling_target ECS auto scaling targets if auto scaling enabled.
cloudwatch_log_group Name of the CloudWatch log group for container logs.
container_definitions Container definitions used by this service including all sidecars.
ecr_repository_arn Full ARN of the ECR repository.
ecr_repository_url URL of the ECR repository.
task_execution_role_arn ARN of the task execution role that the Amazon ECS container agent and the Docker daemon can assume.
task_execution_role_name Friendly name of the task execution role that the Amazon ECS container agent and the Docker daemon can assume.
task_execution_role_unique_id Stable and unique string identifying the IAM role that the Amazon ECS container agent and the Docker daemon can assume.
task_role_arn ARN of IAM role that allows your Amazon ECS container task to make calls to other AWS services.
task_role_name Friendly name of IAM role that allows your Amazon ECS container task to make calls to other AWS services.
task_role_unique_id Stable and unique string identifying the IAM role that allows your Amazon ECS container task to make calls to other AWS services.

terraform-aws-ecs-fargate's People

Contributors

dependabot[bot], harryherbig, jadiedrich, major0, moritzzimmer, saefty, thisismana, vmk1vmk


terraform-aws-ecs-fargate's Issues

deployment module relies on existing S3 bucket

The deployment (sub) module uses a data source of an existing S3 bucket for storing pipeline artifacts. This bucket is re-used for all pipelines.

There is no documentation about this bucket and its name can't be configured, so this won't be usable outside our team.

Options:

  • create the bucket (one for each service) inside the deployment (sub) module (service specific naming)
  • make id of required bucket configurable

I'd prefer option one since it's more coherent IMO and requires less upfront terraforming outside the module.

module fails with terraform 0.13

This module can't be applied using terraform 0.13. It fails with Error: ECR Repository (service-name) not found.

Details:

2020-08-25T09:49:16.831+0200 [DEBUG] plugin.terraform-provider-aws_v3.3.0_x5: 2020/08/25 09:49:16 [DEBUG] [aws-sdk-go] DEBUG: Response ecr/DescribeRepositories Details:
2020-08-25T09:49:16.831+0200 [DEBUG] plugin.terraform-provider-aws_v3.3.0_x5: ---[ RESPONSE ]--------------------------------------
2020-08-25T09:49:16.831+0200 [DEBUG] plugin.terraform-provider-aws_v3.3.0_x5: HTTP/1.1 400 Bad Request
2020-08-25T09:49:16.831+0200 [DEBUG] plugin.terraform-provider-aws_v3.3.0_x5: Connection: close
2020-08-25T09:49:16.831+0200 [DEBUG] plugin.terraform-provider-aws_v3.3.0_x5: Content-Length: 153
2020-08-25T09:49:16.831+0200 [DEBUG] plugin.terraform-provider-aws_v3.3.0_x5: Content-Type: application/x-amz-json-1.1
2020-08-25T09:49:16.831+0200 [DEBUG] plugin.terraform-provider-aws_v3.3.0_x5: Date: Tue, 25 Aug 2020 07:49:16 GMT
2020-08-25T09:49:16.831+0200 [DEBUG] plugin.terraform-provider-aws_v3.3.0_x5: X-Amzn-Requestid: 99522912-b9c0-44a3-9be2-1fc3b09ef2a7
2020-08-25T09:49:16.831+0200 [DEBUG] plugin.terraform-provider-aws_v3.3.0_x5:
2020-08-25T09:49:16.831+0200 [DEBUG] plugin.terraform-provider-aws_v3.3.0_x5:
2020-08-25T09:49:16.831+0200 [DEBUG] plugin.terraform-provider-aws_v3.3.0_x5: -----------------------------------------------------
2020-08-25T09:49:16.831+0200 [DEBUG] plugin.terraform-provider-aws_v3.3.0_x5: 2020/08/25 09:49:16 [DEBUG] [aws-sdk-go] {"__type":"RepositoryNotFoundException","message":"The repository with name 'service-name' does not exist in the registry with id '12432442342'"}
2020-08-25T09:49:16.831+0200 [DEBUG] plugin.terraform-provider-aws_v3.3.0_x5: 2020/08/25 09:49:16 [DEBUG] [aws-sdk-go] DEBUG: Validate Response ecr/DescribeRepositories failed, attempt 0/25, error RepositoryNotFoundException: The repository with name 'service-name' does not exist in the registry with id '12432442342'
2020/08/25 09:49:16 [ERROR] eval: *terraform.evalReadDataPlan, err: ECR Repository (service-name) not found
2020/08/25 09:49:16 [ERROR] eval: *terraform.EvalSequence, err: ECR Repository (service-name) not found

Add policy attachment to task role for firehose

To simplify client code, we could attach an IAM policy to the existing ECS task role in order to allow containers to put log events into Firehose.

Module clients could remove the following code and rely on the module:

data "aws_iam_policy_document" "policy" {
  statement {
    actions = ["firehose:PutRecordBatch"]
    // todo limit this service stream
    resources = ["*"]
  }
}
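
For completeness, a hedged sketch of the full client-side wiring this issue wants to move into the module: attaching the policy above to the task role exposed via the module's task_role_name output (resource names are illustrative):

resource "aws_iam_policy" "firehose" {
  name   = "allow-firehose-put-record-batch"
  policy = data.aws_iam_policy_document.policy.json
}

resource "aws_iam_role_policy_attachment" "firehose" {
  role       = module.service.task_role_name // task role created by this module
  policy_arn = aws_iam_policy.firehose.arn
}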

AWS `source` tag should be overrideable

All aws resources get default tags when created with this module.

When I create an ECS service via another GitHub repository that uses terraform-aws-buzzgate, the source tag should point to the service repository and not to github.com/stroeer/terraform-aws-buzzgate.

Example
In the github.com/stroeer/polyphase repo I use this module and create AWS resources with tf files that live in that repo. So to be able to terraform these resources you would need to know that the tf files reside in github.com/stroeer/polyphase and not in github.com/stroeer/terraform-aws-buzzgate.
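
A hedged sketch of what the requested override could look like, passing a source tag through the module's existing tags input (this assumes caller-provided tags would take precedence over the module defaults, which is what this issue asks for):

module "service" {
  // see the usage examples above for the remaining arguments

  tags = {
    source = "github.com/stroeer/polyphase" // should win over the module's default source tag
  }
}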

use newer codebuild image for faster provisioning

Problem Description

The Provisioning phase of the CodeBuild step currently takes several minutes, which slows down deployments of ECS tasks considerably.

Quick Analysis & Solution Proposal

  • Currently we use aws/codebuild/amazonlinux2-x86_64-standard:1.0, which is pretty old.
  • The internet says that older images get dropped from the AWS caches, so we should probably switch to a newer image.
  • According to https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-available.html, aws/codebuild/amazonlinux2-x86_64-standard:3.0 should be the one to switch to.

BUG: Call to function "lookup" failed: lookup failed to find key "resource_label".

│ Error: Error in function call
│ 
│   on .terraform/modules/alb-ecs-service/main.tf line 278, in resource "aws_appautoscaling_policy" "ecs":
│  278:       resource_label         = lookup(var.appautoscaling_settings, "resource_label")
│     ├────────────────
│     │ while calling lookup(inputMap, key, default...)
│     │ var.appautoscaling_settings is map of string with 7 elements
│ 
│ Call to function "lookup" failed: lookup failed to find key "resource_label".

Review deployment pipeline permissions

With #14 we introduced an IAM role with the required permissions to run CodePipeline for services. This role currently contains commented-out code and it is not yet clear which permissions are actually needed.

Enhance examples

We should provide examples depicting major features of this module. Those examples need to be terraform apply'able by users without errors.

Possible examples:

  • complete example with all basic variables set
  • with ALB
  • with deployment pipeline
  • with app mesh and logging
  • with NLB

using module w/o deployment pipeline fails

If create_deployment_pipeline is set to false, terraform plan fails with:

Error: Failed getting S3 bucket: InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, HeadBucketInput.Bucket.
 Bucket: ""

  on .terraform/modules/client/terraform-aws-ecs-fargate-0.1.1/modules/deployment/s3.tf line 13, in data "aws_s3_bucket" "codepipeline":
  13: data "aws_s3_bucket" "codepipeline" {

Remove all internal specific datasources/remote states

We should list and cut all dependencies on our internal data sources and remote states to make this module usable for users outside our team/organization.

Affected resources:

  • ssm_ecs_task_execution_role data source
  • aws_service_discovery_service namespace_id

Provide log group for fluent-bit sidecar

When the logs sub-module is enabled, we should offer the option to pass in an existing CloudWatch log group for the logs of the fluent-bit sidecar, or create one (which should be the default).

Creating new services with `0.29.1` fails

│ Error: Invalid count argument
│ 
│   on .terraform/modules/services.otel_container_definition/modules/assert/main.tf line 2, in data "external" "assertion":
│    2:   count   = var.condition ? 0 : 1
│ 
│ The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To
│ work around this, use the -target argument to first apply only the resources that the count depends on.
╵
make: *** [tf] Error 1

Workaround: apply using 0.29.0 first.

Initial terraform apply fails with `create_ingress_security_group = true`

An initial terraform apply using create_ingress_security_group = true fails with

╷
│ Error: Invalid for_each argument
│ 
│   on ../../main.tf line 2, in data "aws_lb" "public":
│    2:   for_each = var.create_ingress_security_group ? toset([for target in var.target_groups : lookup(target, "load_balancer_arn", "")]) : []
│     ├────────────────
│     │ var.create_ingress_security_group is true
│     │ var.target_groups is tuple with 1 element
│ 
│ The "for_each" set includes values derived from resource attributes that cannot be determined until apply, and so Terraform cannot determine the full set of keys that will identify the instances of this resource.
│ 
│ When working with unknown values in for_each, it's better to use a map value where the keys are defined statically in your configuration and where only the values contain apply-time results.
│ 
│ Alternatively, you could use the -target planning option to first apply only the resources that the for_each value depends on, and then apply a second time to fully converge.

As a workaround it is possible to apply with create_ingress_security_group = false first and then set the variable to true afterwards.

Versions:

Terraform v1.9.0
on darwin_arm64
+ provider registry.terraform.io/cloudposse/template v2.2.0
+ provider registry.terraform.io/cloudposse/utils v1.24.0
+ provider registry.terraform.io/hashicorp/aws v5.58.0
+ provider registry.terraform.io/hashicorp/http v3.4.3
+ provider registry.terraform.io/hashicorp/null v3.2.2
+ provider registry.terraform.io/hashicorp/random v3.6.2

deployment module relies on existing IAM roles

The deployment (sub) module uses data sources of existing IAM roles for codepipeline and codebuild.

There is no documentation about the necessary permissions for those roles, and their names can't be configured, so this won't be usable outside our team.

Options:

  • create those roles inside the deployment (sub) module (service specific naming)
  • document required permissions and make ARNs of the required roles configurable

I'd prefer option one since it's more coherent IMO and requires less upfront terraforming outside the module.

support apps w/o exposed ports

Providing a container_port used for the load balancer and/or mesh proxy configuration is mandatory at the moment. We might want to support apps without an exposed port as well, e.g. apps consuming Kinesis/DynamoDB streams or SQS queues.
