
AWS MSK Kafka Cluster Terraform module

Terraform module which creates AWS MSK (Managed Streaming for Kafka) resources.

Usage

See the examples directory for working examples to reference:

module "msk_kafka_cluster" {
  source = "terraform-aws-modules/msk-kafka-cluster/aws"

  name                   = local.name
  kafka_version          = "3.5.1"
  number_of_broker_nodes = 3
  enhanced_monitoring    = "PER_TOPIC_PER_PARTITION"

  broker_node_client_subnets = ["subnet-12345678", "subnet-024681012", "subnet-87654321"]
  broker_node_storage_info = {
    ebs_storage_info = { volume_size = 100 }
  }
  broker_node_instance_type   = "kafka.t3.small"
  broker_node_security_groups = ["sg-12345678"]

  encryption_in_transit_client_broker = "TLS"
  encryption_in_transit_in_cluster    = true

  configuration_name        = "example-configuration"
  configuration_description = "Example configuration"
  configuration_server_properties = {
    "auto.create.topics.enable" = true
    "delete.topic.enable"       = true
  }

  jmx_exporter_enabled    = true
  node_exporter_enabled   = true
  cloudwatch_logs_enabled = true
  s3_logs_enabled         = true
  s3_logs_bucket          = "aws-msk-kafka-cluster-logs"
  s3_logs_prefix          = local.name

  scaling_max_capacity = 512
  scaling_target_value = 80

  client_authentication = {
    sasl = { scram = true }
  }
  create_scram_secret_association = true
  scram_secret_association_secret_arn_list = [
    aws_secretsmanager_secret.one.arn,
    aws_secretsmanager_secret.two.arn,
  ]

  # AWS Glue Registry
  schema_registries = {
    team_a = {
      name        = "team_a"
      description = "Schema registry for Team A"
    }
    team_b = {
      name        = "team_b"
      description = "Schema registry for Team B"
    }
  }

  # AWS Glue Schemas
  schemas = {
    team_a_tweets = {
      schema_registry_name = "team_a"
      schema_name          = "tweets"
      description          = "Schema that contains all the tweets"
      compatibility        = "FORWARD"
      schema_definition    = "{\"type\": \"record\", \"name\": \"r1\", \"fields\": [ {\"name\": \"f1\", \"type\": \"int\"}, {\"name\": \"f2\", \"type\": \"string\"} ]}"
      tags                 = { Team = "Team A" }
    }
    team_b_records = {
      schema_registry_name = "team_b"
      schema_name          = "records"
      description          = "Schema that contains all the records"
      compatibility        = "FORWARD"
      schema_definition = jsonencode({
        type = "record"
        name = "r1"
        fields = [
          {
            name = "f1"
            type = "int"
          },
          {
            name = "f2"
            type = "string"
          },
          {
            name = "f3"
            type = "boolean"
          }
        ]
      })
      tags = { Team = "Team B" }
    }
  }

  tags = {
    Terraform   = "true"
    Environment = "dev"
  }
}

Examples

Examples codified under the examples directory are intended to give users references for how to use the module(s) as well as for testing/validating changes to the source code of the module. If contributing to the project, please be sure to make any appropriate updates to the relevant examples so that maintainers can test your changes and the examples stay up to date for users. Thank you!

Requirements

Name Version
terraform >= 1.0
aws >= 5.21
random >= 3.6

Providers

Name Version
aws >= 5.21
random >= 3.6

Modules

No modules.

Resources

Name Type
aws_appautoscaling_policy.this resource
aws_appautoscaling_target.this resource
aws_cloudwatch_log_group.this resource
aws_glue_registry.this resource
aws_glue_schema.this resource
aws_msk_cluster.this resource
aws_msk_cluster_policy.this resource
aws_msk_configuration.this resource
aws_msk_scram_secret_association.this resource
aws_msk_vpc_connection.this resource
aws_mskconnect_custom_plugin.this resource
aws_mskconnect_worker_configuration.this resource
random_id.this resource
aws_iam_policy_document.this data source

Inputs

Name Description Type Default Required
broker_node_az_distribution The distribution of broker nodes across availability zones (documentation). Currently the only valid value is DEFAULT string null no
broker_node_client_subnets A list of subnets to connect to in client VPC (documentation) list(string) [] no
broker_node_connectivity_info Information about the cluster access configuration any {} no
broker_node_instance_type Specify the instance type to use for the kafka brokers. e.g. kafka.m5.large. (Pricing info) string null no
broker_node_security_groups A list of the security groups to associate with the elastic network interfaces to control who can communicate with the cluster list(string) [] no
broker_node_storage_info A block that contains information about storage volumes attached to MSK broker nodes any {} no
client_authentication Configuration block for specifying a client authentication any {} no
cloudwatch_log_group_kms_key_id The ARN of the KMS Key to use when encrypting log data string null no
cloudwatch_log_group_name Name of the Cloudwatch Log Group to deliver logs to string null no
cloudwatch_log_group_retention_in_days Specifies the number of days you want to retain log events in the log group number 0 no
cloudwatch_logs_enabled Indicates whether you want to enable or disable streaming broker logs to Cloudwatch Logs bool false no
cluster_override_policy_documents Override policy documents for cluster policy list(string) null no
cluster_policy_statements Map of policy statements for cluster policy any null no
cluster_source_policy_documents Source policy documents for cluster policy list(string) null no
configuration_arn ARN of an externally created configuration to use string null no
configuration_description Description of the configuration string null no
configuration_name Name of the configuration string null no
configuration_revision Revision of the externally created configuration to use number null no
configuration_server_properties Contents of the server.properties file. Supported properties are documented in the MSK Developer Guide map(string) {} no
connect_custom_plugin_timeouts Timeout configurations for the connect custom plugins map(string) {"create": null} no
connect_custom_plugins Map of custom plugin configuration details (map of maps) any {} no
connect_worker_config_description A summary description of the worker configuration string null no
connect_worker_config_name The name of the worker configuration string null no
connect_worker_config_properties_file_content Contents of connect-distributed.properties file. The value can be either base64 encoded or in raw format string null no
create Determines whether cluster resources will be created bool true no
create_cloudwatch_log_group Determines whether to create a CloudWatch log group bool true no
create_cluster_policy Determines whether to create an MSK cluster policy bool false no
create_configuration Determines whether to create a configuration bool true no
create_connect_worker_configuration Determines whether to create connect worker configuration bool false no
create_schema_registry Determines whether to create a Glue schema registry for managing Avro schemas for the cluster bool true no
create_scram_secret_association Determines whether to create SASL/SCRAM secret association bool false no
enable_storage_autoscaling Determines whether autoscaling is enabled for storage bool true no
encryption_at_rest_kms_key_arn You may specify a KMS key short ID or ARN (it will always output an ARN) to use for encrypting your data at rest. If no key is specified, an AWS managed KMS ('aws/msk' managed service) key will be used for encrypting the data at rest string null no
encryption_in_transit_client_broker Encryption setting for data in transit between clients and brokers. Valid values: TLS, TLS_PLAINTEXT, and PLAINTEXT. Default value is TLS string null no
encryption_in_transit_in_cluster Whether data communication among broker nodes is encrypted. Default value: true bool null no
enhanced_monitoring Specify the desired enhanced MSK CloudWatch monitoring level. See Monitoring Amazon MSK with Amazon CloudWatch string null no
firehose_delivery_stream Name of the Kinesis Data Firehose delivery stream to deliver logs to string null no
firehose_logs_enabled Indicates whether you want to enable or disable streaming broker logs to Kinesis Data Firehose bool false no
jmx_exporter_enabled Indicates whether you want to enable or disable the JMX Exporter bool false no
kafka_version Specify the desired Kafka software version string null no
name Name of the MSK cluster string "msk" no
node_exporter_enabled Indicates whether you want to enable or disable the Node Exporter bool false no
number_of_broker_nodes The desired total number of broker nodes in the kafka cluster. It must be a multiple of the number of specified client subnets number null no
s3_logs_bucket Name of the S3 bucket to deliver logs to string null no
s3_logs_enabled Indicates whether you want to enable or disable streaming broker logs to S3 bool false no
s3_logs_prefix Prefix to append to the folder name string null no
scaling_max_capacity Max storage capacity for Kafka broker autoscaling number 250 no
scaling_role_arn The ARN of the IAM role that allows Application AutoScaling to modify your scalable target on your behalf. This defaults to an IAM Service-Linked Role string null no
scaling_target_value The Kafka broker storage utilization at which scaling is initiated number 70 no
schema_registries A map of schema registries to be created map(any) {} no
schemas A map of schemas to be created within the schema registry map(any) {} no
scram_secret_association_secret_arn_list List of AWS Secrets Manager secret ARNs to associate with SCRAM list(string) [] no
storage_mode Controls storage mode for supported storage tiers. Valid values are: LOCAL or TIERED string null no
tags A map of tags to assign to the resources created map(string) {} no
timeouts Create, update, and delete timeout configurations for the cluster map(string) {} no
vpc_connections Map of VPC Connections to create any {} no

Outputs

Name Description
appautoscaling_policy_arn The ARN assigned by AWS to the scaling policy
appautoscaling_policy_name The scaling policy's name
appautoscaling_policy_policy_type The scaling policy's type
arn Amazon Resource Name (ARN) of the MSK cluster
bootstrap_brokers Comma separated list of one or more hostname:port pairs of kafka brokers suitable to bootstrap connectivity to the kafka cluster
bootstrap_brokers_plaintext Comma separated list of one or more hostname:port pairs of kafka brokers suitable to bootstrap connectivity to the kafka cluster. Contains a value if encryption_in_transit_client_broker is set to PLAINTEXT or TLS_PLAINTEXT
bootstrap_brokers_sasl_iam One or more DNS names (or IP addresses) and SASL IAM port pairs. This attribute will have a value if encryption_in_transit_client_broker is set to TLS_PLAINTEXT or TLS and client_authentication_sasl_iam is set to true
bootstrap_brokers_sasl_scram One or more DNS names (or IP addresses) and SASL SCRAM port pairs. This attribute will have a value if encryption_in_transit_client_broker is set to TLS_PLAINTEXT or TLS and client_authentication_sasl_scram is set to true
bootstrap_brokers_tls One or more DNS names (or IP addresses) and TLS port pairs. This attribute will have a value if encryption_in_transit_client_broker is set to TLS_PLAINTEXT or TLS
cluster_uuid UUID of the MSK cluster, for use in IAM policies
configuration_arn Amazon Resource Name (ARN) of the configuration
configuration_latest_revision Latest revision of the configuration
connect_custom_plugins A map of output attributes for the connect custom plugins created
connect_worker_configuration_arn The Amazon Resource Name (ARN) of the worker configuration
connect_worker_configuration_latest_revision An ID of the latest successfully created revision of the worker configuration
current_version Current version of the MSK Cluster used for updates, e.g. K13V1IB3VIYZZH
log_group_arn The Amazon Resource Name (ARN) specifying the log group
schema_registries A map of output attributes for the schema registries created
schemas A map of output attributes for the schemas created
scram_secret_association_id Amazon Resource Name (ARN) of the MSK cluster
vpc_connections A map of output attributes for the VPC connections created
zookeeper_connect_string A comma separated list of one or more hostname:port pairs to use to connect to the Apache Zookeeper cluster. The returned values are sorted alphabetically
zookeeper_connect_string_tls A comma separated list of one or more hostname:port pairs to use to connect to the Apache Zookeeper cluster via TLS. The returned values are sorted alphabetically

License

Apache-2.0 Licensed. See LICENSE.

Issues

Error: Too many sasl blocks

Describe the bug
When we enable client_authentication_sasl_iam and client_authentication_sasl_scram at the same time, we hit this issue:

│ Error: Too many sasl blocks
│
│   on .terraform/modules/msk_cluster/main.tf line 44, in resource "aws_msk_cluster" "this":
│   44:         content {
│
│ No more than 1 "sasl" blocks are allowed

To Reproduce
Steps to reproduce the behavior:

  1. Start from the basic example
  2. Add this to the config:
  client_authentication_sasl_scram         = true
  client_authentication_sasl_iam           = true

Expected behavior
Both SASL/SCRAM and SASL/IAM should be enabled.

Desktop (please complete the following information):

  • Ubuntu 22.04
  • Terraform v1.2.5
  • registry.terraform.io/clowdhaus/msk-kafka-cluster/aws 1.2.0
  • hashicorp/aws v4.66.1
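
For reference, later releases of the module replace the individual client_authentication_sasl_* flags with a single client_authentication map (as in the usage example above), so only one sasl block is ever rendered. A minimal sketch, assuming a module version that accepts this input:

module "msk_kafka_cluster" {
  source = "terraform-aws-modules/msk-kafka-cluster/aws"

  # ... other required arguments ...

  # Both SASL mechanisms live under one sasl map, so the provider
  # renders a single "sasl" configuration block.
  client_authentication = {
    sasl = {
      scram = true
      iam   = true
    }
  }
}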

Argument "ebs_volume_size" is deprecating

FYI, this line is causing a warning:

 Warning: Argument is deprecated
│
│   with module.stream.module.msk.aws_msk_cluster.this[0],
│   on .terraform\modules\stream.msk\main.tf line 19, in resource "aws_msk_cluster" "this":
│   19:     ebs_volume_size = var.broker_node_ebs_volume_size
│
│ use 'storage_info' argument instead

Thanks for the excellent module!
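
For anyone hitting this warning: the module now exposes a broker_node_storage_info input (see the usage example above), which maps to the provider's storage_info argument and replaces the deprecated EBS volume size input. A minimal sketch:

module "msk_kafka_cluster" {
  source = "terraform-aws-modules/msk-kafka-cluster/aws"

  # ... other required arguments ...

  # Replaces the deprecated broker_node_ebs_volume_size input; maps to
  # broker_node_group_info -> storage_info -> ebs_storage_info.
  broker_node_storage_info = {
    ebs_storage_info = { volume_size = 100 }
  }
}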

Push v2.5 to Terraform registry

Hi, is there a roadmap for publishing version v2.5 to the Terraform registry? The current version available in the registry appears to be v2.3.

MSK cluster TLS config getting updated despite no changes

Description

When I run terraform apply without any changes to the Terraform code, the run fails.

Open issue: hashicorp/terraform-provider-aws#24914

  • ✋ I have searched the open/closed issues and my issue is not listed.

Versions

  • Module version [Required]: 2.1.0

  • Terraform version: 1.7.4

  • Provider version(s): 5.37.0

Reproduction Code [Required]

module "msk_kafka_cluster" {
  source  = "terraform-aws-modules/msk-kafka-cluster/aws"
  version = "2.1.0"

  name                   = local.cluster_name
  kafka_version          = var.kafka_version
  number_of_broker_nodes = length(tolist(data.aws_subnets.msk_enabled.ids))
  enhanced_monitoring    = "PER_BROKER"

  broker_node_client_subnets = tolist(data.aws_subnets.msk_enabled.ids)
  broker_node_instance_type   = "kafka.m5.large"
  broker_node_security_groups = [aws_security_group.kafka-clients.id]

  encryption_in_transit_client_broker = "TLS"
  encryption_in_transit_in_cluster    = true

  create_configuration   = false
  configuration_arn      = aws_msk_configuration.msk-cluster.arn
  configuration_revision = aws_msk_configuration.msk-cluster.latest_revision

  jmx_exporter_enabled    = true
  node_exporter_enabled   = true
  cloudwatch_logs_enabled = true
  s3_logs_enabled         = false

  scaling_max_capacity = 1000
  scaling_target_value = 80

  cloudwatch_log_group_retention_in_days = 90
  
  client_authentication           = var.client_authentication
  create_scram_secret_association = true
  scram_secret_association_secret_arn_list = [
    data.aws_secretsmanager_secret.this.arn
  ]

  tags = local.tags
  timeouts = {
    create = "120m"
  }

}

Steps to reproduce the behavior:

Expected behavior

Terraform should run and finish successfully with the message "No infrastructure changes".

Actual behavior

Terraform tries to update the security settings even though they weren't changed.

Terminal Output Screenshot(s)

Error: updating MSK Cluster (arn:aws:kafka:us-west-2:***********:cluster/msk-kafka-cluster/*******) security: operation error Kafka: UpdateSecurity, https response error StatusCode: 400, RequestID: ************, BadRequestException: The request does not include any updates to the security setting of the cluster. Verify the request, then try again. with module.msk_kafka_cluster.aws_msk_cluster.this[0] on .terraform/modules/msk_kafka_cluster/main.tf line 5, in resource "aws_msk_cluster" "this":

Additional context

If you add client_authentication[0].tls to the module's ignore_changes, as shown below, Terraform completes successfully:

resource "aws_msk_cluster" "this {
lifecycle {
    ignore_changes = [
      broker_node_group_info[0].storage_info[0].ebs_storage_info[0].volume_size,
      client_authentication[0].tls
    ]
  }

}

Auto scaling target/policy creation failing in il-central-1

Description

In regions like il-central-1 (Tel Aviv, Israel), MSK is available but Application Auto Scaling is not yet supported. In such regions, creating an MSK cluster with this module breaks with a "Not Authorized to perform this action" exception.

  • [Yes] ✋ I have searched the open/closed issues and my issue is not listed.

Versions

  • Module version [Required]: Latest

  • Terraform version: 1.4.6

  • Provider version(s): 5.16.2 (Hashicorp/aws)

Reproduction Code [Required]

Steps to reproduce the behavior:
Create AWS MSK in il-central-1 using this terraform module.

Expected behavior

Cluster creation should succeed. (Making the autoscaling target/policy resources optional will do the job; see the sketch below.)

Actual behavior

Cluster creation breaks with the authorization error above.
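
Note that the module exposes an enable_storage_autoscaling toggle (listed in the Inputs table above), which should allow skipping the Application Auto Scaling resources in regions that lack the service. A minimal sketch:

module "msk_kafka_cluster" {
  source = "terraform-aws-modules/msk-kafka-cluster/aws"

  # ... other required arguments ...

  # Skip creation of the aws_appautoscaling_target/aws_appautoscaling_policy
  # resources in regions (e.g. il-central-1) where the service is unavailable.
  enable_storage_autoscaling = false
}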

Failed to find any class that implements Connector

Description

When I use the "connect" example to create a Debezium Postgres connector, the connector worker is missing the Java class "io.debezium.connector.postgresql.PostgresConnector". It looks like the custom plugin isn't loaded correctly.

  • [-] ✋ I have searched the open/closed issues and my issue is not listed.

Versions

  • Module version [Required]:
    2.1.0
  • Terraform version:
    1.5.2
  • Provider version(s):
    5.0

Reproduction Code [Required]

Repo

provider "aws" {
  region = local.region
}

data "aws_availability_zones" "available" {}

locals {
  name   = "ex-${basename(path.cwd)}"
  region = "us-east-1"

  vpc_cidr = "10.0.0.0/16"
  azs      = slice(data.aws_availability_zones.available.names, 0, 3)

  connector_external_url = "https://repo1.maven.org/maven2/io/debezium/debezium-connector-postgres/2.3.0.Final/debezium-connector-postgres-2.3.0.Final-plugin.tar.gz"
  connector              = "debezium-connector-postgres/debezium-connector-postgres-2.3.0.Final.jar"

  tags = {
    Example    = local.name
    GithubRepo = "terraform-aws-msk-kafka-cluster"
    GithubOrg  = "terraform-aws-modules"
  }
}

################################################################################
# MSK Cluster
################################################################################

module "msk_cluster" {
  source = "../.."

  name                   = local.name
  kafka_version          = "3.4.0"
  number_of_broker_nodes = 3

  broker_node_client_subnets  = module.vpc.private_subnets
  broker_node_instance_type   = "kafka.t3.small"
  broker_node_security_groups = [module.security_group.security_group_id]

  # Connect custom plugin(s)
  connect_custom_plugins = {
    debezium = {
      name         = "debezium-postgresql"
      description  = "Debezium PostgreSQL connector"
      content_type = "JAR"

      s3_bucket_arn     = module.s3_bucket.s3_bucket_arn
      s3_file_key       = aws_s3_object.debezium_connector.id
      s3_object_version = aws_s3_object.debezium_connector.version_id

      timeouts = {
        create = "5m"
      }
    }
  }

  # Connect worker configuration
  create_connect_worker_configuration           = true
  connect_worker_config_name                    = local.name
  connect_worker_config_description             = "Example connect worker configuration"
  connect_worker_config_properties_file_content = <<-EOT
    key.converter=org.apache.kafka.connect.storage.StringConverter
    value.converter=org.apache.kafka.connect.storage.StringConverter
  EOT

  tags = local.tags
}

################################################################################
# Supporting Resources
################################################################################

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = local.name
  cidr = local.vpc_cidr

  azs              = local.azs
  public_subnets   = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k)]
  private_subnets  = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 3)]
  database_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 6)]

  create_database_subnet_group = true
  enable_nat_gateway           = true
  single_nat_gateway           = true

  tags = local.tags
}

module "security_group" {
  source  = "terraform-aws-modules/security-group/aws"
  version = "~> 5.0"

  name        = local.name
  description = "Security group for ${local.name}"
  vpc_id      = module.vpc.vpc_id

  ingress_cidr_blocks = module.vpc.private_subnets_cidr_blocks
  ingress_rules = [
    "kafka-broker-tcp",
    "kafka-broker-tls-tcp"
  ]
  
  egress_rules = ["all-all"]

  tags = local.tags
}

module "s3_bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "~> 3.0"

  bucket_prefix = local.name
  acl           = "private"
  
  control_object_ownership = true
  object_ownership         = "ObjectWriter"

  versioning = {
    enabled = true
  }

  # Allow deletion of non-empty bucket for testing
  force_destroy = true

  attach_deny_insecure_transport_policy = true
  server_side_encryption_configuration = {
    rule = {
      apply_server_side_encryption_by_default = {
        sse_algorithm = "AES256"
      }
    }
  }

  tags = local.tags
}

resource "aws_s3_object" "debezium_connector" {
  bucket = module.s3_bucket.s3_bucket_id
  key    = local.connector
  source = local.connector

  depends_on = [
    null_resource.debezium_connector
  ]
}

resource "null_resource" "debezium_connector" {
  provisioner "local-exec" {
    command = <<-EOT
      wget -c ${local.connector_external_url} -O connector.tar.gz \
        && tar -zxvf connector.tar.gz  ${local.connector} \
        && rm *.tar.gz
    EOT
  }
}

################################################################################
# IAM Role
################################################################################

module "iam_policy_kafka" {
  source = "terraform-aws-modules/iam/aws//modules/iam-policy"

  name        = "${local.name}_kafka"
  path        = "/"
  description = "My example policy"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "kafka-cluster:*Topic*",
          "kafka-cluster:WriteData",
          "kafka-cluster:ReadData"
        ]
        Resource = module.msk_cluster.arn
      }
    ]
  })
}

module "iam_assumable_role" {
  source = "terraform-aws-modules/iam/aws//modules/iam-assumable-role"

  trusted_role_services = ["kafkaconnect.amazonaws.com"]

  create_role = true

  role_name         = local.name
  role_requires_mfa = false

  custom_role_policy_arns = [
    module.iam_policy_kafka.arn,
  ]
  number_of_custom_role_policy_arns = 1
}

################################################################################
# Connector
################################################################################

resource "aws_mskconnect_connector" "debezium_postgres" {
  name = local.name

  kafkaconnect_version = "2.7.1"

  capacity {
    provisioned_capacity {
      worker_count = 1
    }
  }

  connector_configuration = {
    "name"                                            = local.name
    "connector.class"                                 = "io.debezium.connector.postgresql.PostgresConnector"
    "tasks.max"                                       = 1
  }

  kafka_cluster {
    apache_kafka_cluster {
      bootstrap_servers = module.msk_cluster.bootstrap_brokers_tls

      vpc {
        security_groups = [module.security_group.security_group_id]
        subnets         = module.vpc.private_subnets
      }
    }
  }

  kafka_cluster_client_authentication {
    authentication_type = "NONE"
  }

  kafka_cluster_encryption_in_transit {
    encryption_type = "TLS"
  }

  plugin {
    custom_plugin {
      arn      = module.msk_cluster.connect_custom_plugins.debezium.arn
      revision = module.msk_cluster.connect_custom_plugins.debezium.latest_revision
    }
  }

  log_delivery {
    worker_log_delivery {
      s3 {
        enabled = true
        bucket  = module.s3_bucket.s3_bucket_id
      }
    }
  }

  service_execution_role_arn = module.iam_assumable_role.iam_role_arn
}

Steps to reproduce the behavior:

terraform init
terraform apply

Expected behavior

The connector is expected to recognize the "io.debezium.connector.postgresql.PostgresConnector" class.

Actual behavior

The MSK Connect Connector throws an error with the following message:

Error: waiting for MSK Connect Connector (arn:aws:kafkaconnect:us-east-1:908798698:connector/ex-connect/f7be14d6-6e08-454b-a902-6b0e5767a652-4) create: unexpected state 'FAILED', wanted target 'RUNNING'. last error: InvalidInput.InvalidConnectorConfiguration: The connector configuration is invalid. Message: Failed to find any class that implements Connector and which name matches io.debezium.connector.postgresql.PostgresConnector, available connectors are: PluginDesc{klass=class org.apache.kafka.connect.file.FileStreamSinkConnector, name='org.apache.kafka.connect.file.FileStreamSinkConnector', version='2.7.1', encodedVersion=2.7.1, type=sink, typeName='sink', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.file.FileStreamSourceConnector, name='org.apache.kafka.connect.file.FileStreamSourceConnector', version='2.7.1', encodedVersion=2.7.1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.mirror.MirrorCheckpointConnector, name='org.apache.kafka.connect.mirror.MirrorCheckpointConnector', version='1', encodedVersion=1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.mirror.MirrorHeartbeatConnector, name='org.apache.kafka.connect.mirror.MirrorHeartbeatConnector', version='1', encodedVersion=1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.mirror.MirrorSourceConnector, name='org.apache.kafka.connect.mirror.MirrorSourceConnector', version='1', encodedVersion=1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.MockConnector, name='org.apache.kafka.connect.tools.MockConnector', version='2.7.1', encodedVersion=2.7.1, type=connector, typeName='connector', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.MockSinkConnector, name='org.apache.kafka.connect.tools.MockSinkConnector', version='2.7.1', encodedVersion=2.7.1, type=sink, typeName='sink', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.MockSourceConnector, name='org.apache.kafka.connect.tools.MockSourceConnector', version='2.7.1', encodedVersion=2.7.1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.SchemaSourceConnector, name='org.apache.kafka.connect.tools.SchemaSourceConnector', version='2.7.1', encodedVersion=2.7.1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.VerifiableSinkConnector, name='org.apache.kafka.connect.tools.VerifiableSinkConnector', version='2.7.1', encodedVersion=2.7.1, type=source, typeName='source', location='classpath'}, PluginDesc{klass=class org.apache.kafka.connect.tools.VerifiableSourceConnector, name='org.apache.kafka.connect.tools.VerifiableSourceConnector', version='2.7.1', encodedVersion=2.7.1, type=source, typeName='source', location='classpath'}

I would really appreciate the help.
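
One possible cause, worth checking: the reproduction uploads only the single connector JAR extracted from the Debezium archive, while the archive also ships the connector's dependency JARs. MSK Connect custom plugins can alternatively be uploaded as a ZIP containing the whole plugin directory. A hedged sketch of that variant (the ZIP object key is hypothetical, and the archive would need to be repackaged from the extracted tar.gz):

resource "aws_s3_object" "debezium_plugin_zip" {
  bucket = module.s3_bucket.s3_bucket_id
  key    = "debezium-connector-postgres.zip" # hypothetical key
  source = "debezium-connector-postgres.zip" # ZIP built from the extracted tar.gz
}

module "msk_cluster" {
  source = "../.."

  # ... other arguments as in the reproduction above ...

  connect_custom_plugins = {
    debezium = {
      name        = "debezium-postgresql"
      description = "Debezium PostgreSQL connector"
      # Upload the full plugin directory (connector JAR plus its
      # dependency JARs) as a ZIP rather than a single JAR.
      content_type = "ZIP"

      s3_bucket_arn = module.s3_bucket.s3_bucket_arn
      s3_file_key   = aws_s3_object.debezium_plugin_zip.id
    }
  }
}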

MSK Connect - Worker Configuration and Custom Plugin

Hello,

First of all, I want to thank you for such a great module and for the support.

Now, the questions:

Is it possible to create many worker configurations and custom plugins without creating the cluster itself? Say I have one MSK cluster and want many plugins and worker configurations for that cluster.

--

It is also important that a single MSK cluster may need multiple connectors with different configurations for different sources. These connectors should be loaded from S3 (JAR or ZIP files), and I did not find any connector configuration inputs in the module. So the connector would essentially be a template input that can consume custom plugins located in S3.

Can you suggest an approach?
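
One option, for the standalone-resources part of the question: the module couples its MSK Connect resources to the cluster, but the underlying provider resources can be managed on their own, so additional plugins and worker configurations can be declared outside the module. A minimal sketch using the plain provider resources (names, bucket, and keys are hypothetical):

resource "aws_mskconnect_custom_plugin" "extra" {
  name         = "extra-plugin" # hypothetical name
  content_type = "JAR"

  location {
    s3 {
      bucket_arn = "arn:aws:s3:::my-plugin-bucket" # hypothetical bucket
      file_key   = "plugins/extra-plugin.jar"      # hypothetical key
    }
  }
}

resource "aws_mskconnect_worker_configuration" "extra" {
  name                    = "extra-worker-config" # hypothetical name
  properties_file_content = <<-EOT
    key.converter=org.apache.kafka.connect.storage.StringConverter
    value.converter=org.apache.kafka.connect.storage.StringConverter
  EOT
}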

Unable to change MSK broker storage size with Terraform

Description

After provisioning a new MSK cluster with a fixed storage size, it's not possible to change the storage size with Terraform. It seems like Terraform is unable to detect the change in the broker storage size.

  • ✋ I have searched the open/closed issues and my issue is not listed.

Versions

  • Module version : 2.5 (Latest in Terraform registry when creating the issue.)

  • Terraform version:
    v1.5.7 and also latest v1.8.5 on linux_amd64

  • Provider version(s):
    provider registry.terraform.io/hashicorp/aws v5.54.1
    provider registry.terraform.io/hashicorp/random v3.6.2

Reproduction Code [Required]

module "msk" {
  source  = "terraform-aws-modules/msk-kafka-cluster/aws"
  version = "2.5.0"

  name                      = "dev"
  broker_node_instance_type = "kafka.t3.small"
  kafka_version             = "3.5.1"
  number_of_broker_nodes    = 2

  client_authentication = {
    sasl = { iam = true }
  }

  broker_node_client_subnets = [
...
  ]

  enable_storage_autoscaling = false
  broker_node_storage_info = {
    ebs_storage_info = {
      volume_size = 5
    }
  }
}

Steps to reproduce the behavior:

  1. Create the MSK cluster with fixed storage space
  2. Try to increase the storage space

Expected behavior

Terraform plan should show that it would change the EBS volume size from 5 GB to 20 GB.

Actual behavior

Terraform plan shows "No changes. Your infrastructure matches the configuration.".
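
A likely explanation, judging from the lifecycle block quoted in the TLS issue above: the module ignores changes to the broker EBS volume size so that storage autoscaling can manage it, which means Terraform never sees a manual size change. A paraphrased sketch of that pinning (not the module's literal source):

resource "aws_msk_cluster" "this" {
  # ...

  lifecycle {
    ignore_changes = [
      # Managed by the storage autoscaling policy, so manual edits to
      # volume_size produce "No changes" in terraform plan.
      broker_node_group_info[0].storage_info[0].ebs_storage_info[0].volume_size,
    ]
  }
}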

aws_msk_configuration failed during AWS MSK version upgrade

Description

When I update the Kafka version on the module, the aws_msk_configuration resource fails: the version change requires the configuration to be destroyed, which is not possible while it is in use by the MSK cluster.

  • ✋ I have searched the open/closed issues and my issue is not listed.

Versions

  • Module version [Required]: 2.3.0

  • Terraform version: 1.6.3

  • Provider version(s):
    provider registry.terraform.io/hashicorp/aws v5.26.0
    provider registry.terraform.io/hashicorp/random v3.5.1

Reproduction Code [Required]

module "msk_cluster" {

  depends_on = [module.s3_bucket_for_logs, module.cluster_sg, module.kms]
  source     = "github.com/terraform-aws-modules/terraform-aws-msk-kafka-cluster?ref=v2.3.0"

  name                   = local.msk_cluster_name
  kafka_version          = var.kafka_version
  number_of_broker_nodes = var.number_of_broker_nodes
  enhanced_monitoring    = var.enhanced_monitoring

  broker_node_client_subnets  = var.broker_node_client_subnets
  broker_node_instance_type   = var.broker_node_instance_type
  broker_node_security_groups = concat([
    for sg in module.cluster_sg :sg.security_group_id
  ], var.extra_security_groups_ids)

  broker_node_storage_info = {
    ebs_storage_info = { volume_size = var.volume_size }
  }

  encryption_in_transit_client_broker = var.encryption_in_transit_client_broker
  encryption_in_transit_in_cluster    = var.encryption_in_transit_in_cluster
  encryption_at_rest_kms_key_arn      = module.kms.key_arn

  jmx_exporter_enabled                   = var.jmx_exporter_enabled
  node_exporter_enabled                  = var.node_exporter_enabled
  cloudwatch_logs_enabled                = var.cloudwatch_logs_enabled
  s3_logs_enabled                        = var.s3_logs_enabled
  s3_logs_bucket                         = module.s3_bucket_for_logs.s3_bucket_id
  s3_logs_prefix                         = var.s3_logs_prefix
  cloudwatch_log_group_retention_in_days = var.cloudwatch_log_group_retention_in_days
  cloudwatch_log_group_kms_key_id        = var.cloudwatch_log_group_kms_key_id
  configuration_server_properties        = var.configuration_server_properties
  configuration_name                     = "${local.msk_cluster_name}-${replace(var.kafka_version,".","-")}"
  configuration_description              = local.msk_cluster_name

  tags = merge(
    var.tags,
    {
      Name = local.msk_cluster_name
    }
  )
}

Steps to reproduce the behavior:

Expected behavior

The new aws_msk_configuration resource should be created before the old one is deleted.

Actual behavior

Terraform tries to delete the previous aws_msk_configuration resource before creating the new one, which fails because the configuration is still in use by the cluster.

Terminal Output Screenshot(s)

module.msk_cluster.aws_msk_configuration.this[0]: Destroying... [id=arn:aws:kafka:eu-west-1:xxxxxxxxxxxxxx:configuration/example/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx]

Error: deleting MSK Configuration (arn:aws:kafka:eu-west-1:xxxxxxxxxxxxxx:configuration/example/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx): BadRequestException: Configuration is in use by one or more clusters. Dissociate the configuration from the clusters.
 {
   RespMetadata: {
     StatusCode: 400,
     RequestID: "0bb9fe5d-ee26-4dad-8a81-8c3fa6c06483"
   },
   InvalidParameter: "arn",
   Message_: "Configuration is in use by one or more clusters. Dissociate the configuration from the clusters."
 }

Additional context

If you set the configuration_name parameter to a dynamic name and manually add lifecycle { create_before_destroy = true } to the aws_msk_configuration resource, the update succeeds, so this may be the solution.
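
For illustration, a hedged sketch of what such a configuration could look like when managed outside the module, combining a version-derived name with create_before_destroy (the variable names are assumptions):

resource "aws_msk_configuration" "this" {
  # Embedding the Kafka version in the name guarantees a fresh name on
  # upgrade, so the replacement can be created before the old one is removed.
  name           = "${var.cluster_name}-${replace(var.kafka_version, ".", "-")}"
  kafka_versions = [var.kafka_version]

  server_properties = <<-EOT
    auto.create.topics.enable = true
    delete.topic.enable = true
  EOT

  lifecycle {
    # A configuration in use by a cluster cannot be deleted, so create
    # the replacement first.
    create_before_destroy = true
  }
}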

Random number generator is not updating configuration resource name on create before destroy

Description

When upgrading an MSK cluster to a new version, the 2.5.0 version of this terraform module wants to replace the MSK configuration resource. The create before destroy is required because you cannot destroy the configuration for an active cluster. However, your random number generator does not always generate a unique name, so the create before destroy fails.

See this line of code in your module:
https://github.com/terraform-aws-modules/terraform-aws-msk-kafka-cluster/blob/master/main.tf#L259

  • [X] ✋ I have searched the open/closed issues and my issue is not listed.

Versions

  • Module version:
    2.5.0

  • Terraform version:
    1.7.5

  • Provider version(s):
    5.42.0

Expected behavior

Create before destroy should create a new aws_msk_configuration with a new, unique name from the random number generator every time it is run.

Actual behavior

After the first successful recreation, the create before destroy on this resource creates a new aws_msk_configuration with the same random number generated on the first resource creation run, causing Terraform to fail.

Terminal Output

.
.
  # module.msk["REDACTED"].module.msk.aws_msk_configuration.this[0] must be replaced
+/- resource "aws_msk_configuration" "this" {
      ~ arn             = "arn:aws:kafka:REDACTED:REDACTED:configuration/REDACTED" -> (known after apply)
      ~ id              = "arn:aws:kafka:REDACTED:REDACTED:configuration/REDACTED" -> (known after apply)
      ~ kafka_versions  = [ # forces replacement
          - "3.4.0",
          + "3.5.1",
        ]
      ~ latest_revision = 1 -> (known after apply)
        name            = "REDACTED-123456789"        <-- notice the name is not changing
    }
.
.
.
Error: creating MSK Configuration: operation error Kafka: CreateConfiguration, https response error StatusCode: 409, RequestID: ********* ConflictException: A resource with this name already exists.

Possible Solutions

The random number generator was a great idea, but it only works for the first recreation. You could ensure the value is always unique by doing something like this:

locals {
  config_as_string     = join(",", [for key, value in var.configuration_server_properties : "${key}=${value}"])
  unique_config_suffix = substr(sha256("${local.config_as_string}${var.kafka_version}"), -8, -1)
}

name = format("%s-%s", coalesce(var.configuration_name, var.name), local.unique_config_suffix)

Doing this, I believe that any time either of these values changes, it would not only trigger the resource recreation but also generate a completely unique 8-character suffix. I didn't test this, but I think something like this would work. You may also need to add one additional value to the hash so you don't end up with collisions when two configs have the same name and the same configuration values.
