
terraform-provider-confluent's Introduction

Terraform Provider for Confluent

The Confluent Terraform Provider is a plugin for Terraform that allows for the lifecycle management of Confluent resources. This provider is maintained by Confluent.

Quick Starts

Documentation

Full documentation is available on the Terraform website.

License

Copyright 2022 Confluent Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

terraform-provider-confluent's People

Contributors

bennettzhu1, cchristous, codyaray, confluentjenkins, confluentsemaphore, confluentspencer, confluenttools, gunalkupta, javabrett, justinrlee, kzzhang, linouk23, luca-filipponi, lyoung-confluent, nlou9, pgaref, sravindra99, taihengjin, thllwg, tolgadur, xli1996, zhenli00


terraform-provider-confluent's Issues

confluent_kafka_acl support for a list of principals

Hi Confluent team!

I have a question/feature request/conversation that I would like to start.
Would it be possible for this Terraform provider to support a list of principals for a given confluentcloud_kafka_acl?

One example from the point of view of a consumer of this provider could be:

resource "confluentcloud_kafka_acl" "describe-basic-cluster" {
  kafka_cluster = confluentcloud_kafka_cluster.basic-cluster.id
  resource_type = "CLUSTER"
  resource_name = "kafka-cluster"
  pattern_type  = "LITERAL"
  principals    = ["User:12345", "User:67890", "User:11111"]  # <-- the main change; note the plural "principals"
  host          = "*"
  operation     = "DESCRIBE"
  permission    = "ALLOW"
  http_endpoint = confluentcloud_kafka_cluster.basic-cluster.http_endpoint
  credentials {
    key    = "<Kafka API Key for confluentcloud_kafka_cluster.basic-cluster>"
    secret = "<Kafka API Secret for confluentcloud_kafka_cluster.basic-cluster>"
  }
}

Perhaps this could work by creating and returning an array or slice of ACLs? It would make sense that all ACLs created in this fashion share all other properties, such as resource_type, operation, and so on.
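
In the meantime, a similar effect can be approximated with for_each over the existing singular principal attribute (a sketch reusing the names from the example above):

locals {
  principals = ["User:12345", "User:67890", "User:11111"]
}

resource "confluentcloud_kafka_acl" "describe-basic-cluster" {
  # one ACL resource instance per principal
  for_each      = toset(local.principals)

  kafka_cluster = confluentcloud_kafka_cluster.basic-cluster.id
  resource_type = "CLUSTER"
  resource_name = "kafka-cluster"
  pattern_type  = "LITERAL"
  principal     = each.value
  host          = "*"
  operation     = "DESCRIBE"
  permission    = "ALLOW"
  http_endpoint = confluentcloud_kafka_cluster.basic-cluster.http_endpoint
  credentials {
    key    = "<Kafka API Key for confluentcloud_kafka_cluster.basic-cluster>"
    secret = "<Kafka API Secret for confluentcloud_kafka_cluster.basic-cluster>"
  }
}

A native principals list would still be nicer, since it would create a single resource instead of one per principal.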

Thank you very much for the work on this provider

Add new computed attributes for PL network

What

The Terraform resource which creates the network with PrivateLink doesn't expose a few necessary details (DNS domain and subdomain) in its output data, which means I can't configure the VPC linking without manually copying the details. Outputs like the sketch below would cover it.
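
(dns_domain and zonal_subdomains are assumptions based on attribute names newer provider versions appear to expose; confluent_network.main stands in for the PrivateLink network resource:)

output "network_dns_domain" {
  value = confluent_network.main.dns_domain
}

output "network_zonal_subdomains" {
  value = confluent_network.main.zonal_subdomains
}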

Refreshing state of confluent_kafka_acl resources fails and ACL is not recreated

Hi guys,

after starting to roll out a number of ACLs using the confluent_kafka_acl resource on our Confluent Cloud project with the new confluentinc/confluent 0.7.0 provider, I have noticed that the refreshing-state step can fail for confluent_kafka_acl resource instances. The Terraform provider then does not create the missing ACL but instead throws an error.

Here is an example of multiple Terraform runs on a temporary test system, with no change to the Terraform code and no change on Confluent Cloud between runs.

1st run:

[2022-05-09T14:03:14.114Z] confluent_kafka_acl.metrics-sa-acl-topic-describe-config: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-nv78kk#*#DESCRIBE_CONFIGS#ALLOW]
[2022-05-09T14:03:14.114Z] confluent_kafka_acl.pushservice-sa-acl-topic-describe-config: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-121v06#*#DESCRIBE_CONFIGS#ALLOW]
[2022-05-09T14:03:14.114Z] confluent_kafka_acl.preprocessor-sa-acl-topic-describe: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-o3omyj#*#DESCRIBE#ALLOW]
[2022-05-09T14:03:14.114Z] confluent_kafka_acl.pushservice-sa-acl-group-read: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-121v06#*#READ#ALLOW]
[2022-05-09T14:03:14.115Z] confluent_kafka_acl.pushservice-sa-acl-group-describe: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-121v06#*#DESCRIBE#ALLOW]
[2022-05-09T14:03:14.115Z] confluent_kafka_acl.preprocessor-sa-acl-group-describe: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-o3omyj#*#DESCRIBE#ALLOW]
[2022-05-09T14:03:14.115Z] confluent_kafka_acl.preprocessor-sa-acl-topic-write: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-o3omyj#*#WRITE#ALLOW]
[2022-05-09T14:03:14.115Z] confluent_kafka_acl.preprocessor-sa-acl-topic-describe-config: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-o3omyj#*#DESCRIBE_CONFIGS#ALLOW]
[2022-05-09T14:03:14.115Z] confluent_kafka_acl.httpapi-sa-acl-group-describe: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-7yv0z1#*#DESCRIBE#ALLOW]
[2022-05-09T14:03:14.115Z] confluent_kafka_acl.httpapi-sa-acl-group-read: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-7yv0z1#*#READ#ALLOW]
[2022-05-09T14:03:14.698Z] confluent_kafka_acl.metrics-sa-acl-topic-describe: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-nv78kk#*#DESCRIBE#ALLOW]
[2022-05-09T14:03:14.698Z] confluent_kafka_acl.preprocessor-sa-acl-group-read: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-o3omyj#*#READ#ALLOW]
[2022-05-09T14:03:14.698Z] confluent_kafka_acl.metrics-sa-acl-group-describe: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-nv78kk#*#DESCRIBE#ALLOW]
[2022-05-09T14:03:14.698Z] confluent_kafka_acl.httpapi-sa-acl-topic-write: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-7yv0z1#*#WRITE#ALLOW]
[2022-05-09T14:03:14.698Z] confluent_kafka_acl.preprocessor-sa-acl-topic-read: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-o3omyj#*#READ#ALLOW]
[2022-05-09T14:03:14.960Z] confluent_kafka_acl.httpapi-sa-acl-group-delete: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-7yv0z1#*#DELETE#ALLOW]
[2022-05-09T14:03:14.960Z] confluent_kafka_acl.pushservice-sa-acl-topic-describe: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-121v06#*#DESCRIBE#ALLOW]
[2022-05-09T14:03:14.960Z] confluent_kafka_acl.metrics-sa-acl-group-read: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-nv78kk#*#READ#ALLOW]
[2022-05-09T14:03:14.960Z] confluent_kafka_acl.preprocessor-sa-acl-group-delete: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-o3omyj#*#DELETE#ALLOW]
[2022-05-09T14:03:14.960Z] confluent_kafka_acl.httpapi-sa-acl-topic-describe: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-7yv0z1#*#DESCRIBE#ALLOW]
[2022-05-09T14:03:15.220Z] confluent_kafka_acl.pushservice-sa-acl-group-delete: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-121v06#*#DELETE#ALLOW]
[2022-05-09T14:03:15.220Z] confluent_kafka_acl.pushservice-sa-acl-topic-write: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-121v06#*#WRITE#ALLOW]
[2022-05-09T14:03:15.220Z] confluent_kafka_acl.httpapi-sa-acl-topic-read: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-7yv0z1#*#READ#ALLOW]
[2022-05-09T14:03:15.220Z] confluent_kafka_acl.pushservice-sa-acl-topic-read: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-121v06#*#READ#ALLOW]
[2022-05-09T14:03:15.220Z] confluent_kafka_acl.httpapi-sa-acl-topic-describe-config: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-7yv0z1#*#DESCRIBE_CONFIGS#ALLOW]
[2022-05-09T14:03:27.463Z] 
[2022-05-09T14:03:27.463Z] Error: error reading Kafka ACLs "lkc-9kz5rm/GROUP#*#LITERAL#User:sa-nv78kk#*#DESCRIBE#ALLOW": no Kafka ACLs were matched
[2022-05-09T14:03:27.463Z] 
[2022-05-09T14:03:27.463Z]   with confluent_kafka_acl.metrics-sa-acl-group-describe,
[2022-05-09T14:03:27.463Z]   on 025_confluent_acls.tf line 445, in resource "confluent_kafka_acl" "metrics-sa-acl-group-describe":
[2022-05-09T14:03:27.463Z]  445: resource "confluent_kafka_acl" "metrics-sa-acl-group-describe" {

2nd run:

[2022-05-09T14:34:27.781Z] confluent_kafka_acl.metrics-sa-acl-group-read: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-nv78kk#*#READ#ALLOW]
[2022-05-09T14:34:27.782Z] confluent_kafka_acl.httpapi-sa-acl-group-delete: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-7yv0z1#*#DELETE#ALLOW]
[2022-05-09T14:34:27.782Z] confluent_kafka_acl.httpapi-sa-acl-group-read: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-7yv0z1#*#READ#ALLOW]
[2022-05-09T14:34:27.782Z] confluent_kafka_acl.preprocessor-sa-acl-group-describe: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-o3omyj#*#DESCRIBE#ALLOW]
[2022-05-09T14:34:27.782Z] confluent_kafka_acl.httpapi-sa-acl-topic-read: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-7yv0z1#*#READ#ALLOW]
[2022-05-09T14:34:27.782Z] confluent_kafka_acl.pushservice-sa-acl-topic-describe-config: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-121v06#*#DESCRIBE_CONFIGS#ALLOW]
[2022-05-09T14:34:27.782Z] confluent_kafka_acl.metrics-sa-acl-topic-describe-config: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-nv78kk#*#DESCRIBE_CONFIGS#ALLOW]
[2022-05-09T14:34:27.782Z] confluent_kafka_acl.httpapi-sa-acl-group-describe: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-7yv0z1#*#DESCRIBE#ALLOW]
[2022-05-09T14:34:27.782Z] confluent_kafka_acl.httpapi-sa-acl-topic-write: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-7yv0z1#*#WRITE#ALLOW]
[2022-05-09T14:34:27.782Z] confluent_kafka_acl.pushservice-sa-acl-topic-write: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-121v06#*#WRITE#ALLOW]
[2022-05-09T14:34:28.724Z] confluent_kafka_acl.preprocessor-sa-acl-topic-read: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-o3omyj#*#READ#ALLOW]
[2022-05-09T14:34:28.724Z] confluent_kafka_acl.httpapi-sa-acl-topic-describe-config: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-7yv0z1#*#DESCRIBE_CONFIGS#ALLOW]
[2022-05-09T14:34:28.724Z] confluent_kafka_acl.preprocessor-sa-acl-group-delete: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-o3omyj#*#DELETE#ALLOW]
[2022-05-09T14:34:28.724Z] confluent_kafka_acl.pushservice-sa-acl-group-delete: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-121v06#*#DELETE#ALLOW]
[2022-05-09T14:34:28.724Z] confluent_kafka_acl.pushservice-sa-acl-topic-read: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-121v06#*#READ#ALLOW]
[2022-05-09T14:34:28.724Z] confluent_kafka_acl.preprocessor-sa-acl-topic-describe: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-o3omyj#*#DESCRIBE#ALLOW]
[2022-05-09T14:34:28.724Z] confluent_kafka_acl.pushservice-sa-acl-group-read: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-121v06#*#READ#ALLOW]
[2022-05-09T14:34:28.724Z] confluent_kafka_acl.httpapi-sa-acl-topic-describe: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-7yv0z1#*#DESCRIBE#ALLOW]
[2022-05-09T14:34:28.724Z] confluent_kafka_acl.pushservice-sa-acl-group-describe: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-121v06#*#DESCRIBE#ALLOW]
[2022-05-09T14:34:28.985Z] confluent_kafka_acl.preprocessor-sa-acl-topic-describe-config: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-o3omyj#*#DESCRIBE_CONFIGS#ALLOW]
[2022-05-09T14:34:29.245Z] confluent_kafka_acl.preprocessor-sa-acl-topic-write: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-o3omyj#*#WRITE#ALLOW]
[2022-05-09T14:34:29.245Z] confluent_kafka_acl.pushservice-sa-acl-topic-describe: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-121v06#*#DESCRIBE#ALLOW]
[2022-05-09T14:34:29.245Z] confluent_kafka_acl.preprocessor-sa-acl-group-read: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-o3omyj#*#READ#ALLOW]
[2022-05-09T14:34:29.245Z] confluent_kafka_acl.metrics-sa-acl-group-describe: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-nv78kk#*#DESCRIBE#ALLOW]
[2022-05-09T14:34:29.245Z] confluent_kafka_acl.metrics-sa-acl-topic-describe: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-nv78kk#*#DESCRIBE#ALLOW]
[2022-05-09T14:34:41.475Z] 
[2022-05-09T14:34:41.475Z] Error: error reading Kafka ACLs "lkc-9kz5rm/GROUP#*#LITERAL#User:sa-nv78kk#*#DESCRIBE#ALLOW": no Kafka ACLs were matched
[2022-05-09T14:34:41.475Z] 
[2022-05-09T14:34:41.475Z]   with confluent_kafka_acl.metrics-sa-acl-group-describe,
[2022-05-09T14:34:41.475Z]   on 025_confluent_acls.tf line 445, in resource "confluent_kafka_acl" "metrics-sa-acl-group-describe":
[2022-05-09T14:34:41.476Z]  445: resource "confluent_kafka_acl" "metrics-sa-acl-group-describe" {

3rd run:

[2022-05-09T14:36:25.765Z] confluent_kafka_acl.preprocessor-sa-acl-group-read: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-o3omyj#*#READ#ALLOW]
[2022-05-09T14:36:25.765Z] confluent_kafka_acl.httpapi-sa-acl-group-read: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-7yv0z1#*#READ#ALLOW]
[2022-05-09T14:36:25.765Z] confluent_kafka_acl.httpapi-sa-acl-topic-describe: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-7yv0z1#*#DESCRIBE#ALLOW]
[2022-05-09T14:36:25.765Z] confluent_kafka_acl.preprocessor-sa-acl-group-delete: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-o3omyj#*#DELETE#ALLOW]
[2022-05-09T14:36:25.765Z] confluent_kafka_acl.preprocessor-sa-acl-topic-describe-config: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-o3omyj#*#DESCRIBE_CONFIGS#ALLOW]
[2022-05-09T14:36:26.031Z] confluent_kafka_acl.metrics-sa-acl-topic-describe-config: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-nv78kk#*#DESCRIBE_CONFIGS#ALLOW]
[2022-05-09T14:36:26.034Z] confluent_kafka_acl.preprocessor-sa-acl-topic-describe: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-o3omyj#*#DESCRIBE#ALLOW]
[2022-05-09T14:36:26.034Z] confluent_kafka_acl.pushservice-sa-acl-group-delete: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-121v06#*#DELETE#ALLOW]
[2022-05-09T14:36:26.034Z] confluent_kafka_acl.preprocessor-sa-acl-topic-read: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-o3omyj#*#READ#ALLOW]
[2022-05-09T14:36:26.986Z] confluent_kafka_acl.pushservice-sa-acl-topic-describe: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-121v06#*#DESCRIBE#ALLOW]
[2022-05-09T14:36:26.986Z] confluent_kafka_acl.metrics-sa-acl-topic-describe: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-nv78kk#*#DESCRIBE#ALLOW]
[2022-05-09T14:36:26.986Z] confluent_kafka_acl.httpapi-sa-acl-topic-write: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-7yv0z1#*#WRITE#ALLOW]
[2022-05-09T14:36:26.986Z] confluent_kafka_acl.httpapi-sa-acl-topic-read: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-7yv0z1#*#READ#ALLOW]
[2022-05-09T14:36:26.986Z] confluent_kafka_acl.httpapi-sa-acl-group-describe: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-7yv0z1#*#DESCRIBE#ALLOW]
[2022-05-09T14:36:26.986Z] confluent_kafka_acl.pushservice-sa-acl-topic-write: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-121v06#*#WRITE#ALLOW]
[2022-05-09T14:36:26.986Z] confluent_kafka_acl.metrics-sa-acl-group-read: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-nv78kk#*#READ#ALLOW]
[2022-05-09T14:36:26.986Z] confluent_kafka_acl.metrics-sa-acl-group-describe: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-nv78kk#*#DESCRIBE#ALLOW]
[2022-05-09T14:36:27.248Z] confluent_kafka_acl.httpapi-sa-acl-topic-describe-config: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-7yv0z1#*#DESCRIBE_CONFIGS#ALLOW]
[2022-05-09T14:36:27.248Z] confluent_kafka_acl.pushservice-sa-acl-topic-read: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-121v06#*#READ#ALLOW]
[2022-05-09T14:36:27.248Z] confluent_kafka_acl.pushservice-sa-acl-group-read: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-121v06#*#READ#ALLOW]
[2022-05-09T14:36:27.510Z] confluent_kafka_acl.httpapi-sa-acl-group-delete: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-7yv0z1#*#DELETE#ALLOW]
[2022-05-09T14:36:27.510Z] confluent_kafka_acl.preprocessor-sa-acl-topic-write: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-o3omyj#*#WRITE#ALLOW]
[2022-05-09T14:36:27.510Z] confluent_kafka_acl.preprocessor-sa-acl-group-describe: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-o3omyj#*#DESCRIBE#ALLOW]
[2022-05-09T14:36:27.510Z] confluent_kafka_acl.pushservice-sa-acl-topic-describe-config: Refreshing state... [id=lkc-9kz5rm/TOPIC#*#LITERAL#User:sa-121v06#*#DESCRIBE_CONFIGS#ALLOW]
[2022-05-09T14:36:27.510Z] confluent_kafka_acl.pushservice-sa-acl-group-describe: Refreshing state... [id=lkc-9kz5rm/GROUP#*#LITERAL#User:sa-121v06#*#DESCRIBE#ALLOW]
[2022-05-09T14:36:39.763Z] 
[2022-05-09T14:36:39.763Z] Error: error reading Kafka ACLs "lkc-9kz5rm/GROUP#*#LITERAL#User:sa-nv78kk#*#DESCRIBE#ALLOW": no Kafka ACLs were matched
[2022-05-09T14:36:39.764Z] 
[2022-05-09T14:36:39.764Z]   with confluent_kafka_acl.metrics-sa-acl-group-describe,
[2022-05-09T14:36:39.764Z]   on 025_confluent_acls.tf line 445, in resource "confluent_kafka_acl" "metrics-sa-acl-group-describe":
[2022-05-09T14:36:39.764Z]  445: resource "confluent_kafka_acl" "metrics-sa-acl-group-describe" {

As you can see, it fails at different steps, and an entry that caused issues in one run works in another re-run.

These are the ACLs in place as per confluent CLI output:

$ confluent kafka acl list --cluster lkc-9kz5rm
    Principal    | Permission |    Operation     | Resource Type | Resource Name | Pattern Type
-----------------+------------+------------------+---------------+---------------+---------------
  User:sa-nv78kk | ALLOW      | DESCRIBE         | TOPIC         | *             | LITERAL
  User:sa-nv78kk | ALLOW      | DESCRIBE_CONFIGS | TOPIC         | *             | LITERAL
  User:sa-121v06 | ALLOW      | READ             | TOPIC         | *             | LITERAL
  User:sa-121v06 | ALLOW      | WRITE            | TOPIC         | *             | LITERAL
  User:sa-121v06 | ALLOW      | DESCRIBE         | TOPIC         | *             | LITERAL
  User:sa-121v06 | ALLOW      | DESCRIBE_CONFIGS | TOPIC         | *             | LITERAL
  User:sa-7yv0z1 | ALLOW      | READ             | TOPIC         | *             | LITERAL
  User:sa-7yv0z1 | ALLOW      | WRITE            | TOPIC         | *             | LITERAL
  User:sa-7yv0z1 | ALLOW      | DESCRIBE         | TOPIC         | *             | LITERAL
  User:sa-7yv0z1 | ALLOW      | DESCRIBE_CONFIGS | TOPIC         | *             | LITERAL
  User:sa-o3omyj | ALLOW      | READ             | TOPIC         | *             | LITERAL
  User:sa-o3omyj | ALLOW      | WRITE            | TOPIC         | *             | LITERAL
  User:sa-o3omyj | ALLOW      | DESCRIBE         | TOPIC         | *             | LITERAL
  User:sa-o3omyj | ALLOW      | DESCRIBE_CONFIGS | TOPIC         | *             | LITERAL
  User:sa-nv78kk | ALLOW      | READ             | GROUP         | *             | LITERAL
  User:sa-121v06 | ALLOW      | READ             | GROUP         | *             | LITERAL
  User:sa-121v06 | ALLOW      | DELETE           | GROUP         | *             | LITERAL
  User:sa-121v06 | ALLOW      | DESCRIBE         | GROUP         | *             | LITERAL
  User:sa-7yv0z1 | ALLOW      | READ             | GROUP         | *             | LITERAL
  User:sa-7yv0z1 | ALLOW      | DELETE           | GROUP         | *             | LITERAL
  User:sa-7yv0z1 | ALLOW      | DESCRIBE         | GROUP         | *             | LITERAL
  User:sa-o3omyj | ALLOW      | READ             | GROUP         | *             | LITERAL
  User:sa-o3omyj | ALLOW      | DELETE           | GROUP         | *             | LITERAL
  User:sa-o3omyj | ALLOW      | DESCRIBE         | GROUP         | *             | LITERAL

As you can see, some of the required ACLs are indeed not in the list. But why is this considered an error in the first place? If the Terraform provider cannot find an ACL which is expected to be present on the cluster, shouldn't it just recreate the ACL instead of aborting with an error? That is what most other Terraform providers would do in such a scenario.

Greetings
Valentin

Slow Performance (v0.7.0): Creating an admin account and a topic with producer/consumer

Firstly, well done on adding all the new features to 0.7.0 release.

Now to the issue

When I use the 0.7.0 version to provision (nothing more than what is done in the sample examples)
(1) an admin account with the CloudClusterAdmin RBAC role
(2) a single topic with producer and consumer accounts and their respective ACLs

it takes 6 to 8 minutes to have all the resources provisioned. Using the CLI directly, provisioning the same resources takes 20 to 30 seconds.

Below are some of the output logs indicating how long the steps take

RBAC binding flow

confluent_role_binding.app-manager-kafka-cluster-admin: Creating...
confluent_role_binding.app-manager-kafka-cluster-admin: Still creating... [10s elapsed]
...............
confluent_role_binding.app-manager-kafka-cluster-admin: Still creating... [3m0s elapsed]

api-key generation (every key-generating resource seems to take 2 minutes on average)

confluent_api_key.app-manager-kafka-api-key: Still creating... [10s elapsed]
confluent_api_key.app-manager-kafka-api-key: Still creating... [20s elapsed]
...............
confluent_api_key.app-manager-kafka-api-key: Still creating... [2m0s elapsed]

I am wondering about the impact of this slow performance when we have to scale Kafka resource provisioning on our end using Terraform. Is your team aware of the performance implications of provisioning via Terraform, and are there any plans to improve the provisioning time?

Thanks.

Validation Bug - invalid value for network.0.id (the network ID must be of the form 'n-') when the actual network ID starts with 'nr-'

Overview
I successfully imported an existing cluster in Azure UK South that has an existing Private Link connection to an Azure subscription, also in UK South.
The error suggests that the validation code for the two resources only accepts network IDs that start with 'n-' and incorrectly rejects network IDs that start with 'nr-'.

Environment
Terraform v1.1.7
on windows_amd64

  • provider registry.terraform.io/cloudflare/cloudflare v3.14.0
  • provider registry.terraform.io/confluentinc/confluent v0.7.1
  • provider registry.terraform.io/hashicorp/azuread v2.22.0
  • provider registry.terraform.io/hashicorp/azurerm v3.4.0
  • provider registry.terraform.io/hashicorp/random v3.1.3

Error
On any future plan or apply I get these messages:

confluent_environment.main: Refreshing state... [id=env-ok***]
confluent_network.main: Refreshing state... [id=nr-4y***]
confluent_kafka_cluster.main: Refreshing state... [id=lkc-qn***]
confluent_private_link_access.main: Refreshing state... [id=pla-ew***]

----------------------SNIP-------------------------------<

│ Error: invalid value for network.0.id (the network ID must be of the form 'n-')

│ with confluent_kafka_cluster.main,
│ on confluent.cloud.tf line 22, in resource "confluent_kafka_cluster" "main":
│ 22: id =confluent_network.main.id

│ Error: invalid value for network.0.id (the network ID must be of the form 'n-')

│ with confluent_private_link_access.main,
│ on confluent.cloud.tf line 53, in resource "confluent_private_link_access" "main":
│ 53: id = confluent_network.main.id

Resources affected

  • confluent_kafka_cluster
  • confluent_private_link_access

Code Snippet

resource "confluent_kafka_cluster" "main" {
  display_name = upper("${var.org_short}-${var.env_short}-${var.loc_short}-${var.service}")
  availability = var.confluent_cloud_cluster_avalability
  cloud        = "AZURE"
  region       = var.confluent_cloud_cluster_region
  dedicated {
    cku = var.confluent_cloud_cluster_cku
  }
  network {
    id = confluent_network.main.id
  }
  environment {
    id = confluent_environment.main.id
  }
}

resource "confluent_network" "main" {
  display_name = "Private Link Network"
  cloud = "AZURE"
  region = var.confluent_cloud_cluster_region
  connection_types = ["PRIVATELINK"]
  environment {
    id = confluent_environment.main.id
  }
}

resource "confluent_private_link_access" "main" {
  display_name = "Azure Private Link Access"
  azure {
    subscription = var.subscription_id 
  }
  environment {
    id = confluent_environment.main.id
  }
  network {
    id = confluent_network.main.id
  }
}

API Key creation timeout on dedicated cluster

Hello there!

We've encountered an issue while trying to create API keys on our new and only dedicated cluster for our production platform.
The error is the following:

Error: error waiting for Kafka API Key "[REDACTED]" to sync: error listing Kafka Topics using Kafka API Key "[REDACTED]": Get "https://[REDACTED].europe-west1.gcp.confluent.cloud:443/kafka/v3/clusters/lkc-[REDACTED]/topics": GET https://[REDACTED].europe-west1.gcp.confluent.cloud:443/kafka/v3/clusters/lkc-[REDACTED]/topics giving up after 5 attempt(s): Get "https://[REDACTED].europe-west1.gcp.confluent.cloud:443/kafka/v3/clusters/lkc-[REDACTED]/topics": net/http: TLS handshake timeout

The key is created and can be seen in the confluent website.
If we run the Terraform script again then, as expected, a new key is created, and the same error appears.

Is it possible to increase the timeout? I didn't find anything in the documentation in this regard.

Thanks in advance!

Failed to query available provider packages

Example:

terraform {
  required_providers {
    confluent = {
      source  = "confluentinc/confluent"
      version = "0.7.0"
    }
  }
}

provider "confluent" {}

log:

terraform init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/confluentcloud...
- Finding confluentinc/confluent versions matching "0.7.0"...
- Installing confluentinc/confluent v0.7.0...
- Installed confluentinc/confluent v0.7.0 (signed by a HashiCorp partner, key ID D4A2B1EDB0EC0C8E)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
╷
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider hashicorp/confluentcloud: provider registry registry.terraform.io does not have a      
│ provider named registry.terraform.io/hashicorp/confluentcloud
│
│ All modules should specify their required_providers so that external consumers will get the correct providers when using a module. To see which       
│ modules are currently depending on hashicorp/confluentcloud, run the following command:
│     terraform providers
╵

Can't import existing api-keys

Hello,

Switching from confluentcloud 0.5.0 went smoothly! Now I want to add support for api key generation.
The problem is: I don't want to recreate existing API keys, as the users of our clusters are actively using them.

The doc doesn't describe how to import the key into our tfstate.

Is there a way to do it? I suppose that's not possible at the moment, as the error message when trying to import a key is quite descriptive:
Error: resource confluent_api_key doesn't support import

Is it possible for you to implement it? Or is it "by-design"?
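
If import support were added, presumably it would follow the usual pattern (hypothetical - the resource rejects this today, and the resource address and ID placeholder are illustrative):

terraform import confluent_api_key.example <API-Key-ID>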

confluent_kafka_cluster http_endpoint not being populated when created with a dedicated block

Hi all,

Looking for some help here,

I'm trying to create a cluster following the provided example dedicated-privatelink-aws-kafka-acls and keep running into an issue with the dedicated cluster type.

It appears that the confluent_kafka_cluster resource does not expose the http_endpoint attribute when created with a dedicated block, and I am unable to find the attribute output elsewhere. This seems to break the above example.

The error in question at plan time:

│ Error: invalid value for http_endpoint (the REST endpoint must start with 'https://')
│ 
│   with confluent_kafka_topic.orders,
│   on confluent.tf line 96, in resource "confluent_kafka_topic" "orders":
│   96:   http_endpoint = confluent_kafka_cluster.cluster.http_endpoint

Tried with version 0.8.1 and 0.8.0 of the provider

In my past experience with the provider using standard or basic config blocks, the http_endpoint attribute was correctly populated, which allowed topic and API key creation using the Terraform attribute.

When outputting the http_endpoint attribute I can see that it is set to an empty string:

resource "confluent_kafka_cluster" "cluster" {
  display_name = "cluster"
  availability = "MULTI_ZONE"
  cloud        = "AWS"
  region       = local.remote_state_config.region

  dedicated {
    cku = 2
  }

  environment {
    id = confluent_environment.<env_name>.id
  }

  network {
    id = confluent_network.<net_name>.id
  }
}

output "endpoint" {
  value = confluent_kafka_cluster.cluster.http_endpoint
}

Terraform output

endpoint = ""

Just checking whether this attribute is being populated in your code when using dedicated clusters.

Thanks in advance

Failed to query available provider packages

I'm trying to use the new provider. I think I don't have access to the repository:
https://github.com/confluentinc/terraform-provider-confluent
Maybe the repository is private?

Example:

terraform {
  required_providers {
   confluent = {
      source  = "confluentinc/confluent"
      version = "0.7.0"
    }
  }
}

Log:

terraform init

Initializing the backend...

Initializing provider plugins...
- Finding confluentinc/confluent versions matching "0.7.0"...
- Finding latest version of hashicorp/confluentcloud...
╷
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider hashicorp/confluentcloud: provider registry
│ registry.terraform.io does not have a provider named registry.terraform.io/hashicorp/confluentcloud
│
│ All modules should specify their required_providers so that external consumers will get the correct providers when using  
│ a module. To see which modules are currently depending on hashicorp/confluentcloud, run the following command:
│     terraform providers
╵

╷
│ Error: Failed to install provider
│
│ Error while installing confluentinc/confluent v0.7.0: could not query provider registry for
│ registry.terraform.io/confluentinc/confluent: failed to retrieve authentication checksums for provider: 404 Not Found     

Addition of a confluent_network data source.

An additional feature I think would make sense is a confluent_network data source to look up pre-existing networks by name and/or ID. We've got some pre-existing networks set up for Private Link and would love to stop hardcoding private_link_endpoint_service by using a data source; a sketch of the proposed usage follows.
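
Something like this (the data source and its attributes are the feature being requested here, not something the provider offers today):

data "confluent_network" "existing" {
  display_name = "our-privatelink-network"
  environment {
    id = var.environment_id
  }
}

output "private_link_endpoint_service" {
  # would replace the value we currently hardcode
  value = data.confluent_network.existing.private_link_endpoint_service
}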

API key creation for dedicated clusters fails if terraform can't access cluster

Scenario: Terraform running from a CI/CD pipeline provisioning API key resources fails with this error:

Error: error waiting for Kafka API Key "[REDACTED]" to sync: error listing Kafka Topics using Kafka API Key "[REDACTED]": Get "https://[REDACTED]/kafka/v3/clusters/[REDACTED]/topics": GET https://[REDACTED]/kafka/v3/clusters/[REDACTED]/topics giving up after 5 attempt(s): Get "https://[REDACTED]/kafka/v3/clusters/[REDACTED]/topics": dial tcp [REDACTED]:443: i/o timeout

Subsequent terraform runs will attempt to re-create the API keys, which will fail again.

The API keys are created and applied to the clusters, but the provider assumes that it has connectivity to the cluster, and this is not necessarily the case - dedicated clusters using VPC peering may not be accessible from the node running the CI pipeline. It should be possible to provision resources via the Confluent Cloud API without also having network access to the resource (cluster). The Terraform provider should be able to provision all resources via the Confluent Cloud public API endpoint without requiring direct access to private Kafka cluster endpoints.

Suggestion: validate API key creation via the Confluent Cloud API instead of against the actual cluster endpoint, OR add an optional confluent_api_key resource attribute disabling this test.

Kafka Connect Provisioning - Can we hide sensitive variables in code?

While provisioning connectors using Terraform, how do we ensure the "sensitive" values are not checked in and are not visible to people viewing the code? Is there a way to inject the values through environment variables?

Referring to the sink documents in your example - I don't see an alternative for hiding the secrets while provisioning connectors (see the sketch after this snippet).

resource "confluent_connector" "sink" {
    .......................
    config_sensitive = {
        "aws.access.key.id"     = "***REDACTED***"
        "aws.secret.access.key" = "***REDACTED***"
    }
 
   config_nonsensitive = {
         ......................
         "s3.bucket.name"       = "***REDACTED***"
          .....................
   }
   .....................
}
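
For what it's worth, standard Terraform mechanics appear to cover this: declare the secrets as sensitive input variables and inject them via TF_VAR_* environment variables, so nothing sensitive is checked in (a sketch; the variable names are illustrative):

variable "aws_access_key_id" {
  type      = string
  sensitive = true
}

variable "aws_secret_access_key" {
  type      = string
  sensitive = true
}

resource "confluent_connector" "sink" {
  # ...environment, kafka_cluster and the rest of the config as in the snippet above

  config_sensitive = {
    "aws.access.key.id"     = var.aws_access_key_id
    "aws.secret.access.key" = var.aws_secret_access_key
  }
}

The pipeline would then export TF_VAR_aws_access_key_id and TF_VAR_aws_secret_access_key (or pass them with -var), keeping the values out of the committed code.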

Error deleting Role Binding: giving up after 5 attempt(s)

I am repeatedly getting an error when trying to update/destroy a plan that involves a Role Binding. There is no problem creating one, but deletion or updates have not been possible today at all.

Below is the error:

│ Error: error deleting Role Binding "rb-*****": Delete "https://api.confluent.cloud/iam/v2/role-bindings/rb-*****": DELETE https://api.confluent.cloud/iam/v2/role-bindings/rb-***** giving up after 5 attempt(s)

All this worked fine until this morning, but since then it has errored every time.

Thanks

New provider confluentinc/confluent not usable

All updated links to the new provider, like https://github.com/confluentinc/terraform-provider-confluent/issues, are returning 404.

Using the new provider in Terraform (as described here) is not working:

Error: Failed to install provider
│
│ Error while installing confluentinc/confluent v0.7.0: could not query provider registry for registry.terraform.io/confluentinc/confluent:    
│ failed to retrieve authentication checksums for provider: 404 Not Found

I think these checksums are fetched from GitHub - so it's the same problem.

I assume the new GitHub repo is still (wrongly) in private access mode?

cc @linouk23

Getting POST errors while running tf scripts

I'm trying to run Terraform scripts for topic provisioning and service accounts with ACLs; they fail with this error:

Error: Post "https://api.confluent.cloud/iam/v2/service-accounts": POST https://api.confluent.cloud/iam/v2/service-accounts giving up after 5 attempt(s)

provisioning connector fails

While provisioning a Postgres Debezium connector using Terraform provider version 0.8.1, the connector fails after provisioning. In Confluent Control Center you see this message:

Unexpected error occurred with connector. Confluent connect team is looking at your failure. We will reach out to you via support if we need more details. Please check back here for an update.
And in the connector logs you see this entry:

{
  "datacontenttype": "application/json",
  "data": {
    "level": "ERROR",
    "context": {
      "connectorId": "lcc-yo6mvk"
    },
    "summary": {
      "connectorErrorSummary": {
        "message": "Non tolerated exception in error handler",
        "rootCause": "Failed to access Avro data from topic name.of.topic : Unauthorized; error code: 401"
      }
    }
  },
  "subject": "lcc-yo6mvk-lcc-yo6mvk-0",
  "specversion": "1.0",
  "id": "23710d52-5689-4599-9002-e2168e0f7231",
  "source": "crn://confluent.cloud/connector=lcc-yo6mvk",
  "time": "2022-05-20T11:04:39.677Z",
  "type": "io.confluent.logevents.connect.TaskFailed"
}

After some time the connector goes from the failed to the running state, but because of the first failure Terraform aborts and the state file doesn't get updated.

Display detailed error when getting 400 for confluent_role_binding

What

Sample configuration

terraform {
  required_providers {
    confluent = {
      source  = "confluentinc/confluent"
      version = "0.7.0"
    }
  }
}

provider "confluent" {
  api_key    = var.confluent_cloud_api_key
  api_secret = var.confluent_cloud_api_secret
}

resource "confluent_environment" "staging" {
  display_name = "Staging"
}

resource "confluent_service_account" "app-manager" {
  display_name = "app-manager"
  description  = "Service account to manage 'inventory' Kafka cluster"
}

resource "confluent_role_binding" "environment-example-rb" {
  principal   = "User:${confluent_service_account.app-manager.id}"
  role_name   = "MetricsViewer"
  crn_pattern = confluent_environment.staging.resource_name
}

During apply one can see:

Error: error creating Role Binding: 400 Bad Request

  on main.tf line 24, in resource "confluent_role_binding" "environment-example-rb":
  24: resource "confluent_role_binding" "environment-example-rb" {

which is not very descriptive.

When doing the same request via the API we can see a more descriptive error though

{"error_code":40002,"message":"Role MetricsViewer must be bound at scope organization, but was bound at environment"

confluent_api_key : how to retrieve the key itself

Via confluent_api_key.secret it is possible to access the key's secret, but how do we access the key itself (the name, e.g. "AB6CD2EFGHIJK1LM")? confluent_api_key.key, as suggested in other tickets, doesn't work. Please also add this to the documentation.

BTW: it would also be a good idea to mention in the documentation that the secret can be read out and stored, e.g. in a key vault, if human access is required.
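
A sketch that may cover this, assuming the key string is what the provider exposes as the resource ID (an assumption worth verifying):

output "kafka_api_key" {
  # assumption: the resource ID is the API key string itself
  value = confluent_api_key.example.id
}

output "kafka_api_secret" {
  value     = confluent_api_key.example.secret
  sensitive = true
}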

Support for managed connectors

Hello,
Do you folks have anything on the roadmap to manage connectors in TF?
It would be super nice, for example, to manage an S3 sink for backup purposes and keep the connector config updated dynamically with all the topics!

Thank you,
Andrei

Unsure why topic and ACL resources have a required credentials attribute

When declaring a topic, one is forced to specify the credentials attribute.

This is also the case for ACL resources.

Why would a user configuring these resources need to provide this information? There is nothing in the "domain abstraction" for a topic or ACL (either in Apache Kafka or Confluent Cloud Kafka) that requires a reference to credentials when working with topics (and ACLs).

No other experience for creating or configuring topics and ACLs involves such credentials; i.e., the behaviour of topics and ACLs is independent of the form of authentication or the use of credentials by whatever mechanism is used to manage them. Furthermore, working with topics (and ACLs) is possible using many different forms of authentication and access control.

If a credentials attribute is introduced for these resources, what kinds of changes will be reported by Terraform as configuration drift?

connector hangs in provisioning state

A Debezium Postgres connector created using Terraform provider 0.8.1 hangs in the provisioning state and never moves to success. This isn't necessarily a provider issue, but it's hard to debug since there is no error in the connector log and it happens only when using Terraform.

Question: What is the best practice for creating "n" topics (n >= 100) in Terraform?

This is more of a request for input/guidance/best practices on creating a large number of topics in a given environment.

Currently, I am using one file per topic to declare the respective resource blocks:

  1. Topic Name
  2. Producers and Producer ACL's
  3. Consumers and Consumer ACL's.

Most of the code is very similar and repetitive and follows the same pattern except for the names.

What is the suggested best practice for creating topics and their respective producers and consumers?

  1. Declare each of them separately
  2. Use something like a module and inject the names for each topic.
    • If this approach is possible, would it be okay to create the resources with for_each over an input that contains the list of topics and the respective producer and consumer names? (See the sketch below.)
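
A sketch of option 2, assuming a map keyed by topic name (all names are illustrative, the schema mirrors the 0.7.x-era examples elsewhere in this tracker, and a consumer READ ACL would mirror the producer one):

variable "topics" {
  type = map(object({
    partitions_count = number
    producer         = string # e.g. "User:sa-xxxxxx"
    consumer         = string
  }))
}

resource "confluent_kafka_topic" "this" {
  for_each = var.topics

  kafka_cluster {
    id = var.cluster_id
  }
  topic_name       = each.key
  partitions_count = each.value.partitions_count
  http_endpoint    = var.http_endpoint

  credentials {
    key    = var.api_key
    secret = var.api_secret
  }
}

resource "confluent_kafka_acl" "producer-write" {
  for_each = var.topics

  kafka_cluster {
    id = var.cluster_id
  }
  resource_type = "TOPIC"
  resource_name = each.key
  pattern_type  = "LITERAL"
  principal     = each.value.producer
  host          = "*"
  operation     = "WRITE"
  permission    = "ALLOW"
  http_endpoint = var.http_endpoint

  credentials {
    key    = var.api_key
    secret = var.api_secret
  }
}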

Thanks

Problem with confluent_kafka_topic reapplying

Hello,
we've encountered the following problem:
I create several confluent_kafka_topic resources with the following config:

resource "confluent_kafka_topic" "topic" {
  for_each = {for topic in var.topics:  topic.name => topic}
  kafka_cluster {
    id = var.cluster_id
  }
  topic_name         = each.value.name
  partitions_count   = each.value.partitions_count
  http_endpoint      = var.http_endpoint

  config = each.value.config

  credentials {
    key    = var.api_key
    secret = var.api_secret
  }
}

An example of my topics var list is:

topics = [
    {
      name = "topic1",
      partitions_count = 6,
      config = {
        cleanup_policy    = "delete",
        max_message_bytes = "604800000",
        retention_ms      = "2097164"
        delete_retention_ms = "86400000",
        max_compaction_lag_ms = "9223372036854775807",
        message_timestamp_difference_max_ms = "9223372036854775807",
        message_timestamp_type= "CreateTime",
        min_compaction_lag_ms = "0",
        min_insync_replicas = "2",
        retention_bytes = "-1",
        segment_bytes = "104857600",
        segment_ms = "604800000"
        }
    },
    {
      name = "topci2",
      partitions_count = 6,
      config = {
        cleanup_policy    = "delete",
        max_message_bytes = "604800000",
        retention_ms      = "2097164"
        delete_retention_ms = "86400000",
        max_compaction_lag_ms = "9223372036854775807",
        message_timestamp_difference_max_ms = "9223372036854775807",
        message_timestamp_type= "CreateTime",
        min_compaction_lag_ms = "0",
        min_insync_replicas = "2",
        retention_bytes = "-1",
        segment_bytes = "104857600",
        segment_ms = "604800000"
        }
    },
]

After applying the module, when I run terraform plan it says that all my resources will be updated in place, because the config attribute value of confluent_kafka_topic has changed, even though I didn't change the configuration. This is a problem for me because my whole infrastructure is reapplied with Terragrunt after every change. Is there any solution to this problem, so that Terraform can see that the topic configuration values are the same?

Provider doesn't detect network deletion

If you've got a confluent_network resource and then delete it outside of Terraform (e.g. delete it in the UI), the provider will happily continue to believe it exists, including trying to create resources like confluent_private_link_access attached to it (which obviously fails).

Undefined response type creating connector

I'm using v0.10.0 of the Confluent TF provider. I'm running into issues when creating a connector:

 Error: error creating Connector "assets_outbox_source_connector": error sending validation request: undefined response type
   with module.confluent_config.confluent_connector.outbox_source_connector["true"],
   on ../../modules/confluent_config/connector.tf line 13, in resource "confluent_connector" "outbox_source_connector":
   13: resource "confluent_connector" "outbox_source_connector" { 

My connector code is the following:

resource "confluent_connector" "outbox_source_connector" {
  for_each = local.all_outbox_params_present ? toset(["true"]) : toset([])

  environment {
    id = data.confluent_environment.env.id
  }

  kafka_cluster {
    id = data.confluent_kafka_cluster.cluster.id
  }

  config_sensitive = {
    "database.password" = var.outbox_db_password
  }

  config_nonsensitive = {
    "name"            = "assets_outbox_source_connector"
    "connector.class" = "io.debezium.connector.postgresql.PostgresConnector"

    "kafka.auth.mode"          = "SERVICE_ACCOUNT"
    "kafka.service.account.id" = resource.confluent_service_account.sa.id

    "plugin.name"          = "pgoutput"
    "database.port"        = "${var.outbox_db_port}"
    "database.user"        = var.outbox_db_username
    "database.hostname"    = var.outbox_db_host
    "database.dbname"      = var.outbox_db_database
    "database.server.name" = var.outbox_db_server_name
    "database.sslmode"     = "require"
    "table.includelist"    = var.outbox_db_table
    "tombstones.on.delete" = "false"
    "output.key.format"    = "AVRO"
    "binary.handling.mode" = "bytes"
    "output.data.format"   = "AVRO"

    #transforms = "onlyCreate,useAfterState,setKey,extractKeyValue,setTopic"

    #"transforms.onlyCreate.type"             = "io.confluent.connect.transforms.Filter$Value"
    #"transforms.onlyCreate.filter.condition" = "[?($.op=='c' || $.op=='r')]"
    #"transforms.onlyCreate.filter.type"      = "include"

    #"transforms.useAfterState.type"  = "org.apache.kafka.connect.transforms.ExtractField$Value"
    #"transforms.useAfterState.field" = "after"

    #"transforms.setKey.type"   = "org.apache.kafka.connect.transforms.ValueToKey"
    #"transforms.setKey.fields" = "key"

    #"transforms.extractKeyValue.type"  = "org.apache.kafka.connect.transforms.ExtractField$Key"
    #"transforms.extractKeyValue.field" = "key"

    #"transforms.setTopic.type"  = "io.confluent.connect.transforms.ExtractTopic$Value"
    #"transforms.setTopic.field" = "outboxTopic"
  }

  depends_on = [
    resource.confluent_kafka_acl.assets_topics["cluster_eh_allow_describe"],
    resource.confluent_kafka_acl.assets_topics["cluster_eh_allow_describe_configs"],
    resource.confluent_kafka_acl.assets_topics["topic_outbox_connector_topic_create"],
    resource.confluent_kafka_acl.assets_topics["topic_outbox_connector_topic_write"],
    resource.confluent_kafka_acl.assets_topics["topic_assets_log_allow_describe"],
    resource.confluent_kafka_acl.assets_topics["topic_assets_log_allow_read"],
    resource.confluent_kafka_acl.assets_topics["topic_assets_log_allow_write"],
    resource.confluent_kafka_acl.assets_topics["group_assets_consumer_groups_allow_read"]
  ]
}

I've confirmed the ACLs look good, all the DB settings are set up, and I've created a very similar connector successfully through the UI, basically just with a different name. I've removed the transforms here to test whether those were the issue, but that doesn't appear to be the case.

Is there a way to get better feedback as to what the issue is?

How to: Substitute user ID for a given user email

To assign RBAC roles (DeveloperRead, as an example), is there a way to automate the process in Terraform of extracting the user ID for a given user email address?

Currently, the CLI command I use to find the user ID for an email is as follows:

confluent iam user list \                                                                        
    -o json \
| jq '.[] | select(.email == "[email protected]") | .id'

It would be nice to have some block in Terraform that took the email address and could output the user ID to be used in the RBAC block.
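
Something along these lines (a hedged sketch, assuming a confluent_user data source with email lookup, which newer provider versions appear to offer; the email and crn_pattern are illustrative):

data "confluent_user" "by_email" {
  email = "user@example.com"
}

resource "confluent_role_binding" "developer-read" {
  principal   = "User:${data.confluent_user.by_email.id}"
  role_name   = "DeveloperRead"
  crn_pattern = "${var.kafka_cluster_rbac_crn}/kafka=${var.kafka_cluster_id}/group=test*"
}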

Thanks

Retrieve resource_name for an org

Hi guys,

from the current documentation of the confluent_role_binding resource it is not clear how a role can be bound at the organization level.

I was expecting something like this:

resource "confluent_role_binding" "some-role-binding" {
  principal   = "User:${confluent_service_account.some-service-account.id}"
  role_name   = "MetricsViewer"
}

But crn_pattern is a required attribute.

It would be great if you could add an example to the documentation which shows how to use confluent_role_binding for such a scenario.
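
For the organization-level case, a hedged sketch, assuming a confluent_organization data source exposing resource_name (newer provider versions appear to have one):

data "confluent_organization" "this" {}

resource "confluent_role_binding" "some-role-binding" {
  principal   = "User:${confluent_service_account.some-service-account.id}"
  role_name   = "MetricsViewer"
  crn_pattern = data.confluent_organization.this.resource_name
}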

Btw. as a mitigation I tried to bind the role on the environment level instead, but this gives me an Error: error creating Role Binding: 400 Bad Request error:

resource "confluent_role_binding" "some-role-binding" {
  principal   = "User:${confluent_service_account.some-service-account.id}"
  role_name   = "MetricsViewer"
  crn_pattern = confluent_environment.some-env.resource_name
}

Is this supposed to work?

Greetings
Valentin

confluent_kafka_cluster Data Source is missing attribute 'http_endpoint'

Hello,

we've encountered the following problem:

The resource confluent_kafka_cluster exposes an attribute http_endpoint.

The corresponding data source confluent_kafka_cluster does not provide this attribute.

When using this attribute (it is defined in the schema and Terraform validation succeeds), it appears to be empty, as the following error shows during terraform apply:
Error: Post "/kafka/v3/clusters/redacted/topics": POST /kafka/v3/clusters/redacted/topics giving up after 1 attempt(s): Post "/kafka/v3/clusters/redacted/topics": unsupported protocol scheme ""

I think this is a bug?

Upgrading to v0.8.0 or 0.8.1 fails from v0.7.0 (when referencing modules)

I get the below error when trying to upgrade my current version v0.7.0 to v0.8.0 or v0.8.1:

$ terraform init -upgrade --input=false -backend-config=./backend.sandbox.hcl

│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider confluentinc/confluent: no available releases match the given constraints
│ 0.7.0, 0.8.0

The same goes for v0.7.1 as well:

│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider confluentinc/confluent: no available releases match the given constraints
│ 0.7.0, 0.7.1
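
A likely cause (an assumption, going by the "given constraints 0.7.0, 0.8.0" wording): one of the referenced modules still pins an exact 0.7.0 version, so the combined constraint is unsatisfiable. Declaring a range in every module's required_providers avoids the conflict:

terraform {
  required_providers {
    confluent = {
      source  = "confluentinc/confluent"
      version = ">= 0.7.0, < 1.0.0"
    }
  }
}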

Thanks

Add confluent_stream_governance_cluster resource

Hi @linouk23, hope you're doing well!

Could you please let me know how we can manage Schema Registry in Confluent Cloud using Terraform? I don't think this provider supports that right now. What are your thoughts on this? Or is there another way to do it?

RBAC provisioning errors: 403 error

I am trying to grant DeveloperRead access to users. In my project setup I have

.
├── developer-read.tf
├── main.tf
├── sa-cloudclusteradmin.tf
└── variables.tf

The sa-cloudclusteradmin file creates a new service account with "CloudClusterAdmin" privileges.

Error-1

When I run the tf apply command for the first time, I get the below error when assigning DeveloperRead. I have the confluent_cloud_api_key/secret defined in the main.tf file. The error below - is it because the operation needs an account with "CloudClusterAdmin" privileges? If yes, how do I use those credentials in the "confluent_role_binding" block?

error creating Role Binding: 403 Forbidden: Forbidden Access

However, the same works when I use the confluent CLI "iam rbac role-binding create" command.

Error-2

Also, when I run the tf apply command a second time, I get an additional error for the CloudClusterAdmin account. Should it not skip creating the service account if it already exists?

error creating Service Account "app-manager-rbac-sa-non-prod": 409 Conflict: Service name is already in use.

I tried the below to pass the CloudClusterAdmin keys while provisioning... but it did not work:

resource "confluent_role_binding" "first-last-topic-rb" {
  principal   = "User:${var.user_first_last}"
  role_name   = "DeveloperRead"
  crn_pattern = "${var.kafka_cluster_rbac_crn}/kafka=${var.kafka_cluster_id}/group=test*"

  credentials {
    key    = confluent_api_key.app-manager-rbac-kafka-api-key.id
    secret = confluent_api_key.app-manager-rbac-kafka-api-key.secret
  }
}

---
Error: Unsupported block type

Thanks.

confluent_service_account : can't deal with manually deleted service accounts

If you happen to delete a service account via the CLI which was created via Terraform, the provider is unable to deal with the fact that the service account is still in the TF state but no longer on Confluent Cloud itself. The only way out at the moment is to delete the TF state and drop the affected environment (incl. cluster and topics). This is a major problem for in-production systems.
Other TF providers (like azurerm) can update the TF state to reflect externally deleted resources - as can confluent_kafka_topic.

I'm not sure about externally modified/deleted ACLs (confluent_kafka_acl), API keys (confluent_api_key) or role bindings (confluent_role_binding), and after cleaning up a dead-lock just now I'd rather not test it. But please also verify whether these resources are able to cope with external modifications.
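
As an aside, a less destructive workaround than dropping the whole state should be removing just the orphaned entry with terraform state rm confluent_service_account.<name> (standard Terraform, though untested against this particular dead-lock).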

409 Error - when deleting and creating service account with same name in the plan

I've run into a scenario where a service account is deleted and re-created with the same name in the same plan run, and I see a 409 error.

Below is what happens when I change a resource instance name:

  1. Terraform detects it needs to destroy the existing service account associated with the renamed resource instance, and
  2. create a new service account with the same name.

In the logs below, you can see it has "destroyed" the service account but then immediately fails to create a new service account with the same name:

module.prod-cloudclusteradmin.confluent_api_key.sa-account-kafka-api-key: Destroying... [id=***-*****]
module.prod-cloudclusteradmin.confluent_service_account.cluster: Creating...
module.prod-cloudclusteradmin.confluent_api_key.sa-account-kafka-api-key: Destruction complete after 1s
module.prod-cloudclusteradmin.confluent_role_binding.sa-account-kafka-cluster-admin: Destroying... [id=***-*****]
module.prod-cloudclusteradmin.confluent_role_binding.sa-account-kafka-cluster-admin: Destruction complete after 1s
module.prod-cloudclusteradmin.confluent_service_account.sa-account: Destroying... [id=***-*****]
module.prod-cloudclusteradmin.confluent_service_account.sa-account: Destruction complete after 0s
╷
│ Error: error creating Service Account "prod-***-*****-cloudclusteradmin-sa": 409 Conflict: Service name is already in use.
│
│   with module.prod-cloudclusteradmin.confluent_service_account.cluster,
│   on ../../../../modules/sa-rbac/sa-rbac.tf line 19, in resource "confluent_service_account" "cluster":
│   19: resource "confluent_service_account" "cluster" {

I think what may help is to add some kind of wait for "x" seconds between the steps, as is done for key creation in other scenarios.

Because of the above, I am not able to apply the changes successfully the next time; when I try to run the same plan/apply command I get a 401 error (which I think is an incorrect error indicator). The state file is also not in a steady state at this point.

Error: 401 Unauthorized: Authentication failed

Thanks.

Deleting a provisioned connector does not delete the "dlq-lcc-[id]" associated with it

After provisioning and then deleting a connector, the "dlq-lcc-[id]" topic associated with the connector does not get deleted. The DLQ is created as part of the connector provisioning process.

The DLQ has to be deleted manually or via a cleanup process. At that point, we no longer have the ID of the deleted connector to automate the cleanup either.

Thanks.

role-binding-id: unable to find in Confluent Cloud

Hi,
We have configured resources manually through the Confluent Cloud console so far. Now we would like to use Terraform. In order to import the existing role bindings into the TF state, the command line requires role-binding IDs in the format "rb-abc123". I could not find the existing role-binding IDs in the Confluent Cloud console or through the CLI. Could you please clarify?
Thanks,
Hari

Error deleting Role Binding on destroy

I get an error when running "terraform destroy" on a standard cluster with RBAC.

  • Cluster Type: Standard
  • Availability: SINGLE_ZONE
  • Cloud: GCP
  • Region: europe-west3

Error: error deleting Role Binding "rb-2PgRL": 403 Forbidden: Unable to get role binding for id: rb-2PgRL

Every resource is created as expected with "terraform apply", but I get that error when executing "terraform destroy"; after that, if I execute "terraform destroy" again, it works fine and every resource is removed.

The role binding with the issue seems to be related to the group: group=confluent_cli_consumer_*

The github repo: https://github.com/mcolomerc/ccloud-demo-tf

Cannot create resource_api_key_v2 for clusters with private networking

When creating a cluster API key, the provider attempts to connect to the Kafka cluster to verify that the API key has successfully synced to the cluster before proceeding. For clusters that can be accessed from the location where Terraform is executing, this works -- however, when creating a cluster without public connectivity, it can fail (i.e. if the path to the cluster traverses private IP addresses).

One possible remediation here is to add an input to the api_key_v2 resource, e.g. wait_for_api_key_sync_to_cluster, defaulting to true, that governs whether this connection and check of the cluster state happens; a sketch follows.
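
A sketch of the proposed attribute (the flag is hypothetical and not currently in the provider; the surrounding schema follows the documented confluent_api_key examples):

resource "confluent_api_key" "cluster" {
  display_name = "ci-cluster-key"

  owner {
    id          = confluent_service_account.ci.id
    api_version = confluent_service_account.ci.api_version
    kind        = confluent_service_account.ci.kind
  }

  managed_resource {
    id          = confluent_kafka_cluster.dedicated.id
    api_version = confluent_kafka_cluster.dedicated.api_version
    kind        = confluent_kafka_cluster.dedicated.kind

    environment {
      id = confluent_environment.prod.id
    }
  }

  # hypothetical flag from this issue; would default to true
  wait_for_api_key_sync_to_cluster = false
}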

ksqlDB integration

Hi all,

is it planned to incorporate the provisioning of ksqlDB in the near future?

Thanks

Kind regards

resources incorrectly expecting http_endpoint in confluent_kafka_cluster resource

The cluster's REST http_endpoint is incorrectly expected as a required variable by several resources, which hurts usability. Examples: confluent_api_key, confluent_kafka_acl.

╷
│ Error: error fetching Kafka Cluster "abc-cluster"'s "http_endpoint" attribute: http_endpoint is nil or empty for Kafka Cluster "abc-cluster"
│    8: resource "confluent_api_key" "cluster-api-key" {
│ 
╵

From TF Docs: (screenshot omitted)

From API Docs: (screenshot omitted)
