Comments (8)
@andros3 that sounds a bit unexpected; let me try to reproduce it (thanks for providing an example) and get back to you.
from terraform-provider-confluent.
Managed to reproduce it for:

```hcl
resource "confluent_kafka_topic" "orders_1" {
  kafka_cluster {
    id = confluent_kafka_cluster.basic.id
  }
  topic_name       = "orders_1"
  http_endpoint    = confluent_kafka_cluster.basic.http_endpoint
  credentials {
    key    = confluent_api_key.app-manager-kafka-api-key.id
    secret = confluent_api_key.app-manager-kafka-api-key.secret
  }
  partitions_count = 6
  config = {
    "cleanup_policy"                      = "delete"
    "max_message_bytes"                   = "604800000"
    "retention_ms"                        = "2097164"
    "delete_retention_ms"                 = "86400000"
    "max_compaction_lag_ms"               = "9223372036854775807"
    "message_timestamp_difference_max_ms" = "9223372036854775807"
    "message_timestamp_type"              = "CreateTime"
    "min_compaction_lag_ms"               = "0"
    "min_insync_replicas"                 = "2"
    "retention_bytes"                     = "-1"
    "segment_bytes"                       = "104857600"
    "segment_ms"                          = "604800000"
  }
}
```
Btw, that's probably not important, but make sure to use quotes for keys under the `config` map, i.e., instead of

```hcl
config = {
  cleanup_policy = "delete",
```

use

```hcl
config = {
  "cleanup_policy" = "delete",
```

to match our docs.
The good news is that the config is still getting applied, at least 😁. When I looked at the TF state, though, it saved `config = {}`, which is a bit surprising; looking further.
As a very short-term and hacky fix, you could add `lifecycle { ignore_changes = [config] }` so that your `terraform plan` will stop displaying any changes:
```hcl
resource "confluent_kafka_topic" "orders_1" {
  kafka_cluster {
    id = confluent_kafka_cluster.basic.id
  }
  topic_name       = "orders_1"
  http_endpoint    = confluent_kafka_cluster.basic.http_endpoint
  credentials {
    key    = confluent_api_key.app-manager-kafka-api-key.id
    secret = confluent_api_key.app-manager-kafka-api-key.secret
  }
  partitions_count = 6
  config = {
    "cleanup_policy"                      = "delete"
    "max_message_bytes"                   = "604800000"
    "retention_ms"                        = "2097164"
    "delete_retention_ms"                 = "86400000"
    "max_compaction_lag_ms"               = "9223372036854775807"
    "message_timestamp_difference_max_ms" = "9223372036854775807"
    "message_timestamp_type"              = "CreateTime"
    "min_compaction_lag_ms"               = "0"
    "min_insync_replicas"                 = "2"
    "retention_bytes"                     = "-1"
    "segment_bytes"                       = "104857600"
    "segment_ms"                          = "604800000"
  }
  lifecycle {
    ignore_changes = [config]
  }
}
```
That is the solution to my problem. Thank you a lot!!
@andros3 could you keep the issue open for additional visibility? It's definitely a bug in our TF Provider that we need to fix, since no one should have to use `lifecycle { ignore_changes = [config] }` 😕
Of course, sorry for closing it 😁
@andros3 I investigated the issue, and the problem was with the config names: we should have used `segment.ms` instead of `segment_ms`, and so on. E.g., compare creating `orders_1` with `"retention_ms" = "2097164"` against creating `orders_2` with `"retention.ms" = "2097164"` (note the `.` instead of `_`); with the dotted keys, the config is saved correctly, and `terraform plan` works without `lifecycle { ignore_changes = [config] }` too. So it sounds like we might need to add client-side input validation to avoid this kind of confusing error.

Let me know if ⬆️ helps.
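The comparison above can be sketched as follows (a minimal illustration, not the full config: only the key style differs between the two resources, and the cluster/credentials references are assumed to match the earlier examples):

```hcl
# orders_1: underscore-style key is not a real Kafka topic config name,
# so the provider ends up storing config = {} in state (perpetual diff).
resource "confluent_kafka_topic" "orders_1" {
  kafka_cluster {
    id = confluent_kafka_cluster.basic.id
  }
  topic_name       = "orders_1"
  partitions_count = 6
  http_endpoint    = confluent_kafka_cluster.basic.http_endpoint
  config = {
    "retention_ms" = "2097164" # wrong: underscore
  }
  credentials {
    key    = confluent_api_key.app-manager-kafka-api-key.id
    secret = confluent_api_key.app-manager-kafka-api-key.secret
  }
}

# orders_2: dot-style key matches the actual Kafka config name,
# so state and plan stay clean.
resource "confluent_kafka_topic" "orders_2" {
  kafka_cluster {
    id = confluent_kafka_cluster.basic.id
  }
  topic_name       = "orders_2"
  partitions_count = 6
  http_endpoint    = confluent_kafka_cluster.basic.http_endpoint
  config = {
    "retention.ms" = "2097164" # correct: dot
  }
  credentials {
    key    = confluent_api_key.app-manager-kafka-api-key.id
    secret = confluent_api_key.app-manager-kafka-api-key.secret
  }
}
```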
As a follow-up, I just ran the updated config (`cleanup_policy` -> `cleanup.policy`, etc.):
```hcl
locals {
  topics = [
    {
      name             = "topic1"
      partitions_count = 6
      config = {
        "cleanup.policy"                      = "delete"
        "retention.ms"                        = "2097164"
        "delete.retention.ms"                 = "86400000"
        "max.compaction.lag.ms"               = "9223372036854775807"
        "message.timestamp.difference.max.ms" = "9223372036854775807"
        "message.timestamp.type"              = "CreateTime"
        "min.compaction.lag.ms"               = "0"
        "min.insync.replicas"                 = "2"
        "retention.bytes"                     = "-1"
        "segment.bytes"                       = "104857600"
        "segment.ms"                          = "604800000"
      }
    },
    {
      name             = "topic2"
      partitions_count = 6
      config = {
        "cleanup.policy"                      = "delete"
        "retention.ms"                        = "2097164"
        "delete.retention.ms"                 = "86400000"
        "max.compaction.lag.ms"               = "9223372036854775807"
        "message.timestamp.difference.max.ms" = "9223372036854775807"
        "message.timestamp.type"              = "CreateTime"
        "min.compaction.lag.ms"               = "0"
        "min.insync.replicas"                 = "2"
        "retention.bytes"                     = "-1"
        "segment.bytes"                       = "104857600"
        "segment.ms"                          = "604800000"
      }
    },
  ]
}

resource "confluent_kafka_topic" "topic" {
  for_each = { for topic in local.topics : topic.name => topic }

  kafka_cluster {
    id = confluent_kafka_cluster.basic.id
  }
  topic_name       = each.value.name
  partitions_count = each.value.partitions_count
  http_endpoint    = confluent_kafka_cluster.basic.http_endpoint
  config           = each.value.config
  credentials {
    key    = confluent_api_key.app-manager-kafka-api-key.id
    secret = confluent_api_key.app-manager-kafka-api-key.secret
  }
}
```
and the `terraform plan` run afterwards displayed:

```
No changes. Infrastructure is up-to-date.

This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.
```
Note: I had to delete one topic config entry (`max.message.bytes`) because of:

```
Error: error creating Kafka Topic: 400 Bad Request: Config property 'max.message.bytes' with value '604800000' exceeded max limit of 8388608.
```
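If that property is still wanted, a value within the limit reported by the API should work; e.g. (8388608 is taken from the error message above, not an official recommendation):

```hcl
config = {
  # ...other entries as above...
  "max.message.bytes" = "8388608" # must not exceed the cluster's reported max limit
}
```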