This might not be a real issue but rather my own misconfiguration; however, I can reproduce it with the default configs.
postgresql.properties:

name=demo-postgresql
tasks.max=1
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.url=jdbc:postgresql://172.17.0.1:5433/sc_orders?user=postgres&password=system
table.whitelist=bpo_customer
mode=incrementing
incrementing.column.name=updated_at
timestamp.column.name=updated_at
topic.prefix=test_jdbc_distr_
connect-json-standalone.properties:

bootstrap.servers=localhost:9092
# The converters specify the format of data in Kafka and how to translate it into Connect data.
# Every Connect user will need to configure these based on the format they want their data in
# when loaded from or stored into Kafka
key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
# The internal converter used for offsets and config data is configurable and must be specified,
# but most users will always want to use the built-in default. Offset and config data is never
# visible outside of Connect in this format.
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
# Local storage file for offset data
offset.storage.file.filename=/tmp/connect.offsets
bin/connect-standalone ~/kafka/connect-json-standalone.properties ~/kafka/postgresql.properties
{"schema":{"type":"struct","fields":[{"type":"string","optional":true,"field":"customer_id"},{"type":"string","optional":true,"field":"bpo_id"},{"type":"int64","optional":false,"name":"org.apache.kafka.connect.data.Timestamp","version":1,"field":"updated_at"},{"type":"int32","optional":false,"field":"id"}],"optional":false,"name":"bpo_customer"},"payload":{"customer_id":"14","bpo_id":"14","updated_at":1469011811729,"id":17}}
{"schema":{"type":"struct","fields":[{"type":"string","optional":true,"field":"customer_id"},{"type":"string","optional":true,"field":"bpo_id"},{"type":"int64","optional":false,"name":"org.apache.kafka.connect.data.Timestamp","version":1,"field":"updated_at"},{"type":"int32","optional":false,"field":"id"}],"optional":false,"name":"bpo_customer"},"payload":{"customer_id":"14","bpo_id":"14","updated_at":1469011811729,"id":17}}
{"schema":{"type":"struct","fields":[{"type":"string","optional":true,"field":"customer_id"},{"type":"string","optional":true,"field":"bpo_id"},{"type":"int64","optional":false,"name":"org.apache.kafka.connect.data.Timestamp","version":1,"field":"updated_at"},{"type":"int32","optional":false,"field":"id"}],"optional":false,"name":"bpo_customer"},"payload":{"customer_id":"14","bpo_id":"14","updated_at":1469011811729,"id":17}}
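For reference, the updated_at value in the payload is epoch milliseconds (the org.apache.kafka.connect.data.Timestamp logical type, version 1). A quick sketch to decode the value from the records above:

```python
from datetime import datetime, timedelta, timezone

# updated_at from the payload above, encoded as milliseconds since the Unix epoch
updated_at_ms = 1469011811729

# Add the millisecond offset to the epoch to avoid float rounding
dt = datetime(1970, 1, 1, tzinfo=timezone.utc) + timedelta(milliseconds=updated_at_ms)
print(dt)  # → 2016-07-20 10:50:11.729000+00:00
```

All three duplicated records carry the same updated_at, which is consistent with the same source row being re-delivered.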
The latest record keeps being produced into the topic indefinitely. With mode=incrementing it works as expected: no duplicates.
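For comparison, here is a sketch of the same connector using the timestamp+incrementing mode that the Confluent JDBC source documentation describes as the most robust option, since it tracks both an update timestamp and a strictly incrementing id. This assumes the table's id column is strictly incrementing; the column names are taken from the config above.

```properties
# Hypothetical variant for comparison, not the original failing config:
# combine a timestamp column with a strictly incrementing id column.
name=demo-postgresql
tasks.max=1
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.url=jdbc:postgresql://172.17.0.1:5433/sc_orders?user=postgres&password=system
table.whitelist=bpo_customer
mode=timestamp+incrementing
timestamp.column.name=updated_at
incrementing.column.name=id
topic.prefix=test_jdbc_distr_
```

Note that in the config above, incrementing.column.name points at updated_at, a timestamp column, which may be related to the behavior observed here.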