kafka-connect-cosmosdb's Introduction

Kafka Connect for Azure Cosmos DB (SQL API)

Introduction

File any issues / feature requests / questions etc. you may have in the Issues for this repo.

This project provides connectors for Kafka Connect to read data from and write data to Azure Cosmos DB (SQL API).

The connectors in this repository are specifically for the Cosmos DB SQL API. If you are using Cosmos DB with other APIs then there is likely a specific connector for that API, but it's not this one.

Exactly-Once Support

  • Source Connector
    • For the time being, this connector supports at-least-once delivery with multiple tasks and exactly-once delivery when run as a single task.
  • Sink Connector
    • The sink connector fully supports exactly-once semantics.

Supported Data Formats

The sink & source connectors are configurable in order to support:

  • JSON (Plain): JSON record structure without any attached schema.
  • JSON with Schema: JSON record structure with explicit schema information to ensure the data matches the expected format.
  • AVRO: A row-oriented remote procedure call and data serialization framework developed within Apache's Hadoop project. It uses JSON for defining data types and protocols, and serializes data in a compact binary format.

Since key and value settings, including the format and serialization, can be independently configured in Kafka, it is possible to work with different data formats for records' keys and values respectively.

To cater for this, converter configuration is provided for both key.converter and value.converter.
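
For example, assuming a topic whose record keys are plain strings and whose values are schemaless JSON, a worker could combine the standard StringConverter and JsonConverter. This is only an illustrative sketch, not a configuration shipped with this connector:

    key.converter=org.apache.kafka.connect.storage.StringConverter
    value.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter.schemas.enable=false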

Converter Configuration Examples

JSON (Plain)

  • If you need to use JSON without Schema Registry for Connect data, you can use the JsonConverter supported with Kafka. The example below shows the JsonConverter key and value properties that are added to the configuration:

    key.converter=org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable=false
    value.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter.schemas.enable=false
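
  With this configuration, the value written to Kafka is just the data itself. A record value might look like the following (illustrative payload, using the same fields as the schema example below):

    {
      "userid": 123,
      "name": "user's name"
    }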

JSON with Schema

  • When the properties key.converter.schemas.enable and value.converter.schemas.enable are set to true, the key or value is not treated as plain JSON, but rather as a composite JSON object containing both an internal schema and the data.

    key.converter=org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable=true
    value.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter.schemas.enable=true
  • The resulting message to Kafka would look like the example below, with schema and payload top-level elements in the JSON:

    {
      "schema": {
        "type": "struct",
        "fields": [
          {
            "type": "int32",
            "optional": false,
            "field": "userid"
          },
          {
            "type": "string",
            "optional": false,
            "field": "name"
          }
        ],
        "optional": false,
        "name": "ksql.users"
      },
      "payload": {
        "userid": 123,
        "name": "user's name"
      }
    }

NOTE: The message written is made up of the schema + payload. Notice the size of the message, as well as the proportion of it that is made up of the payload vs. the schema. This is repeated in every message you write to Kafka. In scenarios like this, you may want to use a serialisation format like JSON Schema or Avro, where the schema is stored separately and the message holds just the payload.

AVRO

  • This connector supports AVRO. To use AVRO you need to configure an AvroConverter so that Kafka Connect knows how to work with AVRO data. This connector has been tested with the AvroConverter supplied by Confluent under the Apache 2.0 license, but another custom converter can be used in its place if you prefer.

  • Because Kafka deals with keys and values independently, you need to specify the key.converter and value.converter properties as required in the worker configuration.

  • When using the AvroConverter, an additional converter property must also be added that provides the URL for the Schema Registry.

The example below shows the AvroConverter key and value properties that are added to the configuration:

key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://schema-registry:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://schema-registry:8081
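
For reference, the schema registered in the Schema Registry for the earlier ksql.users example would be an AVRO record along these lines (a hand-written sketch, not output from this project):

    {
      "type": "record",
      "name": "users",
      "namespace": "ksql",
      "fields": [
        { "name": "userid", "type": "int" },
        { "name": "name", "type": "string" }
      ]
    }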

Choosing a conversion format

  • If you're configuring a Source connector and

    • If you want Kafka Connect to include plain JSON in the message it writes to Kafka, you'd set JSON (Plain) configuration.
    • If you want Kafka Connect to include the schema in the message it writes to Kafka, you’d set JSON with Schema configuration.
    • If you want Kafka Connect to include AVRO format in the message it writes to Kafka, you'd set AVRO configuration.
  • If you’re consuming JSON data from a Kafka topic into a Sink connector, you need to understand how the JSON was serialised when it was written to the Kafka topic:

    • If it was written with the JSON serialiser, then you need to set Kafka Connect to use the JSON converter (org.apache.kafka.connect.json.JsonConverter).
      • If the JSON data was written as a plain string, then you need to determine whether the data includes a nested schema/payload. If it does, then you would use the JSON with Schema configuration.
      • However, if you’re consuming JSON data and it doesn’t have the schema/payload construct, then you must tell Kafka Connect not to look for a schema by setting schemas.enable=false, as per the JSON (Plain) configuration.
    • If it was written with the AVRO serialiser, then you need to set Kafka Connect to use the AVRO converter (io.confluent.connect.avro.AvroConverter), as per the AVRO configuration.
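
As an illustration only, a sink connector submitted to the Kafka Connect REST API could override the worker's converters per connector. The connector class name below is the one that appears elsewhere on this page and may differ between releases, and the required Cosmos DB connection properties are omitted for brevity:

    {
      "name": "cosmosdb-sink",
      "config": {
        "connector.class": "com.microsoft.azure.cosmosdb.kafka.connect.sink.CosmosDBSinkConnector",
        "topics": "orders",
        "value.converter": "org.apache.kafka.connect.json.JsonConverter",
        "value.converter.schemas.enable": "false"
      }
    }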

Common Errors

The following are some of the common errors you can get if you misconfigure the converters in Kafka Connect. These will show up in the sinks you configure for Kafka Connect, as it’s at this point that you’ll be trying to deserialize the messages already stored in Kafka. Converter problems tend not to occur in sources, because serialization is set in the source itself.

Converter Configuration Errors
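
For example, feeding plain JSON (without the schema/payload envelope) into a converter configured with schemas.enable=true typically fails with an error similar to:

    org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration.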

Configuration

Common Configuration Properties

The Sink and Source connectors share the following common configuration properties:

  • connect.cosmos.connection.endpoint (uri, required): Cosmos endpoint URI string.
  • connect.cosmos.master.key (string, required): The Cosmos primary key that the sink connects with.
  • connect.cosmos.databasename (string, required): The name of the Cosmos database the sink writes to.
  • connect.cosmos.containers.topicmap (string, required): Mapping between Kafka topics and Cosmos containers, formatted as CSV: topic#container,topic2#container2.
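
A minimal sketch of these properties (endpoint, key, and names below are placeholders):

    connect.cosmos.connection.endpoint=https://<cosmos-account>.documents.azure.com:443/
    connect.cosmos.master.key=<cosmos-primary-key>
    connect.cosmos.databasename=kafkadata
    connect.cosmos.containers.topicmap=orders#orders,customers#customers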

For Sink connector specific configuration, please refer to the Sink Connector Documentation

For Source connector specific configuration, please refer to the Source Connector Documentation

Project Setup

Please refer to the Developer Walkthrough and Project Setup for initial setup instructions.

Performance Testing

For more information on the performance tests run for the Sink and Source connectors, refer to the Performance testing document.

Refer to the Performance Environment Setup for exact steps on deploying the performance test environment for the Connectors.

Dead Letter Queue

We introduced the standard dead letter queue from Kafka Connect. For more info see: https://www.confluent.io/blog/kafka-connect-deep-dive-error-handling-dead-letter-queues
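
A minimal sketch of the standard Kafka Connect error-handling properties that route failed records to a dead letter queue (the topic name below is illustrative):

    errors.tolerance=all
    errors.deadletterqueue.topic.name=cosmosdb-sink-dlq
    errors.deadletterqueue.topic.replication.factor=1
    errors.deadletterqueue.context.headers.enable=true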

Resources


kafka-connect-cosmosdb's Issues

Source Connector is only reading two documents and Sink is not writing

After merging "post-processors" branch with master:

  • com.microsoft.azure.cosmosdb.kafka.connect.processor.SourcePostProcessorTest only returns 2 documents even though there are 5 in Cosmos DB
  • com.microsoft.azure.cosmosdb.kafka.connect.processor.SinkPostProcessorTest does not write anything to Cosmos DB

Regarding the second point, it seems it is not even reading from the Kafka topic.

An additional note: the mentioned tests use the code I found on Monday, so it may be that a different set of options/configuration is now needed after the changes you've made lately.

Nice to have a FieldRenamer post processor to rename fields

Related to #15

Nice to be able to rename fields in the source

There should be two different options for renaming fields in the record: a simple 1:1 field name mapping, or a more flexible approach using regular expressions.

Both config options are defined by inline JSON arrays containing objects which describe the renaming.

Example 1:
cosmosdb.fieldrenamer.mapping=[{"oldName":"key.fieldA","newName":"field1"},{"oldName":"value.xyz","newName":"abc"}]

These settings cause:
a field named fieldA to be renamed to field1 in the key document structure
a field named xyz to be renamed to abc in the value document structure

Example 2:
cosmosdb.fieldrenamer.mapping=[{"regexp":"^key\..*my.*$","pattern":"my","replace":""}, {"regexp":"^value\..*-.+$","pattern":"-","replace":"_"}]

These settings cause:
all field names of the key structure containing 'my' to be renamed so that 'my' is removed
AND
all field names of the value structure containing a '-' to be renamed by replacing '-' with '_'

Note the use of the "." character as navigational operator in both examples. It's used in order to refer to nested fields in sub documents of the record structure. The prefix at the very beginning is used as a simple convention to distinguish between the key and value structure of a document.
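
To illustrate Example 1, a value structure such as the following (hypothetical data):

{"xyz": 42, "color": "red"}

would be emitted with the field renamed:

{"abc": 42, "color": "red"}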

Nice to have a configurable id generation strategy for DocumentIdAdder post processor

Related to #17

The id field is filled by the configured document id generation strategy, which can be one of the following:

- Java UUID (default)
- Kafka meta-data: comprised of the string concatenation based on [topic-partition-offset] information
- Provided in key: expects the sink record's key to contain an id field which is used as is (error if not present or null)
- Provided in value: expects the sink record's value to contain an id field which is used as is (error if not present or null)

The strategy is set by means of a configuration property
cosmosdb.document.id.strategy

Further strategies can be easily implemented based on the IdStrategy interface

All custom strategies that should be available to the connector can be registered by specifying a list of fully qualified class names for the following configuration property:
cosmosdb.document.id.strategies=…
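
As a sketch only (the class name below is hypothetical, and the exact value accepted by the selection property may differ), registering and then selecting a custom strategy could look like:

cosmosdb.document.id.strategies=com.example.connect.TenantIdStrategy
cosmosdb.document.id.strategy=com.example.connect.TenantIdStrategy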

It's important to keep in mind that the chosen / implemented id strategy has direct implications on the possible delivery semantics. Obviously, if it's set to UUID, it can only ever guarantee at-least-once delivery of records, since re-processing on retries after failures will generate new ids.

The other strategies permit exactly-once semantics if the respective fields forming the document id are guaranteed to be unique in the first place.

Should have support for Post Processors

  • After reading data, and converting it from, say, AVRO to JSON, documents should undergo post-processing before being written to Cosmos DB.

  • These Post Processors should be chainable together. Post Processors will be applied in the order they are chained in configuration.

  • Should be easy to extend with custom Post Processors by providing an abstract class that Post Processors extend.

- Post Processors that should be supported:
  - DocumentIdAdder: adds an id field to a document

- Examples of possible future Post Processors:
  - FieldRenamer: renames fields
  - NullFieldRemover: removes fields with null values
  - FieldRedacter: redacts fields containing sensitive information

  • Applicable to Sink and Source connectors

Confluent 3rd Party Connector Support Request

Hi Team,

Since this is a Microsoft open-source connector, is it possible for you to initiate the process of working with the Confluent partner program team to provide the connector to Confluent for verification? Once verified, it would be listed on Confluent Hub. Also, can we get the required support from Microsoft for any issues or bugs?

Nice to have a schema registry for storing JSON schemas

This could be a separate schema collection with the id of the message it is correlated to, or it could be a different document type in the same collection.

Consider the best way to implement this, ideally driven by customer requests.

CosmosDBSourceTaskTest fail due to different number of rows returned

Repro:

  • Make sure you don't have any kafka-log-1 folder in C:\Temp
  • Make sure you are connecting to a Cosmos DB instance, not to the emulator
  • Run sbt test
  • Test will run successfully
  • Run sbt test again

The test will fail with the following error:

should Return a list of SourceRecords with the right format *** FAILED ***
[info]   [KafkaPayloadTest(...list of messages...)] had size 18 instead of expected size 20 (CosmosDBSourceTaskTest.scala:82)

Must support a configurable list of one or more topics for Sink

Kafka Connect's default behavior supports the topics configuration, which allows a user to tell Connect to monitor multiple topics.

Delimited String [or JSON array] that includes a list of topics from which messages will be retrieved and sent to Cosmos DB.

This is a required configuration.
No Default value.

Sink connector needs to support having SinkRecords come from multiple topics.
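
For illustration, using the standard Kafka Connect topics property together with the container mapping described in the configuration section above (topic and container names are placeholders):

topics=orders,customers
connect.cosmos.containers.topicmap=orders#orders-container,customers#customers-container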

New document is processed multiple times

While leaving the connector running, if you try to add a new document to the monitored Cosmos DB collection, that document will be processed "n" times:

[screenshot omitted]

In the sample above, the last document was added to Cosmos DB and, as you can see, it has been processed by the Kafka connector more than once.

Should have support for AVRO data in Source

Since key and value settings can be independently configured, it is possible to work with different data formats for records' keys and values respectively.

Configuration example for AVRO
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081

value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081

Nice to have worker task:topic mapping in Sink connector

Currently the sink connector spins up as many workers as the maxTasks configuration and assigns all topics to all workers. Because of the topic -> collection mapping, it would be more efficient to assign specific workers to a subset of topics instead of having all workers read from all topics.

Investigate the implications of this and implement the best solution.

This refers to the taskConfigs function of the CosmosDBSinkConnector

Must have HandleRetriableError implemented in the Sink


Unhandled exception when JSON is not valid

When reading from the Cosmos DB Change Feed, if the original document already has a _lsn property, the ChangeFeedProcessor will add an additional one, creating an invalid JSON document:

{
	"name": "davide",
	"surname": "mauri",
	"id": "1",
	"_rid": "tA4eAIlHRkMBAAAAAAAAAA==",
	"_self": "dbs/tA4eAA==/colls/tA4eAIlHRkM=/docs/tA4eAIlHRkMBAAAAAAAAAA==/",
	"_etag": "\"fa019428-0000-0700-0000-5d082aac0000\"",
	"_attachments": "attachments/",
	"_lsn": 5,
	"_metadata": {},
	"_ts": 1560816300,
	"_lsn": 8,
	"_metadata": {}
}

When the document is converted to JSON, the code crashes. Here's the significant stack trace:

java.lang.IllegalStateException: Unable to parse JSON {[...]}
	at com.microsoft.azure.cosmosdb.rx.internal.RxDocumentServiceResponse.fromJson(RxDocumentServiceResponse.java:202)
	at com.microsoft.azure.cosmosdb.rx.internal.RxDocumentServiceResponse.getQueryResponse(RxDocumentServiceResponse.java:155)
	at com.microsoft.azure.cosmosdb.BridgeInternal.toChaneFeedResponsePage(BridgeInternal.java:74)
	at com.microsoft.azure.cosmosdb.rx.internal.ChangeFeedQueryImpl.lambda$executeRequestAsync$2(ChangeFeedQueryImpl.java:154)

Must have CI/CD process

Must have GitHub Actions or AzDO CI/CD pipelines to auto-build, test, and publish the component

  • Workflow for merge into dev (build, run unit tests)
  • Workflow for merge into main (build, run unit & integration tests)
  • Workflow for release on GitHub, publish to Maven, and deploy to Confluent Hub on tagging in main

Nice to have support for queries in Source

Cosmos DB doesn't currently offer a way to filter the Change Feed, so the ability to use a query as the source of data, effectively offering a "filtered change feed", would be nice.

Related to #4
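
For illustration, a query-based source might accept a standard Cosmos DB SQL filter such as the following (hypothetical query, not a supported configuration today):

SELECT * FROM c WHERE c.status = 'shipped'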

How to use this as Sink Connector

Hi,

I would like to know how to use kafka-connect-cosmosdb as a Sink Connector. I don't see it on Confluent Hub. It would be very helpful to know how to use this, since I am trying to sink data from a Kafka topic to Cosmos DB.

Incorrect logic for the continuation token in the Source Connector.

Describe the bug
Due to incorrect logic for the continuation token in the Source Connector, documents are left behind and not sent to Kafka.
When querying the document change feed it is required to set MaxItemCount=BatchSize. Say we have batchSize=100: the new continuationToken will be the current continuationToken + 100. However, if the timeout expires or the bufferSize is reached, the CosmosDBReader will interrupt the process to flush the documents to Kafka, leaving some documents unprocessed. The next poll of the Source Connector will then use the new continuationToken.

To Reproduce
Set batchSize=100 and timeout=1 (ms). This will force the processChanges function to exit early with documents not yet sent to Kafka.

Expected behavior
The continuationToken needs to be in sync with the documents actually sent to Kafka, not with the batchSize.

Building Uber JAR Fails with Error Attached

I created plugins.sbt inside the project folder with the contents:
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.6")

and tried to compile the code using sbt assembly. I also removed all the tests for simplicity to build the uber JAR, and I see the issue below.

kafka-connect-cosmosdb n0c018j$ sbt assembly
[info] Loading global plugins from /Users/n0c018j/.sbt/1.0/plugins
[info] Loading settings for project kafka-connect-cosmosdb-build from plugins.sbt ...
[info] Loading project definition from /Users/n0c018j/Desktop/kafka-connect-cosmosdb/project
[info] Loading settings for project kafka-connect-cosmosdb from build.sbt ...
[info] Set current project to com.microsoft.azure.cosmosdb.kafka.connect (in build file:/Users/n0c018j/Desktop/kafka-connect-cosmosdb/)
[warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list
[info] Including from cache: error_prone_annotations-2.2.0.jar
[info] Including from cache: javax.annotation-api-1.2.jar
[info] Including from cache: audience-annotations-0.5.0.jar
[info] Including from cache: commons-io-2.5.jar
[info] Including from cache: azure-cosmosdb-gateway-2.4.4.jar
[info] Including from cache: logback-classic-1.2.3.jar
[info] Including from cache: json4s-scalap_2.12-3.5.0.jar
[info] Including from cache: osgi-resource-locator-1.0.1.jar
[info] Including from cache: j2objc-annotations-1.1.jar
[info] Including from cache: jetty-server-9.4.14.v20181114.jar
[info] Including from cache: netty-common-4.1.32.Final.jar
[info] Including from cache: connect-api-2.2.0.jar
[info] Including from cache: paranamer-2.8.jar
[info] Including from cache: commons-validator-1.6.jar
[info] Including from cache: azure-cosmosdb-direct-2.4.4.jar
[info] Including from cache: animal-sniffer-annotations-1.17.jar
[info] Including from cache: javax.servlet-api-3.1.0.jar
[info] Including from cache: netty-resolver-4.1.32.Final.jar
[info] Including from cache: rxjava-1.3.8.jar
[info] Including from cache: logback-core-1.2.3.jar
[info] Including from cache: commons-beanutils-1.9.2.jar
[info] Including from cache: connect-runtime-2.2.0.jar
[info] Including from cache: rxjava-string-1.1.1.jar
[info] Including from cache: javax.ws.rs-api-2.1.1.jar
[info] Including from cache: jetty-http-9.4.14.v20181114.jar
[info] Including from cache: scala-xml_2.12-1.0.6.jar
[info] Including from cache: jersey-server-2.27.jar
[info] Including from cache: gson-2.8.5.jar
[info] Including from cache: commons-logging-1.2.jar
[info] Including from cache: netty-handler-4.1.32.Final.jar
[info] Including from cache: scala-logging_2.12-3.9.2.jar
[info] Including from cache: kafka-tools-2.2.0.jar
[info] Including from cache: rxnetty-0.4.20.jar
[info] Including from cache: netty-handler-proxy-4.1.32.Final.jar
[info] Including from cache: jersey-client-2.27.jar
[info] Including from cache: jetty-util-9.4.14.v20181114.jar
[info] Including from cache: kafka-log4j-appender-2.2.0.jar
[info] Including from cache: rxscala_2.12-0.26.5.jar
[info] Including from cache: mockito-scala_2.12-1.5.11.jar
[info] Including from cache: commons-collections-3.2.2.jar
[info] Including from cache: netty-codec-socks-4.1.32.Final.jar
[info] Including from cache: jersey-media-jaxb-2.27.jar
[info] Including from cache: slf4j-log4j12-1.7.25.jar
[info] Including from cache: jetty-io-9.4.14.v20181114.jar
[info] Including from cache: netty-codec-http-4.1.32.Final.jar
[info] Including from cache: json4s-jackson_2.12-3.5.0.jar
[info] Including from cache: validation-api-1.1.0.Final.jar
[info] Including from cache: commons-digester-1.8.1.jar
[info] Including from cache: jetty-servlet-9.4.14.v20181114.jar
[info] Including from cache: commons-lang3-3.8.1.jar
[info] Including from cache: mockito-core-2.28.2.jar
[info] Including from cache: jersey-hk2-2.27.jar
[info] Including from cache: log4j-1.2.17.jar
[info] Including from cache: netty-codec-4.1.32.Final.jar
[info] Including from cache: guava-27.0.1-jre.jar
[info] Including from cache: jetty-security-9.4.14.v20181114.jar
[info] Including from cache: json4s-core_2.12-3.5.0.jar
[info] Including from cache: rxjava-extras-0.8.0.17.jar
[info] Including from cache: argparse4j-0.7.0.jar
[info] Including from cache: failureaccess-1.0.1.jar
[info] Including from cache: hk2-locator-2.5.0-b42.jar
[info] Including from cache: json4s-ast_2.12-3.5.0.jar
[info] Including from cache: jetty-servlets-9.4.14.v20181114.jar
[info] Including from cache: listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar
[info] Including from cache: netty-transport-4.1.32.Final.jar
[info] Including from cache: jackson-jaxrs-json-provider-2.9.8.jar
[info] Including from cache: aopalliance-repackaged-2.5.0-b42.jar
[info] Including from cache: commons-collections4-4.2.jar
[info] Including from cache: scala-library-2.12.8.jar
[info] Including from cache: jetty-continuation-9.4.14.v20181114.jar
[info] Including from cache: jsr305-3.0.2.jar
[info] Including from cache: jackson-jaxrs-base-2.9.8.jar
[info] Including from cache: azure-cosmosdb-2.4.4.jar
[info] Including from cache: hk2-api-2.5.0-b42.jar
[info] Including from cache: connect-json-2.2.0.jar
[info] Including from cache: netty-buffer-4.1.32.Final.jar
[info] Including from cache: commons-text-1.6.jar
[info] Including from cache: jackson-module-jaxb-annotations-2.9.8.jar
[info] Including from cache: scala-reflect-2.12.8.jar
[info] Including from cache: checker-qual-2.5.2.jar
[info] Including from cache: javax.inject-1.jar
[info] Including from cache: jersey-container-servlet-2.27.jar
[info] Including from cache: connect-transforms-2.2.0.jar
[info] Including from cache: jackson-databind-2.9.8.jar
[info] Including from cache: azure-cosmosdb-commons-2.4.4.jar
[info] Including from cache: slf4j-api-1.7.25.jar
[info] Including from cache: jackson-core-2.9.8.jar
[info] Including from cache: hk2-utils-2.5.0-b42.jar
[info] Including from cache: jersey-container-servlet-core-2.27.jar
[info] Including from cache: lz4-java-1.5.0.jar
[info] Including from cache: jackson-annotations-2.9.0.jar
[info] Including from cache: jetty-client-9.4.14.v20181114.jar
[info] Including from cache: jopt-simple-5.0.4.jar
[info] Including from cache: java-uuid-generator-3.1.4.jar
[info] Including from cache: javax.inject-2.5.0-b42.jar
[info] Including from cache: metrics-core-2.2.0.jar
[info] Including from cache: zkclient-0.11.jar
[info] Including from cache: reflections-0.9.11.jar
[info] Including from cache: byte-buddy-1.9.10.jar
[info] Including from cache: scalactic_2.12-3.0.8.jar
[info] Including from cache: javassist-3.22.0-CR2.jar
[info] Including from cache: maven-artifact-3.6.0.jar
[info] Including from cache: byte-buddy-agent-1.9.10.jar
[info] Including from cache: generics-resolver-3.0.0.jar
[info] Including from cache: jaxb-api-2.3.0.jar
[info] Including from cache: objenesis-2.6.jar
[info] Including from cache: plexus-utils-3.1.0.jar
[info] Including from cache: jersey-common-2.27.jar
[info] Including from cache: activation-1.1.1.jar
[info] Including from cache: zookeeper-3.4.13.jar
[info] Including from cache: snappy-java-1.1.7.2.jar
[info] Including from cache: jackson-datatype-jdk8-2.9.8.jar
[info] Including from cache: kafka-clients-2.2.0.jar
[info] Including from cache: kafka_2.12-2.2.0.jar
[info] Including from cache: kafka_2.12-2.2.0-test.jar
[info] Including from cache: kafka-clients-2.2.0-test.jar
[info] Including from cache: zstd-jni-1.3.8-1.jar
[info] Run completed in 39 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] Checking every .class/.jar file's SHA-1.
[info] Merging files...
[warn] Merging 'META-INF/DEPENDENCIES' with strategy 'discard'
[warn] Merging 'META-INF/INDEX.LIST' with strategy 'discard'
[warn] Merging 'META-INF/MANIFEST.MF' with strategy 'discard'
[warn] Merging 'META-INF/maven/ch.qos.logback/logback-classic/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/ch.qos.logback/logback-classic/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/ch.qos.logback/logback-core/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/ch.qos.logback/logback-core/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.fasterxml.jackson.core/jackson-annotations/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.fasterxml.jackson.core/jackson-annotations/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.fasterxml.jackson.core/jackson-core/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.fasterxml.jackson.core/jackson-core/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.fasterxml.jackson.core/jackson-databind/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.fasterxml.jackson.core/jackson-databind/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.fasterxml.jackson.datatype/jackson-datatype-jdk8/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.fasterxml.jackson.datatype/jackson-datatype-jdk8/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.fasterxml.jackson.jaxrs/jackson-jaxrs-base/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.fasterxml.jackson.jaxrs/jackson-jaxrs-base/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.fasterxml.jackson.jaxrs/jackson-jaxrs-json-provider/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.fasterxml.jackson.jaxrs/jackson-jaxrs-json-provider/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.fasterxml.jackson.module/jackson-module-jaxb-annotations/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.fasterxml.jackson.module/jackson-module-jaxb-annotations/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.fasterxml.uuid/java-uuid-generator/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.fasterxml.uuid/java-uuid-generator/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.github.davidmoten/rxjava-extras/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.github.davidmoten/rxjava-extras/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.google.code.findbugs/jsr305/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.google.code.findbugs/jsr305/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.google.code.gson/gson/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.google.code.gson/gson/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.google.errorprone/error_prone_annotations/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.google.errorprone/error_prone_annotations/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.google.guava/failureaccess/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.google.guava/failureaccess/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.google.guava/guava/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.google.guava/guava/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.google.guava/listenablefuture/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.google.guava/listenablefuture/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.google.j2objc/j2objc-annotations/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.google.j2objc/j2objc-annotations/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.microsoft.azure/azure-cosmosdb-commons/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.microsoft.azure/azure-cosmosdb-commons/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.microsoft.azure/azure-cosmosdb-direct/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.microsoft.azure/azure-cosmosdb-direct/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.microsoft.azure/azure-cosmosdb-gateway/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.microsoft.azure/azure-cosmosdb-gateway/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.microsoft.azure/azure-cosmosdb/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.microsoft.azure/azure-cosmosdb/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.thoughtworks.paranamer/paranamer/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.thoughtworks.paranamer/paranamer/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.yammer.metrics/metrics-core/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/com.yammer.metrics/metrics-core/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/commons-beanutils/commons-beanutils/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/commons-beanutils/commons-beanutils/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/commons-collections/commons-collections/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/commons-collections/commons-collections/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/commons-digester/commons-digester/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/commons-digester/commons-digester/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/commons-io/commons-io/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/commons-io/commons-io/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/commons-logging/commons-logging/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/commons-logging/commons-logging/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/commons-validator/commons-validator/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/commons-validator/commons-validator/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/io.netty/netty-buffer/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/io.netty/netty-buffer/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/io.netty/netty-codec-http/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/io.netty/netty-codec-http/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/io.netty/netty-codec-socks/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/io.netty/netty-codec-socks/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/io.netty/netty-codec/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/io.netty/netty-codec/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/io.netty/netty-common/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/io.netty/netty-common/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/io.netty/netty-handler-proxy/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/io.netty/netty-handler-proxy/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/io.netty/netty-handler/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/io.netty/netty-handler/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/io.netty/netty-resolver/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/io.netty/netty-resolver/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/io.netty/netty-transport/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/io.netty/netty-transport/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/javax.annotation/javax.annotation-api/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/javax.annotation/javax.annotation-api/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/javax.servlet/javax.servlet-api/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/javax.servlet/javax.servlet-api/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/javax.validation/validation-api/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/javax.validation/validation-api/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/javax.ws.rs/javax.ws.rs-api/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/javax.ws.rs/javax.ws.rs-api/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/javax.xml.bind/jaxb-api/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/javax.xml.bind/jaxb-api/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/log4j/log4j/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/log4j/log4j/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/net.bytebuddy/byte-buddy-agent/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/net.bytebuddy/byte-buddy-agent/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/net.bytebuddy/byte-buddy-dep/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/net.bytebuddy/byte-buddy-dep/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/net.bytebuddy/byte-buddy/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/net.bytebuddy/byte-buddy/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/net.sf.jopt-simple/jopt-simple/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/net.sf.jopt-simple/jopt-simple/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/net.sourceforge.argparse4j/argparse4j/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/net.sourceforge.argparse4j/argparse4j/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.apache.commons/commons-collections4/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.apache.commons/commons-collections4/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.apache.commons/commons-lang3/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.apache.commons/commons-lang3/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.apache.commons/commons-text/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.apache.commons/commons-text/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.apache.maven/maven-artifact/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.apache.maven/maven-artifact/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.apache.yetus/audience-annotations/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.apache.yetus/audience-annotations/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.codehaus.mojo/animal-sniffer-annotations/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.codehaus.mojo/animal-sniffer-annotations/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.codehaus.plexus/plexus-utils/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.codehaus.plexus/plexus-utils/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.eclipse.jetty/jetty-client/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.eclipse.jetty/jetty-client/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.eclipse.jetty/jetty-continuation/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.eclipse.jetty/jetty-continuation/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.eclipse.jetty/jetty-http/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.eclipse.jetty/jetty-http/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.eclipse.jetty/jetty-io/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.eclipse.jetty/jetty-io/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.eclipse.jetty/jetty-security/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.eclipse.jetty/jetty-security/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.eclipse.jetty/jetty-server/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.eclipse.jetty/jetty-server/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.eclipse.jetty/jetty-servlet/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.eclipse.jetty/jetty-servlet/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.eclipse.jetty/jetty-servlets/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.eclipse.jetty/jetty-servlets/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.eclipse.jetty/jetty-util/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.eclipse.jetty/jetty-util/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.glassfish.hk2.external/aopalliance-repackaged/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.glassfish.hk2.external/aopalliance-repackaged/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.glassfish.hk2.external/javax.inject/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.glassfish.hk2.external/javax.inject/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.glassfish.hk2/hk2-api/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.glassfish.hk2/hk2-api/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.glassfish.hk2/hk2-locator/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.glassfish.hk2/hk2-locator/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.glassfish.hk2/hk2-utils/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.glassfish.hk2/hk2-utils/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.glassfish.hk2/osgi-resource-locator/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.glassfish.hk2/osgi-resource-locator/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.glassfish.jersey.containers/jersey-container-servlet-core/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.glassfish.jersey.containers/jersey-container-servlet-core/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.glassfish.jersey.containers/jersey-container-servlet/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.glassfish.jersey.containers/jersey-container-servlet/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.glassfish.jersey.core/jersey-client/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.glassfish.jersey.core/jersey-client/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.glassfish.jersey.core/jersey-common/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.glassfish.jersey.core/jersey-common/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.glassfish.jersey.core/jersey-server/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.glassfish.jersey.core/jersey-server/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.glassfish.jersey.inject/jersey-hk2/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.glassfish.jersey.inject/jersey-hk2/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.glassfish.jersey.media/jersey-media-jaxb/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.glassfish.jersey.media/jersey-media-jaxb/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.javassist/javassist/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.javassist/javassist/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.jctools/jctools-core/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.jctools/jctools-core/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.jvnet/tiger-types/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.jvnet/tiger-types/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.reflections/reflections/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.reflections/reflections/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.slf4j/slf4j-api/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.slf4j/slf4j-api/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.slf4j/slf4j-log4j12/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/org.slf4j/slf4j-log4j12/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/maven/ru.vyarus/generics-resolver/pom.properties' with strategy 'discard'
[warn] Merging 'META-INF/maven/ru.vyarus/generics-resolver/pom.xml' with strategy 'discard'
[warn] Merging 'META-INF/services/com.fasterxml.jackson.databind.Module' with strategy 'filterDistinctLines'
[warn] Merging 'META-INF/services/javax.servlet.ServletContainerInitializer' with strategy 'filterDistinctLines'
[warn] Merging 'META-INF/services/org.glassfish.jersey.internal.spi.AutoDiscoverable' with strategy 'filterDistinctLines'
[warn] Merging 'META-INF/services/org.glassfish.jersey.internal.spi.ForcedAutoDiscoverable' with strategy 'filterDistinctLines'
[error] 13 errors were encountered during merge
[error] java.lang.RuntimeException: deduplicate: different file contents found in the following:
[error] /Users/n0c018j/.ivy2/cache/io.netty/netty-codec-http/jars/netty-codec-http-4.1.32.Final.jar:META-INF/io.netty.versions.properties
[error] /Users/n0c018j/.ivy2/cache/io.netty/netty-codec/jars/netty-codec-4.1.32.Final.jar:META-INF/io.netty.versions.properties
[error] /Users/n0c018j/.ivy2/cache/io.netty/netty-transport/jars/netty-transport-4.1.32.Final.jar:META-INF/io.netty.versions.properties
[error] /Users/n0c018j/.ivy2/cache/io.netty/netty-buffer/jars/netty-buffer-4.1.32.Final.jar:META-INF/io.netty.versions.properties
[error] /Users/n0c018j/.ivy2/cache/io.netty/netty-common/jars/netty-common-4.1.32.Final.jar:META-INF/io.netty.versions.properties
[error] /Users/n0c018j/.ivy2/cache/io.netty/netty-resolver/jars/netty-resolver-4.1.32.Final.jar:META-INF/io.netty.versions.properties
[error] /Users/n0c018j/.ivy2/cache/io.netty/netty-handler/jars/netty-handler-4.1.32.Final.jar:META-INF/io.netty.versions.properties
[error] /Users/n0c018j/.ivy2/cache/io.netty/netty-handler-proxy/jars/netty-handler-proxy-4.1.32.Final.jar:META-INF/io.netty.versions.properties
[error] /Users/n0c018j/.ivy2/cache/io.netty/netty-codec-socks/jars/netty-codec-socks-4.1.32.Final.jar:META-INF/io.netty.versions.properties
[error] deduplicate: different file contents found in the following:
[error] /Users/n0c018j/.ivy2/cache/net.bytebuddy/byte-buddy/jars/byte-buddy-1.9.10.jar:META-INF/versions/9/module-info.class
[error] /Users/n0c018j/.ivy2/cache/net.bytebuddy/byte-buddy-agent/jars/byte-buddy-agent-1.9.10.jar:META-INF/versions/9/module-info.class
[error] deduplicate: different file contents found in the following:
[error] /Users/n0c018j/.ivy2/cache/org.glassfish.hk2.external/javax.inject/jars/javax.inject-2.5.0-b42.jar:javax/inject/Inject.class
[error] /Users/n0c018j/.ivy2/cache/javax.inject/javax.inject/jars/javax.inject-1.jar:javax/inject/Inject.class
[error] deduplicate: different file contents found in the following:
[error] /Users/n0c018j/.ivy2/cache/org.glassfish.hk2.external/javax.inject/jars/javax.inject-2.5.0-b42.jar:javax/inject/Named.class
[error] /Users/n0c018j/.ivy2/cache/javax.inject/javax.inject/jars/javax.inject-1.jar:javax/inject/Named.class
[error] deduplicate: different file contents found in the following:
[error] /Users/n0c018j/.ivy2/cache/org.glassfish.hk2.external/javax.inject/jars/javax.inject-2.5.0-b42.jar:javax/inject/Provider.class
[error] /Users/n0c018j/.ivy2/cache/javax.inject/javax.inject/jars/javax.inject-1.jar:javax/inject/Provider.class
[error] deduplicate: different file contents found in the following:
[error] /Users/n0c018j/.ivy2/cache/org.glassfish.hk2.external/javax.inject/jars/javax.inject-2.5.0-b42.jar:javax/inject/Qualifier.class
[error] /Users/n0c018j/.ivy2/cache/javax.inject/javax.inject/jars/javax.inject-1.jar:javax/inject/Qualifier.class
[error] deduplicate: different file contents found in the following:
[error] /Users/n0c018j/.ivy2/cache/org.glassfish.hk2.external/javax.inject/jars/javax.inject-2.5.0-b42.jar:javax/inject/Scope.class
[error] /Users/n0c018j/.ivy2/cache/javax.inject/javax.inject/jars/javax.inject-1.jar:javax/inject/Scope.class
[error] deduplicate: different file contents found in the following:
[error] /Users/n0c018j/.ivy2/cache/org.glassfish.hk2.external/javax.inject/jars/javax.inject-2.5.0-b42.jar:javax/inject/Singleton.class
[error] /Users/n0c018j/.ivy2/cache/javax.inject/javax.inject/jars/javax.inject-1.jar:javax/inject/Singleton.class
[error] deduplicate: different file contents found in the following:
[error] /Users/n0c018j/.ivy2/cache/org.apache.kafka/kafka_2.12/jars/kafka_2.12-2.2.0-test.jar:log4j.properties
[error] /Users/n0c018j/.ivy2/cache/org.apache.kafka/kafka-clients/jars/kafka-clients-2.2.0-test.jar:log4j.properties
[error] deduplicate: different file contents found in the following:
[error] /Users/n0c018j/.ivy2/cache/javax.ws.rs/javax.ws.rs-api/jars/javax.ws.rs-api-2.1.1.jar:module-info.class
[error] /Users/n0c018j/.ivy2/cache/javax.xml.bind/jaxb-api/jars/jaxb-api-2.3.0.jar:module-info.class
[error] deduplicate: different file contents found in the following:
[error] /Users/n0c018j/.ivy2/cache/ch.qos.logback/logback-classic/jars/logback-classic-1.2.3.jar:org/slf4j/impl/StaticLoggerBinder.class
[error] /Users/n0c018j/.ivy2/cache/org.slf4j/slf4j-log4j12/jars/slf4j-log4j12-1.7.25.jar:org/slf4j/impl/StaticLoggerBinder.class
[error] deduplicate: different file contents found in the following:
[error] /Users/n0c018j/.ivy2/cache/ch.qos.logback/logback-classic/jars/logback-classic-1.2.3.jar:org/slf4j/impl/StaticMDCBinder.class
[error] /Users/n0c018j/.ivy2/cache/org.slf4j/slf4j-log4j12/jars/slf4j-log4j12-1.7.25.jar:org/slf4j/impl/StaticMDCBinder.class
[error] deduplicate: different file contents found in the following:
[error] /Users/n0c018j/.ivy2/cache/ch.qos.logback/logback-classic/jars/logback-classic-1.2.3.jar:org/slf4j/impl/StaticMarkerBinder.class
[error] /Users/n0c018j/.ivy2/cache/org.slf4j/slf4j-log4j12/jars/slf4j-log4j12-1.7.25.jar:org/slf4j/impl/StaticMarkerBinder.class
[error] at sbtassembly.Assembly$.applyStrategies(Assembly.scala:142)
[error] at sbtassembly.Assembly$.x$1$lzycompute$1(Assembly.scala:26)
[error] at sbtassembly.Assembly$.x$1$1(Assembly.scala:24)
[error] at sbtassembly.Assembly$.stratMapping$lzycompute$1(Assembly.scala:24)
[error] at sbtassembly.Assembly$.stratMapping$1(Assembly.scala:24)
[error] at sbtassembly.Assembly$.inputs$lzycompute$1(Assembly.scala:68)
[error] at sbtassembly.Assembly$.inputs$1(Assembly.scala:58)
[error] at sbtassembly.Assembly$.apply(Assembly.scala:85)
[error] at sbtassembly.Assembly$.$anonfun$assemblyTask$1(Assembly.scala:249)
[error] at scala.Function1.$anonfun$compose$1(Function1.scala:44)
[error] at sbt.internal.util.$tilde$greater.$anonfun$$u2219$1(TypeFunctions.scala:40)
[error] at sbt.std.Transform$$anon$4.work(System.scala:67)
[error] at sbt.Execute.$anonfun$submit$2(Execute.scala:269)
[error] at sbt.internal.util.ErrorHandling$.wideConvert(ErrorHandling.scala:16)
[error] at sbt.Execute.work(Execute.scala:278)
[error] at sbt.Execute.$anonfun$submit$1(Execute.scala:269)
[error] at sbt.ConcurrentRestrictions$$anon$4.$anonfun$submitValid$1(ConcurrentRestrictions.scala:178)
[error] at sbt.CompletionService$$anon$2.call(CompletionService.scala:37)
[error] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[error] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[error] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[error] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[error] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[error] at java.lang.Thread.run(Thread.java:748)
[error] (assembly) deduplicate: different file contents found in the following:
[error] /Users/n0c018j/.ivy2/cache/io.netty/netty-codec-http/jars/netty-codec-http-4.1.32.Final.jar:META-INF/io.netty.versions.properties
[error] /Users/n0c018j/.ivy2/cache/io.netty/netty-codec/jars/netty-codec-4.1.32.Final.jar:META-INF/io.netty.versions.properties
[error] /Users/n0c018j/.ivy2/cache/io.netty/netty-transport/jars/netty-transport-4.1.32.Final.jar:META-INF/io.netty.versions.properties
[error] /Users/n0c018j/.ivy2/cache/io.netty/netty-buffer/jars/netty-buffer-4.1.32.Final.jar:META-INF/io.netty.versions.properties
[error] /Users/n0c018j/.ivy2/cache/io.netty/netty-common/jars/netty-common-4.1.32.Final.jar:META-INF/io.netty.versions.properties
[error] /Users/n0c018j/.ivy2/cache/io.netty/netty-resolver/jars/netty-resolver-4.1.32.Final.jar:META-INF/io.netty.versions.properties
[error] /Users/n0c018j/.ivy2/cache/io.netty/netty-handler/jars/netty-handler-4.1.32.Final.jar:META-INF/io.netty.versions.properties
[error] /Users/n0c018j/.ivy2/cache/io.netty/netty-handler-proxy/jars/netty-handler-proxy-4.1.32.Final.jar:META-INF/io.netty.versions.properties
[error] /Users/n0c018j/.ivy2/cache/io.netty/netty-codec-socks/jars/netty-codec-socks-4.1.32.Final.jar:META-INF/io.netty.versions.properties
[error] deduplicate: different file contents found in the following:
[error] /Users/n0c018j/.ivy2/cache/net.bytebuddy/byte-buddy/jars/byte-buddy-1.9.10.jar:META-INF/versions/9/module-info.class
[error] /Users/n0c018j/.ivy2/cache/net.bytebuddy/byte-buddy-agent/jars/byte-buddy-agent-1.9.10.jar:META-INF/versions/9/module-info.class
[error] deduplicate: different file contents found in the following:
[error] /Users/n0c018j/.ivy2/cache/org.glassfish.hk2.external/javax.inject/jars/javax.inject-2.5.0-b42.jar:javax/inject/Inject.class
[error] /Users/n0c018j/.ivy2/cache/javax.inject/javax.inject/jars/javax.inject-1.jar:javax/inject/Inject.class
[error] deduplicate: different file contents found in the following:
[error] /Users/n0c018j/.ivy2/cache/org.glassfish.hk2.external/javax.inject/jars/javax.inject-2.5.0-b42.jar:javax/inject/Named.class
[error] /Users/n0c018j/.ivy2/cache/javax.inject/javax.inject/jars/javax.inject-1.jar:javax/inject/Named.class
[error] deduplicate: different file contents found in the following:
[error] /Users/n0c018j/.ivy2/cache/org.glassfish.hk2.external/javax.inject/jars/javax.inject-2.5.0-b42.jar:javax/inject/Provider.class
[error] /Users/n0c018j/.ivy2/cache/javax.inject/javax.inject/jars/javax.inject-1.jar:javax/inject/Provider.class
[error] deduplicate: different file contents found in the following:
[error] /Users/n0c018j/.ivy2/cache/org.glassfish.hk2.external/javax.inject/jars/javax.inject-2.5.0-b42.jar:javax/inject/Qualifier.class
[error] /Users/n0c018j/.ivy2/cache/javax.inject/javax.inject/jars/javax.inject-1.jar:javax/inject/Qualifier.class
[error] deduplicate: different file contents found in the following:
[error] /Users/n0c018j/.ivy2/cache/org.glassfish.hk2.external/javax.inject/jars/javax.inject-2.5.0-b42.jar:javax/inject/Scope.class
[error] /Users/n0c018j/.ivy2/cache/javax.inject/javax.inject/jars/javax.inject-1.jar:javax/inject/Scope.class
[error] deduplicate: different file contents found in the following:
[error] /Users/n0c018j/.ivy2/cache/org.glassfish.hk2.external/javax.inject/jars/javax.inject-2.5.0-b42.jar:javax/inject/Singleton.class
[error] /Users/n0c018j/.ivy2/cache/javax.inject/javax.inject/jars/javax.inject-1.jar:javax/inject/Singleton.class
[error] deduplicate: different file contents found in the following:
[error] /Users/n0c018j/.ivy2/cache/org.apache.kafka/kafka_2.12/jars/kafka_2.12-2.2.0-test.jar:log4j.properties
[error] /Users/n0c018j/.ivy2/cache/org.apache.kafka/kafka-clients/jars/kafka-clients-2.2.0-test.jar:log4j.properties
[error] deduplicate: different file contents found in the following:
[error] /Users/n0c018j/.ivy2/cache/javax.ws.rs/javax.ws.rs-api/jars/javax.ws.rs-api-2.1.1.jar:module-info.class
[error] /Users/n0c018j/.ivy2/cache/javax.xml.bind/jaxb-api/jars/jaxb-api-2.3.0.jar:module-info.class
[error] deduplicate: different file contents found in the following:
[error] /Users/n0c018j/.ivy2/cache/ch.qos.logback/logback-classic/jars/logback-classic-1.2.3.jar:org/slf4j/impl/StaticLoggerBinder.class
[error] /Users/n0c018j/.ivy2/cache/org.slf4j/slf4j-log4j12/jars/slf4j-log4j12-1.7.25.jar:org/slf4j/impl/StaticLoggerBinder.class
[error] deduplicate: different file contents found in the following:
[error] /Users/n0c018j/.ivy2/cache/ch.qos.logback/logback-classic/jars/logback-classic-1.2.3.jar:org/slf4j/impl/StaticMDCBinder.class
[error] /Users/n0c018j/.ivy2/cache/org.slf4j/slf4j-log4j12/jars/slf4j-log4j12-1.7.25.jar:org/slf4j/impl/StaticMDCBinder.class
[error] deduplicate: different file contents found in the following:
[error] /Users/n0c018j/.ivy2/cache/ch.qos.logback/logback-classic/jars/logback-classic-1.2.3.jar:org/slf4j/impl/StaticMarkerBinder.class
[error] /Users/n0c018j/.ivy2/cache/org.slf4j/slf4j-log4j12/jars/slf4j-log4j12-1.7.25.jar:org/slf4j/impl/StaticMarkerBinder.class
[error] Total time: 5 s, completed Jan 7, 2020 5:28:07 PM
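
The deduplicate failures above are the standard merge conflicts reported by sbt-assembly when building a fat jar. A minimal sketch of a mergeStrategy that would resolve these specific entries, assuming sbt-assembly is the plugin producing this output (the strategy choices below are illustrative and not this project's actual build settings):

// build.sbt (sketch) - resolve the duplicate entries reported above
assemblyMergeStrategy in assembly := {
  case PathList("META-INF", "io.netty.versions.properties")       => MergeStrategy.first
  case PathList("META-INF", "versions", "9", "module-info.class") => MergeStrategy.discard
  case "module-info.class"                                        => MergeStrategy.discard
  case "log4j.properties"                                         => MergeStrategy.first
  case PathList("javax", "inject", _*)                            => MergeStrategy.first
  case PathList("org", "slf4j", "impl", _*)                       => MergeStrategy.first
  case other =>
    val oldStrategy = (assemblyMergeStrategy in assembly).value
    oldStrategy(other)
}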

SinkConnector does not work without PostProcessor

Describe the bug
The SinkConnector does not write to Cosmos DB if we do not define a post-processor, i.e. if the following property is omitted:
connectorProperties.put(CosmosDBConfigConstants.SINK_POST_PROCESSOR, "com.microsoft.azure.cosmosdb.kafka.connect.processor.sink.SelectorSinkPostProcessor")

To Reproduce

Schema Used:

{
  "schema": {
    "type": "struct",
    "fields": [
      {
        "type": "int32",
        "optional": true,
        "field": "id"
      },
      {
        "type": "string",
        "optional": true,
        "field": "label"
      },
      {
        "type": "int64",
        "optional": false,
        "name": "org.apache.kafka.connect.data.Timestamp",
        "version": 1,
        "field": "create_ts"
      },
      {
        "type": "int64",
        "optional": false,
        "name": "org.apache.kafka.connect.data.Timestamp",
        "version": 1,
        "field": "update_ts"
      }
    ],
    "optional": false,
    "name": "foobar"
  },
  "payload": {
    "id": 1,
    "label": "red",
    "create_ts": 1501834166000,
    "update_ts": 1501834166000
  }
}

def getConnectorProperties(): Properties = {
  val connectorProperties: Properties = new Properties()
  connectorProperties.put(ConnectorConfig.NAME_CONFIG, "CosmosDBSinkConnector")
  connectorProperties.put(ConnectorConfig.CONNECTOR_CLASS_CONFIG, "com.microsoft.azure.cosmosdb.kafka.connect.sink.CosmosDBSinkConnector")
  connectorProperties.put(ConnectorConfig.TASKS_MAX_CONFIG, "1")
  connectorProperties.put("connect.cosmosdb.connection.endpoint", "https://test-kafkaconnect.documents.azure.com:443/")
  connectorProperties.put("connect.cosmosdb.master.key", "5QGyQRtl4fEYT7seSBUiD2Sr0Upgvxm4KrkmeWbVavWAvyM3GQ03esjr8Qixul4MmohdAxAA35PLKpmF5vBvbQ==")
  connectorProperties.put("connect.cosmosdb.database", "test-kcdb")
  connectorProperties.put("connect.cosmosdb.collection", "labelCollection")
  connectorProperties.put("topics", COSMOSDB_TOPIC)
  connectorProperties.put("connect.cosmosdb.topic.name", COSMOSDB_TOPIC)
  connectorProperties.put(CosmosDBConfigConstants.ERRORS_RETRY_TIMEOUT_CONFIG, "3")
  connectorProperties.put(CosmosDBConfigConstants.SINK_POST_PROCESSOR, "com.microsoft.azure.cosmosdb.kafka.connect.processor.sink.SelectorSinkPostProcessor")
  connectorProperties
}

Error thrown:
c.m.a.c.k.c.processor.PostProcessor$ - Instantiating as Post-Processor
10:51:42 [pool-9-thread-1] ERROR o.a.kafka.connect.runtime.WorkerTask - WorkerSinkTask{id=CosmosDBSinkConnector-0} Task threw an uncaught and unrecoverable exception
java.lang.ClassNotFoundException:
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at com.microsoft.azure.cosmosdb.kafka.connect.processor.PostProcessor$.$anonfun$createPostProcessorList$1(PostProcessor.scala:25)
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237)
at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)

Expected behavior
The SinkConnector should work without a PostProcessor being configured.


Additional context
createPostProcessorList should return an empty list when no post-processor is defined:

def createPostProcessorList(processorClassNames: String, config: CosmosDBConfig): List[PostProcessor] =
  processorClassNames.split(',').filter(_.trim.nonEmpty).map { c =>
    logger.info(s"Instantiating ${c} as Post-Processor")
    val postProcessor = Class.forName(c).newInstance().asInstanceOf[PostProcessor]
    postProcessor.configure(config)
    postProcessor
  }.toList

Must have support for 1 or more topics in Sink

Kafka Connect allows multiple topics to be specified in a single connector configuration.
Refer to #23 for the related issue.

We therefore need to support the scenario where the Sink is configured with more than one topic.

  • Create a topic-to-collection map configuration, cosmosdb.collection.topicmap, that allows a user to control how topics map to collections. Optional; a possible format is sketched after this list.

  • If there is no map, assume collection = topic. If the collection does not exist, throw a non-retryable exception when attempting to write records.
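
A possible shape for the mapping, as a sketch only (the issue names the cosmosdb.collection.topicmap key but does not define its value format, so the topic#collection pairs below are an assumption):

# hypothetical value format: comma-separated topic#collection pairs
cosmosdb.collection.topicmap=orders#ordersCollection,customers#customersCollection
topics=orders,customers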

HandleRetriableError throws a NullPointerException

Describe the bug
After the latest code merge, running the SinkConnector causes HandleRetriableError to throw a NullPointerException.

To Reproduce
1. Verify that the Kafka SinkConnector is writing to Cosmos DB.
2. Update to the latest code from master and run the connector again.

java.lang.NullPointerException: null
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:58)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:75)
at com.microsoft.azure.cosmosdb.kafka.connect.config.CosmosDBConfig.<init>(CosmosDBConfig.scala:88)
at com.microsoft.azure.cosmosdb.kafka.connect.common.ErrorHandler.HandleRetriableError.HandleRetriableError(HandleRetriableError.scala:30)
at com.microsoft.azure.cosmosdb.kafka.connect.common.ErrorHandler.HandleRetriableError.HandleRetriableError$(HandleRetriableError.scala:27)
at com.microsoft.azure.cosmosdb.kafka.connect.sink.CosmosDBSinkConnector.HandleRetriableError(CosmosDBSinkConnector.scala:14)
at com.microsoft.azure.cosmosdb.kafka.connect.sink.CosmosDBSinkConnector.start(CosmosDBSinkConnector.scala:28)
at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:111)
at org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:136)
at org.apache.kafka.connect.runtime.WorkerConnector.transitionTo(WorkerConnector.java:196)
at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:231)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:908)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1300(DistributedHerder.java:110)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$15.call(DistributedHerder.java:924)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$15.call(DistributedHerder.java:920)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
10:54:20 [pool-11-thread-1] INFO c.m.a.c.k.c.s.CosmosDBSinkConnector - HandleRetriableError not initialized, getting max Retires value

Expected behavior
MaxRetries should be set to 3 by the HandleRetriableError class instead of throwing a NullPointerException.


Additional context
We need to check why the commonConfig instance is null. We already have a baseConfig class, so HandleRetriableError should utilize that instead.

Nice to have: Sink create Database and/or Collection if not exists

Add two new Boolean config options, createDatabase and createCollection.

If set to true, the Sink will attempt to create the database and collection(s) based on the database and collection config values (a config sketch follows below).

If the option is off and the collection doesn't exist, the existing behavior remains: a non-retryable error is thrown on connector start.
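
As a sketch only - the issue specifies the option names createDatabase and createCollection and their Boolean type, but not their final property keys, so the connect.cosmosdb prefix below is an assumption based on the other properties used in this repo:

# hypothetical keys; only the option names and Boolean type are from this issue
connect.cosmosdb.createdatabase=true
connect.cosmosdb.createcollection=true
connect.cosmosdb.database=test-kcdb
connect.cosmosdb.collection=labelCollection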

SinkPostProcessor is converting incoming JSON with id from String to Int

Describe the bug
The SinkPostProcessor converts the "id" field of incoming JSON from a string to an integer, e.g.
{ "id": "1", "label": "red" } becomes { "id": 1, "label": "red" }

To Reproduce
1. Run the SourceConnectorTest with the input { "id": "1", "label": "red" }.
2. Check the Kafka topic - the JSON in the topic is still { "id": "1", "label": "red" }.
3. Run the SinkConnectorTest with the SinkPostProcessor configured.
4. Examine the record from the consumer; it now shows { "id": 1, "label": "red" }.
5. The write to Cosmos DB fails, because Cosmos DB requires "id" to be a string:
   com.microsoft.azure.cosmosdb.DocumentClientException: Message: {"Errors":["One of the specified inputs is invalid"]}

Expected behavior
The SinkConnector should receive the JSON as { "id": "1", "label": "red" } so it can be written to Cosmos DB.
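
A minimal workaround sketch (illustrative only, not connector code; it assumes the document value is available as a JSON string and that Jackson is on the classpath): coerce a non-string id back to a string before the document is written.

import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.databind.node.ObjectNode

object EnsureStringId {
  private val mapper = new ObjectMapper()

  // If "id" is present but not textual (e.g. 1), rewrite it as a string ("1").
  def apply(json: String): String = {
    val node = mapper.readTree(json).asInstanceOf[ObjectNode]
    Option(node.get("id")).filterNot(_.isTextual).foreach(id => node.put("id", id.asText()))
    mapper.writeValueAsString(node)
  }
}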


Should have (mockable) unit tests for Source connector

As per https://docs.confluent.io/current/connect/devguide.html#testing:

Unit Tests

The connector classes should include unit tests to validate internal APIs. In particular, unit tests should be written for configuration validation, data conversion from the Kafka Connect framework to any data-system-specific types, and framework integration. Tools like PowerMock (https://github.com/jayway/powermock) can be utilized to facilitate testing of class methods independent of a running Kafka Connect environment.

Unit tests should therefore use a mocking framework, such as ScalaMock, and must not take a dependency on either Kafka or Cosmos DB (see the sketch below).

This is to ensure that unit tests cover only small units of work, not the infrastructure.

Automated integration tests that exercise the full end-to-end flow will be written and run separately (see the next issue).
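
A minimal ScalaMock sketch (assuming ScalaTest 3.x and ScalaMock; DocumentReader and SourceRecordBuilder are invented names for illustration, not types from this connector), showing how source-side logic can be tested against a mocked reader instead of a live Cosmos DB account:

import org.scalamock.scalatest.MockFactory
import org.scalatest.flatspec.AnyFlatSpec
import org.scalatest.matchers.should.Matchers

// Hypothetical abstraction over the Cosmos DB read path.
trait DocumentReader {
  def readBatch(maxItems: Int): Seq[String]
}

// Hypothetical unit under test: turns read documents into records, dropping empties.
class SourceRecordBuilder(reader: DocumentReader) {
  def poll(maxItems: Int): Seq[String] =
    reader.readBatch(maxItems).filter(_.nonEmpty)
}

class SourceRecordBuilderSpec extends AnyFlatSpec with Matchers with MockFactory {
  "poll" should "drop empty documents returned by the reader" in {
    val reader = mock[DocumentReader]
    (reader.readBatch _).expects(2).returning(Seq("""{"id":"1"}""", ""))
    new SourceRecordBuilder(reader).poll(2) shouldBe Seq("""{"id":"1"}""")
  }
}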

Must have full end-to-end integration tests

Using XUnit, build out full end-to-end functional tests that are callable from CI/CD.
These are not unit tests: they should run against an actual Kafka Connect environment (the Confluent Docker Compose setup) together with a Cosmos DB environment (the emulator is acceptable).

The component should be deployed to the Kafka Connect environment, and scripted tests should be performed for both Sink and Source.

Nice to have a Projection post processor to exclude/include specific fields

Related to #15

Using a Projection post processor to include or exclude fields

By default the connector persists the full value structure of each record. It would be nice to be able to configure a blacklist or whitelist approach in order to remove or keep specific fields from the record's structure.

Given the following fictional data record:

{
  "name": "Donald",
  "age": 36,
  "active": true,
  "address": { "city": "Austin", "country": "USA" },
  "food": ["Mexican", "Italian"]
}

Example blacklist projection:

cosmosdb.projection.type=blacklist
cosmosdb.projection.list=age,address.city

will result in:

{
  "name": "Donald",
  "active": true,
  "address": { "country": "USA" },
  "food": ["Mexican", "Italian"]
}

Example whitelist projection:

cosmosdb.projection.type=whitelist
cosmosdb.projection.list=age,address.city

will result in:

{
  "age": 36,
  "address": { "city": "Austin" }
}
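
A minimal sketch of how a blacklist projection could drop dotted paths from a JSON document; this is illustrative only and not the connector's implementation (the ProjectionSketch object, the removePath helper, and the use of Jackson are all assumptions):

import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.databind.node.ObjectNode

object ProjectionSketch {
  private val mapper = new ObjectMapper()

  // Remove a dotted path such as "address.city" from the given object node.
  def removePath(node: ObjectNode, path: String): Unit = {
    val parts = path.split('.')
    // Walk down to the parent object of the last segment, if it exists.
    val parent = parts.dropRight(1).foldLeft(Option(node)) {
      case (Some(n), key) => Option(n.get(key)).collect { case o: ObjectNode => o }
      case (None, _)      => None
    }
    parent.foreach(_.remove(parts.last))
  }

  def main(args: Array[String]): Unit = {
    val json =
      """{"name":"Donald","age":36,"active":true,
        |"address":{"city":"Austin","country":"USA"},"food":["Mexican","Italian"]}""".stripMargin
    val doc = mapper.readTree(json).asInstanceOf[ObjectNode]
    Seq("age", "address.city").foreach(removePath(doc, _))
    println(mapper.writeValueAsString(doc))
    // {"name":"Donald","active":true,"address":{"country":"USA"},"food":["Mexican","Italian"]}
  }
}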
