
cartridge-java's Introduction

Tarantool


Tarantool is an in-memory computing platform consisting of a database and an application server.

It is distributed under BSD 2-Clause terms.

Key features of the database:

  • MessagePack data format and MessagePack based client-server protocol.
  • Two data engines: 100% in-memory with complete WAL-based persistence, and its own LSM-tree implementation for use with large data sets.
  • Multiple index types: HASH, TREE, RTREE, BITSET.
  • Document oriented JSON path indexes.
  • Asynchronous master-master replication.
  • Synchronous quorum-based replication.
  • RAFT-based automatic leader election for the single-leader configuration.
  • Authentication and access control.
  • ANSI SQL, including views, joins, referential and check constraints.
  • Connectors for many programming languages.
  • The database is a C extension of the application server and can be turned off.

Supported platforms are Linux (x86_64, aarch64), Mac OS X (x86_64, M1), FreeBSD (x86_64).

Tarantool is ideal for data-enriched components of scalable Web architecture: queue servers, caches, stateful Web applications.

To download and install Tarantool as a binary package for your OS or using Docker, please see the download instructions.

To build Tarantool from source, see detailed instructions in the Tarantool documentation.

To find modules, connectors and tools for Tarantool, check out our Awesome Tarantool list.

Please report bugs to our issue tracker. We also warmly welcome your feedback on the discussions page and questions on Stack Overflow.

We accept contributions via pull requests. Check out our contributing guide.

Thank you for your interest in Tarantool!

cartridge-java's People

Contributors

akudiyar, anatennis, artdu, dependabot[bot], devrishal, dimoffon, dkasimovskiy, elishtar, idneprov, isopov, nickkkccc, savolgin, savolgin73, valery1707, vkartdu, vrogach2020, wey1and


cartridge-java's Issues

Implement truncate() method

The method must call crud.truncate or box.space.<space_obj>:truncate() for the cluster and standalone client implementations, respectively.
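The dispatch described above can be sketched as follows. This is a hypothetical illustration, not the driver's actual API: the class and method names are made up, and the real implementation would issue the call over the wire rather than build a string.

```java
// Hypothetical sketch: choosing the truncate call target depending on the
// client mode. Names here are illustrative, not the driver's real API.
final class TruncateCallBuilder {

    // For a cluster (CRUD-based) client, truncate goes through crud.truncate;
    // for a standalone client it maps to box.space.<space>:truncate().
    static String buildCall(boolean clusterMode, String spaceName) {
        if (clusterMode) {
            return "crud.truncate('" + spaceName + "')";
        }
        return "box.space." + spaceName + ":truncate()";
    }
}
```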

Tarantool errors are lost if driver calls CompletableFuture.get() with timeout args

TarantoolClient.callForSingleResult() returns CompletableFuture

get() without timeout args: TarantoolFunctionCallException with error info from Tarantool
Caused by: TarantoolFunctionCallException: {"type":"ClientError","message":"...' ...}
at io.tarantool.driver.api.SingleValueCallResultImpl.(SingleValueCallResultImpl.java:28)
at io.tarantool.driver.mappers.SingleValueCallResultConverter.fromValue(SingleValueCallResultConverter.java:23)
at io.tarantool.driver.mappers.SingleValueCallResultConverter.fromValue(SingleValueCallResultConverter.java:13)
at io.tarantool.driver.mappers.DefaultMessagePackMapper.fromValue(DefaultMessagePackMapper.java:107)
at io.tarantool.driver.mappers.DefaultMessagePackMapper.fromValue(DefaultMessagePackMapper.java:87)
at io.tarantool.driver.mappers.AbstractResultMapper.fromValue(AbstractResultMapper.java:27)
at io.tarantool.driver.handlers.TarantoolResponseHandler.channelRead0(TarantoolResponseHandler.java:51)
at io.tarantool.driver.handlers.TarantoolResponseHandler.channelRead0(TarantoolResponseHandler.java:23)

get() with timeout args: no error info from Tarantool
Caused by: java.util.concurrent.TimeoutException: null
at java.base/java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1886) ~[na:na]
at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2021) ~[na:na]
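The difference can be reproduced with a plain CompletableFuture: if the response handler throws before completing the future (as in the conversion error above), a timed get() can only raise a bare TimeoutException, while a future completed exceptionally preserves the server error as the cause. A minimal sketch, independent of the driver:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

final class TimedGetDemo {

    // Handler completes the future exceptionally: the cause survives timed get().
    static String causeMessage() {
        CompletableFuture<String> completed = new CompletableFuture<>();
        completed.completeExceptionally(new RuntimeException("ClientError from Tarantool"));
        try {
            completed.get(100, TimeUnit.MILLISECONDS);
            return null;
        } catch (ExecutionException e) {
            return e.getCause().getMessage();
        } catch (InterruptedException | TimeoutException e) {
            return null;
        }
    }

    // Handler throws before completing the future: the caller only ever sees
    // a TimeoutException with no Tarantool error info attached.
    static boolean timesOutWithoutCause() {
        CompletableFuture<String> neverCompleted = new CompletableFuture<>();
        try {
            neverCompleted.get(100, TimeUnit.MILLISECONDS);
            return false;
        } catch (TimeoutException e) {
            return true;
        } catch (InterruptedException | ExecutionException e) {
            return false;
        }
    }
}
```

This suggests the fix belongs in the response handler: it should complete the pending future exceptionally instead of letting the exception escape channelRead0.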

MessagePack allows NilValue as map value, but Java maps don't allow null as value

Suppose the driver receives a map from some stored function, where some values are MessagePack nulls (or box.NULL). In this case DefaultMapValueConverter will fail with an error like:

Caused by: java.lang.NullPointerException
	at java.util.HashMap.merge(HashMap.java:1225)
	at java.util.stream.Collectors.lambda$toMap$58(Collectors.java:1320)
	at java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
	at java.util.Iterator.forEachRemaining(Iterator.java:116)
	at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
	at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
	at io.tarantool.driver.mappers.DefaultMapValueConverter.fromValue(DefaultMapValueConverter.java:24)
	at io.tarantool.driver.mappers.DefaultMapValueConverter.fromValue(DefaultMapValueConverter.java:13)
	at io.tarantool.driver.mappers.DefaultMessagePackMapper.fromValue(DefaultMessagePackMapper.java:107)
	at io.tarantool.driver.mappers.DefaultMessagePackMapper.fromValue(DefaultMessagePackMapper.java:87)

This happens because the map collector does not allow nulls as Map.Entry values.

DefaultMapValueConverter needs to filter out the pairs with null values.
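The fix can be sketched with plain streams: Collectors.toMap() delegates to HashMap.merge(), which rejects null values (matching the stack trace above), so nil-valued entries must be filtered out before collecting. A minimal sketch, not the converter's actual code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Collectors;

final class NullSafeMapCollect {

    // Collectors.toMap() throws NullPointerException from HashMap.merge()
    // when an entry value is null, so drop such entries up front.
    static Map<String, Object> fromEntries(Map<String, Object> raw) {
        return raw.entrySet().stream()
                .filter(e -> e.getValue() != null) // skip MessagePack nil values
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }
}
```

An alternative is to collect into a HashMap manually (which tolerates nulls) if null values should be preserved rather than dropped.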

Running integration tests requires additional, undocumented environment setup. Error: /app/.rocks/share/tarantool/rocks does not exist and your user does not have write permissions in /app.

Tested on Ubuntu.
When you simply run the integration tests, they fail because the Docker container fails to bootstrap:

10:35:03.894 [main] ERROR 🐳 [testcontainers/wzumtjtg9x4spqhp:latest] - Log output from the failed container:

  • Build application in /app

  • Running `tarantoolctl rocks make`



Error: /app/.rocks/share/tarantool/rocks does not exist and your user does not have write permissions in /app

To fix this, you need to add the following variables to the environment:

export TARANTOOL_SERVER_USER=<current user>
export TARANTOOL_SERVER_GROUP=<current group>

We need to change this behavior or add these setup notes to the README.

The schema is not received correctly out-of-the-box on Cartridge instances

Problem statement

Currently, by default the ProxyTarantoolClient uses the cartridge.get_schema function for retrieving the schema (see String SCHEMA_FUNCTION = "cartridge.get_schema";). The function result is treated as the result of the ddl.get_schema API (see https://github.com/tarantool/ddl#input-data-format), but in fact it returns a YAML string (see https://www.tarantool.io/en/doc/latest/book/cartridge/cartridge_api/modules/cartridge/#get-schema). Also, the README doesn't mention how to set up the ProxyTarantoolClient to use a custom function (e.g. ddl.get_schema). This leads to errors when a user tries to use the ProxyTarantoolClient without additional wrappers.

TODO

  • Support selecting the metadata converter that will be used for converting the metadata function result
  • Move the current proxy metadata converter to a "DDL schema" converter variant
  • Parse and use the YAML-format schema returned by cartridge.get_schema in a "YAML schema" metadata converter variant
  • Add a section to the README describing Proxy client customization using ddl.get_schema

Add default Long-to-Double, Integer-to-Float converters

In Tarantool 1.10 there is no "double" type in the space format; floating-point numbers are written into fields of the "number" type.
This causes a problem when writing rounded values to a space and expecting them to come back as float values -- they are converted into an integer MsgPack value.

Example:

  1. Create a space with format {{name = 'id', type = 'unsigned'}, {name = 'value', type = 'number'}}
  2. Write a float value to it: box.space.test:insert({1, 1.0})
  3. Try to get it as a Float or Double using the driver -- it fails with the exception "ValueConverter not found for type ImmutableLongValueImpl".

Possible workarounds:

  1. Use a custom ValueConverter (currently affected by #42 )
  2. Use ffi.cast() for the number value before passing it back from the space in a stored function (not applicable when using tarantool/crud or the box interface)

Expected behavior

Integer/Long values are correctly converted into a Float/Double container

Actual behavior

There is no default converter between these value pairs
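The expected behavior can be sketched as a small registry of default numeric converters. This is an illustrative standalone sketch, not the driver's mapper API: the class name and registry shape are assumptions.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of default Long -> Double and Integer -> Float
// conversions, so that integer MsgPack values read from a "number" field
// can still be returned to the caller as floating-point values.
final class NumberConverters {

    private static final Map<Class<?>, Function<Number, ?>> CONVERTERS = new HashMap<>();

    static {
        CONVERTERS.put(Double.class, n -> n.doubleValue());
        CONVERTERS.put(Float.class, n -> n.floatValue());
    }

    @SuppressWarnings("unchecked")
    static <T> T convert(Number value, Class<T> target) {
        Function<Number, ?> converter = CONVERTERS.get(target);
        if (converter == null) {
            throw new IllegalArgumentException("No converter for " + target);
        }
        return (T) converter.apply(value);
    }
}
```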

Add a constant or convenience methods for creating Conditions with the PK.

For example, currently I need to write

Conditions.indexEquals(0, indexParts)
Conditions.indexEquals(TarantoolIndexQuery.PRIMARY, indexParts)

It's not obvious that I should use constants from the TarantoolIndexQuery class, or that the constant value must be 0 (not 1).
I suggest improving the Conditions class API to make the following possible:

Conditions.indexEquals(Conditions.PK_INDEX, indexParts)
Conditions.primaryIndexEquals(indexParts)
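The suggestion above could be sketched as follows. The class and method bodies are hypothetical placeholders; only the idea of a named constant plus a delegating convenience method comes from the issue.

```java
// Illustrative sketch of the proposed Conditions additions; not the real class.
final class ConditionsSketch {

    // In Tarantool, the primary index of a space always has id 0.
    public static final int PK_INDEX = 0;

    static String indexEquals(int indexId, Object indexParts) {
        return "index " + indexId + " == " + indexParts; // placeholder body
    }

    // Convenience method so callers don't need to know the magic value 0.
    static String primaryIndexEquals(Object indexParts) {
        return indexEquals(PK_INDEX, indexParts);
    }
}
```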

Close connections to nodes not returned by address provider

Currently, the existing connections to addresses that were not returned by the address provider during the last connection sequence are not closed after the new connections are established, although new requests will not be dispatched to them. If such connections are not closed by the server, they may stay alive for a long time and consume client resources.

These "old" connections must be closed after the new ones are established, unless they have already been closed by a disconnect event.
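The selection of connections to close is a set difference: currently connected addresses minus the addresses returned by the provider. A minimal sketch (address strings stand in for real connection objects):

```java
import java.util.HashSet;
import java.util.Set;

final class StaleConnections {

    // Connections to close after a refresh: everything we are connected to
    // that the address provider no longer returned.
    static Set<String> toClose(Set<String> connected, Set<String> provided) {
        Set<String> stale = new HashSet<>(connected);
        stale.removeAll(provided);
        return stale;
    }
}
```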

Do not fail the request with TimeoutException if a connection fails and other connections are available

Currently, when a connection fails (a disconnect event is emitted by Netty), all in-flight requests on that connection that have not yet received a response will fail with TimeoutException once the configured request timeout is exceeded.

This may cause some requests to wait unnecessarily for the timeout when it is already known that the server will not respond. In this case, we may either fail these requests immediately on the disconnect event or retry them internally, switching them to other alive connections.

The first variant seems better, since the requests may not be idempotent, and the decision to repeat a request lies with the user.
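The first variant amounts to failing all pending futures of a connection as soon as the disconnect event arrives. A minimal sketch of that bookkeeping (the class and method names are illustrative, not the driver's internals):

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: per-connection registry of in-flight requests, failed eagerly
// on disconnect instead of waiting for the request timeout.
final class InFlightRequests {

    private final Map<Long, CompletableFuture<Object>> pending = new ConcurrentHashMap<>();

    CompletableFuture<Object> register(long requestId) {
        CompletableFuture<Object> future = new CompletableFuture<>();
        pending.put(requestId, future);
        return future;
    }

    // Called from the disconnect event handler: the caller learns about the
    // failure immediately, with the connection error as the cause.
    void failAll(Throwable cause) {
        for (CompletableFuture<Object> future : pending.values()) {
            future.completeExceptionally(cause);
        }
        pending.clear();
    }
}
```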

Consider connection successful after establishing at least one connection

Use case:

  1. Driver connects to a proxy with several connections
  2. Proxy connects to several routers
  3. If the driver connects to several routers but then some of them go down, the connection will not be considered established until the number of routers equals the "connections" setting. This works when several connections are established to one router, but it may overload that single router once many connections stack up on it.

Implement statistics API

For monitoring the internal driver state under load, the users should be able to get the following momentary parameters from the driver API:

  • Current number of connections (dead, alive)
  • Total number of processed requests
  • For each connection:
    • Time when the connection was established
    • Total number of processed requests
    • Number of requests currently being processed
    • Last time of any activity within the connection

Requirements:

  • Performance of the operations with the driver shouldn't be noticeably affected
  • Collecting the statistics via API should not affect the performance (it may be called periodically by the user)
  • If the driver is closed, its "current" counters must be equal to zero
  • When the driver is created, all counters must be initialized as zero

Proposed API:

New method TarantoolClient#statistics() which returns the following structure:

class TarantoolStatistics {
     public int connectionsTotal;
     public int connectionsAlive;
     public BigDecimal requestsTotal;
     public Map<String, List<TarantoolConnectionStatistics>> connectionsStatistics; // Remote address mapped to connections
}

class TarantoolConnectionStatistics {
     public boolean isAlive;
     public LocalDateTime establishedAt;
     public LocalDateTime closedAt; // will be null if is not closed yet
     public BigDecimal requestsTotal;
     public int requestsCurrent;
     public LocalDateTime lastActivityTime; // may be not very accurate, with the precision of milliseconds
}
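The requirement that collecting statistics must not affect performance suggests lock-free counters on the hot path. A sketch using java.util.concurrent.atomic.LongAdder, which is cheap to increment under contention and cheap to read periodically (the class and method names are assumptions):

```java
import java.util.concurrent.atomic.LongAdder;

// Sketch of per-connection counters: LongAdder adds negligible overhead on
// the request path and sum() can be called periodically by the user.
final class ConnectionCounters {

    private final LongAdder requestsTotal = new LongAdder();
    private final LongAdder requestsCurrent = new LongAdder();

    void onRequestSent() {
        requestsTotal.increment();
        requestsCurrent.increment();
    }

    void onResponseReceived() {
        requestsCurrent.decrement();
    }

    long requestsTotal() {
        return requestsTotal.sum();
    }

    long requestsCurrent() {
        return requestsCurrent.sum();
    }
}
```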

Custom object converter is not applied on complex types

If you create a custom object converter

private val timestampConverter = new ObjectConverter[Timestamp, StringValue] {
    override def toValue(obj: Timestamp): StringValue = ValueFactory.newString(obj.getTime.toString)
}

and then register it in the default simple or complex types mapper (from DefaultMessagePackMapperFactory), it is not applied for complex types (like Map, List, ArrayList), and at runtime the following exception is thrown:

io.tarantool.driver.mappers.MessagePackObjectMapperException: ObjectConverter for type class java.sql.Timestamp is not found
at io.tarantool.driver.mappers.DefaultMessagePackMapper.toValue(DefaultMessagePackMapper.java:61)
at io.tarantool.driver.api.tuple.TarantoolFieldImpl.getEntity(TarantoolFieldImpl.java:50)
at io.tarantool.driver.api.tuple.TarantoolFieldImpl.toMessagePackValue(TarantoolFieldImpl.java:39)
at io.tarantool.driver.mappers.DefaultPackableObjectConverter.toValue(DefaultPackableObjectConverter.java:20)
at io.tarantool.driver.mappers.DefaultPackableObjectConverter.toValue(DefaultPackableObjectConverter.java:11)
at io.tarantool.driver.mappers.DefaultMessagePackMapper.toValue(DefaultMessagePackMapper.java:63)
at io.tarantool.driver.mappers.DefaultListObjectConverter.lambda$toValue$0(DefaultListObjectConverter.java:26)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)

This is because the complex type mappers also contain their own list of converters for simple types (which is just a copy from the default mapper; see the copy constructor of DefaultMessagePackMapper).
The only workaround I've found is to create an instance of DefaultMessagePackMapper manually:

private val tmpMapper = DefaultMessagePackMapperFactory.getInstance().defaultSimpleTypeMapper()
private val timestampConverter = new ObjectConverter[Timestamp, StringValue] {
  override def toValue(obj: Timestamp): StringValue = ValueFactory.newString(obj.getTime.toString)
}
tmpMapper.registerObjectConverter(timestampConverter)
val mapper = new DefaultMessagePackMapper.Builder(tmpMapper)
  .withDefaultListObjectConverter()
  .withDefaultArrayValueConverter()
  .withDefaultMapObjectConverter()
  .withDefaultMapValueConverter()
  .build();
mapper.registerObjectConverter(new DefaultPackableObjectConverter(mapper))

val config = TarantoolClientConfig.builder()
  .withMessagePackMapper(mapper)

But I suppose this is still a bug, and the user should only have to use DefaultMessagePackMapperFactory in such a case.

[build] [ubuntu] After executing integration tests usual test compilation fails without running "mvn clean"

Maven version:

 mvn --version
Apache Maven 3.6.3
Maven home: /usr/share/maven
Java version: 1.8.0_275, vendor: Private Build, runtime: /usr/lib/jvm/java-8-openjdk-amd64/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "5.4.0-53-generic", arch: "amd64", family: "unix"

Steps to reproduce on ubuntu:

  1. Run integration tests
mvn clean test -Pintegration
  2. Run unit tests without clean
mvn test
Maven error during testCompile
....
[INFO] --- maven-compiler-plugin:3.8.0:testCompile (default-testCompile) @ cartridge-driver ---
[DEBUG] Configuring mojo org.apache.maven.plugins:maven-compiler-plugin:3.8.0:testCompile from plugin realm ClassRealm[plugin>org.apache.maven.plugins:maven-compiler-plugin:3.8.0, parent: sun.misc.Launcher$AppClassLoader@1b6d3586]
[DEBUG] Configuring mojo 'org.apache.maven.plugins:maven-compiler-plugin:3.8.0:testCompile' with basic configurator -->
[DEBUG]   (f) basedir = /home/v/work/git/cartridge-java
[DEBUG]   (f) buildDirectory = /home/v/work/git/cartridge-java/target
[DEBUG]   (f) compilePath = [/home/v/work/git/cartridge-java/target/classes, /home/v/.m2/repository/io/netty/netty-all/4.1.50.Final/netty-all-4.1.50.Final.jar, /home/v/.m2/repository/org/msgpack/msgpack-core/0.8.20/msgpack-core-0.8.20.jar, /home/v/.m2/repository/org/slf4j/slf4j-api/1.7.30/slf4j-api-1.7.30.jar, /home/v/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.11.2/jackson-databind-2.11.2.jar, /home/v/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.11.2/jackson-annotations-2.11.2.jar, /home/v/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.11.2/jackson-core-2.11.2.jar]
[DEBUG]   (f) compileSourceRoots = [/home/v/work/git/cartridge-java/src/test/java]
[DEBUG]   (f) compilerId = javac
[DEBUG]   (f) compilerVersion = 1.8
[DEBUG]   (f) debug = true
[DEBUG]   (f) encoding = UTF-8
[DEBUG]   (f) failOnError = true
[DEBUG]   (f) failOnWarning = false
[DEBUG]   (f) forceJavacCompilerUse = false
[DEBUG]   (f) fork = true
[DEBUG]   (f) generatedTestSourcesDirectory = /home/v/work/git/cartridge-java/target/generated-test-sources/test-annotations
[DEBUG]   (f) mojoExecution = org.apache.maven.plugins:maven-compiler-plugin:3.8.0:testCompile {execution: default-testCompile}
[DEBUG]   (f) optimize = true
[DEBUG]   (f) outputDirectory = /home/v/work/git/cartridge-java/target/test-classes
[DEBUG]   (f) parameters = false
[DEBUG]   (f) project = MavenProject: io.tarantool:cartridge-driver:1.0.0-SNAPSHOT @ /home/v/work/git/cartridge-java/pom.xml
[DEBUG]   (f) session = org.apache.maven.execution.MavenSession@2c16fadb
[DEBUG]   (f) showDeprecation = true
[DEBUG]   (f) showWarnings = true
[DEBUG]   (f) skipMultiThreadWarning = false
[DEBUG]   (f) source = 8
[DEBUG]   (f) staleMillis = 0
[DEBUG]   (s) target = 8
[DEBUG]   (f) testPath = [/home/v/work/git/cartridge-java/target/test-classes, /home/v/work/git/cartridge-java/target/classes, /home/v/.m2/repository/io/netty/netty-all/4.1.50.Final/netty-all-4.1.50.Final.jar, /home/v/.m2/repository/org/msgpack/msgpack-core/0.8.20/msgpack-core-0.8.20.jar, /home/v/.m2/repository/org/slf4j/slf4j-api/1.7.30/slf4j-api-1.7.30.jar, /home/v/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.11.2/jackson-databind-2.11.2.jar, /home/v/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.11.2/jackson-annotations-2.11.2.jar, /home/v/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.11.2/jackson-core-2.11.2.jar, /home/v/.m2/repository/org/junit/jupiter/junit-jupiter/5.6.2/junit-jupiter-5.6.2.jar, /home/v/.m2/repository/org/junit/jupiter/junit-jupiter-api/5.6.2/junit-jupiter-api-5.6.2.jar, /home/v/.m2/repository/org/apiguardian/apiguardian-api/1.1.0/apiguardian-api-1.1.0.jar, /home/v/.m2/repository/org/opentest4j/opentest4j/1.2.0/opentest4j-1.2.0.jar, /home/v/.m2/repository/org/junit/platform/junit-platform-commons/1.6.2/junit-platform-commons-1.6.2.jar, /home/v/.m2/repository/org/junit/jupiter/junit-jupiter-params/5.6.2/junit-jupiter-params-5.6.2.jar, /home/v/.m2/repository/org/junit/jupiter/junit-jupiter-engine/5.6.2/junit-jupiter-engine-5.6.2.jar, /home/v/.m2/repository/org/junit/platform/junit-platform-engine/1.6.2/junit-platform-engine-1.6.2.jar, /home/v/.m2/repository/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar, /home/v/.m2/repository/ch/qos/logback/logback-core/1.2.3/logback-core-1.2.3.jar, /home/v/.m2/repository/org/testcontainers/testcontainers/1.15.0-rc2/testcontainers-1.15.0-rc2.jar, /home/v/.m2/repository/junit/junit/4.12/junit-4.12.jar, /home/v/.m2/repository/org/hamcrest/hamcrest-core/1.3/hamcrest-core-1.3.jar, /home/v/.m2/repository/org/apache/commons/commons-compress/1.20/commons-compress-1.20.jar, /home/v/.m2/repository/org/rnorth/duct-tape/duct-tape/1.0.8/duct-tape-1.0.8.jar, 
/home/v/.m2/repository/org/rnorth/visible-assertions/visible-assertions/2.1.2/visible-assertions-2.1.2.jar, /home/v/.m2/repository/net/java/dev/jna/jna/5.2.0/jna-5.2.0.jar, /home/v/.m2/repository/com/github/docker-java/docker-java-api/3.2.5/docker-java-api-3.2.5.jar, /home/v/.m2/repository/com/github/docker-java/docker-java-transport-zerodep/3.2.5/docker-java-transport-zerodep-3.2.5.jar, /home/v/.m2/repository/com/github/docker-java/docker-java-transport/3.2.5/docker-java-transport-3.2.5.jar, /home/v/.m2/repository/org/testcontainers/junit-jupiter/1.14.3/junit-jupiter-1.14.3.jar, /home/v/.m2/repository/io/tarantool/testcontainers-java-tarantool/0.2.1-SNAPSHOT-vr/testcontainers-java-tarantool-0.2.1-SNAPSHOT-vr.jar, /home/v/.m2/repository/io/tarantool/driver/0.1.1/driver-0.1.1.jar, /home/v/.m2/repository/org/springframework/spring-core/5.2.6.RELEASE/spring-core-5.2.6.RELEASE.jar, /home/v/.m2/repository/org/springframework/spring-jcl/5.2.6.RELEASE/spring-jcl-5.2.6.RELEASE.jar, /home/v/.m2/repository/com/google/code/findbugs/jsr305/3.0.2/jsr305-3.0.2.jar, /home/v/.m2/repository/org/yaml/snakeyaml/1.26/snakeyaml-1.26.jar]
[DEBUG]   (f) useIncrementalCompilation = true
[DEBUG]   (f) verbose = false
[DEBUG] -- end configuration --
[DEBUG] Using compiler 'javac'.
[DEBUG] Adding /home/v/work/git/cartridge-java/target/generated-test-sources/test-annotations to test-compile source roots:
  /home/v/work/git/cartridge-java/src/test/java
[DEBUG] New test-compile source roots:
  /home/v/work/git/cartridge-java/src/test/java
  /home/v/work/git/cartridge-java/target/generated-test-sources/test-annotations
[DEBUG] CompilerReuseStrategy: reuseCreated
[DEBUG] useIncrementalCompilation enabled
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  5.071 s
[INFO] Finished at: 2020-11-20T12:47:59+03:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.8.0:testCompile (default-testCompile) on project cartridge-driver: Execution default-testCompile of goal org.apache.maven.plugins:maven-compiler-plugin:3.8.0:testCompile failed.: NullPointerException -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.8.0:testCompile (default-testCompile) on project cartridge-driver: Execution default-testCompile of goal org.apache.maven.plugins:maven-compiler-plugin:3.8.0:testCompile failed.
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:215)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
    at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
    at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
    at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
    at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
    at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
    at org.apache.maven.cli.MavenCli.execute (MavenCli.java:957)
    at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:289)
    at org.apache.maven.cli.MavenCli.main (MavenCli.java:193)
    at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke (Method.java:498)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
    at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406)
    at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
Caused by: org.apache.maven.plugin.PluginExecutionException: Execution default-testCompile of goal org.apache.maven.plugins:maven-compiler-plugin:3.8.0:testCompile failed.
    at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:148)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
    at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
    at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
    at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
    at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
    at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
    at org.apache.maven.cli.MavenCli.execute (MavenCli.java:957)
    at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:289)
    at org.apache.maven.cli.MavenCli.main (MavenCli.java:193)
    at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke (Method.java:498)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
    at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406)
    at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
Caused by: java.lang.NullPointerException
    at org.apache.maven.plugin.compiler.AbstractCompilerMojo.hasNewFile (AbstractCompilerMojo.java:1628)
    at org.apache.maven.plugin.compiler.AbstractCompilerMojo.hasNewFile (AbstractCompilerMojo.java:1630)
    at org.apache.maven.plugin.compiler.AbstractCompilerMojo.hasNewFile (AbstractCompilerMojo.java:1630)
    at org.apache.maven.plugin.compiler.AbstractCompilerMojo.hasNewFile (AbstractCompilerMojo.java:1630)
    at org.apache.maven.plugin.compiler.AbstractCompilerMojo.hasNewFile (AbstractCompilerMojo.java:1630)
    at org.apache.maven.plugin.compiler.AbstractCompilerMojo.isDependencyChanged (AbstractCompilerMojo.java:1596)
    at org.apache.maven.plugin.compiler.AbstractCompilerMojo.execute (AbstractCompilerMojo.java:793)
    at org.apache.maven.plugin.compiler.TestCompilerMojo.execute (TestCompilerMojo.java:181)
    at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
    at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
    at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
    at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
    at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
    at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
    at org.apache.maven.cli.MavenCli.execute (MavenCli.java:957)
    at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:289)
    at org.apache.maven.cli.MavenCli.main (MavenCli.java:193)
    at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke (Method.java:498)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
    at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406)
    at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
[ERROR]
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException

Disable IOException warning when driver is disconnected from the routers

When the driver is disconnected from the routers (session timeout), we don't want to see an IOException warning in the log.
INFO messages about connect/disconnect events should not be disabled.

2021-03-16 12:47:47.698  INFO 1 --- [ntLoopGroup-2-2] .t.d.c.TarantoolClusterConnectionManager : Disconnected from some-server
2021-03-16 12:47:47.696  WARN 1 --- [ntLoopGroup-2-2] io.netty.channel.DefaultChannelPipeline  : An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler
 in the pipeline did not handle the exception.

java.io.IOException: Connection reset by peer
        at java.base/sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:na]
        at java.base/sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[na:na]
        at java.base/sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:276) ~[na:na]
        at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:233) ~[na:na]
        at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:223) ~[na:na]
        at java.base/sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:358) ~[na:na]
        at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253) ~[netty-all-4.1.52.Final.jar!/:4.1.52.Final]
        at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1133) ~[netty-all-4.1.52.Final.jar!/:4.1.52.Final]
        at io.netty.buffer.WrappedByteBuf.writeBytes(WrappedByteBuf.java:821) ~[netty-all-4.1.52.Final.jar!/:4.1.52.Final]
        at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350) ~[netty-all-4.1.52.Final.jar!/:4.1.52.Final]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:151) ~[netty-all-4.1.52.Final.jar!/:4.1.52.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) ~[netty-all-4.1.52.Final.jar!/:4.1.52.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650) ~[netty-all-4.1.52.Final.jar!/:4.1.52.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576) ~[netty-all-4.1.52.Final.jar!/:4.1.52.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) ~[netty-all-4.1.52.Final.jar!/:4.1.52.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[netty-all-4.1.52.Final.jar!/:4.1.52.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-all-4.1.52.Final.jar!/:4.1.52.Final]
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-all-4.1.52.Final.jar!/:4.1.52.Final]
        at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]

Field nullability

It would be good to expose a nullability attribute from TarantoolFieldMetadata (the equivalent of the is_nullable attribute from the space format in Tarantool).
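A minimal sketch of how the flag could be read from the space format map; the class and getter names here are illustrative, not the actual driver API:

```java
import java.util.Map;

// Hypothetical field metadata exposing the is_nullable flag from the space format.
public class FieldMetadataSketch {
    private final String name;
    private final String type;
    private final boolean isNullable;

    public FieldMetadataSketch(Map<String, Object> formatEntry) {
        this.name = (String) formatEntry.get("name");
        this.type = (String) formatEntry.get("type");
        // Tarantool may omit is_nullable for non-nullable fields, so default to false
        this.isNullable = Boolean.TRUE.equals(formatEntry.get("is_nullable"));
    }

    public String getName() { return name; }
    public String getType() { return type; }
    public boolean getIsNullable() { return isNullable; }
}
```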

Implement Tarantool version verification for feature toggling

Problem statement

Currently, the Tarantool version is not checked, which may lead to the user receiving server errors or missing features such as updates by JSON paths, which are available since Tarantool 2.3.

Proposed solution

Implement checking the Tarantool version and a service for fencing the parts of code depending on the current Tarantool version:

  • parse the Tarantool header into parts (currently the version is not parsed)
  • use the version holder in a singleton service
  • add new exception type
  • make the version holder stick with the connection and implement cross-checking of versions in connections (either forbid connecting one driver instance to servers with different versions, or maintain the minimal version?)
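The first bullet could be sketched like this. The greeting line format ("Tarantool <version> ...") comes from the server handshake; the class name and the feature-check method are assumptions for illustration:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Parses the server version out of the Tarantool greeting line,
// e.g. "Tarantool 2.8.1 (Binary) 6d9f..." -> 2.8.1
public final class TarantoolVersionSketch implements Comparable<TarantoolVersionSketch> {
    private static final Pattern GREETING =
            Pattern.compile("Tarantool\\s+(\\d+)\\.(\\d+)\\.(\\d+)");

    public final int major;
    public final int minor;
    public final int patch;

    private TarantoolVersionSketch(int major, int minor, int patch) {
        this.major = major;
        this.minor = minor;
        this.patch = patch;
    }

    public static TarantoolVersionSketch fromGreeting(String greeting) {
        Matcher m = GREETING.matcher(greeting);
        if (!m.find()) {
            throw new IllegalArgumentException("Unrecognized Tarantool greeting: " + greeting);
        }
        return new TarantoolVersionSketch(Integer.parseInt(m.group(1)),
                Integer.parseInt(m.group(2)), Integer.parseInt(m.group(3)));
    }

    // Feature toggle example: updates by JSON path are available since Tarantool 2.3
    public boolean supportsJsonPathUpdates() {
        return compareTo(new TarantoolVersionSketch(2, 3, 0)) >= 0;
    }

    @Override
    public int compareTo(TarantoolVersionSketch o) {
        if (major != o.major) return Integer.compare(major, o.major);
        if (minor != o.minor) return Integer.compare(minor, o.minor);
        return Integer.compare(patch, o.patch);
    }
}
```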

If a request is failed due to the connection error, retry it after connection re-established

Problem statement

This is an example of a built-in request retrying policy.

In case of a connection failure, the in-flight requests that were not submitted to the server should be resubmitted after the connection is restored (possibly a new set of servers is received from discovery and new connections are established). This must be a configurable request failure handling policy, bound to the specific kinds of failures.

Requirements

  1. Different (including user-defined) request failure handlers must be supported, which allow dealing with request failures depending on the kind of error (exception type?)
  2. The request retry policy described above for connection failure errors must be available by default.

In case of lengthy exception stacktrace, the delay between retries may be very long

Description:

The retry attempts are configured with a 20 ms delay, but the actual delay between attempts is close to 500 ms when the exception has ~30 stack frames.

Example stacktrace:

2021-03-08 12:27:41.001 WARN 1222 --- [Pool-1-worker-3] r.v.c.c.s.c.c.t.TarantoolConfiguration : retry attempt after error
java.util.concurrent.CompletionException: TarantoolFunctionCallException: {"type":"ClientError","code":77,"message":"Connection reset by peer","trace":[{"file":"builtin/box/net_box.lua","line":263}]}
at java.base/java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:367) ~[na:na]
at java.base/java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:376) ~[na:na]
at java.base/java.util.concurrent.CompletableFuture$UniRelay.tryFire(CompletableFuture.java:1019) ~[na:na]
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[na:na]
at java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[na:na]
at io.tarantool.driver.handlers.TarantoolResponseHandler.channelRead0(TarantoolResponseHandler.java:53) ~[cartridge-driver-0.4.1.jar:na]
at io.tarantool.driver.handlers.TarantoolResponseHandler.channelRead0(TarantoolResponseHandler.java:23) ~[cartridge-driver-0.4.1.jar:na]
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99) ~[netty-all-4.1.54.Final.jar:4.1.54.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-all-4.1.54.Final.jar:4.1.54.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-all-4.1.54.Final.jar:4.1.54.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[netty-all-4.1.54.Final.jar:4.1.54.Final]
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324) ~[netty-all-4.1.54.Final.jar:4.1.54.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296) ~[netty-all-4.1.54.Final.jar:4.1.54.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-all-4.1.54.Final.jar:4.1.54.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-all-4.1.54.Final.jar:4.1.54.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[netty-all-4.1.54.Final.jar:4.1.54.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[netty-all-4.1.54.Final.jar:4.1.54.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-all-4.1.54.Final.jar:4.1.54.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-all-4.1.54.Final.jar:4.1.54.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[netty-all-4.1.54.Final.jar:4.1.54.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) ~[netty-all-4.1.54.Final.jar:4.1.54.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) ~[netty-all-4.1.54.Final.jar:4.1.54.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655) ~[netty-all-4.1.54.Final.jar:4.1.54.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581) ~[netty-all-4.1.54.Final.jar:4.1.54.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) ~[netty-all-4.1.54.Final.jar:4.1.54.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[netty-all-4.1.54.Final.jar:4.1.54.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-all-4.1.54.Final.jar:4.1.54.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-all-4.1.54.Final.jar:4.1.54.Final]
at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]
Caused by: io.tarantool.driver.exceptions.TarantoolFunctionCallException: null
at io.tarantool.driver.api.SingleValueCallResultImpl.<init>(SingleValueCallResultImpl.java:28) ~[cartridge-driver-0.4.1.jar:na]
at io.tarantool.driver.mappers.SingleValueCallResultConverter.fromValue(SingleValueCallResultConverter.java:23) ~[cartridge-driver-0.4.1.jar:na]
at io.tarantool.driver.mappers.SingleValueCallResultConverter.fromValue(SingleValueCallResultConverter.java:13) ~[cartridge-driver-0.4.1.jar:na]
at io.tarantool.driver.mappers.DefaultMessagePackMapper.fromValue(DefaultMessagePackMapper.java:107) ~[cartridge-driver-0.4.1.jar:na]
at io.tarantool.driver.mappers.DefaultMessagePackMapper.fromValue(DefaultMessagePackMapper.java:87) ~[cartridge-driver-0.4.1.jar:na]
at io.tarantool.driver.mappers.AbstractResultMapper.fromValue(AbstractResultMapper.java:27) ~[cartridge-driver-0.4.1.jar:na]
at io.tarantool.driver.handlers.TarantoolResponseHandler.channelRead0(TarantoolResponseHandler.java:51) ~[cartridge-driver-0.4.1.jar:na]
... 23 common frames omitted
2021-03-08 12:27:41.498 WARN 1222 --- [Pool-1-worker-3] r.v.c.c.s.c.c.t.TarantoolConfiguration : retry attempt after error
io.tarantool.driver.exceptions.TarantoolFunctionCallException: null
at io.tarantool.driver.api.SingleValueCallResultImpl.<init>(SingleValueCallResultImpl.java:28) ~[cartridge-driver-0.4.1.jar:na]
at io.tarantool.driver.mappers.SingleValueCallResultConverter.fromValue(SingleValueCallResultConverter.java:23) ~[cartridge-driver-0.4.1.jar:na]
at io.tarantool.driver.mappers.SingleValueCallResultConverter.fromValue(SingleValueCallResultConverter.java:13) ~[cartridge-driver-0.4.1.jar:na]
at io.tarantool.driver.mappers.DefaultMessagePackMapper.fromValue(DefaultMessagePackMapper.java:107) ~[cartridge-driver-0.4.1.jar:na]
at io.tarantool.driver.mappers.DefaultMessagePackMapper.fromValue(DefaultMessagePackMapper.java:87) ~[cartridge-driver-0.4.1.jar:na]

Possible root cause:

Exception generation in Java is slow; most of the cost is in filling in the stack trace.

Possible solution:

Introduce exception caching (reuse pre-created exception instances instead of constructing a new one per retry).
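One way to make cached exceptions safe is the protected java.lang.Throwable constructor with writableStackTrace = false, which skips the expensive fillInStackTrace() call entirely. The class name below is illustrative:

```java
// A retry-signal exception that skips stack trace capture, so a single
// instance can be cached and reused between retry attempts.
public class CachedRetryException extends RuntimeException {
    public static final CachedRetryException INSTANCE =
            new CachedRetryException("connection reset, retrying");

    public CachedRetryException(String message) {
        // enableSuppression = false, writableStackTrace = false:
        // no fillInStackTrace() cost on construction
        super(message, null, false, false);
    }
}
```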

If an address provider returns null, it produces NPE when establishing connections

Although this exception is then wrapped into a TarantoolConnectionException, a clearer exception should be thrown rather than a raw NPE:

Caused by: io.tarantool.driver.exceptions.TarantoolConnectionException: The client is not connected to Tarantool server
	at io.tarantool.driver.core.AbstractTarantoolConnectionManager.lambda$getConnection$0(AbstractTarantoolConnectionManager.java:96) ~[cartridge-driver-0.4.1.jar!/:na]
	at java.base/java.util.concurrent.CompletableFuture.uniHandle(Unknown Source) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.uniHandleStage(Unknown Source) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.handle(Unknown Source) ~[na:na]
	at io.tarantool.driver.core.AbstractTarantoolConnectionManager.getConnection(AbstractTarantoolConnectionManager.java:88) ~[cartridge-driver-0.4.1.jar!/:na]
	at io.tarantool.driver.AbstractTarantoolClient.makeRequest(AbstractTarantoolClient.java:411) ~[cartridge-driver-0.4.1.jar!/:na]
	at io.tarantool.driver.AbstractTarantoolClient.call(AbstractTarantoolClient.java:228) ~[cartridge-driver-0.4.1.jar!/:na]
	at io.tarantool.driver.AbstractTarantoolClient.call(AbstractTarantoolClient.java:222) ~[cartridge-driver-0.4.1.jar!/:na]
	at io.tarantool.driver.AbstractTarantoolClient.call(AbstractTarantoolClient.java:216) ~[cartridge-driver-0.4.1.jar!/:na]
	... 33 common frames omitted
Caused by: java.lang.NullPointerException: null
	at io.tarantool.driver.core.AbstractTarantoolConnectionManager.getConnections(AbstractTarantoolConnectionManager.java:175) ~[cartridge-driver-0.4.1.jar!/:na]
	at io.tarantool.driver.core.AbstractTarantoolConnectionManager.establishConnections(AbstractTarantoolConnectionManager.java:160) ~[cartridge-driver-0.4.1.jar!/:na]
	at io.tarantool.driver.core.AbstractTarantoolConnectionManager.getConnectionInternal(AbstractTarantoolConnectionManager.java:112) ~[cartridge-driver-0.4.1.jar!/:na]
	... 38 common frames omitted
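A sketch of the clearer check (the guard class and message wording are illustrative): validate the address provider result up front and name the offender in the error.

```java
import java.util.Collection;

// Validate the address provider result before using it, so the user sees
// a descriptive error instead of a bare NPE deep inside the connection manager.
public class AddressProviderGuard {
    public static <T> Collection<T> checkedAddresses(Collection<T> addresses, String providerName) {
        if (addresses == null) {
            throw new IllegalStateException("Address provider " + providerName
                    + " returned null instead of a list of server addresses");
        }
        return addresses;
    }
}
```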

Unexpected tuple format causes MessageTypeCastException in TarantoolResultImpl constructor

Example case:

  • A stored function returns a map { rows: ..., metadata: ...}
  • The stored function result is handled as TarantoolResult<TarantoolTuple>
  • Mapper sees the wrong result type and throws the MessageTypeCastException

This exception is not readable and requires debugging the client code.

A possible solution is to check the MsgPack value type and return a comprehensible error.
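A simplified sketch of such a pre-cast shape check, using plain Java objects in place of MsgPack values (class and message wording are illustrative): if the server returned a map where a list of tuples was expected, fail with a message explaining the mismatch.

```java
import java.util.List;
import java.util.Map;

// If the stored function returned a map (e.g. {rows = ..., metadata = ...})
// where an array of tuples was expected, explain the mismatch instead of
// failing with a low-level MessageTypeCastException.
public class CallResultShapeCheck {
    @SuppressWarnings("unchecked")
    public static List<Object> expectTupleList(Object callResult) {
        if (callResult instanceof Map) {
            throw new IllegalArgumentException(
                    "Expected an array of tuples, but the function returned a map with keys "
                    + ((Map<?, ?>) callResult).keySet()
                    + "; did the stored function return {rows = ..., metadata = ...}?");
        }
        if (!(callResult instanceof List)) {
            throw new IllegalArgumentException("Expected an array of tuples, but got "
                    + callResult.getClass().getSimpleName());
        }
        return (List<Object>) callResult;
    }
}
```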

Request hangs when the password is incorrect

The client must return an error immediately if the authorization to Tarantool fails.

UPD: only the second (and probably subsequent) requests hang; the first one fails with the expected error.

Recommended ddl.get_schema wrapper can lead to recursive calls

Use case:

  1. Tarantool cartridge application consists of instances that have both the router and storage roles, e.g. master-master token storage
  2. ddl.get_schema procedures, described in README.md, are used on router and storage roles
  3. A binary discovery endpoint class is used for configuration of ClusterTarantoolTupleClient
  4. Due to the usage of rawset(_G, 'ddl', { get_schema = ... }), as suggested in README.md, ddl.get_schema of the router may overwrite ddl.get_schema of the storage, because those roles begin their lifecycle on a single Tarantool instance; which ddl.get_schema wins in the end is a coin flip
  5. ddl.get_schema is called by the Tarantool Java client; if ddl.get_schema of the router overwrote ddl.get_schema of the storage, an endless recursive call is made

(screenshot of the recursive log.info output omitted)

ddl.get_schema procedures used for this application were augmented with log.info calls

Mappers can produce NPE in toValue

Need to check all object mappers for not throwing unexpected NPEs to users.

The corresponding notNull check must be added where necessary.

An example stacktrace:

2020-12-07 13:25:31.804 [http-nio-8079-exec-7] ERROR o.a.c.c.C.[.[.[.[dispatcherServlet] [] - Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is java.lang.NullPointerException] with root cause
java.lang.NullPointerException: null
        at io.tarantool.driver.mappers.DefaultMessagePackMapper.toValue(DefaultMessagePackMapper.java:59)
        at io.tarantool.driver.mappers.DefaultMapObjectConverter.lambda$toValue$0(DefaultMapObjectConverter.java:26)
        at java.util.stream.Collectors.lambda$toMap$58(Collectors.java:1320)
        at java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
        at java.util.HashMap$EntrySpliterator.forEachRemaining(HashMap.java:1691)
        at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
        at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
        at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
        at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
        at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
        at io.tarantool.driver.mappers.DefaultMapObjectConverter.toValue(DefaultMapObjectConverter.java:26)
        at io.tarantool.driver.mappers.DefaultMapObjectConverter.toValue(DefaultMapObjectConverter.java:15)
        at io.tarantool.driver.mappers.DefaultMessagePackMapper.toValue(DefaultMessagePackMapper.java:63)
        at io.tarantool.driver.mappers.DefaultListObjectConverter.lambda$toValue$0(DefaultListObjectConverter.java:26)
        at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
        at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
        at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
        at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
        at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
        at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
        at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
        at io.tarantool.driver.mappers.DefaultListObjectConverter.toValue(DefaultListObjectConverter.java:27)
        at io.tarantool.driver.mappers.DefaultListObjectConverter.toValue(DefaultListObjectConverter.java:16)
        at io.tarantool.driver.mappers.DefaultMessagePackMapper.toValue(DefaultMessagePackMapper.java:63)
        at io.tarantool.driver.protocol.TarantoolRequestBody.<init>(TarantoolRequestBody.java:43)
        at io.tarantool.driver.protocol.requests.TarantoolCallRequest$Builder.build(TarantoolCallRequest.java:69)
        at io.tarantool.driver.AbstractTarantoolClient.call(AbstractTarantoolClient.java:245)
        at io.tarantool.driver.ProxyTarantoolClient.call(ProxyTarantoolClient.java:186)
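In this trace the NPE comes from a map entry with a null value reaching the converter. A sketch of a guard that reports which key carried the null instead of letting the NPE propagate (the class name and message are illustrative):

```java
import java.util.Map;

// Instead of letting a null map value propagate into the MessagePack mapper
// and blow up with a bare NPE, report which key carried the null.
public class MapNullCheck {
    public static void checkNoNullValues(Map<String, Object> map) {
        for (Map.Entry<String, Object> e : map.entrySet()) {
            if (e.getValue() == null) {
                throw new IllegalArgumentException(
                        "Cannot convert map to MessagePack: value for key '"
                        + e.getKey() + "' is null");
            }
        }
    }
}
```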

Add check input data for insert, upsert, update operations according to space format

If anyone tries to insert a null value into a space field that is part of the primary index, Tarantool runs into an error. In some cases this can lead to a fatal error and an exit from the event loop.

This situation can be triggered from the Java driver as well. So it is necessary to check the input data against the space format and space indexes before performing insert, upsert, and update operations, or/and to perform them in a safe way.

Here is an example:

local yaml = require('yaml')

local space_name = 'megaspace'

local initial_format = {
    {name = 'customer_id', type = 'string', is_nullable = false},
    {name = 'id', type = 'string', is_nullable = false},
}

local new_format = {
    {name = 'customer_id', type = 'string', is_nullable = false},
    {name = 'id', type = 'string', is_nullable = true},
}

local function init_space(space)
    print(string.format('init space: %s', space))
    local _space = box.schema.space.create(space, {if_not_exists = true})
    _space:format(initial_format)
    _space:create_index('pk', {unique = true, parts = {'id'}, if_not_exists = true})
    return _space
end

local function change_format(space)
    box.space[space]:format(new_format)
end

local function add_data(data)
    box.space[space_name]:insert(data)
end

box.cfg({})

--create space
local space = init_space(space_name)
print(yaml.encode(space:format()))

-- add some data
add_data(box.tuple.new({'VIP1', '1'}))
print(yaml.encode(space:select()))

-- change format
change_format(space_name)
print(yaml.encode(space:format()))

--add_data(box.tuple.new({'VIP2', box.NULL}))
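A client-side counterpart of such a check could look like the following sketch (class names are illustrative, and only nullability per the space format is checked here; a full implementation would also consult the space indexes):

```java
import java.util.List;

// Client-side pre-insert check: reject null values in non-nullable fields
// before the request is ever sent to the server.
public class TupleFormatCheck {
    public static class Field {
        final String name;
        final boolean isNullable;

        public Field(String name, boolean isNullable) {
            this.name = name;
            this.isNullable = isNullable;
        }
    }

    public static void validate(List<Object> tuple, List<Field> format) {
        for (int i = 0; i < format.size(); i++) {
            Object value = i < tuple.size() ? tuple.get(i) : null;
            if (value == null && !format.get(i).isNullable) {
                throw new IllegalArgumentException("Field '" + format.get(i).name
                        + "' is not nullable, but the tuple contains null at position " + i);
            }
        }
    }
}
```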

Integration tests leave root-owned files in target

  • Run integration tests with mvn clean test -Pintegration - they pass
  • Run mvn clean - it fails with
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-clean-plugin:2.5:clean (default-clean) on project cartridge-driver: Failed to clean project: Failed to delete /home/isopov/java/workspace/cartridge-java/target/test-classes/cartridge/.rocks/bin/stateboard

Docker is accessible to my user, and I run the integration tests under it, not under root.

Flaky test RoundRobinStrategyTest.testSkipConnections

Sometimes this test fails with

java.util.concurrent.CompletionException: org.opentest4j.AssertionFailedError: Unexpected exception thrown: io.tarantool.driver.exceptions.NoAvailableConnectionsException: No available connections

	at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:320)
	at java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:325)
	at java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1809)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
	at java.base/java.lang.Thread.run(Thread.java:1676)
Caused by: org.opentest4j.AssertionFailedError: Unexpected exception thrown: io.tarantool.driver.exceptions.NoAvailableConnectionsException: No available connections
	at org.junit.jupiter.api.AssertDoesNotThrow.createAssertionFailedError(AssertDoesNotThrow.java:83)
	at org.junit.jupiter.api.AssertDoesNotThrow.assertDoesNotThrow(AssertDoesNotThrow.java:54)
	at org.junit.jupiter.api.AssertDoesNotThrow.assertDoesNotThrow(AssertDoesNotThrow.java:37)
	at org.junit.jupiter.api.Assertions.assertDoesNotThrow(Assertions.java:3005)
	at io.tarantool.driver.core.RoundRobinStrategyTest.lambda$testSkipConnections$9(RoundRobinStrategyTest.java:119)
	at java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1806)
	... 3 more
Caused by: io.tarantool.driver.exceptions.NoAvailableConnectionsException: No available connections
	at io.tarantool.driver.core.TarantoolConnectionSelectionStrategies$RoundRobinStrategy.next(TarantoolConnectionSelectionStrategies.java:57)
	at io.tarantool.driver.core.RoundRobinStrategyTest.lambda$testSkipConnections$8(RoundRobinStrategyTest.java:119)
	at org.junit.jupiter.api.AssertDoesNotThrow.assertDoesNotThrow(AssertDoesNotThrow.java:50)
	... 7 more

and sometimes it does pass.

@RepeatedTest(value = 100) does not help in reproducing - it seems that only the first repetition can fail, and all the others always pass.

Exception check function for network issues in TarantoolRequestRetryPolicy

TarantoolRequestRetryPolicies.byNumberOfAttempts() has an exceptionCheck argument to decide whether an exception should be retried.
As a user I want a predefined exception check function for network issue cases:

  1. router failures (one or all routers in cluster)
  2. storage replicas failures (one or all in replicaset for callre calls)
  3. storage leader failures (for callrw calls and leader election)
  4. entire storage failures (stop / start storage for some amount of time)

There is a base template which I tested to make retries happen in these cases, but it is not complete:

e -> {
    boolean retryRequest = false;

    if (e instanceof TimeoutException ||
            e instanceof TarantoolConnectionException) {
        retryRequest = true;
    }

    if (e instanceof CompletionException) {
        Throwable cause = e.getCause();
        if (cause instanceof TimeoutException ||
                cause instanceof TarantoolConnectionException) {
            retryRequest = true;
        }
    }

    if (e instanceof TarantoolFunctionCallException) {
        // the network error text is in the exception message itself;
        // getCause() may be null here
        String message = e.getMessage();
        if (message != null && (message.contains("Connection refused") ||
                message.contains("Connection reset by peer") ||
                message.contains("Peer closed"))) {
            retryRequest = true;
        }
    }

    return retryRequest;
}

Some examples of exception messages from Tarantool on storage replica failure:

TarantoolFunctionCallException: {"type":"ClientError","code":77,"message":"Connection refused","trace":[{"file":"builtin/box/net_box.lua","line":490}]}
java.util.concurrent.CompletionException: TarantoolFunctionCallException: {"type":"ClientError","code":77,"message":"Connection reset by peer","trace":[{"file":"builtin/box/net_box.lua","line":263}]}
java.util.concurrent.CompletionException: TarantoolFunctionCallException: {"type":"ClientError","code":77,"message":"Peer closed","trace":[{"file":"builtin/box/net_box.lua","line":263}]}

Support batch iteration over spaces with cursor

A cursor is an iterator that encapsulates state, including the data window (the currently iterated batch of data), the next key in the window, and a mechanism for loading the next portion of data from the server.

Cursor must have the following configurable options:

  • size of the batch

Cursor must be supported for both single and cluster storage.
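A simplified, offset-based sketch of such a cursor (the class is illustrative; a real implementation should track the last seen index key instead of a numeric offset, since offset-based pagination re-scans the space on every batch). The loader function stands in for a select request to a single or cluster client:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.function.BiFunction;

// Batch-loading cursor: fetches batchSize tuples at a time via a
// loader function (offset, limit) -> batch, and iterates them transparently.
public class BatchCursor<T> implements Iterator<T> {
    private final BiFunction<Integer, Integer, List<T>> loader;
    private final int batchSize;
    private List<T> window = new ArrayList<>();
    private int windowPos = 0;
    private int offset = 0;
    private boolean exhausted = false;

    public BatchCursor(BiFunction<Integer, Integer, List<T>> loader, int batchSize) {
        this.loader = loader;
        this.batchSize = batchSize;
    }

    @Override
    public boolean hasNext() {
        if (windowPos < window.size()) return true;
        if (exhausted) return false;
        window = loader.apply(offset, batchSize);  // fetch the next data window
        offset += window.size();
        windowPos = 0;
        exhausted = window.size() < batchSize;     // a short batch means end of data
        return !window.isEmpty();
    }

    @Override
    public T next() {
        if (!hasNext()) throw new NoSuchElementException();
        return window.get(windowPos++);
    }
}
```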

Support different reconnection strategies

Problem statement

Currently, the connection state machine looks like this:

  1. The actual connection process is initiated by an outgoing request. However, a call to a space by name, or any other operation that requires receiving metadata, first initiates a request for the metadata.
  2. Before connection:
    a. Get N host addresses from address provider
    b. Establish M*N connections, where M is the number of connections per host
    c. If all connections are established successfully, perform the target request and go to state 3
    d. If any of the connection attempts fail, return an error. But any subsequent request may complete successfully, since the other connections in the pool may be established. The connection failure listener may enable the connection mode if the failure is caused by the server side, so depending on the nature of the connection failure, the next state is either 1 or 3.
  3. While connections are active:
    a. On normal client closing: the client waits (blocking) for all in-flight requests to finish and then closes all the connections. Any subsequent requests using a closed client will return an error.
    b. If the underlying channel in a connection is closed (possibly caused by a failure), the connection mode is enabled, and any subsequent request will go to state 2. However, other requests will attempt to use the remaining alive connections. If the number of alive connections for a specific host equals M, they will not be re-established; otherwise all current connections to the host will be closed (gracefully, see 3.a) and re-established.

The schema above has the following pitfalls:

  1. The first request which starts the connection establishing may fail if not all connections are established, but the next request may succeed. Either the first request should try to select the next alive connection or all subsequent requests should fail as well until the connections are re-established.
  2. The reconnection behavior is not determined if the default connection failure listener was not triggered (in what cases?)
  3. If all hosts become unavailable (e.g. the Tarantool server restarted), the client may run out of alive connections and if the connection re-establishing fails as well (due to timeout error), there are no other reconnection attempts (the connection mode is not enabled). A scheduled reconnection strategy may help in this case.
  4. A request which lands on a broken connection should be handed over to an alive one (are there any cases where that is not desired?).

Requirements

  • All connection failure cases must be determined and appropriately handled. Need tests for most possible cases.
  • Connection mode must be enabled always if there are no alive connections.
  • Support for different reconnection strategies must be implemented (indefinite, number-of-attempts, time-based?).
  • A strategy for request handover between connections must be implemented (best-effort, number-of-attempts, time-based?)
  • User must have an ability for specifying his/her own connection failure listeners.
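The pluggable strategy surface could be sketched as follows; all names are illustrative, not the driver API. Two of the variants named above (number-of-attempts and indefinite time-based) sit behind one interface:

```java
// Pluggable reconnection strategies: decide whether to retry and
// how long to wait before the next connection attempt.
public interface ReconnectionStrategySketch {
    boolean shouldRetry(int attempt);
    long delayMillis(int attempt);

    // Fixed number of attempts with a constant delay between them
    static ReconnectionStrategySketch byAttempts(int maxAttempts, long delay) {
        return new ReconnectionStrategySketch() {
            public boolean shouldRetry(int attempt) { return attempt < maxAttempts; }
            public long delayMillis(int attempt) { return delay; }
        };
    }

    // Indefinite retries with capped exponential backoff
    static ReconnectionStrategySketch indefinite(long baseDelay, long maxDelay) {
        return new ReconnectionStrategySketch() {
            public boolean shouldRetry(int attempt) { return true; }
            public long delayMillis(int attempt) {
                return Math.min(maxDelay, baseDelay << Math.min(attempt, 16));
            }
        };
    }
}
```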

incorrect behavior when trying to update a field with fieldName

Data loss problem. An error from the assert is expected, but instead the last field is updated when the fieldName specified in the update does not exist:

updateResult = testSpace.update(conditions, TupleOperations.add("asd", 100000)).get();
assertEquals(101968, updateResult.get(0).getInteger(4));

This line changes fieldId from None to -1:

Support interactive transactions for 2.6+

Problem statement

In Tarantool 2.6+, interactive transactions became available (see https://www.tarantool.io/en/doc/latest/book/box/atomic/#transactional-manager). This opens the door to implementing transaction support in the driver. In-driver transactions will enable transactional support in the Spring Data module.

Requirements

Transaction implementation in other noSQL DB drivers must be analyzed.

Perform batch update/delete operations using cursor

Problem statement

  1. Operations on spaces that involve full scan complexity must be avoided and are not supported in tarantool/crud (except for the select operations).
  2. Customers still need a convenient way of performing batch updates/deletes with conditions like in the relational databases.
  3. It is possible to implement such operations using the cursor interface, making batch operations asynchronous and not blocking the Tarantool instance operations.

Proposed solution

  1. Implement cursor interface (#18)
  2. Support batch operations using cursor using background tasks and returning the result asynchronously

ACID

A - these operations cannot be atomic, since they have "select for update" semantics with one-by-one tuple updates. But each individual tuple update is atomic internally
C - consistency cannot be guaranteed on this level
I - isolation cannot be guaranteed, but each batch update task must be fault-tolerant to specific kinds of errors (update of a non-existing value, delete of a non-existing value)
D - the user must be able to specify the retry and recovery policies for the batch tasks

TarantoolCallResultMapperFactory allows concurrent modification of mapper state

Internal cache in TarantoolCallResultMapperFactory produces race conditions when instantiating mappers for the same target type (e.g. TarantoolTuple).

Steps to reproduce:

  1. Setup several threads which invoke the call method in the ProxyTarantoolClient concurrently with TarantoolCallResult<TarantoolTuple> result type but for different entities (space metadata)
  2. In some cases the mapper returned by the factory will not correspond to the mapper required by the call result

See https://github.com/tarantool/cartridge-springdata/blob/42f9d1bf0d3c42f4ed8a7a281ba8d357f592dec9/src/main/java/org/springframework/data/tarantool/core/TarantoolTemplate.java#L232 (easy to reproduce with several scheduled threads + API calls)

Unbound retry policy continues retrying despite the finished original future

Steps to reproduce:

  1. Configure a RetryingTarantoolTupleClient with TarantoolRequestRetryPolicies.unbound()
  2. Execute an operation (e.g. callForSingleResult) with explicit timeout (using get(time, timeUnit))

Actual behavior:

Retries continue to be performed even though the future has finished with a timeout

Expected behavior:

No more retries are performed after the request future has finished with a timeout
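One way to get the expected behavior is to consult the request future before each retry attempt, so an externally completed (e.g. timed-out) future stops the retry loop. A simplified synchronous sketch (names are illustrative; a real policy would schedule attempts asynchronously with a delay):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

// Retry loop that checks the caller's result future before each attempt,
// so retries stop as soon as the future is done for any reason.
public class BoundedByFutureRetry {
    public static <T> int runWithRetries(Supplier<T> attempt,
                                         CompletableFuture<T> resultFuture,
                                         int maxAttempts) {
        AtomicInteger attempts = new AtomicInteger();
        while (!resultFuture.isDone() && attempts.get() < maxAttempts) {
            attempts.incrementAndGet();
            try {
                resultFuture.complete(attempt.get());
            } catch (RuntimeException e) {
                // swallow and retry; a real implementation would wait here
            }
        }
        return attempts.get();
    }
}
```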

Client does not perform reconnect in case if the channel is closed gracefully

Use case:

  1. Driver connects to a proxy with several connections (e.g. .withConnections(4)). The proxy setup may look like:
global
    daemon

defaults
    log     global
    mode    tcp
    option  tcplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000

frontend stats
    bind *:8081
    mode http
    stats enable
    stats uri /stats
    stats refresh 5s
    stats admin if LOCALHOST

frontend squid
    bind 0.0.0.0:3309
    default_backend squid_pool

backend squid_pool
    balance roundrobin
    mode tcp
    server squid1:3309 check maxconn 30 inter 100 fall 3
    server squid2:3310 check maxconn 30 inter 100 fall 3
    server squid3:3309 check maxconn 30 inter 100 fall 3
    server squid4:3310 check maxconn 30 inter 100 fall 3
  2. Proxy connects to several routers (4, for example)
  3. All of the routers disconnect
  4. Driver has no available connections (all have the status "not connected"), but the connection sequence does not start since the connection mode is not switched to true

Add audit logging

Problem statement

Currently, there is no logging at all, except what comes from Netty. But logging is necessary for debug purposes and auditing the data flows.

Proposed solution

The following sides of logging should be considered:

  • Failure logging
  • Audit logging (tracing) for requests
  • Connection/disconnection logging
  • Logging settings (examples of logback configuration)

Update by field name is not working

When trying to update a tuple by field name, the following error occurs:

Illegal parameters, field id must be a number or a string

The reason is missing field name serialization in TarantoolUpdateOperation:

    @Override
    public Value toMessagePackValue(MessagePackObjectMapper mapper) {
        // getFieldIndex() is serialized unconditionally, so a field name
        // set on the operation never reaches the request
        return mapper.toValue(
                Arrays.asList(getOperationType().toString(), getFieldIndex(), getValue()));
    }
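A possible shape of the fix, sketched with plain Java objects standing in for the MessagePack mapper API (class and field names are illustrative): serialize the field name when the operation was created by name, and fall back to the numeric index otherwise.

```java
import java.util.Arrays;
import java.util.List;

// Sketch of a tuple update operation that serializes either the field name
// or the field index, so "field id must be a number or a string" is satisfied.
public class UpdateOperationSketch {
    private final String operationType;
    private final Integer fieldIndex;
    private final String fieldName;
    private final Object value;

    public UpdateOperationSketch(String operationType, Integer fieldIndex,
                                 String fieldName, Object value) {
        this.operationType = operationType;
        this.fieldIndex = fieldIndex;
        this.fieldName = fieldName;
        this.value = value;
    }

    public List<Object> toMessagePackValue() {
        // prefer the field name when the operation was created by name
        Object field = fieldName != null ? fieldName : fieldIndex;
        return Arrays.asList(operationType, field, value);
    }
}
```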
