

Java SDK for SQL API of Azure Cosmos DB

Important

Retirement of this SDK (Azure Cosmos DB Async Java SDK v2.x) has been announced for August 31, 2024. We recommend upgrading to Azure Cosmos DB Java SDK v4. Please visit our Azure Cosmos DB Java SDK migration guide for details on the benefits of migration and the migration process.


Consuming the official Microsoft Azure Cosmos DB Java SDK

This project provides an SDK library in Java for interacting with the SQL API of the Azure Cosmos DB Database Service. It also includes samples, tools, and utilities.

Jar dependency information for Maven and Gradle can be found on Maven Central.

For example, using Maven, you can add the following dependency to your pom file:

<dependency>
  <groupId>com.microsoft.azure</groupId>
  <artifactId>azure-cosmosdb</artifactId>
  <version>2.6.16</version>
</dependency>


Prerequisites

  • Java Development Kit 8
  • An active Azure account. If you don't have one, you can sign up for a free account. Alternatively, you can use the Azure Cosmos DB Emulator for development and testing. Because the emulator's HTTPS certificate is self-signed, you need to import its certificate into the Java trusted certificate store, as explained here.
  • (Optional) SLF4J is a logging facade.
  • (Optional) SLF4J binding is used to associate a specific logging framework with SLF4J.
  • (Optional) Maven

SLF4J is only needed if you plan to use logging. In that case, also download an SLF4J binding, which links the SLF4J API with the logging implementation of your choice. See the SLF4J user manual for more information.
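For example, to route SLF4J through log4j 1.2, you might add dependencies like these (the artifact choice and versions are illustrative; pick the binding for your preferred logging framework):

```xml
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-api</artifactId>
  <version>1.7.36</version>
</dependency>
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-log4j12</artifactId>
  <version>1.7.36</version>
</dependency>
```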

API Documentation

Javadoc is available here.

The SDK provides a Reactive Extensions (Rx) Observable-based async API. You can read more about RxJava and the Observable APIs here.

Usage Code Sample

Code Sample for creating a Document:

import com.microsoft.azure.cosmosdb.rx.*;
import com.microsoft.azure.cosmosdb.*;

ConnectionPolicy policy = new ConnectionPolicy();
policy.setConnectionMode(ConnectionMode.Direct);

AsyncDocumentClient asyncClient = new AsyncDocumentClient.Builder()
                .withServiceEndpoint(HOST)
                .withMasterKeyOrResourceToken(MASTER_KEY)
                .withConnectionPolicy(policy)
                .withConsistencyLevel(ConsistencyLevel.Eventual)
                .build();

Document doc = new Document(String.format("{ 'id': 'doc%d', 'counter': '%d'}", 1, 1));

Observable<ResourceResponse<Document>> createDocumentObservable =
    asyncClient.createDocument(collectionLink, doc, null, false);

createDocumentObservable
    .single()           // we know there will be one response
    .subscribe(
        documentResourceResponse -> {
            System.out.println(documentResourceResponse.getRequestCharge());
        },
        error -> {
            System.err.println("an error happened: " + error.getMessage());
        });

We have a getting-started sample app available here.

We also have more examples, in the form of standalone unit tests, in the examples project.

Guide for Prod

To achieve better performance and higher throughput, follow these tips:

Use Appropriate Scheduler (Avoid stealing Eventloop IO Netty threads)

The SDK uses Netty for non-blocking IO. It uses a fixed number of IO Netty event-loop threads (as many as your machine has CPU cores) for executing IO operations.

The Observable returned by the API emits its result on one of the shared IO event-loop Netty threads, so it is important not to block those threads. Doing CPU-intensive work or blocking operations on an IO event-loop Netty thread may cause deadlock or significantly reduce SDK throughput.

For example, the following code executes CPU-intensive work on the event-loop IO Netty thread:

Observable<ResourceResponse<Document>> createDocObs = asyncDocumentClient.createDocument(
  collectionLink, document, null, true);

createDocObs.subscribe(
  resourceResponse -> {
    //this is executed on eventloop IO netty thread.
    //the eventloop thread is shared and is meant to return back quickly.
    //
    // DON'T do this on eventloop IO netty thread.
    veryCpuIntensiveWork();
  });

If you want to do CPU-intensive work on the result after it is received, avoid doing so on the event-loop IO Netty thread. Instead, provide your own Scheduler, and thereby your own threads, for running your work.

import rx.schedulers.Schedulers;

Observable<ResourceResponse<Document>> createDocObs = asyncDocumentClient.createDocument(
  collectionLink, document, null, true);

createDocObs.observeOn(Schedulers.computation())
  .subscribe(
    resourceResponse -> {
      // this is executed on threads provided by Schedulers.computation().
      // Schedulers.computation() should be used only when the work is CPU intensive
      // and you are not doing blocking IO, thread sleep, etc. on this thread.
      veryCpuIntensiveWork();
    });

Based on the type of your work, use the appropriate existing RxJava Scheduler. See the Schedulers documentation for details.
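The same idea can be sketched with only the JDK, with no SDK or RxJava required: a result that arrives on a shared event-loop-style thread hands its CPU-heavy continuation to a separate worker pool. The thread names and single-threaded pools below are illustrative, not how the SDK sizes its pools.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OffloadSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for the shared Netty IO event loop: it must never be blocked.
        ExecutorService ioLoop = Executors.newSingleThreadExecutor(r -> new Thread(r, "io-eventloop"));
        // Worker pool for CPU-intensive post-processing
        // (the role Schedulers.computation() plays in RxJava).
        ExecutorService worker = Executors.newSingleThreadExecutor(r -> new Thread(r, "compute"));

        String result = CompletableFuture
                // the "IO" result is produced on the event-loop thread...
                .supplyAsync(() -> Thread.currentThread().getName(), ioLoop)
                // ...but the heavy continuation runs on the worker thread.
                .thenApplyAsync(io -> io + " -> " + Thread.currentThread().getName(), worker)
                .get();

        System.out.println(result);
        ioLoop.shutdown();
        worker.shutdown();
    }
}
```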

Disable netty's logging

Netty's logging is very chatty and needs to be turned off (suppressing it in the logging configuration may not be enough) to avoid additional CPU costs. If you are not debugging, disable Netty's logging altogether. For example, if you are using log4j, add the following line to your codebase to remove the additional CPU cost incurred by org.apache.log4j.Category.callAppenders():

org.apache.log4j.Logger.getLogger("io.netty").setLevel(org.apache.log4j.Level.OFF);
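If you use Logback rather than log4j 1.x, the equivalent (a sketch; place it inside your logback.xml configuration) would be:

```xml
<logger name="io.netty" level="OFF"/>
```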

OS Open files Resource Limit

Some Linux systems (such as Red Hat) have an upper limit on the number of open files, which caps the total number of connections. Run the following to view the current limits:

ulimit -a

The open-files limit (nofile) needs to be large enough to accommodate your configured connection pool size plus the other files the OS keeps open. It can be raised to allow for a larger connection pool size.

Open the limits.conf file:

vim /etc/security/limits.conf

Add/modify the following lines:

* - nofile 100000

Use native SSL implementation for netty

Netty can use OpenSSL directly as its SSL implementation to achieve better performance. In the absence of this configuration, Netty falls back to Java's default SSL implementation.

On Ubuntu:

sudo apt-get install openssl
sudo apt-get install libapr1

Then add the following dependency to your project's Maven dependencies:

<dependency>
  <groupId>io.netty</groupId>
  <artifactId>netty-tcnative</artifactId>
  <version>2.0.20.Final</version>
  <classifier>linux-x86_64</classifier>
</dependency>

For other platforms (Red Hat, Windows, Mac, etc.), refer to the instructions on Netty's wiki.

Using system properties to modify default Direct TCP options

We have added the ability to modify the default Direct TCP options used by the SDK. The defaults are taken, in priority order, from:

  1. The JSON value of system property azure.cosmos.directTcp.defaultOptions. Example:

    java -Dazure.cosmos.directTcp.defaultOptions={\"idleEndpointTimeout\":\"PT24H\"} -jar target/cosmosdb-sdk-testing-1.0-jar-with-dependencies.jar Direct 10 0 Read
  2. The contents of the JSON file located by system property azure.cosmos.directTcp.defaultOptionsFile. Example:

    java -Dazure.cosmos.directTcp.defaultOptionsFile=/path/to/default/options/file -jar Direct 10 0 Query
    
  3. The contents of the JSON resource file named azure.cosmos.directTcp.defaultOptions.json. Specifically, the resource file is read from this stream:

    RntbdTransportClient.class.getClassLoader().getResourceAsStream("azure.cosmos.directTcp.defaultOptions.json")

    Example: Contents of resource file azure.cosmos.directTcp.defaultOptions.json.

    {
      "bufferPageSize": 8192,
      "connectionTimeout": "PT1M",
      "idleChannelTimeout": "PT0S",
      "idleEndpointTimeout": "PT1M10S",
      "maxBufferCapacity": 8388608,
      "maxChannelsPerEndpoint": 10,
      "maxRequestsPerChannel": 30,
      "receiveHangDetectionTime": "PT1M5S",
      "requestExpiryInterval": "PT5S",
      "requestTimeout": "PT1M",
      "requestTimerResolution": "PT0.5S",
      "sendHangDetectionTime": "PT10S",
      "shutdownTimeout": "PT15S"
    }
    

Invalid values are ignored.
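The priority-1 mechanism can also be driven programmatically instead of via the command line; a minimal sketch (the property name is the one documented above, and the value must be set before the SDK first initializes its Direct TCP transport):

```java
public class DirectTcpOptionsDemo {
    public static void main(String[] args) {
        // Equivalent to passing -Dazure.cosmos.directTcp.defaultOptions=... on the java command line.
        // Must run before the SDK's transport client is created.
        System.setProperty("azure.cosmos.directTcp.defaultOptions",
                "{\"idleEndpointTimeout\":\"PT24H\"}");

        // The SDK reads the property at transport initialization; shown here for illustration.
        System.out.println(System.getProperty("azure.cosmos.directTcp.defaultOptions"));
    }
}
```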

Common Perf Tips

A set of common performance tips was written for our sync SDK; the majority of them also apply to the async SDK. It is available here.

Future, CompletableFuture, and ListenableFuture

The SDK provides a Reactive Extensions (Rx) Observable-based async API.

The Rx API has advantages over Future-based APIs, but if you wish to use Futures you can translate Observables to them:

// You can convert an Observable to a ListenableFuture.
// ListenableFuture (part of google guava library) is a popular extension
// of Java's Future which allows registering listener callbacks:
// https://github.com/google/guava/wiki/ListenableFutureExplained

import com.google.common.util.concurrent.ListenableFuture;
import rx.observable.ListenableFutureObservable;

Observable<ResourceResponse<Document>> createDocObservable = asyncClient.createDocument(
  collectionLink, document, null, false);

// NOTE: if you are going to do CPU intensive work
// on the result thread consider changing the scheduler see Use Proper Scheduler
// (Avoid Stealing Eventloop IO Netty threads) section
ListenableFuture<ResourceResponse<Document>> listenableFuture =
  ListenableFutureObservable.to(createDocObservable);

ResourceResponse<Document> rrd = listenableFuture.get();

For this to work you will need the RxJava Guava library dependency. More information is available on RxJavaGuava's GitHub.
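If you prefer Java 8's CompletableFuture over Guava's ListenableFuture, the same bridge can be written by hand. The sketch below uses a hypothetical SingleSource callback interface standing in for the SDK's Observable so that it runs without the SDK; with the real SDK, the analogous move is to complete the future from the Observable's success and error callbacks.

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

public class FutureBridgeSketch {
    // Hypothetical stand-in for Observable<T>: delivers exactly one result or one error.
    interface SingleSource<T> {
        void subscribe(Consumer<T> onSuccess, Consumer<Throwable> onError);
    }

    // Bridge: complete the future from the success/error callbacks.
    static <T> CompletableFuture<T> toFuture(SingleSource<T> source) {
        CompletableFuture<T> future = new CompletableFuture<>();
        source.subscribe(future::complete, future::completeExceptionally);
        return future;
    }

    public static void main(String[] args) throws Exception {
        // A source that succeeds immediately with a canned value.
        SingleSource<String> response = (onSuccess, onError) -> onSuccess.accept("request charge: 5.9");
        System.out.println(toFuture(response).get());
    }
}
```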

Checking out the Source Code

The SDK is open source and is available here.

Clone the Repo

git clone https://github.com/Azure/azure-cosmosdb-java.git
cd azure-cosmosdb-java

How to Build from Command Line

Run the following Maven command to build:

mvn clean package -DskipTests

Running Tests from Command Line

Running the tests requires Azure Cosmos DB endpoint credentials:

mvn test -DACCOUNT_HOST="https://REPLACE_ME_WITH_YOURS.documents.azure.com:443/" -DACCOUNT_KEY="REPLACE_ME_WITH_YOURS"

Import into Intellij or Eclipse

  • Load the main parent project pom file in IntelliJ/Eclipse (that should automatically load the examples).

  • For running the samples you need a proper Azure Cosmos DB endpoint. The endpoints are picked up from TestConfigurations.java. There is a similar endpoint config file for the SDK tests here.

  • You can pass your endpoint credentials as VM Arguments in Eclipse JUnit Run Config:

     -DACCOUNT_HOST="https://REPLACE_ME.documents.azure.com:443/" -DACCOUNT_KEY="REPLACE_ME"
  • or you can simply put your endpoint credentials in TestConfigurations.java

  • The SDK tests are written using the TestNG framework. If you use Eclipse, you may have to add the TestNG plugin to your IDE, as explained here. IntelliJ has built-in support for TestNG.

  • Now you can run the tests in your Intellij/Eclipse IDE.

FAQ

A list of frequently asked questions is maintained here.

Release changes

Release changelog is available here.

Contribution and Feedback

This is an open source project and we welcome contributions.

If you would like to become an active contributor to this project please follow the instructions provided in Azure Projects Contribution Guidelines.

We have a Travis CI build, which should pass for any PR.

If you encounter any bugs with the SDK please file an issue in the Issues section of the project.

License

MIT License. Copyright (c) 2018 Microsoft Corporation.

Attribution

This project includes the MurmurHash3 algorithm, which came with the following notice: β€œThe MurmurHash3 algorithm was created by Austin Appleby and placed in the public domain. This Java port was authored by Yonik Seeley and also placed into the public domain. The author hereby disclaims copyright to this source code.”

azure-cosmosdb-java's People

Contributors

aayush3011, anfeldma-ms, brunoborges, chetanmeh, christopheranderson, david-noble-at-work, ealsur, fpaparoni, gmsantos, jonathangiles, kirankumarkolli, kushagrathapar, mbhaskar, microsoft-github-policy-service[bot], microsoftopensource, moderakh, msftgits, otaviojava, shettyh, simplynaveen20, snehagunda, snyk-bot, southpolesteve


azure-cosmosdb-java's Issues

Unable to perform delete operation

When I try to delete a document, I get the following exception:
java.lang.IllegalArgumentException: port out of range:-1 at java.net.InetSocketAddress.checkPort(InetSocketAddress.java:143) ~[na:1.8.0_25] at java.net.InetSocketAddress.createUnresolved(InetSocketAddress.java:254) ~[na:1.8.0_25] at io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:126) ~[netty-transport-4.1.7.Final.jar:4.1.7.Final] at io.reactivex.netty.client.ClientChannelFactoryImpl.connect(ClientChannelFactoryImpl.java:61) ~[rxnetty-0.4.20.jar:0.4.20] at io.reactivex.netty.client.ConnectionPoolImpl.performAquire(ConnectionPoolImpl.java:171) ~[rxnetty-0.4.20.jar:0.4.20] at io.reactivex.netty.client.ConnectionPoolImpl.access$300(ConnectionPoolImpl.java:45) ~[rxnetty-0.4.20.jar:0.4.20] at io.reactivex.netty.client.ConnectionPoolImpl$1.call(ConnectionPoolImpl.java:139) ~[rxnetty-0.4.20.jar:0.4.20] at io.reactivex.netty.client.ConnectionPoolImpl$1.call(ConnectionPoolImpl.java:124) ~[rxnetty-0.4.20.jar:0.4.20] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30) [rxjava-1.2.5.jar:1.2.5] at rx.Observable.unsafeSubscribe(Observable.java:10144) [rxjava-1.2.5.jar:1.2.5] at 
rx.internal.operators.OnSubscribeMap.call(OnSubscribeMap.java:48) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeMap.call(OnSubscribeMap.java:33) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30) [rxjava-1.2.5.jar:1.2.5] at rx.Observable.unsafeSubscribe(Observable.java:10144) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeMap.call(OnSubscribeMap.java:48) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeMap.call(OnSubscribeMap.java:33) [rxjava-1.2.5.jar:1.2.5] at rx.Observable.unsafeSubscribe(Observable.java:10144) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeDoOnEach.call(OnSubscribeDoOnEach.java:41) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeDoOnEach.call(OnSubscribeDoOnEach.java:30) [rxjava-1.2.5.jar:1.2.5] at rx.Observable.unsafeSubscribe(Observable.java:10144) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeMap.call(OnSubscribeMap.java:48) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeMap.call(OnSubscribeMap.java:33) [rxjava-1.2.5.jar:1.2.5] at rx.Observable.unsafeSubscribe(Observable.java:10144) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:51) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:35) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30) [rxjava-1.2.5.jar:1.2.5] at rx.Observable.subscribe(Observable.java:10240) [rxjava-1.2.5.jar:1.2.5] at rx.Observable.subscribe(Observable.java:10207) [rxjava-1.2.5.jar:1.2.5] at rx.observables.BlockingObservable.blockForSingle(BlockingObservable.java:444) [rxjava-1.2.5.jar:1.2.5] at 
rx.observables.BlockingObservable.single(BlockingObservable.java:341) [rxjava-1.2.5.jar:1.2.5] at com.microsoft.azure.documentdb.BridgeInternal$2.getDatabaseAccountFromEndpoint(BridgeInternal.java:95) ~[azure-documentdb-rx-0.9.0-rc2.jar:na] at com.microsoft.azure.documentdb.GlobalEndpointManager.getDatabaseAccountFromAnyEndpoint(GlobalEndpointManager.java:121) ~[azure-documentdb-1.13.0.jar:na] at com.microsoft.azure.documentdb.GlobalEndpointManager.refreshEndpointListInternal(GlobalEndpointManager.java:155) ~[azure-documentdb-1.13.0.jar:na] at com.microsoft.azure.documentdb.GlobalEndpointManager.initialize(GlobalEndpointManager.java:148) ~[azure-documentdb-1.13.0.jar:na] at com.microsoft.azure.documentdb.GlobalEndpointManager.getReadEndpoint(GlobalEndpointManager.java:81) ~[azure-documentdb-1.13.0.jar:na] at com.microsoft.azure.documentdb.GlobalEndpointManager.resolveServiceEndpoint(GlobalEndpointManager.java:93) ~[azure-documentdb-1.13.0.jar:na] at com.microsoft.azure.documentdb.rx.internal.RxGatewayStoreModel.getUri(RxGatewayStoreModel.java:214) ~[azure-documentdb-rx-0.9.0-rc2.jar:na] at com.microsoft.azure.documentdb.rx.internal.RxGatewayStoreModel.performRequest(RxGatewayStoreModel.java:165) ~[azure-documentdb-rx-0.9.0-rc2.jar:na] at com.microsoft.azure.documentdb.rx.internal.RxGatewayStoreModel.doRead(RxGatewayStoreModel.java:119) ~[azure-documentdb-rx-0.9.0-rc2.jar:na] at com.microsoft.azure.documentdb.rx.internal.RxGatewayStoreModel.processMessage(RxGatewayStoreModel.java:383) ~[azure-documentdb-rx-0.9.0-rc2.jar:na] at com.microsoft.azure.documentdb.rx.internal.RxDocumentClientImpl.lambda$doRead$17(RxDocumentClientImpl.java:440) [azure-documentdb-rx-0.9.0-rc2.jar:na] at com.microsoft.azure.documentdb.rx.internal.RxDocumentClientImpl$$Lambda$18/170067997.call(Unknown Source) ~[na:na] at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:46) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:35) 
[rxjava-1.2.5.jar:1.2.5] at rx.Observable.unsafeSubscribe(Observable.java:10144) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeRedo$2.call(OnSubscribeRedo.java:273) [rxjava-1.2.5.jar:1.2.5] at rx.internal.schedulers.TrampolineScheduler$InnerCurrentThreadScheduler.enqueue(TrampolineScheduler.java:73) [rxjava-1.2.5.jar:1.2.5] at rx.internal.schedulers.TrampolineScheduler$InnerCurrentThreadScheduler.schedule(TrampolineScheduler.java:52) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeRedo$5.request(OnSubscribeRedo.java:361) [rxjava-1.2.5.jar:1.2.5] at rx.Subscriber.setProducer(Subscriber.java:211) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeRedo.call(OnSubscribeRedo.java:353) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeRedo.call(OnSubscribeRedo.java:47) [rxjava-1.2.5.jar:1.2.5] at rx.Observable.unsafeSubscribe(Observable.java:10144) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OperatorMerge$MergeSubscriber.onNext(OperatorMerge.java:248) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OperatorMerge$MergeSubscriber.onNext(OperatorMerge.java:148) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeMap$MapSubscriber.onNext(OnSubscribeMap.java:77) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeDoOnEach$DoOnEachSubscriber.onNext(OnSubscribeDoOnEach.java:101) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OperatorSubscribeOn$1$1.onNext(OperatorSubscribeOn.java:53) [rxjava-1.2.5.jar:1.2.5] at com.microsoft.azure.documentdb.rx.internal.RxDocumentClientImpl.lambda$createPutMoreContentObservable$26(RxDocumentClientImpl.java:682) [azure-documentdb-rx-0.9.0-rc2.jar:na] at com.microsoft.azure.documentdb.rx.internal.RxDocumentClientImpl$$Lambda$19/1881124608.call(Unknown Source) [azure-documentdb-rx-0.9.0-rc2.jar:na] at rx.Observable.unsafeSubscribe(Observable.java:10144) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OperatorSubscribeOn$1.call(OperatorSubscribeOn.java:94) 
[rxjava-1.2.5.jar:1.2.5] at rx.internal.schedulers.ImmediateScheduler$InnerImmediateScheduler.schedule(ImmediateScheduler.java:58) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OperatorSubscribeOn.call(OperatorSubscribeOn.java:45) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OperatorSubscribeOn.call(OperatorSubscribeOn.java:30) [rxjava-1.2.5.jar:1.2.5] at rx.Observable.unsafeSubscribe(Observable.java:10144) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeDoOnEach.call(OnSubscribeDoOnEach.java:41) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeDoOnEach.call(OnSubscribeDoOnEach.java:30) [rxjava-1.2.5.jar:1.2.5] at rx.Observable.unsafeSubscribe(Observable.java:10144) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeMap.call(OnSubscribeMap.java:48) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeMap.call(OnSubscribeMap.java:33) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30) [rxjava-1.2.5.jar:1.2.5] at rx.Observable.unsafeSubscribe(Observable.java:10144) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeMap.call(OnSubscribeMap.java:48) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeMap.call(OnSubscribeMap.java:33) [rxjava-1.2.5.jar:1.2.5] at rx.Observable.unsafeSubscribe(Observable.java:10144) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:51) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:35) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48) [rxjava-1.2.5.jar:1.2.5] at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30) [rxjava-1.2.5.jar:1.2.5] at rx.Observable.subscribe(Observable.java:10240) [rxjava-1.2.5.jar:1.2.5] at rx.Observable.subscribe(Observable.java:10207) 
[rxjava-1.2.5.jar:1.2.5] at rx.observables.BlockingObservable.blockForSingle(BlockingObservable.java:444) [rxjava-1.2.5.jar:1.2.5] at rx.observables.BlockingObservable.single(BlockingObservable.java:341) [rxjava-1.2.5.jar:1.2.5] at com.microsoft.azure.documentdb.BridgeInternal$1.readCollection(BridgeInternal.java:74) [azure-documentdb-rx-0.9.0-rc2.jar:na] at com.microsoft.azure.documentdb.internal.routing.ClientCollectionCache.readCollection(ClientCollectionCache.java:33) [azure-documentdb-1.13.0.jar:na] at com.microsoft.azure.documentdb.internal.routing.ClientCollectionCache.getByRid(ClientCollectionCache.java:26) [azure-documentdb-1.13.0.jar:na] at com.microsoft.azure.documentdb.internal.routing.CollectionCache$3.call(CollectionCache.java:119) [azure-documentdb-1.13.0.jar:na] at com.microsoft.azure.documentdb.internal.routing.CollectionCache$3.call(CollectionCache.java:116) [azure-documentdb-1.13.0.jar:na] at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_25] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_25] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_25] at java.lang.Thread.run(Thread.java:745) [na:1.8.0_25]

RxJava 1.x now EOL

As per the RxJava 1.x README, it is marked as EOL:

As of March 31, 2018, The RxJava 1.x branch and version is end-of-life (EOL). No further development, bugfixes, documentation changes, PRs, releases or maintenance will be performed by the project on the 1.x line.

Currently this project uses version 1.3.3. Given the EOL, are there any plans to switch to the 2.x series?

RxNetty

The RxNetty version being used is 0.4.2. There is currently no newer RxNetty release, and as per the current status:

1.0.x will be the RxNetty-2 update. It is currently pending an RFC for API changes.

PS: I did not find any link to a forum, so I am posting this question as an issue. Let me know if there is a user forum where such queries can be posted.

Support for LINQ operators / Pagination

Hi,

We have an application that requires pagination. Searching the internet, I found the reference below:
https://azure.microsoft.com/en-us/updates/documentdb-paging-with-top-and-new-linq-operators/

However, reading further revealed that LINQ is specific to the .NET programming model.
Can you please suggest whether this is possible with the Java SDK (I searched all the methods on the client but couldn't find any)?
Or, what is the best solution for pagination over 3000+ records (with a page size of approximately 50)?

How to partition a query across different physical partitions

Hi,

I'm trying to execute a very large query that returns millions of records, so I want to partition the query and use multiple machines to process the results.

My logical partition key is a UUID per document, so it will not help me allocate different parts of the data to each worker node. Can I get the physical partition IDs and execute my query only within a particular physical partition?

Here's what I have tried:

FeedOptions feedOptions = new FeedOptions();
feedOptions.setEnableCrossPartitionQuery(false);
feedOptions.setPartitionKeyRangeIdInternal("0");

client.queryDocuments(collectionPath, "SELECT * FROM e where e.docType = 'address'", feedOptions)
.flatMapIterable(FeedResponse::getResults);

But changing the partitionKeyRangeId doesn't seem to change the results at all.

Please advise.

The reply message length 4976828 is less than the maximum message length 4194304

Description:
I realize this isn't the correct forum for this bug, as this is with the MongoDB API rather than the Cosmos DB SQL API, but I don't see a related project I can post an issue against.

When reading data using Azure Databricks against a Cosmos DB MongoDB API:

df = spark.read.format("com.mongodb.spark.sql.DefaultSource").option("database", "ingress").option("collection", "ingress").option("pipeline", pipeline).load()
cnt = df.count()

When trying to perform a find, I receive the following error:

The reply message length 4976828 is less than the maximum message length 4194304

This is completely non-deterministic; sometimes it works, sometimes it does not (with the same pipeline details provided). The collection being read has roughly 170 MB of data over 1 million records. I have 5,000 RUs for this collection.

Compilation errors

Hi There,

I downloaded the v2.0.1 release and attempted to compile it. I am getting a number of compilation errors, reproduced below.

$ mvn clean install
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Build Order:
[INFO]
[INFO] Azure Cosmos DB SQL API
[INFO] Async SDK for SQL API of Azure Cosmos DB Service
[INFO] Async SDK for SQL API of Azure Cosmos DB Service - Examples
[INFO] Async SDK for SQL API of Azure Cosmos DB Service - Benchmarking tool
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Azure Cosmos DB SQL API 2.0.1
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ azure-cosmosdb-parent ---
[INFO]
[INFO] --- maven-install-plugin:2.4:install (default-install) @ azure-cosmosdb-parent ---
[INFO] Installing /Users/dbhati2/Downloads/azure-cosmosdb-java-2.0.1/pom.xml to /Users/dbhati2/.m2/repository/com/microsoft/azure/azure-cosmosdb-parent/2.0.1/azure-cosmosdb-parent-2.0.1.pom
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Async SDK for SQL API of Azure Cosmos DB Service 2.0.1
[INFO] ------------------------------------------------------------------------
Downloading: http://xxx/content/groups/public/org/apache/maven/plugins/maven-javadoc-plugin/2.9.1/maven-javadoc-plugin-2.9.1.pom
Downloaded: http://xxx/content/groups/public/org/apache/maven/plugins/maven-javadoc-plugin/2.9.1/maven-javadoc-plugin-2.9.1.pom (16 KB at 40.0 KB/sec)
Downloading: http://xxx/content/groups/public/org/apache/maven/plugins/maven-javadoc-plugin/2.9.1/maven-javadoc-plugin-2.9.1.jar
Downloaded: http://xxx/content/groups/public/org/apache/maven/plugins/maven-javadoc-plugin/2.9.1/maven-javadoc-plugin-2.9.1.jar (360 KB at 194.0 KB/sec)
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ azure-cosmosdb ---
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ azure-cosmosdb ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /Users/dbhati2/Downloads/azure-cosmosdb-java-2.0.1/sdk/src/main/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.6.0:compile (default-compile) @ azure-cosmosdb ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 184 source files to /Users/dbhati2/Downloads/azure-cosmosdb-java-2.0.1/sdk/target/classes
[INFO] /Users/dbhati2/Downloads/azure-cosmosdb-java-2.0.1/sdk/src/main/java/com/microsoft/azure/cosmosdb/JsonSerializable.java: Some input files use unchecked or unsafe operations.
[INFO] /Users/dbhati2/Downloads/azure-cosmosdb-java-2.0.1/sdk/src/main/java/com/microsoft/azure/cosmosdb/JsonSerializable.java: Recompile with -Xlint:unchecked for details.
[INFO] -------------------------------------------------------------
[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] /Users/dbhati2/Downloads/azure-cosmosdb-java-2.0.1/sdk/src/main/java/com/microsoft/azure/cosmosdb/rx/internal/caches/AsyncCache.java:[93,60] no suitable method found for flatMap((vaule)->{...; },(err)->{ l...; },()->Observ[...]pty())
method rx.Observable.flatMap(rx.functions.Func1<? super TValue,? extends rx.Observable<? extends R>>) is not applicable
(cannot infer type-variable(s) R
(actual and formal argument lists differ in length))
method rx.Observable.flatMap(rx.functions.Func1<? super TValue,? extends rx.Observable<? extends R>>,int) is not applicable
(cannot infer type-variable(s) R
(actual and formal argument lists differ in length))
method rx.Observable.flatMap(rx.functions.Func1<? super TValue,? extends rx.Observable<? extends R>>,rx.functions.Func1<? super java.lang.Throwable,? extends rx.Observable<? extends R>>,rx.functions.Func0<? extends rx.Observable<? extends R>>) is not applicable
(no instance(s) of type variable(s) T exist so that rx.Observable conforms to ? extends rx.Observable<? extends R>)
method rx.Observable.flatMap(rx.functions.Func1<? super TValue,? extends rx.Observable<? extends R>>,rx.functions.Func1<? super java.lang.Throwable,? extends rx.Observable<? extends R>>,rx.functions.Func0<? extends rx.Observable<? extends R>>,int) is not applicable
(cannot infer type-variable(s) R
(actual and formal argument lists differ in length))
method rx.Observable.<U,R>flatMap(rx.functions.Func1<? super TValue,? extends rx.Observable<? extends U>>,rx.functions.Func2<? super TValue,? super U,? extends R>) is not applicable
(cannot infer type-variable(s) U,R
(actual and formal argument lists differ in length))
method rx.Observable.<U,R>flatMap(rx.functions.Func1<? super TValue,? extends rx.Observable<? extends U>>,rx.functions.Func2<? super TValue,? super U,? extends R>,int) is not applicable
(cannot infer type-variable(s) U,R
(argument mismatch; int is not a functional interface))
[INFO] 1 error
[INFO] -------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Azure Cosmos DB SQL API ............................ SUCCESS [ 0.208 s]
[INFO] Async SDK for SQL API of Azure Cosmos DB Service ... FAILURE [ 4.723 s]
[INFO] Async SDK for SQL API of Azure Cosmos DB Service - Examples SKIPPED
[INFO] Async SDK for SQL API of Azure Cosmos DB Service - Benchmarking tool SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 5.026 s
[INFO] Finished at: 2018-08-31T13:40:30-07:00
[INFO] Final Memory: 20M/116M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.6.0:compile (default-compile) on project azure-cosmosdb: Compilation failure
[ERROR] /Users/dbhati2/Downloads/azure-cosmosdb-java-2.0.1/sdk/src/main/java/com/microsoft/azure/cosmosdb/rx/internal/caches/AsyncCache.java:[93,60] no suitable method found for flatMap((vaule)->{...; },(err)->{ l...; },()->Observ[...]pty())
[ERROR] method rx.Observable.flatMap(rx.functions.Func1<? super TValue,? extends rx.Observable<? extends R>>) is not applicable
[ERROR] (cannot infer type-variable(s) R
[ERROR] (actual and formal argument lists differ in length))
[ERROR] method rx.Observable.flatMap(rx.functions.Func1<? super TValue,? extends rx.Observable<? extends R>>,int) is not applicable
[ERROR] (cannot infer type-variable(s) R
[ERROR] (actual and formal argument lists differ in length))
[ERROR] method rx.Observable.flatMap(rx.functions.Func1<? super TValue,? extends rx.Observable<? extends R>>,rx.functions.Func1<? super java.lang.Throwable,? extends rx.Observable<? extends R>>,rx.functions.Func0<? extends rx.Observable<? extends R>>) is not applicable
[ERROR] (no instance(s) of type variable(s) T exist so that rx.Observable conforms to ? extends rx.Observable<? extends R>)
[ERROR] method rx.Observable.flatMap(rx.functions.Func1<? super TValue,? extends rx.Observable<? extends R>>,rx.functions.Func1<? super java.lang.Throwable,? extends rx.Observable<? extends R>>,rx.functions.Func0<? extends rx.Observable<? extends R>>,int) is not applicable
[ERROR] (cannot infer type-variable(s) R
[ERROR] (actual and formal argument lists differ in length))
[ERROR] method rx.Observable.<U,R>flatMap(rx.functions.Func1<? super TValue,? extends rx.Observable<? extends U>>,rx.functions.Func2<? super TValue,? super U,? extends R>) is not applicable
[ERROR] (cannot infer type-variable(s) U,R
[ERROR] (actual and formal argument lists differ in length))
[ERROR] method rx.Observable.<U,R>flatMap(rx.functions.Func1<? super TValue,? extends rx.Observable<? extends U>>,rx.functions.Func2<? super TValue,? super U,? extends R>,int) is not applicable
[ERROR] (cannot infer type-variable(s) U,R
[ERROR] (argument mismatch; int is not a functional interface))
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn -rf :azure-cosmosdb

Crash inside connector while creating document

The crash occurs when calling `client.createDocument(documentLink, assetState, requestOptions, true)`. The test command line and resulting stack trace follow.

/usr/lib/jvm/java-8-openjdk-amd64/bin/java -ea -Didea.test.cyclic.buffer.size=1048576 -javaagent:/snap/intellij-idea-ultimate/139/lib/idea_rt.jar=42743:/snap/intellij-idea-ultimate/139/bin -Dfile.encoding=UTF-8 -classpath /snap/intellij-idea-ultimate/139/lib/idea_rt.jar:/snap/intellij-idea-ultimate/139/plugins/junit/lib/junit-rt.jar:/snap/intellij-idea-ultimate/139/plugins/junit/lib/junit5-rt.jar:/home/pavel/.m2/repository/org/junit/platform/junit-platform-launcher/1.3.2/junit-platform-launcher-1.3.2.jar:/home/pavel/.m2/repository/org/apiguardian/apiguardian-api/1.0.0/apiguardian-api-1.0.0.jar:/home/pavel/.m2/repository/org/junit/platform/junit-platform-engine/1.3.2/junit-platform-engine-1.3.2.jar:/home/pavel/.m2/repository/org/junit/platform/junit-platform-commons/1.3.2/junit-platform-commons-1.3.2.jar:/home/pavel/.m2/repository/org/opentest4j/opentest4j/1.1.1/opentest4j-1.1.1.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/charsets.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext/cldrdata.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext/dnsns.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext/icedtea-sound.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext/jaccess.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext/java-atk-wrapper.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext/localedata.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext/nashorn.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext/sunec.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext/sunjce_provider.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext/sunpkcs11.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext/zipfs.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/jce.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/jsse.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/management-agent.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/resources.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar:/home/pavel/devel/signal-processing/collector/out/test/classes:/home/pavel/devel/signal-processing/collector/out/test/re
sources:/home/pavel/devel/signal-processing/collector/out/production/classes:/home/pavel/devel/signal-processing/collector/out/production/resources:/home/pavel/devel/signal-processing/normalizer/out/production/classes:/home/pavel/devel/signal-processing/normalizer/out/production/resources:/home/pavel/.gradle/caches/modules-2/files-2.1/com.microsoft.azure/azure-eventhubs-eph/2.4.0/8583fad2314cd3d73c9fb535ff67ac5159f06233/azure-eventhubs-eph-2.4.0.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.microsoft.azure/azure-eventhubs/2.2.0/de54fdebf55f78e478c75d5c346ca14068e1d53f/azure-eventhubs-2.2.0.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.microsoft.azure/azure-cosmosdb/2.4.4/2c648881a52c6c07cc65f1d175b026fae9cf1711/azure-cosmosdb-2.4.4.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.datastax.cassandra/cassandra-driver-core/3.6.0/1d689ae757862f7c497dd6b186793d1bf921fd28/cassandra-driver-core-3.6.0.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.microsoft.azure/azure-cosmosdb-direct/2.4.4/c819653093c4dbe924e6a2f5a6ebec0c017522c7/azure-cosmosdb-direct-2.4.4.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.microsoft.azure/azure-storage/8.0.0/f6759c16ade4e2a05bc1dfbaf55161b9ed0e78b9/azure-storage-8.0.0.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.microsoft.azure/azure-keyvault-core/1.0.0/e89dd5e621e21b753096ec6a03f203c01482c612/azure-keyvault-core-1.0.0.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.google.guava/guava/27.1-jre/e47b59c893079b87743cdcfb6f17ca95c08c592c/guava-27.1-jre.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/org.junit.jupiter/junit-jupiter-api/5.3.2/3602b523ffae9dabc04c329d73ab39ab04b3cbe2/junit-jupiter-api-5.3.2.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/org.hamcrest/hamcrest-all/1.3/63a21ebc981131004ad02e0434e799fd7f3a8d5a/hamcrest-all-1.3.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/org.junit.jupiter/junit-jupiter-engine/5.3.2/69350316a14c46d8f6c4c909e469ec9edf58c4f8/junit-jupite
r-engine-5.3.2.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/org.apache.logging.log4j/log4j-api/2.6.1/d2ad7ee0b0450204d720b4bc3f99b50e208bf51e/log4j-api-2.6.1.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/org.apache.kafka/kafka-clients/1.1.0/87ab9fd8c521eb2573d54dac313d958694b12d7d/kafka-clients-1.1.0.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.typesafe/config/1.3.3/4b68c2d5d0403bb4015520fcfabc88d0cbc4d117/config-1.3.3.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/org.apache.logging.log4j/log4j-core/2.6.1/2b557bf1023c3a3a0f7f200fafcd7641b89cbb83/log4j-core-2.6.1.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/io.prometheus/simpleclient_hotspot/0.6.0/2703b02c4b2abb078de8365f4ef3b7d5e451382d/simpleclient_hotspot-0.6.0.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/org.apache.logging.log4j/log4j-slf4j-impl/2.6.1/f473a40ee76f71f70bd123f9bc3acf572d9b288f/log4j-slf4j-impl-2.6.1.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/io.prometheus/simpleclient_httpserver/0.6.0/d46c7273a4dd10e611e3cd617f46eb8b2dae8fdc/simpleclient_httpserver-0.6.0.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/io.prometheus/simpleclient_common/0.6.0/8b4f119cfdff67a02a066e6e519bb2bab0a2a1b/simpleclient_common-0.6.0.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/io.prometheus/simpleclient/0.6.0/26073e94cbfa6780e10ef524e542cf2a64dabe67/simpleclient-0.6.0.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.microsoft.azure/qpid-proton-j-extensions/1.1.0/53cc795517e3947b04046290d47e8bc0f7863ad7/qpid-proton-j-extensions-1.1.0.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/org.apache.qpid/proton-j/0.31.0/2716896d77d7be83d35b471069c90356ea450167/proton-j-0.31.0.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.microsoft.azure/azure-cosmosdb-gateway/2.4.4/f47e8beaeba13f34a489325f0488fbc59eeebb53/azure-cosmosdb-gateway-2.4.4.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.microsoft.azure/azure-cosmosdb-commons/2.4.4/5fd143218725d9bbed95895799968c1
e1711bdce/azure-cosmosdb-commons-2.4.4.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/io.dropwizard.metrics/metrics-core/3.2.2/cd9886f498ee2ab2d994f0c779e5553b2c450416/metrics-core-3.2.2.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.openmatics.apps.components/real-time-data-service-message-api/1.1.1/177b8544f11b830666f0f766ea813c9bebc33a0b/real-time-data-service-message-api-1.1.1.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-api/1.7.25/da76ca59f6a57ee3102f8f9bd9cee742973efa8a/slf4j-api-1.7.25.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.google.code.gson/gson/2.8.5/f645ed69d595b24d4cf8b3fbb64cc505bede8829/gson-2.8.5.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.8/11283f21cc480aa86c4df7a0a3243ec508372ed2/jackson-databind-2.9.8.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.fasterxml.uuid/java-uuid-generator/3.1.4/ae83b2b74ee694812130dc1b3eec17df04498f3a/java-uuid-generator-3.1.4.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/commons-io/commons-io/2.5/2852e6e05fbb95076fc091f6d1780f1f8fe35e0f/commons-io-2.5.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.github.davidmoten/rxjava-extras/0.8.0.17/1e3f06a9c64c5d0a1a281d9420526ce7673d04ea/rxjava-extras-0.8.0.17.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/io.reactivex/rxjava/1.3.8/8c192792ad2e65a90867ab418ac49703f44d2baf/rxjava-1.3.8.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/io.reactivex/rxjava-string/1.1.1/cce4404cabb9e1efed91d77e57c581388ad981a5/rxjava-string-1.1.1.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/io.reactivex/rxnetty/0.4.20/5a2f67135eb7755a5ac5e1474d517d16260aefaf/rxnetty-0.4.20.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/io.netty/netty-handler-proxy/4.1.32.Final/58b621246262127b97a871b88c09374c8c324cb7/netty-handler-proxy-4.1.32.Final.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.datatype/jackson-datatype-jdk8/2.9.8/bcd02aa9195390e23747ed40
bf76be869ad3a2fb/jackson-datatype-jdk8-2.9.8.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec-http/4.1.32.Final/b9218adba7353ad5a75fcb639e4755d64bd6ddf/netty-codec-http-4.1.32.Final.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.datatype/jackson-datatype-jsr310/2.9.8/28ad1bced632ba338e51c825a652f6e11a8e6eac/jackson-datatype-jsr310-2.9.8.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/io.netty/netty-handler/4.1.32.Final/b4e3fa13f219df14a9455cc2111f133374428be0/netty-handler-4.1.32.Final.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec-socks/4.1.32.Final/b1e83cb772f842839dbeebd9a1f053da98bf56d2/netty-codec-socks-4.1.32.Final.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec/4.1.32.Final/8f32bd79c5a16f014a4372ed979dc62b39ede33a/netty-codec-4.1.32.Final.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/io.netty/netty-transport/4.1.32.Final/d5e5a8ff9c2bc7d91ddccc536a5aca1a4355bd8b/netty-transport-4.1.32.Final.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-text/1.6/ba72cf0c40cf701e972fe7720ae844629f4ecca2/commons-text-1.6.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-lang3/3.8.1/6505a72a097d9270f7a9e7bf42c4238283247755/commons-lang3-3.8.1.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.google.guava/failureaccess/1.0.1/1dcf1de382a0bf95a3d8b0849546c88bac1292c9/failureaccess-1.0.1.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.google.guava/listenablefuture/9999.0-empty-to-avoid-conflict-with-guava/b421526c5f297295adef1c886e5246c39d4ac629/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.github.spotbugs/spotbugs-annotations/3.1.12/ba2c77a05091820668987292f245f3b089387bfa/spotbugs-annotations-3.1.12.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.google.code.findbugs/jsr305/3.0.2/25ea2e8b0c338a877313bd4672d3fe056ea78f0d/jsr305-3.0.2.j
ar:/home/pavel/.gradle/caches/modules-2/files-2.1/org.checkerframework/checker-qual/2.5.2/cea74543d5904a30861a61b4643a5f2bb372efc4/checker-qual-2.5.2.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.google.errorprone/error_prone_annotations/2.2.0/88e3c593e9b3586e1c6177f89267da6fc6986f0c/error_prone_annotations-2.2.0.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.google.j2objc/j2objc-annotations/1.1/ed28ded51a8b1c6b112568def5f4b455e6809019/j2objc-annotations-1.1.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/org.codehaus.mojo/animal-sniffer-annotations/1.17/f97ce6decaea32b36101e37979f8b647f00681fb/animal-sniffer-annotations-1.17.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/org.junit.platform/junit-platform-commons/1.3.2/378d1d1b162426ad031522f6d51e3bf28d1631a4/junit-platform-commons-1.3.2.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/org.lz4/lz4-java/1.4/9bedb74f461a87ff2161bdf0778ad8ca6bad3e1c/lz4-java-1.4.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/org.xerial.snappy/snappy-java/1.1.7.1/d5190b41f3de61e3b83d692322d58630252bc8c3/snappy-java-1.1.7.1.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.github.jnr/jnr-posix/3.0.44/1f8e4551454e613c04f6d4045ed9d5b98e21980f/jnr-posix-3.0.44.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.github.jnr/jnr-ffi/2.1.7/31a7391a212069303935a1df29566b7372d3ef9f/jnr-ffi-2.1.7.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-core/2.9.8/f5a654e4675769c716e5b387830d19b501ca191/jackson-core-2.9.8.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/commons-validator/commons-validator/1.6/e989d1e87cdd60575df0765ed5bac65c905d7908/commons-validator-1.6.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-collections4/4.2/54ebea0a5b653d3c680131e73fe807bb8f78c4ed/commons-collections4-4.2.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-annotations/2.9.0/7c10d545325e3a6e72e06381afe469fd40eb701/jackson-an
notations-2.9.0.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/org.junit.platform/junit-platform-engine/1.3.2/c54bc1d4654bd1ef15fccf512ce664184085969/junit-platform-engine-1.3.2.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/io.netty/netty-buffer/4.1.32.Final/46ede57693788181b2cafddc3a5967ed2f621c8/netty-buffer-4.1.32.Final.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/io.netty/netty-resolver/4.1.32.Final/3e0114715cb125a12db8d982b2208e552a91256d/netty-resolver-4.1.32.Final.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/org.apiguardian/apiguardian-api/1.0.0/3ef5276905e36f4d8055fe3cb0bdcc7503ffc85d/apiguardian-api-1.0.0.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/org.opentest4j/opentest4j/1.1.1/efd9f971e91074491ea55b19f67b13470cf4fcdd/opentest4j-1.1.1.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.github.jnr/jffi/1.2.16/5c1149dfcc9a16f85c8d9b8797f03806667cb9f1/jffi-1.2.16.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/joda-time/joda-time/2.9.9/f7b520c458572890807d143670c9b24f4de90897/joda-time-2.9.9.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/org.ow2.asm/asm-commons/5.0.3/a7111830132c7f87d08fe48cb0ca07630f8cb91c/asm-commons-5.0.3.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/org.ow2.asm/asm-analysis/5.0.3/c7126aded0e8e13fed5f913559a0dd7b770a10f3/asm-analysis-5.0.3.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/org.ow2.asm/asm-util/5.0.3/1512e5571325854b05fb1efce1db75fcced54389/asm-util-5.0.3.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/org.ow2.asm/asm-tree/5.0.3/287749b48ba7162fb67c93a026d690b29f410bed/asm-tree-5.0.3.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/org.ow2.asm/asm/5.0.3/dcc2193db20e19e1feca8b1240dbbc4e190824fa/asm-5.0.3.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.github.jnr/jnr-x86asm/1.0.2/6936bbd6c5b235665d87bd450f5e13b52d4b48/jnr-x86asm-1.0.2.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.github.jnr/jnr-constants/0.9.9/33f23994e09aeb49880aa01e12e8e9eff058c14c/jnr-constan
ts-0.9.9.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/commons-beanutils/commons-beanutils/1.9.2/7a87d845ad3a155297e8f67d9008f4c1e5656b71/commons-beanutils-1.9.2.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/commons-digester/commons-digester/1.8.1/3dec9b9c7ea9342d4dbe8c38560080d85b44a015/commons-digester-1.8.1.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/commons-logging/commons-logging/1.2/4bfc12adfe4842bf07b657f0369c4cb522955686/commons-logging-1.2.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/com.github.jnr/jffi/1.2.16/3c1f0edf2df2c6e0419d60d0baa59659211624cb/jffi-1.2.16-native.jar:/home/pavel/.gradle/caches/modules-2/files-2.1/commons-collections/commons-collections/3.2.2/8ad72fe39fa8c91eaaf12aadb21e0c3661fe26d5/commons-collections-3.2.2.jar:/home/pavel/devel/signal-processing/core/out/production/classes:/home/pavel/devel/signal-processing/service-core/out/production/classes:/home/pavel/.gradle/caches/modules-2/files-2.1/io.netty/netty-common/4.1.32.Final/e95de4f762606f492328e180c8ad5438565a5e3b/netty-common-4.1.32.Final.jar com.intellij.rt.execution.junit.JUnitStarter -ideVersion5 -junit5 com.openmatics.apps.sigproc.collector.CosmosStateSinkTest,testSink

java.util.concurrent.ExecutionException: java.lang.ClassCastException: Cannot cast java.lang.Integer to java.lang.String

	at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
	at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
	at com.openmatics.apps.sigproc.collector.CosmosStateSinkTest.testSink(CosmosStateSinkTest.java:35)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:532)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:115)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:171)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:72)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:167)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:114)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:59)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$4(NodeTestTask.java:108)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:72)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:98)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:74)
	at java.util.ArrayList.forEach(ArrayList.java:1257)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$4(NodeTestTask.java:112)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:72)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:98)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:74)
	at java.util.ArrayList.forEach(ArrayList.java:1257)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$4(NodeTestTask.java:112)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:72)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:98)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:74)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:220)
	at org.junit.platform.launcher.core.DefaultLauncher.lambda$execute$6(DefaultLauncher.java:188)
	at org.junit.platform.launcher.core.DefaultLauncher.withInterceptedStreams(DefaultLauncher.java:202)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:181)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:128)
	at com.intellij.junit5.JUnit5IdeaTestRunner.startRunnerWithArgs(JUnit5IdeaTestRunner.java:69)
	at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
	at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
	at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
Caused by: java.lang.ClassCastException: Cannot cast java.lang.Integer to java.lang.String
	at java.lang.Class.cast(Class.java:3369)
	at com.microsoft.azure.cosmosdb.JsonSerializable.getObject(JsonSerializable.java:301)
	at com.microsoft.azure.cosmosdb.PartitionKeyDefinition.getVersion(PartitionKeyDefinition.java:81)
	at com.microsoft.azure.cosmosdb.CommonsBridgeInternal.isV2(CommonsBridgeInternal.java:28)
	at com.microsoft.azure.cosmosdb.internal.routing.PartitionKeyInternalHelper.getEffectivePartitionKeyString(PartitionKeyInternalHelper.java:166)
	at com.microsoft.azure.cosmosdb.internal.routing.PartitionKeyInternalHelper.getEffectivePartitionKeyString(PartitionKeyInternalHelper.java:139)
	at com.microsoft.azure.cosmosdb.internal.directconnectivity.AddressResolver.tryResolveServerPartitionByPartitionKey(AddressResolver.java:660)
	at com.microsoft.azure.cosmosdb.internal.directconnectivity.AddressResolver.tryResolveServerPartitionAsync(AddressResolver.java:239)
	at com.microsoft.azure.cosmosdb.internal.directconnectivity.AddressResolver.lambda$resolveAddressesAndIdentityAsync$16(AddressResolver.java:443)
	at rx.internal.operators.SingleOnSubscribeMap$MapSubscriber.onSuccess(SingleOnSubscribeMap.java:66)
	at rx.internal.util.ScalarSynchronousSingle$1.call(ScalarSynchronousSingle.java:36)
	at rx.internal.util.ScalarSynchronousSingle$1.call(ScalarSynchronousSingle.java:32)
	at rx.Single.subscribe(Single.java:1979)
	at rx.Single$2$1.onSuccess(Single.java:687)
	at rx.Single$2$1.onSuccess(Single.java:683)
	at rx.internal.operators.SingleOnSubscribeMap$MapSubscriber.onSuccess(SingleOnSubscribeMap.java:74)
	at rx.internal.util.ScalarSynchronousSingle$1.call(ScalarSynchronousSingle.java:36)
	at rx.internal.util.ScalarSynchronousSingle$1.call(ScalarSynchronousSingle.java:32)
	at rx.Single.subscribe(Single.java:1979)
	at rx.Single$2$1.onSuccess(Single.java:687)
	at rx.Single$2$1.onSuccess(Single.java:683)
	at rx.internal.operators.SingleOnSubscribeMap$MapSubscriber.onSuccess(SingleOnSubscribeMap.java:74)
	at rx.internal.operators.SingleOperatorOnErrorResumeNext$2.onSuccess(SingleOperatorOnErrorResumeNext.java:63)
	at rx.internal.operators.OnSubscribeSingle$1.onCompleted(OnSubscribeSingle.java:55)
	at rx.internal.operators.NotificationLite.accept(NotificationLite.java:125)
	at rx.internal.operators.CachedObservable$ReplayProducer.replay(CachedObservable.java:403)
	at rx.internal.operators.CachedObservable$CacheState.dispatch(CachedObservable.java:220)
	at rx.internal.operators.CachedObservable$CacheState.onCompleted(CachedObservable.java:211)
	at rx.internal.operators.CachedObservable$CacheState$1.onCompleted(CachedObservable.java:179)
	at rx.internal.producers.SingleProducer.request(SingleProducer.java:75)
	at rx.Subscriber.setProducer(Subscriber.java:209)
	at rx.internal.operators.SingleLiftObservableOperator$WrapSubscriberIntoSingle.onSuccess(SingleLiftObservableOperator.java:76)
	at rx.internal.operators.SingleDoOnEvent$SingleDoOnEventSubscriber.onSuccess(SingleDoOnEvent.java:63)
	at rx.internal.operators.SingleDoOnEvent$SingleDoOnEventSubscriber.onSuccess(SingleDoOnEvent.java:63)
	at rx.internal.util.ScalarSynchronousSingle$1.call(ScalarSynchronousSingle.java:36)
	at rx.internal.util.ScalarSynchronousSingle$1.call(ScalarSynchronousSingle.java:32)
	at rx.Single.subscribe(Single.java:1979)
	at rx.Single$2$1.onSuccess(Single.java:687)
	at rx.Single$2$1.onSuccess(Single.java:683)
	at rx.internal.operators.SingleOnSubscribeMap$MapSubscriber.onSuccess(SingleOnSubscribeMap.java:74)
	at rx.internal.operators.OnSubscribeSingle$1.onCompleted(OnSubscribeSingle.java:55)
	at rx.internal.producers.SingleDelayedProducer.emit(SingleDelayedProducer.java:110)
	at rx.internal.producers.SingleDelayedProducer.setValue(SingleDelayedProducer.java:85)
	at rx.internal.operators.OperatorToObservableList$1.onCompleted(OperatorToObservableList.java:98)
	at rx.internal.operators.OperatorMerge$MergeSubscriber.emitLoop(OperatorMerge.java:656)
	at rx.internal.operators.OperatorMerge$MergeSubscriber.emit(OperatorMerge.java:568)
	at rx.internal.operators.OperatorMerge$MergeSubscriber.onCompleted(OperatorMerge.java:281)
	at rx.internal.operators.OnSubscribeMap$MapSubscriber.onCompleted(OnSubscribeMap.java:97)
	at rx.observers.Subscribers$5.onCompleted(Subscribers.java:225)
	at rx.internal.operators.OperatorObserveOn$ObserveOnSubscriber.checkTerminated(OperatorObserveOn.java:281)
	at rx.internal.operators.OperatorObserveOn$ObserveOnSubscriber.call(OperatorObserveOn.java:216)
	at rx.internal.schedulers.ImmediateScheduler$InnerImmediateScheduler.schedule(ImmediateScheduler.java:58)
	at rx.internal.operators.OperatorObserveOn$ObserveOnSubscriber.schedule(OperatorObserveOn.java:188)
	at rx.internal.operators.OperatorObserveOn$ObserveOnSubscriber.onCompleted(OperatorObserveOn.java:172)
	at rx.observables.AsyncOnSubscribe$6.onCompleted(AsyncOnSubscribe.java:334)
	at rx.observers.SerializedObserver.onCompleted(SerializedObserver.java:176)
	at rx.observers.SerializedSubscriber.onCompleted(SerializedSubscriber.java:64)
	at rx.internal.operators.OnSubscribeConcatMap$ConcatMapSubscriber.drain(OnSubscribeConcatMap.java:246)
	at rx.internal.operators.OnSubscribeConcatMap$ConcatMapSubscriber.innerCompleted(OnSubscribeConcatMap.java:209)
	at rx.internal.operators.OnSubscribeConcatMap$ConcatMapInnerSubscriber.onCompleted(OnSubscribeConcatMap.java:345)
	at rx.internal.operators.OperatorOnBackpressureBuffer$BufferSubscriber.complete(OperatorOnBackpressureBuffer.java:163)
	at rx.internal.util.BackpressureDrainManager.drain(BackpressureDrainManager.java:187)
	at rx.internal.util.BackpressureDrainManager.terminateAndDrain(BackpressureDrainManager.java:115)
	at rx.internal.operators.OperatorOnBackpressureBuffer$BufferSubscriber.onCompleted(OperatorOnBackpressureBuffer.java:134)
	at rx.internal.operators.BufferUntilSubscriber.onCompleted(BufferUntilSubscriber.java:156)
	at rx.observables.AsyncOnSubscribe$AsyncOuterManager$1.onCompleted(AsyncOnSubscribe.java:612)
	at rx.observers.SafeSubscriber.onCompleted(SafeSubscriber.java:79)
	at rx.internal.operators.OnSubscribeDoOnEach$DoOnEachSubscriber.onCompleted(OnSubscribeDoOnEach.java:70)
	at rx.internal.operators.OnSubscribeMap$MapSubscriber.onCompleted(OnSubscribeMap.java:97)
	at rx.internal.producers.SingleProducer.request(SingleProducer.java:75)
	at rx.Subscriber.setProducer(Subscriber.java:209)
	at rx.Subscriber.setProducer(Subscriber.java:205)
	at rx.Subscriber.setProducer(Subscriber.java:205)
	at rx.internal.operators.OnSubscribeMap$MapSubscriber.setProducer(OnSubscribeMap.java:102)
	at rx.internal.operators.SingleLiftObservableOperator$WrapSubscriberIntoSingle.onSuccess(SingleLiftObservableOperator.java:76)
	at rx.internal.operators.OnSubscribeSingle$1.onCompleted(OnSubscribeSingle.java:55)
	at rx.internal.operators.OnSubscribeRedo$4$1.onCompleted(OnSubscribeRedo.java:321)
	at rx.internal.operators.OperatorMerge$MergeSubscriber.emitLoop(OperatorMerge.java:656)
	at rx.internal.operators.OperatorMerge$MergeSubscriber.emit(OperatorMerge.java:568)
	at rx.internal.operators.OperatorMerge$MergeSubscriber.onCompleted(OperatorMerge.java:281)
	at rx.internal.operators.OnSubscribeMap$MapSubscriber.onCompleted(OnSubscribeMap.java:97)
	at rx.internal.operators.OnSubscribeMap$MapSubscriber.onCompleted(OnSubscribeMap.java:97)
	at rx.internal.operators.OnSubscribeRedo$3$1.onNext(OnSubscribeRedo.java:298)
	at rx.internal.operators.OnSubscribeRedo$3$1.onNext(OnSubscribeRedo.java:284)
	at rx.internal.operators.NotificationLite.accept(NotificationLite.java:135)
	at rx.subjects.SubjectSubscriptionManager$SubjectObserver.emitNext(SubjectSubscriptionManager.java:253)
	at rx.subjects.BehaviorSubject.onNext(BehaviorSubject.java:160)
	at rx.observers.SerializedObserver.onNext(SerializedObserver.java:91)
	at rx.subjects.SerializedSubject.onNext(SerializedSubject.java:67)
	at rx.internal.operators.OnSubscribeRedo$2$1.onCompleted(OnSubscribeRedo.java:228)
	at rx.internal.producers.SingleProducer.request(SingleProducer.java:75)
	at rx.internal.producers.ProducerArbiter.setProducer(ProducerArbiter.java:126)
	at rx.internal.operators.OnSubscribeRedo$2$1.setProducer(OnSubscribeRedo.java:267)
	at rx.internal.operators.SingleLiftObservableOperator$WrapSubscriberIntoSingle.onSuccess(SingleLiftObservableOperator.java:76)
	at rx.internal.operators.OnSubscribeSingle$1.onCompleted(OnSubscribeSingle.java:55)
	at rx.internal.operators.OnSubscribeMap$MapSubscriber.onCompleted(OnSubscribeMap.java:97)
	at rx.internal.operators.OnSubscribeMap$MapSubscriber.onCompleted(OnSubscribeMap.java:97)
	at rx.internal.operators.OperatorOnErrorResumeNextViaFunction$4.onCompleted(OperatorOnErrorResumeNextViaFunction.java:101)
	at rx.internal.producers.SingleProducer.request(SingleProducer.java:75)
	at rx.internal.producers.ProducerArbiter.setProducer(ProducerArbiter.java:126)
	at rx.internal.operators.OperatorOnErrorResumeNextViaFunction$4.setProducer(OperatorOnErrorResumeNextViaFunction.java:159)
	at rx.internal.operators.SingleLiftObservableOperator$WrapSubscriberIntoSingle.onSuccess(SingleLiftObservableOperator.java:76)
	at rx.internal.operators.OnSubscribeSingle$1.onCompleted(OnSubscribeSingle.java:55)
	at rx.internal.operators.OnSubscribeRedo$4$1.onCompleted(OnSubscribeRedo.java:321)
	at rx.internal.operators.OperatorMerge$MergeSubscriber.emitLoop(OperatorMerge.java:656)
	at rx.internal.operators.OperatorMerge$MergeSubscriber.emit(OperatorMerge.java:568)
	at rx.internal.operators.OperatorMerge$MergeSubscriber.onCompleted(OperatorMerge.java:281)
	at rx.internal.operators.OnSubscribeMap$MapSubscriber.onCompleted(OnSubscribeMap.java:97)
	at rx.internal.operators.OnSubscribeMap$MapSubscriber.onCompleted(OnSubscribeMap.java:97)
	at rx.internal.operators.OnSubscribeRedo$3$1.onNext(OnSubscribeRedo.java:298)
	at rx.internal.operators.OnSubscribeRedo$3$1.onNext(OnSubscribeRedo.java:284)
	at rx.internal.operators.NotificationLite.accept(NotificationLite.java:135)
	at rx.subjects.SubjectSubscriptionManager$SubjectObserver.emitNext(SubjectSubscriptionManager.java:253)
	at rx.subjects.BehaviorSubject.onNext(BehaviorSubject.java:160)
	at rx.observers.SerializedObserver.onNext(SerializedObserver.java:91)
	at rx.subjects.SerializedSubject.onNext(SerializedSubject.java:67)
	at rx.internal.operators.OnSubscribeRedo$2$1.onCompleted(OnSubscribeRedo.java:228)
	at rx.internal.producers.SingleProducer.request(SingleProducer.java:75)
	at rx.internal.producers.ProducerArbiter.setProducer(ProducerArbiter.java:126)
	at rx.internal.operators.OnSubscribeRedo$2$1.setProducer(OnSubscribeRedo.java:267)
	at rx.internal.operators.SingleLiftObservableOperator$WrapSubscriberIntoSingle.onSuccess(SingleLiftObservableOperator.java:76)
	at rx.internal.operators.OnSubscribeSingle$1.onCompleted(OnSubscribeSingle.java:55)
	at rx.internal.operators.OperatorOnErrorResumeNextViaFunction$4.onCompleted(OperatorOnErrorResumeNextViaFunction.java:101)
	at rx.internal.operators.OnSubscribeMap$MapSubscriber.onCompleted(OnSubscribeMap.java:97)
	at rx.internal.operators.OperatorMerge$MergeSubscriber.emitLoop(OperatorMerge.java:656)
	at rx.internal.operators.OperatorMerge$MergeSubscriber.emit(OperatorMerge.java:568)
	at rx.internal.operators.OperatorMerge$InnerSubscriber.onCompleted(OperatorMerge.java:860)
	at rx.internal.operators.OperatorMerge$MergeSubscriber.emitLoop(OperatorMerge.java:656)
	at rx.internal.operators.OperatorMerge$MergeSubscriber.emit(OperatorMerge.java:568)
	at rx.internal.operators.OperatorMerge$MergeSubscriber.onCompleted(OperatorMerge.java:281)
	at rx.internal.operators.OnSubscribeMap$MapSubscriber.onCompleted(OnSubscribeMap.java:97)
	at rx.internal.operators.OnSubscribeMap$MapSubscriber.onCompleted(OnSubscribeMap.java:97)
	at rx.internal.operators.DeferredScalarSubscriber.complete(DeferredScalarSubscriber.java:102)
	at rx.internal.operators.DeferredScalarSubscriber.onCompleted(DeferredScalarSubscriber.java:73)
	at io.reactivex.netty.protocol.http.UnicastContentSubject$AutoReleaseByteBufOperator$1.onCompleted(UnicastContentSubject.java:260)
	at rx.internal.operators.BufferUntilSubscriber.onCompleted(BufferUntilSubscriber.java:156)
	at io.reactivex.netty.protocol.http.UnicastContentSubject.onCompleted(UnicastContentSubject.java:282)
	at io.reactivex.netty.protocol.http.client.ClientRequestResponseConverter$ResponseState.sendOnComplete(ClientRequestResponseConverter.java:413)
	at io.reactivex.netty.protocol.http.client.ClientRequestResponseConverter$ResponseState.access$500(ClientRequestResponseConverter.java:350)
	at io.reactivex.netty.protocol.http.client.ClientRequestResponseConverter.channelRead(ClientRequestResponseConverter.java:168)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438)
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297)
	at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1436)
	at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1203)
	at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1247)
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502)
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
	at io.reactivex.netty.metrics.BytesInspector.channelRead(BytesInspector.java:59)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.reactivex.netty.pipeline.InternalReadTimeoutHandler.channelRead(InternalReadTimeoutHandler.java:108)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:656)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:591)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:508)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:470)
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(Thread.java:748)

The problem is in the method JsonSerializable.getObject(propertyName="version", class="PartitionKeyDefinitionVersion.class")

on the line:

 return c.cast(c.getMethod("valueOf", String.class).invoke(null, String.class.cast(getValue(jsonObj))));

where the version is an integer:

{"paths":["/id"],"kind":"Hash","version":2,"systemKey":false}

and it is being converted to a String.

The problem occurs every time createDocument() is called.
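A minimal JDK-only sketch of the failure mode (the enum and values here are illustrative stand-ins for PartitionKeyDefinitionVersion, not the SDK's actual code): Enum.valueOf expects the constant's name as a String, so casting the numeric JSON value 2 to String throws, and a fix has to map the number to a constant explicitly.

```java
public class EnumFromJson {
    // Illustrative stand-in for PartitionKeyDefinitionVersion
    enum Version { V1, V2 }

    public static void main(String[] args) {
        Object jsonValue = 2; // "version": 2 arrives from the JSON parser as an Integer

        // What the failing line effectively does: cast the raw value to String first
        try {
            Version v = Version.valueOf(String.class.cast(jsonValue));
            System.out.println(v);
        } catch (ClassCastException e) {
            System.out.println("cast failed: Integer is not a String");
        }

        // One possible fix: map the numeric value to a constant explicitly
        Version fixed = Version.values()[(Integer) jsonValue - 1];
        System.out.println(fixed); // V2
    }
}
```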

Configure check style in the project

Does it make sense to open a PR adding Checkstyle to the project?

This way, the project will be easier for more people to contribute to and maintain. I believe we could start with:

  • maximum number of characters in a line
  • naming standards for Java classes and methods
  • making the license header mandatory
  • flagging unused imports

Ref: http://checkstyle.sourceforge.net/

Upgrade RxJava version to 2.x

As RxJava 1.x has reached end-of-life (EOL), is there any plan to upgrade this package to RxJava 2.x? Below is from the RxJava README.

As of March 31, 2018, The RxJava 1.x branch and version is end-of-life (EOL). No further development, bugfixes, documentation changes, PRs, releases or maintenance will be performed by the project on the 1.x line.

Avoid usage of org.json library

The driver currently has a dependency on org.json

 <dependency>
   <groupId>org.json</groupId>
   <artifactId>json</artifactId>
   <version>20140107</version>
 </dependency>

This library's license is marked as Category X, and hence it cannot be used as a dependency in an Apache project. Would it be possible to replace it with some other JSON library? Otherwise it becomes tricky to use azure-cosmosdb-java in Apache projects.

V3 Missing CosmosContainer.get{StoredProcedure,UserDefinedFunction,Trigger} methods

Describe the bug
Without these methods in V3, it's not possible to update or delete stored procedures, user-defined functions, or triggers.

To Reproduce
Attempt to write code to delete or update a stored procedure.

Expected behavior
Should have a way to delete and update these items.

Actual behavior
No way to delete or update these items.

Environment summary
SDK Version: 3.0.0-SNAPSHOT
Java JDK version: Java 8
OS Version (e.g. Windows, Linux, MacOSX): MacOSX

ChangeFeed processor library

For some of our use cases we plan to make use of Cosmos DB change feed support. Per the docs there are a few options, including using a change feed processor library or the low-level change feed API.

For Java I see there is a library which uses the sync DocumentDB Java SDK. azure-cosmosdb-java also provides a low-level API to access the change feed.

I would like to know which approach to take. Is the existing library built on the sync SDK the way to go, or is there a library implemented using the async SDK? Or are there examples that use the async SDK?

Error inserting document with a partition key of >100 bytes

Made a PoC for it at https://github.com/marmyr/cosmos-bug-poc

But in short, this will reproduce the issue:
private static class DummyObject {
	public final String partitionKey;
	public final String id;
	public final Integer ttl;

	public DummyObject(String partition, String id, int ttl) {
		this.partitionKey = partition;
		this.id = id;
		this.ttl = ttl;
	}
}

String partition = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx";
DummyObject lock = new DummyObject(partition, UUID.randomUUID().toString(), 30);
client.createDocument(collection.getSelfLink(), lock, null, true);

com.microsoft.azure.documentdb.DocumentClientException: Message: {"Errors":["PartitionKey extracted from document doesn't match the one specified in the header"]}
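For context, here is a quick JDK-only check (my own illustration, not SDK code) that a string of 101 ASCII characters, similar in length to the partition key in the repro above, is 101 bytes in UTF-8; since the limit the issue title points at applies to encoded bytes rather than characters, non-ASCII partition keys would trip it with even fewer characters.

```java
import java.nio.charset.StandardCharsets;

public class PartitionKeyLength {
    public static void main(String[] args) {
        // A 101-character ASCII string, similar in length to the repro's partition key
        String partition = new String(new char[101]).replace('\0', 'x');

        // The limit applies to the UTF-8 encoded bytes, not the character count
        int byteLength = partition.getBytes(StandardCharsets.UTF_8).length;
        System.out.println(byteLength); // 101
    }
}
```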

Support for Netty 4.1.32

Currently the SDK has a dependency on Netty v4.1.22 (released 21-Feb-2018). Would it be fine to use Netty v4.1.32 (released 29-Nov-2018)? We plan to make use of another third-party library which uses the newer version, so we need to move to it.

Direct TCP: implement hang detection

These are implemented by ReadTimeoutHandler/WriteTimeoutHandler instances in each client channel pipeline. See the RntbdClientChannelHandler class for specifics.

Two hang configuration options are supported:

  • RntbdTransportClient.Options.receiveHangDetectionTime
    default: 65 s
  • RntbdTransportClient.Options.sendHangDetectionTime
    default: 10 s

Document's getTimestamp() method is returning 1970.

Team

When trying to obtain the last-modified timestamp of a document ("_ts") using the util method document.getTimestamp(), I end up getting a date that reads 1970. Upon checking the base code (Resource.java), it looks like the value of the "_ts" field (epoch seconds, e.g. 1524473670) is converted to a long and the Date object is created directly from it. This is where I guess the trouble is, since java.util.Date expects milliseconds (a 13-digit value) but is getting seconds (a 10-digit value).

E.g.
new Date(1524473670) -> Sun Jan 18 20:57:53 IST 1970
new Date(1524473670000L) (appending 0's) -> Mon Apr 23 14:24:30 IST 2018
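The two lines above can be reproduced with the JDK alone; a minimal sketch of the suspected fix (multiplying the epoch-seconds "_ts" value by 1000 before constructing the java.util.Date):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class TimestampFix {
    public static void main(String[] args) {
        long ts = 1524473670L; // the "_ts" field holds epoch seconds

        Date wrong = new Date(ts);         // interpreted as milliseconds -> early 1970
        Date right = new Date(ts * 1000L); // seconds converted to milliseconds -> 2018

        SimpleDateFormat year = new SimpleDateFormat("yyyy");
        year.setTimeZone(TimeZone.getTimeZone("UTC"));
        System.out.println(year.format(wrong)); // 1970
        System.out.println(year.format(right)); // 2018
    }
}
```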

Can someone please take a look?

Regards
Ramji

AsyncDocumentClient created using withPermissionFeed is unable to access Database Account

Describe the issue
When building an AsyncDocumentClient using .withPermissionFeed, the application emits error logs about failures reading the Database Account.

To Reproduce
Create an AsyncDocumentClient using a code block similar to

String permissionJSON = "{  \n" +
        "    \"id\": \"a_permission\",  \n" +
        "    \"permissionMode\": \"Read\",  \n" +
        "    \"resource\": \"dbs/volcanodb/colls/volcano1\",  \n" +
        "    \"_rid\": \"Sl8fAG8cXgBn6Ju2GqNsAA==\",  \n" +
        "    \"_ts\": 1449604760,  \n" +
        "    \"_self\": \"dbs\\/Sl8fAA==\\/users\\/Sl8fAG8cXgA=\\/permissions\\/Sl8fAG8cXgBn6Ju2GqNsAA==\\/\",  \n" +
        "    \"_etag\": \"\\\"00000e00-0000-0000-0000-566736980000\\\"\",  \n" +
        "    \"_token\": \"type=resource&ver=1&sig=ocPyc9QQFybITu1EqzX0kg==;w+WR1aWafB3+yZq5JSoBwgz78XDlU+k9Xiqvc+Q7TlAl1P4h4t721Cn5cjhZ9h3TSd2\\/MJLy+wG+YkhDL9UlGkVv05RZGy2fMaLGdeQkWc7TShkc\\/M2boPc3GXq2yiERKl5CN4AZWSOcrFhOFuuTOqF4ZdBlflmNudaakodr\\/8qTip0i+a7moz1Jkc5+9iLAsDFyqTR1sirp7kAVNFbiqPdYTjNkvZUHF3nYYmRskOg=;\"  \n" +
        "}  ";
String cosmosDBURL = "[your host]";
Permission thePermission = new Permission(permissionJSON);

AsyncDocumentClient limitedDocumentClient = new AsyncDocumentClient.Builder()
        .withConnectionPolicy(ConnectionPolicy.GetDefault())
        .withConsistencyLevel(ConsistencyLevel.Strong)
        .withServiceEndpoint(cosmosDBURL)
        .withPermissionFeed(new ArrayList<>(Arrays.asList(thePermission)))
        .build();

Expected behavior

The Database Account is read and the LocationCache is updated.

Actual behavior
The Database Account cannot be read, and error logs are emitted due to an error parsing the provided ResourceId, which is an empty string.

2019-01-11 15:05:57,119       [RxComputationScheduler-1] INFO  com.microsoft.azure.cosmosdb.rx.internal.RxDocumentClientImpl - Getting database account endpoint from https://testing.documents.azure.com:443/
2019-01-11 15:05:57,139       [RxComputationScheduler-1] ERROR com.microsoft.azure.cosmosdb.rx.internal.GlobalEndpointManager - Fail to reach global gateway [https://testing.documents.azure.com:443/], [Invalid resource id ]
2019-01-11 15:05:57,140       [RxComputationScheduler-1] ERROR com.microsoft.azure.cosmosdb.rx.internal.GlobalEndpointManager - startRefreshLocationTimerAsync() - Unable to refresh database account from any location. Exception: java.lang.IllegalArgumentException: Invalid resource id 
java.lang.IllegalArgumentException: Invalid resource id 
	at com.microsoft.azure.cosmosdb.internal.ResourceId.parse(ResourceId.java:75)
	at com.microsoft.azure.cosmosdb.internal.ResourceTokenAuthorizationHelper.getAuthorizationTokenUsingResourceTokens(ResourceTokenAuthorizationHelper.java:161)
	at com.microsoft.azure.cosmosdb.rx.internal.RxDocumentClientImpl.getUserAuthorizationToken(RxDocumentClientImpl.java:1057)
	at com.microsoft.azure.cosmosdb.rx.internal.RxDocumentClientImpl.populateHeaders(RxDocumentClientImpl.java:1022)
	at com.microsoft.azure.cosmosdb.rx.internal.RxDocumentClientImpl.lambda$getDatabaseAccountFromEndpoint$144(RxDocumentClientImpl.java:3042)
	at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:46)
	at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:35)
	at rx.Observable.unsafeSubscribe(Observable.java:10256)
	at rx.internal.operators.OnSubscribeDoOnEach.call(OnSubscribeDoOnEach.java:41)
	at rx.internal.operators.OnSubscribeDoOnEach.call(OnSubscribeDoOnEach.java:30)
	at rx.Observable.unsafeSubscribe(Observable.java:10256)
	at rx.internal.operators.OnSubscribeSingle.call(OnSubscribeSingle.java:81)
	at rx.internal.operators.OnSubscribeSingle.call(OnSubscribeSingle.java:27)
	at rx.Single.subscribe(Single.java:1979)
	at rx.internal.operators.SingleOperatorOnErrorResumeNext.call(SingleOperatorOnErrorResumeNext.java:77)
	at rx.internal.operators.SingleOperatorOnErrorResumeNext.call(SingleOperatorOnErrorResumeNext.java:23)
	at rx.internal.operators.SingleToObservable.call(SingleToObservable.java:39)
	at rx.internal.operators.SingleToObservable.call(SingleToObservable.java:27)
	at rx.Observable.unsafeSubscribe(Observable.java:10256)
	at rx.internal.operators.OnSubscribeSingle.call(OnSubscribeSingle.java:81)
	at rx.internal.operators.OnSubscribeSingle.call(OnSubscribeSingle.java:27)
	at rx.Single.subscribe(Single.java:1979)
	at rx.internal.operators.CompletableFlatMapSingleToCompletable.call(CompletableFlatMapSingleToCompletable.java:43)
	at rx.internal.operators.CompletableFlatMapSingleToCompletable.call(CompletableFlatMapSingleToCompletable.java:28)
	at rx.Completable.unsafeSubscribe(Completable.java:2035)
	at rx.Completable.subscribe(Completable.java:2056)
	at rx.internal.operators.CompletableFlatMapSingleToCompletable$SourceSubscriber.onSuccess(CompletableFlatMapSingleToCompletable.java:73)
	at rx.internal.operators.OnSubscribeSingle$1.onCompleted(OnSubscribeSingle.java:55)
	at rx.internal.operators.OnSubscribeTimerOnce$1.call(OnSubscribeTimerOnce.java:54)
	at rx.internal.schedulers.EventLoopsScheduler$EventLoopWorker$2.call(EventLoopsScheduler.java:189)
	at rx.internal.schedulers.ScheduledAction.run(ScheduledAction.java:55)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Exception in thread "main" java.lang.IllegalArgumentException: Invalid resource id 
	at com.microsoft.azure.cosmosdb.internal.ResourceId.parse(ResourceId.java:75)
	at com.microsoft.azure.cosmosdb.internal.ResourceTokenAuthorizationHelper.getAuthorizationTokenUsingResourceTokens(ResourceTokenAuthorizationHelper.java:161)
	at com.microsoft.azure.cosmosdb.rx.internal.RxDocumentClientImpl.getUserAuthorizationToken(RxDocumentClientImpl.java:1057)
	at com.microsoft.azure.cosmosdb.rx.internal.RxDocumentClientImpl.populateHeaders(RxDocumentClientImpl.java:1022)
	at com.microsoft.azure.cosmosdb.rx.internal.RxDocumentClientImpl.query(RxDocumentClientImpl.java:721)
	at com.microsoft.azure.cosmosdb.rx.internal.RxDocumentClientImpl.access$500(RxDocumentClientImpl.java:132)
	at com.microsoft.azure.cosmosdb.rx.internal.RxDocumentClientImpl$2.executeQueryAsync(RxDocumentClientImpl.java:1422)
	at com.microsoft.azure.cosmosdb.rx.internal.query.DocumentQueryExecutionContextBase.executeQueryRequestInternalAsync(DocumentQueryExecutionContextBase.java:144)
	at com.microsoft.azure.cosmosdb.rx.internal.query.DocumentQueryExecutionContextBase.executeQueryRequestAsync(DocumentQueryExecutionContextBase.java:125)
	at com.microsoft.azure.cosmosdb.rx.internal.query.DocumentQueryExecutionContextBase.executeRequestAsync(DocumentQueryExecutionContextBase.java:120)
	at com.microsoft.azure.cosmosdb.rx.internal.query.DefaultDocumentQueryExecutionContext.lambda$null$1(DefaultDocumentQueryExecutionContext.java:126)
	at com.microsoft.azure.cosmosdb.rx.internal.BackoffRetryUtility.lambda$executeRetry$2(BackoffRetryUtility.java:94)
	at rx.Single$18.call(Single.java:2511)
	at rx.Single$18.call(Single.java:2505)
	at rx.internal.operators.SingleToObservable.call(SingleToObservable.java:39)
	at rx.internal.operators.SingleToObservable.call(SingleToObservable.java:27)
	at rx.Observable.unsafeSubscribe(Observable.java:10256)
	at rx.internal.operators.OnSubscribeRedo$2.call(OnSubscribeRedo.java:273)
	at rx.internal.schedulers.TrampolineScheduler$InnerCurrentThreadScheduler.enqueue(TrampolineScheduler.java:73)
	at rx.internal.schedulers.TrampolineScheduler$InnerCurrentThreadScheduler.schedule(TrampolineScheduler.java:52)
	at rx.internal.operators.OnSubscribeRedo$5.request(OnSubscribeRedo.java:361)
	at rx.Subscriber.setProducer(Subscriber.java:211)
	at rx.internal.operators.OnSubscribeRedo.call(OnSubscribeRedo.java:353)
	at rx.internal.operators.OnSubscribeRedo.call(OnSubscribeRedo.java:47)
	at rx.Observable.unsafeSubscribe(Observable.java:10256)
	at rx.internal.operators.OnSubscribeSingle.call(OnSubscribeSingle.java:81)
	at rx.internal.operators.OnSubscribeSingle.call(OnSubscribeSingle.java:27)
	at rx.internal.operators.SingleToObservable.call(SingleToObservable.java:39)
	at rx.internal.operators.SingleToObservable.call(SingleToObservable.java:27)
	at rx.Observable.unsafeSubscribe(Observable.java:10256)
	at rx.internal.operators.OnSubscribeMap.call(OnSubscribeMap.java:48)
	at rx.internal.operators.OnSubscribeMap.call(OnSubscribeMap.java:33)
	at rx.Observable.unsafeSubscribe(Observable.java:10256)
	at rx.internal.operators.OnSubscribeDoOnEach.call(OnSubscribeDoOnEach.java:41)
	at rx.internal.operators.OnSubscribeDoOnEach.call(OnSubscribeDoOnEach.java:30)
	at rx.Observable.subscribe(Observable.java:10352)
	at rx.Observable.subscribe(Observable.java:10319)
	at rx.observables.AsyncOnSubscribe$AsyncOuterManager.subscribeBufferToObservable(AsyncOnSubscribe.java:627)
	at rx.observables.AsyncOnSubscribe$AsyncOuterManager.onNext(AsyncOnSubscribe.java:591)
	at rx.observables.AsyncOnSubscribe$AsyncOuterManager.onNext(AsyncOnSubscribe.java:356)
	at rx.observers.SerializedObserver.onNext(SerializedObserver.java:91)
	at com.microsoft.azure.cosmosdb.rx.internal.query.Paginator$1.next(Paginator.java:92)
	at com.microsoft.azure.cosmosdb.rx.internal.query.Paginator$1.next(Paginator.java:80)
	at rx.observables.AsyncOnSubscribe$AsyncOuterManager.nextIteration(AsyncOnSubscribe.java:413)
	at rx.observables.AsyncOnSubscribe$AsyncOuterManager.tryEmit(AsyncOnSubscribe.java:534)
	at rx.observables.AsyncOnSubscribe$AsyncOuterManager.request(AsyncOnSubscribe.java:455)
	at rx.Subscriber.setProducer(Subscriber.java:211)
	at rx.observables.AsyncOnSubscribe.call(AsyncOnSubscribe.java:352)
	at rx.observables.AsyncOnSubscribe.call(AsyncOnSubscribe.java:48)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30)
	at rx.Observable.unsafeSubscribe(Observable.java:10256)
	at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:51)
	at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:35)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30)
	at rx.Observable.unsafeSubscribe(Observable.java:10256)
	at rx.internal.operators.OperatorMerge$MergeSubscriber.onNext(OperatorMerge.java:248)
	at rx.internal.operators.OperatorMerge$MergeSubscriber.onNext(OperatorMerge.java:148)
	at rx.internal.operators.OnSubscribeMap$MapSubscriber.onNext(OnSubscribeMap.java:77)
	at rx.internal.producers.SingleProducer.request(SingleProducer.java:65)
	at rx.Subscriber.setProducer(Subscriber.java:211)
	at rx.internal.operators.OnSubscribeMap$MapSubscriber.setProducer(OnSubscribeMap.java:102)
	at rx.internal.operators.OperatorSingle$ParentSubscriber.onCompleted(OperatorSingle.java:110)
	at rx.internal.util.ScalarSynchronousObservable$WeakSingleProducer.request(ScalarSynchronousObservable.java:285)
	at rx.Subscriber.setProducer(Subscriber.java:211)
	at rx.internal.util.ScalarSynchronousObservable$3.call(ScalarSynchronousObservable.java:233)
	at rx.internal.util.ScalarSynchronousObservable$3.call(ScalarSynchronousObservable.java:228)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30)
	at rx.Observable.unsafeSubscribe(Observable.java:10256)
	at rx.internal.operators.OnSubscribeMap.call(OnSubscribeMap.java:48)
	at rx.internal.operators.OnSubscribeMap.call(OnSubscribeMap.java:33)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30)
	at rx.Observable.subscribe(Observable.java:10352)
	at rx.Observable.subscribe(Observable.java:10319)
	at rx.observables.BlockingObservable.blockForSingle(BlockingObservable.java:443)
	at rx.observables.BlockingObservable.single(BlockingObservable.java:340)
	at com.microsoft.azure.cosmosdb.benchmark.DocDBUtils.getDatabase(DocDBUtils.java:43)
	at com.microsoft.azure.cosmosdb.benchmark.AsyncBenchmark.<init>(AsyncBenchmark.java:109)
	at com.microsoft.azure.cosmosdb.benchmark.AsyncWriteBenchmark.<init>(AsyncWriteBenchmark.java:74)
	at com.microsoft.azure.cosmosdb.benchmark.Main.main(Main.java:55)

Environment summary
SDK Version: 2.3.1
Java JDK version: 1.8
OS Version (e.g. Windows, Linux, MacOSX): Linux

Additional context
I've validated this action with 2 ResourceTokens, one granted Read access to a collection and one granted All access to a collection.

rx namespace collision with io.reactivex

Many rx components within com.microsoft.azure.documentdb.rx are hiding native rx components, forcing me to use _root_ in Scala. Perhaps it's better to leave the term rx where it belongs, avoiding confusion. It took me hours to realize that rx components were being hidden; I had to explicitly import the components I needed.

import _root_.rx.{Completable, SingleSubscriber, Subscriber, Observable => JObservable}
import _root_.rx.functions.{Action0, Action1, FuncN, Functions}
import _root_.rx.lang.scala.JavaConversions._
import _root_.rx.lang.scala.Observable

CosmosDB Client does not create Indexing Policy properly

Describe the bug
I am using com.microsoft.azure:azure-cosmosdb:2.4.4. When I send a request to create a collection with an IndexingPolicy, the Indexing Policy is not created in Cosmos DB. When I use com.microsoft.azure:azure-cosmosdb:2.3.1, the Indexing Policy is created.

To Reproduce
I created a minimal example that reproduces the bug: https://github.com/denis-huber-qaware/cosmosdb-indexing-policy. The README contains the steps to run the program.

Here are the relevant lines:

DocumentCollection documentCollection = new DocumentCollection();
documentCollection.setId(COLLECTION);
IndexingPolicy indexingPolicy = new IndexingPolicy();
indexingPolicy.setIndexingMode(IndexingMode.Consistent);
IncludedPath includedPath = new IncludedPath();
includedPath.setPath("/*");
Index numberIndex = Index.Range(DataType.Number, -1);
Index stringIndex = Index.Range(DataType.String);
includedPath.setIndexes(Arrays.asList(numberIndex, stringIndex));
indexingPolicy.setIncludedPaths(Collections.singletonList(includedPath));
documentCollection.setIndexingPolicy(indexingPolicy);

ResourceResponse<DocumentCollection> response = client.createCollection(String.format("/dbs/%s", DATABASE), documentCollection, null).toBlocking().single();

Source: https://github.com/denis-huber-qaware/cosmosdb-indexing-policy/blob/master/src/main/java/com/example/demo/App.java#L56

Expected behavior
I expect the Indexing Policy to be created with the following value, as shown in the Data Explorer of the Azure portal (https://portal.azure.com):

{
    "indexingMode": "consistent",
    "automatic": true,
    "includedPaths": [
        {
            "path": "/*",
            "indexes": [
                {
                    "kind": "Range",
                    "dataType": "Number",
                    "precision": -1
                },
                {
                    "kind": "Range",
                    "dataType": "String",
                    "precision": -1
                }
            ]
        }
    ],
    "excludedPaths": [
        {
            "path": "/\"_etag\"/?"
        }
    ]
}

Actual behavior
I receive the following Indexing Policy in the webapp (https://portal.azure.com):

{
    "indexingMode": "consistent",
    "automatic": true,
    "includedPaths": [
        {
            "path": "/*",
            "indexes": []
        }
    ],
    "excludedPaths": [
        {
            "path": "/\"_etag\"/?"
        }
    ]
}

Environment summary
SDK Version: 2.4.4
Java JDK version:

openjdk version "1.8.0_201"
OpenJDK Runtime Environment (build 1.8.0_201-b09)
OpenJDK 64-Bit Server VM (build 25.201-b09, mixed mode)

OS Version (e.g. Windows, Linux, MacOSX): Linux

Response entity too large exception thrown for documents size more than 1 MB

Cosmos DB supports documents up to 2 MB. However, creating a document of size 1.5 MB results in the following error (note in the trace below that the aggregated response is capped at 1048576 bytes, i.e. 1 MB):

Caused by: io.netty.handler.codec.TooLongFrameException: Response entity too large: HttpObjectAggregator$AggregatedFullHttpResponse(decodeResult: success, version: HTTP/1.1, content: CompositeByteBuf(ridx: 0, widx: 1048576, cap: 1048576, components=128))
HTTP/1.1 201 Created
Cache-Control: no-store, no-cache
Pragma: no-cache
Content-Type: application/json
Server: Microsoft-HTTPAPI/2.0
Strict-Transport-Security: max-age=31536000
x-ms-last-state-change-utc: Tue, 24 Apr 2018 04:40:08.198 GMT
etag: "2400d364-0000-0000-0000-5adec0d30000"
x-ms-resource-quota: documentSize=10240;documentsSize=10485760;documentsCount=-1;collectionSize=10485760;
x-ms-resource-usage: documentSize=1;documentsSize=1537;documentsCount=0;collectionSize=1537;
x-ms-schemaversion: 1.6
x-ms-alt-content-path: dbs/java.rx.com.microsoft.azure.cosmosdb.rx.DocumentCrudTest/colls/7dba480a-3a92-48a0-ac8d-5ea39e754959
x-ms-quorum-acked-lsn: 7
x-ms-current-write-quorum: 3
x-ms-current-replica-set-size: 4
x-ms-xp-role: 1
x-ms-global-Committed-lsn: 7
x-ms-number-of-read-regions: 0
x-ms-transport-request-id: 12
x-ms-share-throughput: true
x-ms-request-charge: 1252.67
x-ms-serviceversion: version=1.21.0.0
x-ms-activity-id: ac53339f-1ce8-4716-8749-eac4311b03de
x-ms-session-token: 4:8
x-ms-gatewayversion: version=1.21.0.0
Date: Tue, 24 Apr 2018 05:29:54 GMT
Connection: keep-alive
    at io.netty.handler.codec.http.HttpObjectAggregator.handleOversizedMessage(HttpObjectAggregator.java:283)
    at io.netty.handler.codec.http.HttpObjectAggregator.handleOversizedMessage(HttpObjectAggregator.java:87)
    at io.netty.handler.codec.MessageAggregator.invokeHandleOversizedMessage(MessageAggregator.java:383)
    at io.netty.handler.codec.MessageAggregator.decode(MessageAggregator.java:277)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:297)
    at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:413)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
    at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1389)
    at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1159)
    at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1203)
    at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
    at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
    at io.reactivex.netty.metrics.BytesInspector.channelRead(BytesInspector.java:59)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.reactivex.netty.pipeline.InternalReadTimeoutHandler.channelRead(InternalReadTimeoutHandler.java:108)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1414)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:945)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:146)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ... 1 more

This happens because Netty's HttpObjectAggregator defaults the maximum content length to 1 MB and the SDK uses the default constructor. It would be better to set it to 2 MB.
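Until the SDK raises the aggregator limit, a caller-side guard can avoid hitting the ceiling. A minimal sketch (the 1 MiB constant mirrors Netty's HttpObjectAggregator default; the JSON string stands in for however the document is actually serialized):

```java
import java.nio.charset.StandardCharsets;

public class DocumentSizeGuard {
    // Netty's HttpObjectAggregator default max content length is 1 MiB.
    public static final int DEFAULT_AGGREGATOR_LIMIT = 1024 * 1024;

    /** Returns true if the serialized document fits under the given byte limit. */
    public static boolean fitsUnderLimit(String documentJson, int limitBytes) {
        return documentJson.getBytes(StandardCharsets.UTF_8).length <= limitBytes;
    }

    public static void main(String[] args) {
        String small = "{\"id\":\"1\"}";
        // A small document is well under the default limit.
        System.out.println(fitsUnderLimit(small, DEFAULT_AGGREGATOR_LIMIT));
    }
}
```

Checking the serialized size before the create call lets the application fail fast (or split the payload) instead of surfacing a TooLongFrameException from deep inside the Netty pipeline.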

PR to follow

IllegalStateException when doing readMedia

Upon reading attachments, the following exception is seen:

java.lang.AssertionError: Unexpected onError events (0 completions) (+1 error)

	at rx.observers.TestSubscriber.assertionError(TestSubscriber.java:667)
	at rx.observers.TestSubscriber.assertNoErrors(TestSubscriber.java:416)
	at com.microsoft.azure.cosmosdb.rx.ReadFeedAttachmentsTest.validateReadEmbededAttachment(ReadFeedAttachmentsTest.java:180)
	at com.microsoft.azure.cosmosdb.rx.ReadFeedAttachmentsTest.readAndUpdateEmbededAttachments(ReadFeedAttachmentsTest.java:129)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:124)
	at org.testng.internal.InvokeMethodRunnable.runOne(InvokeMethodRunnable.java:54)
	at org.testng.internal.InvokeMethodRunnable.run(InvokeMethodRunnable.java:44)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalStateException: invalid resource type
	at com.microsoft.azure.cosmosdb.internal.PathsHelper.generatePath(PathsHelper.java:315)
	at com.microsoft.azure.cosmosdb.internal.PathsHelper.generatePath(PathsHelper.java:62)
	at com.microsoft.azure.cosmosdb.rx.internal.RxGatewayStoreModel.getUri(RxGatewayStoreModel.java:240)
	at com.microsoft.azure.cosmosdb.rx.internal.RxGatewayStoreModel.performRequest(RxGatewayStoreModel.java:178)
	at com.microsoft.azure.cosmosdb.rx.internal.RxGatewayStoreModel.read(RxGatewayStoreModel.java:131)
	at com.microsoft.azure.cosmosdb.rx.internal.RxGatewayStoreModel.invokeAsyncInternal(RxGatewayStoreModel.java:446)
	at com.microsoft.azure.cosmosdb.rx.internal.RxGatewayStoreModel.lambda$invokeAsync$11(RxGatewayStoreModel.java:460)
	at com.microsoft.azure.cosmosdb.rx.internal.BackoffRetryUtility.lambda$executeRetry$2(BackoffRetryUtility.java:102)

This has been seen since the update to the 2.4.0 release (see apache/openwhisk#4283). Currently the only test which uses the readMedia API, ReadFeedAttachmentsTest#readAndUpdateEmbededAttachments, is commented out; if the test is enabled, the same error is seen.

Environment summary
SDK Version: 2.4.0
Java JDK version: 1.8
OS Version (e.g. Windows, Linux, MacOSX) MacOSX

Netty Resource Leak observed with sorted queries leading to OutOfDirectMemoryError

We are observing increased memory use due to Netty resource leaks, which leads the process to fail with OutOfDirectMemoryError. In some setups we are observing Netty leak warnings like:

[2018-11-15T17:59:45.025Z] [ERROR] LEAK: ByteBuf.release() was not called before it's garbage-collected. See http://netty.io/wiki/reference-counted-objects.html for more information.

At times the container crashes with the following exception:

[2018-11-16T11:08:34.576Z] [WARN] An exception was thrown by io.reactivex.netty.pipeline.ssl.SslCompletionHandler$1.operationComplete()
io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 7147094023, max: 7158628352)
    at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:640)
    at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:594)
    at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:764)
    at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:740)
    at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:244)
    at io.netty.buffer.PoolArena.allocate(PoolArena.java:226)
    at io.netty.buffer.PoolArena.allocate(PoolArena.java:146)
    at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:324)

Per the Netty leak detection documentation, this warning is emitted when the resource leak detection logic (which works by sampling 1% of allocations) detects a leak.

Key Observations

With some trial and error, the following observations can be made about the leak:

  1. It is only seen with sorted queries
  2. It happens more prominently with data sets spanning multiple partitions. It is probably a function of the number of simultaneous parallel connections made for a query
  3. The issue is not seen with unsorted queries, probably because such queries do not connect to partitions in parallel (#54)

Due to the requirement of large partitions, it is a bit hard to reproduce in a simple test case.

Test Setup

This issue also has a PR (#74) which adds a test case, NettyLeakTest, that can either use an existing data set or create a synthetic one, and then performs multiple queries in both sorted and unsorted mode. It uses Netty's ResourceLeakDetector to track the number of reported leaks via a custom RecordingLeakDetector; ideally the reported leak count should be zero. The test also sets -Dio.netty.leakDetection.level=PARANOID.

It also tracks Netty's DIRECT_MEMORY_COUNTER, which records the direct memory allocated by Netty. In case of a leak this counter shows an upward trend; ideally it should eventually return to a base value. It reports in the following format:

==> Query Count - Direct Memory Counter [Query Result Count]
==>   0 - 134217735 [200]
==>   5 - 134217735 [200]
==>  10 - 134217735 [200]
==>  15 - 134217735 [200]

Where

  1. the first column is the query count
  2. the second column is the amount of direct memory used
  3. the third column is the query result count

The leak reports are more pronounced when memory is constrained (-Xmx200m).
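The counter readout described above can be approximated with plain JDK pieces. A hedged sketch (the AtomicLong is a stand-in for Netty's internal direct-memory counter, which the real test reads; the format string matches the sample output):

```java
import java.util.concurrent.atomic.AtomicLong;

public class MemoryTrendLogger {
    // Stand-in for Netty's direct-memory counter; the actual test inspects
    // the counter maintained inside io.netty.util.internal.PlatformDependent.
    public static final AtomicLong DIRECT_MEMORY = new AtomicLong(33554439L);

    /** Formats one row: query count, direct memory used, query result count. */
    public static String formatRow(int queryCount, long usedBytes, int resultCount) {
        return String.format("==> %3d - %d [%d]", queryCount, usedBytes, resultCount);
    }

    public static void main(String[] args) {
        // A stable counter across iterations indicates no leak; an upward
        // trend between rows is the signal the test is looking for.
        for (int i = 0; i < 20; i += 5) {
            System.out.println(formatRow(i, DIRECT_MEMORY.get(), 200));
        }
    }
}
```

Sampling the counter every few queries, as the test does, makes a leak visible as a monotonic climb rather than a single noisy reading.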

The Netty leak detection logic also reports stack traces of recent accesses to the leaked resource, i.e. the ByteBuf:

2018-11-25 11:27:57,578       [rxnetty-nio-eventloop-3-5] ERROR io.netty.util.ResourceLeakDetector - LEAK: ByteBuf.release() was not called before it's garbage-collected. See http://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records: 
#1:
    io.netty.handler.codec.http.DefaultHttpContent.release(DefaultHttpContent.java:94)
    io.netty.util.ReferenceCountUtil.release(ReferenceCountUtil.java:88)
    io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:90)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438)
    io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
    io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
    io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1389)
    io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1159)
    io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1203)
    io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
    io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428)
    io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
    io.reactivex.netty.metrics.BytesInspector.channelRead(BytesInspector.java:59)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    io.reactivex.netty.pipeline.InternalReadTimeoutHandler.channelRead(InternalReadTimeoutHandler.java:108)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1414)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:945)
    io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:146)
    io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
    io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
    io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
    io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:748)

Test runs

Run against existing dataset (sorted)

While running against our data set (currently having 5 partitions) with -Xmx200m -ea -Dcosmosdb.useExistingDB=true -Dcosmosdb.dbName=<db name>, the following output can be seen:

==> Query Count - Direct Memory Counter [Query Result Count]
==>   0 - 33554439 [200]
==>   5 - 33554439 [200]
==>  10 - 33554439 [200]
2018-11-25 11:12:31,924       [rxnetty-nio-eventloop-3-3] ERROR io.netty.util.ResourceLeakDetector - LEAK: ByteBuf.release() was not called before it's garbage-collected. See http://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records: 
==>  15 - 33554439 [200]
==>  20 - 33554439 [200]
...
==>  45 - 33554439 [200]
==>  50 - 33554439 [200]
==>  55 - 33554439 [200]
==>  60 - 67108871 [200]
...
==> 105 - 67108871 [200]
==> 110 - 67108871 [200]
==> 115 - 83886087 [200]
==> 120 - 83886087 [200]
==> 125 - 100663303 [200]
...
==> 165 - 100663303 [200]
==> 170 - 100663303 [200]
==> 175 - 117440519 [200]
==> 180 - 134217735 [200]
==> 185 - 134217735 [200]
==> 190 - 134217735 [200]
==> 195 - 134217735 [200]

java.lang.AssertionError: [reported leak count] 
Expecting:
 <1>
to be equal to:
 <0>
but was not.

    at com.microsoft.azure.cosmosdb.rx.NettyLeakTest.queryAndLog(NettyLeakTest.java:80)
    at com.microsoft.azure.cosmosdb.rx.NettyLeakTest.queryDocumentsSortedManyTimes(NettyLeakTest.java:60)

Key points

  1. The direct memory size can be seen increasing; eventually it can end in OutOfDirectMemoryError
  2. Here the backing data set has 5 partitions

Run against test dataset (sorted)

While running against a data set created by the test itself, where it seeds 50000 records and then performs queries against it:
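The seeding progress lines in the log that follows can be produced by a trivial formatter. An illustrative sketch (the names are not the actual NettyLeakTest code; integer division intentionally truncates the percentage):

```java
public class SeedProgress {
    /** Formats a progress line after a batch insert completes. */
    public static String progressLine(int insertedSoFar, int total, int batchSize) {
        int percent = insertedSoFar * 100 / total; // truncating division
        return String.format("Inserted batch of %d. %d%% done", batchSize, percent);
    }

    public static void main(String[] args) {
        // After the first batch of 1000 out of 50000: 2% done.
        System.out.println(progressLine(1000, 50000, 1000));
    }
}
```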


2018-11-25 11:18:53,095       [RxComputationScheduler-1] INFO  com.microsoft.azure.cosmosdb.rx.internal.RxDocumentClientImpl - Getting database account endpoint from https://xxxd.documents.azure.com:443/
2018-11-25 11:18:56,830       [TestNG-method=beforeClass-1] INFO  com.microsoft.azure.cosmosdb.rx.NettyLeakTest - Seeding database with 50000 records
2018-11-25 11:19:00,668       [TestNG-method=beforeClass-1] INFO  com.microsoft.azure.cosmosdb.rx.NettyLeakTest - Inserted batch of 1000. 2% done
2018-11-25 11:19:03,394       [TestNG-method=beforeClass-1] INFO  com.microsoft.azure.cosmosdb.rx.NettyLeakTest - Inserted batch of 1000. 4% done
2018-11-25 11:21:12,166       [TestNG-method=beforeClass-1] INFO  com.microsoft.azure.cosmosdb.rx.NettyLeakTest - Inserted batch of 1000. 98% done
2018-11-25 11:21:14,900       [TestNG-method=beforeClass-1] INFO  com.microsoft.azure.cosmosdb.rx.NettyLeakTest - Inserted batch of 1000. 100% done
2018-11-25 11:21:14,900       [TestNG-method=beforeClass-1] INFO  com.microsoft.azure.cosmosdb.rx.NettyLeakTest - Created db with 50000 records
==> Query Count - Direct Memory Counter [Query Result Count]
==>   0 - 33554439 [200]
==>   5 - 33554439 [200]
...
==>  60 - 33554439 [200]
==>  65 - 33554439 [200]
==>  70 - 33554439 [200]
2018-11-25 11:21:50,529       [rxnetty-nio-eventloop-3-3] ERROR io.netty.util.ResourceLeakDetector - LEAK: ByteBuf.release() was not called before it's garbage-collected. See http://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records: 
==>  75 - 33554439 [200]
...
==> 185 - 33554439 [200]
==> 190 - 33554439 [200]
==> 195 - 33554439 [200]

java.lang.AssertionError: [reported leak count] 
Expecting:
 <63>
to be equal to:
 <0>
but was not.

    at com.microsoft.azure.cosmosdb.rx.NettyLeakTest.queryAndLog(NettyLeakTest.java:80)
    at com.microsoft.azure.cosmosdb.rx.NettyLeakTest.queryDocumentsSortedManyTimes(NettyLeakTest.java:60)

Key points

  1. The direct memory size does not increase in this run
  2. Here the backing data set has 2 partitions
  3. Leaks are still observed

Run against existing dataset (unsorted)

While running against the existing data set with unsorted queries:

2018-11-25 11:30:00,098       [TestNG-method=beforeClass-1] INFO  com.microsoft.azure.cosmosdb.rx.internal.RxDocumentClientImpl - Initializing DocumentClient with serviceEndpoint [https://xxx.documents.azure.com:443/], ConnectionPolicy [ConnectionPolicy{requestTimeoutInMillis=60000, mediaRequestTimeoutInMillis=300000, connectionMode=Gateway, mediaReadMode=Buffered, maxPoolSize=1000, idleConnectionTimeoutInMillis=60000, userAgentSuffix='', retryOptions=RetryOptions{maxRetryAttemptsOnThrottledRequests=9, maxRetryWaitTimeInSeconds=30}, enableEndpointDiscovery=true, preferredLocations=null, usingMultipleWriteLocations=false, inetSocketProxyAddress=null}], ConsistencyLevel [Session]
2018-11-25 11:30:00,553       [RxComputationScheduler-1] INFO  com.microsoft.azure.cosmosdb.rx.internal.RxDocumentClientImpl - Getting database account endpoint from https://xxx.documents.azure.com:443/
==> Query Count - Direct Memory Counter [Query Result Count]
==>   0 - 33554439 [200]
==>   5 - 33554439 [200]
==>  10 - 33554439 [200]
==>  15 - 33554439 [200]
==>  20 - 33554439 [200]
...
==> 165 - 33554439 [200]
==> 170 - 33554439 [200]
==> 175 - 33554439 [200]
==> 180 - 33554439 [200]
==> 185 - 33554439 [200]
==> 190 - 33554439 [200]
==> 195 - 33554439 [200]
2018-11-25 11:33:00,832       [TestNG-method=afterClass-1] INFO  com.microsoft.azure.cosmosdb.rx.internal.GlobalEndpointManager - GlobalEndpointManager closed.

===============================================
Default Suite
Total tests run: 200, Failures: 0, Skips: 0
===============================================

Key points

  1. No memory increase seen
  2. No resource leak reported

Environment

  • SDK Version: 2.2.2, master branch
  • Java JDK version: jdk1.8.0_162
  • OS Version - MacOSX , Linux (Alpine)

Prefer Direct mode over Gateway

With the 2.4 release it is mentioned that Direct mode access is now supported. Per the docs, Direct mode, if supported by the SDK, should be preferred over Gateway mode.

I wanted to check whether it is fine to always enable Direct mode now, or whether there are other aspects one needs to consider before enabling this mode.
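For reference, a hedged sketch of what enabling Direct mode looks like with this SDK's client builder (the endpoint and key are placeholders; verify the exact builder methods against the 2.4+ javadoc):

```java
import com.microsoft.azure.cosmosdb.ConnectionMode;
import com.microsoft.azure.cosmosdb.ConnectionPolicy;
import com.microsoft.azure.cosmosdb.ConsistencyLevel;
import com.microsoft.azure.cosmosdb.rx.AsyncDocumentClient;

public class DirectModeClient {
    public static AsyncDocumentClient build(String endpoint, String masterKey) {
        ConnectionPolicy policy = new ConnectionPolicy();
        // The default is Gateway; Direct mode talks to replicas over TCP.
        policy.setConnectionMode(ConnectionMode.Direct);
        return new AsyncDocumentClient.Builder()
                .withServiceEndpoint(endpoint)
                .withMasterKeyOrResourceToken(masterKey)
                .withConnectionPolicy(policy)
                .withConsistencyLevel(ConsistencyLevel.Session)
                .build();
    }
}
```

Direct mode bypasses the gateway for data-plane requests, which generally lowers latency, but it requires outbound TCP connectivity to the replica port range, so restrictive network environments are one aspect to weigh before enabling it everywhere.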

com.microsoft.azure.documentdb.DocumentClient - Failed to retrieve database account information

Hi! Thank you for building the RxJava client for Cosmos. I got the following exception from both my code and the provided example; I am using OSX.

/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/bin/java -agentlib:jdwp=transport=dt_socket,address=127.0.0.1:61888,suspend=y,server=n -ea -Didea.test.cyclic.buffer.size=1048576 -Dfile.encoding=UTF-8 -classpath "/Applications/IntelliJ IDEA CE.app/Contents/lib/idea_rt.jar:/Applications/IntelliJ IDEA CE.app/Contents/plugins/junit/lib/junit-rt.jar:/Applications/IntelliJ IDEA CE.app/Contents/plugins/junit/lib/junit5-rt.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre/lib/charsets.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre/lib/deploy.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre/lib/ext/cldrdata.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre/lib/ext/dnsns.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre/lib/ext/jaccess.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre/lib/ext/jfxrt.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre/lib/ext/localedata.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre/lib/ext/nashorn.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre/lib/ext/sunec.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre/lib/ext/sunjce_provider.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre/lib/ext/sunpkcs11.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre/lib/ext/zipfs.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre/lib/javaws.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre/lib/jce.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre/lib/jfr.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre/lib/jfxswt.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre/lib/jsse.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre/lib/managemen
t-agent.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre/lib/plugin.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre/lib/resources.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre/lib/rt.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/lib/ant-javafx.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/lib/dt.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/lib/javafx-mx.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/lib/jconsole.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/lib/packager.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/lib/sa-jdi.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/lib/tools.jar:/Users/shashigireddy/code/git/azure-documentdb-rxjava/azure-documentdb-examples/target/test-classes:/Users/shashigireddy/.m2/repository/com/microsoft/azure/azure-documentdb-rx/0.9.0-rc2/azure-documentdb-rx-0.9.0-rc2.jar:/Users/shashigireddy/.m2/repository/commons-io/commons-io/2.5/commons-io-2.5.jar:/Users/shashigireddy/.m2/repository/io/reactivex/rxjava/1.2.5/rxjava-1.2.5.jar:/Users/shashigireddy/.m2/repository/io/reactivex/rxjava-string/1.1.1/rxjava-string-1.1.1.jar:/Users/shashigireddy/.m2/repository/com/microsoft/azure/azure-documentdb/1.13.0/azure-documentdb-1.13.0.jar:/Users/shashigireddy/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.3/jackson-databind-2.8.3.jar:/Users/shashigireddy/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.8.0/jackson-annotations-2.8.0.jar:/Users/shashigireddy/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.8.3/jackson-core-2.8.3.jar:/Users/shashigireddy/.m2/repository/org/json/json/20140107/json-20140107.jar:/Users/shashigireddy/.m2/repository/org/apache/httpcomponents/httpclient/4.5.2/httpclient-4.5.2.jar:/Users/shashigireddy/.m2/repository/commons-logging/commons-logging/1.2/comm
ons-logging-1.2.jar:/Users/shashigireddy/.m2/repository/commons-codec/commons-codec/1.9/commons-codec-1.9.jar:/Users/shashigireddy/.m2/repository/org/apache/httpcomponents/httpcore/4.4.6/httpcore-4.4.6.jar:/Users/shashigireddy/.m2/repository/io/reactivex/rxnetty/0.4.20/rxnetty-0.4.20.jar:/Users/shashigireddy/.m2/repository/io/reactivex/rxnetty-servo/0.4.20/rxnetty-servo-0.4.20.jar:/Users/shashigireddy/.m2/repository/com/netflix/servo/servo-core/0.7.5/servo-core-0.7.5.jar:/Users/shashigireddy/.m2/repository/com/google/code/findbugs/annotations/2.0.0/annotations-2.0.0.jar:/Users/shashigireddy/.m2/repository/io/netty/netty-codec-http/4.1.7.Final/netty-codec-http-4.1.7.Final.jar:/Users/shashigireddy/.m2/repository/io/netty/netty-codec/4.1.7.Final/netty-codec-4.1.7.Final.jar:/Users/shashigireddy/.m2/repository/io/netty/netty-handler/4.1.7.Final/netty-handler-4.1.7.Final.jar:/Users/shashigireddy/.m2/repository/io/netty/netty-buffer/4.1.7.Final/netty-buffer-4.1.7.Final.jar:/Users/shashigireddy/.m2/repository/io/netty/netty-transport/4.1.7.Final/netty-transport-4.1.7.Final.jar:/Users/shashigireddy/.m2/repository/io/netty/netty-resolver/4.1.7.Final/netty-resolver-4.1.7.Final.jar:/Users/shashigireddy/.m2/repository/io/netty/netty-transport-native-epoll/4.1.7.Final/netty-transport-native-epoll-4.1.7.Final.jar:/Users/shashigireddy/.m2/repository/io/netty/netty-common/4.1.7.Final/netty-common-4.1.7.Final.jar:/Users/shashigireddy/.m2/repository/org/apache/commons/commons-lang3/3.5/commons-lang3-3.5.jar:/Users/shashigireddy/.m2/repository/io/reactivex/rxjava-guava/1.0.3/rxjava-guava-1.0.3.jar:/Users/shashigireddy/.m2/repository/com/google/guava/guava/19.0-rc1/guava-19.0-rc1.jar:/Users/shashigireddy/.m2/repository/junit/junit/4.12/junit-4.12.jar:/Users/shashigireddy/.m2/repository/org/hamcrest/hamcrest-core/1.3/hamcrest-core-1.3.jar:/Users/shashigireddy/.m2/repository/org/mockito/mockito-core/1.10.19/mockito-core-1.10.19.jar:/Users/shashigireddy/.m2/repository/org/objenesis/objenes
is/2.1/objenesis-2.1.jar:/Users/shashigireddy/.m2/repository/org/hamcrest/hamcrest-all/1.3/hamcrest-all-1.3.jar:/Users/shashigireddy/.m2/repository/org/slf4j/slf4j-api/1.7.6/slf4j-api-1.7.6.jar:/Users/shashigireddy/.m2/repository/org/slf4j/slf4j-log4j12/1.7.6/slf4j-log4j12-1.7.6.jar:/Users/shashigireddy/.m2/repository/log4j/log4j/1.2.17/log4j-1.2.17.jar" com.intellij.rt.execution.junit.JUnitStarter -ideVersion5 -junit4 com.microsoft.azure.documentdb.rx.examples.DatabaseAndCollectionCreationAsyncAPITest,testCreateDatabase_Async
Connected to the target VM, address: '127.0.0.1:61888', transport: 'socket'
2017-11-28 13:46:59,408       [main] INFO  com.microsoft.azure.documentdb.rx.internal.RxDocumentClientImpl - Initializing DocumentClient with serviceEndpoint [https://xxxxxxxxxxxxxx.documents.azure.com:xxxxx/], ConnectionPolicy [ConnectionPolicy [requestTimeout=60, mediaRequestTimeout=300, connectionMode=Gateway, mediaReadMode=Buffered, maxPoolSize=100, idleConnectionTimeout=60, userAgentSuffix=, retryOptions=com.microsoft.azure.documentdb.RetryOptions@58c1c010, enableEndpointDiscovery=true, preferredLocations=null]], ConsistencyLevel [Session]
2017-11-28 13:46:59,903       [main] INFO  com.microsoft.azure.documentdb.DocumentClient - Initializing DocumentClient with serviceEndpoint [https://xxxxxxxxxxxxxx.documents.azure.com:xxxxx/], ConnectionPolicy [ConnectionPolicy [requestTimeout=60, mediaRequestTimeout=300, connectionMode=Gateway, mediaReadMode=Buffered, maxPoolSize=100, idleConnectionTimeout=60, userAgentSuffix=, retryOptions=com.microsoft.azure.documentdb.RetryOptions@58c1c010, enableEndpointDiscovery=true, preferredLocations=null]], ConsistencyLevel [Session]
2017-11-28 13:47:00,385       [main] WARN  com.microsoft.azure.documentdb.DocumentClient - Failed to retrieve database account information. org.apache.http.NoHttpResponseException: The target server failed to respond
2017-11-28 13:47:03,241       [main] INFO  com.microsoft.azure.documentdb.rx.examples.DatabaseAndCollectionCreationAsyncAPITest - cleanup databases invoked
2017-11-28 13:47:03,447       [RxDocdb-io1] WARN  com.microsoft.azure.documentdb.DocumentClient - Failed to retrieve database account information. org.apache.http.NoHttpResponseException: The target server failed to respond
java.lang.RuntimeException: org.apache.http.NoHttpResponseException: The target server failed to respond
	at rx.exceptions.Exceptions.propagate(Exceptions.java:58)
	at rx.observables.BlockingObservable.blockForSingle(BlockingObservable.java:464)
	at rx.observables.BlockingObservable.single(BlockingObservable.java:341)
	at com.microsoft.azure.documentdb.rx.examples.DatabaseAndCollectionCreationAsyncAPITest.cleanUpGeneratedDatabases(DatabaseAndCollectionCreationAsyncAPITest.java:243)
	at com.microsoft.azure.documentdb.rx.examples.DatabaseAndCollectionCreationAsyncAPITest.setUp(DatabaseAndCollectionCreationAsyncAPITest.java:100)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
	at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
	at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
	at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
Caused by: org.apache.http.NoHttpResponseException: The target server failed to respond
	at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:143)
	at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
	at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
	at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
	at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:167)
	at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
	at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
	at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:271)
	at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
	at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
	at com.microsoft.azure.documentdb.internal.GatewayProxy.performPostRequest(GatewayProxy.java:270)
	at com.microsoft.azure.documentdb.internal.GatewayProxy.doQuery(GatewayProxy.java:129)
	at com.microsoft.azure.documentdb.internal.GatewayProxy.processMessage(GatewayProxy.java:343)
	at com.microsoft.azure.documentdb.DocumentClient$10.apply(DocumentClient.java:3021)
	at com.microsoft.azure.documentdb.internal.RetryUtility.executeDocumentClientRequest(RetryUtility.java:58)
	at com.microsoft.azure.documentdb.DocumentClient.doQuery(DocumentClient.java:3027)
	at com.microsoft.azure.documentdb.DocumentQueryClientInternal.doQuery(DocumentQueryClientInternal.java:40)
	at com.microsoft.azure.documentdb.internal.query.AbstractQueryExecutionContext.executeRequest(AbstractQueryExecutionContext.java:214)
	at com.microsoft.azure.documentdb.internal.query.DefaultQueryExecutionContext.executeOnce(DefaultQueryExecutionContext.java:131)
	at com.microsoft.azure.documentdb.internal.query.DefaultQueryExecutionContext.fillBuffer(DefaultQueryExecutionContext.java:101)
	at com.microsoft.azure.documentdb.internal.query.DefaultQueryExecutionContext.next(DefaultQueryExecutionContext.java:84)
	at com.microsoft.azure.documentdb.internal.query.DefaultQueryExecutionContext.next(DefaultQueryExecutionContext.java:33)
	at com.microsoft.azure.documentdb.internal.query.ProxyQueryExecutionContext.<init>(ProxyQueryExecutionContext.java:68)
	at com.microsoft.azure.documentdb.internal.query.QueryExecutionContextFactory.createQueryExecutionContext(QueryExecutionContextFactory.java:23)
	at com.microsoft.azure.documentdb.QueryIterable.createQueryExecutionContext(QueryIterable.java:70)
	at com.microsoft.azure.documentdb.QueryIterable.reset(QueryIterable.java:115)
	at com.microsoft.azure.documentdb.QueryIterable.<init>(QueryIterable.java:57)
	at com.microsoft.azure.documentdb.QueryIterable.<init>(QueryIterable.java:36)
	at com.microsoft.azure.documentdb.DocumentClient.queryDatabases(DocumentClient.java:537)
	at com.microsoft.azure.documentdb.rx.internal.RxWrapperDocumentClientImpl$9.invoke(RxWrapperDocumentClientImpl.java:305)
	at com.microsoft.azure.documentdb.rx.internal.RxWrapperDocumentClientImpl$9.invoke(RxWrapperDocumentClientImpl.java:302)
	at com.microsoft.azure.documentdb.rx.internal.RxWrapperDocumentClientImpl$3.call(RxWrapperDocumentClientImpl.java:190)
	at com.microsoft.azure.documentdb.rx.internal.RxWrapperDocumentClientImpl$3.call(RxWrapperDocumentClientImpl.java:185)
	at rx.Observable.unsafeSubscribe(Observable.java:10144)
	at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:51)
	at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:35)
	at rx.Observable.unsafeSubscribe(Observable.java:10144)
	at rx.internal.operators.OperatorSubscribeOn$1.call(OperatorSubscribeOn.java:94)
	at rx.internal.schedulers.ScheduledAction.run(ScheduledAction.java:55)
	at rx.internal.schedulers.ExecutorScheduler$ExecutorSchedulerWorker.run(ExecutorScheduler.java:107)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
2017-11-28 13:47:05,636       [rxnetty-nio-eventloop-3-1] WARN  com.microsoft.azure.documentdb.rx.internal.RxDocumentClientImpl - Failed to retrieve database account information. java.io.IOException: Connection closed by peer before sending a response.
2017-11-28 13:47:05,638       [rxdocdb-computation1] WARN  com.microsoft.azure.documentdb.rx.internal.RetryFunctionFactory - unknown failure, cannot retry [java.io.IOException: Connection closed by peer before sending a response.], attempt number [1]
java.lang.RuntimeException: java.io.IOException: Connection closed by peer before sending a response.
	at rx.exceptions.Exceptions.propagate(Exceptions.java:58)
	at rx.observables.BlockingObservable.blockForSingle(BlockingObservable.java:464)
	at rx.observables.BlockingObservable.single(BlockingObservable.java:341)
	at com.microsoft.azure.documentdb.BridgeInternal$2.getDatabaseAccountFromEndpoint(BridgeInternal.java:95)
	at com.microsoft.azure.documentdb.GlobalEndpointManager.getDatabaseAccountFromAnyEndpoint(GlobalEndpointManager.java:121)
	at com.microsoft.azure.documentdb.GlobalEndpointManager.refreshEndpointListInternal(GlobalEndpointManager.java:155)
	at com.microsoft.azure.documentdb.GlobalEndpointManager.initialize(GlobalEndpointManager.java:148)
	at com.microsoft.azure.documentdb.GlobalEndpointManager.getWriteEndpoint(GlobalEndpointManager.java:73)
	at com.microsoft.azure.documentdb.GlobalEndpointManager.resolveServiceEndpoint(GlobalEndpointManager.java:91)
	at com.microsoft.azure.documentdb.rx.internal.RxGatewayStoreModel.getUri(RxGatewayStoreModel.java:214)
	at com.microsoft.azure.documentdb.rx.internal.RxGatewayStoreModel.performRequest(RxGatewayStoreModel.java:165)
	at com.microsoft.azure.documentdb.rx.internal.RxGatewayStoreModel.doCreate(RxGatewayStoreModel.java:111)
	at com.microsoft.azure.documentdb.rx.internal.RxGatewayStoreModel.processMessage(RxGatewayStoreModel.java:375)
	at com.microsoft.azure.documentdb.rx.internal.RxDocumentClientImpl.lambda$doCreate$23(RxDocumentClientImpl.java:655)
	at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:46)
	at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:35)
	at rx.Observable.unsafeSubscribe(Observable.java:10144)
	at rx.internal.operators.OnSubscribeRedo$2.call(OnSubscribeRedo.java:273)
	at rx.internal.schedulers.TrampolineScheduler$InnerCurrentThreadScheduler.enqueue(TrampolineScheduler.java:73)
	at rx.internal.schedulers.TrampolineScheduler$InnerCurrentThreadScheduler.schedule(TrampolineScheduler.java:52)
	at rx.internal.operators.OnSubscribeRedo$5.request(OnSubscribeRedo.java:361)
	at rx.Subscriber.setProducer(Subscriber.java:211)
	at rx.internal.operators.OnSubscribeRedo.call(OnSubscribeRedo.java:353)
	at rx.internal.operators.OnSubscribeRedo.call(OnSubscribeRedo.java:47)
	at rx.Observable.unsafeSubscribe(Observable.java:10144)
	at rx.internal.operators.OperatorMerge$MergeSubscriber.onNext(OperatorMerge.java:248)
	at rx.internal.operators.OperatorMerge$MergeSubscriber.onNext(OperatorMerge.java:148)
	at rx.internal.operators.OnSubscribeMap$MapSubscriber.onNext(OnSubscribeMap.java:77)
	at rx.internal.operators.OnSubscribeDoOnEach$DoOnEachSubscriber.onNext(OnSubscribeDoOnEach.java:101)
	at rx.internal.operators.OperatorSubscribeOn$1$1.onNext(OperatorSubscribeOn.java:53)
	at com.microsoft.azure.documentdb.rx.internal.RxDocumentClientImpl.lambda$createPutMoreContentObservable$26(RxDocumentClientImpl.java:682)
	at rx.Observable.unsafeSubscribe(Observable.java:10144)
	at rx.internal.operators.OperatorSubscribeOn$1.call(OperatorSubscribeOn.java:94)
	at rx.internal.schedulers.ScheduledAction.run(ScheduledAction.java:55)
	at rx.internal.schedulers.ExecutorScheduler$ExecutorSchedulerWorker.run(ExecutorScheduler.java:107)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Connection closed by peer before sending a response.
	at io.reactivex.netty.protocol.http.client.ClientRequestResponseConverter.<clinit>(ClientRequestResponseConverter.java:96)
	at io.reactivex.netty.protocol.http.client.ClientRequiredConfigurator.configureNewPipeline(ClientRequiredConfigurator.java:53)
	at io.reactivex.netty.pipeline.PipelineConfiguratorComposite.configureNewPipeline(PipelineConfiguratorComposite.java:55)
	at io.reactivex.netty.pipeline.PipelineConfiguratorComposite.configureNewPipeline(PipelineConfiguratorComposite.java:55)
	at io.reactivex.netty.client.RxClientImpl$2.initChannel(RxClientImpl.java:127)
	at io.netty.channel.ChannelInitializer.initChannel(ChannelInitializer.java:113)
	at io.netty.channel.ChannelInitializer.handlerAdded(ChannelInitializer.java:105)
	at io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:597)
	at io.netty.channel.DefaultChannelPipeline.access$000(DefaultChannelPipeline.java:44)
	at io.netty.channel.DefaultChannelPipeline$PendingHandlerAddedTask.execute(DefaultChannelPipeline.java:1387)
	at io.netty.channel.DefaultChannelPipeline.callHandlerAddedForAllHandlers(DefaultChannelPipeline.java:1122)
	at io.netty.channel.DefaultChannelPipeline.invokeHandlerAddedIfNeeded(DefaultChannelPipeline.java:647)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:506)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:419)
	at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:478)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:445)
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
	at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
	... 1 more
an error happened in database creation: actual cause: java.io.IOException: Connection closed by peer before sending a response.
Disconnected from the target VM, address: '127.0.0.1:61888', transport: 'socket'

Process finished with exit code 130 (interrupted by signal 2: SIGINT)

Issue for non-english locales

Computers running non-English locales may run into a

com.microsoft.azure.cosmosdb.DocumentClientException: The input date header is invalid format. Please pass in RFC 1123 style date format.

Setting a locale

Locale.setDefault(Locale.US);

solves the problem. A similar issue has been reported in a related library.
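The failure is easy to reproduce without the SDK: the RFC 1123 date header must use English day and month names, which SimpleDateFormat only produces when given an English locale. A minimal, self-contained sketch (the Rfc1123Demo class name is illustrative, not part of the SDK):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class Rfc1123Demo {
    // RFC 1123 date pattern of the kind the Cosmos DB REST API expects in its date header
    static String rfc1123(Date date, Locale locale) {
        SimpleDateFormat fmt = new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss zzz", locale);
        fmt.setTimeZone(TimeZone.getTimeZone("GMT"));
        return fmt.format(date);
    }

    public static void main(String[] args) {
        Date epoch = new Date(0L); // 1 Jan 1970 00:00:00 GMT
        // With Locale.US the header is well-formed English:
        System.out.println(rfc1123(epoch, Locale.US)); // Thu, 01 Jan 1970 00:00:00 GMT
        // With e.g. a French default locale, day/month names are localized
        // and the service rejects the header:
        System.out.println(rfc1123(epoch, Locale.FRENCH));
        // Hence the process-wide workaround:
        Locale.setDefault(Locale.US);
    }
}
```

Setting the default locale once at application startup, before the client is initialized, is enough to avoid the error.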

Exception in client when cosmosDb response headerSize > 8192

By default, the io.netty.handler.codec.http.HttpObjectDecoder in use has a maxHeaderSize of 8192. Headers in responses from the Cosmos DB REST API (e.g. when querying documents) can be far larger than this, particularly when the response carries a continuation header.

In my case a simple select * from events e where e.type='LOGIN' was enough to hit this limit and trigger the following error.

2200470 [rxnetty-nio-eventloop-3-2] DEBUG com.microsoft.azure.cosmosdb.rx.internal.query.Paginator  - Querying/Reading class com.microsoft.azure.cosmosdb.Documents
2200999 [rxnetty-nio-eventloop-3-2] DEBUG com.microsoft.azure.cosmosdb.rx.internal.ResourceThrottleRetryPolicy  - Operation will NOT be retried. Current attempt 0, Exception: {} 
io.netty.handler.codec.TooLongFrameException: HTTP header is larger than 8192 bytes.
	at io.netty.handler.codec.http.HttpObjectDecoder$HeaderParser.newException(HttpObjectDecoder.java:839)
	at io.netty.handler.codec.http.HttpObjectDecoder$HeaderParser.process(HttpObjectDecoder.java:831)
	at io.netty.buffer.AbstractByteBuf.forEachByteAsc0(AbstractByteBuf.java:1272)
	at io.netty.buffer.AbstractByteBuf.forEachByte(AbstractByteBuf.java:1252)
	at io.netty.handler.codec.http.HttpObjectDecoder$HeaderParser.parse(HttpObjectDecoder.java:803)
	at io.netty.handler.codec.http.HttpObjectDecoder.readHeaders(HttpObjectDecoder.java:603)
	at io.netty.handler.codec.http.HttpObjectDecoder.decode(HttpObjectDecoder.java:227)
	at io.netty.handler.codec.http.HttpClientCodec$Decoder.decode(HttpClientCodec.java:202)
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)

The cosmosdb code calls the no-arg constructor of HttpClientPipelineConfigurator, which uses MAX_HEADER_SIZE_DEFAULT=8192.

I think the RxDocumentClientBuilder class needs its CompositeHttpClientBuilder tweaked to allow a larger header size. The following is untested but might be on the right track:

PipelineConfigurator clientPipelineConfigurator =
    new PipelineConfiguratorComposite<HttpClientResponse<ByteBuf>, HttpClientRequest<ByteBuf>>(
        new HttpClientPipelineConfigurator<ByteBuf, ByteBuf>(4096, 32768, true),
        new HttpObjectAggregationConfigurator());

CompositeHttpClientBuilder<ByteBuf, ByteBuf> builder = new CompositeHttpClientBuilder<ByteBuf, ByteBuf>()
        .withSslEngineFactory(new DefaultSSLEngineFactory())
        .withMaxConnections(connectionPolicy.getMaxPoolSize())
        .withIdleConnectionsTimeoutMillis(this.connectionPolicy.getIdleConnectionTimeoutInMillis())
        .pipelineConfigurator(clientPipelineConfigurator);

Create resource if exists

Hi

Is it possible to create a resource only if it doesn't exist, i.e. a database or a collection? Currently this seems a bit clunky.

I had a look at what the tests do to get around this: they do a delete and then a create, which isn't feasible for production usage.

I'm currently checking whether the resources exist and creating them if needed, but the code I have isn't the nicest,

as I want to do a:

createIfNotExists(db)
 .flatMap(createIfNotExistsCollection)
 .flatMap(createIfNotExistsAnotherCollection)
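With the Rx API, the usual shape is to read the resource and fall back to creating it only when the read fails with status code 404 (e.g. readDatabase(...).onErrorResumeNext(...), checking DocumentClientException.getStatusCode()). The composition itself can be sketched without the SDK; everything below (the class, NotFound, the in-memory store) is a hypothetical stand-in for the service:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CreateIfNotExistsDemo {
    // Hypothetical stand-ins: a map plays the service, NotFound plays a 404.
    static class NotFound extends RuntimeException {}

    static final ConcurrentMap<String, String> store = new ConcurrentHashMap<>();

    static CompletableFuture<String> read(String id) {
        CompletableFuture<String> result = new CompletableFuture<>();
        String value = store.get(id);
        if (value != null) result.complete(value);
        else result.completeExceptionally(new NotFound());
        return result;
    }

    static CompletableFuture<String> create(String id) {
        store.put(id, id);
        return CompletableFuture.completedFuture(id);
    }

    // Read first; fall back to create only on "not found", re-throw anything else.
    static CompletableFuture<String> createIfNotExists(String id) {
        return read(id).handle((value, error) -> {
            if (error == null) return CompletableFuture.completedFuture(value);
            Throwable cause = error instanceof CompletionException ? error.getCause() : error;
            if (cause instanceof NotFound) return create(id);
            CompletableFuture<String> failed = new CompletableFuture<>();
            failed.completeExceptionally(cause);
            return failed;
        }).thenCompose(f -> f);
    }

    public static void main(String[] args) {
        System.out.println(createIfNotExists("mydb").join()); // created on first call
        System.out.println(createIfNotExists("mydb").join()); // found on second call
    }
}
```

The same read-then-create fallback chains naturally, one flatMap (or thenCompose) per resource, which matches the pseudocode above.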

ClassCastException seen during query

Describe the bug
At times we are seeing the following exception logged in our query flow. Unfortunately, due to the way logging is done, the stack trace is not present.

[2019-03-11T21:43:19.781Z] [ERROR] [#tid_33EsHVlP1DvswJiZO4asmuVXG9evto0g] [StoreUtils] [QUERY] 'activations' internal error, failure: 'com.microsoft.azure.cosmosdb.PartitionKeyRange cannot be cast to org.apache.commons.lang3.tuple.ImmutablePair' [ClassCastException] [marker:database_queryView_error:154:152]

To Reproduce
Not known

Environment summary
SDK Version: 2.3.0
Java JDK version: 1.8
OS Version (e.g. Windows, Linux, MacOSX)

How to parameterize IN clause

Hi,

If I have a list of IDs and want to run a query like select * from c where c.id IN (...), how can I parameterize it with SqlQuerySpec?

Thanks.
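A common workaround is to generate one named parameter per value and splice the generated names into the IN list; with the SDK, the resulting query text and name/value pairs would then go into a SqlQuerySpec backed by a SqlParameterCollection. A self-contained sketch of the string-building part (the InClauseBuilder class name is illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class InClauseBuilder {
    // Generates "@id0, @id1, ..." so each value becomes its own named
    // parameter, avoiding string concatenation of user-supplied values.
    static String buildInQuery(List<String> ids, List<String> paramNamesOut) {
        StringBuilder query = new StringBuilder("SELECT * FROM c WHERE c.id IN (");
        for (int i = 0; i < ids.size(); i++) {
            String name = "@id" + i;
            paramNamesOut.add(name);
            if (i > 0) query.append(", ");
            query.append(name);
        }
        return query.append(")").toString();
    }

    public static void main(String[] args) {
        List<String> names = new ArrayList<>();
        String query = buildInQuery(Arrays.asList("a", "b", "c"), names);
        System.out.println(query); // SELECT * FROM c WHERE c.id IN (@id0, @id1, @id2)
        System.out.println(names); // [@id0, @id1, @id2]
    }
}
```

Each generated name would be paired with its value when constructing the parameter collection that accompanies the query text.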

Direct TCP: RntbdServiceEndpoint should respect maxChannelsPerEndpoint and maxRequestsPerChannel

This feature can be thought of as self-throttling on a connection. No more than RntbdTransportClient.Options.maxRequestsPerChannel pending requests will be allowed on a channel before a new channel is created. By default we will respect these limits:

  • maxRequestsPerChannel: 30

    This means that no more than 30 pending requests will be allowed to be placed on a channel before a new RntbdServiceEndpoint channel is created.

  • maxChannelsPerEndpoint: 10

    This means that no more than 10 channels will be created for an RntbdServiceEndpoint.

See related issue: #112
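The acquire policy those two limits imply can be modeled in a few lines (illustrative only; this is not the RntbdClientChannelPool implementation):

```java
import java.util.ArrayList;
import java.util.List;

public class EndpointPoolSketch {
    static final int MAX_CHANNELS_PER_ENDPOINT = 10;
    static final int MAX_REQUESTS_PER_CHANNEL = 30;

    static class Channel { int pending; }

    final List<Channel> channels = new ArrayList<>();

    // Returns a channel with capacity, opening a new one only when every
    // existing channel already carries maxRequestsPerChannel pending requests.
    Channel acquire() {
        for (Channel c : channels) {
            if (c.pending < MAX_REQUESTS_PER_CHANNEL) { c.pending++; return c; }
        }
        if (channels.size() < MAX_CHANNELS_PER_ENDPOINT) {
            Channel c = new Channel();
            c.pending = 1;
            channels.add(c);
            return c;
        }
        return null; // endpoint saturated at 10 x 30 = 300 pending requests
    }

    public static void main(String[] args) {
        EndpointPoolSketch pool = new EndpointPoolSketch();
        int granted = 0;
        while (pool.acquire() != null) granted++;
        System.out.println(granted);              // 300
        System.out.println(pool.channels.size()); // 10
    }
}
```

With the default limits, an endpoint tops out at 300 in-flight requests before callers must wait.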

Direct TCP: Implement request timeout

Direct TCP requests are represented as RntbdRequestRecord instances, which are tracked and timed by the RntbdRequestManager. Requests are timed using the (static) RntbdRequestTimer class, which wraps a HashedWheelTimer. The timer interval is controlled by this setting value:

  • RntbdTransportClient.Options.requestTimeout
    default: 60 s
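The mechanism can be sketched without Netty: fail a pending request exceptionally if no response arrives within the timeout, and cancel the timer task when the response wins the race. Everything below is illustrative (the real SDK uses a shared HashedWheelTimer rather than a ScheduledExecutorService):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class RequestTimeoutSketch {
    static final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "request-timer");
                t.setDaemon(true); // let the JVM exit without an explicit shutdown
                return t;
            });

    // Fails the pending request if no response arrives within the timeout.
    static <T> CompletableFuture<T> withTimeout(CompletableFuture<T> request, long millis) {
        ScheduledFuture<?> timeout = timer.schedule(
                () -> request.completeExceptionally(new TimeoutException("request timed out")),
                millis, TimeUnit.MILLISECONDS);
        request.whenComplete((value, error) -> timeout.cancel(false)); // response beat the timer
        return request;
    }

    public static void main(String[] args) {
        CompletableFuture<String> neverAnswered = new CompletableFuture<>();
        withTimeout(neverAnswered, 50);
        try {
            neverAnswered.join();
        } catch (CompletionException e) {
            System.out.println(e.getCause().getMessage()); // request timed out
        }
    }
}
```

A wheel timer is preferred at scale because it amortizes the cost of scheduling and cancelling very large numbers of timeouts.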

Request method 'GET' not supported

In our application we have a messaging system that can send, receive, and update messages. Whenever a user wants to send a message, he calls our REST server's send method, which is of type RequestMethod.POST and saves the message in the DB. But when I replaced our current DB with Cosmos DB (SQL API), I started getting the error "Request method 'GET' not supported" (status code 405). Are you using GET internally in the createDocument method? We can't change the type of our method to GET, but for testing purposes I changed it to GET and it works as expected.
As per my understanding inserting or creating a document is POST type and fetching will be of type GET.

Order By with Continuation Tokens

2.4.3 added Continuation Token support for Cross Partition Queries for everything but Order By.

This is in progress and we'll close this issue once we've added this last bit of support.

How to read all documents from a collection

In some cases we need to read all documents from a collection. I tried using AsyncDocumentClient.readDocuments for that, but it failed with UnsupportedOperationException: PartitionKey value must be supplied for this operation.

Caused by: java.lang.UnsupportedOperationException: PartitionKey value must be supplied for this operation.
	at com.microsoft.azure.cosmosdb.rx.internal.RxDocumentClientImpl.addPartitionKeyInformation(RxDocumentClientImpl.java:786)
	at com.microsoft.azure.cosmosdb.rx.internal.RxDocumentClientImpl.lambda$addPartitionKeyInformation$20(RxDocumentClientImpl.java:767)
	at rx.internal.operators.SingleOnSubscribeMap$MapSubscriber.onSuccess(SingleOnSubscribeMap.java:66)
	at rx.internal.util.ScalarSynchronousSingle$2$1.onSuccess(ScalarSynchronousSingle.java:140)
	at rx.internal.operators.OnSubscribeSingle$1.onCompleted(OnSubscribeSingle.java:55)
	at rx.internal.operators.OperatorMerge$MergeSubscriber.emitLoop(OperatorMerge.java:656)
	at rx.internal.operators.OperatorMerge$MergeSubscriber.emit(OperatorMerge.java:568)
	at rx.internal.operators.OperatorMerge$MergeSubscriber.onCompleted(OperatorMerge.java:281)
	at rx.internal.operators.OperatorMapNotification$MapNotificationSubscriber.tryEmit(OperatorMapNotification.java:155)
	at rx.internal.operators.OperatorMapNotification$MapNotificationSubscriber.onCompleted(OperatorMapNotification.java:121)
	at rx.internal.producers.SingleProducer.request(SingleProducer.java:75)
	at rx.internal.operators.OperatorMapNotification$MapNotificationSubscriber.setProducer(OperatorMapNotification.java:136)
	at rx.internal.operators.SingleLiftObservableOperator$WrapSubscriberIntoSingle.onSuccess(SingleLiftObservableOperator.java:76)
	at rx.internal.operators.OnSubscribeSingle$1.onCompleted(OnSubscribeSingle.java:55)
	at rx.internal.operators.NotificationLite.accept(NotificationLite.java:125)
	at rx.internal.operators.CachedObservable$ReplayProducer.replay(CachedObservable.java:403)
	at rx.internal.operators.CachedObservable$ReplayProducer.request(CachedObservable.java:304)
	at rx.Subscriber.setProducer(Subscriber.java:211)
	at rx.internal.operators.CachedObservable$CachedSubscribe.call(CachedObservable.java:244)
	at rx.internal.operators.CachedObservable$CachedSubscribe.call(CachedObservable.java:230)
	at rx.Observable.unsafeSubscribe(Observable.java:10327)
	at rx.internal.operators.OnSubscribeSingle.call(OnSubscribeSingle.java:81)
	at rx.internal.operators.OnSubscribeSingle.call(OnSubscribeSingle.java:27)
	at rx.internal.operators.SingleToObservable.call(SingleToObservable.java:39)
	at rx.internal.operators.SingleToObservable.call(SingleToObservable.java:27)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30)
	at rx.Observable.unsafeSubscribe(Observable.java:10327)
	at rx.internal.operators.OnSubscribeSingle.call(OnSubscribeSingle.java:81)
	at rx.internal.operators.OnSubscribeSingle.call(OnSubscribeSingle.java:27)
	at rx.Single.subscribe(Single.java:1979)
	at rx.internal.util.ScalarSynchronousSingle$2.call(ScalarSynchronousSingle.java:144)
	at rx.internal.util.ScalarSynchronousSingle$2.call(ScalarSynchronousSingle.java:124)
	at rx.Single.subscribe(Single.java:1979)
	at rx.internal.operators.SingleOnSubscribeMap.call(SingleOnSubscribeMap.java:45)
	at rx.internal.operators.SingleOnSubscribeMap.call(SingleOnSubscribeMap.java:30)
	at rx.internal.operators.SingleToObservable.call(SingleToObservable.java:39)
	at rx.internal.operators.SingleToObservable.call(SingleToObservable.java:27)
	at rx.Observable.unsafeSubscribe(Observable.java:10327)
	at rx.internal.operators.OnSubscribeMap.call(OnSubscribeMap.java:48)
	at rx.internal.operators.OnSubscribeMap.call(OnSubscribeMap.java:33)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30)
	at rx.Observable.unsafeSubscribe(Observable.java:10327)
	at rx.internal.operators.OnSubscribeSingle.call(OnSubscribeSingle.java:81)
	at rx.internal.operators.OnSubscribeSingle.call(OnSubscribeSingle.java:27)
	at rx.Single.subscribe(Single.java:1979)
	at rx.Single$18.call(Single.java:2518)
	at rx.Single$18.call(Single.java:2505)
	at rx.internal.operators.SingleToObservable.call(SingleToObservable.java:39)
	at rx.internal.operators.SingleToObservable.call(SingleToObservable.java:27)
	at rx.Observable.unsafeSubscribe(Observable.java:10327)
	at rx.internal.operators.OnSubscribeRedo$2.call(OnSubscribeRedo.java:273)
	at rx.internal.schedulers.TrampolineScheduler$InnerCurrentThreadScheduler.enqueue(TrampolineScheduler.java:73)
	at rx.internal.schedulers.TrampolineScheduler$InnerCurrentThreadScheduler.schedule(TrampolineScheduler.java:52)
	at rx.internal.operators.OnSubscribeRedo$5.request(OnSubscribeRedo.java:361)
	at rx.Subscriber.setProducer(Subscriber.java:211)
	at rx.internal.operators.OnSubscribeRedo.call(OnSubscribeRedo.java:353)
	at rx.internal.operators.OnSubscribeRedo.call(OnSubscribeRedo.java:47)
	at rx.Observable.unsafeSubscribe(Observable.java:10327)
	at rx.internal.operators.OnSubscribeSingle.call(OnSubscribeSingle.java:81)
	at rx.internal.operators.OnSubscribeSingle.call(OnSubscribeSingle.java:27)
	at rx.internal.operators.SingleToObservable.call(SingleToObservable.java:39)
	at rx.internal.operators.SingleToObservable.call(SingleToObservable.java:27)
	at rx.Observable.unsafeSubscribe(Observable.java:10327)
	at rx.internal.operators.OnSubscribeMap.call(OnSubscribeMap.java:48)
	at rx.internal.operators.OnSubscribeMap.call(OnSubscribeMap.java:33)
	at rx.Observable.unsafeSubscribe(Observable.java:10327)
	at rx.internal.operators.OnSubscribeDoOnEach.call(OnSubscribeDoOnEach.java:41)
	at rx.internal.operators.OnSubscribeDoOnEach.call(OnSubscribeDoOnEach.java:30)
	at rx.Observable.subscribe(Observable.java:10423)
	at rx.Observable.subscribe(Observable.java:10390)
	at rx.observables.AsyncOnSubscribe$AsyncOuterManager.subscribeBufferToObservable(AsyncOnSubscribe.java:627)
	at rx.observables.AsyncOnSubscribe$AsyncOuterManager.onNext(AsyncOnSubscribe.java:591)
	at rx.observables.AsyncOnSubscribe$AsyncOuterManager.onNext(AsyncOnSubscribe.java:356)
	at rx.observers.SerializedObserver.onNext(SerializedObserver.java:91)
	at com.microsoft.azure.cosmosdb.rx.internal.query.Paginator$1.next(Paginator.java:92)
	at com.microsoft.azure.cosmosdb.rx.internal.query.Paginator$1.next(Paginator.java:80)
	at rx.observables.AsyncOnSubscribe$AsyncOuterManager.nextIteration(AsyncOnSubscribe.java:413)
	at rx.observables.AsyncOnSubscribe$AsyncOuterManager.tryEmit(AsyncOnSubscribe.java:534)
	at rx.observables.AsyncOnSubscribe$AsyncOuterManager.request(AsyncOnSubscribe.java:455)
	at rx.Subscriber.setProducer(Subscriber.java:211)
	at rx.observables.AsyncOnSubscribe.call(AsyncOnSubscribe.java:352)
	at rx.observables.AsyncOnSubscribe.call(AsyncOnSubscribe.java:48)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30)
	at rx.Observable.unsafeSubscribe(Observable.java:10327)
	at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:51)
	at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:35)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30)
	at rx.Observable.subscribe(Observable.java:10423)
	at rx.Observable.subscribe(Observable.java:10390)
	at rx.internal.reactivestreams.PublisherAdapter.subscribe(PublisherAdapter.java:35)
	at akka.stream.impl.fusing.ActorGraphInterpreter$BatchingActorInputBoundary.preStart(ActorGraphInterpreter.scala:117)
	at akka.stream.impl.fusing.GraphInterpreter.init(GraphInterpreter.scala:295)
	at akka.stream.impl.fusing.GraphInterpreterShell.init(ActorGraphInterpreter.scala:557)
	at akka.stream.impl.fusing.ActorGraphInterpreter.tryInit(ActorGraphInterpreter.scala:679)
	at akka.stream.impl.fusing.ActorGraphInterpreter.preStart(ActorGraphInterpreter.scala:727)
	at akka.actor.Actor$class.aroundPreStart(Actor.scala:528)
	at akka.stream.impl.fusing.ActorGraphInterpreter.aroundPreStart(ActorGraphInterpreter.scala:670)
	at akka.actor.ActorCell.create(ActorCell.scala:652)
	at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:523)
	at akka.actor.ActorCell.systemInvoke(ActorCell.scala:545)
	at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:283)
	at akka.dispatch.Mailbox.run(Mailbox.scala:224)
	at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
	at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: rx.exceptions.OnErrorThrowable$OnNextValue: OnError while emitting onNext value: com.microsoft.azure.cosmosdb.DocumentCollection.class
	at rx.exceptions.OnErrorThrowable.addValueAsLastCause(OnErrorThrowable.java:118)
	at rx.internal.operators.SingleOnSubscribeMap$MapSubscriber.onSuccess(SingleOnSubscribeMap.java:70)
	... 110 more

Then I switched to a query like SELECT * FROM root r, and it seems to work for the small dataset I have; I have yet to test it on a larger one.

So I need to confirm: what is the correct and most efficient way to read all documents?

Direct TCP: Implement endpoint as load balanced channel pool

The RntbdServiceEndpoint class encapsulates a Direct TCP service endpoint address with service requests load balanced across a set of channels. This channel set is encapsulated by RntbdClientChannelPool, which derives from Netty's FixedChannelPool class. The size of the fixed channel pool and the maximum number of requests per channel are determined by these setting values:

  • RntbdTransportClient.Options.maxChannelsPerEndpoint
    default: 10
  • RntbdTransportClient.Options.maxRequestsPerChannel
    default: 30

The default values will be tweaked based on performance, reliability, and scale test results. The default values might vary by platform (Linux, macOS, and Windows).

Direct TCP: Implement health check requests

A health check request must be sent before each write operation. It will be implemented much like the RntbdContextRequest that we send when opening a channel.

Health check requests will be implemented together with a revised channel pool that's designed to "prime the request pipeline" with an initial health check. A channel will not be considered available until:

  • The connection is open
  • SSL negotiation is complete
  • Rntbd context request is complete
  • Initial "priming" health check is complete

Subsequently, a health check request will be issued before each write operation.
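The availability gate described above can be modeled as a tiny state set (illustrative; the class and enum names are made up, not the SDK's):

```java
import java.util.EnumSet;

public class ChannelReadiness {
    // The four steps a channel must complete, in the order listed above,
    // before it may serve requests.
    enum Step { CONNECTION_OPEN, SSL_NEGOTIATED, RNTBD_CONTEXT_DONE, PRIMING_HEALTH_CHECK_DONE }

    private final EnumSet<Step> completed = EnumSet.noneOf(Step.class);

    void complete(Step step) { completed.add(step); }

    // A channel is considered available only once every step has completed.
    boolean available() { return completed.size() == Step.values().length; }

    public static void main(String[] args) {
        ChannelReadiness channel = new ChannelReadiness();
        for (Step step : Step.values()) {
            System.out.println(channel.available()); // false until the last step
            channel.complete(step);
        }
        System.out.println(channel.available()); // true
    }
}
```

The priming health check is simply the last gate; per-write health checks then reuse the same request machinery on an already-available channel.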

Test cases to be examined

  • TokenResolverTest::readDocumentThroughTokenResolver
    Timeouts on initial point reads produce extra latency that should be addressed by this feature addition. We currently wait as long as 16 seconds. Goal: 2 seconds.

  • TokenResolverTest::deleteDocumentThroughTokenResolver
    Timeouts on initial point deletes produce extra latency that should be addressed by this feature addition. We currently wait as long as 16 seconds. Goal: 2 seconds.

  • DocumentClientResourceLeakTest
    Get to the bottom of two failure types: timeouts, and before/after memory deltas greater than the specified limit. See issue #188.

Issues to be addressed

  • We occasionally see a warning message about a memory leak in the SslHandler code following closure of an RntbdTransportClient. Long-running profiled benchmarks on Linux (on Azure) and macOS (locally) indicate that this memory is eventually released. Memory consumption does not appear to grow over time.

That said, we should get to the bottom of this issue:

ERROR io.netty.util.ResourceLeakDetector - LEAK: ByteBuf.release() was not called before it's garbage-collected. See http://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records: 
Created at:
io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:349)
io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:187)
io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:178)
io.netty.buffer.AbstractByteBufAllocator.buffer(AbstractByteBufAllocator.java:115)
io.netty.handler.ssl.SslHandler.allocate(SslHandler.java:2125)
io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1327)
io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1227)
io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1274)
io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502)
io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:682)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:617)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:534)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906)
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:748)

Support for MongoDB API

Hello. Are there any plans to support MongoDB API with Reactive access?

We are currently using mongodb-driver-async and are wondering whether it will work reliably with Cosmos DB, or whether you plan first-party support in azure-cosmosdb-java.

Cosmos AsyncDocumentClient close method does not properly free resources

Describe the bug
When an AsyncDocumentClient is repeatedly created and closed, the closed AsyncDocumentClient does not free its resources, particularly threads. The use case is constructing an AsyncDocumentClient with permissions, which expire, then closing it and creating another one in its place.

To Reproduce
The use case uses an AsyncDocumentClient with a permission feed; however, this can easily be reproduced with a master key as well.

@Test
public void testThreadPoolLeakForCosmos() throws Exception {
  AsyncDocumentClient ref;
  for (int i = 0; i < 100; i++) {
    System.out.println("Created client at: " + Instant.now().toString());
    ref = getMasterDocumentClient();
    Thread.sleep(2000); // give the client time to spin up its event loop
    ref.close();
    Thread.sleep(2000); // give close() time to release its resources
  }
}

private static AsyncDocumentClient getMasterDocumentClient() {
  return new AsyncDocumentClient.Builder()
      .withServiceEndpoint(cosmosDBAccountTestController.getHost())
      .withConsistencyLevel(ConsistencyLevel.Strong)
      .withConnectionPolicy(ConnectionPolicy.GetDefault())
      .withMasterKeyOrResourceToken(cosmosDBAccountTestController.getPrimaryKey())
      .build();
}

Expected behavior
I expect the threads to increase when creating the first AsyncDocumentClient. Upon closing, I expect the threads to be released within a few seconds. I expect the number of threads to oscillate up and down by 2-3 threads as I create and close clients.

Actual behavior
If you run the equivalent of the above code and monitor the process with VisualVM, you will notice the thread count climbing higher and higher. A thread dump shows netty threads in a wait state; here is one of many. Note that the naming pattern is rxnetty-nio-eventloop-{threadPoolId}-{threadId}. The threadId is only ever 1, but the threadPoolId keeps incrementing, indicating that each new AsyncDocumentClient creates a new thread pool (as expected) with one thread, but never cleans it up on close().

"rxnetty-nio-eventloop-184-1" #221 daemon prio=5 os_prio=0 cpu=43.33ms elapsed=27.88s tid=0x00007f3dd811b800 nid=0x6786 runnable  [0x00007f3d9ee93000]
   java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPoll.wait(java.base@11/Native Method)
        at sun.nio.ch.EPollSelectorImpl.doSelect(java.base@11/EPollSelectorImpl.java:120)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@11/SelectorImpl.java:124)
        - locked <0x0000000719ae8c38> (a io.netty.channel.nio.SelectedSelectionKeySet)
        - locked <0x0000000719ae8a10> (a sun.nio.ch.EPollSelectorImpl)
        at sun.nio.ch.SelectorImpl.select(java.base@11/SelectorImpl.java:136)
        at io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
        at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:765)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:413)
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909)
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        at java.lang.Thread.run(java.base@11/Thread.java:834)

   Locked ownable synchronizers:
        - None
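The naming convention described above can be checked programmatically. A minimal sketch that parses the {threadPoolId}-{threadId} suffix out of a thread name (the parsing helper is hypothetical, written for this report; only the name pattern comes from the dump):

```java
public class EventLoopThreadName {
    static final String PREFIX = "rxnetty-nio-eventloop-";

    // Returns {threadPoolId, threadId} parsed from a name such as
    // "rxnetty-nio-eventloop-184-1", or null if the name does not match.
    static int[] parse(String threadName) {
        if (!threadName.startsWith(PREFIX)) {
            return null;
        }
        String[] parts = threadName.substring(PREFIX.length()).split("-");
        if (parts.length != 2) {
            return null;
        }
        return new int[] { Integer.parseInt(parts[0]), Integer.parseInt(parts[1]) };
    }

    public static void main(String[] args) {
        int[] ids = parse("rxnetty-nio-eventloop-184-1");
        System.out.println("threadPoolId=" + ids[0] + " threadId=" + ids[1]);
    }
}
```

Counting live threads whose names match this prefix (for example via Thread.getAllStackTraces().keySet()) after each close() call makes the leak visible without attaching VisualVM.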


Environment summary
SDK Version: 2.4.1
Java JDK version: 11
OS Version (e.g. Windows, Linux, MacOSX): Linux

Additional context
This is a time-critical issue, as we are in the middle of a major deployment featuring Cosmos as the primary storage.

StringPartitionKeyComponent limits the key length to 100 chars

StringPartitionKeyComponent truncates the key to 100 chars.

Per the docs, the maximum length for the id field is 255 characters. However, I was not able to find any documented limit for the partition key. The closest reference, found here, mentions:

The length of RowKey is 255 bytes and the length of PartitionKey is 1 KB.

Can this limit in StringPartitionKeyComponent be raised to at least 255? Due to the current truncation I get the error PartitionKey extracted from document doesn't match the one specified in the header, because the id field in the document being added is, say, 120 characters long, while the value passed in the header is truncated to 100.
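The mismatch is easy to reproduce in isolation. A sketch that mirrors the truncation described above (the 100-character cutoff is taken from this report; truncateKey is a hypothetical stand-in for the SDK's internal behavior, not its API):

```java
public class PartitionKeyTruncation {
    static final int MAX_STRING_CHARS = 100; // cutoff observed in StringPartitionKeyComponent

    // Hypothetical stand-in for the truncation applied when the key is serialized.
    static String truncateKey(String key) {
        return key.length() > MAX_STRING_CHARS ? key.substring(0, MAX_STRING_CHARS) : key;
    }

    public static void main(String[] args) {
        // Build a 120-character key, as in the example from the report.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 120; i++) {
            sb.append('k');
        }
        String documentKey = sb.toString();
        String headerKey = truncateKey(documentKey);

        // The header value no longer matches the document value, which triggers
        // "PartitionKey extracted from document doesn't match the one specified in the header".
        System.out.println("document key length: " + documentKey.length()); // 120
        System.out.println("header key length:   " + headerKey.length());  // 100
        System.out.println("match: " + documentKey.equals(headerKey));     // false
    }
}
```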

ClientRetryPolicy broken when executing stored procedure

Describe the bug
When executing a stored procedure (e.g. bulk-delete.js) via AsyncDocumentClient and the request runs into rate limiting, the retry mechanism throws an error.

Here is the log output:

2019-05-14 12:32:38.129 ERROR [-,,,] 28224 --- [o-eventloop-4-2] c.m.a.c.rx.internal.ClientRetryPolicy    : locationEndpoint is null because ClientRetryPolicy::onBeforeRequest(.) is not invoked, probably request creation failed due to invalid options, serialization setting, etc.
2019-05-14 12:32:38.131 ERROR [-,,,] 28224 --- [o-eventloop-4-2] i.RenameCollectionAwareClientRetryPolicy : onBeforeSendRequest is not invoked, encountered failure due to request being null

com.microsoft.azure.cosmosdb.DocumentClientException: Message: {"Errors":["Request rate is large"]}
...

To Reproduce
Execute the bulk-delete.js stored procedure on a dataset large enough that execution runs into its execution bounds (rate limiting).
In my case there were 500 documents to delete and the partition was configured with 400 RU/s. In the first execution 345 documents were successfully deleted. The described error occurred when the stored procedure was executed a second time.

Expected behavior
In version com.microsoft.azure:azure-cosmosdb:2.4.4 everything works correctly, with the following log output:

2019-05-14 13:03:33.363  WARN [-,,,] 28633 --- [o-eventloop-4-3] c.m.a.c.r.i.ResourceThrottleRetryPolicy  : Operation will be retried after 8569 milliseconds. Current attempt 1, Cumulative delay PT8.569S Exception: {}

The retry is performed automatically and the remaining documents are getting deleted.

Actual behavior
An error is thrown (see log output above) and no retry is performed.

Environment summary
Affected SDK Version: com.microsoft.azure:azure-cosmosdb:2.4.5. (Same for the 3.0.0-beta releases)
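Until the regression is fixed, throttled stored-procedure executions can be retried by hand. A minimal backoff sketch (ThrottledException, its retryAfterMillis field, and the operation interface are hypothetical illustrations standing in for a 429 "Request rate is large" response, not SDK types):

```java
import java.util.concurrent.Callable;

public class ManualThrottleRetry {
    // Hypothetical exception standing in for a 429 "Request rate is large" response.
    static class ThrottledException extends Exception {
        final long retryAfterMillis;
        ThrottledException(long retryAfterMillis) {
            this.retryAfterMillis = retryAfterMillis;
        }
    }

    // Retries the operation up to maxAttempts times, honoring the suggested delay.
    static <T> T executeWithRetry(Callable<T> operation, int maxAttempts) throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return operation.call();
            } catch (ThrottledException e) {
                if (attempt >= maxAttempts) {
                    throw e; // budget exhausted; surface the throttle to the caller
                }
                Thread.sleep(e.retryAfterMillis);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Simulate an operation that is throttled twice before succeeding,
        // like a bulk-delete stored procedure running out of RU/s.
        final int[] calls = {0};
        String result = executeWithRetry(() -> {
            if (++calls[0] < 3) {
                throw new ThrottledException(10);
            }
            return "deleted remaining documents";
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

This reproduces externally what ResourceThrottleRetryPolicy did automatically in 2.4.4, at the cost of the caller owning the retry budget.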
