
azure-cosmosdb-js-server's Introduction

Microsoft Azure Cosmos DB's Server-Side JavaScript

Azure Cosmos DB's language-integrated, transactional execution of JavaScript supports stored procedures, triggers, and user-defined functions (UDFs) written natively in JavaScript. This allows developers to write application logic that can be shipped and executed directly on the database storage partitions. Server-side JavaScript support has a number of intrinsic advantages that can be used to build rich applications.

Documentation

  • Official documentation can be found on the Azure website

  • JSDocs for the Server-Side JavaScript SDK can be found here

  • .NET sample code for creating and executing a sproc can be found on our .NET GitHub repo.

  • Node.js sample code for creating and executing a sproc can be found on our Node.js GitHub repo.

Videos

https://www.youtube.com/watch?v=s0cXdHNlVI0

Share

Want to share your awesome stored procedure? Please send us a pull request! We’d love to feature and spotlight you on our GitHub and Twitter accounts.

azure-cosmosdb-js-server's People

Contributors

aliuy, danielrosenwasser, k-kit, kevinkuszyk, kirankumarkolli, martijnhoekstra, microsoft-github-policy-service[bot], mkolt, thomaslevesque


azure-cosmosdb-js-server's Issues

[Resource Not Found] error

I use a stored procedure in DocumentDB with partitions. The DocumentDB consistency level is configured as "Session".

I perform steps [1]-[5] in a single function:

[1]. I use collection.queryDocuments to query a document A by some keys.
[2]. I use collection.deleteDocument to delete the document A.
[3]. I use collection.createDocument to create a document B.
[4]. I use collection.queryDocuments to query the document B by some keys.
[5]. I use collection.deleteDocument to delete the document B.

In step [5], I get a [Resource Not Found] error:
Message: {"Errors":["Encountered exception while executing function. Exception =
Error: Error{"Errors":["Resource Not Found"]}\r\nStack trace: Error: Error{"Errors":["Resource Not Found"]}\n
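For clarity, here is a minimal sketch of the sequence described above; the field names, keys, and error handling are illustrative, not the actual stored procedure:

function repro(keyA, docB, keyB) {
    // Illustrative only: assumes documents are looked up by a "key" property.
    var collection = getContext().getCollection();
    var link = collection.getSelfLink();

    // [1] query document A by some keys
    collection.queryDocuments(link, "SELECT * FROM c WHERE c.key = '" + keyA + "'", {}, function (err, docsA) {
        if (err) throw err;
        // [2] delete document A
        collection.deleteDocument(docsA[0]._self, {}, function (err2) {
            if (err2) throw err2;
            // [3] create document B
            collection.createDocument(link, docB, {}, function (err3) {
                if (err3) throw err3;
                // [4] query document B by some keys
                collection.queryDocuments(link, "SELECT * FROM c WHERE c.key = '" + keyB + "'", {}, function (err4, docsB) {
                    if (err4) throw err4;
                    // [5] delete document B -- this is where "Resource Not Found" is reported
                    collection.deleteDocument(docsB[0]._self, {}, function (err5) {
                        if (err5) throw err5;
                        getContext().getResponse().setBody("done");
                    });
                });
            });
        });
    });
}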

BulkImport failing with serialization error

I'm getting the following error when executing the BulkImport stored procedure code. I've gone through several iterations, and found a Stack Overflow question with the same issue, unresolved. It appears the stored procedure cannot accept an array as an input. I eventually got it to work by passing in a stringified array, as shown below, and then parsing the input inside the stored procedure.

var testArr = [];
for (var i = 0; i < 150; i++) {
    testArr.push({ "id": "test" + i });
}
var testArrStr = JSON.stringify(testArr);
// passed the string above to the stored proc, with the alteration below

exports.storeProcedure = {
    id: "bulkImportArray",
    serverScript: function bulkImportArray(docs) {
        var context = getContext();
        var collection = context.getCollection();
        var docsToCreate = JSON.parse(docs); // parse the stringified array back into objects
        var count = 0;
        var docsLength = docsToCreate.length;
        if (docsLength == 0) {
            getContext().getResponse().setBody(0);
        }
        var totals = "";
        function insertDoc() {
            var msg = " count=" + count + " docsLength=" + docsLength +
                " typeof docsToCreate[]=" + typeof docsToCreate + " length =" + docsToCreate.length;
            if (typeof docsToCreate[count] != 'undefined') {
                collection.createDocument(collection.getSelfLink(),
                    docsToCreate[count],
                    function (err, documentCreated) {
                        if (err) {
                            // throw new Error('Error' + err.message);
                            getContext().getResponse().setBody(count + " : " + err);
                        } else {
                            if (count < docsLength - 1) {
                                count++;
                                insertDoc();
                                getContext().getResponse().setBody(msg);
                            } else {
                                getContext().getResponse().setBody(msg);
                            }
                        }
                    });
            } else {
                getContext().getResponse().setBody(msg);
            }
        }
        insertDoc();
    }
};

Passing the array directly, e.g. the following, throws the error below:

[{"id":"test0"},{"id":"test1"},{"id":"test2"},{"id":"test3"},{"id":"test4"},{"id":"test5"},{"id":"test6"},{"id":"test7"},{"id":"test8"},{"id":"test9"},{"id":"test10"},{"id":"test11"},{"id":"test12"},{"id":"test13"},{"id":"test14"},{"id":"test15"},{"id":"test16"},{"id":"test17"},{"id":"test18"},{"id":"test19"},{"id":"test20"},{"id":"test21"},{"id":"test22"},{"id":"test23"},{"id":"test24"},{"id":"test25"},{"id":"test26"},{"id":"test27"},{"id":"test28"},{"id":"test29"},{"id":"test30"},{"id":"test31"},{"id":"test32"},{"id":"test33"},{"id":"test34"},{"id":"test35"},{"id":"test36"},{"id":"test37"},{"id":"test38"},{"id":"test39"},{"id":"test40"},{"id":"test41"},{"id":"test42"},{"id":"test43"},{"id":"test44"},{"id":"test45"},{"id":"test46"},{"id":"test47"},{"id":"test48"},{"id":"test49"}]

Error:

Encountered exception while executing function. Exception = Error: The document body must be an object or a string representing a JSON-serialized object.
Stack trace: Error: The document body must be an object or a string representing a JSON-serialized object.
at createDocument (bulkImportArray.js:646:21)
at tryCreate (bulkImportArray.js:24:12)
at Anonymous function (bulkImportArray.js:37:26)
at Anonymous function (bulkImportArray.js:691:29)

Stack Overflow:

https://stackoverflow.com/questions/47748684/documentdb-bulkimport-stored-proc-getting-400-error-on-array-json-issue?newreg=315ec12d8a1448b0908ddd6d28515664

is uniqueConstraint.js isolated across a single collection?

We need per-collection (not per-partition) uniqueness for our problem domain. I just want to confirm that uniqueConstraint.js is indeed safe for that. For example if two creates come in at the same time with the same unique value, one will win and the other will return to the calling client with a "unique constraint violation" or something similar.

thank you!

update.js Not Working with REST API

For update.js to work, I had to add JSON.parse(), or the object would never turn into JS properties; update.$inc etc. would not be present. Once the JSON parameter is parsed, it works great!

if (!update) throw new Error("The update is undefined or null.");
update = JSON.parse(update); // convert JSON string to object

Sample Increment HTTP Request Body...

["foo", "{\"$inc\":{\"counter\":1}}"]

query returns empty results retrievedDocs

For this query:

SELECT * FROM c

the following line returns empty results:

var isAccepted = collection.queryDocuments(collectionLink, query, requestOptions, function (err, retrievedDocs, responseOptions) {
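For comparison, here is a minimal, self-contained version of that call with the isAccepted check and error handling included (purely illustrative; a pageSize of -1 asks for the maximum page size):

function queryAll() {
    var collection = getContext().getCollection();
    var collectionLink = collection.getSelfLink();
    var query = "SELECT * FROM c";
    var requestOptions = { pageSize: -1 };

    var isAccepted = collection.queryDocuments(collectionLink, query, requestOptions,
        function (err, retrievedDocs, responseOptions) {
            if (err) throw err;
            // retrievedDocs can legitimately be empty if the current partition (or the
            // current page) has no matching documents; check the continuation token
            // before concluding there are no results.
            getContext().getResponse().setBody({
                count: retrievedDocs.length,
                continuation: responseOptions.continuation || null
            });
        });
    if (!isAccepted) throw new Error("queryDocuments was not accepted.");
}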

Examples for update.js using C# .NET SDK

Hi,

I'm trying to use the stored procedure update.js from a C# .NET client using the following:

string[] spparams = new string[2] { docID, @"$addToSet: {arrayname: {foo1:\""bar1\"", foo2:\""bar2\""}" };

var response = await dbclient.ExecuteStoredProcedureAsync<string>(UriFactory.CreateStoredProcedureUri(_appSettings.DBID, _appSettings.CollectionID, sproc), spparams);

It seems to run without error but just returns the original document - no changes.

I'm not sure I'm submitting the correct structure. When I test this in the portal it runs without error, but again no changes are made to the data in the document; just the original document is returned.

["5b22f104-a943-98f9-14dd-947e79a17480","$addToSet: {arrayname: {foo1:\"bar1\",foo2:\"bar2\"}"]

I also tried a simpler $set but see the same issue.

Are there any .NET examples of using $addToSet with update.js?

Thanks
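Note that the update spec in the example above is not valid JSON (the braces are unbalanced and the $addToSet operator is not wrapped in an object). A minimal sketch of a well-formed parameter array, assuming update.js expects the document id followed by an update object (field names are illustrative):

// Shown in JavaScript for illustration; the same JSON shape applies from C#.
var sprocParams = [
    "5b22f104-a943-98f9-14dd-947e79a17480",
    { "$addToSet": { "arrayname": { "foo1": "bar1", "foo2": "bar2" } } }
];

// If the sproc expects the update spec as a string, stringify it instead:
var sprocParamsAsString = [
    "5b22f104-a943-98f9-14dd-947e79a17480",
    JSON.stringify({ "$addToSet": { "arrayname": { "foo1": "bar1", "foo2": "bar2" } } })
];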

upsertDocument etag mismatch returns the wrong error code.

I wrote a sproc that contains the following code (calling from C#):

        var upsertOptions = {
            disableAutomaticIdGeneration: true,
            etag: doc._etag
        }

        var isAccepted = collection.upsertDocument(collectionLink, doc, upsertOptions, callback);

The error returned when the etag does not match is the following:

{
  "code": "BadRequest",
  "message": "Message: {\"Errors\":[\"Encountered exception while executing function. Exception = Error: {\\\"Errors\\\":[\\\"One of the specified pre-condition is not met\\\"]}\\r\\nStack trace: Error: {\\\"Errors\\\":[\\\"One of the specified pre-condition is not met\\\"]}\\n   at callback (bulkUpsertV1.js:53:13)\\n   at Anonymous function (bulkUpsertV1.js:751:29)\"]}\r\nActivityId: da2b9e29-da83-4e52-8ac8-cfdbbdaa465e, Request URI: /apps/DocDbApp/services/DocDbServer24/partitions/a4cb4964-38c8-11e6-8106-8cdcd42c33be/replicas/1p/, RequestStats: , SDK: documentdb-dotnet-sdk/2.0.0 Host/64-bit MicrosoftWindowsNT/6.3.9600.0"
}

I would expect that the "code" is PreconditionFailed rather than BadRequest.

I'm currently working around it by checking for the error code BadRequest, and then checking that the message contains

One of the specified pre-condition is not met

but I seek a more elegant and robust solution.

To use trigger when calling stored procedure

I have a scenario where I am calling a stored procedure for bulkImport. Is there any option to invoke a trigger from the stored procedure while creating documents?
The trigger contains pre-logic that needs to be executed before each document is created.

bulkImport.js importing different amount depending on how large I make the batches

Hello,

I'm using the stored procedure bulkImport.js to import a C# collection of objects I've just created (numbering in the thousands). If I use the following, I get a different number of documents inserted into the collection at the end of the process, depending on the batchSize.

List<IEnumerable<Thing>> batches = things.Batch(batchSize).ToList();
await Client.CreateStoredProcedureAsync(
    UriFactory.CreateDocumentCollectionUri(Database, Collection),
    new StoredProcedure
    {
        Id = "UploadThings",
        Body = File.ReadAllText(@".\StoredProcedures\bulkImport.js")
    });
foreach (IEnumerable<Thing> batch in batches)
{
    submitted += await Client.ExecuteStoredProcedureAsync<int>(
        UriFactory.CreateStoredProcedureUri(Database, Collection, "UploadThings"), batch);
}

For instance, with about 7,700 documents being inserted with just one batch, I got 20 of them to actually get into the collection. So I tried batches of 3000 documents. I got a few thousand to make it to the collection. So on and so forth. I have a feeling that I'm missing something pretty fundamental here, but I can't think of what it is. Is there a way I can use the bulkImport.js in this repository in such a way that all the C# objects make it as documents into the collection?

Stored procedures swallow errors emitted from async function calls

Describe the bug
If a stored procedure calls an async function, and that function throws an error, the client response will still have status code 200, and the error will not be present anywhere in the response body (the resource of the response will be set to an empty string). This is the case even if catch() is called on the promise.

To Reproduce
Steps to reproduce the behavior:

  1. Create the following stored proc:
() => {
  (async () => {
    throw new Error('foo');
    getContext().getResponse().setBody('hello world');
  })();
}
  2. Execute the stored procedure through your API of choice.

Appending an error handler that places the error in the response body works as expected, but this is a lousy workaround as the HTTP status code of the response will still be OK.
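For reference, a sketch of that workaround (illustrative only; the status code is still 200):

() => {
  (async () => {
    throw new Error('foo');
    getContext().getResponse().setBody('hello world'); // never reached
  })().catch(err => {
    // The error at least shows up in the response body, but the HTTP status is still OK.
    getContext().getResponse().setBody({ error: err.message });
  });
}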

Note that this does not occur if an error is thrown directly from an ordinary function; it only occurs if an error is thrown from an async function (even if, as above, the function never suspends execution).

Expected behavior
The error should be correctly propagated to the response body and the corresponding HTTP status code should indicate an error state. This could either be built in to the runtime, or else exposed as a handler that can be passed to a promise's catch block.

Can I use await inside stored procedure?

Chakra and V8 have had native support for async/await for a while. Can I use it inside Cosmos DB?

Also, I looked but couldn't find which version of Chakra is used inside Cosmos DB; it would be helpful if this were documented.

__.replaceDocument always fails

I have tried about every possible variation of this:

function test(id) {
    var collection = getContext().getCollection();

    var result = __.filter(
        function(doc) {
            return doc.id == id;
        },
        function (err, feed, options) {
            if (err) throw err;
            if (!feed || !feed.length) throw new Error("Not found");
            var doc = feed[0];
            doc.name = "foo";
            var result1 = __.replaceDocument(doc._self, doc, options, function (err1) {
                if (err1) throw err1;
            });
            if (!result1.isAccepted) throw new Error("replaceDocument was not accepted");
        });
    if (!result.isAccepted) throw new Error("filter was not accepted");
}

But whatever I do, I'm always getting "replaceDocument was not accepted". What am I doing wrong? Why does replaceDocument always fail?

Is it possible to reuse the same chained query for continuation?

When I try to reuse the same chained query, an error occurs ("Encountered exception while executing function"). Is this normal?

Here is my code:

function getAsCsv(startDate, endDate) {
    var query =
        __.chain()
            .filter(function (message) { return true; });
    if (startDate != null) {
        query = query.filter(function (message) { return (startDate <= message.providingDate); });
    }
    if (endDate != null) {
        query = query.filter(function (message) { return (message.providingDate <= endDate); });
    }
    query = query.sortByDescending(function (message) { return message.providingDate; });

    var result = [];

    process(null);

    function process(continuationToken) {
        var queryResponse =
            query.value(
                { pageSize: -1, continuation: continuationToken },
                function (error, messages, options) {
                    messages.forEach(function (message) {
                        result.push(message.providingDate);
                    });

                    if (options.continuation != null) {
                        process(options.continuation);
                    }
                    else {
                        __.response.setBody(result);
                    }
                });
        if (!queryResponse.isAccepted) {
            result.push(":(");
            __.response.setBody(result);
        }
    }
}

Maintenance of CosmosDB JavaScript Aspects

Hi, Andrew @aliuy

As someone who has been in touch with you before, back in the DocumentDB days, I'm reaching out to find out about ongoing support for Cosmos DB.

Specifically, we are encountering weird problems when using stored procedures defined in JavaScript (as opposed to stored procedures using SQL statements).

Where do we seek help with this?

The issues on this repo are not getting any attention, from what I can see.

Thanks
Noel

Support for Promises in Stored Procedures

Hi there,

I just started using JavaScript for developing a transactional StoredProcedure in DocumentDB.

It seemed to me the best choice to use promises instead of callbacks when designing my procedure. However, I just realized that I am sadly not able to roll back my transactions / throw errors when using promises.

The problem is:

Once I'm inside a promise, I cannot "throw errors out" of it to the execution engine.
See this:
http://stackoverflow.com/a/33446005
or even better this one:
http://stackoverflow.com/questions/30715367/why-can-i-not-throw-inside-a-promise-catch-handler

They suggest some, in my opinion, dirty hacks that throw the exception from another call stack in order to prevent the promise from catching it. I don't see a way to do this in a DocumentDB stored procedure either, as setTimeout() does not seem to be available to run code on another call stack, and I don't see a way to work with listeners or any other way to let a function run on a different call stack.

What I would like to have is one of the following:
a function to "kill" the stored procedure and roll back the transaction, e.g. getContext().getResponse().reject(errorMsg);
or, likely even better, evaluate whether my stored procedure returns a promise and, if so, wait for that result, and when it is in an error state, roll back the transaction and throw an exception (in my Java SDK) just as a thrown error would.
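To illustrate the problem, here is a minimal sketch of the pattern being described (illustrative only, assuming the runtime exposes Promise, which the async/await discussion elsewhere in these issues suggests it does):

function sprocWithPromise() {
    var collection = getContext().getCollection();

    new Promise(function (resolve, reject) {
        var accepted = collection.queryDocuments(collection.getSelfLink(),
            "SELECT * FROM c", {},
            function (err, docs) { if (err) reject(err); else resolve(docs); });
        if (!accepted) reject(new Error("Query was not accepted."));
    }).then(function (docs) {
        // Throwing here is captured by the promise machinery instead of aborting the
        // stored procedure, so the transaction is not rolled back.
        if (docs.length === 0) throw new Error("Nothing to process, abort!");
        getContext().getResponse().setBody(docs.length);
    }).catch(function (err) {
        // The only visible effect is whatever we put in the response body;
        // the sproc still completes "successfully".
        getContext().getResponse().setBody({ error: err.message });
    });
}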

Bulk delete stored procedure execution result is empty

Hi @phurytw @aliuy @mkolt @jnonce @chlahav and team,
We want to delete the documents which do not have a partition key in a DocumentDB collection. The most suitable tool we found for this requirement was a stored procedure (SP) created from bulkDelete.js (https://github.com/Azure/azure-cosmosdb-js-server/blob/master/samples/stored-procedures/bulkDelete.js), but it is not working: it returns an empty string in the result. We also tried with a C# application, where we are facing "The requested operation exceeded maximum allocated time." We know the Cosmos DB execution time limit is 5 seconds and that beyond this it throws this exception and provides a continuation token, but even though the continuation token value is true we still get the same issue. Could you please help us with this issue so we can move forward on a solution?

Inconsistencies with pageSize, queryDocuments and QueryAPI

I recently reached over 100 documents in my collections; however, I encountered an issue: newer documents wouldn't be fetched by stored procedures.

After investigation I think it's related to the pageSize property of the Feed Options.
Consider the following stored procedure scripts:

function sqlQueryWithoutContains(ids) {
    __.queryDocuments(__.getSelfLink(), 
    "SELECT * FROM i WHERE i.id = '" + ids[0] + "'", 
    { pageSize: 100 }, 
    function (err, feed, options) {
        __.response.setBody(feed);
    });
}
function sqlQueryWithContains(ids) {
    __.queryDocuments(__.getSelfLink(),
        "SELECT * FROM i WHERE ARRAY_CONTAINS(" + JSON.stringify(ids) + ", i.id)",
        { pageSize: 100 },
        function (err, feed, options) {
        __.response.setBody(feed);
    });
}
function parameterizedQueryWithoutContains(ids) {
    __.queryDocuments(__.getSelfLink(), {
        query: "SELECT * FROM i WHERE i.id = @id",
        parameters: [{ name: "@id", value: ids[0] }]
    }, { pageSize: 100 },
    function (err, feed, options) {
        __.response.setBody(feed);
    });
}
function parameterizedQueryWithContains(ids) {
    __.queryDocuments(__.getSelfLink(), {
        query: "SELECT * FROM i WHERE ARRAY_CONTAINS(@ids, i.id)",
        parameters: [{ name: "@ids", value: ids }]
    }, { pageSize: 100 },
    function (err, feed, options) {
        __.response.setBody(feed);
    });
}
function queryAPIWithoutContains(ids) {
    __.filter(function (i) {
        return i.id == ids[0];
    }, { pageSize: 100 },
    function (err, feed, options) {
        __.response.setBody(feed);
    });
}
function queryAPIWithContains(ids) {
    __.filter(function (i) {
        return ids.indexOf(i.id) > -1;
    }, { pageSize: 100 },
    function (err, feed, options) {
        __.response.setBody(feed);
    });
}

I would execute each script above with ids = ["e073396f-c49c-4399-9db5-9992804930fa"]
For every script involving queryDocuments() I'd get the desired item.
However, when using filter(), I wouldn't get it.
If I increase the pageSize enough, I do get the item with filter().

The other thing I have noticed is that the request charge for queryAPIWithoutContains() is way higher than its queryDocuments() counterparts (~40 vs ~12).
When avoiding ARRAY_CONTAINS in queryDocuments() I'm saving ~30 RU; however, doing the same with filter() only saves ~5 RU.

I guess filter() iterates through each document until pageSize is reached, and each iteration uses RUs.
But queryDocuments() will get my item as long as my pageSize value is at least 1, which is the expected behaviour according to the documentation.

Collection.filter returns documents from other partitions

I just noticed that Collection.filter returns documents from other partitions.

I have a SP in a partitioned collection. The client specifies the partition key when calling the SP. However, a query like this:

var result = __.filter(
    doc => doc.type === 'foo',
    callback);

returns documents from other partitions.

On the other hand, an equivalent query using queryDocuments only returns documents from the current partition, as expected:

    var result = __.queryDocuments(
        __.getSelfLink(),
        "select * from c where c.type = 'foo'",
        callback);

Honestly, a bug like this is really scary. SPs are supposed to run in the context of a single partition. How is it even possible that I get results from other partitions?

[Question] Maintaining stored procedures

I have seen a few examples of creating and executing stored procedures, which brought me to this repo, containing stored procedures I would like to use.

Are there any best practices or workflows around where/when to create these in DocumentDB? I can add them manually, but I would like to script them into some kind of migration when updating the application, so I'd like to create, or drop and re-create, a stored procedure if it changes. Any idea on how to manage that in an automated way?

Logging from triggers doesn't work

If I use console.log in a stored procedure, I can get the log from the response headers. However, if I use console.log in a trigger, the log is not returned. Is this intentional?

bulkDelete

Bulk delete with a partition key is not working; the same query passed to bulkDelete returns records in the Azure query editor.

RequestTimeoutException for bulkDelete example

I'm getting Microsoft.Azure.Documents.RequestTimeoutException with the current implementation of samples/stored-procedures/bulkDelete.js. I would expect the isAccepted handling to catch this. I see this while executing the stored proc via the newest .NET SDK for DocumentDB.

_ts is not returned by upsertDocument or replaceDocument

I have written a Stored Procedure to do a partial Document update and I want to return only the _ts attribute in order for subsequent queries to use this as a parameter.

This was tested in the Azure Portal Data Explorer and using the node.js sdk, both had the same result.

Whether I use upsertDocument or replaceDocument these are the only four system attributes returned in the RequestCallback.resource:

"_rid": "0eUiAJMAdQDl9QAAAAAAAA==",
"_self": "dbs/0eUiAA==/colls/0eUiAJMAdQA=/docs/0eUiAJMAdQDl9QAAAAAAAA==/",
"_etag": "\"27014d93-0000-0000-0000-5b7ba4ae0000\"",
"_attachments": "attachments/"

The Document itself is updated properly by the Stored Procedure, and the changes (including the new _ts) can be verified immediately in Data Explorer.

I also tried executing a small query after the upsert/replace and even this doesn't work inside the same Stored Procedure:

SELECT c._ts FROM c WHERE c.id='f21d829d-2de5-0a27-8886-ff2c3ddb2119'

Return value from Stored Procedure:

[
    {}
]

Result in Data Explorer (run as SQL Query):

[
    {
        "_ts": 1534831246
    }
]

Stored Procedure code:

function UpdatePartial(id, update, rootNode){

    var context = getContext();
    var collection = context.getCollection();

    var query = `SELECT * FROM c WHERE c.id='${id}'`;

    var queryAccepted = collection.queryDocuments(collection.getSelfLink(), query, {}, onQueryResponse);
    if(!queryAccepted) throw "Unable to query DocumentDB";

    function onQueryResponse(err, documents, responseOptions) {

        if(err){
            throw err;
        }

        if(documents.length === 0){
            throw `Could not find document with id [${id}]`;
        }

        var source = documents[0];
        update = JSON.parse(update);

        if(rootNode){
            source[rootNode] = Merge(source[rootNode], update);
        } else {
            source = Merge(source, update);
        }

        var updateAccepted = collection.replaceDocument(source._self, source, onUpdateResponse);
        if(!updateAccepted) throw "Unable to update DocumentDB";

    }

    function onUpdateResponse(err, resource, options){

        if(err){
            throw err;
        }

        context.getResponse().setBody({"_ts": resource._ts || ''});

        // use this to return the entire document instead
        // context.getResponse().setBody(resource);

        // uncomment these lines to execute the query
        // var query = `SELECT c._ts FROM c WHERE c.id='${id}'`;
        // console.log(query);
        // collection.queryDocuments(collection.getSelfLink(), query, onTimeStampResponse);

    }

    function onTimeStampResponse(err, resource){

        if(err){
            throw err;
        }

        context.getResponse().setBody(resource);      

    }

    
    function Merge(source, update) {

        for (var key in update) {
        
            try {
        
                if ( update[key].constructor==Object ) {
                    source[key] = Merge(source[key], update[key]);
                } else {
                    source[key] = update[key];
                }
        
            } catch(err) {
                source[key] = update[key];
            }
        
        }

        return source;

    }


}

Here is the node.js code that calls the Stored Procedure:

exports.executeSproc = function(id, update, callback){

  let url = getCollectionUrl(config.database, config.collection) + '/sprocs/' + 'UpdatePartial';
  let options = [id, update];
  
  client.executeStoredProcedure(url, options, function(err, resource){

    if(err){
      console.error(`ERROR in documentdb.executeSproc\nid was:${id}\nUpdate was: ${JSON.stringify(update)}\nError:\n${JSON.stringify(err,null,2)}`);
      callback(err);
      return;
    }

    callback(resource);

  });

}

Typescript

Can you re-implement the library in TypeScript, or at a minimum provide a modern TypeScript API (with support for promises/async/await, good typings, etc.)? The API uses an old-style callback approach and should be using promises instead. While I could wrap it myself (e.g. https://medium.freecodecamp.org/how-to-make-a-promise-out-of-a-callback-function-in-javascript-d8ec35d1f981), I really shouldn't have to do this.

TypeScript is awesome, and I would have expected a modern Microsoft product to use it.
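In the meantime, a minimal sketch of wrapping one of the callback-style calls in a promise inside a stored procedure (illustrative only; assumes the runtime exposes Promise, as the async/await discussion above suggests):

// Promise wrapper around the callback-style queryDocuments API (sketch only).
function queryDocumentsAsync(collection, query, options) {
    return new Promise(function (resolve, reject) {
        var accepted = collection.queryDocuments(collection.getSelfLink(), query, options || {},
            function (err, docs, responseOptions) {
                if (err) reject(err); else resolve({ docs: docs, options: responseOptions });
            });
        if (!accepted) reject(new Error("queryDocuments was not accepted."));
    });
}

// Usage inside a sproc body:
// queryDocumentsAsync(getContext().getCollection(), "SELECT * FROM c")
//     .then(function (result) { getContext().getResponse().setBody(result.docs.length); });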

Error "Invalid param specified" when execute in the Data Explorer (Azure)

Hi! Sometimes when I execute this procedure in the Data Explorer on Azure I receive the error "invalid param specified". The notification window presents this message: "Finished executing stored procedure bulkDelete for container ...". If I leave the Data Explorer and return, it works fine, but if I change the document items the error happens again.
Regards.

id links

As mentioned in this article, is it possible to reference a document via its id link rather than its self link? It's pretty annoying and not very efficient to have to query for the document first every time.

For example, something like:
collection.replaceDocument("/dbs/db_id/colls/coll_id/docs/doc_id", doc, callback);

instead of:
collection.replaceDocument(retrievedDocs[0]._self, doc, callback);

If so, how do I resolve the collection link id from getContext().getCollection()?

Since we are performing this operation on a collection object already, it would be ideal if we could do something like:

collection.replaceDocument("doc_id", doc, callback);

bulkDelete fails with RequestTimeout error

I am using bulkDelete stored procedure to delete stale data. This stored procedure fails with the error message - "The requested operation exceeded maximum alloted time."

I am guessing this is happening only when I execute the stored procedure on a partition that has a lot of data (close to 10 GB). I have executed this SP on a partition that has relatively little data and it worked fine. Is there any problem with the way I am consuming the SP, or are there any constraints on this SP?

The query I use to pass to this SP is

SELECT c._self FROM c WHERE c.documentType = 'DocumentType' and c.payload.NextExecution < '2019-03-01' and c.payload.NextExecution >= '2019-01-01'

Looking forward to your help.
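For what it's worth, bulkDelete.js in this repo responds with a body of the form { deleted: &lt;count&gt;, continuation: &lt;bool&gt; }, so the usual pattern is to re-execute the sproc from the client until continuation is false. That alone doesn't explain the timeout, but here is a minimal Node.js sketch of that loop, mirroring the executeStoredProcedure call used in the earlier issue (names are illustrative; for a partitioned collection the partition key must also be supplied in the request options):

// Sketch only: repeatedly execute bulkDelete until its continuation flag is false.
function bulkDeleteAll(client, sprocLink, query, done, totalSoFar) {
    var total = totalSoFar || 0;
    client.executeStoredProcedure(sprocLink, [query], function (err, result) {
        if (err) return done(err);
        total += result.deleted;
        // continuation === true means there may be more matching documents to delete.
        if (result.continuation) return bulkDeleteAll(client, sprocLink, query, done, total);
        done(null, total);
    });
}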

Support nested keys in update sproc $set

It would be great to add support for deep properties in the update $set sproc. This would enable us to send in more complex updates such as {$set: {"inventory.location.primary.city": "Miami", "product.sku.upc": "0124236542321"}}

With the current version we can only modify root nodes of the document.

This would also be needed for MongoDB parity (see the docs).
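For illustration, a small sketch of how a dotted path could be applied server-side; this is not part of update.js today, just a hypothetical helper:

// Hypothetical helper: apply a {"inventory.location.primary.city": "Miami"} style entry.
function setNested(doc, path, value) {
    var parts = path.split(".");
    var target = doc;
    for (var i = 0; i < parts.length - 1; i++) {
        if (typeof target[parts[i]] !== "object" || target[parts[i]] === null) {
            target[parts[i]] = {}; // create intermediate objects as needed
        }
        target = target[parts[i]];
    }
    target[parts[parts.length - 1]] = value;
}

// Example: setNested(doc, "product.sku.upc", "0124236542321");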
