
couchdb-nano's Introduction


Nano

Official Apache CouchDB library for Node.js.

Features:

  • Minimalistic - There is only a minimum of abstraction between you and CouchDB.
  • Pipes - Proxy requests from CouchDB directly to your end user. ( ...AsStream functions only)
  • Promises - The vast majority of library calls return native Promises.
  • TypeScript - Detailed TypeScript definitions are built in.
  • Errors - Errors are proxied directly from CouchDB: if you know CouchDB you already know nano.

Installation

  1. Install npm
  2. npm install nano

or save nano as a dependency of your project with

npm install --save nano

Note the minimum required version of Node.js is 10.

Getting started

To use nano you need to connect it to your CouchDB installation:

const nano = require('nano')('http://localhost:5984');

Note: The URL you supply may also contain authentication credentials e.g. http://admin:mypassword@localhost:5984.

To create a new database:

nano.db.create('alice');

and to use an existing database:

const alice = nano.db.use('alice');

Under the hood, calls like nano.db.create are making HTTP API calls to the CouchDB service. Such operations are asynchronous. There are two ways to receive the asynchronous data back from the library:

  1. Promises
nano.db.create('alice').then((data) => {
  // success - response is in 'data'
}).catch((err) => {
  // failure - error information is in 'err'
})

or in the async/await style:

try {
  const response = await nano.db.create('alice')
  // succeeded
  console.log(response)
} catch (e) {
  // failed
  console.error(e)
}
  2. Callbacks
nano.db.create('alice', (err, data) => {
  // errors are in 'err' & response is in 'data'
})

In nano the callback function always receives three arguments:

  • err - The error, if any.
  • body - The HTTP response body from CouchDB, if no error: the JSON-parsed body, or binary for non-JSON responses.
  • header - The HTTP response header from CouchDB, if no error.

The documentation will follow the async/await style.


A simple but complete example in the async/await style:

async function asyncCall() {
  await nano.db.destroy('alice')
  await nano.db.create('alice')
  const alice = nano.use('alice')
  const response = await alice.insert({ happy: true }, 'rabbit')
  console.log('you have inserted a document with an _id of rabbit.')
  console.log(response)
  return response
}
asyncCall()

Running this example will produce:

you have inserted a document with an _id of rabbit.
{ ok: true,
  id: 'rabbit',
  rev: '1-6e4cb465d49c0368ac3946506d26335d' }

You can also see your document in Fauxton (http://localhost:5984/_utils).

Configuration

Configuring nano to use your database server is as simple as:

const nano = require('nano')('http://localhost:5984')
const db = nano.use('foo');

If you don't need to instrument database objects you can simply:

// nano parses the URL and knows this is a database
const db = require('nano')('http://localhost:5984/foo');

To specify further configuration options, you can pass an object literal to the require call instead:

// nano parses the URL and knows this is a database
const opts = {
  url: 'http://localhost:5984/foo',
  requestDefaults: {
    proxy: {
      protocol: 'http',
      host: 'myproxy.net'
    },
    headers: {
      customheader: 'MyCustomHeader'
    }
  }
};
const db = require('nano')(opts);

Nano works perfectly well over HTTPS as long as the SSL cert is signed by a certification authority known by your client operating system. If you have a custom or self-signed certificate, you may need to create your own HTTPS agent and pass it to Nano e.g.

const fs = require('fs')
const https = require('https')
const Nano = require('nano')

const httpsAgent = new https.Agent({
  // the 'ca' option expects the certificate contents, not a file path
  ca: fs.readFileSync('/path/to/cert'),
  rejectUnauthorized: true,
  keepAlive: true,
  maxSockets: 6
})
const nano = Nano({
  url: process.env.COUCH_URL,
  requestDefaults: {
    agent: httpsAgent,
  }
})

Please check the axios documentation for more information on requestDefaults. axios supports features like proxies, timeouts etc.

You can tell nano to not parse the URL (maybe the server is behind a proxy, is accessed through a rewrite rule or other):

// nano does not parse the URL and return the server api
// "http://localhost:5984/prefix" is the CouchDB server root
const couch = require('nano')({
  url: 'http://localhost:5984/prefix',
  parseUrl: false
});
const db = couch.use('foo');

Pool size and open sockets

A very important configuration parameter if you have a high traffic website and are using nano is the HTTP pool size. By default, the Node.js HTTP global agent allows an infinite number of active connections to run simultaneously. This can be limited to a user-defined number (maxSockets) of requests that are "in flight", while others are kept in a queue. Here's an example explicitly using the Node.js HTTP agent configured with custom options:

const http = require('http')
const myagent = new http.Agent({
  keepAlive: true,
  maxSockets: 25
})

const db = require('nano')({
  url: 'http://localhost:5984/foo',
  requestDefaults : {
    agent : myagent
  }
});

TypeScript

There is a full TypeScript definition included in the nano package. Your TypeScript editor will show you hints as you write your code against the nano library, including with your own custom classes:

import * as Nano  from 'nano'

let n = Nano('http://USERNAME:PASSWORD@localhost:5984')
let db = n.db.use('people')

interface iPerson extends Nano.MaybeDocument {
  name: string,
  dob: string
}

class Person implements iPerson {
  _id: string
  _rev: string
  name: string
  dob: string

  constructor(name: string, dob: string) {
    this._id = undefined
    this._rev = undefined
    this.name = name
    this.dob = dob
  }

  processAPIResponse(response: Nano.DocumentInsertResponse) {
    if (response.ok === true) {
      this._id = response.id
      this._rev = response.rev
    }
  }
}

let p = new Person('Bob', '2015-02-04')
db.insert(p).then((response) => {
  p.processAPIResponse(response)
  console.log(p)
})

Database functions

nano.db.create(name, [opts], [callback])

Creates a CouchDB database with the given name, with options opts.

await nano.db.create('alice', { n: 3 })

nano.db.get(name, [callback])

Get information about the database name:

const info = await nano.db.get('alice')

nano.db.destroy(name, [callback])

Destroys the database name:

await nano.db.destroy('alice')

nano.db.list([callback])

Lists all the CouchDB databases:

const dblist = await nano.db.list()

nano.db.listAsStream()

Lists all the CouchDB databases as a stream:

nano.db.listAsStream()
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout);

nano.db.compact(name, [designname], [callback])

Compacts name; if designname is specified, its views are also compacted.
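
For example (a minimal sketch; 'characters' is an illustrative design document name):

// compact the whole database
await nano.db.compact('alice')

// also compact the views of the 'characters' design document
await nano.db.compact('alice', 'characters')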

nano.db.replicate(source, target, [opts], [callback])

Replicates source to target with options opts. The target database has to exist; add create_target:true to opts to create it prior to replication:

const response = await nano.db.replicate('alice',
                  'http://admin:password@otherhost.com:5984/alice',
                  { create_target:true })

nano.db.replication.enable(source, target, [opts], [callback])

Enables replication using the new CouchDB API from source to target with options opts. target has to exist; add create_target:true to opts to create it prior to replication. Replication will survive server restarts.

const response = await nano.db.replication.enable('alice',
                  'http://admin:password@otherhost.com:5984/alice',
                  { create_target:true })

nano.db.replication.query(id, [opts], [callback])

Queries the state of replication using the new CouchDB API. The id comes from the response given by the call to replication.enable:

const r = await nano.db.replication.enable('alice',
                  'http://admin:password@otherhost.com:5984/alice',
                   { create_target:true })
const q = await nano.db.replication.query(r.id)

nano.db.replication.disable(id, [opts], [callback])

Disables replication using the new CouchDB API. The id comes from the response given by the call to replication.enable:

const r = await nano.db.replication.enable('alice',
                   'http://admin:password@otherhost.com:5984/alice',
                   { create_target:true })
await nano.db.replication.disable(r.id);

nano.db.changes(name, [params], [callback])

Asks for the changes feed of name, params contains additions to the query string.

const c = await nano.db.changes('alice')

nano.db.changesAsStream(name, [params])

Same as nano.db.changes but returns a stream.

nano.db.changesAsStream('alice').pipe(process.stdout);

nano.db.info([callback])

Gets database information:

const info = await nano.db.info()

nano.use(name)

Returns a database object that allows you to perform operations against that database:

const alice = nano.use('alice');
await alice.insert({ happy: true }, 'rabbit')

The database object can be used to access the Document Functions.

nano.db.use(name)

Alias for nano.use

nano.db.scope(name)

Alias for nano.use

nano.scope(name)

Alias for nano.use

nano.request(opts, [callback])

Makes a custom request to CouchDB. This can be used to create your own HTTP request to the CouchDB server, to perform operations where there is no nano function that encapsulates it (see the example after this list). The available opts are:

  • opts.db - the database name
  • opts.method - the HTTP method, defaults to get
  • opts.path - the full path of the request, overrides opts.doc and opts.att
  • opts.doc - the document name
  • opts.att - the attachment name
  • opts.qs - query string parameters, appended after any existing opts.path, opts.doc, or opts.att
  • opts.content_type - the content type of the request, defaults to json
  • opts.headers - additional HTTP headers, overrides existing ones
  • opts.body - the document or attachment body
  • opts.encoding - the encoding for attachments
  • opts.multipart - array of objects for multipart request
  • opts.stream - if true, a request object is returned. Default false and a Promise is returned.
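
For example, a minimal sketch of a custom request that fetches a document together with its conflicts, using only the documented opts above ('alice' and 'rabbit' reuse names from earlier examples):

// equivalent to GET /alice/rabbit?conflicts=true
const body = await nano.request({
  db: 'alice',
  doc: 'rabbit',
  method: 'get',
  qs: { conflicts: true }
})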

nano.relax(opts, [callback])

Alias for nano.request

nano.config

An object containing the nano configuration; the possible keys are listed below, with a small example after the list:

  • url - the CouchDB URL
  • db - the database name
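
For example (a minimal sketch; the exact values depend on the URL you supplied):

const nano = require('nano')('http://localhost:5984/alice')
console.log(nano.config)
// e.g. { url: 'http://localhost:5984', db: 'alice' }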

nano.updates([params], [callback])

Listen to db updates (see the example after this list). The available params are:

  • params.feed - Type of feed. Can be one of:
    • longpoll: Closes the connection after the first event.
    • continuous: Send a line of JSON per event. Keeps the socket open until timeout.
    • eventsource: Like continuous, but sends the events in EventSource format.
  • params.timeout - Number of seconds until CouchDB closes the connection. Default is 60.
  • params.heartbeat - Whether CouchDB will send a newline character (\n) on timeout. Default is true.
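
For example, a minimal sketch using a single longpoll request:

// wait for the next database-level event, or time out after 30 seconds
const update = await nano.updates({ feed: 'longpoll', timeout: 30 })
console.log(update)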

nano.info([callback])

Fetch information about the CouchDB cluster:

const info = await nano.info()

The response is an object with CouchDB cluster information.

Document functions

db.insert(doc, [params], [callback])

Inserts doc in the database with optional params. If params is a string, it's assumed it is the intended document _id. If params is an object, it's passed as query string parameters and docName is checked for defining the document _id:

const alice = nano.use('alice');
const response = await alice.insert({ happy: true }, 'rabbit')

The insert function can also be used with the method signature db.insert(doc,[callback]), where the doc contains the _id field e.g.

const alice = nano.use('alice')
const response = await alice.insert({ _id: 'myid', happy: true })

and also used to update an existing document, by including the _rev token in the document being saved:

const alice = nano.use('alice')
const response = await alice.insert({ _id: 'myid', _rev: '1-23202479633c2b380f79507a776743d5', happy: false })

db.destroy(docname, rev, [callback])

Removes a document from CouchDB whose _id is docname and whose revision is rev:

const response = await alice.destroy('rabbit', '3-66c01cdf99e84c83a9b3fe65b88db8c0')

db.get(docname, [params], [callback])

Gets a document from CouchDB whose _id is docname:

const doc = await alice.get('rabbit')

or with optional query string params:

const doc = await alice.get('rabbit', { revs_info: true })

If you pass attachments=true, the doc._attachments.attachmentNameN.data fields will contain the base-64 encoded attachments. Or, you can use db.multipart.get and parse the returned buffer to get the document and attachments.
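
For example (a minimal sketch; 'rabbit.png' is an illustrative attachment name):

const doc = await alice.get('rabbit', { attachments: true })
// doc._attachments['rabbit.png'].data contains the base-64 encoded attachment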

See the attachments methods to retrieve just an attachment.

db.head(docname, [callback])

Same as get but lightweight version that returns headers only:

const headers = await alice.head('rabbit')

Note: if you call alice.head in the callback style, the headers are returned to you as the third argument of the callback function.

db.bulk(docs, [params], [callback])

Bulk operations (insert/update/delete) on the database; refer to the CouchDB doc, e.g.:

const documents = [
  { a:1, b:2 },
  { _id: 'tiger', striped: true}
];
const response = await alice.bulk({ docs: documents })

db.list([params], [callback])

List all the docs in the database:

const doclist = await alice.list()
doclist.rows.forEach((doc) => {
  console.log(doc)
})

or with optional query string additions params:

const doclist = await alice.list({include_docs: true})

db.listAsStream([params])

List all the docs in the database as a stream.

alice.listAsStream()
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout)

db.fetch(docnames, [params], [callback])

Bulk fetch of the database documents. docnames are specified as per the CouchDB doc. Additional query string params can be specified; include_docs is always set to true.

const keys = ['tiger', 'zebra', 'donkey'];
const data = await alice.fetch({keys: keys})

db.fetchRevs(docnames, [params], [callback])

** changed in version 6 **

Bulk fetch of the revisions of the database documents. docnames are specified as per the CouchDB doc. Additional query string params can be specified; this is the same method as fetch but include_docs is not automatically set to true.
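
For example, a minimal sketch mirroring the fetch example above:

const keys = ['tiger', 'zebra', 'donkey'];
const revs = await alice.fetchRevs({ keys: keys })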

db.createIndex(indexDef, [callback])

Create index on database fields, as specified in CouchDB doc.

const indexDef = {
  index: { fields: ['foo'] },
  name: 'fooindex'
};
const response = await alice.createIndex(indexDef)

Reading Changes Feed

Nano provides a low-level API for making calls to CouchDB's changes feed, or if you want a reliable, resumable changes feed follower, then you need the changesReader.

There are three ways to start listening to the changes feed:

  1. changesReader.start() - to listen to changes indefinitely by repeated "long poll" requests. This mode continues to poll for changes until changesReader.stop() is called, at which point any active long poll will be canceled.
  2. changesReader.get() - to listen to changes until the end of the changes feed is reached, by repeated "long poll" requests. Once a response with zero changes is received, the 'end' event will indicate the end of the changes and polling will stop.
  3. changesReader.spool() - listen to changes in one long HTTP request (as opposed to repeated round trips); spool is faster but less reliable. (A minimal sketch of this mode appears after the examples below.)

Note: for .get() & .start(), the sequence of API calls can be paused by calling changesReader.pause() and resumed by calling changesReader.resume().

Set up your database connection and then choose changesReader.start() to listen to that database's changes:

const db = nano.db.use('mydb')
db.changesReader.start()
  .on('change', (change) => { console.log(change) })
  .on('batch', (b) => {
    console.log('a batch of', b.length, 'changes has arrived');
  }).on('seq', (s) => {
    console.log('sequence token', s);
  }).on('error', (e) => {
    console.error('error', e);
  })

Note: you probably want to monitor either the change or batch event, not both.

If you want changesReader to hold off making the next _changes API call until you are ready, then supply wait:true in the options to get/start. The next request will only fire when you call changesReader.resume():

db.changesReader.get({wait: true})
  .on('batch', (b) => {
    console.log('a batch of', b.length, 'changes has arrived');
    // do some asynchronous work here and call "changesReader.resume()"
    // when you're ready for the next API call to be dispatched.
    // In this case, wait 5s before the next changes feed request.
    setTimeout( () => {
      db.changesReader.resume()
    }, 5000)
  }).on('end', () => {
    console.log('changes feed monitoring has stopped');
  });
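
The spool mode has no dedicated example above, so here is a minimal sketch, assuming you want every change from the beginning of the feed (since: 0) and only the batch and error events:

db.changesReader.spool({ since: 0 })
  .on('batch', (b) => {
    console.log('a batch of', b.length, 'changes has arrived');
  }).on('error', (e) => {
    console.error('error', e);
  });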

You may supply a number of options when you start to listen to the changes feed:

Parameter   | Description | Default value | e.g.
----------- | ----------- | ------------- | ----
batchSize   | The maximum number of changes to ask CouchDB for per HTTP request. This is the maximum number of changes you will receive in a batch event. | 100 | 500
since       | The position in the changes feed to start from, where 0 means the beginning of time, now means the current position, or a string token indicates a fixed position in the changes feed. | now | 390768-g1AAAAGveJzLYWBgYMlgTmGQ
includeDocs | Whether to include document bodies or not. | false | true
wait        | For get/start mode, automatically pause the changes reader after each request. When the user calls resume(), the changes reader will resume. | false | true
fastChanges | Adds a seq_interval parameter to fetch changes more quickly. | false | true
selector    | Filters the changes feed with the supplied Mango selector. | null | {"name":"fred"}
timeout     | The number of milliseconds a changes feed request waits for data. | 60000 | 10000

The events it emits are as follows:

Event  | Description | Data
------ | ----------- | ----
change | Each detected change is emitted individually. Only available in get/start modes. | A change object
batch  | Each batch of changes is emitted in bulk in quantities up to batchSize. | An array of change objects
seq    | Each new sequence token (per HTTP request). This token can be passed into ChangesReader as the since parameter to resume changes feed consumption from a known point. Only available in get/start modes. | String
error  | On a fatal error, a descriptive object is returned and change consumption stops. | Error object
end    | Emitted when the end of the changes feed is reached. ChangesReader.get() mode only. | Nothing

The ChangesReader library will handle many transient errors such as network connectivity issues, service capacity limits and malformed data, but it will emit an error event and exit when fed incorrect authentication credentials or an invalid since token.

The change event delivers a change object that looks like this:

{
	"seq": "8-g1AAAAYIeJyt1M9NwzAUBnALKiFOdAO4gpRix3X",
	"id": "2451be085772a9e588c26fb668e1cc52",
	"changes": [{
		"rev": "4-061b768b6c0b6efe1bad425067986587"
	}],
	"doc": {
		"_id": "2451be085772a9e588c26fb668e1cc52",
		"_rev": "4-061b768b6c0b6efe1bad425067986587",
		"a": 3
	}
}

N.B

  • doc is only present if includeDocs:true is supplied
  • seq is not present for every change

The id is the unique identifier of the document that changed and the changes array contains the document revision tokens that were written to the database.

The batch event delivers an array of change objects.

Partition Functions

Functions related to partitioned databases.

Create a partitioned database by passing { partitioned: true } to db.create:

await nano.db.create('my-partitioned-db', { partitioned: true })

The database can be used as normal:

const db = nano.db.use('my-partitioned-db')

but documents must have a two-part _id made up of <partition key>:<document id>. They are inserted with db.insert as normal:

const doc = { _id: 'canidae:dog', name: 'Dog', latin: 'Canis lupus familiaris' }
await db.insert(doc)

Documents can be retrieved by their _id using db.get:

const doc = await db.get('canidae:dog')

Mango indexes can be created to operate on a per-partition basis by supplying partitioned: true on creation:

const i = {
  ddoc: 'partitioned-query',
  index: { fields: ['name'] },
  name: 'name-index',
  partitioned: true,
  type: 'json'
}

// instruct CouchDB to create the index
await db.createIndex(i)

Search indexes can be created by writing a design document with opts.partitioned = true:

// the search definition
const func = function(doc) {
  index('name', doc.name)
  index('latin', doc.latin)
}

// the design document containing the search definition function
const ddoc = {
  _id: '_design/search-ddoc',
  indexes: {
    'search-index': {
      index: func.toString()
    }
  },
  options: {
    partitioned: true
  }
}
 
await db.insert(ddoc)

MapReduce views can be created by writing a design document with opts.partitioned = true:

const func = function(doc) {
  emit(doc.family, doc.weight)
}

// Design Document
const ddoc = {
  _id: '_design/view-ddoc',
  views: {
    'family-weight': {
      map: func.toString(),
      reduce: '_sum'
    }
  },
  options: {
    partitioned: true
  }
}

// create design document
await db.insert(ddoc)

db.partitionInfo(partitionKey, [callback])

Fetch the stats of a single partition:

const stats = await alice.partitionInfo('canidae')

db.partitionedList(partitionKey, [params], [callback])

Fetch documents from a database partition:

// fetch document id/revs from a partition
const docs = await alice.partitionedList('canidae')

// add document bodies but limit size of response
const docs = await alice.partitionedList('canidae', { include_docs: true, limit: 5 })

db.partitionedListAsStream(partitionKey, [params])

Fetch documents from a partition as a stream:

// fetch document id/revs from a partition
db.partitionedListAsStream('canidae')
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout)

// add document bodies but limit size of response
db.partitionedListAsStream('canidae', { include_docs: true, limit: 5 })
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout)

db.partitionedFind(partitionKey, query, [params])

Query documents from a partition by supplying a Mango selector:

// find document whose name is 'wolf' in the 'canidae' partition
await db.partitionedFind('canidae', { 'selector' : { 'name': 'Wolf' }})

db.partitionedFindAsStream(partitionKey, query)

Query documents from a partition by supplying a Mango selector as a stream:

// find document whose name is 'wolf' in the 'canidae' partition
db.partitionedFindAsStream('canidae', { 'selector' : { 'name': 'Wolf' }})
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout)

db.partitionedSearch(partitionKey, designName, searchName, params, [callback])

Search documents from a partition by supplying a Lucene query:

const params = {
  q: 'name:\'Wolf\''
}
await db.partitionedSearch('canidae', 'search-ddoc', 'search-index', params)
// { total_rows: ... , bookmark: ..., rows: [ ...] }

db.partitionedSearchAsStream(partitionKey, designName, searchName, params)

Search documents from a partition by supplying a Lucene query as a stream:

const params = {
  q: 'name:\'Wolf\''
}
db.partitionedSearchAsStream('canidae', 'search-ddoc', 'search-index', params)
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout)
// { total_rows: ... , bookmark: ..., rows: [ ...] }

db.partitionedView(partitionKey, designName, viewName, params, [callback])

Fetch documents from a MapReduce view from a partition:

const params = {
  startkey: 'a',
  endkey: 'b',
  limit: 1
}
await db.partitionedView('canidae', 'view-ddoc', 'view-name', params)
// { rows: [ { key: ... , value: [Object] } ] }

db.partitionedViewAsStream(partitionKey, designName, viewName, params)

Fetch documents from a MapReduce view from a partition as a stream:

const params = {
  startkey: 'a',
  endkey: 'b',
  limit: 1
}
db.partitionedViewAsStream('canidae', 'view-ddoc', 'view-name', params)
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout)
// { rows: [ { key: ... , value: [Object] } ] }

Multipart functions

db.multipart.insert(doc, attachments, params, [callback])

Inserts a doc together with attachments and params. If params is a string, it's assumed to be the intended document _id. If params is an object, it's passed as query string parameters and docName is checked for defining the _id. Refer to the doc for more details. The attachments parameter must be an array of objects with name, data and content_type properties.

const fs = require('fs');

fs.readFile('rabbit.png', async (err, data) => {
  if (!err) {
    await alice.multipart.insert({ foo: 'bar' }, [{name: 'rabbit.png', data: data, content_type: 'image/png'}], 'mydoc')
  }
});

db.multipart.get(docname, [params], [callback])

Get docname together with its attachments via multipart/related request with optional query string additions. The multipart response body is a Buffer.

const response = await alice.multipart.get('rabbit')

Attachments functions

db.attachment.insert(docname, attname, att, contenttype, [params], [callback])

Inserts an attachment attname to docname, in most cases params.rev is required. Refer to the CouchDB doc for more details.

const fs = require('fs');

fs.readFile('rabbit.png', async (err, data) => {
  if (!err) {
    await alice.attachment.insert('rabbit', 
      'rabbit.png', 
      data, 
      'image/png',
      { rev: '12-150985a725ec88be471921a54ce91452' })
  }
});

db.attachment.insertAsStream(docname, attname, att, contenttype, [params])

As of Nano 9.x, the function db.attachment.insertAsStream is deprecated. Instead, simply pass a readable stream to db.attachment.insert as the third parameter.
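
A minimal sketch of the replacement pattern, reusing the revision from the earlier attachment example and streaming the file from disk:

const fs = require('fs');

// pass a readable stream as the attachment body instead of a buffer
await alice.attachment.insert('rabbit',
  'rabbit.png',
  fs.createReadStream('rabbit.png'),
  'image/png',
  { rev: '12-150985a725ec88be471921a54ce91452' })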

db.attachment.get(docname, attname, [params], [callback])

Get docname's attachment attname with optional query string additions params.

const fs = require('fs');

const body = await alice.attachment.get('rabbit', 'rabbit.png')
fs.writeFile('rabbit.png', body, (err) => {
  if (err) console.error(err)
})

db.attachment.getAsStream(docname, attname, [params])

const fs = require('fs');
alice.attachment.getAsStream('rabbit', 'rabbit.png')
  .on('error', (e) => console.error('error', e))
  .pipe(fs.createWriteStream('rabbit.png'));

db.attachment.destroy(docname, attname, [params], [callback])

changed in version 6

Destroy attachment attname of docname's revision rev.

const response = await alice.attachment.destroy('rabbit', 'rabbit.png', {rev: '1-4701d73a08ce5c2f2983bf7c9ffd3320'})

Views and design functions

db.view(designname, viewname, [params], [callback])

Calls a view of the specified designname with optional query string params. If you're looking to filter the view results by key(s) pass an array of keys, e.g { keys: ['key1', 'key2', 'key_n'] }, as params.

const body = await alice.view('characters', 'happy_ones', { key: 'Tea Party', include_docs: true })
body.rows.forEach((doc) => {
  console.log(doc.value)
})

or

const body = await alice.view('characters', 'soldiers', { keys: ['Hearts', 'Clubs'] })

When params is not supplied, or no keys are specified, it will simply return all documents in the view:

const body = await alice.view('characters', 'happy_ones')
const body = await alice.view('characters', 'happy_ones', { include_docs: true })

db.viewAsStream(designname, viewname, [params])

Same as db.view but returns a stream:

alice.viewAsStream('characters', 'happy_ones', {reduce: false})
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout);

db.viewWithList(designname, viewname, listname, [params], [callback])

Calls a list function fed by the given view from the specified design document.

const body = await alice.viewWithList('characters', 'happy_ones', 'my_list')

db.viewWithListAsStream(designname, viewname, listname, [params], [callback])

Calls a list function fed by the given view from the specified design document as a stream.

alice.viewWithListAsStream('characters', 'happy_ones', 'my_list')
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout);

db.show(designname, showname, doc_id, [params], [callback])

Calls a show function from the specified design for the document specified by doc_id with optional query string additions params.

const doc = await alice.show('characters', 'format_doc', '3621898430')

Take a look at the CouchDB wiki for possible query parameters and more information on show functions.

db.atomic(designname, updatename, docname, [body], [callback])

Calls the design's update function with the specified doc as input.

const response = await db.atomic('update', 'inplace', 'foobar', {field: 'foo', value: 'bar'})

Note that the data is sent in the body of the request. An example update handler follows:

"updates": {
  "in-place" : "function(doc, req) {
      var request_body = JSON.parse(req.body)
      var field = request_body.field
      var value = request_body.value
      var message = 'set ' + field + ' to ' + value
      doc[field] = value
      return [doc, message]
  }"
}

db.search(designname, searchname, params, [callback])

Calls a search index of the specified design document with optional query string additions params.

const response = await alice.search('characters', 'happy_ones', { q: 'cat' })

or

const drilldown = [['author', 'Dickens'], ['publisher', 'Penguin']]
const response = await alice.search('inventory', 'books', { q: '*:*', drilldown: drilldown })

Check out the tests for a fully functioning example.

db.searchAsStream(designname, searchname, params)

Calls a search index of the specified design document with optional query string additions params. Returns a stream.

alice.searchAsStream('characters', 'happy_ones', { q: 'cat' }).pipe(process.stdout);

db.find(selector, [callback])

Perform a "Mango" query by supplying a JavaScript object containing a selector:

// find documents where the name = "Brian" and age > 25.
const q = {
  selector: {
    name: { "$eq": "Brian"},
    age : { "$gt": 25 }
  },
  fields: [ "name", "age", "tags", "url" ],
  limit:50
};
const response = await alice.find(q)

db.findAsStream(selector)

Perform a "Mango" query by supplying a JavaScript object containing a selector, but return a stream:

// find documents where the name = "Brian" and age > 25.
const q = {
  selector: {
    name: { "$eq": "Brian"},
    age : { "$gt": 25 }
  },
  fields: [ "name", "age", "tags", "url" ],
  limit:50
};
alice.findAsStream(q)
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout);

using cookie authentication

Nano supports making requests using CouchDB's cookie authentication functionality. If you initialise Nano so that it is cookie-aware, you may call nano.auth first to get a session cookie. Nano will behave like a web browser, remembering your session cookie and refreshing it if a new one is received in a future HTTP response.

const nano = require('nano')({
  url: 'http://localhost:5984',
  requestDefaults: {
    jar: true
  }
})
const username = 'user'
const userpass = 'pass'
const db = nano.db.use('mydb')

// authenticate
await nano.auth(username, userpass)

// requests from now on are authenticated
const doc = await db.get('mydoc')
console.log(doc)

The second request works because the nano library has remembered the AuthSession cookie that was invisibly returned by the nano.auth call.

When you have a session, you can see what permissions you have by calling the nano.session function:

const doc = await nano.session()
// { userCtx: { roles: [ '_admin', '_reader', '_writer' ], name: 'rita' },  ok: true }

Advanced features

Getting uuids

If your application needs to generate UUIDs, then CouchDB can provide some for you

const response = await nano.uuids(3)
// { uuids: [
// '5d1b3ef2bc7eea51f660c091e3dffa23',
// '5d1b3ef2bc7eea51f660c091e3e006ff',
// '5d1b3ef2bc7eea51f660c091e3e007f0',
//]}

The first parameter is the number of uuids to generate. If omitted, it defaults to 1.

Extending nano

nano is minimalistic but you can add your own features with nano.request(opts)

For example, to create a function to retrieve a specific revision of the rabbit document:

function getrabbitrev(rev) {
  return nano.request({ db: 'alice',
                 doc: 'rabbit',
                 method: 'get',
                 params: { rev: rev }
               });
}

getrabbitrev('4-2e6cdc4c7e26b745c2881a24e0eeece2').then((body) => {
  console.log(body);
});

Pipes

You can pipe the return values of certain nano functions like any other stream. For example, if our rabbit document has an attachment with name picture.png you can pipe it to a writable stream:

const fs = require('fs');
const nano = require('nano')('http://127.0.0.1:5984/');
const alice = nano.use('alice');
alice.attachment.getAsStream('rabbit', 'picture.png')
  .on('error', (e) => console.error('error', e))
  .pipe(fs.createWriteStream('/tmp/rabbit.png'));

then open /tmp/rabbit.png and you will see the rabbit picture.

Functions that return streams instead of a Promise are:

  • nano.db.listAsStream

attachment functions:

  • db.attachment.getAsStream
  • db.attachment.insertAsStream

and document level functions

  • db.listAsStream

Logging

When instantiating Nano, you may supply the function that will perform the logging of requests and responses. In its simplest form, simply pass console.log as your logger:

const nano = Nano({ url: process.env.COUCH_URL, log: console.log })
// all requests and responses will be sent to console.log

You may supply your own logging function to format the data before output:

const url = require('url')
const logger = (data) => {
  // only output logging if there is an environment variable set
  if (process.env.LOG === 'nano') {
    // if this is a request
    if (typeof data.err === 'undefined') {
      const u = new url.URL(data.uri)
      console.log(data.method, u.pathname, data.qs)
    } else {
      // this is a response
      const prefix = data.err ? 'ERR' : 'OK'
      console.log(prefix, data.headers.statusCode, JSON.stringify(data.body).length)
    }
  }
}
const nano = Nano({ url: process.env.COUCH_URL, log: logger })
// all requests and responses will be formatted by my code
// GET /cities/_all_docs { limit: 5 }
// OK 200 468

Tutorials, examples in the wild & screencasts

Roadmap

Check issues

Tests

To run (and configure) the test suite simply:

cd nano
npm install
npm run test

Meta

https://freenode.org/

Release

To create a new release of nano, run the following commands on the main branch:

  npm version {patch|minor|major}
  git push origin main --tags
  npm publish

couchdb-nano's People

Contributors

adamsaeid, alesch, cttttt, dependabot[bot], dscape, fracek, garrensmith, glynnbird, insidewhy, janl, jaybeavers, jhs, jlank, jo, klaemo, mlecoq, mmalecki, oleics, perezd, pgte, piotrzarzycki21, pmanijak, revington, ricellis, smithsz, streunerlein, suprememoocow, svnlto, tarantoga, thiagoarrais


couchdb-nano's Issues

Cannot find module 'follow' with 6.4.1

I'm getting:

Error: Cannot find module 'follow'
at Function.Module._resolveFilename (module.js:469:15)
at Function.Module._load (module.js:417:25)
at Module.require (module.js:497:17)
at require (internal/module.js:20:19)
at Object. (/Users/fred/dev/mycatalog/node_modules/nano/lib/nano.js:21:14)
at Module._compile (module.js:570:32)
at Object.Module._extensions..js (module.js:579:10)
at Module.load (module.js:487:32)
at tryModuleLoad (module.js:446:12)
at Function.Module._load (module.js:438:3)

with nano 6.4.1

Looking at package.json, it depends on "cloudant-follow" but requires "follow" on line 21.

TypeError: Cannot use 'in' operator to search for 'counts' in undefined

Original issue

kanongil
Regression in 6.2.0 from c03b552. The patch breaks lookups that have neither qs nor callback set.

jo
Thanks for catching this!

WaldoJeffers
Yup, I have the same issue. For example, if I do db.viewWithList('design','view','list').pipe(...), I get this error: TypeError: Cannot use 'in' operator to search for 'counts' in undefined.

MatthieuLemoine
I also have this issue when using viewWithList as @WaldoJeffers.

bednie
Still seeing this. Would a PR help?

Function db.get returns database information when empty docname provided

Calling the db.get function below with a docname value of either undefined, null, or an empty string returns the database information.
db.get(docname, [params], [callback])

Result:
{ db_name: 'alice',
update_seq: '9-g1AAAAEzeJzLYWBg4MhgTmHgzcvPy09JdcjLz8gvLskBCjMlMiTJ____PyuRAYeCJAUgmWQPVsOES40DSE08fnMSQGrq8arJYwGSDA1ACqhsPiF1CyDq9hNSdwCi7n5WIjtedQ8g6kDuywIAi4xi_w',
sizes: { file: 70863, external: 1322, active: 3753 },
purge_seq: 0,
other: { data_size: 1322 },
doc_del_count: 0,
doc_count: 2,
disk_size: 70863,
disk_format_version: 6,
data_size: 3753,
compact_running: false,
cluster: { q: 8, n: 1, w: 1, r: 1 },
instance_start_time: '0' }

Expected Behavior

If docname is undefined, null, or an empty string, the result should be an error like 404 not found. It makes it hard for a developer to find the root cause if they passed an empty docname by mistake.

Current Behavior

The database information is returned.

Possible Solution

Check against the value of docname, and return an error if it's undefined, null, or ''.
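
A minimal sketch of the kind of guard the reporter suggests, implemented in application code (the helper name is hypothetical):

function safeGet(db, docname, params) {
  // reject empty ids instead of letting the call fall through to the database info endpoint
  if (!docname || typeof docname !== 'string') {
    return Promise.reject(new Error('docname must be a non-empty string'))
  }
  return db.get(docname, params)
}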

Steps to Reproduce (for bugs)

db.get(undefined, [params], [callback]) or
db.get(null, [params], [callback]) or
db.get('', [params], [callback])

Context

A developer makes a mistake and "undefined" is passed as docname. It takes time to locate this issue, because the "get" function returned database information rather than an Error.

Your Environment

nano 6.4.2

insertDocument API broken backwards compatibility (cannot update document)

Original issue

nacho4d
Hello everyone.
Thanks for this awesome library!

I got the latest version of nano and realized I was not able to update documents anymore when using some old code. I found there were some style changes in commit 848caf7 that made the API not work with old code.

The exact change can be found here:
848caf7#commitcomment-11989091

I am not sure about the policy for addressing this kind of thing. The commit is about 9 months old and since nobody else seems to have noticed this, I guess updating the documentation to highlight this kind of change would be enough. Otherwise I can make a small pull request to support both :)

Support for _find (mango queries)

Original issue

chrisfosterelli
CouchDB 2.0 has a _find route that accepts mango queries. We're using the developer preview of Couch 2 and it'd be great if nano could expose this route!

Currently we have to work around nano to use these.

chrisfosterelli
Something like this might work:

const couch = nano(url)
couch.use(db).mango.find({ ... }, (err, results) => {
  // do your thing
})

Use of 'docname' in documents unclear

Original issue

justin-hackin
I am new to Cloudant and couchdb so forgive my ignorance, but I'm finding it difficult to understand what you mean by 'docname'. I tried googling what it implied in Cloudant/couchdb but couldn't find anything useful. I have to assume it means the _id property of the record ... but why not say docId then?

Improvement to documentation

I have noticed that the params object used in the db.view context can be difficult to understand, since the example given doesn't include one at all. I think adding a few more examples will help people more quickly understand what is being asked for there.

Remote Memory Exposure in [email protected] > [email protected] > [email protected]

Hi,

I am checking my projects with nsp and get the following findings.

(screenshot of nsp findings)

Could you please update dependencies or remove the follow module if possible?
It looks like that the project is no longer active, the same issue was reported on Jun 10, 2016 at the follow project iriscouch/follow#84.
The last commit on master was on May 24, 2015.

https://nodesecurity.io/advisories/309
https://nodesecurity.io/advisories/77

Thanks in advance
Konrad

anonymous functions transpiled with babel don't work with _design docs in couchdb

Original issue

export-mike
Hi I've been using nano and cradle and both libraries seem to have this problem,

possibly this change should be fixed in couchdb.

flatiron/cradle#306

Have you encountered this issue before? How else could it be fixed?

export-mike
update from @janl flatiron/cradle#306 (comment)

chrisfosterelli
For what it's worth, we've dealt with that by doing this:

const validate = function(newDoc, savedDoc, userCtx) {
   // Content here
}.toString()

This results in validate being a stringified function that can be passed to Nano, and babel will not name it.

db.multipart.get does not work with Cloudant Local 1.0.0.5

Original issue

yawenchien
Version

Cloudant Local 1.0.0.5
Cloudant 1.3 http://github.com/cloudant/nodejs-cloudant
nano 6.1.5
Issue
db.multipart.insert creates doc successfully & correctly in Cloudant Local 1.0.0.5. However, db.multipart.get does not return error code but does not retrieve correct/readable doc.

Workaround
Use https://$HOST/$DATABASE/$DOCUMENT_ID and https://$HOST/$DATABASE/$DOCUMENT_ID/FullContent to retrieve doc & its attachment.

Allow forcing GET on VIEW commands (even with keys)

Original issue

libHive
This issue rose when working against Cloudant. Their pricing is 5 times cheaper for GET requests than POST requests. I had a lot of requests for small sets of keys, nothing that couldn't go into a query string, and I had made this change in order to keep my bills down.

The actual change was rather small - just adding an optional parameter to the request -

  let exampleQuery = { 
            reduce : true, 
            group : true, 
            keys : [ [a,b], [c,d] ], 
            forceGet: true // this is the parameter I added
        }; 

Do you think this is something that should go into the main branch ?
Thanks

New version?

Is there a plan to cut a new version to NPM any time soon?

Or-Clauses

I spent several hours trying to figure out how to use the OR condition and accidentally discovered it

Possible Solution

apache/nano#357
I propose to include such explanation in the doc.
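
For reference, a sketch of the kind of OR condition being asked about, using CouchDB's standard Mango $or operator with db.find (field values are illustrative):

const q = {
  selector: {
    "$or": [
      { name: { "$eq": "Brian" } },
      { name: { "$eq": "Rita" } }
    ]
  }
}
const response = await alice.find(q)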

Increase the minimum engine version

The current minimum engine version for Nano is Node 0.12. That version is EOL and no longer supported, the oldest LTS stream is the 4.x, although that is only going to be maintained until April 2018 [1].

It also appears that #45 stopped testing Nano on older versions and made 4.x the minimum tested version.

As a result I think it makes sense to increase the minimum engine version to at least 4.

Context

As seen by #62 some of Nano's dependencies (e.g. request) already specify an engine >=4 and an in-range update of that dependency caused a break for people running nodejs-cloudant and/or Nano on Node 0.12. The changes in #62 will prevent that break, but will also stop further minor version updates of request which may well be needed for vulnerability fixes etc. in future (there have been some in Hawk in the past), so I think the change to pin the request version can only be a stop-gap.

Expected Behavior

The minimum engine version of Nano should be equal to the greatest minimum engine version of any of the dependencies and should match the oldest tested version.

Current Behavior

The engine version is an EOL, un-maintained version of Node.js (0.12).

Possible Solution

  • Update the engine version to >=4, or maybe even >=6 since that will be the oldest maintained LTS in April 2018.
  • Re-enable minor version updates of the request dependency.
  • I'm not 100% clear from the NPM documentation, but it might be worth considering adding the config: {engine-strict: true} flag to the package.json to by default prevent installing on unsupported engine versions.

Provide more information for socket errors

It would be helpful to provide additional error information in the message Nano outputs for socket or connection errors.

Expected Behavior

Suggest including, for example, reading the error .code and .description and adding it to the message to provide some more useful diagnostics.

{
  code: 'ECONNRESET',
  description: 'socket hang up'
}

err.message -> error happened in your connection ECONNRESET socket hang up

Current Behavior

Currently for a socket/connection error only the err.message is set to error happened in your connection.

err.message -> error happened in your connection

Possible Solution

Could build up the error message to include .code and .description if they are available e.g.

  if (err && err.code) {
    err.message = `${err.message} ${err.code}`;
  }
  if (err && err.description) {
    err.message = `${err.message} ${err.description}`;
  }

Steps to Reproduce (for bugs)

  1. Could use nock replyWithError to reproduce e.g.
nock.get(...).replyWithError({'description': 'socket hang up', 'code': 'ECONNRESET'});

Context

Made it hard to debug some connection issues without adding additional output statements to see the underlying cause of the error. Similarly to #54 where the whole error object has been output because the message error happened in your connection is insufficient to identify the issue.

Your Environment

  • Version used:

6.4.0

  • Browser Name and version:

Node.js 8.2.0

  • Operating System and version (desktop or mobile):

macOS 10.12.6

  • Link to your project:

https://github.com/cloudant/nodejs-cloudant

socket hangup on db.list with large numbers of ids

Expected Behavior

The db.list function should return doc metadata based on the ids you pass it, if any.

Current Behavior

With large numbers of docs you instead get:

> { Error: socket hang up
    at createHangUpError (_http_client.js:254:15)
    at Socket.socketOnEnd (_http_client.js:346:23)
    at emitNone (events.js:91:20)
    at Socket.emit (events.js:185:7)
    at endReadableNT (_stream_readable.js:974:12)
    at _combinedTickCallback (internal/process/next_tick.js:74:11)
    at process._tickDomainCallback (internal/process/next_tick.js:122:9)
  name: 'Error',
  scope: 'socket',
  errid: 'request',
  code: 'ECONNRESET',
  description: 'socket hang up',
  stacktrace:
   [ 'Error: socket hang up',
     '    at createHangUpError (_http_client.js:254:15)',
     '    at Socket.socketOnEnd (_http_client.js:346:23)',
     '    at emitNone (events.js:91:20)',
     '    at Socket.emit (events.js:185:7)',
     '    at endReadableNT (_stream_readable.js:974:12)',
     '    at _combinedTickCallback (internal/process/next_tick.js:74:11)',
     '    at process._tickDomainCallback (internal/process/next_tick.js:122:9)' ] }

Steps to Reproduce (for bugs)

Run this (modifying as needed for your local env etc):

const nano = require('nano')('http://admin:pass@localhost:5984');
nano.db.create('too-many-ids');
const db = nano.db.use('too-many-ids');

let ids = [];
for (let i = 0; i < 240; i++) {
  ids.push(`This-is-a-fake-id-${i}`);
}

let result = db.list({ids: ids}, console.log);

At least on my system, and in this test case, 239/240 is when the failure starts.

Context

NB: One clear solution would be to use batching. However, I'm raising this because it seems unintentional.

Your Environment

  • Version used: Nano 6.4.0
  • CouchDB used: Tested against both 1.6 and 2.0
  • Operating System and version (desktop or mobile): MacOS 10.12.5

Perform search over POST too

Original issue

TheHominid
A search query that is too long results in an error 400 from CouchDB (due to GET URL length limitations); however, if the same query is sent inside the body with a POST call, CouchDB works properly.
From the code it seems that nano can only perform search operations over GET and not POST.

Can't create DBs via nano, but can do so via command line

I am running the official docker image for couchdb on Windows, and it starts right up with my app, the dashboard works, and everything seems fine. The issue is that in my app nano's create db calls do not actually create new databases in the instance. When I manually curl to create via the localhost URL, new databases are created without issue.

I don't get any errors or warnings about this, but the issue feels like some sort of config or auth issue where the lib is either writing to the wrong directory or can't write to the one it is supposed to. I can't really explain it, given I can issue commands directly and it works fine.

Expected Behavior

When I run create(), I should be able to create a new database and view it in the dashboard.

Current Behavior

Calls to create() report no errors, but fail to add a new db to the instance.

Possible Solution

Steps to Reproduce (for bugs)

I can provide the exact proj if you need it, but it is the most vanilla install possible:

Docker compose looks like this:

version: '2'
services:
  hub:
    build: .
    ports:
      - "3000:3000"
    environment:
      - ENVIRONMENT=dev
      - COUCHDB_URL="http://localhost:5984"
  couchdb:
    image: "couchdb"
    ports:
      - "5984:5984"

Nano include looks like this:

const nano = require('nano')(appConfig.dbURL); // dbURL is the localhost:5984

  1. Docker-compose up
  2. Run node proj
  3. Run simple nano create()
  4. No errors, just doesn't create a database

Context

Can't use couchdb/nano until this first step works.

Your Environment

Is a native promise version planned

It's generally known that callbacks lead to bad code.
Also, promisifying callback functions makes it (slower and) harder to debug, given the stack traces.

db.view encodes start_key / startkey and end_key / endkey params differently

I have a composite key view aka key = [id, 0], [id, 1], .... When I use the start_key alias (with underscore), the uri encoding is incorrect. startkey passes the correct encoding.

    db.view(designdoc, viewname, {
        start_key: [req.params.id],
        end_key: [req.params.id, {}],
        include_docs: true,
    }, callback);

(incorrectly) encodes to start_key[0]=id&end_key[0]=id which in fact returns all documents in the view.

    db.view(designdoc, viewname, {
        startkey: [req.params.id],
        endkey: [req.params.id, {}],
        include_docs: true,
    }, callback);

(correctly) encodes to startkey=[id]&endkey=[id, {}] and returns the correct documents.

Can not get correct userctx

Original issue

ceremcem
When I open the example.com/_session, it returns:

{"userCtx":{"roles":["_admin","_reader","_writer"],"name":"demeter"},"ok":true,"info":{"authentication_db":"_users","authentication_handlers":["delegated","cookie","default","local"],"authenticated":"cookie"}}

But when I use the following code:

err, body, headers <- nano.auth user.name, user.password
console.log "err", err
console.log "body: "
console.log body
console.log "headers..."
console.log headers

body returns

{"roles":[],"ok":true,"name":"demeter"}

Which is not correct.

Edit

Following bash script also returns same response with nano:

#!/bin/bash
HOST=https://example.com

curl -vX POST $HOST/_session \
    -H 'Content-Type:application/x-www-form-urlencoded' \
    -H 'Content-Type:application/json' \
    -d 'name=myuser&password=mypassword'

So the problem might not be related with nano itself, it might be a setup issue.

ceremcem
However, following code gets the same correct userCtx with the browser:

require! \nano
conn = nano cfg.url

err, body, headers <- conn.auth cfg.user.name, cfg.user.password
return console.log "err", err if err

if headers and headers.\set-cookie
    cfg.cookie = headers.\set-cookie
else
    return console.error "We got no cookie?", headers

sconn = nano do
    url: cfg.url
    cookie: cfg.cookie

err, session <- sconn.session
return console.error "Err: ", err if err

console.log 'Session is: ', session 

db.list params ?

Original issue

pjatinsight
Apologies if this is a newbie mistake.

Trying to get back some documents without having to use a view.
I have different documents in the db.

I want to extract the 'schedule' documents...

db.list({type:'schedule'},function(err, body) {
      if (!err) {
        console.log("And got..");
        body.rows.forEach(function(doc) {
          console.log(doc);
        });
        res.json(body.rows);
      }
    });

The 'schedule' documents are like this:

{
       "_id": "emailjob1",
       "_rev": "3-94b1eb8bff5bc64c224ac0a86b4dc2e0",
       "shortname": "EmailJob1",
       "displayname": "Daily at 11am (weekdays only)",
       "type": "schedule",
       "chronstring": "00 30 11 * * 1-5"
    }

But I'm just getting what looks like a random selection of documents back...
Do I need to specify the params in some other way?

kandebonfim
+1

Handle ECONNRESET errors

Original issue

fhahne
I may be missing something here, but it seems that nano is not handling ECONNRESET errors, or other sorts of socket hangups. I assume that one would want to attach an error listener to the request object in order to do so. Since that is all encapsulated in the module I am not quite sure how to achieve that. Any suggestions?

bmiller59
+1 I am also getting similar errors. What is the correct strategy for adding an error handler to address these econnreset errors? Also, any suggestions what the cause could be?

0x1mason
can you do something like

require('nano')({
request: function () {
  // request wrapper with special handling
}
})

Handle database names with `/` in them

Original issue

BigBlueHat
Right now, if I give couchdb-push (which depends on nano) a URL like http://localhost:5984/mail/bigbluehat-com/byoung it throws the following error:

...\node_modules\couchdb-push\node_modules\couchdb-ensure\index.js:12
  couch.request({
        ^
TypeError: undefined is not a function

If I escape them, however, things work as they should: http://localhost:5984/mail%2Fbigbluehat-com%2Fbyoung

Obviously the escaped version is what CouchDB actually wants, but it'd be nice for the developer to not have to care that much--or go figure out what's needed from that error.

Having this as part of the library would save a bunch of URL parsing boilerplate in other people's code too, fwiw.

Thanks!

DB.insert: Can not insert extra headers like If-Match:rev to handle 'Document update conflict'

Original issue

cloidjgreen
This might be just pure ignorance but in attempting to set up CRUD from Angular through Express/nano to CouchDB I of course ran into the need to handle 'Document update conflict'.

There seemed to be no path to resolve the problem through DB.insert. The second parameter accepts only docName at line 372 of nano before the object passed in is assigned to qs. I had set up the insert call with DB.insert(req.body, {docName:'name', headers:{'If-Match':'rev#'}}, cb(){});

Following the pass off of the insertDoc function in nano to the relax function I found that the code danced all around opts and qs and never included qs.headers into the request header.

So on line 111 of nano I changed
req.headers = _.extend(req.headers, opts.headers, cfg.defaultHeaders);
to
req.headers = _.extend(req.headers, opts.headers, qs.headers, cfg.defaultHeaders);

Now after failing on a DB.insert with a 'Document update conflict' I can get the current doc rev, add the headers object with and appropriate "If-Match" to the PARAMS, second argument to insert, and force the update.

Comments??

jo
The revision does not necessarily have to be in the If-Match header. Just include the revision in the doc body, eg

db.insert({ foo: 'bar', _rev: '1-asd' }, 'mydoc')

cloidjgreen
Yes I can do that too but reviewing the document and resolving differences does not suit the specific use case I have.

Thanks for the update.

Other than this I have found nano to be very elegant. Makes for a very easy marriage between CouchDB -- Express,

Regards
Cloid

jo
Sorry, I maybe did not understand your use case. Could you explain it further please?

However, I really like the change you propose, because it's a minimal change which extends nano's flexibility.

cloidjgreen
Two use cases actually.

  1. I am developing json data templates and as I edit those I am not inserting a revision field or maintaining it. I want to be able to load these on system boot and apply updates to them through an admin utility, so I do not want to have to concern myself with revisions in the case of these templates.
  2. The application is a collaboration offering system where parties can negotiate or bid to collaborate and the offering party will have "GOD" rights over the offer (offer and bidding all happening on the same document). So when the offering party decides to accept a collaboration, the offering party will preemptively change the state on the offer document regardless of unseen inputs.

So those are the two things I think I want to do. Not sure it will work out this way though. It might turn out to be necessary to deal with these race conditions in a different way.

Still on the learning curve and experimenting continuously.

Regards

Cloid

jo
Although I recommend using a different data model approach for use case 2 (see Document Modeling to Avoid Conflicts), you can achieve what you want in current nano by just setting doc._rev. The If-Match header does nothing more than the doc._rev or rev query parameter does.

That said you can also supply the revision as a query string:

DB.insert(req.body, { docName: 'name', rev: 'rev#' }, cb)

Does this help?

PS: While working with express and nano you might also find connect-nano useful.
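For completeness, a common way to resolve a 409 'Document update conflict' without the If-Match header is to fetch the latest _rev and retry the insert. This is a minimal sketch; forceInsert is a hypothetical helper, not a nano API, and it implements a "last writer wins" policy that overwrites unseen changes (which matches use case 2 above, but is not appropriate everywhere).

// Hypothetical helper: overwrite a document regardless of its current revision.
async function forceInsert(db, doc, id) {
  try {
    return await db.insert(doc, id)
  } catch (err) {
    if (err.statusCode !== 409) {
      throw err
    }
    // conflict: fetch the current revision and retry once with it
    const current = await db.get(id)
    return db.insert(Object.assign({}, doc, { _rev: current._rev }), id)
  }
}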

npm package version (6.2.0) doesn't include `uuids` method

The version of nano included in npm, which is currently 6.2.0, doesn't include the uuids method. It doesn't appear the npm version has been updated in about a year.

Edit: Just noticed it's only in master and not included in the 6.2.0 release. Will close out this issue, but I am still wondering what the plan is for releasing this feature.

Please, consider using "follow" library with the following patch

Original issue

MaxXx1313
Hi!
It looks like the "follow" library is no longer supported. However, it is used in this library.
Could you please find a way to apply the following patch to it? If you ask me, I think it's a good idea to make your own copy of the "follow" lib and maintain it alongside nano.

iriscouch/follow#83

Thank you

micophilip
I second @MaxXx1313 's motion. There are even open security vulnerabilities against follow which haven't been addressed yet. There hasn't been a commit there in over a year.

Destroying An Undefined Doc Removes the Whole Database

Recently discovered a bug when trying to delete a doc from a CouchDB database. The action of deleting a document with "undefined" inputs actually removes the whole database. The .destroy() call should really be restricted to destroying documents and not the whole DB itself. If an undefined is sent in, I think nano should ideally return an error?

This is a sample code to reproduce what happens:

const nano = require('nano')(CouchDB); // CouchDB is the connection URL
const dbName = nano.db.use('dbName');

dbName.destroy(undefined, undefined, (err) => {
  // DB is destroyed at this point
});
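Until nano validates its arguments here, a defensive wrapper can refuse to issue the request when the document id or rev is missing. This is a sketch; safeDestroy is a hypothetical helper, not part of nano.

// Hypothetical guard: only delete a document when both id and rev are strings.
function safeDestroy(db, id, rev) {
  if (typeof id !== 'string' || typeof rev !== 'string') {
    return Promise.reject(new Error('destroy called without a document id or rev'))
  }
  return db.destroy(id, rev)
}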

Missing db.attachment.head method

Original issue

hellboy81
I need to pipe an attachment stream to the response stream with correct error handling (missing attachment, etc.).

I am trying to check whether the attachment exists with db.attachment.head, but this method is missing from nano.

Ha

var attachment = db.attachment.get(documentName, attName, function (err) {
  if (err) {
    if (err.statusCode === 404) {
      // Handle attachment not found
    } else {
      // Handle other error
    }
  }
});

// Still executes even if the attachment is not found
// The original CouchDB error response has already been sent to the client BEFORE the error callback is called
attachment.pipe(res)

hellboy81
As I mentioned, db.attachment.get cannot be used with pipe and error handling at the same time:

var attachment = db.attachment.get(documentName, attName, function (err) {
  if (err) {
    if (err.statusCode === 404) {
      // Handle attachment not found
    } else {
      // Handle other error
    }
  }

  // Throws error:
  // You cannot pipe after data has been emitted from the response.
  attachment.pipe(res)
})
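As a workaround until a dedicated attachment.head method exists, the underlying nano.request call can issue a HEAD request before any piping starts. This is a sketch; it assumes nano.request accepts db, path and method options as described in nano's advanced-usage documentation, and attachmentExists is a hypothetical helper.

const nano = require('nano')('http://localhost:5984')

// Hypothetical helper: resolves to true/false depending on whether the attachment exists.
function attachmentExists(dbName, docName, attName) {
  return nano.request({
    db: dbName,
    path: encodeURIComponent(docName) + '/' + encodeURIComponent(attName),
    method: 'HEAD'
  }).then(() => true)
    .catch((err) => {
      if (err.statusCode === 404) {
        return false
      }
      throw err
    })
}

// only start piping the attachment once we know it is there
attachmentExists('mydb', 'mydoc', 'logo.png').then((exists) => {
  console.log(exists ? 'attachment found' : 'attachment missing')
})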

Regular expression in `scrub` produces incorrect results if URL path portion contains a bare '@'

Original issue

browndav
In https://github.com/dscape/nano/blob/master/lib/nano.js#L64, `(.*)@` is being used instead of the non-greedy `(.*?)@` or `([^@]*)@`; this matches up to the last occurrence of @, rather than the first. If the URL's path component contains a bare @, the entire hostname and a portion of the path can be stripped. For example, scrub('https://foo:bar@host/foo/bar/@quux') will yield "https://XXXXXX:XXXXXX@quux" instead of "https://XXXXXX:XXXXXX@host/foo/bar/@quux".

I can't see any way to exploit this beyond potentially hiding URL contents in logs, but admittedly haven't investigated closely.
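For illustration, the difference between the greedy and non-greedy patterns can be reproduced with a plain replace (a sketch of the behaviour, not the code as shipped in lib/nano.js):

// Greedy: matches up to the LAST '@', swallowing the host and part of the path.
const greedy = 'https://foo:bar@host/foo/bar/@quux'.replace(/\/\/(.*)@/, '//XXXXXX:XXXXXX@')
console.log(greedy) // https://XXXXXX:XXXXXX@quux

// Non-greedy: stops at the FIRST '@', scrubbing only the credentials.
const safe = 'https://foo:bar@host/foo/bar/@quux'.replace(/\/\/(.*?)@/, '//XXXXXX:XXXXXX@')
console.log(safe) // https://XXXXXX:XXXXXX@host/foo/bar/@quux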

Feature: Couch 2.0 db.index Support

Create a db.index({index:{fields:['field']}}, function(err, body) {...}) to create an index in Couch 2.0

Similar and complementary to db.find, which was recently implemented.
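Recent versions of nano expose this as db.createIndex. A minimal sketch, assuming a nano version that includes createIndex and find; the index and field names are illustrative:

const nano = require('nano')('http://localhost:5984')
const db = nano.db.use('alice')

// create a Mango (JSON) index on 'field', then query it with db.find
async function run() {
  await db.createIndex({ index: { fields: ['field'] }, name: 'field-index', type: 'json' })
  const result = await db.find({ selector: { field: { $gt: 0 } } })
  console.log(result.docs)
}
run().catch(console.error)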

Documentation needs fixing for passing keys as a parameter in db.view

Original issue

PaulAllan1
Currently the documentation for db.view is as follows:
"db.view(designname, viewname, [params], [callback])

calls a view of the specified design with optional query string additions params. if you're looking to filter the view results by key(s) pass an array of keys, e.g { keys: ['key1', 'key2', 'key_n'] }, as params."

In actual fact what needs to be passed in to filter by keys is more like {keys: [['key1', 'key2', 'key_n']] }. Please update the documentation.

franklinlindemberg
I agree with @PaulAllan1, please update the documentation. I lost a lot of time trying to figure it out.
Regards.
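For reference, the documented form passes the keys array directly. Depending on the nano version in use you may need the nested-array form the commenters describe, so it is worth verifying against your installation. A sketch with illustrative design and view names:

const db = require('nano')('http://localhost:5984/alice')

// documented form: a flat array of keys
db.view('mydesign', 'myview', { keys: ['key1', 'key2'] })
  .then((body) => body.rows.forEach((row) => console.log(row.key, row.value)))
  .catch(console.error)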

Broken logging

Original issue.

jzaefferer
I'm trying to get nano to log all requests while running tests, which are supposed to stub all nano calls. The logging looks like it should do that by providing DEBUG=nano/logger as an env var. Yet it doesn't actually log anything. After some searching I found that lib/logger.js exports a function that returns the logEvent function which then returns a log function. Unfortunately all the log() calls, like this one, end up calling the logEvent function, ignoring the resulting function, causing nothing to be logged.

I can't tell what the intention of that inner log() function is; it seems unnecessary and could be removed. Usually I'd send a PR, but considering #265 I'll hold off on that for now. Maybe an existing contributor (and CLA signee) can just apply the fix (or tell me what I'm doing wrong).
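Newer versions of nano also accept a log function in the configuration object, which sidesteps the DEBUG plumbing entirely. This is a sketch and assumes an installed nano version that supports the log option:

// every request nano makes is handed to the supplied log function
const nano = require('nano')({
  url: 'http://localhost:5984',
  log: (data) => console.log('nano:', JSON.stringify(data))
})

nano.db.list().then(console.log).catch(console.error)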

db.follow ignores requestDefaults

Original issue

gr2m
Given

a database exists at http://localhost:5984/db-with-security, which is only accessible when signed in
an admin exists with username admin and password secret
And I run this code

var nano = require('nano')

var db = nano({
  url: 'http://localhost:5984/db-with-security',
  requestDefaults: {
    auth: {
      user: 'admin',
      pass: 'secret'
    }
  }
})

db.info(function (error, info) {
  if (error) {
    return console.log(error)
  }
  console.log('info ok.')

  db.follow(function (error) {
    if (error) {
      return console.log(error)
    }

    console.log('follow ok.')
  })
})

I get this error

info ok.
[Error: Bad DB response: {"error":"unauthorized","reason":"You are not authorized to access this db."}]

I've managed to add the user & pass to the CouchDB directly, so I don't rely on requestDefaults anymore. But I'd still consider this an issue from a user-expectation perspective.
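Until follow honours requestDefaults, one workaround is to embed the credentials in the connection URL itself so that both info and follow authenticate the same way (a sketch with placeholder credentials):

// embed the admin credentials directly in the connection URL
var db = require('nano')('http://admin:secret@localhost:5984/db-with-security')

db.info(function (error, info) {
  if (error) {
    return console.log(error)
  }
  console.log('info ok.')

  db.follow(function (error) {
    if (error) {
      return console.log(error)
    }
    console.log('follow ok.')
  })
})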

Documentation for atomic method is wrong

The atomic documentation refers to request.form directly. Instead, the method stringifies the object into req.body. So in your update handler, you need to insert:

var body = JSON.parse(req.body);

In addition to this issue with the docs, is there any particular reason why the object body is stringified?

Thanks
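Put together, an update handler that works with nano's atomic call looks roughly like this. A sketch only: the design-document name, update-handler name and field names are illustrative, and it assumes an existing 'alice' database with a 'rabbit' document.

// design document whose update handler parses the stringified body nano sends
const designDoc = {
  _id: '_design/update',
  updates: {
    inplace: function (doc, req) {
      var body = JSON.parse(req.body) // nano stringifies the object into req.body
      doc[body.field] = body.value
      return [doc, JSON.stringify({ ok: true })]
    }.toString()
  }
}

const db = require('nano')('http://localhost:5984/alice')

async function run() {
  await db.insert(designDoc)
  // db.atomic(designname, updatename, docname, body)
  const result = await db.atomic('update', 'inplace', 'rabbit', { field: 'color', value: 'white' })
  console.log(result)
}
run().catch(console.error)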

Streamable multipart attachments are not supported

Consider a scenario where you want to upload a document and several attachments to CouchDB in a single request; a typical example would be uploading a "Couchapp": a design doc with multiple HTML/JS/CSS attachments. This scenario is supported by the CouchDB API using a Content-Type: multipart/related request. Furthermore, you would expect to be able to leverage Node Streams in order to avoid having to buffer a bunch of files in memory. Unfortunately, a combination of issues in CouchDB and request prevents this.

  1. CouchDB has a nasty bug which prevents using Transfer-Encoding: chunked along with Content-Type: multipart/related. It won't be fixed until 1.7, and its status in the 2.x branch is unknown.
  2. request uses exactly the Transfer-Encoding: chunked to upload data from Streams. Uh-oh.
  3. request could be told explicitly not to use chunked encoding - but then you can't give it any Streams! See docs for a multipart option in request(options, callback). Bummer.

Technically, it should be possible to stream a bunch of attachments in the case where one knows the length of the stream beforehand (which is not a problem when you upload files from disk). I attempted to add support for this case in apache/nano#300, but failed miserably because request rightfully thinks it's smarter than me.
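If buffering the attachments in memory is acceptable, nano's db.multipart.insert already covers the single-request case. A sketch; the file names and paths are illustrative, and everything is read into memory up front, which is exactly the limitation this issue is about:

const fs = require('fs')
const nano = require('nano')('http://localhost:5984')
const db = nano.db.use('alice')

// read the attachments into memory, then send the doc and attachments in one multipart request
const attachments = [
  { name: 'index.html', data: fs.readFileSync('./app/index.html'), content_type: 'text/html' },
  { name: 'app.js', data: fs.readFileSync('./app/app.js'), content_type: 'application/javascript' }
]

db.multipart.insert({ language: 'javascript' }, attachments, '_design/app')
  .then(console.log)
  .catch(console.error)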

end_key and nano

Hi,

  this.db.view('message', 'by_profile_id', {limit: 50, key: [profileId],  end_key:[profileId, {}]  }, (err, body) => {

Send this to couchdb:

GET /messages/_design/message/_view/by_profile_id?limit=50&key=%5B%2243e89ff24c5285b5d3d4cadd87062871%22%5D&end_key%5B0%5D=43e89ff24c5285b5d3d4cadd87062871 200 ok 1

As you can see the end_key:[profileId, {}] is totally messed up.
Is there a work around for this?

Thanks!
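One thing worth trying (an assumption about the nano version in question, not a confirmed fix): older releases only JSON-encode the spellings startkey/endkey, not start_key/end_key, so switching to endkey may produce a correctly encoded query string. A sketch, reusing the view and profile id from the snippet above:

const db = require('nano')('http://localhost:5984/messages')
const profileId = '43e89ff24c5285b5d3d4cadd87062871' // example id from the log above

// use 'endkey' (no underscore); this spelling is JSON-encoded before the query string is built
db.view('message', 'by_profile_id', {
  limit: 50,
  key: [profileId],
  endkey: [profileId, {}]
}, (err, body) => {
  if (err) {
    return console.error(err)
  }
  console.log(body.rows.length)
})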

Cannot specify auth in initialization object

Original issue

timwis
I'd like to pass a username and password to nano without storing it in the couchdb url string, so that the url string can be used in other parts of the application that require separate authentication. The documentation suggests I can pass an auth object into the requestDefaults property of the initialization object, like this:

var db = require('nano')({
  url: 'http://localhost:5984',
  requestDefaults: {
    auth: { user: 'admin', pass: 'foo' }
  }
})
But this seems to have no effect on the request, resulting in the standard 401 unauthorized error. Passing the same user/pass as part of the url (http://admin:foo@localhost:5984) works fine.

In the meantime, I'm getting around this using a function that deconstructs the url, adds in the user/pass, and reconstructs it. But I'd prefer this cleaner approach.

I also tried upgrading the version of request and I think I did it correctly... Am I doing it wrong or is this a bug?

timwis
may be related to #278
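In the meantime, the URL-rewriting workaround can be kept small by using Node's URL class. A sketch; withAuth is a hypothetical helper and the credentials are placeholders:

const { URL } = require('url')

// Hypothetical helper: return a copy of the URL string with credentials embedded.
function withAuth(urlString, user, pass) {
  const url = new URL(urlString)
  url.username = user
  url.password = pass
  return url.toString()
}

const couch = require('nano')(withAuth('http://localhost:5984', 'admin', 'foo'))
couch.db.list().then(console.log).catch(console.error)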

Dependency might be out of date

Original issue

danielcobo
It seems the dependency used for following changes in CouchDB could be outdated.

Running npm install nano on a current version of Node will log npm WARN engine follow@…: wanted: {"node":"0.12.x || 0.10.x || 0.8.x"} (current: {"node":"4.2.2","npm":"2.14.7"})

The issue has been already raised at the repository of the dependency, however it looks like it has not yet been resolved - iriscouch/follow#73

satazor
Getting the same issue. The README points to https://github.com/jhs/follow/blob/master/package.json#L11 which has the issue resolved, but iriscouch's follow is being used instead of jhs's fork. I'm confused.

collinsrj
nsp flags this as a security issue:

nano@… > follow@… > request@… > hawk@…

There is a CVE open against hawk. See here https://nodesecurity.io/advisories/77

Couchdb-lucene support / interoperability with Cloudant

Original issue

homerjam
Hi,

I've been using pouchdb with a plugin to allow couchdb-lucene to be used locally and cloudant in production - I presume this could be done by extending nano.

However, as both extensions are very popular, would you consider adding dedicated support for couchdb-lucene as well? It would be nice to have a clean, maintained implementation.

Thanks

dscape
I would consider it. Can you propose how this would look in your view?

homerjam
I guess something like:

// usage

var nano = require('nano');

var db = nano({
    url: 'http://localhost:5984/foo',
    searchVendor: nano.CLOUDANT_SEARCH || nano.LUCENE_SEARCH
});


// in nano.js

// declare constants (after line 12)
nano.CLOUDANT_SEARCH = 'cloudant';
nano.LUCENE_SEARCH = 'lucene';

// choose viewPath style (around 474)
var viewPath;

if (meta.type === 'search' && cfg.searchVendor === nano.LUCENE_SEARCH) {
    viewPath = '/_fti/local/' + dbName + '/_design/' + ddoc + '/' + viewName;
} else {
    viewPath = '_design/' + ddoc + '/_' + meta.type + '/'  + viewName;
}

The trouble is that couchdb-lucene uses a different path: normally the database name is at the start, but here it comes after the /_fti/local/ prefix. So this would require some further modification in the relax() function, I think.

Support for mango index creation

Original issue

chrisfosterelli
Related to dscape/nano#329

To create mango indices to query on, you have to POST http://couchdb/dbname/_index. This isn't possible right now, since nano's insert changes the method to PUT if a document ID is provided.

A temporary workaround is to call the relax method directly, but a better long term solution would be if nano added support for mango index creation.

Something like this might work:

const couch = nano(url)
couch.use(db).mango.createIndex({ ... }, err => {
  // handle err
})
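The relax/request workaround mentioned above looks roughly like this. A sketch; it assumes nano.request accepts db, path, method and body options as in nano's advanced-usage documentation, and the database, index and field names are illustrative:

const couch = require('nano')('http://localhost:5984')

// POST the Mango index definition to /dbname/_index directly
couch.request({
  db: 'dbname',
  path: '_index',
  method: 'POST',
  body: { index: { fields: ['foo'] }, name: 'foo-index', type: 'json' }
}).then(console.log).catch(console.error)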

define a configurable default timeout for all request, and provide timeouts for specific requests

Original issue

osher
Hi,
I was looking for a way to provide timeouts to specific calls, and/or default timeout for all requests.

I concluded that there is no such thing, because searching for "timeout" in nano.js yields nothing (where I expected some entries - at least in the relax function...)

Personally I think that's something that should be facilitated by such a good infrastructure.

If I'm just misreading the picture - I'd be delighted to be directed to how it's done.

If indeed it's not implemented, then, if it fits the spirit here, would you like a PR to add such functionality?

What I propose to do is:
1. add handling of options.timeout in the relax function
2. replace calls to the top-level relax from all the methods of the doc-scope with calls to an enclosed relax, one that works with the dbName of the "docScope" and with db.timeout
3. expose the enclosed relax on docScope so it can be used with the defaults enclosed for it
4. have the enclosed db.relax use db.timeout as the timeout to pass to the higher relax

Hmm, I'm not sure which would be more useful: whether the timeout should be measured from the start of receiving content from the server, or a "brutal" timeout, i.e. one measured against completion of the whole response...
Let me know what you think. We can implement both; however, the second is simpler.

Thoughts?

satazor
I think you can specify the default timeout with requestDefaults; I'm not sure about specifying a timeout for each request.
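Following that suggestion, a default timeout for every request can be set through requestDefaults when constructing nano. A sketch; the timeout value is in milliseconds and is passed through to the underlying HTTP client:

const nano = require('nano')({
  url: 'http://localhost:5984',
  requestDefaults: {
    timeout: 10000 // fail any request that takes longer than 10 seconds
  }
})

nano.db.list().then(console.log).catch(console.error)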

weird headers getting passed through

Original issue

mandric
There are two headers, uri and statusCode, which are not valid HTTP and are being returned from the httpAgent code (see lib/nano.js line 195). This could lead to some sensitive information being leaked to the client if you're just passing headers through from CouchDB. Let me know if you think this is a real problem; I'd be happy to try to come up with a valid patch.

Here's an example:

$ curl -I http://localhost:3333
HTTP/1.1 200 OK
X-Powered-By: Express
etag: "2-8f443270fec4fb34bbc4ebca93a565d3"
date: Tue, 09 Feb 2016 05:24:29 GMT
Content-Type: application/json; charset=utf-8
cache-control: must-revalidate
statusCode: 200
uri: http://admin:secret@localhost:5984/test/foo
Content-Length: 713
Connection: keep-alive

var express = require('express'),
    db = require('nano')('http://admin:secret@localhost:5984/test'), 
    app = module.exports = express();

app.get('/', function(request,response) {
  db.get('foo', function (error, body, headers) {
    for (var k in headers) {
      response.header(k, headers[k]);
    }
    if (error) {
      return response.status(error.statusCode).send(error.reason);
    }
    response.send(body, 200);
  });
});
app.listen(3333);
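Until those fields are removed upstream, a small deny-list when proxying headers keeps them from reaching the client. A sketch based on the example above:

var express = require('express'),
    db = require('nano')('http://admin:secret@localhost:5984/test'),
    app = module.exports = express();

// never forward nano's internal bookkeeping fields to the client
var blocked = ['uri', 'statusCode'];

app.get('/', function (request, response) {
  db.get('foo', function (error, body, headers) {
    if (error) {
      return response.status(error.statusCode).send(error.reason);
    }
    Object.keys(headers).forEach(function (k) {
      if (blocked.indexOf(k) === -1) {
        response.header(k, headers[k]);
      }
    });
    response.status(200).send(body);
  });
});
app.listen(3333);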

Some question

Original issue

Not copying the body of the issue since it is a question about how to do something and it contains pictures and other material. Refer to the original issue linked at the top.
