
nedb's Issues

Calling loadDatabase creates a redundant directory "-p"

After a loadDatabase call, NeDB creates and leaves a "-p" directory in the root of the application. Running on Win7, Node v0.10.10.

A quick look revealed that customUtils.js:16 calls childProcess.exec mkdir with arguments that seem to bug out. Why not replace it with fs.mkdir()?
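
For illustration, a minimal sketch of such a replacement (the helper name is made up here; note that fs.mkdir, unlike mkdir -p, does not create intermediate directories):

var fs = require('fs');

// Hypothetical replacement for the childProcess.exec('mkdir -p ...') call.
// fs.mkdir avoids spawning a shell, which is where the stray "-p" folder
// on Windows appears to come from.
function ensureDirectoryExists(dir, callback) {
  fs.mkdir(dir, function (err) {
    if (err && err.code !== 'EEXIST') { return callback(err); }  // ignore "already exists"
    return callback(null);
  });
}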

Do you support $exists??

To find documents where a property isn't defined e.g.

db.find({"key": {$exists: false}},...

or is there another way of doing that?
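
If $exists turns out not to be supported, one workaround is to fetch and filter in plain JavaScript; a sketch (the field name "key" is just the example from above):

// Hedged workaround: fetch everything and filter for documents
// where the property is missing.
db.find({}, function (err, docs) {
  if (err) { return console.error(err); }
  var withoutKey = docs.filter(function (doc) {
    return doc.key === undefined;
  });
  console.log(withoutKey);
});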

Parent and child using same db

A forked child process accesses the DB file with _this.db = new Datastore({ filename: conf.mail.db }); the parent does the same. While the child reports a new entry inserted in the DB and an increased doc count, the parent accessing the same DB reports no change in the number of entries.

How to achieve advertised benchmark performance?

I've run the nedb benchmark and was disappointed by the results. In-memory-only everything works perfectly, but writing to a file slows everything down to the disk's maximum I/O speed.
Is this expected behaviour, or a problem on my machine? I thought the reason a copy of the whole database is kept in memory was to use it as a cache, so that disk I/O wouldn't be a problem. Has something changed in the meantime?

Thanks for clarification.

Benchmark result:

node insert.js
----------------------------
Test with 10000 documents
Don't use an index
Don't use pipelining
Use a persistent datastore
----------------------------
INSERT BENCH - Begin profiling
INSERT BENCH - Begin inserting 10000 docs - 0ms (total: 0ms)
===== RESULT (insert) ===== 217 ops/s
INSERT BENCH - Finished inserting 10000 docs - 45.9s (total: 45.9s)
INSERT BENCH - Benchmark finished - 0ms (total: 45.9s)

node insert.js -m
----------------------------
Test with 10000 documents
Don't use an index
Don't use pipelining
Use an in-memory datastore
----------------------------
INSERT BENCH - Begin profiling
INSERT BENCH - Begin inserting 10000 docs - 0ms (total: 0ms)
===== RESULT (insert) ===== 11876 ops/s
INSERT BENCH - Finished inserting 10000 docs - 842ms (total: 842ms)
INSERT BENCH - Benchmark finished - 0ms (total: 842ms)
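
For reference, the two runs above differ only in how the datastore is constructed; roughly along these lines (option names as I understand them, so treat the details as an assumption):

var Datastore = require('nedb');

// Persistent datastore: every insert is written through to the file.
var persistentDb = new Datastore({ filename: 'bench.db' });

// In-memory only datastore: nothing is written to disk.
var memoryDb = new Datastore({ inMemoryOnly: true });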

Basic sorting of results

Not sorely needed as JS sorting would be very fast on the expected datasets anyway but it's a plus.
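
For the time being, sorting in the find callback is straightforward; a minimal sketch ('score' is just an example field):

// Minimal sketch: sort the returned docs in the callback, highest first.
db.find({}, function (err, docs) {
  if (err) { return console.error(err); }
  docs.sort(function (a, b) { return b.score - a.score; });
  console.log(docs);
});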

Database files can be saved but won't load.

Hey, I recently switched to NeDB and am getting some sort of bug where the database file isn't getting loaded for some reason. When I do an insert, it updates the file just fine, but whenever I try to retrieve anything from the database it's always empty.
I'm currently on Windows 8 x64 with Node v0.10.6

Data properties with null value produce TypeError

If the to-be-inserted data object has a property with null value, like:

var data = {
    prop: null
};

It will produce:

[TypeError: Object.keys called on non-object]

You are probably missing a condition to check for null, thus trying to iterate over it, because in our lovely JavaScript:

typeof null === 'object' // true
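
A guard along these lines would avoid the crash (a sketch only; the real fix would belong in the library's object-checking code):

// Sketch of a null-safe guard before iterating over an object's keys.
// typeof null === 'object', so the explicit null check is required.
function safeKeys(obj) {
  if (obj === null || typeof obj !== 'object') { return []; }
  return Object.keys(obj);
}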

Implement pipelining

No need to wait for the fs to persist data to disk (gain of 0.1-0.15ms per write, corresponding to a 2x speedup).

No risk of overflow if the number of writes is less than 10,000 per second
(may need to implement a buffering system)
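
One possible shape for such a buffering system (purely illustrative, not how NeDB implements anything):

// Illustrative write buffer: queue serialized documents in memory and
// flush them to disk on an interval instead of waiting on every write.
var fs = require('fs');

function WriteBuffer(filename, flushIntervalMs) {
  this.filename = filename;
  this.pending = [];
  setInterval(this.flush.bind(this), flushIntervalMs);
}

WriteBuffer.prototype.push = function (line) {
  this.pending.push(line);
};

WriteBuffer.prototype.flush = function () {
  if (this.pending.length === 0) { return; }
  var data = this.pending.join('\n') + '\n';
  this.pending = [];
  fs.appendFile(this.filename, data, function (err) {
    if (err) { console.error('flush failed', err); }
  });
};

// Usage: var buffer = new WriteBuffer('data.db', 100);
//        buffer.push(JSON.stringify({ hello: 'world' }));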

Weird behaviour - after deleting entire DB

I don't have a reproducible test case for this (yet - I'm working towards one) but I thought I'd raise it anyway as you may have some idea.

I have a problem whereby calling db.find will sometimes result in the callback simply not being called either with an error or a result.

It seems to happen most often after removing all the records from the db (e.g. db.remove({})) - but it's fine with empty dbs you've not just cleared...

I've stepped through the code, reached db.find({}, callbackhandler), and it simply doesn't call 'callbackhandler' at all - no errors in the console (I'm using node-webkit) - no clue what's up...

Reloading the page (e.g. redoing db.load) fixes it - should I just do that anyway, I wonder...

As I say though, I can't get a test case working in straight Node yet, so perhaps it's another node-webkit related issue...

It looks like we build similar database engine

Hi, glad to have found your project. It seems that in nearly the same time frame we built something very similar, but we focus on closer compatibility with the MongoDB API and its features. We already chose the wrong solution once (it was Alfred DB) and spent significant time replacing it with MongoDB. You can take a look: https://github.com/sergeyksv/tingodb
As we progressed in parallel (I hadn't seen your db before we started), it would be interesting to compare performance and maybe add cross-references in the readme.md files.

Allow name of ID field to be changed

I'm storing some CouchDB documents. Couch uses _id as its unique ID field name, so I have to rename that on my documents before I can add them. If one could supply an optional parameter specifying what the internal ID field should be named, that would be very convenient.
If I have time I'll make the change and submit a pull request.
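
Until such an option exists, a pre-insert rename works; a rough sketch (the field names are illustrative):

// Sketch: rename CouchDB's _id before handing the document to NeDB,
// keeping the original value under a different key.
function toNedbDoc(couchDoc) {
  var doc = {};
  Object.keys(couchDoc).forEach(function (key) {
    if (key === '_id') { doc.couchId = couchDoc[key]; }
    else { doc[key] = couchDoc[key]; }
  });
  return doc;
}

var couchDoc = { _id: 'abc123', title: 'hello' };
db.insert(toNedbDoc(couchDoc), function (err, newDoc) { /* ... */ });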

Build an executor to avoid concurrency issues on deletes and updates

This happens very rarely (didn't happen for the 2 months I used NeDB with Braindead CI) and I needed to specifically craft a test to get this behaviour, but this needs to be fixed.

Asynchronously removing multiple documents results in only one actually being removed, since the last call to remove() is the one which overwrites the file.

Solution: build an executor to manage file rewrites.
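
The executor could be as simple as a serialized task queue; an illustrative sketch (not a description of NeDB's actual implementation):

// Illustrative executor: tasks (e.g. file rewrites) are run strictly one
// after another, so concurrent removes/updates cannot overwrite each
// other's persisted state.
function Executor() {
  this.queue = [];
  this.running = false;
}

Executor.prototype.push = function (task) {
  this.queue.push(task);
  this.processNext();
};

Executor.prototype.processNext = function () {
  if (this.running || this.queue.length === 0) { return; }
  var self = this;
  this.running = true;
  var task = this.queue.shift();
  task(function () {          // each task calls done() when its rewrite has finished
    self.running = false;
    self.processNext();
  });
};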

blob

Hi, great work!
Is it on your roadmap to support blob file storage? Also, I think Chrome does not yet support blob writing in IndexedDB; maybe a workaround could be base64 (like PouchDB) or using the filesystem API via this polyfill: https://github.com/ebidel/idb.filesystem.js
It would certainly be helpful to have a 'full feature' JavaScript solution for something like node-webkit.
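
As a stopgap on Node/node-webkit, binary data could be stored base64-encoded inside a document; a rough sketch (file name and fields are just examples):

var fs = require('fs');

// Rough sketch: store a small binary file as a base64 string in a document.
// Fine for small blobs; large files would bloat the datafile and memory.
fs.readFile('avatar.png', function (err, buffer) {
  if (err) { return console.error(err); }
  db.insert({ name: 'avatar.png', data: buffer.toString('base64') }, function (err, doc) {
    if (err) { return console.error(err); }
    var restored = new Buffer(doc.data, 'base64');  // decode when reading back
    console.log('stored %d bytes', restored.length);
  });
});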

Extending Arrays - by storing Objects in the arrays...

I've created this as a new issue because I suspect it's just going further than you perhaps plan to with NeDB - AND it's something I don't fully understand in MongoDB - but I thought I'd ask

Have you given any thought to handling arrays which contain objects - rather than just values?

I know MongoDB supports this to some degree through array queries and $elemMatch etc., but it's hideously complex. Nevertheless, I can already think of a few uses for doing just that!

Example: in SQL you'd have had a child table containing - say events and their dates - and you'd want to show items where they had an event within a period of time - that's the sort of situation which would need something like this

{"Name": "testitem","events": [{"eventname": "testevent", "date": "1/1/01"},{"eventname": "testevent2", "date": "1/2/01"}]}

Of course you'd need query tools to work with the objects in the array (as the current tools only match values!?)

I'm really just thinking aloud I guess - wondering if this is something you've considered or plan to support or if it's just heading into the realms of 'arcane'??
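
For what it's worth, filtering such nested arrays is already possible in the find callback, even without dedicated query operators; a sketch using the example document above (the date range is arbitrary):

// Sketch: find items that have an event within a date range by filtering
// the embedded 'events' array in plain JavaScript after the query.
db.find({}, function (err, docs) {
  if (err) { return console.error(err); }
  var from = new Date('2001-01-01'), to = new Date('2001-02-28');
  var matching = docs.filter(function (doc) {
    return (doc.events || []).some(function (ev) {
      var d = new Date(ev.date);
      return d >= from && d <= to;
    });
  });
  console.log(matching);
});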

Q) Record order from find() and support for sorting?

I have an app where I expect to add records with monotonically increasing dates. I assumed that a call to find({}) would give me back the records in their time order, but it does not appear to behave that way (perhaps because of the btree indexing).

  1. What, if anything, can be said about the order of records returned by find()?

  2. Is there any support for sorting the results before they are returned?

  3. Would it help if the sort field was also an index?

Thanks!
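
In the meantime, and to the first two questions: a sketch that does not rely on any returned order and instead sorts on the date field in the callback (assuming 'date' is a Date or a parsable string):

// Sketch: don't rely on find() returning records in insertion order;
// sort explicitly on the date field after the query instead.
db.find({}, function (err, docs) {
  if (err) { return console.error(err); }
  docs.sort(function (a, b) {
    return new Date(a.date) - new Date(b.date);  // oldest first
  });
  console.log(docs);
});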

Make an in-browser version

After seeing how much faster NeDB is than TaffyDB, I'm considering porting NeDB to the browser, with persistence support using LocalStorage. Please comment if you think this would be useful.

Datasets up to 1GB?

Hello Louis.
NeDB looks really awesome at first glance. A little brother of MongoDB is just what I was looking for to use with node-webkit.
In my application a single database could easily grow to 1GB. Will NeDB be capable of handling such large files in the near future? Keeping the whole database in memory is of course not an option :)

regards
Jakub

db.update callback optional? I'm getting an error if I skip it...

I may be missing something, but the docs say passing a callback to db.update is optional; however, if I don't pass that 4th param I get this:

TypeError: Object #<Object> has no method 'push'
at E:\Dropbox~work\gaml\nw\node_modules\nedb\lib\executor.js:27:22
at Object.q.process [as _onImmediate]
at processImmediate [as _immediateCallback]

Any ideas?

No data after restart

Hello,
I'm building a node-webkit application; the database file is created and the data is written, but when restarting the node-webkit application I cannot read the data back from the database file.

// Connection and opening the database:
var Datastore = require('nedb'),
db = new Datastore({ filename: 'test.db', autoload: true, nodeWebkitAppName: 'testApp' });

// When you FIRST start the program, write this in the database
db.insert({"test":"value"}, function() {
    db.find({}, function(err, response) {
        console.log(response);  // Successfully returns an object
    });  
}

// --------------------- //

// In the SECOND run of the application, requesting data from a database
db.find({}, function(err, response) {
    console.log(response);  // The answer is []
}); 

What am I doing wrong?

Indexes - when and how to create them

Less of an issue more of a need for clarification really...

You cannot index an 'empty' collection - so I assume you need to check after every insert that it's not the first insert and create an index!? Seems a bit messy??

Not sure of the protocol for this with MongoDB or other NoSQL tools either, so I'm looking for a bit of guidance really - I assume indexes are in-memory and only exist whilst your table is open too?
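
Assuming the index API follows the MongoDB-style ensureIndex signature (an assumption on my part), the natural shape would be to create the index once right after load rather than checking on every insert:

// Assumed ensureIndex signature; create the index once after the
// datastore is loaded instead of on every insert.
db.loadDatabase(function (err) {
  if (err) { return console.error(err); }
  db.ensureIndex({ fieldName: 'somefield' }, function (err) {
    if (err) { console.error('could not create index', err); }
  });
});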

Script to test the database a lot of times

Some parts of the code are randomized (binary search tree removal order, _id generation), so some tests may fail only 50% of the time. Thorough testing should run the whole test suite several times in a row to make sure the results are always correct.
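
Such a script could be as simple as spawning the test runner in a loop; an illustrative sketch (it assumes `npm test` runs the suite):

// Illustrative: run the whole test suite N times in a row and stop at
// the first failure, to flush out randomness-dependent bugs.
var exec = require('child_process').exec;

function runSuite(remaining) {
  if (remaining === 0) { return console.log('All runs passed'); }
  exec('npm test', function (err, stdout, stderr) {
    if (err) {
      console.error('Run failed:\n' + stdout + stderr);
      process.exit(1);
    }
    console.log('Run passed, %d remaining', remaining - 1);
    runSuite(remaining - 1);
  });
}

runSuite(10);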

The way the db files are written could be an issue...

I notice that you write changes to documents into a 'new' document object within the db files and that you only remove redundant/older versions when the database is reloaded!?

I have an application which posts frequent updates to fairly large documents - it's a desktop tool which people will have open for hours-at-a-time, which means the db file could grow in size quite considerably??

Is there a means/mechanism whereby I can force the database to consolidate itself to avoid it swallowing lots of HDD space - and for that matter, memory!?

Should I be closing/reopening the DB for each access do you think??
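
Given the observation above that older versions are only dropped when the database is reloaded, one stopgap is to trigger a reload periodically rather than closing and reopening on every access; a sketch (the interval is arbitrary):

// Sketch: since stale document versions are only pruned on load, reloading
// the datastore periodically keeps the on-disk file from growing unbounded.
// Trade-off: the reload re-reads the whole file each time.
setInterval(function () {
  db.loadDatabase(function (err) {
    if (err) { console.error('compaction reload failed', err); }
  });
}, 30 * 60 * 1000);  // e.g. every 30 minutes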

err object is not 'null' it is 'undefined'

The doco says that for the callbacks, if everything went OK then the err object will be null. However it is not 'null', it is 'undefined'. I have confirmed this for ensureIndex() but I think it is the case for most if not all calls.

The behaviour of the 'err' object should be as consistent as possible and the doco should be as clear and accurate as possible. So I recommend choosing either 'null' or 'undefined', ensuring that the 'err' object always has that value in the 'success' case for all API calls, and making sure the doco states the correct value.
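
In the meantime, a plain falsy check sidesteps the inconsistency on the caller's side ('somefield' is just an example):

// Both null and undefined are falsy, so this check behaves the same
// regardless of which value the library passes on success.
db.ensureIndex({ fieldName: 'somefield' }, function (err) {
  if (err) { return console.error(err); }
  // success path
});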

Thanks for this useful library.

More robust executor

Test that it cannot get stuck.

Implement a timeout so that if for some reason it does get stuck, the rest of the operation queue will still be executed.
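
One way to sketch that timeout (illustrative only; the names are made up):

// Illustrative: wrap each queued task so that if it never calls done(),
// a timeout fires and the queue moves on to the next operation anyway.
function withTimeout(task, ms) {
  return function (done) {
    var finished = false;
    var timer = setTimeout(function () {
      if (!finished) { finished = true; done(new Error('task timed out')); }
    }, ms);
    task(function (err) {
      if (finished) { return; }        // already timed out
      finished = true;
      clearTimeout(timer);
      done(err);
    });
  };
}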

How are asynchronous DB tasks carried out?

I'm wondering whether async tasks like db.update are carried out in parallel or via a queue/stack.

If I were to run a large number of db.update calls (e.g. from a loop), would this result in multiple threads all trying to update at the same time, or is the 'async' aspect simply a single thread which processes things as it gets them?

Deferring operations till loadDatabase() completed?

Hello again.

If I understand correctly, nedb doesn't wait for loadDatabase() to finish before executing find() requests, and will return an empty list in this situation. It would be much more convenient if the database itself (so I don't have to :) ) checked whether data loading is in progress and deferred all initiated operations until loading completes.

Jakub
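
Until the datastore defers operations itself, the caller can do the waiting; a minimal sketch:

// Minimal sketch: only issue queries once loadDatabase has finished,
// so find() never runs against a not-yet-loaded (empty) datastore.
var Datastore = require('nedb');
var db = new Datastore({ filename: 'data.db' });

db.loadDatabase(function (err) {
  if (err) { return console.error(err); }
  db.find({}, function (err, docs) {
    console.log(docs);  // now reflects the data on disk
  });
});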

Upsert error, Object.keys called on non-object

Basically, I'm trying to upsert into an empty collection.
Here is my code:

db.metadata.update(
    { category: item.category },
    item,
    { upsert: true },
    function (err, doc) {
        callback(null);
    });

Just before this I check that item is not null. Still I get this error:

Error: Object.keys called on non-object
at Function.keys (native)
at checkObject (C:\node\webservice_client\node_modules\nedb\lib\model.js:48:

So far I have modified that function in model.js to also check that obj is not null. It seems to be working for now.

  if (typeof obj === 'object' && !_.isNull(obj) ) {

Implement indexes?

Doesn't seem necessary considering current speed without indexes but should be interesting to do.
