louischatriot / nedb
The JavaScript Database, for Node.js, nw.js, electron and the browser
License: MIT License
After the loadDatabase call, NeDB creates and leaves a -p directory in the root of the application. Running on Win7, Node v0.10.10.
A quick look-up revealed that customUtils.js:16 is doing childProcess.exec mkdir with arguments that seem to bug out. Why not replace it with fs.mkdir()?
How do I find documents where a property isn't defined, e.g. db.find({"key": {$exists: false}}, ...)?
Or is there another way of doing that?
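If the installed NeDB version supports the Mongo-style $exists operator, the query above should work as written. The semantics can be sketched in plain JavaScript (matchesExists is a hypothetical helper, for illustration only):

```javascript
// Sketch of Mongo-style $exists semantics: a document matches
// { key: { $exists: false } } when the property is absent.
function matchesExists(doc, field, shouldExist) {
  return Object.prototype.hasOwnProperty.call(doc, field) === shouldExist;
}

const docs = [{ key: 1, name: 'a' }, { name: 'b' }];
const missingKey = docs.filter(d => matchesExists(d, 'key', false));
console.log(missingKey); // [ { name: 'b' } ]
```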
The NeDB API is similar to MongoDB's; does that mean it can support mongoose?
A forked child process is accessing the DB file with _this.db = new Datastore({ filename: conf.mail.db }); the same is done in the parent. While the child reports that a new entry was inserted and the doc count increased, the parent accessing the same DB reports no change in the entry count.
I've run the NeDB benchmark and was disappointed by the results. When in-memory only, everything works perfectly, but writing to file slows everything down to the disk's maximum I/O speed.
Is this expected behaviour or a problem on my machine? I thought the reason a copy of the whole database is kept in memory was to use it as a cache, so that disk I/O would not be a bottleneck. Has something changed in the meantime?
Thanks for the clarification.
Benchmark result:

```
node insert.js
----------------------------
Test with 10000 documents
Don't use an index
Don't use pipelining
Use a persistent datastore
----------------------------
INSERT BENCH - Begin profiling
INSERT BENCH - Begin inserting 10000 docs - 0ms (total: 0ms)
===== RESULT (insert) ===== 217 ops/s
INSERT BENCH - Finished inserting 10000 docs - 45.9s (total: 45.9s)
INSERT BENCH - Benchmark finished - 0ms (total: 45.9s)

node insert.js -m
----------------------------
Test with 10000 documents
Don't use an index
Don't use pipelining
Use an in-memory datastore
----------------------------
INSERT BENCH - Begin profiling
INSERT BENCH - Begin inserting 10000 docs - 0ms (total: 0ms)
===== RESULT (insert) ===== 11876 ops/s
INSERT BENCH - Finished inserting 10000 docs - 842ms (total: 842ms)
INSERT BENCH - Benchmark finished - 0ms (total: 842ms)
```
Not sorely needed, as JS sorting would be very fast on the expected datasets anyway, but it's a plus.
Hey, I recently switched to NeDB and am getting some sort of bug where the database file isn't getting loaded for some reason. When I do an insert, it updates the file just fine, but whenever I try to retrieve anything from the database it's always empty.
I'm currently on Windows 8 x64 with Node v0.10.6
If the to-be-inserted data object has a property with a null value, like:

```javascript
var data = {
  prop: null
};
```

it will produce:

```
[TypeError: Object.keys called on non-object]
```
You are probably missing a condition checking for null, and thus trying to iterate over it, because in our lovely JavaScript:

```javascript
typeof null === 'object' // true
```
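The pitfall and its fix can be sketched in a few lines (the helper names are illustrative, not NeDB's actual internals):

```javascript
// typeof null === 'object' in JavaScript, so any code that iterates
// Object.keys(x) must explicitly exclude null first.
function isRealObject(x) {
  return typeof x === 'object' && x !== null;
}

function safeKeys(x) {
  return isRealObject(x) ? Object.keys(x) : [];
}

console.log(safeKeys({ prop: null })); // [ 'prop' ] - a null VALUE is fine
console.log(safeKeys(null));           // [] - a null OBJECT must be guarded
```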
Using binary search trees instead of hash tables was exactly to have that capability.
- No need to wait for the fs to persist data to disk (gain of 0.1-0.15ms per write, corresponding to a 2x speedup)
- No risk of overflow if the number of writes is less than 10,000 per second (may need to implement a buffering system)
I don't have a reproducible test case for this (yet - I'm working towards one) but I thought I'd raise it anyway as you may have some idea.
I have a problem whereby calling db.find will sometimes result in the callback simply not being called, either with an error or a result.
It seems to happen most often after removing all the records from the db (e.g. db.remove({})), but it's fine with empty dbs you've not just cleared...
I've stepped through the code, reached the db.find({}, callbackhandler) call, and it simply doesn't call 'callbackhandler' at all - no errors in the console (I'm using node-webkit) - no clue what's up...
Reloading the page (e.g. redoing db.load) fixes it - should I just do that anyway, I wonder...
As I say though, I can't get a test case working in straight Node yet, so perhaps it's another node-webkit related issue...
Hi, glad to have found your project. It seems that in nearly the same time frame we did something very similar, but we focused on closer compatibility with the MongoDB API and its features. Once before we chose the wrong solution (it was Alfred DB) and spent significant time replacing it with MongoDB. You can take a look: https://github.com/sergeyksv/tingodb
Since we developed in parallel (I hadn't seen your db before we started), it would be interesting to compare performance and maybe add cross references in the readme.md files.
I'm storing some CouchDB documents. Couch uses _id as its unique ID name, so I have to rename that on my documents before I can add them. If one could perhaps supply an optional parameter specifying what the internal ID field should be named, that would be very convenient.
If I have time I'll make the change and submit a pull request.
If, for no other reason, for something as simple as case-insensitive matching in values?
This happens very rarely (didn't happen for the 2 months I used NeDB with Braindead CI) and I needed to specifically craft a test to get this behaviour, but this needs to be fixed.
Asynchronously removing multiple documents results in only one actually being removed, since the last call to remove() is the one that overwrites the file.
Solution: build an executor to manage file rewrites.
Hi, great work!
Is it in your pipeline to support blob file storage? Also, I think Chrome does not yet support blob writing in IndexedDB; maybe a workaround could be base64 (like PouchDB) or using the filesystem API like this polyfill: https://github.com/ebidel/idb.filesystem.js
It would certainly be helpful to have a 'full feature' JavaScript solution for something like node-webkit.
I've created this as a new issue because I suspect it's just going further than you perhaps plan to with NeDB - AND it's something I don't fully understand in MongoDB - but I thought I'd ask.
Have you given any thought to handling arrays which contain objects - rather than just values?
I know MongoDB supports this to some degree through array queries and $elemMatch etc. but it's hideously complex. Nevertheless, I can already think of a few uses for doing just that!
Example: in SQL you'd have had a child table containing - say, events and their dates - and you'd want to show items where they had an event within a period of time - that's the sort of situation which would need something like this:

```javascript
{"Name": "testitem", "events": [{"eventname": "testevent", "date": "1/1/01"}, {"eventname": "testevent2", "date": "1/2/01"}]}
```

Of course you'd need query tools to work with the objects in the array (as the current tools only match values!?)
I'm really just thinking aloud I guess - wondering if this is something you've considered or plan to support or if it's just heading into the realms of 'arcane'??
To add elements to a stored array in MongoDB I believe you use $push (as you would $set for a regular property)???
Otherwise I guess you have to retrieve the array, modify it and put it back - which is a bit messy!?
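As a sketch of the $push semantics being discussed (applyPush is a hypothetical helper written for illustration, not NeDB or MongoDB API):

```javascript
// Mongo-style $push: append a value to an array field, creating
// the array if the field is absent. Returns a new document rather
// than mutating the stored one.
function applyPush(doc, field, value) {
  const updated = Object.assign({}, doc);
  updated[field] = (updated[field] || []).concat([value]);
  return updated;
}

const item = { Name: 'testitem', events: [{ eventname: 'testevent', date: '1/1/01' }] };
const withEvent = applyPush(item, 'events', { eventname: 'testevent2', date: '1/2/01' });
console.log(withEvent.events.length); // 2
console.log(item.events.length);      // 1 - the original is untouched
```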
Not one for all databases in the same process
I have an app where I expect to add records with monotonically increasing dates. I assumed that a call to find({}) would give me back the records in their time order, but it does not appear to behave that way (perhaps because of the btree indexing).
What, if anything, can be said about the order of records returned by find()?
Is there any support for sorting the results before they are returned?
Would it help if the sort field was also an index?
Thanks!
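Until (or unless) the library sorts natively, a Mongo-style sort spec can be applied client-side after retrieval; a minimal sketch, where sortDocs is a hypothetical helper handling a single-field spec:

```javascript
// Apply a Mongo-style sort spec: { date: 1 } ascending, { date: -1 } descending.
// find() results should otherwise be treated as unordered.
function sortDocs(docs, spec) {
  const field = Object.keys(spec)[0];
  const dir = spec[field];
  return docs.slice().sort(function (a, b) {
    if (a[field] < b[field]) return -1 * dir;
    if (a[field] > b[field]) return 1 * dir;
    return 0;
  });
}

const records = [{ date: 3 }, { date: 1 }, { date: 2 }];
console.log(sortDocs(records, { date: 1 }));  // [ { date: 1 }, { date: 2 }, { date: 3 } ]
console.log(sortDocs(records, { date: -1 })); // [ { date: 3 }, { date: 2 }, { date: 1 } ]
```

An index on the sort field would not change the returned order here, since the sort happens on the result array; it only speeds up the match phase.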
After seeing how much faster NeDB is than TaffyDB, I'm considering porting NeDB to the browser, with persistence support using LocalStorage. Please comment if you think this would be useful.
```javascript
new Datastore();
```

creates a file named 'undefined' and stores data there.

```javascript
new Datastore('');
```

creates a directory '-p' ?!

In both cases an error should be raised.
Or even better...
(feature request begin) an in-memory-only instance is constructed :) (feature request end)
Hello Louis.
NeDB looks really awesome at first glance. A little brother of MongoDB is just what I was looking for to use with node-webkit.
In my application a single database could easily grow to 1GB. Will NeDB be capable of handling such large files in the near future? Keeping the whole database in memory is of course not an option :)
regards
Jakub
I may be missing something, but the docs say passing a callback to db.update is optional; however, if I don't pass that 4th param I get this:

```
TypeError: Object #<Object> has no method 'push'
    at E:\Dropbox~work\gaml\nw\node_modules\nedb\lib\executor.js:27:22
    at Object.q.process [as _onImmediate]
    at processImmediate [as _immediateCallback]
```

Any ideas?
Hello,
I made a node-webkit application; the database file is created and the data is written, but when restarting the node-webkit application I cannot read data from the database file.

```javascript
// Connection and opening the database:
var Datastore = require('nedb'),
    db = new Datastore({ filename: 'test.db', autoload: true, nodeWebkitAppName: 'testApp' });

// When you FIRST start the program, write this to the database
db.insert({"test": "value"}, function() {
  db.find({}, function(err, response) {
    console.log(response); // Successfully returns an object
  });
});

// --------------------- //
// On the SECOND run of the application, requesting data from the database
db.find({}, function(err, response) {
  console.log(response); // The answer is []
});
```

What am I doing wrong?
Less of an issue, more of a need for clarification really...
You cannot index an 'empty' collection - so I assume you need to check after every insert whether it's the first insert and create an index!? Seems a bit messy??
Not sure of the protocol for this with MongoDB or other NoSQL tools either, so I'm looking for a bit of guidance really - I assume indexes are in-memory and only exist whilst your table is open too?
Some parts of the code are randomized (binary search tree removal order, _id generation), so some tests may fail only 50% of the time. Thorough testing of the code should run the whole test suite several times in a row to make sure the results are always correct.
I notice that you write changes to documents into a 'new' document object within the db files and that you only remove redundant/older versions when the database is reloaded!?
I have an application which posts frequent updates to fairly large documents - it's a desktop tool which people will have open for hours-at-a-time, which means the db file could grow in size quite considerably??
Is there a means/mechanism whereby I can force the database to consolidate itself to avoid it swallowing lots of HDD space - and for that matter, memory!?
Should I be closing/reopening the DB for each access do you think??
So that you can use nedb until you outgrow it
If you use the automatically generated _ids, there shouldn't be any problem anyway so not urgent
The doco says that for the callbacks, if everything went OK then the err object will be null. However it is not 'null', it is 'undefined'. I have confirmed this for ensureIndex() but I think it is the case in most if not all calls.
The behaviour of the 'err' object should be as consistent as possible and the doco should be as clear and accurate as possible. So I recommend choosing either 'null' or 'undefined' and ensuring that the 'err' object is always the same value in the 'success' case for all API calls and that the doco states the correct value.
Thanks for this useful library.
Test that it cannot get stuck.
Implement a timeout so that if for one reason it does get stuck the rest of the operation queue will still be executed.
http://visionmedia.github.io/mocha/#mocha.opts
Use that instead of passing args in the makefile, dump the makefile and tell the user to just run mocha, or set the test script in the package.json scripts entry as your test runner (and use npm test). :)
This could lead to conflicts with the $$date type and upcoming $inc and $set operators.
As with Mongo, if a field is an array, it matches if any of its elements matches
May need to make it optional
I'm wondering if async tasks like db.update are carried out in parallel or via a queue/stack?
If I were to simply run a large number of db.updates (e.g. from a loop), would this result in multiple threads all trying to update at the same time - or is the 'async' aspect simply a single thread which processes things as it gets them?
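The single-threaded queue model can be sketched as follows; makeExecutor is an illustrative stand-in for a serial executor, not NeDB's actual internals:

```javascript
// Node runs JavaScript on one thread, so "parallel" updates never race at
// the JS level; a serial executor additionally guarantees that queued
// operations run strictly one after another, each waiting for the previous.
function makeExecutor() {
  const queue = [];
  let running = false;

  function next() {
    if (running || queue.length === 0) return;
    running = true;
    const task = queue.shift();
    task(function done() { running = false; next(); }); // task signals completion
  }

  return { push(task) { queue.push(task); next(); } };
}

// Demo: the second task has a shorter timer but still runs second.
const order = [];
const exec = makeExecutor();
exec.push(done => setTimeout(() => { order.push(1); done(); }, 20));
exec.push(done => setTimeout(() => { order.push(2); done(); }, 5));
setTimeout(() => console.log(order), 100); // [ 1, 2 ]
```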
Hello again.
If I understand correctly, NeDB doesn't wait for loadDatabase() to finish before executing find() requests and will return an empty list in this situation. It would be much more convenient if the database itself (so I don't have to :) ) checked whether data loading is in progress and deferred all initiated database operations until loading completes.
Jakub
Basically, I'm trying to upsert into an empty collection.
Here is my code:

```javascript
db.metadata.update(
  { category: item.category },
  item,
  { upsert: true },
  function(err, doc) {
    callback(null);
  });
```
Just before, I check that item is not null. Still I get this error:

```
Error: Object.keys called on non-object
    at Function.keys (native)
    at checkObject (C:\node\webservice_client\node_modules\nedb\lib\model.js:48:
```

So far I have modified that function in model.js to check that obj is not null. Seems to be working for now:

```javascript
if (typeof obj === 'object' && !_.isNull(obj)) {
```
Doesn't seem necessary considering current speed without indexes but should be interesting to do.