
kfs's Introduction

Notice: Development on this repo is deprecated as we continue our v3 rearchitecture. Please see https://github.com/storj/storj for ongoing v3 development.

KFS (Kademlia File Store)


KFS describes a method for managing the storage layer of nodes on the Storj Network: it creates a sharded local database in which content-addressable data is placed into shards using the same routing metric and algorithm as the Kademlia distributed hash table.
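
The routing idea can be illustrated with a short sketch. This is not KFS's actual implementation, just the shape of the technique the paragraph above describes: the shard for a key is chosen by its XOR distance from the store's reference ID, the same metric Kademlia uses.

```javascript
// Illustrative sketch, not the actual KFS implementation: place a
// content-addressable key into one of 256 shards ("s-buckets") by the
// first byte of its XOR distance from the store's reference ID -- the
// same distance metric Kademlia uses for routing.
function bucketIndex(keyHex, referenceIdHex) {
  const key = Buffer.from(keyHex, 'hex');
  const ref = Buffer.from(referenceIdHex, 'hex');
  return key[0] ^ ref[0]; // 0..255, matching s-bucket directories like 111.s
}
```

Keys near the reference ID land in low-numbered buckets, which is what gives the store its Kademlia-like locality.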

Be sure to read about the motivation and how it works!

Quick Start

Install the kfs package using Node Package Manager.

npm install kfs --save

This will install kfs as a dependency of your own project. See the documentation for in-depth usage details. You can also install globally to use the kfs command line utility.

const kfs = require('kfs');
const store = kfs('path/to/store');

store.writeFile('some key', Buffer.from('some data'), (err) => {
  console.log(err || 'File written to store!');
});

License

KFS - A Local File Storage System Inspired by Kademlia
Copyright (C) 2016 Storj Labs, Inc

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see [http://www.gnu.org/licenses/].

kfs's People

Contributors

bookchin, braydonf, jtolio, littleskunk, ralphtheninja, super3, vasilenkoalexey


kfs's Issues

64 KiB file chunks?

That is only 8 modern disk sectors... that seems really small relative to the 2MB shards we use today.

Open/Close State Management Bug?

While running the performance benchmarks, after some time an exception was thrown by an attempt to open or close an s-bucket that was already open or closed.

Output from running the benchmarks:

bookchin@librem ~/Code/kfs
> $ npm run benchmark 100 performance-results.json

> [email protected] benchmark /home/bookchin/Code/kfs
> node perf/index.js exec "100" "performance-results.json"

Generating some random files, hold on...
Preparing 8388608 byte file...
Preparing 16777216 byte file...
Preparing 33554432 byte file...
Preparing 67108864 byte file...
Preparing 134217728 byte file...
Preparing 268435456 byte file...
Preparing 536870912 byte file...
Test files prepared, writing to KFS...
Starting read tests from previously written data...
Unlinking (deleting) data written to database...
[... the 11-line cycle above repeats for each subsequent benchmark run, until the final run fails during its read tests ...]
Error running benchmarks: [Error: IO error: lock /tmp/KFS_PERF_SANDBOX/1473283410015.kfs/111.s/LOCK: already held by process]

State of the B-table on disk:

bookchin@librem /tmp                                       [17:43:03] 
> $ du -h KFS_PERF_SANDBOX                                   ⬡ 4.5.0 
161M    KFS_PERF_SANDBOX/1473283410015.kfs/105.s
121M    KFS_PERF_SANDBOX/1473283410015.kfs/018.s
258M    KFS_PERF_SANDBOX/1473283410015.kfs/073.s
796M    KFS_PERF_SANDBOX/1473283410015.kfs/231.s
547M    KFS_PERF_SANDBOX/1473283410015.kfs/121.s
129M    KFS_PERF_SANDBOX/1473283410015.kfs/042.s
17M KFS_PERF_SANDBOX/1473283410015.kfs/194.s
258M    KFS_PERF_SANDBOX/1473283410015.kfs/153.s
543M    KFS_PERF_SANDBOX/1473283410015.kfs/111.s
49M KFS_PERF_SANDBOX/1473283410015.kfs/158.s
523M    KFS_PERF_SANDBOX/1473283410015.kfs/248.s
129M    KFS_PERF_SANDBOX/1473283410015.kfs/053.s
33M KFS_PERF_SANDBOX/1473283410015.kfs/002.s
258M    KFS_PERF_SANDBOX/1473283410015.kfs/012.s
65M KFS_PERF_SANDBOX/1473283410015.kfs/237.s
298M    KFS_PERF_SANDBOX/1473283410015.kfs/122.s
298M    KFS_PERF_SANDBOX/1473283410015.kfs/136.s
290M    KFS_PERF_SANDBOX/1473283410015.kfs/135.s
65M KFS_PERF_SANDBOX/1473283410015.kfs/238.s
129M    KFS_PERF_SANDBOX/1473283410015.kfs/183.s
636M    KFS_PERF_SANDBOX/1473283410015.kfs/197.s
33M KFS_PERF_SANDBOX/1473283410015.kfs/100.s
33M KFS_PERF_SANDBOX/1473283410015.kfs/016.s
258M    KFS_PERF_SANDBOX/1473283410015.kfs/185.s
65M KFS_PERF_SANDBOX/1473283410015.kfs/047.s
137M    KFS_PERF_SANDBOX/1473283410015.kfs/138.s
129M    KFS_PERF_SANDBOX/1473283410015.kfs/095.s
266M    KFS_PERF_SANDBOX/1473283410015.kfs/010.s
137M    KFS_PERF_SANDBOX/1473283410015.kfs/233.s
17M KFS_PERF_SANDBOX/1473283410015.kfs/252.s
129M    KFS_PERF_SANDBOX/1473283410015.kfs/152.s
571M    KFS_PERF_SANDBOX/1473283410015.kfs/199.s
258M    KFS_PERF_SANDBOX/1473283410015.kfs/057.s
258M    KFS_PERF_SANDBOX/1473283410015.kfs/228.s
17M KFS_PERF_SANDBOX/1473283410015.kfs/101.s
33M KFS_PERF_SANDBOX/1473283410015.kfs/171.s
780M    KFS_PERF_SANDBOX/1473283410015.kfs/212.s
153M    KFS_PERF_SANDBOX/1473283410015.kfs/159.s
33M KFS_PERF_SANDBOX/1473283410015.kfs/244.s
258M    KFS_PERF_SANDBOX/1473283410015.kfs/089.s
33M KFS_PERF_SANDBOX/1473283410015.kfs/027.s
45M KFS_PERF_SANDBOX/1473283410015.kfs/008.s
515M    KFS_PERF_SANDBOX/1473283410015.kfs/145.s
17M KFS_PERF_SANDBOX/1473283410015.kfs/028.s
258M    KFS_PERF_SANDBOX/1473283410015.kfs/251.s
378M    KFS_PERF_SANDBOX/1473283410015.kfs/206.s
57M KFS_PERF_SANDBOX/1473283410015.kfs/094.s
515M    KFS_PERF_SANDBOX/1473283410015.kfs/161.s
8.1M    KFS_PERF_SANDBOX/1473283410015.kfs/140.s
278M    KFS_PERF_SANDBOX/1473283410015.kfs/207.s
65M KFS_PERF_SANDBOX/1473283410015.kfs/222.s
185M    KFS_PERF_SANDBOX/1473283410015.kfs/205.s
33M KFS_PERF_SANDBOX/1473283410015.kfs/077.s
129M    KFS_PERF_SANDBOX/1473283410015.kfs/040.s
266M    KFS_PERF_SANDBOX/1473283410015.kfs/224.s
539M    KFS_PERF_SANDBOX/1473283410015.kfs/079.s
33M KFS_PERF_SANDBOX/1473283410015.kfs/201.s
73M KFS_PERF_SANDBOX/1473283410015.kfs/098.s
129M    KFS_PERF_SANDBOX/1473283410015.kfs/182.s
145M    KFS_PERF_SANDBOX/1473283410015.kfs/000.s
832M    KFS_PERF_SANDBOX/1473283410015.kfs/225.s
539M    KFS_PERF_SANDBOX/1473283410015.kfs/063.s
17M KFS_PERF_SANDBOX/1473283410015.kfs/085.s
515M    KFS_PERF_SANDBOX/1473283410015.kfs/030.s
274M    KFS_PERF_SANDBOX/1473283410015.kfs/123.s
258M    KFS_PERF_SANDBOX/1473283410015.kfs/071.s
314M    KFS_PERF_SANDBOX/1473283410015.kfs/109.s
1.1G    KFS_PERF_SANDBOX/1473283410015.kfs/088.s
193M    KFS_PERF_SANDBOX/1473283410015.kfs/234.s
828M    KFS_PERF_SANDBOX/1473283410015.kfs/178.s
129M    KFS_PERF_SANDBOX/1473283410015.kfs/025.s
129M    KFS_PERF_SANDBOX/1473283410015.kfs/019.s
33M KFS_PERF_SANDBOX/1473283410015.kfs/144.s
515M    KFS_PERF_SANDBOX/1473283410015.kfs/059.s
81M KFS_PERF_SANDBOX/1473283410015.kfs/213.s
643M    KFS_PERF_SANDBOX/1473283410015.kfs/196.s
258M    KFS_PERF_SANDBOX/1473283410015.kfs/093.s
33M KFS_PERF_SANDBOX/1473283410015.kfs/129.s
33M KFS_PERF_SANDBOX/1473283410015.kfs/014.s
523M    KFS_PERF_SANDBOX/1473283410015.kfs/187.s
129M    KFS_PERF_SANDBOX/1473283410015.kfs/150.s
33M KFS_PERF_SANDBOX/1473283410015.kfs/060.s
21M KFS_PERF_SANDBOX/1473283410015.kfs/072.s
274M    KFS_PERF_SANDBOX/1473283410015.kfs/160.s
836M    KFS_PERF_SANDBOX/1473283410015.kfs/108.s
8.1M    KFS_PERF_SANDBOX/1473283410015.kfs/215.s
515M    KFS_PERF_SANDBOX/1473283410015.kfs/076.s
33M KFS_PERF_SANDBOX/1473283410015.kfs/013.s
515M    KFS_PERF_SANDBOX/1473283410015.kfs/156.s
8.1M    KFS_PERF_SANDBOX/1473283410015.kfs/168.s
65M KFS_PERF_SANDBOX/1473283410015.kfs/080.s
33M KFS_PERF_SANDBOX/1473283410015.kfs/240.s
258M    KFS_PERF_SANDBOX/1473283410015.kfs/115.s
129M    KFS_PERF_SANDBOX/1473283410015.kfs/245.s
65M KFS_PERF_SANDBOX/1473283410015.kfs/083.s
515M    KFS_PERF_SANDBOX/1473283410015.kfs/195.s
17M KFS_PERF_SANDBOX/1473283410015.kfs/033.s
193M    KFS_PERF_SANDBOX/1473283410015.kfs/062.s
515M    KFS_PERF_SANDBOX/1473283410015.kfs/054.s
129M    KFS_PERF_SANDBOX/1473283410015.kfs/096.s
515M    KFS_PERF_SANDBOX/1473283410015.kfs/005.s
81M KFS_PERF_SANDBOX/1473283410015.kfs/214.s
105M    KFS_PERF_SANDBOX/1473283410015.kfs/082.s
97M KFS_PERF_SANDBOX/1473283410015.kfs/189.s
258M    KFS_PERF_SANDBOX/1473283410015.kfs/081.s
511M    KFS_PERF_SANDBOX/1473283410015.kfs/191.s
65M KFS_PERF_SANDBOX/1473283410015.kfs/038.s
33M KFS_PERF_SANDBOX/1473283410015.kfs/181.s
65M KFS_PERF_SANDBOX/1473283410015.kfs/216.s
8.1M    KFS_PERF_SANDBOX/1473283410015.kfs/102.s
394M    KFS_PERF_SANDBOX/1473283410015.kfs/249.s
8.1M    KFS_PERF_SANDBOX/1473283410015.kfs/149.s
1.4G    KFS_PERF_SANDBOX/1473283410015.kfs/034.s
515M    KFS_PERF_SANDBOX/1473283410015.kfs/001.s
17M KFS_PERF_SANDBOX/1473283410015.kfs/172.s
185M    KFS_PERF_SANDBOX/1473283410015.kfs/052.s
17M KFS_PERF_SANDBOX/1473283410015.kfs/218.s
17M KFS_PERF_SANDBOX/1473283410015.kfs/127.s
25M KFS_PERF_SANDBOX/1473283410015.kfs/050.s
33M KFS_PERF_SANDBOX/1473283410015.kfs/247.s
65M KFS_PERF_SANDBOX/1473283410015.kfs/174.s
507M    KFS_PERF_SANDBOX/1473283410015.kfs/193.s
129M    KFS_PERF_SANDBOX/1473283410015.kfs/131.s
129M    KFS_PERF_SANDBOX/1473283410015.kfs/090.s
515M    KFS_PERF_SANDBOX/1473283410015.kfs/141.s
515M    KFS_PERF_SANDBOX/1473283410015.kfs/061.s
17M KFS_PERF_SANDBOX/1473283410015.kfs/198.s
258M    KFS_PERF_SANDBOX/1473283410015.kfs/044.s
571M    KFS_PERF_SANDBOX/1473283410015.kfs/154.s
17M KFS_PERF_SANDBOX/1473283410015.kfs/157.s
636M    KFS_PERF_SANDBOX/1473283410015.kfs/031.s
33M KFS_PERF_SANDBOX/1473283410015.kfs/022.s
129M    KFS_PERF_SANDBOX/1473283410015.kfs/209.s
1.2G    KFS_PERF_SANDBOX/1473283410015.kfs/097.s
258M    KFS_PERF_SANDBOX/1473283410015.kfs/068.s
25M KFS_PERF_SANDBOX/1473283410015.kfs/226.s
145M    KFS_PERF_SANDBOX/1473283410015.kfs/103.s
65M KFS_PERF_SANDBOX/1473283410015.kfs/184.s
17M KFS_PERF_SANDBOX/1473283410015.kfs/116.s
81M KFS_PERF_SANDBOX/1473283410015.kfs/167.s
1.2G    KFS_PERF_SANDBOX/1473283410015.kfs/036.s
1.1G    KFS_PERF_SANDBOX/1473283410015.kfs/142.s
274M    KFS_PERF_SANDBOX/1473283410015.kfs/117.s
49M KFS_PERF_SANDBOX/1473283410015.kfs/192.s
73M KFS_PERF_SANDBOX/1473283410015.kfs/176.s
81M KFS_PERF_SANDBOX/1473283410015.kfs/029.s
85M KFS_PERF_SANDBOX/1473283410015.kfs/058.s
515M    KFS_PERF_SANDBOX/1473283410015.kfs/227.s
764M    KFS_PERF_SANDBOX/1473283410015.kfs/110.s
8.1M    KFS_PERF_SANDBOX/1473283410015.kfs/084.s
65M KFS_PERF_SANDBOX/1473283410015.kfs/170.s
258M    KFS_PERF_SANDBOX/1473283410015.kfs/065.s
354M    KFS_PERF_SANDBOX/1473283410015.kfs/164.s
8.1M    KFS_PERF_SANDBOX/1473283410015.kfs/015.s
394M    KFS_PERF_SANDBOX/1473283410015.kfs/180.s
33M KFS_PERF_SANDBOX/1473283410015.kfs/049.s
258M    KFS_PERF_SANDBOX/1473283410015.kfs/241.s
290M    KFS_PERF_SANDBOX/1473283410015.kfs/132.s
258M    KFS_PERF_SANDBOX/1473283410015.kfs/235.s
25M KFS_PERF_SANDBOX/1473283410015.kfs/106.s
644M    KFS_PERF_SANDBOX/1473283410015.kfs/179.s
8.1M    KFS_PERF_SANDBOX/1473283410015.kfs/162.s
8.1M    KFS_PERF_SANDBOX/1473283410015.kfs/024.s
515M    KFS_PERF_SANDBOX/1473283410015.kfs/210.s
266M    KFS_PERF_SANDBOX/1473283410015.kfs/219.s
129M    KFS_PERF_SANDBOX/1473283410015.kfs/190.s
41G KFS_PERF_SANDBOX/1473283410015.kfs
4.0K    KFS_PERF_SANDBOX/1473283410015
42G KFS_PERF_SANDBOX

RocksDB support

I propose adding support for RocksDB, a modern key-value store.
There is a Node.js binding ("A low-level Node.js RocksDB") that implements the same API as LevelDOWN, which KFS already uses, so adding support should be straightforward.
RocksDB is optimized for Flash or RAM, but the official documentation also says

RocksDB features highly flexible configuration settings that may be tuned to run on a variety of production environments, including pure memory, Flash, hard disks or HDFS. It supports various compression algorithms and good tools for production support and debugging.

It has multi-threaded compactions, making it specially suitable for storing multiple terabytes of data in a single database.

So it looks like the most relevant backend for a large shard (several terabytes). RocksDB is also more tunable and has features that LevelDB lacks.

Both RocksDB and LevelDB ship a similar benchmark tool, db_bench, so comparative performance tests could be run to estimate the gain in read/write speed.

For now, RocksDB could be an experimental option selected when a node is created; in the future it could become a full replacement for the aging LevelDB.
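
Because the RocksDB binding exposes the same abstract-leveldown interface as leveldown, the swap could look roughly like the sketch below. The `backend` option name is hypothetical; KFS has no such option today.

```javascript
// Hypothetical sketch of the proposal above: choose the storage backend
// when an s-bucket is opened. The `backend` option is illustrative,
// not a real KFS option.
function backendFor(opts) {
  // Default to leveldown, which KFS uses today.
  return opts && opts.backend === 'rocksdb' ? 'rocksdb' : 'leveldown';
}

function openBucket(path, opts) {
  // Both modules implement the abstract-leveldown interface, so levelup
  // can wrap either one without further changes.
  const down = require(backendFor(opts));
  const levelup = require('levelup');
  return levelup(down(path));
}
```

Since everything above the store speaks the levelup API, no calling code would need to change.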

Error when stopping and restarting Daemon w/ google-drive-ocamlfuse

storjshare --version

  • daemon: 2.5.0, core: 6.3.0, protocol: 1.1.0

node Storj_Farmer_Contracts.js

events.js:160
      throw er; // Unhandled 'error' event
      ^
OpenError: IO error: lock /root/Storj/contracts.db/LOCK: Resource temporarily unavailable
    at /root/bin/node_modules/levelup/lib/levelup.js:119:34
    at /root/bin/node_modules/abstract-leveldown/abstract-leveldown.js:39:16

The error occurs when stopping and restarting the daemon with google-drive-ocamlfuse. It seems the LOCK file was not removed by storjshare killall.

update docs to reflect updated constants

C is now 128 KiB, after running tests with C values of 64, 128, 256, and 512. This value provided a small but significant performance improvement over the others.

Issue with s-bucket access lock

{"level":"info","message":"received valid message from {\"userAgent\":\"3.2.0\",\"protocol\":\"0.8.1\",\"address\":\"server024.storj.dk\",\"port\":5424,\"nodeID\":\"a69d4f2e3e97e4f26ece3d437b2825c5d7fed9e6\",\"lastSeen\":1474368604202}","timestamp":"2016-09-20T10:50:04.202Z"}
{"level":"warn","message":"error in storage manager: IO error: lock /home/subwolf/.storjkfs/sharddata.kfs/027.s/LOCK: already held by process","timestamp":"2016-09-20T10:50:04.317Z"}
{"level":"warn","message":"error in storage manager: IO error: lock /home/subwolf/.storjkfs/sharddata.kfs/027.s/LOCK: already held by process","timestamp":"2016-09-20T10:50:04.317Z"}
{"level":"warn","message":"error in storage manager: IO error: lock /home/subwolf/.storjkfs/sharddata.kfs/027.s/LOCK: already held by process","timestamp":"2016-09-20T10:50:04.317Z"}
events.js:141
      throw er; // Unhandled 'error' event
      ^

Error: IO error: lock /home/subwolf/.storjkfs/sharddata.kfs/027.s/LOCK: already held by process
    at Error (native)

Disable Compression

It looks like compression might be enabled by default: https://github.com/Storj/kfs/blob/master/lib/s-bucket.js#L69

I would suggest disabling compression completely, since the vast majority of stored data is assumed to be encrypted first. Encrypted data is essentially random, so compression will, if anything, increase storage costs, wasting both CPU and disk space.

I don't have in-depth knowledge of how snappy works, but in general compression exploits patterns in common media types. Encrypted data should have no such patterns, so you end up spending extra space on compression block headers while achieving no compression.

I could be wrong, so a quick test to verify this would be worthwhile.

Used space query keeps 1028 files open

Package Versions

Output of node --version:

v6.10.0

Expected Behavior

KFS contains 257 LevelDBs, and each LevelDB needs at least 4 open files: LOG, LOCK, MANIFEST, and one *.log file. I would expect only the LevelDBs that are actually in use to be open.

Actual Behavior

Because of the used-space query, all LevelDBs (all 1028 files) are opened and never closed.

root@storj:~# ls -l /proc/416/fd | grep 'storjshare' | wc -l
1028

I disabled the used space query: https://github.com/littleskunk/core/blob/debug/lib/storage/adapters/embedded.js#L171-L173

root@storj:~# ls -l /proc/416/fd | grep 'storjshare' | wc -l
4
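The reported descriptor count lines up with every bucket database being held open at once:

```javascript
// 257 LevelDBs (as reported above), each holding at least 4 descriptors
// open: LOG, LOCK, MANIFEST, and one *.log file.
const databases = 257;
const filesPerDb = 4;
console.log(databases * filesPerDb); // 1028
```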

Open files never closed

Package Versions

root@storj:~# npm list -g kfs
/usr/local/lib
└─┬ [email protected]
  └─┬ [email protected]
    └── [email protected]

Output of node --version:

v6.10.0

Expected Behavior

I have 2 unused download requests and one completed download. The unused downloads are expired (30-minute TOKEN_EXPIRE). One minute later (SBUCKET_IDLE) these LevelDBs should be closed. I would expect only 4 open files (the contract DB).
Note: This is my worst-case expectation, and I could live with it. Another option is a 1-minute timeout after the data channel is authorized: close the LevelDB and open it again as soon as the download is actually used.

root@storj:~# grep 'download\|upload\|contract offer' .storjshare/storjshare/logs/188071ba7cfd974a9e47b59e24b0737ebf845db3.log
{"level":"info","message":"authorizing download data channel for 144d1265baf0908fe6c6bd272c701aac7811e3e4","timestamp":"2017-02-28T15:19:12.366Z"}
{"level":"info","message":"authorizing download data channel for 144d1265baf0908fe6c6bd272c701aac7811e3e4","timestamp":"2017-02-28T15:29:39.830Z"}
{"level":"info","message":"authorizing download data channel for 144d1265baf0908fe6c6bd272c701aac7811e3e4","timestamp":"2017-02-28T15:57:29.300Z"}
{"level":"debug","message":"Shard download completed","timestamp":"2017-02-28T15:57:35.104Z"}

Actual Behavior

All LevelDBs are still open. By comparing the timestamps you can see which LevelDB belongs to which download. (There is a 1-hour difference because of my timezone.)

root@storj:~# ls -l /proc/418/fd | grep 'storjshare'
l-wx------ 1 root root 64 Feb 28 16:59 111 -> /root/.storjshare/storjshare/shares/188071ba7cfd974a9e47b59e24b0737ebf845db3/sharddata.kfs/165.s/LOG
lrwx------ 1 root root 64 Feb 28 16:59 112 -> /root/.storjshare/storjshare/shares/188071ba7cfd974a9e47b59e24b0737ebf845db3/sharddata.kfs/165.s/LOCK
l-wx------ 1 root root 64 Feb 28 16:59 113 -> /root/.storjshare/storjshare/shares/188071ba7cfd974a9e47b59e24b0737ebf845db3/sharddata.kfs/165.s/000450.log
l-wx------ 1 root root 64 Feb 28 16:59 114 -> /root/.storjshare/storjshare/shares/188071ba7cfd974a9e47b59e24b0737ebf845db3/sharddata.kfs/165.s/MANIFEST-000449
l-wx------ 1 root root 64 Feb 28 16:10 12 -> /root/.storjshare/storjshare/shares/188071ba7cfd974a9e47b59e24b0737ebf845db3/contracts.db/LOG
lrwx------ 1 root root 64 Feb 28 16:10 13 -> /root/.storjshare/storjshare/shares/188071ba7cfd974a9e47b59e24b0737ebf845db3/contracts.db/LOCK
l-wx------ 1 root root 64 Feb 28 16:10 14 -> /root/.storjshare/storjshare/shares/188071ba7cfd974a9e47b59e24b0737ebf845db3/contracts.db/004236.log
l-wx------ 1 root root 64 Feb 28 16:10 15 -> /root/.storjshare/storjshare/shares/188071ba7cfd974a9e47b59e24b0737ebf845db3/contracts.db/MANIFEST-004235
l-wx------ 1 root root 64 Feb 28 16:12 23 -> /root/.storjshare/storjshare/shares/188071ba7cfd974a9e47b59e24b0737ebf845db3/sharddata.kfs/210.s/LOG
lrwx------ 1 root root 64 Feb 28 16:12 25 -> /root/.storjshare/storjshare/shares/188071ba7cfd974a9e47b59e24b0737ebf845db3/sharddata.kfs/210.s/LOCK
l-wx------ 1 root root 64 Feb 28 16:12 45 -> /root/.storjshare/storjshare/shares/188071ba7cfd974a9e47b59e24b0737ebf845db3/sharddata.kfs/210.s/066990.log
l-wx------ 1 root root 64 Feb 28 16:12 46 -> /root/.storjshare/storjshare/shares/188071ba7cfd974a9e47b59e24b0737ebf845db3/sharddata.kfs/210.s/MANIFEST-066989
l-wx------ 1 root root 64 Feb 28 16:13 82 -> /root/.storjshare/storjshare/shares/188071ba7cfd974a9e47b59e24b0737ebf845db3/sharddata.kfs/121.s/LOG
lrwx------ 1 root root 64 Feb 28 16:13 83 -> /root/.storjshare/storjshare/shares/188071ba7cfd974a9e47b59e24b0737ebf845db3/sharddata.kfs/121.s/LOCK
l-wx------ 1 root root 64 Feb 28 16:13 84 -> /root/.storjshare/storjshare/shares/188071ba7cfd974a9e47b59e24b0737ebf845db3/sharddata.kfs/121.s/000448.log
l-wx------ 1 root root 64 Feb 28 16:13 85 -> /root/.storjshare/storjshare/shares/188071ba7cfd974a9e47b59e24b0737ebf845db3/sharddata.kfs/121.s/MANIFEST-000447

EventEmitter memory leak

 (node) warning: possible EventEmitter memory leak detected. 11 close listeners added. Use emitter.setMaxListeners() to increase limit.

The media in the drive may have changed

OS: windows 10
Storjshare-GUI: 4.1.0

events.js:160 Uncaught Error: IO error: D:\storj\storjshare-eadf0e\storjshare-6fba36\sharddata.kfs\108.s/000051.ldb: The media in the drive may have changed

Uncaught TypeError: Data must be a string or a buffer

One Storj Share GUI user (OS X 10.12.1) is not sending any OFFER messages, and the developer console shows the error below. Every attempt to send an OFFER logs a new copy of the error.

Trying different GUI versions and starting farming with an empty folder didn't help. The error message persists:

Uncaught TypeError: Data must be a string or a bufferHash.update @ crypto.js:73
module.exports.hashKey @ utils.js:45
module.exports.coerceKeyOrIndex @ utils.js:60
_getStat @ b-table.js:193
(anonymous function) @ async.js:1156
replenish @ async.js:1030
(anonymous function) @ async.js:1034
_asyncMap @ async.js:1154
(anonymous function) @ async.js:1240
Btable.stat @ b-table.js:213
(anonymous function) @ embedded.js:173

Optimize for SMR tech drives & Raid 5/6 backed storage

Storj is a read-mostly application with archive-like characteristics (write data once, store it for a long time, maybe read it back). This type of application is a natural fit for archive-class storage, such as drives that use SMR (Shingled Magnetic Recording) for cost effectiveness. SMR drives suffer from poor random write performance but are very cost effective.

Please consider adding a "copy on commit" option: when a data transfer starts, a separate temporary area (perhaps on a write-friendly device like an SSD) would be used until the entire ldb file is received, after which it is copied to bulk storage (sequential write I/O by the nature of such a copy). This would significantly improve performance on SMR-class drives as well as on RAID 5 or RAID 6 arrays, where entire stripe sets could be written at once, reducing RAID 6 I/O by as much as 4x.

Corrupted database

Package Versions

Output of storjshare --version:

daemon 2.4.3
core 6.2.0
protocol 1.1.0

Output of node --version:

6.9.2

Actual Behavior

The program restarts 15-30 times a day.

{"level":"error","message":"Could not get usedSpace: IO error: U:\\TUNEL_CLI_B_800gb\\sharddata.kfs\\020.s/MANIFEST-000368: The system cannot find the file specified.\r\n","timestamp":"2017-02-23T00:02:27.013Z"}
{"level":"error","message":"Could not get usedSpace: IO error: U:\\TUNEL_CLI_B_800gb\\sharddata.kfs\\020.s/MANIFEST-000368: The system cannot find the file specified.\r\n","timestamp":"2017-02-23T00:02:27.013Z"}
{"level":"error","message":"Could not get usedSpace: IO error: U:\\TUNEL_CLI_B_800gb\\sharddata.kfs\\020.s/MANIFEST-000368: The system cannot find the file specified.\r\n","timestamp":"2017-02-23T00:02:27.013Z"}
{"level":"error","message":"Could not get usedSpace: IO error: U:\\TUNEL_CLI_B_800gb\\sharddata.kfs\\020.s/MANIFEST-000368: The system cannot find the file specified.\r\n","timestamp":"2017-02-23T00:02:27.013Z"}
{"level":"error","message":"Could not get usedSpace: IO error: U:\\TUNEL_CLI_B_800gb\\sharddata.kfs\\020.s/MANIFEST-000368: The system cannot find the file specified.\r\n","timestamp":"2017-02-23T00:02:27.014Z"}
{"level":"error","message":"Could not get usedSpace: IO error: U:\\TUNEL_CLI_B_800gb\\sharddata.kfs\\020.s/MANIFEST-000368: The system cannot find the file specified.\r\n","timestamp":"2017-02-23T00:02:27.014Z"}
{"level":"error","message":"Could not get usedSpace: IO error: U:\\TUNEL_CLI_B_800gb\\sharddata.kfs\\020.s/MANIFEST-000368: The system cannot find the file specified.\r\n","timestamp":"2017-02-23T00:02:27.014Z"}
{"level":"error","message":"Could not get usedSpace: IO error: U:\\TUNEL_CLI_B_800gb\\sharddata.kfs\\020.s/MANIFEST-000368: The system cannot find the file specified.\r\n","timestamp":"2017-02-23T00:02:27.014Z"}
{"level":"error","message":"Could not get usedSpace: IO error: U:\\TUNEL_CLI_B_800gb\\sharddata.kfs\\020.s/MANIFEST-000368: The system cannot find the file specified.\r\n","timestamp":"2017-02-23T00:02:27.014Z"}
{"level":"error","message":"Could not get usedSpace: IO error: U:\\TUNEL_CLI_B_800gb\\sharddata.kfs\\020.s/MANIFEST-000368: The system cannot find the file specified.\r\n","timestamp":"2017-02-23T00:02:27.014Z"}
{"level":"error","message":"Could not get usedSpace: IO error: U:\\TUNEL_CLI_B_800gb\\sharddata.kfs\\020.s/MANIFEST-000368: The system cannot find the file specified.\r\n","timestamp":"2017-02-23T00:02:27.014Z"}
{"level":"error","message":"Could not get usedSpace: IO error: U:\\TUNEL_CLI_B_800gb\\sharddata.kfs\\020.s/MANIFEST-000368: The system cannot find the file specified.\r\n","timestamp":"2017-02-23T00:02:27.014Z"}
{"level":"error","message":"Could not get usedSpace: IO error: U:\\TUNEL_CLI_B_800gb\\sharddata.kfs\\020.s/MANIFEST-000368: The system cannot find the file specified.\r\n","timestamp":"2017-02-23T00:02:27.014Z"}
{"level":"error","message":"Could not get usedSpace: IO error: U:\\TUNEL_CLI_B_800gb\\sharddata.kfs\\020.s/MANIFEST-000368: The system cannot find the file specified.\r\n","timestamp":"2017-02-23T00:02:27.015Z"}
{"level":"error","message":"Could not get usedSpace: IO error: U:\\TUNEL_CLI_B_800gb\\sharddata.kfs\\020.s/MANIFEST-000368: The system cannot find the file specified.\r\n","timestamp":"2017-02-23T00:02:27.015Z"}
{"level":"error","message":"Could not get usedSpace: IO error: U:\\TUNEL_CLI_B_800gb\\sharddata.kfs\\020.s/MANIFEST-000368: The system cannot find the file specified.\r\n","timestamp":"2017-02-23T00:02:27.015Z"}
{"level":"error","message":"Could not get usedSpace: IO error: U:\\TUNEL_CLI_B_800gb\\sharddata.kfs\\020.s/MANIFEST-000368: The system cannot find the file specified.\r\n","timestamp":"2017-02-23T00:02:27.015Z"}
{"level":"error","message":"Could not get usedSpace: IO error: U:\\TUNEL_CLI_B_800gb\\sharddata.kfs\\020.s/MANIFEST-000368: The system cannot find the file specified.\r\n","timestamp":"2017-02-23T00:02:27.015Z"}
{"level":"error","message":"Could not get usedSpace: IO error: U:\\TUNEL_CLI_B_800gb\\sharddata.kfs\\020.s/MANIFEST-000368: The system cannot find the file specified.\r\n","timestamp":"2017-02-23T00:02:27.015Z"}
{"level":"error","message":"Could not get usedSpace: IO error: U:\\TUNEL_CLI_B_800gb\\sharddata.kfs\\020.s/MANIFEST-000368: The system cannot find the file specified.\r\n","timestamp":"2017-02-23T00:02:27.015Z"}
{"level":"error","message":"Could not get usedSpace: IO error: U:\\TUNEL_CLI_B_800gb\\sharddata.kfs\\020.s/MANIFEST-000368: The system cannot find the file specified.\r\n","timestamp":"2017-02-23T00:02:27.015Z"}
{"level":"error","message":"Could not get usedSpace: IO error: U:\\TUNEL_CLI_B_800gb\\sharddata.kfs\\020.s/MANIFEST-000368: The system cannot find the file 
{"level":"error","message":"Could not get usedSpace: IO error: U:\\TUNEL_CLI_B_800gb\\sharddata.kfs\\024.s/MANIFEST-000359: The system cannot open the file.\r\n","timestamp":"2017-02-23T00:02:49.033Z"}
{"level":"error","message":"Could not get usedSpace: IO error: U:\\TUNEL_CLI_B_800gb\\sharddata.kfs\\059.s: An unexpected network error occurred.\r\n","timestamp":"2017-02-23T00:02:49.410Z"}
{"level":"error","message":"Could not get usedSpace: IO error: U:\\TUNEL_CLI_B_800gb\\sharddata.kfs\\059.s: An unexpected network error occurred.\r\n","timestamp":"2017-02-23T00:02:49.410Z"}
{"level":"error","message":"Could not get usedSpace: IO error: U:\\TUNEL_CLI_B_800gb\\sharddata.kfs\\059.s: An unexpected network error occurred.\r\n","timestamp":"2017-02-23T00:02:49.411Z"}
{"level":"error","message":"Could not get usedSpace: IO error: U:\\TUNEL_CLI_B_800gb\\sharddata.kfs\\031.s/000995.log: The system cannot open the file.\r\n","timestamp":"2017-02-23T00:02:49.420Z"}
{"level":"warn","message":"missing or empty reply from contact","timestamp":"2017-02-23T00:59:58.070Z"}
{"level":"warn","message":"missing or empty reply from contact","timestamp":"2017-02-23T00:59:58.070Z"}
{"level":"warn","message":"missing or empty reply from contact","timestamp":"2017-02-23T00:59:58.070Z"}
{"level":"warn","message":"missing or empty reply from contact","timestamp":"2017-02-23T00:59:58.070Z"}
{"level":"warn","message":"missing or empty reply from contact","timestamp":"2017-02-23T00:59:58.133Z"}
{"level":"warn","message":"missing or empty reply from contact","timestamp":"2017-02-23T00:59:58.136Z"}
{"level":"warn","message":"missing or empty reply from contact","timestamp":"2017-02-23T00:59:58.146Z"}
{"level":"warn","message":"missing or empty reply from contact","timestamp":"2017-02-23T00:59:58.151Z"}
{"level":"warn","message":"missing or empty reply from contact","timestamp":"2017-02-23T00:59:58.164Z"}
{"level":"warn","message":"missing or empty reply from contact","timestamp":"2017-02-23T00:59:58.167Z"}
{"level":"warn","message":"missing or empty reply from contact","timestamp":"2017-02-23T00:59:58.167Z"}
{"level":"warn","message":"missing or empty reply from contact","timestamp":"2017-02-23T00:59:58.168Z"}
{"level":"warn","message":"missing or empty reply from contact","timestamp":"2017-02-23T00:59:58.170Z"}
{"level":"warn","message":"missing or empty reply from contact","timestamp":"2017-02-23T00:59:58.182Z"}
{"level":"warn","message":"missing or empty reply from contact","timestamp":"2017-02-23T00:59:58.219Z"}
{"level":"warn","message":"missing or empty reply from contact","timestamp":"2017-02-23T00:59:58.271Z"}
{"level":"warn","message":"missing or empty reply from contact","timestamp":"2017-02-23T00:59:58.426Z"}
{"level":"warn","message":"missing or empty reply from contact","timestamp":"2017-02-23T00:59:58.432Z"}
{"level":"warn","message":"missing or empty reply from contact","timestamp":"2017-02-23T00:59:58.438Z"}
{"level":"warn","message":"missing or empty reply from contact","timestamp":"2017-02-23T00:59:58.467Z"}
{"level":"error","message":"Could not get usedSpace: IO error: U:\\TUNEL_CLI_B_800gb\\sharddata.kfs\\149.s: An unexpected network error occurred.\r\n","timestamp":"2017-02-23T00:59:58.600Z"}

Power loss or similar problems can corrupt the LevelDB. This has been reported a few times now by GUI and daemon users. I think we need the good old repair command again, or better yet, avoid the corruption somehow.

Workaround:
npm install -g [email protected]
kfs -d /path-to-sharddata.kfs compact

There is roughly a 50% chance that this repairs the LevelDB.

Error: too many open files


The Storj Share GUI reports an error about too many open files when I try to start the second disk (Resource Monitor shows a very large number of I/O operations by the StorjShare process). Win 10 v. 1607 64-bit (with all hotfixes and updates), CPU Intel Core i5 M430 @ 2.27 GHz, 8 GB RAM, 256 GB SSD Kingston HyperX Savage, Storj Share 4.1.0, Core Library 6.3.0, Protocol 1.1.0, Electron 1.4.15, Chrome 53.0.2785.143.


Too many open files

$ storjshare --version
StorjShare: 7.0.3
Core:       4.1.0
Protocol:   0.9.1

[email protected]

I can't find an existing issue for this bug, so I'm posting it here; I'm not sure whether it is already fixed in 2.2.5.

Running lsof currently shows 23270 open files. It seems none of the LevelDB files are ever closed.

{"level":"debug","message":"contact already in bucket, moving to tail","timestamp":"2016-11-05T15:00:16.026Z"}
{"level":"info","message":"handling storage consignment request from a69d4f2e3e97e4f26ece3d437b2825c5d7fed9e6","timestamp":"2016-11-05T15:00:16.060Z"}
{"level":"info","message":"authorizing data channel for a69d4f2e3e97e4f26ece3d437b2825c5d7fed9e6","timestamp":"2016-11-05T15:00:16.117Z"}
{"level":"info","message":"replying to message to 13af651c84a74787103c961ada33a9d057a98c05","timestamp":"2016-11-05T15:00:16.117Z"}
{"level":"debug","message":"not waiting on callback for message 13af651c84a74787103c961ada33a9d057a98c05","timestamp":"2016-11-05T15:00:16.133Z"}
{"level":"info","message":"data channel connection opened","timestamp":"2016-11-05T15:00:16.451Z"}
events.js:160
      throw er; // Unhandled 'error' event
      ^

Error: IO error: /mnt/md3/storj/sharddata.kfs/196.s/002392.ldb: Too many open files
    at Error (native)

Using [email protected] result with Syntax error: Unexpected token

I have a corrupted database, so I want to use the compact tool from the KFS package.

The package installed correctly:

C:\WINDOWS\system32>npm install -g [email protected]
|
> [email protected] install C:\Users\mediacenter\AppData\Roaming\npm\node_modules\kfs\node_modules\leveldown
> prebuild-install || node-gyp rebuild

prebuild-install info begin Prebuild-install version 2.1.0
prebuild-install info looking for local prebuild @ prebuilds\leveldown-v1.6.0-node-v46-win32-x64.tar.gz
prebuild-install info looking for cached prebuild @ C:\Users\mediacenter\AppData\Roaming\npm-cache\_prebuilds\https-github.com-level-leveldown-releases-download-v1.6.0-leveldown-v1.6.0-node-v46-win32-x64.tar.gz
prebuild-install http request GET https://github.com/level/leveldown/releases/download/v1.6.0/leveldown-v1.6.0-node-v46-win32-x64.tar.gz
prebuild-install http 200 https://github.com/level/leveldown/releases/download/v1.6.0/leveldown-v1.6.0-node-v46-win32-x64.tar.gz
prebuild-install info downloading to @ C:\Users\mediacenter\AppData\Roaming\npm-cache\_prebuilds\https-github.com-level-leveldown-releases-download-v1.6.0-leveldown-v1.6.0-node-v46-win32-x64.tar.gz.8576-92938a5b.tmp
prebuild-install info renaming to @ C:\Users\mediacenter\AppData\Roaming\npm-cache\_prebuilds\https-github.com-level-leveldown-releases-download-v1.6.0-leveldown-v1.6.0-node-v46-win32-x64.tar.gz
prebuild-install info unpacking @ C:\Users\mediacenter\AppData\Roaming\npm-cache\_prebuilds\https-github.com-level-leveldown-releases-download-v1.6.0-leveldown-v1.6.0-node-v46-win32-x64.tar.gz
prebuild-install info unpack resolved to C:\Users\mediacenter\AppData\Roaming\npm\node_modules\kfs\node_modules\leveldown\build\Release\leveldown.node
prebuild-install info unpack required C:\Users\mediacenter\AppData\Roaming\npm\node_modules\kfs\node_modules\leveldown\build\Release\leveldown.node successfully
prebuild-install info install Prebuild successfully installed!
C:\Users\mediacenter\AppData\Roaming\npm\kfs -> C:\Users\mediacenter\AppData\Roaming\npm\node_modules\kfs\bin\kfs.js
[email protected] C:\Users\mediacenter\AppData\Roaming\npm\node_modules\kfs
├── [email protected]
├── [email protected] ([email protected])
├── [email protected] ([email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected])
├── [email protected] ([email protected])
├── [email protected] ([email protected], [email protected], [email protected], [email protected], [email protected])
└── [email protected] ([email protected])

But when I run the command, I always get this error:

E:\>kfs -d E:\\godzilla_gui\\storjshare-349319 compact
C:\Users\mediacenter\AppData\Roaming\npm\node_modules\kfs\bin\kfs.js:9
const {homedir} = require('os');
      ^

SyntaxError: Unexpected token {
    at exports.runInThisContext (vm.js:53:16)
    at Module._compile (module.js:373:25)
    at Object.Module._extensions..js (module.js:416:10)
    at Module.load (module.js:343:32)
    at Function.Module._load (module.js:300:12)
    at Function.Module.runMain (module.js:441:10)
    at startup (node.js:139:18)
    at node.js:974:3

Uncaught error on events.js too many files opened

After some hours the node restarts:

{"level":"error","message":"failed to read from mirror node: connect ETIMEDOUT 158.69.248.73:5015","timestamp":"2017-05-13T21:33:05.997Z"}
events.js:163
throw er; // Unhandled 'error' event
^

Error: IO error: /drivepath/sharddata.kfs/241.s/000588.ldb: Too many open files

Additional info:
daemon: 2.5.3, core: 6.4.2, protocol: 1.1.0
Linux; the maximum number of open files the kernel permits per process is at the default (should be 1024).
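As a stopgap until the descriptor leak itself is fixed, the per-process limit can be checked and raised before starting the daemon (4096 is an arbitrary example value, not a recommended setting):

```shell
# Show the current per-process open-file limit (commonly 1024 on Linux)
ulimit -n

# Raise it for this shell session, then start the daemon from the same shell
ulimit -n 4096
```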

After some time of running

/home/marek/.nvm/versions/node/v4.5.0/lib/node_modules/storjshare-cli/node_modules/storj-lib/node_modules/kfs/lib/s-bucket.js:225
        callback(null);
        ^

TypeError: callback is not a function
    at /home/marek/.nvm/versions/node/v4.5.0/lib/node_modules/storjshare-cli/node_modules/storj-lib/node_modules/kfs/lib/s-bucket.js:225:9
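The crash pattern here is an optional callback being invoked unconditionally. A common Node idiom (shown as an illustrative sketch, not the actual s-bucket.js code) is to default it to a no-op:

```javascript
// Illustrative sketch: defaulting an optional callback to a no-op avoids
// "TypeError: callback is not a function" when the caller omits it.
function closeBucket(callback) {
  callback = callback || function () {};
  // ... release the underlying LevelDB handle here ...
  callback(null);
}

closeBucket();                                   // safe: no callback supplied
closeBucket((err) => console.log(err === null)); // true
```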

Another Error with google-drive-ocamlfuse when running Storj_Farmer_Contracts.js

storjshare -V

  • daemon: 2.5.0, core: 6.3.0, protocol: 1.1.0

node Storj_Farmer_Contracts.js

assert.js:85
throw new assert.AssertionError({
^
AssertionError: Table path is not a valid KFS instance
at Btable._validateTablePath (/root/bin/node_modules/kfs/lib/b-table.js:97:7)
at Btable._open (/root/bin/node_modules/kfs/lib/b-table.js:62:12)
at new Btable (/root/bin/node_modules/kfs/lib/b-table.js:51:10)
at module.exports (/root/bin/node_modules/kfs/index.js:15:34)
at new EmbeddedStorageAdapter (/root/bin/node_modules/storj-lib/lib/storage/adapters/embedded.js:30:14)
at Object.<anonymous> (/root/bin/Storj_Farmer_Contracts.js:83:19)
at Module._compile (module.js:570:32)
at Object.Module._extensions..js (module.js:579:10)
at Module.load (module.js:487:32)
at tryModuleLoad (module.js:446:12)
at Function.Module._load (module.js:438:3)
at Module.runMain (module.js:604:10)
at run (bootstrap_node.js:394:7)
at startup (bootstrap_node.js:149:9)
at bootstrap_node.js:509:3


S-Buckets don't fill evenly

From the README file:

Due to the nature of Kademlia's metric for determining distance, buckets will fill approximately evenly. The result of the XOR operation on the pseudo-random first bytes of the reference ID and hash should give any given bucket a relatively even chance of receiving any given file.

The attached file is the du result for one node. All 12 of my nodes show the same non-uniform filling.
kfs_du.txt
Note that the filling is not always distributed around the 080.s entry.

I wonder if this is related to the bad shard distribution in the Storj network?
