
interface-js-ipfs-core's Introduction

🔒 Archived

The contents of this repo have been merged into ipfs/js-ipfs.

Please open issues or submit PRs there.

interface-ipfs-core

standard-readme compliant

A test suite and interface you can use to implement an IPFS core interface.

Lead Maintainer

Alan Shaw.

Table of Contents

Background

The primary goal of this module is to define and ensure that both IPFS core implementations and their respective HTTP client libraries offer the same interface, so that developers can seamlessly switch between a local and a remote node without having to change their applications. In addition to defining the expected interface, this module offers a suite of tests that can be run to check that the interface is implemented as described.

The API is presented with both Node.js and Go primitives. However, there are no actual limitations keeping it from being extended for any other language, pushing forward cross compatibility and interoperability through different stacks.

Modules that implement the interface

Send in a PR if you find or write one!

Badge

Include this badge in your readme if you make a new module that implements the interface-ipfs-core API.

[![IPFS Core API Compatible](https://cdn.rawgit.com/ipfs/interface-ipfs-core/master/img/badge.svg)](https://github.com/ipfs/interface-ipfs-core)

Install

In JavaScript land:

npm install interface-ipfs-core

If you want to run these tests against a go-ipfs daemon, check out ipfs-http-client and run its tests:

git clone https://github.com/ipfs/js-ipfs-http-client
cd js-ipfs-http-client
npm install
npm test

Usage

Install interface-ipfs-core as one of the dependencies of your project. Then, in a test file, using mocha (for Node.js) or a test runner with a compatible API, do:

const tests = require('interface-ipfs-core')
const nodes = []

// Create common setup and teardown
const createCommon = () => ({
  // Do some setup common to all tests
  setup: async () => {
    // Use ipfsd-ctl or other to spawn an IPFS node for testing
    const node = await spawnNode()
    nodes.push(node)

    return node.api
  },
  // Dispose of nodes created by the IPFS factory and any other teardown
  teardown: () => {
    return Promise.all(nodes.map(n => n.stop()))
  }
})

tests.block(createCommon)
tests.config(createCommon)
tests.dag(createCommon)
// ...etc. (see src/index.js)

Running tests by command

tests.repo.version(createCommon)

Skipping tests

tests.repo.gc(createCommon, { skip: true }) // pass an options object to skip these tests

// OR, at the subsystem level

// skips ALL the repo.gc tests
tests.repo(createCommon, { skip: ['gc'] })
// skips ALL the object.patch.addLink tests
tests.object(createCommon, { skip: ['patch.addLink'] })

Skipping specific tests

tests.repo.gc(createCommon, { skip: ['should do a thing'] }) // named test(s) to skip

// OR, at the subsystem level

tests.repo(createCommon, { skip: ['should do a thing'] })

Running only some tests

tests.repo.gc(createCommon, { only: true }) // pass an options object to run only these tests

// OR, at the subsystem level

// runs only ALL the repo.gc tests
tests.repo(createCommon, { only: ['gc'] })
// runs only ALL the object.patch.addLink tests
tests.object(createCommon, { only: ['patch.addLink'] })

Running only specific tests

tests.repo.gc(createCommon, { only: ['should do a thing'] }) // only run these named test(s)

// OR, at the subsystem level

tests.repo(createCommon, { only: ['should do a thing'] })

API

In order to be considered "valid", an IPFS core implementation must expose the API described in /SPEC. You can also use this loose spec as documentation for consuming the core APIs.

Contribute

Feel free to join in. All welcome. Open an issue!

This repository falls under the IPFS Code of Conduct.

Want to hack on IPFS?

License

Copyright (c) Protocol Labs, Inc. under the MIT License. See LICENSE.md for details.

interface-js-ipfs-core's People

Contributors

0x-r4bbit, achingbrain, alanshaw, daviddias, dignifiedquire, dirkmc, dryajov, greenkeeper[bot], greenkeeperio-bot, haadcode, hacdias, hackergrrl, hugomrdias, jacobheun, kevinsimper, lidel, michaelmure, nginnever, niinpatel, olizilla, pgte, prabhakar-poudel, richardlitt, richardschneider, stebalien, terichadbourne, vasco-santos, victorb, vmx, wraithgar


interface-js-ipfs-core's Issues

MFS ipfs.files.cp arguments

What's the thinking behind the ipfs.files.cp arguments? Specifically why is the first argument an array?

ipfs.files.cp([sourceFile, destinationFile], callback)

It could be more straightforward as:

ipfs.files.cp(sourceFile, destinationFile, callback)

I suppose maybe you'd want to imitate the unix cp command and support both the above and:

ipfs.files.cp(...sourceFiles, destinationDirectory, callback)

Obviously JavaScript doesn't support left-variadic functions so you'd have to emulate it by interrogating arguments but that's an implementation detail.
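The emulation mentioned above can be sketched as follows; `cp` here is a hypothetical stand-in, not the real ipfs.files.cp:

```javascript
// Hypothetical sketch (not the actual ipfs.files.cp): emulate a
// left-variadic cp(...sources, dest, cb) by splitting the argument
// list at the trailing callback.
function cp (...args) {
  const cb = args.pop()       // last argument: the callback
  const dest = args.pop()     // second to last: the destination
  const sources = args        // everything before that: source paths
  // a real implementation would copy each source into dest here
  cb(null, { sources, dest })
}

cp('/a.txt', '/b.txt', '/dir', (err, res) => {
  // res = { sources: ['/a.txt', '/b.txt'], dest: '/dir' }
})
```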

Lack of hash -> Buffer interface

A noticeable gap in the APIs seems to be taking a hash and getting a buffer back.

The scenario: on the Archive.org gateway, we add a file to IPFS using the https..../api/v0/add interface (working with Kyle) and get back a base58 multihash that could be EITHER
a) the hash of the raw bytes (for a small file), or
b) the hash of the IPLD node (for a larger file that got sharded).

The problem is turning that hash back into bytes in the JS. We've been using block.get() to retrieve it, but of course that fails in case b.

Kyle has suggested using files.get(), but this runs into the API limitation that files.cat returns a stream, and I'm guessing that a lot of people using this end up writing code to convert that stream back into a Buffer (so it can be passed to a Blob).

If I understand the code in files.js (and it's possible that I don't), the code in cat() already has the whole file in a Buffer, which it converts into a stream, which the caller then has to convert back into a Buffer. Wouldn't it make a lot of sense to provide an API that returns a Buffer in the first place?

Revisit exposed DHT API

The data types of the return values can be vastly improved + more valuable information can be provided. Currently js-ipfs is fulfilling the contract that was brought into the spec through go-ipfs + js-ipfs-api, but there is nothing stopping us from making it better.

There are also some questions as to whether we should be able to provide a block we don't have. go-ipfs lets you do it; js-ipfs wasn't letting you.

Due to time constraints, I'll focus on the DHT internals that are used for peerRouting and contentRouting, so that we don't touch the surface API too much and fulfil our goal of having the DHT used by internals.

js-ipfs tests are failing

module.js:515
    throw err;
    ^

Error: Cannot find module 'aegir/fixtures'
    at Function.Module._resolveFilename (module.js:513:15)
    at Function.Module._load (module.js:463:25)
    at Module.require (module.js:556:17)
    at require (internal/module.js:11:18)
    at Object.<anonymous> (c:\Users\Owner\Documents\GitHub\js-ipfs\node_modules\interface-ipfs-core\src\files.js:12:21)
    at Module._compile (module.js:612:30)
    at Object.Module._extensions..js (module.js:623:10)
    at Module.load (module.js:531:32)
    at tryModuleLoad (module.js:494:12)
    at Function.Module._load (module.js:486:3)
    at Module.require (module.js:556:17)
    at require (internal/module.js:11:18)
    at Object.<anonymous> (c:\Users\Owner\Documents\GitHub\js-ipfs\node_modules\interface-ipfs-core\src\index.js:4:17)
    at Module._compile (module.js:612:30)
    at Object.Module._extensions..js (module.js:623:10)
    at Module.load (module.js:531:32)
    at tryModuleLoad (module.js:494:12)
    at Function.Module._load (module.js:486:3)
    at Module.require (module.js:556:17)
    at require (internal/module.js:11:18)
    at Object.<anonymous> (c:\Users\Owner\Documents\GitHub\js-ipfs\test\core\interface\block.js:4:14)
    at Module._compile (module.js:612:30)
    at Object.Module._extensions..js (module.js:623:10)
    at Module.load (module.js:531:32)
    at tryModuleLoad (module.js:494:12)
    at Function.Module._load (module.js:486:3)
    at Module.require (module.js:556:17)
    at require (internal/module.js:11:18)
    at Suite.describe (c:\Users\Owner\Documents\GitHub\js-ipfs\test\core\interface\interface.spec.js:8:3)
    at Object.create (C:\Users\Owner\AppData\Roaming\npm\node_modules\aegir\node_modules\mocha\lib\interfaces\common.js:112:19)
    at context.describe.context.context (C:\Users\Owner\AppData\Roaming\npm\node_modules\aegir\node_modules\mocha\lib\interfaces\bdd.js:44:27)
    at Object.<anonymous> (c:\Users\Owner\Documents\GitHub\js-ipfs\test\core\interface\interface.spec.js:7:1)
    at Module._compile (module.js:612:30)
    at Object.Module._extensions..js (module.js:623:10)
    at Module.load (module.js:531:32)
    at tryModuleLoad (module.js:494:12)
    at Function.Module._load (module.js:486:3)
    at Module.require (module.js:556:17)
    at require (internal/module.js:11:18)
    at C:\Users\Owner\AppData\Roaming\npm\node_modules\aegir\node_modules\mocha\lib\mocha.js:231:27
    at Array.forEach (<anonymous>)
    at Mocha.loadFiles (C:\Users\Owner\AppData\Roaming\npm\node_modules\aegir\node_modules\mocha\lib\mocha.js:228:14)
    at Mocha.run (C:\Users\Owner\AppData\Roaming\npm\node_modules\aegir\node_modules\mocha\lib\mocha.js:514:10)
    at Object.<anonymous> (C:\Users\Owner\AppData\Roaming\npm\node_modules\aegir\node_modules\mocha\bin\_mocha:484:18)
    at Module._compile (module.js:612:30)
    at Object.Module._extensions..js (module.js:623:10)
    at Module.load (module.js:531:32)
    at tryModuleLoad (module.js:494:12)
    at Function.Module._load (module.js:486:3)
    at Function.Module.runMain (module.js:653:10)
    at startup (bootstrap_node.js:187:16)
    at bootstrap_node.js:608:3

dag api: add {ls, tree}

We should add the following commands to the core/dag interface:

  • ipfs.dag.ls(path) - show the first level link names of the node pointed at by <path>
    • include simple return value one, for small dag objects
    • include stream/iterator based one, for huge dag objects
  • ipfs.dag.tree(path) - enumerate all entries in the graph, parting from node at <path>.
    • include simple return value one, for small subgraphs
    • stream/iterator based so that we can explore massive graphs with it.

cc @diasdavid @whyrusleeping
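The proposed dag.tree semantics can be sketched on a plain in-memory object; `tree()` below is a stand-in for the real ipfs.dag.tree, written as a generator so enumeration stays lazy for huge graphs:

```javascript
// Stand-in sketch, not the actual ipfs.dag.tree: enumerate every path in
// a nested plain object, yielding lazily so massive graphs stay cheap.
function * tree (node, prefix = '') {
  for (const [key, value] of Object.entries(node)) {
    const path = prefix ? `${prefix}/${key}` : key
    yield path
    if (value !== null && typeof value === 'object') {
      yield * tree(value, path) // recurse into sub-objects
    }
  }
}

// [...tree({ a: 1, c: { cb: 'foo' } })] → ['a', 'c', 'c/cb']
```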

Consider enabling `add(path)` to the files API

@jbenet brought up that a lot of users will expect ipfs.add(path) to just work as if they were in the terminal.

In the past, we went through several iterations and discussions and eventually ended up agreeing that it would be beneficial to have utilities like addFromFs and addFromUrl as actions that are explicit, this keeps the promise intact that code can be run in the browser and in Node.js, as long as the dev is using the ipfs-core API.

We can add this feature, we will just need to be verbose about that caveat.

ipfs.files.add tests

From @diasdavid: The tests should cover adding files:

  • file with less than a block size
  • file bigger than a block size
  • file bigger than a block size, but that is not a multiple of the block size (size % 256KiB !== 0)
  • file that is 'big' (between 10 and 20Mb)
  • directory with files
  • directory with files and empty dirs
  • directory with other directories that have files and other empty dirs (nested dirs)

Add CI (travis + circle ci)

So that at least linting is run; in the future we could even clone all repos that should adhere to the spec and run their tests as a bonus.

URLs for IPFS paths

I'd asked this question on #284 but it's a bit of a tangent so I've moved it here.

Is there a reason we don't support URLs to indicate which kind of paths are which?

e.g. ipfs://QmPbFXUdgtyrCjExNAjugJTro6Lx4GG4MuvXTiHcZQkiif, ipfs:///a-file-in-mfs, file:///home/me/some-file.txt etc?

It would make the users' intentions clearer when trying to work out what they wanted to copy/move/add/etc., specifically in the case where they have valid MFS/filesystem paths that clash with filesystem/MFS or IPFS paths. E.g. they've got a file in MFS at /etc/passwd or on the filesystem at /ipfs/QmPbFXUdgtyrCjExNAjugJTro6Lx4GG4MuvXTiHcZQkiif.

If we detect a conflict while using regular paths we could prompt the user to re-specify their args using URLs.

Something like:

$ mkdir /ipfs
$ touch /ipfs/QmPbFXUdgtyrCjExNAjugJTro6Lx4GG4MuvXTiHcZQkiif
$ ipfs files cp /ipfs/QmPbFXUdgtyrCjExNAjugJTro6Lx4GG4MuvXTiHcZQkiif /hello.txt

Cannot copy file

A local file was detected at /ipfs/QmPbFXUdgtyrCjExNAjugJTro6Lx4GG4MuvXTiHcZQkiif 
which is also an IPFS path.  Please use file:// or ipfs:// URLs.

E.g.:
ipfs files cp ipfs://QmPbFXUdgtyrCjExNAjugJTro6Lx4GG4MuvXTiHcZQkiif /hello.txt
ipfs files cp file:///ipfs/QmPbFXUdgtyrCjExNAjugJTro6Lx4GG4MuvXTiHcZQkiif /hello.txt
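The detection step could be sketched like this; `classifyPath` is a hypothetical helper, not something the CLI or API implements:

```javascript
// Hypothetical helper sketching the proposed disambiguation: classify an
// argument by URL scheme so MFS, filesystem and IPFS paths can't clash.
function classifyPath (p) {
  if (p.startsWith('ipfs://')) {
    return { kind: 'ipfs', path: '/ipfs/' + p.slice('ipfs://'.length) }
  }
  if (p.startsWith('file://')) {
    return { kind: 'filesystem', path: p.slice('file://'.length) }
  }
  if (p.startsWith('/ipfs/')) {
    return { kind: 'ambiguous', path: p } // could prompt the user here
  }
  return { kind: 'mfs', path: p }
}
```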

Create a working example

I want to be able to use this. I made a repository following the instructions in this Readme, I think, and I got nowhere. Can someone help me through this?

interface-ipfs-core-example

To run:

$ mocha test.js
/Users/richard/src/interface-ipfs-core-example/test.js:13
test.all(common)
     ^

TypeError: test.all is not a function
    at Object.<anonymous> (/Users/richard/src/interface-ipfs-core-example/test.js:13:6)
    at Module._compile (module.js:425:26)
    at Object.Module._extensions..js (module.js:432:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:311:12)
    at Module.require (module.js:366:17)
    at require (module.js:385:17)
    at /Users/richard/.nvm/versions/node/v5.0.0/lib/node_modules/mocha/lib/mocha.js:220:27
    at Array.forEach (native)
    at Mocha.loadFiles (/Users/richard/.nvm/versions/node/v5.0.0/lib/node_modules/mocha/lib/mocha.js:217:14)
    at Mocha.run (/Users/richard/.nvm/versions/node/v5.0.0/lib/node_modules/mocha/lib/mocha.js:469:10)
    at Object.<anonymous> (/Users/richard/.nvm/versions/node/v5.0.0/lib/node_modules/mocha/bin/_mocha:404:18)
    at Module._compile (module.js:425:26)
    at Object.Module._extensions..js (module.js:432:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:311:12)
    at Function.Module.runMain (module.js:457:10)
    at startup (node.js:136:18)
    at node.js:972:3

Add MFS core spec

We should add ipfs.files core spec for MFS specific stuff. I'll assign this to myself and do this when I can 😄

Add files with progress callback - called 58 times?

https://github.com/ipfs/interface-ipfs-core/blob/master/src/files.js#L115

Just wondering where the 58 comes from and what it's testing? I think it might be a brittle check, as it's probably determined by data storage/transfer in much deeper dependencies that these tests have no control over.

It might work for now (and on the systems the tests are run on) but break later. I think it's enough to check that the progress function was called at least once and that the reported progress bytes equals the total bytes uploaded.

Would you accept a PR for this change?

Errors don't bubble up correctly from Factory

When something goes wrong during the startup sequence of the test nodes, the errors don't bubble up, leaving the user without any details as to what went wrong:

[12:21:03] Starting 'daemons:start'...
[12:21:03] Starting 'factory:start'...
[12:21:03] Finished 'factory:start' after 382 ms
  ipfs init done - (bootstrap and mdns off) - b
  ipfs init done - (bootstrap and mdns off) - a

npm ERR! Darwin 15.5.0
npm ERR! argv "/usr/local/bin/node" "/usr/local/bin/npm" "run" "test:node"
npm ERR! node v6.9.1
npm ERR! npm  v3.10.8
npm ERR! code ELIFECYCLE
npm ERR! [email protected] test:node: `gulp test:node`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] test:node script 'gulp test:node'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the ipfs-api package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR!     gulp test:node
npm ERR! You can get information on how to open an issue for this project with:
npm ERR!     npm bugs ipfs-api
npm ERR! Or if that isn't available, you can get their info via:
npm ERR!     npm owner ls ipfs-api
npm ERR! There is likely additional logging output above.

npm ERR! Please include the following file with any support request:
npm ERR!     /Users/haad/code/js-ipfs-api-haadcode/npm-debug.log
make: *** [test] Error 1

The expected behaviour is to display the correct error message and stack trace.

ipfsFactory - Tests that require more than one IPFS node

Some API calls require more than one node to be active, otherwise they fail or timeout (also considered failure), for example: name, DHT and swarm tests.

Our current:

var test = require('interface-ipfs-core')

var common = {
  setup: function (cb) {
    cb(null, yourIPFSInstance)
  },
  teardown: function (cb) {
    cb()
  }
}

// use all of the test suites
test.all(common)

Needs to be upgraded to:

var test = require('interface-ipfs-core')

var common = {
  setup: function (cb) {
    cb(null, ipfsFactory)
  },
  teardown: function (cb) {
    cb()
  }
}

// use all of the test suites
test.all(common)

Where ipfsFactory has a method to spawn a new node:

ipfsFactory.spawnNode((err, node) => { /* ... */ })

Moreover, since we have already learned from js-ipfs-api and js-ipfs that it is useful to spawn nodes with custom configs (e.g. mdns off, no bootstrap nodes, etc.), what we really need is:

ipfsFactory.spawnNode(config, (err, node) => { /* ... */ })
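The factory contract can be stubbed out to show its shape (optional config plus teardown bookkeeping); everything below is a stand-in, since a real factory would spawn actual IPFS nodes via something like ipfsd-ctl:

```javascript
// Minimal stand-in for the proposed factory shape, not a real factory.
const ipfsFactory = {
  nodes: [],
  spawnNode (config, cb) {
    if (typeof config === 'function') { cb = config; config = {} } // config is optional
    const node = { config, api: {}, stop: done => setImmediate(done) }
    this.nodes.push(node) // remember the node so teardown can stop it
    setImmediate(() => cb(null, node))
  },
  dismantle (cb) {
    let pending = this.nodes.length
    if (pending === 0) return setImmediate(cb)
    this.nodes.forEach(n => n.stop(() => { if (--pending === 0) cb() }))
  }
}
```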

Make API more dev friendly

@haadcode you had a proposal to make the default encoding of strings (in a multihash) base58 (and not multihash.toString). Could you confirm, and if possible make a PR to update the tests with that?

Short and concise example

Feel free to close this off, but why are the examples around 1000 lines long? It seems to me that any example that requires sifting through that much code is no longer helpful.

I am looking to do two things: set up an IPFS node to serve an API, and a client to make requests. I haven't found any examples that connect these two together.

This example led me to the interface-ipfs-core repo, but it adds the file to a local instance, not a remote API. I'm having trouble finding an answer.
https://github.com/ipfs/js-ipfs/tree/master/examples/ipfs-101

rename `src` folder to `test`

The confusing part here is that the tests are in src/ and not in test/. It took me a while to look into that directory, it's not an intuitive place. Is it possible to put the tests in test/ like every JS project?
ipfs-inactive/js-ipfs-http-client#325 (comment) @haadcode

We have had the tests in src because this is a 'module', but if it makes sense for everyone, we can have the folder being exposed by the package.json being the test folder, avoiding the confusion.

๐Ÿ‘ by me, anyone has anything against?

Adding examples to the API definition

A lot of our users have been using our tests as examples. It works, but it is not ideal because the tests are not curated to be used as examples. I'll be adding examples to the DAG API since they are being requested, but we should add one or two examples to all the API definitions as it makes sense.

ipfs.add failure~~~!!!

js script:
const IPFS = require('ipfs')
const node = new IPFS()
const fs = require('fs')

node.on('ready', () => {
  console.log('Your node is now ready to use ')

  const files = [
    fs.readFileSync(process.cwd() + '/kelvv.cs')
  ]

  node.files.add(files, (err, files) => {
    if (err) { console.log(err) }
    console.log(files)
    // 'files' will be an array of objects containing paths and the multihashes of the files added
  })

  // stopping a node
  node.stop(() => {
    console.log('node is now offline')
  })
})
I got the following result:
Your node is now ready to use 
node is now offline
[ { path: 'QmeVh3qKsZ5SMGaHwQBFbRLMz3AzGthBUEXqJPcCP2J5Rq',
    hash: 'QmeVh3qKsZ5SMGaHwQBFbRLMz3AzGthBUEXqJPcCP2J5Rq',
    size: 26 } ]

But when I access https://ipfs.io/ipfs/QmeVh3qKsZ5SMGaHwQBFbRLMz3AzGthBUEXqJPcCP2J5Rq it fails; no content is returned.

please help me~!
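A likely cause, sketched with stub objects (the node below is a stand-in, not the real ipfs API): the node is stopped immediately, so by the time the public gateway tries to fetch the content there is no online peer holding it. Stopping only after the add completes (and, in real usage, keeping the node online until another peer has fetched the data) avoids that:

```javascript
// Stub node illustrating the ordering fix; not the real ipfs API.
const order = []

const node = {
  add: (files, cb) => setImmediate(() => cb(null, [{ hash: 'Qm...', size: 26 }])),
  stop: cb => setImmediate(() => { order.push('stopped'); cb() })
}

node.add(['some data'], (err, res) => {
  if (err) throw err
  order.push('added')
  // only stop once the add has completed; in real usage, keep the node
  // online until another peer (e.g. the gateway) has fetched the data
  node.stop(() => {})
})
```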

ipfs.files.cat path

Why does this not work?

ipfs.files.cat('/ipfs/<hash>/path')
  • The js-ipfs api says only a multihash or a string-encoded multihash.
  • calling cat on a path is standard IPFS api. not doing this is confusing to users of ipfs.
  • nothing here: https://github.com/ipfs/js-ipfs#api takes a path. this isn't right.
  • looked for alternative functions or examples that did this for 15 minutes. could not figure it out. 👎

🌟 interface-ipfs-core Updates and Changelog 📣

Ohai!

We are creating this issue in order to have a channel with you, the IPFS users, to communicate interface changes in js-ipfs and js-ipfs-api. Pretty soon we will also have changelogs automatically generated upon each release, but this channel serves the purpose of notifying you beforehand of such changes, and of inviting you, the user, to let us know ways to make the API better.

Once interface-ipfs-core reaches 1.0.0, the interface changes will be less frequent.

If you want to get notified of js-ipfs and js-ipfs-api interface changes (real updates and proposed updates), please subscribe to this issue.

Thank you!

core-api for Go implementation

I'm working on the go-ipfs-core-api in go-ipfs/core/coreapi (ipfs/kubo#2876) for now, but I guess ideally it should move here? Should I move the js stuff into a subdirectory?

Also, should this repo be named something like core-api instead of core interface? The terms API and interface aren't necessarily synonymous. I know it's late for this concern; I just picked up the go-core-api work again and found it a bit irritating.

`dag get --localResolve` vs `dag resolve`

@whyrusleeping I learned today that you implemented the 'resolve within a single node' behaviour as a dag resolve call. I've implemented it under dag get through an option --local-resolve, which is what I understood we agreed on after our chat at Ada's.

ยป ipfs dag --help
#...
  ipfs dag resolve <path>    - Resolve a path through an ipld node.
#... 

Do you want to have a separate command for that? What would be the primary reason for it?

`ipfs.files.get` never calls the callback for files that aren't in the network

I am just doing some testing and I went here to find some test images on the network:
https://www.reddit.com/domain/ipfs.pics

But when I use the hashes of these images with the ipfs.files.get API, my callback is never called. In other cases, where the images do exist, I successfully get images back.

This Never Calls the Callback

const cid = 'QmNdi3iaWYDoz91Szu6M4zH7r9Qom7bCrb2YyhLvjcp8TH'
ipfs.files.get(cid, (err, files) => {
    if (err) return console.log(err)
    console.log(files)
})

This Succeeds

const cid = 'QmQ2r6iMNpky5f1m4cnm3Yqw8VSvjuKpTcK1X7dBR1LkJF'
ipfs.files.get(cid, (err, files) => {
    if (err) return console.log(err)
    console.log(files) // two files, folder and cat.gif
})
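As a workaround sketch (`withTimeout` is a hypothetical helper, not part of the ipfs API), the call can be given a deadline so it errors instead of hanging forever when the content isn't on the network:

```javascript
// Hypothetical wrapper: give a callback-style fetch a deadline.
function withTimeout (fn, ms) {
  return function (arg, cb) {
    let settled = false
    const timer = setTimeout(() => {
      if (!settled) { settled = true; cb(new Error(`timed out after ${ms}ms`)) }
    }, ms)
    fn(arg, (err, res) => {
      if (settled) return      // timeout already fired; ignore late result
      settled = true
      clearTimeout(timer)
      cb(err, res)
    })
  }
}

// usage sketch: withTimeout((cid, cb) => ipfs.files.get(cid, cb), 30000)(cid, cb)
```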

Many tests without actual assertions

A lot of tests are currently not doing any valuable assertions on the responses from calls. Rather, they just make sure there are no errors and that keys exist.

Example: https://github.com/ipfs/interface-ipfs-core/blob/9d91267ab7ab61e41c84eaa81c647cd630d83797/js/src/stats.js#L39-L59

In this sample, res could have all those keys set to null and the test would pass. We should at least assert the type of the value, but best would be the exact value if we can. If not exact, we should be able to assert the format instead.

dag api: get examples

Example: ipfs.dag.get

var IPFSAPI = require('ipfs-api')
var ipfs = IPFSAPI('/ip4/127.0.0.1/tcp/5001')

// given an object such as: 
//
// var obj = {
//     "a": 1,
//     "b": [1, 2, 3],
//     "c": {
//         "ca": [5, 6, 7],
//         "cb": "foo"
//     }
// }
// 
// ipfs.dag.put(obj, function (err, cid) {
//     assert(cid.toBaseEncodedString() === 'zdpuAkxd9KzGwJFGhymCZRkPCXtBmBW7mB2tTuEH11HLbES9Y')
// })

function errOrLog(err, result) {
    if (err) console.error('error: ' + err)
    else console.log(result)
}

ipfs.dag.get('zdpuAkxd9KzGwJFGhymCZRkPCXtBmBW7mB2tTuEH11HLbES9Y/a', errOrLog)
// Returns:
// 1

ipfs.dag.get('zdpuAkxd9KzGwJFGhymCZRkPCXtBmBW7mB2tTuEH11HLbES9Y/b', errOrLog)
// Returns:
// [1, 2, 3]

ipfs.dag.get('zdpuAkxd9KzGwJFGhymCZRkPCXtBmBW7mB2tTuEH11HLbES9Y/c', errOrLog)
// Returns:
// {
//   "ca": [5, 6, 7],
//   "cb": "foo"
// }

ipfs.dag.get('zdpuAkxd9KzGwJFGhymCZRkPCXtBmBW7mB2tTuEH11HLbES9Y/c/ca/1', errOrLog)
// Returns:
// 6

Aegir loadFixture doesn't work with new js directory

Hard-coded to look for fixtures in /base/node_modules/interface-ipfs-core/test/*, but since all the code was moved into a js directory it does not work anymore:

https://github.com/ipfs/aegir/blob/616383f749285e0319470a4b4b723e257fad9d4e/src/fixtures.js#L19

Maybe this is an aegir bug?

Maybe we should pass "interface-ipfs-core/js" as the module here:

https://github.com/ipfs/interface-ipfs-core/blob/9d91267ab7ab61e41c84eaa81c647cd630d83797/js/src/files.js#L30

First iteration of interface-ipfs-core

Here are some thoughts on the herculean task of building this out. For each section of the API (objects, blocks, etc.) we:

  1. write up a draft api for them as a PR in this repo
  2. get approval on that PR from the core js-ipfs devs
  3. write PRs on js-ipfs and js-ipfs-api to match said api
  4. write tests here on interface-ipfs-core for that api
  5. run both js-ipfs and js-ipfs-api against the interface-ipfs-core tests from (4)

We'll need this done for the whole core api:

high priority:

  • objects
    • draft api
    • api approval
    • js-ipfs support
    • js-ipfs-api support
    • interface-ipfs-core tests
    • tests passing against js-ipfs and js-ipfs-api
  • blocks
    • draft api
    • api approval
    • js-ipfs support
    • js-ipfs-api support
    • interface-ipfs-core tests
    • tests passing against js-ipfs and js-ipfs-api
  • add
    • draft api
    • api approval
    • js-ipfs support
    • js-ipfs-api support
    • interface-ipfs-core tests
    • tests passing against js-ipfs and js-ipfs-api
  • cat
    • draft api
    • api approval
    • js-ipfs support
    • js-ipfs-api support
    • interface-ipfs-core tests
    • tests passing against js-ipfs and js-ipfs-api
  • get
    • draft api
    • api approval
    • js-ipfs support
    • js-ipfs-api support
    • interface-ipfs-core tests
    • tests passing against js-ipfs and js-ipfs-api

lower priority:

  • version
  • id
  • refs
  • pin
  • log

Testing in the JS IPFS Project

I want to start a discussion around testing, particularly around some of the JavaScript projects (js-ipfs, interface-ipfs-core, js-ipfs-api): really anything that requires a running IPFS daemon (i.e. consumes js-ipfsd-ctl).

I'm curious if there are any particular testing strategies someone is trying to drive in the js-ipfs project at the moment, or if it's more adhoc and we can start to discuss the way we want to approach testing here in this issue.

In the js-ipfs project there are four main testing suites that cover the four main interfaces of the project.

  • core
  • http
  • gateway
  • cli

I want to ultimately help contributors know when they should write tests in each suite, and provide documentation and tools to help everyone write great tests that provide value to the community and project.

All of the tests suites currently require running a full node by either instantiating the core module, or starting a daemon and talking to it through js-ipfs-api.

A great depiction of this can be found in the js-ipfs readme.

I took some time and benchmarked the node:* test commands for the js-ipfs project.

| Test Run | core   | http    | gateway | cli     | Total   |
|----------|--------|---------|---------|---------|---------|
| 1        | 24.40s | 113.33s | 3.88s   | 491.06s | 632.67s |
| 2        | 23.52s | 114.36s | 5.13s   | 501.95s | 644.96s |
| 3        | 24.39s | 114.07s | 6.98s   | 491.91s | 637.35s |
| 4        | 24.98s | 112.91s | 5.07s   | 491.04s | 634.00s |
| 5        | 23.86s | 113.06s | 5.59s   | 490.48s | 632.99s |
| 6        | 23.86s | 114.57s | 5.29s   | 492.69s | 636.41s |
| 7        | 22.40s | 113.37s | 8.79s   | 493.52s | 638.08s |
| 8        | 23.83s | 113.34s | 9.97s   | 498.46s | 645.60s |
| 9        | 21.07s | 114.43s | 5.66s   | 482.90s | 624.06s |
| 10       | 22.11s | 114.49s | 4.60s   | 506.10s | 647.30s |
| Avg      | 23.44s | 113.80s | 6.10s   | 494.01s | 637.34s |

From the table above we can see that the cli tests take the large majority of the time (though about 180s of this time is consumed by three tests, more info @ ipfs/jenkins#93).

The high test time makes sense given some details about what the cli tests are doing.

  • Most are executed twice: online (with a daemon, through the http-api) and offline (directly against core)
  • Commands are executed through a shell, spawning a new process for each command

Some questions

The js-ipfs project has some sharness tests, though they haven't really been touched for years. It would appear that running cli tests through node is the standard for the project. Should we remove the sharness tests?

Both the http-api and core are primarily tested through the interface-ipfs-core project. There are also some independent tests written out in the core and http-api folders. Should we strive to migrate these tests to the interface-ipfs-core project over time as features settle?

Do we see value in the cli tests as they are written at the moment, and do we believe we want to keep moving forward with the general approach currently loosely laid out in the tests?

/cc @diasdavid @victorbjelkholm

I will be responding shortly to this issue with some of my own thoughts.

List of API changes for app developers (e.g. files.cat)?

Is there any documentation with a list of the API changes suitable for app developers.

For example, I just got bitten by the change where files.cat started returning a Buffer instead of a Stream. I know the change was announced back in October in #162, but that was in development, so it's only now with 0.27 that it bites app developers.

Is there a full list of API changes from 0.26 to 0.27 so we can check for other issues?

The only other one I know of is that app developers who use WebRTC should switch their config to websocket-star.

Have I missed some list ?

Unstable tests in need of fixing

  1) interface-ipfs-core tests,.files
       "after all" hook:
     Error: Uncaught AssertionError: The libp2p node is not started yet (webpack:///node_modules/assert/assert.js:195:0 <- node_modules/aegir/src/config/karma-webpack-bundle.js:18859)

  

  2) interface-ipfs-core tests,.dag
       "before all" hook:
     Uncaught TypeError: Cannot set property 'state' of undefined (node_modules/mocha/mocha.js:5143)

  

  3) interface-ipfs-core tests,.dag
       "before all" hook:
     TypeError: Cannot read property 'call' of undefined
    at ipfs.id (webpack:///node_modules/interface-ipfs-core/js/src/dag.js:35:0 <- node_modules/aegir/src/config/karma-webpack-bundle.js:293100:13)
    at setImmediate (webpack:///src/core/components/id.js:14:0 <- node_modules/aegir/src/config/karma-webpack-bundle.js:259514:24)
    at webpack:///node_modules/async/internal/setImmediate.js:27:0 <- node_modules/aegir/src/config/karma-webpack-bundle.js:82322:16
    at run (webpack:///node_modules/setimmediate/setImmediate.js:40:0 <- node_modules/aegir/src/config/karma-webpack-bundle.js:101788:13)

From: https://ci.ipfs.team/blue/rest/organizations/jenkins/pipelines/IPFS/pipelines/js-ipfs/branches/master/runs/105/nodes/15/steps/171/log/?start=0

The tests only failed on macOS, so the tests work, but just not this time.

To make these tests less unstable we should:

  • Figure out a way of structuring the tests so they can't fail this way
  • Or, if that is impossible, add a this.retries(3) and increase the timeout.
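The second option can be sketched in plain Node; `withRetries` below is a hypothetical helper showing the idea behind mocha's this.retries(3), not mocha's actual machinery:

```javascript
// Re-run a flaky async test body up to `times` extra attempts,
// rethrowing the last error only if every attempt fails.
// Helper name `withRetries` is made up for illustration.
async function withRetries (times, fn) {
  let lastErr
  for (let attempt = 0; attempt <= times; attempt++) {
    try {
      return await fn(attempt)
    } catch (err) {
      lastErr = err
    }
  }
  throw lastErr
}

// Example: a body that fails on the first two attempts, then passes.
let calls = 0
withRetries(3, async () => {
  calls++
  if (calls < 3) throw new Error('flaky failure')
  return 'ok'
}).then(res => console.log(res, 'after', calls, 'attempts'))
// ok after 3 attempts
```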

proposal: Files API - don't return DAGNodes on add

I've been seeing a lot of users get confused by the fact that we return a DAGNode for each file on files.add. A DAGNode contains all the information we might have hoped for, like the multihash and the size of the tree. However, since it has a data field, users get confused and think the data of the file lives in that field, when in fact that data field holds just one piece of the whole file.

What if we changed the API to return, instead of DAGNodes, just an object with path, multihash and size?
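A hypothetical sketch of what the proposed files.add result could look like: one plain object per file, with no data field to mislead anyone. The path and multihash values below are placeholders, not real content:

```javascript
// Proposed shape (sketch): a plain object per added file instead of a
// full DAGNode. 'Qm...placeholder' stands in for a real multihash.
const addResult = [
  {
    path: 'hello.txt',
    multihash: 'Qm...placeholder',
    size: 12
  }
]

// Only the three fields the proposal names; no `data` field to confuse
// users into thinking it holds the file's content.
console.log(Object.keys(addResult[0])) // [ 'path', 'multihash', 'size' ]
```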

Q: dag.get() returns links as CBOR Tags (or whatever); how to get a valid multihash?

Hi,
dag.get() returns all links as Buffers (I think they are CBOR Tags), but it would be helpful to get back the original multihashes so the response actually corresponds to the original data used when creating the node. I just can't figure out how to decode these CBOR Tags back into multihashes. Using multihashes.toB58String(buffer) doesn't produce the correct value:

original CID of the link: zdpuAzEszwciEA9mVc1JEyhr46S5gFF5p7AXsCTAXBHowNVEd
vs the result of multihashes.toB58String: dpuAzEszwciEA9mVc1JEyhr46S5gFF5p7AXsCTAXBHowNVEd

Notice the result is one character shorter (missing 'z'), and this causes issues when I want to get a CID out of this object later (Error: multihash length inconsistent: 0x1220cd4f14b4b817102adc0f09b2d3edb4c3720e5ff4a65b6e5871034469ff55d16e)

dag.put({
  link: {
    "/": "zdpuAzEszwciEA9mVc1JEyhr46S5gFF5p7AXsCTAXBHowNVEd"
  },
}, {
  format: "dag-cbor",
  hashAlg: "sha2-256",
});

dag.get(rootCID, (err, node) => {
  // node.value.link["/"] is a Buffer, need to get a valid multihash again
  // multihashes.toB58String(node.value.link["/"]) doesn't work => produces invalid multihash
});
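One way to read the mismatch (an observation about the encoding, not a confirmed answer for this issue): the leading 'z' is a multibase prefix marking base58btc, while multihashes.toB58String returns the bare base58 string without any multibase prefix. Using the strings from the report above:

```javascript
// The bare base58 string that multihashes.toB58String produced.
const bare = 'dpuAzEszwciEA9mVc1JEyhr46S5gFF5p7AXsCTAXBHowNVEd'

// Prefixing with 'z' (the multibase code for base58btc) recovers the
// original CID string form.
const cidString = 'z' + bare
console.log(cidString)
// zdpuAzEszwciEA9mVc1JEyhr46S5gFF5p7AXsCTAXBHowNVEd
```

In practice, constructing a CID from the raw buffer with the cids module and serializing it from there is probably the right approach rather than string concatenation, but the exact API to use is an assumption here.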

De-duplicate the callback vs promise tests

Currently, we duplicate all the tests (yay) so we're testing both the promise and callback implementations. While the idea is good, the implementation (copy-paste) is not ideal.

  1. The proposed implementation is to have one function with a callback that automatically tests both implementations, removing all the duplicated tests.

  2. Another solution is to only test the callback vs promise interface in dedicated tests, and choose either callbacks or promises for the rest of the tests. But we might miss some places by doing this.
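Option 1 can be sketched in plain Node, under the assumption that every API method supports both styles. `cat` below is a stand-in method and 'QmFoo' a placeholder CID, not real API surface:

```javascript
// Stand-in for an API method that supports both callback and promise
// styles, like the real interface-ipfs-core methods.
function cat (cid, callback) {
  const result = Promise.resolve('data for ' + cid)
  if (typeof callback === 'function') {
    result.then(res => callback(null, res), callback)
    return
  }
  return result
}

// Promisify the callback style, so a single test body can exercise
// both interfaces without copy-pasting the assertions.
function asCallback (method) {
  return (...args) => new Promise((resolve, reject) =>
    method(...args, (err, res) => err ? reject(err) : resolve(res)))
}

async function testBoth () {
  for (const call of [cat, asCallback(cat)]) {
    const res = await call('QmFoo')
    if (res !== 'data for QmFoo') throw new Error('mismatch: ' + res)
  }
  console.log('both styles ok')
}

testBoth()
// both styles ok
```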

Proposal for Standardising API that can return buffered results vs Node.js Streams vs Pull Streams

tl;dr: this is a proposal to standardize the API definitions of function calls that may return buffered values, Node.js Streams or pull-streams.

I'll make this proposal as succinct as possible, with the hope that we can agree on it asap.

Our current needs and requirements:

  • Users want to be able to consume data coming from calls (i.e. files.{cat, get, ls}) as a single result value. It is extremely convenient.
  • We know that buffering everything doesn't work for a big part of the cases.
  • We like to use pull-streams everywhere because they save us from some Node.js hiccups and are extremely convenient between modules.
  • Users still need access to Node.js Streams, since they are the most widely used.
  • We have a strong commitment to not breaking user-space; however, if this standardization proves popular and more user-friendly, I believe we can open an exception as long as we don't lose functionality (only some name changes), under the condition of a smooth transition.

Essentially we have two options on the table

a) Use option arguments to specify what we are looking for

ipfs.files.cat(cid, {
  buffer: true         // buffer all the results and return a single value
  // PStream: true // return a Pull-Stream
  // NStream: true // return a Node.js Stream
}, callback)

b) Use different method names

ipfs.files.cat(cid, callback)
// or
ipfs.files.catPStream(cid)
// or
ipfs.files.catNStream(cid)

This option has a couple of advantages:

  • interpreter friendly - by having a method for each type, it will enable V8 to optimize it better
  • compared to what we have today, it will enable the functions that return streams to actually return streams, instead of 'calling back with a stream', enabling things like ipfs.files.catNStream(cid).pipe(process.stdout) without going into a callback. (The reason we had to return a stream in a callback was Promises, but that concern stays with ipfs.files.cat only.)

Once this proposal goes forward we need to:

  • Update the API definitions
  • Update the tests
  • Update the implementations
  • Update the internals that expose pull-streams, renaming things like getStream to getPStream.
