
helia's Introduction

Helia logo


Helia is a lean, modular, and modern TypeScript implementation of IPFS for the prolific JS and browser environments.

See the Manifesto, the FAQ, and the State of IPFS in JS blog post from October 2022 for more info.

🌟 Usage

A quick overview of how to get different types of data in and out of your Helia node.

🪒 Strings

You can use the @helia/strings module to easily add and get strings from your Helia node:

import { createHelia } from 'helia'
import { strings } from '@helia/strings'

const helia = await createHelia()
const s = strings(helia)

const myImmutableAddress = await s.add('hello world')

console.log(await s.get(myImmutableAddress))
// hello world

🌃 JSON

The @helia/json module lets you add or get plain JS objects:

import { createHelia } from 'helia'
import { json } from '@helia/json'

const helia = await createHelia()
const j = json(helia)

const myImmutableAddress = await j.add({ hello: 'world' })

console.log(await j.get(myImmutableAddress))
// { hello: 'world' }

🌠 DAG-JSON

The @helia/dag-json module allows you to store references to linked objects as CIDs:

import { createHelia } from 'helia'
import { dagJson } from '@helia/dag-json'

const helia = await createHelia()
const d = dagJson(helia)

const object1 = { hello: 'world' }
const myImmutableAddress1 = await d.add(object1)

const object2 = { link: myImmutableAddress1 }
const myImmutableAddress2 = await d.add(object2)

const retrievedObject = await d.get(myImmutableAddress2)
console.log(retrievedObject)
// { link: CID(baguqeerasor...) }

console.log(await d.get(retrievedObject.link))
// { hello: 'world' }

🌌 DAG-CBOR

@helia/dag-cbor works in a similar way to @helia/dag-json but stores objects using Concise Binary Object Representation:

import { createHelia } from 'helia'
import { dagCbor } from '@helia/dag-cbor'

const helia = await createHelia()
const d = dagCbor(helia)

const object1 = { hello: 'world' }
const myImmutableAddress1 = await d.add(object1)

const object2 = { link: myImmutableAddress1 }
const myImmutableAddress2 = await d.add(object2)

const retrievedObject = await d.get(myImmutableAddress2)
console.log(retrievedObject)
// { link: CID(baguqeerasor...) }

console.log(await d.get(retrievedObject.link))
// { hello: 'world' }

🐾 Next steps

Check out the helia-examples repo to see how to do just about anything with your Helia node.
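
For example, adding and reading raw bytes with the @helia/unixfs module (which appears in the system diagram and issues later in this document) looks roughly like this; a minimal sketch rather than a complete, tested example:

import { createHelia } from 'helia'
import { unixfs } from '@helia/unixfs'

const helia = await createHelia()
const fs = unixfs(helia)

// add some bytes and get back an immutable address (a CID)
const cid = await fs.addBytes(Uint8Array.from([0, 1, 2, 3]))

// stream the bytes back out again
for await (const chunk of fs.cat(cid)) {
  console.info(chunk)
}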

πŸƒβ€β™€οΈ Getting Started

Check out the Helia examples repo, which covers a wide variety of use cases. If you feel something has been missed, follow the contribution guide and create a PR to the examples repo.

📗 Project Docs

📒 API Docs

πŸ“ System diagram

graph TD;
    User["User or application"]-->IPNS["@helia/ipns"];
    User-->UnixFS["@helia/unixfs"];
    User-->Libp2p;
    User-->Datastore;
    User-->Blockstore;
    UnixFS-->Blockstore;
    IPNS-->Datastore;
    subgraph helia [Helia]
      Datastore
      Blockstore-->BlockBrokers;
      BlockBrokers-->Bitswap;
      BlockBrokers-->TrustlessGateways;
      Libp2p-->DHT;
      Libp2p-->PubSub;
      Libp2p-->IPNI;
      Libp2p-->Reframe;
    end
    Blockstore-->BlockStorage["File system/IDB/S3/etc"];
    Datastore-->DataStorage["Level/S3/IDB/etc"];
    Bitswap-->Network;
    TrustlessGateways-->Gateway1;
    TrustlessGateways-->GatewayN;
    DHT-->Network;
    PubSub-->Network;
    IPNI-->Network;
    Reframe-->Network;

🏭 Code Structure

Helia embraces a modular approach and encourages users to bring their own implementations of various APIs to suit their needs.
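
For example, the default stores can be swapped for persistent ones by passing your own blockstore and datastore to createHelia; a rough sketch using the blockstore-fs and datastore-fs modules that appear in the issues later in this document (exact options may vary between versions):

import { createHelia } from 'helia'
import { FsBlockstore } from 'blockstore-fs'
import { FsDatastore } from 'datastore-fs'

// persist blocks and key/value data to disk instead of memory
const helia = await createHelia({
  blockstore: new FsBlockstore('./blocks'),
  datastore: new FsDatastore('./datastore')
})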

The basic Helia API is defined in:

The API is implemented by:

Helia also ships a number of supplemental libraries and tools that can be combined with Helia API implementations to accomplish tasks in distributed and trustless ways.

These libraries are not intended to be the "one true implementation" of any given API, but are made available for users to include depending on the need of their particular application:

An interop suite ensures everything is compatible:

Other modules

There are several other modules available outside this repo:

📣 Project status

Helia v1 shipped in March 2023 (see releases), and development keeps on trucking as we work on initiatives in the roadmap and make performance improvements and bug fixes along the way.

πŸ›£οΈ Roadmap

Please find and comment on the Roadmap here.

👫 Get involved

  • Watch our Helia Demo Day presentations here
  • We share progress at periodic Helia Demos. This is a good place to find out the latest and learn of ways to get involved. We'd love to see you there!
  • Pick up one of the issues.
  • Come chat in Filecoin Slack #ip-js. (Yes, we should bridge this to other chat environments. Please comment here if you'd like this.)

🤲 Contribute

Contributions welcome! Please check out the issues.

Also see our contributing document for more information on how we work, and about contributing in general.

Please be aware that all interactions related to this repo are subject to the IPFS Code of Conduct.

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

πŸ›οΈ Notable Consumers/Users

🌞 Branding

🪪 License

Licensed under either of

helia's People

Contributors

2color, achingbrain, autonome, betterworld-liuser, biglep, bjrint, dependabot[bot], flamenco, galargh, github-actions[bot], ipfs-mgmt-read-write[bot], jbenet, juliomatcom, meandavejustice, mmsaki, saul-jb, semantic-release-bot, sgtpooki, shuoer86, tinytb, vmx, web-flow, web3-bot, whizzzkid


helia's Issues

Document our development and contributing guide

Done Criteria

There is an easily discoverable document that bootstraps someone on contributing, sets expectations of contributors/developers, and answers related frequently asked questions.

Why Important

Having a community of contributors is a key success criterion for Helia. I hypothesize that documentation here helps make contributing more approachable/self-service.

Notes

I know we link to https://github.com/ipfs/community/blob/master/CONTRIBUTING_JS.md. It looks like that has some older stuff. I think we either need to update that doc inline, copy it and make updates, or create an addendum with Helia specifics and then link to https://github.com/ipfs/community/blob/master/CONTRIBUTING_JS.md. I expect we want our own place to add copy just so there is no friction to documenting Helia-related things. I worry folks would shy away from touching https://github.com/ipfs/community/blob/master/CONTRIBUTING_JS.md because they don't know everywhere it's linked from, or because Helia specifics aren't relevant there.

Items that would want to see included:

  • Link to the release process/philosophy (#81)
  • Set clear expectations on how PRs are to be titled/written.
  • Where can someone show up to engage with other contributors (e.g., chat, triage)
  • Where the in-progress and on-deck work is tracked (presumably linking to a Helia project board)
  • Expectation that PRs are linked to issues that provide rationale and justification for the work. (It should be clear to a reviewer "why are we doing this")
  • Include some of the verbiage that is copy/pasted in various places like "Please be aware that all interactions related to this repo are subject to the IPFS Code of Conduct" or "Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions." We can then remove that copy/pasted text.
  • How does someone become a maintainer? (We can say that this process still needs to be fully determined, but it starts with making contributions and demonstrating good judgment)

(urgent) facing issues while saving data permanently...

I have built a decentralized chat web app and am trying to save user data permanently on the user's system. When a user opens my web app and starts chatting, I am able to save their data until the chat ends and fetch the CID (like: bafkreih5bdsgt6g6ohr7ma6ifbfar6fytbbjyvnyb5hlbytlnen5vtnhyy). But when I reload the page and try to fetch the data using the CID, I'm getting an error.

Here is the code for creating a node:

const [fs,setFs] = useState();

const node = await createLibp2p({
    addresses: {
      listen: [
        "/dns4/wrtc-star1.par.dwebops.pub/tcp/443/wss/p2p-webrtc-star",
        // "/dns4/wrtc-star2.sjc.dwebops.pub/tcp/443/wss/p2p-webrtc-star",
      ],
    },
    transports: [webSockets(), wrtcStar.transport],
    connectionEncryption: [noise()],
    streamMuxers: [mplex()],
    peerDiscovery: [
      wrtcStar.discovery,
      bootstrap({
        list: [
          "/dnsaddr/bootstrap.libp2p.io/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
          "/dnsaddr/bootstrap.libp2p.io/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
        ],
      }),
    ],
    dht: kadDHT(),
    pubsub: floodsub(),
  });

  await node.start();
  setNodE(node);

  //creating helia node.
  const blockstore = new MemoryBlockstore();
  const datastore = new MemoryDatastore();

  const helia = await createHelia({
    node,
    blockstore,
    datastore

  });

  const fsTemp = unixfs(helia);
  setFs(fsTemp);       
} 

Here is the code to save data:

const saveChat = async()=>{
const chatStr = localStorage.getItem(account+'room');
if (chatStr && account) {
  
  const encoder = new TextEncoder();
  const bytes = encoder.encode(chatStr);

  const _cid = await fs.addBytes(bytes);
  const cid = _cid.toString();
  console.log("cid is : "+cid);
  localStorage.setItem(account + "data", cid);
  
} else {
  toast.error(`connect ur 👛 & 🫡 Data is empty.`);
}
}

Here is the code to fetch data:

const getChat = async()=>{

const cid = localStorage.getItem(account+'data');

if(nodE && cid){

  const decoder = new TextDecoder();
  let text = "";

   for await (let chunk of fs.cat(cid)) {     // line where the error occurred
     text += decoder.decode(chunk, {
       stream: true,
     });
   }

  console.log(text)

  const chatObj = JSON.parse(text);
  console.log(chatObj);

}else toast.error(`nothing to show...`)

}

Saving and fetching data works fine within one instance of Helia, but when I restart and try to fetch the data (I'm saving the CID locally) I get this error:

Error: Not Found

console.log(b);
> 310 |      for await (let chunk of fs.cat(cid)) {
      |                    ^
  311 |        text += decoder.decode(chunk, {
  312 |          stream: true,
  313 |        });

I just want to save user data permanently on their system using ipfs/helia.

more about use of whatwg/fs

Currently the web uses IndexedDB, but there is a new kid on the web called whatwg/fs, and there is also the File System Access API.

I would love to be able to create a mutable file system, simply create an IPFS node, share its identifier, and load content as just <cid>/path/src/index.html or something.

Having file system access also means that files can be changed or added dynamically without JS even knowing anything about it, so it would be cool to have some kind of dynamic support for this.

I don't want to have to put all my files in IndexedDB and duplicate all the data being stored.

I want to see code examples, docs, and best practices for how to best use IPFS with whatwg/fs (OPFS) and the File System Access API.

Pinning and Garbage Collection

Context

Garbage collection in js-ipfs originally followed the go-ipfs model whereby pins were stored in a big DAG that was traversed to work out which blocks could be deleted and which couldn't while running garbage collection.

ipfs/js-ipfs#2771 changed that to store the pins in the datastore instead of a big DAG which yielded a massive speed up when adding new pins, but garbage collection was still slow because the algorithm has to walk every dag that's pinned to build up a list of blocks in those dags.

Helia gives us an amazing opportunity to solve that slow garbage collection problem. This would be incredibly valuable to pinning services, for example, who typically don't garbage collect anything because their blockstores are so large that the time it takes to run GC makes it impractical to do so.

Gotchas

  • Js-ipfs GC is a stop-the-world model, i.e. the blockstore cannot be used while GC occurs, in case add operations add blocks that GC then immediately deletes. Coming up with a clever way to not have to do this would be greatly appreciated
  • Two CIDs with different versions and/or codecs can have the same multihash - if both are pinned the removal of one pin should not delete the blocks for the other
  • If the application crashes while creating a pin, it should mark the pin as failed
  • Manually deleting blocks from the blockstore should be prevented when the block being deleted is part of a pinned DAG
  • Datastore keys are used for storing pin metadata - these can be stored on filesystems so all keys should be case-insensitive

Interface

An interface to the pinning system might look like this (somewhat similar to js-ipfs):

import { CID } from 'multiformats/cid'
import type { AbortOptions } from '@libp2p/interfaces'

enum PinStatus {
  /**
   * All blocks in the pin have been stored in the blockstore
   */
  pinned = 'pinned',

  /**
   * The pin is being created, blocks in the DAG are still being fetched from
   * the network
   */
  pending = 'pending',

  /**
   * Not all blocks could be fetched from the network - this is usually because
   * the abort signal passed into the `pin.add` operation emitted its `abort` event.
   */
  failed = 'failed'
}

interface AddOptions extends AbortOptions {
  /**
   * When pinning a DAG, Helia will ensure that all blocks in the DAG are present in
   * the blockstore which may involve network operations. By default Helia will traverse
   * the entire DAG but pass a depth here to limit that behaviour.
   */
  depth?: number

  /**
   * A user-chosen name for the pin
   */
  name?: string

  /**
   * User-specific metadata for the pin
   */
  metadata?: Record<string, string | number | boolean>

  /**
   * Receives progress events
   */
  progress?: (evt: Event) => void
}

interface RmOptions extends AbortOptions {
  /**
   * Receives progress events
   */
  progress?: (evt: Event) => void
}

interface LsOptions extends AbortOptions {
  type?: PinType
}

interface Pin {
  /**
   * The current status of the pin
   */
  status: PinStatus

  /**
   * The pinned CID
   */
  cid: CID

  /**
   * The pin name
   */
  name?: string

  /**
   * `Infinity` for a recursive pin, 1 for a direct pin or an arbitrary number
   */
  depth: number

  /**
   * User-specific metadata for the pin
   */
  metadata: Record<string, string | number | boolean>
}

interface Pinning {
  /**
   * Pin the block that corresponds to the passed CID. If the DAG in the pinned block
   * contains CIDs, the blocks corresponding to those CIDs will also be pinned.  Pass
   * `{ direct: true }` to only pin the top level block.
   */
  add: (cid: CID, opts?: AddOptions) => Promise<void>

  /**
   * Unpin the block that corresponds to the passed CID. If the DAG in the pinned block
   * contains CIDs, the blocks corresponding to those CIDs will also be unpinned.  Pass
   * `{ direct: true }` to only unpin the top level block.
   */
  rm: (cid: CID, opts?: RmOptions) => Promise<void>

  /**
   * List all pins stored by this node
   */
  ls: (opts?: LsOptions) => AsyncGenerator<Pin>
}

interface GCOptions {
  /**
   * Receives progress events
   */
  progress: (evt: Event) => void
}

interface Helia {
  // ...other methods here...

  /**
   * Run garbage collection on this node - any blocks that are not pinned will be deleted
   */
  gc: (opts?: GCOptions) => Promise<void>

  /**
   * The pinning API
   */
  pin: Pinning
}
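
To make the proposal concrete, here is a rough usage sketch of the interface above (hypothetical, since this API is a proposal rather than something that ships today):

// pin a DAG, list the pins, then garbage collect everything unpinned
await helia.pin.add(cid, { name: 'my-data', metadata: { app: 'example' } })

for await (const pin of helia.pin.ls()) {
  console.info(pin.cid.toString(), pin.status, pin.depth)
}

await helia.gc()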

Strategies

Some benchmarking will be required to choose the appropriate pinning strategy. The benchmarks should store several hundred thousand pins of varying depths before running GC.

Classic

  • Store a /pin/${cid.multihash} object for each pin:
interface Pin {
  /**
   * A user friendly name for the pin
   */
  name?: string

  /**
   * `Infinity` for a recursive pin, 1 for a direct pin or an arbitrary number
   */
  depth: number

  /**
   * User-specific metadata for the pin
   */
  metadata: Record<string, string | number | boolean>

  /**
   * The codec from the CID that was pinned
   */
  codec: number

  /**
   * The version from the CID that was pinned
   */
  version: number
}
  • When running GC the CID is recreated from the version & codec stored in the pin and the multihash from the pin's datastore key (see the sketch after this list)
  • All recreated CIDs are traversed, a set of all pinned CIDs is created, then all blocks in the datastore that do not have CIDs with multihashes corresponding to the pinned CIDs are deleted
  • Really slow!
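
Recreating the CID from the stored fields might look roughly like this; a sketch assuming the multiformats CID and multihash digest APIs, with `pin` and `multihashBytes` as illustrative names:

import { CID } from 'multiformats/cid'
import * as Digest from 'multiformats/hashes/digest'

// `pin` is the stored Pin record; `multihashBytes` is parsed back
// out of the `/pin/${cid.multihash}` datastore key
const digest = Digest.decode(multihashBytes)
const cid = CID.create(pin.version, pin.codec, digest)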

Reference counting

  • Store a /pin/${cid} object for each pin:
interface Pin {
  /**
   * A user friendly name for the pin
   */
  name: string

  /**
   * `Infinity` for a recursive pin, 1 for a direct pin or an arbitrary number
   */
  depth: number

  /**
   * User-specific metadata for the pin
   */
  metadata: Record<string, string | number | boolean>
}
  • Also store an object in the datastore for every pinned block referenced by the multihash of the block, e.g. '/pinned-block/${cid.multihash}'
interface PinnedBlock {
  pinCount: number
  pinnedBy: CID[]
}
  • When a block is pinned, increment pinCount by 1, creating the PinnedBlock entry if necessary
  • Also store the root CID that is being pinned so the user can be informed of which pin to remove in order to delete the block
  • When a block is unpinned, decrement pinCount - if it's then zero, remove the /pinned-block/${mh} key from the datastore
  • When running GC, delete any block without a corresponding PinnedBlock entry in the datastore - this should be nicely parallelizable (see the sketch after this list)
  • When using the helia.blockstore.delete method only checking for the presence of a PinnedBlock entry should be sufficient to prevent a pinned block from being deleted accidentally
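
A minimal sketch of that GC pass, assuming interface-datastore/interface-blockstore style APIs and a base32-encoded multihash in the PinnedBlock key (all illustrative choices, not a settled design):

import { Key } from 'interface-datastore'
import { base32 } from 'multiformats/bases/base32'

// any block without a corresponding PinnedBlock entry is garbage
for await (const { cid } of blockstore.getAll()) {
  const key = new Key(`/pinned-block/${base32.encode(cid.multihash.bytes)}`)

  if (!(await datastore.has(key))) {
    await blockstore.delete(cid)
  }
}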

Something else?

We are open to suggestions, but all implementations should be benchmarked.

Compatibility with deno

For an ActivityPub project I need IPFS in Deno, and this seems the most promising option to me.
Currently it seems that crypto.generateKeyPair is not supported; the call comes from
https://github.com/libp2p/js-libp2p-crypto

See https://stackoverflow.com/questions/72584422/how-to-convert-node-crypto-generatekeypairsync-to-deno

This is the error stack:

[AsyncFunction: createLibp2p] [Class: MemoryDatastore] [Class: MemoryBlockstore] [Function: unixfs] [Class: CID] [AsyncFunction: createHelia]
error: Uncaught Error: Not implemented: crypto.generateKeyPair
    at notImplemented (ext:deno_node/_utils.ts:7:11)
    at generateKeyPair (ext:deno_node/internal/crypto/keygen.ts:8:5)
    at ext:deno_node/internal/util.mjs:83:15
    at new Promise (<anonymous>)
    at generateKeyPair (ext:deno_node/internal/util.mjs:68:12)
    at Module.generateKey (file:///Users/ed/Library/Caches/deno/npm/registry.npmjs.org/@libp2p/crypto/1.0.14/dist/src/keys/ed25519.js:31:23)
    at Module.generateKeyPair (file:///Users/ed/Library/Caches/deno/npm/registry.npmjs.org/@libp2p/crypto/1.0.14/dist/src/keys/ed25519-class.js:105:52)
    at generateKeyPair (file:///Users/ed/Library/Caches/deno/npm/registry.npmjs.org/@libp2p/crypto/1.0.14/dist/src/keys/index.js:35:34)
    at createEd25519PeerId (file:///Users/ed/Library/Caches/deno/npm/registry.npmjs.org/@libp2p/peer-id-factory/2.0.3/dist/src/index.js:6:23)
    at createLibp2pNode (file:///Users/ed/Library/Caches/deno/npm/registry.npmjs.org/libp2p/0.43.3/dist/src/libp2p.js:395:32)

Outdated libp2p-interface

Currently, helia uses "@libp2p/interface-libp2p": "^1.1.0", but the actual version in this repo is "version": "3.1.0". I am using TypeScript and I'm getting a lot of errors. To be more specific, one example is with pubsub: on version 3.1.0, libp2p exposes pubsub on the services property (i.e. libp2p.services.pubsub), whereas the older version exposes pubsub at the root of the object (i.e. libp2p.pubsub). This produces linting errors, but maybe it could be fixed easily, not sure? There are probably a lot more incompatibilities, but another one I noticed is while passing libp2p in the options of the createHelia function.
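
For illustration, the access pattern changed roughly like this (a sketch; exact shapes depend on the interface version):

// older interface versions: pubsub hangs off the root object
libp2p.pubsub.subscribe('my-topic')

// newer interface versions: configured services are exposed under `services`
libp2p.services.pubsub.subscribe('my-topic')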

Cheers

FATAL ERROR: heap out of memory, code from docs crashes

I am using Ubuntu Linux with Node.js v20.1.0.
I am running basic code from the helia/unixfs docs.

async function test(){
const {FsDatastore} = await import('datastore-fs')
const {FsBlockstore} = await import('blockstore-fs')
const helias = await import('helia')
const {unixfs} = await import('@helia/unixfs')
const { CID } = await import('multiformats/cid')

const helia = await helias.createHelia({
  blockstore: new FsBlockstore(require('path').join(__dirname, 'data')),
  datastore: new FsDatastore(require('path').join(__dirname, 'data'))
})

const fs = unixfs(helia)

for await (const entry of fs.addAll([{
  path: 'foo.txt',
  content: Uint8Array.from([0, 1, 2, 3])
}])) {
  console.info(entry)
}
return null
}

test().then(console.log).catch(console.error)

The script returns what it should, but the problem is that after a few minutes the script crashes. The error says that it is out of memory.

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory

This is the rest of the error message:

1: 0xc83eb0 node::Abort() [node]
 2: 0xb698df  [node]
 3: 0xe9ea20 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [node]
 4: 0xe9ed07 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [node]
 5: 0x10b0375  [node]
 6: 0x10b0904 v8::internal::Heap::RecomputeLimits(v8::internal::GarbageCollector) [node]
 7: 0x10c77f4 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::internal::GarbageCollectionReason, char const*) [node]
 8: 0x10c800c v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
 9: 0x1120dcc v8::internal::MinorGCJob::Task::RunInternal() [node]
10: 0xcfb386  [node]
11: 0xcfdf3f node::PerIsolatePlatformData::FlushForegroundTasksInternal() [node]
12: 0x185f1e6  [node]
13: 0x187164d  [node]
14: 0x185fb8e uv_run [node]
15: 0xba3ba6 node::SpinEventLoopInternal(node::Environment*) [node]
16: 0xcd1476 node::NodeMainInstance::Run() [node]
17: 0xc3d19f node::LoadSnapshotDataAndRun(node::SnapshotData const**, node::InitializationResultImpl const*) [node]
18: 0xc40542 node::Start(int, char**) [node]
19: 0x7f0050229d90  [/lib/x86_64-linux-gnu/libc.so.6]
20: 0x7f0050229e40 __libc_start_main [/lib/x86_64-linux-gnu/libc.so.6]
21: 0xba12ae _start [node]
Aborted (core dumped)

Am I doing something wrong here, or did I use the wrong options? Any ideas would be great.

Unable to connect to ipfs cluster from helia node

I'm trying to create a Helia browser node and connect it to an IPFS cluster running locally, but I keep getting the error below:

The dial request has no valid addresses

The code is provided below:

await createLibp2p({
  datastore,
  addresses: {
    listen: [],
    swarm: [
      '/dns4/wrtc-star1.par.dwebops.pub/tcp/443/wss/p2p-webrtc-star'
    ]
  },
  transports: [
    webRTC()
  ],
  connectionEncryption: [
    noise()
  ],
  streamMuxers: [
    yamux()
  ],
  services: {
    identify: identifyService()
  }
})
const heliaNode = await getHelia();
const addr = multiaddr("/ip4/127.0.0.1/tcp/9094");
heliaNode.libp2p.dial(addr) //Throws error

Thanks

Provide a default libp2p instance

Done Criteria

Helia can be constructed without a libp2p instance. In this case, a "good default" libp2p instance is provided.
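
In other words, something like the following should just work, with Helia supplying the libp2p node (this mirrors the usage examples earlier in the README):

import { createHelia } from 'helia'

// no libp2p option passed - a well-configured default instance is created
const helia = await createHelia()

console.info(helia.libp2p.peerId.toString())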

Why Important

libp2p exposes a lot of knobs. Knowing the right settings for a good connectivity story is not clear to new users. A good default reduces the barrier to entry and also serves as a base configuration to potentially tweak.

Notes

This default libp2p instance should be well commented so that interested users can learn from it and determine how they might follow suit or tweak it.

bug: `heliaInstance.libp2p.addEventListener('peer:discovery', cb)` calls cb twice for same peer

Peer discovery should only emit once, unless we are in fact discovering the peer twice via different methods somehow.

Either way, once a peer is discovered, we should cancel any in-progress operations and emit that we discovered the peer only once.
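
As a consumer-side workaround (not a fix for the underlying bug), duplicate events can be filtered with a seen-set; a sketch assuming the event detail carries the discovered peer's id:

const seen = new Set()

helia.libp2p.addEventListener('peer:discovery', (evt) => {
  const peerId = evt.detail.id.toString()

  if (seen.has(peerId)) {
    return
  }

  seen.add(peerId)
  console.log(`Discovered peer ${peerId}`)
})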

While working on a new helia-script-tag example (via #43), I was able to get the following event log:

1263ms - Created LibP2P instance
1263ms - Created Helia instance
2248ms - Discovered peer QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN
2252ms - Discovered peer QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN
2252ms - Discovered peer QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa
2252ms - Discovered peer QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa
2253ms - Discovered peer QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb
2254ms - Discovered peer QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb
2254ms - Discovered peer QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt
2254ms - Discovered peer QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt
74326ms - Connected to QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb
77805ms - Connected to QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt
135298ms - Disconnected from QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb
138782ms - Disconnected from QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt

feat: display helia metrics in probelab

Note: the probelab side of things is being documented at https://pl-strflt.notion.site/Helia-as-part-of-CMI-4ccbd109ce404f8fa4b4b02fd8af0b3f

Tasks

Multiple/alternative data retrieval implementations

Currently Helia takes a blockstore that it enhances with bitswap. This creates a hard dependency on bitswap.

To enable experimentation and adoption of faster/more use-case specific retrieval protocols (cars, graphsync, XYZNewFutureProtocol etc) we should allow this to be a configuration option.

At this point blocks may not be the correct abstraction since it limits us to a block as the unit of data you get in response to a CID.

A better read abstraction might be a CID to a stream of Uint8Arrays? Then the underlying retrieval method can apply whatever optimisations it can to fetch the data quickly, and the calling code doesn't have to keep going back to fetch another block for another CID.

interface Options {
  offset?: number
  length?: number
}

interface ContentReader {
  get (cid: CID, options: Options): AsyncGenerator<Uint8Array>
}
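
A rough sketch of how calling code might consume such a reader (hypothetical, since the interface above is only a proposal):

import type { CID } from 'multiformats/cid'

// stream the content for a CID without caring how it was retrieved
async function readAll (reader: ContentReader, cid: CID): Promise<Uint8Array> {
  const chunks: Uint8Array[] = []
  let length = 0

  for await (const chunk of reader.get(cid, {})) {
    chunks.push(chunk)
    length += chunk.byteLength
  }

  // concatenate the chunks into a single buffer
  const output = new Uint8Array(length)
  let offset = 0

  for (const chunk of chunks) {
    output.set(chunk, offset)
    offset += chunk.byteLength
  }

  return output
}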

Questions:

  • Does this shift complexity of interpreting block data on to the content reader?
  • What does the writer interface look like?
    • Can the writer/reader interfaces be asymmetric? E.g. CIDs/Blocks in, CID/Stream out?
  • Does this assume file data?
  • What about structures like unixfs where the root block has file metadata and then file data in leaf nodes?
  • If DAGs are all dag-pb, dag-cbor or dag-json we can make some assumptions about structure?

Update public written assets (e.g., docs)

This is a tracking issue for all the areas that need to be updated once Helia is released and js-ipfs is being phased out.

The specific done criteria are TBD for now. We are dumping TODOs here so we don't lose track of them.

@biglep 202305 feedback on Migration Guide and FAQ

Below is some feedback on recent new wiki pages. (Thanks for putting these together.)

Cross-cutting:

  • ✅ Is it js-ipfs or js-IPFS? I see we are using "js-IPFS" in these new docs, but previously I had mostly seen it written as "js-ipfs" (e.g., https://js.ipfs.tech/ )
    • Agreed to go with "js-IPFS"
  • ✅ Idea: at least while these docs live in the wiki, consider not having an inline TOC but relying on the autogenerated TOC that GitHub wiki provides on the right-hand side. This is then one less thing someone needs to update.
    • Agreed to rely on wiki-generated TOCs. The pages below have been updated.
  • ✅ Update the README calling out that there are more/other docs in the wiki.

https://github.com/ipfs/helia/wiki/Migrating-from-js-IPFS

  • Bitswap
    • ✅ Can we show the code snippet for what this looks like to get all the metrics? I probably didn't poke around long enough, but this wasn't immediately jumping out to me from the links provided.
  • Other

https://github.com/ipfs/helia/wiki/FAQ

Wiki link formatting

I noticed that the headers for various wiki pages were recently changed to use emojis


However, this means that links to pages now look like:
https://github.com/ipfs/helia/wiki/%E2%9D%93-FAQ
https://github.com/ipfs/helia/wiki/%F0%9F%9A%9B-Migrating-from-js-IPFS

Also, this means that links like the ones below trigger the "GitHub wiki new page workflow" since this is technically a different link than the emoji-ed link:
https://github.com/ipfs/helia/wiki/FAQ
https://github.com/ipfs/helia/wiki/Migrating-from-js-IPFS

While the emojis do add a nice visual touch, this will probably cause some confusion when sharing links. I'm also thinking about linking into these docs from other doc sets like ipfs-docs, etc.

Suggestion:

Remove emojis from titles to avoid confusion, formatting issues, etc.

Why two code bases?

Why not stick w/ golang, compile it to webassembly for running in the browser using what's available in pion, and then create an event bus to wrap the whole thing? Then the typescript wrapper can make calls via the event bus?

[Epic] v1 release

This is a placeholder issue for tracking all the work items to get this new IPFS-in-JS implementation released. More details will be added soon...

Document what runtime environments Helia is tested and supported on

Done Criteria

It is self-service for a user to understand what runtimes the maintainers test Helia with and commit to spending time maintaining (e.g., troubleshooting issues).

Why Important

Sets clear expectations for users as they decide whether to adopt the project.
Makes clear for maintainers what level of support they need to commit to as they make changes.

Notes

  1. We should be clear about what the policy is around versions.
  2. Where we don't support something, let's be clear that we aren't supporting it.
  3. I assume this is largely dependent on js-libp2p's support policy. We can link to that (or if we need to use this as a forcing function to document it, let's create it.)
  4. I assume we're going to say something like:
    • Node JS: current and active LTS versions
    • Browsers
      • Desktop (last two major versions)
        • Chromium
        • Firefox
        • Safari
      • Mobile/Tablet (last two OS releases)
        • Android Chromium
        • iOS/iPadOS Safari

Write up manifesto explaining the motivation for this IPFS implementation

  • Problems we saw & learned about from js-ipfs
  • Goals for a new implementation
    • Support multiple filesystems, make winfs/unixfs swappable
    • Permissions: want UCAN (or something) to control access to the HTTP RPC API
      • If people mistakenly expose their node to the public, third parties can't do anything harmful
  • Pitfalls, and how we'll avoid them
    • Non-goals: what we don't want to re-invent
      • e.g. Networking layer

Examples porting from js-ipfs to helia-examples

Done Criteria

The relevant examples below from https://github.com/ipfs-examples/js-ipfs-examples have been replicated in https://github.com/ipfs-examples/helia-examples

Why Important

  1. Tangible examples that are verified as part of CI are a great hands-on way for a user to get started.
  2. The act of porting/writing examples is a great way for new contributors to help and to learn, flex, and give feedback on the Helia API

User/Customer

  1. New users of Helia
  2. Maintainers

List of examples (in priority order)

Tooling integration

Features

Backlog

Features (Bigger endeavors since they are standalone sites)

  1. P2 effort/days exp/intermediate kind/enhancement kind/maintenance need/maintainer-input status/blocked
    SgtPooki
  2. P1 effort/weeks exp/expert kind/enhancement status/ready topic/dependencies
    SgtPooki

Features (Skip for now, external dependency on webrtc browser to browser transport)

Features (Skip for now, external dependency on rpc client)

Back burner (Skip for now, may need more thought or not be necessary)

Notes

Document the release process / expectations

Done Criteria

We have a durable document that covers Helia's release philosophy and mechanics, and is easily discoverable.

Why Important

Releases are our delivery mechanism for translating the great development work into a form that consumers can easily use.
We want it documented:

  1. for resiliency so we're not reliant on tribal knowledge
  2. helps build confidence and set expectations for consumers
  3. gives insights to other contributors

Notes

Specific steps I assume we'll need to take:

  • Create a durable document like RELEASE_PROCESS.md
  • Add a link to the release process document from a Release Process section in the README

Questions I think the doc needs to answer:

  • Who does a release?
  • How often do we release? Basically help answer for someone what the triggering conditions are for a release. Is it whenever maintainers feel like it, when someone asks, time bound? I want to make sure we set people's expectations appropriately.
  • What relationship do Helia releases have to js-libp2p releases? For example, if there's a new js-libp2p release, what's the upper bound we'll go before issuing a new Helia release?
  • What are the steps in doing a release?
  • Some description/overview of the machinery under the covers that does the automation
  • Is there anything we have to be sensitive to when doing releases (e.g., we don't do them on Fridays)?
  • What to do when a commit makes it in that doesn't follow the expected naming format
  • Do we do any editorializing on top of the release notes to give a "tl;dr" or "what matters in the release"? (I think it can be important to make sure we step back and think about why this release matters for consumers beyond bullet points of one-liners)
  • Where do we document migration or upgrade steps?
  • Is there any manual testing or can the release happen as long as CI is green?
  • What announcing do we do after the release? (I think ideally we should make a discuss.ipfs.tech post which also posts into chat, and a blog.ipfs.tech post which just redirects to the release notes)
  • After there's a Helia release, do we proactively update the version anywhere (e.g., other Helia repos, helia-examples)?

I know I'm throwing a lot here. I do think it's better for us to get something going that we progressively add to than nothing.

Potentially related documents to draw from that answer a lot of the questions above for their context:

  1. Boxo: ipfs/boxo#170
  2. Kubo release process: https://pl-strflt.notion.site/Kubo-Release-Process-5a5d066264704009a28a79cff93062c4
  3. Kubo release steps: https://github.com/ipfs/kubo/blob/master/docs/RELEASE_ISSUE_TEMPLATE.md

Name this project

What

IPFS Stewards are choosing a name for this project. As explained in ipfs/ipfs#470, the name will not include "IPFS".

We are using the placeholder "Pomegranate" name for now. We are now having a vote for a new name. The initial brainstorm & vote for a new name was held during IPFS Camp 2022.

Note: we reserve the right to ignore every instance of Boaty McBoatface.

👉 Feel free to comment in ipfs/ipfs#495 with a name (and, if possible, a logo).

Why

This is part of a wider ecosystem epic (ipfs/ipfs#470) where we clarify that IPFS is a set of interoperable protocols and conventions, and not a specific implementation.

  • This project won't use the "IPFS" name.
  • We want to encourage others to create their own implementations.
  • A unique name makes it easier to think in terms of "IPFS protocol, libraries and specs" and "IPFS implementations".

When

As mentioned above, we are now collecting name ideas. See that discussion for details.

How

🟢 Where to put the new name:

Immediate:

  • README and docs
  • Git repo name
  • npm package

Additional items once this project ships will be tackled in #4


WANT TO PROPOSE A NEW NAME?

👉 Feel free to comment in ipfs/ipfs#495 with a name (and, if possible, a logo).

Documentation improvement - system diagram

A laid-out visual could likely help here, showing the relationship between the various components:

  • helia
  • libp2p node
  • datastore
  • blockstore
  • bitswap
  • other helia modules
  • user application (consumer of helia)

I don't exactly know how the picture should look but it seems a bit surprising not to have any architecture diagrams anywhere in the repo. It will also be useful for those who are more visual in their understanding.

I cannot find in IPFS gateway / online (using CID) the content I uploaded through Helia

Hi all, my name's Elias,

I'm new to Helia; very cool project.
I'm currently writing a Node.js script to add JSON files to IPFS using Helia:

import { createHelia } from 'helia';
import { strings } from '@helia/strings';
import { content } from './uri.mjs';
import fs from 'fs/promises';
import { createLibp2p } from 'libp2p';
import { tcp } from '@libp2p/tcp';
import { noise } from '@chainsafe/libp2p-noise';
import { yamux } from '@chainsafe/libp2p-yamux';
import { MemoryBlockstore } from 'blockstore-core';
import { MemoryDatastore } from 'datastore-core';

const blockstore = new MemoryBlockstore();
const datastore = new MemoryDatastore();

const libp2p = await createLibp2p({
  addresses: {
    listen: ['/ip4/127.0.0.1/tcp/8080'],
  },
  transports: [tcp()],
  connectionEncryption: [noise()],
  streamMuxers: [yamux()],
  datastore: datastore,
  identify: {
    host: {
      agentVersion: 'helia/0.0.0',
    },
  },
});

const helia = await createHelia({ libp2p, blockstore, datastore });
const s = strings(helia);

const addedCIDs = [];

async function uploadMetadata(content) {
  const metadata = {
    name: content.name,
    description: content.description,
    external_url: content.external_url,
    image: `ipfs://${content.image}`,
    attributes: content.attributes,
  };

  try {
    const cid = await s.add(JSON.stringify(metadata));
    addedCIDs.push(cid.toString());

    // Test that the CID can be retrieved
    const myString = await s.get(cid);
    console.log(JSON.parse(myString));

    // Parse CID
    const cidString = cid.toString();

    console.log(`Uploaded metadata for ${waifu.name}. CID info: ${cidString}`);
  } catch (err) {
    console.error(`Failed to upload metadata for ${waifu.name}. Error: ${err}`);
  }
}

async function main() {
  for (const item of content) {
    await uploadMetadata(item);
  }

  console.log(addedCIDs);


  await fs.writeFile('./addedCIDs.json', JSON.stringify(addedCIDs));

  console.log('All CIDs have been saved to addedCIDs.json');
}

main()
  .then(() => process.exit(0))
  .catch(console.error)
  .finally(() => process.exit(1));

The script successfully creates the CIDs and saves them in my addedCIDs.json file:

[
  "bafkreicbkmd3ba2y35co7v7n2kjnqfh2rrrw72j3etm5fmoumz5r65pa54",
  "bafkreihaparff6nojjm5qvnt55xnrnv4g4gxkmhexfbm45yozhqupko6am",
  "bafkreic3qaase5us7jkdd55mokytbed7fe4seujknmm52pff756t5nyn4m",
  "bafkreifhg3jgse6g2khv4npzg35buiqjn36xuoz7mu5aoobeh5ckqapspm",
  "bafkreigb5nozyw232voxad2vllafwgmwfhsla46uggzpsrrxrhkiszlk2q",
  "bafkreicwjokn4jhnpmesnhtj6vmktgl3vl2b6ebzocn5jl6fdq74ocyslu",
  "bafkreihptw6rz43otvvgx3ntf6rlxh5knfiod6l64nlaieo3t2inw4x5qm",
  "bafkreiaoekhluctmyehexg3loeb7jspx4pkkw6k2aixhhnypl7lofeujsm",
  "bafkreib5ilj4vm3yk35nyzvgup6kcybz2g7mzey4k4qssz733ftynybq5q",
  "bafkreiaa4zvkv563usfbbljci56jdkf7ven5lt5vbcxt6ki7sipcbezbfq",
  "bafkreieju7xdev3z4sabaeejm5f2r2mnaqeltm4adzklk6yeeyancuccle",
  "bafkreig7r5duyx2fcxqocx5q7opaqq4oqiahgguodc65zu45jj2sdimq2i",
  "bafkreibs6k2gbncsel2apz6m2oighgiy5zftfaeeq6l2swmss2h22liayq",
  "bafkreicytf3eapch4ghte2ybyjgz2lbkiw5xgywsmo4w7e3l3j7n445gg4",
  "bafkreibmsb6z3fd44bua3grtep3ah3cpcrpmy2lgooj3sjhdy5mba3zk5u",
  "bafkreigc3waovtz6zgqvfscqdmphwarmaie6vrf7q6sf7j35qcip6kkeze",
  "bafkreibhm37epellasuaqvc44geqv32cpittak2zcoqevkxreb7qp4uyhu"
]

But when I look them up on IPFS, I cannot seem to find them.

Is the problem linked with pinning? Or something I missed?

Add benchmarks for data transfer

A benchmark suite should be added to test transfer speeds:

  • Helia -> Helia
  • Kubo -> Helia
  • Helia -> Kubo
  • Kubo -> Kubo

These should test variations in block sizes, DAG layer size, overall file size etc.

They should follow the same format and use the same testing libraries as the existing gc benchmarks.

Ignore non-IPFS content paths in DNSLink records

Version:
js-ipfs version: 0.5.4-511147bedd51be3151de44a90fefe9425bfbcd50
interface-ipfs-core version: ^0.144.2
ipfs-http-client version: undefined
Repo version: 10
System version: x64/darwin
Node.js version: v16.1.0
Commit: 511147bedd51be3151de44a90fefe9425bfbcd50
Platform
Darwin xxx 20.4.0 Darwin Kernel Version 20.4.0
  • Subsystem:
    dns-nodejs

Severity:

Low - An optional functionality does not work.

Description:

dns-nodejs currently returns the first dnslink= TXT entry that can be found for a given domain, independent of the given protocol prefix, which prevents other decentralized protocols from using DNSLink. It would probably be good for dns-nodejs to look for the IPFS dnslink entry only.

Note: I also opened related issues on go-dnslink#14 and js-dnslink#5.

Steps to reproduce the error:

Create a domain with a dnslink=/hyper/... entry and a dnslink=/ipfs entry and see how (depending on the order returned by the DNS function!) it will not consistently return the correct entry.

Establish mechanism for usage / adoption monitoring

Done Criteria

We have a mechanism that highlights on a ~monthly basis if/who is using Helia. We can see month-over-month growth, can see who started/stopped using Helia in a given month, and can annotate any notes about usage.

Why Important

We want to make sure we're focused on adoption. No/slow growth is a signal. Knowing who is using it is important for understanding user needs better.

Notes

For seeing which packages depend on Helia, there are different tools, including:

We can look at things like downloads (e.g., https://npm-stat.com/charts.html?package=helia&package=helia-examples&package=helia-ipns&package=helia-unixfs&from=2023-01-01&to=2023-06-30 ), but that isn't very useful right now. Knowing specific dependencies is more interesting.

One process would be, monthly as part of PL EngRes All Hands prep, to take the latest dependents list, highlight anyone new in a spreadsheet, and potentially follow up.

Helia identifies itself to the network

Done Criteria

Helia identifies itself to the network by default. Specifically this means:

  • libp2p identity - will be identifiable from DHT bootstrappers for example. Kubo does this.
  • user-agent header when making HTTP calls - This is relevant for example when making calls to cid.contact

We also want to include the Helia version (e.g., "helia/vX.Y.Z"), just as Kubo does.

We should only set the user agent if a user hasn't already specified an override.
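
For illustration, a user-specified override today goes through the libp2p identify configuration, as in the node setup code later in this document (a sketch; the exact option shape differs between libp2p versions):

import { createLibp2p } from 'libp2p'

const libp2p = await createLibp2p({
  // ...transports, encryption, muxers, etc...
  identify: {
    host: {
      // a user-specified agent version that Helia should not overwrite
      agentVersion: 'my-app/1.2.3'
    }
  }
})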

Why Important

This enables Helia's prevalence to be detected/measured. This is important for getting signal on Helia's adoption in production, which in turn affects things like:

  1. funding and resourcing decisions
  2. client considerations for network/protocol upgrades

Notes

  1. At the end of this, probelab should be able to generate the Helia equivalent of https://probelab.io/ipfsdht/#active-kubo-versions

2023/2024 Contributor Hospitality Ideas

Below are items that stuck out when coming to https://github.com/ipfs/helia with fresh eyes, imagining what it's like for someone who hears about Helia, wants to learn more, and potentially wants to contribute:

README Items

  • Reference helia-unixfs, helia-ipns, etc. We don't give pointers to these important modules. #76
  • How do I get started as a consumer? Reference https://github.com/ipfs-examples/helia-examples
  • Reference demo-slides, recordings, etc: #80
  • Remove erroneous verbiage: "The core of IPFS is the Files API, which will likewise be implemented in Helia. These initial building blocks are in development now; have a look at this repo's PR(s)." Is this true? Isn't this in helia-unixfs?
  • @whizzzkid : Fix API docs links: ipfs/aegir#1228
  • Link to code coverage. The badge lists it as unknown. This is a new endeavor. We should be able to show off good numbers to start. ipfs/aegir#1195
  • @BigLep Have a section about who some of our users are. I expect we don't have any currently. That's OK, but let's solicit people to share with us. We want a "your logo here": #83
  • @BigLep Update project status section: #83
  • Answer "Where should a user ask questions?"
  • @achingbrain Answer "What is the relationship of this project to js-ipfs?"
  • @achingbrain Answer "How do we prove interoperability with other IPFS implementations like Kubo?"
  • @achingbrain Answer "How does it perform compared to other implementations including js-ipfs?"
  • Answer "What runtimes do we support (node versions, browser distributions/versions including mobile/desktop)"
  • I think we could use more visuals on the readme. I think having something visual showing the relationship between ipfs/helia, js-libp2p, and the other Helia repos would help. I think seeing this helps drive home the point that Helia is modular.

Manifesto Items

  • @achingbrain "The core of Helia will be very focused on use as a library: just js-libp2p, a blockstore, js-bitswap and a POSIX-like API which will be extendable to add additional features such as IPNS, an RPC-API, etc." and " instead it will present an abstraction of posix filesystem operations (ls, cat, etc) as an API but the underlying filesystem(s) will be configurable." The POSIX-like API is no longer part of it right?
  • @achingbrain Permissions discussion should probably have a callout that this is for when Helia is run as a daemon, but that isn't the primary usecase.

Other specific things to do

General thoughts

  1. Provide more bite-size chunks for would-be contributors to pick off. As of 2023-02-20, we basically have a really large contribution in #28, or we have smaller things in examples, which is great but could be clarified with a prioritized list of examples. It would be useful to get other work items scoped out with clear done criteria and likely some labeling to make the sizing clear.
  2. I'm curious how useful people find the Helia API docs we link to. It seems like to do much, you need to go to the unixfs repo, for example. Does it make sense to generate all the Helia-related project docs together so it's clearer to a user what all the exposed Helia APIs are?
  3. I think our tagline needs to be more than "An implementation of IPFS in JavaScript". I think the statement down below is more compelling "A lean, modular, and modern implementation of IPFS for the prolific JS and browser environments.".

TODO items for Community Meetings

  • 1. Make sure they are easily discoverable (including link from README)
  • 2. Recorded and shared afterwards in dedicated Helia channel (and link this from README)
  • 3. It would be great if they could have the tone of a sprint demo where folks can share: here was the problem to solve, here is that problem fixed with a demo, what next steps are. We want to get to a place where people are making a commitment to get certain work done for the next demo/meetup.

Allow for custom data transfer mechanisms

Please correct me if I'm wrong, but it seems like bitswap is always activated if you provide a libp2p object to the helia constructor. I would like a way to either:

  1. Disable bitswap, and allow me to provide my own custom blockstore that may or may not use bitswap. If we do this it would be nice to expose Helia's BlockStorage so I can reuse that code with my own custom bitswap.
  2. Allow for a custom bitswap to be passed in as an option (and maybe even allow for it to be set to false so it's disabled like 1).

I would like this so I can plug in alternative data transfer mechanisms like using the HTTP gateway api.

Examples repo/infra setup

js-ipfs has a comprehensive example suite at https://github.com/ipfs-examples/js-ipfs-examples - it would be amazing for helia to have something similar.

The example suite is a monorepo with all examples in the examples directory. Each example is a self-contained module that showcases one feature of js-ipfs or how to integrate it with a build tool, and they all contain tests to prevent regressions.

Each example is copied into its own repo in the ipfs-examples org by the fork & go github action to aid discoverability yet make the maintenance of these examples manageable.

Syncing changes

Changes to the monorepo are pulled into the split-out repos; any changes that have been made to the split-out repos are discarded. This is done by a sync job copied from the fork & go template.

An improvement for helia would be to switch this around and have the monorepo push changes out to the split-out repos, the reason being the sync job runs on a timer and GitHub disables the timer if no changes are observed for a month or so, which means a maintainer has to manually go through and re-enable the timer for every split-out repo periodically - see ipfs-examples/js-ipfs-examples#44

Where to start

Create a repo in the ipfs-examples org called helia-examples.

Start with just one example, perhaps bundling with esbuild (easy since aegir already builds helia with esbuild for browser tests so there shouldn't be any extra config required) - port the example, the docs and the tests.

The sync job should be run from the helia-examples repo instead of from the split-out repos.

Prerequisites

After #17 is merged helia should be installable with npm i helia@next. An automated PR will also be created that sets up a gated release of v1 of helia - when that is merged dependabot should take care of upgrading all the example deps.

Have public coordination channel across chat platforms

Done Criteria

There is a README-documented channel across chat platforms for coordinating contributions to Helia.

Why Important

Lower the barrier to entry for potential contributors to get involved and get their questions answered so they can contribute.

User/Customer

Would-be and current Helia contributors.

Notes

Right now there is #ip-js in Filecoin Slack. While PL EngRes lives there, it doesn't seem ideal to require other Helia contributors to join Filecoin slack to engage. Ideally we bridge that channel (or another/new channel like #helia) to ipfs.io Matrix and IPFS Discord.

The README should be updated as a result.

Resource (staffing) options

Purpose

This issue serves as a pointer to various ways that one can help resource/staff this project to accelerate its development. For example, blog posts, social media, etc. link to this issue which we in turn update with the latest info and options.

Background

At least as of 2022-10-27, the biggest thing holding back this IPFS implementation from being delivered and released is developer resources. Per the roadmap, a key focus for the remainder of 2022 is to increase the staffing so we can make this project a reality and have impact.

Options

  1. 🫂 Join the core/maintainer team as a contractor or full-time employee - Protocol Labs' Engineering and Research group, which is initiating this effort, is hiring and needs more JavaScript and TypeScript developers who are eager to make this project a reality. It's ideal if you have experience working at the protocol/bytes/streams level. Please apply here.
  2. ✋ Contribute - Open source contributors welcome. Do you have a great idea for a part you could own and need some funding? Consider a grant request.

helia/unixfs and helia/strings producing different results from same CID

import { createHelia } from "helia";
import { strings } from "@helia/strings";
import { MemoryDatastore } from "datastore-core";
import { MemoryBlockstore } from "blockstore-core";
import { unixfs } from "@helia/unixfs";
import { CID } from "multiformats/cid";
import util from "util";

const node = await createHelia({
  blockstore: new MemoryBlockstore(),
  datastore: new MemoryDatastore(),
});
const fs = unixfs(node);
const s = strings(node);
let cid = CID.parse("Qmc5gCcjYypU7y28oCALwfSvxCBskLuPKWpK4qpterKC7z");

console.log("strings.get implementation");
let val = await s.get(cid);
console.log(val);
console.log(util.inspect(val));

console.log("fs.cat implementation");
for await (const buf of fs.cat(cid)) {
  // console.info(buf);
  let val = new TextDecoder().decode(buf);
  console.log(val);
  console.log(util.inspect(val));
}

console.log("ctrl+c to exit");

Produces

strings.get implementation

☻↕Hello World!
↑
'\n\x14\b\x02\x12\x0EHello World!\r\n\x18\x0E'
fs.cat implementation
Hello World!

'Hello World!\r\n'

The question is: why do fs.cat and strings.get produce different results? It appears as though the results differ at the binary level.

kadDHT Incompatible with dht

In the following config based on the latest examples:

[Screenshots of the config: 2023-05-23 at 4:26:39 PM and 4:41:54 PM]

Current Versioning:
"libp2p": "^0.45.1",
"@libp2p/kad-dht": "^9.3.3",

Unsure if this is related, but it seems no event listeners are firing, suggesting to me that I won't be able to dial and connect to the peers I want to connect to. Here is my config:

const libp2p = await createLibp2p({
        // transports allow us to dial peers that support certain types of addresses
        transports: [webSockets(), webTransport()],
        connectionEncryption: [noise()],
        streamMuxers: [mplex()], // streamMuxers: [yamux(), mplex()],
        peerDiscovery: [
            bootstrap({
                list: [
                    "/dnsaddr/bootstrap.libp2p.io/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
                    "/dnsaddr/bootstrap.libp2p.io/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
                    "/dnsaddr/bootstrap.libp2p.io/p2p/QmZa1sAxajnQjVM8WjWXoMbmPd7NsWhfKsPkErzpm9wGkp",
                    "/dnsaddr/bootstrap.libp2p.io/p2p/QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa",
                    "/dnsaddr/bootstrap.libp2p.io/p2p/QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt",
                ],
            }),
        ],
        contentRouters: [ipniContentRouting("https://cid.contact")],
        services: {
            // the identify service is used by the DHT and the circuit relay transport
            // to find peers that support the relevant protocols
            identify: identifyService(),
            // the DHT is used to find circuit relay servers we can reserve a slot on
            dht: kadDHT({
                clientMode: true,
                validators: {
                    ipns: ipnsValidator,
                },
                selectors: {
                    ipns: ipnsSelector,
                },
            }),
        },
    });

I did have it working before but I forget what versions it worked with

Cannot find module 'helia' from './src/Wrapper.ts'

import { createHelia } from 'helia';
import orbitdb from 'orbit-db';

export let helia;
export let orbit;

export const Connect = async() =>
{
  helia = await createHelia();
  orbit = await orbitdb.createInstance(helia);

  return [helia, orbit];
}

This code sample shows the error Cannot find module 'helia' from 'src/lib/OrbitWapper.ts'

My package.json has the correct dependencies

"dependencies": {
    "helia": "^1.3.1",
    "orbit-db": "^0.29.0"
},

EDIT

The issue is related to Jest not finding the module, so I created a simple test project from the Create helia node example.
It fails with the following error:

Error: No "exports" main defined in /mnt/storage/p2p/node_modules/blockstore-core/package.json
    at new NodeError (node:internal/errors:405:5)
    at exportsNotFound (node:internal/modules/esm/resolve:259:10)
    at packageExportsResolve (node:internal/modules/esm/resolve:533:13)
    at resolveExports (node:internal/modules/cjs/loader:569:36)
    at Function.Module._findPath (node:internal/modules/cjs/loader:643:31)
    at Function.Module._resolveFilename (node:internal/modules/cjs/loader:1056:27)
    at Function.Module._load (node:internal/modules/cjs/loader:923:27)
    at Module.require (node:internal/modules/cjs/loader:1137:19)
    at require (node:internal/modules/helpers:121:18)
    at Object.<anonymous> (/mnt/storage/p2p/packages/test/test-orbit/src/index.ts:2:1)

Unable to PIN existing CID in IPFS browser node

Pinning only works if I upload some data using unixfs and then use that CID. I couldn't pin a CID that's already available on IPFS.

So how do I use a CID that's already available on IPFS and pin it in my node?

example: I want to pin this CID: QmRrzxiXGefcF9VbbThGrkhiQeuhPFjRbGgqYZmxYXYkHr on my node

At the moment I'm getting the below error:

errors.js:27 Uncaught (in promise) Error: Not Found
at Module.notFoundError (errors.js:27:1)
at MemoryBlockstore.get (memory.js:19:19)
at #walkDag (pins.js:97:1)
at eval (pins.js:52:1)
at eval (index.js:111:1)
at PQueue._PQueue_tryToStartAnother (index.js:285:1)
at eval (index.js:135:1)
at new Promise ()
at PQueue.add (index.js:99:1)
at PinsImpl.add (pins.js:51:1)
at async onPin (ipfs.js:69:17)

FAQ follow ups

Extra questions:

  • why consider using Helia over js-ipfs
  • why use Helia over the kubo-rpc-client
  • what mechanisms does Helia have for content and peer routing?
  • what data transports does Helia have

blog.ipfs.tech/state-of-ipfs-in-js was a snapshot in time. I think we should extract out the parts that are still relevant and true. We can point back to it as a historical pointer, but I think that should be a "Related / Historical Items" section, likely in the manifesto doc. I think we want this repo to represent the latest state of affairs.
