polkadot-api / polkadot-api

Polkadot-API is a meticulously crafted suite of libraries, each designed to be composable, modular, and fundamentally aligned with a "light-client first" philosophy. Our overarching mission is to empower dApp developers by equipping them with a comprehensive array of tools, pivotal for the creation of truly decentralized applications.

Home Page: https://polkadot-api.github.io/polkadot-api-docs/

License: MIT License

Shell 0.17% TypeScript 98.60% JavaScript 1.05% Dockerfile 0.18%
blockchain dapp polkadot substrate web3

polkadot-api's Introduction

polkadot-api

Features

  • Light client first: built on top of the new JSON-RPC spec to fully leverage the potential of light clients (e.g. smoldot).
  • Delightful TypeScript support with types and docs generated from on-chain metadata.
  • First class support for storage reads, constants, transactions, events and runtime-calls.
  • Performant and lightweight: ships with multiple subpaths, so dApps don't bundle unnecessary assets.
  • Uses native BigInt instead of large BigNumber libraries.
  • Leverages dynamic imports to favour faster loading times.
  • Promise-based and Observable-based APIs: use the one that best suits your needs and/or coding style.
  • Use signers from your browser extension, or from a private key.
  • Easy integration with PJS-based extensions.

... and a lot lot more.

Overview

smoldot.ts

import { startFromWorker } from "polkadot-api/smoldot/from-worker"
import SmWorker from "polkadot-api/smoldot/worker?worker"

// Starting smoldot on a Worker (strongly recommended)
export const smoldot = startFromWorker(new SmWorker())

// Alternatively, we could have smoldot running on the main-thread, e.g:
// import { start } from "polkadot-api/smoldot"
// export const smoldot = start()

main.ts

import { createClient } from "polkadot-api"
import { getSmProvider } from "polkadot-api/sm-provider"
import { polkadotTypes } from "@polkadot-api/descriptors"
import { smoldot } from "./smoldot"

// dynamically importing the chainSpec improves the performance of your dApp
const smoldotRelayChain = import("polkadot-api/chains/polkadot").then(
  ({ chainSpec }) => smoldot.addChain({ chainSpec }),
)

// getting a `JsonRpcProvider` from a `smoldot` chain.
const jsonRpcProvider = getSmProvider(smoldotRelayChain)

// we could also create a `JsonRpcProvider` from a WS connection, e.g.:
// const jsonRpcProvider = WsProvider("wss://some-rpc-endpoint.io")

const polkadotClient = createClient(jsonRpcProvider)

// logging blocks as they get finalized
polkadotClient.finalizedBlock$.subscribe((block) => {
  console.log(`#${block.number} - ${block.hash} - parentHash: ${block.parent}`)
})

// pulling the latest finalized block
const block = await polkadotClient.getFinalizedBlock()

// obtaining a delightfully typed interface from the descriptors
// previously generated from the metadata
const polkadotApi = polkadotClient.getTypedApi(polkadotTypes)

// presenting the transferrable amount of a given account
const {
  data: { free, frozen },
} = await polkadotApi.query.System.Account.getValue(
  "15oF4uVJwmo4TdGW7VfQxNLavjCXviqxT9S1MgbjMNHr6Sp5",
)
console.log(`Transferrable amount: ${free - frozen}`)

Documentation

Browse our latest docs here!

Announcement: Transfer of Ownership

As of 2024-02-01, the original owner and maintainer of the Polkadot-API project, Parity Technologies, has officially transferred ownership of the project to me, Josep M Sobrepere. Read more about this transfer here.

polkadot-api's People

Contributors: josepot, voliva, ryanleecode, kratico, carlosala, wirednkod, tien, 0xkheops, peetzweg


polkadot-api's Issues

Feedback `PolkadotProvider`

Just re-watched the presentation of the PolkadotProvider interface and have the following feedback & questions:

As this is intended to be a stable interface used for a long time, I wonder what the official distinction is between naming something Network versus Chain. If you call Network.connect() you get a ChainProvider, and if you call PolkadotProvider.addNetwork() you pass a chainSpec. In my head this makes sense, probably because it has always been like this. However, I want to challenge it again, to make it less confusing for people using this spec in the future.

https://github.com/paritytech/polkadot-api/blob/1b7b96e84cb57881c54c86dacb9b527d4fbf4829/experiments/src/PolkadotProvider.ts#L12-L22


As mentioned during the presentation, the Account.createTx(callData) function receives hex-encoded unsigned callData and expects signedCallData in return. Maybe signTx(callData) would be a better-suited name? In my head, createTx suggests building the actual callData bytes from input variables.

https://github.com/paritytech/polkadot-api/blob/1b7b96e84cb57881c54c86dacb9b527d4fbf4829/experiments/src/PolkadotProvider.ts#L6-L10


Regarding the client field in the ChainProvider being GetProvider or SubstrateClient: why not make SubstrateClient compatible with GetProvider and give GetProvider a way to identify its type? Then it could be inferred whether it's a SubstrateClient, which would ease usage of the arbitrary client instance.

https://github.com/paritytech/polkadot-api/blob/1b7b96e84cb57881c54c86dacb9b527d4fbf4829/experiments/src/PolkadotProvider.ts#L6-L10

built in assertion guard for v14 metadata

Since we currently don't support metadata versions other than v14, should we also export a v14 assertion guard so the user doesn't have to write it themselves? i.e.

import { metadata as $metadata } from "@unstoppablejs/substrate-codecs"
type Metadata = ReturnType<typeof $metadata.dec>["metadata"]

function assertIsv14(
  metadata: Metadata,
): asserts metadata is Metadata & { tag: "v14" } {
  if (metadata.tag !== "v14") {
    // this should never be hit, since the decoded metadata is always v14
    throw new Error("unreachable")
  }
}

@polkadot-api/cli: ux improvements

I'm currently making use of @polkadot-api/cli and @polkadot-api/client in the extension project, and I'm already getting a bit frustrated with the CLI. I'm using the main branch version.

It takes quite a while to load the metadata for Polkadot, at least 30 seconds. Is this normal? Every time the CLI crashes on me, it refetches the metadata. Keeping the metadata up to date is probably what we want, but then we should fail more gracefully: if the CLI breaks, I need to restart it and fetch the metadata all over again.

Loading metadata in real time:

forever-smol.mov

Furthermore, selecting a storage item can only be done with space instead of enter, which confused me in the beginning.

Another thing that happened to me over and over: it either crashes or just gets stuck when selecting Extrinsics.

The "Select Events for this extrinsic" submenu also doesn't seem to work. It tells me to use a, i, or space, but nothing works: it either breaks or adds nothing to the descriptors.

papi-cli-smol.mov

Another crash happened when I used the same string for the file and the folder name. As I'm still a bit confused about what all of these files are, I just typed in papi to see what it spits out. A minimal doc in the README on how to use the CLI and what these output files are would help.

Screenshot 2023-10-18 at 15 58 22

@polkadot-api/cli E2E test

The Polkadot API CLI needs to do 2 things.

  1. Generate a proper descriptor output in JSON format
  2. Generate descriptors using Typescript

Rather than writing a bunch of unit tests (which IMO isn't worth the time investment atm), write a single E2E test that validates that whatever descriptor spec is output matches what is generated by the codegen.

The comparison can be done by parsing the codegen into an AST using the typescript module and comparing each descriptor to what is in the JSON.

Then we can run this in CI to give us a basic sanity check that the CLI is working correctly.

`@polkadot-api/client` health-info API

Currently, the only way to infer the health of the connection is to check whether finalized blocks are being produced in a timely manner... The top-level API should provide more and better insight into the health of the connection, especially taking into account that the internals are going to try to deal with operationInaccessible events, stop events, etc.

Now that paritytech/json-rpc-interface-spec#91 has been merged, we can start working on integrating it, so that we are ready for when smoldot implements it.
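As a purely hypothetical sketch of what such an API could expose (none of these names exist in `@polkadot-api/client`; the thresholds are illustrative), one coarse signal is simply the time elapsed since the last observed finalized block:

```typescript
// Hypothetical health-info sketch; names and thresholds are assumptions,
// not the actual @polkadot-api/client API.
type ConnectionHealth = "healthy" | "degraded" | "stalled"

// Derive a coarse health status from how long ago the last finalized
// block was observed, relative to the expected block time.
export function connectionHealth(
  lastFinalizedAt: number, // ms timestamp of the last finalized block
  now: number, // current ms timestamp
  expectedBlockTimeMs = 6_000,
): ConnectionHealth {
  const elapsed = now - lastFinalizedAt
  if (elapsed < expectedBlockTimeMs * 2) return "healthy"
  if (elapsed < expectedBlockTimeMs * 10) return "degraded"
  return "stalled"
}
```

A real implementation would also factor in the operationInaccessible/stop events mentioned above, which this toy deliberately ignores.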

Re-exported `scale-ts` types from `@polkadot-api/substrate-bindings` are not found with `"moduleResolution": "bundler"`

For example, adding @polkadot-api/substrate-bindings and creating a test.ts file with

import { Decoder } from "@polkadot-api/substrate-bindings"

declare const x: Decoder

And using a tsconfig.json with "moduleResolution": "bundler"

Outputs the following error

➜  polkadot-api git:(dfcee8e) ✗ cd examples/extension-dapp 
➜  extension-dapp git:(dfcee8e) ✗ pnpm build

> [email protected] build /Users/kratico/ghq/github.com/paritytech/polkadot-api/examples/extension-dapp
> tsc && vite build

src/test.ts:1:10 - error TS2459: Module '"@polkadot-api/substrate-bindings"' declares 'Decoder' locally, but it is not exported.

1 import { Decoder } from "@polkadot-api/substrate-bindings"
           ~~~~~~~

  ../../packages/substrate-bindings/dist/index.d.mts:2:26
    2 import { Codec, Encoder, Decoder, CodecType } from 'scale-ts';
                               ~~~~~~~
    'Decoder' is declared here.


Found 1 error in src/test.ts:1

The substrate-bindings/dist/index.d.mts file from @polkadot-api/substrate-bindings starts with

import * as scale_ts from 'scale-ts';
import { Codec, Encoder, Decoder, CodecType } from 'scale-ts';
export * from 'scale-ts';
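One possible fix (a sketch of one option, not necessarily what the package ends up doing) would be to inline the relevant scale-ts declarations into the emitted .d.mts instead of re-exporting them, so that "moduleResolution": "bundler" has nothing extra to resolve:

```typescript
// Hypothetical inlined declarations for dist/index.d.mts: instead of
// `export * from 'scale-ts'` (which requires the resolver to locate
// scale-ts's own type declarations), the types are emitted directly.
export type Encoder<T> = (value: T) => Uint8Array
export type Decoder<T> = (value: Uint8Array | ArrayBuffer | string) => T
export type Codec<T> = [Encoder<T>, Decoder<T>] & {
  enc: Encoder<T>
  dec: Decoder<T>
}
export type CodecType<T> = T extends Codec<infer V> ? V : unknown
```

Alternatively, the consumer can add scale-ts as a direct dependency so the re-exported names resolve.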

`@polkadot-api/client`: expose runtime API (with constants)

We should figure out what the best public API would be for allowing consumers to know about runtime changes.

Also, it probably makes sense to have an API that allows the consumer to get an instance of the "current" runtime, which would allow them to synchronously perform certain operations like: reading constant values, encoding calls, etc.

Currently, it's only possible to perform those operations asynchronously, because accessing the runtime is an asynchronous task. We really need to come up with a good API for these things.

Potential incorrect runtime codecs after runtime updates

Summary

The @polkadot-api/client library currently uses the latest version of the runtime codecs for evaluating all storage requests, regardless of the block against which the request is made. This approach can lead to incorrect handling of storage requests made against blocks finalized before a runtime update.

Problem Description

In our current implementation, storage requests are always evaluated using the latest runtime codecs. This becomes problematic when dealing with blocks finalized before a runtime update. For example:

----------------------------------------->
TIME               00000000001111111111
                   01234567890123456789
----------------------------------------->
finalized blocks:  a b c d e f g 
new runtime     :        x
user request    :     *----#
                           *--#
----------------------------------------->
TIME               00000000001111111111
                   01234567890123456789
----------------------------------------->

In this timeline, a user makes a storage request right after block "b" is finalized. While the request resolves, new blocks are finalized, including block "d", which introduces a new runtime. If a subsequent request is made to query data from block "b", the correct approach would be to use the runtime codecs from before the update introduced in block "d". However, our implementation incorrectly uses the latest codecs for this request.

Impact

This issue can lead to incorrect data retrieval and processing. While this scenario might be rare, its potential impact on data integrity is significant.

Proposed Solution

We should modify the implementation to:

  1. Determine the runtime version applicable to the block against which a storage request is made.
  2. Use the correct runtime codecs corresponding to that specific block, rather than the latest version.

This change ensures that storage requests are accurately processed, maintaining data integrity across runtime updates.
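The proposed fix can be sketched as a small tracker that records which runtime was in force at each finalized block and resolves codecs against the target block rather than the latest one (all names here are illustrative, not the actual implementation):

```typescript
// Toy sketch of per-block runtime tracking; `CodecSet` stands in for
// the full set of runtime codecs.
type CodecSet = { specVersion: number }

class RuntimeTracker {
  // blockHash -> codecs in force at that block
  private byBlock = new Map<string, CodecSet>()
  private latest: CodecSet

  constructor(initial: CodecSet) {
    this.latest = initial
  }

  // Called as blocks are finalized; `newRuntime` is set when the block
  // enacts a runtime upgrade.
  onFinalized(blockHash: string, newRuntime?: CodecSet) {
    if (newRuntime) this.latest = newRuntime
    this.byBlock.set(blockHash, this.latest)
  }

  // Storage requests resolve codecs against the target block, falling
  // back to the latest runtime for unknown (e.g. unpinned) blocks.
  codecsAt(blockHash: string): CodecSet {
    return this.byBlock.get(blockHash) ?? this.latest
  }
}
```

In the timeline above, a request against block "b" would then pick the pre-upgrade codecs even after block "d" enacts the new runtime.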

unable to encode latest metadata with `metadata` codec

Given https://gist.github.com/ryanleecode/c0d2b7322f2252378dc50a969bd88772 as the latest metadata. Using metadata from substrate codec to encode it fails with:

file:///home/ryan/Documents/Repositories/capi/node_modules/scale-ts/dist/scale-ts.mjs:350
var VectorEnc = (inner, size) => size >= 0 ? (value) => mergeUint8(...value.map(inner)) : (value) => mergeUint8(compact.enc(value.length), ...value.map(inner));
                                                                                                                                  ^

TypeError: Cannot read properties of undefined (reading 'length')
    at Array.<anonymous> (file:///home/ryan/Documents/Repositories/capi/node_modules/scale-ts/dist/scale-ts.mjs:350:131)
    at file:///home/ryan/Documents/Repositories/capi/node_modules/scale-ts/dist/scale-ts.mjs:331:99
    at Array.map (<anonymous>)
    at file:///home/ryan/Documents/Repositories/capi/node_modules/scale-ts/dist/scale-ts.mjs:331:66
    at file:///home/ryan/Documents/Repositories/capi/node_modules/scale-ts/dist/scale-ts.mjs:93:54
    at Array.map (<anonymous>)
    at Array.<anonymous> (file:///home/ryan/Documents/Repositories/capi/node_modules/scale-ts/dist/scale-ts.mjs:350:149)
    at file:///home/ryan/Documents/Repositories/capi/node_modules/scale-ts/dist/scale-ts.mjs:331:99
    at Array.map (<anonymous>)
    at file:///home/ryan/Documents/Repositories/capi/node_modules/scale-ts/dist/scale-ts.mjs:331:66

Node.js v20.3.1

Enum Codecs

As I've been working more extensively with the top-level client, I've encountered a fundamental limitation in the current typing system, especially evident in the use of Enum higher-order codecs of scale-ts. This experience has led me to believe that the assumption that a Codec should have identical types for both its encoder and decoder is not always ideal.

Current Challenge:

The decoder's functionality, particularly its ability to return discriminated unions, is exceptional and provides a great developer experience. However, requiring the encoder to accept the very same object structures that the decoder returns is not only inconvenient but also counterintuitive in many scenarios.

Current Codec Typing:

type Encoder<T> = (value: T) => Uint8Array
type Decoder<T> = (value: Uint8Array | ArrayBuffer | string) => T

type Codec<T> = [Encoder<T>, Decoder<T>] & {
  enc: Encoder<T>
  dec: Decoder<T>
}

In this structure, both encoding and decoding operations are expected to work with the same type T, which limits flexibility.

Proposed Typing Enhancement:

type Encoder<E> = (value: E) => Uint8Array
type Decoder<D> = (value: Uint8Array | ArrayBuffer | string) => D

type Codec<E, D = E> = [Encoder<E>, Decoder<D>] & {
  enc: Encoder<E>
  dec: Decoder<D>
}

This modification introduces separate generic types for encoders (E) and decoders (D). This change would allow for more nuanced and ergonomic handling of different structures for encoding and decoding operations.

Practical Example:

Consider the following Enum definition:

const event = Enum({
    _void,
    one: str,
    many: Vector(str),
    allOrNothing: bool,
})

With the proposed change, the inferred types could be:

type Event = Codec<
  | { _void: null }
  | { one: string }
  | { many: string[] }
  | { allOrNothing: boolean },

  | { tag: '_void' }
  | { tag: 'one'; value: string }
  | { tag: 'many'; value: string[] }
  | { tag: 'allOrNothing'; value: boolean }
>

This approach allows us to encode an event more naturally, like event.enc({ one: 'thing' }), instead of the current less ergonomic method event.enc({ tag: 'one', value: 'thing' }).

Conclusion:

I think that this breaking change would significantly enhance the developer experience by providing more flexibility and intuitiveness in handling different data structures for encoding and decoding. This change seems particularly crucial for effective use of Enum types and could potentially benefit other use cases within scale-ts.

I'm wondering if this could have some undesired effects that I can't foresee right now 🤔
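To make the E/D split concrete, here is a self-contained toy (it does not use scale-ts; the wire format is just JSON for brevity, and the `_void` variant is omitted) showing an encoder that accepts the ergonomic shape while the decoder yields the tagged one:

```typescript
// Toy dual-typed codec illustrating the proposed Codec<E, D> split.
type Encoder<E> = (value: E) => Uint8Array
type Decoder<D> = (value: Uint8Array) => D

type Codec<E, D = E> = { enc: Encoder<E>; dec: Decoder<D> }

// ASCII-only string <-> bytes helpers, to keep the example dependency-free.
const toBytes = (s: string) =>
  Uint8Array.from(Array.from(s, (c) => c.charCodeAt(0)))
const fromBytes = (b: Uint8Array) =>
  Array.from(b, (n) => String.fromCharCode(n)).join("")

type EventInput =
  | { one: string }
  | { many: string[] }
  | { allOrNothing: boolean }

type EventOutput =
  | { tag: "one"; value: string }
  | { tag: "many"; value: string[] }
  | { tag: "allOrNothing"; value: boolean }

const event: Codec<EventInput, EventOutput> = {
  // Encoder takes `{ one: "thing" }` and derives the tag from the key.
  enc: (value) => {
    const [tag, inner] = Object.entries(value)[0]
    return toBytes(JSON.stringify({ tag, value: inner }))
  },
  // Decoder yields the discriminated `{ tag, value }` shape.
  dec: (bytes) => JSON.parse(fromBytes(bytes)) as EventOutput,
}
```

With this split, `event.enc({ one: "thing" })` round-trips through `event.dec` to `{ tag: "one", value: "thing" }`, matching the ergonomics described above.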

feature: view-builder

In order to seamlessly display the decoding of binary data based on metadata definitions, we have identified a need for a view-builder. This would stand alongside our current static-builder, dynamic-builder, and checksum-builder, enhancing the typing support for our visual components.

I had initially asked @peetzweg to spearhead this feature, especially considering its utilization for a presentational component within the Polkadot Provider extension. However, given the complexity and the nature of our project, I think it's best for me to take the lead on the view-builder development. That would allow @peetzweg to channel his efforts into developing the necessary presentational components.

To ensure smooth collaboration, I'd like to provide an initial outline of the view-builder's public API. This will help @peetzweg kickstart the components work while I focus on the builder's implementation.

import type {
  Decoder,
  HexString,
  StringRecord,
  V14,
} from "@polkadot-api/substrate-bindings"

export type GetViewBuilder = (metadata: V14) => {
  buildDefinition: (idx: number) => {
    shape: Shape
    decoder: Decoder<Decoded>
  }
  callDecoder: Decoder<{
    pallet: {
      value: {
        name: string
        idx: number
      }
      input: HexString
    }
    call: {
      value: {
        name: string
        idx: number
      }
      input: HexString
    }
    args: StructDecoded
  }>
}

type WithInput<T> = T & { input: HexString }

export type VoidDecoded = WithInput<{
  codec: "_void"
  value: undefined
}>

export type BoolDecoded = WithInput<{
  codec: "bool"
  value: boolean
}>

export type StringDecoded = WithInput<{
  codec: "str" | "char"
  value: string
}>

export type NumberDecoded = WithInput<{
  codec: "u8" | "u16" | "u32" | "i8" | "i16" | "i32" | "compactNumber"
  value: number
}>

export type BigNumberDecoded = WithInput<{
  codec: "u64" | "u128" | "u256" | "i64" | "i128" | "i256" | "compactBn"
  value: bigint
}>

export type BitSequenceDecoded = WithInput<{
  codec: "bitSequence"
  value: {
    bitsLen: number
    bytes: Uint8Array
  }
}>

export type BytesDecoded = WithInput<{
  codec: "Bytes"
  value: Uint8Array
}>

export type AccountIdDecoded = WithInput<{
  codec: "AccountId"
  value: {
    ss58Prefix: number
    address: string
  }
}>

export type PrimitiveDecoded =
  | VoidDecoded
  | BoolDecoded
  | StringDecoded
  | NumberDecoded
  | BigNumberDecoded
  | BitSequenceDecoded
  | BytesDecoded
  | AccountIdDecoded

export interface SequenceShape {
  codec: "Sequence"
  inner: Shape
}

export interface ArrayShape {
  codec: "Array"
  len: number
  inner: Shape
}

export interface TupleShape {
  codec: "Tuple"
  inner: Array<Shape>
}

export interface StructShape {
  codec: "Struct"
  inner: StringRecord<Shape>
}

export interface EnumShape {
  codec: "Enum"
  inner: StringRecord<Shape>
}

export type ComplexShape =
  | SequenceShape
  | ArrayShape
  | TupleShape
  | StructShape
  | EnumShape

export type Shape = { codec: PrimitiveDecoded["codec"] } | ComplexShape

export interface SequenceDecoded extends WithInput<SequenceShape> {
  value: Array<Decoded>
}

export interface ArrayDecoded extends WithInput<ArrayShape> {
  value: Array<Decoded>
}

export interface TupleDecoded extends WithInput<TupleShape> {
  value: Array<Decoded>
}

export interface StructDecoded extends WithInput<StructShape> {
  value: StringRecord<Decoded>
}

export interface EnumDecoded extends WithInput<EnumShape> {
  value: {
    tag: string
    value: Decoded
  }
}

export type ComplexDecoded =
  | SequenceDecoded
  | ArrayDecoded
  | TupleDecoded
  | StructDecoded
  | EnumDecoded

export type Decoded = PrimitiveDecoded | ComplexDecoded

Next Steps:

  • I will start developing the view-builder.
  • @peetzweg can begin working on the presentational components based on the provided API.

`@polkadot-api/client`: add `bestBlock` API

We should expose an API for easily accessing the best block, and its corresponding Error class for knowing whether a query/request got canceled/errored due to the fact that it was performed against a block that was pruned.

Optimizing IDE Integration for Chain Interactions: Balancing DX with Performance and Bundle Size

Introduction

Enhancing Developer Experience with Improved Chain Interaction Selection in IDEs

When initiating this project, our goal was to offer a flexible interface for developers to select key chain interactions for their dApps, either through an interactive CLI or a web-based UI. However, developer feedback suggests a need for a different approach.

Feedback and Challenges

Shifting to Integrated Development Environment (IDE) Based Solutions

Developers have expressed a preference for having all possible chain interactions accessible directly within the IDE, rather than through separate tools. This shift presents two primary challenges:

  1. Performance Concerns: Integrating all chain interactions could slow down TypeScript server performance, impacting IntelliSense responsiveness.
  2. Bundle Size: A comprehensive integration leads to larger bundle sizes, potentially degrading user experience (UX).

Despite these concerns, the demand for IDE-based chain interaction selection is strong.

Current Solutions and Limitations

Progress with Code Generation and Bundle Size Optimization

Our recent efforts in code generation have significantly improved TypeScript server performance, even with extensive chain interactions. However, despite a significant decrease, the bundle size remains a concern: for example, using Kusama metadata for descriptor generation can increase the bundle size of a typical dApp by approximately 80 kB, which is less than ideal.

Proposed Solutions

Exploring Options for Efficient Code Management

Option 1: Manual Descriptor Import

  • Approach: Developers manually import each descriptor, allowing unused code to be automatically trimmed. The code would look like this:
import { createClient } from "@polkadot-api/client"
import {
  StorageSessionValidators,
  StorageStakingValidators
} from "./codegen/ksm"
import { getProviderChain } from "./provider"

const client = createClient(getProviderChain.connect)

const sessionValidators = await client.query(StorageSessionValidators).getValue();
const validatorsData = await client.query(StorageStakingValidators).getValues(sessionValidators);
  • Drawback: This contradicts the desired developer experience, as it limits the discovery of available chain interactions within the client.

Option 2: Advanced Tree Shaking Solutions

The code looks like this:

import { createClient } from "@polkadot-api/client"
import ksm from "./codegen/ksm"
import { getProviderChain } from "./provider"

const client = createClient(getProviderChain.connect, ksm)

const sessionValidators = await client.query.Session.Validators.getValue();
const validatorsData = await client.query.Staking.Validators.getValues(sessionValidators);

Option 2a: Bundler Specific Plugins

  • Approach: Develop plugins for various bundlers (Vite, Rollup, Webpack, etc.) to analyze the AST and identify used interactions for custom tree-shaking.

Option 2b: Bundler Agnostic Solution

  • Approach: Create a tool that inputs a bundled JS file and its source map, performs analysis similar to Option 2a, and outputs a new, optimized JS file with an updated source map.
  • Challenge: This approach is complex and requires significant development effort.
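The analysis step both options share is finding which chain interactions a bundle actually uses. A real plugin would walk the AST (e.g. with the typescript compiler API), but a toy regex scan is enough to show the idea (names and the access pattern are illustrative):

```typescript
// Toy used-interaction extraction: scan source text for
// `client.query.<Pallet>.<Item>` accesses. Purely illustrative; a real
// plugin would operate on the AST, handle aliasing, renaming, etc.
export function usedInteractions(source: string): string[] {
  const re = /client\.query\.(\w+)\.(\w+)/g
  const found = new Set<string>()
  for (const m of source.matchAll(re)) found.add(`${m[1]}.${m[2]}`)
  return [...found].sort()
}
```

Given the Option 2 snippet above, this would report `Session.Validators` and `Staking.Validators` as the only descriptors that need to survive tree-shaking.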

Seeking Community Input

Collaboration on Plugin Development and Solution Refinement

I propose initially developing a plugin for Vite/Rollup as a starting point. I invite the community's thoughts on these solutions and encourage contributions towards developing plugins for other bundlers.

Conclusion and Next Steps

Balancing Developer Experience with Performance and UX

Our aim is to strike a balance between providing a seamless developer experience and maintaining optimal performance and UX. I look forward to the community's feedback and suggestions on these proposed solutions.

Create enhancer to avoid unnecessary requests

Background: In our current implementation using @polkadot-api/client, all consumer requests are relayed directly to the provider, typically smoldot.

Problem: The direct relay of requests leads to a situation where the same request might be triggered multiple times from different parts of the dApp. This redundancy can result in unnecessary load on the provider and inefficiency in the system.

Proposed Solution: I propose to create an enhancer for the @polkadot-api/substrate-client that implements caching of all ongoing and resolved requests. This cache would be aligned with the currently pinned blocks from various follow subscriptions. As a block gets unpinned, the associated cache would be cleared. This approach can significantly reduce redundant requests, especially in scenarios where multiple parts of the dApp interact with the same set of data.

capi: npm package (access and CI automation)

For now, what I would like to have is something similar to the following flow: every time we make changes to any of the packages of the monorepo, the corresponding package gets published on npm under the dev dist-tag.

So, in order to accomplish that, then I guess that the CI would have to:

  • figure out which package(s) have changed
  • generate a new random experimental version for those packages (in the working directory)
  • publish those packages with the new experimental version under the dev dist-tag

Once we get closer to publishing stable versions, we will of course revisit and improve this workflow.

Also, we should rename all the packages to their corresponding "@capi/*" names and then find out whether this can be leveraged for the workflow described above.

cli improvements

  • typescript dependency issue
  • zod dependency issue
  • add cli as dependency to experiments
  • .d.ts file naming
  • unused imports / variables in codegen

Docs with codegen

I find it very upsetting that we are unable to display the docs that we have in the metadata through the top-level client. Unfortunately, it's very tricky (possibly impossible) to make that happen via JSDoc, at least with the current approach of just generating the descriptor types and passing them into a client-creator function.

However, it would be interesting to investigate whether we could "abuse" TS in order to display the docs: the (possibly unsound) idea would be to add the docs as a literal property type in the generic types built for Storage, Calls, etc. It's something worth investigating IMO.

Leverage metadata V15

Currently many of our libraries rely on metadata v14, which makes sense, because this version is not going anywhere. However, now that metadata v15 is widely available we should consider migrating to v15 and properly leverage its new features. Mainly adding proper support for runtime calls.
