mlabs-haskell / lambda-buffers

LambdaBuffers toolkit for sharing types and their semantics between different languages

Home Page: https://mlabs-haskell.github.io/lambda-buffers/

License: Apache License 2.0

Languages: Nix 16.03%, Dhall 2.72%, Haskell 55.60%, Prolog 9.86%, Shell 0.16%, Makefile 0.01%, PureScript 2.50%, JavaScript 0.73%, HTML 0.03%, Rust 4.94%, Emacs Lisp 0.03%, TypeScript 7.37%
Topics: cardano, code-generation, flake-parts, haskell, nix, plutarch, plutus, purescript, types, protocol-buffers

lambda-buffers's Introduction

Lambda Buffers

LambdaBuffers banner

Introduction

LambdaBuffers is a schema language (similar to Protocol Buffers, ADL, ASN.1, JSON Schema, etc.) and an associated code generation toolkit. The goal of this project is to provide developers with tools to define algebraic data types in a language-agnostic format, such that shared data types can be declared in one place while maintaining compatibility across a plethora of supported languages.

Users may refer to the comparison matrix for an in-depth comparison of LambdaBuffers' features against the feature sets of other popular schema languages.

At a glance, you may wish to choose LambdaBuffers instead of one of its competitors if your project requires:

  1. Parameterized Data Types (a.k.a. type functions): Unlike Protocol Buffers or JSON Schema, LambdaBuffers allows users to define algebraic data types which take type variable arguments. If your project's domain is most accurately represented by parameterized data types, LambdaBuffers may be a good choice for your needs (see the schema sketch after this list).

  2. Opaque Types: Almost every competing schema language provides users with a fixed set of builtin or primitive types, which are handled in a special manner by the code generator and cannot be extended. LambdaBuffers, by contrast, allows users to add their own builtin types and extend the existing code generation framework to handle those builtins in whatever manner they intend. There are no special primitive types in LambdaBuffers; a user-defined primitive type is defined in exactly the same way (i.e. as an opaque type) as a LambdaBuffers "builtin".

  3. Typeclass Support: While nearly every schema language supports generating type definitions in supported target languages, to our knowledge no schema language supports generating commonly used functions that operate on those types. Unlike other schema languages, LambdaBuffers supports code generation for typeclass instances (or the equivalent in languages that lack typeclasses) to reduce the amount of boilerplate required to productively make use of the generated types. While LambdaBuffers is still a work in progress, we expect that, upon completion, an extensive test suite will provide a high degree of assurance that the instances/methods generated by the LambdaBuffers code generator behave identically across all target languages.
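To give a feel for the schema language, here is a small sketch of a parameterized type with derived instances (the module name is illustrative and the Prelude imports are omitted; the derive lines follow the `derive Eq XY` form quoted later in this page, extended to a parameterized type as an assumption):

module Demo

sum Foo a = MkFoo Int a | MkBar (Maybe a) String

derive Eq (Foo a)
derive Json (Foo a)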

Documentation

Visit LambdaBuffers Github Pages.

Acknowledgements

This project was graciously funded by the Cardano Treasury in Catalyst Fund 9 and Catalyst Fund 10.

Authors:

Contributors:

lambda-buffers's People

Contributors

aciceri, bladyjoker, chfanghr, cstml, dependabot[bot], gnumonik, hercules-ci[bot], jaredponn, nini-faroux, seungheonoh, szg251, t4ccer


lambda-buffers's Issues

rustFlake doesn't fail on `cargo clippy` AND `cargo build` warnings

warning: very complex type used. Consider factoring parts into `type` definitions
  --> src/indexer/callback.rs:22:5
   |
22 |     Arc<dyn Fn(Event) -> Pin<Box<dyn Future<Output = Result<(), E>> + Send + Sync>> + Send + Sync>,
   |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   |
   = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#type_complexity
   = note: `#[warn(clippy::type_complexity)]` on by default
warning: using `Result.or_else(|x| Err(y))`, which is more succinctly expressed as `map_err(|x| y)`
  --> src/indexer/callback.rs:43:9
   |
43 | /         rt.block_on(handle_event(input, |ev: Event| f(ev), &retry_policy, utils))
44 | |           .or_else(|err| {
45 | |             event!(Level::ERROR, label=%Events::EventHandlerFailure, ?err);
46 | |             Err(err)
47 | |           })
   | |____________^
   |
   = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#bind_instead_of_map
   = note: `#[warn(clippy::bind_instead_of_map)]` on by default
help: try
   |
44 ~           .map_err(|err| {
45 |             event!(Level::ERROR, label=%Events::EventHandlerFailure, ?err);
46 ~             err
   |
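A plausible fix (a sketch using standard cargo/rustc flags, not the repo's actual rustFlake wiring) is to make warnings fatal in the CI checks:

# Make clippy and rustc treat warnings as hard errors (illustrative).
cargo clippy --all-targets -- --deny warnings
RUSTFLAGS="--deny warnings" cargo build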

Catalyst milestone 3: Testing and documentation

Outputs

  • A test suite checking for correct mapping from schema data types to PlutusData encodings against a known-good corpus of such mappings (golden tests).

    • A dedicated lbt-plutus test suite was implemented for both the Haskell and Purescript backends. They leverage both a golden unit testing approach and randomized property-based testing to assert the essential properties of the LambdaBuffers Plutus package:
      • Plutus.V1.PlutusData derivation tests
        • Golden unit tests: for all goldens in `Days.Day.*.pd.json`: `(toJson . toPlutusData . fromPlutusData . fromJson) golden == golden`
        • Property tests: for all `x : Foo.*`: `(fromPlutusData . toPlutusData) x == x` (a sketch follows this list)
      • Plutus.V1.PlutusData instance tests
        • Golden unit tests: for all goldens in `*.pd.json`: `(toJson . toPlutusData . fromPlutusData . fromJson) golden == golden`
  • A test suite checking for roundtrip compatibility between codegened target environments.

    • A dedicated lbt-plutus test suite was implemented for both Haskell and Purescript backends.
    • A dedicated lbt-prelude test suite was implemented for both Haskell and Purescript backends.
    • Both include golden unit tests that provide assurances that these backends implement the LambdaBuffers packages in a mutually compatible manner.
  • A modular and contract-based test suite architecture streamlining codegen testing compliance for any of the supported typeclasses.

  • A document mapping the schema data types and typeclasses to their corresponding code-generated variants in the target environments.
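As a minimal sketch of the property-test half (assuming QuickCheck; Foo, toPlutusData and fromPlutusData are stand-ins for the generated LambdaBuffers API, with the signatures the law above implies):

import Test.QuickCheck (quickCheck)

-- Roundtrip law from above; Foo (plus its Eq/Show/Arbitrary instances) and
-- toPlutusData/fromPlutusData are assumed to come from the generated code.
prop_plutusDataRoundtrip :: Foo -> Bool
prop_plutusDataRoundtrip x = fromPlutusData (toPlutusData x) == x

main :: IO ()
main = quickCheck prop_plutusDataRoundtrip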

Acceptance Criteria

  • The test suites are passing for the Haskell+PlutusTx codegen module.
    • CI targets:
      • checks.x86_64-linux."check-lbt-prelude-haskell:test:tests"
      • checks.x86_64-linux."check-lbt-plutus-haskell:test:tests"
  • The test suites are passing for the Purescript+CTL codegen module.
    • CI targets:
      • checks.x86_64-linux."purescript:lbt-plutus:check"
      • checks.x86_64-linux."purescript:lbt-prelude:check"
  • The “Mappings” document is made available in the project repository.

Evidence of Milestone Completion

References:

Complete the Plutus .lbf schemas (with TxInfo and ScriptContext)

At the time of writing https://github.com/mlabs-haskell/lambda-buffers/blob/main/libs/lbf-plutus/, several PLA (plutus-ledger-api) types were not available in CTL, which is why I placed those types in:

  1. https://github.com/mlabs-haskell/lambda-buffers/blob/main/libs/lbf-plutus/Plutus/V1/Todo.lbf
  2. https://github.com/mlabs-haskell/lambda-buffers/blob/main/libs/lbf-plutus/Plutus/V2/Todo.lbf

This came back to bite us as now we have incomplete PLA libraries:

  1. mlabs-haskell/plutus-ledger-api-rust#11
  2. mlabs-haskell/plutus-ledger-api-typescript#12

TODO

  • lbf-plutus: Define opaques for the missing TxOut, TxInInfo, TxInfo and ScriptContext types for both V1 and V2 (see the sketch after this list)
  • lbr-plutus-haskell: Define Json instances for these opaques
  • lbr-plutus-purescript: Define these types here, or in CTL directly? Json is not implemented because we're waiting on plutus-ledger-api-purescript
  • lbt-plutus: Define new goldens for the new opaques and generate their Json and PlutusData files. Implement all tests in Haskell/Purescript/Plutarch/PlutusTx/Typescript/Rust
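The first item could look roughly like this in an .lbf schema (a sketch: opaque declarations only name the type, and each target language runtime supplies the implementation):

-- Plutus/V2.lbf (illustrative)
opaque TxOut
opaque TxInInfo
opaque TxInfo
opaque ScriptContext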

Catalyst project completion

Outputs

  • We will produce a system for easily sharing interfaces between languages which include the necessities required for development in Plutus and PlutusTx.
    • The LB Cardano Demo project demonstrates an end-to-end Cardano dApp that uses LambdaBuffers, with Plutus scripts written in both Plutarch and PlutusTx, while the Cardano Transaction Library is used to construct transactions and to test against a real network (using Plutip) that everything works as intended.
  • This project will generate code that integrates closely with target language/PAB environments, concretely Cardano Transaction Lib, PlutusTx, and Plutarch. Given that some of these projects are currently under active development, we anticipate that API breakages will be a common occurrence and we intend to bring respective development teams into a common social circle in order to organise for mutual evolution and growth.
    • The LB Cardano Demo project demonstrates an end-to-end Cardano dApp that uses LambdaBuffers, with Plutus scripts written in both Plutarch and PlutusTx, while the Cardano Transaction Library is used to construct transactions and to test against a real network (using Plutip) that everything works as intended.
  • We will adapt a project that uses Haskell/Plutarch for on-chain code and Haskell/PlutusTx for off-chain code. However, after the LambdaBuffers project concludes, we will very likely be using it on other projects that do use Purescript/CTL
    • As demonstrated in the LB Cardano Demo project, LambdaBuffers now supports vanilla Haskell and Purescript (we call this LB Prelude support), enabling cross-language sharing of basic types. Additionally, LambdaBuffers supports specialized backends used in Cardano dApp development, namely Plutarch and PlutusTx for Plutus script development and the Cardano Transaction Library for writing 'offchain' orchestration. All types are seamlessly shared, as promised.

Final output

  • A simple Plutus codebase can be created which demonstrates the capabilities for newcomers to work across multiple languages without interface/serialisation issues.
    • The LB Cardano Demo project demonstrates an end-to-end Cardano dApp that uses LambdaBuffers, with Plutus scripts written in both Plutarch and PlutusTx, while the Cardano Transaction Library is used to construct transactions and to test against a real network (using Plutip) that everything works as intended.
  • A user will be able to specify types required in their smart contract and seamlessly share these types across at least 2 languages used in the cardano ecosystem.
    • The LB Cardano Demo project demonstrates an end-to-end Cardano dApp that uses LambdaBuffers, with Plutus scripts written in both Plutarch and PlutusTx, while the Cardano Transaction Library is used to construct transactions and to test against a real network (using Plutip) that everything works as intended.
  • A demonstration video can be created to discuss the more robust capabilities of the dApp Schemas product; the video will be shared with Catalyst and on social media.

Adding rustfmt to pre-commit-hooks

By default, rustfmt looks for a Cargo.toml file in the root directory to determine which files should be formatted. However, as we're using a monorepo structure there's no such file, so the pre-commit hook fails. We must find a way to tell rustfmt where the package roots are.
Previously I configured cargo workspaces, which solved the problem, but it is not in alignment with the rules of our monorepo (the repo should be language agnostic), so this is a no-go.
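One possible workaround (a sketch; the exact hook wiring depends on the pre-commit framework) is to bypass cargo's Cargo.toml discovery entirely and run rustfmt on the tracked files directly:

# Format all tracked Rust sources without a workspace-level Cargo.toml.
rustfmt --edition 2021 $(git ls-files '*.rs')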

Parsing Stack Enhancements

This issue collects some internal discussion regarding possible improvements to the parsing stack.

Slack discussion identified the following tasks (discussion has been summarized):

  1. Iterate on syntax in the current parsing stack, so we can settle on something aesthetically pleasing for users.

    For example:

    • remove the sum/record/prod keywords and just rely on the RHS being uniquely parseable: { record }, (prod), | sum |. This would be helpful for the issue identified here

    • special unit type syntax, MaybeInt = Maybe Int (instead of the current prod MaybeInt = (Maybe Int))

    • standalone derive statements are a bit verbose (why not just derive Eq, Json, PlutusData in one statement)

  2. Update the formatter (I think that's basic stuff we need for automated code quality)

    • lbf format should be tested with the property: for all sources, meaningOf (lbf format source) == meaningOf source.
  3. A treesitter grammar, to get highlighting and symbol extraction in GitHub and editors for .lbf files (a quality-of-life improvement)

  4. Align the parsing stack with something we find robust and aligned with best practices (whilst making sure our error messages are amazing). Related issue comment

Catalyst milestone 4/final: Project adoption

Outputs

  • Integration tooling for the build environment (Cabal, Spago, Nix).

    • The LambdaBuffers team developed and provided their users with a set of Nix utility functions for building .lbf schemas into all the supported target language environments. Additionally, we extended the Nix support for working with Haskell and Purescript projects to allow for adding data and library dependencies, which is crucial for the adoption of any tool that leverages automated code generation.
    • The LB Cardano Demo project demonstrates the use of said Nix functions, which results in a very concise and complete scaffold for any LambdaBuffers powered project (a usage sketch follows this list).
    lambdabuffers-cardano-demo $ nix repl
    nix-repl> :lf .
    nix-repl> inputs.lbf.lib.x86_64-linux.
       inputs.lbf.lib.x86_64-linux.haskellData
       inputs.lbf.lib.x86_64-linux.haskellFlake
       inputs.lbf.lib.x86_64-linux.haskellPlutusFlake
       inputs.lbf.lib.x86_64-linux.lbfBuild
       inputs.lbf.lib.x86_64-linux.lbfHaskell
       inputs.lbf.lib.x86_64-linux.lbfPlutarch
       inputs.lbf.lib.x86_64-linux.lbfPlutarch'
       inputs.lbf.lib.x86_64-linux.lbfPlutusHaskell
       inputs.lbf.lib.x86_64-linux.lbfPlutusPurescript
       inputs.lbf.lib.x86_64-linux.lbfPreludeHaskell
       inputs.lbf.lib.x86_64-linux.lbfPreludePurescript
       inputs.lbf.lib.x86_64-linux.lbfPurescript
       inputs.lbf.lib.x86_64-linux.purescriptFlake
       inputs.lbf.lib.x86_64-linux.rustFlake
  • Continuous integration for regularly deploying toolkit packages to a package repository.

    • Hercules CI has been operating on the LambdaBuffers repo since the very start, and all the packages are readily available using Nix.
  • A Cardano dApp project partnership to help integrate the toolkit in their development and build environments.

    • The LB Cardano Demo project demonstrates an end-to-end Cardano dApp that uses LambdaBuffers, with Plutus scripts written in both Plutarch and PlutusTx, while the Cardano Transaction Library is used to construct transactions and to test against a real network (using Plutip) that everything works as intended.
  • Documentation for integrating the build environment.
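For illustration, a downstream flake might invoke one of these functions roughly as follows (the argument names here are assumptions, not the documented API; the demo project shows the real calling convention):

# Illustrative only: build a Haskell package out of .lbf schemas.
packages.my-plutus-api = inputs.lbf.lib.x86_64-linux.lbfPlutusHaskell {
  name = "my-plutus-api";        # hypothetical argument
  src = ./api;                   # hypothetical argument
  files = [ "MyProtocol.lbf" ];  # hypothetical argument
};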

Acceptance Criteria

  • Toolkit packages are available in a package repository (e.g. Nixpkgs, Hackage)

    • A development shell is available in the repo that users can enter to try out the LambdaBuffers toolkit:
    $ nix develop github:mlabs-haskell/lambda-buffers#lb
    $ lbf<TAB>
      lbf                        lbf-plutus-to-haskell      lbf-plutus-to-purescript   lbf-prelude-to-haskell     lbf-prelude-to-purescript  
  • Cardano dApp project maintains all the Plutus domain types in the configuration file and has fully equipped type libraries made available automatically via build environment integration tooling.

    • A demo project exists at https://github.com/mlabs-haskell/lambdabuffers-cardano-demo that showcases end-to-end use of the LambdaBuffers toolkit. The project defines the Plutus and configuration API in the api directory, the validation directory contains the same onchain script logic implemented using both PlutusTx and Plutarch, and the transactions directory contains transaction building logic using the Cardano Transaction Library. All the devops is achieved using concise, Nix-based build.nix recipes for each sub-project. Finally, the CI is instructed to run the end-to-end test that exercises both the Plutarch and PlutusTx scripts, ensuring everything works as intended.

Evidence of Milestone Completion

  • Completed and reviewed build environment integration tooling and source code are available in the project repository.
    • The extras directory contains all the Nix libraries the LB team developed to facilitate and streamline Cardano dApp development using LambdaBuffers and the supported language ecosystems (Haskell, Purescript, Cardano Transaction Library, PlutusTx and Plutarch).
  • Toolkit packages are available in the package repository.
    • All LambdaBuffers tools are available using Nix.
  • Proof of use is provided by the partner dApp.
    • The LB Cardano Demo project demonstrates the end to end Cardano dApp that uses LambdaBuffers and Plutus scripts written in both Plutarch and PlutusTx, whereas Cardano Transaction Library is used to construct transactions and test against the real network (using Plutip) that everything works as intended.
  • Tooling use documentation is available in the project repository.

References:

Catalyst milestone 1: Research

Outputs

  • A report summarizing user interviews and containing a qualitative analysis of the discovered use cases.
    • STATUS: Done (#17)
    • A 1.5-hour interview was conducted with 3 MLabs engineers whose work spans multiple Cardano dApp projects. Their feedback is made available in the repo.
    • Additionally, a survey was sent out to MLabs engineers and their feedback is made available in the repo.
  • An architecture design document.
  • A language specification document elaborating on the data type model features.
  • A related work document comparing the proposed technology via a feature matrix with others in the same space.
    • STATUS: Done (#17, #18)
    • A document comparing different schema technologies to LambdaBuffers is made available in the repo
  • An initial compiler implementation that performs some basic checks in accordance with the language specification.

Acceptance Criteria

  • At least 3 users/projects have been interviewed about their desired use case for this technology.
    • A 1.5-hour interview was conducted with 3 MLabs engineers whose work spans multiple Cardano dApp projects. Their feedback is made available in the repo.
    • Additionally, a survey was sent out to MLabs engineers and their feedback is made available in the repo.
  • The architecture design document is completed and available in the project repository.
  • The initial compiler implementation is completed, capturing SOME of the intended language semantics as described in the Language Specification

Evidence of Milestone Completion

  • Completed and reviewed design document is available in the project repository.
  • Completed and reviewed initial version of the compiler command line tool made available in the project repository.
    • The Frontend CLI called lambda-buffers-frontend-cli is made available in the repo and is currently able to parse, validate and format .lbf documents that contain the LambdaBuffers type modules:
lambda-buffers/lambda-buffers-frontend$ cabal run 
Usage: lambda-buffers-frontend-cli COMMAND

  LambdaBuffers Frontend command-line interface tool

Available options:
  -h,--help                Show this help text

Available commands:
  compile                  Compile a LambdaBuffers Module (.lbf)
  format                   Format a LambdaBuffers Module (.lbf)

There's ongoing work to integrate the Compiler CLI in the Frontend CLI.

  • Test case: Compiler is able to validate a schema that uses a subset of types and capabilities from the spec.

References:

Consolidating Unification with fd-unification

PRIORITY: MEDIUM

There's an opportunity to consolidate and structure our internals in a way that signals the underlying intention more clearly and removes unnecessary code, in favor of reusing a well-tested library: https://hackage.haskell.org/package/unification-fd-0.10.0.1/docs/Control-Unification.html.

We introduced a separate ad-hoc machinery to work with unifiable terms:

As a result, we have small implementations lying around that perform what are essentially unify, subsumes and freshVar from Control.Unification.

Logic programming requires us to clearly state:

  1. What are 'ground' terms,
  2. How do you lift 'ground' terms into 'unifiable' terms,
  3. What are 'unification variables'.

By using Control.Unification we could have a common module, LambdaBuffers.Compiler.Unification, where the necessary ProtoCompat.Types are lifted to their 'unifiable' counterparts and the monad stack is defined with BindingMonad and Fallible.

This could then be used throughout the Compiler, but it could be also used in Codegen.
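As a sketch of what that common module could look like (assuming unification-fd; TyF and its constructors are illustrative stand-ins for the ProtoCompat types, not the existing internals):

{-# LANGUAGE DeriveTraversable #-}

import Control.Unification (Unifiable (..))

-- A pattern functor for types: recursive positions become the parameter, so
-- UTerm TyF v can carry unification variables at any depth.
data TyF a
  = TyRefF String -- reference to a defined type
  | TyAppF a a    -- type application
  deriving (Functor, Foldable, Traversable)

instance Unifiable TyF where
  zipMatch (TyRefF x) (TyRefF y)
    | x == y = Just (TyRefF x)
  zipMatch (TyAppF f a) (TyAppF g b) =
    Just (TyAppF (Right (f, g)) (Right (a, b)))
  zipMatch _ _ = Nothing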

Strange Parses

There are some strange things that parse / don't parse for the front end.

For example,


module TEST
sum A = A

and

-- Module documentation
module TEST
sum A = A

do not parse.

Also,

module TEST

sum A = A
class MyClass a

deriveMyClass A

parses deriveMyClass as the keyword derive followed by MyClass.

There are other instances of this as well.

This issue proposes to fix these!

Sum and Product type expressions

Hey, we need to establish what type bodies we support and in which context.

Currently, we only support Sum of Tuples.

sum Foo a = MkFoo Int a | MkBar (Maybe a) String

However, the following questions arise:

  1. Do we enable Record expressions in Sum context?
sum Foo a = MkFoo {foo :: Int, fooz :: a} | MkBar { bar:: Maybe a, baz :: String}
  2. Do we enable first-class Tuples in TyDef context?
prod Foo a = a Int (Maybe a)

For example, Haskell tydef codegen for this would be:

data Foo a = MkFoo a Int (Maybe a)
  3. Do we enable first-class Records in TyDef context?
rec Foo a = { foo :: a, bar :: Int, baz :: (Maybe a) }

For example, Haskell tydef codegen for this would be:

data Foo a = MkFoo { foo :: a, bar :: Int, baz :: Maybe a }

Please share your thoughts about how that would codegen in different languages:

  1. Haskell
  2. Plutarch
  3. Purescript

haskell.nix: One must pick out targets explicitly or infinite recursion hell

Check out some of the Haskell build.nix files in the repo...

runtimes/haskell/lbr-prelude/build.nix:

...
   {
      devShells.dev-lbr-prelude-haskell = hsFlake.devShell;

      packages = {

        lbr-prelude-haskell-src = pkgs.stdenv.mkDerivation {
          name = "lbr-prelude-haskell-src";
          src = ./.;
          phases = "installPhase";
          installPhase = "ln -s $src $out";
        };

      } // hsFlake.packages;

      inherit (hsFlake) checks;

    };
}

or

runtimes/haskell/lbr-plutus/build.nix:

      devShells.dev-lbr-plutus-haskell = hsFlake.devShell;

      packages = {

        lbr-plutus-haskell-src = pkgs.stdenv.mkDerivation {
          name = "lbr-plutus-haskell-src";
          src = ./.;
          phases = "installPhase";
          installPhase = "ln -s $src $out";
        };

        lbr-plutus-haskell-lib = hsFlake.packages."lbr-plutus:lib:lbr-plutus";
        lbr-plutus-haskell-tests = hsFlake.packages."lbr-plutus:test:tests";
      };

      inherit (hsFlake) checks;

    };

Ideally one would just inherit everything from hsFlake (packages, devShell, checks), but for SOME reason this ends in an `infinite recursion encountered` error.

It's not urgent, but I foresee people being very confused by this.

Kind Checker - Update 1

Scope

From comments to PR #10:

  • #37
  • consider using monomorphic Kinds - resolve for all remaining variables in Kinds.

Additionally:

  • document the approach #13

Breaking up the typescript runtime to separate repos.

In #150, the Typescript runtime includes goodies that can be put into different repos / libraries.

In particular, it would be nice to break it up into repos

  • json-ts: for the json parser / serializer
  • prelude-ts: for the prelude types (Eq, Ord, Integer, Char, Map, Set, etc.)
  • plutus-ledger-api-ts for Plutus Ledger API types.

We could also bundle up tarballs and put them on GitHub, so npm users could just grab and extract the tarballs they are interested in without necessarily using nix.

See discussion here: #150 (comment)

Another TODO: prelude-ts currently has only basic unit tests, and these could be upgraded to property-based testing with fast-check (a sketch follows).
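Such a property might look roughly like this with fast-check (toJson/fromJson are hypothetical stand-ins for the prelude-ts API):

import fc from "fast-check";

// Hypothetical stand-ins for the prelude-ts Json API.
declare function toJson(n: bigint): string;
declare function fromJson(s: string): bigint;

// Roundtrip property: decoding an encoded Integer yields the original value.
fc.assert(fc.property(fc.bigInt(), (n) => fromJson(toJson(n)) === n));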

Compiler: Testing tasks

TODO:

  • Benign mutation that shuffles Constructors in a Sum - easy (a property sketch follows below)
  • Benign mutation that shuffles Fields in a Record - easy
  • Benign mutation that shuffles Tys in a NTuple - easy
  • Benign mutation that shuffles TyArgs in a TyAbs - difficult (requires updating call sites)
  • Corrupting mutation causing a NamingError
  • Corrupting mutation causing a ProtoParseError
  • Corrupting mutation causing a KindCheckError

Check out Mutations in https://github.com/mlabs-haskell/lambda-buffers/blob/134ff00445aac6c09bfe63ecea433c3c708c3753/lambda-buffers-compiler/test/Test/LambdaBuffers/Compiler/Mutation.hs

And how to install them in
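A benign-mutation property might be sketched like this (CompilerInput, shuffleSums and compile are placeholders for the actual compiler API; the needed Show/Eq instances are assumed):

import Test.QuickCheck (Gen, Property, forAll, (===))

-- Shuffling constructors in a Sum must not change the compiler's verdict.
-- shuffleSums :: CompilerInput -> Gen CompilerInput is a hypothetical helper.
prop_shuffleSumsBenign :: CompilerInput -> Property
prop_shuffleSumsBenign ci =
  forAll (shuffleSums ci) $ \ci' -> compile ci === compile ci'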

Leftovers for `docs/syntax.md`

In #116, docs/syntax.md was created to specify a LambdaBuffers Frontend file. The chapter is a first draft, and needs improvement.

The remaining work is as follows:

  • Rearranging grammar productions to be more consistent with internals: #117 (comment)
  • Documenting behavior of imports: #117 (comment)
  • General wording improvements / cohesiveness with other chapters.

Document Assumptions for TypeClass System

As Andrea pointed out to me, our typeclass system appears to be sound, but only given a boatload of assumptions. I should write those down somewhere so that we don't accidentally make a small change and break everything a few months from now.

Haskell codegen: No instance for ‘LambdaBuffers.Runtime.Prelude.Json LambdaBuffers.Plutus.V1.TxOutRef’

Once everything is put together, everything should compile; the fact that it doesn't means this is a bug.

Some deets:

    • No instance for ‘LambdaBuffers.Runtime.Prelude.Json
                         LambdaBuffers.Plutus.V1.TxOutRef’
        arising from a use of ‘LambdaBuffers.Runtime.Prelude.toJson’

Imports printed

import qualified LambdaBuffers.Plutus.V1
import qualified LambdaBuffers.Prelude
import qualified LambdaBuffers.Runtime.Prelude
import qualified PlutusTx
import qualified PlutusTx.Eq
import qualified PlutusTx.Maybe
import qualified PlutusTx.Prelude
import qualified Prelude

Cabal printed

cabal-version:      3.0
name:               lbf-infinity-plutus-api
version:            0.1.0.0
synopsis:           A Cabal project that contains LambdaBuffers generated Haskell modules
build-type:         Simple

library
    exposed-modules: LambdaBuffers.Infinity.Validation.Plutus.Vault LambdaBuffers.Infinity.Validation.Plutus.UAsset LambdaBuffers.Infinity.Validation.Plutus.UAsset.Location LambdaBuffers.Infinity.Validation.Plutus.Minting LambdaBuffers.Infinity.Validation.Plutus.UCoin LambdaBuffers.Infinity.Validation.Plutus.Location LambdaBuffers.Infinity.Validation.Plutus.Identity LambdaBuffers.Infinity.Validation.Plutus.Main LambdaBuffers.Infinity.Validation.Plutus.Entity 
    autogen-modules: LambdaBuffers.Infinity.Validation.Plutus.Vault LambdaBuffers.Infinity.Validation.Plutus.UAsset LambdaBuffers.Infinity.Validation.Plutus.UAsset.Location LambdaBuffers.Infinity.Validation.Plutus.Minting LambdaBuffers.Infinity.Validation.Plutus.UCoin LambdaBuffers.Infinity.Validation.Plutus.Location LambdaBuffers.Infinity.Validation.Plutus.Identity LambdaBuffers.Infinity.Validation.Plutus.Main LambdaBuffers.Infinity.Validation.Plutus.Entity 
    hs-source-dirs:     autogen

    default-language: Haskell2010
    default-extensions: NoImplicitPrelude
    build-depends: lbf-plutus, lbf-prelude, base, lbr-plutus, lbr-prelude, plutus-tx

Workaround

Add this to your problematic schema; it will bring the necessary imports into scope.

sum XY = X | Y
derive Eq XY
derive PlutusData XY
derive Json XY

Create a Rust Plutus runtime library

Create a prelude module for Plutus-specific functions in Rust.
There are two candidate supporting libraries: cardano-serialization-lib or pallas-primitives (a sketch of the core API follows the list below).

  • research and decide which of the above two libraries to use
  • implement and test
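Whichever library is chosen, the runtime would plausibly center on a PlutusData representation plus a conversion trait, roughly like this (a sketch; names are illustrative, not a published crate API):

// Sketch: an onchain data representation and a conversion trait.
pub enum PlutusData {
    Constr(u64, Vec<PlutusData>),
    Map(Vec<(PlutusData, PlutusData)>),
    List(Vec<PlutusData>),
    Integer(i128),
    Bytes(Vec<u8>),
}

pub trait IsPlutusData: Sized {
    fn to_plutus_data(&self) -> PlutusData;
    fn from_plutus_data(data: &PlutusData) -> Result<Self, String>;
}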

Codegen tasks

Tasks

  1. Communicate with CTL team about extracting CTL plutus-ledger-api types from CTL and into a separate library where we manage JSON/PlutusData/Eq (Otherwise we need to pull in the entire CTL -.-)
  2. Deep dive into the Haskell plutus-ledger-api and check what built-in encodings we must reuse (PlutusData of course, but JSON? Perhaps we need to write a JSON library for them),
  3. Plutarch type def generation from LB type defs, and how to implement Eq and PlutusData for these Plutarch types,
  4. Formulate a non-Plutus standard library of opaques: what goes in (numbers, text, strings, lists, arrays, sets/maps, etc.)? Collect target types in target languages (Purescript, Haskell, Typescript and Rust, in that order of priority)

Rust crate publishing and versioning

Publish lbr-prelude and lbr-prelude-derive to crates.io

TODO

  • bump version to v1
  • configure CI to publish crates to crates.io on tag push
  • if not too difficult, publish documentation to GitHub Pages (otherwise, using docs.rs is also sufficient)
  • push a git tag

Leftovers #114

#114

TODO:

  • Update documentation to reflect the new tools (cc @jaredponn). See #126

  • #127

  • Implement lbf-prelude-to-purescript and lbf-plutus-to-purescript CLIs (see the existing definition lbf-prelude-to-haskell = pkgs.writeShellScriptBin "lbf-prelude-to-haskell" ''…''). #121

  • Implement devShells with PlutusTx, CTL and other environments such that users can conveniently play with the lbf-prelude... and lbf-plutus CLIs.

    • I basically want to do nix develop github:mlabs-haskell/lambda-buffers#dev-plutustx, be able to generate the PlutusTx code with lbf-plutus-to-haskell, and then simply run ghci so users can inspect and work with the generated libraries. #122
    • #129
  • #128

Document LambdaBuffers packages (Prelude and Plutus)

When implementing runtimes for LB packages (Prelude, Plutus), we need to have a 'language neutral' specification of:

  1. Which opaque types are listed, and what do they map to in different languages.
  2. How opaque types support a type class
    • Json encodings
    • PlutusData encodings
    • Equality
  3. How transparent types support a type class via derivation
    • Given a sum/record/prod type, what does a Json/PlutusData encoding look like? How is equality performed?

Let's add necessary documentation for this.

I sense that documentation should be associated with an LB package, so it's basically documentation for the LB package.

  • docs/lb-prelude.md
  • docs/lb-plutus.md

Plutarch codegen: Recursive data type support

One does not simply recurse in Plutarch.

My raw lambda calculus skills are not at the level where I can just pop out fixpoint-based terms, so let's get back to the drawing board:

For a canonical recursive data type example and some mutually recursive ones:

sum List a = Cons a (List a) | Nil
sum F a = Rec (G a) | Nil
sum G a = Rec (F a) | Nil

What we eventually do is invoke some polymorphic class method on the constituents of each constructor (for sum types, but also for products and records). This is where the problem happens, right? How would we use pfix in this situation?

(let's imagine we have an annotation that tells us whether a type is infinite or not).

pfix :: Term s (((a :--> b) :--> (a :--> b)) :--> (a :--> b))

fib :: Term s (PInteger :--> PInteger)
fib = phoistAcyclic $
  pfix #$ plam $ \self n ->
    pif
      (n #== 0)
      0
      $ pif
        (n #== 1)
        1
        $ self # (n - 1) + self # (n - 2)

Main question is: How do we generate code in a uniform manner such that we can recurse properly?

Figure out a nicer Rust version handling, and updating

The current setup uses git revisions to refer to versions in many different places, including plutus-ledger-types. This will get us into a game of chasing git revisions, which we all know too well from plutus-apps, so it would be best to avoid it...

CTL: Implement Prelude.Json instances for the Plutus schema

This feature completes the Plutus package with LB Json support which enables users to exchange these types across language boundaries.

TODO

  • Extract CTL plutus-ledger-api types in a separate repo
  • purescript-plutus-ledger-api: Implement the LambdaBuffers.Runtime.Prelude.Json class instance for all the plutus-ledger-api types
  • lbt-plutus: Tests against golden instances

Catalyst milestone 2: End to end proof of concept

Outputs

  • A configuration DSL for specifying domain data types.
    • LambdaBuffers Frontend supports specifying modules with opaque, product/record and sum type definitions. Additionally, type class definitions are supported, as well as type class rule definitions using the 'instance clause' and 'derive' syntax.
    • Refer to the standard LambdaBuffers library lbf-base to get a sense of what the language looks like.
  • A compiler tool that outputs the interpreted configuration.
  • A Codegen module that takes in the interpreted configuration and outputs a Haskell+PlutusTx (was Plutarch) Cabal project containing all the types and necessary type class wiring.
  • A Codegen module that takes in the interpreted configuration and outputs a Purescript+CTL Spago project containing all the types and necessary wiring.

Acceptance Criteria

  • The generated Haskell+Plutarch Cabal project can successfully be built.
  • The generated Purescript+CTL Spago project can successfully be built.
  • All the above codegen modules are reviewed and made available in the project repository.

Evidence of Milestone Completion

Demo recordings

Demo files:

References:

Compiler: Performance degradation

It seems like #75 introduced some performance issues.

lbf-comp -w goldens/good/work-dir -i goldens/good -f goldens/good/LambdaBuffers.lbf

Takes a couple of seconds to finish.

Support Plutarch

TODO:

  • Map Plutus LB types to Plutarch equivalent.
  • Type definition printing.
  • Implementation printing (PlutusData class/encoding).
  • Testsuite.

PlutusData typeclass implementation should be unconditionally printed during TyDef

@t4ccer
Alright, I have a first possible bug. If I have

sum NftMarketplaceRedeemer = Buy | Cancel

and run it through lbf-plutus-to-plutarch and try to compile, I'll get a GHC error saying

    • No instance for (Plutarch.Prelude.PlutusType
                         NftMarketplaceRedeemer)
        arising from the 'deriving' clause of a data type declaration
      Possible fix:
        use a standalone 'deriving instance' declaration,
          so you can specify the instance context yourself
    • When deriving the instance for (Plutarch.Show.PShow
                                        NftMarketplaceRedeemer)
   |
40 |   deriving anyclass Plutarch.Show.PShow
   |                     ^^^^^^^^^^^^^^^^^^

Of course adding derive PlutusData NftMarketplaceRedeemer fixes the issue, but I have a feeling that it should be caught before handing things off to GHC.

Align error reporting with GNU standard

Could LB please follow a more or less standard error format like https://www.gnu.org/prep/standards/html_node/Errors.html? Emacs, VS Code and many terminal emulators can recognise it and make errors clickable, etc.
A very tiny difference, but a huge experience boost.

Current:

[lbf][ERROR][COMPILER]types/NftMarketplace.lbf:(16:11)-(16:12) An unbound type variable 'b' was found in module 'NftMarketplace' in a type definition for 'Bar'

Proposed (GNU-style):

./types/NftMarketplace.lbf:16.11-16.12: [lbf][ERROR][COMPILER] An unbound type variable 'b' was found in module 'NftMarketplace' in a type definition for 'Bar'
