
Rewatch


Rewatch is an alternative build system for the ReScript compiler (which uses a combination of Ninja, OCaml, and a Node.js script). It strives to deliver consistent and faster builds in monorepo setups. Bsb doesn't support a watch mode in a monorepo setup, and setting up a watcher that runs a global incremental compile is consistent but very inefficient, and thus slow.

We couldn't find a way to improve this without re-architecting the whole build system. The benefit of a specialized build system is that it can be tailored completely to ReScript, without being dependent on the constraints of a generic build system like Ninja. This allowed us to achieve significant performance improvements even in non-monorepo setups (30% to 3x improvements reported).

Project Status

This project should be considered in beta status. We run it in production at Walnut. We're open to PRs and other contributions to make it 100% stable in the ReScript toolchain.

Usage

  1. Install the package:
     yarn add @rolandpeelen/rewatch
  2. Build / Clean / Watch:
     yarn rewatch build
     yarn rewatch clean
     yarn rewatch watch

You can pass the folder where the 'root' bsconfig.json lives as the second argument. If you encounter a 'stale build error', either immediately or after a while, a clean may be needed to remove old compiler assets.

Full Options

Find this output by running yarn rewatch --help.

Rewatch is an alternative build system for the Rescript Compiler bsb (which uses Ninja internally). It strives to deliver consistent and faster builds in monorepo setups with multiple packages, where the default build system fails to pick up changed interfaces across multiple packages

Usage: rewatch [OPTIONS] [COMMAND] [FOLDER]

Arguments:
  [COMMAND]
          Possible values:
          - build: Build using Rewatch
          - watch: Build, then start a watcher
          - clean: Clean the build artifacts

  [FOLDER]
          The relative path to where the main bsconfig.json resides. IE - the root of your project

Options:
  -f, --filter <FILTER>
          Filter allows for a regex to be supplied which will filter the files to be compiled. For instance, to filter out test files for compilation while doing feature work

  -a, --after-build <AFTER_BUILD>
          This allows one to pass an additional command to the watcher, which allows it to run when finished. For instance, to play a sound when done compiling, or to run a test suite. NOTE - You may need to add '--color=always' to your subcommand in case you want to output colour as well

  -n, --no-timing <NO_TIMING>
          [possible values: true, false]

  -c, --create-sourcedirs <CREATE_SOURCEDIRS>
          This creates a source_dirs.json file at the root of the monorepo, which is needed when you want to use Reanalyze
          
          [possible values: true, false]

      --compiler-args <COMPILER_ARGS>
          This prints the compiler arguments. It expects the path to a rescript.json file. This also requires --bsc-path and --rescript-version to be present

      --rescript-version <RESCRIPT_VERSION>
          To be used in conjunction with compiler_args

      --bsc-path <BSC_PATH>
          A custom path to bsc

  -h, --help
          Print help (see a summary with '-h')

  -V, --version
          Print version

Contributing

Pre-requisites:

  • Rust
  • NodeJS - for running test scripts only
  • Yarn or npm - npm probably comes with your Node installation

Getting started:

  1. cd testrepo && yarn (install dependencies for the submodule)
  2. cargo run

Running tests:

  1. cargo build --release
  2. ./tests/suite.sh

rewatch's People

Contributors

jfrolich, rolandpeelen, fhammerschmidt, dzakh, endosama, cknitt, diogomqbm, zth, yummy-sk, tenst, tomis


rewatch's Issues

Embed languages spec

This is a WIP discussion for implementing generators support in the style of https://github.com/zth/rescript-embed-lang natively in rewatch and the compiler itself.

Relevant compiler PR: rescript-lang/rescript-compiler#6823. That PR does the following in the compiler:

  • Make bsc output an .embeds file together with the .ast file, if the file processed has embeds. It'll also print 1 to stdout if it found embeds. More about .embeds and its format later.
  • Run a PPX that replaces the embed tags with links to the generated module for that content. More on that later too.

Generators and embeds are used somewhat interchangeably in the text below. A generator is the program that generates code from some source input; an embed is that source input, embedded into the ReScript source itself.

Configuring generators in the consuming project

We need a way to configure what generators to use, so the build system knows what to run for each embed. This should be done in rescript.json for consistency.

Suggestion: Like PPXes, point to a path

In this alternative, you point to a path. That path should lead to some sort of configuration file that the build system can read once to figure out which generator this is and how to run it. Example:

rescript.json in the consuming project.

{
  "embeds": {
    "generators": ["pgtyped-rescript/embed"]
  }
}

Example embed.json in the pgtyped-rescript package:

{
  "tags": ["sql", "sql.one", "sql.many", "sql.expectOne"],
  "command": "bun generator.js"
}

We'll go more into how to build generators later, but the build system would expect to be able to send some configuration as an argument to that command and have it generate from that config.

Note that the command could be any type of binary. It's bun here, but it could be node, or a Rust/OCaml/whatever binary. It doesn't matter. It's up to users to have what's needed installed on their system to run the generation.

This leaves us room to add more configuration if wanted, as well as give good DX with minimal manual work.

So, to recap what the build system would do:

  • Read embeds in rescript.json
  • Resolve each embed as it resolves the path to a PPX today
  • Append .json if it's not already in the file path
  • Read the configuration in the embed json file

It now knows what generator this is, how to run it, and what tags to run it for.
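The "append .json" step of the resolution above can be sketched in Rust. This is a minimal, hypothetical helper (`embed_config_path` is not a real rewatch function); the actual resolution would follow however rewatch resolves PPX paths today:

```rust
use std::path::PathBuf;

/// Hypothetical sketch: turn a configured entry like "pgtyped-rescript/embed"
/// into the path of its JSON config file, resolved relative to node_modules.
fn embed_config_path(node_modules: &str, entry: &str) -> PathBuf {
    let mut path = PathBuf::from(node_modules).join(entry);
    // Append .json if it's not already in the file path.
    if path.extension().map(|e| e != "json").unwrap_or(true) {
        path.set_extension("json");
    }
    path
}

fn main() {
    let p = embed_config_path("node_modules", "pgtyped-rescript/embed");
    println!("{}", p.display());
}
```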

Configuring where to emit the generated content

I think we should force the user to configure a central place to emit generated files, like ./src/__generated__. This will simplify a lot, and scale well up to the point where there are so many files in the same folder that you start to get perf issues, at which point we can solve that in a number of ways.

A proposed config could look like this:

{
  "embeds": {
    "generators": ["pgtyped-rescript/embed"],
    "artifactFolder": "./src/__generated__"
  }
}

We need to check that that folder is inside of a configured ReScript source folder etc, but that should be fine.

Questions and things to figure out

  • What if things clash, as in several embeds operate on the same tag names?

Overview of potential setup in build system

Here's an overview of how the build system could handle running generators.

This is how it looks at a high level:

Finding embeds

You can embed other languages or any string content into tags inside of ReScript. Example:

let findOne = %sql.one(`select * from users where id = :id!`)

let findMany = %sql.many(`select * from users`)

If there's a generator configured for sql.one, bsc will spit out a .embeds file next to .ast when it's asked to produce the .ast file. It looks roughly like this (format very much subject to change, we'll make it whatever makes most sense and is easiest/most efficient to read from the build system):

<<- item begin ->>
sql.one
select * from users where id = :id!
1:23-1:60

<<- item begin ->>
sql.many
select * from users
3:88-3:109

If bsc found embeds and wrote a .embeds file, it'll print 1 to stdout.
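To make the .embeds handling concrete, here is a sketch of a parser for the illustrative format shown above. The `Embed` struct and `parse_embeds` are hypothetical names, and the format itself is explicitly subject to change:

```rust
#[derive(Debug, PartialEq)]
struct Embed {
    tag: String,     // e.g. "sql.one"
    content: String, // the embedded source string
    loc: String,     // e.g. "1:23-1:60"
}

/// Sketch: split the file on the item marker; each item is a tag line,
/// one or more content lines, and a final location line.
fn parse_embeds(input: &str) -> Vec<Embed> {
    input
        .split("<<- item begin ->>")
        .filter_map(|item| {
            let lines: Vec<&str> = item.trim().lines().collect();
            if lines.len() < 3 {
                return None; // skip the empty chunk before the first marker
            }
            Some(Embed {
                tag: lines[0].to_string(),
                content: lines[1..lines.len() - 1].join("\n"),
                loc: lines[lines.len() - 1].to_string(),
            })
        })
        .collect()
}

fn main() {
    let src = "<<- item begin ->>\nsql.one\nselect * from users where id = :id!\n1:23-1:60\n\n<<- item begin ->>\nsql.many\nselect * from users\n3:88-3:109\n";
    let embeds = parse_embeds(src);
    assert_eq!(embeds.len(), 2);
}
```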

Running generators

Now, if we found embeds we'll want to run the appropriate generator for that file, if the embedded content has changed.

Generators are expected to be idempotent. We're building a pretty aggressive cache mechanism into this. This is important and will make the DX much better, including not having to run any generators in CI etc unless you really want to. Control that by simply committing or not committing the generated files.

So, we load the .embeds file, go through each of the embeds, and check whether they've already been generated. If they've been generated, we check if the generated content was generated from the same input, via a comment with a hash of the source content at the top of the generated file. If the generated file wasn't generated from the same source, or if it hasn't been generated yet, we run the appropriate generator and write the generated file.
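The hash check described above could be sketched like this. `source_hash` and `is_up_to_date` are hypothetical names, and a real implementation would use a stable content hash rather than Rust's `DefaultHasher` (whose output is not guaranteed stable across releases):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Stand-in content hash for the sketch.
fn source_hash(content: &str) -> String {
    let mut h = DefaultHasher::new();
    content.hash(&mut h);
    format!("{:x}", h.finish())
}

/// Compare the hash in the generated file's first-line
/// "// @sourceHash <hash>" comment against the current embed content.
fn is_up_to_date(generated_file: &str, embed_content: &str) -> bool {
    generated_file
        .lines()
        .next()
        .and_then(|line| line.strip_prefix("// @sourceHash "))
        .map(|hash| hash == source_hash(embed_content))
        .unwrap_or(false)
}

fn main() {
    let embed = "select * from users where id = :id!";
    let file = format!("// @sourceHash {}\ntype response = unit\n", source_hash(embed));
    assert!(is_up_to_date(&file, embed));
    assert!(!is_up_to_date(&file, "select * from users"));
}
```

If the check fails, or the generated file is missing entirely, the generator runs again and the file is rewritten with a fresh hash.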

Here are a number of hands-on examples:

First time a generation runs
// SomeFile.res
let findOne = %sql.one(`select * from users where id = :id!`)

let findMany = %sql.many(`select * from users`)
  1. bsc extracts 2 embeds from SomeFile.res and prints 1 to stdout to signify that
  2. The build system reads the SomeFile.embeds file generated by bsc, and figures out that 2 files are to be generated: src/__generated__/SomeFile__sql_one__M1.res and src/__generated__/SomeFile__sql_many__M1.res. Notice the file format <sourceModuleName>__<tagName.replace(".", "_")>__M<indexOfTagInFile>. If multiple embeds of the same tag exist in the same file (multiple %sql.one for example), the M part is incremented, like src/__generated__/SomeFile__sql_one__M2.res for the next embed.
  3. The build system checks if the generated files exist already. They don't, so...
  4. ...the build system triggers the appropriate generator for each embedded content. Maybe by passing stringified JSON as the sole argument to the generator: /command/to/run/generator '{"tag":"sql.one","content":"select * from users where id = :id!","loc":{"start":{"line":1,"col":23},"end":{"line":1,"col":60}}}'. This can all be done in parallel, since the generators should be idempotent (at least to start with).
  5. The generator runs, and returns either the generated content, or errors. More about errors below.
  6. The build system writes the generated content, including a source hash for the input it was generated from at the top of each generated file. Here's how a file could look:
    src/__generated__/SomeFile__sql_one__M1.res
// @sourceHash 83mksdf8782m4884i34
type response = {...}
// More generated content in here
  7. New files were added, so we need to add these new files to the build system build state, and trigger ast generation of them. Notice that embeds in files generated by other embeds are not allowed. That way we avoid potentially slow and recursive embeds.
  8. The build system cleans up any lingering embeds that are now irrelevant, if they exist. Maybe by just querying the file system for src/__generated__/SomeFile__sql_one__*.res and src/__generated__/SomeFile__sql_many__*.res and then removing any of them that aren't in use any more. This also needs to be updated in the build state.
  9. Finally, when things have settled and the build system is ready, we move on to the compilation phase, as usual.
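The generated-file naming scheme used in the steps above (<sourceModuleName>__<tagName.replace(".", "_")>__M<indexOfTagInFile>.res) is easy to pin down in a small sketch; the helper name is hypothetical:

```rust
/// Sketch of the proposed generated-file naming scheme. `index` is the
/// 1-based position of this tag occurrence within the source file.
fn generated_file_name(source_module: &str, tag: &str, index: usize) -> String {
    // Dots in the tag name are replaced so the result is a valid module name.
    format!("{}__{}__M{}.res", source_module, tag.replace('.', "_"), index)
}

fn main() {
    assert_eq!(
        generated_file_name("SomeFile", "sql.one", 1),
        "SomeFile__sql_one__M1.res"
    );
    // A second %sql.one in the same file increments the M counter.
    assert_eq!(
        generated_file_name("SomeFile", "sql.one", 2),
        "SomeFile__sql_one__M2.res"
    );
}
```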
When generated content hasn't changed

The same setup as the first example, up until point 3, where instead:
3. Generated files exist for both embeds: src/__generated__/SomeFile__sql_one__M1.res and src/__generated__/SomeFile__sql_many__M1.res
4. The build system reads the first line of each of those files, and extracts the @sourceHash
5. It then compares the hash from the file with hashing the content extracted from the .embeds file.
6. All hashes match, so no generation needs to run, and the build state can be considered valid. Continue to regular compilation.

When generated content has changed

The same setup as above, but from point 5:
5. The hashes do not match. Run the generation again, as described in point 4 of the first example.

Cleaning up

We'll need to continuously ensure that we clean up:

  • .embeds files when there aren't any embeds anymore (as noted by bsc not printing 1 to stdout)
  • Generated files when their parent source tag doesn't exist anymore

When errors in generation happen

We can flesh this out more, but ideally, when errors in generation happen, we can propagate those to the build system and have the build system both fail and write them to .compiler.log so that they end up in the editor tooling.

The one thing to take care of here is translating error locations: the generator returns errors relative to the content it received, while the build system and editor tooling present the error offset to the correct location in the source file.
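That offset translation can be sketched as follows, assuming positions come as line/column pairs like the `1:23-1:60` locations in the .embeds example (the `Pos` type and `to_source_pos` are hypothetical):

```rust
#[derive(Debug, PartialEq, Clone, Copy)]
struct Pos {
    line: u32,
    col: u32,
}

/// Offset an error position that is relative to the embedded content so it
/// points into the original source file. `embed_start` is where the embedded
/// content begins in the source (taken from the .embeds location).
fn to_source_pos(embed_start: Pos, rel: Pos) -> Pos {
    if rel.line == 1 {
        // First line of the content: both line and column need offsetting.
        Pos { line: embed_start.line, col: embed_start.col + rel.col - 1 }
    } else {
        // Later lines keep their own column; only the line is shifted.
        Pos { line: embed_start.line + rel.line - 1, col: rel.col }
    }
}

fn main() {
    // e.g. the %sql.one content starting at 1:23
    let start = Pos { line: 1, col: 23 };
    assert_eq!(to_source_pos(start, Pos { line: 1, col: 5 }), Pos { line: 1, col: 27 });
    assert_eq!(to_source_pos(start, Pos { line: 2, col: 3 }), Pos { line: 2, col: 3 });
}
```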

Regenerating content?

The idea is that you can simply remove the generated file, at which point it'll be regenerated the next time the build system processes the file with the source content.

Questions and thoughts

  • Should generators be idempotent? This makes things a lot easier, and faster, but what about the scenario where for example a GraphQL schema changes, and we want to regenerate because of that? We probably need to figure out a few more strategies.

Circular dependency that doesn't exist

I tried rewatch in a large monorepo and got a circular dependency error for a cycle that doesn't exist.

The code looks like this:

  • ProjectFrontend namespace: false
    • Server
    • Component
    • Api
    • Context
  • ProjectBackend namespace: true
    • UnrelatedModule
      • Make Functor
        • module Server = ...

Component -> Api -> ProjectBackend.UnrelatedModule -> Server -> Api

ProjectFrontend uses types from ProjectBackend, but not from UnrelatedModule. It appears that it is mixing up the Server module inside the functor with the Server module in ProjectFrontend.

I believe it has to do with the namespace option. Changing ProjectFrontend to namespace: true results in different errors that are fixable.

pnpm monorepo support - ppx resolution failing

Following the recent addition of pnpm support I was looking to move our monorepo to rewatch and noticed a couple of issues which I've reproduced in a thin repo.

Firstly, ppx binary resolution isn't working correctly when ppx dependency is from a child.

In this monorepo example (mind the branch!) we have dependencies set up as follows

  • @monorepo/root has no rescript of its own, but is configured with @monorepo/main as a bs-dependency
  • @monorepo/main depends on @monorepo/library
  • @monorepo/library depends on rescript-logger and uses "ppx-flags": ["rescript-logger/ppx"]

Actual outcome, running rewatch build in root:

❯ rewatch build .
[1/7]📦 Building package tree...Could not read folder: test/intl...
[1/7] 📦 Built package tree in 0.00s
[2/7] 🕵️  Found source files in 0.00s
[3/7] 📝 Read compile state 0.01s
[4/7] 🧹 Cleaned 0/92 0.00s
[5/7] 🧱 Parsing... ⠁ 1/1                                                                                                                                                                
err: sh: /Users/tiago/src/tabazevedo/rewatch-pnpm-test/node_modules/rescript-logger/ppx: No such file or directory

  We've found a bug for you!
  /Users/tiago/src/tabazevedo/rewatch-pnpm-test/packages/library/src/Library.res

  Error while running external preprocessor
Command line: /Users/tiago/src/tabazevedo/rewatch-pnpm-test/node_modules/rescript-logger/ppx '/var/folders/fl/k9vmqsxx3yl92r6_ch6rvc5w0000gn/T/ppx6b9ce9Library.res' '/var/folders/fl/k9vmqsxx3yl92r6_ch6rvc5w0000gn/T/ppx74d7b1Library.res'



[5/7] ️🛑 Error parsing source files in 0.01s
sh: /Users/tiago/src/tabazevedo/rewatch-pnpm-test/node_modules/rescript-logger/ppx: No such file or directory

  We've found a bug for you!
  /Users/tiago/src/tabazevedo/rewatch-pnpm-test/packages/library/src/Library.res

  Error while running external preprocessor
Command line: /Users/tiago/src/tabazevedo/rewatch-pnpm-test/node_modules/rescript-logger/ppx '/var/folders/fl/k9vmqsxx3yl92r6_ch6rvc5w0000gn/T/ppx6b9ce9Library.res' '/var/folders/fl/k9vmqsxx3yl92r6_ch6rvc5w0000gn/T/ppx74d7b1Library.res'


  ️🛑 Could not parse Source Files

It's expecting the rescript-logger library to be hoisted to the top-level and looking up the binary there:
PROJECT_ROOT/node_modules/rescript-logger/ppx

Expected outcome

Binary lookup path is non-hoisted variant:
PROJECT_ROOT/node_modules/@monorepo/main/node_modules/@monorepo/library/node_modules/rescript-logger/ppx


Let me know if I'm missing something, I'll raise a couple of other issues with similar findings in other scenarios.

Quiet mode / CI mode

For CI it would be nice if there was a quiet mode where there is no output other than error messages.

sourcedirs.rs panics after upgrading from v1.0.4 to 1.0.5

Run rewatch clean and rewatch watch. Then I see the following error:

> pnpm rewatch watch . 

[1/7] 📦 Built package tree in 0.00s
[2/7] 🕵️  Found source files in 0.00s
[3/7] 📝 Read compile state 0.00s
[4/7] 🧹 Cleaned 0/0 0.00s
[5/7] 🧱 Parsed 120 source files in 0.14s
[6/7] ️🌴 Collected deps in 0.00s
[7/7] 🤺 ️Compiling... ⠂ 122/122                                                        thread 'thread '<unnamed><unnamed>' panicked at src/sourcedirs.rs:26:10:
called `Result::unwrap()` on an `Err` value: StripPrefixError(())
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
' panicked at src/sourcedirs.rs:26:10:
called `Result::unwrap()` on an `Err` value: StripPrefixError(())
 ELIFECYCLE  Command failed with exit code 101.

I rolled back and it works fine again.

dependencies

  • "@rescript/react": "^0.12.1",
  • "rescript": "11.1.0",

[RFC] First class codegen support

This is a brain dump of an idea I've had for a long time around how we can make codegen a first class citizen that's easy to use and orchestrate in ReScript. I'm posting this here in the Rewatch repo because exploring it involves changes to the build system, and Rewatch looks like a great place to try that type of changes.

Summary

Proposing first class support for code generation in the ReScript build system and compiler. This can enable easily embedding other languages directly in your code. SQL, EdgeQL, GraphQL, markdown, CSS - anything really. Generators can be written in any language, and the build system will take care of everything from when to trigger the generators most efficiently, to managing the generated files from each generator (regenerate, delete, etc).

Here's a quick pseudo example of how this idea could work for embedding other languages, implementing a type safe SQL code generator:

// UserResolvers.res
// module Query is replaced with a reference to the file generated by the sql generator
module Query = %gen.sql(`
  select * from users where id = $1
`)

// The file generated by the sql generator has a function called `query`, that takes an argument `id`
let getUserById = (~id) => Query.query(id)

Let's break down at a high level how this pseudo example could work.

  1. The build system scans UserResolvers.res before it compiles it, and sees that it has %gen.sql. It looks for a generator registered under the sql name.
  2. It finds our sql generator and calls it with some data including the file name, the string inside of %gen.sql(), and a few other things that can help with codegen. The generator in this example will leverage information from a connected SQL database to type the query fed to it, and generate a simple function to execute the query. Since the generator is responsible for emitting an actual .res file and not rewrite an AST, it can be written in any language, as long as we can call it and feed it data via stdin.
  3. The generator runs and outputs UserResolvers__sql.res. The build system knows this and now handles UserResolvers__sql.res as a dependency, meaning it knows when to clean up the generated file, and so on.
  4. A built in PPX in the compiler turns the module Query = %gen.sql part into module Query = UserResolvers__sql. A very simple heuristics-based swap from the embedded code definition to the module its generator generates, powered by rules around how to name files emitted by generators.

Generation will be easily cacheable, since regeneration of the files is separate from the compiler running. This means that the build system and the generator in tandem decides when to regenerate code. And this in turn means that you pay the cost of code generation only when the source code for the generation itself changes.

There's of course a lot of subtlety and detail to how to make this work well, be performant, and so on. But the gist is the above. I'll detail with more examples later.

Goals

The idea behind this is that codegen is a fairly simple tool that's efficient in many use cases, but is too inaccessible right now. To do codegen today, you need to either write a PPX, or, for separate codegen, have:

  • Your own watcher that watches whatever source files you generate from
  • Your own dependency management of the files you generate
  • Separate build commands/processes for your code generators

With the approach to codegen outlined above, you'll instead need:

  • A code generator written in whatever language you want
  • Some simple configuration

...and that's it. The ReScript compiler and build system handles the rest.

Concerns

Performance

Performance is king. We need to be very mindful to keep build performance as fast as possible. This includes intelligent caching etc, but also setting up good starter projects for building performant generators.

We can of course ask users to write generators in performant languages like Rust and OCaml. But, one strength of this proposal is that you should be able to write generators in JS and ReScript directly. This has several benefits:

  • Using ReScript to write ReScript tooling is nice because ReScript is obviously a nice language
  • The JS ecosystem is huge and has tooling and packages for almost everything
  • All of the regular reasons JS is nice to write - not having to build and distribute binaries for each target platform, etc

In order to make the JS route as performant as possible, we can for example recommend using https://bun.sh/, a JS runtime with fast startup, and include tips on how to keep Bun startup performance fast.

As for the design of the generators themselves, they can hopefully be designed in a way so that they can:

  • Run async in dev mode, so they don't slow down the regular compiler
  • Be possible to run in parallel
  • Be heavily cacheable

Tooling (LSP, syntax highlighting, etc)

Embedding languages in other languages is a pretty common practice. For example, we already have both graphql-ppx and RescriptRelay embedding GraphQL in ReScript. So for tooling, it's a matter of adjusting whatever tooling already exists to be able to understand embedded code in ReScript.

Error reporting

In an ideal world, code generators can emit build errors that the build system picks up, and by extension reports to the user via the editor tooling. This would be the absolute best solution, if codegen errors are picked up and treated like any compiler error.

Future and ideas

Here are some loose ideas and thoughts:

  • We can have a dedicated editor code action to rerun a code generator whenever needed. Good for generators where you want full control of when they're rerun.
  • Generators could be driven both by embedded languages (%gen.sql as example is above) or by fully separate files (.gql, .sql, etc).
  • Generators could be both installable (npm packages) and local hand rolled (point to local file that's the code generator). In the package case, we could find a way for each package to be able to provide its own configuration.
  • We can provide "optimized" general tooling for writing code generators in ReScript (and OCaml?).
  • Could support AST based generation, as in allow regular ReScript code in %gen.<generator>, and pass a representation of that AST to generators.

Use case examples

Not sure we actually want to encourage all of these, but just to show capabilities.

Embedding EdgeDB

I did an experiment a while back for embedding EdgeDB inside of ReScript: https://twitter.com/___zth___/status/1666907067192320000

That experiment would fit great with this approach:

  • A generator for EdgeDB is written in JS and registered for %gen.edgedb.
  • That generator calls out to the general EdgeDB tooling to produce the types needed.
  • That's it. The build system handles the rest.

Embedding GraphQL

The same goes for GraphQL. For those who don't want to use a PPX-based solution, it'd be easy to build a generator (something similar to https://the-guild.dev/graphql/codegen perhaps) that just emits ReScript types and helpers.

Type providers: OpenAPI clients

F# has a concept of "type providers": https://learn.microsoft.com/en-us/dotnet/fsharp/tutorials/type-providers/
We could do something similar with this approach.

Imagine you have a URL to an OpenAPI specification. We'll take GitHub's as an example: https://raw.githubusercontent.com/github/rest-api-description/main/descriptions/ghes-3.9/ghes-3.9.json

Now, imagine there's a generator for turning an OpenAPI spec into a ReScript client, ready to use. We could write a generator to hook up that OpenAPI generator:

module GitHubAPIClient = %gen.openapi("https://raw.githubusercontent.com/github/rest-api-description/main/descriptions/ghes-3.9/ghes-3.9.json")

// Pseudo
GitHubAPIClient.getUserById(~id="githubUserId")

Roll your own simple CSS modules

You could use this to roll your own simple CSS modules.

Imagine a code generator registered for gen.cssModules.

// SomeModule.res
module Styles = %gen.cssModules(`
  .primary {
    color: black;
  }
`)

let button = <Button className=Styles.primary />

The code generator is called with the CSS string above, and relevant metadata. It reads the CSS using standard CSS tooling and, just like CSS modules, hashes each class name based on the file name it's defined in plus the local class name. It then outputs two files:

/* __generated__/SomeModule__cssModules.css */

/* This file is automatically generated. Do not edit manually. */

.dzs16n {
  color: black;
}
// __generated__/SomeModule__cssModules.res

// This file is automatically generated. Do not edit manually.
// @sourceHash("<file-hash-here>")

@inline let primary = "dzs16n"
%raw(`import "./SomeModule__cssModules.css"`)

And, the original file after it's transformed by the internal compiler PPX for the code gen:

// SomeModule.res
module Styles = SomeModule__cssModules

let button = <Button className=Styles.primary />

There, we've reinvented a small version of CSS modules, but fully integrated into the ReScript compiler.
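The per-file class-name hashing that makes this work can be sketched as below. `hashed_class_name` is a hypothetical helper, and `DefaultHasher` is a stand-in for whatever stable hash a real generator would use:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Sketch: derive a short, CSS-identifier-friendly class name from the
/// file name plus the local class name, as in CSS modules.
fn hashed_class_name(file_name: &str, local_class: &str) -> String {
    let mut h = DefaultHasher::new();
    (file_name, local_class).hash(&mut h);
    // Prefix with a letter so the result is always a valid CSS class.
    format!("c{:x}", h.finish() & 0xffff_ffff)
}

fn main() {
    let a = hashed_class_name("SomeModule.res", "primary");
    let b = hashed_class_name("OtherModule.res", "primary");
    // The same local class in different files gets different generated names,
    // and the mapping is deterministic within a build.
    assert_ne!(a, b);
    assert_eq!(a, hashed_class_name("SomeModule.res", "primary"));
}
```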

Next step: a PoC

There's a lot to explore and talk about if there's interest in this route. A good next step would be to pick one simple generator, and PoC how it could look integrating it into the build system. @jfrolich we talked about this briefly.

If there's interest from you to explore this further, we could set up a simple spec of what needs to happen where to explore this further. What do you say?

Handle deletion/renaming of files

I found that rewatch doesn't seem to handle deletion/renaming of files correctly yet.

If I delete a .res together with its .resi, no error is reported even though the module is referenced elsewhere. I just get

[1/7] ️✅  Built package tree in 0.04s
[2/7] ️✅  Found source files in 0.01s
[3/7] ️✅  Cleaned 2/1391 0.05s
[4/7] ️✅  Parsed 0 source files in 0.02s

[5/7] ️✅  Collected deps in 0.01s
[6/7] ️✅  Compiled 0 modules in 0.00s
[7/7] ️✅  Finished Compilation in 0.13s

If I delete a .res file, but leave the .resi, I get

thread '<unnamed>' panicked at 'Could not get basename', src/helpers.rs:100:10
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
fatal runtime error: failed to initiate panic, error 5

Detect changes to config and compiler version

In case the rescript.json or the compiler version changes, rewatch should automatically clean before doing a build.

(Also, if the project was compiled with bsb before, do a clean before the build.)

Does not work on Windows

PS C:\Users\admin\Desktop\my-rescript-app> npx rewatch
'"sh"' is not recognized as an internal or external command,
operable program or batch file.

"Could not read folder: test/intl" when having @rescript/core as a dependency

When building my project, I get the following output

[1/7]📦 Building package tree...Could not read folder: test/intl...
[1/7] 📦 Built package tree in 0.01s
[2/7] 🕵️  Found source files in 0.00s
[3/7] 📝 Read compile state 0.03s
[4/7] 🧹 Cleaned 0/1503 0.00s
[5/7] 🧱 Parsed 0 source files in 0.00s
[6/7] ️🌴 Collected deps in 0.01s
[7/7] 🤺 ️Compiled 0 modules in 0.00s

Problems:

  1. It is not clear from the output which package the message "Could not read folder: test/intl" applies to. It took me some searching to find out that it is @rescript/core.
  2. test/intl is a dev source dir in @rescript/core and therefore should not be attempted to build when @rescript/core is used as a dependency (to my understanding).

Custom JSX module

Hey,
I want to use rewatch in a project with a custom jsx module. But it isn't allowed.
The parser throws an error like: "unknown variant MyCustomJSX, expected react at line 25 column 24".

Cheers
Daniel

Build compiles a different number of modules each time

Rerunning build without any code or modification-time changes still results in modules being recompiled, with a different count each time.

Here are the compiled counts after each run: 1252 -> 600 -> 199 -> 13 -> 30 -> 2 -> 2 -> 2

Output
[1/7] 📦 Built package tree in 0.07s
[2/7] 🕵️  Found source files in 0.04s
[3/7] 📝 Read compile state 0.11s
[4/7] 🧹 Cleaned 0/2298 0.01s
[5/7] 🧱 Parsed 2 source files in 0.04s
[6/7] ️🌴 Collected deps in 0.14s
[7/7] 🤺 ️Compiled 1252 modules in 15.97s

✨ Finished Compilation in 16.51s

[1/7] 📦 Built package tree in 0.07s
[2/7] 🕵️  Found source files in 0.04s
[3/7] 📝 Read compile state 0.11s
[4/7] 🧹 Cleaned 0/2298 0.01s
[5/7] 🧱 Parsed 2 source files in 0.04s
[6/7] ️🌴 Collected deps in 0.14s
[7/7] 🤺 ️Compiled 600 modules in 11.98s

✨ Finished Compilation in 12.52s

[1/7] 📦 Built package tree in 0.06s
[2/7] 🕵️  Found source files in 0.04s
[3/7] 📝 Read compile state 0.11s
[4/7] 🧹 Cleaned 0/2298 0.01s
[5/7] 🧱 Parsed 1 source files in 0.04s
[6/7] ️🌴 Collected deps in 0.13s
[7/7] 🤺 ️Compiled 199 modules in 4.36s

✨ Finished Compilation in 4.90s

[1/7] 📦 Built package tree in 0.06s
[2/7] 🕵️  Found source files in 0.04s
[3/7] 📝 Read compile state 0.11s
[4/7] 🧹 Cleaned 0/2298 0.01s
[5/7] 🧱 Parsed 1 source files in 0.04s
[6/7] ️🌴 Collected deps in 0.13s
[7/7] 🤺 ️Compiled 13 modules in 0.76s

✨ Finished Compilation in 1.29s

[1/7] 📦 Built package tree in 0.07s
[2/7] 🕵️  Found source files in 0.04s
[3/7] 📝 Read compile state 0.11s
[4/7] 🧹 Cleaned 0/2298 0.01s
[5/7] 🧱 Parsed 1 source files in 0.04s
[6/7] ️🌴 Collected deps in 0.14s
[7/7] 🤺 ️Compiled 30 modules in 1.19s

✨ Finished Compilation in 1.74s

[1/7] 📦 Built package tree in 0.06s
[2/7] 🕵️  Found source files in 0.04s
[3/7] 📝 Read compile state 0.11s
[4/7] 🧹 Cleaned 0/2298 0.01s
[5/7] 🧱 Parsed 1 source files in 0.04s
[6/7] ️🌴 Collected deps in 0.14s
[7/7] 🤺 ️Compiled 2 modules in 0.08s

✨ Finished Compilation in 0.61s

[1/7] 📦 Built package tree in 0.07s
[2/7] 🕵️  Found source files in 0.04s
[3/7] 📝 Read compile state 0.12s
[4/7] 🧹 Cleaned 0/2298 0.01s
[5/7] 🧱 Parsed 1 source files in 0.04s
[6/7] ️🌴 Collected deps in 0.14s
[7/7] 🤺 ️Compiled 2 modules in 0.12s

✨ Finished Compilation in 0.68s

[1/7] 📦 Built package tree in 0.07s
[2/7] 🕵️  Found source files in 0.04s
[3/7] 📝 Read compile state 0.12s
[4/7] 🧹 Cleaned 0/2298 0.01s
[5/7] 🧱 Parsed 1 source files in 0.04s
[6/7] ️🌴 Collected deps in 0.14s
[7/7] 🤺 ️Compiled 2 modules in 0.09s

✨ Finished Compilation in 0.64s

Duplicate files handling

Thank you very much for trying to tackle the ol' ReScript workspace repo problem.

I was excited to try it out, but encountered an error as soon as I ran yarn rewatch build .. It gave me:

[2023-03-22T10:38:51Z ERROR rewatch::build] Duplicate files found for module: ChatAttachmentView
[2023-03-22T10:38:51Z ERROR rewatch::build] file 1: /Users/florian-cca/workspace/alert/node_modules/app-web/src/chat/ChatAttachmentView.res
[2023-03-22T10:38:51Z ERROR rewatch::build] file 2: /Users/florian-cca/workspace/alert/node_modules/app-mobile/src/chat/ChatAttachmentView.res

It seems like you still require unique filenames across all packages? It would be great if duplicates were supported; then rewatch would basically be a drop-in replacement, since otherwise our repo structure is pretty similar to your testrepo.

Fully Support and Test PNPM

We're already testing for Yarn / Yarn Workspaces, but not pnpm. This results in issues that surface in an ad-hoc fashion. Perhaps it's better to get those out of the way from the get-go and run the snapshot tests against pnpm as well. This issue tracks the progress.

  • [ ] Insert into tests. Perhaps we can use the test repo from this issue
  • [ ] Sourcedirs support
  • [ ] Dynamically determine the package manager and add it to the build state, so we don't have to ad-hoc walk up the tree to find packages

genType support

I'm using @genType in one of my projects. Rewatch doesn't seem to generate *.gen.tsx files yet; are there any plans to support it?
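For reference, genType is typically enabled per package in bsconfig.json like this (a sketch, not the reporter's exact config):

```json
{
  "gentypeconfig": {
    "language": "typescript",
    "module": "es6"
  }
}
```

With this in place the compiler emits a *.gen.tsx file next to each annotated .res source, which is the output the reporter is missing under rewatch.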

Resolving dependencies with pnpm

I have a project using pnpm with the following structure:

.
├── apps/backoffice (uses packages/rescript-sandbox as a dependency)
└── packages/rescript-sandbox (uses rescript-nodejs as a dependency)

Which fails with the error:

[1/6] 🌴  Building package tree...thread 'main' panicked at 'Errors reading bsconfig: "Could not read bsconfig. /Users/dzakh/code/carla/web-new/apps/backoffice/node_modules/rescript-nodejs/bsconfig.json - No such file or directory (os error 2)"', src/bsconfig.rs:251:10
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
fatal runtime error: failed to initiate panic, error 5
/Users/dzakh/code/carla/web-new/apps/backoffice/node_modules/.bin/rewatch: line 13: 11690 Abort trap: 6           "$basedir/../../../../node_modules/.pnpm/@[email protected]/node_modules/@rolandpeelen/rewatch/rewatch" "$@"

Since rescript-nodejs is a dependency of rescript-sandbox, it should be resolved from packages/rescript-sandbox/node_modules instead of apps/backoffice/node_modules.

No such file or directory for non-hoisted dependencies

I'm using a yarn v4.0 monorepo setup and I can't build with rewatch, because non-hoisted dependencies cause the following error:

[1/7] 🌴  Building package tree...[1/2] ️🛑   Error building package tree (are node_modules up-to-date?)...
 More details: No such file or directory (os error 2)

I have such a file tree:

monorepo
├── node_modules
└── packages
    └── packageA
        └── node_modules
            └── @rescript/react

Dependencies in rescript.json/bsconfig.json that live in the top-level node_modules don't cause issues; only the ones inside a package's own node_modules do, like @rescript/react in this case.

Could be related to #74.

Build error for unused attribute: @bs.uncurry

 Warning number 101 (configured as error) 
  /Users/woonki/Github/works/gl/sources/farmmorning-app/node_modules/@ryyppy/rescript-promise/src/Promise.res:7:18-28

  5 │ 
  6 │ @bs.new
  7 │ external make: ((@bs.uncurry (. 'a) => unit, (. 'e) => unit) => unit) =>
    │  t<'a> = "Promise"
  8 │ 
  9 │ @bs.val @bs.scope("Promise")

  Unused attribute: bs.uncurry
This means such annotation is not annotated properly. 
for example, some annotations is only meaningful in externals

It's not included in my project's dependencies directly, but I think it's in the node_modules folder because one of the npm packages I depend on uses it. This build error is blocking my progress.

Package-specs not interpreted correctly

There appear to be at least two different issues:

  1. in-source: false does not work
  2. Multiple package-specs (as in the example below) do not work. This is specifically problematic when one wants to ship code that runs both in Node (commonjs) and in a modern web client (es6):
  "package-specs": [
    {
      "module": "es6",
      "in-source": false
    },
    {
      "module": "commonjs",
      "in-source": false
    }
  ]

P.S. Nice work with rewatch! This library is definitely something many of us need.

how to compile tests directory?

In my rescript.json I have this:

"sources": [
    { "dir": "src/components", "subdirs": false },
    {
      "dir": "tests",
      "subdirs": false,
      "type": "dev"
    }
  ],

But I noticed that rewatch doesn't compile my .test.res files within the ./tests/ directory. It works with the rescript compiler but not with rewatch.

deps

  • "rescript": "^11.1.0",
  • "@rolandpeelen/rewatch": "^1.0.4",

VSCode plugin does not stay up to date with errors

Sorry I cannot be any more specific, but the main problem seems to be that while the build works quite nicely, the VSCode plugin does not seem to pick up any changes from the rewatch build system. So errors do not show up, etc.

I am not running the internal build provided by the VSCode plugin, nor am I running any other external builds (such as the one provided by the Vite ReScript plugin), as I'm under the assumption these should not be necessary or preferred.

Are there any specific configurations or things I should know?

I run rewatch through yarn 1.22 on Ubuntu 22.04

P.S. Nice job with rewatch. After struggling a lot with a couple of new monorepo setups over the last few weeks, I consider this project absolutely critical to the success of ReScript.
