stateful / runme

DevOps Notebooks Built with Markdown

Home Page: https://runme.dev
License: Apache License 2.0
Managing Protocol Buffer files and code generation is not easy, so we decided to delegate the work to buf and try the Buf Schema Registry.

If there is a deprecation mechanism, it'd be great to transition everyone to `runme`. However, distributing them in parallel is fine too.

Let's also rename the public repo.
1. Serialize `name` directly into the metadata if it was parsed from a respective markdown file's code block.
2. Move `runme.dev/name` (including on-the-fly generated ones) to `runme.dev/*` special attributes (ephemeral) which won't be subject to serialization for all blocks, including 1.

The `name` is required to be able to call the CLI from the extension. This will maintain "good" UX where names are only serialized if set by the user.
Cool project! WDYT about an interactive CLI?
Won't run because of `#` inside of https://github.com/stateful/runme.dev:

```
rdme run macos
```

Similar issue where `\` aren't being handled properly in a one-liner:

```
rdme run deno-install
```
The preparation of the script is platform-specific. The `prepareScript` function is currently platform-agnostic and will likely run into issues on Windows:

Lines 124 to 132 in 4b2fce8

See the Windows issue for more context: stateful/vscode-runme#20 (never fully figured out whether this is a bash-vs-shell problem or a Windows one 🤷‍♂️)
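As a purely hypothetical illustration of the concern (this is not runme's actual code, and the function name and separators are made up), script preparation could branch on the platform, since POSIX-only joining breaks under cmd.exe:

```python
import platform

# Hypothetical sketch (NOT runme's real implementation): prepare a
# combined script per platform instead of assuming a POSIX shell.
def prepare_script(cmds, system=None):
    system = system or platform.system()
    if system == "Windows":
        # cmd.exe chains commands with "&"
        return " & ".join(cmds)
    # POSIX shells: abort on first failure, then chain with ";"
    return "set -e; " + "; ".join(cmds)

print(prepare_script(["echo a", "echo b"], system="Linux"))
```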
It's a bit awkward that `runme fmt` requires the filename to be passed as a mandatory argument. Why can't it work like `runme ls`, which defaults to `--filename README.md`? Consistency is important.
Rather than using text interpolation and pattern matching, maybe GPT-3's summarization is a good fit here?
Type: Bug
I noticed that there is a `node` process in WSL1 which is eating insane amounts of memory (7 GB) and fully occupies one CPU core. This seems to be the extension host. Extension Bisect showed that `stateful.runme` is the culprit. Disabling this extension resolves the issue.
`top` output:
Performance graph in Process Hacker:
VS Code version: Code 1.74.3 (97dec172d3256f8ca4bfb2143f3f76b503ca0534, 2023-01-09T16:59:02.252Z)
OS version: Windows_NT x64 10.0.19045
Modes:
Sandboxed: No
Remote OS version: Linux x64 4.4.0-19041-Microsoft
Item | Value |
---|---|
CPUs | AMD Ryzen 7 PRO 4750U with Radeon Graphics (16 x 1697) |
GPU Status | 2d_canvas: enabled canvas_oop_rasterization: disabled_off direct_rendering_display_compositor: disabled_off_ok gpu_compositing: enabled multiple_raster_threads: enabled_on opengl: enabled_on rasterization: enabled raw_draw: disabled_off_ok skia_renderer: enabled_on video_decode: enabled video_encode: enabled vulkan: disabled_off webgl: enabled webgl2: enabled webgpu: disabled_off |
Load (avg) | undefined |
Memory (System) | 31.23GB (0.36GB free) |
Process Argv | |
Screen Reader | no |
VM | 0% |
Item | Value |
---|---|
Remote | WSL: Ubuntu-20.04 |
OS | Linux x64 4.4.0-19041-Microsoft |
CPUs | AMD Ryzen 7 PRO 4750U with Radeon Graphics (16 x 1700) |
Memory (System) | 31.23GB (0.34GB free) |
VM | 0% |
Extension | Author (truncated) | Version |
---|---|---|
auto-close-tag | for | 0.5.14 |
vscode-graphql-syntax | Gra | 1.0.5 |
dotenv | mik | 1.0.1 |
vscode-aql | mon | 1.7.0 |
jupyter-keymap | ms- | 1.0.0 |
remote-containers | ms- | 0.266.1 |
remote-ssh | ms- | 0.94.0 |
remote-ssh-edit | ms- | 0.84.0 |
remote-wsl | ms- | 0.72.0 |
remote-explorer | ms- | 0.0.3 |
ejs-language-support | Qas | 0.0.1 |
vscode-todo-highlight | way | 1.0.5 |
vscode-devdocs | akf | 1.0.3 |
svelte-intellisense | ard | 0.7.1 |
vscode-intelephense-client | bme | 1.9.3 |
npm-intellisense | chr | 1.4.4 |
js-codeformer | cms | 2.6.1 |
compulim-vscode-closetag | Com | 1.2.0 |
vscode-eslint | dba | 2.2.6 |
devdocs | dei | 0.2.0 |
vscode-new-file | dku | 4.0.2 |
dbux-code | Dom | 0.7.9 |
xml | Dot | 2.5.1 |
EditorConfig | Edi | 0.16.4 |
copilot | Git | 1.65.7705 |
vscode-graphql-execution | Gra | 0.1.6 |
vscode-graphql-syntax | Gra | 1.0.5 |
vscode-git-blamer | how | 1.1.2 |
join-comment-aware | joh | 0.0.3 |
solidity | Jua | 0.0.141 |
edge | luo | 0.3.2 |
rainbow-csv | mec | 3.5.0 |
csharp | ms- | 1.25.2 |
vscode-dotnet-runtime | ms- | 1.6.0 |
isort | ms- | 2022.8.0 |
python | ms- | 2022.20.1 |
vscode-pylance | ms- | 2023.1.10 |
jupyter | ms- | 2022.11.1003412109 |
jupyter-keymap | ms- | 1.0.0 |
jupyter-renderers | ms- | 1.0.12 |
vscode-jupyter-cell-tags | ms- | 0.1.6 |
vscode-jupyter-slideshow | ms- | 0.1.5 |
cpptools | ms- | 1.13.9 |
hexeditor | ms- | 1.9.9 |
hexeditor | not | 1.8.2 |
vscode-print | pdc | 0.10.20 |
svelte-extractor | pro | 0.0.3 |
vscode-data-preview | Ran | 2.3.0 |
vscode-data-table | Ran | 1.12.0 |
vscode-yaml | red | 1.11.0 |
ActiveFileInStatusBar | Ros | 1.0.3 |
vscode-paste-and-indent | Rub | 0.0.8 |
bracket-jumper | sas | 1.1.8 |
trailing-spaces | sha | 0.4.1 |
vscode-standard | sta | 2.1.3 |
runme | sta | 0.4.2 |
ignore-gitignore | stu | 1.0.1 |
svelte-vscode | sve | 107.0.1 |
es6-string-html | Tob | 2.12.0 |
sort-lines | Tyr | 1.9.1 |
use-strict-everywhere | vee | 0.1.3 |
bracket-padder | via | 0.3.0 |
change-case | wma | 1.0.0 |
html-css-class-completion | Zig | 1.20.0 |
Let's verify the CLI functionality first.
Currently, the Kernel service only offers execution of separate commands: you send a command and passively listen for incremental output data and a final exit code.

Instead, we would like a raw bidirectional stream between xterm and the kernel. This is an alternative execution mode which might prove useful.
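To illustrate the difference in Python (a toy sketch only; the child process here is a stand-in, not the runme kernel), the stream stays open and reads and writes interleave freely instead of waiting for a single exit code:

```python
import subprocess
import sys

# Illustrative sketch: a raw bidirectional byte stream to a child
# process, as opposed to "send one command, collect output, get exit code".
child = subprocess.Popen(
    [sys.executable, "-u", "-c",
     "import sys\n"
     "for line in sys.stdin:\n"
     "    sys.stdout.write('echo: ' + line)\n"
     "    sys.stdout.flush()"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

# Writes and reads interleave freely, like xterm talking to a shell.
child.stdin.write("hello\n")
child.stdin.flush()
reply = child.stdout.readline().strip()

child.stdin.close()
child.wait()
print(reply)  # echo: hello
```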
Tested with https://github.com/stateful/baldaquin/blob/main/README.md (internal repo). The enumerated list is being turned into a series of 1s... I know this might not be trivial, but I wonder if it's possible to parse them as-is (text) vs. an enumerated list?
Originally posted by @sourishkrout in #71 (review)
Python code block in https://github.com/stateful/vscode-runme/blob/main/examples/README.md
The `readonly` attribute (should it be camel-cased?) would render the cell as type `markdown` instead of `code`.
Consider the parsed snippet below (taken from the private repo tortuga/api), which is a `curl` command followed by its expected output. Currently this command will fail due to the attempt to execute both.

```
"$ curl -XPOST -H \"Content-Type: application/json\" localhost:8080/tasks/ -d '{\"duration\": \"10s\", \"exit_code\": 0, \"name\": \"Run task\", \"runbook_name\": \"RB 1\", \"runbook_run_id\": \"6e975f1b-0c0f-4765-b24a-2aa87b901c06\", \"start_time\": \"2022-05-05T04:12:43Z\", \"command\": \"/bin/sh\", \"args\": \"echo hello\", \"feedback\": \"this is cool!\", \"extra\": \"{\\\"hello\\\": \\\"world\\\"}\"}'\n{\"id\":\"6e975f1b-0c0f-4765-b24a-2aa87b901c06\"}\n"
```
Runme has a bunch of unit tests which validate isolated examples of Markdown. They are mostly edge cases we have come across during development. Additionally, we should run end-to-end tests of the CLI against known Markdown files to avoid introducing regressions.
This blog post might offer a nice approach to testing CLIs.
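One common shape for such e2e tests is golden-file comparison. A minimal sketch (the command and golden value below are stand-ins for illustration, not the real test suite):

```python
import subprocess
import sys

def run_cli(argv):
    """Run a CLI and capture its stdout for comparison against a golden value."""
    return subprocess.run(argv, capture_output=True, text=True).stdout

# Stand-in for something like ["runme", "ls", "--filename", "examples/README.md"];
# in a real suite the expected output would live in a checked-in golden file.
golden = "hello\n"
actual = run_cli([sys.executable, "-c", "print('hello')"])
assert actual == golden
```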
Related to #58.
Observed issues with this markdown: https://github.com/webdriverio/webdriverio/blob/main/CONTRIBUTING.md
Please make recommendations for follow-up actions.
If the markdown file has no code block, runme just prints:

```
runme ls
NAME FIRST COMMAND # OF COMMANDS DESCRIPTION
```

I suggest being more explicit and throwing an error, e.g. `No code block found in "Readme.md", get started writing code blocks here: ...`
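A sketch of the suggested behavior (hypothetical, not runme's implementation; the function name and regex are made up for illustration):

```python
import re

# Fail loudly when a markdown file contains no fenced code blocks,
# instead of printing an empty table.
def list_blocks(markdown, filename="README.md"):
    blocks = re.findall(r"^```(\w+)", markdown, flags=re.MULTILINE)
    if not blocks:
        raise ValueError(f'No code block found in "{filename}", '
                         'get started writing code blocks here: ...')
    return blocks

print(list_blocks("# Title\n```sh\necho hi\n```\n"))  # ['sh']
```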
The goal is to figure out whether we can implement VS Code's `NotebookSerializer` interface in runme. Doing that will enable us to easily add editing capabilities to notebooks rendered from Markdown files.

Highlights:

- Implement the `NotebookSerializer` interface and expose it in the WASM module.
- Deserialization produces `NotebookData`. There are multiple steps: parse a byte array into an AST, which is a flat tree structure; then flatten it and produce a list of cells, which are the foundation of `NotebookData`.
- Serializing `NotebookData` is more challenging. It's the reverse of deserialization: creating an AST from a list of cells and finally a byte array.

Suggest to remove the `--chdir`
and `--filename` parameters as they feel unfamiliar. Given the natural way devs use `ls`, I suggest allowing the readme path to be passed as an argument, e.g.:
Instead of:

```
runme ls --chdir ./examples --filename README.md
```

Just do:

```
runme ls ./examples/README.md
```

Same for executing code blocks:

```
runme run ./examples/README.md myExampleCodeBlock
```
This is a place to collect all issues related to Markdown formatting. There are two places where `runme` formats Markdown:

- When `runme fmt` is called, it will reformat the source file while trying to keep the original structure, in particular nested blocks.
- When `Runme.deserialize()` is called, it will use the formatter to parse the source data; however, it will additionally flatten blocks. `runme fmt --flatten` is an equivalent.

Bugs resulting from flattening should be submitted separately.

- None

Emphasis can be written with `*` or `_` as described in the CommonMark spec. Currently, the formatter will always default to `*` as the underlying Markdown parser does not distinguish between them in the AST.

It seems that for markdown sections we return a `languageId`
property that is always empty. I guess we can remove this?
The `value` of code blocks

We don't want to check the WASM file into the extension as it blows up the repo size. Rather, we would like to bundle it as part of the extension release. Therefore we need a downloadable WASM file, preferably from the GitHub release.

It should be possible to expose this as a function in the WASM interface; that way the extension can call it at will.
It would be great if users could add additional information to the code block which can then be parsed and sent along, e.g.:
```html type="svelte"
...
```
For the CLI we could use this for filtering certain blocks, and when using `rdme` as a WASM file this could help annotate important information about the code block.
Popular tooling like docusaurus, Jekyll, and Hugo use frontmatter to store metadata. Runme's parser should at the minimum make sure to preserve it.
Related to #88
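A toy sketch of the minimum requirement, preserving frontmatter byte-for-byte across a parse/serialize round trip (the helper name is made up for illustration):

```python
# Split frontmatter off verbatim on parse, re-attach it unchanged on
# serialize, so tools like Jekyll/Hugo/Docusaurus metadata survive.
def split_frontmatter(source):
    if source.startswith("---\n"):
        end = source.index("\n---\n", 4)
        return source[:end + 5], source[end + 5:]
    return "", source

doc = "---\ntitle: Demo\n---\n# Hello\n"
frontmatter, body = split_frontmatter(doc)
assert frontmatter + body == doc  # round trip preserves the file exactly
```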
Consider providing a default command that will execute if it's explicitly named using an annotation.
If we have a valid e.g. Python or Ruby snippet, I should be able to run it with `rdme run NAME`.

```shell
echo "hello!"
```

This results in:

```
unknown executable: "shell"
```
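One hypothetical fix (a sketch only; the alias table and function are invented for illustration) is to map fence-language aliases to an executable instead of treating the language tag itself as the binary name:

```python
# Map fence-language aliases to a real executable; "shell" is not a
# binary, but "sh" is.
ALIASES = {"shell": "sh", "sh": "sh", "bash": "bash", "zsh": "zsh"}

def resolve_executable(lang):
    try:
        return ALIASES[lang]
    except KeyError:
        raise ValueError(f'unknown executable: "{lang}"')

print(resolve_executable("shell"))  # sh
```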
To get ahead of a plan to use the CLI to run commands inside of `vscode-runme`, let's distribute the go binaries and WASM builds as an NPM package. This will also allow transparently managing the dependencies between `vscode-runme` <> `runme`. I remember having seen an elegant way to repackage a go binary as a module.
Ideally we could pick up on the existing tag based publication workflows: https://github.com/stateful/runme/actions/workflows/release.yml
@admc should be able to add you to the `runme` npm module name, which we own.
Example:

```
runme run n
```

could be enough to run this command:

```
npm i
```
As per @christian-bromann's suggestion, let's use the latest git-tracked version of a markdown file to see if it's possible to discriminate intentional changes from e.g. whitespace-only ones as part of serialization.
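The core idea can be sketched in a few lines (a toy illustration, not the proposed implementation): compare the serialized output against the last committed version and treat whitespace-only differences as unintentional.

```python
# Treat a change as intentional only if it survives whitespace
# normalization against the last git-tracked version.
def significant_change(old, new):
    normalize = lambda s: " ".join(s.split())
    return normalize(old) != normalize(new)

assert not significant_change("# Title\n", "# Title  \n")  # whitespace only
assert significant_change("# Title", "# New Title")        # real edit
```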
The following example of markdown is not rendered correctly:
## Running
### In Minikube
Deploy the application to Minikube using the Linkerd2 service mesh.
1. Install the `linkerd` CLI
```bash
curl https://run.linkerd.io/install | sh
```
1. Install Linkerd2
```bash
linkerd install | kubectl apply -f -
```
1. View the dashboard!
```bash
linkerd dashboard
```
1. Inject, Deploy, and Enjoy
```bash
kubectl kustomize kustomize/deployment | \
linkerd inject - | \
kubectl apply -f -
```
1. Use the app!
```bash
minikube -n emojivoto service web-svc
```
It should be rendered like this:

Deploy the application to Minikube using the Linkerd2 service mesh.

1. Install the `linkerd` CLI

   ```bash
   curl https://run.linkerd.io/install | sh
   ```

2. Install Linkerd2

   ```bash
   linkerd install | kubectl apply -f -
   ```

3. View the dashboard!

   ```bash
   linkerd dashboard
   ```

4. Inject, Deploy, and Enjoy

   ```bash
   kubectl kustomize kustomize/deployment | \
     linkerd inject - | \
     kubectl apply -f -
   ```

5. Use the app!

   ```bash
   minikube -n emojivoto service web-svc
   ```
But gets rendered like this:
Showing off the functionality of the runme cli
I did some testing based on this example: https://github.com/lifeiscontent/realworld
However, since it's not an interactive shell, the directory changes aren't retained across commands. A quick fix involved concatenating the commands with `;`, making them insensitive to non-zero exit codes. I ran into an issue where RoR's migrations failed due to "duplication", which is a red herring.
Perhaps there's a better way to do this?
```diff
diff --git a/internal/runner/shell.go b/internal/runner/shell.go
index d9967d2..abf3422 100644
--- a/internal/runner/shell.go
+++ b/internal/runner/shell.go
@@ -5,6 +5,7 @@ import (
 	"io"
 	"os"
 	"os/exec"
+	"strings"

 	"github.com/pkg/errors"
 )
@@ -20,10 +21,9 @@ func (s Shell) Run(ctx context.Context) error {
 		sh = "/bin/sh"
 	}

-	for _, cmd := range s.Cmds {
-		if err := execSingle(ctx, sh, s.Dir, cmd, s.Stdin, s.Stdout, s.Stderr); err != nil {
-			return err
-		}
+	concatCmds := strings.Join(s.Cmds, "; ")
+	if err := execSingle(ctx, sh, s.Dir, concatCmds, s.Stdin, s.Stdout, s.Stderr); err != nil {
+		return err
 	}

 	return nil
```
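The exit-code trade-off mentioned above can be demonstrated directly (an illustration on a POSIX system, not a proposed fix): joining with `;` ignores earlier failures, while `&&` stops at the first non-zero exit code.

```python
import subprocess

# ";" keeps going after a failure; "&&" short-circuits on it.
semi = subprocess.run(["/bin/sh", "-c", "false; echo ran"],
                      capture_output=True, text=True)
amp = subprocess.run(["/bin/sh", "-c", "false && echo ran"],
                     capture_output=True, text=True)

print(semi.stdout.strip())  # ran
print(amp.stdout)           # (empty -- the chain stopped at "false")
```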
In the VS Code extension, we have a way to detect a language using https://github.com/microsoft/vscode-languagedetection. It has a model embedded which comes from https://github.com/yoeo/guesslang. It executes it using https://github.com/tensorflow/tfjs which comes with various backends like CPU or WebGL.
For runme, which is built and distributed as a statically linked binary, the best would be to use Tensorflow Lite which is fairly small (<6MB). Unfortunately, the original guesslang model might not be easily converted to TF Lite. I tried that and got the following output:
2022-12-09 18:49:10.637176: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-12-09 18:49:17.220826: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-12-09 18:49:20.342003: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:362] Ignored output_format.
2022-12-09 18:49:20.342041: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:365] Ignored drop_control_dependency.
2022-12-09 18:49:20.343338: I tensorflow/cc/saved_model/reader.cc:45] Reading SavedModel from: /Users/adambabik/projects/github.com/yoeo/guesslang/guesslang/data/model
2022-12-09 18:49:20.346015: I tensorflow/cc/saved_model/reader.cc:89] Reading meta graph with tags { serve }
2022-12-09 18:49:20.346046: I tensorflow/cc/saved_model/reader.cc:130] Reading SavedModel debug info (if present) from: /Users/adambabik/projects/github.com/yoeo/guesslang/guesslang/data/model
2022-12-09 18:49:20.354134: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:357] MLIR V1 optimization pass is not enabled
2022-12-09 18:49:20.372631: W tensorflow/core/common_runtime/type_inference.cc:339] Type inference failed. This indicates an invalid graph that escaped type checking. Error message: INVALID_ARGUMENT: expected compatible input types, but input 1:
type_id: TFT_OPTIONAL
args {
type_id: TFT_PRODUCT
args {
type_id: TFT_TENSOR
args {
type_id: TFT_INT64
}
}
}
is neither a subtype nor a supertype of the combined inputs preceding it:
type_id: TFT_OPTIONAL
args {
type_id: TFT_PRODUCT
args {
type_id: TFT_TENSOR
args {
type_id: TFT_INT32
}
}
}
while inferring type of node 'dnn/zero_fraction/cond/output/_44'
2022-12-09 18:49:20.378047: I tensorflow/cc/saved_model/loader.cc:229] Restoring SavedModel bundle.
2022-12-09 18:49:20.409092: I tensorflow/cc/saved_model/loader.cc:213] Running initialization op on SavedModel bundle at path: /Users/adambabik/projects/github.com/yoeo/guesslang/guesslang/data/model
2022-12-09 18:49:20.425946: I tensorflow/cc/saved_model/loader.cc:305] SavedModel load for tags { serve }; Status: success: OK. Took 82615 microseconds.
2022-12-09 18:49:20.538034: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:268] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
2022-12-09 18:49:20.820819: W tensorflow/compiler/mlir/lite/flatbuffer_export.cc:2046] TFLite interpreter needs to link Flex delegate in order to run the model since it contains the following Select TFop(s):
Flex ops: FlexBincount, FlexCast, FlexConcatV2, FlexSparseFillEmptyRows, FlexSparseReshape, FlexSparseSegmentMean, FlexSparseSegmentSum, FlexStringSplit, FlexStringToHashBucketFast, FlexTensorListReserve, FlexTensorListSetItem, FlexTensorListStack
Details:
tf.Bincount(tensor<?xi32>, tensor<i32>, tensor<0xi64>) -> (tensor<?xi64>) : {device = ""}
tf.Cast(tensor<!tf_type.variant<tensor<*x!tf_type.string>>>) -> (tensor<!tf_type.variant>) : {Truncate = false}
tf.ConcatV2(tensor<?x!tf_type.string>, tensor<10000x!tf_type.string>, tensor<i32>) -> (tensor<?x!tf_type.string>) : {device = ""}
tf.SparseFillEmptyRows(tensor<?x2xi64>, tensor<?xi64>, tensor<2xi64>, tensor<i64>) -> (tensor<?x2xi64>, tensor<?xi64>, tensor<?xi1>, tensor<?xi64>) : {device = ""}
tf.SparseReshape(tensor<?x2xi64>, tensor<2xi64>, tensor<2xi64>) -> (tensor<?x2xi64>, tensor<2xi64>) : {device = ""}
tf.SparseSegmentMean(tensor<?x70xf32>, tensor<?xi32>, tensor<?xi64>) -> (tensor<?x70xf32>) : {device = ""}
tf.SparseSegmentSum(tensor<?x54xf32>, tensor<?xi32>, tensor<?xi64>) -> (tensor<?x54xf32>) : {device = ""}
tf.StringSplit(tensor<1x!tf_type.string>, tensor<!tf_type.string>) -> (tensor<?x2xi64>, tensor<?x!tf_type.string>, tensor<2xi64>) : {device = "", skip_empty = false}
tf.StringToHashBucketFast(tensor<?x!tf_type.string>) -> (tensor<?xi64>) : {device = "", num_buckets = 5000 : i64}
tf.TensorListReserve(tensor<i32>, tensor<i32>) -> (tensor<!tf_type.variant<tensor<*x!tf_type.string>>>) : {device = ""}
tf.TensorListSetItem(tensor<!tf_type.variant>, tensor<i32>, tensor<?x!tf_type.string>) -> (tensor<!tf_type.variant<tensor<*x!tf_type.string>>>) : {device = ""}
tf.TensorListStack(tensor<!tf_type.variant<tensor<*x!tf_type.string>>>, tensor<1xi32>) -> (tensor<?x?x!tf_type.string>) : {device = "", num_elements = -1 : i64}
See instructions: https://www.tensorflow.org/lite/guide/ops_select
2022-12-09 18:49:20.820999: W tensorflow/compiler/mlir/lite/flatbuffer_export.cc:2057] The following operation(s) need TFLite custom op implementation(s):
Custom ops: StringNGrams
Details:
tf.StringNGrams(tensor<?x!tf_type.string>, tensor<2xi64>) -> (tensor<?x!tf_type.string>, tensor<2xi64>) : {Tsplits = i64, device = "", left_pad = "", ngram_widths = [2], pad_width = 0 : i64, preserve_short_sequences = false, right_pad = "", separator = " "}
See instructions: https://www.tensorflow.org/lite/guide/ops_custom
Using:

```python
from pathlib import Path

import tensorflow as tf

DATA_DIR = Path(__file__).absolute().parent.joinpath('data')
DEFAULT_MODEL_DIR = DATA_DIR.joinpath('model')

# Convert the model
converter = tf.lite.TFLiteConverter.from_saved_model(str(DEFAULT_MODEL_DIR))
converter.allow_custom_ops = True
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()

# Save the model.
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
```
It actually generated a `model.tflite`, which it might be possible to execute if `StringNGrams` is provided.
To be continued...
(both `cells = toCells()` and `serializeCells(cells)` should retain them). Also add `name`, which will be consistent with the CLI.

In priority order:

- (`py`)

Once fixed, release as v0.4.

Nice to have:

one could run a command based on its index.
e.g. if we have this list:

```
$ runme list
ID NAME        FIRST COMMAND             # OF COMMANDS DESCRIPTION
1  pip-install pip install approvaltests 3             Installs
2  npx-ava     npx ava                   1             Run the Tests
```

we could run:

```
$ runme 1
```

to execute `pip-install`.
Logs are collected in BigQuery.
Hello,

First, thanks for this program, it's very, very useful!

Is it possible to add some functionality? (Sorry for putting everything in one issue; depending on your answer I can split it and only keep the most interesting parts.)

Alias

I have a function which initiates the frontend, so its name is init_front, but I would like to alias it. So adding an alias, or allowing several names, could help.

Multi run

To keep a clear readme I have two blocks: one to download frontend dependencies, one to download backend dependencies. Adding a way to launch both in one command could be cool.

Dependencies

Dependencies, post-tasks and pre-tasks, a way to add an "install all in one" task. For example:
```sh {name=download_front_deps}
cd src
npm install
```
```sh {name=build_front,deps=[download_front_deps]}
cd src
npm run build
```
```sh {name=download_back_deps}
cd src
go mod download
go mod verify
```
```sh {name=run_program,deps=[download_back_deps]}
cd src
go run main.go
```
```sh {name=install_program,deps=[run_program,build_front]}
echo "cool you installed it"
```
Even without those additions, thanks for this exceptional program, it's so cool and so useful!
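The requested `deps` annotation from the example above amounts to running a block's dependencies first, in topological order. A toy sketch of that resolution (illustration only, reusing the block names from the example):

```python
# Dependency graph taken from the example blocks above.
BLOCKS = {
    "download_front_deps": [],
    "build_front": ["download_front_deps"],
    "download_back_deps": [],
    "run_program": ["download_back_deps"],
    "install_program": ["run_program", "build_front"],
}

def run_order(name, seen=None, order=None):
    """Depth-first walk: dependencies before the block that needs them."""
    seen = set() if seen is None else seen
    order = [] if order is None else order
    for dep in BLOCKS[name]:
        if dep not in seen:
            run_order(dep, seen, order)
    if name not in seen:
        seen.add(name)
        order.append(name)
    return order

print(run_order("install_program"))
```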
Currently a Runme cell has the following metadata interface:
```ts
export interface Metadata {
  background?: string
  interactive?: string
  closeTerminalOnSuccess?: string
  mimeType?: string
  ['runme.dev/name']?: string
}
```
When we deserialize the markdown, the metadata object can currently be empty, which requires us to transform it in our extension. Given that these capabilities are implemented here, I would suggest having `runme` sanitize the values and ensure they are passed on with good default values, so that the interface becomes:
```ts
export interface Metadata {
  // @default false
  background: boolean
  // @default true
  interactive: boolean
  // @default true
  closeTerminalOnSuccess: boolean
  // @default "text/plain"
  mimeType: string
  // @default random string
  ['runme.dev/name']: string
}
```
I am also wondering why the name (`runme.dev/name`) has the runme prefix. It seems unnecessary to me. Any reasoning behind it?
Wdyt?
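The suggested sanitization can be sketched like this (a hypothetical illustration in Python, not the Go implementation; the default values are those proposed above):

```python
import uuid

# Fill missing metadata keys with the proposed defaults before handing
# cells to the extension, so every field is always present.
DEFAULTS = {
    "background": False,
    "interactive": True,
    "closeTerminalOnSuccess": True,
    "mimeType": "text/plain",
}

def sanitize(metadata):
    out = {**DEFAULTS, **metadata}
    # "random string" default for the name, if the user didn't set one
    out.setdefault("runme.dev/name", uuid.uuid4().hex[:8])
    return out

cell = sanitize({"interactive": False})
print(cell["mimeType"], cell["interactive"])  # text/plain False
```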
The example readme mentions an `--allow-unknown` option, but the latest release (0.2.4) doesn't have it. I couldn't even find it on the master branch.
The related commit:
59c2c9a#r88258426
`gh repo view stateful/tdme` will display markdown READMEs in the terminal. It should be straightforward to replicate this functionality.
Frontmatter is commonplace in markdown tooling, e.g. https://jekyllrb.com/docs/front-matter/

A few different things it could be useful for:

Let's use the same data source for both the CLI subcommand and WASM consumers.