
runme's People

Contributors

adambabik, admc, catdevman, chenrui333, christian-bromann, dantehemerson, degrammer, dependabot[bot], duvanmonsa, jlewi, kawarimidoll, lalyos, lorenzejay, mimikun, mislav, musabshakeel576, mxsdev, pastuxso, sourishkrout


runme's Issues

Code block name in metadata

  1. Let's include name directly in the metadata if it was parsed from the respective Markdown file's code block.
  2. Additionally, let's add runme.dev/name (including on-the-fly generated ones) to the runme.dev/* special attributes (ephemeral), which won't be subject to serialization for any blocks, including those from 1.). The name is required to be able to call the CLI from the extension.

This will maintain "good" UX where names are only serialized if set by the user.
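For illustration, the ephemeral filtering could look like this (serializableAttributes is a hypothetical helper, not runme's actual API):

```go
package main

import "strings"

// serializableAttributes returns a copy of attrs without the ephemeral
// runme.dev/* keys, which exist only at runtime (e.g. generated names)
// and must never be written back to the Markdown file.
func serializableAttributes(attrs map[string]string) map[string]string {
	out := make(map[string]string, len(attrs))
	for k, v := range attrs {
		if strings.HasPrefix(k, "runme.dev/") {
			continue
		}
		out[k] = v
	}
	return out
}
```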

Reconcile shell script prep functions

The preparation of the script is platform-specific:

https://github.com/stateful/vscode-runme/blob/a61f235ab9c97d0ae57135cddcb2270fd4aae162/src/extension/utils.ts#L51-L73

The prepareScript function is currently platform-agnostic and will likely run into issues on Windows:

runme/web/main_js.go

Lines 124 to 132 in 4b2fce8

func prepareScript(this js.Value, args []js.Value) interface{} {
	lines := args[0]
	len := lines.Length()
	scriptLines := make([]string, 0, len)
	for i := 0; i < len; i++ {
		scriptLines = append(scriptLines, lines.Index(i).String())
	}
	return runner.PrepareScript(scriptLines)
}

Windows issue for more context: stateful/vscode-runme#20 (we never fully figured out whether this is a bash-vs-shell problem or a Windows problem 🤷‍♀️)
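A platform-aware variant could branch on runtime.GOOS; this is a rough sketch under the assumption that Windows needs CRLF line endings and no POSIX prelude, not a drop-in replacement for runner.PrepareScript:

```go
package main

import (
	"runtime"
	"strings"
)

// prepareScriptForOS joins script lines with OS-appropriate line endings
// and only adds a POSIX shell prelude on non-Windows platforms.
func prepareScriptForOS(lines []string) string {
	sep := "\n"
	prelude := "set -e -o pipefail;" + sep
	if runtime.GOOS == "windows" {
		sep = "\r\n"
		prelude = "" // cmd.exe/PowerShell don't understand the POSIX prelude
	}
	return prelude + strings.Join(lines, sep) + sep
}
```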

Insane memory usage (7GB+)

Type: Bug

I noticed that there is a node process in WSL1 which eats an insane amount of memory (7 GB) and fully occupies one CPU core. This seems to be the extension host. Extension Bisect showed that stateful.runme is the culprit; disabling this extension resolves the issue.

top output and a performance graph from Process Hacker were attached as screenshots.

VS Code version: Code 1.74.3 (97dec172d3256f8ca4bfb2143f3f76b503ca0534, 2023-01-09T16:59:02.252Z)
OS version: Windows_NT x64 10.0.19045
Modes:
Sandboxed: No
Remote OS version: Linux x64 4.4.0-19041-Microsoft

System Info
Item Value
CPUs AMD Ryzen 7 PRO 4750U with Radeon Graphics (16 x 1697)
GPU Status 2d_canvas: enabled
canvas_oop_rasterization: disabled_off
direct_rendering_display_compositor: disabled_off_ok
gpu_compositing: enabled
multiple_raster_threads: enabled_on
opengl: enabled_on
rasterization: enabled
raw_draw: disabled_off_ok
skia_renderer: enabled_on
video_decode: enabled
video_encode: enabled
vulkan: disabled_off
webgl: enabled
webgl2: enabled
webgpu: disabled_off
Load (avg) undefined
Memory (System) 31.23GB (0.36GB free)
Process Argv
Screen Reader no
VM 0%
Item Value
Remote WSL: Ubuntu-20.04
OS Linux x64 4.4.0-19041-Microsoft
CPUs AMD Ryzen 7 PRO 4750U with Radeon Graphics (16 x 1700)
Memory (System) 31.23GB (0.34GB free)
VM 0%
Extensions (64)
Extension Author (truncated) Version
auto-close-tag for 0.5.14
vscode-graphql-syntax Gra 1.0.5
dotenv mik 1.0.1
vscode-aql mon 1.7.0
jupyter-keymap ms- 1.0.0
remote-containers ms- 0.266.1
remote-ssh ms- 0.94.0
remote-ssh-edit ms- 0.84.0
remote-wsl ms- 0.72.0
remote-explorer ms- 0.0.3
ejs-language-support Qas 0.0.1
vscode-todo-highlight way 1.0.5
vscode-devdocs akf 1.0.3
svelte-intellisense ard 0.7.1
vscode-intelephense-client bme 1.9.3
npm-intellisense chr 1.4.4
js-codeformer cms 2.6.1
compulim-vscode-closetag Com 1.2.0
vscode-eslint dba 2.2.6
devdocs dei 0.2.0
vscode-new-file dku 4.0.2
dbux-code Dom 0.7.9
xml Dot 2.5.1
EditorConfig Edi 0.16.4
copilot Git 1.65.7705
vscode-graphql-execution Gra 0.1.6
vscode-graphql-syntax Gra 1.0.5
vscode-git-blamer how 1.1.2
join-comment-aware joh 0.0.3
solidity Jua 0.0.141
edge luo 0.3.2
rainbow-csv mec 3.5.0
csharp ms- 1.25.2
vscode-dotnet-runtime ms- 1.6.0
isort ms- 2022.8.0
python ms- 2022.20.1
vscode-pylance ms- 2023.1.10
jupyter ms- 2022.11.1003412109
jupyter-keymap ms- 1.0.0
jupyter-renderers ms- 1.0.12
vscode-jupyter-cell-tags ms- 0.1.6
vscode-jupyter-slideshow ms- 0.1.5
cpptools ms- 1.13.9
hexeditor ms- 1.9.9
hexeditor not 1.8.2
vscode-print pdc 0.10.20
svelte-extractor pro 0.0.3
vscode-data-preview Ran 2.3.0
vscode-data-table Ran 1.12.0
vscode-yaml red 1.11.0
ActiveFileInStatusBar Ros 1.0.3
vscode-paste-and-indent Rub 0.0.8
bracket-jumper sas 1.1.8
trailing-spaces sha 0.4.1
vscode-standard sta 2.1.3
runme sta 0.4.2
ignore-gitignore stu 1.0.1
svelte-vscode sve 107.0.1
es6-string-html Tob 2.12.0
sort-lines Tyr 1.9.1
use-strict-everywhere vee 0.1.3
bracket-padder via 0.3.0
change-case wma 1.0.0
html-css-class-completion Zig 1.20.0

Build a prototype of raw bidi stream between xterm.js and Kernel

Currently, the Kernel service offers only execution of separate commands: you send a command and passively listen for incremental output data and the final exit code.

We would instead like to have a raw bidirectional stream between xterm and the kernel. This is an alternative execution mode which might prove useful.
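A rough sketch of what the bridging could look like on the kernel side, assuming the client connection (backing xterm.js) is exposed as an io.ReadWriter (e.g. a WebSocket or gRPC stream adapter); this is illustrative, not the proposed API:

```go
package main

import (
	"io"
	"os/exec"
)

// bridge wires a client connection directly to a shell process:
// client keystrokes flow to stdin, shell output flows back raw.
func bridge(conn io.ReadWriter) error {
	cmd := exec.Command("/bin/bash")
	stdin, err := cmd.StdinPipe()
	if err != nil {
		return err
	}
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return err
	}
	if err := cmd.Start(); err != nil {
		return err
	}
	go io.Copy(stdin, conn)  // client -> shell
	go io.Copy(conn, stdout) // shell -> client
	return cmd.Wait()
}
```

A real implementation would likely allocate a PTY so interactive programs behave correctly, but the data flow is the same.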

Newlines `\n` in snippets aren't being broken up properly

Consider the parsed snippet below (taken from the private repo tortuga/api), which is a curl command followed by its expected output. Currently the command will fail because runme attempts to execute both.

"$ curl -XPOST -H \"Content-Type: application/json\" localhost:8080/tasks/ -d '{\"duration\": \"10s\", \"exit_code\": 0, \"name\": \"Run task\", \"runbook_name\": \"RB 1\", \"runbook_run_id\": \"6e975f1b-0c0f-4765-b24a-2aa87b901c06\", \"start_time\": \"2022-05-05T04:12:43Z\", \"command\": \"/bin/sh\", \"args\": \"echo hello\", \"feedback\": \"this is cool!\", \"extra\": \"{\\\"hello\\\": \\\"world\\\"}\"}'\n{\"id\":\"6e975f1b-0c0f-4765-b24a-2aa87b901c06\"}\n"

Test CLI end-to-end against known READMEs

Runme has a bunch of unit tests which validate isolated examples of Markdown, mostly edge cases we have come across during development. Additionally, we should run end-to-end tests of the CLI against known Markdown files to avoid introducing regressions.

This blog post might offer a nice approach to testing CLIs.

Related to #58.
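A sketch of what such a test could look like, assuming a built runme binary and golden files next to the known READMEs in a testdata directory (paths and layout are illustrative):

```go
package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"testing"
)

func TestListKnownREADMEs(t *testing.T) {
	readmes, err := filepath.Glob("testdata/*.md")
	if err != nil {
		t.Fatal(err)
	}
	for _, readme := range readmes {
		out, err := exec.Command("./runme", "ls", "--filename", readme).CombinedOutput()
		if err != nil {
			t.Fatalf("%s: %v\n%s", readme, err, out)
		}
		want, err := os.ReadFile(readme + ".golden")
		if err != nil {
			t.Fatal(err)
		}
		if string(out) != string(want) {
			t.Errorf("%s: output does not match golden file", readme)
		}
	}
}
```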

If no code block can be found, throw an error

Current Behavior

If the markdown file has no code block, runme just prints:

runme ls
NAME  FIRST COMMAND  # OF COMMANDS  DESCRIPTION

Desired Behavior

I suggest being more explicit and throwing an error, e.g. `No code block found in "Readme.md", get started writing code blocks here: ...`
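A minimal sketch of the suggested behavior (listBlocks is hypothetical; the help URL is left elided as in the suggestion):

```go
package main

import "fmt"

// listBlocks returns an error instead of printing an empty table
// when the parsed file contains no code blocks.
func listBlocks(filename string, blocks []string) error {
	if len(blocks) == 0 {
		return fmt.Errorf("no code block found in %q, get started writing code blocks here: ...", filename)
	}
	// ... render the table as today ...
	return nil
}
```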

Implement NotebookSerializer interface with Markdown rendering from AST

The goal is to figure out if we can implement VS Code's NotebookSerializer interface in runme. Doing that will enable us to easily add editing capabilities to notebooks rendered from Markdown files.

Highlights:

  • Implement the NotebookSerializer interface and expose it in the WASM module.
  • Deserialization is quite straightforward: take a byte array of a Markdown file and return NotebookData. There are multiple steps, though: parse the byte array into an AST (a tree structure), then flatten it and produce the list of cells which form the foundation of NotebookData.
  • Serialization of NotebookData is more challenging: it's the reverse process, i.e. creating an AST from the list of cells and finally producing a byte array. A naive round-trip sketch follows this list.
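Here Cell and Notebook-style types are simplified stand-ins for VS Code's NotebookData; the real implementation walks a proper AST instead of splitting on fences:

```go
package main

import "strings"

type Cell struct {
	Kind     string // "markup" or "code"
	Language string
	Value    string
}

// fence is the triple-backtick marker, built at runtime so this sketch
// stays embeddable in Markdown.
var fence = strings.Repeat("`", 3)

// Deserialize naively splits a Markdown document into markup and code
// cells on fence boundaries.
func Deserialize(source string) []Cell {
	var cells []Cell
	for i, part := range strings.Split(source, fence) {
		if i%2 == 1 { // inside a fence
			lang, body, _ := strings.Cut(part, "\n")
			cells = append(cells, Cell{Kind: "code", Language: lang, Value: body})
		} else if strings.TrimSpace(part) != "" {
			cells = append(cells, Cell{Kind: "markup", Value: part})
		}
	}
	return cells
}

// Serialize is the reverse: rebuild the Markdown byte stream from cells.
func Serialize(cells []Cell) string {
	var b strings.Builder
	for _, c := range cells {
		if c.Kind == "code" {
			b.WriteString(fence + c.Language + "\n" + c.Value + fence + "\n")
		} else {
			b.WriteString(c.Value)
		}
	}
	return b.String()
}
```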

Accept filename and code block id as parameter

I suggest removing the --chdir and --filename parameters as they don't feel very familiar. Given the natural way devs use ls, I suggest allowing the readme path to be passed as a parameter, e.g.:

Instead of:

runme ls --chdir ./examples --filename README.md

Just do:

runme ls ./examples/README.md

Same for executing code blocks:

runme run ./examples/README.md myExampleCodeBlock
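A sketch of how the positional argument could map onto the existing flags internally (splitTarget is hypothetical):

```go
package main

import "path/filepath"

// splitTarget derives the chdir directory and filename from a single
// positional path such as ./examples/README.md.
func splitTarget(arg string) (chdir, filename string) {
	dir, file := filepath.Split(arg) // "./examples/README.md" -> "./examples/", "README.md"
	if dir == "" {
		dir = "."
	}
	return dir, file
}
```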

Markdown formatter issues

This is a place to collect all issues related to Markdown formatting. There are two places where runme formats Markdown:

  • When runme fmt is called, it reformats the source file while trying to keep the original structure, in particular nested blocks.
  • When the WASM Runme.deserialize() is called, it uses the formatter to parse the source data; however, it additionally flattens blocks. runme fmt --flatten is the equivalent.

Bugs resulting from flattening should be submitted separately.

Bugs

None

Nice-to-haves

  • Emphasis and strong emphasis can be denoted using * or _ as described in the CommonMark spec. Currently, the formatter will always default to * because the underlying Markdown parser does not distinguish between them in the AST.
  • Alignment of paragraphs, code blocks, and other blocks in list items is not well-defined (especially consider long ordered lists with ordering numbers >= 10).

First pass WASM integration

  • New lines trailing every code block
  • Markdown triple backticks included in the value of code blocks
  • Byte arrays vs. strings as the return value of the serializer: let's be sure unicode is preserved
  • Sebastian will tinker with thread-safety of de-/serialization to better understand exposure

Create WASM asset to be downloaded by the extension

We don't want to check the WASM file into the extension repository as it blows up the repo size. Rather, we would like to bundle it as part of the extension release. Therefore we need a downloadable wasm file, preferably from the GitHub release.

Support tick annotations

It would be great if users could add additional information to the code block, which can then be parsed and sent along, e.g.:

```html type="svelte"
...
```

For the CLI we could use this to filter certain blocks, and when using rdme as a wasm file this could be helpful to annotate important information about the code block.
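A sketch of parsing such annotations from the fence info string, assuming simple key="value" pairs without embedded spaces (a robust parser would need proper quote handling):

```go
package main

import "strings"

// parseInfoString splits an info string like `html type="svelte"` into
// the language and a map of annotation attributes.
func parseInfoString(info string) (lang string, attrs map[string]string) {
	fields := strings.Fields(info)
	if len(fields) == 0 {
		return "", nil
	}
	lang = fields[0]
	attrs = make(map[string]string)
	for _, f := range fields[1:] {
		if k, v, ok := strings.Cut(f, "="); ok {
			attrs[k] = strings.Trim(v, `"`)
		}
	}
	return lang, attrs
}
```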

Frontmatter is not being preserved

Popular tooling like Docusaurus, Jekyll, and Hugo uses frontmatter to store metadata. Runme's parser should, at a minimum, make sure to preserve it.

Related to #88
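A sketch of the minimal preservation step: carve the frontmatter off verbatim before parsing and re-emit it unchanged on serialization (this assumes --- delimiters only; the frontmatter doesn't need to be parsed just to be preserved):

```go
package main

import "strings"

// splitFrontmatter returns the verbatim frontmatter block (including its
// --- delimiters) and the remaining Markdown source.
func splitFrontmatter(source string) (frontmatter, rest string) {
	if !strings.HasPrefix(source, "---\n") {
		return "", source
	}
	end := strings.Index(source[4:], "\n---\n")
	if end < 0 {
		return "", source
	}
	cut := 4 + end + len("\n---\n")
	return source[:cut], source[cut:]
}
```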

Distribute runme as npm module `runme`

To get ahead of the plan to use the CLI to run commands inside of vscode-runme, let's distribute the Go binaries and wasm builds as an NPM package. This will also allow us to transparently manage the dependencies between vscode-runme <> runme. I remember having seen an elegant way to repackage a Go binary as a module.

Ideally we could pick up on the existing tag-based publication workflows: https://github.com/stateful/runme/actions/workflows/release.yml

@admc should be able to add you to the runme npm module name, which we own.

Problem parsing with lists containing code blocks

The following example of markdown is not rendered correctly:

## Running

### In Minikube

Deploy the application to Minikube using the Linkerd2 service mesh.

1. Install the `linkerd` CLI

    ```bash
    curl https://run.linkerd.io/install | sh
    ```

1. Install Linkerd2

    ```bash
    linkerd install | kubectl apply -f -
    ```

1. View the dashboard!

    ```bash
    linkerd dashboard
    ```

1. Inject, Deploy, and Enjoy

    ```bash
    kubectl kustomize kustomize/deployment | \
        linkerd inject - | \
        kubectl apply -f -
    ```

1. Use the app!

    ```bash
    minikube -n emojivoto service web-svc
    ```

It should be rendered like this:

Running

In Minikube

Deploy the application to Minikube using the Linkerd2 service mesh.

  1. Install the linkerd CLI

    curl https://run.linkerd.io/install | sh
  2. Install Linkerd2

    linkerd install | kubectl apply -f -
  3. View the dashboard!

    linkerd dashboard
  4. Inject, Deploy, and Enjoy

    kubectl kustomize kustomize/deployment | \
        linkerd inject - | \
        kubectl apply -f -
  5. Use the app!

    minikube -n emojivoto service web-svc

But gets rendered like this:

(screenshot of the incorrect rendering, 2022-10-27, omitted)

Command block: execute one-by-one vs entirety

I did some testing based on this example: https://github.com/lifeiscontent/realworld

However, since it's not an interactive shell, directory changes aren't retained across commands. A quick fix involved concatenating the commands with ;, which makes them insensitive to non-zero exit codes (joining with && instead would preserve exit-code sensitivity). I also ran into an issue where RoR's migrations failed due to "duplication", which is a red herring.

Perhaps there's a better way to do this?

diff --git a/internal/runner/shell.go b/internal/runner/shell.go
index d9967d2..abf3422 100644
--- a/internal/runner/shell.go
+++ b/internal/runner/shell.go
@@ -5,6 +5,7 @@ import (
 	"io"
 	"os"
 	"os/exec"
+	"strings"
 
 	"github.com/pkg/errors"
 )
@@ -20,10 +21,9 @@ func (s Shell) Run(ctx context.Context) error {
 		sh = "/bin/sh"
 	}
 
-	for _, cmd := range s.Cmds {
-		if err := execSingle(ctx, sh, s.Dir, cmd, s.Stdin, s.Stdout, s.Stderr); err != nil {
-			return err
-		}
+	concatCmds := strings.Join(s.Cmds, "; ")
+	if err := execSingle(ctx, sh, s.Dir, concatCmds, s.Stdin, s.Stdout, s.Stderr); err != nil {
+		return err
 	}
 
 	return nil

Language detection

In the VS Code extension, we have a way to detect a language using https://github.com/microsoft/vscode-languagedetection. It has a model embedded which comes from https://github.com/yoeo/guesslang. It executes it using https://github.com/tensorflow/tfjs which comes with various backends like CPU or WebGL.

For runme, which is built and distributed as a statically linked binary, the best option would be TensorFlow Lite, which is fairly small (<6MB). Unfortunately, the original guesslang model might not be easily convertible to TF Lite. I tried, and got the following output:

2022-12-09 18:49:10.637176: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-12-09 18:49:17.220826: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-12-09 18:49:20.342003: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:362] Ignored output_format.
2022-12-09 18:49:20.342041: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:365] Ignored drop_control_dependency.
2022-12-09 18:49:20.343338: I tensorflow/cc/saved_model/reader.cc:45] Reading SavedModel from: /Users/adambabik/projects/github.com/yoeo/guesslang/guesslang/data/model
2022-12-09 18:49:20.346015: I tensorflow/cc/saved_model/reader.cc:89] Reading meta graph with tags { serve }
2022-12-09 18:49:20.346046: I tensorflow/cc/saved_model/reader.cc:130] Reading SavedModel debug info (if present) from: /Users/adambabik/projects/github.com/yoeo/guesslang/guesslang/data/model
2022-12-09 18:49:20.354134: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:357] MLIR V1 optimization pass is not enabled
2022-12-09 18:49:20.372631: W tensorflow/core/common_runtime/type_inference.cc:339] Type inference failed. This indicates an invalid graph that escaped type checking. Error message: INVALID_ARGUMENT: expected compatible input types, but input 1:
type_id: TFT_OPTIONAL
args {
  type_id: TFT_PRODUCT
  args {
    type_id: TFT_TENSOR
    args {
      type_id: TFT_INT64
    }
  }
}
 is neither a subtype nor a supertype of the combined inputs preceding it:
type_id: TFT_OPTIONAL
args {
  type_id: TFT_PRODUCT
  args {
    type_id: TFT_TENSOR
    args {
      type_id: TFT_INT32
    }
  }
}

	while inferring type of node 'dnn/zero_fraction/cond/output/_44'
2022-12-09 18:49:20.378047: I tensorflow/cc/saved_model/loader.cc:229] Restoring SavedModel bundle.
2022-12-09 18:49:20.409092: I tensorflow/cc/saved_model/loader.cc:213] Running initialization op on SavedModel bundle at path: /Users/adambabik/projects/github.com/yoeo/guesslang/guesslang/data/model
2022-12-09 18:49:20.425946: I tensorflow/cc/saved_model/loader.cc:305] SavedModel load for tags { serve }; Status: success: OK. Took 82615 microseconds.
2022-12-09 18:49:20.538034: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:268] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
2022-12-09 18:49:20.820819: W tensorflow/compiler/mlir/lite/flatbuffer_export.cc:2046] TFLite interpreter needs to link Flex delegate in order to run the model since it contains the following Select TFop(s):
Flex ops: FlexBincount, FlexCast, FlexConcatV2, FlexSparseFillEmptyRows, FlexSparseReshape, FlexSparseSegmentMean, FlexSparseSegmentSum, FlexStringSplit, FlexStringToHashBucketFast, FlexTensorListReserve, FlexTensorListSetItem, FlexTensorListStack
Details:
	tf.Bincount(tensor<?xi32>, tensor<i32>, tensor<0xi64>) -> (tensor<?xi64>) : {device = ""}
	tf.Cast(tensor<!tf_type.variant<tensor<*x!tf_type.string>>>) -> (tensor<!tf_type.variant>) : {Truncate = false}
	tf.ConcatV2(tensor<?x!tf_type.string>, tensor<10000x!tf_type.string>, tensor<i32>) -> (tensor<?x!tf_type.string>) : {device = ""}
	tf.SparseFillEmptyRows(tensor<?x2xi64>, tensor<?xi64>, tensor<2xi64>, tensor<i64>) -> (tensor<?x2xi64>, tensor<?xi64>, tensor<?xi1>, tensor<?xi64>) : {device = ""}
	tf.SparseReshape(tensor<?x2xi64>, tensor<2xi64>, tensor<2xi64>) -> (tensor<?x2xi64>, tensor<2xi64>) : {device = ""}
	tf.SparseSegmentMean(tensor<?x70xf32>, tensor<?xi32>, tensor<?xi64>) -> (tensor<?x70xf32>) : {device = ""}
	tf.SparseSegmentSum(tensor<?x54xf32>, tensor<?xi32>, tensor<?xi64>) -> (tensor<?x54xf32>) : {device = ""}
	tf.StringSplit(tensor<1x!tf_type.string>, tensor<!tf_type.string>) -> (tensor<?x2xi64>, tensor<?x!tf_type.string>, tensor<2xi64>) : {device = "", skip_empty = false}
	tf.StringToHashBucketFast(tensor<?x!tf_type.string>) -> (tensor<?xi64>) : {device = "", num_buckets = 5000 : i64}
	tf.TensorListReserve(tensor<i32>, tensor<i32>) -> (tensor<!tf_type.variant<tensor<*x!tf_type.string>>>) : {device = ""}
	tf.TensorListSetItem(tensor<!tf_type.variant>, tensor<i32>, tensor<?x!tf_type.string>) -> (tensor<!tf_type.variant<tensor<*x!tf_type.string>>>) : {device = ""}
	tf.TensorListStack(tensor<!tf_type.variant<tensor<*x!tf_type.string>>>, tensor<1xi32>) -> (tensor<?x?x!tf_type.string>) : {device = "", num_elements = -1 : i64}
See instructions: https://www.tensorflow.org/lite/guide/ops_select
2022-12-09 18:49:20.820999: W tensorflow/compiler/mlir/lite/flatbuffer_export.cc:2057] The following operation(s) need TFLite custom op implementation(s):
Custom ops: StringNGrams
Details:
	tf.StringNGrams(tensor<?x!tf_type.string>, tensor<2xi64>) -> (tensor<?x!tf_type.string>, tensor<2xi64>) : {Tsplits = i64, device = "", left_pad = "", ngram_widths = [2], pad_width = 0 : i64, preserve_short_sequences = false, right_pad = "", separator = " "}
See instructions: https://www.tensorflow.org/lite/guide/ops_custom

Using:

from pathlib import Path

import tensorflow as tf

DATA_DIR = Path(__file__).absolute().parent.joinpath('data')
DEFAULT_MODEL_DIR = DATA_DIR.joinpath('model')

# Convert the model
converter = tf.lite.TFLiteConverter.from_saved_model(str(DEFAULT_MODEL_DIR))
converter.allow_custom_ops = True
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]

tflite_model = converter.convert()

# Save the model.
with open('model.tflite', 'wb') as f:
  f.write(tflite_model)

It actually generated a model.tflite, which it might be possible to execute if StringNGrams is provided.

To be continued...

v0.4 release punch list

In priority order:

  • For code block lang normalization: if the code block lang is unknown, include it as CODE (vs. MARKUP). However, keep returning MARKUP for explicitly set languages that are not supported (e.g. py)
  • Unify supported languages between the extension and the CLI (golang is not supported)

Once fixed, release as v0.4.

Nice to have:

  • Additional new lines after converting Code -> Markdown -> Code? Could be my mistake

command index

One could run a command based on its index.

e.g. if we have this list:

$ runme list
ID  NAME         FIRST COMMAND              # OF COMMANDS  DESCRIPTION
1   pip-install  pip install approvaltests  3              Installs
2   npx-ava      npx ava                    1              Run the Tests

we could run:

$ runme 1

to execute pip-install
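A sketch of the lookup, assuming a numeric argument selects the 1-based index and anything else falls back to name matching:

```go
package main

import "strconv"

// resolveBlock maps a CLI argument to a block name, either by 1-based
// index (as printed by `runme list`) or by exact name.
func resolveBlock(arg string, names []string) (string, bool) {
	if i, err := strconv.Atoi(arg); err == nil {
		if i >= 1 && i <= len(names) {
			return names[i-1], true
		}
		return "", false
	}
	for _, n := range names {
		if n == arg {
			return n, true
		}
	}
	return "", false
}
```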

[Evol] Multiname, alias, multi run and dependencies

Hello,

First, thanks for this program: very, very, very useful.
Is it possible to add some functionality? (Sorry for putting everything in one issue; depending on your answer I can split it up and keep only the most interesting parts.)

Alias
I have a function which initializes the frontend ... so the name is init_front ... but I would like to alias it.
Adding an alias, or allowing several names, could help.

Multi run
To keep the readme clear I add 2 blocks: one to download frontend dependencies, one to download backend dependencies.
Adding a way to launch both in one command would be cool.

Dependencies
Dependencies, pre/post tasks: a way to add an "install all in one" task. For example (a sketch of how resolution might work follows the examples):

```sh {name=download_front_deps}
cd src
npm install
```

```sh {name=build_front,deps=[download_front_deps]}
cd src
npm run build
```

```sh {name=download_back_deps}
cd src
go mod download
go mod verify
```

```sh {name=run_program,deps=[download_back_deps]}
cd src
go run main.go
```

```sh {name=install_program,deps=[run_program,build_front]}
echo "cool you installed it"
```
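Given blocks like the above, resolution could work roughly like this depth-first sketch (cycle detection and actual execution are omitted):

```go
package main

import "fmt"

type Block struct {
	Name string
	Deps []string
}

// runWithDeps runs a block's dependencies first, depth-first, skipping
// anything that already ran.
func runWithDeps(name string, blocks map[string]Block, done map[string]bool) error {
	if done[name] {
		return nil
	}
	b, ok := blocks[name]
	if !ok {
		return fmt.Errorf("unknown block %q", name)
	}
	for _, dep := range b.Deps {
		if err := runWithDeps(dep, blocks, done); err != nil {
			return err
		}
	}
	done[name] = true
	fmt.Println("running", name) // placeholder for executing the block
	return nil
}
```

Running install_program would then execute download_back_deps, run_program, download_front_deps, and build_front before the final block.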

Even without those additions, thanks for this exceptional program; it's so cool and so useful.

Set "good" default metadata

Currently a Runme cell has the following metadata interface:

  export interface Metadata {
    background?: string
    interactive?: string
    closeTerminalOnSuccess?: string
    mimeType?: string
    ['runme.dev/name']?: string
  }

When we deserialize the markdown, the metadata object can currently be empty, which requires us to transform it in our extension. Given that these capabilities are implemented here, I would suggest having runme sanitize the values and ensure they are passed on with good defaults, so that the interface becomes:

  export interface Metadata {
    // @defaults false
    background: boolean
    // @defaults: true
    interactive: boolean
    // @defaults: true
    closeTerminalOnSuccess: boolean
    // @defaults: "text/plain"
    mimeType: string
    // @defaults: random string
    ['runme.dev/name']: string
  }
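A sketch of what the sanitization could look like on the Go side, treating metadata as the flat string map it is serialized as (withDefaults is hypothetical, not runme's actual code):

```go
package main

// withDefaults fills in the suggested defaults for any key the user
// didn't set, so the extension always receives a complete object.
func withDefaults(m map[string]string) map[string]string {
	out := map[string]string{
		"background":             "false",
		"interactive":            "true",
		"closeTerminalOnSuccess": "true",
		"mimeType":               "text/plain",
		// "runme.dev/name" would additionally get a generated random name.
	}
	for k, v := range m {
		out[k] = v // user-provided values win
	}
	return out
}
```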

I am also wondering why the name (runme.dev/name) has the runme prefix; it seems unnecessary to me. Is there any reasoning behind it?

Wdyt?

Leverage GH's README viewer

gh repo view stateful/tdme will display markdown READMEs in the terminal. Should be straightforward to replicate the functionality.

Handle frontmatter

Frontmatter is commonplace in markdown tooling, e.g. https://jekyllrb.com/docs/front-matter/

A few different things it could be useful for:

  • Preserving it for compatibility with markdown tooling
  • De-/serializing document-level metadata (all or a subset)
  • Notebook UX/UI to modify frontmatter and/or deserialized metadata
