
wash's Introduction

Warning

wash, the wasmCloud Shell has moved, as discussed in our 2023/10/25 Community meeting! The code that powers wash is now found in the wasmCloud/wasmCloud monorepo, in the wasmCloud/crates/wash-cli folder.

We look forward to receiving your contributions, feedback, and user reports at the new home for wash code!


                                     _                 _    _____ _          _ _
                                ____| |               | |  / ____| |        | | |
 __      ____ _ ___ _ __ ___  / ____| | ___  _   _  __| | | (___ | |__   ___| | |
 \ \ /\ / / _` / __| '_ ` _ \| |    | |/ _ \| | | |/ _` |  \___ \| '_ \ / _ \ | |
  \ V  V / (_| \__ \ | | | | | |____| | (_) | |_| | (_| |  ____) | | | |  __/ | |
   \_/\_/ \__,_|___/_| |_| |_|\_____|_|\___/ \__,_|\__,_| |_____/|_| |_|\___|_|_|

Why wash

wash is a bundle of command line tools that, together, form a comprehensive CLI for wasmCloud development. Everything from generating new wasmCloud projects to managing cryptographic signing keys to interacting with OCI-compliant registries is contained within the subcommands of wash. Our goal with wash is to encapsulate our tools into a single binary to make developing WebAssembly with wasmCloud painless and simple.

Installing wash

Cargo

cargo install wash-cli

Linux (deb/rpm)

# Debian / Ubuntu (deb)
curl -s https://packagecloud.io/install/repositories/wasmcloud/core/script.deb.sh | sudo bash
sudo apt install wash

# Fedora (rpm)
curl -s https://packagecloud.io/install/repositories/wasmcloud/core/script.rpm.sh | sudo bash
sudo dnf install wash

Linux (snap)

sudo snap install wash --edge --devmode

macOS (brew)

brew tap wasmcloud/wasmcloud
brew install wash

Windows (choco)

choco install wash

Nix

nix run github:wasmCloud/wash

Using wash

wash has multiple subcommands, each specializing in one specific area of the wasmCloud development process.

build

Builds and signs the actor, provider, or interface as defined in a wasmcloud.toml file. wash looks for the configuration file in the directory where the command is run.
There are three main sections of a wasmcloud.toml file: common config, language config, and type config.

Common Config

| Setting | Type | Default | Description |
|---|---|---|---|
| name | string | | Name of the project |
| version | string | | Semantic version of the project |
| path | string | {pwd} | Path to the project directory to determine where built and signed artifacts are output |
| wasm_bin_name | string | "name" setting | Expected name of the wasm module binary that will be generated |
| language | enum [rust, tinygo] | | Language that the actor or provider is written in |
| type | enum [actor, provider, interface] | | Type of wasmCloud artifact that is being generated |

Language Config - [tinygo]

| Setting | Type | Default | Description |
|---|---|---|---|
| tinygo_path | string | which tinygo | The path to the tinygo binary |

Language Config - [rust]

| Setting | Type | Default | Description |
|---|---|---|---|
| cargo_path | string | which cargo | The path to the cargo binary |
| target_path | string | ./target | Path to cargo/rust's target directory |

Type Config - [actor]

| Setting | Type | Default | Description |
|---|---|---|---|
| claims | list | [] | The list of provider claims that this actor requires, e.g. ["wasmcloud:httpserver", "wasmcloud:blobstore"] |
| registry | string | localhost:8080 | The registry to push to, e.g. "localhost:8080" |
| push_insecure | boolean | false | Whether to push to the registry insecurely |
| key_directory | string | ~/.wash/keys | The directory to store the private signing keys in |
| filename | string | <build_output>_s.wasm | The filename of the signed wasm actor |
| wasm_target | string | wasm32-unknown-unknown | Compile target |
| call_alias | string | | The call alias of the actor |

Type Config - [provider]

| Setting | Type | Default | Description |
|---|---|---|---|
| capability_id | string | | The capability ID of the provider |
| vendor | string | | The vendor name of the provider |

Type Config - [interface]

| Setting | Type | Default | Description |
|---|---|---|---|
| html_target | string | ./html | Directory to output HTML |
| codegen_config | string | . | Path to codegen.toml file |

Example

name = "echo"
language = "rust"
type = "actor"
version = "0.1.0"

[actor]
claims = ["wasmcloud:httpserver"]

[rust]
cargo_path = "/tmp/cargo"
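
For comparison, a capability provider project's configuration could look roughly like this. The values are illustrative, assembled from the fields in the tables above rather than taken from a real project:

```toml
name = "httpserver"
language = "rust"
type = "provider"
version = "0.1.0"

[provider]
capability_id = "wasmcloud:httpserver"
vendor = "acme"
```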

call

Invoke a wasmCloud actor directly with a specified payload. This allows you to test actor handlers without the need to manage capabilities and link definitions for a rapid development feedback loop.

claims

Generate JWTs for actors, capability providers, accounts and operators. Sign actor modules with claims including capability IDs, expiration, and keys to verify identity. Inspect actor modules to view their claims.

completions

Generate shell completion files for Zsh, Bash, Fish, or PowerShell.

ctl

Interact directly with a wasmCloud control interface, allowing you to imperatively schedule actors and providers and to modify the configuration of a wasmCloud host. Can be used to interact with local and remote control interfaces.

ctx

Automatically connect to your previously launched wasmCloud lattice with a managed context or use contexts to administer remote wasmCloud lattices.

drain

Manage contents of the local wasmCloud cache. wasmCloud manages a local cache that will avoid redundant fetching of content when possible. drain allows you to manually clear that cache to ensure you're always pulling the latest versions of actors and providers that are hosted in remote OCI registries.

gen

Generate code from smithy files using weld codegen. This is the primary method of generating actor and capability provider code from .smithy interfaces. It currently has first-class support for Rust actors and providers, along with autogenerated HTML documentation.

keys

Generate ed25519 keys for securely signing and identifying wasmCloud entities (actors, providers, hosts). Read more about our decision to use ed25519 keys in our ADR.

lint

Perform lint checks on .smithy models, outputting warnings for best practices with interfaces.

new

Create new wasmCloud projects from predefined templates. This command is a one-stop-shop for creating new actors, providers, and interfaces for all aspects of your application.

par

Create, modify and inspect provider archives, a TAR format that contains a signed JWT and OS/Architecture specific binaries for native capability providers.

reg

Push and pull actors and capability providers to/from OCI-compliant registries. Used extensively in our own CI/CD and in local development, where a local registry is used to store your development artifacts.

up

Bootstrap a wasmCloud environment in one easy command, supporting both launching NATS and wasmCloud in the background as well as an "interactive" mode for shorter lived hosts.

validate

Perform validation checks on .smithy models, ensuring that your interfaces are valid and usable for codegen and development.

Shell auto-complete

wash has support for autocomplete for Zsh, Bash, Fish, and PowerShell. See Completions for instructions for installing autocomplete for your shell.

Contributing to wash

Visit CONTRIBUTING.md for more information on how to contribute to the wash project.

wash's People

Contributors

ahmedtadde, aish-where-ya, autodidaddict, billyoung, brooksmtownsend, byblakeorriver, ceejimus, chrisrx, connorsmith256, dependabot[bot], emattiza, frigus02, iceber, jordan-rash, lachieh, lee-orr, matthewtgilbride, mattwilkinsonn, protochron, ricochet, rvolosatovs, stevelr, stuartharris, theduke, thomastaylor312, tiptop96, vados-cosmonic, vestigej, wallysferreira, yordis


wash's Issues

Don't overwrite claims on insert

Currently, when wash par performs the insert operation, it follows this process:

  1. Load provider archive
  2. Retrieve keypairs for signing archive
  3. Open binary and insert into archive
  4. Write out provider archive

This process always overwrites the embedded claims JWT. Previously that was acceptable, since the embedded claims could not be reused to re-sign a modified archive. However, the overwrite is lossy: an insert only retains the capid, name, and vendor fields, notably omitting the revision and version fields from the JWT.

It's not immediately clear to me whether this can be fixed in wash alone or whether it needs a patch to https://github.com/wascc/provider-archive; this will need more investigation.

sign crashes if key file contains trailing newline

Hi there. I just started playing with wasmCloud and it's going great so far. I started generating my own key files before realizing wash does this automatically if I don't have any, yet.

Anyway, my files ended with a trailing newline, and that caused wash claims sign to crash with a stack overflow message:

$ wash claims sign ...
thread 'main' has overflowed its stack
fatal runtime error: stack overflow

You should be able to reproduce like this:

$ wash keys gen account -o json | jq '.seed' -r > account.nk
$ wash claims sign --issuer account.nk path-to-a-wasm-file.wasm --name dummy

What would be the correct behaviour for key files with a trailing newline?

Implement wait flag for ctl commands

When imperatively interacting with a wasmcloud host, or a lattice of interconnected hosts, it's important to be able to know when certain events take place. For example, when you issue this command to a wasmcloud host:

wash ctl start actor wasmcloud.azurecr.io/subscriber:0.2.0

You receive an acknowledgment that the actor start event has been received. All this means is that your command was successfully delivered to the host and that the host is going to proceed with pulling the actor from our OCI registry and starting it. What you don't get from this acknowledgment is the assurance that the actor actually was started.

I propose a --wait flag that can be appended to the start, stop, link and update subcommands under ctl. These are all commands that return with an acknowledgment but not a confirmation of the command executing. When the --wait flag is supplied, the command will instead block until the appropriate event can be observed (e.g. ActorStarted for start actor).

In order to monitor the event stream, there are two possibilities:

  • When connected to hosts running in a lattice via NATS, events are published as NATS messages that can be monitored. This can be used to monitor events when connected via NATS.
  • When connected to a host without NATS (which will be a possibility in the REPL after #92 ) the host can be built with an event sender that can be monitored for the same events that would be sent across NATS.

Both of these possibilities are important. When using wash as a command-line interface, we'll want to monitor events that are flowing over NATS messages, as the control interface operates on a lattice that's connected to via NATS. We'll want to do the same thing when wash up is launched with a NATS connection. When the REPL is launched without a NATS connection (after #92 ) we should directly build a host with an event receiver that can be used to monitor these events.

Fully contained REPL host (e.g. no NATS)

Currently, the wash REPL launches with its own instantiated host. This host can be interacted with via the wasmcloud control-interface, and as such requires NATS to be running so that ctl commands can be issued to that host.

Requiring NATS to be present for the REPL host, while it is representative of a more "production-ready" architecture for wasmcloud, adds an additional layer of friction to the initial developer experience. I propose that we add the ability, in the event of failing to connect to NATS, to launch a REPL host without the rpc and control clients. This host should have an event collector (mentioned in #91) once that's implemented in wasmcloud-host.

With a host that does not have a NATS connection, all ctl commands can be instead directly invoked on that host struct. This means that, with our current architecture, the REPL host will need to be able to receive commands instead of just executing in its own thread. An implementation of this might look like using channels to pass commands / events to the thread containing the host. Adding a boolean flag denoting the NATS connection to some form of REPL state may be beneficial to easily determine if a command should be published to a control interface or invoked directly on the local host.

Support "up" REPL environment

In order to simplify the initial waSCC developer experience, wash should provide a REPL interface for interacting with actors, capability providers, and our preconfigured examples.

up should launch a split-screen environment where one pane is the classic REPL where users can type in commands such as:

wash> link wascc.azurecr.io/kvcounter:v1 wascc:key_value wascc.azurecr.io/redis-provider:v1 URL=redis://localhost:6379

wash> start wascc.azurecr.io/redis-provider:v1

wash> link Mxxxx wascc:http_server PORT=8080

wash> start wascc.azurecr.io/kvcounter:v1 

wash> start wascc.azurecr.io/http-server-provider:v1

wash> curl localhost:8080/mycounter

Which as you can see is the full kvcounter example demo for waSCC.
The other pane should display the output of these commands, so for the above example it should appear like this:

Link between actor Mxxx and provider Vxxx defined for 'default' named instance of 'wascc:key_value'
Starting wascc.azurecr.io/redis-provider:v1 (Vxxxxx)... [Redis Key-Value]
Started Mxxxx (Key-Value Demo)
Enabling link between Mxxx and Vxxx (wascc:key_value/default)
Link between actor Mxxxx and provider Vxxx defined for 'default' named instance of 'wascc:http_server'
Starting wascc.azurecr.io/http-server-provider:v1 (Vxxx) [waSCC HTTP Server]...
Enabling link between Mxxx and provider Vxxx defined for 'default' named instance of 'wascc:http_server'
{"value": 1}

In this scenario each command results in a one-line output, but allowing output and the typing environment to be separate we can let users watch output fly by while leaving their command history in plain view.

TODO:

  • Implement command history that can be browsed with up/down arrow keys
  • Hook up commands to the command interface
  • (investigate) When I tab over to the smart widget and select a log, it doesn't change the output
  • (investigate) I can't scroll the log output on the right side log collector

Support generating wasmcloud projects with wash

CC @pkedy

Currently, the waPC CLI (repo here) allows you to generate waPC projects (and therefore wasmcloud projects) for our three main supported languages (Rust, AssemblyScript, and TinyGo). After the project is generated, a widl file can be dropped into the project structure and used directly to generate code. This is incredibly useful for generating scaffold code for wasmcloud actor interfaces, actors, and providers.

As it stands today, the preferred method to generate an actor for wasmcloud development is to run the following:

cargo generate --git https://github.com/wasmCloud/new-actor-template

As you can see, this is a Rust specific implementation. Currently, the best way to suggest writing AssemblyScript or TinyGo actors is to point developers to our actor interfaces which provide sample actor code in their README's or documentation.

An ideal solution to this problem would be to utilize the code generation functionality of the waPC CLI, or otherwise an actor scaffold template, encapsulated into a wash command. The CLI API could look like:

wash generate rust myactor
wash generate assemblyscript myactor-as
wash generate tinygo myactor-go

Generating an actor in this way should provide similar functionality to the above cargo generate, but for all of the languages that we support. This would allow developers new to wasmcloud to directly generate their project in their language of choice using wash.

Write a proper README

The current wash README is essentially our "command-driven-development" from when we created this repository. A proper README should be written up to welcome newcomers to the wash project.

Support all wash commands in REPL

At the time of writing this issue, the REPL is focused on providing a fast feedback and development environment for wasmCloud developers. All of the command interface (ctl) commands are currently supported in the REPL, making the current state of the REPL primarily a command interface tool.

wash up is a separate command from ctl, and it should support all wash operations (e.g. claims, keys, par ...) in order to provide a fully featured REPL for the cli. There are some complications that come with this, like needing to pay attention to where you are in the local filesystem (at least a pwd equivalent would be necessary).

Because wash up re-uses the StructOpt structs from the ctl module, the same can be done for the other modules, so implementing this should not require duplicating large amounts of code.

Support parGEEzy compression

Currently the create, insert and inspect subcommands of wash only operate on uncompressed .par files. Using the provider-archive crate at version 0.2.0 introduces two new functions, compress and decompress, that should allow wash to interface with compressed provider archives.

My imagined feature list

  1. create should have a boolean flag that enables the creation of a compressed .par.gz file rather than an uncompressed .par
  2. insert should be able to decompress a parGEEzy, insert another library into the par, and output a modified compressed file
  3. inspect should be able to print metadata information about a compressed file, same as uncompressed, without modifying the file itself

Allow manual purge of wasmCloud host caches

A command, perhaps wash drain (or just drain while inside wash up) should purge the caches used by the wasmCloud host in order to verify that you're getting the newest/most current bits from an OCI source. wasmCloud's caches are TEMP_DIR/wasmcloudcache and TEMP_DIR/wasmcloud_ocicache for the extruded PAR libraries and downloaded OCI binaries respectively.

Command output logic should be handled at the CLI / REPL level

Currently, across the wash modules, there are a few different ways of outputting information. For the CLI commands, this is generally accomplished with println; for the REPL, it is accomplished either with the log level macros (info, debug, etc) or with the log_to_output function. With the latest PR making all commands available in the REPL, these differences become clearer and, in the REPL environment, frustrating, as printlns do not display properly in the REPL.

I think that wash can be refactored to where the top-level call in each module (e.g. handle_command) should return a String object that contains any appropriate output for that command. Refactoring in this way will allow the main.rs module to println! the output, and allow the REPL to call log_to_output for the appropriate commands. Throughout the code we can maintain appropriate log level filter macros, as those will be logged to the overall REPL log and should be available in the CLI with the appropriate RUST_LOG filter.

A few considerations that come to mind:

  • The REPL delegation should get simpler, as each command should just be directly passed to the appropriate handle_command
  • The spinner logic in reg and ctl will need to be examined, as we do not want to display a spinner in the REPL. This relates to #57 as well, and can probably be taken care of at the same time
  • Since the TUI_LOGGER widget accepts log targets, log targets should be defined as constants to keep the level logs organized
  • Because this will involve touching a lot of code, this should be tackled quickly to avoid merge conflicts getting out of hand. If I (Brooks) take this on, I'll comment on this issue to denote that I started.

Include public key in `wash par inspect`

wash par inspect is responsible for inspecting a provider archive file and dumping relevant information to stdout. It currently outputs something similar to:

╔════════════════════════════════════════════╗
║       HTTP Server - Provider Archive       ║
╠════════════════════════╦═══════════════════╣
║ Capability Contract ID ║ wascc:http_server ║
╠════════════════════════╬═══════════════════╣
║ Vendor                 ║             waSCC ║
╠════════════════════════╩═══════════════════╣
║       Supported Architecture Targets       ║
╠════════════════════════════════════════════╣
║ x86_64-linux                               ║
║ x86_64-macos                               ║
╚════════════════════════════════════════════╝

However, the provider archive public key is missing from this output. This should be added as another row in this table, and should be available in the current inspect function as the subject of the claims struct

Update provider-archive crate version to 0.3.1

0.3.1 allows rev and ver information to be preserved when updating a par with wash par insert. This dependency should be updated and tested (I've already done some intermediate tests before issuing the provider-archive PR) to ensure that this information is correctly being preserved.

Support par verification

It would be valuable for wash to support some form of parJEEzy verification method, perhaps wash par check or wash par verify. This verification process would involve inspecting the par file, determining that your system configuration is compatible (architecture / OS pair) and also ensuring that the par instantiates properly.

This is somewhat related to wasmCloud/wasmCloud#59, where this command would be able to tell us that the arm-linux arch/os pair was valid, however the redis parJEEzy fails to instantiate.

No newline between spinner output and following println

⢃⠀ Advertising link between MCFMFDWFHGKELOXPCNCDXKK5OFLHBVEWRAOXR5JSQUD2TOFRE3DFPM7E and VC4PGPGUM3UHVBDFATPAMFHJJLTLE5LWZIJ36ZTVH4FTEP4QP24D3UR7 ... Advertised link (MCFMFDWFHGKELOXPCNCDXKK5OFLHBVEWRAOXR5JSQUD2TOFRE3DFPM7E) <-> (VC4PGPGUM3UHVBDFATPAMFHJJLTLE5LWZIJ36ZTVH4FTEP4QP24D3UR7) successfully

These two outputs are usually shown on the same line, which vastly overflows the 80 character line wrap that we usually try and stick to. At a minimum, spinner outputs and the success println! after should be separated with a newline.

par inspect does not output revision and version

wash par inspect is a way of inspecting a provider archive and printing the contents of the embedded claims to stdout in a human readable format (or, after #39, potentially as json)

The claims information displayed does not include revision or version information, which is invaluable for identifying the correct version of a parJEEzy to use. In order to implement this, one should just need to add additional rows to the table if rev and ver information is present on the claims.

Add ability to "skip" words in REPL.

In a terminal you can use option/alt + left/right arrow to move the cursor across a word. We should support this in the REPL as well.

To do this we need to define what constitutes a word/whitespace. E.g. in some terminals (I think most) this operation skips to the next dash but passes through an underscore. I suggest we aim for something along those lines, meaning we at least recognize dashes and spaces as "whitespace". Any suggestions are more than welcome here!

Also, we noticed that there is no out-of-the-box support for detecting a Mac keyboard's Option key as a KeyModifier in crossterm, so we need to figure out how to do that.

CLI should globally support JSON output

There should be a top-level flag that is supplied (possibly --json or -j) that switches between user-friendly output and automation-friendly (JSON) output. This lets the CLI be scriptable in many different ways, not the least of which allows the output to be further refined or interrogated with tools like jq.

Automated pipeline

Turn on GitHub Actions and enable automatic building, unit testing, and integration testing. Additionally, support automatic release (tag) publication to crates.io from inside the GH Action, as well as publication to our various packagecloud (apt, brew, etc.) resources.

Support pluggable decoders

One thing that we've discovered while working on the wash CLI and, more specifically, the up REPL, is that we cannot provide contextual meaning to the output of actor function calls or provider calls. Because of the way MessagePack works, we cannot reliably determine how to decode the data without having some form of schema... coincidentally, this is exactly what we have with the widl files that produce generated code for actors and providers alike.

Without a concrete decoder, we can't actually display the data coming from actors and providers. To address this, we should support the notion of pluggable decoders. The following is one way in which this could work:

[RFC] Flexible Plugins

Each plugin is a waPC module created in any of the languages we support for waPC widl generation. This module would accept incoming messages of type PerformDecode that would have the following fields:

  • namespace - A namespace for the message. This corresponds directly to the namespace of a given widl file
  • message_type - The type of the message in need of decoding. Note that this is NOT the operation name, it is a discrete message type scoped to a namespace. Again, this corresponds directly to a data type that was generated from a widl pass.
  • format - a numeric value indicating the format we'd like the data decoded in. (1 = JSON, 2 = Debug/Human, for example)
  • payload - the actual payload that we want decoded. This is a binary blob (byte array) containing the messagepack-encoded data.

This plugin (remember, it's "just a wapc module") then returns a DecodeResponse containing the following fields:

  • handled - a boolean value indicating whether this plugin handled the request. A value of false here should be assumed to indicate that the module in question is unaware of the data type requested. Anything executing said plugin should then move on to the next plugin in the chain to see if the next one can handle the decode request.
  • decoded_string - A string that corresponds to the requested format containing the decoded value.

By allowing developers to simply drop wapc decoder modules into the ~/.wash directory, we not only give people the automatic ability to decode first party messages (we can host a coredecoder module in our OCI and download it if it's not present), but we give consumers, developers, and even enterprise organizations the ability to generate "pretty-printers" for their respective data types. This even means they can customize them so that the decoded version of a message excludes PII/sensitive fields.

Par Inspect should support OCI URI

Currently wash par inspect only works with a relative/absolute filepath to a provider archive on disk. inspect should also support an OCI URI as an argument, which will then pull + inspect the par (without persisting on disk, potentially unless a flag is toggled like --store?). That way we can verify the claims of a remote parJEEzy without downloading it first.

Extra auto-generated key created for Windows binaries

When using wash par to create a provider archive and not explicitly providing signing keys, wash will auto-generate keys based upon the binary filename passed in. For example, wash par insert --binary ./libwascc_fs.so will create keys (if they do not already exist) named libwascc_fs_{account,service}.nk. This becomes an issue when passing in a Windows binary since they do not contain the lib prefix as unix-like systems do. So the above example for Windows will be wash par insert --binary ./wascc_fs.dll and will result in auto-generated keys for wascc_fs_{account,service}.nk.

This only matters when the keys are lazily created for the user, which seems to be mostly for development/testing. However, it could be beneficial to base the auto-generated key name on something that will be consistent across platforms. A naive solution might be simply trimming the lib prefix.

can't scroll Tui Log pane

As a wash user, I want to be able to read the logs, but sometimes they scroll out of view and there isn't a way to read them. Could a keycode be added to enable scrolling?

Tab completion for REPL

As a user it would be nice to have tab completion for the REPL, especially for things like actor/provider IDs. I think a good initial implementation would be, upon pressing tab:

  • Complete any verbs/commands (e.g. link, start actor, get host)
  • If following a command that expects an argument, refresh a list of actors/providers/etc from the host (depending upon context) and allow completion from the generated list.

Error building on armv7

error: attributes are not yet allowed on `if` expressions
   --> /home/flo/.cargo/registry/src/github.com-1ecc6299db9ec823/parity-wasm-0.42.1/src/elements/types.rs:196:3                                                                            
    |
196 |         #[cfg(not(feature="multi_value"))]
    |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

error: aborting due to previous error

error: could not compile `parity-wasm`.

Spinner library panics often

The spinner library, while great in a normal execution environment, panics often depending on the terminal. I've observed the following error on GitHub Actions runs and on a Windows Powershell environment:

thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: NotSupported', C:\Users\brooks\.cargo\registry\src\github.com-1ecc6299db9ec823\spinner-0.5.0\src\lib.rs:164:37
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

This needs more investigation to find out what's causing the panic. I already have disabled the spinner with JSON output, but I think it might be wise to catch errors stemming from instantiating the spinner: https://github.com/wasmCloud/wash/blob/main/src/ctl.rs#L794.

Revisit default timeouts for commands

With asynchronous commands, and quick-acks of ctl start commands, we should revisit timeouts to ensure that the REPL feels responsive and quick to the user. I believe the current timeouts sit around 5 seconds for start provider for example, which is longer than should be needed for an information gathering function.

REPL asynchronous calls block user input and REPL drawing

When using the REPL, many commands involve a command interface request that is asynchronous and immediately awaited for the timeout specified, or until the function is finished executing. A good example of one of these calls that takes a long time is start provider wascc.azurecr.io/redis:v0.9.2 --timeout 30, which takes up to 30 seconds to download, verify, auction and schedule a provider on a wasmcloud-host instance. When any call like this takes over 3 seconds, the REPL appears "frozen" as no more logs are allowed to display and no user input is recognized until after the call completes.

To combat a similar issue with the host blocking operations, I've refactored the REPL host to start in its own thread and await events there. This works great to allow the host to run without blocking the REPL, but these long standing awaited calls are still blocking the execution of the REPL.

Without knowing exactly how difficult or messy this would get (with ownership and closures), a naive solution may be to send each awaited call into its own thread and rework the log_to_output function to receive channel messages instead, so that results can be awaited in the background while the user continues to execute other commands.
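A std-only sketch of that naive solution, assuming a worker thread per call and a channel the draw loop polls; `spawn_ctl_call` and `drain_output` are illustrative names, not wash's actual functions:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Run a long ctl call on its own thread and push its result over a channel,
// so the REPL draw loop only ever polls and never blocks.
fn spawn_ctl_call(tx: mpsc::Sender<String>) {
    thread::spawn(move || {
        // Stand-in for e.g. a `start provider` call that takes seconds.
        thread::sleep(Duration::from_millis(100));
        let _ = tx.send("Provider started".to_string());
    });
}

// A log_to_output replacement would call this on every REPL tick,
// draining whatever results have arrived without blocking.
fn drain_output(rx: &mpsc::Receiver<String>) -> Vec<String> {
    rx.try_iter().collect()
}
```

With this shape, the user keeps typing while results trickle into the log window as they arrive.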

Autogeneration of keys should be optional

Currently, the keys module will follow a few steps in order to locate a key that is supplied on the command line:

  1. Attempt to open a file at the input location (e.g. user provided /path/to/issuer.nk)
  2. Use input as a seed key (e.g. user provided SAASDAHSDKJAHSDJKAHSD)
  3. Look in the directory provided as a CLI flag for directory/modulename_moduletype.nk, e.g. ~/tmp/httpserver_account.nk
  4. Look in the directory provided as the environment variable $WASH_KEYS (same as above, different directory)
  5. Otherwise the keys module will generate a key for you and store it in the directory supplied in step 3 or 4 (respectively)

This autogeneration is a fallback, but it always occurs when no keys are found or given; it is not optional. The user should be able to pass a boolean flag, e.g. --no-generate, to prevent this step. When wash is used in production or in a CI/CD step, an autogenerated key allows the process to continue without finding a key, but a failure to find the production keys could then cause issues with actor / capability provider validation when attempting to deploy.
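A hedged, std-only sketch of how the opt-out could look; the candidate list, the `--no-generate` flag, and the `resolve_key` helper are all hypothetical, not wash's existing API:

```rust
use std::path::PathBuf;

// Walk the search locations (steps 1-4) in order, and only fall back to
// autogeneration (step 5) when the proposed --no-generate flag is absent.
fn resolve_key(candidates: &[PathBuf], no_generate: bool) -> Result<PathBuf, String> {
    for path in candidates {
        if path.exists() {
            return Ok(path.clone());
        }
    }
    if no_generate {
        Err("no key found and --no-generate was set".to_string())
    } else {
        // Placeholder for the existing key generation fallback.
        Ok(PathBuf::from("generated.nk"))
    }
}
```

In a CI/CD setting, passing the flag would turn a silent key substitution into a hard, early failure.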

Support environment variables for signing keys

Currently wash supports supplying the seeds for signing keys via a command line argument, either as the seed string (e.g. "SAAK6QHHYMNRDYHYHBFBX4JK5Y5JIXROP5LBDKJMP3IK32W44KXJ43OBRE") or as a file (e.g. issuer.nk).

Supplying the seed as a string was meant to cover the scenario where you would like to avoid placing your seed in a file and supply it as an environment variable instead; however, if the shell history is not cleaned properly, the seed will be viewable in plaintext.

For commands that request seed keys (e.g. wash claims token, wash par create, etc.), environment variable defaults should be defined via structopt.
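The lookup order structopt would provide can be sketched std-only like this; the variable name WASH_ISSUER_KEY is an assumption for illustration, not an existing wash flag:

```rust
use std::env;

// Mirror structopt's `env` attribute: prefer the CLI argument, then fall
// back to an environment variable, so the seed never enters shell history.
fn seed_from_arg_or_env(arg: Option<String>, var: &str) -> Option<String> {
    arg.or_else(|| env::var(var).ok())
}
```

With structopt itself this would just be `#[structopt(long, env = "WASH_ISSUER_KEY")]` on the field, which also hides the value from `ps` output compared to a positional argument.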

claims inspect and par inspect have different table formats

par inspect output should be updated to be the same simple table format as claims inspect

➜ wash par inspect wasmcloud.azurecr.io/redis:0.10.0
╔════════════════════════════════════════════════════════════════════════════════════╗
║                         wasmcloud-redis - Provider Archive                         ║
╠═════════════════════════╦══════════════════════════════════════════════════════════╣
║ Public Key              ║ VAZVC4RX54J2NVCMCW7BPCAHGGG5XZXDBXFUMDUXGESTMQEJLC3YVZWB ║
╠═════════════════════════╬══════════════════════════════════════════════════════════╣
║ Capability Contract ID  ║                                       wasmcloud:keyvalue ║
╠═════════════════════════╬══════════════════════════════════════════════════════════╣
║ Vendor                  ║                                                wasmCloud ║
╠═════════════════════════╬══════════════════════════════════════════════════════════╣
║ Version                 ║                                                   0.10.0 ║
╠═════════════════════════╬══════════════════════════════════════════════════════════╣
║ Revision                ║                                                        2 ║
╠═════════════════════════╩══════════════════════════════════════════════════════════╣
║                           Supported Architecture Targets                           ║
╠════════════════════════════════════════════════════════════════════════════════════╣
║ arm-linux                                                                          ║
║ x86_64-windows                                                                     ║
║ x86_64-macos                                                                       ║
║ x86_64-linux                                                                       ║
║ aarch64-linux                                                                      ║
╚════════════════════════════════════════════════════════════════════════════════════╝
➜ wash claims inspect wasmcloud.azurecr.io/kvcounter:0.2.0

                           KVCounter - Module
 Account      ACOJJN6WUP4ODD75XEBKKTCCUJJCY5ZKQ56XVKYK4BEJWGVAOOQHZMCW
 Module       MCFMFDWFHGKELOXPCNCDXKK5OFLHBVEWRAOXR5JSQUD2TOFRE3DFPM7E
 Expires                                                         never
 Can Be Used                                               immediately
 Version                                                     0.2.0 (2)
                              Capabilities
 wasmcloud:keyvalue
 wasmcloud:httpserver
                                  Tags
 None

cursor can be in wrong position after resize

I use a tiling window manager and often move a window from small to full screen and back again. In wash, if I resize the window larger or smaller, the REPL cursor sometimes appears a line above or below the line where you would type the next command. It turns out that if the REPL window has had any wrap-around (such as with a 'ctl get inventory -long id-'), where the command wraps onto a second line, and you then resize larger so that the command fits on one line, the cursor is drawn one line too low. It's as if it remembered the y position from before the resize and didn't recalculate it afterward. Going the other way, a long command in a large window, then shrinking the app window so wrapping is required, leaves the cursor one line too high.

After a brief search about tui resizing, I tried adding "terminal.autoresize()?;" at the beginning of draw_ui, but that didn't fix it. Typing "clear" in the REPL window does clear the screen and puts the cursor back in the correct place.

Not a high priority bug as it doesn't affect functionality, and using the "clear" command is a workaround.

claims token only supports seed files for operators, accounts and providers

When generating actor JWTs, wash claims utilizes the function extract_keypair, which will attempt to source the keypair from a command line literal, a filepath, or an environment variable.

The functions for generating operator, account, and provider JWTs still use the old logic rather than extract_keypair, so those JWTs need seed keys in a file on disk in order to be generated. For uniform functionality, and in the interest of scripting, all of these JWTs should be able to be generated without a seed key on disk and without exposing a seed key in the bash history.

Remove git dependencies from Cargo.toml

Currently wash depends on oci-distribution, control-interface and wasmcloud-host as git dependencies rather than pulling them from crates.io. Once these crates are published, wash should be changed to pull them from crates.io. After this is completed, we can resume pushing releases of wash to crates.io for users to install via

cargo install wash-cli

Integration tests

With our latest milestone release of 0.2.0, I began thinking about an API promise to ensure that we aren't pushing breaking changes as patch versions. Because we are pre-release, the versioning of wash from here on will follow pre-release semver: as long as the current API is compatible, new features can be added as patch versions, and any breaking API change will bump the minor version.

Because of this, a full integration test suite would benefit the project. Structopt allows constructing commands from str slices in Rust, so we can fully mock user input. If we mocked the full commands with multiple (all) command line flags, the tests would fail as soon as our API changes (e.g. a flag is removed or renamed). This would be a large-ish undertaking to test fully.

I want to avoid testing the specific libraries that wash depends on (e.g. we don't need to test that wash key gen module generates a valid nkey; that's the nkeys crate's job). Instead, we should focus on ensuring that each command you would type in the CLI / REPL executes as expected. Because the REPL re-uses commands from the CLI, I don't believe we will need to write tests for the REPL commands themselves.

determine_directory panics on Windows

fn determine_directory(directory: Option<String>) -> String {
    match directory {
        Some(d) => d,
        None => format!("{}/.wash/keys", env::var("HOME").unwrap()),
    }
}

This function, https://github.com/wasmCloud/wash/blob/main/src/keys.rs#L147 at the time of writing this issue, causes a panic in a Windows PowerShell environment. The following snippet works on Windows instead:

format!("{}{}/.wash/keys", env::var("HOMEDRIVE").unwrap(), env::var("HOMEPATH").unwrap())

This should just be an additional arm in the match: if $HOME is available, use that; otherwise use the HOMEDRIVE + HOMEPATH combination. It may also be useful, instead of just unwrapping, to provide an error message stating that we attempted to find a home directory and asking the user to set $WASH_HOME manually.
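Putting the pieces together, a sketch of the fixed function might look like this (the Result-returning signature and error text are suggestions, not the current wash code):

```rust
use std::env;

// Try $HOME first, then HOMEDRIVE + HOMEPATH on Windows, and return an
// error instead of panicking when neither can be found.
fn determine_directory(directory: Option<String>) -> Result<String, String> {
    if let Some(d) = directory {
        return Ok(d);
    }
    if let Ok(home) = env::var("HOME") {
        return Ok(format!("{}/.wash/keys", home));
    }
    if let (Ok(drive), Ok(path)) = (env::var("HOMEDRIVE"), env::var("HOMEPATH")) {
        return Ok(format!("{}{}/.wash/keys", drive, path));
    }
    Err("could not find a home directory; please set $WASH_HOME manually".to_string())
}
```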

claims inspect should support OCI link

Just as wash par inspect supports an OCI URL for inspecting remote provider archive files, wash claims inspect should support an OCI URL for inspecting remote actor modules. In the par module, we solve this problem by attempting to open a file with that name, and if it does not exist then we use crate::reg::pull_artifact to load the archive into memory before inspecting its claims.
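The par-module fallback described above can be sketched like this; `pull_artifact_stub` is a hypothetical stand-in for crate::reg::pull_artifact, which actually fetches bytes from an OCI registry:

```rust
use std::fs;

// Hypothetical stand-in for crate::reg::pull_artifact.
fn pull_artifact_stub(_url: &str) -> Result<Vec<u8>, String> {
    Ok(Vec::new())
}

// Treat the argument as a local file first, and only fall back to an OCI
// pull when the file cannot be read, instead of unwrapping the read error.
fn load_module(path_or_url: &str) -> Result<Vec<u8>, String> {
    match fs::read(path_or_url) {
        Ok(bytes) => Ok(bytes),
        Err(_) => pull_artifact_stub(path_or_url),
    }
}
```

This turns the NotFound panic below into a registry pull for inputs like wascc.azurecr.io/kvcounter:v0.1.1.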

Current functionality:

➜ wash claims inspect wascc.azurecr.io/kvcounter:v0.1.1
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 2, kind: NotFound, message: "No such file or directory" }', src/claims.rs:595:39
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Panic in Arbiter thread.

Desired functionality:

➜ wash claims inspect wascc.azurecr.io/kvcounter:v0.1.1
╔════════════════════════════════════════════════════════════════════════╗
║                           KVCounter - Module                           ║
╠═════════════╦══════════════════════════════════════════════════════════╣
║ Account     ║ ACOJJN6WUP4ODD75XEBKKTCCUJJCY5ZKQ56XVKYK4BEJWGVAOOQHZMCW ║
╠═════════════╬══════════════════════════════════════════════════════════╣
║ Module      ║ MC6Z57M3HOUPIAMTZDMKFBJ5AZ3UOWZ4WJYO7DZNA6567XSJNBUQP55I ║
╠═════════════╬══════════════════════════════════════════════════════════╣
║ Expires     ║                                                    never ║
╠═════════════╬══════════════════════════════════════════════════════════╣
║ Can Be Used ║                                              immediately ║
╠═════════════╬══════════════════════════════════════════════════════════╣
║ Version     ║                                               v0.1.1 (1) ║
╠═════════════╩══════════════════════════════════════════════════════════╣
║                              Capabilities                              ║
╠════════════════════════════════════════════════════════════════════════╣
║ K/V Store                                                              ║
║ HTTP Server                                                            ║
╠════════════════════════════════════════════════════════════════════════╣
║                                  Tags                                  ║
╠════════════════════════════════════════════════════════════════════════╣
║ None                                                                   ║
╚════════════════════════════════════════════════════════════════════════╝

Capability provider logs are not captured by Tui_Logger

tui-logger has been fantastic for capturing wasmCloud logs and displaying them in an easy-to-understand, filterable format. However, log messages emitted from inside capability providers are not captured by tui-logger. Notably, this includes when actors utilize the wasmcloud:logging capability to create log messages.

This is likely caused by multiple interactions with the env_logger crate. The tui-logger is initialized early, and it then captures logs emitted from wash itself, including wasmCloud logs emitted from the embedded host.

When running the wasmCloud binary manually, e.g. in the wasmCloud repo:

RUST_LOG=info cargo run --release

Logs are automatically captured at the info level, and setting the environment variable RUST_LOG does capture capability provider logs. You can verify this by running:

RUST_LOG=info wash up
wash ctl start provider wasmcloud.azurecr.io/httpserver:0.10.0 # remove "wash" if in the REPL

With the environment variable set, you will see a log printed to the screen inline with the REPL input stating that the HTTPServer capability provider's dispatch has been configured. If you run the command without setting RUST_LOG, the message will not be captured at all.

In short, my hypothesis is that capability providers are treated as somewhat separate modules from the perspective of a Rust program, so the RUST_LOG environment variable enables capturing their logs, whereas initializing an env_logger only captures logs that directly result from a module running in the current program. The ideal solution is to find a way to capture all logs, especially capability provider logs, and display them in the tui log output.

Add support for automatic updates

wash should check for newer versions of itself whenever it starts and, if it finds one, automatically upgrade to that version. A corresponding wash update command can be made available to perform a manual check / verbose update.

I'm not sure if this will "play nice" with versions of wash that were installed via packagecloud etc., but the one thing we should never force our developers to do is figure out how to manually upgrade wash after they've been using it for a while. Once it's on their machine, it should always be the newest version.

Auto-generated keys to use a single account key for a user

The current behavior for auto-generated keys is to create a new account/service key for every provider added to a provider archive. I propose instead generating only a single account key per user, as this aligns better with typical usage of signing keys in waSCC. This would result in something like $HOME/.wash/keys/account.nk vs. $HOME/.wash/keys/libwascc_fs_account.nk (with a new one for each capability provider).

Get rid of waSCC mentions

waSCC is very "last week", the old way of doing things. A project-wide replacement of waSCC terms should be done for the wasmCloud 0.15.0 release.

Catch panics, errors and printlns during REPL execution

During REPL execution, a panic in a submodule can corrupt the UI of the repl (by constantly showing an error message across the screen). These panics should ideally be logged as errors to the log window and be handled, potentially via something like https://doc.rust-lang.org/edition-guide/rust-2018/error-handling-and-panics/controlling-panics-with-std-panic.html, so that REPL execution is not interrupted.

One common panic that has affected me in testing the REPL is binding the http_server capability provider on a port that is already in use. This is a common mistake and should be something we expect from REPL execution.
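A minimal sketch of that approach using std::panic::catch_unwind; `run_guarded` and the error string are illustrative, not wash's actual functions:

```rust
use std::panic;

// Run a REPL subcommand inside catch_unwind so that a panic in a submodule
// (e.g. binding an in-use port) becomes an error string for the log window
// instead of corrupting the TUI.
fn run_guarded<F>(cmd: F) -> String
where
    F: FnOnce() -> String + panic::UnwindSafe,
{
    match panic::catch_unwind(cmd) {
        Ok(output) => output,
        Err(_) => "ERROR: command panicked; see logs".to_string(),
    }
}
```

A custom panic hook (panic::set_hook) could additionally route the panic message itself into the tui log window rather than stderr.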

Move this crate under the wasmCloud repository

wash is a command line tool directly related to and used for wasmCloud, and in order to consolidate some of our repositories we should move wash under the crates/ directory there. GitHub Actions workflows will need to be modified, as well as README and Cargo.toml metadata elements that reference a repository location, to ensure the repository is still locatable from the crate. We will also need to change the GitHub Actions in this repository so that they are still usable in the main wasmcloud repository.

Last but not least, we should move all issues over, and merge the two git histories in order to preserve contributors and a consistent history.
