suborbital / e2core
Server for sandboxed third-party plugins, powered by WebAssembly
Home Page: https://suborbital.dev
License: Apache License 2.0
Atmo fails to build due to a missing library from wasmer-go/wasmer
which produces a linker error on M1 Macs.
Related: wasmerio/wasmer-go#167
Hello 👋
Are there any plans to support dynamically adding or updating a Runnable?
make docker/dev
docker build . -t suborbital/atmo:dev
[+] Building 9.3s (17/23)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 1.18kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 55B 0.0s
=> [internal] load metadata for docker.io/library/debian:buster-slim 2.1s
=> [internal] load metadata for docker.io/library/golang:1.17 1.8s
=> [auth] library/debian:pull token for registry-1.docker.io 0.0s
=> [auth] library/golang:pull token for registry-1.docker.io 0.0s
=> [stage-1 1/8] FROM docker.io/library/debian:buster-slim@sha256:47e092810f101be8 0.0s
=> [builder 1/8] FROM docker.io/library/golang:1.17@sha256:285cf0cb73ab995caee61b9 0.0s
=> [internal] load build context 5.4s
=> => transferring context: 85.70MB 5.4s
=> CACHED [stage-1 2/8] RUN groupadd -g 999 atmo && useradd -r -u 999 -g atmo 0.0s
=> CANCELED [stage-1 3/8] RUN apt-get update && apt-get install -y ca-certificate 7.0s
=> CACHED [builder 2/8] RUN mkdir -p /go/src/github.com/suborbital/atmo 0.0s
=> [builder 3/8] COPY . /go/src/github.com/suborbital/atmo/ 0.5s
=> [builder 4/8] WORKDIR /go/src/github.com/suborbital/atmo/ 0.0s
=> [builder 5/8] RUN mkdir -p /tmp/wasmerio 0.4s
=> [builder 6/8] RUN cp -R ./vendor/github.com/wasmerio/wasmer-go/wasmer/packaged/ 0.3s
=> ERROR [builder 7/8] RUN ./scripts/copy-libs.sh 0.3s
------
> [builder 7/8] RUN ./scripts/copy-libs.sh:
#17 0.326 using /tmp/wasmerio/linux-aarch64/libwasmer.so
#17 0.331 cp: cannot stat '/tmp/wasmerio/linux-aarch64/libwasmer.so': No such file or directory
------
executor failed running [/bin/sh -c ./scripts/copy-libs.sh]: exit code: 1
make: *** [docker/dev] Error 1
Atmo should be able to connect to a remote AppSource
(as defined in #41) that can be relied on as a source of things like a Directive, Wasm Runnables, and more.
This would allow Atmo to be deployed in a completely 'inert' state, and load its configuration from a remote source.
For example, Atmo could load individual Runnable modules on demand when in Headless mode, or it could be configured to use a different Directive based on the hostname of a request, loading on-demand as requests are handled.
Atmo could also use this remote AppSource as a means to deploy new bundle versions or rollback. This could be used to facilitate something akin to GitOps.
I envision a data source being configured by setting an ATMO_APP_SOURCE
variable with an HTTP or gRPC endpoint, and including some form of authentication to ensure the data source can be trusted (and vice versa).
When building Atmo on Linux, the Wasmer library isn't found:
$ make atmo
go build -o .bin/atmo ./main.go
# github.com/wasmerio/wasmer-go/wasmer
/usr/bin/ld: cannot find -lwasmer
collect2: error: ld returned 1 exit status
make: *** [Makefile:3: build] Error 2
Workaround:
Wasmer ships pre-built libraries in its repo. Running go get github.com/wasmerio/wasmer-go/wasmer followed by go mod vendor fixes the dependency issue.
Right here.
This might lead to heisenbugs due to Go's shadowing mechanism.
SCN is currently doing a nasty hack to avoid this!
When as is used to alias the output of the last step for a resource, Atmo returns nothing. For example, the following returns nothing:
handlers:
  - type: request
    resource: /hello
    method: POST
    steps:
      - fn: helloworld
        as: something
Without looking at the codebase yet, I'm assuming this is because Atmo implicitly assumes the final output matches the name of the last step. I think this should either be explicitly documented, or it should be surfaced as something you can modify. For example, an output field that would allow you to declare where to look for the final output. Hopefully I'm not misunderstanding how data is passed between runnables 🤞
The AppSource definition should be modified to replace Meta with Applications. Applications should return []Meta, an array of the available applications. Meta should be extended to include Domain to allow each application to exist in the same cluster with unique domains.
The other methods in AppSource should be modified to take in (identifier, version) to indicate which application the information is being requested for. For example, calling source.Capabilities("com.suborbital.appname", "v0.1.0") should return the capabilities for that specific application.
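A sketch of what the proposed interface could look like. Capabilities is the only method named above; its []string return type, the Meta fields beyond Domain, and the toy staticSource implementation are illustrative assumptions:

```go
package main

import "fmt"

// Meta describes one available application; Domain is the proposed addition.
type Meta struct {
	Identifier string
	AppVersion string
	Domain     string
}

// AppSource sketches the proposed multi-application interface:
// Applications lists what's available, and the per-app methods
// take (identifier, version) to pick one.
type AppSource interface {
	Applications() []Meta
	Capabilities(identifier, version string) []string
}

// staticSource is a toy in-memory implementation for illustration.
type staticSource struct {
	apps map[string][]string // keyed on "identifier@version"
}

func (s *staticSource) Applications() []Meta {
	return []Meta{{
		Identifier: "com.suborbital.appname",
		AppVersion: "v0.1.0",
		Domain:     "app.example.com",
	}}
}

func (s *staticSource) Capabilities(identifier, version string) []string {
	return s.apps[identifier+"@"+version]
}

func main() {
	var src AppSource = &staticSource{
		apps: map[string][]string{
			"com.suborbital.appname@v0.1.0": {"http", "logger"},
		},
	}
	fmt.Println(src.Capabilities("com.suborbital.appname", "v0.1.0"))
}
```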
👋 First and foremost, I love the solution you are all building.
Some questions I had while playing with atmo were:
Thanks!
In the httpsource code, the response body is only closed when the dest, one of the input parameters, is not nil, but the request that produces the response body gets called either way.
That can lead to situations where the request is sent, there's a response, but the body is not closed, leading to possible memory leaks.
Code is here: https://github.com/suborbital/atmo/blob/main/atmo/appsource/httpsource.go#L275-L276
In vkrouter, this should not happen. It gets confusing, and Go's shadowing rules will make it hard to understand which one is being used.
Atmo should expose a health endpoint that can be used to determine the health of Atmo. This can include whether or not a bundle has been successfully loaded and Atmo is available to serve requests.
Just wanted to know whether TinyGo or the standard Go compiler is supported?
https://atmo.suborbital.dev/getstarted/building-and-running
This section:
You can test the /hello route in a second terminal by sending a POST request with a body to it:
curl localhost:8080/hello -d 'from the Kármán line!'
I think it is a good idea to explicitly state that a successful test should return a 200 with the body echoed back to you, and that anything else needs to be looked at.
Atmo result messages are currently being sent across Grav in a few different ways. Messages should use the "atmo.fnresult" constant to specify the message type, and encode the rest of the Runnable information with the existing sequence.FnResult type used in Sat and Atmo (in some places).
We don't want to do this anymore:
e.Send(grav.NewMsgWithParentID(fmt.Sprintf("local/%s", jobType), ctx.RequestID(), nil))
In order to facilitate #40 , access to the bundle, Directive, and Runnables will need to be decoupled from the Coordinator. The coordinator cannot be the owner of those things, as it will make things like dynamic loading and unloading difficult.
I propose an AppSource interface that dynamically returns the Directive and Runnables to the Coordinator, rather than the current setup in which the Coordinator loads and holds everything.
Create a first-class non-Wasm sidecar app that connects to Atmo's Grav bus and allows calling single functions used to access resources that Wasm wouldn't be able to, like a db or something.
Essentially a pressure relief valve for Wasm's lack of capabilities.
Subo should be able to build it and perhaps bundle the binary into the bundle?
I've been tinkering with Atmo and some of the Suborbital modules (especially Reactr) for a few days, and I love what you are building. I have been toying with an idea similar to your SUFA design pattern, but as a way to enable global computation over content-addressed decentralized networks (i.e. IPFS).
The basic idea is that if every peer in a decentralized network includes a common runtime, and all functions and data are uniquely identified in the network, you can run anything, anywhere. The fact that content-addressed networks give a CDN-by-default capability would allow an IPFS-based Atmo to scale seamlessly as long as there is a peer with available resources to run your bundle. This would enable a global serverless infrastructure and a seamless developer experience (no more worrying about which cloud provider to choose).
The Atmo architecture could be adapted so that we have:
I did a quick proof-of-concept of this idea for a hackathon before I learned about Atmo (code here, and presentation here), just to realize that some of the modules and challenges a system like the one I envision would require have already been hacked on and tackled in Suborbital.
What are your thoughts on decentralizing Atmo? Would an Atmo over IPFS make sense? Would it be worth the effort? I would love to gather opinions 🙏
(I am also planning to actively contribute to the Suborbital ecosystem as far as my free time allows, in order to get familiar with the codebase and the technology. I would also love some insight into things that may be needed, so I can start getting my hands dirty.)
Blocked by suborbital/reactr#84
Once GraphQL support is available in Reactr, Atmo will need to provide a way to configure a connection to a GraphQL server, likely via a section in the Directive and/or env vars.
This is on the heels of making Atmo capable of serving multiple app sources.
Starting comment: #121 (comment)
Each application on a different domain should have its own router, because if both of them expose a /hello endpoint, one of them will be overwritten.
https://atmo.suborbital.dev/usage/connections
Currently, Atmo can connect to NATS and Redis, and upcoming releases will include additional types such as databases and more.
vs further down
SQL databases and caches can be connected to Atmo to be made available to your Runnables using the Runnable API:
...
SQL database connections of type mysql and postgresql are available, and they are discussed in detail in the next section.
I think we can remove the "databases are coming" part from the top.
I'd like to be able to build runnables that handle HTTP requests and then output messages to a Kafka stream. These functions would be composable in the Directive.yaml file just like other runnables. We would use these to build relay functions for event-driven FaaS.
This is for when you have another app that needs to run on the same port - if Atmo's router can't handle a request, proxy it to the fallback server.
Great for having a Next.js app and an API on the same server and port.
Atmo should automatically create OpenTelemetry spans for requests, including wiring this into Reactr somehow to allow viewing execution patterns of particular Runnables as part of the request chain.
An evolution to the Directive format involves the ability to define your application with a programming language rather than YAML.
This process would still result in a declarative format, but that format could be hidden from the developer by generating it under the hood at build time rather than having them write it by hand.
This would allow us to, for example, have strong types, better code editor integration, etc.
The choices that would need to be made include:
https://github.com/suborbital/atmo/blob/main/atmo/appsource/httpsource.go#L266
Generally, all requests should have a timeout set on them. If there isn't one, like in this case, a request could be pending indefinitely, holding up execution of everything.
Servers should not be trusted to always be available or behave in expected ways, such as always returning within x amount of time.
The "Thank you for subscribing" email response to confirming a newsletter subscription includes:
You will receive updates straight to your inbox, but you can also check out the backlog for past and future issues.
The tracking link on "backlog" redirects to https://subscribe.suborbital.dev/ which does not resolve.
(Sorry, I couldn't find your website repository.)
Create testing infrastructure for Atmo.
We want automated tests.
Unit tests exist for the coordinator and sequence runner but no integration tests.
This can lead to heisenbugs due to shadowing.
Hello,
When using subo dev, if my Redis database is running on localhost (my laptop), how can I set up the connection in Directive.yaml?
We should be able to open a bundle, cache its contents to a temp location on disk, and create lazy-loading Wasm instances rather than reading all the binaries into memory on startup.
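The lazy-instantiation part could be sketched with sync.Once, deferring the expensive load until first use. The lazyModule type and its loader func are illustrative stand-ins for reading cached bundle contents from disk:

```go
package main

import (
	"fmt"
	"sync"
)

// lazyModule defers the (expensive) instantiation of a Wasm module
// until first use. The load func stands in for reading the binary
// from the bundle's cached location on disk and instantiating it.
type lazyModule struct {
	once sync.Once
	load func() []byte
	data []byte
}

// Bytes loads the module exactly once, on first access, and is
// safe to call from concurrent request handlers.
func (m *lazyModule) Bytes() []byte {
	m.once.Do(func() { m.data = m.load() })
	return m.data
}

func main() {
	loads := 0
	mod := &lazyModule{load: func() []byte {
		loads++ // in reality: read and instantiate the cached .wasm file
		return []byte{0x00, 0x61, 0x73, 0x6d} // "\0asm" magic header
	}}

	mod.Bytes()
	mod.Bytes()
	fmt.Println("load calls:", loads)
}
```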
Hello,
I'm trying to deploy an Atmo application to fly.io (a CaaS service)
I set ATMO_DOMAIN=helloatmo-manual.fly.dev (to run with HTTPS) when deploying on fly.io, and I get this in the logs:
{"log_message":"(I) configured for HTTPS using domain helloatmo-manual.fly.dev","timestamp":"2021-11-06T05:34:00.075867912Z","level":3,"app":{"atmo_version":"0.3.2"}}
{"log_message":"(I) serving TLS challenges on :8080","timestamp":"2021-11-06T05:34:00.076123702Z","level":3,"app":{"atmo_version":"0.3.2"}}
{"log_message":"(I) loaded bundle from ./runnables.wasm.zip","timestamp":"2021-11-06T05:34:00.11194338Z","level":3,"app":{"atmo_version":"0.3.2"}}
{"log_message":"(I) loaded bundle from ./runnables.wasm.zip","timestamp":"2021-11-06T05:34:00.14019717Z","level":3,"app":{"atmo_version":"0.3.2"}}
{"log_message":"(I) starting Atmo ...","timestamp":"2021-11-06T05:34:00.140434765Z","level":3,"app":{"atmo_version":"0.3.2"}}
{"log_message":"(I) serving on :443","timestamp":"2021-11-06T05:34:00.140474189Z","level":3,"app":{"atmo_version":"0.3.2"}}
Error: failed to server.Start: listen tcp :443: bind: permission denied
Reading the fly.io documentation about host checking (https://fly.io/docs/getting-started/troubleshooting/#host-checking), I wondered whether you bind to 0.0.0.0 or localhost?
Remark: I already deployed the same application on Civo successfully, but without HTTPS.
This page: https://atmo.suborbital.dev/getstarted should include all the steps to get your first server up and running before dropping into the concepts etc.
It should include subo build . and subo dev, along with explanations of what each of them does.
https://github.com/stretchr/testify#suite-package provides a really nice framework for building test suites.
I ran into the init-in-tests problem while trying to figure out why the Atmo tests were failing, specifically this file: https://github.com/suborbital/atmo/blob/577ef538e9316e8cb502907467024ba99501c06d/atmo/coordinator/coordinator_test.go#L19-L30, where the global coord variable is being set in an init function.
Thanos.io has a heavy-handed approach to no globals: https://thanos.io/tip/contributing/coding-style-guide.md/#avoid-globals
In the Ardan training (notebook and Ultimate Service) there's a good argument for not using globals unless ALL three of the following are true:
The existing HTTPClient AppSource will reach out to the origin AppSource for every request. We should create a version (or an option for the existing version) that adds a cache.
This cache should be keyed on the identifier and version for an application because, in theory, data should not change unless the application version is bumped.
This would allow an AppSource server to act as a proxy, keeping relevant local data cached while deferring to a central server for authoritative data.
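A sketch of such a cache keyed on (identifier, version). Entries never expire because, per the assumption above, data is immutable for a given version. The sourceCache type and its fetch callback are hypothetical:

```go
package main

import (
	"fmt"
	"sync"
)

// sourceCache memoizes AppSource responses keyed on (identifier, version).
// Since app data is assumed not to change unless the version is bumped,
// cached entries never need invalidation.
type sourceCache struct {
	mu      sync.Mutex
	entries map[string][]byte
	fetch   func(identifier, version string) []byte // the origin call
	misses  int
}

// Get returns the cached value, reaching out to the origin only once
// per (identifier, version) pair.
func (c *sourceCache) Get(identifier, version string) []byte {
	key := identifier + "@" + version

	c.mu.Lock()
	defer c.mu.Unlock()

	if v, ok := c.entries[key]; ok {
		return v
	}

	c.misses++
	v := c.fetch(identifier, version)
	c.entries[key] = v
	return v
}

func main() {
	c := &sourceCache{
		entries: map[string][]byte{},
		fetch:   func(id, ver string) []byte { return []byte(id + ":" + ver) },
	}

	c.Get("com.suborbital.appname", "v0.1.0")
	c.Get("com.suborbital.appname", "v0.1.0") // served from cache
	fmt.Println("origin calls:", c.misses)
}
```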
Atmo currently has a request type handler. This issue proposes a new stream handler.
This would create an endpoint that accepts a webhook connection. This would behave as a webhook normally would, accepting sequential messages. Atmo would handle each message by executing the sequence described in the handler. This would allow for streaming connections to Atmo.
This will involve some work in the Directive as well as work to generate these handlers when a Bundle is being mounted, similarly to how HTTP handlers are generated today.
The serialization and deserialization of CoordinatedRequests should be evaluated closely. This will become a performance issue, and serialization should be avoided at all costs so long as a request's execution stays within the same instance. This could be related to suborbital/reactr#81 or whatever comes of that effort, as it will hopefully allow for strongly typed execution, even with Wasm Runnables.
I think the ideal here is that in the Directive you'd be able to have a file step (in addition to the existing fn and group) that lazy-loads a file. That way you could have a handler that looks like this:
handlers:
  - type: request
    resource: /file/:something
    steps:
      - file: /var/html/:something
And if you DID have fn types afterwards, they could access the file contents using the state API, and the file would be read at the time of use.
The time of use could either be when accessed by a Runnable or when the handler returns and the file step is the thing being returned.
When Atmo doesn't have a handler registered for the FQFN, it asks the Coordinator whether it knows of anything that could handle the request.
When that says no, we currently log an error and return here: https://github.com/suborbital/atmo/blob/main/atmo/atmo.go#L105
That results in the same unhandleable request being answered multiple times with the exact same "nope" message.
If instead we assigned a noop handler, it would return an error message to the client.
The main decision we need to make is whether to classify a request that Atmo can't fulfill as a user error (in which case a 404 is better) or a server issue (in which case a 501 is better).
Atmo should be able to start up in a 'headless' mode wherein it does not build a router based on routes in the Directive, but rather makes each Runnable in a bundle available as an individual endpoint.
This would essentially be a 'FaaS' mode that wouldn't require routes to be explicitly defined.
I envision this being activated with an ATMO_HEADLESS env var.
code: https://github.com/suborbital/atmo/blob/main/atmo/coordinator/coordinator.go#L148
Not sure whether the database connections would need any sort of special treatment, but if they don't, at least a blurb in the code comment would be nice.
Research should be put towards creating a BuildPack for Atmo applications. I am currently fairly unfamiliar with this process, and as such it may not belong in this repo, but this issue can be used to track progress across other repos such as Subo if needed.
Buildpacks open up many different deployment avenues such as Heroku, for example.
Allow Atmo to listen to GitHub webhooks and re-deploy itself (and maybe even build the project?)
It would be handy if the port could be defined via a flag in addition to the ATMO_HTTP_PORT env variable.
When I was deploying an application I wrote with Atmo to Heroku, I had a lot of trouble with this. Heroku expects you to use an arbitrary port found in the env variable PORT. What I wanted to do was something like atmo --port $PORT. In the end I had to write a little bash script that exported the PORT variable to ATMO_HTTP_PORT.
Currently, if a Runnable returns an error, that status code gets passed all the way through to Atmo.
Atmo's HTTP response codes should reflect the state of Atmo, not the state of the Runnable.
The interface shouldn't care about authorization headers at all, that is an implementation level detail.
Currently there are three implementations of the AppSource interface:
Currently there's no straightforward way of removing the auth argument from the call, because the AppSourceVKRouter takes the auth header from the request and passes it on. We need to rethink how this is done, which is probably a bigger task.
Add test cases to find out whether there are any bugs in the site, as this improves the application's architecture and maintainability.
Add various test cases covering several components.