federicoponzi / horust
Horust is a supervisor / init system written in Rust and designed to run inside containers.
Home Page: https://federicoponzi.github.io/Horust/
License: MIT License
Add a Horust parameter to force death if any service is incorrect.
This is useful if you have a container with Horust and a bunch of services, and you want to bring everything down if Horust cannot spin up some process.
Fail fast and loudly.
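A minimal sketch of what this could look like as a global setting; the key name is purely hypothetical and does not exist in Horust today:

```toml
# Hypothetical global setting: exit with a non-zero status (and thus take
# the container down) if any service fails to spawn.
die-on-invalid-service = true
```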
MacBook running Catalina 10.15.5, rustc 1.44.1 (c7087fe00 2020-06-17)
Compiling fails, giving:
Compiling horust v0.1.1 (/Users/t0mgs/IdeaProjects/Horust)
error[E0432]: unresolved imports `libc::prctl`, `libc::PR_SET_CHILD_SUBREAPER`
src/horust/mod.rs:14:12
|
14 | use libc::{prctl, PR_SET_CHILD_SUBREAPER};
| ^^^^^ ^^^^^^^^^^^^^^^^^^^^^^ no `PR_SET_CHILD_SUBREAPER` in the root
| |
| no `prctl` in the root
error[E0425]: cannot find function `execvpe` in module `nix::unistd`
--> src/horust/runtime/process_spawner.rs:149:18
|
149 | nix::unistd::execvpe(program_name.as_ref(), arg_cptr.as_ref(), env_cptr.as_ref())?;
| ^^^^^^^ help: a function with a similar name exists: `execve`
|
::: /Users/t0mgs/.cargo/registry/src/github.com-1ecc6299db9ec823/nix-0.16.1/src/unistd.rs:734:1
|
734 | pub fn execve(path: &CStr, args: &[&CStr], env: &[&CStr]) -> Result<Void> {
| ------------------------------------------------------------------------- similarly named function `execve` defined here
error: aborting due to 2 previous errors
Some errors have detailed explanations: E0425, E0432.
For more information about an error, try `rustc --explain E0425`.
error: could not compile `horust`.
Haven't actually figured out what these deps are, just wanted to persist this if anyone else has the same issue.
@FedericoPonzi gettin' late round these parts, if you can take a look that would be great.
I am using this awesome program in my project to monitor two binaries. I would like to be able to give Docker a healthcheck for Horust.
I use Horust in https://github.com/sudo-bot/docker-openldap
Just as the definition of the option states, it means to start after these other services; what people expect here is the service name. Therefore, this option should be filled with the service name, not the configuration filename.
For example, if we have two configuration files, another.toml and second.toml, we can use: start-after = ["another", "second"].
Something similar to: https://github.com/just-containers/s6-overlay#fixing-ownership--permissions
Usually when you start a docker container, you want to fix permissions on the volumes (so needs to happen at runtime and not build time). Without this feature, you would need to create a service which runs and won't be restarted, which is a bit ugly because services are long running processes and one shot services are not suitable for being supervised.
The idea is to provide a format similar to s6-overlay's.
This is suitable if you want to avoid spawning another process, and thus allows faster startups.
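A purely hypothetical sketch of what such a section could look like, loosely modeled on s6-overlay's ownership fixing; none of these keys exist in Horust today:

```toml
# Hypothetical section: fix ownership/permissions on a volume at startup,
# without spawning a one-shot service. Key names are invented for illustration.
[fix-attrs]
path = "/var/lib/myapp"
user = "appuser"
group = "appgroup"
recursive = true
```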
This issue is for fixing this todo: https://github.com/FedericoPonzi/Horust/blob/master/src/horust/healthcheck/mod.rs#L48
Right now, if more than 2 healthchecks in a row fail, the service will be killed. This might be too aggressive for some programs, and not aggressive enough for others.
It would be nice to have a parameter in the healthiness section of the config for setting the maximum number of failed checks in a row before the service is killed.
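A sketch of what this could look like; the whole snippet is illustrative and `max-failed` in particular is a hypothetical parameter name:

```toml
[healthiness]
http-endpoint = "http://localhost:8080/health"
# Hypothetical: kill the service only after 5 consecutive failed checks.
max-failed = 5
```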
Consider the output here, under the cargo fmt --check stage:
/usr/share/rust/.cargo/bin/cargo fmt --all -- --check
Warning: can't set `indent_style = Block`, unstable features are only available in nightly channel.
Warning: can't set `wrap_comments = false`, unstable features are only available in nightly channel.
Warning: can't set `format_code_in_doc_comments = false`, unstable features are only available in nightly channel.
Warning: can't set `comment_width = 80`, unstable features are only available in nightly channel.
Warning: can't set `normalize_comments = false`, unstable features are only available in nightly channel.
Warning: can't set `normalize_doc_attributes = false`, unstable features are only available in nightly channel.
Warning: can't set `license_template_path = ""`, unstable features are only available in nightly channel.
Warning: can't set `format_strings = false`, unstable features are only available in nightly channel.
Warning: can't set `format_macro_matchers = false`, unstable features are only available in nightly channel.
Warning: can't set `format_macro_bodies = true`, unstable features are only available in nightly channel.
Warning: can't set `empty_item_single_line = true`, unstable features are only available in nightly channel.
...
This looks similar to rust-lang/rustfmt#2227, needs a bit more investigation.
src/dummy.rs, which was added in #41, is removed.
Horust currently uses "/" as the working directory for services that have no working-directory specified. It would be more useful if the working directory defaulted to the working directory of the Horust process. That way, when not running in a container, it would be easier to reference e.g. configuration files passed to services as relative paths (i.e. `command = "foo -c foo.conf"`). Before coming up with a PR, I would like to know whether it was a conscious decision to default to "/".
It would be nice to let users use custom commands when doing healthchecks. A command returning 0 would indicate a healthy service, unhealthy otherwise.
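A sketch of a possible shape for this; the `command` key under `[healthiness]` is hypothetical:

```toml
[healthiness]
# Hypothetical: run this command on every check interval;
# exit code 0 means healthy, anything else unhealthy.
command = "pg_isready -h localhost"
```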
I pulled the latest code from master and built it.
Here is the configuration I used:
command = "top -b"
working-directory = "/tmp/"
start-delay = "0s"
[restart]
strategy = "always"
Then I ran horust and killed the top process, waited... but I found that top does not restart.
I was trying to shut down Apache2 gracefully by sending it a "SIGWINCH" (as for whatever reason that's the signal they decided on for that), but I'm getting the following error when setting signal = "WINCH"
as it's not part of the list of allowed signals:
Failed loading toml file: /etc/horust/services/apache2.toml
Caused by:
unknown variant `WINCH`, expected one of `TERM`, `HUP`, `INT`, `QUIT`, `USR1`, `USR2` for key `termination.signal` at line 10 column 1
Maybe it would be possible to also allow an integer for the signal
option, or to add all available signals there?
uname -a: Linux pop-os 5.13.0-7614-generic #14~1631647151~20.04~930e87c-Ubuntu SMP Fri Sep 17 00:26:31 UTC x86_64 x86_64 x86_64 GNU/Linux
Hey there! I'm trying to use horust as an init system for an experimental tiny operating system and I tried to build against the musl C library to keep things small and dependency-less, but I'm running into a segfault when starting the program. The segfault happens immediately with no extra info:
➜ horust
fish: “horust” terminated by signal SIGSEGV (Address boundary error)
If you have service-a.toml, and service-b.toml which both start-after db.toml, and db.toml specifies as failure strategy "kill-dependencies", both service-a and service-b will be killed.
This will kill both, despite the fact that service-b might be able to survive and keep working by relying on a cache or something.
This issue proposes adding a new "die-if-failed" parameter under the termination section.
My first thought was to add a depends parameter, but that is too generic, given that start-after already covers the ordering aspect.
As a side note, I don't like the naming: it sort of implies the existence of die-if-* variants. But I don't see the use case for dying after other statuses for now. If there is some demand, we can think of having a generic die parameter with an array of service_name: status pairs.
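A sketch of the proposed parameter, hypothetical as described above:

```toml
# service-b.toml
start-after = ["db"]

[termination]
# Hypothetical: kill this service if "db" ends up in the Failed state.
die-if-failed = ["db"]
```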
Check the comment here: #16 (comment)
Every time there is a merge to master, and the tests are passing, the latest version should be pushed to Docker Hub under some :master tag.
During the run of the chained stress test, I've found a funny bug.
After the fork(), using strace I found the process stuck on a futex() syscall, and it wasn't even printing the debug line.
A futex is a thread-level locking mechanism, and according to this:
In POSIX when a multithreaded process forks, the child process looks exactly like a copy of the parent, but in which all the threads stopped dead in their tracks and disappeared.
and:
This is very bad if the threads are holding locks.
Since stdout access is synchronized across threads, we should register fork handlers using the pthread_atfork call.
Avoiding prints between fork and exec should be enough for now, but we should use pthread_atfork to be sure that the code is safe.
In single command mode, horust is run like this:
cargo run -- -- /bin/bash
If started without Horust, bash handles SIGINT signals (CTRL^C). If run via horust, and we do CTRL^C, horust will intercept it and send a SIGTERM to bash which will then stop.
When running it via single command mode, horust should just proxy the signals and reap children processes.
As we spoke before, Gitter might be a great idea here. WDYT?
As a normal User,
As a root User,
Any connections to an occultist society?
Am I the only person who worries when Egyptian mythological deities get involved in a Linux init system?
What about allusions to sects like the Illuminati and the Eye of Horus, who want to enslave the world with Bill Gates, Klaus Schwab and Elon Musk?
Similar to how services are started in a specific order (using 'start-after'), would it be possible to shutdown services in a specific order as well? Maybe even just in the reverse order that the services were started?
My use case is that I run a VPN (Tailscale), logs collector (Grafana Agent) and an API in the same container. When a shutdown occurs, ideally the API would shutdown first, then the logs collector, then the VPN. But currently, since the signal is sent to all services at once, the order is fairly random.
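One possible shape, reusing the existing start ordering in reverse for shutdown; nothing here is implemented, the idea is just that the start-after graph already encodes the desired stop order:

```toml
# api.toml: starts after the logs collector, which starts after the VPN.
# On shutdown, Horust could walk this graph in reverse:
# stop api first, then grafana-agent, then tailscale.
start-after = ["grafana-agent"]
```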
Adds some tests for the HTTP-based healthchecks.
In Rust, when we try to allocate memory, the behaviour of the default allocator in case of error is to panic. In order to make Horust more resilient to those edge cases, it would be nice to get rid of all the heap allocations.
I'm not sure how to tackle this; some research will need to be done to figure out an approach.
It is common in other CLIs similar to Horust to gracefully stop all services on the first CTRL+C, but to terminate immediately on subsequent signals.
I'd expect similar behavior in Horust, especially for local-dev environments where a graceful stop is not always necessary.
Of course, I could just prepare another service configuration for that purpose, but I think this is a justified case to implement this enhancement.
I'm willing to dig into signal handling in Horust and prepare PR if you accept it.
It would be nice to have another section for allowing per-service resource limits. This is just a draft and would need some more thoughts on it.
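A purely hypothetical sketch of such a section; the key names are invented for illustration:

```toml
# Hypothetical per-service resource limits; none of these keys exist today.
[resource-limits]
memory = "256M"
cpu = 0.5
max-open-files = 1024
```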
Simple feature, but I don't know how to approach it from a Rust standpoint.
Technically, it would be awesome to create the default config file if it's missing. This also entails creating default directories. It could be done on setup, but since Horust is distributed via cargo, cargo install just copies the executable to the bin location.
Right now, without it, Horust just bails out with ENOENT.
This can be done either on startup or at installation.
I'm using Horust to start Apache2, which is by itself a very quiet service: even with the "debug" log level, it doesn't show when it has started up. It would be great if there was a -v flag for Horust that would cause it to output messages like the following:
Started service "apache2"
Service "apache2" is now healthy
Service "php-fpm" failed with strategy "shutdown", stopping all services
Stopped service apache2
This would make the log output way more useful, since information that the service processes themselves can't know would also be logged.
I currently have a rust program that starts another via Popen. I need to do better than that, like restart on crash with backoff, et cetera. Not difficult, but I'm sure there are edge cases I'd miss if I reinvented the wheel. Horust looks like it's solved a lot of this stuff, so it'd be nice to use it.
After a 30-second glance at docs.rs and the code, the sort of thing I'd be looking for would be a way to instantiate Horust with a Service or a list of Services rather than a service file, and an interface to restart and/or reload a service.
Is this the sort of thing that'd be easy for me to add to Horust, or should I look elsewhere?
keep-env = bool: default: true. Pass over all the environment variables.
https://federicoponzi.github.io/Horust/
The default appears to be keep-env = false. The environment variables were not passed to the command until I added:
[environment]
keep-env = true
/etc/horust/services/azure-functions-host.toml
command = "dotnet /azure-functions-host/Microsoft.Azure.WebJobs.Script.WebHost.dll"
[restart]
strategy = "on-failure"
backoff = "5s"
attempts = 10
[failure]
strategy = "shutdown"
Problem:
The service file uses a specific user; let's say UserA.
For root, this service file is OK, since root can setuid() to any other user easily without the user's assistance. So running a user's stuff as root should never be a problem.
However, running the service as any other user, let's say UserB, will fail with a very weird error code. So, let's fix that.
Proposition:
Let's introduce a precheck stage, where each loaded file is checked for logic issues (since serde already handles syntax). In that stage we would check whether the specified directories/files exist, whether the user can setuid to the specified user, whether the specified program is runnable and has correct permissions, etc.
That would also allow us to implement --create-if-absent (this switch's name is, of course, a placeholder) to create directories, for example for logs.
Currently, there are two ways to run something:
- <command>: run a single command
- --services-path: run all services from a path

But it's missing something like --services-file or --service-configuration to run specified services only. My use case: I build a "self-contained" image with frontend, backend and auxiliary services. On the test system all services should run, but for production only the backend and auxiliary services are relevant. I can work around it by creating multiple "services-path" directories, but it would be nice to just specify multiple service configurations.
Or, probably better, --services-path could accept wildcards like:
- --services-path /dir/*: all services inside /dir. Maybe also the same as --services-path /dir, to maintain compatibility
- --services-path /dir/**: all services inside /dir and its children
- --services-path /dir/backend.toml: only the backend service

WDYT?
First noticed the bug on my modified branch (see PR-draft #57); however, the same bug occurs on a fresh master and when my friend installed horust via cargo install.
Scenario:
Then Horust will terminate all already-running services, but waits forever for this late service, which never changes its state from Starting to Started.
It means that in the main loop Horust checks whether all services are Finished or Failed, but this one service will just wait forever in the Starting state. I guess for some reason it never receives the SpawnFailed event?
The quickest way to test it is to add start-delay = "2s" in service.toml.
I added some eprintln! calls and tried to debug it:
Finished dev [unoptimized + debuginfo] target(s) in 0.19s
[2021-04-29T07:51:21Z INFO horust] Loading services from directories:
* ./core
* ./extra
[2021-04-29T07:51:21Z INFO horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:21Z INFO horust::horust::supervisor] Applying events... [
Run("1.toml"), Run("2.toml"), Run("3.toml"), Run("4.toml"), Run("5.toml"), Run("6.toml"), Run("7.toml"), Run("8.toml"), Run("9.toml"), Run("10.toml")
]
[2021-04-29T07:51:22Z INFO horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:22Z INFO horust::horust::supervisor] Applying events... []
^C[2021-04-29T07:51:22Z INFO horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:22Z WARN horust::horust::supervisor] 1. SIGTERM received
SPAWN_FOR_EXEC_HANDLER_LOOP BROKEN: SpawnFailed("6.toml")
[2021-04-29T07:51:22Z INFO horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:22Z WARN horust::horust::supervisor] 1. SIGTERM received
SPAWN_FOR_EXEC_HANDLER_LOOP BROKEN: SpawnFailed("7.toml")
[2021-04-29T07:51:23Z INFO horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:23Z WARN horust::horust::supervisor] 1. SIGTERM received
SPAWN_FOR_EXEC_HANDLER_LOOP BROKEN: SpawnFailed("8.toml")
[2021-04-29T07:51:23Z INFO horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:23Z WARN horust::horust::supervisor] 1. SIGTERM received
SPAWN_FOR_EXEC_HANDLER_LOOP BROKEN: SpawnFailed("4.toml")
SPAWN_FOR_EXEC_HANDLER_LOOP BROKEN: PidChanged("2.toml", Pid(66623))
SPAWN_FOR_EXEC_HANDLER_LOOP BROKEN: PidChanged("5.toml", Pid(66624))
SPAWN_FOR_EXEC_HANDLER_LOOP BROKEN: PidChanged("1.toml", Pid(66625))
SPAWN_FOR_EXEC_HANDLER_LOOP BROKEN: PidChanged("10.toml", Pid(66626))
SPAWN_FOR_EXEC_HANDLER_LOOP BROKEN: PidChanged("3.toml", Pid(66627))
SPAWN_FOR_EXEC_HANDLER_LOOP BROKEN: PidChanged("9.toml", Pid(66628))
[2021-04-29T07:51:23Z INFO horust::horust::supervisor] Applying events... [
PidChanged("2.toml", Pid(66623)),
PidChanged("5.toml", Pid(66624)),
PidChanged("1.toml", Pid(66625)),
PidChanged("10.toml", Pid(66626)),
PidChanged("3.toml", Pid(66627)),
PidChanged("9.toml", Pid(66628))
]
[2021-04-29T07:51:23Z WARN horust::horust::supervisor] 1. SIGTERM received
PID CHANGED: "2.toml" - 66623
PID CHANGED: "5.toml" - 66624
PID CHANGED: "1.toml" - 66625
PID CHANGED: "10.toml" - 66626
PID CHANGED: "3.toml" - 66627
PID CHANGED: "9.toml" - 66628
[2021-04-29T07:51:24Z INFO horust::horust::supervisor] Applying events... [
ShuttingDownInitiated(Gracefuly),
StatusChanged("2.toml", Started),
StatusChanged("5.toml", Started),
StatusChanged("1.toml", Started),
StatusChanged("10.toml", Started),
StatusChanged("3.toml", Started),
StatusChanged("9.toml", Started)
]
[2021-04-29T07:51:24Z WARN horust::horust::supervisor] 1. SIGTERM received
[2021-04-29T07:51:24Z WARN horust::horust::supervisor] Gracefully stopping...
[2021-04-29T07:51:24Z INFO horust::horust::supervisor] Applying events... [
ShuttingDownInitiated(Gracefuly),
StatusUpdate("1.toml", InKilling),
Kill("1.toml"),
StatusUpdate("2.toml", InKilling),
Kill("2.toml"),
StatusUpdate("3.toml", InKilling),
Kill("3.toml"),
StatusUpdate("5.toml", InKilling),
Kill("5.toml"),
StatusUpdate("11.toml", Finished),
StatusUpdate("9.toml", InKilling),
Kill("9.toml"),
StatusUpdate("10.toml", InKilling),
Kill("10.toml")
]
[2021-04-29T07:51:24Z WARN horust::horust::supervisor] Gracefully stopping...
[2021-04-29T07:51:24Z INFO horust::horust::supervisor] Applying events... [
StatusChanged("1.toml", InKilling),
StatusChanged("2.toml", InKilling),
StatusChanged("3.toml", InKilling),
StatusChanged("5.toml", InKilling),
StatusChanged("11.toml", Finished),
StatusChanged("9.toml", InKilling),
StatusChanged("10.toml", InKilling)
]
[2021-04-29T07:51:25Z INFO horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:25Z INFO horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:25Z INFO horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:25Z INFO horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:26Z INFO horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:26Z INFO horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:26Z INFO horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:27Z INFO horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:27Z INFO horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:27Z INFO horust::horust::supervisor] Applying events... [
ForceKill("1.toml"),
ForceKill("2.toml"),
ForceKill("3.toml"),
ForceKill("5.toml"),
ForceKill("9.toml"),
ForceKill("10.toml")
]
[2021-04-29T07:51:28Z INFO horust::horust::supervisor] Applying events... [
StatusChanged("1.toml", Failed),
StatusChanged("2.toml", Failed),
StatusChanged("3.toml", Failed),
StatusChanged("5.toml", Failed),
StatusChanged("9.toml", Failed),
StatusChanged("10.toml", Failed),
StatusUpdate("1.toml", FinishedFailed),
StatusUpdate("2.toml", FinishedFailed),
StatusUpdate("3.toml", FinishedFailed),
StatusUpdate("5.toml", FinishedFailed),
StatusUpdate("9.toml", FinishedFailed),
StatusUpdate("10.toml", FinishedFailed)
]
[2021-04-29T07:51:28Z INFO horust::horust::supervisor] Applying events... [
StatusChanged("1.toml", FinishedFailed),
StatusChanged("2.toml", FinishedFailed),
StatusChanged("3.toml", FinishedFailed),
StatusChanged("5.toml", FinishedFailed),
StatusChanged("9.toml", FinishedFailed),
StatusChanged("10.toml", FinishedFailed)
]
[2021-04-29T07:51:28Z INFO horust::horust::supervisor] Applying events... []
[2021-04-29T07:51:28Z INFO horust::horust::supervisor] Applying events... []
At this point there are no new events forever (loop with empty Applying events... []).
My modifications (on top of the main branch):
- SPAWN_FOR_EXEC_HANDLER_LOOP eprintln! in process_spawner, just after the loop ends (before bus.send_event(ev)). Thanks to that, we can see that the SpawnFailed event is created!
- eprintln! in supervisor/mod.rs::handle_event() to see if we ever process SpawnFailed or PidChanged. One is processed, the other isn't.
- Raised the "Applying events" log line to info -> we can see that the SpawnFailed event is never received, while PidChanged is received. Why?

Extra conclusion:
What's interesting: one service ("11.toml") had start-delay = 10s instead of 2s, and it had start-after = ["8.toml"].
So Horust was correctly waiting until 8.toml went up, but 8.toml hung in Starting (its SpawnFailed was never processed); therefore 11.toml was still in the Initial state, so service_handler.rs correctly changed its status to Finished.
The only way to kill Horust then is killall -9 horust.
Okay, as a user I would like to be able to use variables inside service.toml files.
From my understanding, there is no possibility to do that out of the box.
The setup I have depends on various things, e.g. where the user keeps their repo, what the user's username is, etc.
imagine following service file:
....
command = "../../repos/service/target/debug/service"
user = "esavier"
....
I would love to be able to modify it to look like this:
....
command = "${WORKSPACE}/target/${TARGET}/service"
user = "${USER}"
....
Of course, this is the variable format used by bash; it's only here to provide an example and some background.
The motivation is flexibility: right now I have to go through some steps to ensure that the path exists (like symlinking, bleh).
I'm not sure how much this is used in the wild, but it looks like something interesting to program.
There is an issue with upstream serde 1.0.119 (it works with serde = { version = "=1.0.118", features = ["derive"] }).
Horust is using serde::export, and this module was renamed in serde 1.0.119.
problematic files:
src/horust/formats/service.rs
src/lib.rs
I would like to use Horust as a development tool. That means, for example, attaching a bunch of Horust configs inside the repo and, instead of meddling with Docker, using Horust to start and manage several applications that work in tandem.
bin/service1 bin/service2 cfg/service1.toml cfg/service2.toml
horust --here would start a (hopefully non-daemonized) Horust instance that runs the application configured by the cfg/*.toml configs. Console output would be the Horust log, while application outputs would follow the rules inside the configs.
The use case would be, for example, testing a bunch of services or applications that have to work together.
The part of Horust that manages the applications and keeps them under supervision would be great for that purpose.
When I run my services under Horust supervision, I often split them under different categories. For example, "Services that are core for my stack" or "Extra services that are optional".
These services often need to run in parallel. Both "Extra" ones and "Core", because one depends on another. I don't want to run ALL services by providing a root directory (for example, "Another Extra service pack" I don't want to run in one scenario).
For now, I was just using two different instances of Horust to do the job. One for core and another for dependent services.
But do I have to?
I could (I guess; I have never tried, to be honest) use symlinks and link core services to dependent services.
Or I could extend Horust to accept more than one directory path, merge fetched services into one vector and pass it to validation, and then create one single instance of Horust to rule them all :)
docker-compose can read multiple docker-compose.yml files by accepting multiple -f flags: -f docker-compose-1.yml -f docker-compose-2.yml, etc.
I think (thanks to StructOpt) it is trivial to implement such behavior in Horust.
It wouldn't be a breaking change (Horust would still accept a single argument), and it would make my life much easier.
Example:
horust --service-path ./services/core --service-path ./services/extra
This is basically stopsignal and stopwait from supervisord:
[termination]
signal = "TERM"
wait = "10s"
The strategy is already there, but backoff and attempts are still missing:
- strategy = always|on-failure|never: defines the restart strategy.
- backoff = string: wait this long before retrying to restart the service.
- attempts = number: how many attempts before considering the service as Failed.

Reload services and configuration via SIGHUP (or another signal, because as of now Horust can be run in the foreground).
@FedericoPonzi While the general concept of an init system is clear to most everyone who's ever dealt with a *NIX system, I'm not exactly sure whether a container init system is something most people spend time thinking about.
To be clearer: why do I need systemd inside a container? Don't containers already have systemd? Is Horust a replacement for systemd, even? How does it stack up against other tools? Can it replace something I have in my stack now, even in its alpha phase?
I'm not saying that Horust should be anything re the above, I'm saying that outside of the immediate people who you and I talked to about the project, it's unclear exactly what it's doing here and what it... well, is.
Let's dream for a sec: if Horust gets big, what would it look like? Who would use it? What would it replace?
First of all, I want to thank you for writing an excellent init system for containers. Horust finds a fantastic balance of simplicity and functionality.
There appears to be a memory leak caused by health checks as a copy of an object describing the service is created every check interval. Additionally, disabling the health check feature flag does not resolve the issue as the code is not completely removed. To workaround the issue I have manually patched out the checking code (I am not proficient with rust, so I am not confident in raising a PR for this).