unleash / unleash-client-rust
Unleash client SDK for Rust language projects
License: Apache License 2.0
The block-cipher crate has been renamed to cipher
Details | |
---|---|
Status | unmaintained |
Package | block-cipher |
Version | 0.7.1 |
URL | RustCrypto/traits#337 |
Date | 2020-10-15 |
This crate has been renamed from block-cipher to cipher.
The new repository location is at:
<https://github.com/RustCrypto/traits/tree/master/cipher>
See advisory page for additional details.
Give the user the option to use a tokio-based HTTP stack.
At $DAILY_JOB, our Rust codebase is 100% tokio-based. Adding the unleash-api-client crate as a dependency pulled 100+ extra crates into our Cargo.lock, which is undesirable.
Checking the added dependencies, many of them are crates from the async-std ecosystem, which we would not need to pull in if we had the option to use a tokio-based HTTP stack.
We could gate both the async-std and a tokio-based HTTP (e.g. hyper) implementation behind cargo features (keeping async-std enabled by default).
From what I understood of the implementation, the only method that makes HTTP calls is Client::poll_for_updates (edit: Client::register also makes an HTTP call). If so, we would need to change the code to something like:
```rust
impl<F> Client<F> {
    pub async fn poll_for_updates(&self) {
        #[cfg(feature = "async-std")]
        self.poll_for_updates_async_std().await;
        #[cfg(feature = "tokio")]
        self.poll_for_updates_tokio().await;
    }

    #[cfg(feature = "async-std")]
    async fn poll_for_updates_async_std(&self) {
        // ... current implementation
    }

    #[cfg(feature = "tokio")]
    async fn poll_for_updates_tokio(&self) {
        // ... tokio/hyper-based implementation
    }
}
```
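The feature gating itself might look something like the following Cargo.toml sketch. This is an assumption about how the manifest could be structured, not the crate's actual manifest; the `dep:` syntax requires a recent cargo:

```toml
[features]
default = ["async-std"]
async-std = ["dep:async-std", "dep:surf"]
tokio = ["dep:tokio", "dep:hyper"]

[dependencies]
async-std = { version = "1", optional = true }
surf = { version = "2", optional = true }
tokio = { version = "1", features = ["rt", "time"], optional = true }
hyper = { version = "0.14", optional = true }
```

Making the two features mutually exclusive (or compile-erroring when both are enabled) would be a follow-up design decision.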
I'm open to working on fixing this issue.
Currently poll_for_updates is !Send. It would be nice if it could be made Send.
Because poll_for_updates is !Send, it can't be spawned using tokio::spawn or similar methods, and must run on either a dedicated thread or a local task set. This is not very ergonomic: a common pattern in async applications is to spawn persistent tasks like poll_for_updates with tokio::spawn in a 'fire and forget' manner.
I haven't looked at the code yet to see whether making the function Send is an easy or a hard task.
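For context, the constraint can be reproduced with a small std-only sketch: tokio::spawn requires its future to be Send, and a value or future holding a non-Send type (such as Rc) fails that bound. The `assert_send` helper below is a hypothetical stand-in that mimics the bound check:

```rust
use std::rc::Rc;

// Mimics the `Send` bound that tokio::spawn places on futures.
fn assert_send<T: Send>(value: T) -> T {
    value
}

fn main() {
    // A Send value passes the bound check.
    let ok = assert_send(String::from("feature-polling task"));
    println!("{}", ok);

    // An Rc is !Send, so the equivalent call does not compile — the same
    // reason a !Send poll_for_updates future is rejected by tokio::spawn:
    // let bad = assert_send(Rc::new(5)); // error[E0277]: `Rc<i32>` cannot be sent between threads safely
    let local_only = Rc::new(5);
    println!("Rc stays on this thread: {}", local_only);
}
```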
It would be helpful to understand the design decision not to automatically support execution of poll_for_updates.
Is the library trying to remain as agnostic as possible to different execution models (std::thread, tokio async/await, etc.)?
Or have I misunderstood entirely, and it is already possible to configure the library to fetch updates?
enum_dispatch could likely replace the boxed vtable code in the strategy implementation, leading to perhaps a 6x performance improvement, absent synchronisation issues. https://docs.rs/enum_dispatch/0.3.1/enum_dispatch/
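For illustration, here is the transformation enum_dispatch automates, written by hand. The `Strategy` trait and the two strategy types are hypothetical stand-ins, not the SDK's actual types; the point is that an enum match replaces the dynamic vtable call of Box<dyn Strategy>:

```rust
// Hand-rolled enum dispatch: the same transformation enum_dispatch automates.
trait Strategy {
    fn is_enabled(&self, user_id: u32) -> bool;
}

struct DefaultStrategy;
struct GradualRollout { percent: u32 }

impl Strategy for DefaultStrategy {
    fn is_enabled(&self, _user_id: u32) -> bool { true }
}
impl Strategy for GradualRollout {
    fn is_enabled(&self, user_id: u32) -> bool { user_id % 100 < self.percent }
}

// Instead of Box<dyn Strategy> (indirect vtable call), an enum dispatches
// through a match, which the compiler can inline.
enum StrategyKind {
    Default(DefaultStrategy),
    GradualRollout(GradualRollout),
}

impl Strategy for StrategyKind {
    fn is_enabled(&self, user_id: u32) -> bool {
        match self {
            StrategyKind::Default(s) => s.is_enabled(user_id),
            StrategyKind::GradualRollout(s) => s.is_enabled(user_id),
        }
    }
}

fn main() {
    let s = StrategyKind::GradualRollout(GradualRollout { percent: 50 });
    println!("{}", s.is_enabled(25)); // 25 % 100 = 25 < 50 → true
    println!("{}", s.is_enabled(75)); // 75 % 100 = 75 >= 50 → false
}
```

The trade-off is that the set of strategies becomes closed at compile time, which matters for user-registered custom strategies.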
In order to reduce load on servers, as well as save our users some bandwidth, I'd like this SDK to react to HTTP status codes instead of requesting at the same frequency even when the server is telling it to back off.
This is part of a cross-SDK initiative to make all our SDKs respect HTTP statuses, saving ourselves and our users bandwidth and CPU usage that adds no value to either the server or the client.
Unleash/unleash-client-node#537 follows the correct pattern. Use 404, 429 or 50x statuses to reduce polling frequency. On 401 or 403, log and stop polling: your user probably needs to update their key before there's any point in continuing to hit the server.
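The status handling described above could be sketched as a pure function from status code to next polling interval. The status groupings follow the issue text; the doubling multiplier and the 16x cap are assumptions, not the cross-SDK specification:

```rust
use std::time::Duration;

// Maps an HTTP status to the next poll interval.
// None means "stop polling" (auth problem — the key needs fixing first).
fn next_interval(status: u16, base: Duration, current: Duration) -> Option<Duration> {
    match status {
        200..=299 => Some(base),                         // healthy: normal cadence
        401 | 403 => None,                               // log and stop polling
        404 | 429 => Some((current * 2).min(base * 16)), // back off, capped (assumed policy)
        500..=599 => Some((current * 2).min(base * 16)), // server trouble: same backoff
        _ => Some(current),                              // unknown: keep current cadence
    }
}

fn main() {
    let base = Duration::from_secs(15);
    let backed_off = next_interval(429, base, base).unwrap();
    println!("{}", backed_off.as_secs()); // 30
    assert_eq!(next_interval(403, base, base), None);
    println!("stopped on 403");
}
```

A successful response then resets the interval back to the base, so transient server trouble only slows the client temporarily.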
stdweb is unmaintained
Details | |
---|---|
Status | unmaintained |
Package | stdweb |
Version | 0.4.20 |
URL | koute/stdweb#403 |
Date | 2020-05-04 |
The author of the stdweb crate is unresponsive.
Maintained alternatives:
See advisory page for additional details.
In poll_for_updates, errors are consumed and a fixed log message is printed instead. For example, if the HTTP request fails it simply prints warn!("poll: failed to retrieve features"). This is unhelpful for debugging; it would be helpful to print the particulars of the error.
I am currently attempting to use unleash-api-client in a cloud-hosted project. It has started sporadically failing, but without information on the particulars I can't easily debug it.
It would be a case-by-case fix, but each call site generally just needs to include a debug-formatted version of the originating error.
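A minimal sketch of the suggested change, using println! in place of the SDK's warn! macro and a hypothetical HttpError type standing in for the real error:

```rust
// Hypothetical stand-in for the underlying HTTP error type.
#[derive(Debug)]
struct HttpError {
    status: u16,
    body: String,
}

fn poll_features() -> Result<(), HttpError> {
    // Stand-in for the real HTTP call; always fails here for illustration.
    Err(HttpError { status: 502, body: "Server Error".into() })
}

fn main() {
    if let Err(err) = poll_features() {
        // Before: warn!("poll: failed to retrieve features");
        // After: include a debug-formatted version of the originating error.
        println!("poll: failed to retrieve features: {:?}", err);
    }
}
```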
The client currently lacks custom stickiness for variants, as described here: Unleash/client-specification#11 (the reference implementation in Node JS is here: https://github.com/Unleash/unleash-client-node/pull/202/files).
This is blocking us from rolling the client specifications forwards.
The unleash server can return the following error when polling for features (poll_for_updates()):
\n<html><head>\n<meta http-equiv=\"content-type\" content=\"text/html;charset=utf-8\">\n<title>502 Server Error</title>\n</head>\n<body text=#000000 bgcolor=#ffffff>\n<h1>Error: Server Error</h1>\n<h2>The server encountered a temporary error and could not complete your request.<p>Please try again in 30 seconds.</h2>\n<h2></h2>\n</body></html>\n
The client currently naively tries to convert the returned body into JSON, fails, dumps the error, and proceeds as if an unrecoverable error happened, not updating the features until the next poll interval.
This should be easily fixable by inspecting the response's status code and reacting to a 502 before converting the body to JSON.
Unfortunately I am unsure how to reproduce the error, as it may have something to do with specific server configurations or even the environment. However, it might be possible to mock the response from the server.
The Unleash client should probably retry the request as instructed by the server.
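A sketch of checking the status before attempting JSON decoding. The Response struct and PollOutcome enum are illustrative stand-ins for the SDK's actual types:

```rust
// Hypothetical stand-ins for the SDK's HTTP response and poll result types.
struct Response {
    status: u16,
    body: String,
}

enum PollOutcome {
    Updated(String), // raw payload to parse as JSON (stand-in for the real type)
    Retry,           // transient server error: keep old features, retry later
    Failed(String),  // unexpected status: surface the particulars
}

fn handle(resp: Response) -> PollOutcome {
    match resp.status {
        200 => PollOutcome::Updated(resp.body), // only now attempt JSON parsing
        502 | 503 | 504 => PollOutcome::Retry,  // don't try to parse HTML error pages
        other => PollOutcome::Failed(format!("unexpected status {}", other)),
    }
}

fn main() {
    let resp = Response {
        status: 502,
        body: "<html>502 Server Error</html>".into(),
    };
    match handle(resp) {
        PollOutcome::Retry => println!("transient error, retrying next interval"),
        _ => println!("other"),
    }
}
```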
Client: 0.6.1 alpha
Server: 3.17.6
Open source
Self-hosted
This log shows a test with a custom strategy that hasn't been registered using .strategy(): it requires a TRACE-level log statement to even tell that that is what has happened.
2022-09-27T20:12:34.518Z TRACE [unleash_api_client::client] memoize: start with 1 features
2022-09-27T20:12:34.518Z TRACE [unleash_api_client::client] memoize: swapped memoized state in
2022-09-27T20:12:34.518Z DEBUG [unleash_api_client::client] poll: waiting 500ms
2022-09-27T20:12:35.013Z TRACE [unleash_api_client::client] is_enabled: feature project_test default false, context Some(Context { user_id: None, session_id: None, remote_address: None, properties: {"projectId": "project", "cluster": "clustername"}, app_name: "app", environment: "clustername" })
2022-09-27T20:12:35.013Z TRACE [unleash_api_client::client] is_enabled: feature project_test default false, context Some(Context { user_id: None, session_id: None, remote_address: None, properties: {"projectId": "project", "cluster": "clustername"}, app_name: "app", environment: "clustername" })
2022-09-27T20:12:35.013Z TRACE [unleash_api_client::client] is_enabled: feature project_test has no strategies: enabling
feature 'project_test' is true
We should probably log the silent dropping of unknown strategies much more visibly.
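A sketch of what that more visible logging could look like: when memoizing features, collect a warning for every strategy name with no registered implementation. The function and message wording are illustrative, not the SDK's actual code:

```rust
use std::collections::HashSet;

// For each (feature, strategy-name) pair, emit a warning if the strategy has
// no registered implementation — instead of only a TRACE-level breadcrumb.
fn warn_unknown_strategies(
    feature_strategies: &[(&str, &str)],
    registered: &HashSet<&str>,
) -> Vec<String> {
    let mut warnings = Vec::new();
    for (feature, strategy) in feature_strategies {
        if !registered.contains(strategy) {
            warnings.push(format!(
                "WARN memoize: feature {} uses unknown strategy {}; it will be ignored",
                feature, strategy
            ));
        }
    }
    warnings
}

fn main() {
    let registered: HashSet<&str> = ["default", "flexibleRollout"].into_iter().collect();
    for w in warn_unknown_strategies(&[("project_test", "customStrategy")], &registered) {
        println!("{}", w);
    }
}
```

Emitting this once per polling cycle (rather than per is_enabled call) would keep the log noise bounded.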
Hi!
Testing out this Rust sdk and it works well for getting feature toggle status, but it always fails when submitting metrics, with this in the log:
WARN unleash_api_client::client > poll: error uploading feature metrics
after adding some debug lines in the library, it turns out that the actual error (which unfortunately gets swallowed in post_json) is along the lines of:
413 Content Too Large
Error
Payload Too Large
Our unleash installation (at FINN.no) has a large number of features, and digging through the code, it looks like the Rust client tries to upload metrics for every single feature that exists -- even if I've only configured a UserFeatures enum with a single variant -- resulting in over 150KB of data.
Is this intentional and required? Naively, I'd have guessed that it only made sense to upload metrics for the features defined in the enum passed to the client builder.
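The suggested fix amounts to filtering the metrics map down to the configured feature names before upload. A dependency-free sketch (the types are stand-ins for the SDK's metrics structures):

```rust
use std::collections::{HashMap, HashSet};

// Keep metrics only for features the user's enum actually names, so the
// upload payload scales with the application, not the Unleash installation.
fn filter_metrics(
    all: HashMap<String, u64>,
    configured: &HashSet<&str>,
) -> HashMap<String, u64> {
    all.into_iter()
        .filter(|(name, _count)| configured.contains(name.as_str()))
        .collect()
}

fn main() {
    let mut all = HashMap::new();
    all.insert("my_feature".to_string(), 12);
    all.insert("someone_elses_feature".to_string(), 7);

    let configured: HashSet<&str> = ["my_feature"].into_iter().collect();
    let filtered = filter_metrics(all, &configured);
    println!("{}", filtered.len()); // 1
    assert!(filtered.contains_key("my_feature"));
}
```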
3.17
Open source
Self-hosted
This is the likely cause of the contention in benchmark results that prevents effective scaling of workloads hitting the same features, shown in cognitedata#1 (comment).
rustc 1.47.0 (18bf6b4f0 2020-10-07)
cargo build
error[E0107]: wrong number of type arguments: expected 0, found 1
--> src/http.rs:14:26
|
14 | client: surf::Client<C>,
| ^ unexpected type argument
error[E0107]: wrong number of type arguments: expected 0, found 1
--> src/http.rs:36:62
|
36 | pub fn get(&self, uri: impl AsRef<str>) -> surf::Request<C> {
| ^ unexpected type argument
error[E0107]: wrong number of type arguments: expected 0, found 1
--> src/http.rs:50:63
|
50 | pub fn post(&self, uri: impl AsRef<str>) -> surf::Request<C> {
| ^ unexpected type argument
error: aborting due to 3 previous errors
For more information about this error, try `rustc --explain E0107`.
error: could not compile `unleash-api-client`.
Tag 0.5.0 from master.
Fetching feature flags fails due to the following error:
Err(reqwest::Error { kind: Decode, source: Error("unknown variant `SEMVER_LT`, expected `IN` or `NOT_IN`", line: 1, column: 44320) })
This means it's not possible to use the client at all when those constraints are defined in the project. It should be possible to use the client even if those strategies are not supported.
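One way to tolerate unknown operators is to map them to a catch-all variant instead of failing the whole deserialization (with serde this is typically a variant marked #[serde(other)]). The idea, shown here dependency-free with an illustrative Operator enum:

```rust
// Illustrative operator enum; the real SDK type would have more variants.
#[derive(Debug, PartialEq)]
enum Operator {
    In,
    NotIn,
    Unknown, // any operator this SDK version doesn't know yet
}

fn parse_operator(s: &str) -> Operator {
    match s {
        "IN" => Operator::In,
        "NOT_IN" => Operator::NotIn,
        _ => Operator::Unknown, // e.g. "SEMVER_LT" from a newer server
    }
}

fn main() {
    assert_eq!(parse_operator("IN"), Operator::In);
    // An unknown constraint no longer aborts the client; the strategy that
    // uses it can then fail closed (treat the constraint as unsatisfied).
    println!("{:?}", parse_operator("SEMVER_LT"));
}
```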
Just a feature that needs to be implemented: Global Segments. Unleash v4.13 supports enhanced responses for global segments; it would be great if this SDK could make use of this.
Segments are effectively a way for Unleash users to define a list of constraints so that it is reusable across toggles without manually copying the constraints from one toggle to another. From the SDK's perspective, segments have two modes of operation. In the inline mode there is no impact: segments are remapped on the server side into constraints on the toggle information, and no changes need to be made. The second mode, global segments, requires that the SDK both opt in and handle the response differently. The handling should effectively result in unpacking the segments referenced in the feature strategies into a set of constraints. The changes required are described below.
Control Header
The SDK needs to pass up an Unleash-Client-Spec header with a semver value greater than or equal to 4.2.0 (i.e. greater than or equal to the version of the unleash client spec tests where global segments are described) when hitting the get-toggles endpoint on the Unleash server. This will enable the Unleash server to respond with the enhanced format.
Example of the difference between enhanced and standard format:
Standard Format (default)
{
"version": 2,
"features": [
{
"strategies": [
{
"name": "flexibleRollout",
"constraints": [
{
"values": [
"31"
],
"inverted": false,
"operator": "IN",
"contextName": "appName",
"caseInsensitive": false
}
],
"parameters": {
"groupId": "Test1",
"rollout": "100",
"stickiness": "default"
}
}
],
"name": "Test1"
},
{
"strategies": [
{
"name": "flexibleRollout",
"constraints": [
{
"values": [
"31"
],
"inverted": false,
"operator": "IN",
"contextName": "appName",
"caseInsensitive": false
}
],
"parameters": {
"groupId": "Test2",
"rollout": "100",
"stickiness": "default"
}
}
],
"name": "Test2"
}
],
"query": {
"environment": "default"
}
}
Enhanced Format (requires opt in)
{
"version": 2,
"features": [
{
"strategies": [
{
"name": "flexibleRollout",
"constraints": [],
"parameters": {
"groupId": "Test1",
"rollout": "100",
"stickiness": "default"
},
"segments": [
1
]
}
],
"name": "Test1"
},
{
"strategies": [
{
"name": "flexibleRollout",
"constraints": [],
"parameters": {
"groupId": "Test2",
"rollout": "100",
"stickiness": "default"
},
"segments": [
1
]
}
],
"name": "Test2"
}
],
"query": {
"environment": "default"
},
"segments": [
{
"id": 1,
"constraints": [
{
"values": [
"31"
],
"inverted": false,
"operator": "IN",
"contextName": "appName",
"caseInsensitive": false
}
]
}
]
}
The relevant change between the two formats is that in the enhanced format the segments are defined once in a global list and referenced within the strategy on the toggle by ID. What's important to note is that the two packets above should be handled identically: they reference the same toggle state.
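The unpacking step can be sketched as resolving each strategy's segment IDs against the top-level segment list and appending the resulting constraints, after which evaluation proceeds exactly as with the standard format. The types below are illustrative stand-ins for the SDK's own:

```rust
use std::collections::HashMap;

// Illustrative stand-ins for the SDK's deserialized types.
#[derive(Clone, Debug)]
struct Constraint {
    context_name: String,
    operator: String,
    values: Vec<String>,
}

struct Strategy {
    constraints: Vec<Constraint>,
    segments: Vec<u32>, // segment IDs from the enhanced format
}

// Resolve segment references into inline constraints on each strategy.
fn unpack(strategies: &mut [Strategy], segments: &HashMap<u32, Vec<Constraint>>) {
    for strategy in strategies {
        for id in strategy.segments.drain(..) {
            if let Some(cs) = segments.get(&id) {
                strategy.constraints.extend(cs.iter().cloned());
            }
            // An unknown segment ID could mark the strategy unevaluable;
            // that policy decision is out of scope for this sketch.
        }
    }
}

fn main() {
    let mut segments = HashMap::new();
    segments.insert(1, vec![Constraint {
        context_name: "appName".into(),
        operator: "IN".into(),
        values: vec!["31".into()],
    }]);
    let mut strategies = vec![Strategy { constraints: vec![], segments: vec![1] }];
    unpack(&mut strategies, &segments);
    println!("{}", strategies[0].constraints.len()); // 1
    println!("{}", strategies[0].constraints[0].context_name);
}
```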
Considerations
Unknown features are meant to use RCU to insert a thunk on demand, making use of an unknown feature a once-per-polling-cycle overhead. Benchmarking shows that this isn't working: the RCU slow path is being hit every time, and performance tanks. Pathologically, with 32 threads querying totally unknown features we get:
Benchmarking across 32 threads with 50000 iterations per thread
...
batch/parallel unknown-features
time: [1.7025 s 1.7239 s 1.7392 s]
thrpt: [919.98 Kelem/s 928.13 Kelem/s 939.80 Kelem/s]