
hardlight's Issues

Support a no-IO feature

In some cases, we can't run Tokio in the current environment for either the client or the server.

This issue tracks separating HardLight's core state machine and logic from the networking IO stack, which currently uses Tokio and tungstenite. This will let developers handle their own networking, providing just a `Vec<u8>` per incoming message: HardLight delegates its IO to the application while still handling server and client logic internally as usual.
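Shaped roughly, the split could look like the following. This is a minimal sketch with hypothetical names (`Core`, `handle_incoming`, `poll_outgoing` are illustrations, not the real HardLight API): the application owns the socket and exchanges raw byte frames with the core.

```rust
use std::collections::VecDeque;

/// Stand-in for HardLight's transport-agnostic state machine
/// (the name and methods here are illustrative only).
struct Core {
    outgoing: VecDeque<Vec<u8>>,
}

impl Core {
    fn new() -> Self {
        Core { outgoing: VecDeque::new() }
    }

    /// The application calls this with each incoming message's bytes.
    fn handle_incoming(&mut self, frame: Vec<u8>) {
        // Real logic would decode the frame and drive RPC state;
        // here we simply echo it to show the flow of bytes.
        self.outgoing.push_back(frame);
    }

    /// The application drains this and writes the bytes to its own socket.
    fn poll_outgoing(&mut self) -> Option<Vec<u8>> {
        self.outgoing.pop_front()
    }
}
```

The key property is that `Core` never touches a socket: all IO stays with the caller, which is what makes Tokio optional.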

Scope out the Events feature

RPC currently has a reasonably stable implementation, and we have a clear picture of how and why it works.

The Events feature has yet to be fleshed out, so we need to work out what its developer-facing API will look like before we can start on implementation.
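As a starting point for discussion, one possible shape for an events API is a topic-keyed registry of callbacks on the client. Everything below is purely illustrative; none of these names exist in HardLight today:

```rust
use std::collections::HashMap;

type Handler = Box<dyn Fn(&[u8])>;

/// Hypothetical client-side event registry keyed by topic name.
struct Events {
    handlers: HashMap<String, Handler>,
}

impl Events {
    fn new() -> Self {
        Events { handlers: HashMap::new() }
    }

    /// Register a callback for a server-emitted topic.
    fn on(&mut self, topic: &str, handler: Handler) {
        self.handlers.insert(topic.to_string(), handler);
    }

    /// Called when the connection receives an event frame;
    /// returns whether a handler was registered for the topic.
    fn dispatch(&self, topic: &str, payload: &[u8]) -> bool {
        match self.handlers.get(topic) {
            Some(h) => {
                h(payload);
                true
            }
            None => false,
        }
    }
}
```

Whether events should be typed via the same macros as RPC, and how unsubscribed topics are handled, are exactly the open questions this issue needs to settle.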

Track in-flight RPC calls on `Server`

We can currently have a maximum of 256 (`u8::MAX` + 1 possible IDs) active RPC calls on a single connection. `handle_connection` should ensure it never spawns multiple RPC calls with the same ID.
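One way to sketch this (hypothetical structure, not the actual `handle_connection` code): a 256-bit set tracking which `u8` IDs are in flight, so a new call can only claim a free ID.

```rust
/// Hypothetical tracker for in-flight RPC call IDs on one connection.
/// 256 bits cover every possible u8 ID.
struct InFlight {
    bits: [u64; 4],
}

impl InFlight {
    fn new() -> Self {
        InFlight { bits: [0; 4] }
    }

    /// Try to reserve a free ID; None means all 256 IDs are active
    /// and the call must wait for a slot.
    fn acquire(&mut self) -> Option<u8> {
        for (word, bits) in self.bits.iter_mut().enumerate() {
            if *bits != u64::MAX {
                let bit = bits.trailing_ones() as usize;
                *bits |= 1 << bit;
                return Some((word * 64 + bit) as u8);
            }
        }
        None
    }

    /// Free an ID once its RPC response has been delivered.
    fn release(&mut self, id: u8) {
        self.bits[id as usize / 64] &= !(1 << (id % 64));
    }
}
```

Released IDs are reused immediately, so the connection can stay saturated without ever double-assigning an ID.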

Crate features to switch between functionality

HardLight is currently an all-in-one package: a multithreaded RPC server built on Tokio, plus macros and helpers for developers using HardLight.

Dependencies for certain functionality should not ship in every build. For example, the WebAssembly client should NOT ship tokio and a multithreaded async runtime to the browser, nor the other server-specific dependencies/features.

We can separate these using crate "features", letting users enable/disable parts of the crate based on where and what they're shipping.

This shouldn't be too hard: mostly metadata in Cargo.toml, plus enabling/disabling code based on those features.
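A rough sketch of what the split might look like in Cargo.toml. The feature and dependency groupings here are illustrative, not a final design:

```toml
[features]
default = ["server", "client"]
# Full multithreaded server: pulls in tokio + tungstenite.
server = ["dep:tokio", "dep:tokio-tungstenite"]
# Native client, also tokio-based.
client = ["dep:tokio", "dep:tokio-tungstenite"]
# WebAssembly client: no tokio; the browser provides the event loop.
wasm-client = []

[dependencies]
tokio = { version = "1", features = ["rt-multi-thread"], optional = true }
tokio-tungstenite = { version = "0.19", optional = true }
```

Code would then be gated with `#[cfg(feature = "server")]` and friends so the wasm build never even compiles the server paths.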

Switch compression to Zstd

We currently use DEFLATE (via the zlib C implementation) when "unpure" compression is enabled. Zstd is much faster:

zstd 1.5.1, level -1:
    Ratio:      2.887
    Compress:   530 MB/s
    Decompress: 1700 MB/s

This issue will bump HL to v3, as the wire protocol is changing again: compression will be set to true or false in the opening connection header, instead of a uint level 0-9.
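The header change itself is small. A sketch of the flag encoding (the exact byte layout here is hypothetical, not the real wire format):

```rust
/// v3 sketch: compression is a single on/off flag in the opening
/// connection header, replacing the old 0-9 level byte.
fn encode_compression(enabled: bool) -> u8 {
    if enabled { 1 } else { 0 }
}

fn decode_compression(byte: u8) -> Result<bool, &'static str> {
    match byte {
        0 => Ok(false),
        1 => Ok(true),
        // A v2 peer might still send a 2-9 level byte; under this
        // sketch that is simply a handshake error.
        _ => Err("invalid compression flag"),
    }
}
```

Rejecting anything other than 0 or 1 is one way the version bump gets enforced on the wire rather than only in the version check.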

Consider shipping modded tungstenite

We're currently using tokio-tungstenite for WebSockets. We should consider forking it into this repo so we can make our own optimisations. For example, integrating the decompressor via async-compression may reduce the latency of enabling compression, as we could decompress frame payloads by streaming them straight off the tokio socket. There would then be no separate decompression step by the time the data reaches our business logic.

Support unencrypted streams

Sometimes the client and server already have an encrypted private connection between them. In these cases, double encrypting with TLS is not worth the performance loss. An example is when traffic runs not over bare physical network links but over a VPN like WireGuard: in production, we use 6PN, an IPv6 WireGuard mesh network, between server instances.

In these cases, encryption is handled outside HardLight. We should support regular, non-TLS connectors in both the client and the server so these deployments aren't slowed down.
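One way to expose the choice (an illustrative sketch; the enum and method names are hypothetical, and the real thing would wrap actual TLS/TCP stream types):

```rust
/// Hypothetical connector selection for HardLight clients and servers.
enum Connector {
    /// TLS-wrapped stream (the current default).
    Tls { domain: String },
    /// Plain TCP, for links already encrypted at a lower layer,
    /// e.g. a WireGuard mesh like 6PN.
    Plain,
}

impl Connector {
    /// Whether HardLight itself adds an encryption layer.
    fn is_encrypted(&self) -> bool {
        matches!(self, Connector::Tls { .. })
    }
}
```

Keeping the choice explicit at construction time means "plain" can never be a silent fallback: the application has to opt in to skipping TLS.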

Enhanced client/server version agreement

TLDR

We currently (#7) do a basic check to ensure the client and server use the same major version of HardLight. We want to explore additional methods to ensure the client and server don't encounter issues during a connection because they use different trait versions.

Relevant code

https://github.com/valeralabs/hardlight/blob/8a945ed4a5aaa0c501c5609bb13609981a3acd37/src/server.rs#L138-L152

https://github.com/valeralabs/hardlight/blob/8a945ed4a5aaa0c501c5609bb13609981a3acd37/src/client.rs#L123-L129
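For context, the spirit of the existing check can be sketched as a standalone major-version comparison (this is a reimplementation for illustration, not the code linked above):

```rust
/// Extract the major component of a "major.minor.patch" version string.
fn major(version: &str) -> Option<u64> {
    version.split('.').next()?.parse().ok()
}

/// Client and server are considered compatible when their
/// HardLight major versions match.
fn compatible(client: &str, server: &str) -> bool {
    match (major(client), major(server)) {
        (Some(a), Some(b)) => a == b,
        _ => false,
    }
}
```

An enhanced scheme could go further, e.g. exchanging a hash of the shared trait definitions at handshake time, so mismatched trait versions fail fast instead of misbehaving mid-connection.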

Advanced metrics support

Hardlight is quite cool. Apparently, it's quite fast too. But how fast? I bet our users would like to know.

We should implement some sort of metrics collection, so the outer application can gain some insights into how the client/server is working.
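A minimal sketch of what that collection could look like: lock-free counters the client/server bumps internally and the outer application snapshots. The struct and field names are hypothetical:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Hypothetical metrics the connection could expose to the application.
#[derive(Default)]
struct Metrics {
    rpc_calls: AtomicU64,
    bytes_sent: AtomicU64,
}

impl Metrics {
    /// Called on the hot path; Relaxed ordering keeps it cheap since
    /// counters don't need to synchronise with other memory.
    fn record_call(&self, payload_len: u64) {
        self.rpc_calls.fetch_add(1, Ordering::Relaxed);
        self.bytes_sent.fetch_add(payload_len, Ordering::Relaxed);
    }

    /// Snapshot for the application to scrape, e.g. into Prometheus.
    fn snapshot(&self) -> (u64, u64) {
        (
            self.rpc_calls.load(Ordering::Relaxed),
            self.bytes_sent.load(Ordering::Relaxed),
        )
    }
}
```

Latency histograms would need more than plain counters, but atomics cover throughput-style metrics without touching the hot path's performance.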


Investigate supporting NATS as a transport layer

Currently, we use direct TCP/IP connections (1:1) between servers and clients. These are WebSocket connections, widely supported in native environments and on the web.

At Valera, we use an internal messaging layer called NATS. This is a glorified Pub/Sub with many excellent features. This real-time messaging layer is what we use to make HardLight emit valuable events to our users. For example, each API instance connects to Natalie, a global supercluster "nervous system", and subscribes to topics its users (connected over HardLight) are interested in.

To be continued

Better connection reliability

#28 will help with this, but other things need to be done as well.

HL is a stateful protocol: state is stored on both the server side and the client side. In our deployments we run L4 load balancing in front of HL for protection, and connections sometimes have to reconnect due to a server failure.

We should consider a way to resume connection state, potentially on a different server to the one the state was created on. The types will be compatible, but we have to work out how to do this safely (our servers rely on this state for authentication).

HL connections should be as reliable as possible, and RPC calls that can't be handled immediately should be queued; the application shouldn't notice reconnects to the server at all.
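The queueing half can be sketched simply (hypothetical names; a `Vec` stands in for the real socket writes so the flow is visible):

```rust
use std::collections::VecDeque;

/// Hypothetical connection wrapper that queues RPC frames while
/// disconnected and flushes them on reconnect.
struct Connection {
    connected: bool,
    pending: VecDeque<Vec<u8>>,
    sent: Vec<Vec<u8>>, // stands in for actual socket writes
}

impl Connection {
    fn new() -> Self {
        Connection {
            connected: true,
            pending: VecDeque::new(),
            sent: Vec::new(),
        }
    }

    fn call(&mut self, frame: Vec<u8>) {
        if self.connected {
            self.sent.push(frame);
        } else {
            // Server unreachable: hold the call until we reconnect,
            // so the application never sees the failure.
            self.pending.push_back(frame);
        }
    }

    fn on_disconnect(&mut self) {
        self.connected = false;
    }

    /// On reconnect (possibly to a different server), flush the
    /// queue in order so no call is lost.
    fn on_reconnect(&mut self) {
        self.connected = true;
        while let Some(frame) = self.pending.pop_front() {
            self.sent.push(frame);
        }
    }
}
```

The hard part this issue actually tracks, resuming authenticated state on a different server, sits underneath this: the queue only helps once the new server accepts the resumed session.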
