
Introduction

Rust 2019 Async Ecosystem Working Group

⚠️ Deprecation notice ⚠️

The Rust Async working group has sunset and is no longer active. It was active from mid-2018 until fall 2019. It was disbanded in anticipation of async/await stabilizing in Rust 1.39, as ecosystem adoption had reached a point where a dedicated working group was no longer needed to help shepherd it.

About

This repo is for coordinating the Rust Async Ecosystem Working Group.

The issue tracker on this repo is a primary point of coordination. If you have an async-related topic you'd like to raise, please feel free to open an issue!

Goals and structure

The WG is focused on progressing the ecosystem around the async foundations in 2019. If you want to get involved in these efforts, hop on Discord and say hello, or take a look at the issue tracker. Our goal is to improve the async library ecosystem in Rust by:

  • Bolstering web components, i.e. assessing the state of foundational crates for web programming (like http and url), and working to improve it by writing documentation and examples, making API improvements, standardizing interfaces, and in some cases writing whole new crates.
  • Building Tide, which is a combination of a simple, modular web framework built on the above components, and extensive documentation on what those components are, how to use them directly, and how they integrate into a framework. The name "Tide" refers to "a rising tide lifts all boats", conveying that this work aims to improve sharing, compatibility, and quality across all web development and frameworks in Rust.
  • Experimenting with projects such as the Juliex executor, the Romio reactor, and the Runtime crate.
  • The Asynchronous Programming in Rust book should have a complete draft, covering async/await, core futures concepts, Tokio, and enough of the ecosystem to give good examples and guidance. It should also explicitly talk about the stabilization situation, including how to bridge between stable 0.1 and unstable 0.3 worlds.

People

Contributors

aajtodd, aturon, bigbv, cramertj, levex, nemo157, pjenvey, xenith, yoshuawuyts


Issues

Synchronisation primitives for async code

In synchronous Rust, we have great things like mutexes, channels, and barriers. In asynchronous Rust, we have async channels that are pretty cool, but we don't have the rest. At this point, it feels like the only current option is to use the synchronous synchronization primitives in your asynchronous code, but this has issues, mainly because the std::sync primitives are scheduled by the operating system while futures are scheduled by the executor. Some issues I can think of:

  • with the way they're currently implemented, you can't await! a synchronous primitive becoming ready to unblock
  • if you await! something else while holding a lock on a mutex, a whole bunch of things may go wrong, deadlock-wise

My suggestion

We have async channels in the futures crate that are basically mirrors of the sync channels that instead use our own scheduling / waking logic. I suggest making something like futures::sync::{Mutex, Barrier, ...} to do a similar thing. Thanks to the change to explicit contexts and wakers with futures 0.2/0.3, the code behind the old BiLock could be refactored to be a lot more safe, and extended to form more general (and publicly exposed) asynchronous synchronization primitives.

Alternatives I'm aware of

keep the normal synchronous primitives, but use compiler magic to double-check that the interaction between them and async code (or futures in general) isn't going to cause issues

Not my idea, so I may not have given it the best justice in this explanation.
If I understand it correctly, I'd have some worries that such checks would be really hard without a lot of false negatives or false positives in terms of what is considered "not a good idea in async".

just let people model everything as channels

Seems to be working pretty well for Go and, if I understand correctly, Erlang. However, there's a reason we didn't go with this model for the synchronous synchronization primitives. I worry that there are situations where you'd suffer potentially unacceptable performance losses modelling your synchronization purely through channels instead of using less abstract synchronisation primitives.

Low-level networking

Is/will this working group be focused on the lower-level layers of the network stack (e.g. projects like libpnet and smoltcp)? If so are there particular areas that are of interest? There may be some overlap with the embedded-wg here.

Create a resource for developers new to rust networking

Currently much of the ecosystem is in flux and it can be difficult to determine what crates are suitable for production, are stable, being phased out, work together, etc.

This would likely be part of the working group website (#21)

There are some resources that work toward this goal already.
https://github.com/rust-unofficial/awesome-rust#web-programming
http://www.arewewebyet.org/
https://github.com/flosse/rust-web-framework-comparison
https://wiki.alopex.li/AnOpinionatedGuideToRustWebServers

I think http://www.arewewebyet.org/ is the closest to what we'd want to make but covering networking more broadly and more opinionated/updated (e.g. the first crate listed for json is rustc-serialize despite it being deprecated for over a year)
Ideally we can give some example stacks for specific use cases and say we know these work together and you can put them in production (and provide references to any documentation needed).

Edit: https://wiki.alopex.li/AnOpinionatedGuideToRustWebServers is the type of opinionated information about a framework I'd like to see. We would probably want to have feedback that reflects some consensus and not just have one person trying a bunch of frameworks.


Beyond just listing what frameworks are available there is a lot of ecosystem level information that isn't immediately obvious that would be nice to consolidate into a reference for people who aren't familiar with the situation. This includes things like ring/openssl dependency hell and what versions of libs will work together or what versions/crates to use if you are using sync or async (especially if more things do what hyper did and have sync/async versions that aren't mentioned anywhere obvious).

A general overview of any ongoing transitions in the ecosystem is probably also valuable. This could just be larger things like moving to async in general but could also cover adoption of new versions of widely used crates like ring 0.11 -> 0.13 or futures 0.1 -> 0.3 and what versions work for commonly used combinations of crates. (tracking adoption of new versions of dependencies across the ecosystem is obviously a non-trivial task but currently the task is just shifted to individual developers in a lot of cases)


There has also been discussion of writing guides for getting started with various networking topics. For guides on specific crates I think we should focus on making the crate's docs more discoverable and/or contributing to them directly, but including guides in cases where the topic doesn't clearly belong in a given crate's docs makes sense.
meta: should guides and general ecosystem information be separate issues?


This is obviously strongly related to tracking what crates are in the ecosystem and the general state of the ecosystem which has already been started in other tickets.
#27
#28

Review web frameworks

Conduct a comprehensive survey of the existing Rust web frameworks, their status and attributes, and compare against what's readily available in other major languages.

End-to-end user story: Tractor Control Channel

There was recently talk on Discord #wg-net about gathering end-to-end user stories that aren’t REST or HTTP servers.

Here is a real world example of an application that exists that I would like to write in Rust. It is currently written in Node.js (server) and C++ (client).

I’m not sure how representative it is.

Tractor Control Channel

The company I work for builds displays and cellular modems to put in agricultural tractors. When possible, we maintain a Control Channel between the display, over the cellular modem, back to our servers.

This Control Channel has the following requirements.

  • Low-ish bandwidth – our customers pay for the data plan; we don’t want to use all of their data; currently about 22MB/month. To achieve this, we typically compress the data stream.
  • Fast communication – when a message is generated from either side, it should be received by the other side on the order of seconds, not minutes. To achieve this we use a constant TCP connection.
  • Secure – the channel must be encrypted.

To accomplish this, the client opens a TCP connection to one of our servers and keeps it open for as long as possible. The data sent over the connection is a stream of length-prefixed protocol buffer messages, compressed with standard gzip and encrypted with standard TLS.

The communication is two-way (i.e. we can’t replicate it using a request/response pattern). Some examples of this:

  • When the client has telemetry data to send, it sends it. The service does not respond to every telemetry message, but occasionally responds in aggregate.
  • When the client wants to upload a file, it notifies the service, which responds with how to upload the file (like the hostname, username, password of an out of band FTP upload)
  • When the service wants the client to download a file, it notifies the client with out of band information, and the client downloads the file, providing periodic progress updates (e.g. “17% done”)
  • There are more examples of both sides sending control messages to the other, but it’s probably belaboring the point.

Both sides take the Control Channel messages and store them in databases. On the client side, this is typically SQLite. On the server side, this is MySQL, Kinesis (AWS-hosted event stream), etc.

Roadblocks when moving from a Tokio example to a full application:

  • It’s not just a request/process/response pattern. We need to write to streams based on requests, based on external events, based on timers. How to do that was unclear (clone the TCP writer? Pipe an MPSC into the writer, and clone the sender side? Other?)
  • How to layer Gzip and TLS into the stream?
  • How to separate business logic from network-level code?
  • What to do when business logic needs to use a sync crate?

Tracking issue: futures 0.3

This issue tracks high-level progress toward a final futures 0.3 release.

  • Start the branch

0.3-alpha.1

  • Work through implications of removing Error, #3
  • Work through implications of pinning, #4
  • Work through implications of borrowing, #5

0.3-beta.1

  • Task-local hooks, #7
  • Debugging hooks, #6

0.3.0

  • futures-core APIs stabilized in libcore

book: proposal for introduction and overall theme

I've been pondering how best to present the book. My feeling is that the approach that most tutorials take of going bottom up from event loops and how futures are scheduled up to the high level concepts like async/await is not great for beginners to the area - there are a lot of moving parts to get your head around, lots of jargon, and lots of libraries, most of which take some hand-waving by the author.

I believe a better way is to start with the concept of async code and async/await. Then move on to futures in the abstract and then drill into the details of event loops and how it all works. Then the reader can concentrate on the mental model of async at first, before having to also learn the details.

I started hashing out a rough outline, but I quickly realised I didn't know the area well enough. Instead I wrote just the introduction (taking some pieces from the old apr book). It relies heavily on examples which use async/await and minimal external libraries (the final example does use bits of tokio in order to get something interactive, but hopefully it is not overwhelming in the way that Hyper examples can be).

The intro draft is at https://github.com/nrc/apr-intro. If others like the sound of this approach, I can polish and submit a PR. I'm happy to work on more too, but my time is a little scarce at the moment.

cc @cramertj @aturon @mgattozzi

Review Transport Layer Security solutions

There are lots of options available. We should look into answering the following questions about each option, for a newcomer (or for myself, because I personally want to know what to use):

  • is this library secure + able to be trusted
  • does this library require dynamic linking against system wide libraries + how do I do that
  • how do I write a docker file to build my project that uses this library
  • is this library usable with {Async, no_std, etc}
  • how featureful is this library (does it only do a subset of what TLS allows)
  • how well designed is the api of this library (am I able to accidentally write insecure code easily using this api)

wg-net newsletter #1

Let's collect content for the first issue of the Net WG newsletter here! ✨

Establish a clear vision for Rust 2018

This WG hasn't managed to take off, largely because the leads have been heads-down trying to get futures 0.3 and async/await working. With an alpha nearly out the door, we want to try to get this broader group going again, and see what we can accomplish by the Rust 2018 release date (December 6th).

Add debugging hooks for futures

Make it possible to debug deadlocks/lost wakeups by examining the graph of tasks and what they believe they're blocked on.

Contribute Doctests to Tokio

Contribute Doctests to Tokio

Summary

We propose to improve Tokio's documentation by adding doctests for each public
method.

Motivation

Tokio is a highly important library in Rust's ecosystem. It provides async
counterparts for many of stdlib's methods. However, unlike stdlib not every
method has usage examples yet (also known as doctests).

We propose to make a coordinated effort to track down missing usage examples,
and make pull requests to add them. This will hopefully both help people get
involved with Tokio, and help people new to the project find their way around
the libraries.

Implementation

I ran a script to find all pub fn names in tokio, and added them to the
overview
at the end of this issue. To contribute a doctest:

  1. Clone tokio-rs/tokio.
  2. Choose a file from the overview.
  3. Comment on this issue which file you've chosen to work on.
  4. Write a few doctests for the file.
  5. Read Tokio's contributing guidelines.
  6. Make a PR, and link back to this issue.
  7. Once the PR is approved, we can cross off doctests from the file.

Example from current documentation

Script

#!/bin/bash

base_url="https://github.com/tokio-rs/tokio"
curr_commit="cc3b6af7a3927751b12a82c61fae97b4cca30c12"

# Find all public methods
matches="$(rg -g '!names' --vimgrep 'pub fn' .)"
matches="$(echo "$matches" | grep -v main | grep -v tests | grep -v examples)"

# Get all method names, file names, and merge them on one line
fn_names="$(echo "$matches" | perl -nE '/pub fn (\w+)/ and say "$1";')"
line_nums="$(echo "$matches" | perl -nE '/\:(\d+)\:/ and say "$1";')"
lines="$(echo "$matches" | awk '{print $1}' | sed 's/\:.*$//')"
files="$(paste <(printf '%s\n' "$lines") <(printf '%s\n' "$fn_names") <(printf '%s\n' "$line_nums"))"

# Iterate over all file-method names combinations, and create a markdown list
echo "$files" | while read line; do
  file="$(echo "$line" | awk '{print $1}')"
  fn_name="$(echo "$line" | awk '{print $2}')"
  num="$(echo "$line" | awk '{print $3}')"
  url="$(printf '%s/blob/%s/%s#L%s' "$base_url" "$curr_commit" "$file" "$num")"
  echo "- [ ] [$file#$fn_name]($url)"
done

Drawbacks

There seems to be some duplication in the APIs between tokio and tokio-*
modules. If both modules are going to be merged, perhaps we should prioritize
one implementation first.

Secondly, Tokio doesn't support futures 0.3 yet, which means async/await
support is in somewhat of a transitional phase. This might be an argument to
hold off until tokio upgrades to futures 0.3. But there doesn't seem to be a
clear timeline yet for when such a transition might be done, and there seem to
be clear benefits to improve documentation today.

There are over 600 API methods in tokio. That means there's probably quite a
few examples to be written. Therefore, helping out with reviewing doctest PRs
would be fantastic too!

Rationale and alternatives

The idea of doctests is that they exist in harmony with more tutorials and
examples. They do not replace any of the above, but instead make the existing
documentation more complete.
Unresolved Questions

We should probably figure out which modules to prioritize to prevent duplicate
work.


cc/ @carllerche @seanmonstar Your input would be valued a lot!

Overview

Tokio

tokio-channel

tokio-codec

tokio-executor

tokio-current-thread

tokio-fs

tokio-io

tokio-reactor

tokio-tcp

tokio-threadpool

tokio-timer

tokio-tls

tokio-udp

tokio-uds

tokio-async-await

Increase bus factor/improve maintenance

It seems just a few core maintainers are responsible for large parts of the futures/tokio/http ecosystem. I worry that this doesn't scale very well to serve all the crates that depend on these, and with strict semantic versioning, a lack of new releases can easily become a choke point for an entire stack. It would be good to make sure that multiple people are available for these infrastructure crates.

(This issue inspired by the current blockage in tokio-tls, with some PRs needed to move it forward sitting idly for a few weeks.)

IO traits for datagram based protocols

futures-io currently includes two traits for streaming IO; these are very similar to the traits of the same name that tokio::net::TcpStream implements, and will hopefully be very useful for developing transport-agnostic higher-level protocols (e.g. for transparently inserting TLS). On the other hand, tokio::net::UdpSocket currently implements no useful traits, so it would be more effort to write a protocol that is agnostic over an insecure UDP connection or a secured DTLS connection.

This abstraction may also be useful in lower layers of a networking stack, e.g. while TCP presents a streaming interface to higher layers it is built on top of the datagram based IP layer, so a usermode TCP/IP stack could use such an abstraction.

One alternative would be to avoid defining new traits and instead standardise the ecosystem around something like Stream<Item = Bytes> + Sink<Item = Bytes>. At first glance this seems like it would have issues in supporting allocation/copy-less usecases, but I think it's definitely worth looking into.

(CC @levex, @Ralith from our discord discussion)

Call For Example Web Projects

As part of the Net::Web WG we want to help people find their way around Web
programming in Rust (#37). One part of that story is sending out surveys
(#27, #40). But another thing we want to do is create a collection of examples
for common tasks in web programming.

We think that gathering a collection of documented example projects can help
us with a few things:

  • Provide a helpful resource for people who are new to web programming in
    Rust.
  • Inform which parts of the web story need to be improved.
  • Serve as a useful reference for the upcoming Web Book.

What does an example program look like?

An example program should have the following things:

  • A dedicated (GitHub) repository.
  • A description in the README of the task it solves, and notable architecture
    decisions.
  • A list of things that were tricky to figure out, or that you wish could be
    improved.
  • A link back from the project to this issue so people can explore other
    projects.

And that's about it I think. I don't think we'd need to have many other
restrictions in place, as our purpose is to explore the different ways that
people solve web-related tasks in Rust.

What kind of examples are you looking for?

The goal is to have small examples that generally cover a single task. Examples
would include:

  • Forward incoming connections to other servers using Tokio.
  • Create an API around a full-text search engine to search in documents.
  • Perform user authentication using PostgreSQL and cookies.

But these are just some examples. We'd love to gather more! So if you have some
good ideas, go ahead and comment below!

Where should we link these examples from?

Let's link them from this issue for now, but if we get enough examples we should
probably make a dedicated repo / link it from other web resources. But I propose
we do those things as they're rolled out, and instead place an initial focus on
creating content.

How is this different from the Rust Cookbook?

The Rust Cookbook is a resource that helps show how to perform a variety
of tasks in a wide range of domains using code snippets.

The goal of the example projects is to show off how to do common tasks in the
web domain using complete projects.

How can I get involved?

We'd love to have people both contribute ideas for common web tasks, and
implementations of projects!

If you have ideas that would be worth exploring, please comment in this thread.
Similarly, if you've implemented a task, feel free to link to it from this
thread too.

Please include a description of the task your project solves, and any notable
architecture decisions.


Hope this all makes sense. We're happy to answer any questions people have, and
update this post to help clarify things.

Happy hacking!

Standardisation of traits for (buffered) IO

There is a lot of fragmentation in this space, which is an especially big problem for something like buffering, where we should be aiming for all libs to be able to share their buffers to reduce copying. However, traits for IO in general are still not really a solved problem, it seems. Most things seem to come down to a difference between (Async)(Buf)Read/Write and Stream/Sink based solutions.

We need to evaluate the pros and cons of each of these, and come to a decision, so that we can start using the same generic traits and patterns for IO across different crates and achieve consistency. I'm going to briefly go through the current solutions, and some that are being developed (please correct me if any of the code below is wrong), then we can compare and contrast them to find the best option.

Options

AsyncRead / AsyncWrite

trait AsyncRead {
    fn poll_read(
        &mut self, 
        cx: &mut Context, 
        buf: &mut [u8]
    ) -> Poll<Result<usize, Error>>;

    // ... initializer and vectored reading
}

trait AsyncWrite {
    fn poll_write(
        &mut self,
        cx: &mut Context,
        buf: &[u8]
    ) -> Poll<Result<usize, Error>>;

    fn poll_flush(&mut self, cx: &mut Context) -> Poll<Result<(), Error>>;

    fn poll_close(&mut self, cx: &mut Context) -> Poll<Result<(), Error>>;

    // ... vectored writing
}

Very minimal. Mirrors the standard library, so also very familiar. Can only work with bytes, which limits it to relatively low level operations. Can work without allocation, but does require copying of bytes.

Stream / Sink

trait Stream {
    type Item;

    fn poll_next(
        self: PinMut<Self>, 
        cx: &mut Context
    ) -> Poll<Option<Self::Item>>;
}

trait Sink {
    type SinkItem;

    type SinkError;

    fn poll_ready(
        self: PinMut<Self>, 
        cx: &mut Context
    ) -> Poll<Result<(), Self::SinkError>>;

    fn start_send(
        self: PinMut<Self>, 
        item: Self::SinkItem
    ) -> Result<(), Self::SinkError>;

    fn poll_flush(
        self: PinMut<Self>, 
        cx: &mut Context
    ) -> Poll<Result<(), Self::SinkError>>;

    fn poll_close(
        self: PinMut<Self>, 
        cx: &mut Context
    ) -> Poll<Result<(), Self::SinkError>>;
}

Generic over more than just IO. Can work with things that aren't bytes. Needs ownership of the data being sent through (for IO applications), which will typically mean allocations are required. IO types don't directly implement these traits; you'd need to create wrappers such as Framed.

AsyncBufRead

trait AsyncBufRead {
    fn poll_fill_buf(&mut self, cx: &mut Context) -> Poll<Result<&[u8], Error>>;

    fn consume(&mut self, size: usize) -> Result<(), Error>;
}

This isn't a fully fledged idea yet, but I found myself using AsyncRead and a BytesMut to roughly this effect a lot in my http parsing crate. It allows for all the benefits of AsyncRead, as well as the advantages of buffering: increased performance and not needing to worry too much about over-reading.

BufStream

trait BufStream {
    type Item: Buf;

    type Error;

    fn poll(self: PinMut<Self>, cx: &mut Context) -> Poll<Result<Option<Self::Item>, Self::Error>>;
}

More IO-focussed than Stream on its own. It still looks like it will require ownership of the bytes being sent in most cases. Also, the caller cannot choose how many bytes are read in each go.

Comparison

Firstly, BufStream and Stream appear very similar; BufStream is just more specialised for IO than Stream. As we are trying to find a good trait for IO functions to operate with, I think we can consider only BufStream for reading, and perhaps a similar BufSink equivalent.

For our Read apis, we are then looking at AsyncRead vs AsyncBufRead vs BufStream.

From an API consumer's perspective, the main difference between each of these is who chooses how much reading is done and when.

  • Users of AsyncRead can choose an upper limit on how many bytes they receive per call, but must carefully set that upper limit so that the next attempt to read from the reader does not have the beginning of the message it is attempting to read cut off.
  • Users of AsyncBufRead can choose an upper limit on how many bytes they receive per call, and if they accidentally read too many bytes they can simply choose not to consume that many.
  • Users of BufStream have no control over how many bytes get sent through with each poll, and must adapt their code to be able to handle receiving extra bytes that are not part of the message they are trying to parse. This is especially difficult as we would need a generic interface for passing these extra unwanted bytes either back into the stream or onto the next function that tries to read from the BufStream.

While an API consumer who is doing something simple could make all three APIs work, in more complex cases AsyncBufRead has definite advantages. Consider the case where a server is attempting to listen for two different types of messages on a single port, e.g. HTTP requests and websocket connections. It is necessary to be able to read exactly one HTTP request from the reader, and then immediately afterwards begin reading either further HTTP requests or websocket packets. It is therefore necessary that no excess bytes are consumed from the reader while parsing the HTTP request, as they would be missing from the start of the next message, and it is not known which parser will be used to parse that message.

From a library designer's perspective, the differences between each of these is how closely they mirror the read APIs provided by the operating system, and therefore how much overhead in both performance and complexity is necessary to emulate the given API with operating system read sources.

  • Users of AsyncRead can mirror the OS API exactly, and have no issues at all
  • Users of AsyncBufRead need to extend AsyncRead with a buffer implementation, but can do so without too much complexity by using crates such as bytes.
  • Users of BufStream would need to wrap an AsyncRead-like API with something that allocates buffers and then emits them. This would not have high costs.

All three of these cases are reasonably straightforward, and have limited performance costs. There is no real disadvantage to any solution from a library designer's perspective.

From a performance perspective, the main issues are how many read calls are performed to parse a message, how much allocation is needed, and how much copying of data occurs.

  • AsyncRead will require lots of read calls, and will require that data is copied out of the reader once, into the buffer provided. No allocation is needed for AsyncRead.
  • AsyncBufRead will require minimal read calls, and requires that data is copied into a buffer once, out of the inner reader. Occasional buffer reallocations, rather than fresh allocations, would be necessary for AsyncBufRead.
  • BufStream will require minimum read calls*, and requires that data is read into owned buffers (some optimisations may be possible that prevent copying of memory here, apparently). Several small allocations are probably necessary for BufStream.

*the caller of the BufStream API has no control over how many bytes come in per read call, meaning that while it may be possible to read in fewer calls, it is not possible to prevent excess reading from occurring.

For callers of Write APIs, we are looking at AsyncWrite vs BufSink.

The Sink and AsyncWrite APIs are very similar, with the only difference being whether attempting a write is one operation or two (check readiness, then perform the write). I think the extra complexity of Sink makes it potentially harder to misuse, but as the APIs are so similar I think we should base our decision on keeping consistency with the read API we choose to use.

Summary

In most ways, all 3 APIs could be used to achieve the same results. However, in the case of reading exactly up to the end of a message (and no further), AsyncBufRead is the only viable solution so far.

Therefore, I'm currently leaning towards adopting a recommendation that we use AsyncBufRead (or in some cases AsyncRead, with an impl provided to bridge the two) and AsyncWrite (with a buffered alternative, similar to the standard library) for IO work, and standardise on making crates generic over this trait.

stdnet Crate

Hi all. Last year I submitted an RFC to get MAC addresses added to the std lib; it wasn't a good fit. But after seeing the stdweb crate's trajectory and becoming a part of this effort, I suggest we create a stdnet crate.

This crate would attempt to be the natural home for structs, parsers, and sundry re-usable datatypes in the networking area.

There is no reason why everyone should re-implement something like a TCP or IP header, for example.
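As a concrete illustration of the kind of reusable datatype such a crate could hold, here is a minimal sketch of parsing the fixed 20-byte IPv4 header (RFC 791). The struct and function names are hypothetical, not an existing API, and only a subset of the fields is shown.

```rust
/// A subset of the IPv4 header fields, parsed from the fixed 20-byte
/// prefix of a packet.
#[derive(Debug, PartialEq)]
struct Ipv4Header {
    version: u8,
    ihl: u8, // header length in 32-bit words
    total_len: u16,
    ttl: u8,
    protocol: u8,
    src: [u8; 4],
    dst: [u8; 4],
}

/// Parse the fixed part of an IPv4 header; returns None if the slice
/// is too short or the version/length fields are invalid.
fn parse_ipv4(bytes: &[u8]) -> Option<Ipv4Header> {
    if bytes.len() < 20 {
        return None; // the fixed header is 20 bytes
    }
    let version = bytes[0] >> 4;
    let ihl = bytes[0] & 0x0f;
    if version != 4 || ihl < 5 {
        return None;
    }
    Some(Ipv4Header {
        version,
        ihl,
        total_len: u16::from_be_bytes([bytes[2], bytes[3]]),
        ttl: bytes[8],
        protocol: bytes[9],
        src: [bytes[12], bytes[13], bytes[14], bytes[15]],
        dst: [bytes[16], bytes[17], bytes[18], bytes[19]],
    })
}
```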

There is a lot of prior art out there. If you have crates, parsers, etc. that already cover this, please indicate whether you would be interested in a reusable crate, what criteria it would have to meet, and whether you would be interested in contributing.

I'm also interested in what should be in such a crate; I would start with all the OSI layer 2 and 3 protocols.

Consider removing spawning from futures::task::Context

The Context struct currently conflates two independent concerns: allowing futures to arrange to be notified via Wakers, and allowing new tasks to be spawned. I don't believe these belong together. Wakeup handling is useful to practically all leaf futures that aren't serviced by kernel mechanisms (i.e. that aren't leaf I/O futures), so it makes sense to ensure these facilities are passed down to every leaf. By contrast, very few futures require the ability to spawn tasks, and those that do are typically in application code (for example, the accept loop of a server) where an executor handle can be easily made available. In the rare case where library code genuinely needs to spawn new tasks, this can be easily accomplished by explicitly taking an executor handle, or by returning an impl Stream<impl Future> whose elements can be spawned in whatever manner is appropriate to the application.

The specifics of spawning a task can also vary considerably between executors in ways the generic interface exposed by Context cannot support. For example, applications which require non-Send futures or which can't perform dynamic allocation cannot make use of Context-based spawning at all. This not only leads to awkward vestigial API surface, but also presents a subtle compatibility hazard: code using an executor that does not support spawning via Context will compile fine when combined with libraries that assume one, but fail at runtime when spawning is attempted. By contrast, if the ecosystem standardizes on returning streams of futures, spawning (and guarantees such as Sendability of futures to be spawned) naturally becomes explicit.
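A std-only sketch of that last pattern, with an Iterator standing in for Stream to avoid external crates and with all names hypothetical: the library constructs the futures, and the caller decides how, and whether, to spawn them.

```rust
use std::future::Future;

// Hypothetical library function: instead of spawning internally via
// Context, it returns the per-request futures and leaves spawning to
// the caller. (An Iterator stands in for Stream here; here each
// future just reports the request's length.)
fn handler_futures(
    requests: Vec<String>,
) -> impl Iterator<Item = impl Future<Output = usize>> {
    requests.into_iter().map(|req| async move { req.len() })
}

// Application side (sketch): any Send bound the executor needs is
// checked right here at the spawn call, explicitly, instead of
// failing at runtime inside the library:
//
// for fut in handler_futures(requests) {
//     executor.spawn(fut);
// }
```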

cc @carllerche, @Nemo157

Futures 0.3 alpha release

Put out a preliminary release of the new, std-integrated futures design that supports async/await. This will be the foundation for starting integration or shimming for Tokio etc.

Establish an operational structure for the WG

Given how large the WG is, and the number of topics covered, it seems best to try to carve out some relatively clear subgroups, with a clear lead/point person and scope.

This thread is for discussing what such a structure might look like.

Connection agnostic crates

Currently most crates use a concrete type for connections, often std::net::TcpStream, but to support async I/O a different connection type is often needed, e.g. tokio::net::TcpStream.

For example, take the postgres crate, which uses std::net::TcpStream for blocking I/O. To support async I/O, tokio-postgres was created, which uses tokio::net::TcpStream.

I think it would be worthwhile to create a generic Connection trait, which crates like postgres (but also e.g. hyper) can use as an abstract connection type. This can be used for both blocking and async I/O, where std::io::ErrorKind::WouldBlock errors are handled by the user of the crate.

This has two advantages:

  1. Blocking and async I/O can be handled by the same code. This does require more attention than just writing blocking I/O code, i.e. the code needs to deal with an operation being restarted, or it needs to support some kind of checkpointing, much like most Futures.
  2. Support for various async I/O types in the same crate. Tokio has its own TcpStream, but so do Mio and the standard library. Supporting a single generic connection type would allow all three of those types to work with the same code (assuming they implement the correct trait).

Possible design

A simple design would be the following.

use std::io::{Read, Write};

trait Connection: Read + Write { }

And add documentation that a Connection may return std::io::ErrorKind::WouldBlock, which isn't really an error and which the crate must handle gracefully.

This assumes that, in the context of futures, calling read/write on a concrete type arranges for the task to be woken when it returns WouldBlock. I believe this is already the case for both Tokio and mio.
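A minimal sketch of this design, restating the trait with a blanket impl and showing WouldBlock-aware calling code; drain is a hypothetical helper illustrating how a crate would treat WouldBlock as "try again later" rather than as a failure.

```rust
use std::io::{self, Read, Write};

/// A connection is anything readable and writable; read/write may
/// return ErrorKind::WouldBlock, which callers must treat as "retry
/// later" rather than as an error.
trait Connection: Read + Write {}

/// Blanket impl so std, mio, and tokio stream types (or an in-memory
/// cursor, as in the test) all qualify automatically.
impl<T: Read + Write> Connection for T {}

/// Example of WouldBlock-aware code: keep reading until EOF or until
/// the peer has nothing more to say right now.
fn drain<C: Connection>(conn: &mut C, buf: &mut Vec<u8>) -> io::Result<()> {
    let mut chunk = [0u8; 1024];
    loop {
        match conn.read(&mut chunk) {
            Ok(0) => return Ok(()), // EOF
            Ok(n) => buf.extend_from_slice(&chunk[..n]),
            // Not an error: the caller should poll again later.
            Err(e) if e.kind() == io::ErrorKind::WouldBlock => return Ok(()),
            Err(e) => return Err(e),
        }
    }
}
```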

futures-io's AsyncRead and AsyncWrite

For async I/O the futures-io crate provides special read and write traits that hook into the task system. These traits could also be used, rather than Read and Write. The downside is that blocking I/O would no longer be supported.

Thoughts?

Kickoff meeting!

Meeting 2018-07-27 8am Pacific (PDT, UTC-7)

Find us in WG-net on Discord!

Agenda

This kickoff meeting will mostly be about orientation: reaching a common understanding of what the WG is about, and what is feasible to accomplish this year. There are going to be a lot of us attending, so @aturon is going to rule the floor with an iron fist this time around.

Important note: the Core Team has recently decided to have an extended beta for the Edition, which means the final release will be 1.31 on December 6. So we have 4.5 months to make progress before presenting our work to the world — definitely enough time to make some really meaningful progress!

  • Introduction and plan for the meeting
  • Quick rundown of the current state of affairs
  • Strawman proposal for goals and structuring
    • Goals
      • Great pitch for the new web site
      • Documentation!!!
      • Solid answers to:
        • “So you want to write an (async?) web server in Rust…”
        • More like this?
      • Ecosystem assessment, documentation, and improvements
      • Guidelines for networking-related libraries, e.g. approach to TLS
    • Subgroup structuring
      • Async: futures/async/await
      • Protocols: http (1/2), grpc, thrift, …
      • Service bindings: rabbitmq, redis, kafka, s3, etc
      • Middleware: Tower and related
      • Web apps: Flask/Sinatra-level “framework” bringing some of the above together
      • Guidelines: cross-cutting group developing best practices for networking libraries in Rust
  • How to approach the web site
  • Emphasis on sync vs async
  • Plan for vetting futures 0.3

Testing of asynchronous code

Right now, Rust's standard way of writing tests and the standard way of executing futures (tokio) are at odds with each other. The test framework expects a panic on test failure; tokio catches panics and treats them as normal errors. The test framework tries to run multiple tests in parallel where it can; tokio immediately spawns one thread per CPU core when you run it. This makes testing asynchronous code pretty hard, and that's something we should fix.

I think to fix this, we need a simpler futures executor that doesn't include quite so many batteries: instead, a run function that just runs like a regular function, without trying to do anything with threads or panic catching. Ideally, in the name of keeping test and prod environments similar, I would suggest we at least consider making tokio that simpler executor*. Also possible would be taking something like toykio further, or potentially even going all the way to supporting #[test] async fn ... in the future.

*and separating all of the threading magic done by tokio out into a wrapper around a simpler core executor. This would be a big move, and I may be missing a whole bunch of reasons why it's a bad idea, but on the bright side the extra flexibility could help people who

  • want really high performance, and need custom control over how things work across threads in their program
  • want a really light footprint for their code (nobody likes the app that spawns 8 threads just to transfer a file to another computer)
  • don't expect high enough load for their program that the effort to make their code threadsafe is justified for the performance gains it would bring
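For a sense of scale, such a batteries-not-included run function is genuinely small. Here is a toy, std-only sketch that polls a single future to completion on the current thread, with no thread pool and no panic catching, so test panics reach the test harness unchanged. It busy-polls with a no-op waker; a real executor would park until woken.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker that does nothing: this toy executor polls in a loop, so it
// never needs to be told to wake up.
fn noop_waker() -> Waker {
    fn raw() -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn clone(_: *const ()) -> RawWaker {
        raw()
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(raw()) }
}

// Run one future to completion on the current thread: no threads, no
// panic catching, just a regular function call.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    // Safety: `fut` lives on this stack frame and is never moved after
    // being pinned here.
    let mut pinned = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(out) = pinned.as_mut().poll(&mut cx) {
            return out;
        }
        // A real executor would park until woken; busy-polling is
        // acceptable for a toy.
        std::thread::yield_now();
    }
}

// With a runner like this, an async test is just a regular #[test]:
#[test]
fn adds_up() {
    assert_eq!(block_on(async { 21 * 2 }), 42);
}
```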

Crate Proposal: Adaptive Compression for Hyper

Summary

I propose we write a crate that can apply different compression schemes based on a client's Accept-Encoding header.

Motivation

Servers have to compress their content in order to provide good throughput, particularly when serving assets. Clients can express which compression methods they accept using the Accept-Encoding header. This is commonly known for its use with gzip, but deflate and brotli are often included too and provide better compression.

The goal is to have a single crate that can detect which compression schemes are accepted, and can dynamically choose which compression scheme to apply. This should provide improved reliability especially in situations with non-ideal connectivity (e.g. on subways, rural Australia, etc.)

Expected Behavior

The crate should initialize using configuration, and provide an encoding method. The encoding method should take a Request, Response pair, and accept a byte slice. Ideally it would be thread-friendly, so that one instance can be spawned per thread and reused.

use hyper_compress::Compress;

let compression_ratio = 6; // Ideal for API calls.
let compressor = Compress::new(compression_ratio);

let data = b"some data";
let data = compressor.compress(&mut req, &mut res, &data[..])?; // Reads headers from `req`, sets headers on `res`.

It should support both client-side quality value preferences and a configuration option to set a default. This is important because every encoding algorithm has a tradeoff between speed and compression ratio depending on the number of bytes sent.
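The content-negotiation core is mostly header parsing. As a sketch, choosing the best supported scheme from an Accept-Encoding value with quality parameters might look like the following; pick_encoding is a hypothetical helper, not part of any existing crate.

```rust
/// Pick the preferred encoding from an Accept-Encoding header value.
/// Parses entries like "gzip;q=0.8, br" and returns the supported
/// scheme with the highest quality value (default q=1.0, and q=0
/// means "not acceptable").
fn pick_encoding<'a>(accept: &'a str, supported: &[&'a str]) -> Option<&'a str> {
    accept
        .split(',')
        .filter_map(|entry| {
            let mut parts = entry.trim().split(';');
            let name = parts.next()?.trim();
            // Parse an optional ";q=..." parameter; a malformed value
            // counts as 0 (i.e. rejected).
            let q = parts
                .find_map(|p| p.trim().strip_prefix("q="))
                .map_or(1.0, |v| v.parse::<f32>().unwrap_or(0.0));
            if supported.contains(&name) && q > 0.0 {
                Some((name, q))
            } else {
                None
            }
        })
        .max_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
        .map(|(name, _)| name)
}
```

Real headers have more wrinkles (wildcards, identity, whitespace variants), but the quality-value selection above is the heart of it.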

Possible crates to use would include:

API methods

Ideally there would be multiple interfaces exposed: one for streams (e.g. accepting the std io::Read trait and/or tokio's AsyncRead), and one that can just be passed bytes. It's probably best to start with the regular byte-slice method first (as outlined above), but leave space open to implement the streaming methods at a later point.

Drawbacks

The biggest drawback is that this would be tied to hyper, which makes it incompatible with actix-web. But given that this crate should mostly be glue around existing encoding crates + hyper's headers, I think it's okay to tie it to one framework.

Rationale and alternatives

Instead of taking a Request, Response pair the crate could operate on strings instead. This removes much of the benefit Rust's type system has to offer, so apart from being able to interface with more projects it doesn't have much going for it.

Setting up encoding is often delegated to CDNs, or proxy servers (e.g. apache, nginx), but with HTTP/2 becoming more prominent it's crucial to be able to run it at the application layer too. This crate serves to make compression something you can drop into your application server and it just works.

Prior Art

There exists prior art in Node's http-gzip-maybe which was written for the Bankai web compiler. http-gzip-maybe does not support Brotli.

Middleware

At the time of writing there exist several different middleware solutions, but there is no shared standard yet. Therefore not tying into any specific middleware solution provides the most flexibility. After all, it should be able to work with any framework that uses Hyper as its base.

A way to integrate it with middleware providers would be to create wrapper packages at first. If a shared middleware standard emerges, it should be straightforward to add a new struct to the project. But it's probably best to start as generic as possible.

Unresolved Questions

Perhaps a future version of this crate could auto-set the compression parameter based on the number of bytes it's about to send. This would remove even more of the configuration needed, and further help improve performance.

hyper and actix-web use the http crate under the hood. Both frameworks seem to expose the http structs as newtypes only. Ideally there would be a way to operate on hyper's, actix-web's, and http's structs directly, but I don't know how this can be done.

edit: apparently the http structs exported by hyper are not newtypes.

Conclusion

We propose a crate to handle user-agent dependent variable compression for Hyper. The implementation is left up to volunteers. Comment below if you'd like to work on this; I'd be happy to help mentor the development of this crate. Thanks!

Edits

  • Included a note about the http crate.
  • Included note about streaming compression.
  • Included note about middleware.
  • Changed title.
  • Adjusted the statement about newtypes.

Tracking issue: implications of borrowing across yield points

With the ability to work with borrowed data within futures, many APIs can change shape so that they work with e.g. &mut references rather than threading ownership. As part of the futures 0.3 work, we need to work through these implications.

  • io APIs
  • Stream
  • Sink
  • Channels

Enabling shared-nothing executors

Overview

Today's Futures design seems to be focused more on M:N scheduling. It's indeed the most popular and most generic use case, but I think we should also discuss future-proofing Futures for shared-nothing and no_std use cases.

Some examples:

  • OS kernels
  • embedded systems
  • databases with shared nothing design (e.g ScyllaDB)
  • NUMA targeting systems

To summarize: there's usually either only one thread or many, but in the latter case each one is pinned to a CPU core. No synchronization is allowed, including atomics. For no_std, task wakening is usually custom-built using platform-specific tools. And std systems wake tasks only in the current thread; cross-thread wakening is very specific to the higher-level system design. Do concurrency, leave parallelism to the user.

For the implementation it means that none of the standard operations like spawning a task, polling, or wakening should use mutexes or atomics. There should also be no Sync and Send bounds.

Today it feels very awkward to implement something like this with futures.

Details

Wake, for example, has to live inside an Arc and has to be Send + Sync. For an implementer this means one of these options:

  • add synchronization everywhere
  • make Wake::wake noop dummy and use different waking strategy
  • make it really work inside only one thread and pray that it will never cross thread boundaries

The Executor trait also requires every task to be Send, making every Future that wants to use a default executor incompatible with a shared-nothing system.

As far as I can tell after a quick look, this PR seems to be dealing with similar issues.

Ideas?

There might already be a way to solve all these problems that I just don't know about; in that case we should just write some docs describing how to approach this problem.

If there is no clear way, then off the top of my head, I'd say make Wake thread-local and add an upgrade method that returns Option<SyncWake>. For Executor, add spawn_local.

This leaves the question, though, of how an intermediate Future should know which API version to use: local or sync.
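A rough sketch of what that split might look like; all names here are hypothetical, illustrating the proposal rather than any existing API.

```rust
use std::rc::Rc;
use std::sync::Arc;

/// Thread-safe waking, roughly as Wake works today.
trait SyncWake: Send + Sync {
    fn wake(self: Arc<Self>);
}

/// Thread-local waking: no Send/Sync bounds, so no atomics or
/// synchronization are required to implement it.
trait LocalWake {
    fn wake_local(self: Rc<Self>);

    /// Opt-in escape hatch: return a thread-safe handle if this waker
    /// supports cross-thread wakeups. Shared-nothing executors simply
    /// return None, and an intermediate Future can probe this to learn
    /// which mode it is running in.
    fn upgrade(self: Rc<Self>) -> Option<Arc<dyn SyncWake>> {
        None
    }
}
```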

What do you think?

Streaming Stream

Many (most?) of the Streams that I write want to yield data that references some underlying buffer, and it's expected that each element will be processed in series (rather than used with collect, chunk, buffered, etc.). This is similar to the StreamingIterator trait that was one of the original motivations for generic associated types. With GATs, the trait definition would look something like this:

trait Stream {
    type Item<'a>;
    fn poll_next<'a>(&'a mut self, cx: &mut task::Context) -> Async<Self::Item<'a>>;
}

Without GATs, you can hack something similar together with HRTBs, but it's obviously less ergonomic:

trait StreamLt<'a> {
    type Item: 'a;
    fn poll_next(&'a mut self, cx: &mut task::Context) -> Async<Self::Item>;
}

trait Stream: for<'a> StreamLt<'a> {}
impl<T> Stream for T where T: for<'a> StreamLt<'a> {}

Note that today's T: Stream could exist simultaneously through an extension trait with a bound like this:

fn foo<T>(x: T)
where
    T: Stream,
    T: for<'a, 'b> StreamLt<'a, Item=<T as StreamLt<'b>>::Item>
{}
// note that this doesn't compile right now, but I don't know of any technical reason why this couldn't work

The idea is to force the two different StreamLt impls to unify; in GAT terms, it would be something like where for<'a,'b> Stream::Item<'a> = Stream::Item<'b>.

Proposal: adopt "async first" policy

Background

The sync/async distinction effectively splits the ecosystem.

If one is writing an application, one is immediately restricted to the set of crates that matches the application's model. Ideally there's an existing crate which has a complete API, works well, is well tested, and is actively maintained. But in practice there's often "the sync one" or "the async one", and even if both exist they are maintained by different people and are very unlike each other, at least at the API level.

Worst case, the crate just doesn't exist for the model you want, so you either need to write your own, or adapt an existing crate with the wrong model to fit your application.

As we move towards async/await, the distinction between sync and async programs becomes syntactically less distinct, which means the overhead for either always using async or converting sync to async is lower.

In this world, having a complete split between the sync and async ecosystems will become an acute problem.

Proposal

We promote the idea that all network protocol crates be written in the async style first, and then provide sync shims/adapters for sync users. This is because async code is strictly more general - you can easily adapt it to sync by putting it in its own little tokio runtime environment.

On the other hand, sync code can't be easily adapted to async, except by decoupling it into threads. This is much more expensive, and may preclude many uses (i.e. it's only really viable for small-scale implementations).

The main downside is that async code is a lot harder to write and debug at the moment. But the Rust community is putting lots of effort into improving this, so the overall goal aligns with this specific proposal.

Net WG newsletter #2

Leave comments here for anything you'd like highlighted in the second newsletter!

Rethinking the bounded mpsc channel

Hi all,

I'd like to bring rust-lang/futures-rs#800 to the attention of the WG. I won't reproduce the entire content of the issue here, but the short version is that the current bounded mpsc channel in futures-rs is difficult to use while preserving back-pressure.

That issue left off with a PoC channel implementation that does a better job at enforcing back-pressure, and shouldn't have much of an effect otherwise on the API or performance. The hope is that proof of concept can serve as the basis or inspiration for overhauling the current version in futures-rs.

Hopefully this is in the purview of the WG, because I think it's a critical component for building robust async systems!

Add hooks sufficient to build task-local data

As currently proposed, futures-core 0.3 will not build in task-local data. The idea is to instead address needs here through external libraries via scoped thread-local storage.

For any robust task-local data, however, we want the ability to hook into the task spawning process, so that data inheritance schemes can be set up.

This issue tracks the design of such hooks.

no_std compatible async/await

This is mostly a local tracking issue for wg-net-embedded of blocking work that needs to happen in rustc.

The initial implementation of async/await! used TLS for ease of implementation. Before we can start developing using async/await! on embedded we will need to move this onto something that is no_std compatible.

Async Closures With Borrowed Arguments

Async closures with borrowed arguments are a messy thing. Since the output type of the closure (impl Future + 'a) depends upon the input lifetime of the argument, these closures are always higher-ranked. Eventually I think we'd like to support this through syntax like this:

fn higher_order_fn(
    async_fn: impl for<'a> FnOnce(&'a Foo) -> impl Future<Output = ()> + 'a
) { ... }

// and someday

fn higher_order_fn(
    async_fn: impl async FnOnce(&Foo),
) { ... }

Unfortunately, the current impl Trait-in-args implementation doesn't support these sorts of higher-ranked use cases. Luckily it's not backwards incompatible to add support for them, since we've currently banned impl Trait inside of Fn syntax.

Until we get one of these nicer solutions, however, we need a way to type these sorts of functions. I made an attempt at one of them that looks like this:

pub trait PinnedFnLt<'a, Data: 'a, Output> {
    type Future: Future<Output = Output> + 'a;
    fn apply(self, data: PinMut<'a, Data>) -> Self::Future;
}

pub trait PinnedFn<Data, Output>: for<'a> PinnedFnLt<'a, Data, Output> + 'static {}
impl<Data, Output, T> PinnedFn<Data, Output> for T
    where T: for<'a> PinnedFnLt<'a, Data, Output> + 'static {}

impl<'a, Data, Output, Fut, T> PinnedFnLt<'a, Data, Output> for T
where
    Data: 'a,
    T: FnOnce(PinMut<'a, Data>) -> Fut,
    Fut: Future<Output = Output> + 'a,
{
    type Future = Fut;
    fn apply(self, data: PinMut<'a, Data>) -> Self::Future {
        (self)(data)
    }
}

pub fn pinned<Data, Output, F>(data: Data, f: F) -> PinnedFut<Data, Output, F>
    where F: PinnedFn<Data, Output>,
{ ... }

Unfortunately, this fails because of rust-lang/rust#51004. Even if this worked, though, this is an awful lot of code to write for each of these functions; we'd likely want some sort of macro.

Until progress is made on this issue, it's impossible to make a function which accepts an asynchronous closure with an argument borrowed for a non-fixed lifetime (precise named lifetimes like impl FnOnce(&'tcx Foo) -> F are possible).

Relationship between wg-net and async disk I/O?

This WG is an important effort in the Rust community related to the development of networking libraries and applications. However, by the nature of high-speed networking, this has resulted in a lot of discussion and work related to futures, async/await, and executors for them such as Tokio. I wanted to pose the question of whether it makes sense for this group to also include disk I/O.

I know that, obviously by the group's current name, the answer is no. However, with so much interest in high-performance async code, it seems like a logical fit. As someone who has worked on high-performance I/O systems, async disk I/O has not received nearly as much attention as async network I/O. While there are reasons for this phenomenon (file system support, OS support of file system features, creating a scheduler, etc.), the fact remains that applications which are engineered for high-speed, async network I/O can benefit from having a shared abstraction model which also encompasses disk I/O. The SeaStar framework, used by ScyllaDB and referenced in some other issues of this group, is one such example.

Consolidate timer mechanisms

This is a follow-up from rust-lang/futures-rs#818.

Timers (and the timeouts they enable) are an important piece of functionality for networking services, enabling them to react to slow running or stalled connections or requests.

Currently, there are two crates that provide futures-based timer functionality, futures-timer and tokio-timer. Both of these crates can define their own "global" default timer, and are generally not interoperable (though they can both be used simultaneously).

The futures-timer crate by default provides a separate thread for handling timer operations, while tokio-timer hooks into the various tokio executors to provide thread-local timer handling (there is some rationale in tokio-rs/tokio-rfcs#2 (comment) for why this might be desirable).

While there might be good reasons to have multiple timer implementations, I believe it would be desirable to align on a single API for providing and consuming timers, so that similar to executors, any timer implementation could be used with any executor and/or event loop.

Consolidate on http and websocket server and client libraries

I am creating this discussion to keep track of a common "understanding" of how to approach HTTP and websocket server and client development in Rust, and where this evolution is going. If this is not the right format for an issue, please advise how I should change it.

Inspirational goal:
As a Rust programmer I would like to know the set of libraries which are:

  • recommended to be used for development of HTTP servers, HTTP clients, Websocket servers and Websocket clients.
  • ranked depending on a level (eg. Hyper - lower level, Reqwest - higher level, Rocket - very high level framework)
  • work with stable Rust
  • achieved 1.0 version
  • allow sync and async code
  • support latest HTTP standards (like HTTP 2 and websocket upgrades)
  • looked after by the community as standard rust library (means RFCs, quality control, evolution roadmap, backed up owners, maybe under rust-lang organization on github or similar, etc.)

Current state:
As a Rust programmer, I understand that

  • "Looked after" Hyper is a lower-level lib for HTTP and websocket server and client code. "Looked after" Reqwest is a higher-level HTTP client. rust-websocket is a higher-level websocket client and server library of unknown state, with no guarantee of future development (at least this is the impression I have got)
  • Rocket, for higher-level HTTP servers, requires nightly Rust; not sure about async
  • none are 1.0 versions
  • unclear or incomplete HTTP 2.0 support
  • async is unstable in Reqwest
  • looked after by respective owners, but is there long-term back up by Rust sponsors overall (sorry I am not sure how it is setup currently)

Reference state:

  • The Go language is a good example where the networking lib is very capable and is part of the standard lib. Note, I do not say that an HTTP lib should be in std. I am saying it might be "looked after" as if it were std, and aim for more complete support of HTTP standards and async development. The availability of a decent networking library gave Go a huge advantage from the start.
  • Java has got many libs, but the default options are Jetty and Vertx: decent networking libs for accomplishing any HTTP and websocket development in Java.

tls-keygen

I'm not exactly sure if this is the right place to post this, but I hope it is!

Something I've recently really come to appreciate in the Node ecosystem is a package called tls-keygen. It allows creating + trusting TLS certificates on localhost, cross-platform, with no user interaction needed.

I think it might be a valuable package to port to Rust, as it would make it a lot easier to run secure connections locally. This is especially useful in the context of running http/2 locally, which is a requirement if you want to test things like PUSH frames.

The code seems fairly straightforward to port. I'm currently not doing anything related to services in Rust, so I figured it might make more sense for someone who's doing more servicy things to take a stab at this.

Hope this makes sense, and is somewhat useful. Thanks!

Reasons it's useful to generate TLS locally

  • HTTP/2 in the browser requires it.
  • Less chance people will resort to checking in SSL certificates because it's more convenient (fingers crossed).
  • No more -k flag needed in curl (I think).
  • Ability to test Web Workers on localhost.
  • Ability to test some newer APIs on localhost.
  • Helps bring production and development environments closer.

Screenshot

This is the user experience people usually have with self-signed certificates. Scary for people new to seeing it, and inconvenient for people in the long run. With tls-keygen we can do better!
