
floneum's Introduction

Floneum is a graph editor that makes it easy to develop your own AI workflows

(Screenshot of the Floneum graph editor, 2023-06-18)

Features

  • Visual interface: You can use Floneum without any knowledge of programming. The visual graph editor makes it easy to combine community-made plugins with local AI models
  • Instantly run local large language models: Floneum does not require any external dependencies or even a GPU to run. It uses Kalosm to run large language models locally. Because of this, you can run Floneum with your data without worrying about privacy
  • Plugins: By combining large language models with plugins, you can improve their performance and make models work better for your specific use case. All plugins run in an isolated environment so you don't need to trust any plugins you load. Plugins can only interact with their environment in a safe way
  • Multi-language plugins: Plugins can be used in any language that supports web assembly. In addition to the API that can be accessed in any language, Floneum has a rust wrapper with ergonomic macros that make it simple to create plugins
  • Controlled text generation: Plugins can control the output of the large language models with a process similar to JSONformer or guidance. This allows plugins to force models to output valid JSON, or any other structure they define. This can be useful when communicating between a language model and a typed API (see the sketch below)
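
As an illustration, here is a minimal sketch of that JSONformer/guidance-style process: at every decoding step, candidate tokens are filtered so the output always remains a valid prefix of the structure being enforced. The `next_tokens` and `is_valid_prefix` callbacks are hypothetical stand-ins, not Floneum's actual plugin API:

/// A minimal sketch of constrained decoding with assumed names.
/// `next_tokens` returns candidate tokens with probabilities for the text so far;
/// `is_valid_prefix` checks whether a string can still grow into a valid structure.
fn generate_constrained(
    mut next_tokens: impl FnMut(&str) -> Vec<(String, f32)>,
    is_valid_prefix: impl Fn(&str) -> bool,
    max_tokens: usize,
) -> String {
    let mut output = String::new();
    for _ in 0..max_tokens {
        // Sort candidates by probability (highest first)...
        let mut candidates = next_tokens(&output);
        candidates.sort_by(|a, b| b.1.total_cmp(&a.1));
        // ...and keep the most likely token that preserves validity.
        match candidates
            .into_iter()
            .find(|(token, _)| is_valid_prefix(&format!("{output}{token}")))
        {
            Some((token, _)) => output.push_str(&token),
            None => break, // no token can extend the structure; stop
        }
    }
    output
}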

Floneum Quickstart

Download the latest release, run the binary, wait a few seconds for all of the plugins to download and start building!

Documentation

Kalosm

Kalosm is a simple interface for pre-trained models in rust that backs Floneum. It makes it easy to interact with pre-trained language, audio, and image models.

There are three different packages in Kalosm:

  • kalosm::language - A simple interface for text generation and embedding models and surrounding tools. It includes support for search databases, and text collection from websites, RSS feeds, and search engines.
  • kalosm::audio - A simple interface for audio transcription and surrounding tools. It includes support for microphone input and the whisper model.
  • kalosm::vision - A simple interface for image generation and segmentation models and surrounding tools. It includes support for the wuerstchen and segment-anything models and integration with the image crate.

A complete guide for Kalosm is available on the Kalosm website, and examples are available in the examples folder.

Kalosm Quickstart!

  1. Install Rust
  2. Create a new project:
cargo new next-gen-ai
cd ./next-gen-ai
  3. Add Kalosm as a dependency:
cargo add kalosm
cargo add tokio --features full
  4. Add this code to your main.rs file:
use std::io::Write;

use kalosm::{*, language::*};

#[tokio::main]
async fn main() {
    // Start the Phi model; the weights are downloaded on first run
    let mut llm = Phi::start().await;
    let prompt = "The following is a 300 word essay about Paris:";
    print!("{}", prompt);

    // Stream text from the model, limiting the maximum generation length
    let stream = llm.stream_text(prompt).with_max_length(1000).await.unwrap();

    // Print each word to stdout as soon as it is generated
    let mut words = stream.words();
    while let Some(text) = words.next().await {
        print!("{}", text);
        std::io::stdout().flush().unwrap();
    }
}
  5. Run your application with:
cargo run --release

Community

If you are interested in either project, you can join the Discord to discuss the project and get help.

Contributing

  • Report issues on our issue tracker.
  • Help other users in the Discord.
  • If you are interested in contributing, feel free to reach out on Discord.

floneum's People

Contributors

dependabot[bot], ealmloff, haoxins, kerfufflev2, lafcorentin, newfla, yevgnen


floneum's Issues

Better control flow solution

Currently, control flow in Floneum is limited. You can return None from a node to stop Floneum from running future connected nodes.

This makes building an if statement possible, but without state (#3), you cannot implement more complicated control flow like a loop that repeats four times.

You are also currently limited to returning a fixed type from an if statement, which means there must be a different if statement for each type of data. (#4)
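
As a concrete illustration of the current mechanism, here is a minimal filter plugin in the style of the `floneum_rust` macro shown elsewhere in this repo (the `filter` node itself is hypothetical): returning `None` stops downstream connected nodes from running.

use floneum_rust::*;

#[export_plugin]
/// Passes the input through only if it contains the keyword;
/// returning None halts all downstream connected nodes.
fn filter(input: String, keyword: String) -> Option<String> {
    input.contains(&keyword).then_some(input)
}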

First party database integration

Specific Demand

Kalosm (and open-weight LLMs in general) is very useful for private, local AI. As part of an AI application, you often need a database of embeddings for retrieval. Kalosm currently supports a limited vector-only database which rebuilds the entire index every time you insert a new embedding.

Implement Suggestion

SurrealDB seems like a good candidate to support as the main first-party database, but Kalosm should definitely allow you to bring your own database as well.

Kalosm should support:

  • Easily embedding your database in your application (without a remote database)
  • Incrementally adding embeddings to your vector database without rebuilding the entire index (annoy added in #126 seems to support this now?)
  • Some way to integrate with a normal database and a plain/fuzzy text search engine

At the same time, you should be able to integrate your own database of choice into Kalosm if you choose not to use the first-party database we support (see the sketch below).
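
One possible shape for the bring-your-own-database story is a small trait that both the first-party store and third-party stores implement. This is a sketch with assumed names, not Kalosm's current API:

/// A hypothetical pluggable vector store interface.
trait VectorDb {
    type Id;

    /// Add a single embedding without rebuilding the whole index.
    fn insert(&mut self, embedding: Vec<f32>, document: String) -> anyhow::Result<Self::Id>;

    /// Return the ids of the `top_k` documents nearest to the query embedding.
    fn search(&self, query: &[f32], top_k: usize) -> anyhow::Result<Vec<Self::Id>>;
}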

Reorganize top level package

The top level folder is getting crowded. We should move some of the crates into folders to keep it clean. We could move packages into three main categories:

  • Models (rphi, rwhisper, etc)
  • Interfaces (kalosm-language, kalosm-audio, etc)
  • Floneum (plugin, floneum, floneum-cli, etc)

Offline package manager caching

Floneum should cache the index created in #24 so that new projects can be created when offline without manually inputting the file path to all plugins.

Crash at startup

I've tried to run with cargo run --release,
however it crashes instantly:

thread 'main' panicked at 'GTK has not been initialized. Call `gtk::init` first.', /home/zsombor/.cargo/registry/src/index.crates.io-6f17d22bba15001f/gtk-0.16.2/src/auto/menu_item.rs:56:9
stack backtrace:
   0:     0x55ebbf5169cf - std::backtrace_rs::backtrace::libunwind::trace::h782cc21a5acaf6cb
                               at /rustc/eb26296b556cef10fb713a38f3d16b9886080f26/library/std/src/../../backtrace/src/backtrace/libunwind.rs:93:5
   1:     0x55ebbf5169cf - std::backtrace_rs::backtrace::trace_unsynchronized::hc579eb24ab204515
                               at /rustc/eb26296b556cef10fb713a38f3d16b9886080f26/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5
   2:     0x55ebbf5169cf - std::sys_common::backtrace::_print_fmt::h7223525cfdbacda2
                               at /rustc/eb26296b556cef10fb713a38f3d16b9886080f26/library/std/src/sys_common/backtrace.rs:65:5
   3:     0x55ebbf5169cf - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::hbd7d55b7108d2ab8
                               at /rustc/eb26296b556cef10fb713a38f3d16b9886080f26/library/std/src/sys_common/backtrace.rs:44:22
   4:     0x55ebbeedb19f - core::fmt::rt::Argument::fmt::hb4f4a02b9bd9dd49
                               at /rustc/eb26296b556cef10fb713a38f3d16b9886080f26/library/core/src/fmt/rt.rs:138:9
   5:     0x55ebbeedb19f - core::fmt::write::h6d54cd7c9e155ec5
                               at /rustc/eb26296b556cef10fb713a38f3d16b9886080f26/library/core/src/fmt/mod.rs:1094:21
   6:     0x55ebbf4e6cc0 - std::io::Write::write_fmt::h6a453a71c692f63b
                               at /rustc/eb26296b556cef10fb713a38f3d16b9886080f26/library/std/src/io/mod.rs:1713:15
   7:     0x55ebbf5182ff - std::sys_common::backtrace::_print::h1cbaa8b42678f928
                               at /rustc/eb26296b556cef10fb713a38f3d16b9886080f26/library/std/src/sys_common/backtrace.rs:47:5
   8:     0x55ebbf5182ff - std::sys_common::backtrace::print::h4ddf81241a51b337
                               at /rustc/eb26296b556cef10fb713a38f3d16b9886080f26/library/std/src/sys_common/backtrace.rs:34:9
   9:     0x55ebbf517b85 - std::panicking::default_hook::{{closure}}::hff91f1f484ade5cd
  10:     0x55ebbf518a94 - std::panicking::default_hook::h21f14afd59f7aef9
                               at /rustc/eb26296b556cef10fb713a38f3d16b9886080f26/library/std/src/panicking.rs:288:9
  11:     0x55ebbf518a94 - std::panicking::rust_panic_with_hook::h45f66047b14c555c
                               at /rustc/eb26296b556cef10fb713a38f3d16b9886080f26/library/std/src/panicking.rs:705:13
  12:     0x55ebbf518553 - std::panicking::begin_panic_handler::{{closure}}::h49d1a88ef0908eb4
                               at /rustc/eb26296b556cef10fb713a38f3d16b9886080f26/library/std/src/panicking.rs:595:13
  13:     0x55ebbf5184e6 - std::sys_common::backtrace::__rust_end_short_backtrace::hccebf9e57f8cc425
                               at /rustc/eb26296b556cef10fb713a38f3d16b9886080f26/library/std/src/sys_common/backtrace.rs:151:18
  14:     0x55ebbf5184d1 - rust_begin_unwind
                               at /rustc/eb26296b556cef10fb713a38f3d16b9886080f26/library/std/src/panicking.rs:593:5
  15:     0x55ebbed86ce2 - core::panicking::panic_fmt::h54ec9d0e3180a83d
                               at /rustc/eb26296b556cef10fb713a38f3d16b9886080f26/library/core/src/panicking.rs:67:14
  16:     0x55ebbf0d6875 - gtk::auto::menu_item::MenuItem::with_mnemonic::h0688816215ec2f0a
  17:     0x55ebbf5249b3 - tao::menu::MenuBar::add_item::h5128a97673f7ef65
  18:     0x55ebbee788e6 - floneum::main::{{closure}}::h2338f1b7d5a080f8
  19:     0x55ebbee77c10 - floneum::main::he651d00b74f40c4b
  20:     0x55ebbee04ef0 - std::sys_common::backtrace::__rust_begin_short_backtrace::h61125b8ff671c428
  21:     0x55ebbee7c0e6 - main
  22:     0x7f81b4204d90 - __libc_start_call_main
                               at ./csu/../sysdeps/nptl/libc_start_call_main.h:58:16
  23:     0x7f81b4204e40 - __libc_start_main_impl
                               at ./csu/../csu/libc-start.c:392:3
  24:     0x55ebbedbafa5 - _start

Expose Component State

Currently, there is no easy way to persist state between runs of a node instance. Persisting state can be useful for caching requests and computations, and for implementing more complex control flow.

More efficient serialization of plugin data

Floneum stores the plugins you have loaded in a shareable save.bin file with serde. The file is compressed to keep the size manageable, but we currently store the plugin bytes in an inefficient way:

Each instance of the plugin has a copy of the bytes of the WASM used to load the plugin. In addition to this, the WASM is stored globally in the list of plugins.

Instead, each instance should store a reference to the main plugin list that holds the bytes. DeserializeSeed could be helpful for implementing Deserialize on the instance: it allows deserializing a value with extra state passed in from the parent (see the sketch below).
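
Here is a sketch of how DeserializeSeed could carry the shared plugin list into deserialization. The `PluginList` and `PluginInstance` types are illustrative stand-ins, not Floneum's real types:

use serde::de::{DeserializeSeed, Deserializer};
use serde::Deserialize;

/// The WASM bytes, stored once in the global plugin list.
struct PluginList { wasm: Vec<Vec<u8>> }

/// An instance only stores an index into that list instead of its own copy.
struct PluginInstance { plugin_index: usize }

/// The seed carries a reference to the parent list while deserializing.
struct InstanceSeed<'a> { plugins: &'a PluginList }

impl<'de, 'a> DeserializeSeed<'de> for InstanceSeed<'a> {
    type Value = PluginInstance;

    fn deserialize<D: Deserializer<'de>>(self, deserializer: D) -> Result<Self::Value, D::Error> {
        // Only an index is read from the file; the bytes come from the shared list.
        let plugin_index = usize::deserialize(deserializer)?;
        debug_assert!(plugin_index < self.plugins.wasm.len());
        Ok(PluginInstance { plugin_index })
    }
}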

Add an OpenAI proxy example

Kalosm should support remote llama models with the same API as OpenAI and provide an example of using a language model with that proxy via litellm.

Stream incompatibility with ecosystem?

Hi, I'm trying to use kalosm in an Axum router and to stream the text generation.
It looks like ChannelTextStream doesn't fit how Axum (or tokio?) expects a regular Stream.

This expects a TryStream:

async fn candle_llm() -> impl IntoResponse {
    println!("building model");
    let mut model = Llama::builder()
        .with_source(LlamaSource::llama_7b_code())
        .build()
        .unwrap();
    println!("building model: done");
    let prompt = "write binary search";
    println!("{prompt}");
    let mut result = model.stream_text(prompt).await.unwrap();
    println!("stream ready");
    Body::from_stream(result.into()) // compilation error
}

Here's an example trying to use SSE, which hangs (no streaming):

struct Streamer {
    inner: ChannelTextStream<String>,
}
impl Stream for Streamer {
    type Item = Result<Event, Infallible>;

    fn poll_next(
        mut self: std::pin::Pin<&mut Self>,
        cx: &mut std::task::Context<'_>,
    ) -> std::task::Poll<Option<Self::Item>> {
        self.inner
            .poll_next_unpin(cx)
            .map(|s| Some(Ok(Event::default().data(s.unwrap_or_default()))))
    }
}

async fn candle_llm() -> Sse<impl Stream<Item = Result<Event, Infallible>>> {
    println!("building model");
    let mut model = Llama::builder()
        .with_source(LlamaSource::llama_7b_code())
        .build()
        .unwrap();
    println!("building model: done");
    let prompt = "write binary search";
    println!("{prompt}");
    let mut result = model.stream_text(prompt).await.unwrap();
    println!("stream ready");
    Sse::new(Streamer { inner: result }).keep_alive(
        axum::response::sse::KeepAlive::new()
            .interval(Duration::from_secs(1))
            .text("keep-alive-text"),
    )
}

Add examples/plugins for common software integrations

Floneum should provide plugins for some common software. Eventually I would love for the community to take over creating integrations for different programs, but until then we should provide some common adapters:

  • Excel
  • Email
  • Calendar
  • Calculations

Switch to Dioxus desktop

The egui_graph library doesn't support some features we need (mainly working with `any` types). Egui itself has issues with layout (you cannot have an expandable scrollbar). Dioxus desktop is not immediate mode, so it should be easier to do complex layouts. We will need to create a graph interface ourselves, but that will give us more control over how the interface looks and how types are implemented.

Create nested plugins/Meta plugins

As a longer term goal, users should be able to create new plugins from a collection of nodes. These "meta plugins" should be packaged and redistributed like normal plugins

Prompt space searching/prompt optimization

Specific Demand

Prompts are very flaky in general. A prompt that performs well for one model will fail for another model. It would be nice if Kalosm and Floneum could support a more flexible way to define prompts

Implement Suggestion

Kalosm should support automatically modifying a prompt to perform better based on #113. This could be anything from selecting the best of a number of predefined prefixes for your prompt to automatically modifying and evaluating prompts with another model (see the sketch below).
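
The simplest variant named above could look something like this. This is only a sketch; `score` is a hypothetical evaluation callback (for example, accuracy of the downstream task on a small test set):

/// Pick the predefined prefix that makes the prompt perform best.
fn best_prefix<'a>(
    prefixes: &'a [&'a str],
    prompt: &str,
    mut score: impl FnMut(&str) -> f32,
) -> &'a str {
    let mut best = ("", f32::MIN);
    for &prefix in prefixes {
        // Evaluate the full prompt with this candidate prefix attached.
        let candidate_score = score(&format!("{prefix}{prompt}"));
        if candidate_score > best.1 {
            best = (prefix, candidate_score);
        }
    }
    best.0
}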

Integrate headless browser

A headless browser would allow Floneum plugins to view websites that are client side rendered. This would need to be implemented in the plugin-system because there are no headless browsers that run in WASI.

Kalosm Docs/website

Specific Demand

Kalosm should have a home page that explains the project and points back to the repo. It should also have a book like https://floneum.com/docs that explains different parts of the library

Implement Suggestion

The Floneum website could be a good starting place to copy from

Add calculator node

This is one part of #27

Floneum should have a calculator node that resolves an equation to a number. It should support:

  • operators: +, -, *, /, ^

This node should take in a string equation and output a number result:

use floneum_rust::*;

#[export_plugin]
/// calculates an equation
fn calculate(equation: String) -> f64 {
    todo!()
}

If possible, we should try to avoid using the python plugin because its bundle size is quite large and we don't really need all of Python's power here. We could use a parser like pest or a Rust library that solves equations (see the sketch below).
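
One possible shape for the implementation, assuming an off-the-shelf expression parser like the `meval` crate (a hand-rolled pest grammar would work too):

use floneum_rust::*;

#[export_plugin]
/// calculates an equation
fn calculate(equation: String) -> f64 {
    // Evaluate with the meval crate, which supports +, -, *, / and ^.
    // Return NaN on a parse error instead of panicking.
    meval::eval_str(&equation).unwrap_or(f64::NAN)
}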

Optional enhancements:

Support more OCR models

Specific Demand

TrOCR performs well for single line text recognition, but there are other important parts for document AI. Kalosm needs to be able to detect lines within a document then recognize the text

Implement Suggestion

Visual Selector Generator

Users should be able to select different parts of the browser page they want to extract and Floneum should generate selectors from those selections

Dragging offset when loading existing workflows

Hello, developers. You have created an interesting game for Windows 11 - try to catch a node with the mouse. But this behavior seems to be only with existing projects. This is the latest 0.2 version for Windows.

(video attachment: bandicam_2023-11-17_22-57-51-803.mp4)

Plugins as Values

Passing around plugins as if they were values can be useful to allow components to run a list of other plugins conditionally (like ChatGPT plugins), or inspect a plugin to generate a template for the large language model to use as a prompt

Improve Type Handling

Currently all plugins have fixed input and output types. This can make it difficult to work with data if you do not care about the content of the data or you want to accept multiple types of data.

We could allow union types as inputs and outputs. This would allow nodes to accept any set of types (see the sketch below).
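
For example, a node that accepts either text or a number could declare a union input along these lines (a hypothetical sketch of how node IO types might look, not the current plugin API):

/// A hypothetical union input: either upstream type can connect to this node.
enum TextOrNumber {
    Text(String),
    Number(f64),
}

/// A node body that handles both variants of the union.
fn describe(input: TextOrNumber) -> String {
    match input {
        TextOrNumber::Text(text) => text,
        TextOrNumber::Number(number) => number.to_string(),
    }
}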

Improve controlled generation support

Currently, Floneum allows you to set a grammar that text generation should conform to. This is useful, but we could take it farther:

Text generation could be represented as a tree of words. The current position in that text is like a cursor in that tree.

In the rust API, we could allow users to either work with the raw cursor API, or use a validation callback:

let session = ModelInstance::new(ModelType::LlamaSevenB);
session.generate_validated(|text| text.len() < 5 && text.chars().all(|c| c == '*'));

Batched generation support

Specific Demand

Kalosm-language should support batched generation for faster local inference. This can be very useful when generating many unrelated streams of text

Implement Suggestion

We can change the Model trait to optionally support batched generation, then implement batched generation for each of the Kalosm models (see the sketch below).
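
A sketch of what the optional batched entry point could look like. The signatures here are assumed and simplified; the real Model trait is async and more involved:

/// A simplified stand-in for the Model trait with an opt-in batched method.
trait Model {
    fn generate(&mut self, prompt: &str) -> String;

    /// The default falls back to sequential generation, so existing models
    /// keep working unchanged; batched backends override this for speed.
    fn generate_batched(&mut self, prompts: &[&str]) -> Vec<String> {
        prompts.iter().map(|&prompt| self.generate(prompt)).collect()
    }
}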

Document workflow sharing

You can share workflows created with Floneum by sending the save.bin file to another user with the same version of Floneum.

We should document this in the guide and make a clear button in the UI related to loading a workflow file.

Dead simple multi-modal data processing library

In each of the release posts there has been feedback asking for integrations with databases, additional models, and some way to use this in other tools. This issue is a response to that need.

As Floneum grows, we will be increasingly building out our own abstractions on top of pre-trained language, image, and audio models. In the spirit of open source, we can extract these abstractions into a separate library that other applications can use.

This library consists of three main parts that have been crucial to Floneum:

  • Collecting data: Web scraping, article extraction, listening from the microphone with whisper, using the system's camera to get image/OCR data.
  • Processing data: Summarization with LLMs, embedding data, in-memory embedding database search
  • Outputting data: Visualizations, text output, and save-load functionality

Rewrite Kalosm Llama implementation for training and modularity

The llama implementation Kalosm uses was originally forked from candle-transformers and modified for better performance and bug fixes. However, in order to support training models, Kalosm needs to support both quantized and unquantized models behind the same interface. Ideally, the implementation should be modular so that you can intercept and train models based on different parts of the output (like taking out the last few layers and training a classifier).

This is a first step towards training/fine tuning transformers in Kalosm (part of #44)

Documentation on constrained generation?

Problem

Hello! I'm interested in using Kalosm's constrained generation in a project of mine, but I can only find a single example, which instantiates an opaquely defined parser. Is there a brief doc on the capabilities of constrained generation available?

Thanks for the solid work on this project!

Loading language models from local files

Thanks for the great work.
I think support for loading models from local files (tokenizers and weights, whether GGUF or not) would be a great feature to implement.
Something like this:

let model = Llama::builder()
    .with_source(
        LlamaSource::tiny_llama_1_1b_chat()
            .with_model_file("path/to/local/gguf_file".to_string()),
    )
    .build()
    .unwrap();

Thank you

Doesn't run on Mac

When I run it, it just shows this:
(screenshot attached)
MacBook Pro M1 Pro running Ventura 13.4.1 (22F82)

Add tracing support

Floneum should support tracing and output logs to either the console or a file for debugging purposes.

This should make it easier to debug problems like #22 without users needing to download rust.

Investigate canvas trails issue

From reddit:

The UI leaves 'trails' on the canvas when dragged, and when I tried to open one of the examples it took very long and lagged so much that I just gave up.

git install failing from missing libspeechd.h

cargo install --git https://github.com/floneum/floneum floneum-cli

error: failed to run custom build command for `speech-dispatcher-sys v0.7.0`

Caused by:
  process didn't exit successfully: `/tmp/cargo-installQJJj02/release/build/speech-dispatcher-sys-af45aba90ed16c49/build-script-build` (exit status: 101)
  --- stdout
  cargo:rustc-link-lib=speechd

  --- stderr
  wrapper.h:1:10: fatal error: 'speech-dispatcher/libspeechd.h' file not found
  thread 'main' panicked at /home/arc/.cargo/registry/src/index.crates.io-6f17d22bba15001f/speech-dispatcher-sys-0.7.0/build.rs:22:10:
  called `Result::unwrap()` on an `Err` value: ClangDiagnostic("wrapper.h:1:10: fatal error: 'speech-dispatcher/libspeechd.h' file not found\n")
  note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
warning: build failed, waiting for other jobs to finish...
error: failed to compile `floneum-cli v0.1.0 (https://github.com/floneum/floneum#c841d341)`, intermediate artifacts can be found at `/tmp/cargo-installQJJj02`.

$ uname -a

Linux clr-c72222fe0e5f4ef1b14c9aa78fd992eb 6.5.5-1367.native #1 SMP Thu Sep 28 12:58:00 PDT 2023 x86_64 GNU/Linux

$ cat /etc/os-release

NAME="Clear Linux OS"
VERSION=1
ID=clear-linux-os
ID_LIKE=clear-linux-os
VERSION_ID=40040
PRETTY_NAME="Clear Linux OS"
ANSI_COLOR="1;35"
HOME_URL="https://clearlinux.org"
SUPPORT_URL="https://clearlinux.org"
BUG_REPORT_URL="mailto:[email protected]"
PRIVACY_POLICY_URL="http://www.intel.com/privacy"
BUILD_ID=40040

$ cargo --version

cargo 1.73.0 (9c4383fb5 2023-08-26)

$ rustup --version

rustup 1.26.0 (5af9b9484 2023-04-05)
info: This is the version for the rustup toolchain manager, not the rustc compiler.
info: The currently active `rustc` version is `rustc 1.73.0 (cc66ad468 2023-10-03)`

Implement different retriever strategies

Specific Demand

Kalosm currently only supports a fairly simple retrieval strategy for retrieval-augmented generation: embed each sentence, then search for documents that contain sentences whose embedding is similar to the embedding of the prompt.

Implement Suggestion

Kalosm should support more retrieval strategies including:

  • Embedding hypothetical questions about a document
  • Embedding a summary of a document
  • Embedding sentences

After a part of the document with a similar embedding is found, Kalosm could:

  • Return a window of sentences around that snippet (see the sketch below)
  • Pipe the document through an LLM that returns the most relevant context
  • Rerank multiple results with an LLM
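
A sketch of the first strategy, the window around the match (names here are illustrative, not Kalosm's API):

/// Return the matched sentence with `n` sentences of context on each side.
fn sentence_window(sentences: &[String], hit: usize, n: usize) -> String {
    let start = hit.saturating_sub(n);
    let end = (hit + n + 1).min(sentences.len());
    sentences[start..end].join(" ")
}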

Train Models Within Kalosm

You should be able to train some models based on a workflow in Floneum using either burn or candle.

This can be useful to "flatten" a workflow into a model. For example, you could have a workflow that does classification on reddit posts. Initially you build that workflow to use llama2 to recognize certain posts. After you get the workflow working, you can hook up a classifier to learn the task the language model is doing. After running it overnight, you should be able to replace the language model with a much faster, smaller classifier

Initially I would like to just support classifiers because you can train them quickly without a GPU, but in the future we could add fine-tuning for language models or image models.

Impossible to spawn a node

Problem

Impossible to spawn a node: plugin instantiation failure.
Getting the following when I try to add Search Node :

  2024-02-03T22:38:41.989527Z ERROR floneum::plugin_search: Failed to insert plugin: import `plugins:main/imports` has the wrong type
    at floneum/floneum/src/plugin_search.rs:108

The relevant code is here.

Steps To Reproduce

  • Build tailwind : npx tailwindcss -i ./input.css -o ./public/tailwind.css --watch
  • Compile floneum with the debug version : RUST_LOG=trace cargo run
  • Try to add Format, Contains, etc. nodes
  • Or try to add Search node

Environment:

  • Renderer version: floneum master
  • Rust version: 1.75.0
  • OS info: macOS

Questionnaire

  • I'm interested in fixing this myself but don't know where to start
  • I would like to fix and I have a solution
  • I don't have time to fix this right now, but maybe later

Allow plugins to define example inputs

Plugins should be able to define example inputs for the node.

For example, the text generation plugin should define an example input prompt to show users how to set up a chat-like prompt, instruction prompt, etc.:

A conversation between a user and an assistant:

### USER
Who are you?

### ASSISTANT
