
RcppHNSW


Rcpp bindings for hnswlib.

Status

February 4 2024 RcppHNSW 0.6.0 is released to CRAN, supporting hnswlib version 0.8.0.

September 19 2023 RcppHNSW 0.5.0 is released to CRAN. It supports hnswlib version 0.7.0, adds a getItems method for returning the items used to build the index, and brings some performance improvements if your data is already column-stored. A small roxygen problem with the package documentation was also fixed.

July 18 2022 RcppHNSW 0.4.1 is released. Unfortunately, there are valgrind problems with the version of hnswlib used in RcppHNSW 0.4.0, so that has been rolled back.

July 16 2022 RcppHNSW 0.4.0 is released. This release matches hnswlib version 0.6.2, but otherwise adds no new features. Some minor CRAN check NOTEs are fixed and there is also a minor license change: previously the license was GPLv3 only; from this version, it is GPLv3 or later.

September 6 2020 RcppHNSW 0.3.0 is now available on CRAN, with multi-threading support.

August 30 2020. Although not yet on CRAN, support for building and searching an index in parallel (via the n_threads function argument and setNumThreads object method) has been added to the current development version (available via devtools::install_github). Thanks to Dmitriy Selivanov for a lot of the work on this.

September 20 2019. RcppHNSW 0.2.0 is now available on CRAN, up to date with hnswlib at https://github.com/nmslib/hnswlib/commit/c5c38f0, with new methods: size, resizeIndex and markDeleted. Also, a bug that prevented searching with datasets smaller than k has been fixed. Thanks to Yuxing Liao for spotting that.

January 21 2019. RcppHNSW is now available on CRAN.

October 20 2018. By inserting some preprocessor symbols into hnswlib, these bindings no longer require a non-portable compiler flag and hence will pass R CMD CHECK without any warnings: previously you would be warned about -march=native. The price paid is not using specialized functions for the distance calculations that are architecture-specific. I have not checked how bad the performance hit is. The old settings remain in src/Makevars and src/Makevars.win (commented out), if you want to build the project from source directly. Otherwise, Release 0.0.0.9000 is the last version with the old behavior, which can be installed with something like:

devtools::install_github("jlmelville/[email protected]")

hnswlib

hnswlib is a header-only C++ library for finding approximate nearest neighbors (ANN) via Hierarchical Navigable Small World graphs (Malkov and Yashunin, 2016). It is part of the nmslib project.

The RcppHNSW Package

An R package that interfaces with hnswlib, taking enormous amounts of inspiration from Dirk Eddelbuettel's RcppAnnoy package which did the same for the Annoy ANN C++ library.

One difference is that I use roxygen2 to generate the man pages. The NAMESPACE is still built manually, however (I don't believe you can export the classes currently).

Installing

From CRAN:

install.packages("RcppHNSW")

Development versions from github:

devtools::install_github("jlmelville/RcppHNSW")

Function example

irism <- as.matrix(iris[, -5])

# function interface returns results for all rows in nr x k matrices
all_knn <- RcppHNSW::hnsw_knn(irism, k = 4, distance = "l2")
# other distance options: "euclidean", "cosine" and "ip" (inner product distance)

# for high-dimensional data you may see a speed-up if you store the data
# where each *column* is an item to be indexed and searched. Set byrow = TRUE
# for this.
# Admittedly, the iris dataset is *not* high-dimensional
iris_by_col <- t(irism)
all_knn <- RcppHNSW::hnsw_knn(iris_by_col, k = 4, distance = "l2", byrow = FALSE)

# process can be split into two steps, so you can build with one set of data
# and search with another
ann <- hnsw_build(irism[1:100, ])
iris_nn <- hnsw_search(irism[101:150, ], ann, k = 5)

Class Example

As noted in the "Do not use named parameters" section below, you should avoid using named parameters when using class methods. But I do use them in a few places below to document the names of the parameters that the positional arguments refer to.

library(RcppHNSW)
data <- as.matrix(iris[, -5])

# Create a new index using the L2 (squared Euclidean) distance
# nr and nc are the number of rows and columns of the data to be added, respectively
# ef and M determine the speed vs accuracy trade-off
# You must specify the maximum number of items to add to the index when it
# is created. But you can increase this number: see the next example
M <- 16
ef <- 200
dim <- ncol(data)
nitems <- nrow(data)
ann <- new(HnswL2, dim, nitems, M, ef)

# Add items to index
for (i in 1:nitems) {
  ann$addItem(data[i, ])
}

# Find 4 nearest neighbors of row 1
# indexes are in res$item, distances in res$distance
# set include_distances = TRUE to get distances as well as the indexes
res <- ann$getNNsList(data[1, ], k = 4, include_distances = TRUE)

# It's more efficient to use the batch methods if you have all the data you
# need at once
ann2 <- new(HnswL2, dim, nitems, M, ef)
ann2$addItems(data)
# Retrieve the 4 nearest neighbors for every item in data
res2 <- ann2$getAllNNsList(data, 4, TRUE)
# labels of the data are in res2$item, distances in res2$distance

# If you are able to store your data column-wise, then the overhead of copying
# the data into a form usable by hnsw can be noticeably reduced
data_by_col <- t(data)
ann3 <- new(HnswL2, dim, nitems, M, ef)
ann3$addItemsCol(data_by_col)
# Retrieve the 4 nearest neighbors for every item in data_by_col
res3 <- ann3$getAllNNsListCol(data_by_col, 4, TRUE)
# The returned nearest neighbor matrices are also stored column-wise
all(res2$item == t(res3$item) & res2$distance == t(res3$distance))

# Save the index
ann$save("iris.hnsw")

# load it back in: you do need to know the dimension of the original data
ann4 <- new(HnswL2, dim, "iris.hnsw")
# new index should behave like the original
all(ann$getNNs(data[1, ], 4) == ann4$getNNs(data[1, ], 4))

# other distance classes:
# Cosine: HnswCosine
# Inner Product: HnswIp
# Euclidean: HnswEuclidean

Here's a rough equivalent of the serialization/deserialization example from the hnswlib README, but using the recently-added resizeIndex method to increase the size of the index after its initial specification, avoiding having to read from or write to disk:

library("RcppHNSW")
set.seed(12345)

dim <- 16
num_elements <- 100000

# Generate sample data
data <- matrix(stats::runif(num_elements * dim), nrow = num_elements)

# Split data into two batches
data1 <- data[1:(num_elements / 2), ]
data2 <- data[(num_elements / 2 + 1):num_elements, ]

# Create index
M <- 16
ef <- 10
# Set the initial index size to the size of the first batch
p <- new(HnswL2, dim, num_elements / 2, M, ef)

message("Adding first batch of ", nrow(data1), " elements")
p$addItems(data1)

# Query the elements for themselves and measure recall:
idx <- p$getAllNNs(data1, k = 1)
message("Recall for the first batch: ", formatC(mean(idx == 1:nrow(data1))))

# Increase the total capacity, so that it will handle the new data
p$resizeIndex(num_elements)

message("Adding the second batch of ", nrow(data2), " elements")
p$addItems(data2)

# Query the elements for themselves and measure recall:
idx <- p$getAllNNs(data, k = 1)
# You can get distances with:
# res <- p$getAllNNsList(data, k = 1, include_distances = TRUE)
# res$distance contains the distance matrix, res$item stores the indexes

message("Recall for two batches: ", formatC(mean(idx == 1:num_elements)))

Although there's no longer any need for this, for completeness, here's how you would use save and new to achieve the same effect without resizeIndex:

filename <- "first_half.bin"
# Serialize index
p$save(filename)

# Reinitialize and load the index
rm(p)
message("Loading index from ", filename)
# Increase the total capacity, so that it will handle the new data
p <- new(HnswL2, dim, filename, num_elements)
unlink(filename)

API

DO NOT USE NAMED PARAMETERS

Because these are wrappers around C++ code, you cannot use named parameters in the calling R code. Arguments are parsed by position. This is most annoying in constructors, which take multiple integer arguments, e.g.

### DO THIS ###
dim <- 10
num_elements <- 100
M <- 200
ef_construction <- 16
index <- new(HnswL2, dim, num_elements, M, ef_construction)

### DON'T DO THIS ###
index <- new(HnswL2, dim, ef_construction = 16, M = 200, num_elements = 100)
# treated as if you wrote:
index <- new(HnswL2, dim, 16, 200, 100)

OK, on to the API:

  • new(HnswL2, dim, max_elements, M = 16, ef_construction = 200) creates a new index using the squared L2 distance (i.e. square of the Euclidean distance), with dim dimensions and a maximum size of max_elements items. ef_construction and M determine the speed vs accuracy trade off. Other classes for different distances are: HnswCosine for the cosine distance and HnswIp for the "Inner Product" distance (like the cosine distance without normalizing).
  • new(HnswL2, dim, filename) load a previously saved index (see save below) with dim dimensions from the specified filename.
  • new(HnswL2, dim, filename, max_elements) load a previously saved index (see save below) with dim dimensions from the specified filename, and a new maximum capacity of max_elements. This is a way to increase the capacity of the index without a complete rebuild.
  • setEf(ef) set search parameter ef.
  • setNumThreads(num_threads) Use (at most) this number of threads when adding items (via addItems) and searching the index (via getAllNNs and getAllNNsList). See also the setGrainSize parameter.
  • setGrainSize(grain_size) The minimum amount of work to do (adding or searching items) per thread. If you don't have enough work for all the threads specified by setNumThreads to process grain_size items per thread, then fewer threads will be used. This is useful for cases where the cost of context switching between a larger number of threads would outweigh the performance gain from parallelism. For example, if you have 100 items to process and asked for four threads, then 25 items will be processed per thread. However, setting the grain_size to 50 will result in 50 items being processed per thread, and therefore only two threads being used.
  • addItem(v) add vector v to the index. Internally, each vector gets an increasing integer label, with the first vector added getting the label 1, the second 2 and so on. These labels are returned in getNNs and related methods to identify which vectors in the index are the neighbors.
  • addItems(m) add the row vectors of the matrix m to the index. Internally, each row vector gets an increasing integer label, with the first row added getting the label 1, the second 2 and so on. These labels are returned in getNNs and related methods to identify which vectors in the index are the neighbors. The number of threads specified by setNumThreads is used for building the index and may be non-deterministic.
  • addItemsCol(m) Like addItems but adds the column vectors of m to the index. Storing data column-wise makes copying the data for use by hnsw more efficient.
  • save(filename) saves an index to the specified filename. To load an index, use the new(HnswL2, dim, filename) constructor (see above).
  • getItems(ids) returns a matrix where each row is the data vector from the index associated with integer indices in the vector of ids. For cosine similarity, the l2 row-normalized vectors are returned. ids are one-indexed, i.e. to get the first and tenth vectors that were added to the index, use getItems(c(1, 10)), not getItems(c(0, 9)).
  • getNNs(v, k) return a vector of the labels of the k-nearest neighbors of the vector v. Labels are integers numbered from one, representing the insertion order into the index, e.g. the label 1 represents the first item added to the index. If k neighbors can't be found, an error will be thrown. This normally means that ef or M have been set too small, but also bear in mind that you can't return more items than were put into the index.
  • getNNsList(v, k, include_distances = FALSE) return a list containing a vector named item with the labels of the k-nearest neighbors of the vector v. Labels are integers numbered from one, representing the insertion order into the index, e.g. the label 1 represents the first item added to the index. If include_distances = TRUE then also return a vector distance containing the distances. If k neighbors can't be found, an error is thrown.
  • getAllNNs(m, k) return a matrix of the labels of the k-nearest neighbors of each row vector in m. Labels are integers numbered from one, representing the insertion order into the index, e.g. the label 1 represents the first item added to the index. If k neighbors can't be found, an error is thrown. The number of threads specified by setNumThreads is used for searching.
  • getAllNNsList(m, k, include_distances = FALSE) return a list containing a matrix named item with the labels of the k-nearest neighbors of each row vector in m. Labels are integers numbered from one, representing the insertion order into the index, e.g. the label 1 represents the first item added to the index. If include_distances = TRUE then also return a matrix distance containing the distances. If k neighbors can't be found, an error is thrown. The number of threads specified by setNumThreads is used for searching.
  • getAllNNsCol(m, k) like getAllNNs but each item to be searched in m is stored by column, not row. In addition the returned matrix of k-nearest neighbors is also stored column-wise: i.e. the dimension of the return value matrix is k x n where n is the number of items (columns) in m. By passing the data column-wise, some overhead associated with copying data to and from hnsw can be reduced.
  • getAllNNsListCol(m, k) like getAllNNsList but each item to be searched in m is stored by column, not row. In addition, the matrices in the returned list are also stored column-wise: i.e. the dimension of the return value matrix is k x n where n is the number of items (columns) in m. By passing the data column-wise, some overhead associated with copying data to and from hnsw can be reduced.
  • size() returns the number of items in the index. This is an upper limit on the number of neighbors you can expect to return from getNNs and the other search methods.
  • markDeleted(i) marks the item with label i (the ith item added to the index) as deleted. This means that the item will not be returned in any further searches of the index. It does not reduce the memory used by the index. Calls to size() do not reflect the number of marked deleted items.
  • resizeIndex(max_elements) changes the maximum capacity of the index to max_elements.
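As a short sketch (not exhaustive, and using only positional arguments as advised above), several of these methods can be combined like so. The parameter values are illustrative:

```r
library(RcppHNSW)

data <- as.matrix(iris[, -5])
# positional arguments only: dim, max_elements, M, ef_construction
ann <- new(HnswL2, ncol(data), nrow(data), 16, 200)
ann$addItems(data)

ann$size()                      # number of items in the index: 150
ann$markDeleted(1)              # the first item added will no longer be returned
nn <- ann$getNNs(data[2, ], 4)  # labels of the 4 nearest neighbors of row 2
vecs <- ann$getItems(c(2, 3))   # the stored vectors for items 2 and 3
```

Because this index uses HnswL2, getItems returns the vectors as stored; with HnswCosine it would return the l2-normalized versions.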

Differences from Python Bindings

  • Arbitrary integer labeling is not supported. Where labels are used, e.g. in the return value of getNNsList or as input in markDeleted or getItems, the labels represent the order in which the items were added to the index, using 1-indexing to be consistent with R. So in the Python bindings, the first item in the index has a default of label 0, but here it will have label 1.
  • The interface roughly follows the Python one, but deviates in naming and also rolls the declaration and initialization of the index into one call. As noted above, you must pass arguments by position, not by keyword.
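A minimal sketch of the labeling difference (the toy data here is purely illustrative):

```r
library(RcppHNSW)

m <- matrix(c(0, 0,
              10, 10), nrow = 2, byrow = TRUE)
ann <- new(HnswL2, 2, 2, 16, 200)
ann$addItems(m)
# the nearest neighbor of the first row is itself: label 1 here,
# whereas the Python bindings would report 0 by default
ann$getNNs(m[1, ], 1)
```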

License

GPL-3 or later.

rcpphnsw's People

Contributors

dselivanov, jlmelville, ltla, samgg, yxngl


rcpphnsw's Issues

Equivalent function to RcppAnnoy's a$getItemsVector(i)

Thanks for the great package!

There doesn't appear to be an equivalent function to get the vector given an item's index number, similar to RcppAnnoy's a$getItemsVector(i), where i is the item integer mapped to the vector within the index at build time. It appears the Python bindings have this feature, but it's not been exposed in the R package -

From the readme https://github.com/nmslib/hnswlib :

get_items(ids) - returns a numpy array (shape:N*dim) of vectors that have integer identifiers specified in ids numpy vector (shape:N). Note that for cosine similarity it currently returns normalized vectors.

Would you please expose this function in the R package?
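For later readers: a getItems method was added in RcppHNSW 0.5.0 (see the Status and API sections above). A sketch of its use, with illustrative parameter values:

```r
library(RcppHNSW)

irism <- as.matrix(iris[, -5])
ann <- new(HnswL2, ncol(irism), nrow(irism), 16, 200)
ann$addItems(irism)
# ids are 1-indexed: retrieve the stored vectors for the 1st and 10th items
vecs <- ann$getItems(c(1, 10))
# values match the input up to single-precision storage; cosine indexes
# would return l2-normalized vectors instead
all.equal(vecs, irism[c(1, 10), ], tolerance = 1e-6, check.attributes = FALSE)
```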

Package fails to load: cannot allocate vector of size 707042.2 Gb

Hi,

We encountered an issue when installing RcppHNSW from CRAN (version 0.3.0) on R 4.0.3. When loaded via library("RcppHNSW"), the library throws an error:

Error: package or namespace load failed for ‘RcppHNSW’ in .doLoadActions(where, attach):
 error in load action .__A__.1 for package RcppHNSW: Rcpp::loadModule(module = "HnswL2", what = TRUE, env = ns, loadNow = TRUE): Unable to load module "HnswL2": cannot allocate vector of size 707042.2 Gb

The attempt to allocate such a large amount of memory could have to do with the fact that the server we compiled and run it on has 80 CPU cores and 2TB of RAM. The problem doesn't seem to occur on R 3.6.1. I tried looking through the code but couldn't find an obvious spot where the issue might lie.

Thanks for any help!

over two batches resizeIndex

Hi. Very nice package to support find neighbors by batches. I have one question about resizeIndex.
In your example, after you addItems for the first batch, you resizeIndex of p to the number of all elements.

# Increase the total capacity, so that it will handle the new data
p$resizeIndex(num_elements)

I am wondering if I have three batches. After addItems of the first batch, shall I resizeIndex of p to the number of first + second batches or all three batches?
Generally, I am wondering if we should resizeIndex after each batch addItems. Thanks.

CRAN submission!

I'm also looking forward to a CRAN submission for this package; it'll allow me to give Annoy some company in the "approximate NN methods" section of BiocNeighbors. It looks pretty ready to go - is there anything else that needs to be done?

Also, insofar as you're using C++11, you could replace &dv[0] with dv.data().

GPL-2 (no later version) and GPL-3+ files linked to same binary

Hello,

We have prepared a Debian package for rcpphnsw. Our peer review now pointed us to the license of
https://github.com/jlmelville/rcpphnsw/blob/master/inst/include/RcppPerpendicular/RcppPerpendicular.h (GPL version 2) that is in conflict with the license of the remainder of your source tree (GPL version 3 or later). This blocks the acceptance of your package in the Debian archive and with it also all its reverse dependencies, which we ran into for the Covid-19 hackathon for Debian.

As an interim quick fix, would you possibly find it acceptable to change your license to GPL version 2 or later? Additionally/alternatively, maybe you could get a permission from the authors of RcppPerpendicular.h to redistribute your derivative work as GPL version 3+?

These are the two ideas that came to my mind, there may be others. Our concerns about contributing to the research on the pandemic aside, it would also just be nice to see this cleaned up.

Many thanks and kind regards,
Steffen

Can't load a "euclidean" index from `hnsw_build`

Build a Euclidean index via hnsw_build:

irism <- as.matrix(iris[, -5])
ann <- hnsw_build(irism, distance = "euclidean")
iris_nn <- hnsw_search(irism, ann, k = 5)
head(iris_nn$dist)
     [,1]      [,2]      [,3]      [,4]      [,5]
[1,]    0 0.1000000 0.1414212 0.1414212 0.1414213
[2,]    0 0.1414213 0.1414213 0.1414213 0.1732050
[3,]    0 0.1414213 0.2449490 0.2645751 0.2645753
[4,]    0 0.1414215 0.1732051 0.2236071 0.2449490
[5,]    0 0.1414212 0.1414213 0.1732050 0.1732050
[6,]    0 0.3316623 0.3464102 0.3605552 0.3741659

So far so good. Now save it:

ann$save("iris.hnsw")

The class of ann is:

class(ann)
[1] "Rcpp_HnswL2"
attr(,"package")
[1] "RcppHNSW"

so we should be able to load it with:

ann2 <- methods::new(RcppHNSW::HnswL2, 4, "iris.hnsw")

Now search again:

iris_nn2 <- hnsw_search(irism, ann2, k = 5)
head(iris_nn2$dist)
     [,1]       [,2]       [,3]       [,4]       [,5]
[1,]    0 0.01000000 0.01999996 0.01999996 0.01999998
[2,]    0 0.01999998 0.01999998 0.01999998 0.02999999
[3,]    0 0.01999998 0.06000003 0.07000001 0.07000010
[4,]    0 0.02000003 0.03000002 0.05000012 0.06000003
[5,]    0 0.01999996 0.01999998 0.02999996 0.02999999
[6,]    0 0.10999985 0.12000003 0.13000003 0.14000012

This is just the L2 distances (as the class name suggests).

So after saving and reloading a formerly Euclidean index, you must manually convert from L2 distances.

The fix for this will probably be to introduce a dedicated RcppHNSW::HnswEuclidean class which will do the square-rooting for you inside a method. This will be returned from hnsw_build when distance = "euclidean".
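In versions where the reloaded index comes back as a plain HnswL2 object, the manual workaround described above amounts to taking the square root of the reported squared distances. A self-contained sketch:

```r
library(RcppHNSW)

irism <- as.matrix(iris[, -5])
ann <- hnsw_build(irism, distance = "euclidean")
ann$save("iris.hnsw")

# reload: the saved index comes back as a plain HnswL2 object
ann2 <- methods::new(RcppHNSW::HnswL2, ncol(irism), "iris.hnsw")
iris_nn2 <- hnsw_search(irism, ann2, k = 5)

# convert the squared L2 distances back to Euclidean distances
euclidean_dist <- sqrt(iris_nn2$dist)
unlink("iris.hnsw")
```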

Speed vs Python implementation

Hi,

I am a big fan of your work and I really appreciate your port of HNSW to R. I am working on Windows and I heartily think that the R ecosystem is the top. Yesterday, I installed Python and did a comparison of the HNSW library. I felt quite puzzled. I did a kNN search on a dataset of 300k points in 11 dimensions and looked for the 30 nearest neighbors. It took 83 sec using RcppHNSW. Keeping every parameter identical (efConstruction, efSearch, M, as far as I know), the computation took 60 sec with Python using 1 thread, and 15 sec without any core specification, which corresponds to using all 4 cores of my laptop. I do understand that multi-threading leads to a gain of 4x. Any idea about the overhead of 23 sec when using 1 core?

Best.

Rcpp speed loss?

Hi RcppHNSW developer,

I am wondering whether piping through Rcpp will be much slower compared to the original hnswlib. Do you have any tests related to speed?

Thanks,

Jianshu

C++ equivalent of Serialization/Deserialization example

Hi! Thank you for the package. It is really helpful.
First, I am new to C++ so I don't know whether this is the right place for a question like this:
In your readme, examples are provided only with the R bindings, which are roughly equivalent to the example provided in the original hnsw library.
But what if somebody wants to do the same task without R? I am unable to find an interface for C++. The original library refers to this library for interfaces in C++ and R, but I could only see the interface for R.
It would be really nice if you translate this 'R' example into C++. Thank you!

Multicore & Test Data Queries?

Hi - thanks for this package - very useful. Just have two questions:

(1) Why no multithreading support?

(2) Is it possible to add something like:

FNN::get.knnx( t( x ), t( y ), k=k )

i.e. the ability to query against test data ( y )?

If not, why not?

Thanks for your insights on these questions.

Best,

BA
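For later readers: both questions are addressed by subsequent releases (see the Status section above: 0.3.0 added multi-threading via n_threads, and hnsw_build/hnsw_search split building and querying). A sketch, with illustrative thread counts:

```r
library(RcppHNSW)

x <- as.matrix(iris[1:100, -5])    # data to build the index from
y <- as.matrix(iris[101:150, -5])  # test data to query against the index
ann <- hnsw_build(x, n_threads = 2)
nn <- hnsw_search(y, ann, k = 4, n_threads = 2)
# nn$idx holds neighbor indices into x, nn$dist the distances
# (cf. the hnsw_search examples elsewhere in this readme)
```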

Can't build on Windows

See for instance https://github.com/jlmelville/rcpphnsw/actions/runs/2561466318

Here's an entire output of an R session:

R version 4.2.1 (2022-06-23 ucrt) -- "Funny-Looking Kid"
Copyright (C) 2022 The R Foundation for Statistical Computing
Platform: x86_64-w64-mingw32/x64 (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

> devtools::load_all(".")
ℹ Loading RcppHNSW
Exports from E:/dev/R/rcpphnsw/src/hnsw.cpp:

E:/dev/R/rcpphnsw/src/RcppExports.cpp updated.
Re-compiling RcppHNSW
─  installing *source* package 'RcppHNSW' ...
   ** using staged installation
   ** libs
   g++  -std=gnu++11 -I"C:/PROGRA~1/R/R-42~1.1/include" -DNDEBUG -I../inst/include/ -I'E:/dev/R/win-library/4.0/Rcpp/include'   -I"C:/rtools42/x86_64-w64-mingw32.static.posix/include"  -DNO_MANUAL_VECTORIZATION -DSTRICT_R_HEADERS   -O2 -Wall  -mfpmath=sse -msse2 -mstackrealign  -c RcppExports.cpp -o RcppExports.o
   g++  -std=gnu++11 -I"C:/PROGRA~1/R/R-42~1.1/include" -DNDEBUG -I../inst/include/ -I'E:/dev/R/win-library/4.0/Rcpp/include'   -I"C:/rtools42/x86_64-w64-mingw32.static.posix/include"  -DNO_MANUAL_VECTORIZATION -DSTRICT_R_HEADERS   -O2 -Wall  -mfpmath=sse -msse2 -mstackrealign  -c hnsw.cpp -o hnsw.o
   g++ -shared -s -static-libgcc -o RcppHNSW.dll tmp.def RcppExports.o hnsw.o -LC:/rtools42/x86_64-w64-mingw32.static.posix/lib/x64 -LC:/rtools42/x86_64-w64-mingw32.static.posix/lib -LC:/PROGRA~1/R/R-42~1.1/bin/x64 -lR
   installing to C:/Users/jlmel/AppData/Local/Temp/RtmpqcwC7v/devtools_install_27146eb258c0/00LOCK-rcpphnsw/00new/RcppHNSW/libs/x64
─  DONE (RcppHNSW)

and then kaboom.

  • Linux builds without issue.
  • This happens with and without RStudio being involved, so it's nothing to do with that.
  • I can build some other Rcpp-using libraries locally.
  • A local build of RcppAnnoy fails in the same way. Could it be to do with the use of RCPP_EXPOSED_CLASS_NODECL, RCPP_MODULE or Rcpp::class_ (none of which I use in other projects outside of RcppHNSW and which are also used in RcppAnnoy)?
  • Problem seems to occur during loading of the library not during the compilation or linking.
