nimbusml's Introduction

NimbusML

nimbusml is a Python module that provides Python bindings for ML.NET.

ML.NET was originally developed in Microsoft Research and is used across many product groups in Microsoft like Windows, Bing, PowerPoint, Excel, and others. nimbusml was built to enable data science teams that are more familiar with Python to take advantage of ML.NET's functionality and performance.

nimbusml enables training ML.NET pipelines or integrating ML.NET components directly into scikit-learn pipelines. It adheres to existing scikit-learn conventions, allowing simple interoperability between nimbusml and scikit-learn components, while adding a suite of fast, highly optimized, and scalable algorithms, transforms, and components written in C++ and C#.

See examples below showing interoperability with scikit-learn. A more detailed example in the documentation shows how to use a nimbusml component in a scikit-learn pipeline, and how to create a pipeline using only nimbusml components.

nimbusml supports numpy.ndarray, scipy.sparse CSR matrices (scipy.sparse.csr_matrix), and pandas.DataFrame as inputs. In addition, nimbusml also supports streaming from files without loading the dataset into memory with FileDataStream, which allows training on data significantly exceeding memory.
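
For example, here is a small sketch contrasting an in-memory input with a streamed one, using the 'infert' sample dataset bundled with nimbusml:

import pandas as pd
from nimbusml import FileDataStream
from nimbusml.datasets import get_dataset

path = get_dataset('infert').as_filepath()

df = pd.read_csv(path)                  # in-memory pandas.DataFrame input
stream = FileDataStream.read_csv(path)  # streamed from disk, not loaded into memory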

Documentation can be found here and additional notebook samples can be found here.

Installation

nimbusml runs on Windows, Linux, and macOS.

nimbusml requires the 64-bit version of Python 2.7, 3.5, 3.6, or 3.7.

Install nimbusml using pip with:

pip install nimbusml

nimbusml has been reported to work on Windows 10, macOS 10.13, Ubuntu 14.04, Ubuntu 16.04, Ubuntu 18.04, CentOS 7, and RHEL 7.

Examples

Here is an example of how to train a model to predict sentiment from text samples (based on this ML.NET example). The full code for this example is here.

from nimbusml import Pipeline, FileDataStream
from nimbusml.datasets import get_dataset
from nimbusml.ensemble import FastTreesBinaryClassifier
from nimbusml.feature_extraction.text import NGramFeaturizer

train_file = get_dataset('gen_twittertrain').as_filepath()
test_file = get_dataset('gen_twittertest').as_filepath()

train_data = FileDataStream.read_csv(train_file, sep='\t')
test_data = FileDataStream.read_csv(test_file, sep='\t')

pipeline = Pipeline([ # nimbusml pipeline
    NGramFeaturizer(columns={'Features': ['Text']}),
    FastTreesBinaryClassifier(feature=['Features'], label='Label')
])

# fit and predict
pipeline.fit(train_data)
results = pipeline.predict(test_data)

Instead of creating a nimbusml pipeline, you can also integrate components into scikit-learn pipelines:

from sklearn.pipeline import Pipeline
from nimbusml.datasets import get_dataset
from nimbusml.ensemble import FastTreesBinaryClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
import pandas as pd

train_file = get_dataset('gen_twittertrain').as_filepath()
test_file = get_dataset('gen_twittertest').as_filepath()

train_data = pd.read_csv(train_file, sep='\t')
test_data = pd.read_csv(test_file, sep='\t')

pipeline = Pipeline([ # sklearn pipeline
    ('tfidf', TfidfVectorizer()), # sklearn transform
    ('clf', FastTreesBinaryClassifier()) # nimbusml learner
])

# fit and predict
pipeline.fit(train_data["Text"], train_data["Label"])
results = pipeline.predict(test_data["Text"])

Many additional examples and tutorials can be found in the documentation.

Building

To build nimbusml from source please visit our developer guide.

Contributing

The contributions guide can be found here.

Support

If you have an idea for a new feature or encounter a problem, please open an issue in this repository or ask your question on Stack Overflow.

License

NimbusML is licensed under the MIT license.

nimbusml's People

Contributors

cclauss, galoshri, ganik, justinormont, kant, maherjendoubi, microsoftopensource, montebhoover, montehoover, msftgits, mstfbl, najeeb-kazmi, pieths, safern, shmoradims, stephen0620, xadupre, zyw400


nimbusml's Issues

Hash transform and the ranking sample

We have this sample here.

https://github.com/Microsoft/NimbusML-Samples/blob/master/samples/2.5%20%5BNumeric%5D%20Learning-to-Rank%20with%20Microsoft%20Bing%20Data.ipynb

The sample is incorrect because it suggests using ToKey to transform group IDs into keys, but ToKey builds a dictionary from the training data. Since group IDs in the test set will almost certainly not resemble group IDs in the train set, the mapping on the test set will produce the "missing key value" as the group key. The evaluator then sees every row with the same key, so the entire dataset gets collectively evaluated as if it were a single query, which in turn leads to nonsensical NDCG numbers (though the scores of the documents themselves are not impacted).
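
To make the failure mode concrete, here is a rough sketch of the problematic pattern from the sample (feature_cols, train_data and test_data are placeholders, and LightGbmRanker stands in for whichever ranker the notebook uses):

from nimbusml import Pipeline
from nimbusml.preprocessing import ToKey
from nimbusml.ensemble import LightGbmRanker

pipeline = Pipeline([
    # ToKey builds its dictionary from the *training* group IDs only
    ToKey(columns={'GroupId': 'GroupId'}),
    LightGbmRanker(feature=feature_cols, label='Label', group_id='GroupId')
])
pipeline.fit(train_data)

# At test time, unseen group IDs all map to the "missing" key value, so the
# evaluator treats the whole test set as one query and NDCG is meaningless,
# even though the per-document scores below are unaffected.
scores = pipeline.predict(test_data)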

The advice from the ML.NET side is not to use ToKey for this; instead we suggest the other common way of turning values into keys, namely hashing, which does not depend on having seen a key during training and therefore still works. Indeed, I see the entry-point for hash was published here:

https://github.com/Microsoft/NimbusML/blob/e1004720ec0c252ba87f02c190c33739d9c00f20/src/python/nimbusml/internal/entrypoints/transforms_hashconverter.py

However, it looks like the step that actually turns this into something accessible to users was overlooked.

What raises this to the dignity of a bug I think is that we've explicitly identified ranking as a targeted scenario, yet we've forgotten the key thing that will make evaluation actually work, possibly by not realizing that ToKey produces useless evaluation in this case.

Support for Python 3.7 clarity

So when installing Anaconda on the Mac, by default it installs Python 3.7. However, nimbusml does not work with Python 3.7. My understanding, which may be flawed, is that this lack of support is deliberate, since at the time of writing Python 3.7.0 is a deeply flawed release with many bugs that directly impact us.

However, without that understanding, this looks like an oversight on our part. (I now know it is not, but this is not clear from the documentation.) I suppose we ought to either (1) make it work with 3.7 but with appropriate warnings about the problems, or (2) make the lack of support clear in the documentation.

Build fails with "Entrypoint codegen checker failed"

After modifying a docstring in PR #46, the build failed with the following error:

'''
Entrypoint codegen checker failed:
Error: File \internal\core\cluster\kmeansplusplus.py content differs from codegen content.
Codegen files could be found in C:\Users\VSSADM~1\AppData\Local\Temp\ep_compiler_ozgg36s6
Codegen check failed. Try running tools/entrypoint_compiler.py --check_manual_changes to find the problem.
Failed with error 1
'''

Remove notes from the documentation samples

NimbusML: Samples should be available in AzureML gallery

As AML shipped on December 4, we need to create code samples showing how to use NimbusML on AzureML.

Expected solution:
Add existing samples to the AML gallery. Update them to report metrics to the AML services, and make any other necessary adjustments.

Averaged Perceptron documentation format

Formatting error below "reference" section:

Reference

Wikipedia entry for Perceptron

Large Margin Classification Using the Perceptron Algorithm

<<Discriminative Training Methods for Hidden Markov Models [http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.18.6725](http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.18.6725)_

https://review.docs.microsoft.com/en-us/python/api/nimbusml/nimbusml.linear_model.averagedperceptronbinaryclassifier?view=nimbusml-py-latest&branch=smoke-test&viewFallbackFrom=nimbusml-py

DotNetBridge: Refactor VBuffer

We frequently use VBuffer.Values and expect it to be an array, but now it is a ReadOnlySpan and we must deal with this properly.

NimbusML misidentifies ML.NET `FastTree` learner as `FastTrees`

So in ML.NET, we have this class.

https://github.com/dotnet/machinelearning/blob/master/src/Microsoft.ML.FastTree/FastTreeClassification.cs

The corresponding NimbusML learner is called FastTrees.

Whether or not this is a good name is not really my point, but rather:

  1. If we think that trees is a better name than tree, then this is stuff that really ought to be propagated to the ML.NET ecosystem.

  2. If the goal is consistency with sklearn rather than consistency with ML.NET, this seems totally fine, but then why are we bothering to keep part of the brand name, that is, FastTree? If the goal is to have the name be descriptive, I think incorporating parts of brand names from ML.NET can only add to confusion. (E.g., FastTree is so named, yet AFAIK in many situations LightGBM is faster.)

Remove pytest-cov test coverage report from validation builds.

We currently run pytest-cov to get a test coverage report on all of our CI validation builds, but it seems to have errors in downloading plugins or parsing files in 10-25% of builds. (See https://dev.azure.com/aifx/public/_build/results?buildId=73&view=logs)

I like the idea of running a test coverage report on validation builds to ensure that new code has unit tests added, but the way we currently do it there is no such forcing function: the coverage report just gets silently written, and is in fact not even uploaded off the build agent (this is an oversight on my part that can easily be fixed).

Given the fact that pytest-cov causes so many build failures, I feel that the benefits are not worth the cost and I suggest we remove it from our build scripts. If we feel it is worth the effort, we could set up special builds specifically for producing coverage reports, or set aside time for debugging the pytest-cov failures.

How do you inspect the JSON entry point graph that NimbusML sends to ML.NET?

This comes out of a discussion @srsaggam and I were having:

When you call .fit() on any pipeline, NimbusML creates a JSON representation of the pipeline and sends it to the ML.NET entry point API. Here are steps to inspect that JSON graph and to execute arbitrary graphs in ML.NET:

  1. When you call .fit(), NimbusML creates the JSON graph here in the src.
  2. The following lines will print out the JSON graph and the name of your data file:

graph_info = pipeline._fit_graph(train_data, None, True)
graph = graph_info[0]
file = graph_info[1]
print(graph)
print(file.filename)

  3. You can run this or any arbitrary entrypoint graph with this unit test in ML.NET. It's possible to copy and paste your own JSON in place of the JSON in the unit test, but it is probably simpler to paste your JSON into a file and reference that file in the unit test as follows:

var args = new ExecuteGraphCommand.Arguments() { GraphPath = @"C:\Users\monte\Downloads\temp2\sample_graph.txt"};
var cmd = new ExecuteGraphCommand(Env, args);
cmd.Run();

  4. There are three modifications you must make to the JSON you see from print(graph) (see the sketch after this list):
    1. Add the file for your data set under "inputs": { "file":.
    2. Add a path to output a model file under "outputs": { "output_model":.
    3. Change "inputs":, "nodes": and "outputs": to "Inputs":, "Nodes": and "Outputs":. (Yes, I'm sorry, but this is necessary...)
  5. Here is an example of a JSON that will execute properly (if you set the filepath references in it to point to your local system): sample_graph.txt
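
As a convenience, here is a rough, hypothetical helper (not part of NimbusML) that applies those three edits to a graph printed by print(graph), assuming the printed text is valid JSON; the file paths are placeholders:

import json

graph_json = json.loads(str(graph))  # 'graph' from pipeline._fit_graph(...)

# 1. point the graph at your data file, 2. add an output path for the model
graph_json.setdefault('inputs', {})['file'] = r'C:\path\to\your_data.tsv'
graph_json.setdefault('outputs', {})['output_model'] = r'C:\path\to\model.zip'

# 3. ExecuteGraphCommand expects capitalized top-level keys
renamed = {k.capitalize() if k in ('inputs', 'nodes', 'outputs') else k: v
           for k, v in graph_json.items()}

with open('sample_graph.txt', 'w') as f:
    json.dump(renamed, f, indent=2)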

@Microsoft/pymlnet @glebuk @yaeldekel @justinormont Is this the recommended way of inspecting and running an entry point graph? Let me know if there are better methods for this.

Information in README.md

The current README.md file could include some additional information:

  • The term "machine learning" is missing (fixed)
  • The "why" story can be expanded. Currently it states that NimbusML provides the Python bindings to ML.NET, which is used within Microsoft in Windows/Bing/Office. It could also state that NimbusML is fast and more scalable than scikit-learn, which helps explain why scikit-learn users would want to try NimbusML.
  • scikit-learn is generally lowercase (fixed by #56)
  • The about text (very top of the page) could be both more terse and informative. Perhaps, "NimbusML, a Python ML package offering the power of Microsoft's internal machine learning toolkit with a scikit-learn interface" (fixed)
  • We should state the simplicity of moving existing scikit-learn pipelines: "NimbusML adheres to existing scikit-learn conventions allowing simple interoperability between NimbusML and scikit-learn components. See examples here." Linked examples should cover the ease of moving from pure scikit-learn pipelines to a mixed (or pure) NimbusML pipeline.

NimbusML trains model that ML.NET thinks is corrupted

  1. Train a model with NimbusML 0.6 using code below
  2. Save model to disk
  3. Try to use the saved model in ML.NET 0.6

Expected: model can be used in ML.NET
Actual: I see the error shown below.

Unhandled Exception: System.FormatException: Corrupt model file
   at Microsoft.ML.Runtime.Model.ModelLoadContext.LoadModel[TRes,TSig](IHostEnvironment env, TRes& result, RepositoryReader rep, String dir, Object[] extra)
   at Microsoft.ML.Runtime.Data.TransformerChain.LoadFrom(IHostEnvironment env, Stream stream)
   at nimbusmlnet.Program.Main(String[] args) in /Users/gal/Projects/NimbusML/nimbusmlnet/Program.cs:line 19

Python training code:

# imports needed to run the repro end to end
import numpy as np
from nimbusml import Pipeline, FileDataStream, DataSchema
from nimbusml.preprocessing.filter import TakeFilter
from nimbusml.feature_extraction.text import NGramFeaturizer
from nimbusml.feature_extraction.text.extractor import Ngram
from nimbusml.linear_model import AveragedPerceptronBinaryClassifier

train_datapath = '/Users/gal/Projects/NimbusML/Sent_Train.tsv'
test_datapath = '/Users/gal/Projects/NimbusML/Sent_Test.tsv'
schema = DataSchema.read_schema(train_datapath, sep='\t', numeric_dtype=np.float32)
train_data = FileDataStream.read_csv(train_datapath, sep='\t', schema=schema)
test_data = FileDataStream.read_csv(test_datapath, sep='\t', schema=schema)
print(train_data.schema)

pipeline = Pipeline([
    TakeFilter(10000),
    NGramFeaturizer(word_feature_extractor=Ngram(weighting='TfIdf', ngram_length=2),
                    char_feature_extractor=Ngram(weighting='Tf', ngram_length=3),
                    columns={"Features": "SentimentText"}),
    AveragedPerceptronBinaryClassifier(num_iterations=10, feature="Features", label="Sentiment")
])
pipeline.fit(train_data)
pipeline.save_model("sent_model.zip")

ML.NET prediction code:

var env = new ConsoleEnvironment();
ITransformer loadedModel;
using (var file = File.OpenRead("../sent_model.zip"))
    loadedModel = TransformerChain.LoadFrom(env, file);

var predictor = loadedModel.MakePredictionFunction<SentimentData, SentimentPrediction>(env);

var prediction = predictor.Predict(new SentimentData
{
    SentimentText = "I am so happy!",
    Sentiment = 0
});
Console.WriteLine(prediction.Probability);
Console.ReadLine();

How do we deal with ML.NET API change for ColumnSelector/ColumnDropper?

In ML.NET 0.7 there was a breaking change in the entrypoint API for ColumnSelector and ColumnDropper: they were merged into a single entrypoint that takes different parameters:

Before:
ColumnSelector(columns)
ColumnDropper(columns)

After:
ColumnSelector(keep_columns, drop_columns)

There are two ways we can handle this:

  1. Change the entry point signature in ML.NET to wrap the new code but expose an API identical to the old one.
  2. Change the existing NimbusML source and examples to conform to the new entry point signature. If we do this there is a small side effect:
    • These are both of type Transform, and all transforms except this new signature accept a parameter named columns. If we remove that expected named parameter, our << syntax will no longer work for this component (see the sketch below).
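
For context, the << syntax mentioned above is the operator shorthand for assigning columns to a component; a minimal sketch of the usage that would stop working (column names here are illustrative):

from nimbusml.preprocessing.schema import ColumnDropper

# operator form: equivalent to ColumnDropper(columns=['id', 'row_num'])
dropper = ColumnDropper() << ['id', 'row_num']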

I recommend going with Option 2 in order to maintain similarity with ML.NET versus maintaining backward compatibility with earlier NimbusML releases. Any thoughts? @glebuk @ganik @singlis @shmoradims

nimbusml should print training progress in the console

I am training a longish model; it takes about 4 minutes, and I find the absence of any progress reporting unsettling. I think the same progress reporting that is printed in ML.NET prior to PR #923 should be available here too.

CV.fit() throws a KeyError exception for a ranking task (LightGBM)

Describe the bug
CV.fit() throws a KeyError exception for a ranking task (LightGBM)

To Reproduce

import nimbusml
from nimbusml.model_selection import CV

model = nimbusml.Pipeline([
    nimbusml.ensemble.LightGbmRanker(feature=feature_cols, group_id='groupId')
])
cv_results = CV(model).fit(data, cv=num_folds, groups='groupId')

Throws a KeyError exception for 'data_import' in model_selection\cv.py -

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-118-3f6f92894791> in cv_model(data, num_folds, feature_cols, group_cols, label_col)
      9                                                          verbose_eval=True, silent=False)
     10                     ])
---> 11     cv_results = CV(model).fit(data, cv=num_folds, groups='groupId')
     12     return cv_results
     13 cv_model(train, 5, feature_cols, group_cols, label_col)

~\AppData\Local\Continuum\Anaconda3\envs\digestenv\lib\site-packages\nimbusml\model_selection\cv.py in fit(self, X, y, cv, groups, split_start, **params)
    451         groups = groups or group_id
    452         if groups is not None:
--> 453             if groups not in cv_aux_info[0]['data_import'][0].inputs[
    454                     'CustomSchema']:
    455                 raise Exception(

KeyError: 'data_import'

Desktop (please complete the following information):

  • OS: Windows
  • Python version: 3.6.2
  • NimbusML version: 0.6.2

Formatting issue in Readme

The first example in the Readme (sentiment prediction with NimbusML pipeline) does not have aligned indentation in the pipeline creation.

Deprecation warning when importing FastForestBinaryClassifier

from nimbusml.ensemble import FastForestBinaryClassifier
C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\lib\site-packages\sklearn\externals\joblib\externals\cloudpickle\cloudpickle.py:47: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp

LightGBM on the mac

So, LightGBM is a little funky on the Mac. In order for it to work you have to install gcc (usually via brew install gcc), though you don't have to actually build anything (which is enormously confusing). I guess this is an issue we should raise with whoever produces the LightGBM NuGet package.

This seems like something the documentation should be more explicit about, but really, this is something that could be handled in Python itself: if the DLL import fails on the Mac, could we not catch that failure and provide a more helpful error message than this?

Error: *** System.DllNotFoundException: 'Unable to load DLL 'lib_lightgbm': The specified module or one of its dependencies could not be found.

Maybe this could be easy? I imagine that the DLL is being loaded from the .NET side, but perhaps we could have a "preload" check in the Python code to see whether the DLL load is likely to work, or something along those lines.
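
Here is a rough sketch of the kind of preload check suggested above (not something NimbusML does today; it assumes the macOS failure is caused by the missing GCC/OpenMP runtime that lib_lightgbm links against):

import ctypes.util
import platform

# probe for the OpenMP runtime before letting .NET attempt to load lib_lightgbm
if platform.system() == 'Darwin' and ctypes.util.find_library('gomp') is None:
    raise ImportError("LightGBM components require the GCC OpenMP runtime on macOS; "
                      "install it with 'brew install gcc' and retry.")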

Specify extension modules in setup.py.

We currently build native binaries, copy them into the python package root, and use setup.py to package them into the wheel. The best-practice way to do this is to specify the native extensions in setup.py as shown here: https://docs.python.org/3/extending/building.html#building-c-and-c-extensions-with-distutils.
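
For illustration, a skeleton of what declaring a native component as an extension could look like; the module name and source path below are hypothetical, not the actual NimbusML layout:

from setuptools import setup, Extension

dotnetbridge = Extension(
    'nimbusml.internal.libs.pybridge',               # hypothetical extension module name
    sources=['src/NativeBridge/PythonInterop.cpp'],  # hypothetical source path
    language='c++',
)

setup(
    name='nimbusml',
    ext_modules=[dotnetbridge],
)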

I would estimate that this would take between 2-4 dev days for someone (like myself) who has never packaged native binaries with setup.py before.

Default value char_feature_extractor is different from ML.NET for NGramFeaturizer

By default, NGramFeaturizer in ML.NET uses CharFeatureExtractor = new NgramExtractorTransform.NgramExtractorArguments() { NgramLength = 3, AllLengths = false }, while in NimbusML it is set to null. Using the default values of NGramFeaturizer therefore produces a significant difference in AUC between ML.NET and NimbusML. We need to fix the NimbusML NGramFeaturizer default.
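
Until that is fixed, a workaround sketch is to pass the ML.NET default explicitly (this assumes the Ngram extractor's all_lengths parameter maps to AllLengths, and uses illustrative column names):

from nimbusml.feature_extraction.text import NGramFeaturizer
from nimbusml.feature_extraction.text.extractor import Ngram

# mirror ML.NET's default char extractor: NgramLength = 3, AllLengths = false
featurizer = NGramFeaturizer(
    char_feature_extractor=Ngram(ngram_length=3, all_lengths=False),
    columns={'Features': ['Text']})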

Identify all NimbusML components that rely on entrypoints in the ML.NET Legacy namespace.

As noted in dotnet/machinelearning#1565, there are some ML.NET entrypoints required by NimbusML that are part of the Legacy namespace that will need to be migrated. Take ModelCombiner for example:

https://github.com/dotnet/machinelearning/blob/7b2461cfdad150047dbbcbc163290a32e9f4d829/src/Microsoft.ML.Legacy/Runtime/EntryPoints/ModelOperations.cs#L81

As it is an entry-point, it was duly published in NimbusML as we see here, and this internal entry-point definition actually wound up being used here.

We need to identify which NimbusML components rely on entrypoints in the ML.NET Legacy namespace so we can make a plan for migrating them.

Improve documentation or API for WordEmbedding with NGramFeaturizer

How would a user know about the '_TransformedText' suffix that has to be appended when using NGramFeaturizer + WordEmbedding?

See WordEmbedding(columns='features_TransformedText') in the example below:

# WordEmbedding: pre-trained transform to generate word embeddings

from microsoftml_scikit import FileDataStream, Pipeline
from microsoftml_scikit.datasets import get_dataset
from microsoftml_scikit.feature_extraction.text import NGramFeaturizer
from microsoftml_scikit.internal.entrypoints._ngramextractor_ngram import n_gram
from microsoftml_scikit.feature_extraction.text import WordEmbedding

# data input (as a FileDataStream)
path = get_dataset('infert').as_filepath()

# TODO: Replace with auto-inference
file_schema = 'sep=, col=id:TX:0 col=education:TX:1 col=age:R4:2 col=parity:R4:3 col=induced:R4:4 col=case:R4:5 col=spontaneous:R4:6 header=+'
data = FileDataStream(path, schema=file_schema)

# transform usage
# TODO: Bug 146763
pipeline = Pipeline([
    NGramFeaturizer(word_feature_extractor=n_gram(), output_tokens=True,
                    columns={'features': ['id', 'education']}),
    WordEmbedding(columns='features_TransformedText')
])

# fit and transform
features = pipeline.fit_transform(data)

# print features
print(features.head())

Originally noted by abgoswam here: https://msdata.visualstudio.com/AlgorithmsAndDataScience/_workitems/edit/149666

Extraneous xref in documentation links

Documentation links in the API Guide need to point to the actual pages. Perhaps the xref paths are not found by the documentation renderer, causing it to leave the original in place?

Currently some links retain an extraneous xref:

<a href="xref:nimbusml.linear_model.AveragedPerceptronBinaryClassifier">...</a>

When the final rendered link should be:

<a href="../python/api/nimbusml/nimbusml.linear_model.AveragedPerceptronBinaryClassifier">...</a>

https://docs.microsoft.com/en-us/nimbusml/apiguide?view=nimbusml-py-latest#binary-classifiers

summary() fails if called a second time

Describe the bug

summary() fails if called a second time.

To Reproduce

from nimbusml.linear_model import OrdinaryLeastSquaresRegressor

nlr = OrdinaryLeastSquaresRegressor()
nlr.fit(diabetes_X_train, diabetes_y_train)
nlr.summary()  # ok
nlr.summary()  # fails

produces the following error:

nimbusml\base_predictor.py in summary(self)
    140         Returns model summary.
    141         """
--> 142         if hasattr(self, 'model_summary_') and self.model_summary_:
    143             return self.model_summary_
    144 

pandas\core\generic.py in __nonzero__(self)
   1574         raise ValueError("The truth value of a {0} is ambiguous. "
   1575                          "Use a.empty, a.bool(), a.item(), a.any() or a.all()."
-> 1576                          .format(self.__class__.__name__))
   1577 
   1578     __bool__ = __nonzero__
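
The first call computes and caches model_summary_ as a pandas DataFrame; the second call then truth-tests that DataFrame in the "and self.model_summary_" check, which pandas rejects with the ValueError above. A hedged sketch of a possible fix in base_predictor.py:

# avoid truth-testing a DataFrame; check for presence explicitly instead
if getattr(self, 'model_summary_', None) is not None:
    return self.model_summary_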

Desktop

  • OS: [Windows]
  • Browser [Chrome]
  • Version [python 3.7]

Idea: Separate out dotnet into its own package

A large part of the NimbusML package now is the dotnet runtime. This means that on every update, users of NimbusML will have to download the dotnet runtime again, which probably won't change as fast as NimbusML itself. This seems wasteful to me.

Can we instead move the dotnet runtime into its own PyPI package that NimbusML depends on? That way, users only have to download NimbusML whenever there is an update.
