
minerva's Introduction

Minerva: a fast and flexible system for deep learning

Latest News

  • We've removed quite a lot of Minerva's dependencies and made it easier to build. In most cases, all you need is:

    ./build.sh

    Please see the wiki page for more information.

  • Minerva's Tutorial and API documents are released!

  • Minerva has migrated to dmlc, where you can find many awesome machine learning repositories!

  • Minerva now uses cudnn_v2. Please download and use the new library.

  • Minerva now supports the latest version of Caffe's network configuration protobuf format. If you are using an older version, errors may occur; please use the tool to upgrade the configuration file.

Overview

Minerva is a fast and flexible tool for deep learning. It provides an NDarray programming interface, just like NumPy. Both Python and C++ bindings are available, and the resulting code can run on CPU or GPU. Multi-GPU support is also very easy; please refer to the examples to see how a multi-GPU setting is used.

Quick try

After building and installing Minerva and the owl package (the Python binding) as described in Install Minerva, run ./run_owl_shell.sh in Minerva's root directory and enter:

>>> x = owl.ones([10, 5])
>>> y = owl.ones([10, 5])
>>> z = x + y
>>> z.to_numpy()

The result will be a 10x5 array filled with the value 2. Minerva supports many NumPy-style ndarray operations; please see the API document for more information.

Features

  • N-D array programming interface and easy integration with numpy

    >>> import numpy as np
    >>> x = np.array([1, 2, 3])
    >>> y = owl.from_numpy(x)
    >>> y += 1
    >>> y.to_numpy()
    array([ 2.,  3.,  4.], dtype=float32)

    More examples are in the API cheatsheet

  • Automatic parallel execution

    >>> x = owl.zeros([256, 128])
    >>> y = owl.randn([1024, 32], 0.0, 0.01)

    The above x and y will be executed concurrently. How is this achieved?

    See Feature Highlight: Data-flow and lazy evaluation
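
    Concretely, the calls above return immediately and only enqueue work. A hedged sketch of the execution model (owl.wait_for_all is the explicit synchronization call; it appears under that name in the issues below):

    >>> x = owl.zeros([256, 128])             # returns immediately; work is enqueued
    >>> y = owl.randn([1024, 32], 0.0, 0.01)  # independent of x, so it can run concurrently
    >>> owl.wait_for_all()                    # block until all pending work finishes
    >>> x.to_numpy()                          # pulling data to NumPy also forces evaluation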

  • Multi-GPU, multi-CPU support:

    >>> owl.set_device(gpu0)
    >>> x = owl.zeros([256, 128])
    >>> owl.set_device(gpu1)
    >>> y = owl.randn([1024, 32], 0.0, 0.01)

    The above x and y will be executed on two cards simultaneously. How is this achieved?

    See Feature Highlight: Multi GPU Training
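
    A hedged sketch of how this extends to data parallelism. Here gpu0/gpu1 above are device handles; owl.create_gpu_device is assumed as the GPU counterpart of the create_cpu_device call shown in the issues below, and compute_gradient/batch are hypothetical:

    >>> gpu = [owl.create_gpu_device(0), owl.create_gpu_device(1)]
    >>> grads = []
    >>> for i in range(2):
    ...     owl.set_device(gpu[i])                    # subsequent ops run on card i
    ...     grads.append(compute_gradient(batch[i]))  # hypothetical per-card work
    >>> total = grads[0] + grads[1]  # the runtime moves data between cards as needed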

Tutorial and Documents

  • Tutorials and high-level concepts can be found in our wiki page
  • A step-by-step walkthrough of the MNIST example can be found here
  • We also built a tool that directly reads Caffe's configuration file and trains from it; see the document.
  • API documents can be found here

Performance

We will keep this section updated with the latest performance numbers we can achieve.

Training speed

    (images/second)   AlexNet   VGGNet   GoogLeNet
    1 card             189.63    14.37       82.47
    2 cards            371.01    29.58      160.53
    4 cards            632.09    50.26      309.27
  • The performance is measured on a machine with 4 GTX Titan cards.
  • On each card, we use a minibatch size of 256, 24, and 120 for AlexNet, VGGNet, and GoogLeNet respectively. The total minibatch size therefore grows with the number of cards (for example, training AlexNet on 4 cards uses a total minibatch size of 1024).

An end-to-end training

We also provide some end-to-end training code in the owl package, which can load Caffe's model files and perform training. Note that Minerva is not the same kind of tool as Caffe, and this logic is not our focus; we implemented it mainly to exercise Minerva's powerful and flexible programming interface (a Caffe-like network trainer can be implemented in around 700-800 lines of Python). Below is the training error over time compared with Caffe. Note that Minerva can finish GoogLeNet training in less than four days with four GPU cards.

[Figure: error curve (training error over time, Minerva vs. Caffe)]
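
To give a flavor of why such a trainer stays small, here is a hedged sketch of one layer's forward pass and update built only from NDarray primitives. It assumes, as in the owl examples elsewhere on this page, that '*' is the matrix product between NArrays, that ele.relu and create_cpu_device work as shown in the issues below, and that .trans() is the transpose; shapes and learning rate are illustrative.

import owl
import owl.elewise as ele

owl.set_device(owl.create_cpu_device())

# One fully connected layer: 784 inputs -> 10 outputs, minibatch of 256.
w = owl.randn([10, 784], 0.0, 0.01)   # weight matrix
x = owl.randn([784, 256], 0.0, 1.0)   # stand-in for a minibatch of inputs
y = ele.relu(w * x)                   # forward pass
g = y * x.trans()                     # illustrative gradient-like term
w -= 0.01 * g                         # SGD-style update

A full trainer repeats steps like this per layer and per minibatch, which is why it fits in a few hundred lines.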

Testing error rate

We trained several models from scratch with Minerva to verify correctness. The following table shows the error rates of the different networks under different test settings.

    Testing error rate   AlexNet   VGGNet   GoogLeNet
    single-view top-1      41.6%    31.6%       32.7%
    multi-view top-1       39.7%    30.1%       31.3%
    single-view top-5      18.8%    11.4%       11.8%
    multi-view top-5       17.5%    10.8%       11.0%
  • AlexNet is trained with the solver, except that we didn't use multi-group convolution.
  • GoogLeNet is trained with the quick_solver.
  • We didn't train VGGNet from scratch; we only converted the model into Minerva's format and tested it.

The models can be found at the following links: AlexNet, GoogLeNet, VGGNet

You can download the trained models and try them on your own machine using the net_tester script.

Next Plan

  • Get rid of boost library dependency by using Cython. (DONE)
  • Large scale LSTM example using Minerva.
  • Easy support for user-defined new operations.

License and support

Minerva is provided under the Apache V2 open source license.

You can use the "issues" tab on GitHub to report bugs. For non-bug issues, please send us an email at [email protected]. You can also subscribe to the discussion group: https://groups.google.com/forum/#!forum/minerva-support.

Wiki

For more information on how to install, use or contribute to Minerva, please visit our wiki page: https://github.com/minerva-developers/minerva/wiki

minerva's People

Contributors

hanwentao, hjk41, hotpxl, hyeontaek, jermainewang, redpony, serailhydra, sneakerkg, xianyi, zzhang-cn

minerva's Issues

Unittest compilation failed

When compiling the unit tests, I got an error on my machine (GCC 4.9.2 prerelease, Arch Linux 2015.03.01, kernel 3.18.6). It is caused by the following changes in commit 2e886a4:

  1. unittest_main is built as a shared library, while gtest usually builds only a static (.a) library.
  2. The -flto flag does not work with my GCC.

I suggest fixing this by switching the unit tests back to a static library and removing the LTO flag (or performing a corresponding check for it).

Failed to run MNIST example due to CudaPerformNormAddOnRow CUDA: invalid device function

(Owl Ready) ➜  mnist: python mnist_mlp.py
[18:48:23] /home/zer0n/minerva/minerva/system/minerva_system.cpp:86: dag engine enabled
Training data: 235 mini-batches
Test data: 10000 samples
(256, 10)
---Start epoch #0
[18:48:28] /home/zer0n/minerva/minerva/op/impl/cuda/cuda_perform.cu:136: Check failed: (e) == (cudaSuccess) CudaPerformNormAddOnRow CUDA: invalid device function
[1]    72399 abort (core dumped)  python mnist_mlp.py

My environment:

  • Minerva built with cmake 3.2.3 and g++ 4.9 on Ubuntu 12.04
  • CUDA 7
  • CuDNN 2

FYI, I ran Caffe's MNIST example using GPU just fine.

Differences with MShadow

Hi all,
What's the difference between MShadow and Minerva? From the documents, I know that both can perform tensor operations in a unified form on both CPU and GPU, and both are written in C++. So I'm wondering: what is the main difference between them?

C++ Documentation?

Is there a plan to release C++ API docs?

For instance, I saw a concat function in Python; is there an equivalent one in C++?

Thanks,
Kublai

`mnist_mlp.cpp` cannot compile

In the recent merge, I removed all code related to file_loader, since it can be completely replaced by the MakeNArray interface. As a result, apps/mnist_mlp.cpp and apps/ps/mnist_mlp.cpp no longer compile. We need to fix these two applications by writing IO code similar to that in apps/mnist_cnn.cpp.

Can't get libminervaps.a

I'm following the instructions to integrate Minerva with the parameter server, but I always fail to get libminervaps.a. I have some confusion here: I don't understand "then compile with make minerva". What does this mean; should I run make under the parameter server sources or under Minerva? Under the parameter server directory, it says there is no such target, and under the Minerva directory it does not give me libminervaps.a. I'm sure that I ran configure with the parameter server enabled.

self-defined elemwise function on GPU ?

Hi,

I've looked at the NIPS paper and the code snippet there presents a very convenient way of defining elem-wise function:

float Sigmoid(float x) {
  return 1.0 / (1.0 + exp(-x));
}

Then we just do:

Matrix z = (V * y + c).Map(&Sigmoid);

However, the current version seems to be completely different from the one presented in the paper, so I wonder why? And how do I easily achieve this goal in the current version of the code (instead of writing CUDA code myself)? Sorry, I just started looking at this project recently, so I might have missed some context.

Thanks
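
A hedged note on the current interface: the Python binding ships predefined element-wise ops in owl.elewise (ele.relu appears in another issue on this page; ele.sigm is assumed here to be the built-in sigmoid), rather than mapping arbitrary host functions onto device data:

import owl
import owl.elewise as ele

owl.set_device(owl.create_cpu_device())
z = owl.randn([100, 10], 0.0, 1.0)
a = ele.sigm(z)  # assumed built-in sigmoid; the paper's Map(&Sigmoid) is not exposed here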

mnist_cnn failed at Epoch #0

Here is the error:
F0310 16:38:25.787619 9513 narray.cpp:126] Check failed: lhs.Size(1) == rhs.Size(0) (512 vs. -6833920) size must match
the acts[6] has size: [32845682 4 32 256 ], and minibatch_size is 256.

use_dag flag

Hello,
I am experiencing a crash when trying to launch without the DAG from Python.

$ python tt.py --use_dag=false
[22:16:29] /home/luke/Repos/minerva/minerva/system/minerva_system.cpp:89: dag engine disabled
*** Error in `python': free(): invalid pointer: 0x000000000102a498 ***
Aborted (core dumped)

I tracked it back to:
https://github.com/dmlc/minerva/blob/master/owl/owl/libowl.pyx#L46
where argv is being freed.

Commenting that out does fix the problem but most likely leaks.

I am running on Ubuntu 15.04, building CPU-only. Launching with no flags works without an issue.

Thanks!

Binary Classifier - Log loss function

Hi,
Fantastic library!
I was just wondering: I am trying to use the library for a binary classifier experiment, using the log loss function to train the model. This is for a university experiment around benchmarking different models. Would you have time to provide an example of how to use the library to achieve this goal, and also show a visualization of how the algorithm learns and decreases the error?
Many thanks,
Best,
Andrew
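
The log-loss update itself is independent of the library; below is a minimal NumPy sketch of such a classifier (synthetic data; a hedged illustration, not an official owl recipe). The arrays can be moved onto owl NArrays piecewise via from_numpy/to_numpy.

import numpy as np

# Toy binary classification: 256 samples, 20 features, random labels.
rng = np.random.RandomState(0)
X = rng.randn(256, 20).astype(np.float32)
t = (rng.rand(256) > 0.5).astype(np.float32)
w = np.zeros(20, dtype=np.float32)

for epoch in range(100):
    p = 1.0 / (1.0 + np.exp(-X.dot(w)))               # sigmoid
    loss = -np.mean(t * np.log(p + 1e-7) +
                    (1 - t) * np.log(1 - p + 1e-7))   # log loss
    w -= 0.1 * X.T.dot(p - t) / len(t)                # gradient-descent step
    # Printing (epoch, loss) each iteration gives the requested visualization
    # of how the error decreases.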

How is Minerva/Owl different from Theano?

What advantages does it have over Theano?

Does Minerva/Owl have the automatic differentiation capability that Theano has?

I had thought that Minerva was more comparable to Caffe, but it looks like Minerva is more similar to Theano than to Caffe. Is that correct?

Can't build apps

Does Minerva work with CUDA 7? I installed CUDA 7 and all the samples worked well, but I cannot build the Minerva apps; I get errors like undefined reference to curandGenerateNormal.

...
-- cmake generator: Unix Makefiles
-- cmake build tool: /usr/bin/make
-- cmake build type: Release
-- Found cuDNN (include: ~/cudnn2, library: ~/cudnn2/libcudnn.so)
-- Found BLAS (include: /usr/include, library: /opt/openblas/lib/libcblas.so)
-- build C++ applications              -- 1
-- build unit tests                    -- 0
-- build cpu-only version              -- 0
-- build with parameter server support -- 0
-- build with BLAS library for CPU     -- 1
-- Build CXX Applications:
--   mnist_cnn_2gpu
--   mnist_mlp
--   main
--   mnist_cnn
-- Configuring done
-- Generating done
-- Build files have been written to: ~/minerva/release
[ 16%] Built target gflags
[ 32%] Built target dmlc-core
[ 32%] Built target third-party
Linking CXX shared library ../lib/libminerva.so
[ 91%] Built target minerva
Linking CXX executable main
../lib/libminerva.so: undefined reference to `curandGenerateNormal'
../lib/libminerva.so: undefined reference to `curandSetPseudoRandomGeneratorSeed'
../lib/libminerva.so: undefined reference to `curandCreateGenerator'

Here's my config.in file

BUILD_DIR=release
CXX=g++
CC=gcc
CXXFLAGS=
CUDA_ROOT=/usr/local/cuda
CUDNN_ROOT=/home/zer0n/cudnn2
BUILD_TYPE=Release
BUILD_OWL=0
BUILD_CXX_APPS=1
BUILD_TESTS=0
BUILD_WITH_PS=0
PS_ROOT=
BUILD_CPU_ONLY=0
BUILD_WITH_BLAS=1
BLAS_ROOT=/opt/openblas

For what it's worth, I have successfully installed Caffe with CUDA 7 using these instructions.

doesn't compile with cuDNN R2

cuDNN R2 has a different interface than R1. Our code does not compile with R2, giving messages like "identifier "cudnnTensor4dDescriptor_t" is undefined". We should fix this, or at least emit a warning about it.

indexing reference of NArray?

What I really want to do is a parameter update for an LSTM.
I've realized that my vocabulary is relatively large (about 500K), and having a vector of NArray is very inefficient (for reasons I don't know). I tried to sync (by calling WaitForAll) after the following code:

int N = 600000;
int D = 128;
vector<NArray> A(N, NArray::Zeros({1, D}));
for (int i = 0; i < N; ++i) {
  A[i] = NArray::Randn({1, D}, 0, 0.05);
}
// calling ms.WaitForAll() here takes a long, long time...

it takes a very long time (maybe because pushing an NArray into the vector one at a time makes a lot of malloc calls on the GPU, which is slow?).
So instead I am thinking about having one giant 2D matrix and doing the following:

NArray A = NArray::Randn({N, D}, 0, 0.01);
NArray b = NArray::Randn({1, D}, 0, 1);
A[10] = A[10] + b;  // update A

It seems that this feature is not supported?
Any suggestions or comments are very much appreciated...
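
For what it's worth, a hedged workaround in the Python binding, using only the to_numpy/from_numpy round trip shown elsewhere on this page: do the row update host-side in one bulk transfer, instead of allocating hundreds of thousands of tiny NArrays. This is bandwidth-heavy for a matrix this size and purely illustrative:

import owl

owl.set_device(owl.create_cpu_device())
A = owl.randn([600000, 128], 0.0, 0.01)
a = A.to_numpy()        # one bulk transfer to the host
a[10] += 0.05           # update a single row in NumPy
A = owl.from_numpy(a)   # one bulk transfer back as a single NArray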

script to convert the pre-trained model to caffe format

I am very interested in the Minerva project and glad to deploy Minerva on our lab servers. But we currently have a lot of existing code that is largely only compatible with Caffe. I am wondering if you could provide a script to convert networks trained by Minerva back to Caffe's format. I think it would help grab the attention of Caffe users and gradually turn them toward Minerva :)

race condition in owl imports

Hello again,
I think I found a race condition in the import of owl.
I have a fairly simple test program that crashes maybe 1 in 3 times.

import owl
import owl.elewise as ele

c = owl.create_cpu_device()
owl.set_device(c)

x = owl.zeros((10, 10))
y = ele.relu(x)

The error received on a bad run looks something like:

○ → python tt.py
[22:55:26] /home/luke/Repos/minerva/minerva/system/minerva_system.cpp:86: dag engine enabled
[22:55:26] /home/luke/Repos/minerva/minerva/backend/dag/dag_scheduler.cpp:46: create new op node #1 on device #0
[22:55:26] /home/luke/Repos/minerva/minerva/backend/dag/dag_scheduler.cpp:149: node #1 running right after creation
[22:55:26] /home/luke/Repos/minerva/minerva/backend/dag/dag_scheduler.cpp:46: create new op node #3 on device #0
[22:55:26] /home/luke/Repos/minerva/minerva/backend/dag/dag_scheduler.cpp:176: dispatching node #1 to device #0
[22:55:26] /home/luke/Repos/minerva/minerva/device/device.cpp:95: CPU device #0 create output for task data #0
[22:55:26] /home/luke/Repos/minerva/minerva/device/data_store.cpp:18: create data #0 length 400
terminate called after throwing an instance of 'dmlc::Error'
  what():  [22:55:26] minerva/common/singleton.h:13: Check failed: data_ please initialize before use
Aborted (core dumped)

Sadly I cannot get a legit stack trace, since when I run it under gdb I get no failure.

Adding a short sleep just after the import seems to fix the problem.

This is under CPU, Ubuntu 15.04, with the DAG enabled.

Thanks!

./main: symbol lookup error: ./main: undefined symbol: _ZN7minerva13MinervaSystem15CreateCpuDeviceEv

I can successfully run the given C++ sample code, but I ran into this problem when trying to build my own app.


g++ -std=c++11 -DHAS_CUDA -I./dmlc-core/include/ -I/usr/local/cuda-6.5/include -I./include/ -rdynamic ./libminerva.so -rdynamic ./libcudnn.so main.cpp -o main

when I run it, it says:

./main: symbol lookup error: ./main: undefined symbol: _ZN7minerva13MinervaSystem15CreateCpuDeviceEv

the main.cpp program is just:

#include <minerva.h>

using namespace std;
using namespace minerva;

int main(int argc, char** argv) {
  MinervaSystem::Initialize(&argc, &argv);
  MinervaSystem& ms = MinervaSystem::Instance();
  uint64_t cpuDevice = ms.CreateCpuDevice();
  ms.SetDevice(cpuDevice);
  return 0;
}


Which I think is strange, since I find exactly CreateCpuDeviceEv in the binary "libminerva.so".
I am using CentOS 6.5.

minerva with ps

Hi,

I am trying to compile Minerva with the PS (as suggested in https://github.com/dmlc/minerva/wiki/Integrate-with-PS), but I ran into some problems.
First, I cannot find the ps branch of Minerva; did you publish it in the repository?
Second, I downloaded the PS from https://github.com/hjk41/parameter_server; after compiling, I can obtain libminervaps.a and libminervaps.so (using a previous version of the Makefile). However, when I try to compile Minerva, I get the following errors:

...
Linking CXX shared library ../lib/libminerva.so
/usr/bin/ld: cannot find -lminervaps
collect2: error: ld returned 1 exit status
make[2]: *** [lib/libminerva.so] Error 1
make[1]: *** [minerva/CMakeFiles/minerva.dir/all] Error 2
make: *** [all] Error 2

I tried to include the PATH, but it doesn't work.
BTW, compiling the project without the PS is fine.

Can you help me with those?

Thanks,
Tao

Request owl.elewise.pow/sqrt

Hi guys,

Would it be possible for you to provide element-wise power/sqrt functions in the Python interface? I'd like to adjust the weights using Adagrad or RMSprop, which require such operations. However, the NumPy solution does not work well in the multi-GPU case.
Or can you show me some instructions on how to add such operations?

Thanks.
Tao

Difference between Minerva and MShadow

Dear all,

I was surprised to see that Minerva is now part of the umbrella project DMLC, so I'm a little bit lost in the middle of these awesome tools. What is the difference between them, especially between Minerva, MShadow, and cxxnet? I'm interested in implementing a distributed version of a triplet convolutional network that I have in Caffe.

Pooling output dimension is confusing if we give a non-square matrix as input

Hi,

In the given MNIST_CNN example in owl, if we set the batch size to 4, the input dimension is something like [28, 28, 1, 4], since each picture is a 28*28 square matrix. However, I found that when the input matrix is non-square, the output dimension is confusing. I am wondering whether this is a bug or whether it is implemented that way intentionally.

For example, if input.shape is [4, 2, 1, 4] in owl format and I set pooling = conv.Pooler(2, 2, 2, 2, 0, 0, conv.pool_op.max), then after pooling.ff(input) I expect to get [2, 1, 1, 4]. But I actually get [1, 2, 1, 4] as the output dimension in owl. Should I expect [1, 2, 1, 4] as the result, or is something going wrong inside the pooling function?

Any comments and suggestions are greatly appreciated. Thanks!

build failed on ubuntu14.04

File /home/ubgpu/github/DMLC/minerva/release/CMakeFiles/CMakeTmp/CheckSymbolExists.c:

/* */
#include <pthread.h>

int main(int argc, char** argv)
{
  (void)argv;
#ifndef pthread_create
  return ((int*)(&pthread_create))[argc];
#else
  (void)argc;
  return 0;
#endif
}

Determining if the function pthread_create exists in the pthreads failed with the following output:
Change Dir: /home/ubgpu/github/DMLC/minerva/release/CMakeFiles/CMakeTmp

Run Build Command: /usr/bin/make "cmTryCompileExec1004526741/fast"
/usr/bin/make -f CMakeFiles/cmTryCompileExec1004526741.dir/build.make CMakeFiles/cmTryCompileExec1004526741.dir/build
make[1]: Entering directory `/home/ubgpu/github/DMLC/minerva/release/CMakeFiles/CMakeTmp'
/usr/bin/cmake -E cmake_progress_report /home/ubgpu/github/DMLC/minerva/release/CMakeFiles/CMakeTmp/CMakeFiles 1
Building C object CMakeFiles/cmTryCompileExec1004526741.dir/CheckFunctionExists.c.o
/usr/bin/gcc -DCHECK_FUNCTION_EXISTS=pthread_create -o CMakeFiles/cmTryCompileExec1004526741.dir/CheckFunctionExists.c.o -c /usr/share/cmake-2.8/Modules/CheckFunctionExists.c
Linking C executable cmTryCompileExec1004526741
/usr/bin/cmake -E cmake_link_script CMakeFiles/cmTryCompileExec1004526741.dir/link.txt --verbose=1
/usr/bin/gcc -DCHECK_FUNCTION_EXISTS=pthread_create CMakeFiles/cmTryCompileExec1004526741.dir/CheckFunctionExists.c.o -o cmTryCompileExec1004526741 -rdynamic -lpthreads
/usr/bin/ld: cannot find -lpthreads
collect2: error: ld returned 1 exit status
make[1]: *** [cmTryCompileExec1004526741] Error 1
make[1]: Leaving directory `/home/ubgpu/github/DMLC/minerva/release/CMakeFiles/CMakeTmp'
make: *** [cmTryCompileExec1004526741/fast] Error 2

Minerva GPU-- free memory for a variable

Hi

Thanks for the great tool.
I am trying to use Minerva to run an RNN on a huge dataset. However, GPU memory usage increases gradually, and the program crashes once it exceeds the GPU memory limit. My first question is: is there any way/function to free the memory of unused variables? I have used wait_for_all, but it does not solve the problem.

I tried to write a memory-freeing function using cudaFree. The function is able to free the memory of the NArray variable; however, the pointer to the NArray variable still exists. When I reuse the NArray variable for assignment recursively, e.g. data = owl.from_numpy(np.array([...])), I found that it works for a new array with a different size, but it crashes when the size of the new array is the same as the old one that was deleted. For example:

# cannot work
data = owl.from_numpy(np.arange(10000).reshape(100, 100))
owl.free_memory(data)  # I wrote this function myself using cudaFree
data = owl.from_numpy(np.arange(10000).reshape(100, 100))
owl.free_memory(data)  # core dump here

# can work
data = owl.from_numpy(np.arange(10000).reshape(100, 100))
owl.free_memory(data)
data = owl.from_numpy(np.arange(20000).reshape(200, 100))
owl.free_memory(data)

Could you please help me find the reason? Thanks a lot.

Tutorial, API Documentation?

Hi,
Is there a tutorial for the API in this project? I've found only the sample code (mnist_mlp, mnist_cnn, ...).

Thanks!

broadcasting NArray + NArray

Hello. First, thanks for the really cool library!

I am experiencing odd issues with broadcasting when doing element-wise operations between two arrays. I would expect all of the following to work, but not all of them do.

owl.zeros((3,3)) + 10 # works
owl.zeros((3,3)) + np.array(10) # works
owl.zeros((3,3)) + np.array([10]) # works
owl.zeros((3,3)) + np.array([[10]]) # works


owl.zeros((3,3)) + owl.from_numpy(np.array([[10]])) # Fails

what():  [22:31:13] /home/luke/Apps/minerva/minerva/op/impl/cuda.cpp:223: Check failed: (closure.dims_to_replicate.NumDims()) == (1) currently do norm on one dimension only

owl.zeros((3,3)) + owl.from_numpy(np.array(10)) # Fails

*** RuntimeError: [22:32:19] /home/luke/Apps/minerva/minerva/narray/narray.cpp:197: Check failed: (lhs.Size().NumDims()) == (rhs.Size().NumDims()) #dimension mismatch

It appears that broadcasting does not work between two owl arrays. Is this correct? Is there a way to work around this?

Thanks!
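
A hedged workaround until NArray-NArray broadcasting is supported: materialize the broadcast on the NumPy side first, so both operands already have matching shapes (np.broadcast_to requires NumPy >= 1.10; np.tile works on older versions):

import numpy as np
import owl

owl.set_device(owl.create_cpu_device())
a = owl.zeros((3, 3))
b = np.broadcast_to(np.array([[10]], dtype=np.float32), (3, 3))
c = a + owl.from_numpy(np.ascontiguousarray(b))  # shapes match; no broadcasting needed
print(c.to_numpy())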
