apache / mxnet

Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, Javascript and more

Home Page: https://mxnet.apache.org

License: Apache License 2.0

C++ 48.49% Makefile 0.04% Cuda 6.14% C 0.61% Python 34.68% Shell 0.69% CMake 0.87% Jupyter Notebook 7.60% Groovy 0.53% Dockerfile 0.22% PowerShell 0.03% Cython 0.11%

mxnet's Issues

USE_CUDNN_PATH does not work

My system has two versions of cuDNN. When I set USE_CUDNN_PATH and compile, the build cannot find it; looking at the Makefile, I cannot see where the variable is actually used.

[R] Progress Issue Tracking on R-package Documentation

  • Document the R-side functions using roxygen
  • Automatic wrapper generation
    • Write an Rcpp function (a wrapper generator) on the C++ side that lists the functions we registered and generates a mxnet_generated.R.
    • In mxnet_generated.R the vararg functions are generated, and the docstrings of the non-internal functions are written to the comments.

running the test case failed

The instructions for running the test case say:

cd ..; python example/mnist/mlp.py

However, running this directly fails with 'ImportError: No module named get_data'.
When I instead do 'cd example/mnist/' and then run 'python mlp.py', the script works correctly.
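
A minimal workaround sketch, assuming one wants to keep launching from the repo root (the directory layout is taken from this issue; this is not an official fix): putting example/mnist on sys.path lets data.py's bare 'import get_data' resolve.

import os
import sys

# make example/mnist importable so that data.py's 'import get_data' resolves
# even when the script is launched from the repository root
sys.path.insert(0, os.path.join(os.getcwd(), "example", "mnist"))

import get_data  # should now succeed from the repo root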

Build fails on Mac 10.10, "dynamic module does not define init function"

Hi all,

I tried to install on my Mac (10.10) to first run a demo on a small dataset. I deleted the -fopenmp flag as suggested for XGBoost, and I got libmxnet.so and libmxnet.a in the lib/ folder. But I still failed to run example/mnist.py as suggested by the tutorial; it reported "dynamic module does not define init function". What should I do next to solve this problem? Do you have a tutorial for installing on Mac? Thanks a lot.

R's mx.nd.load wraps the array in a list

The following result is observed:

> mat = mx.nd.ones(10)
> mx.nd.save(mat,'~/temp.mat')
> as.array(mat)
 [1] 1 1 1 1 1 1 1 1 1 1
> as.array(mx.nd.load('~/temp.mat'))
[[1]]
 [1] 1 1 1 1 1 1 1 1 1 1

The problem could be in mx.nd.internal.load.
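
For comparison, a sketch of the analogous behavior in the Python frontend (assuming Python's mx.nd.save/mx.nd.load API, which serializes collections of arrays): a single array round-trips inside a list there too, which suggests the R wrapper is exposing the underlying convention rather than adding the extra level itself.

import mxnet as mx

# save/load operate on collections of NDArrays, so a single array
# comes back wrapped in a list and must be unwrapped by indexing
mat = mx.nd.ones((10,))
mx.nd.save("/tmp/temp.mat", [mat])
loaded = mx.nd.load("/tmp/temp.mat")   # -> [NDArray]
print(loaded[0].asnumpy())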

R documentation with roxygen2

Currently the functions are documented in roxygen2 style, but no actual docs/NAMESPACE file is generated by the package.

Hopefully this issue tracks the progress on automatic documentation generation with roxygen2 for the R package.

training the ImageNet model

I have successfully run mxnet on the MNIST dataset using the Python scripts in the examples. However, when running the ImageNet example, an error occurred. First I ran data.py (this completes normally), and then alexnet.py, which crashed with "Segmentation fault (core dumped)".

In data.py, what is the data structure behind ' path_imgrec="data/ilsvrc12/train.rec" ', and how is that data generated? What should be done before running alexnet.py on the ImageNet dataset?
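
For reference, a sketch of how a .rec file is typically consumed (parameter names follow the mx.io.ImageRecordIter API; the path is the one quoted above, while the data shape and batch size are placeholder values). The .rec file itself is a packed image-record binary, normally produced beforehand by the im2rec tool shipped with mxnet from a list of image paths and labels.

import mxnet as mx

# iterate over a packed image-record file; each batch yields decoded,
# optionally augmented images plus their labels
train_iter = mx.io.ImageRecordIter(
    path_imgrec="data/ilsvrc12/train.rec",  # the record file from data.py
    data_shape=(3, 224, 224),               # channels, height, width
    batch_size=32,
    rand_crop=True,                         # random cropping augmentation
    rand_mirror=True)                       # random horizontal flips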

Engine Issue Tracking

  • Use C-style callbacks
  • Move the dispatcher code outside of the lock
  • Implement PushAsync for NaiveEngine
  • Isolate the scheduling part and the running policy.
    • The DoPushToQueue logic can be isolated out as a subclass.
    • This enables multiple threading policies for DoPushToQueue.

'undefined symbol: cblas_sgemm' error when running mnist example

Hi, I have successfully built mxnet and installed the Python package, but when I try to run the MNIST example:

python example/mnist/mlp.py

I got this error:

Traceback (most recent call last):
  File "example/mnist/mlp.py", line 2, in <module>
    from data import mnist_iterator
  File "/home/xuetingli/Documents/dmlc/mxnet/example/mnist/data.py", line 9, in <module>
    import mxnet as mx
  File "/usr/lib/python2.7/site-packages/mxnet-0.5.0-py2.7.egg/mxnet/__init__.py", line 7, in <module>
    from .base import MXNetError
  File "/usr/lib/python2.7/site-packages/mxnet-0.5.0-py2.7.egg/mxnet/base.py", line 43, in <module>
    _LIB = _load_lib()
  File "/usr/lib/python2.7/site-packages/mxnet-0.5.0-py2.7.egg/mxnet/base.py", line 35, in _load_lib
    lib = ctypes.cdll.LoadLibrary(lib_path[0])
  File "/usr/lib64/python2.7/ctypes/__init__.py", line 438, in LoadLibrary
    return self._dlltype(name)
  File "/usr/lib64/python2.7/ctypes/__init__.py", line 360, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /usr/lib/python2.7/site-packages/mxnet-0.5.0-py2.7.egg/mxnet/libmxnet.so: undefined symbol: cblas_sgemm

I'm on CentOS 7 and I have installed BLAS and ATLAS via:

sudo yum install blas blas-devel atlas atlas-devel

Does anyone know how to solve this? Thx!
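
One workaround sketch, under the assumption that libmxnet.so was linked without an explicit dependency on the CBLAS library: preloading the BLAS shared object with RTLD_GLOBAL before importing mxnet makes its symbols visible to the subsequent dlopen. The library path below is a guess for a CentOS 7 ATLAS install and may need adjusting.

import ctypes

# expose cblas_sgemm and friends globally before libmxnet.so is loaded
# (path is an assumption; locate yours with e.g. 'ldconfig -p | grep cblas')
ctypes.CDLL("/usr/lib64/atlas/libcblas.so", mode=ctypes.RTLD_GLOBAL)

import mxnet as mx  # should now load without the undefined-symbol error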

singleton manager

A singleton manager to guarantee the destruction order. One reason is that any class that needs to destroy an NArray must guarantee the engine is still alive, because NArray calls Engine::Get()->PushDelete().

An example solution:

struct Singleton {
  // define all singletons in one place so their construction (and reverse
  // destruction) order is well defined
  static KVStore store;
  // others ...
  static Engine engine;
};

Engine* Engine::Get() { return &Singleton::engine; }

error occurred when making mxnet

In file included from src/engine/threaded_engine_pooled.cc:11:0:
src/engine/./threaded_engine.h:250:3: error: overriding ‘virtual mxnet::engine::ThreadedEngine::~ThreadedEngine() noexcept (true)’
Makefile:94: recipe for target 'build/engine/threaded_engine_pooled.o' failed
make: *** [build/engine/threaded_engine_pooled.o] Error 1

Relation to cxxnet?

Since you are also running cxxnet and this one seems quite similar:
Which horse should I bet on for the future?
Is this basically cxxnet++?

A question about parameter initialization

I tested mxnet with data I previously ran on cxxnet (using AlexNet and Inception network structures) and noticed a strange phenomenon: unless I use batchnorm, the networks do not converge at all, and the runs appear to learn nothing. Simply adding BN after the convolution layers makes them converge normally. At first I suspected the inplace optimization, but disabling inplace made no difference. I have run similar experiments on cxxnet before, and this dataset converged well there. Is there something about weight initialization that needs special attention? Could you provide parameter settings as precise as cxxnet's?
For example:
momentum = 0.9
wmat:lr = 0.05
wmat:wd = 0.0001
bias:wd = 0.000
bias:lr = 0.1
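
As a point of comparison, a hedged sketch of where these knobs live in the Python frontend (the FeedForward arguments below match the document's other examples, but the mapping from cxxnet's per-weight wmat:/bias: settings is approximate since mxnet here exposes one global learning rate and weight decay; the choice of Xavier initialization is an illustration, not this issue's answer):

import mxnet as mx

# 'net' is assumed to be a symbol built elsewhere (e.g. the AlexNet example)
model = mx.model.FeedForward(
    symbol=net,
    num_round=20,
    learning_rate=0.05,                   # cf. cxxnet wmat:lr
    momentum=0.9,                         # cf. cxxnet momentum
    wd=0.0001,                            # cf. cxxnet wmat:wd
    initializer=mx.initializer.Xavier())  # explicit weight initialization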

Can I remove the union definition in struct LinkedList?

In object_pool.h, line 48, there is a struct with a union in it:

  struct LinkedList {
    union {
      LinkedList* next{nullptr};
      T t;
    };
  };

In my environment (VS2013), the compiler cannot accept a union containing a class with a copy constructor. Can I remove the union definition, such as

  struct LinkedList {
      LinkedList* next{nullptr};
      T t;
  };

, or must I use boost::variant to replace it?

minor refactor of dag_engine.h

There are some suggestions that could potentially make DAGEngine more readable:

(Variable, OprHandle) -> (VarPtr, OprPtr) or (VarHandle, OprHandle)
(use_var, mutate_var) -> (const_var, mutable_var) or (read_var, write_var)
(NewVar, NewOperator) -> (NewVar, NewOpr) or (NewVariable, NewOperator)

I'm not too sure about the following two:

(DeleteOperator, PushDelete) -> (DeleteOpr, DeleteVar)
(Var, Opr, Fn) -> (Var, Oper, Func)

Do we need both Push(Fn exec_fun) and PushAsync? It looks like we can always first call NewOperator and then Push(OprHandle).

issue running on GPU (Ubuntu, cuda 7.5)

I'm trying to run mlp.py from the MNIST example, and it throws an error when run with the GPU context:

model = mx.model.FeedForward(
    ctx = mx.gpu(), symbol = mlp, num_round = 20,
    learning_rate = 0.1, momentum = 0.9, wd = 0.00001)
  File "/usr/local/lib/python2.7/dist-packages/mxnet-0.5.0-py2.7.egg/mxnet/base.py", line 72, in check_call
    raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: [16:53:11] src/storage/storage.cc:43: Please compile with CUDA enabled

I'm pretty sure I have compiled with CUDA enabled. I copied the config file over to the main directory (and modified it accordingly, just to make sure).

I set

USE_CUDA=1
USE_CUDA_PATH=/usr/local/cuda

I also set USE_CUDNN to both 0 and 1 on various attempts.

At this point, I'm not sure where to go. Is there something else I should be setting?

Thanks for any assistance.
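
A quick diagnostic sketch, plus a guess at the cause (an assumption, not a confirmed diagnosis): this error often means Python is loading a stale libmxnet.so, for example one installed earlier into site-packages as an egg, rather than the freshly rebuilt CUDA-enabled one, so a make clean rebuild followed by reinstalling the Python package is worth trying. The snippet below reproduces the check without running the whole example:

import mxnet as mx

# print the path of the package Python actually loaded, which may differ
# from the copy just built in the source tree
print(mx.__file__)

# allocating on the GPU raises the same "compile with CUDA enabled" error
# if the loaded libmxnet.so was built with USE_CUDA=0
a = mx.nd.ones((2, 2), ctx=mx.gpu(0))
print(a.asnumpy())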

Build fails on Mac OS X (Yosemite, gcc 4.9 from Homebrew with OpenMP, OpenCV, BLAS)

After following the build instructions (checking out the latest mxnet recursively and installing g++, OpenCV, and OpenBLAS), make fails on isnan in ndarray_function from mshadow (latest mxnet commit 6155880):

g++ -std=c++0x -DMSHADOW_FORCE_STREAM -Wall -O3 -I./mshadow/ -I./dmlc-core/include -fPIC -Iinclude -msse3 -funroll-loops -Wno-unused-parameter -Wno-unknown-pragmas -DMSHADOW_USE_CUDA=0 -DMSHADOW_USE_CBLAS=1 -DMSHADOW_USE_MKL=0 -DMSHADOW_RABIT_PS=0 -DMSHADOW_DIST_PS=0 -DMXNET_USE_OPENCV=1 -fopenmp  -MM -MT build/ndarray/ndarray.o src/ndarray/ndarray.cc >build/ndarray/ndarray.d
g++ -std=c++0x -c -DMSHADOW_FORCE_STREAM -Wall -O3 -I./mshadow/ -I./dmlc-core/include -fPIC -Iinclude -msse3 -funroll-loops -Wno-unused-parameter -Wno-unknown-pragmas -DMSHADOW_USE_CUDA=0 -DMSHADOW_USE_CBLAS=1 -DMSHADOW_USE_MKL=0 -DMSHADOW_RABIT_PS=0 -DMSHADOW_DIST_PS=0 -DMXNET_USE_OPENCV=1 -fopenmp  -c src/ndarray/ndarray.cc -o build/ndarray/ndarray.o
In file included from src/ndarray/ndarray.cc:13:0:
src/ndarray/./ndarray_function.h: In static member function 'static mxnet::real_t mxnet::ndarray::Clip::mshadow_op::Map(mxnet::real_t, mxnet::real_t)':
src/ndarray/./ndarray_function.h:43:18: error: 'isnan' was not declared in this scope
       if (isnan(a)) return 0.0f;
                  ^
src/ndarray/./ndarray_function.h:43:18: note: suggested alternative:
In file included from /usr/local/Cellar/gcc49/4.9.3/include/c++/4.9.3/random:38:0,
                 from /usr/local/Cellar/gcc49/4.9.3/include/c++/4.9.3/bits/stl_algo.h:66,
                 from /usr/local/Cellar/gcc49/4.9.3/include/c++/4.9.3/algorithm:62,
                 from ./dmlc-core/include/dmlc/./parameter.h:18,
                 from ./dmlc-core/include/dmlc/registry.h:14,
                 from src/ndarray/ndarray.cc:8:
/usr/local/Cellar/gcc49/4.9.3/include/c++/4.9.3/cmath:632:5: note:   'std::isnan'
     isnan(_Tp __x)
     ^
In file included from src/ndarray/ndarray.cc:8:0:
src/ndarray/ndarray.cc: At global scope:
./dmlc-core/include/dmlc/registry.h:194:22: warning: 'mxnet::__make_NDArrayFunctionReg__set_value__' defined but not used [-Wunused-variable]
   static EntryType & __make_ ## EntryTypeName ## _ ## Name ## __ =      \
                      ^
include/mxnet/ndarray.h:580:3: note: in expansion of macro 'DMLC_REGISTRY_REGISTER'
   DMLC_REGISTRY_REGISTER(::mxnet::NDArrayFunctionReg, NDArrayFunctionReg, name)
   ^
src/ndarray/ndarray.cc:508:1: note: in expansion of macro 'MXNET_REGISTER_NDARRAY_FUN'
 MXNET_REGISTER_NDARRAY_FUN(_set_value).set_function(SetValueOp);
 ^

Can you suggest a fix so I can test on my Mac? Thanks.

mx.nd.array in R cannot deal with matrix input properly

The following behaviour is observed on my machine:

> require(mxnet)
> x = matrix(1:4,2,2)
> x
     [,1] [,2]
[1,]    1    3
[2,]    2    4
> mat = mx.nd.array(as.array(x), mx.cpu(0))
> mat$as.array()
     [,1] [,2]
[1,]    1    0
[2,]    0    0

> x = matrix(1:100,10,10)
> x
      [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
 [1,]    1   11   21   31   41   51   61   71   81    91
 [2,]    2   12   22   32   42   52   62   72   82    92
 [3,]    3   13   23   33   43   53   63   73   83    93
 [4,]    4   14   24   34   44   54   64   74   84    94
 [5,]    5   15   25   35   45   55   65   75   85    95
 [6,]    6   16   26   36   46   56   66   76   86    96
 [7,]    7   17   27   37   47   57   67   77   87    97
 [8,]    8   18   28   38   48   58   68   78   88    98
 [9,]    9   19   29   39   49   59   69   79   89    99
[10,]   10   20   30   40   50   60   70   80   90   100
> mat = mx.nd.array(as.array(x), mx.cpu(0))
> mat$as.array()
      [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
 [1,]    1    0    0    0    0    0    0    0    0     0
 [2,]    0    0    0    0    0    0    0    0    0     0
 [3,]    0    0    0    0    0    0    0    0    0     0
 [4,]    0    0    2    0    0    0    0    0    0     0
 [5,]    0    0    0    0    0    0    0    0    0     0
 [6,]    0    0    0    0    0    0    0    0    0     0
 [7,]    0    0    0    0    3    0    0    0    0     0
 [8,]    0    0    0    0    0    0    0    0    0     0
 [9,]    0    0    0    0    0    0    0    0    0     0
[10,]    0    0    0    0    0    0    4    0    0     0

grad allocation in simple_bind

I'm reading the Python front end for simple_bind, where we have:

if not (name.endswith('data') or name.endswith('label')):
    grad_ndarrays[name] = zeros(shape, ctx)

It seems a little arbitrary to detect a data symbol by its name ending with data or label. At the very least we should document this somewhere. Beyond that, I think we could have a more explicit way:

Allow users to attach attributes to symbols, e.g. marking a symbol as data, or as frozen (see the sketch at the end of this issue). Consider at least the following (quite common) scenarios:

  • The data symbol might not be named with a data ending. For example, in recurrent models the inputs might be named data_t0, data_t1, data_t2, etc.
  • The user wants to fix some of the layers while training only the others. This is quite common in layer-wise pre-training, as well as in training complicated multi-modal architectures.
  • The user wants to compute the gradient with respect to the data and back-propagate into the data while fixing the weights. This is used to visualize networks or to generate adversarial examples. Or maybe someone wants to use this to generate an ideal test set on which a given model performs really well.

While users can always use the raw bind to achieve full control, do you think it would be good to have a not_so_simple_bind that does a bit more than simple_bind?
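
A purely hypothetical sketch of what the attribute idea might look like from Python (none of this API exists at the time of this issue; the attr argument and the 'kind'/'frozen' keys are invented here for illustration):

import mxnet as mx

# hypothetical: mark symbols explicitly instead of relying on name suffixes
data = mx.symbol.Variable('data_t0', attr={'kind': 'data'})     # no grad buffer
w1 = mx.symbol.Variable('fc1_weight', attr={'frozen': True})    # no update

# a not_so_simple_bind could then consult these attributes when deciding
# which arguments get gradient NDArrays allocated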

code in C++ backend vs. script front end

Hi! I'm planning to add the optimizer and model abstractions to the Julia frontend next. But since we now have at least three frontends, I want to pause a bit and ask about mxnet's design philosophy. Specifically, what level of code do you think should live in the backend, and what should live in the frontends?

I think the advantage of keeping code in the backend is avoiding code redundancy across multiple frontends, which might otherwise gradually evolve into slightly different shapes or even become inconsistent with each other.

On the other hand, writing non-critical programming logic in the frontend scripting language is more flexible and easier for users to extend. The optimizer might be a good example.

A related question: what is the plan for supporting user-defined layers? Are we going to enrich the Symbol and NDArray APIs so that end users can easily define their own layers (though perhaps less efficiently) in the target scripting language? Or are users always encouraged to write new layers in C++ in the backend?

A question about running speed

I have tested inception-bn many times, and I found my machine runs mxnet quite fast at the beginning (184 samples/sec) but slows down a little later (174 samples/sec). Why, and how can I keep mxnet at full speed all the time? Are temperatures too hot (max is 63°C)? My CPU utilization is 500% (full would be 1200%): 6 cores, 12 hardware threads.
INFO:root:Start training with [gpu(0), gpu(1), gpu(2)]
INFO:root:Iter[0] Batch [10] Speed: 201.07 samples/sec
INFO:root:Iter[0] Batch [20] Speed: 188.29 samples/sec
INFO:root:Iter[0] Batch [30] Speed: 183.96 samples/sec
INFO:root:Iter[0] Batch [40] Speed: 184.11 samples/sec
INFO:root:Iter[0] Batch [50] Speed: 184.51 samples/sec
INFO:root:Iter[0] Batch [60] Speed: 185.40 samples/sec
INFO:root:Iter[0] Batch [70] Speed: 178.27 samples/sec
INFO:root:Iter[0] Batch [80] Speed: 176.74 samples/sec
INFO:root:Iter[0] Batch [90] Speed: 174.46 samples/sec
INFO:root:Iter[0] Batch [100] Speed: 175.03 samples/sec
INFO:root:Iter[0] Batch [110] Speed: 171.43 samples/sec
INFO:root:Iter[0] Batch [120] Speed: 174.72 samples/sec
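
Not an answer, but one variable worth ruling out (an assumption on my part: the environment variable below exists in mxnet, though I have not verified it affects this workload): pinning the engine's CPU worker thread count before importing mxnet, to see whether the gradual slowdown tracks the CPU-side data pipeline rather than GPU temperature.

import os

# assumption: MXNET_CPU_WORKER_NTHREADS sizes the engine's CPU worker pool;
# it must be set before mxnet is imported to take effect
os.environ['MXNET_CPU_WORKER_NTHREADS'] = '6'

import mxnet as mx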

[io] Segfault when a path does not exist

When an IO reader points to a path that does not exist, the result is a segfault instead of a dmlc::Error that is thrown out and caught by the Python environment.

Need to check the cause: whether it is the dmlc-core input split API, or loading in a constructor that was not marked noexcept(false).

object location and detection

Hi,

fast-rcnn and faster-rcnn are very useful applications. Both are implemented and trained in the Caffe framework.
Is there a way to import their network definitions and pretrained models into mxnet? If not, do you have a plan to implement fast-rcnn or faster-rcnn in mxnet?

Thanks,

Kaishi

Balance Allocation for Multiple Cards On Device KVStore

I trained an 11-layer VGGNet with a total batch size of 36 across three GTX 780 GPUs, conv workspace set to 256, CUDA 7.0, but without cuDNN (I figured this avoids some nondeterminism in memory allocation). I observed GPU memory usage of 2795 MB, 2661 MB, and 2383 MB. Where does this decreasing difference of 100-plus MB per card come from? I hope the difference can be made smaller so memory utilization is higher: since the cards are identical, usable memory is bounded by the GPU with the largest footprint, and with too small a batch size my data sometimes fails to converge :(

About training accuracy

Hi,
I used alexnet.py in /example/imagenet for training with the configuration unchanged, but after 20 rounds the accuracy is only 0.438870, much worse than the result shown in the docs (81%).
The dataset is from ImageNet.
I wonder where this difference comes from.
Any help?
Thx.

refactor of symbolic.h

Several comments about symbolic.h; please correct me if I'm wrong. Our current procedure is:

workload (neural network) -- 1 --> expression -- 2 --> dag -- 3 --> results

1. compose by `Symbol`
2. bind with input/context by `Executor`
3. execute by `DAGEngine`

Symbol usually means an atomic symbol, such as x and y in a programming language, so Expression (or Symbolic Expression) is probably a better name here. The static syntax is:

Expr e := Var[x]              % a single variable x
        | list(x_1, ..., x_n) % a list of variables
        | op(e_1, ..., e_n)   % an operator taking several expressions as arguments

The current implementation breaks op(e_1, ...) into Create(op) and Compose. But under the above syntax, Create(op) returns an invalid expression, since the arguments are not yet given.

StaticGraph is also not a perfect name. How about BatchExpr? (A static/const expression usually means one that can be evaluated at compile time, which is not exactly what we are referring to here.)

Executor doesn't execute: it binds an expression into a dag, which is then executed by the dag engine. So perhaps we should use the name Binder.

By the way, does heads_ really mean outputs_? I have trouble reasoning about the graph direction.

crash when running python ./predict-with-pretrained-model.py

I have an Nvidia 960 graphics card with 4 GB of memory. When I ran python ./predict-with-pretrained-model.py, I got the following error message. Any idea why this happens?

[18:28:42] ./dmlc-core/include/dmlc/logging.h:208: [18:28:42] src/operator/./convolution-inl.h:251: Check failed: (param_.workspace) >= (scol.Size() + sdst.Size())
Minimum workspace size: 169394176
Given: 134217728
[18:28:42] ./dmlc-core/include/dmlc/logging.h:208: [18:28:42] src/engine/./threaded_engine.h:290: [18:28:42] src/operator/./convolution-inl.h:251: Check failed: (param_.workspace) >= (scol.Size() + sdst.Size())
Minimum workspace size: 169394176
Given: 134217728
terminate called after throwing an instance of 'dmlc::Error'
what(): [18:28:42] src/engine/./threaded_engine.h:290: [18:28:42] src/operator/./convolution-inl.h:251: Check failed: (param_.workspace) >= (scol.Size() + sdst.Size())
Minimum workspace size: 169394176
Given: 134217728
Aborted (core dumped)

Thanks,
Kaishi
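
Reading the check: the convolution needs about 162 MB of scratch space (169394176 bytes) but the configured limit is 128 MB (134217728 bytes). A hedged sketch of one way past this, assuming the model definition is available for editing: the Convolution symbol takes a workspace parameter, an upper bound in MB, so raising it above the required size should satisfy the check (the kernel and filter values below are placeholders, not the pretrained model's).

import mxnet as mx

data = mx.symbol.Variable('data')
# raise the scratch-space cap above the ~162 MB the log asks for;
# the failing run's effective setting was 128 MB
conv = mx.symbol.Convolution(data=data,
                             kernel=(7, 7),
                             num_filter=64,
                             workspace=256)  # in MB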

Using numpy.ndarray for Input Data

After trying the MNIST mlp.py example, I wanted to use my own data to train an MLP. So I load the input from *.txt files in Python as numpy.ndarray, which is then passed to model.fit(). But I keep getting the following error:

Traceback (most recent call last):
  File "test.py", line 36, in <module>
    model.fit(X=train_data, y=train_label, eval_data=(val_data, val_label))
  File "/home/code/xxx/mxnet/mxnet_master/python/mxnet/model.py", line 675, in fit
    logger=logger)
  File "/home/code/xxx/mxnet/mxnet_master/python/mxnet/model.py", line 273, in _train_multi_device
    label[islice].copyto(target)
  File "/home/code/xxx/mxnet/mxnet_master/python/mxnet/ndarray.py", line 320, in copyto
    return NDArray._copyto(self, out=other)
  File "/home/code/xxx/mxnet/mxnet_master/python/mxnet/ndarray.py", line 640, in generic_ndarray_function
    c_array(NDArrayHandle, [v.handle for v in mutate_vars])))
  File "/home/code/xxx/mxnet/mxnet_master/python/mxnet/base.py", line 72, in check_call
    raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: [21:58:27] src/ndarray/ndarray.cc:158: Check failed: from.shape() == to->shape() operands shape mismatch

My data consists of 1500-dimensional feature vectors. The shapes of my data arrays are:

train_data.shape = (1800, 1500)
train_label.shape = (1800, 1)
val_data.shape = (200, 1500)
val_label.shape = (200, 1)

The MLP has two output units in the last layer. I have tried transposing the input data array, using 1500 units for fc1, and using 2-dim labels (for the two output units). None of these helps.

So what is the correct way to use my own data? Thanks.
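
A guess at the mismatch, offered as an assumption rather than a confirmed diagnosis: the failing copy is on the label slice, and model.fit generally expects labels as a 1-D array of length N rather than an (N, 1) column, so flattening the label arrays may clear the shape check (variable names below are the ones from this issue):

# (N, 1) column vectors -> (N,) 1-D label arrays
train_label = train_label.ravel()  # (1800, 1) -> (1800,)
val_label = val_label.ravel()      # (200, 1)  -> (200,)

model.fit(X=train_data, y=train_label, eval_data=(val_data, val_label))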

build and installation failed

Error encountered when running:

cd ..; python example/mnist/mlp.py

Traceback (most recent call last):
  File "example/mnist/mlp.py", line 2, in <module>
    from data import mnist_iterator
  File "/mxnet/example/mnist/data.py", line 6, in <module>
    import get_data
ImportError: No module named get_data

[IO] Refactor Tracking

  • Make the iterators more consistent with the cxxnet structure
  • Use an NDArray store as the mean image
  • The tensor processing has moved to ImageNormalizeIter
  • The creator macro is simplified
  • Catch exceptions, at least file-loading errors, c.f. #150. Consider opening the file in the init function instead of in the child thread.

@antinucleon @winstywang @sneakerkg

The commit was pushed to master directly due to my mistake. I won't revert it for now: 0e02bd7

Please review and test whether the new setting is correct.

[R] Setup Travis Test

See the xgboost repo for reference: https://github.com/dmlc/xgboost

Because R-travis requires sudo, let us only do it on OSX (I know this sounds strange :), where we have sudo rights. The current Linux build uses a container build, which disables sudo.

example exception

When I updated the code in the afternoon and ran the example cifar10.py:

src/io/./image_augmenter.h:307: Check failed: p_data->size(1) >= param_.data_shape[1] && p_data->size(2) >= param_.data_shape[2] Data size must be bigger than the input size to net

Recent RoadMap

  • modify clip, add dot
  • ndarray trans
  • more math functions for ndarray
  • interface for making optimization algo as a single ndarray op
  • complex example: adversary
  • lstm by using symbol/narray
  • general rnn interface
  • add sum, support loss symbol
  • sparse support [together with mshadow]
  • cusolver support [to ndarray]
  • more visualization
  • launch kernel directly in OP
