hpi-xnor / bmxnet-v2

BMXNet 2: An Open-Source Binary Neural Network Implementation Based on MXNet

License: Apache License 2.0

CMake 0.41% Makefile 0.81% R 1.44% C++ 31.87% Python 28.30% Java 0.83% C 0.90% Shell 1.72% Groovy 0.38% Dockerfile 0.33% PowerShell 0.06% Clojure 2.49% Jupyter Notebook 14.76% Batchfile 0.05% MATLAB 0.14% Perl 6.26% Cuda 3.89% Scala 5.21% ANTLR 0.01% HTML 0.16%
binary-neural-networks bmxnet-v2 deep-learning mxnet xnor-convolutions

bmxnet-v2's People

Contributors

aaronmarkham, anirudh2290, antinucleon, cjolivier01, eric-haibin-lin, haojin2, hjk41, hotpxl, iblislin, jopyth, kellensunderland, kevinthesun, lanking520, larroy, marcoabreu, mli, piiswrong, pluskid, sandeep-krishnamurthy, sneakerkg, sxjscience, szha, terrytangyuan, tqchen, vchuravy, winstywang, yajiedesign, yzhliu, zheng-da, zhreshold


bmxnet-v2's Issues

RuntimeError: Cannot find the MXNet library

Description

I am facing a problem installing BMXNet.

Environment info (Required)

cuDNN 7.1.4
CUDA 9.2
nvcc V9.2.148

The following Python packages are installed:

atomicwrites==1.3.0
attrs==19.1.0
certifi==2019.3.9
chardet==3.0.4
graphviz==0.8.4
idna==2.8
importlib-metadata==0.15
more-itertools==7.0.0
-e git+https://github.com/apache/incubator-mxnet@5fc4fc53df74f276aafa51208142e657e9cfe42d#egg=mxnet&subdirectory=python
numpy==1.16.4
pathlib2==2.3.3
pluggy==0.12.0
py==1.8.0
pytest==4.5.0
requests==2.22.0
six==1.12.0
urllib3==1.25.3
wcwidth==0.1.7
zipp==0.5.1

I used the following for the MXNet build:

cmake -j USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1 USE_MKLDNN=1 .

Error Message:

@kaivu1999

>>> import mxnet as mx
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/kaivalya/ntu_second/mxnet/python/mxnet/__init__.py", line 24, in <module>
    from .context import Context, current_context, cpu, gpu, cpu_pinned
  File "/home/kaivalya/ntu_second/mxnet/python/mxnet/context.py", line 24, in <module>
    from .base import classproperty, with_metaclass, _MXClassPropertyMetaClass
  File "/home/kaivalya/ntu_second/mxnet/python/mxnet/base.py", line 213, in <module>
    _LIB = _load_lib()
  File "/home/kaivalya/ntu_second/mxnet/python/mxnet/base.py", line 203, in _load_lib
    lib_path = libinfo.find_lib_path()
  File "/home/kaivalya/ntu_second/mxnet/python/mxnet/libinfo.py", line 74, in find_lib_path
    'List of candidates:\n' + str('\n'.join(dll_path)))
RuntimeError: Cannot find the MXNet library.
List of candidates:
/home/kaivalya/ntu_second/mxnet/libmxnet.so
/home/kaivalya/ntu_second/mxnet/python/mxnet/libmxnet.so
/home/kaivalya/ntu_second/mxnet/python/mxnet/../../lib/libmxnet.so
/home/kaivalya/ntu_second/mxnet/python/mxnet/../../build/libmxnet.so
../../../libmxnet.so
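
For reference, a minimal Python sketch for checking which of these candidate locations actually contains the shared library; the mxnet_root value below is an assumption taken from the paths in the traceback and should be adjusted to your own checkout:

import os

# Hypothetical helper: check the candidate locations that libinfo.find_lib_path()
# searches, relative to the checkout used in this report.
mxnet_root = "/home/kaivalya/ntu_second/mxnet"  # adjust to your own checkout
candidates = [
    os.path.join(mxnet_root, "libmxnet.so"),
    os.path.join(mxnet_root, "lib", "libmxnet.so"),
    os.path.join(mxnet_root, "build", "libmxnet.so"),
]
for path in candidates:
    print(path, "->", "found" if os.path.exists(path) else "missing")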

What have you tried to solve it?

It seems the shared object file did not get created, although I did not face any problems executing the cmake command on an Intel processor with a GPU.

  1. I checked the PYTHONPATH.
    The instructions say to add:
$ export LD_LIBRARY_PATH=<mxnet-root>/build/Release
$ export PYTHONPATH=<mxnet-root>/python

However, there is no build/Release directory under mxnet-root. Those paths were actually part of the BMXNet v1 instructions, which asked for a separate cmake build of the library while installing MXNet, but I could not find any reference to that step in the new MXNet setup instructions.

  2. Of course, I had also installed MXNet earlier while installing BMXNet v1 (without Gluon support), and that build did produce the shared object file. So I specifically added that path to LD_LIBRARY_PATH, which gave the following error.
>>> import mxnet as mx
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/kaivalya/ntu_second/mxnet/python/mxnet/__init__.py", line 29, in <module>
    from . import contrib
  File "/home/kaivalya/ntu_second/mxnet/python/mxnet/contrib/__init__.py", line 27, in <module>
    from . import autograd
  File "/home/kaivalya/ntu_second/mxnet/python/mxnet/contrib/autograd.py", line 29, in <module>
    from ..ndarray import NDArray, zeros_like, _GRAD_REQ_MAP
  File "/home/kaivalya/ntu_second/mxnet/python/mxnet/ndarray/__init__.py", line 26, in <module>
    from . import register
  File "/home/kaivalya/ntu_second/mxnet/python/mxnet/ndarray/register.py", line 177, in <module>
    from ._internal import _adamw_update, _mp_adamw_update
ImportError: cannot import name '_adamw_update'

Even after including both paths, I am getting the same error here.

Can someone help me?
Thanks in advance.

Regarding pretrained models

Hi,
Is there any link to the pretrained binary model files?
I found script files to train the binary models, but the 'pretrained' argument is disabled for each of the models.
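
For reference, once symbol/params files for a pretrained binary model are available, they could be loaded for inference through the standard Gluon SymbolBlock API; a minimal sketch with hypothetical file names and an assumed input shape:

import mxnet as mx
from mxnet import gluon

# Sketch (hypothetical file names): load an exported binary model for inference.
net = gluon.SymbolBlock.imports("binary_model-symbol.json", ["data"],
                                "binary_model-0000.params", ctx=mx.cpu())
out = net(mx.nd.zeros((1, 3, 224, 224)))  # dummy input with an assumed shape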

What's your training method of BNN

Hi guys,
I don't understand how you train your binary weights: if you use the sign function, the gradient is broken (zero almost everywhere) at the sign function. Looking at your source code, I couldn't figure out how you handle this (in the XNOR-Net paper, real-valued weights are kept for the update). Could you point me to where you handle it? Thanks!

Also, the quantization seems unstable. I implemented a flow model based on your code, and the quantization is unstable compared to my PyTorch implementation. I think the reason lies in how you handle the gradient update.
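
For context, a common way around the broken sign gradient is the straight-through estimator used in the XNOR-Net line of work: the forward pass applies sign, while the backward pass treats the operation as (roughly) the identity, so the latent real-valued weights keep receiving gradients. A minimal MXNet sketch of that idea (an illustration, not necessarily BMXNet's exact implementation):

import mxnet as mx
from mxnet import nd, autograd

def det_sign_ste(x):
    # Forward: sign(x). Backward: identity, because the sign correction term is
    # detached via stop_gradient, so d(out)/dx == 1 and the latent float weights
    # still receive gradients.
    return x + nd.stop_gradient(nd.sign(x) - x)

w = nd.array([-0.7, 0.2, 1.5])
w.attach_grad()
with autograd.record():
    loss = det_sign_ste(w).sum()
loss.backward()
print(w.grad)  # [1. 1. 1.] -- gradients flow through to the real-valued weights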

Question about BatchNorm

Hi everyone, I have a simple question: how is BatchNorm implemented? Is it quantized during inference, or does it work with float32? Thank you very much!
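
For reference, most BNN implementations (including those described in the BMXNet papers) typically keep BatchNorm as a regular float32 layer and binarize only the convolution/dense inputs and weights. A hedged Gluon sketch of such a block, assuming the QActivation and QConv2D layer names from binary_layers.py and a Conv2D-like argument list:

from mxnet.gluon import nn

# Sketch of a typical binary block (illustration only, not the repository's
# exact code): BatchNorm stays in float32, followed by a sign activation and
# a binary convolution.
block = nn.HybridSequential()
block.add(nn.BatchNorm())                             # standard float32 BatchNorm
block.add(nn.QActivation())                           # binarize activations (det_sign)
block.add(nn.QConv2D(64, kernel_size=3, padding=1))   # binary convolution (assumed signature)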

Query about the new update about inference on CPU and GPU.

Description

I am interested in the speedup that I can get on CPU and GPU, especially for inference.

According to the answer by @yanghaojin,
I have tried BMXNet v1 for the same and get a CPU speedup of about 1.4x - 1.7x on my PC for some models, but also a slowdown in some cases.
I used: Ubuntu 16.04, 64-bit, on an Intel(R) Core™ i5-8250U CPU @ 1.60GHz (supports SSE4.2).

Can you please elaborate on the update of 21st May 2019 with respect to speedup?

I am referring to the update listed in the changelog.

BMXNet transition and Gluon hybridization for inference

@Jopyth
In the FAQ of the new v2 repo you mention the transition to the Gluon API. Does that mean the underlying C/C++ implementation (i.e. the backend operators that are also used by the Python frontend) from BMXNet is no longer usable?
Say I have created a new model with Gluon (using HybridBlocks and the QConv2D layers, for example) and hybridized it to a Symbol: can we then still do inference with the Python API, but not with C/C++?
In BMXNet v1 there was a script to convert these models (the symbolic execution graph) into real binary models that could be loaded (using amalgamation.cc and/or the C++ package) for faster inference.
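
For reference, the standard Gluon route to a symbolic graph that other frontends can load is hybridize followed by export. A minimal sketch using the plain MXNet API, with a stand-in Dense layer rather than a real binary model:

import mxnet as mx
from mxnet.gluon import nn

net = nn.HybridSequential()
net.add(nn.Dense(10))                 # stand-in for a hybridized binary model
net.initialize()
net.hybridize()

net(mx.nd.zeros((1, 32)))             # one forward pass builds the cached graph
net.export("binary_model", epoch=0)   # writes binary_model-symbol.json / -0000.params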

Weights Quantization

First of all, thank you for sharing your code.

This is not an issue per se, but a question: why do you initialize weights using Glorot Normal (and then quantize them) instead of directly initializing the weights with a Bernoulli distribution?

Greetings
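
For comparison, such a Bernoulli-style initialization could be sketched with a custom mx.init.Initializer (a hypothetical illustration, not code from this repository):

import mxnet as mx

class SignBernoulli(mx.init.Initializer):
    """Hypothetical initializer: draw each weight directly from {-1, +1}."""
    def _init_weight(self, name, arr):
        arr[:] = mx.nd.sign(mx.nd.random.uniform(-1, 1, shape=arr.shape))

# Usage sketch: net.initialize(SignBernoulli())
# instead of the Glorot/Xavier default, e.g. net.initialize(mx.init.Xavier()).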

Question about MeliusNet paper

I have noticed that you have just released your new paper: MeliusNet: Can Binary Neural Networks Achieve MobileNet-level Accuracy? Congratulations for the great job.

In your paper, you say that MeliusNet can, for the first time, match the popular compact network MobileNet in terms of both model size and accuracy (69.2% and 70.7%).

Our paper AutoBNN [1] already achieved 69.65% top-1 accuracy with binary neural networks. We also released our code: https://github.com/LaVieEnRoseSMZ/AutoBNN. You can check it and cite our paper to improve the quality of yours :)

[1] Shen, Mingzhu, et al. "Searching for accurate binary neural architectures." Proceedings of the IEEE International Conference on Computer Vision Workshops. 2019.

QDense inference

@Jopyth
Greetings, and thank you for the BMXNet library.

The QDense and QConv classes' hybrid_forward methods may set the offset to the number of weights, as follows (https://github.com/hpi-xnor/BMXNet-v2/blob/master/python/mxnet/gluon/nn/binary_layers.py):

self._offset = reduce(mul, weight.shape[1:], 1)

When using binary inputs, weights, and activations, and no bias, adding this offset seems to make the range of possible output values between zero and the number of weights (i.e., zero when all x*w contributions are "-1", and the number of weights when all contributions are "+1"). This is given the output computed as:

h = (h + self._offset) / 2

h_min = (-count(w) + count(w)) / 2 = 0

h_max = (count(w) + count(w)) / 2 = count(w)

Thus, it seems that applying a subsequent binary QActivation (i.e., det_sign) to the output of a binary QDense or non-scaled binary QConv layer will always yield "+1", never "-1". Is this the intended behavior, or do I misunderstand it?

Thanks very much for any insight!
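
For reference, the range arithmetic described above can be checked numerically with a toy example (made-up sizes, not the repository's kernel):

import numpy as np

rng = np.random.default_rng(0)
n = 16                                  # hypothetical number of weights
x = rng.choice([-1.0, 1.0], size=n)     # binary inputs
w = rng.choice([-1.0, 1.0], size=n)     # binary weights
h = float(x @ w)                        # +/-1 dot product, lies in [-n, n]
out = (h + n) / 2                       # offset form from the question, lies in [0, n]
print(h, out)                           # out is never negative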

Build instructions macOS: error with -std=c++11 flag

Description

When trying to build BMXNet v2 using Ninja in a macOS environment, the process fails while compiling a C file.

Environment info (Required)

macOS 10.13.6
CMake 3.12.3
Ninja 1.9.0

Build info (Required if built from source)

Compiler (gcc/clang/mingw/visual studio):

  • Apple LLVM version 10.0.0 (clang-1000.11.45.5) - w/o OpenMP
  • llvm: stable 8.0.0 (w. OpenMP) via brew

MXNet commit hash:
d0aaf81

Build config:
cmake .. -G Ninja \
  -DCMAKE_BUILD_TYPE=Release \
  -DUSE_CUDA=FALSE \
  -DUSE_OPENCV=FALSE \
  -DUSE_OPENMP=FALSE \
  -DUSE_GPERFTOOLS=OFF

Error Message:

[240/270] Building C object CMakeFiles/mxnet.dir/dummy.c.o
FAILED: CMakeFiles/mxnet.dir/dummy.c.o
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -DBINARY_WORD_32=1 -DBINARY_WORD_64=0 -DDMLC_USE_CXX11 -DDMLC_USE_CXX11=1 -DMSHADOW_IN_CXX11 -DMSHADOW_USE_CBLAS=1 -DMSHADOW_USE_CUDA=0 -DMSHADOW_USE_MKL=0 -DMSHADOW_USE_SSE=1 -DMXNET_USE_LAPACK=1 -DMXNET_USE_NCCL=0 -DMXNET_USE_OPENCV=0 -DNDEBUG=1 -Dmxnet_EXPORTS -I../include -I../src -I../3rdparty/mshadow -I../3rdparty/cub -I../3rdparty/tvm/nnvm/include -I../3rdparty/tvm/include -I../3rdparty/dmlc-core/include -I../3rdparty/dlpack/include -isystem /System/Library/Frameworks/vecLib.framework/Headers -Wno-braced-scalar-init -O3 -msse2 -std=c++11 -mf16c -march=native -mpopcnt -funroll-loops -O3 -DNDEBUG -fPIC -MD -MT CMakeFiles/mxnet.dir/dummy.c.o -MF CMakeFiles/mxnet.dir/dummy.c.o.d -o CMakeFiles/mxnet.dir/dummy.c.o -c dummy.c
error: invalid argument '-std=c++11' not allowed with 'C'
[246/270] Building CXX object CMakeFil...or/tensor/elemwise_unary_op_basic.cc.o

Steps to reproduce

  1. exactly as described here

What have you tried to solve it?

  1. set build instructions with OpenMP on
  2. added flag -DCMAKE_CXX_FLAGS="${CMAKE_CXX_FLAGS} -std=c++11"
  3. used make instead of Ninja (see the error below; the compile process still continues, as opposed to the error above with Ninja)

Error message with make:

[ 6%] Linking CXX shared library libconverter.dylib
Undefined symbols for architecture x86_64:
"dmlc::Stream::Create(char const*, char const*, bool)", referenced from:
convert_params_file(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in main.cpp.o
"mxnet::Engine::Get()", referenced from:
mxnet::NDArray::Chunk::Chunk(nnvm::TShape, mxnet::Context, bool, int) in main.cpp.o
"mxnet::NDArray::Load(dmlc::Stream*, std::__1::vector<mxnet::NDArray, std::__1::allocator<mxnet::NDArray> >, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >)", referenced from:
convert_params_file(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in main.cpp.o
"mxnet::NDArray::Save(dmlc::Stream*, std::__1::vector<mxnet::NDArray, std::__1::allocator<mxnet::NDArray> > const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&)", referenced from:
convert_params_file(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in main.cpp.o
"mxnet::NDArray::Chunk::Chunk()", referenced from:
std::__1::__shared_ptr_emplace<mxnet::NDArray::Chunk, std::__1::allocator<mxnet::NDArray::Chunk> >::__shared_ptr_emplace() in main.cpp.o
std::__1::__shared_ptr_emplace<mxnet::NDArray::Chunk, std::__1::allocator<mxnet::NDArray::Chunk> >::~__shared_ptr_emplace() in main.cpp.o
std::__1::__shared_ptr_emplace<mxnet::NDArray::Chunk, std::__1::allocator<mxnet::NDArray::Chunk> >::__on_zero_shared() in main.cpp.o
"mxnet::Storage::Get()", referenced from:
mxnet::NDArray::Chunk::Chunk(nnvm::TShape, mxnet::Context, bool, int) in main.cpp.o
mxnet::NDArray::CheckAndAlloc() const in main.cpp.o
"mxnet::NDArray::SetTBlob() const", referenced from:
convert_to_binary_row(mxnet::NDArray&) in main.cpp.o
transpose(mxnet::NDArray&) in main.cpp.o
transpose_and_convert_to_binary_col(mxnet::NDArray&) in main.cpp.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [tools/binary_converter/libconverter.dylib] Error 1
make[1]: *** [tools/binary_converter/CMakeFiles/converter.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
...continues...
[ 83%] Linking CXX static library libmxnet.a
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(rtc.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(onnx_to_tensorrt.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(tensorrt_pass.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(trt_graph_executor.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(nnvm_to_onnx.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(tensorrt.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(cudnn_algoreg.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(cudnn_batch_norm.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_act.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_base.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_concat.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_convolution.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_copy.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_deconvolution.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_fully_connected.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_pooling.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_softmax.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_sum.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(nnpack_util.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_quantized_concat.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_quantized_conv.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_quantized_pooling.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_conv.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_conv_post_quantize_property.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_conv_property.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(vtune.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(rtc.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(onnx_to_tensorrt.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(tensorrt_pass.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(trt_graph_executor.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(nnvm_to_onnx.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(tensorrt.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(cudnn_algoreg.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(cudnn_batch_norm.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_act.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_base.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_concat.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_convolution.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_copy.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_deconvolution.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_fully_connected.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_pooling.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_softmax.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_sum.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(nnpack_util.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_quantized_concat.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_quantized_conv.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_quantized_pooling.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_conv.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_conv_post_quantize_property.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(mkldnn_conv_property.cc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libmxnet.a(vtune.cc.o) has no symbols
[ 83%] Built target mxnet_static
make: *** [all] Error 2

Code of MeliusNet

Hi, I recently read your paper MeliusNet, but I couldn't find the code in this repo. Could you please explicitly point out the model and training file? Thanks!

Detail of BinaryDenseNet or BinaryResNet18E

Hi, recently I read your newly released paper "Back to Simplicity: How to Train Accurate BNNs from Scratch?" It is quite a good paper and has inspired me a lot.

However, I am a little confused about the implementation in this paper. I am not familiar with the code structure of MXNet. Could you please write a more detailed readme, a tutorial, or anything similar that explains the code and the training details?

Thanks a lot in advance~
