
Pico-CNN

Pico-CNN is an (almost) library-free and lightweight neural network inference framework for embedded systems (Linux and bare-metal), implemented in C++. Neural networks can be trained with any training framework that supports export to ONNX (Open Neural Network Exchange, onnx.ai) and can afterwards be deployed using Pico-CNN's ONNX import.
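As a hedged sketch of this workflow (assuming PyTorch as the training framework; any framework with an ONNX exporter works, and the model and input shape below are placeholders):

import torch
import torchvision

# Load a trained model (here: a torchvision AlexNet) and switch to inference mode.
model = torchvision.models.alexnet(pretrained=True)
model.eval()

# A dummy input fixes the input shape (NCHW) that is recorded in the ONNX graph.
dummy_input = torch.randn(1, 3, 224, 224)

# Export to ONNX; the resulting file can then be fed to Pico-CNN's ONNX import.
torch.onnx.export(model, dummy_input, "alexnet.onnx")

The exported .onnx file is then passed to onnx_import/onnx_to_pico_cnn.py as shown in the sections below.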

Setup and Use

Please read the whole document carefully!

Ubuntu

sudo apt install libjpeg-dev

If you want to use cppunit:

sudo apt install libcppunit-dev

Scientific Linux

sudo yum install libjpeg-devel

If you want to use cppunit:

git clone --branch cppunit-1.14.0 git://anongit.freedesktop.org/git/libreoffice/cppunit/
cd cppunit
./autogen.sh
./configure
make
sudo make install
sudo ln -s /usr/local/include/cppunit /usr/local/include/CppUnit
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH

macOS

brew install jpeg

If you want to use cppunit:

brew install cppunit

C++ Standard

Depending on the (system) compiler you are using, you might have to add a specific C++ standard to the CFLAGS variable in the generated Makefile. If you are using an older version of g++, it should suffice to choose -std=c++11 or -std=gnu++11 as the C++ standard.

All Operating Systems

Install Python 3.6.5, for example with pyenv (other versions of Python 3.6 will probably work as well). Then install the required Python packages from the requirements.txt file located in pico-cnn/onnx_import:

cd onnx_import
pip install -r requirements.txt
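If you use pyenv, running pyenv install 3.6.5 followed by pyenv local 3.6.5 inside the repository should provide a matching interpreter.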

Of course you can always install the requirements into a virtual environment like this:

cd onnx_import
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt

For all subsequent sessions you only have to activate the environment:

cd onnx_import
source venv/bin/activate

Testing with cppunit

cd test
make tests
./tests

Utilities

If you want to use scripts from the pico-cnn/util folder, you should create a separate virtual environment for them and install the respective Python packages:

cd util
python -m venv venv
source venv/bin/activate
pip install -r util_requirements.txt

For all subsequent sessions you only have to activate the environment:

cd util
source venv/bin/activate

Tested Neural Networks

Pico-CNN was tested with the following neural networks:

  • LeNet
  • MNIST Multi-Layer-Perceptron (MLP)
  • MNIST Perceptron
  • AlexNet
  • VGG-16
  • VGG-19
  • MobileNet-V2
  • Inception-V3
  • Inception-ResNet-V2
  • TC-ResNet-8

All Networks

Dummy Input

For every imported ONNX model a dummy_input.cpp will be generated, which uses random numbers as input and calls the network, so that no network-specific input data has to be downloaded to run inference.

cd onnx_import
python3.6 onnx_to_pico_cnn.py --input PATH_TO_ONNX/model.onnx
cd generated_code/model
make dummy_input
./dummy_input network.weights.bin NUM_RUNS GENERATE_ONCE

  • GENERATE_ONCE = 0 leads to new random input for each inference run.
  • GENERATE_ONCE = 1 leads to the same random input for each inference run.
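For example, ./dummy_input network.weights.bin 10 1 performs ten inference runs on one fixed random input, which keeps the runs comparable, e.g. for runtime measurements.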

You can monitor the overall progress of the current run by adding -DDEBUG to the CFLAGS variable in the generated Makefile.

Reference Input

A reference_input.cpp will also be generated, which can be used to validate the imported network against the reference input/output that is provided for ONNX models from the official ONNX model zoo. The data has to be preprocessed with the following script (it is assumed that the pico-cnn/util specific virtual environment is activated):

python util/parse_onnx_reference_files.py --input PATH_TO_REFERENCE_DATA
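As a hedged illustration of what such preprocessing does (the actual parse_onnx_reference_files.py may differ in details): the official model zoo ships reference tensors as serialized ONNX TensorProto files (e.g. test_data_set_0/input_0.pb), which can be decoded and dumped as raw float32 values:

import numpy as np
import onnx
from onnx import numpy_helper

# Read a serialized TensorProto shipped with a model-zoo model.
tensor = onnx.TensorProto()
with open("test_data_set_0/input_0.pb", "rb") as f:
    tensor.ParseFromString(f.read())

# Convert to a numpy array (e.g. shape (1, 3, 224, 224)) and dump the raw
# float32 values; the .data file layout is an assumption for this sketch.
array = numpy_helper.to_array(tensor)
array.astype(np.float32).tofile("input_0.data")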

The script will generate an input_X.data and output_X.data file which can then be used like this:

cd onnx_import/generated_code/model
make reference_input
./reference_input network.weights.bin PATH_TO_SAMPLE_DATA/input_X.data PATH_TO_SAMPLE_DATA/output_X.data

If the model was acquired in some other way (self-trained or converted), you can create sample data with the following script (it is assumed that the pico-cnn/util specific virtual environment is activated):

cd util
python3.6 generate_reference_data.py --model model.onnx --file PATH_TO_INPUT_DATA --shape 1 NUM_CHANNELS HEIGHT WIDTH

If --file is not given, the script will use random values instead. Supported file types are audio/x-wav, image/jpeg, and image/x-portable-greymap.
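As a hedged sketch of the idea behind generating such reference data (assuming onnxruntime is available; the actual generate_reference_data.py may differ): run the model once on an input of the given shape and store both input and output:

import numpy as np
import onnxruntime as ort

# Build a random input of the shape passed via --shape 1 NUM_CHANNELS HEIGHT WIDTH,
# as the script does when --file is omitted.
shape = (1, 3, 224, 224)
data = np.random.rand(*shape).astype(np.float32)

# Run one inference with onnxruntime to obtain the reference output.
session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name
output = session.run(None, {input_name: data})[0]

# Dump input and output as raw float32 values (assumed .data layout).
data.tofile("input.data")
output.tofile("output.data")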

MNIST Dataset

LeNet-5

LeNet-5 implementation as proposed by Yann LeCun et al. [LeCun1998]. ONNX model at: ./data/lenet/lenet.onnx

Note: the MNIST dataset is required to run the LeNet-specific code.

cd onnx_import
python3.6 onnx_to_pico_cnn.py --input ../data/lenet/lenet.onnx

Copy examples/lenet.cpp to onnx_import/generated_code/lenet.

cd generated_code/lenet
make lenet
./lenet network.weights.bin PATH_TO_MNIST

MNIST Multi-Layer-Perceptron (MLP)

ONNX model at: ./data/mnist_mlp/mnist_mlp.onnx

cd onnx_import
python3.6 onnx_to_pico_cnn.py --input ../data/mnist_mlp/mnist_mlp.onnx

Copy examples/mnist_mlp.cpp to onnx_import/generated_code/mnist_mlp.

cd generated_code/mnist_mlp
make mnist_mlp
./mnist_mlp network.weights.bin PATH_TO_MNIST

MNIST Perceptron

Single fully-connected layer. ONNX model at: ./data/mnist_simple_perceptron/mnist_simple_perceptron.onnx

cd onnx_import
python3.6 onnx_to_pico_cnn.py --input ../data/mnist_simple_perceptron/mnist_simple_perceptron.onnx

Copy examples/mnist_simple_perceptron.cpp to onnx_import/generated_code/mnist_simple_perceptron.

cd generated_code/mnist_simple_perceptron
make mnist_simple_perceptron
./mnist_simple_perceptron network.weights.bin PATH_TO_MNIST

ImageNet Dataset

AlexNet

AlexNet [Krizhevsky2017]. ONNX model retrieved from: https://github.com/onnx/models/tree/master/vision/classification/alexnet

Note: the ImageNet dataset is required to run the AlexNet-specific code.

cd onnx_import
python3.6 onnx_to_pico_cnn.py --input PATH_TO_ONNX/alexnet.onnx

Copy examples/alexnet.cpp to onnx_import/generated_code/alexnet.

cd generated_code/alexnet

Add -ljpeg to the LDFLAGS variable in the generated Makefile.

make alexnet
./alexnet network.weights.bin PATH_TO_IMAGE_MEANS PATH_TO_LABELS PATH_TO_IMAGE

VGG-16

VGG-16 [Simonyan2014]. ONNX model retrieved from: https://github.com/onnx/models/tree/master/vision/classification/vgg

Note: the ImageNet dataset is required to run the VGG-16-specific code.

cd onnx_import
python3.6 onnx_to_pico_cnn.py --input PATH_TO_ONNX/vgg16.onnx

Copy examples/vgg.cpp to onnx_import/generated_code/vgg16 and rename it to vgg16.cpp.

cd generated_code/vgg16

Add -ljpeg to the LDFLAGS variable in the generated Makefile.

make vgg16
./vgg16 network.weights.bin PATH_TO_IMAGE_MEANS PATH_TO_LABELS PATH_TO_IMAGE

VGG-19

VGG-19 [Simonyan2014]. ONNX model retrieved from: https://github.com/onnx/models/tree/master/vision/classification/vgg

Note: the ImageNet dataset is required to run the VGG-19-specific code.

cd onnx_import
python3.6 onnx_to_pico_cnn.py --input PATH_TO_ONNX/vgg19.onnx

Copy examples/vgg.cpp to onnx_import/generated_code/vgg19 and rename it to vgg19.cpp.

cd generated_code/vgg19

Add -ljpeg to the LDFLAGS variable in the generated Makefile.

make vgg19
./vgg19 network.weights.bin PATH_TO_IMAGE_MEANS PATH_TO_LABELS PATH_TO_IMAGE

MobileNet-V2

MobileNet-V2 [Sandler2019]. ONNX model retrieved from: https://github.com/onnx/models/tree/master/vision/classification/mobilenet

See Reference Input section for details on input and output data generation.

cd onnx_import
python3.6 onnx_to_pico_cnn.py --input PATH_TO_ONNX/mobilenetv2-1.0.onnx
cd generated_code/mobilenetv2-1
make reference_input
./reference_input network.weights.bin input.data output.data

Inception-V3

Inception-V3 [Szegedy2014]. ONNX model retrieved from: http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz and converted using TensorFlow-Slim.

See Reference Input section for details on input and output data generation.

cd onnx_import
python3.6 onnx_to_pico_cnn.py --input PATH_TO_ONNX/inceptionv3.onnx
cd generated_code/inceptionv3
make reference_input
./reference_input network.weights.bin input.data output.data

Inception-ResNet-V2

Inception-ResNet-V2 [Szegedy2016]. ONNX model retrieved from: http://download.tensorflow.org/models/inception_resnet_v2_2016_08_30.tar.gz and converted using TensorFlow-Slim.

See Reference Input section for details on input and output data generation.

cd onnx_import
python3.6 onnx_to_pico_cnn.py --input PATH_TO_ONNX/inception_resnet_v2.onnx
cd generated_code/inception_resnet_v2
make reference_input
./reference_input network.weights.bin input.data output.data

Speech Recognition

TC-ResNet-8

TC-ResNet-8 [Choi2019], trained on the Google Speech Commands Dataset.

cd onnx_import
python3.6 onnx_to_pico_cnn.py --input PATH_TO_ONNX/tc-res8-update.onnx
cd generated_code/tc-res8-update
make reference_input
./reference_input network.weights.bin input.data output.data

Contributors

  • alexjung
  • di-lee
  • k0nze

Citing Pico-CNN

Please cite Pico-CNN in your publications if it helps your research:

@inproceedings{luebeck2019picocnn,
    author = {Lübeck, Konstantin and Bringmann, Oliver},
    title = {A Heterogeneous and Reconfigurable Embedded Architecture for Energy-Efficient Execution of Convolutional Neural Networks},
    booktitle = {Architecture of Computing Systems (ARCS 2019)},
    year = {2019},
    month = {May},
    pages = {267--280},
    address = {Copenhagen, Denmark},
    isbn = {978-3-030-18656-2}
}

References

[LeCun1998] Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, Nov. 1998.

[Krizhevsky2017] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Communications of the ACM, vol. 60, no. 6, pp. 84–90, May 2017.

[Simonyan2014] K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” arXiv:1409.1556 [cs], Sep. 2014.

[Sandler2019] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, “MobileNetV2: Inverted Residuals and Linear Bottlenecks,” arXiv:1801.04381 [cs], Mar. 2019.

[Szegedy2014] C. Szegedy et al., “Going Deeper with Convolutions,” arXiv:1409.4842 [cs], Sep. 2014.

[Szegedy2016] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi, “Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning,” arXiv:1602.07261 [cs], Aug. 2016.

[Choi2019] S. Choi et al., “Temporal Convolution for Real-time Keyword Spotting on Mobile Devices,” arXiv:1904.03814 [cs, eess], Nov. 2019.
