pico-cnn's People

Contributors

alexjung, k0nze

pico-cnn's Issues

Slow inference time

I have tried out this framework with an AlexNet ONNX model (obtained from here, as suggested in the README). My reason for using it is to run ONNX models independently of the hardware platform, ideally with high inference speeds.

Setup

git clone https://github.com/ekut-es/pico-cnn
# Dependency for Ubuntu
sudo apt install libjpeg-dev

# Set up virtual environment and install requirements
conda create -n pico-cnn python=3.6.5
conda activate pico-cnn
cd pico-cnn/onnx_import
pip install -r requirements.txt

# Set up the ONNX model
wget https://github.com/onnx/models/blob/master/vision/classification/alexnet/model/bvlcalexnet-9.onnx?raw=true -O $DESTINATION/bvlcalexnet-9.onnx
python onnx_to_pico_cnn.py --input $DESTINATION/bvlcalexnet-9.onnx

cd generated_code/bvlcalexnet-9
make

Running

I used the dummy input program to test my installation and to get an idea of how fast this framework is.
I ran ./dummy_input network.weights.bin NUM_RUNS GENERATE_ONCE to do this.
This took a surprisingly long time to execute, so I decided to investigate a little.
I modified the dummy_input.c file to time how long the call to network() takes, which I assume is the call that runs the inference. It took around 3 s on average (mean over 50 runs).

I have a similar testing setup with Microsoft's ONNX Runtime in Python for comparison, which gave mean inference times of 12ms.
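The comparison harness can be sketched like this (a minimal sketch; the input name "data_0" and the 1x3x224x224 shape are assumptions for the bvlcalexnet-9 model, and onnxruntime must be installed separately):

```python
import time

def mean_inference_ms(run, runs=50, warmup=5):
    """Return the mean wall-clock time of run() in milliseconds."""
    for _ in range(warmup):      # discard warm-up iterations before measuring
        run()
    start = time.perf_counter()
    for _ in range(runs):
        run()
    return (time.perf_counter() - start) / runs * 1000.0

# Hypothetical usage with ONNX Runtime (pip install onnxruntime):
# import numpy as np
# import onnxruntime as ort
# sess = ort.InferenceSession("bvlcalexnet-9.onnx")
# x = np.random.rand(1, 3, 224, 224).astype(np.float32)
# print(mean_inference_ms(lambda: sess.run(None, {"data_0": x})))
```

The same helper can wrap any inference call, so both runtimes are measured the same way.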

Questions

  1. Is this long inference time to be expected with pico-cnn?
  2. Are there any options/flags that I forgot to set that would speed up inference?

Thanks in advance!

feature request: load weights etc. from header files instead of binary files

Hi,

May I suggest a feature for pico-cnn?
It would be nice to have an option to save the weights into header files instead of binary files.
This would be useful for small microcontrollers without filesystems.

Perhaps also a script to transform a given input into header files; that would be useful for quick testing. Of course, for deployment the input would be read from some external interface (Wi-Fi, serial, etc.).
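A converter like that could be sketched in a few lines of standard-library Python (a sketch only; the array name is hypothetical and the file is embedded byte-for-byte, without modeling pico-cnn's weights format):

```python
def bin_to_header(bin_path, header_path, array_name="network_weights"):
    """Embed a binary file as a C byte array in a header file."""
    with open(bin_path, "rb") as f:
        data = f.read()
    lines = [
        "#ifndef NETWORK_WEIGHTS_H",
        "#define NETWORK_WEIGHTS_H",
        "",
        f"static const unsigned char {array_name}[{len(data)}] = {{",
    ]
    # 12 bytes per line keeps the generated header readable
    for i in range(0, len(data), 12):
        chunk = ", ".join(f"0x{b:02x}" for b in data[i:i + 12])
        lines.append(f"    {chunk},")
    lines += ["};", "", "#endif /* NETWORK_WEIGHTS_H */"]
    with open(header_path, "w") as f:
        f.write("\n".join(lines) + "\n")
```

On the firmware side, the array could then be handed to the weight-loading routine in place of a file read.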

cheers!
Alexandre

creating my own onnx file to use with pico-cnn

I am trying to create my own small CNN just to test the flow from tf.keras to C using pico-cnn.

My first step was to use the latest versions of tf.keras (2.2.0) and onnx (1.7.0), but then I get this error when running python3.6 onnx_to_pico_cnn.py.

Generating Pico-CNN Code for model: mnist-model
Traceback (most recent call last):
  File "onnx_to_pico_cnn.py", line 72, in <module>
    main()
  File "onnx_to_pico_cnn.py", line 66, in main
    onnx_to_pico_cnn(onnx_model, model_name)
  File "onnx_to_pico_cnn.py", line 31, in onnx_to_pico_cnn
    onnx.checker.check_model(onnx_model)
  File "/home/lsa/.local/lib/python3.6/site-packages/onnx/checker.py", line 86, in check_model
    C.check_model(model.SerializeToString())
onnx.onnx_cpp2py_export.checker.ValidationError: Your model ir_version is higher than the checker's.
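The checker rejects any model serialized with a newer IR version than the installed onnx package understands. You can read a model's ir_version without installing anything, by decoding the first protobuf field of the file (a sketch; it assumes the serialized ModelProto begins with field 1, ir_version, which is how onnx's own serializer writes it):

```python
def read_ir_version(model_bytes):
    """Decode ModelProto field 1 (ir_version) from raw protobuf bytes.

    Returns None if the file does not start with field 1 as a varint.
    """
    if not model_bytes or model_bytes[0] != 0x08:  # tag byte: field 1, varint
        return None
    value, shift = 0, 0
    for byte in model_bytes[1:]:
        value |= (byte & 0x7F) << shift
        if not byte & 0x80:    # high bit clear: last byte of the varint
            return value
        shift += 7
    return None

# Hypothetical usage:
# with open("mnist-model.onnx", "rb") as f:
#     print(read_ir_version(f.read()))
```

If the reported version is higher than what the pinned onnx package supports, re-exporting the model with the older onnx/keras2onnx versions is likely the safer route, rather than editing the file in place.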

The flow to generate the onnx file is in this notebook.

Then I tried to downgrade both tf.keras and onnx to match the versions used in pico-cnn.
So I built this notebook, where I am using the same versions of tf and onnx that you are using in pico-cnn.

I tried this conversion with and without TF's eager mode enabled, but in both cases I got errors.

ERROR with eager mode off:

keras2onnx version is 1.5.0
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras2onnx/common/utils.py:38: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras2onnx/common/utils.py:38: The name tf.logging.WARN is deprecated. Please use tf.compat.v1.logging.WARN instead.

Using TensorFlow backend.

---------------------------------------------------------------------------

TypeError                                 Traceback (most recent call last)

<ipython-input-21-d58d6c50760c> in <module>()
      2 print("keras2onnx version is "+keras2onnx.__version__)
      3 # convert to onnx model
----> 4 onnx_model = keras2onnx.convert_keras(model, 'mnist-onnx', debug_mode=1)
      5 output_model_path = "/content/drive/My Drive/Colab Notebooks/models/mnist-model.onnx"
      6 # and save the model in ONNX format

3 frames

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in __iter__(self)
    475     if not context.executing_eagerly():
    476       raise TypeError(
--> 477           "Tensor objects are only iterable when eager execution is "
    478           "enabled. To iterate over this tensor use tf.map_fn.")
    479     shape = self._shape_tuple()

TypeError: Tensor objects are only iterable when eager execution is enabled. To iterate over this tensor use tf.map_fn.

ERROR with eager mode on:

keras2onnx version is 1.5.0
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras2onnx/common/utils.py:38: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras2onnx/common/utils.py:38: The name tf.logging.WARN is deprecated. Please use tf.compat.v1.logging.WARN instead.

Using TensorFlow backend.

---------------------------------------------------------------------------

RuntimeError                              Traceback (most recent call last)

<ipython-input-12-d58d6c50760c> in <module>()
      2 print("keras2onnx version is "+keras2onnx.__version__)
      3 # convert to onnx model
----> 4 onnx_model = keras2onnx.convert_keras(model, 'mnist-onnx', debug_mode=1)
      5 output_model_path = "/content/drive/My Drive/Colab Notebooks/models/mnist-model.onnx"
      6 # and save the model in ONNX format

1 frames

/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py in get_session()
    381     if tf.executing_eagerly():
    382         raise RuntimeError(
--> 383             '`get_session` is not available when '
    384             'TensorFlow is executing eagerly.')
    385     return tf_keras_backend.get_session()

RuntimeError: `get_session` is not available when TensorFlow is executing eagerly.

Questions:

  1. Have you ever tried something similar to this? If yes, what is your flow?
  2. I am using tf.keras; would you suggest another flow?
  3. Is there any plan to update pico-cnn to newer onnx and tf versions?
  4. Any other kind of help?

Thanks!
Alexandre
