
onnxmltools's Introduction


Open Neural Network Exchange (ONNX) is an open ecosystem that empowers AI developers to choose the right tools as their project evolves. ONNX provides an open source format for AI models, both deep learning and traditional ML. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types. Currently we focus on the capabilities needed for inferencing (scoring).
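To make the graph model concrete, here is a minimal sketch (using the standard onnx.helper API) that builds and checks a one-node model; the names X, Y, Z are illustrative:

import onnx
from onnx import helper, TensorProto

# Declare two float inputs and one output, each a 1-D tensor of length 3.
X = helper.make_tensor_value_info("X", TensorProto.FLOAT, [3])
Y = helper.make_tensor_value_info("Y", TensorProto.FLOAT, [3])
Z = helper.make_tensor_value_info("Z", TensorProto.FLOAT, [3])

# One node using the built-in Add operator.
node = helper.make_node("Add", inputs=["X", "Y"], outputs=["Z"])

graph = helper.make_graph([node], "tiny-add", [X, Y], [Z])
model = helper.make_model(graph)
onnx.checker.check_model(model)  # verify the model is well formed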

ONNX is widely supported and can be found in many frameworks, tools, and hardware. Enabling interoperability between different frameworks and streamlining the path from research to production helps increase the speed of innovation in the AI community. We invite the community to join us and further evolve ONNX.

Use ONNX

Learn about the ONNX spec

Programming utilities for working with ONNX Graphs

Contribute

ONNX is a community project and the open governance model is described here. We encourage you to join the effort and contribute feedback, ideas, and code. You can participate in the Special Interest Groups and Working Groups to shape the future of ONNX.

Check out our contribution guide to get started.

If you think some operator should be added to ONNX specification, please read this document.

Community meetings

The schedules of the regular meetings of the Steering Committee, the working groups, and the SIGs can be found here.

Community Meetups are held at least once a year. Content from previous community meetups is available online.

Discuss

We encourage you to open Issues, or use Slack (if you have not joined yet, please use this link to join the group) for more real-time discussion.

Follow Us

Stay up to date with the latest ONNX news. [Facebook] [Twitter]

Roadmap

A roadmap process takes place every year. More details can be found here.

Installation

Official Python packages

ONNX released packages are published in PyPI.

pip install onnx  # or pip install onnx[reference] for optional reference implementation dependencies

ONNX weekly packages are published in PyPI to enable experimentation and early testing.

vcpkg packages

onnx is on the maintenance list of vcpkg; you can easily use vcpkg to build and install it.

git clone https://github.com/microsoft/vcpkg.git
cd vcpkg
./bootstrap-vcpkg.bat # For PowerShell
./bootstrap-vcpkg.sh # For bash
./vcpkg install onnx

Conda packages

A binary build of ONNX is available from Conda, in conda-forge:

conda install -c conda-forge onnx

Build ONNX from Source

Before building from source, uninstall any existing version of ONNX with pip uninstall onnx.

A compiler supporting C++17 or higher is required to build ONNX from source. Users can still specify their own CMAKE_CXX_STANDARD version when building ONNX.

If you don't have protobuf installed, ONNX will internally download and build protobuf as part of its own build.

Alternatively, you can manually install the protobuf C/C++ libraries and tools at a specific version before proceeding. Then, depending on how you installed protobuf, set the environment variable CMAKE_ARGS to "-DONNX_USE_PROTOBUF_SHARED_LIBS=ON" or "-DONNX_USE_PROTOBUF_SHARED_LIBS=OFF". For example, you may need to run the following command:

Linux:

export CMAKE_ARGS="-DONNX_USE_PROTOBUF_SHARED_LIBS=ON"

Windows:

set CMAKE_ARGS="-DONNX_USE_PROTOBUF_SHARED_LIBS=ON"

Choose ON or OFF according to the kind of protobuf library you have: shared libraries are files ending with *.dll/*.so/*.dylib, while static libraries are files ending with *.a/*.lib, and which one you have depends on how you obtained and built protobuf. The default is OFF, so you don't need to run the commands above if you prefer to use a static protobuf library.

Windows

If you are building ONNX from source, it is recommended that you also build Protobuf locally as a static library. The version distributed with conda-forge is a DLL, but ONNX expects it to be a static library. Building protobuf locally also lets you control the version of protobuf. The tested and recommended version is 3.21.12.

The instructions in this README assume you are using Visual Studio. It is recommended that you run all the commands from a shell started from "x64 Native Tools Command Prompt for VS 2019" and keep the build system generator for cmake (e.g., cmake -G "Visual Studio 16 2019") consistent while building protobuf as well as ONNX.

You can get protobuf by running the following commands:

git clone https://github.com/protocolbuffers/protobuf.git
cd protobuf
git checkout v21.12
cd cmake
cmake -G "Visual Studio 16 2019" -A x64 -DCMAKE_INSTALL_PREFIX=<protobuf_install_dir> -Dprotobuf_MSVC_STATIC_RUNTIME=OFF -Dprotobuf_BUILD_SHARED_LIBS=OFF -Dprotobuf_BUILD_TESTS=OFF -Dprotobuf_BUILD_EXAMPLES=OFF .
msbuild protobuf.sln /m /p:Configuration=Release
msbuild INSTALL.vcxproj /p:Configuration=Release

Protobuf will then be built as a static library and installed to <protobuf_install_dir>. Please add the bin directory (which contains protoc.exe) to your PATH.

set CMAKE_PREFIX_PATH=<protobuf_install_dir>;%CMAKE_PREFIX_PATH%

Please note: if your protobuf_install_dir contains spaces, do not add quotation marks around it.

Alternative: if you don't want to change your PATH, you can set ONNX_PROTOC_EXECUTABLE instead.

set CMAKE_ARGS=-DONNX_PROTOC_EXECUTABLE=<full_path_to_protoc.exe>

Then you can build ONNX as:

git clone https://github.com/onnx/onnx.git
cd onnx
git submodule update --init --recursive
# prefer lite proto
set CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON
pip install -e .

Linux

First, you need to install protobuf. The minimum Protobuf compiler (protoc) version required by ONNX is 3.6.1. Please note that old protoc versions might not work with CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON.

Ubuntu 20.04 (and newer) users may choose to install protobuf via

apt-get install python3-pip python3-dev libprotobuf-dev protobuf-compiler

In this case, it is required to add -DONNX_USE_PROTOBUF_SHARED_LIBS=ON to CMAKE_ARGS in the ONNX build step.

A more general way is to build and install it from source. See the instructions below for more details.

Installing Protobuf from source

Debian/Ubuntu:

  git clone https://github.com/protocolbuffers/protobuf.git
  cd protobuf
  git checkout v21.12
  git submodule update --init --recursive
  mkdir build_source && cd build_source
  cmake ../cmake -Dprotobuf_BUILD_SHARED_LIBS=OFF -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_POSITION_INDEPENDENT_CODE=ON -Dprotobuf_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE=Release
  make -j$(nproc)
  make install

CentOS/RHEL/Fedora:

  git clone https://github.com/protocolbuffers/protobuf.git
  cd protobuf
  git checkout v21.12
  git submodule update --init --recursive
  mkdir build_source && cd build_source
  cmake ../cmake  -DCMAKE_INSTALL_LIBDIR=lib64 -Dprotobuf_BUILD_SHARED_LIBS=OFF -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_POSITION_INDEPENDENT_CODE=ON -Dprotobuf_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE=Release
  make -j$(nproc)
  make install

Here "-DCMAKE_POSITION_INDEPENDENT_CODE=ON" is crucial. By default static libraries are built without "-fPIC" flag, they are not position independent code. But shared libraries must be position independent code. Python C/C++ extensions(like ONNX) are shared libraries. So if a static library was not built with "-fPIC", it can't be linked to such a shared library.

Once the build succeeds, update PATH to include the protobuf paths.

Then you can build ONNX as:

git clone https://github.com/onnx/onnx.git
cd onnx
git submodule update --init --recursive
# Optional: prefer lite proto
export CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON
pip install -e .

Mac

export NUM_CORES=`sysctl -n hw.ncpu`
brew update
brew install autoconf && brew install automake
wget https://github.com/protocolbuffers/protobuf/releases/download/v21.12/protobuf-cpp-3.21.12.tar.gz
tar -xvf protobuf-cpp-3.21.12.tar.gz
cd protobuf-3.21.12
mkdir build_source && cd build_source
cmake ../cmake -Dprotobuf_BUILD_SHARED_LIBS=OFF -DCMAKE_POSITION_INDEPENDENT_CODE=ON -Dprotobuf_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE=Release
make -j${NUM_CORES}
make install

Once the build succeeds, update PATH to include the protobuf paths.

Then you can build ONNX as:

git clone --recursive https://github.com/onnx/onnx.git
cd onnx
# Optional: prefer lite proto
export CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON
pip install -e .

Verify Installation

After installation, run

python -c "import onnx"

to verify it works.
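A slightly more informative check (a sketch) also prints the installed version, IR version, and default opset:

import onnx

print("onnx version:", onnx.__version__)
print("IR version:", onnx.IR_VERSION)
print("default opset:", onnx.defs.onnx_opset_version())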

Common Build Options

For the full list, refer to CMakeLists.txt.

Environment variables

  • USE_MSVC_STATIC_RUNTIME should be 1 or 0, not ON or OFF. When set to 1, onnx links statically to the runtime library. Default: USE_MSVC_STATIC_RUNTIME=0

  • DEBUG should be 0 or 1. When set to 1, onnx is built in debug mode. For debug versions of the dependencies, you need to open the CMakeLists file and append a letter d at the end of the package name lines; for example, NAMES protobuf-lite would become NAMES protobuf-lited. Default: DEBUG=0

CMake variables

  • ONNX_USE_PROTOBUF_SHARED_LIBS should be ON or OFF; it determines how onnx links to the protobuf libraries. Default: ONNX_USE_PROTOBUF_SHARED_LIBS=OFF (with USE_MSVC_STATIC_RUNTIME=0).

    • When set to ON, onnx will dynamically link to the protobuf shared libraries; PROTOBUF_USE_DLLS will be defined as described here, Protobuf_USE_STATIC_LIBS will be set to OFF, and USE_MSVC_STATIC_RUNTIME must be 0.
    • When set to OFF, onnx will link statically to protobuf; Protobuf_USE_STATIC_LIBS will be set to ON (to force the use of the static libraries), and USE_MSVC_STATIC_RUNTIME can be 0 or 1.
  • ONNX_USE_LITE_PROTO should be ON or OFF. When set to ON, onnx uses the lite protobuf instead of the full protobuf. Default: ONNX_USE_LITE_PROTO=OFF

  • ONNX_WERROR should be ON or OFF. When set to ON, warnings are treated as errors. Default: ONNX_WERROR=OFF in local builds, ON in CI and release pipelines.

Common Errors

  • Note: the import onnx command does not work from the source checkout directory; in this case you'll see ModuleNotFoundError: No module named 'onnx.onnx_cpp2py_export'. Change into another directory to fix this error.

  • If you run into any issues while building Protobuf as a static library, please ensure that shared Protobuf libraries, like libprotobuf, are not installed on your device or in the conda environment. If these shared libraries exist, either remove them to build Protobuf from source as a static library, or skip the Protobuf build from source to use the shared version directly.

  • If you run into any issues while building ONNX from source, and your error message reads, Could not find pythonXX.lib, ensure that you have consistent Python versions for common commands, such as python and pip. Clean all existing build files and rebuild ONNX again.

Testing

ONNX uses pytest as its test driver. To run the tests, first install pytest:

pip install pytest nbval

After installing pytest, use the following command to run tests.

pytest

Development

Check out the contributor guide for instructions.

License

Apache License v2.0

Code of Conduct

ONNX Open Source Code of Conduct

onnxmltools's People

Contributors

bowenbao, dependabot[bot], duli2012, frdasah, frozengene, ibadr, interesaaat, janjagusch, jeffsaremi, jiafatom, memoryz, monkey0head, p-, prabhat00155, prasanthpul, scnakandala, shauheen, singlis, stevenlix, szha, tiagoshibata, vinitra, weikexin, wenbingl, wschin, xadupre, xhochy, xiaowuhu, xkszltl, yungcero


onnxmltools's Issues

Protobuf error converting CoreML to ONNX

I have a test CoreML model that is 475 MB in size; a quick script to convert it to ONNX fails with the following error.

Is there a size limitation or something I can do on the CoreML side? I hope to use ONNX to convert CoreML models to other formats so I can do model compression.

Can I do without the .json representation?

[libprotobuf ERROR google/protobuf/io/zero_copy_stream_impl_lite.cc:164] Cannot allocate buffer larger than kint32max for StringOutputStream.
Traceback (most recent call last):
  File "coreml_to_tf.py", line 16, in <module>
    onnxmltools.utils.save_text(onnx_model, onnx_path + '.json')
  File "/home/johnsigmon/programming/ml-sandbox/.env/lib/python3.5/site-packages/onnxmltools/utils/main.py", line 86, in save_text
    f.write(str(model))
ValueError: Unable to convert message to str

The script to reproduce this is:

#Quick converter

import onnx
import onnxmltools
import coremltools

model_path = 'tests/testmodel.mlmodel'
onnx_path = 'tests/testmodel'
model_name = 'Test Model'

# coreml -> onnx
coreml_model = coremltools.utils.load_spec(model_path)
onnx_model = onnxmltools.convert_coreml(coreml_model, model_name)
onnxmltools.utils.save_text(onnx_model, onnx_path + '.json')
onnxmltools.utils.save_model(onnx_model, onnx_path + '.onnx')
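If the .json text dump is not essential, one workaround sketch is to skip save_text entirely, since the traceback shows the failure happens while stringifying the large protobuf; save_model writes the binary format directly:

# Sketch of a workaround: skip the text (.json) dump for very large models.
coreml_model = coremltools.utils.load_spec(model_path)
onnx_model = onnxmltools.convert_coreml(coreml_model, model_name)
onnxmltools.utils.save_model(onnx_model, onnx_path + '.onnx')  # binary only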

new "Transpose" ops when transform keras to onnx

Hi,

Thank you for sharing this nice tool for converting keras to onnx.

I followed the examples to convert my keras model to onnx model, but encountered with some weird results.

The overall model structure stays the same before and after the conversion, but there are a lot of new "Transpose" ops in the converted ONNX model.
In particular, there is a new "Transpose" op before and after the [BatchNorm, Padding, Conv] ops.

Do you know what may cause this weird result? How can I eliminate those "Transpose" ops?

Here is the plotted keras and ONNX model:
keras
onnx

CoreML UnaryFunctionLayerParams.Operation Threshold converted to wrong ONNX Clip

UnaryFunctionLayerParams.Operation Threshold is f(x) = max(α, x) (https://apple.github.io/coremltools/coremlspecification/sections/NeuralNetwork.html#unaryfunctionlayerparams-operation). That is to say, if an element of x is < α, we use α; if an element of x is > α, we use x. In other words, our minimum value is α.

But we convert it to Clip() { attribute: max: α } (https://github.com/onnx/onnx/blob/master/docs/Operators.md#Clip), where ONNX's meaning is:

max : float
Maximum value, above which element is replaced by max

i.e., if an element of x is > MaxValue (α), we use α. This meaning is the opposite of CoreML's threshold function: CoreML's threshold means the minimum value is α, not that the maximum value is α.
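A sketch of the corrected mapping, assuming the attribute-based Clip (pre-opset-11), where the threshold α becomes Clip's min rather than max:

from onnx import helper

alpha = 0.5  # illustrative threshold value taken from the CoreML layer
# f(x) = max(alpha, x) clamps from below, so alpha is the minimum:
node = helper.make_node("Clip", inputs=["X"], outputs=["Y"], min=alpha)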

Multiple Operator Sets in ModelProto for One Domain

Sometimes we create multiple operator sets for one single domain, for example (domain_name, version) = ('', 1), ('', 2), and ('', 6). It's better to have only the one with the highest version number, ('', 6), to avoid confusion.
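A sketch of a cleanup pass that keeps only the highest version per domain in a ModelProto's opset_import, using standard protobuf repeated-field operations:

from collections import defaultdict

def dedupe_opset_imports(model):
    # Record the highest version seen for each domain.
    highest = defaultdict(int)
    for opset in model.opset_import:
        highest[opset.domain] = max(highest[opset.domain], opset.version)
    # Rewrite the repeated field with one entry per domain.
    del model.opset_import[:]
    for domain, version in highest.items():
        entry = model.opset_import.add()
        entry.domain = domain
        entry.version = version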

pip install from source, then import raises ImportError: cannot import name 'onnx_ml_pb2'

After pip-installing from source and importing:

import onnxmltools
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/blue/Documents/third-party/onnxmltools/onnxmltools/__init__.py", line 23, in <module>
    from .utils import load_model
  File "/Users/blue/Documents/third-party/onnxmltools/onnxmltools/utils/__init__.py", line 7, in <module>
    from .main import load_model
  File "/Users/blue/Documents/third-party/onnxmltools/onnxmltools/utils/main.py", line 8, in <module>
    from ..proto import onnx_proto
  File "/Users/blue/Documents/third-party/onnxmltools/onnxmltools/proto/__init__.py", line 16, in <module>
    from onnx import onnx_ml_pb2 as onnx_proto
ImportError: cannot import name 'onnx_ml_pb2'

However, pip install onnxmltools (i.e. from PyPI) doesn't have this problem.

Are the exported models intended to work with Windows.AI.MachineLearning?

Hi, I'm attempting to load model3.onnx (attached):
using Windows.AI.MachineLearning;

I get this error:
_model = await LearningModel.LoadFromStorageFileAsync(file);
Output:
Exception thrown: 'System.Runtime.InteropServices.COMException' in System.Private.CoreLib.dll
WinRT information: [TypeInferenceError] Invalid attribute perm {0, 3, 1, 2}, input shape = {1, 300}
Exception thrown: 'System.Runtime.InteropServices.COMException' in System.Private.CoreLib.dll
WinRT information: [TypeInferenceError] Invalid attribute perm {0, 3, 1, 2}, input shape = {1, 300}
An exception of type 'System.Runtime.InteropServices.COMException' occurred in System.Private.CoreLib.dll but was not handled in user code
WinRT information: [TypeInferenceError] Invalid attribute perm {0, 3, 1, 2}, input shape = {1, 300}
Unspecified error
[TypeInferenceError] Invalid attribute perm {0, 3, 1, 2}, input shape = {1, 300}

I'm not sure where to begin. I have another .onnx model file that loads okay with the same code (also attached).
models.zip

Error Message Unclear

When converting scikit-learn model without specifying initial_types, the error message produced is "Initial types are required." Should we print out something like "Initial types are required. Please call help(onnxmltools.convert.sklearn.convert.convert) for detailed usage of scikit-learn's conversion function."?

Scalar shape in ONNX

It looks like a scalar in ONNX is represented as a tensor whose shape is an empty list. If that's true, we need to revise our converters accordingly.
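For reference, a rank-0 (scalar) value_info is built by passing an empty shape, as in this sketch:

from onnx import helper, TensorProto

# A scalar in ONNX: a tensor whose shape is the empty list (rank 0).
scalar = helper.make_tensor_value_info("alpha", TensorProto.FLOAT, [])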

1.1.2 Release on pip

Hi, when I check onnxmltools package from pip, it shows version 1.0.0.0.
But I think the latest version is 1.1.2.0. How can I get 1.1.2.0?
I am new to Python, so I may be missing something.

Any help much appreciated!

pip show onnxmltools
Name: onnxmltools
Version: 1.0.0.0
Summary: Converts Machine Learning models to ONNX
Home-page: https://github.com/onnx/onnxmltools
Author: Microsoft Corporation
Author-email: [email protected]
License: MIT License
Location: c:\users\mika\appdata\local\programs\python\python36\lib\site-packages
Requires: numpy, protobuf, onnx
Required-by:

Remove use of auto_pad in convert_keras and convert_coreml

Given that auto_pad is deprecated in ONNX and isn't supported for things like infer_shapes, I propose that we remove auto_pad from coreml and keras operator converters (Pool/Conv) and instead use explicit padding.

I was thinking about implementing it. Does anyone have any issues with this?

Adding support for more sklearn models

Hi,

For the sklearn model conversion, can you please add support for below models?

sklearn.decomposition.PCA
sklearn.naive_bayes.BernoulliNB
sklearn.naive_bayes.MultinomialNB
sklearn.linear_model.LassoLars

Example from readme does not work

Example from readme does not work, with the freshest pip-installed version of onnxmltools:

onnx_model = onnxmltools.convert_keras(keras_model)

Results in error:

AttributeError: module 'onnxmltools' has no attribute 'convert_keras'

Just in case, I have:

  • keras: 2.1.4
  • onnxmltools: 0.1.0.0000

LightGBM multiclass convert error

I tried to add a test for LightGBM, but I found that multiclass classification seems to have an error:

=================================== FAILURES ===================================
_______ TestLGBMClassifierConverter.test_model_multiclass_classification _______

self = <test_LightGBMClassifier.TestLGBMClassifierConverter testMethod=test_model_multiclass_classification>

    def test_model_multiclass_classification(self):
        model = self._fit_model_binary_classification(LGBMClassifier(
            objective="ova",
            learning_rate=0.05,
            boosting_type="gbdt",
            num_class=10))
>       model_onnx = convert_sklearn(model, 'scikit-learn LGBM multiclass classifier', [('input', FloatTensorType([1, 10]))])

tests/sklearn/test_LightGBMClassifier.py:47: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
onnxmltools/convert/main.py:18: in convert_sklearn
    doc_string, targeted_onnx, custom_conversion_functions, custom_shape_calculators)
onnxmltools/convert/sklearn/convert.py:97: in convert
    onnx_model = convert_topology(topology, name, doc_string, targeted_onnx)
onnxmltools/convert/common/_topology.py:704: in convert_topology
    _registration.get_converter(operator.type)(scope, operator, container)
onnxmltools/convert/sklearn/operator_converters/LightGbm.py:140: in convert_lightgbm
    _parse_tree_structure(tree_id, class_id, learning_rate, tree['tree_structure'], attrs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

tree_id = 2, class_id = 2, learning_rate = 1
tree_structure = {'leaf_value': -34.53877639770508}
attrs = {'class_ids': [0, 0, 0, 1, 1, 1], 'class_nodeids': [3, 4, 2, 3, 4, 2], 'class_treeids': [0, 0, 0, 1, 1, 1], 'class_weights': [0.004500000000000001, 0.005, -0.005000000000000001, -0.004500000000000001, -0.005, 0.005000000000000001], ...}

    def _parse_tree_structure(tree_id, class_id, learning_rate, tree_structure, attrs):
        # The pool of all nodes' indexes created when parsing a single tree. Different trees may use different pools.
        node_id_pool = set()
    
        node_id = _create_node_id(node_id_pool)
        left_id = _create_node_id(node_id_pool)
        right_id = _create_node_id(node_id_pool)
    
        attrs['nodes_treeids'].append(tree_id)
        attrs['nodes_nodeids'].append(node_id)
    
>       attrs['nodes_featureids'].append(tree_structure['split_feature'])
E       KeyError: 'split_feature'

onnxmltools/convert/sklearn/operator_converters/LightGbm.py:49: KeyError

The relevant code is here #152
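One possible fix, sketched below under the assumption that the failing tree is a single leaf ({'leaf_value': ...}) with no 'split_feature' key: special-case such stumps before reading split attributes. The attribute names follow the traceback above, but the body is an illustration rather than the actual patch:

def _parse_tree_structure(tree_id, class_id, learning_rate, tree_structure, attrs):
    node_id_pool = set()
    node_id = _create_node_id(node_id_pool)

    # Sketch: a tree that is a single leaf has no 'split_feature' key,
    # so emit one LEAF node instead of reading split attributes.
    if 'split_feature' not in tree_structure:
        attrs['nodes_treeids'].append(tree_id)
        attrs['nodes_nodeids'].append(node_id)
        attrs['nodes_modes'].append('LEAF')
        attrs['class_treeids'].append(tree_id)
        attrs['class_nodeids'].append(node_id)
        attrs['class_ids'].append(class_id)
        attrs['class_weights'].append(learning_rate * tree_structure['leaf_value'])
        return
    # ... original handling of split nodes continues here ...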

Keras Parser and Supported Keras Versions

For both Sequential and Model, the Keras parser works correctly with Keras 2.0.9 on Windows Anaconda3 locally. However, when running in CI it failed with the following message.

tests/end2end/test_single_operator_with_cntk_backend.py:92: in _test_one_to_one_operator_core
    onnx_model = onnxmltools.convert_keras(keras_model)
onnxmltools/convert/main.py:31: in convert_keras
    return convert(model, name, initial_types=initial_types, doc_string=doc_string)
onnxmltools/convert/keras/convert.py:30: in convert
    topology = parse_keras(model, initial_types)


model = <keras.models.Sequential object at 0x7f7b40cf4908>, initial_types = None
def parse_keras(model, initial_types=None):
    raw_model_container = KerasModelContainer(model)
    topology = Topology(raw_model_container, default_batch_size=1, initial_types=initial_types)
    scope = topology.declare_scope('root')
    for node in model.inbound_nodes:
E       AttributeError: 'Sequential' object has no attribute 'inbound_nodes'

onnxmltools/convert/keras/_parse.py:74: AttributeError
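A version-tolerant sketch of the failing access, assuming newer Keras releases renamed the attribute to _inbound_nodes:

# Sketch: fall back between the two attribute spellings used by
# different Keras versions.
nodes = getattr(model, 'inbound_nodes', None)
if nodes is None:
    nodes = getattr(model, '_inbound_nodes', [])
for node in nodes:
    ...  # existing per-node parsing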

Deprecated Padding Attribute in Pooling

When converting models from Core ML, we should calculate the padding amounts and create an ONNX Pad operator to replace the use of deprecated attributes (e.g., SAME_LOWER); see the sketch after this issue.

As an example of a correct solution, we calculate the padding amounts explicitly for Core ML's IncludeLastPixel padding mode in Pooling. An ONNX Pad is then created to replace that padding in Pool.
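For reference, a sketch of the usual SAME-padding arithmetic that explicit pads would have to reproduce (the function name and example values are illustrative):

import math

def same_pads(in_size, kernel, stride):
    # SAME_* semantics: out = ceil(in / stride).
    out_size = math.ceil(in_size / stride)
    total = max((out_size - 1) * stride + kernel - in_size, 0)
    # SAME_UPPER puts the extra pixel at the end; SAME_LOWER at the beginning.
    begin = total // 2
    return begin, total - begin

print(same_pads(28, 3, 2))  # (0, 1)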

Flatten layer outputting incorrect values

I successfully converted a Keras implementation of the Neural Collaborative Filtering model to ONNX using OnnxMLTools, but the Flatten layer of the converted ONNX model is outputting incorrect values (for example, the ONNX model outputs dim_value: 1) compared to the original Keras model (which outputs, for example, (None, 8)).

The ONNX model I am using is available here: https://www.dropbox.com/s/fwiaqk6xy0n3z6o/NCF.onnx?dl=0

I'm not sure if this is a bug in the converter or if there is a problem with the ONNX model itself. Some relevant details: I used Python 2.7.15, Keras 2.2.4, and OnnxMLTools 1.2.2.0129 when converting the Keras model to ONNX.

Could someone please advise on this issue? I can share more details if needed.

I've included a picture below of the Keras model:

model_plot

I'm also including a picture of the outputs for the Flatten layers for the Onnx model:

onnx_flatten

Thank you!

AttributeError: 'TypeProto' object has no attribute 'sequence_type'

Hi,

I tried to play with the example from the tests:

import onnxmltools
from sklearn.datasets import load_iris
from sklearn.svm import SVC, SVR, NuSVC, NuSVR
from onnxmltools import convert_sklearn
from onnxmltools.convert.common.data_types import FloatTensorType
 
iris = load_iris()
X = iris.data[:, :3]
y = iris.target
model = SVC(kernel='linear', probability=False)
model.fit(X, y)
nodes = convert_sklearn(model, 'SVC', [('input', FloatTensorType([1, 1]))]).graph.node

And got the error in the title. Am I missing something?

Versions:
onnx 1.3.0
onnxmltools 1.2.2.129
scikit-learn 0.19.2

initializer is not in the graph input

The related node in the model graph:

  node {
    input: "input__0"
    input: "convolution.W"
    output: "MobilenetV1_MobilenetV1_Conv2d_0_convolution_0"
    name: "convolution"
    op_type: "Conv"
    attribute {
      name: "dilations"
      ints: 1
      ints: 1
      type: INTS
    }

And when using the ONNX IR checker:

import onnx
# Load the ONNX model
model = onnx.load("mobilenet.onnx")

# Check that the IR is well formed
onnx.checker.check_model(model)

# Print a human readable representation of the graph
onnx.helper.printable_graph(model.graph)

Traceback (most recent call last):
  File "onnx_checker.py", line 6, in <module>
    onnx.checker.check_model(model)
  File "/usr/local/Cellar/python3/3.6.0_1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/onnx-1.1.1-py3.6-macosx-10.12-x86_64.egg/onnx/checker.py", line 76, in check_model
    C.check_model(model.SerializeToString())
onnx.onnx_cpp2py_export.checker.ValidationError: convolution.W in initializer but not in graph input

I found https://github.com/onnx/onnxmltools/blob/master/onnxmltools/convert/common/_topology.py#L635:

    graph = helper.make_graph(container.nodes, model_name, container.inputs, container.outputs, container.initializers)

Maybe we should use input_with_initializers (container.inputs + container.initializers), as sketched below.
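A sketch of that suggestion, declaring each initializer as a graph input so the checker no longer complains (container is the converter's internal object referenced above):

from onnx import helper

initializer_inputs = [
    helper.make_tensor_value_info(t.name, t.data_type, list(t.dims))
    for t in container.initializers
]
graph = helper.make_graph(
    container.nodes, model_name,
    container.inputs + initializer_inputs,
    container.outputs, container.initializers)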

Add image metadata to converted models

Hi,

onnxmltools could support image metadata properties in the converted models. I'm starting to work on it at a fork. My goal is to have model metadata and type denotation automatically added if the source model declares the input as an image.

I am doing the work on the CoreML conversion. Is this a desired feature? I can open a PR after I validate it; is it likely to be merged upstream?
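For context, a minimal sketch of attaching free-form model metadata with the stock onnx helper; the key names below are hypothetical, and which keys an image-aware runtime expects is exactly what the proposed feature would standardize:

import onnx
from onnx import helper

model = onnx.load("model.onnx")
# set_model_props replaces metadata_props with the given string pairs.
helper.set_model_props(model, {
    "Image.BitmapPixelFormat": "Bgr8",  # hypothetical image metadata key
    "Image.ColorSpaceGamma": "SRGB",    # hypothetical image metadata key
})
onnx.save(model, "model_with_metadata.onnx")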

Version String Conflicts after ONNX master moves to ONNX-1.2

Running the unit tests can produce something like:

RuntimeError: ONNX version conflict found. The installed version is 1.2 while the targeted version is 1.1.2

This problem happened because the default targeted ONNX version was 1.1.2 while the installed onnx was 1.2. Therefore, the user needs to explicitly specify the ONNX version.

Incorrect error message while converting model to onnx format

The latest version of onnxmltools has targeted_onnx = '1.1.2', and if the installed version is 1.1.1 then the error message reads 'ONNX version conflict found. The installed version is 1.1.2 while the targeted version is 1.1.1', which is the inverse of the actual situation.

if targeted_onnx != onnx.__version__:

Also, the onnx version in master is 1.1.1 and in pip it is 1.1.2, but master is definitely more up to date.

LightGBM regression on convert_sklearn() Pipeline

I know we have moved the LightGBM converter to its own namespace, but I strongly believe that we should keep the sklearn Pipeline conversion working with LightGBM. An example is below:

clf = Pipeline(steps=[
    ('onehot', OneHotEncoder(handle_unknown='ignore')),
    ('classifier', lgb.LGBMClassifier(objective="binary"))
])

clf.fit(X_train[categorical_features], y_train)
clf.score(X_test[categorical_features], y_test)
model_onnx = convert_sklearn(clf, 'pipeline', [("input", StringTensorType([1, 3]))])

I got ValueError: No proper operator name found for '<class 'lightgbm.sklearn.LGBMClassifier'>'

Specifying input shapes example

When converting models from Core ML, the batch size is unknown (variable-length) by default. To overwrite this setting, one can specify their own input shapes.

Consider the MNIST.mlmodel downloaded here. We can set the batch size to 3 by running the following conversion code.

from onnxmltools.utils import visualize_model
from onnxmltools.convert.coreml.convert import convert
from onnxmltools.convert.common.data_types import FloatTensorType
from winmltools.utils import save_model
from winmltools.utils import save_text
from coremltools.models.utils import load_spec

coreml_model = load_spec('MNIST.mlmodel')

# Each input type is a tuple of (variable_name, variable_type). To find Core ML variable names
# and their types, you can print out coreml_model.description. If the considered Core ML
# variable, say coreml_model.description.input[0], is an image, you need to find out its color
# space value via printing coreml_model.description.input[0].type.imageType.colorSpace. 
# If the value of colorSpace is 10/20/30, the corresponding color space is 'GRAY'/'RGB'/'BGR'.
initial_type = (coreml_model.description.input[0].name, FloatTensorType(shape=[3, 1, 28, 28], color_space='GRAY')) # Other allowed color spaces are "RGB" and "BGR"
onnx_model = convert(coreml_model, initial_types=[initial_type])
# The produced ONNX model in text format
save_text(onnx_model, "mnist.onnx.txt")
# The produced ONNX model
save_model(onnx_model, 'mnist.onnx')
# Call a simple visualization tool
visualize_model(onnx_model)

Another example, using BGR image input, is shown below.

from onnxmltools.utils import visualize_model
from onnxmltools.convert.coreml.convert import convert
from onnxmltools.convert.common.data_types import FloatTensorType
from winmltools.utils import save_model
from winmltools.utils import save_text
from coremltools.models.utils import load_spec

coreml_model = load_spec('FNS-Candy.mlmodel')

# Set batch size to 1 instead of using variable-size batch
initial_type = (coreml_model.description.input[0].name, FloatTensorType(shape=[1, 3, 720, 720], color_space='BGR')) # Other allowed color spaces are "RGB" and "GRAY"
onnx_model = convert(coreml_model, initial_types=[initial_type])
save_text(onnx_model, "fns.onnx.txt")
save_model(onnx_model, 'fns.onnx')

visualize_model(onnx_model)

Unsupported shape calculation for operator Dropout

I made a new model using keras, and saved it to an .hdf file using a callback during training.

I reloaded the model and tried to convert to an ONNX model:

model = keras.models.load_model(filename)
winml_onnx_model = onnxmltools.convert_keras(model)

convert_keras throws an error here. Any idea what I might be doing wrong? I'm on Windows 10, python 3.6.2, Tensorflow 1.8, Keras 2.1.5, onnxmltools 1.2.0.0116, winmltools 1.2.0.0725

The model uses the following operators (but the error seems to be due to the Dropout layer):

Activation, Dropout, BatchNormalization, Convolution2D, MaxPooling2D, GlobalAveragePooling2D

If I do model.summary(), everything seems fine with my model. Batch size is always specified as "None" in the model summary, so I wonder if there's some issue with how batch size is handled in the ONNX conversion (this is my first time making an ONNX model).

Thanks!

UPDATE: This simple model gives the same error. I also tried specifying the batch_input_shape explicitly to just allow 1 item in the batch, and it didn't change anything:

model2 = Sequential()
model2.add(Dense(50, batch_input_shape=[1, 5]))
model2.add(Dropout(0.5))
winml_onnx_model2 = onnxmltools.convert_keras(model2)

Error below:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-15-da007af6c51d> in <module>()
----> 1 winml_onnx_model = onnxmltools.convert_keras(model)

C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\onnxmltools\convert\main.py in convert_keras(model, name, initial_types, doc_string, targeted_onnx, custom_conversion_functions, custom_shape_calculators)
     36     from .keras.convert import convert
     37     return convert(model, name, initial_types,
---> 38                    doc_string, targeted_onnx, custom_conversion_functions, custom_shape_calculators)

C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\onnxmltools\convert\keras\convert.py in convert(model, name, initial_types, doc_string, targeted_onnx, custom_conversion_functions, custom_shape_calculators)
     40     topology = parse_keras(model, initial_types, targeted_onnx, custom_conversion_functions, custom_shape_calculators)
     41 
---> 42     topology.compile()
     43 
     44     if name is None:

C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\onnxmltools\convert\common\_topology.py in compile(self)
    607         self._resolve_duplicates()
    608         self._fix_shapes()
--> 609         self._infer_all_types()
    610         self._check_structure()
    611 

C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\onnxmltools\convert\common\_topology.py in _infer_all_types(self)
    493                 pass  # in Keras converter, the shape calculator can be optional.
    494             else:
--> 495                 operator.infer_types()
    496 
    497     def _resolve_duplicates(self):

C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\onnxmltools\convert\common\_topology.py in infer_types(self)
     95     def infer_types(self):
     96         # Invoke a core inference function
---> 97         _registration.get_shape_calculator(self.type)(self)
     98 
     99 

C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\onnxmltools\convert\common\_registration.py in get_shape_calculator(operator_name)
     66     '''
     67     if operator_name not in _shape_calculator_pool:
---> 68         raise ValueError('Unsupported shape calculation for operator %s' % operator_name)
     69     return _shape_calculator_pool[operator_name]

ValueError: Unsupported shape calculation for operator <class 'keras.layers.core.Dropout'>
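Judging from the convert_keras signature visible in the traceback, one workaround sketch is to register a pass-through shape calculator for Dropout, which is an identity at inference time; the dict key format is an assumption:

from keras.layers import Dropout
import onnxmltools

def dropout_shape_calculator(operator):
    # Dropout is an identity at inference time: output type == input type.
    operator.outputs[0].type = operator.inputs[0].type

winml_onnx_model2 = onnxmltools.convert_keras(
    model2,
    custom_shape_calculators={Dropout: dropout_shape_calculator})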

Conversion of Keras MobileNet fails

The following code fails.

from keras.applications.mobilenet import MobileNet
model = MobileNet(include_top=True, weights='imagenet')

from onnxmltools import convert_keras
onx = convert_keras(model, 'mobilenet.onnx')

from onnxmltools.utils import save_model
save_model(onx, "mobilenet.onnx")

BatchNormalization sets wrong opset version (1, not 6)

The ONNX checker reports an error about the BatchNormalization operator's consumed_inputs attribute. That attribute belongs to opset version 1, but we mean to emit the opset version 6 BatchNormalization.

https://github.com/onnx/onnx/blob/master/docs/Operators.md#BatchNormalization

This version of the operator has been available since version 6 of the default ONNX operator set.
Other versions of this operator: BatchNormalization-1

Our code is https://github.com/onnx/onnxmltools/blob/master/onnxmltools/convert/coreml/operator_converters/neural_network/BatchNorm.py#L69:

    container.add_node(op_type, inputs, outputs, **attrs)

We use the default parameter op_version=1; here it should be 6 (a sketch of the fix follows the error output).

See the error output:

Traceback (most recent call last):
  File "onnx_checker.py", line 6, in <module>
    onnx.checker.check_model(model)
  File "/usr/local/Cellar/python3/3.6.0_1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/onnx-1.1.1-py3.6-macosx-10.12-x86_64.egg/onnx/checker.py", line 76, in check_model
    C.check_model(model.SerializeToString())
onnx.onnx_cpp2py_export.checker.ValidationError: Required attribute 'consumed_inputs' is missing.

==> Context: Bad node spec: input: "MobilenetV1_MobilenetV1_Conv2d_0_convolution_0" input: "BatchNormalization_scale" input: "BatchNormalization_B" input: "BatchNormalization_mean" input: "BatchNormalization_variance" output: "MobilenetV1_MobilenetV1_Conv2d_0_BatchNorm_batchnorm_add_1_0" name: "batchnorm" op_type: "BatchNormalization" attribute { name: "epsilon" f: 0 type: FLOAT } attribute { name: "is_test" i: 1 type: INT } attribute { name: "momentum" f: 0 type: FLOAT } attribute { name: "spatial" i: 1 type: INT } domain: "" 
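The fix suggested above would be a one-argument change at the call site, sketched here:

# Sketch: request the opset-6 BatchNormalization, which drops the
# 'consumed_inputs' attribute, instead of the default op_version=1.
container.add_node(op_type, inputs, outputs, op_version=6, **attrs)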

Error when converting cnn-emotion-detection from CoreML

Trying to convert a CNN emotion-detection Core ML model, I get an error.

Model: https://coreml.store/cnnemotions
Config: Python 3.6.4, OSX
Install: from source using pip install git+https://github.com/onnx/onnxmltools
Script and error:

>>> import onnxmltools
>>> import coremltools
>>> model_coreml = coremltools.utils.load_spec('CNNEmotions.mlmodel')
>>> model_onnx = onnxmltools.convert_coreml(model_coreml, 'Image_Reco')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/hag/anaconda/envs/mms-onnx-talk-3-6/lib/python3.6/site-packages/onnxmltools/convert/main.py", line 23, in convert_coreml
    return convert(model, name)
  File "/Users/hag/anaconda/envs/mms-onnx-talk-3-6/lib/python3.6/site-packages/onnxmltools/convert/coreml/convert.py", line 77, in convert
    nodes = _convert_coreml_node(context, spec)
  File "/Users/hag/anaconda/envs/mms-onnx-talk-3-6/lib/python3.6/site-packages/onnxmltools/convert/coreml/convert.py", line 310, in _convert_coreml_node
    return _convert_neural_network(context, cm_node)
  File "/Users/hag/anaconda/envs/mms-onnx-talk-3-6/lib/python3.6/site-packages/onnxmltools/convert/coreml/convert.py", line 267, in _convert_neural_network
    context, converter, nn_layer, inputs, outputs)
  File "/Users/hag/anaconda/envs/mms-onnx-talk-3-6/lib/python3.6/site-packages/onnxmltools/convert/coreml/convert.py", line 115, in _do_convert
    nodes = converter.convert(context, cm_node, input, output)
  File "/Users/hag/anaconda/envs/mms-onnx-talk-3-6/lib/python3.6/site-packages/onnxmltools/convert/coreml/NeuralNetwork/pooling.py", line 154, in convert
    raise ValueError('Unsupported padding mode: {}'.format(pad_type))
ValueError: Unsupported padding mode: includeLastPixel

Lambda layer in Keras not implemented?

Hey, thank you for a great library!

I was trying to convert Yolov2 object detection model from Keras->ONNX and got this error:
Traceback (most recent call last):
  File "predict.py", line 179, in <module>
    _main_(args)
  File "predict.py", line 84, in _main_
    onnx_model = onnxmltools.convert_keras(yolo.model)
  File "/home/cheng/miniconda2/envs/tensorflow/lib/python2.7/site-packages/onnxmltools/convert/main.py", line 38, in convert_keras
    target_opset, targeted_onnx, custom_conversion_functions, custom_shape_calculators)
  File "/home/cheng/miniconda2/envs/tensorflow/lib/python2.7/site-packages/onnxmltools/convert/keras/convert.py", line 43, in convert
    topology.compile()
  File "/home/cheng/miniconda2/envs/tensorflow/lib/python2.7/site-packages/onnxmltools/convert/common/_topology.py", line 624, in compile
    self._infer_all_types()
  File "/home/cheng/miniconda2/envs/tensorflow/lib/python2.7/site-packages/onnxmltools/convert/common/_topology.py", line 500, in _infer_all_types
    operator.infer_types()
  File "/home/cheng/miniconda2/envs/tensorflow/lib/python2.7/site-packages/onnxmltools/convert/common/_topology.py", line 101, in infer_types
    _registration.get_shape_calculator(self.type)(self)
  File "/home/cheng/miniconda2/envs/tensorflow/lib/python2.7/site-packages/onnxmltools/convert/common/_registration.py", line 68, in get_shape_calculator
    raise ValueError('Unsupported shape calculation for operator %s' % operator_name)
ValueError: Unsupported shape calculation for operator <class 'keras.layers.core.Lambda'>

Is the Lambda layer implemented?

Thank you!

ONNX checker fails generated models

Running checker.py from ONNX on models generated by ONNXMLTools fails with:
Error: model with IR version >= 3 must specify opset_import for ONNX
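A sketch of a post-hoc repair for such a model, filling in the missing default-domain opset_import entry (the version number 7 is illustrative):

import onnx

model = onnx.load("model.onnx")
if not model.opset_import:
    entry = model.opset_import.add()
    entry.domain = ""   # the default ONNX domain
    entry.version = 7   # illustrative opset version
onnx.checker.check_model(model)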

keras.layers.advanced_activations.ReLU conversion fails

from keras.applications.mobilenetv2 import MobileNetV2
model = MobileNetV2(input_shape=None, alpha=1.0, depth_multiplier=1,
include_top=True,
weights='imagenet', input_tensor=None,
pooling=None, classes=1000)

from onnxmltools import convert_keras
konnx = convert_keras(model, "mobilev2")

Issue:

Unsupported shape calculation for operator <class 'keras.layers.advanced_activations.ReLU'>

MaxPool and AveragePool conversion doesn't need explicit Pad with ONNX 1.2

As discussed in internal email, the CoreML converter is creating Pad->MaxPool sequences where the Pad node is not required. These were observed when converting squeezenet.mlmodel.

I pasted the two observed sequences below. The first generated a no-op Pad operator where the input tensor was already even-aligned. The second can fold the padding into the pool operators, possibly with AveragePool also needing count_include_pad to get the same result.

node {
  input: "conv1"
  output: "legacy_padded_tensor"
  name: "Pad"
  op_type: "Pad"
  attribute {
    name: "pads"
    ints: 0
    ints: 0
    ints: 0
    ints: 0
    ints: 0
    ints: 0
    ints: 0
    ints: 0
    type: INTS
  }
  attribute {
    name: "value"
    f: -3.40282347e+38
    type: FLOAT
  }
  domain: ""
}
node {
  input: "legacy_padded_tensor"
  output: "pool1"
  name: "pooling"
  op_type: "MaxPool"
  attribute {
    name: "auto_pad"
    s: "VALID"
    type: STRING
  }
  attribute {
    name: "kernel_shape"
    ints: 3
    ints: 3
    type: INTS
  }
  attribute {
    name: "strides"
    ints: 2
    ints: 2
    type: INTS
  }
  domain: ""
}
