onnx / models
A collection of pre-trained, state-of-the-art models in the ONNX format
Home Page: http://onnx.ai/models/
License: Apache License 2.0
Hi guys,
I was wondering if anyone knew what pre-processing steps were taken for each respective model, more specifically for VGG19. Additionally, are these pre-processing steps the same or similar across all the models? Any information would be helpful.
Thank you
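For what it's worth, a common ImageNet-style preprocessing pipeline looks like the sketch below. This is an assumption based on typical ImageNet-trained models, not a confirmed recipe for the zoo's VGG19 export (Caffe-derived VGG models often use BGR channel ordering and raw-pixel mean subtraction instead):

```python
import numpy as np

# Hypothetical 224x224 RGB image with values in [0, 255].
img = np.random.randint(0, 256, (224, 224, 3)).astype(np.float32)

# Typical ImageNet normalization constants (an assumption -- verify per model).
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)

x = (img / 255.0 - mean) / std          # scale to [0, 1], then normalize
x = x.transpose(2, 0, 1)[np.newaxis]    # HWC -> NCHW, add a batch dimension
print(x.shape)  # (1, 3, 224, 224)
```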
I'm new to deep learning, and I want to see the output of each node of AlexNet using the run_node function, but I hit the following error.
import onnx
model = onnx.load('/path/bvlc_alexnet/model.pb')
node = model.graph.node[0]  # use the first node, whose op_type is "Conv"
onnx.checker.check_node(node)
Traceback (most recent call last):
File "", line 1, in
File "/usr/local/lib/python2.7/dist-packages/onnx/checker.py", line 32, in checker
proto.SerializeToString(), ir_version)
onnx.onnx_cpp2py_export.checker.ValidationError: Field 'type' of attr is required but missing.
node
input: "data_0"
input: "conv1_w_0"
input: "conv1_b_0"
output: "conv1_1"
name: ""
op_type: "Conv"
attribute {
name: "strides"
ints: 4
ints: 4
}
attribute {
name: "pads"
ints: 0
ints: 0
ints: 0
ints: 0
}
attribute {
name: "kernel_shape"
ints: 11
ints: 11
}
I tried to add the 'type' field of the attr with the following code:
from onnx import AttributeProto
attr = AttributeProto()
# I didn't know which type was right; I just tried this workaround
# with the 'AttributeProto.FLOAT' value.
attr.type = AttributeProto.FLOAT
node.attribute.extend([attr])
node
input: "data_0"
input: "conv1_w_0"
input: "conv1_b_0"
output: "conv1_1"
name: ""
op_type: "Conv"
attribute {
name: "strides"
ints: 4
ints: 4
}
attribute {
name: "pads"
ints: 0
ints: 0
ints: 0
ints: 0
}
attribute {
name: "kernel_shape"
ints: 11
ints: 11
}
attribute {
type: FLOAT
}
onnx.checker.check_node(node)
but I still got the same error.
Hi ONNX team, I'm working with @prasanthpul to standardize on a README template for the models in the zoo. Please let me know your thoughts. See here for the MNIST model for reference.
Download: link to model download
Model size: x MB
Description of what the model does
Name and link to the paper that the model implements, if applicable.
Dataset that was used to train the model
Reference to the tutorial used for training and/or the source/Github repo of the code that the ONNX model was generated or converted from (e.g. TinyYOLO CoreML).
If this is different than the original model code/source (e.g. TinyYOLO DarkNet), provide a link to that, too.
Expected model input format - type and shape
Expected model output format - type and shape
Description of what pre-processing steps need to be done on the data before feeding them to the model
If applicable, description or link to reference of any post-processing steps to perform on model output
Short description of the included sample test data (.pb or .npz).
If possible, include details that support model reproducibility and integrity.
I am using onnx 1.1.1, where onnx.defs.onnx_opset_version()
returns 4, and thus Caffe2 tries to download https://s3.amazonaws.com/download.onnx/models/opset_4/resnet50.tar.gz
Downloading this file gives the following error:
$ wget https://s3.amazonaws.com/download.onnx/models/opset_4/resnet50.tar.gz
--2018-05-01 15:11:56-- https://s3.amazonaws.com/download.onnx/models/opset_4/resnet50.tar.gz
Resolving s3.amazonaws.com (s3.amazonaws.com)... 52.216.99.21
Connecting to s3.amazonaws.com (s3.amazonaws.com)|52.216.99.21|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2018-05-01 15:11:56 ERROR 403: Forbidden.
I got this sample code from https://github.com/onnx/models and wanted to see it running with an ONNX model. I took resnet50 as an example and got the following error.
import numpy as np
import onnx
import caffe2
from onnx.backend.base import Backend
model_path = '/home/onnx/Downloads/resnet50/model.pb'
npz_path = '/home/onnx/Downloads/resnet50/test_data_0.npz'
model = onnx.load(model_path)
sample = np.load(npz_path, encoding='bytes')
inputs = list(sample['inputs'])
outputs = list(sample['outputs'])
np.testing.assert_almost_equal(outputs, Backend.run_model(model, inputs))
AttributeError Traceback (most recent call last)
in ()
12 outputs = list(sample['outputs'])
13
---> 14 np.testing.assert_almost_equal(outputs,Backend.run_model(model, inputs))
/anaconda/lib/python2.7/site-packages/onnx/backend/base.pyc in run_model(cls, model, inputs, device, **kwargs)
55 @classmethod
56 def run_model(cls, model, inputs, device='CPU', **kwargs):
---> 57 cls.prepare(model, device, **kwargs).run(inputs)
58
59 @classmethod
AttributeError: 'NoneType' object has no attribute 'run'
How can I overcome this error? Why is it returning 'NoneType'?
GitHub has a 100 MB file size limit, and our models are usually very large, so we should not store them in GitHub directly.
Right now most of the models are hosted on S3; we should move the rest somewhere suitable (e.g., S3, Git LFS) too.
I'm looking at some of the models gathered in this repository (SqueezeNet, DenseNet-121, ResNet-50) and I noticed a pattern.
All parameters of the graph are specified as input fields and none as value_info fields. This mixes model inputs and trainable parameters into a single set, and I'm not sure how to distinguish between them.
Separating model inputs and trainable variables may be useful for frameworks that distinguish the concepts of placeholder and Variable.
Would you agree or am I missing something?
The resnet model provides the following attributes for the BatchNormalization operator:
{u'is_test': 1L, u'epsilon': 1.0000000656873453e-05, u'consumed_inputs': (0L, 0L, 0L, 1L, 1L)}
Can someone explain what consumed_inputs is? I couldn't find its explanation in the ONNX documentation for the BatchNormalization operator.
@bddppq
Should the models get updated to use the .onnx file extension?
Also wondering: should the .tar.gz files be linked directly from the top-level README.md, and the files named bvlc_alexnet.onnx, inception_v2.onnx, etc. instead of model.pb?
VGG16 should be, as described here, composed of 16 layers with weights. The loaded model is composed of 5 Conv and 3 Gemm layers.
Hi,
I used
convert-onnx-to-caffe2 model.onnx -o predict_net.pb --init-net-output init_net.pb
But when trying to run it on an input, I get:
File "/Users/louisabraham/miniconda3/lib/python3.6/site-packages/caffe2/python/workspace.py", line 212, in RunNetOnce
StringifyProto(net),
File "/Users/louisabraham/miniconda3/lib/python3.6/site-packages/caffe2/python/workspace.py", line 192, in CallWithExceptionIntercept
return func(*args, **kwargs)
RuntimeError: [enforce fail at conv_pool_op_base.h:617] in_size + *pad_head + *pad_tail >= dkernel. 1 vs 3 Error from operator:
input: "ReLU2700_Output_0" input: "Parameter675" output: "Convolution2714_Output_0" name: "Convolution2714" type: "Conv" arg { name: "kernels" ints: 3 ints: 3 } arg { name: "strides" ints: 1 ints: 1 } arg { name: "auto_pad" s: "SAME_UPPER" } arg { name: "group" i: 1 } arg { name: "dilations" ints: 1 ints: 1 } device_option { device_type: 0 cuda_gpu_id: 0 }
I think my input shape is correct (otherwise I would get an error on the first layer, input: "Input2505").
Any chance someone has an SSD MobileNet ONNX model yet?
Are all the operations needed for an SSD supported?
I was loading the MNIST model and wanted to know where the model weights are stored. In other ONNX models, the Graph.initializer structure holds this information, but in this particular MNIST model, Graph.initializer is empty.
To address the schema changes.
The caffe2 backend can't run resnet/mnist; it fails.
resnet:
RuntimeError: [enforce fail at conv_op_impl.h:37] X.ndim() == filter.ndim(). 5 vs 4 Error from operator:
input: "gpu_0/data_0" input: "gpu_0/conv1_w_0" output: "gpu_0/conv1_1" name: "" type: "Conv"
mnist:
RuntimeError: [enforce fail at reshape_op.h:120] total_size == size. 64 vs 256. Argument shape
does not agree with the input data. (64 != 256) Error from operator:
input: "Pooling160_Output_0" output: "Pooling160_Output_0_reshape0" output: "OC2_DUMMY_1" name: "Times212_reshape0" type: "Reshape"
I think it could be a problem with the CNTK export?
When I convert the provided tiny-yolo-voc.onnx to another framework, it works fine. But when I convert my own tiny-yolo-voc.onnx, it warns me that pads is not supported. Does that mean the provided ONNX model has no pads? And could you provide tiny-yolo trained on COCO? Thanks a lot.
The 3 input / output pairs appear to be mismatched.
In particular,
Input 0 ---> Output 1
Input 1 ---> Output 2
Input 2 ---> Output 0.
So the 3 test_data_[012].npz files need to be recreated, I guess.
Posted in the wrong repo, delete this.
I downloaded the emotion_ferplus ONNX model from https://github.com/onnx/models/tree/master/emotion_ferplus.
But when using the onnx.checker.check_model function, it reports an error.
I found that in emotion_ferplus.onnx, opset_import is [], i.e. it is not set.
The labels mentioned in the model's README are these:
emotion_table = {'neutral':0, 'happiness':1, 'surprise':2, 'sadness':3, 'anger':4, 'disgust':5, 'fear':6, 'contempt':7}
They do not match the labels mentioned in the dataset source:
(0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral)
Can this be clarified, @ebarsoum?
I tried running the sample code in README.md and got the following error.
Traceback (most recent call last):
File "test.py", line 3, in <module>
import onnx_backend as backend
ImportError: No module named onnx_backend
I believe the sample is based on old code.
As I also stated in #942, I realized that even the emotion_ferplus model has _type = 0 for a few AttributeProtos.
cc @bddppq Can you please have a look?
import onnx
model = onnx.load_model('mnist.onnx')
model = onnx.checker.check_model(model)
resulting error
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
<ipython-input-2-4ef34880ef37> in <module>()
1 model = onnx.load_model('mnist.onnx')
----> 2 model = onnx.checker.check_model(model)
~/py36/lib/python3.6/site-packages/onnx/checker.py in check_model(model)
80
81 def check_model(model): # type: (ModelProto) -> None
---> 82 C.check_model(model.SerializeToString())
83
84
ValidationError: model with IR version >= 3 must specify opset_import for ONNX
I was trying out the tiny YOLO v2 model when I came across a strange error. Apparently, there exists a convolution.W key which is referenced without being initialized. Can someone tell me the reason behind this?
The MNIST model contains a layer named Constant, along with Conv, Relu, etc. I wanted to know what this Constant layer does in the model (what operation does it perform?).
The ShuffleNet model in this repo has an older version of the BatchNormalization op that uses the attribute consumed_inputs. There are two issues with this:
Is it possible to remove this attribute from the model and regenerate the model with the latest version of the BatchNormalization op?
Hi, is it a mistake that the last max-pooling layer of AlexNet has padding [0, 0, 1, 1]? This node's output is named "pool5_1". Changing it to [1, 1, 1, 1] gives the correct shape as well as correct results for us.
The net has two nodes with the same output name (the name is not unique; see below). This is not a valid model, right?
node {
input: "conv1"
input: "conv1/bn_scale"
input: "conv1/bn_bias"
input: "conv1/bn_mean"
input: "conv1/bn_var"
output: "conv1/bn"
name: ""
op_type: "SpatialBN"
attribute {
name: "is_test"
i: 1
}
attribute {
name: "epsilon"
f: 1e-05
}
attribute {
name: "consumed_inputs"
ints: 0
ints: 0
ints: 0
ints: 1
ints: 1
}
}
node {
input: "conv1/bn_internal"
input: "conv1/bn_b"
output: "conv1/bn"
name: ""
op_type: "Add"
attribute {
name: "broadcast"
i: 1
}
attribute {
name: "axis"
i: 1
}
}
How to reproduce:
Download squeezenet model from https://s3.amazonaws.com/download.onnx/models/opset_7/squeezenet.tar.gz
Unzip it and run the script below with squeezenet\test_data_set_0\input_0.pb
import sys
import onnx
from onnx import TensorProto

proto = TensorProto()
# Read the serialized TensorProto from the file given on the command line.
with open(sys.argv[1], "rb") as f:
    proto.ParseFromString(f.read())
print(proto.name)
The output is empty.
We need this information to know which input this tensor is for.
https://s3.amazonaws.com/download.onnx/models/opset_7/inception_v1.tar.gz
But the model file is actually opset 6.
Are these models pre-trained or randomly initialized?
Rename .pb to .onnx (note this requires adding a versioning mechanism to keep backward compatibility).
Cross-referencing here:
onnx/onnx-mxnet#42
Any idea if I am doing something wrong or data might be wrong?
Hi,
Is there a pre-trained onnx object detection model available for reference (SSD, Faster-RCNN, RetinaNet or Mask RCNN)?
Thanks and Regards,
Kumar.D
For the ONNX model R-CNN ILSVRC13: the model outputs seem to be incorrect for the task of object detection. The model output has dimension (1, 200), and each of the values in the tensor seems to be a negative value around -2.5. But for an object detection task we ideally need information such as object locations and classes.
The (1, 200) tensor we get from the model does not provide this information.
The model README says that it is an implementation of this paper, but the output does not seem to accomplish the task of object detection.
Can someone verify this?
Hi,
I think the axis attribute is missing from the 16th node of the ONNX model densenet121.
It is the same for inception_v2: for the 53rd node, axis is not provided for the Concat node.
According to the doc, the attr axis is required for the concat node.
Concat
Concatenate a list of tensors into a single tensor
Versioning
This operator is used if you are using version 1 of the default ONNX operator set until the next BC-breaking change to this operator; e.g., it will be used if your protobuf has:
opset_import {
version = 1
}
Attributes
axis : int (required)
Which axis to concat on
How is it possible that this model is used in the ONNX test suite with this kind of bug? Any ideas?
Hi! Can the tiny_yolov2 model be updated to output floats rather than doubles? Windows ML currently doesn't support doubles, so the model can't be used on Windows.
Hi everyone, I have successfully imported the ONNX model for ResNet50 (I tried both the master and release versions) with labels from here: http://image-net.org/challenges/LSVRC/2015/browse-synsets (copied & pasted, and parsed in Python), and I tried the ImageNet 2012 list as well: http://www.image-net.org/archive/words.txt.
If I feed it a golden retriever, it gives me a 0.999... probability for a rugby ball. Does anyone know where the correct label list is? The evaluated output does not correspond to the correct label. Thanks.
Certain model archives contain npz inputs and outputs but no serialized protobuf TensorProtos.
The latter format is useful for Python-less workflows. It would be good if either (i) all archives contained protobuf files or (ii) a script for converting npz files to protobuf files were provided.
I'm obtaining a 22% classification rate on MNIST using the model provided at https://github.com/onnx/models/tree/master/mnist.
I can confirm that the model does load correctly (#38), and I'm performing the suggested preprocessing (dividing by 255 to get floats in [0, 1]).
Hi,
Where can I find the model source (in the original framework) used to create the ONNX model?
For the ResNet50 ONNX model:
What preprocessing needs to be done on the images before I can feed them as input to the network as NPZ files? (Preprocessing on images that we get from LMDB or BMP with values 0-255.)
Thx,
Barak
There are only three folders with .pb files. It would be nice to have a .npz file as well.
Looks like tiny_yolov2 only has one input (image). For it to be a valid model, all the initializers should be in the input too. @houseroad
Error log:
E ValidationError: convolution.W in initializer but not in graph input
../../../../venv/lib/python2.7/site-packages/onnx/checker.py:76: ValidationError
Hello,
Recently I've been playing with ONNX models using the ncnn converter.
I found that all the models in this repo can be processed with the bundled test data without errors, except the shufflenet model, which always produces results different from the desired test output.
Also, I found that the three test data files in the shufflenet model archive are identical to each other:
test_data_0.npy
test_data_1.npy
test_data_2.npy
Is there anything wrong with the test data?
There are only some npz files.
Hi,
I have a question about the onnx model inception_v1.
The last AveragePool node has the following proto:
graph.node[-5]
input: "inception_5b/output_1"
output: "pool5/7x7_s1_1"
name: ""
op_type: "AveragePool"
attribute {
name: "strides"
ints: 1
ints: 1
type: INTS
}
attribute {
name: "pads"
ints: 0
ints: 0
ints: 1
ints: 1
type: INTS
}
attribute {
name: "kernel_shape"
ints: 7
ints: 7
type: INTS
}
The input tensor shape is [1, 1024, 7, 7].
So an average pooling layer with kernel=7, stride=1, padding=0 before and 1 after will give an output tensor of shape [1, 1024, 2, 2].
This doesn't seem to make sense at the end of a network, where the output should be of shape [1, 1024, 1, 1].
So, in my opinion, the padding should be zero:
attribute {
name: "pads"
ints: 0
ints: 0
ints: 0
ints: 0
type: INTS
}
As you can see here for example : http://ethereon.github.io/netscope/#/preset/googlenet
Can someone tell me what this model was generated from?
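The pooling arithmetic above can be checked with the standard output-size formula — a quick sketch:

```python
def pool_out(in_size, kernel, stride, pad_head, pad_tail):
    # Standard pooling output-size formula.
    return (in_size + pad_head + pad_tail - kernel) // stride + 1

print(pool_out(7, 7, 1, 0, 1))  # 2 -> the shipped pads give [1, 1024, 2, 2]
print(pool_out(7, 7, 1, 0, 0))  # 1 -> zero padding gives the expected [1, 1024, 1, 1]
```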
The shape of input data_0 of the ONNX bvlc_alexnet model is [1, 3, 224, 224].
According to http://cs231n.github.io/convolutional-networks/, this causes the Conv node's output conv1_1 shape to be non-integer.
I'd like to understand why data_0 is not [1, 3, 227, 227], which is used in other models, like the Caffe2 bvlc_alexnet model.
Thanks!
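The mismatch is easy to confirm with the convolution output-size formula from the cs231n notes:

```python
def conv_out(in_size, kernel, stride, pad=0):
    # (W - F + 2P) / S + 1, the convolution output-size formula.
    return (in_size - kernel + 2 * pad) / stride + 1

print(conv_out(224, 11, 4))  # 54.25 -- not an integer, as reported
print(conv_out(227, 11, 4))  # 55.0  -- the classic AlexNet conv1 size
```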
Hi,
I get good results on the traditional 1000 ImageNet synsets using the VGG16, bvlc_googlenet, and VGG19 models, with the data scaled to [0, 1] and the ImageNet mean/std normalization applied.
However, I am not able to get accurate predictions with the squeezenet and densenet121 pre-trained models using the same pre-processing. Have they been trained on a different synset? Do they need specific pre-processing?
I am using MXNet as the backend.