models's People

Contributors

abhinavs95, ak391, ankkhedia, asiryan, askhade, bddppq, bowenbao, daquexian, doughtmw, houseroad, jantonguirao, jcwchen, jennifererwangg, jiafatom, ksenijas, kundanapillari, mateusztabaka, mengniwang95, mhamilton723, mustafakasap, mx-iao, neginraoof, pluradj, prasanthpul, ramkrishna2910, shirleysu8, tmoreau89, vinitra, wenbingl, yuwenzho

models's Issues

Pre-Processing stage

Hi guys,

I was wondering if anyone knows what pre-processing steps were taken for each respective model, more specifically VGG19. Additionally, are these pre-processing steps the same or similar across all the models? Any information would be helpful.

Thank you
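
For what it's worth, here is a minimal sketch of the ImageNet-style pre-processing commonly used for these classification models. The 224x224 size and the mean/std values are assumptions, not confirmed for the specific VGG19 export in the zoo:

import numpy as np
from PIL import Image

def preprocess(image_path):
    # Resize directly to 224x224 (a resize-then-center-crop recipe is also common).
    img = Image.open(image_path).convert('RGB').resize((224, 224))
    data = np.asarray(img, dtype=np.float32) / 255.0          # HWC, scaled to [0, 1]
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)  # assumed ImageNet mean
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)   # assumed ImageNet std
    data = (data - mean) / std                                 # broadcast over channels
    data = data.transpose(2, 0, 1)                             # HWC -> CHW
    return data[np.newaxis, ...]                               # add batch dim -> NCHW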

ValidationError “Field 'type' of attr is required but missing.” when invoking the check_node function

I'm new to deep learning, and I want to see the output of each node of AlexNet with the run_node function, but I ran into the following error.

import onnx
model = onnx.load('/path/bvlc_alexnet/model.pb')
node = model.graph.node[0] # the first node, whose op_type is "Conv"
onnx.checker.check_node(node)

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/onnx/checker.py", line 32, in checker
    proto.SerializeToString(), ir_version)
onnx.onnx_cpp2py_export.checker.ValidationError: Field 'type' of attr is required but missing.

node

input: "data_0"
input: "conv1_w_0"
input: "conv1_b_0"
output: "conv1_1"
name: ""
op_type: "Conv"
attribute {
name: "strides"
ints: 4
ints: 4
}
attribute {
name: "pads"
ints: 0
ints: 0
ints: 0
ints: 0
}
attribute {
name: "kernel_shape"
ints: 11
ints: 11
}

I tried to add the 'type' of attr with the following code:

from onnx import AttributeProto
attr = AttributeProto()
# I didn't know which type was right; I just tried this workaround
# with the 'AttributeProto.FLOAT' value to see the result.
attr.type = AttributeProto.FLOAT
node.attribute.extend([attr])
node

input: "data_0"
input: "conv1_w_0"
input: "conv1_b_0"
output: "conv1_1"
name: ""
op_type: "Conv"
attribute {
name: "strides"
ints: 4
ints: 4
}
attribute {
name: "pads"
ints: 0
ints: 0
ints: 0
ints: 0
}
attribute {
name: "kernel_shape"
ints: 11
ints: 11
}
attribute {
type: FLOAT
}

onnx.checker.check_node(node)

and still got the same error.
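
In case it helps, a workaround sketch (an assumption about the cause, not a confirmed fix): the attributes in this old export carry ints values but no type, so the checker complains about each of them, and appending a new attribute with only a type does not change that. Tagging the existing attributes instead might get past the check:

from onnx import AttributeProto

# Set the type on each existing attribute to match the field it populates,
# rather than appending a new, empty attribute.
for attr in node.attribute:
    if attr.type == AttributeProto.UNDEFINED:
        if len(attr.ints) > 0:
            attr.type = AttributeProto.INTS
        elif len(attr.floats) > 0:
            attr.type = AttributeProto.FLOATS

onnx.checker.check_node(node)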

Proposed template for onnx models readme

Hi ONNX team, I'm working with @prasanthpul to standardize on a README template for the models in the zoo. Please let me know your thoughts. See here for the MNIST model for reference.

Model name

Download: link to model download
Model size: x MB

Description

Description of what the model does

Paper

Name and link to the paper that the model implements, if applicable.

Dataset

Dataset that was used to train the model

Source

Reference to the tutorial used for training and/or the source/Github repo of the code that the ONNX model was generated or converted from (e.g. TinyYOLO CoreML).
If this is different than the original model code/source (e.g. TinyYOLO DarkNet), provide a link to that, too.

Model input and output

Input

Expected model input format - type and shape

Output

Expected model output format - type and shape

Pre-processing steps

Description of what pre-processing steps need to be done on the data before feeding them to the model

Post-processing steps

If applicable, description or link to reference of any post-processing steps to perform on model output

Sample test data

Short description of the included sample test data (.pb or .npz).

Results/accuracy on test set

If possible, to motivate model reproducibility and integrity.

License

Can't download model file opset_4/resnet50.tar.gz

I am using onnx 1.1.1, where onnx.defs.onnx_opset_version() returns 4, and thus Caffe2 tries to download https://s3.amazonaws.com/download.onnx/models/opset_4/resnet50.tar.gz

Downloading this file gives the following error:

$ wget https://s3.amazonaws.com/download.onnx/models/opset_4/resnet50.tar.gz
--2018-05-01 15:11:56--  https://s3.amazonaws.com/download.onnx/models/opset_4/resnet50.tar.gz
Resolving s3.amazonaws.com (s3.amazonaws.com)... 52.216.99.21
Connecting to s3.amazonaws.com (s3.amazonaws.com)|52.216.99.21|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2018-05-01 15:11:56 ERROR 403: Forbidden.

AttributeError: 'NoneType' object has no attribute 'run'

I got this sample code from https://github.com/onnx/models and wanted to see it running with an ONNX model. I took resnet50 as an example and got the following error.

import numpy as np
import onnx
import caffe2
from onnx.backend.base import Backend

model_path = '/home/onnx/Downloads/resnet50/model.pb'
npz_path = '/home/onnx/Downloads/resnet50/test_data_0.npz'
model = onnx.load(model_path)
sample = np.load(npz_path, encoding='bytes')

inputs = list(sample['inputs'])
outputs = list(sample['outputs'])
np.testing.assert_almost_equal(outputs, Backend.run_model(model, inputs))


AttributeError Traceback (most recent call last)
in ()
12 outputs = list(sample['outputs'])
13
---> 14 np.testing.assert_almost_equal(outputs,Backend.run_model(model, inputs))

/anaconda/lib/python2.7/site-packages/onnx/backend/base.pyc in run_model(cls, model, inputs, device, **kwargs)
55 @classmethod
56 def run_model(cls, model, inputs, device='CPU', **kwargs):
---> 57 cls.prepare(model, device, **kwargs).run(inputs)
58
59 @classmethod

AttributeError: 'NoneType' object has no attribute 'run'

Can someone let me know how to overcome this error? Why is it returning 'NoneType'?
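
A possible cause, for reference: Backend in onnx.backend.base is an abstract base class whose prepare returns None, so .run is called on None. A sketch using a concrete backend instead (the module path assumes a Caffe2 build that ships the ONNX backend; older installs expose it as onnx_caffe2.backend):

import numpy as np
import onnx
import caffe2.python.onnx.backend as backend   # concrete Caffe2 backend

model = onnx.load('/home/onnx/Downloads/resnet50/model.pb')
sample = np.load('/home/onnx/Downloads/resnet50/test_data_0.npz', encoding='bytes')

inputs = list(sample['inputs'])
outputs = list(sample['outputs'])

rep = backend.prepare(model, device='CPU')     # prepare() returns a runnable representation
np.testing.assert_almost_equal(outputs, rep.run(inputs))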

Graph inputs vs value_info

I'm looking at some of the models gathered in this repository (SqueezeNet, DenseNet-121, ResNet-50) and I noticed a pattern.

All parameters of the graph are specified as input fields and none as value_info fields. This mixes model inputs and trainable parameters into a single set and I'm not sure how to distinguish between them.

Separating model inputs and trainable variables may be useful for frameworks that distinguish the concepts of placeholder and Variable.

Would you agree or am I missing something?
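
One way to tell them apart today, sketched under the assumption that every trainable parameter also appears in graph.initializer (which is how these converted models look):

import onnx

model = onnx.load('model.onnx')   # placeholder path
graph = model.graph

# Names listed in graph.initializer are the trained parameters; whatever is
# left in graph.input is a "real" model input (placeholder).
initializer_names = {tensor.name for tensor in graph.initializer}
real_inputs = [vi for vi in graph.input if vi.name not in initializer_names]
parameters = [vi for vi in graph.input if vi.name in initializer_names]

print('inputs:', [vi.name for vi in real_inputs])
print('parameters:', len(parameters))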

Update to .onnx file extension and flat folder structure

Should the models get updated to use the .onnx file extension?

Also wondering: should the .tar.gz files be linked directly from the top-level README.md, and the files named bvlc_alexnet.onnx, inception_v2.onnx, etc. instead of model.pb?

Error with emotion_ferplus loaded in Caffe2

Hi,

I used

convert-onnx-to-caffe2 model.onnx -o predict_net.pb --init-net-output init_net.pb

But when trying to run it on an input, I get :

  File "/Users/louisabraham/miniconda3/lib/python3.6/site-packages/caffe2/python/workspace.py", line 212, in RunNetOnce
    StringifyProto(net),
  File "/Users/louisabraham/miniconda3/lib/python3.6/site-packages/caffe2/python/workspace.py", line 192, in CallWithExceptionIntercept
    return func(*args, **kwargs)
RuntimeError: [enforce fail at conv_pool_op_base.h:617] in_size + *pad_head + *pad_tail >= dkernel. 1 vs 3 Error from operator: 
input: "ReLU2700_Output_0" input: "Parameter675" output: "Convolution2714_Output_0" name: "Convolution2714" type: "Conv" arg { name: "kernels" ints: 3 ints: 3 } arg { name: "strides" ints: 1 ints: 1 } arg { name: "auto_pad" s: "SAME_UPPER" } arg { name: "group" i: 1 } arg { name: "dilations" ints: 1 ints: 1 } device_option { device_type: 0 cuda_gpu_id: 0 }

I think my input shape is correct (otherwise I get an error on the first layer, input "Input2505").

SSD mobilenet?

Any chance someone has an SSD Mobilenet onnx model yet?
Are all the operations supported for an SSD?

MNIST model weights

I was loading the MNIST model and wanted to know where the model weights were stored. In other ONNX models, the Graph.initializer structure holds this information, but in this particular MNIST model, Graph.initializer is empty.
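
If this export carries its weights as Constant nodes rather than initializers (which would explain the empty Graph.initializer, though that is an assumption), a sketch for extracting them:

import onnx
from onnx import numpy_helper

model = onnx.load('mnist/model.onnx')   # placeholder path

weights = {}
for node in model.graph.node:
    if node.op_type == 'Constant':
        for attr in node.attribute:
            if attr.name == 'value':
                # The tensor lives in the 'value' attribute of the Constant node.
                weights[node.output[0]] = numpy_helper.to_array(attr.t)

print({name: arr.shape for name, arr in weights.items()})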

Caffe2 backend can't run resnet/mnist

The Caffe2 backend can't run resnet or mnist; it fails as follows.

resnet:
RuntimeError: [enforce fail at conv_op_impl.h:37] X.ndim() == filter.ndim(). 5 vs 4 Error from operator:
input: "gpu_0/data_0" input: "gpu_0/conv1_w_0" output: "gpu_0/conv1_1" name: "" type: "Conv"

mnist:
RuntimeError: [enforce fail at reshape_op.h:120] total_size == size. 64 vs 256. Argument shape does not agree with the input data. (64 != 256) Error from operator:
input: "Pooling160_Output_0" output: "Pooling160_Output_0_reshape0" output: "OC2_DUMMY_1" name: "Times212_reshape0" type: "Reshape"

I think it could be a problem with the CNTK export?

Does the tiny-yolo-voc.onnx have pads?

When I convert the provided tiny-yolo-voc.onnx to another framework, it goes well. But when I convert my own tiny-yolo-voc.onnx, it warns me that pads are not supported. Does that mean the provided ONNX model has no pads? And would you provide tiny-yolo trained on COCO? Thanks a lot.

input-output pairs mismatched for mnist model

The 3 input / output pairs appear to be mismatched.

In particular,
Input 0 ---> Output 1
Input 1 ---> Output 2
Input 2 ---> Output 0.

So the 3 test_data_[012].npz files need to be recreated I guess.

-

Posted in the wrong repo, delete this.

Error in Emotion_ferplus output labels

The labels mentioned in the model's README are these:
emotion_table = {'neutral':0, 'happiness':1, 'surprise':2, 'sadness':3, 'anger':4, 'disgust':5, 'fear':6, 'contempt':7}

They do not match the labels mentioned in the dataset source:
(0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral)

Can this be clarified, @ebarsoum?

Sample Code on README.md not working

I tried running the sample code on README.md and got the following error.

Traceback (most recent call last):
  File "test.py", line 3, in <module>
    import onnx_backend as backend
ImportError: No module named onnx_backend

I believe the sample is based on old code.

MNIST example fails ONNX checker

import onnx
model = onnx.load_model('mnist.onnx')
model = onnx.checker.check_model(model)

resulting error

---------------------------------------------------------------------------
ValidationError                           Traceback (most recent call last)
<ipython-input-2-4ef34880ef37> in <module>()
      1 model = onnx.load_model('mnist.onnx')
----> 2 model = onnx.checker.check_model(model)

~/py36/lib/python3.6/site-packages/onnx/checker.py in check_model(model)
     80 
     81 def check_model(model):  # type: (ModelProto) -> None
---> 82     C.check_model(model.SerializeToString())
     83 
     84 

ValidationError: model with IR version >= 3 must specify opset_import for ONNX
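
A workaround sketch while the model file is being updated: add the missing opset_import entry by hand (the opset version below is an assumption and has to match the operators the model actually uses):

import onnx

model = onnx.load_model('mnist.onnx')

if len(model.opset_import) == 0:
    opset = model.opset_import.add()
    opset.domain = ''    # default ONNX operator set
    opset.version = 7    # assumed; pick the opset the model was exported against

onnx.checker.check_model(model)
onnx.save_model(model, 'mnist_fixed.onnx')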

Error in the tiny yolo v2 model

I was trying out the tiny yolo v2 model when I came across a strange error. Apparently, there exists a convolution.W key, which is used without being initialized. Can someone tell me the reason behind this?

Constant op_type definition

The mnist model contains a layer named Constant, along with Conv, Relu, etc. I wanted to know what this Constant layer does in the model (what operation does it perform?).
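
For context, Constant is an ordinary ONNX operator: it takes no inputs and emits the fixed tensor stored in its value attribute (some exporters use it to carry weights instead of graph initializers). A minimal sketch of such a node:

import numpy as np
from onnx import helper, TensorProto

# A Constant node that produces a fixed 2x2 float tensor on output 'const_out'.
const_node = helper.make_node(
    'Constant',
    inputs=[],
    outputs=['const_out'],
    value=helper.make_tensor(
        name='const_value',
        data_type=TensorProto.FLOAT,
        dims=[2, 2],
        vals=np.arange(4, dtype=np.float32).tolist(),
    ),
)
print(const_node)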

Shufflenet has an old version of BatchNormalization op (that uses consumed_inputs)

The Shufflenet model in this repo has an older version of the BatchNormalization op that uses the attribute consumed_inputs. There are two issues with this:

  1. This version is quite old and several frameworks that support ONNX do not have support for it.
  2. Furthermore, the older version's spec does not have a good description for this attribute.

Is it possible to remove this attribute and regenerate the model with the latest version of the BatchNormalization op?
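
Until the model file is regenerated, one thing that may be worth trying is the onnx version converter (a sketch; whether it handles this particular BatchNormalization upgrade is an assumption):

import onnx
from onnx import version_converter

model = onnx.load('shufflenet/model.onnx')   # placeholder path

# Try to convert the whole model to a newer default-domain opset; if the
# conversion succeeds, the legacy consumed_inputs attribute is dropped.
converted = version_converter.convert_version(model, 8)
onnx.checker.check_model(converted)
onnx.save(converted, 'shufflenet/model_opset8.onnx')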

About AlexNet MaxPooling

Hi, is it a mistake that the last max-pooling layer of AlexNet has padding [0, 0, 1, 1]? This node has the output named "pool5_1". Changing it to [1, 1, 1, 1] gives the correct shape as well as correct results for us.

Same node output name in densenet121

The net has two nodes with the same output name (the name is not unique; see below). This is not a valid model, right?
node {
  input: "conv1"
  input: "conv1/bn_scale"
  input: "conv1/bn_bias"
  input: "conv1/bn_mean"
  input: "conv1/bn_var"
  output: "conv1/bn"
  name: ""
  op_type: "SpatialBN"
  attribute {
    name: "is_test"
    i: 1
  }
  attribute {
    name: "epsilon"
    f: 1e-05
  }
  attribute {
    name: "consumed_inputs"
    ints: 0
    ints: 0
    ints: 0
    ints: 1
    ints: 1
  }
}

node {
  input: "conv1/bn_internal"
  input: "conv1/bn_b"
  output: "conv1/bn"
  name: ""
  op_type: "Add"
  attribute {
    name: "broadcast"
    i: 1
  }
  attribute {
    name: "axis"
    i: 1
  }
}
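
Right, node outputs are supposed to be unique (SSA form). A small sketch to list every duplicated output name in a model:

import collections
import onnx

model = onnx.load('densenet121/model.onnx')   # placeholder path

# Count how many nodes write to each output name; anything above 1 breaks SSA form.
counts = collections.Counter(out for node in model.graph.node for out in node.output)
print({name: n for name, n in counts.items() if n > 1})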

Input test data should have a valid name

How to reproduce:

Download squeezenet model from https://s3.amazonaws.com/download.onnx/models/opset_7/squeezenet.tar.gz
Unzip it and run the script below with squeezenet\test_data_set_0\input_0.pb

import onnx
import sys

from onnx import TensorProto

proto = TensorProto()

# Read the serialized TensorProto from the file given on the command line.
with open(sys.argv[1], "rb") as f:
    proto.ParseFromString(f.read())
print(proto.name)

The output is empty.

We need this information to know which input this tensor corresponds to.
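
A workaround sketch, assuming the zoo convention that input_<i>.pb corresponds to the i-th graph input: copy the name over from the model itself and re-serialize the tensor.

import onnx
from onnx import TensorProto

model = onnx.load('squeezenet/model.onnx')                     # placeholder paths
tensor = TensorProto()
with open('squeezenet/test_data_set_0/input_0.pb', 'rb') as f:
    tensor.ParseFromString(f.read())

# Assume input_0.pb maps onto the first graph input and copy its name across.
tensor.name = model.graph.input[0].name
with open('squeezenet/test_data_set_0/input_0.pb', 'wb') as f:
    f.write(tensor.SerializeToString())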

Update models to the newest ir_version

  1. And automate the process in onnx-caffe2.
  2. Update the file extension from .pb to .onnx (note this requires adding versioning mechanism to keep backward compatibility).

The outputs of the R-CNN ILSVRC13 model are incorrect

For the ONNX model R-CNN ILSVRC13, the outputs seem to be incorrect for the task of object detection. The model output has dimension (1, 200), and each of the values in the tensor is a negative value around -2.5. But for an object detection task we ideally need information on the

  • bounding box co-ordinates
  • classes of objects detected in each region/bounding box.

The (1, 200) tensor we get from the model does not provide this information.

The model README says that it is an implementation of this paper, but the output does not seem to accomplish the task of object detection.

Can someone verify this?

Axis attribute of concat is missing for densenet121 and inception_v2

Hi,

I think the axis attribute is missing from the 16th node of the onnx model densenet121.

It is the same for inception_v2: for the 53rd node, axis is not provided for the Concat node.

According to the doc, the axis attribute is required for the Concat node.

Concat
Concatenate a list of tensors into a single tensor
Versioning
This operator is used if you are using version 1 of the default ONNX operator set until the next BC-breaking change to this operator; e.g., it will be used if your protobuf has:

opset_import {
version = 1
}
Attributes

axis : int (required)
Which axis to concat on

How is it possible that the model is used in the ONNX test suite with this kind of bug? Any ideas?
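
As a stop-gap, the missing attribute can be patched in by hand; the sketch below assumes axis 1 (the channel axis for NCHW layouts), which should be verified against the surrounding shapes:

import onnx
from onnx import helper

model = onnx.load('densenet121/model.onnx')   # placeholder path

for node in model.graph.node:
    if node.op_type == 'Concat' and not any(a.name == 'axis' for a in node.attribute):
        # axis=1 (channels) is an assumption; check it against the model's tensor shapes.
        node.attribute.extend([helper.make_attribute('axis', 1)])

onnx.save(model, 'densenet121/model_patched.onnx')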

tiny_yolov2 model output with floats

Hi! Can the tiny_yolov2 model be updated to output floats rather than doubles? Windows ML currently doesn't support doubles, so the model can't be used on Windows.

ResNet50 Labels Giving Wrong Results

Hi everyone, I have successfully imported the ONNX model for ResNet50 (I tried both the master and release versions) and labels from here: http://image-net.org/challenges/LSVRC/2015/browse-synsets (copied & pasted, and parsed in Python), and I also tried the ImageNet 2012 list: http://www.image-net.org/archive/words.txt.

If I feed it a golden retriever, it gives me 0.999... confidence... for a rugby ball. Does anyone know where the correct label list is? The evaluated output does not correspond to the correct label. Thanks.

Format of input and output Tensors

Certain model files contain npz inputs and outputs but no serialized protobuf TensorProtos.

The latter format is useful for Python-less workflows. It would be good if either (i) all models shipped protobuf files or (ii) a script for converting npz -> protobuf files were provided.
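
For option (ii), a conversion sketch (the inputs/outputs keys and the input_<i>.pb naming follow the zoo convention and are assumptions here):

import numpy as np
from onnx import numpy_helper

sample = np.load('test_data_0.npz', encoding='bytes')

# Serialize each array in the sample as an ONNX TensorProto (.pb) file.
for kind in ('input', 'output'):
    for i, arr in enumerate(sample[kind + 's']):
        tensor = numpy_helper.from_array(np.asarray(arr))
        with open('%s_%d.pb' % (kind, i), 'wb') as f:
            f.write(tensor.SerializeToString())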

Source of the models

Hi,

Where can I find the model source (in the original framework) used to create the ONNX model?
For the ResNet50 ONNX model:
What pre-processing needs to be done on the images before I can feed them as input to the network as NPZ files? (Pre-processing of images that we get from LMDB or BMP with values 0-255.)

Thx,
Barak

Tiny_Yolov2 failed ONNX model checker

Looks like tiny_yolov2 only has one input (image). For it to be a valid model, all the initializers should be in the input too. @houseroad

Error log:

E       ValidationError: convolution.W in initializer but not in graph input

../../../../venv/lib/python2.7/site-packages/onnx/checker.py:76: ValidationError
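
Until the model is regenerated, a sketch of a possible patch: expose every initializer as a graph input with a matching type and shape, which is what the checker expects for this IR version.

import onnx
from onnx import helper

model = onnx.load('tiny_yolov2/model.onnx')   # placeholder path
graph = model.graph

existing = {vi.name for vi in graph.input}
for init in graph.initializer:
    if init.name not in existing:
        # Add a matching ValueInfo entry for each initializer missing from graph.input.
        vi = helper.make_tensor_value_info(init.name, init.data_type, list(init.dims))
        graph.input.extend([vi])

onnx.checker.check_model(model)
onnx.save(model, 'tiny_yolov2/model_patched.onnx')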

shufflenet test data seems to be mistaken

hello

Recently, I've been playing with ONNX models and the ncnn converter.
I found that all the models in this repo can be processed using the bundled test data without errors, except the shufflenet model, which always produces a result different from the desired test output.

I also found that the three test data files in the shufflenet model archive are identical to each other.

test_data_0.npy
test_data_1.npy
test_data_2.npy

Is there anything wrong with the test data?

Bug in inception_v1 model?

Hi,

I have a question about the onnx model inception_v1.

The last AveragePool node has the following proto:

graph.node[-5]
input: "inception_5b/output_1"
output: "pool5/7x7_s1_1"
name: ""
op_type: "AveragePool"
attribute {
  name: "strides"
  ints: 1
  ints: 1
  type: INTS
}
attribute {
  name: "pads"
  ints: 0
  ints: 0
  ints: 1
  ints: 1
  type: INTS
}
attribute {
  name: "kernel_shape"
  ints: 7
  ints: 7
  type: INTS
}

The input tensor shape is [1, 1024, 7, 7].
So an average pooling layer with kernel=7, stride=1, and padding of 0 before and 1 after will give an output tensor of shape [1, 1024, 2, 2].

This doesn't seem to make sense at the end of a network, where the output should be of shape [1, 1024, 1, 1].

So, in my opinion, the padding should be zero:
attribute {
  name: "pads"
  ints: 0
  ints: 0
  ints: 0
  ints: 0
  type: INTS
}
As you can see here, for example: http://ethereon.github.io/netscope/#/preset/googlenet

Can someone tell me where this model was generated from?

Can't get accurate prediction on some pre-trained models

Hi,

I get good results on the traditional 1000-class ImageNet synset using the VGG16, bvlc_googlenet, and VGG19 models, with the data scaled to [0, 1] and the ImageNet mean and std normalization applied.

However, I am not able to get accurate predictions from the squeezenet and densenet121 pre-trained models using the same pre-processing. Have they been trained on a different synset? Do they need specific pre-processing?

I am using MXNet as the backend.
