
tensorflow-onnx's Issues

dtype for graph output node

If the graph output nodes are top10:0 and top10:1, they are generated by the same source node (e.g. top10). So in the original logic, in

op = self.get_node_by_name(name)

top10's dtype will be used for both outputs. This is incorrect for top10:1, which is essentially an int.

On the other hand, Graph's _dtypes already contains the dtype for top10:0, top10:1, etc., so I think we can just get the dtypes from there.
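A minimal sketch of that idea, assuming Graph keeps _dtypes as a dict keyed by output name (e.g. "top10:0", "top10:1"); the helper name and fallback here are illustrative, not the exact tf2onnx internals:

def get_output_dtype(graph, output_name):
    # prefer the per-output dtype recorded in Graph._dtypes
    dtype = graph._dtypes.get(output_name)
    if dtype is None:
        # fall back to the old behavior: the producing node's dtype
        node = graph.get_node_by_name(output_name.split(":")[0])
        dtype = node.dtype
    return dtype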

iterable attribute application type error

I ran tf2onnx on a resnet .pb file. The failing node is a Reshape operator whose shape value is an ndarray. I get the following error:

Traceback (most recent call last):
File "/nfs/cadv11/dmohan/projects/Downloads/yes/envs/onnx_env/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/nfs/cadv11/dmohan/projects/Downloads/yes/envs/onnx_env/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/nfs/cadv11/dmohan/projects/tensorflow-onnx/tf2onnx/convert.py", line 67, in
main()
File "/nfs/cadv11/dmohan/projects/tensorflow-onnx/tf2onnx/convert.py", line 56, in main
target=args.target)
File "/nfs/cadv11/dmohan/projects/tensorflow-onnx/tf2onnx/tfonnx.py", line 987, in process_tf_graph
mapped_op, unmapped_op = tensorflow_onnx_mapping(g, continue_on_error)
File "/nfs/cadv11/dmohan/projects/tensorflow-onnx/tf2onnx/tfonnx.py", line 940, in tensorflow_onnx_mapping
onnx_node = func(g, node, node.name, args)
File "/nfs/cadv11/dmohan/projects/tensorflow-onnx/tf2onnx/tfonnx.py", line 239, in reshape_op
node.set_attr("shape", shape)
File "/nfs/cadv11/dmohan/projects/tensorflow-onnx/tf2onnx/graph.py", line 104, in set_attr
self.attr[name] = helper.make_attribute(name, value)
File "/nfs/cadv11/dmohan/projects/Downloads/yes/envs/onnx_env/lib/python3.6/site-packages/onnx/helper.py", line 202, in make_attribute
"You passed in an iterable attribute but I cannot figure out "
ValueError: You passed in an iterable attribute but I cannot figure out its applicable type.

I printed out the shapes in the helper function and I see that ndarray is not covered in the if-cases. Kindly help resolve this issue.

shape
<class 'numpy.int32'>
<class 'numpy.int32'>
shape
<class 'numpy.ndarray'>
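A hedged workaround sketch: onnx.helper.make_attribute handles plain Python lists of ints but not numpy arrays, so converting the ndarray shape to a list before set_attr (inside reshape_op) may avoid the error. The snippet below is illustrative, not the actual fix, and "node" stands for the tf2onnx Node being converted:

import numpy as np

shape = np.array([1, -1], dtype=np.int32)  # example of the failing ndarray value
if isinstance(shape, np.ndarray):
    shape = shape.tolist()                 # plain list of Python ints
node.set_attr("shape", shape)              # node: the tf2onnx Node inside reshape_op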

Error occurred when converting tensorflow mobilenet frozen graph

The mobilenet model: https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_1.0_224_frozen.tgz

When converting it, the following error is printed:
File "/usr/local/lib/python3.6/site-packages/tf2onnx-0.0.0.1-py3.6.egg/tf2onnx/convert.py", line 67, in
File "/usr/local/lib/python3.6/site-packages/tf2onnx-0.0.0.1-py3.6.egg/tf2onnx/convert.py", line 56, in main
File "/usr/local/lib/python3.6/site-packages/tf2onnx-0.0.0.1-py3.6.egg/tf2onnx/tfonnx.py", line 1000, in process_tf_graph
File "/usr/local/lib/python3.6/site-packages/tf2onnx-0.0.0.1-py3.6.egg/tf2onnx/tfonnx.py", line 939, in tensorflow_onnx_mapping
File "/usr/local/lib/python3.6/site-packages/tf2onnx-0.0.0.1-py3.6.egg/tf2onnx/tfonnx.py", line 521, in relu6_op
ValueError: negative dimensions are not allowed

Pad_OP: tensorflow pads [[0,0],[3,3],[3,3],[0,0]] convert to [0,3,3,0,0,3,3,0]; should it be [0,0,3,3,0,0,3,3]?

onnx Pad :

node = onnx.helper.make_node(
    'Pad',
    inputs=['x'],
    outputs=['y'],
    mode='constant',
    value=1.2,
    pads=[0, 0, 1, 3, 0, 0, 2, 4],
)
x = np.random.randn(1, 3, 4, 5).astype(np.float32)
y = np.pad(
    x,
    pad_width=((0, 0), (0, 0), (1, 2), (3, 4)),
    mode='constant',
    constant_values=1.2,
)

expect(node, inputs=[x], outputs=[y],
       name='test_constant_pad')

tensorflow :

pads[[0,0],[3,3],[3,3],[0,0]]

tf-onnx pad_op:

get_tensor_value():
[[0 0]
[3 3]
[3 3]
[0 0]]
transpose():
[[0 3 3 0]
[0 3 3 0]]
flatten(): [0 3 3 0 0 3 3 0]
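For reference, a small standalone reproduction of the conversion shown above (not the tf2onnx source itself). Note that ONNX Pad expects all begins first and then all ends per axis, which is exactly what transpose + flatten produces:

import numpy as np

# TF stores pads as [begin, end] pairs per axis; ONNX Pad wants
# [x1_begin, x2_begin, ..., x1_end, x2_end, ...]
tf_pads = np.array([[0, 0], [3, 3], [3, 3], [0, 0]])
onnx_pads = tf_pads.T.flatten().tolist()
print(onnx_pads)  # [0, 3, 3, 0, 0, 3, 3, 0]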

tensorflow op FusedBatchNorm is not supported

I am trying to convert a yolov2 model using my own frozen pb. In ONNX's model zoo I saw a tiny-yolov2 model that has a BatchNormalization op. So if I convert this TF FusedBatchNorm op to the ONNX BatchNormalization op, what do I need to pay special attention to?

"tensorflow op FusedBatchNorm is not supported" on ResNet V2 152 Model

The ResNet V1 152 model can be converted successfully; V2 152 cannot.

The reason is that:

  1. In V1 152, FusedBatchNorm is used in the following pattern:
node {
  name: "resnet_v1_152/block4/unit_2/bottleneck_v1/conv3/BatchNorm/FusedBatchNorm"
  op: "FusedBatchNorm"
  input: "resnet_v1_152/block4/unit_2/bottleneck_v1/conv3/Conv2D"
  input: "resnet_v1_152/block4/unit_2/bottleneck_v1/conv3/BatchNorm/gamma/read"
  input: "resnet_v1_152/block4/unit_2/bottleneck_v1/conv3/BatchNorm/beta/read"
  input: "resnet_v1_152/block4/unit_2/bottleneck_v1/conv3/BatchNorm/moving_mean/read"
  input: "resnet_v1_152/block4/unit_2/bottleneck_v1/conv3/BatchNorm/moving_variance/read"
 ...
},

https://github.com/tensorflow/tensorflow/blob/4bd6ac7ce02338cf6d0cc6c7ecfe4bd786d45e1e/tensorflow/tools/graph_transforms/fold_old_batch_norms.cc can help convert FusedBatchNorm into a few smaller operators when it matches this pattern:

 {"BatchNormWithGlobalNormalization|FusedBatchNorm",    // batch_norm_node
        {
          {"Conv2D",                          // conv_node
            {
              {"*"},                          // input_node
              {"Const"},                      // weights_node
            }
          },
          {"Const"},                          // mean_node
          {"Const"},                          // variance_node
          {"Const"},                          // beta_node
          {"Const"},                          // gamma_node
        }
      }
  2. In V2 152, besides the above usage pattern, there are also others:
node {
  name: "resnet_v2_152/block1/unit_1/bottleneck_v2/preact/FusedBatchNorm"
  op: "FusedBatchNorm"
  input: "resnet_v2_152/pool1/MaxPool"
  input: "resnet_v2_152/block1/unit_1/bottleneck_v2/preact/gamma/read"
  input: "resnet_v2_152/block1/unit_1/bottleneck_v2/preact/beta/read"
  input: "resnet_v2_152/block1/unit_1/bottleneck_v2/preact/moving_mean/read"
  input: "resnet_v2_152/block1/unit_1/bottleneck_v2/preact/moving_variance/read"
...
}

node {
  name: "resnet_v2_152/block1/unit_2/bottleneck_v2/preact/FusedBatchNorm"
  op: "FusedBatchNorm"
  input: "resnet_v2_152/block1/unit_1/bottleneck_v2/add"
  input: "resnet_v2_152/block1/unit_2/bottleneck_v2/preact/gamma/read"
  input: "resnet_v2_152/block1/unit_2/bottleneck_v2/preact/beta/read"
  input: "resnet_v2_152/block1/unit_2/bottleneck_v2/preact/moving_mean/read"
  input: "resnet_v2_152/block1/unit_2/bottleneck_v2/preact/moving_variance/read"
...
}

Request for adding Pack/Unpack to onnx (these correspond to the ops tf.stack/tf.unstack)

@guschmue
While I was converting a frozen tensorflow model, it errored out with "StridedSlice not implemented".
I checked the node, and this operation just raises an error.
I was wondering when it might be implemented. Thanks a lot.

Details:
File "C:\Program Files\Python35\lib\site-packages\tf2onnx\tfonnx.py", line 668, in stridedslice_op
raise ValueError("StridedSlice not implemented")
ValueError: StridedSlice not implemented

ValueError: tensorflow op MirrorPad is not supported

Possible tf.pad usage:

t = tf.constant([[1, 2, 3], [4, 5, 6]])
paddings = tf.constant([[1, 1], [2, 2]])

tf.pad(t, paddings, "CONSTANT", "const_no_val")         # [[0, 0, 0, 0, 0, 0, 0],
                                                        #  [0, 0, 1, 2, 3, 0, 0],
                                                        #  [0, 0, 4, 5, 6, 0, 0],
                                                        #  [0, 0, 0, 0, 0, 0, 0]]

tf.pad(t, paddings, "CONSTANT", "const_with_val", 999)  # same layout, filled with 999

tf.pad(t, paddings, "REFLECT", "reflect")               # [[6, 5, 4, 5, 6, 5, 4],
                                                        #  [3, 2, 1, 2, 3, 2, 1],
                                                        #  [6, 5, 4, 5, 6, 5, 4],
                                                        #  [3, 2, 1, 2, 3, 2, 1]]

tf.pad(t, paddings, "SYMMETRIC", "sysmmetric")          # [[2, 1, 1, 2, 3, 3, 2],
                                                        #  [2, 1, 1, 2, 3, 3, 2],
                                                        #  [5, 4, 4, 5, 6, 6, 5],
                                                        #  [5, 4, 4, 5, 6, 6, 5]]

which generates the graph below, where we can see there are 3 kinds of pad operators (a hedged mapping sketch follows the graph dump):

  • Pad: used for CONSTANT mode without a specified fill value
  • PadV2: used for CONSTANT mode with a specified fill value
  • MirrorPad: used for the other modes (REFLECT and SYMMETRIC).
node {
  name: "Const"
  op: "Const"
  attr {
    key: "dtype"
    value {
      type: DT_INT32
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_INT32
        tensor_shape {
          dim {
            size: 2
          }
          dim {
            size: 3
          }
        }
        tensor_content: "\001\000\000\000\002\000\000\000\003\000\000\000\004\000\000\000\005\000\000\000\006\000\000\000"
      }
    }
  }
}
node {
  name: "Const_1"
  op: "Const"
  attr {
    key: "dtype"
    value {
      type: DT_INT32
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_INT32
        tensor_shape {
          dim {
            size: 2
          }
          dim {
            size: 2
          }
        }
        tensor_content: "\001\000\000\000\001\000\000\000\002\000\000\000\002\000\000\000"
      }
    }
  }
}
node {
  name: "const_no_val"
  op: "Pad"
  input: "Const"
  input: "Const_1"
  attr {
    key: "T"
    value {
      type: DT_INT32
    }
  }
  attr {
    key: "Tpaddings"
    value {
      type: DT_INT32
    }
  }
}
node {
  name: "const_with_val/constant_values"
  op: "Const"
  attr {
    key: "dtype"
    value {
      type: DT_INT32
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_INT32
        tensor_shape {
        }
        int_val: 999
      }
    }
  }
}
node {
  name: "const_with_val"
  op: "PadV2"
  input: "Const"
  input: "Const_1"
  input: "const_with_val/constant_values"
  attr {
    key: "T"
    value {
      type: DT_INT32
    }
  }
  attr {
    key: "Tpaddings"
    value {
      type: DT_INT32
    }
  }
}
node {
  name: "reflect"
  op: "MirrorPad"
  input: "Const"
  input: "Const_1"
  attr {
    key: "T"
    value {
      type: DT_INT32
    }
  }
  attr {
    key: "Tpaddings"
    value {
      type: DT_INT32
    }
  }
  attr {
    key: "mode"
    value {
      s: "REFLECT"
    }
  }
}
node {
  name: "sysmmetric"
  op: "MirrorPad"
  input: "Const"
  input: "Const_1"
  attr {
    key: "T"
    value {
      type: DT_INT32
    }
  }
  attr {
    key: "Tpaddings"
    value {
      type: DT_INT32
    }
  }
  attr {
    key: "mode"
    value {
      s: "SYMMETRIC"
    }
  }
}
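As referenced above, a hedged sketch of how MirrorPad could be handled, assuming an ONNX opset whose Pad supports the "reflect" mode; the handler signature follows the converter's conventions, and rejecting SYMMETRIC is an assumption since ONNX Pad only offers constant/reflect/edge:

def mirrorpad_op(ctx, node, name, args):
    # TF stores the mode as a bytes attribute: b"REFLECT" or b"SYMMETRIC"
    mode = node.get_attr("mode").s.decode("utf-8").lower()
    if mode != "reflect":
        # SYMMETRIC differs from both ONNX "reflect" and "edge", so reject it
        raise ValueError("MirrorPad mode %s is not supported" % mode)
    node.set_attr("mode", "reflect")
    return node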

Slice_op starts:[0,0,1] size:[-1,-1,-1] --> ends:[-1,-1,0]; if shape = [1, 1917, 91], running on caffe2 errors

[enforce fail at slice_op.h:58] end >= start. 0 vs 1 Error from operator:
input: "Postprocessor/raw_box_scores:0" input: "OC2_DUMMY_106"
input: "OC2_DUMMY_109" output: "Postprocessor/Slice:0" name: "Postprocessor/Slice"
type: "Slice" device_option { device_type: 0 cuda_gpu_id: 0 }

ends = np.add(starts, size)

Should we get the shape first and then calculate ends if size contains -1?

def slice_op(ctx, node, name, args):
    print("----------------------------------slice_op-----------------------------------------")
    # T output = Slice(T input, Index begin, Index size, @type Index)
    # T output = Slice(T data, @INTS axes, @INTS ends, @INTS starts)
    starts = node.inputs[1].get_tensor_value()
    size = node.inputs[2].get_tensor_value()
    ends = np.add(starts, size)
    ctx.remove_input(node, node.input[2])
    ctx.remove_input(node, node.input[1])
    node.set_attr("starts", starts)
    node.set_attr("ends", ends)
    #shape = ctx.get_shape(node.input[0])
    ......
    return node
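A hedged sketch of that idea, resolving -1 sizes against the input shape before computing ends; the API calls mirror the handler above, and the np.where substitution is the proposed change, not the actual fix:

import numpy as np

starts = np.array(node.inputs[1].get_tensor_value())  # e.g. [0, 0, 1]
size = np.array(node.inputs[2].get_tensor_value())    # e.g. [-1, -1, -1]
shape = np.array(ctx.get_shape(node.input[0]))        # e.g. [1, 1917, 91]
# -1 means "to the end of the axis", so replace it with the remaining extent
size = np.where(size == -1, shape - starts, size)
ends = starts + size                                  # [1, 1917, 91], not [-1, -1, 0]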

Please support onnx 1.2

ONNX 1.2 was just released, and it has changes to major ops like Gemm. Could you please make the converted model use only ops from opset 7?

Conv2DBackpropInput's 1st input is assumed to be const

The first input of Conv2DBackpropInput is input_sizes. In the current converter we assume it is a const and call get_tensor_value to get its value. This is because the corresponding ONNX operator https://github.com/onnx/onnx/blob/master/docs/Operators.md#convtranspose specifies the input size with the attribute "output_shape", so we must give this attribute an explicit value during conversion.

But in some cases the size is dynamically generated, for example when converting the style2paints tail model:

[image of the sub-graph omitted]

Though in this case Conv2DBackpropInput only depends on the Shape value of the "Sub" operation, there is still a sub-graph to be handled (maybe we need to calculate the result of Pack and make it the input of Conv2DBackpropInput, but this sounds tricky).

This is very similar to #89.

Issues when you are using Python2

Though it is clearly stated that tensorflow-onnx is tested on Python 3 only, I think it's worth listing some issues you may hit when accidentally using Python 2 (I actually experienced them, and got here after searching for the error messages). If you switch to Python 3, all of them disappear.

/usr/bin/python: cannot import name tfonnx

  • Failed case 2 when running the example command. This seems to be caused by Python 2 and Python 3 treating unicode differently; the former requires an explicit conversion with str() for unicode. [a similar issue at stackoverflow]

python -m tf2onnx.convert --input tests/models/fc-layers/frozen.pb --inputs X:0 --outputs output:0 --output tests/models/fc-layers/model.onnx --verbose
using tensorflow=1.8.0, onnx=1.2.2
2018-06-28 16:08:50.136707: I tensorflow/tools/graph_transforms/transform_graph.cc:264] Applying fold_batch_norms
2018-06-28 16:08:50.137558: I tensorflow/tools/graph_transforms/transform_graph.cc:264] Applying fold_old_batch_norms
2018-06-28 16:08:50.150945: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-06-28 16:08:50.241093: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device 0 with properties:
name: Quadro K620 major: 5 minor: 0 memoryClockRate(GHz): 1.124
pciBusID: 0000:03:00.0
totalMemory: 1.95GiB freeMemory: 1.57GiB
2018-06-28 16:08:50.241123: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0
2018-06-28 16:11:11.227140: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-06-28 16:11:11.227167: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929] 0
2018-06-28 16:11:11.227173: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0: N
2018-06-28 16:11:11.227317: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1314 MB memory) -> physical GPU (device: 0, name: Quadro K620, pci bus id: 0000:03:00.0, compute capability: 5.0)
Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
"main", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home//community/tensorflow-onnx/tf2onnx/convert.py", line 97, in
main()
File "/home//community/tensorflow-onnx/tf2onnx/convert.py", line 85, in main
extra_opset=extra_opset)
File "tf2onnx/tfonnx.py", line 1182, in process_tf_graph
onnx_nodes, op_cnt, attr_cnt, output_shapes, dtypes = tensorflow_to_onnx(tf_graph)
File "tf2onnx/tfonnx.py", line 85, in tensorflow_to_onnx
onnx_tensor = utils.tf_to_onnx_tensor(node.get_attr(a), name=node.name + ":0")
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2327, in get_attr
c_api.TF_OperationGetAttrValueProto(self._c_op, name, buf)
TypeError: in method 'TF_OperationGetAttrValueProto', argument 2 of type 'char const *'

add subgraph detection for convolution with dilations > 1

Tensorflow conv2d with dilations > 1 appears in the graph as

SpaceToBatchND -> conv -> BatchToSpaceND etc.

SpaceToBatchND and BatchToSpaceND are not supported in onnx; we should combine this subgraph into a normal conv2d operator with dilations > 1 (a hedged sketch of the detection idea follows).
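A hedged sketch of that detection, assuming tf2onnx-style helpers (find_output_consumers and get_tensor_value follow this converter's conventions, but their exact use here is illustrative) and omitting the edge rewiring:

def fuse_dilated_conv(g, conv_node):
    # match SpaceToBatchND -> Conv2D -> BatchToSpaceND
    stb = conv_node.inputs[0]
    consumers = g.find_output_consumers(conv_node.output[0])
    if stb.type != "SpaceToBatchND" or not consumers \
            or consumers[0].type != "BatchToSpaceND":
        return None
    # SpaceToBatchND's block_shape becomes the conv dilation rate
    block_shape = stb.inputs[1].get_tensor_value()  # e.g. [2, 2]
    conv_node.set_attr("dilations", [1] + list(block_shape) + [1])
    # remaining work (omitted): bypass the SpaceToBatchND on the input
    # side and the BatchToSpaceND on the output side
    return conv_node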

tensorflow op Fill is not supported

When I tried to convert a dcgan inference model, the TensorFlow Fill operator could not be converted.

node {
  name: "generator/ones_1/shape_as_tensor"
  op: "Const"
  attr {
    key: "dtype"
    value {
      type: DT_INT32
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_INT32
        tensor_shape {
          dim {
            size: 4
          }
        }
        tensor_content: "@\000\000\000\016\000\000\000\016\000\000\000\n\000\000\000"
      }
    }
  }
}
node {
  name: "generator/ones_1/Const"
  op: "Const"
  attr {
    key: "dtype"
    value {
      type: DT_FLOAT
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_FLOAT
        tensor_shape {
        }
        float_val: 1.0
      }
    }
  }
}
node {
  name: "generator/ones_1"
  op: "Fill"
  input: "generator/ones_1/shape_as_tensor"
  input: "generator/ones_1/Const"
  attr {
    key: "T"
    value {
      type: DT_FLOAT
    }
  }
  attr {
    key: "index_type"
    value {
      type: DT_INT32
    }
  }
}

On the other hand, the corresponding ONNX operator ConstantFill is still experimental.

Since the mapping is pretty straightforward, I will create a pull request to fix this (a hedged sketch of the mapping is below).
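A hedged sketch of such a mapping, assuming the experimental ConstantFill op; the handler style copies the other handlers quoted on this page, and the exact attribute names are assumptions:

def fill_op(ctx, node, name, args):
    # T output = Fill(int32 dims, T value)
    shape = node.inputs[0].get_tensor_value()   # const shape, e.g. [64, 14, 14, 10]
    value = node.inputs[1].get_tensor_value()   # const scalar fill value
    ctx.remove_input(node, node.input[1])
    ctx.remove_input(node, node.input[0])
    node.type = "ConstantFill"
    node.set_attr("shape", list(shape))
    node.set_attr("value", float(value))
    return node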

FusedBatchNorm mean/variance dimensions are not correct after conversion

TF FusedBatchNorm(https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/fused-batch-norm) is mapped to ONNX BatchNormalization(https://github.com/onnx/onnx/blob/master/docs/Operators.md#batchnormalization).

Though the tf FusedBatchNorm documentation mentions that mean/variance/offset/scale should have the same size as the channel count of the input, in the generated tf graph mean/variance have size [0], even though scale/offset have size [1024].

[screenshots of the mean/variance/scale/offset shapes omitted]

In ONNX, scale/bias/mean/var sizes need to equal the channel count, and they must have the same data type.

A possible solution is to extend tf FusedBatchNorm's mean/var to the same size as offset/scale when their sizes differ (sketched below).
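A hedged sketch of that workaround: if mean/variance came out empty, create default initializers sized to the channel count. Zero mean and unit variance is an assumption, matching an "untrained" moving average:

import numpy as np

channels = 1024  # taken from scale/offset in the example above
mean = np.zeros(channels, dtype=np.float32)
variance = np.ones(channels, dtype=np.float32)
# these arrays would then be added as initializers for the
# BatchNormalization node's mean/var inputs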

tensorflow VariableV2 not supported

Hi, I ran tf2onnx on a vgg_19.pb file and got this error:
Traceback (most recent call last):
File "/nfs/cadv11/dmohan/projects/Downloads/yes/envs/onnx_env/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/nfs/cadv11/dmohan/projects/Downloads/yes/envs/onnx_env/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/nfs/cadv11/dmohan/projects/tensorflow-onnx/tf2onnx/convert.py", line 67, in
main()
File "/nfs/cadv11/dmohan/projects/tensorflow-onnx/tf2onnx/convert.py", line 56, in main
target=args.target)
File "/nfs/cadv11/dmohan/projects/tensorflow-onnx/tf2onnx/tfonnx.py", line 987, in process_tf_graph
mapped_op, unmapped_op = tensorflow_onnx_mapping(g, continue_on_error)
File "/nfs/cadv11/dmohan/projects/tensorflow-onnx/tf2onnx/tfonnx.py", line 929, in tensorflow_onnx_mapping
raise ValueError("tensorflow op " + op + " is not supported")
ValueError: tensorflow op VariableV2 is not supported

Illegal instruction (core dumped) when I try to execute the example given.

I'm trying out the example given in the README file.
Unfortunately, I'm getting an illegal instruction (core dumped).
I installed onnx, caffe2 and tensorflow (all CPU versions) before installing this repository.

Here is the log and the command that I have executed:

python -m tf2onnx.convert --input tests/models/fc-layers/frozen.pb --inputs X:0 --outputs output:0 --output tests/models/fc-layers/model.onnx --pretty --verbose

ERROR:

Illegal instruction (core dumped)

I would appreciate any help in resolving this issue.

Could not install on Windows

I've successfully installed onnx 1.1.1 via Anaconda but could not install tf2onnx via the command
python setup.py install.
The following error was observed:

(base) D:\tools\windows\sources>cd tensorflow-onnx

(base) D:\tools\windows\sources\tensorflow-onnx>python setup.py install
running install
running bdist_egg
running egg_info
creating tf2onnx.egg-info
writing tf2onnx.egg-info\PKG-INFO
writing dependency_links to tf2onnx.egg-info\dependency_links.txt
writing requirements to tf2onnx.egg-info\requires.txt
writing top-level names to tf2onnx.egg-info\top_level.txt
writing manifest file 'tf2onnx.egg-info\SOURCES.txt'
reading manifest file 'tf2onnx.egg-info\SOURCES.txt'
writing manifest file 'tf2onnx.egg-info\SOURCES.txt'
installing library code to build\bdist.win-amd64\egg
running install_lib
running build_py
running create_version
creating build
creating build\lib
creating build\lib\tf2onnx
copying tf2onnx\convert.py -> build\lib\tf2onnx
copying tf2onnx\graph.py -> build\lib\tf2onnx
copying tf2onnx\graph_matcher.py -> build\lib\tf2onnx
copying tf2onnx\tfonnx.py -> build\lib\tf2onnx
copying tf2onnx\utils.py -> build\lib\tf2onnx
copying tf2onnx\version.py -> build\lib\tf2onnx
copying tf2onnx\__init__.py -> build\lib\tf2onnx
creating build\bdist.win-amd64
creating build\bdist.win-amd64\egg
creating build\bdist.win-amd64\egg\tf2onnx
copying build\lib\tf2onnx\convert.py -> build\bdist.win-amd64\egg\tf2onnx
copying build\lib\tf2onnx\graph.py -> build\bdist.win-amd64\egg\tf2onnx
copying build\lib\tf2onnx\graph_matcher.py -> build\bdist.win-amd64\egg\tf2onnx
copying build\lib\tf2onnx\tfonnx.py -> build\bdist.win-amd64\egg\tf2onnx
copying build\lib\tf2onnx\utils.py -> build\bdist.win-amd64\egg\tf2onnx
copying build\lib\tf2onnx\version.py -> build\bdist.win-amd64\egg\tf2onnx
copying build\lib\tf2onnx\__init__.py -> build\bdist.win-amd64\egg\tf2onnx
byte-compiling build\bdist.win-amd64\egg\tf2onnx\convert.py to convert.cpython-36.pyc
byte-compiling build\bdist.win-amd64\egg\tf2onnx\graph.py to graph.cpython-36.pyc
byte-compiling build\bdist.win-amd64\egg\tf2onnx\graph_matcher.py to graph_matcher.cpython-36.pyc
byte-compiling build\bdist.win-amd64\egg\tf2onnx\tfonnx.py to tfonnx.cpython-36.pyc
byte-compiling build\bdist.win-amd64\egg\tf2onnx\utils.py to utils.cpython-36.pyc
byte-compiling build\bdist.win-amd64\egg\tf2onnx\version.py to version.cpython-36.pyc
byte-compiling build\bdist.win-amd64\egg\tf2onnx\__init__.py to __init__.cpython-36.pyc
creating build\bdist.win-amd64\egg\EGG-INFO
copying tf2onnx.egg-info\PKG-INFO -> build\bdist.win-amd64\egg\EGG-INFO
copying tf2onnx.egg-info\SOURCES.txt -> build\bdist.win-amd64\egg\EGG-INFO
copying tf2onnx.egg-info\dependency_links.txt -> build\bdist.win-amd64\egg\EGG-INFO
copying tf2onnx.egg-info\requires.txt -> build\bdist.win-amd64\egg\EGG-INFO
copying tf2onnx.egg-info\top_level.txt -> build\bdist.win-amd64\egg\EGG-INFO
zip_safe flag not set; analyzing archive contents...
creating dist
creating 'dist\tf2onnx-0.0.0.1-py3.6.egg' and adding 'build\bdist.win-amd64\egg' to it
removing 'build\bdist.win-amd64\egg' (and everything under it)
Processing tf2onnx-0.0.0.1-py3.6.egg
Removing d:\program files\anaconda3\lib\site-packages\tf2onnx-0.0.0.1-py3.6.egg
Copying tf2onnx-0.0.0.1-py3.6.egg to d:\program files\anaconda3\lib\site-packages
Removing tf2onnx 0.0.0.1 from easy-install.pth file
Adding tf2onnx 0.0.0.1 to easy-install.pth file

Installed d:\program files\anaconda3\lib\site-packages\tf2onnx-0.0.0.1-py3.6.egg
Processing dependencies for tf2onnx==0.0.0.1
Searching for pytest-cov==2.5.1
Best match: pytest-cov 2.5.1
Processing pytest_cov-2.5.1-py3.6.egg
pytest-cov 2.5.1 is already the active version in easy-install.pth

Using d:\program files\anaconda3\lib\site-packages\pytest_cov-2.5.1-py3.6.egg
Searching for PyYAML==3.12
Best match: PyYAML 3.12
Adding PyYAML 3.12 to easy-install.pth file

Using d:\program files\anaconda3\lib\site-packages
Searching for graphviz==0.8.2
Best match: graphviz 0.8.2
Processing graphviz-0.8.2-py3.6.egg
graphviz 0.8.2 is already the active version in easy-install.pth

Using d:\program files\anaconda3\lib\site-packages\graphviz-0.8.2-py3.6.egg
Searching for coverage==4.5.1
Best match: coverage 4.5.1
Processing coverage-4.5.1-py3.6-win-amd64.egg
coverage 4.5.1 is already the active version in easy-install.pth
Installing coverage-3.6-script.py script to D:\Program Files\anaconda3\Scripts
Traceback (most recent call last):
  File "setup.py", line 83, in <module>
    install_requires=['graphviz', 'pyyaml', 'pytest-cov']
  File "D:\Program Files\anaconda3\lib\site-packages\setuptools\__init__.py", line 129, in setup
    return distutils.core.setup(**attrs)
  File "D:\Program Files\anaconda3\lib\distutils\core.py", line 148, in setup
    dist.run_commands()
  File "D:\Program Files\anaconda3\lib\distutils\dist.py", line 955, in run_commands
    self.run_command(cmd)
  File "D:\Program Files\anaconda3\lib\distutils\dist.py", line 974, in run_command
    cmd_obj.run()
  File "D:\Program Files\anaconda3\lib\site-packages\setuptools\command\install.py", line 67, in run
    self.do_egg_install()
  File "D:\Program Files\anaconda3\lib\site-packages\setuptools\command\install.py", line 117, in do_egg_install
    cmd.run()
  File "D:\Program Files\anaconda3\lib\site-packages\setuptools\command\easy_install.py", line 412, in run
    self.easy_install(spec, not self.no_deps)
  File "D:\Program Files\anaconda3\lib\site-packages\setuptools\command\easy_install.py", line 654, in easy_install
    return self.install_item(None, spec, tmpdir, deps, True)
  File "D:\Program Files\anaconda3\lib\site-packages\setuptools\command\easy_install.py", line 701, in install_item
    self.process_distribution(spec, dist, deps)
  File "D:\Program Files\anaconda3\lib\site-packages\setuptools\command\easy_install.py", line 756, in process_distribution
    self.easy_install(dist.as_requirement())
  File "D:\Program Files\anaconda3\lib\site-packages\setuptools\command\easy_install.py", line 673, in easy_install
    return self.install_item(spec, dist.location, tmpdir, deps)
  File "D:\Program Files\anaconda3\lib\site-packages\setuptools\command\easy_install.py", line 704, in install_item
    self.process_distribution(spec, dists[0], deps, "Using")
  File "D:\Program Files\anaconda3\lib\site-packages\setuptools\command\easy_install.py", line 726, in process_distribution
    self.install_egg_scripts(dist)
  File "D:\Program Files\anaconda3\lib\site-packages\setuptools\command\easy_install.py", line 600, in install_egg_scripts
    dist.get_metadata('scripts/' + script_name)
  File "D:\Program Files\anaconda3\lib\site-packages\pkg_resources\__init__.py", line 1405, in get_metadata
    return value.decode('utf-8') if six.PY3 else value
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x90 in position 2: invalid start byte

ValueError: Optimizer only accepts ModelProto, incorrect type: <class 'bytes'>

It's just a simple CNN frozen TF model; I'm not sure how this happens. I tried with both onnx 1.2.1 and 1.2.2, but neither works.

More details regarding the error:
tensorflow ops: Counter({'Const': 12, 'BiasAdd': 5, 'Relu': 4, 'MatMul': 3, 'MaxPool': 2, 'Conv2D': 2, 'Reshape': 2, 'Placeholder': 1, 'Softmax': 1})
tensorflow attr: Counter({'T': 19, 'dtype': 13, 'value': 12, 'data_format': 9, 'strides': 4, 'padding': 4, 'transpose_b': 3, 'transpose_a': 3, 'ksize': 2, 'dilations': 2, 'Tshape': 2, 'use_cudnn_on_gpu': 2, 'shape': 1})
onnx mapped: Counter({'Const': 12, 'BiasAdd': 5, 'Relu': 4, 'MatMul': 3, 'MaxPool': 2, 'Conv2D': 2, 'Reshape': 2, 'Placeholder': 1, 'Softmax': 1})
onnx unmapped: Counter()
Traceback (most recent call last):
File "C:\Program Files\Python35\lib\runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "C:\Program Files\Python35\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Program Files\Python35\lib\site-packages\tf2onnx\convert.py", line 63, in
main()
File "C:\Program Files\Python35\lib\site-packages\tf2onnx\convert.py", line 47, in main
"converted from {}".format(args.input), args.inputs, args.outputs)
File "C:\Program Files\Python35\lib\site-packages\tf2onnx\graph.py", line 387, in make_model
"eliminate_nop_transpose"])
File "C:\Program Files\Python35\lib\site-packages\onnx\optimizer.py", line 43, in optimize
raise ValueError('Optimizer only accepts ModelProto, incorrect type: {}'.format(type(model)))
ValueError: Optimizer only accepts ModelProto, incorrect type: <class 'bytes'>

ResizeNearestNeighbor's 2nd input is assumed to be Const

If it's not a const, conversion will fail.

This was found while I was converting the style2paints go_head inference model.

-------------------- start handling style2paints_head.pb ----------------------------
change working directory to /home/pengwa/community/tensorflow
------ summarize the frozen graph, to get the inputs and outputs name
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=/tmp/frozen/style2paints_head.pb
Found 5 possible inputs: (name=Placeholder, type=float(1), shape=[?,1]) (name=Placeholder_1, type=float(1), shape=[?,?,?,1]) (name=Placeholder_2, type=float(1), shape=[?,?,?,3]) (name=Placeholder_3, type=float(1), shape=[?,?,?,4]) (name=Placeholder_4, type=float(1), shape=[?,?,?,3])
No variables spotted.
Found 1 possible outputs: (name=strided_slice_20, op=StridedSlice)
Found 187414779 (187.41M) const parameters, 0 (0) variable parameters, and 0 control_edges
Op types used: 954 Const, 640 Identity, 446 Mul, 318 Add, 238 Conv2D, 228 Relu, 151 Sub, 150 BiasAdd, 136 Rsqrt, 48 MatMul, 36 ConcatV2, 24 Mean, 24 Reshape, 18 StridedSlice, 14 AvgPool, 11 Shape, 10 MaxPool, 10 ResizeNearestNeighbor, 6 RealDiv, 5 Placeholder, 2 Pad, 1 ClipByValue, 1 ResizeBilinear
To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
bazel run tensorflow/tools/benchmark:benchmark_model -- --graph=/tmp/frozen/style2paints_head.pb --show_flops --input_layer=Placeholder,Placeholder_1,Placeholder_2,Placeholder_3,Placeholder_4 --input_layer_type=float,float,float,float,float --input_layer_shape=-1,1:-1,-1,-1,1:-1,-1,-1,3:-1,-1,-1,4:-1,-1,-1,3 --output_layer=strided_slice_20
------ update the inputs and outputs name to format like input_name:index
python3 /home/pengwa/community/learning/onnx/update_name_with_index.py Placeholder,Placeholder_1,Placeholder_2,Placeholder_3,Placeholder_4
updated input names is Placeholder:0,Placeholder_1:0,Placeholder_2:0,Placeholder_3:0,Placeholder_4:0, output names is strided_slice_20:0
------ start convertion, tensorflow usage require caller program must not in tensorflow root folder, so switch to current user directory with cd
/usr/local/lib/python3.5/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
using tensorflow=1.8.0, onnx=1.2.2
2018-07-26 13:25:24.639220: I tensorflow/tools/graph_transforms/transform_graph.cc:264] Applying fold_batch_norms
2018-07-26 13:25:26.395159: I tensorflow/tools/graph_transforms/transform_graph.cc:264] Applying fold_old_batch_norms
2018-07-26 13:25:37.818018: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-07-26 13:25:38.069259: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device 0 with properties:
name: Tesla K80 major: 3 minor: 7 memoryClockRate(GHz): 0.8235
pciBusID: 40a8:00:00.0
totalMemory: 11.17GiB freeMemory: 11.09GiB
2018-07-26 13:25:38.258768: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device 1 with properties:
name: Tesla K80 major: 3 minor: 7 memoryClockRate(GHz): 0.8235
pciBusID: 5ee9:00:00.0
totalMemory: 11.17GiB freeMemory: 11.09GiB
2018-07-26 13:25:38.451087: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device 2 with properties:
name: Tesla K80 major: 3 minor: 7 memoryClockRate(GHz): 0.8235
pciBusID: 7cf5:00:00.0
totalMemory: 11.17GiB freeMemory: 11.09GiB
2018-07-26 13:25:38.654067: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device 3 with properties:
name: Tesla K80 major: 3 minor: 7 memoryClockRate(GHz): 0.8235
pciBusID: c2da:00:00.0
totalMemory: 11.17GiB freeMemory: 11.09GiB
2018-07-26 13:25:38.654247: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0, 1, 2, 3
2018-07-26 13:25:39.723172: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-07-26 13:25:39.723246: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929]      0 1 2 3
2018-07-26 13:25:39.723262: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0:   N N N N
2018-07-26 13:25:39.723281: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 1:   N N N N
2018-07-26 13:25:39.723299: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 2:   N N N N
2018-07-26 13:25:39.723317: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 3:   N N N N
2018-07-26 13:25:39.724189: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10755 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 40a8:00:00.0, compute capability: 3.7)
2018-07-26 13:25:39.818390: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 10755 MB memory) -> physical GPU (device: 1, name: Tesla K80, pci bus id: 5ee9:00:00.0, compute capability: 3.7)
2018-07-26 13:25:40.053652: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:2 with 10755 MB memory) -> physical GPU (device: 2, name: Tesla K80, pci bus id: 7cf5:00:00.0, compute capability: 3.7)
2018-07-26 13:25:40.301944: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:3 with 10755 MB memory) -> physical GPU (device: 3, name: Tesla K80, pci bus id: c2da:00:00.0, compute capability: 3.7)
Traceback (most recent call last):
  File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.5/dist-packages/tf2onnx-0.0.2.0-py3.5.egg/tf2onnx/convert.py", line 116, in <module>
  File "/usr/local/lib/python3.5/dist-packages/tf2onnx-0.0.2.0-py3.5.egg/tf2onnx/convert.py", line 104, in main
  File "/usr/local/lib/python3.5/dist-packages/tf2onnx-0.0.2.0-py3.5.egg/tf2onnx/tfonnx.py", line 1396, in process_tf_graph
  File "/usr/local/lib/python3.5/dist-packages/tf2onnx-0.0.2.0-py3.5.egg/tf2onnx/tfonnx.py", line 1325, in tensorflow_onnx_mapping
  File "/usr/local/lib/python3.5/dist-packages/tf2onnx-0.0.2.0-py3.5.egg/tf2onnx/tfonnx.py", line 1317, in tensorflow_onnx_mapping
  File "/usr/local/lib/python3.5/dist-packages/tf2onnx-0.0.2.0-py3.5.egg/tf2onnx/tfonnx.py", line 850, in upsample_op7
  File "/usr/local/lib/python3.5/dist-packages/tf2onnx-0.0.2.0-py3.5.egg/tf2onnx/graph.py", line 150, in get_tensor_value
ValueError: get tensor value: model_1_4/up_sampling2d_8/mul must be Const
python3 -m tf2onnx.convert --input /tmp/frozen/style2paints_head.pb --inputs Placeholder:0,Placeholder_1:0,Placeholder_2:0,Placeholder_3:0,Placeholder_4:0 --outputs strided_slice_20:0 --output style2paints_head.pb.onnx --verbose --continue_on_error
generated onnx is located/home/pengwa/style2paints_head.pb.onnx
------ switch back to original directory: /home/pengwa/community/learning/onnx

The sub-graph looks like:


node {
  name: "model_1_4/up_sampling2d_8/Shape"
  op: "Shape"
  input: "model_1_4/activation_39/Relu"
  device: "/device:GPU:0"
  attr {
    key: "T"
    value {
      type: DT_FLOAT
    }
  }
  attr {
    key: "out_type"
    value {
      type: DT_INT32
    }
  }
}
node {
  name: "model_1_4/up_sampling2d_8/strided_slice/stack"
  op: "Const"
  device: "/device:GPU:0"
  attr {
    key: "dtype"
    value {
      type: DT_INT32
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_INT32
        tensor_shape {
          dim {
            size: 1
          }
        }
        int_val: 1
      }
    }
  }
}

node {
  name: "model_1_4/up_sampling2d_8/strided_slice/stack_1"
  op: "Const"
  device: "/device:GPU:0"
  attr {
    key: "dtype"
    value {
      type: DT_INT32
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_INT32
        tensor_shape {
          dim {
            size: 1
          }
        }
        int_val: 3
      }
    }
  }
}
node {
  name: "model_1_4/up_sampling2d_8/strided_slice/stack_2"
  op: "Const"
  device: "/device:GPU:0"
  attr {
    key: "dtype"
    value {
      type: DT_INT32
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_INT32
        tensor_shape {
          dim {
            size: 1
          }
        }
        int_val: 1
      }
    }
  }
}

node {
  name: "model_1_4/up_sampling2d_8/strided_slice"
  op: "StridedSlice"
  input: "model_1_4/up_sampling2d_8/Shape"
  input: "model_1_4/up_sampling2d_8/strided_slice/stack"
  input: "model_1_4/up_sampling2d_8/strided_slice/stack_1"
  input: "model_1_4/up_sampling2d_8/strided_slice/stack_2"
  device: "/device:GPU:0"
  attr {
    key: "Index"
    value {
      type: DT_INT32
    }
  }
  attr {
    key: "T"
    value {
      type: DT_INT32
    }
  }
  attr {
    key: "begin_mask"
    value {
      i: 0
    }
  }
  attr {
    key: "ellipsis_mask"
    value {
      i: 0
    }
  }
  attr {
    key: "end_mask"
    value {
      i: 0
    }
  }
  attr {
    key: "new_axis_mask"
    value {
      i: 0
    }
  }
  attr {
    key: "shrink_axis_mask"
    value {
      i: 0
    }
  }
}


node {
  name: "model_1_4/up_sampling2d_8/Const"
  op: "Const"
  device: "/device:GPU:0"
  attr {
    key: "dtype"
    value {
      type: DT_INT32
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_INT32
        tensor_shape {
          dim {
            size: 2
          }
        }
        tensor_content: "\002\000\000\000\002\000\000\000"
      }
    }
  }
}
node {
  name: "model_1_4/up_sampling2d_8/mul"
  op: "Mul"
  input: "model_1_4/up_sampling2d_8/strided_slice"
  input: "model_1_4/up_sampling2d_8/Const"
  device: "/device:GPU:0"
  attr {
    key: "T"
    value {
      type: DT_INT32
    }
  }
}
node {
  name: "model_1_4/up_sampling2d_8/ResizeNearestNeighbor"
  op: "ResizeNearestNeighbor"
  input: "model_1_4/activation_39/Relu"
  input: "model_1_4/up_sampling2d_8/mul"
  device: "/device:GPU:0"
  attr {
    key: "T"
    value {
      type: DT_FLOAT
    }
  }
  attr {
    key: "align_corners"
    value {
      b: false
    }
  }
}

Segmentation fault

I'm trying to convert a TF graph containing ResizeNearestNeighbor (conversion implemented here), but a segmentation fault is raised.

Ubuntu 16.04
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Sep__1_21:08:03_CDT_2017
Cuda compilation tools, release 9.0, V9.0.176

ONNX 1.2.1

2018-05-31 10:30:17.480932: I tensorflow/tools/graph_transforms/transform_graph.cc:264] Applying fold_batch_norms
2018-05-31 10:30:17.485136: I tensorflow/tools/graph_transforms/transform_graph.cc:264] Applying fold_old_batch_norms
2018-05-31 10:30:17.511056: W tensorflow/core/graph/graph_constructor.cc:582] Node 'IteratorGetNext' has 1 outputs but the _output_shapes attribute specifies shapes for 2 outputs. Output shapes may be inaccurate.
2018-05-31 10:30:17.529917: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-05-31 10:30:17.668859: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:898] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-05-31 10:30:17.669711: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device 0 with properties: 
name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.8225
pciBusID: 0000:08:00.0
totalMemory: 7.92GiB freeMemory: 7.09GiB
2018-05-31 10:30:17.669730: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0
2018-05-31 10:30:17.900738: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-05-31 10:30:17.900782: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929]      0 
2018-05-31 10:30:17.900789: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0:   N 
2018-05-31 10:30:17.901037: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6849 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:08:00.0, compute capability: 6.1)
tensorflow ops: Counter({'Const': 60, 'BiasAdd': 17, 'Conv2D': 17, 'Maximum': 16, 'Mul': 16, 'MaxPool': 5, 'ResizeNearestNeighbor': 5, 'ConcatV2': 5, 'Relu': 1, 'Placeholder': 1})
tensorflow attr: Counter({'T': 82, 'dtype': 61, 'value': 60, 'data_format': 39, 'padding': 22, 'strides': 22, 'use_cudnn_on_gpu': 17, 'dilations': 17, 'N': 5, 'ksize': 5, 'align_corners': 5, 'Tidx': 5, 'shape': 1})
onnx mapped: Counter({'Const': 60, 'BiasAdd': 17, 'Conv2D': 17, 'Maximum': 16, 'Mul': 16, 'MaxPool': 5, 'ResizeNearestNeighbor': 5, 'ConcatV2': 5, 'Relu': 1, 'Placeholder': 1})
onnx unmapped: Counter()
Segmentation fault

Consider using the mainline ONNX optimizer for graph transformations

Hello! We've just noticed that tensorflow-onnx contains a library for graph manipulations (graph_matcher.py) and a small suite of optimizations built on top of it.

ONNX itself contains a library for graph manipulations (https://github.com/onnx/onnx/tree/master/onnx/optimizer/passes) and a growing suite of optimizations built on top of it. The reason we built the ONNX optimizer is to share work - many frontends and backends will want to perform common optimizations, and they should only need to be written once.

Would you be open to using the ONNX optimizer to represent your passes? It is a slightly different interface, but we've taken a look through and all the optimizations currently implemented in tensorflow-onnx should be compatible.
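For reference, a minimal usage sketch of the optimizer being referred to (onnx.optimizer as shipped with ONNX at the time; pass names such as eliminate_nop_transpose also appear in a traceback earlier on this page). The file paths are illustrative:

import onnx
from onnx import optimizer

model = onnx.load("model.onnx")
# run a couple of the built-in passes from onnx/optimizer/passes
optimized = optimizer.optimize(model, ["eliminate_identity",
                                       "eliminate_nop_transpose"])
onnx.save(optimized, "model-optimized.onnx")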

Please add an op for "ResizeNearestNeighbor"

Hi.
I am using tensorflow-onnx to convert a TensorFlow model (pb file) to ONNX. However, "ResizeNearestNeighbor" is not supported, so the model cannot be converted. Could you please add a "ResizeNearestNeighbor" operator?

cannot import name tfonnx

tensorflow=1.8.0
onnx=1.2.1

when I run 'python -m tf2onnx.convert --input tests/models/fc-layers/frozen.pb --inputs X:0 --outputs output:0 --output tests/models/fc-layers/model.onnx --pretty --verbose'

it shows "cannot import name tfonnx". I have tried lots of methods, but it doesn't work. Can you help me?

import frozen graph with error "Input 0 of node X was passed float from Y:0 incompatible with expected float_ref."

Note: I am creating this issue for anybody who might come across a similar issue in the future.

When I tried to convert the frozen DCGAN inference model (trained with https://github.com/carpedm20/DCGAN-tensorflow), the error below was thrown:

------------------- start handling dcgan.pb ----------------------------
change working directory to /home/pengwang/community/tensorflow
------ summarize the frozen graph, to get the inputs and outputs name
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=/tmp/frozen/dcgan.pb
Found 2 possible inputs: (name=y, type=float(1), shape=[64,10]) (name=z, type=float(1), shape=[?,100])
No variables spotted.
Found 1 possible outputs: (name=generator/Sigmoid, op=Sigmoid)
Found 7080115 (7.08M) const parameters, 0 (0) variable parameters, and 6 control_edges
Op types used: 50 Const, 23 Identity, 8 Mul, 8 Reshape, 6 AssignSub, 6 Sub, 4 ConcatV2, 3 FusedBatchNorm, 3 Relu, 2 Add, 2 BiasAdd, 2 Conv2DBackpropInput, 2 Fill, 2 MatMul, 2 Placeholder, 1 Sigmoid
To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
bazel run tensorflow/tools/benchmark:benchmark_model -- --graph=/tmp/frozen/dcgan.pb --show_flops --input_layer=y,z --input_layer_type=float,float --input_layer_shape=64,10:-1,100 --output_layer=generator/Sigmoid
------ update the inputs and outputs name to format like input_name:index
python3 /home/pengwang/community/learning/onnx/update_name_with_index.py y,z
updated input names is y:0,z:1, output names is generator/Sigmoid:0
------ start convertion, tensorflow usage require caller program must not in tensorflow root folder, so switch to current user directory with cd
using tensorflow=1.9.0-rc0, onnx=1.2.1
2018-07-17 16:10:59.166646: I tensorflow/tools/graph_transforms/transform_graph.cc:318] Applying fold_batch_norms
2018-07-17 16:10:59.194705: I tensorflow/tools/graph_transforms/transform_graph.cc:318] Applying fold_old_batch_norms
Traceback (most recent call last):
 File "/home/pengwang/community/tensorflow/_python_build/tensorflow/python/framework/importer.py", line 418, in import_graph_def
   graph._c_graph, serialized, options)  # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input 0 of node generator/g_bn0/AssignMovingAvg was passed float from generator/g_bn0/moving_mean:0 incompatible with expected float_ref.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
 File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
   "__main__", mod_spec)
 File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
   exec(code, run_globals)
 File "/home/pengwang/community/tensorflow-onnx/tf2onnx/convert.py", line 100, in <module>
   main()
 File "/home/pengwang/community/tensorflow-onnx/tf2onnx/convert.py", line 80, in main
   tf.import_graph_def(graph_def, name='')
 File "/home/pengwang/community/tensorflow/_python_build/tensorflow/python/util/deprecation.py", line 432, in new_func
   return func(*args, **kwargs)
 File "/home/pengwang/community/tensorflow/_python_build/tensorflow/python/framework/importer.py", line 422, **in import_graph_def
   raise ValueError(str(e))
ValueError: Input 0 of node generator/g_bn0/AssignMovingAvg was passed float from generator/g_bn0/moving_mean:0 incompatible with expected float_ref.**

This is actually caused by the AssignSub node's first input being expected to be a float_ref; after freeze_graph.py handling, it is actually a float. There is a discussion at davidsandberg/facenet#161 and https://www.bountysource.com/issues/36614355-unable-to-import-frozen-graph-with-batchnorm.

To get this fixed, we need to do extra work on the frozen graph: at a minimum, change AssignSub to Sub in the graph. Look at the code below as an example:

import tensorflow as tf
from tensorflow.python.platform import gfile

model_path = "/tmp/frozen/dcgan.pb"

# read graph definition
f = gfile.FastGFile(model_path, "rb")
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())

# fix nodes
for node in graph_def.node:
    if node.op == 'RefSwitch':
        node.op = 'Switch'
        for index in range(len(node.input)):  # was xrange; range for Python 3
            if 'moving_' in node.input[index]:
                node.input[index] = node.input[index] + '/read'
    elif node.op == 'AssignSub':
        node.op = 'Sub'
        if 'use_locking' in node.attr:
            del node.attr['use_locking']

# import graph into session and write the fixed graph back out
tf.import_graph_def(graph_def, name='')
tf.train.write_graph(graph_def, './', 'good_frozen.pb', as_text=False)
tf.train.write_graph(graph_def, './', 'good_frozen.pbtxt', as_text=True)

For some Const node: "initializer shape is inconsistent"

Traceback (most recent call last):
  File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/pengwang/community/tensorflow-onnx/tf2onnx/convert.py", line 100, in <module>
    main()
  File "/home/pengwang/community/tensorflow-onnx/tf2onnx/convert.py", line 92, in main
    optimize=not args.continue_on_error)
  File "/home/pengwang/community/tensorflow-onnx/tf2onnx/graph.py", line 427, in make_model
    raise ValueError("initializer shape is inconsistent")
ValueError: initializer shape is inconsistent
python3 -m tf2onnx.convert --input /tmp/frozen/dcgan_2.pb --inputs y:0,z:1 --outputs generator/Sigmoid:0 --output dcgan_2.pb.onnx --verbose --continue_on_error
generated onnx is located/home/pengwang/dcgan_2.pb.onnx
------ switch back to original directory: /home/pengwang/community/learning/onnx

The problematic node looks as below:

node {
  name: "generator/g_bn2/Const_1"
  op: "Const"
  attr {
    key: "dtype"
    value {
      type: DT_FLOAT
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_FLOAT
        tensor_shape {
          dim {
          }
        }
      }
    }
  }
}

At https://github.com/onnx/tensorflow-onnx/blob/master/tf2onnx/graph.py#L421, the runtime values are:
list(shape) is [0]
initializer.dims is [1]

Since the above constant is a scalar, I think initializer.dims might be wrong?
initializer.dims is created in add_initializer, which is called by https://github.com/onnx/tensorflow-onnx/blob/master/tf2onnx/tfonnx.py#L206, where when I print(tensor) and print(type(tensor)), it shows:

name: "value"
t {
  dims: 1
  data_type: FLOAT
  float_data: 0.0
  name: "generator/g_bn2/Const:1"
}
type: TENSOR

<class 'onnx_pb2.AttributeProto'>

Any thoughts? @guschmue
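As a small standalone illustration of the mismatch (not from the converter itself): a zero-element tensor of shape [0] and a true scalar serialize with different dims, so the shape check in make_model has to distinguish them:

import numpy as np
from onnx import numpy_helper

empty = numpy_helper.from_array(np.zeros([0], dtype=np.float32))
scalar = numpy_helper.from_array(np.array(0.0, dtype=np.float32))
print(list(empty.dims))   # [0] -- zero-element tensor
print(list(scalar.dims))  # []  -- scalar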

------------------- good const nodes pasted below for comparison ------------------------------

On the other hand, here are example const nodes that work well:

node {
  name: "generator/g_bn2/AssignMovingAvg/decay"
  op: "Const"
  attr {
    key: "_class"
    value {
      list {
        s: "loc:@generator/g_bn2/moving_mean"
      }
    }
  }
  attr {
    key: "dtype"
    value {
      type: DT_FLOAT
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_FLOAT
        tensor_shape {
        }
        float_val: 0.10000000149011612
      }
    }
  }
}


node {
  name: "generator/g_h3/biases"
  op: "Const"
  attr {
    key: "dtype"
    value {
      type: DT_FLOAT
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_FLOAT
        tensor_shape {
          dim {
            size: 1
          }
        }
        float_val: 0.0900091901421547
      }
    }
  }
}


node {
  name: "generator/ones_1/shape_as_tensor"
  op: "Const"
  attr {
    key: "dtype"
    value {
      type: DT_INT32
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_INT32
        tensor_shape {
          dim {
            size: 4
          }
        }
        tensor_content: "@\000\000\000\016\000\000\000\016\000\000\000\n\000\000\000"
      }
    }
  }
}

node {
  name: "generator/g_h3/w"
  op: "Const"
  attr {
    key: "dtype"
    value {
      type: DT_FLOAT
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_FLOAT
        tensor_shape {
          dim {
            size: 5
          }
          dim {
            size: 5
          }
          dim {
            size: 1
          }
          dim {
            size: 138
          }
        }
		tensor_content: ""
      }
    }
  }
}

error in tf2onnx on inception graph

Hi,

I ran tf2onnx on an inception graph and got this error:
Traceback (most recent call last):
File "/nfs/cadv11/dmohan/projects/Downloads/yes/envs/onnx_env/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/nfs/cadv11/dmohan/projects/Downloads/yes/envs/onnx_env/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/nfs/cadv11/dmohan/projects/tensorflow-onnx/tf2onnx/convert.py", line 67, in
main()
File "/nfs/cadv11/dmohan/projects/tensorflow-onnx/tf2onnx/convert.py", line 56, in main
target=args.target)
File "/nfs/cadv11/dmohan/projects/tensorflow-onnx/tf2onnx/tfonnx.py", line 987, in process_tf_graph
mapped_op, unmapped_op = tensorflow_onnx_mapping(g, continue_on_error)
File "/nfs/cadv11/dmohan/projects/tensorflow-onnx/tf2onnx/tfonnx.py", line 940, in tensorflow_onnx_mapping
onnx_node = func(g, node, node.name, args)
File "/nfs/cadv11/dmohan/projects/tensorflow-onnx/tf2onnx/tfonnx.py", line 580, in concat_op
axis = axis_node.get_tensor_value()
File "/nfs/cadv11/dmohan/projects/tensorflow-onnx/tf2onnx/graph.py", line 120, in get_tensor_value
raise ValueError("get tensor value: {} must be Const".format(self.name))
ValueError: get tensor value: mixed3a_pool_reduce must be Const

Split operator - 'num_splits' attribute not available in the ONNX output

I tried converting the alexnet tensorflow model to onnx. Unfortunately, for the 'Split' operator the 'num_split' attribute is not available in the converted ONNX output.
During conversion only the axis attribute is stored, missing the num_split value, as shown below:

def split_op(ctx, node, name, args):
    # T output = Split(int32 split_dim, T value, @int num_split)
    # T outputs = Split(T input, @INT axis, @INTS split)
    split_dims = node.inputs[0].get_tensor_value()
    ctx.remove_input(node, node.input[0])
    node.set_attr("axis", split_dims[0])
    return node
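A hedged sketch of a possible fix: the number of splits is taken from the node's output count, and ctx.get_shape follows the usage shown in the slice_op snippet earlier on this page. Equal-sized splits are an assumption (TF's Split always splits evenly):

def split_op_fixed(ctx, node, name, args):
    # T output = Split(int32 split_dim, T value, @int num_split)
    split_dims = node.inputs[0].get_tensor_value()
    ctx.remove_input(node, node.input[0])
    axis = split_dims[0]
    node.set_attr("axis", axis)
    num_split = len(node.output)                   # one output per split
    dim_size = ctx.get_shape(node.input[0])[axis]  # extent of the split axis
    node.set_attr("split", [dim_size // num_split] * num_split)
    return node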

The op output dtype "tf.resource" is not supported

I tried to convert the trained ssd_mobilenet_v1_coco model (frozen_inference_graph.pb) from the tensorflow model zoo, but the "types_pb2.DT_RESOURCE" type is not supported in "TF_TO_ONNX_DTYPE".

Here is the def for that type:
https://www.tensorflow.org/api_docs/python/tf/DType

tf.resource: Handle to a mutable resource.

I'm a beginner with tensorflow, so I don't know the actual meaning of that op output type.

The error message:
File "/usr/local/lib/python3.6/site-packages/tf2onnx-0.0.2.0-py3.6.egg/tf2onnx/convert.py", line 85, in main
File "/usr/local/lib/python3.6/site-packages/tf2onnx-0.0.2.0-py3.6.egg/tf2onnx/tfonnx.py", line 1183, in process_tf_graph
File "/usr/local/lib/python3.6/site-packages/tf2onnx-0.0.2.0-py3.6.egg/tf2onnx/tfonnx.py", line 60, in tensorflow_to_onnx
File "/usr/local/lib/python3.6/site-packages/tf2onnx-0.0.2.0-py3.6.egg/tf2onnx/utils.py", line 154, in map_tf_dtype
KeyError: tf.resource

Conv2DBackpropInput conversion failure: new_shape[i] = shape[perm[i]] IndexError: list index out of range


-------------------- start handling dcgan_2.pb ----------------------------
change working directory to /home/pengwang/community/tensorflow
------ summarize the frozen graph, to get the inputs and outputs name
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=/tmp/frozen/dcgan_2.pb
Found 2 possible inputs: (name=y, type=float(1), shape=[64,10]) (name=z, type=float(1), shape=[?,100])
No variables spotted.
Found 1 possible outputs: (name=generator/Sigmoid, op=Sigmoid)
Found 7080115 (7.08M) const parameters, 0 (0) variable parameters, and 6 control_edges
Op types used: 50 Const, 23 Identity, 12 Sub, 8 Mul, 8 Reshape, 4 ConcatV2, 3 FusedBatchNorm, 3 Relu, 2 Add, 2 BiasAdd, 2 Conv2DBackpropInput, 2 Fill, 2 MatMul, 2 Placeholder, 1 Sigmoid
To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
bazel run tensorflow/tools/benchmark:benchmark_model -- --graph=/tmp/frozen/dcgan_2.pb --show_flops --input_layer=y,z --input_layer_type=float,float --input_layer_shape=64,10:-1,100 --output_layer=generator/Sigmoid
------ update the inputs and outputs name to format like input_name:index
python3 /home/pengwang/community/learning/onnx/update_name_with_index.py y,z
updated input names is y:0,z:1, output names is generator/Sigmoid:0
------ start convertion, tensorflow usage require caller program must not in tensorflow root folder, so switch to current user directory with cd
using tensorflow=1.9.0-rc0, onnx=1.2.1
2018-07-19 10:59:39.363171: I tensorflow/tools/graph_transforms/transform_graph.cc:318] Applying fold_batch_norms
2018-07-19 10:59:39.390275: I tensorflow/tools/graph_transforms/transform_graph.cc:318] Applying fold_old_batch_norms
2018-07-19 10:59:39.502133: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2018-07-19 10:59:39.584548: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1392] Found device 0 with properties:
name: Quadro K620 major: 5 minor: 0 memoryClockRate(GHz): 1.124
pciBusID: 0000:03:00.0
totalMemory: 1.95GiB freeMemory: 1.82GiB
2018-07-19 10:59:39.584579: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1471] Adding visible gpu devices: 0
2018-07-19 10:59:39.766365: I tensorflow/core/common_runtime/gpu/gpu_device.cc:952] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-07-19 10:59:39.766417: I tensorflow/core/common_runtime/gpu/gpu_device.cc:958]      0
2018-07-19 10:59:39.766424: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0:   N
2018-07-19 10:59:39.766529: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1084] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1576 MB memory) -> physical GPU (device: 0, name: Quadro K620, pci bus id: 0000:03:00.0, compute capability: 5.0)
Traceback (most recent call last):
  File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/pengwang/community/tensorflow-onnx/tf2onnx/convert.py", line 116, in <module>
    main()
  File "/home/pengwang/community/tensorflow-onnx/tf2onnx/convert.py", line 104, in main
    shape_override=args.shape_override)
  File "/home/pengwang/community/tensorflow-onnx/tf2onnx/tfonnx.py", line 1403, in process_tf_graph
    mapped_op, unmapped_op = tensorflow_onnx_mapping(g, continue_on_error, custom_op_handlers)
  File "/home/pengwang/community/tensorflow-onnx/tf2onnx/tfonnx.py", line 1332, in tensorflow_onnx_mapping
    raise ex
  File "/home/pengwang/community/tensorflow-onnx/tf2onnx/tfonnx.py", line 1324, in tensorflow_onnx_mapping
    onnx_node = func(g, node, node.name, args)
  File "/home/pengwang/community/tensorflow-onnx/tf2onnx/tfonnx.py", line 519, in convtranspose_op
    add_padding(ctx, node, kernel_shape, strides)
  File "/home/pengwang/community/tensorflow-onnx/tf2onnx/tfonnx.py", line 446, in add_padding
    input_shape = spatial_map(input_shape, NHWC_TO_NCHW)
  File "/home/pengwang/community/tensorflow-onnx/tf2onnx/tfonnx.py", line 333, in spatial_map
    new_shape[i] = shape[perm[i]]
IndexError: list index out of range
python3 -m tf2onnx.convert --input /tmp/frozen/dcgan_2.pb --inputs y:0,z:1 --outputs generator/Sigmoid:0 --output dcgan_2.pb.onnx --verbose --continue_on_error
generated onnx is located/home/pengwang/dcgan_2.pb.onnx

Tensorflow Conv2DBackpropInput operation's signature:

Conv2DBackpropInput(const ::tensorflow::Scope & scope, ::tensorflow::Input input_sizes, ::tensorflow::Input filter, ::tensorflow::Input out_backprop, const gtl::ArraySlice< int > & strides, StringPiece padding)

In the existing conversion logic we call add_padding(), which assumes node.input[0] is the input data. But for Conv2DBackpropInput, input[0] is actually input_sizes.

Resolution: in the ONNX ConvTranspose definition, pads can be generated automatically once we set the attribute "output_shape" explicitly, so we can get rid of add_padding() entirely.

output_shape : list of ints
The shape of the output can be explicitly set which will cause pads values to be auto generated. If output_shape is specified pads values are ignored. See doc for details for equations to generate pads
https://github.com/onnx/onnx/blob/master/docs/Operators.md#convtranspose
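
A minimal sketch of emitting ConvTranspose this way with onnx.helper (tensor names, strides and sizes here are illustrative):

from onnx import helper

# output_shape lists the spatial dims only; the runtime derives pads from it,
# so no explicit add_padding() computation is needed on the converter side.
node = helper.make_node(
    "ConvTranspose",
    inputs=["X", "W"],
    outputs=["Y"],
    strides=[2, 2],
    kernel_shape=[5, 5],
    output_shape=[28, 28],
)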

tensorflow Pad operator

Is there an option in tf2onnx to fuse the Pad operator into the Conv? The caffe2->onnx model has no Pad operators and instead has a Conv layer with padding; in TensorFlow and the tf2onnx model there is a separate Pad operator followed by Conv, and the Conv operator does not take explicit paddings (unlike Caffe). A conceptual fusion is sketched below.
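
A conceptual sketch of such a fusion, assuming a constant Pad feeding a Conv in NHWC layout (the helper is hypothetical, not an existing tf2onnx option):

def fuse_pad_into_conv(paddings, conv_attrs, conv_inputs, pad_input):
    # TF NHWC paddings: [[0,0],[top,bottom],[left,right],[0,0]];
    # ONNX Conv pads: [top, left, bottom, right].
    _, (top, bottom), (left, right), _ = paddings
    conv_attrs["pads"] = [top, left, bottom, right]
    conv_inputs[0] = pad_input  # rewire Conv to read the Pad's input directly
    return conv_attrs, conv_inputs

attrs, inputs = fuse_pad_into_conv(
    [[0, 0], [3, 3], [3, 3], [0, 0]], {}, ["pad_out:0"], "image:0")
print(attrs)  # {'pads': [3, 3, 3, 3]}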

If a node's output shape contains -1, e.g. [1, -1, -1, 3], Relu6 and add_padding() report errors.

tensorflow object detection model ssd_mobilenet_v1_quantized

when I convert this model,

  1. use tf_optimize() and extract_sub_graph(), passing new output names (nodes just before the Postprocessor part), which effectively deletes the latter part of the model

  2. in tensorflow_to_onnx(), delete the Preprocessor-part nodes and some Assert nodes:

ops = graph.get_operations()
print("middle_inputs:", middle_inputs)
new_input = None
if middle_inputs:
    new_input = middle_inputs[0]
    print("new_input:", new_input)
    # Walk the op list and drop Preprocessor, Assert and ToFloat nodes in place.
    i = 0
    while i < len(ops):
        if "Preprocessor" in ops[i].name:
            ops.pop(i)
            i -= 1
        elif "Assert" in ops[i].name:
            print("Assert", ops[i].name)
            ops.pop(i)
            i -= 1
        elif ops[i].name == "ToFloat":
            ops.pop(i)
            i -= 1
        i += 1

print("1 final len(ops):", len(ops))
  3. just convert the middle part, and replace the Placeholder node with what I need:
if node.type == "Placeholder":
    output_names = [new_input]
    attr["dtype"] = utils.map_tf_dtype(types_pb2.DT_FLOAT)
    attr["shape"] = utils.middle_node_shape(node.name)
    new_name = new_input[:-2]  # strip the trailing ":0" port suffix
    print("new_name:", new_name)
    onnx_node = helper.make_node(node.type, input_names, output_names, name=new_name, **attr)
  4. finally, this model's nodes don't have specific shapes, only things like [1, -1, -1, 3], and some ops such as Relu6 need a specific shape to create a numpy array, while add_padding() needs a specific shape to calculate pads. My stupid solution for now is to run each node to get its output shape and write the results into a dict:
SSD_MIBILE_NODE_SHAPE = {
    'image_tensor': [1, 300, 300, 3],
    'FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_2_depthwise/depthwise': [1, 75, 75, 64],
    'FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_2_depthwise/BatchNorm/FusedBatchNorm': [1, 75, 75, 64],
    'FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Conv2D': [1, 150, 150, 32],
    'FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/BatchNorm/FusedBatchNorm': [1, 150, 150, 32],
    # ... remaining node shapes elided
}

  5. Although I finally converted successfully and it runs in caffe2 OK, as you can see this is stupid. Do you have any good advice? (A sketch of automating the shape table follows below.)
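
As a hedged sketch of automating that table in TF 1.x (the input name, shape and float dtype are assumptions; ops that cannot be computed from the feed are simply skipped):

import numpy as np
import tensorflow as tf

def collect_output_shapes(graph, input_name, input_shape):
    """Run each op once on dummy data and record its concrete output shape."""
    shapes = {}
    with tf.Session(graph=graph) as sess:
        feed = {input_name + ":0": np.zeros(input_shape, dtype=np.float32)}
        for op in graph.get_operations():
            if not op.outputs:
                continue
            try:
                out = sess.run(op.outputs[0], feed_dict=feed)
                shapes[op.name] = list(out.shape)
            except Exception:
                pass  # ops not reachable from this feed are skipped
    return shapes

shape_dict = collect_output_shapes(graph, "image_tensor", [1, 300, 300, 3])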

stridedslice_op: begin_mask = 9 and begin: [0 2 2 0]. Need to deal with it?

stridedslice_op:
input: "detector/yolo-v3/ResizeBilinear_1:0"
input: "detector/yolo-v3/strided_slice_1/stack:0"
input: "detector/yolo-v3/strided_slice_1/stack_1:0"
input: "detector/yolo-v3/strided_slice_1/stack_2:0"
output: "detector/yolo-v3/strided_slice_1:0"
name: "detector/yolo-v3/strided_slice_1"
op_type: "StridedSlice"
attribute {
name: "begin_mask"
i: 9
type: INT
}
attribute {
name: "ellipsis_mask"
i: 0
type: INT
}
attribute {
name: "end_mask"
i: 9
type: INT
}
attribute {
name: "new_axis_mask"
i: 0
type: INT
}
attribute {
name: "shrink_axis_mask"
i: 0
type: INT
}

begin: [0 2 2 0]
end: [ 0 -2 -2 0]
strides: [1 1 1 1]
mask: 1
mask: 0
mask: 0
mask: 1
new_begin: [0, 2, 2, 0]
new_end: [9223372036854775807, -2, -2, 9223372036854775807]
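
For reference, a begin_mask/end_mask of 9 is binary 1001, i.e. axes 0 and 3 take the full range; a minimal sketch that reproduces the new_begin/new_end above:

import sys

begin = [0, 2, 2, 0]
end = [0, -2, -2, 0]
begin_mask = end_mask = 9  # bits 0 and 3 set

new_begin, new_end = [], []
for axis in range(len(begin)):
    new_begin.append(0 if (begin_mask >> axis) & 1 else begin[axis])
    new_end.append(sys.maxsize if (end_mask >> axis) & 1 else end[axis])

print(new_begin)  # [0, 2, 2, 0]
print(new_end)    # [9223372036854775807, -2, -2, 9223372036854775807]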

tf2onnx has no output.onnx

I tried the test graph frozen.pb and it worked fine, producing an output model.onnx, so the tool is working in general. However, when I use another frozen graph, no model.onnx is created. The verbose output looks much the same to me, and as there is no additional error message, I can't find a solution to this.

tensorflow-onnx-master>python -m tf2onnx.convert --input tests\models\fc-layers\frozen_graph.pb --inputs input[1,224,224,3] --outputs MobilenetV1/Predictions/Reshape_1 --output tests\models\fc-layers\model.onnx --verbose

using tensorflow=1.9.0, onnx=1.2.2

2018-07-20 09:45:32.064575: I T:\src\github\tensorflow\tensorflow\tools\graph_transforms\transform_graph.cc:318] Applying fold_batch_norms

2018-07-20 09:45:32.121699: I T:\src\github\tensorflow\tensorflow\tools\graph_transforms\transform_graph.cc:318] Applying fold_old_batch_norms

2018-07-20 09:45:32.466433: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2

tensorflow ops: Counter({'Const': 166, 'Identity': 138, 'Mul': 81, 'Add': 54, 'Sub': 27, 'Rsqrt': 27, 'Relu6': 27, 'Conv2D': 15, 'DepthwiseConv2dNative': 13, 'Reshape': 2, 'Softmax': 1, 'Squeeze': 1, 'BiasAdd': 1, 'Placeholder': 1, 'AvgPool': 1})
tensorflow attr: Counter({'T': 388, 'dtype': 167, 'value': 166, '_class': 137, 'data_format': 30, 'padding': 29, 'strides': 29, 'dilations': 28, 'use_cudnn_on_gpu': 15, 'Tshape': 2, 'squeeze_dims': 1, 'ksize': 1, 'shape': 1})
onnx mapped: Counter({'Const': 166, 'Identity': 138, 'Mul': 81, 'Add': 54, 'Sub': 27, 'Rsqrt': 27, 'Relu6': 27, 'Conv2D': 15, 'DepthwiseConv2dNative': 13, 'Reshape': 2, 'Softmax': 1, 'Squeeze': 1, 'BiasAdd': 1, 'Placeholder': 1, 'AvgPool': 1})

onnx unmapped: Counter()

AssertionError: input is not in graph

When a particular node name has a '/' in it, only the string before the slash is passed as the node name, and hence that node is not found.
E.g. if my node name is "input/Placeholder", only "input" is passed as the node name instead of the entire "input/Placeholder", and hence TensorFlow throws a node-not-in-graph error.
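
An illustrative sketch of the parsing this implies (not the actual tf2onnx code): only the trailing port should be stripped, never path segments:

def node_name(full_name):
    """Strip only the trailing ':port', keeping '/' path segments intact."""
    return full_name.rsplit(":", 1)[0]

assert node_name("input/Placeholder:0") == "input/Placeholder"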

enable shape_override but there is still a -1 introduced by the placeholder.

shape_override helps in the following scenario: during end2end testing (with force_input_shape: true), a placeholder's shape is [-1, 32] and we want to override it to [2, 32] before doing the tf->onnx conversion with the converter.

issue: https://github.com/onnx/tensorflow-onnx/blob/master/tf2onnx/tfonnx.py#L238
node.shape will still return [-1, 32] (https://github.com/onnx/tensorflow-onnx/blob/master/tf2onnx/graph.py#L128), so here we need to get the overridden shape.

In https://github.com/onnx/tensorflow-onnx/blob/master/tf2onnx/tfonnx.py#L61, shape overrides are recorded into _output_shapes of Graph. I think we should use that, as in the sketch below.
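
A hedged sketch of the lookup order being proposed (get_node_by_output is a hypothetical accessor; only _output_shapes comes from the linked code):

def get_shape(graph, output_name):
    shape = graph._output_shapes.get(output_name)
    if shape is not None:
        return shape  # the overridden shape, e.g. [2, 32]
    # fall back to the node's own, possibly unknown, shape, e.g. [-1, 32]
    return graph.get_node_by_output(output_name).shape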

Tensorflow op MirrorPad is not supported

't' is [[1, 2, 3], [4, 5, 6]].

'paddings' is [[1, 1], [2, 2]].

'mode' is SYMMETRIC.

rank of 't' is 2.

pad(t, paddings) ==>
[[2, 1, 1, 2, 3, 3, 2]
 [2, 1, 1, 2, 3, 3, 2]
 [5, 4, 4, 5, 6, 6, 5]
 [5, 4, 4, 5, 6, 6, 5]]
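
The same SYMMETRIC padding can be reproduced with numpy; this is one way a converter could emulate MirrorPad, since ONNX Pad's 'reflect' mode corresponds to TF's REFLECT, not SYMMETRIC:

import numpy as np

t = np.array([[1, 2, 3], [4, 5, 6]])
# pad_width mirrors TF's paddings: 1 row on each side, 2 columns on each side
padded = np.pad(t, pad_width=((1, 1), (2, 2)), mode="symmetric")
print(padded)
# [[2 1 1 2 3 3 2]
#  [2 1 1 2 3 3 2]
#  [5 4 4 5 6 6 5]
#  [5 4 4 5 6 6 5]]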

tf-onnx introducing many transpose operations

Hi, I converted a TF graph to ONNX and I see that there are several Transpose operations in the graph because of the TF (NHWC) and ONNX (NCHW) layout differences. Is there a way I can tweak tfonnx/other files to avoid these transposes and still get the correct output shape? I am running on a CPU.

I see a related post here for onnx-> tensorflow. onnx/onnx-tensorflow#31
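
For context, each inserted NHWC<->NCHW pair is an exact inverse, so an optimizer can drop back-to-back pairs without changing results; a toy check:

import numpy as np

x = np.random.rand(1, 224, 224, 3)       # NHWC
nchw = np.transpose(x, (0, 3, 1, 2))     # to NCHW for ONNX
back = np.transpose(nchw, (0, 2, 3, 1))  # back to NHWC
assert np.array_equal(x, back)           # the pair is a no-op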
