frozen-graph-tensorflow's Introduction

Frozen Graph TensorFlow

Lei Mao

Introduction

This repository contains examples of saving, loading, and running inference for frozen graphs in TensorFlow 1.x and 2.x.

Files

.
├── LICENSE.md
├── README.md
├── TensorFlow_v1
│   ├── cifar.py
│   ├── cnn.py
│   ├── inspect_signature.py
│   ├── main.py
│   ├── README.md
│   ├── test_pb.py
│   └── utils.py
└── TensorFlow_v2
    ├── example_1.py
    ├── example_2.py
    ├── README.md
    └── utils.py

Blogs

Examples
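A minimal sketch of the TF2 freezing workflow these examples walk through (the Keras model below is a stand-in; see the scripts under TensorFlow_v2 for the full versions):

    import tensorflow as tf
    from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

    # Stand-in model; the real examples build and train their own.
    model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(784,))])

    # Wrap the Keras model in a tf.function and trace a ConcreteFunction.
    full_model = tf.function(lambda x: model(x))
    full_model = full_model.get_concrete_function(
        x=tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))

    # Inline the variables as constants and serialize the frozen GraphDef.
    frozen_func = convert_variables_to_constants_v2(full_model)
    tf.io.write_graph(graph_or_graph_def=frozen_func.graph,
                      logdir="frozen_models", name="frozen_graph.pb",
                      as_text=False)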


frozen-graph-tensorflow's Issues

dump weights

How can we dump the weights of a TF2 model in TensorRT's .wts format? I attempted to use a TensorRT script called dumpTFWts.py, but it only works with TF1 checkpoint files. Can I use the frozen graph to dump the weights of a TF2 model? If so, how? Thanks.
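For reference, a rough sketch of pulling weight arrays out of a TF2 frozen graph by walking its Const nodes (this is not dumpTFWts.py, and writing the arrays out in TensorRT's .wts text format would be a separate step; the file path is a placeholder):

    import tensorflow as tf

    graph_def = tf.compat.v1.GraphDef()
    with open("frozen_graph.pb", "rb") as f:
        graph_def.ParseFromString(f.read())

    # In a frozen graph, every weight has been inlined as a Const node.
    weights = {node.name: tf.make_ndarray(node.attr["value"].tensor)
               for node in graph_def.node if node.op == "Const"}
    for name, array in weights.items():
        print(name, array.shape, array.dtype)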

Would you mind my sharing your method in Zhihu?

Hi, Mao.
Actually, this is not an issue; I just have no way to say thanks under your blog post. I was frustrated trying to freeze a graph to a pb file with TF 2.0 last year, since there was no support, no reply, and no article about this problem online. Fortunately, I read your blog yesterday and solved the problem in my project.

I wonder if I could share your method in Chinese on Zhihu, with full references, so that people who hit a similar problem can get support.
Thank you,

Output layer signature rename

Hi,
is there a way to change the output layer name?
From what I observed, it always defaults to Identity_n:0, where n is the index of the output.

    full_model = full_model.get_concrete_function(
        x=tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype, name="input_1"))
    # We can change the input signature here. What about the output?
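One workaround sometimes suggested (a sketch, not something this repo confirms; "my_output" is an arbitrary name): wrap the model call in tf.identity with an explicit name, so the traced graph contains an output node with that name instead of the bare default:

    full_model = tf.function(lambda x: tf.identity(model(x), name="my_output"))
    full_model = full_model.get_concrete_function(
        x=tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype, name="input_1"))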

Issue with passing of the arguments?

Hey! This might come across as a very newbie question; I'm very new to all of this.
I am having issues with what to pass as the arguments of the save() and save_as_pb() functions.

Code Snippet:

!mkdir -p saved_model # Making the directory to save the saved model
model = tf.train.load_checkpoint('/content/test1/shanghaitech') # Loading the pre-trained checkpoint as model

# Your Code for the save() and the save_as_pb() functions:
from tensorflow.python.tools import freeze_graph

def save(self, directory, filename):

    if not os.path.exists(directory):
        os.makedirs(directory)
    filepath = os.path.join(directory, filename + '.ckpt')
    self.saver.save(self.sess, filepath)
    return filepath

def save_as_pb(self, directory, filename):

    if not os.path.exists(directory):
        os.makedirs(directory)

    # Save check point for graph frozen later
    ckpt_filepath = self.save(directory=directory, filename=filename)
    pbtxt_filename = filename + '.pbtxt'
    pbtxt_filepath = os.path.join(directory, pbtxt_filename)
    pb_filepath = os.path.join(directory, filename + '.pb')
    # This will only save the graph but the variables will not be saved.
    # You have to freeze your model first.
    tf.train.write_graph(graph_or_graph_def=self.sess.graph_def, logdir=directory, name=pbtxt_filename, as_text=True)

    # Freeze graph
    # Method 1
    freeze_graph.freeze_graph(input_graph=pbtxt_filepath, input_saver='', input_binary=False, input_checkpoint=ckpt_filepath, output_node_names='cnn/output', restore_op_name='save/restore_all', filename_tensor_name='save/Const:0', output_graph=pb_filepath, clear_devices=True, initializer_nodes='')
    
    return pb_filepath

# Calling the Function save_as_pb():
file_path = save_as_pb(self, directory='/content/test1/', filename='shanghaitech')
print(file_path)

The Issue:

When I call the save_as_pb() function, I don't know what to pass as the "self" argument. I have tried passing "model" in place of "self", but to no avail.

Any kind of guidance would be highly appreciated!
Cheers
-Zain Gill
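For what it's worth, a sketch of the context these methods expect (assuming they are methods of a TF1-style model class that owns its own session and saver; the class here is hypothetical):

    import tensorflow as tf

    class Model(object):
        def __init__(self):
            # ... build the TF1 graph here, then create the saver and session
            self.saver = tf.train.Saver()
            self.sess = tf.Session()
        # save() and save_as_pb() from above belong here as methods.

    model = Model()
    # Called as a bound method, `self` is supplied automatically.
    file_path = model.save_as_pb(directory='/content/test1/', filename='shanghaitech')
    print(file_path)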

Trouble creating .pb to be used by ML.net

Hello, I used your library https://github.com/leimao/Frozen_Graph_TensorFlow/tree/master/TensorFlow_v2 to freeze a tf2 graph and generate a 'frozen_model.pb' file. However, it's not being accepted by ml.net because there is no output in the graph.

When I use the tool https://lutzroeder.github.io/netron/ to view the frozen_graph.pb file, the graph's visualization looks strange and also does not show an output.

https://github.com/lutzroeder/netron has examples of .pb files that show correctly (scroll to bottom) like the chessapp.pb file.

I ran both your train.py and test.py and confirmed that reloading the frozen graph works; however, it doesn't work for ml.net.

Do you know why there is this discrepancy: the graph seems to work when running test.py, but doesn't display correctly on https://github.com/lutzroeder/netron or work in ml.net?

read values

Hey, can you tell me how to read values like weights and gradients from this frozen graph?
We also have the other model.ckpt .data/.meta/.index files.
Is there any way?
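For the checkpoint side, a sketch (assuming a standard checkpoint prefix such as model.ckpt; note that checkpoints and frozen graphs store variable values only, not gradients):

    import tensorflow as tf

    ckpt = "model.ckpt"  # prefix shared by the .data/.index files
    reader = tf.train.load_checkpoint(ckpt)
    for name, shape in tf.train.list_variables(ckpt):
        value = reader.get_tensor(name)  # numpy array with the stored weights
        print(name, shape, value.dtype)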

problem saving stateful LSTM model

Saving a TF2 model to a frozen graph worked for me, thank you! However, when I trained my LSTM model with stateful=True, it failed:

frozen_func = convert_variables_to_constants_v2(full_model, lower_control_flow=False)

File ".../lib/python3.7/site-packages/tensorflow/python/framework/convert_to_constants.py", line 1075, in convert_variables_to_constants_v2
converted_input_indices)
File ".../lib/python3.7/site-packages/tensorflow/python/framework/convert_to_constants.py", line 1001, in _construct_concrete_function
new_output_names)
File ".../lib/python3.7/site-packages/tensorflow/python/eager/wrap_function.py", line 650, in function_from_graph_def
wrapped_import = wrap_function(_imports_graph_def, [])
File ".../lib/python3.7/site-packages/tensorflow/python/eager/wrap_function.py", line 628, in wrap_function
collections={}),
File ".../lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py", line 986, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File ".../lib/python3.7/site-packages/tensorflow/python/eager/wrap_function.py", line 87, in call
return self.call_with_variable_creator_scope(self._fn)(*args, **kwargs)
File ".../lib/python3.7/site-packages/tensorflow/python/eager/wrap_function.py", line 93, in wrapped
return fn(*args, **kwargs)
File ".../lib/python3.7/site-packages/tensorflow/python/eager/wrap_function.py", line 648, in _imports_graph_def
importer.import_graph_def(graph_def, name="")
File "/.../lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File ".../lib/python3.7/site-packages/tensorflow/python/framework/importer.py", line 405, in import_graph_def
producer_op_list=producer_op_list)
File ".../lib/python3.7/site-packages/tensorflow/python/framework/importer.py", line 501, in _import_graph_def_internal
raise ValueError(str(e))
ValueError: Input 0 of node sequential/lstm/AssignVariableOp was passed float from sequential/lstm/Read/ReadVariableOp/resource:0 incompatible with expected resource.

tensorflow-hub model to frozen graph

Hi Lei Mao:
Thanks for your blog post "Save, Load and Inference From TensorFlow Frozen Graph"; it benefited me a lot as I work on enabling a style transfer model from TF Hub.
I've come across some problems. In simplified code, it looks like this:

import tensorflow_hub as hub
hub_handle = 'https://hub.tensorflow.google.cn/google/magenta/arbitrary-image-stylization-v1-256/2'
style_model = hub.load(hub_handle)

style_model is a loaded SavedModel object: tensorflow.python.saved_model.load.Loader._recreate_base_user_object._UserObject.
We can use style_model to get a stylized image prediction:

outputs = style_model(tf.constant(content_image), tf.constant(style_image))

I can also get a signature as a function to run the model:

func = style_model.signatures['serving_default']
outputs = func(tf.constant(content_image), tf.constant(style_image))

Then I try to freeze func.graph, but I run into a problem:

[op.type for op in func.graph.get_operations()]

The feeds and variables all become Placeholder ops!

['Placeholder', 'Placeholder', 'Placeholder', 'Placeholder', 'Placeholder', ..., 'StatefulPartitionedCall', 'Identity']

Do you know why this happens? Or any solutions to this issue? Many thanks!

Thanks!
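For context, a sketch under the assumption that the hub signature behaves like any other ConcreteFunction: variables captured by a ConcreteFunction show up as Placeholder ops in its graph, and convert_variables_to_constants_v2 is the step that inlines them as constants:

    import tensorflow_hub as hub
    from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

    style_model = hub.load(hub_handle)  # hub_handle as defined above
    func = style_model.signatures['serving_default']

    # Captured variables stop being Placeholders only after freezing.
    frozen_func = convert_variables_to_constants_v2(func)
    print([op.type for op in frozen_func.graph.get_operations()])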

If the model has two inputs, how should full_model.get_concrete_function be set up?

Here is my model:

class xbert_gru(tf.keras.models.Model):
    def __init__(self):
        super(xbert_gru, self).__init__()
        self.xbert_model = build_transformer_model(config_path,
                                                   checkpoint_path,
                                                   model='bert',
                                                   with_pool=False,          # do not extract the CLS vector
                                                   return_keras_model=False  # do not return the Keras model structure
                                                   )
        self.gru_dense = GRU(units=384,
                    dropout=0.2,
                    recurrent_dropout=0.15,
                    return_sequences=True,
                    name='GRU-Dense'
                    )
        self.ave_pooing = GlobalAveragePooling1D(name='GlobalAveragePooling1D')

        self.x_out = Dense(168,
                      activation='softmax',
                      name='Out-Dense'
                      )
#     @tf.function(input_signature=(
#         [tf.TensorSpec(shape=(None,None), dtype=tf.float32), tf.TensorSpec(shape=(None,None), dtype=tf.float32)])
#                 )
    @tf.function
    def call(self, inputs):
        x = self.xbert_model.model([inputs[0], inputs[1]])
        x = self.gru_dense(x)
        x = self.ave_pooing(x)
        x = self.x_out(x)
        return x

I have two questions:

First: how should I set up tf.function()? This is my current setting:
full_model = tf.function(model)
Second: how should I set up the two input tensors in full_model.get_concrete_function()? This is my current setting:
full_model = full_model.get_concrete_function(
         tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype),
         tf.TensorSpec(model.inputs[1].shape, model.inputs[0].dtype))

The error with my current setting:

TypeError: in converted code:

    /home/xuyingjie/.virtualenvs/py3venv/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py:778 __call__  *
        outputs = call_fn(cast_inputs, *args, **kwargs)
    /home/xuyingjie/.virtualenvs/py3venv/lib/python3.6/site-packages/tensorflow_core/python/eager/def_function.py:568 __call__
        result = self._call(*args, **kwds)

    TypeError: tf__call() takes 2 positional arguments but 3 were given

Please advise. Thanks!

Multiple Input Models

Thank you so much for this repo. When using models with multiple inputs, this code will obviously fail. Here is how to do it for multiple inputs (a usage sketch of the subsequent freeze step follows the snippet). I wasn't sure where to open a PR, so I'm just opening this issue to help others trying the same.

full_model = tf.function(lambda x: model(x))
get_concrete_func_x = []
for i in range(len(model.inputs)):
    get_concrete_func_x.append(
        tf.TensorSpec(
            model.inputs[i].shape,
            model.inputs[i].dtype,
            name=model.inputs[i].name
        )
    )
# basically x should be a TensorSpec or tuple of them.
# don't put a generator directly or it will fail too.
get_concrete_func_x = tuple(get_concrete_func_x)
full_model = full_model.get_concrete_function(x=get_concrete_func_x)
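And the usage sketch of the freeze step that follows the snippet (the same call the single-input examples use):

    from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

    frozen_func = convert_variables_to_constants_v2(full_model)
    print("inputs:", frozen_func.inputs)    # one tensor per entry in the tuple
    print("outputs:", frozen_func.outputs)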

Feature extraction in .pb model

Hi,
I want to know how and when the feature extraction step happens while doing inference with a .pb file. Also, if I am using two .pb files to make two inferences on the same input, is there a way feature extraction can be done only once?

inputs/outputs name

    full_model = tf.function(lambda x: model(x))
    full_model = full_model.get_concrete_function(x=(tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype), tf.TensorSpec(model.inputs[1].shape, model.inputs[1].dtype), tf.TensorSpec(model.inputs[2].shape, model.inputs[2].dtype)))

    # Get frozen ConcreteFunction
    # https://github.com/tensorflow/tensorflow/issues/36391#issuecomment-596055100
    frozen_func = convert_variables_to_constants_v2(full_model, lower_control_flow=False)
    frozen_func.graph.as_graph_def()

I want to change the input and output names in frozen_func. Can you please tell me how to do that?
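For the input side, a sketch (the name= values are arbitrary): tf.TensorSpec accepts a name argument, so each spec in the tuple can carry its own name. The output side is the default Identity naming discussed in the "Output layer signature rename" issue above.

    full_model = full_model.get_concrete_function(
        x=(tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype, name="input_a"),
           tf.TensorSpec(model.inputs[1].shape, model.inputs[1].dtype, name="input_b"),
           tf.TensorSpec(model.inputs[2].shape, model.inputs[2].dtype, name="input_c")))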

Fix function signatures for back compatibility

Hello, @leimao

Thank you very much for your research and for this tool in general; without it I wouldn't have gotten anywhere with the frozen-graph problem.

Please share your thoughts on the problem of inconsistent op signatures when moving graphs from TF 2.x to TF 1.x. For example, if we have a Conv2D op that carries the explicit_paddings attribute in TF 2.x and we want to deploy the graph on TF 1.12, we'll see an error:

Traceback (most recent call last):
  File "/root/anaconda2/envs/tf1/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 418, in import_graph_def
    graph._c_graph, serialized, options)  # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: NodeDef mentions attr 'explicit_paddings' not in Op<name=Conv2D; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_HALF, DT_BFLOAT16, DT_FLOAT, DT_DOUBLE]; attr=strides:list(int); attr=use_cudnn_on_gpu:bool,default=true; attr=padding:string,allowed=["SAME", "VALID"]; attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW"]; attr=dilations:list(int),default=[1, 1, 1, 1]>; NodeDef: {{node import/spnas_net/features/init_block/conv1/conv/conv/Conv2D}} = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], explicit_paddings=[], padding="VALID", strides=[1, 2, 2, 1], use_cudnn_on_gpu=true](import/spnas_net/features/init_block/conv1/conv/zero_padding2d/Pad, import/spnas_net/features/init_block/conv1/conv/conv/Conv2D/ReadVariableOp). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.)

Is it possible to rewrite the frozen graph so its node signatures become valid for the older TF?
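One commonly suggested workaround, sketched below (it only helps when the newer attribute carries an empty or default value, as explicit_paddings=[] does above; file names are placeholders): delete the unknown attr from the GraphDef before importing it in the older TF.

    import tensorflow as tf

    graph_def = tf.compat.v1.GraphDef()  # plain tf.GraphDef() on TF 1.12
    with open("frozen_graph.pb", "rb") as f:
        graph_def.ParseFromString(f.read())

    for node in graph_def.node:
        # Drop the attr that TF 1.12's Conv2D op definition doesn't know.
        if node.op == "Conv2D" and "explicit_paddings" in node.attr:
            del node.attr["explicit_paddings"]

    with open("frozen_graph_tf112.pb", "wb") as f:
        f.write(graph_def.SerializeToString())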

My model has no attribute 'inputs'

My model produces errors in get_concrete_function:

    full_model = full_model.get_concrete_function(
        tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype, name="Input_1"))

line 37, in
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype, name="Input_1"))
AttributeError: '_UserObject' object has no attribute 'inputs'
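A sketch of the usual workaround when the loaded object is a plain SavedModel (_UserObject) rather than a Keras model: such objects have no .inputs attribute, so take a ConcreteFunction from its signatures and freeze that instead (the signature key below is the common default; check your model's actual keys):

    import tensorflow as tf
    from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

    loaded = tf.saved_model.load("path/to/saved_model")  # path is a placeholder
    print(list(loaded.signatures.keys()))                # e.g. ['serving_default']
    func = loaded.signatures['serving_default']
    frozen_func = convert_variables_to_constants_v2(func)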

"cannot fine the Placeholder op that is an input"

My code:

    @tf.function()
    def infer(input):
        output = model(input)
        return output

    gpt2 = infer.get_concrete_function(
        tf.TensorSpec(shape=(None, None), dtype=tf.int64, name='input'))

    # The error occurs here; the error message is the title.
    gpt2_function = convert_variables_to_constants_v2(gpt2)

GraphDef version incompatibility

Hi

I've encountered a problem when loading the frozen graph with your example code:

ValueError: Converting GraphDef to Graph has failed. The binary trying to import the GraphDef was built when GraphDef version was 440. The GraphDef was produced by a binary built when GraphDef version was 527. The difference between these versions is larger than TensorFlow's forward compatibility guarantee.

Any ideas how to solve this problem?

Thank you~

Question about 'NoneType' object is not subscriptable

Congrats on the awesome work and thanks for sharing.

I created a model via subclassing and want to convert the saved model to a frozen graph.

But I got a NoneType error at the line x=tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))

model = tf.keras.models.load_model(saved_model, custom_objects={'loss_fn': loss_fn})
model.summary()

# Convert Keras model to ConcreteFunction
full_model = tf.function(lambda x: model(x))
full_model = full_model.get_concrete_function(
         x=tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))

Do you know how to solve this problem?

Thanks a lot; I look forward to your reply.
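A sketch of one workaround (the shape and dtype below are placeholders to fill in for your model): subclassed models don't populate model.inputs until they are built, so model.inputs is None here; passing an explicit TensorSpec avoids reading it at all:

    full_model = tf.function(lambda x: model(x))
    full_model = full_model.get_concrete_function(
        x=tf.TensorSpec(shape=[None, 224, 224, 3], dtype=tf.float32))  # hypothetical shape/dtype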
