
tools's People

Contributors

ashahba, ashraf-bhuiyan, chensuyue, chuanqi129, claynerobison, dbyoung18, devarajbh, dmsuehir, ghgmc2, guomingz, jitendra42, junpfeng, karthikvadla, mahmoud-abuzaina, mdfaijul, mhbuehler, mjkyung, nhasabni, pallavigopal, riverliuintel, s1113950, srini511, tonyreina, wafaat

tools's Issues

Quantizing the Official TF Resnet Model

I'm trying to compare the accuracy of ResNet against its quantized version. First, I downloaded the resnet_v1 saved_model and used TensorFlow's freeze_graph tool to freeze the graph.
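For concreteness, a sketch of that freeze step, assuming freeze_graph is fed the SavedModel directory directly (the paths and the output node name are placeholders, not values from any guide):

# Freeze the downloaded resnet_v1 SavedModel into a single GraphDef.
python tensorflow/python/tools/freeze_graph.py \
    --input_saved_model_dir=/path/to/resnet_v1_saved_model \
    --output_graph=/path/to/frozen_resnet_v1.pb \
    --output_node_names=OUTPUT_NODE_NAMES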

I then followed the guide here up to the middle of step 6, at which point I had the model in the logging format (logged_quantized_graph.pb).
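For reference, the eight-bit quantization command from that guide is along these lines (a sketch: paths and the output node name are placeholders, and the logging-insertion part of step 6 is not shown here):

# Convert the frozen FP32 graph into a dynamically quantized int8 graph.
python tensorflow/tools/quantization/quantize_graph.py \
    --input=/path/to/frozen_resnet_v1.pb \
    --output=/path/to/quantized_dynamic_range_graph.pb \
    --output_node_names=OUTPUT_NODE_NAMES \
    --mode=eightbit \
    --intel_cpu_eightbitize=True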

When I tried to run inference, however, the computation graph failed to load. I ran into this error:

tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'QuantizedConv2D' used by node import/resnet_model/conv2d_1/Conv2D_eightbit_quantized_conv (defined at eval_resnet.py:471) with these attrs: [out_type=DT_QINT32, Tinput=DT_QUINT8, dilations=[1, 1, 1, 1], strides=[1, 1, 1, 1], Tfilter=DT_QINT8, padding="SAME"]
Registered devices: [CPU, XLA_CPU]
Registered kernels:
  device='CPU'; Tinput in [DT_QUINT8]; Tfilter in [DT_QUINT8]; out_type in [DT_QINT32]

It looks like the graph uses a qint8 filter while the registered kernels only support a quint8 filter for the QuantizedConv2D op. I found a related issue and tried building TensorFlow from source with the --config=mkl and --copt="-DINTEL_MKL_QUANTIZED" flags, then tried loading the graph again with that custom build.
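For reference, a sketch of what that source build involves, assuming the standard TF 1.x pip-package workflow (the bazel target, output directory, and pip step are my reconstruction; only the two flags above come from the related issue):

# Build a TensorFlow pip package with MKL and the quantized-ops define enabled.
bazel build --config=mkl --copt="-DINTEL_MKL_QUANTIZED" \
    //tensorflow/tools/pip_package:build_pip_package
# Package the build and install the resulting wheel.
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-*.whl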

However, now I run into this error which I can't seem to resolve:

2019-09-09 11:14:59.945553: F tensorflow/core/graph/mkl_layout_pass.cc:3459] Non-OK-status: ret_status status: Invalid argument: NodeDef mentions attr 'narrow_range' not in Op<name=_MklQuantizeV2; signature=input:float, min_range:float, max_range:float, mkl_input:uint8, mkl_min_range:uint8, mkl_max_range:uint8 -> output:T, output_min:float, output_max:float, mkl_output:uint8, mkl_output_min:uint8, mkl_output_max:uint8; attr=T:type,allowed=[DT_QINT8, DT_QUINT8, DT_QINT32, DT_QINT16, DT_QUINT16]; attr=mode:string,default="SCALED",allowed=["MIN_COMBINED", "MIN_FIRST", "SCALED"]; attr=round_mode:string,default="HALF_TO_EVEN",allowed=["HALF_AWAY_FROM_ZERO", "HALF_TO_EVEN"]>; NodeDef: {{node import/resnet_model/conv2d_28/Conv2D_eightbit_quantize_resnet_model/Relu_24}}. (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
Aborted (core dumped)

I was unable to find documentation on how to quantize the official ResNet model and run the fine-tuning. Is there something I'm missing?

Thanks,
Max

google.protobuf.text_format.ParseError: 2:1 : ';': Expected identifier or number, got ;.

I was trying to run the command below and got this error:

root@00000:/workspace/tensorflow# python tensorflow/python/tools/freeze_graph.py \
>      --input_graph /workspace/quantization/RL_S2S_1544356761_saved_model.pb \
>      --output_graph /workspace/quantization/freezed_graph.pb \
>      --input_binary False \
>      --input_checkpoint /workspace/quantization/model.checkpoint-27000.data-00000-of-00001 \
>      --output_node_names OUTPUT_NODE_NAMES
Traceback (most recent call last):
  File "tensorflow/python/tools/freeze_graph.py", line 491, in <module>
    run_main()
  File "tensorflow/python/tools/freeze_graph.py", line 488, in run_main
    app.run(main=my_main, argv=[sys.argv[0]] + unparsed)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/usr/local/lib/python2.7/dist-packages/absl/app.py", line 300, in run
    _run_main(main, args)
  File "/usr/local/lib/python2.7/dist-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "tensorflow/python/tools/freeze_graph.py", line 487, in <lambda>
    my_main = lambda unused_args: main(unused_args, flags)
  File "tensorflow/python/tools/freeze_graph.py", line 381, in main
    flags.saved_model_tags, checkpoint_version)
  File "tensorflow/python/tools/freeze_graph.py", line 340, in freeze_graph
    input_graph_def = _parse_input_graph_proto(input_graph, input_binary)
  File "tensorflow/python/tools/freeze_graph.py", line 253, in _parse_input_graph_proto
    text_format.Merge(f.read(), input_graph_def)
  File "/usr/local/lib/python2.7/dist-packages/google/protobuf/text_format.py", line 574, in Merge
    descriptor_pool=descriptor_pool)
  File "/usr/local/lib/python2.7/dist-packages/google/protobuf/text_format.py", line 631, in MergeLines
    return parser.MergeLines(lines, message)
  File "/usr/local/lib/python2.7/dist-packages/google/protobuf/text_format.py", line 654, in MergeLines
    self._ParseOrMerge(lines, message)
  File "/usr/local/lib/python2.7/dist-packages/google/protobuf/text_format.py", line 676, in _ParseOrMerge
    self._MergeField(tokenizer, message)
  File "/usr/local/lib/python2.7/dist-packages/google/protobuf/text_format.py", line 744, in _MergeField
    name = tokenizer.ConsumeIdentifierOrNumber()
  File "/usr/local/lib/python2.7/dist-packages/google/protobuf/text_format.py", line 1212, in ConsumeIdentifierOrNumber
    raise self.ParseError('Expected identifier or number, got %s.' % result)
google.protobuf.text_format.ParseError: 2:1 : ';': Expected identifier or number, got ;.

How do I fix this?
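For reference, the traceback above shows the .pb being read as a text proto (text_format.Merge), so one variant of the same call worth noting is sketched below, under two assumptions about this setup: that the file is a binary GraphDef, and that model.checkpoint-27000 is the checkpoint prefix.

python tensorflow/python/tools/freeze_graph.py \
    --input_graph=/workspace/quantization/RL_S2S_1544356761_saved_model.pb \
    --output_graph=/workspace/quantization/freezed_graph.pb \
    --input_binary=True \
    --input_checkpoint=/workspace/quantization/model.checkpoint-27000 \
    --output_node_names=OUTPUT_NODE_NAMES
# --input_binary=True parses the .pb as a binary proto; the checkpoint prefix
# (without the .data-00000-of-00001 suffix) is an assumption about this setup.

If the .pb is actually a SavedModel export rather than a plain GraphDef, freeze_graph's --input_saved_model_dir flag pointed at the export directory may be the more appropriate route.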

Quantize BERT

I modified this tool to support quantizing the BERT model, but after quantizing MatMul and BiasAdd, I replaced the fp32 pb file with the int8 pb file and all of the outputs are wrong.

RetinaNet Quantization

Hi,

I am trying to quantize a RetinaNet topology trained with TensorFlow, but I am getting an error. These are the steps I followed, based on the instructions at https://github.com/IntelAI/tools/tree/master/tensorflow_quantization:

  1. I was able to generate an optimized_graph.pb using the command:
    bazel-bin/tensorflow/tools/graph_transforms/transform_graph --in_graph=/workspace/quantization/frozen_inference_graph.pb --out_graph=/workspace/quantization/optimized_graph.pb --inputs="input_1" --outputs="bboxes,scores,classes" --transforms="fold_batch_norms"

  2. But when I tried to run the quantization using this command:

python tensorflow/tools/quantization/quantize_graph.py --input=/workspace/quantization/optimized_graph.pb --output=/workspace/quantization/quantized_dynamic_range_graph.pb --output_node_names="bboxes,scores,classes" --mode=eightbit --intel_cpu_eightbitize=True

I got this error:

W0422 17:10:22.236689 140385778120448 deprecation.py:323] From tensorflow/tools/quantization/quantize_graph.py:540: remove_training_nodes (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.remove_training_nodes
2019-04-22 17:10:22.323616: I tensorflow/core/platform/cpu_feature_guard.cc:145] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance critical operations: AVX512F
To enable them in non-MKL-DNN operations, rebuild TensorFlow with the appropriate compiler flags.
2019-04-22 17:10:22.345101: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2095090000 Hz
2019-04-22 17:10:22.360829: I tensorflow/compiler/xla/service/service.cc:162] XLA service 0x1d7bfdd0 executing computations on platform Host. Devices:
2019-04-22 17:10:22.360862: I tensorflow/compiler/xla/service/service.cc:169] StreamExecutor device (0): ,
2019-04-22 17:10:22.367186: I tensorflow/core/common_runtime/process_util.cc:92] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
W0422 17:10:22.368036 140385778120448 deprecation.py:323] From tensorflow/tools/quantization/quantize_graph.py:406: quantize_v2 (from tensorflow.python.ops.array_ops) is deprecated and will be removed after 2017-10-25.
Instructions for updating:
tf.quantize_v2 is deprecated, please use tf.quantization.quantize instead.
Traceback (most recent call last):
  File "tensorflow/tools/quantization/quantize_graph.py", line 1951, in <module>
    app.run()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/usr/local/lib/python2.7/dist-packages/absl/app.py", line 300, in run
    _run_main(main, args)
  File "/usr/local/lib/python2.7/dist-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "tensorflow/tools/quantization/quantize_graph.py", line 1937, in main
    output_graph = rewriter.rewrite(FLAGS.output_node_names.split(","))
  File "tensorflow/tools/quantization/quantize_graph.py", line 583, in rewrite
    self.output_graph)
  File "tensorflow/tools/quantization/quantize_graph.py", line 1733, in remove_redundant_quantization
    old_nodes_map = self.create_nodes_map(old_graph)
  File "tensorflow/tools/quantization/quantize_graph.py", line 506, in create_nodes_map
    raise ValueError("Duplicate node names detected.")
ValueError: Duplicate node names detected.
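As an aside (my suggestion, not part of the linked instructions): before the quantize_graph step above, the optimized graph can be sanity-checked with the summarize_graph tool from the same graph_transforms package, for example to confirm the input/output node names and op counts:

# Inspect inputs, outputs, and op counts of the optimized graph.
bazel build tensorflow/tools/graph_transforms:summarize_graph
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph \
    --in_graph=/workspace/quantization/optimized_graph.pb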
