First of all, thank you so much for this repository.
I'm running into an issue when trying to use your code.
```
TFLiteConverter: using tensorflow v2.8.0
Optimizing for model size and inference latency
2022-02-15 11:52:18.953106: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:357] Ignored output_format.
2022-02-15 11:52:18.953168: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:360] Ignored drop_control_dependency.
2022-02-15 11:52:18.953563: I tensorflow/cc/saved_model/reader.cc:43] Reading SavedModel from: 0008_sitw_v2_1a.tf.mdl
2022-02-15 11:52:18.971387: I tensorflow/cc/saved_model/reader.cc:78] Reading meta graph with tags { serve }
2022-02-15 11:52:18.971459: I tensorflow/cc/saved_model/reader.cc:119] Reading SavedModel debug info (if present) from: 0008_sitw_v2_1a.tf.mdl
2022-02-15 11:52:19.016940: I tensorflow/cc/saved_model/loader.cc:228] Restoring SavedModel bundle.
2022-02-15 11:52:19.159107: I tensorflow/cc/saved_model/loader.cc:212] Running initialization op on SavedModel bundle at path: 0008_sitw_v2_1a.tf.mdl
2022-02-15 11:52:19.250234: I tensorflow/cc/saved_model/loader.cc:301] SavedModel load for tags { serve }; Status: success: OK. Took 296672 microseconds.
loc(callsite(callsite(fused["Conv2D:", "wav2xvec/mfcc2xvec/tdnn1.affine/Conv2D@__inference__wrapped_model_13657"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_17248"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"])): error: 'tf.Conv2D' op is neither a custom op nor a flex op
error: failed while converting: 'main':
Some ops are not supported by the native TFLite runtime, you can enable TF kernels fallback using TF Select. See instructions: https://www.tensorflow.org/lite/guide/ops_select
TF Select ops: Conv2D
Details:
tf.Conv2D(tensor<*xf32>, tensor<1x5x30x512xf32>) -> (tensor<?x?x?x512xf32>) : {data_format = "NHWC", device = "", dilations = [1, 1, 1, 1], explicit_paddings = [], padding = "VALID", strides = [1, 1, 1, 1], use_cudnn_on_gpu = true}
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/kaldi-tflite/kaldi_tflite/lib/models/convert_tflite.py", line 73, in SavedModel2TFLite
    tfliteModel = converter.convert()
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/lite.py", line 803, in wrapper
    return self._convert_and_export_metrics(convert_func, *args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/lite.py", line 789, in _convert_and_export_metrics
    result = convert_func(self, *args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/lite.py", line 1084, in convert
    return self._convert_from_saved_model(graph_def)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/lite.py", line 967, in _convert_from_saved_model
    result = _convert_saved_model(**converter_kwargs)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/convert_phase.py", line 213, in wrapper
    raise converter_error from None  # Re-throws the exception.
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/convert_phase.py", line 206, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/convert.py", line 789, in convert_saved_model
    data = convert(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/convert.py", line 306, in convert
    raise converter_error
tensorflow.lite.python.convert_phase.ConverterError: <unknown>:0: error: loc(callsite(callsite(fused["Conv2D:", "wav2xvec/mfcc2xvec/tdnn1.affine/Conv2D@__inference__wrapped_model_13657"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_17248"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"])): 'tf.Conv2D' op is neither a custom op nor a flex op
<unknown>:0: note: loc(fused["StatefulPartitionedCall:", "StatefulPartitionedCall"]): called from
<unknown>:0: note: loc(callsite(callsite(fused["Conv2D:", "wav2xvec/mfcc2xvec/tdnn1.affine/Conv2D@__inference__wrapped_model_13657"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_17248"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"])): Error code: ERROR_NEEDS_FLEX_OPS
<unknown>:0: error: failed while converting: 'main':
Some ops are not supported by the native TFLite runtime, you can enable TF kernels fallback using TF Select. See instructions: https://www.tensorflow.org/lite/guide/ops_select
TF Select ops: Conv2D
Details:
tf.Conv2D(tensor<*xf32>, tensor<1x5x30x512xf32>) -> (tensor<?x?x?x512xf32>) : {data_format = "NHWC", device = "", dilations = [1, 1, 1, 1], explicit_paddings = [], padding = "VALID", strides = [1, 1, 1, 1], use_cudnn_on_gpu = true}
```
I am not doing anything new; I was just following your recipe. I thought this model was compatible with the TFLITE_BUILTINS op set for TFLite.
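In the meantime, the one workaround I can see is the TF Select (Flex) fallback that the error message itself points to. This is just a sketch based on the linked TFLite guide, not on this repo's intended conversion path, and the model path is the one from my log above; I don't know whether the unranked `tensor<*xf32>` input is what prevents `Conv2D` from lowering to the builtin kernel:

```python
import tensorflow as tf

SAVED_MODEL_DIR = "0008_sitw_v2_1a.tf.mdl"  # path from the log above

def convert_with_flex(saved_model_dir: str) -> bytes:
    """Convert a SavedModel to TFLite, allowing ops the native runtime
    rejects (here Conv2D) to fall back to TF (Flex) kernels."""
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS,  # use native TFLite kernels where possible
        tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TF kernels otherwise
    ]
    return converter.convert()

# tflite_model = convert_with_flex(SAVED_MODEL_DIR)
```

The downside, as I understand it, is that the resulting model then requires the Flex delegate at runtime and the binary gets larger, so I'd still prefer a pure-builtins conversion if that's what you intended.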
Thank you so much in advance.