
ios_tensorflow_objectdetection_example's Introduction

Tensorflow iOS ObjectDetection Example

This example demonstrates how to load an Object Detection model on the iOS platform and use it to run object detection. The currently supported models are: ssd_mobilenet_v1_coco, ssd_inception_v2_coco, faster_rcnn_resnet101_coco.

Quick Start

1.Setup Environment Variable in Terminal

First, open a terminal and enter the following command:

export TF_ROOT=/your//tensorflow/root/

Then cd to the example folder and check your TensorFlow version and the correctness of your TensorFlow root path:

bash config.sh

The config.sh script automatically checks your TensorFlow version and copies some files that are necessary for the compile process. After running config.sh, if the terminal shows the following result, you are ready for the next step:

ok=> current version: # Release 1.4.0
ok=> Ready!
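For reference, a minimal sketch of the kind of check config.sh performs (the actual script may differ; a stand-in RELEASE.md is created here, while the real one lives at $TF_ROOT/RELEASE.md):

```shell
# Illustrative only: the real config.sh may check the version differently.
printf '# Release 1.4.0\n' > RELEASE.md.demo          # stand-in for $TF_ROOT/RELEASE.md
version_line=$(grep -m1 '^# Release' RELEASE.md.demo)
echo "ok=> current version: ${version_line}"
# prints: ok=> current version: # Release 1.4.0
```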

Otherwise, please go to the TensorFlow official website and download the latest version of TensorFlow.

2.Compile dependencies

Compile the iOS dependencies:

cd $TF_ROOT
tensorflow/contrib/makefile/build_all_ios_ssd.sh

3.Setup project in Xcode

Open the project in Xcode. Then, in "tf_root.xcconfig", replace TF_ROOT with the absolute path of your TensorFlow root. Finally, add "op_inference_graph.pb" to your project folder.
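For reference, the edited tf_root.xcconfig ends up as a single assignment (the path below is illustrative):

```xcconfig
// tf_root.xcconfig: point Xcode at your TensorFlow checkout (example path)
TF_ROOT = /Users/you/tensorflow
```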

4.Build & Run the project

Note: If you'd like to run the other two models, download them from the above links and add the .pb file to your project.

5.Other Model Resource

For other model files, please check my other repo.

Result running on iOS device


Update content for TensorFlow 1.4.0

After updating TensorFlow to version 1.4.0, I made the following changes to ensure the example runs successfully:

  1. Follow the steps in the new Quick Start section.
  2. If you've tried a previous version of this example, you need to recompile the TensorFlow static library (libtensorflow-core.a), otherwise many kernels will fail to register:
cd $TF_ROOT
rm -r tensorflow/contrib/makefile/gen/lib/ tensorflow/contrib/makefile/gen/obj/
tensorflow/contrib/makefile/build_tflib_ssd.sh
  3. In the iOS project, add the new header search path, and make sure you use the right $(TF_ROOT):
$(TF_ROOT)/tensorflow/contrib/makefile/downloads/nsync/public/
  4. In the iOS project, add the new library search path:
$(TF_ROOT)/tensorflow/contrib/makefile/gen/nsync
  5. In Makefile_ios, comment out the line:
TF_CC_SRCS += tensorflow/core/platform/default/gpu_tracer.cc
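Commenting out that line can be done by hand or scripted; a stand-in demo (the real file is tensorflow/contrib/makefile/Makefile_ios in your TensorFlow tree):

```shell
# Demo on a stand-in file; run the same sed against the real Makefile_ios.
printf 'TF_CC_SRCS += tensorflow/core/platform/default/gpu_tracer.cc\n' > Makefile_ios.demo
sed -i.bak 's|^TF_CC_SRCS += tensorflow/core/platform/default/gpu_tracer.cc|# &|' Makefile_ios.demo
cat Makefile_ios.demo
# prints: # TF_CC_SRCS += tensorflow/core/platform/default/gpu_tracer.cc
```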

The content below gives a detailed explanation and FAQ for this example

Introduction

Google recently released the TensorFlow Object Detection API, which includes a selection of multiple models. However, the API does not include an iOS implementation. This example therefore provides an iOS implementation of the Object Detection API, including the SSDMobilenet model. It maintains the same functionality as the Python version of the Object Detection API. The iOS code is derived from Google's TensorFlow ios_camera_example.

Prerequisites

Installing

1.Xcode

You’ll need Xcode 7.3 or later.

2.Tensorflow

Download the Google Tensorflow repository to local: https://github.com/tensorflow/tensorflow

3.Bazel

If you don't have Bazel, please follow the Bazel's official installation process: https://docs.bazel.build/versions/master/install.html

4.Repository Download

Download this repository to local and put the directory into the tensorflow directory you just downloaded.

5.Graph Download

Follow the instructions below to download the model you want: https://github.com/tensorflow/models/blob/master/object_detection/g3doc/detection_model_zoo.md We only need the graph file, i.e. the .pb file (we chose SSDMobilenet as the example):

frozen_inference_graph.pb

Then download the label file for the model you chose: https://github.com/tensorflow/models/tree/master/object_detection/data

mscoco_label_map.pbtxt

Build

1.Build Bazel

Before you can run the project, you need to build some Bazel dependencies following Google's instructions. If this is your first time building with Bazel, follow the link below to configure the installation: https://www.tensorflow.org/install/install_sources#configure_the_installation

Optional:

If you'd like to get the names of the graph's inputs/outputs, use the following commands:

bazel build tensorflow/tools/graph_transforms:summarize_graph
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=YOUR_GRAPH_PATH/example_graph.pb

2.Change Makefile

The Makefile is under "tensorflow/contrib/makefile/".

  • In the Makefile, under "# Settings for iOS.", delete "-D__ANDROID_TYPES_SLIM__" from the flags for every "$(IOS_ARCH)".
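The flag removal can be scripted as well; a stand-in demo (the real edit is in tensorflow/contrib/makefile/Makefile, and the surrounding flags below are illustrative):

```shell
# Demo on a stand-in file; run the same sed against the real Makefile.
printf 'CXXFLAGS := -D__ANDROID_TYPES_SLIM__ -fembed-bitcode\n' > Makefile.demo
sed -i.bak 's/-D__ANDROID_TYPES_SLIM__ //' Makefile.demo
cat Makefile.demo
# prints: CXXFLAGS := -fembed-bitcode
```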

3.Generate ops_to_register.h

One of the biggest issues when building TensorFlow for iOS is missing OpKernels. You may get errors like the one below:

Invalid argument: No OpKernel was registered to support Op 'Equal' with these attrs.  Registered devices: [CPU], Registered kernels:
  <no registered kernels>

To solve these problems in one go, we use Bazel to generate an ops_to_register.h containing all the Ops needed to load a given graph into the project. An example of command-line usage is:

  bazel build tensorflow/python/tools:print_selective_registration_header 
  bazel-bin/tensorflow/python/tools/print_selective_registration_header \
    --graphs=path/to/graph.pb > ops_to_register.h

This generates an ops_to_register.h file in the current directory. Copy the file to "tensorflow/core/framework/". Then, when compiling TensorFlow, pass -DSELECTIVE_REGISTRATION and -DSUPPORT_SELECTIVE_REGISTRATION. See tensorflow/core/framework/selective_registration.h for more details.

Attention:

Each model needs an ops_to_register.h that matches it. Therefore, if you'd like to include several models in one project, first generate an ops_to_register.h for each model, then merge them into one file. This way you can use different models in one project without compiling the TensorFlow lib separately for each.

In this example, we provide a combined ops_to_register.h that is compatible with ssd_mobilenet_v1_coco, ssd_inception_v2_coco, and faster_rcnn_resnet101_coco.
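A naive way to merge two generated headers is a line-level union of their kernel entries; real ops_to_register.h files have more structure (the ShouldRegisterOp sections), so treat this as a sketch and verify the merged file still compiles. File names below are stand-ins:

```shell
# Two stand-in headers sharing one kernel-class line.
printf '"BinaryOp< CPUDevice, functor::less>",\n"Conv2DOp<CPUDevice, float>",\n' > ops_a.demo
printf '"BinaryOp< CPUDevice, functor::less>",\n"WhereOp",\n' > ops_b.demo
sort -u ops_a.demo ops_b.demo > ops_merged.demo   # union with duplicates removed
grep -c '' ops_merged.demo                        # line count of the merged list
# prints: 3
```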

4.Build Tensorflow iOS library

Instead of using build_all_ios for the build, we divide the process into several steps:

  • In tensorflow/contrib/makefile/compile_ios_protobuf.sh, add the line
export MACOSX_DEPLOYMENT_TARGET="10.10"

after

set -x
set -e
  • Download the dependencies:
tensorflow/contrib/makefile/download_dependencies.sh
  • Next, you will need to compile protobufs for iOS:
tensorflow/contrib/makefile/compile_ios_protobuf.sh 
  • Then create the libtensorflow-core.a:
tensorflow/contrib/makefile/compile_ios_tensorflow.sh "-O3  -DANDROID_TYPES=ANDROID_TYPES_FULL -DSELECTIVE_REGISTRATION -DSUPPORT_SELECTIVE_REGISTRATION"

If you'd like to shorten the build time, you can use the "compile_ios_tensorflow_s.sh" script provided in this repository. It compiles only two IOS_ARCHs, ARM64 and x86_64, which makes the build much shorter. Make sure to copy the file to the "tensorflow/contrib/makefile/" directory before building. The build command then becomes:

tensorflow/contrib/makefile/compile_ios_tensorflow_s.sh "-O3  -DANDROID_TYPES=ANDROID_TYPES_FULL -DSELECTIVE_REGISTRATION -DSUPPORT_SELECTIVE_REGISTRATION"

Make sure the script has generated the following .a files:

tensorflow/contrib/makefile/gen/lib/libtensorflow-core.a
tensorflow/contrib/makefile/gen/protobuf_ios/lib/libprotobuf.a
tensorflow/contrib/makefile/gen/protobuf_ios/lib/libprotobuf-lite.a

5.Xcode Configuration

Running

Before you run, make sure to recompile libtensorflow-core.a according to the modified Makefile. Otherwise, the following error may occur at runtime:

Error adding graph to session:
No OpKernel was registered to support Op 'Less' with these attrs.  
Registered devices: [CPU],     Registered kernels: device='CPU';
 T in [DT_FLOAT]......

Once you finish the above steps, you can run the project by clicking the Build button in Xcode.

Label Config

To get the label name for each detected box, you have to use Protocol Buffers. In the SSDMobilenet model, the label file is stored as a protobuf structure, so you need protobuf's own functions to extract the data.

To use Protocol Buffers, first install protobuf with:

brew install protobuf

Then follow https://developers.google.com/protocol-buffers/docs/cpptutorial to compile the proto file. After compiling, you'll get a .h and a .cc file containing the declaration and implementation of your classes:

example.pb.h
example.pb.cc

Finally, you can use the functions in these files to extract your label data.
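For the label map specifically, the protoc invocation looks like the following (assuming protobuf is installed and the .proto file from the tensorflow/models repo is in the current directory; adjust paths to your checkout):

```shell
# Generates string_int_label_map.pb.h and string_int_label_map.pb.cc
protoc --cpp_out=. string_int_label_map.proto
```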

FAQ

  1. If you still get errors like the following after finishing the above instructions:
Invalid argument: No OpKernel was registered to support Op 'xxx' with these attrs.  Registered devices: [CPU], Registered kernels:
<no registered kernels>
  • Solution: First check that you used the correct ops_to_register.h for the model you chose. Then check "tensorflow/contrib/makefile/tf_op_files.txt" and add "tensorflow/core/kernels/cwise_op_xxx.cc" to it if it is not there. Also make sure you added "-O3 -DANDROID_TYPES=ANDROID_TYPES_FULL -DSELECTIVE_REGISTRATION -DSUPPORT_SELECTIVE_REGISTRATION" when running "compile_ios_tensorflow_s.sh".

  2. Invalid argument: No OpKernel was registered to support Op 'Conv2D' with these attrs. Registered devices: [CPU], Registered kernels:

 [[Node: FeatureExtractor/InceptionV2/InceptionV2/Conv2d_1a_7x7/separable_conv2d = Conv2D[T=DT_FLOAT, data_format="NHWC", padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](FeatureExtractor/InceptionV2/InceptionV2/Conv2d_1a_7x7/separable_conv2d/depthwise, FeatureExtractor/InceptionV2/Conv2d_1a_7x7/pointwise_weights/read)]]
  • Solution: Because of their structure, these models use the GEMM implementation in their conv layers. However, the default Makefile does not use GEMM for the conv layer, so you need to manually replace a line in your ops_to_register.h: replace "Conv2DOp<CPUDevice, float>" with "Conv2DUsingGemmOp< float, Im2ColConvFunctor<float, float, float, FastGemmFunctor<float, float, float>>>" and the problem will be solved.
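That replacement can also be scripted; a stand-in demo (the real file is the ops_to_register.h you generated):

```shell
# Demo on a stand-in file; run the same sed against your ops_to_register.h.
printf '"Conv2DOp<CPUDevice, float>",\n' > ops.demo
sed -i.bak 's|Conv2DOp<CPUDevice, float>|Conv2DUsingGemmOp< float, Im2ColConvFunctor<float, float, float, FastGemmFunctor<float, float, float>>>|' ops.demo
cat ops.demo
# prints: "Conv2DUsingGemmOp< float, Im2ColConvFunctor<float, float, float, FastGemmFunctor<float, float, float>>>",
```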

ios_tensorflow_objectdetection_example's People

Contributors

jiehe96

ios_tensorflow_objectdetection_example's Issues

Issues in string_int_label_map.pb

First, awesome project! 😁

Getting a couple errors with the string_int_label_map.pb files included in your project.

First is:

#if 3003002 < GOOGLE_PROTOBUF_MIN_PROTOC_VERSION
#error This file was generated by an older version of protoc which is
#error incompatible with your Protocol Buffer headers.  Please
#error regenerate this file with a newer version of protoc.
#endif

Second is:
No member named 'Shutdown' in 'object_detection::protos::StringIntLabelMapItemDefaultTypeInternal'
for:

void TableStruct::Shutdown() {
  _StringIntLabelMapItem_default_instance_.Shutdown();
  delete file_level_metadata[0].reflection;
  _StringIntLabelMap_default_instance_.Shutdown();
  delete file_level_metadata[1].reflection;
}

It seems that these files should've been regenerated when I built tensorflow, but I can't find them in the gen directory.

Any ideas?

Thanks!

1 duplicate symbol for architecture arm64

I followed the instruction and build the 3 static libraries. But when building the app, this problem showed up.


duplicate symbol __ZN10tensorflow15CreateGPUTracerEv in:
/Users/yingjie/tensorflow-master/tensorflow/contrib/makefile/gen/lib/libtensorflow-core.a(gpu_tracer.o)
ld: 1 duplicate symbol for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)

Could you help me solve it? Or could you share the final version of the Makefile you used (the modified one) for building the ios tensorflow library? I have no idea why I got this problem with gpu_tracer.o.

Hope to hear the advice from you soon.

tensorflow/core/lib/strings/numbers.cc:26:10: fatal error: 'double-conversion/double-conversion.h' file not found

Quick Start

2.Compile dependencies

Compile ios dependencies:

cd $TF_ROOT
tensorflow/contrib/makefile/build_all_ios_ssd.sh
tensorflow/core/lib/strings/numbers.cc:26:10: fatal error: 
      'double-conversion/double-conversion.h' file not found
#include "double-conversion/double-conversion.h"
         ^
1 error generated.
make: *** [/Users/admin/tensorflow-master/tensorflow/contrib/makefile/gen/host_obj/tensorflow/core/lib/strings/numbers.o] Error 1
make: *** Waiting for unfinished jobs....
+ '[' 2 -ne 0 ']'
+ echo 'arm64 compilation failed.'
arm64 compilation failed.
+ exit 1

nsync directory not generated

Hello,Jie,
I met the problem below; even using the official script gives the same result.
ld: warning: directory not found for option '-L/Users/schubert/Documents/tensorflow-master/tensorflow/contrib/makefile/gen/nsync'
ld: library not found for -lnsync

No OpKernel was registered to support Op 'Conv2D' with these attrs

Hi, I followed the steps and ran into this problem:
Invalid argument: No OpKernel was registered to support Op 'Conv2D' with these attrs. Registered devices: [CPU], Registered kernels:

 [[Node: FeatureExtractor/InceptionV2/InceptionV2/Conv2d_1a_7x7/separable_conv2d = Conv2D[T=DT_FLOAT, data_format="NHWC", padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](FeatureExtractor/InceptionV2/InceptionV2/Conv2d_1a_7x7/separable_conv2d/depthwise, FeatureExtractor/InceptionV2/Conv2d_1a_7x7/pointwise_weights/read)]]

We also tried SSD-mobilenet and hit the same problem.
I have also tried the methods you gave, but they don't seem to work. Could you help me? Thanks~

three images for tf.session->run

Do you know why we send in three images as the input ?

out_tensors->push_back(resized_tensor);
out_tensors->push_back(image_tensors_org);
out_tensors->push_back(image_tensors_org4);

Seems like we send tensors of shape:
image_tensors[0]: Tensor<type: float shape: [1,224,224,3]
image_tensors[1]: Tensor<type: uint8 shape: [1,900,1352,3]
image_tensors[2]: Tensor<type: uint8 shape: [1,900,1352,4]

though according to summarize_graph
Found 1 possible inputs: (name=image_tensor, type=uint8(4), shape=[1,?,?,3])

Please rewrite project to use CocoaPods

it would be much easier to install tensorflow using CocoaPods https://www.tensorflow.org/mobile/ios_build

building from source is so much pain (when you don't even have a mac, but use virtual machines)

I tried to change it to CocoaPods by myself, but as I saw, the project also needs the Google Protobuf files (this lib is created/downloaded only when building tensorflow ios from source using the build script build_all_ios_ssd.sh)

So it would be great to update it to CocoaPods, resolve issue #11, and remove TF-Root from the project settings (not needed when tensorflow comes from CocoaPods).

tensorflow zoo version

Hi
According to the tensorflow models zoo page, they updated all their models to tf 1.5.0,
so they might not work with tf 1.4.0.
I'd recommend adding the *.pb file with the right version to the repo.

TensorFlow iOS Build

Hi, it seems there is no build_ios_ssd.sh, only build_ios.sh. How do I follow your instructions in that case? Many thanks for any reply.

Speed Issue

Hi JieHe,

Thanks a lot for sharing your awesome work. I have tried to build your app following your instructions. But the speed is much slower than I expected.

I am using an iPhone 6s Plus for the test. The speed I get is around 1 sec per frame.
I was wondering what speed you got.
Do you have any ideas for improving the speed to make it realtime?

Still got "No OpKernel was registered to support Op 'Less' " issues.

Sorry to bother you again. I still have this issue when I try to load the static library into the official tensorflow-ios camera code. However, the same library works fine with your code. So can I assume the ops were successfully generated, as listed in ops_to_register.h?

Couldn't load model: Invalid argument: No OpKernel was registered to support Op 'Less' with these attrs. Registered devices: [CPU], Registered kernels:
device='CPU'; T in [DT_FLOAT]

I am wondering what trick you used to avoid this error? In the official project I use pods as the package manager. Is that the problem? It seems my Less op is only registered with float, not int32 as in the ssd-mobilenet model, although "BinaryOp< CPUDevice, functor::less>" is listed in ops_to_register.h.

I am desperate with this weird issue. Could you please help me with it? ToT

Unable to compile

Hi,

While running the tensorflow/contrib/makefile/build_all_ios_ssd.sh script, I encountered the following compilation error:

tensorflow/tensorflow/contrib/makefile/gen/protobuf-host/bin/protoc  tensorflow/core/grappler/costs/op_performance_data.proto --cpp_out tensorflow/tensorflow/contrib/makefile/gen/host_obj/
tensorflow/tools/git/gen_git_source.sh tensorflow/core/util/version_info.cc
make: *** No rule to make target `tensorflow/tensorflow/contrib/makefile/gen/obj/ios_ARM64/tensorflow/core/common_runtime/gpu/gpu_tracer.o', needed by `/Users/tomiaijo/tensorflow/tensorflow/contrib/makefile/gen/lib/ios_ARM64/libtensorflow-core-arm64.a'.  Stop.
make: *** Waiting for unfinished jobs....
+ '[' 2 -ne 0 ']'
+ echo 'arm64 compilation failed.'
arm64 compilation failed.
+ exit 1

Tensorflow is 1.2 (e92aed):

$ cat RELEASE.md | grep "1.2.0"  #make sure the output is "Relase 1.2.0"
# Release 1.2.0

Any ideas what might have gone wrong?

Using latest tensorflow 1.4 having issues with string_int_label_map.pb.h

i have this line in ./tensorflow/contrib/makefile/downloads/protobuf/src/google/protobuf/stubs/common.h

#define GOOGLE_PROTOBUF_VERSION 3004000
and this:
#define GOOGLE_PROTOBUF_MIN_PROTOC_VERSION 3004000

when I build the project it is complaining about protobuf version in file string_int_label_map.pb.h

#if 3003002 < GOOGLE_PROTOBUF_MIN_PROTOC_VERSION
#error This file was generated by an older version of protoc which is
#error incompatible with your Protocol Buffer headers. Please
#error regenerate this file with a newer version of protoc.
#endif

what commit of version 1.4 are you using? maybe I can roll back to that commit to use older version of protobuf for ios?

Suggestions on "No OpKernel was registered to support Op..."

Thanks for this project, especially the walk through of project compiling.

Here I tried to follow every step of the walkthrough and made sure nothing was left undone, but I still get the error "No OpKernel was registered to support Op..."; the Op could be 'All', 'Where', 'Less', etc. I checked all the files, made sure they were defined, and recompiled tensorflow. Therefore I have a suggestion: would you please share the 'libtensorflow-core.a' you precompiled, for architectures arm64 and x86_64 as specified in the example? Maybe we can at least try it and see what's going on there...

Thank you.

Camera Demo

Hi,
Is there a reason there is no camera demo?
Just due to complexity or because of some kind of limitation?

I am also wondering why this works, when I can't find any Object Detection models from TF that have been converted to coreml anywhere.
For example these: ssd_mobilenet_v1_coco, ssd_inception_v2_coco, faster_rcnn_resnet101_coco

It seems on the TensorFlow repo there is discussion about how the model is not supported in TFLite yet.

tensorflow/tensorflow#14670

Meanwhile on the tfcoreml repo there is an example script for converting the SSD android model:
https://github.com/tf-coreml/tf-coreml/blob/master/examples/ssd_example.ipynb

I assume until TensorFlow Lite is stable it might be better to switch to using CoreML Models?

ImportError: No module named enum

When I run:
bazel-bin/tensorflow/python/tools/print_selective_registration_header \
--graphs=path/xx.pb > ops_to_register.h
I get the error below:
ImportError: No module named enum

How to fix this problem, help!

Ran into an issue with protobuf

I receive the following error in "within string_int_label_map.pb.h" when I run the project in xcode on my macOS.

"This file was generated by a newer version of protoc which is incompatible with your Protocol Buffer headers. Please update your headers."

How can I regenerate this file or point to a different location in xcode?

Thanks,
Todd

Invalid argument: No OpKernel was registered to support Op 'FusedBatchNorm' with these attrs. Registered devices: [CPU], Registered kernels: <no registered kernels>

Thanks for this awesome project !
After following the process mentioned in readme and other issues on this repo, I was able to run the SSD MobileNet model present in your tf_resource repo.
The following runtime error occurs when I try to run my own custom-trained SSD model (trained using the tensorflow object detection api itself):

Invalid argument: No OpKernel was registered to support Op 'FusedBatchNorm' with these attrs. Registered devices: [CPU], Registered kernels:

[[Node: FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/BatchNorm/FusedBatchNorm = FusedBatchNorm[T=DT_FLOAT, data_format="NHWC", epsilon=0.001, is_training=false](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/convolution, FeatureExtractor/MobilenetV1/Conv2d_0/BatchNorm/gamma/read, FeatureExtractor/MobilenetV1/Conv2d_0/BatchNorm/beta/read, FeatureExtractor/MobilenetV1/Conv2d_0/BatchNorm/moving_mean/read, FeatureExtractor/MobilenetV1/Conv2d_0/BatchNorm/moving_variance/read)]]

Now I looked for 'FusedBatchNorm' in the "ops_to_register.h" provided by you, and as suspected that operation was missing. So I generated an ops_to_register.h file myself using $TF_ROOT/tensorflow/python/tools/print_selective_registration_header.py, replaced the header file in $TF_ROOT/tensorflow/core/framework/, and rebuilt everything. Also, "tensorflow/core/kernels/fused_batch_norm_op.cc" is present in tf_op_files.txt.
But the error still persists.

After building many times the error wouldn't go.

FYI - I tried stripping unnecessary ops using $TF_ROOT/tensorflow/python/tools/strip_unused.py and using the stripped graph for detection. Then the above error is no longer there and the following error is produced.

Invalid argument: Input 0 of node ToFloat was passed float from image_tensor:0 incompatible with expected uint8.

Another thing I found while manually setting breakpoint and debugging is that both the errors are produced while loading the graphs and not while actually running for inference. That is at line 378 of tensorflow_utils.mm.

Any pointers will be appreciated. Thanks !

Encountering "No OpKernel" problem; tried the solutions offered in the README but failed

I use the pb file (in ssd_mobilenet_v1_coco_11_06_2017) offered by the author, and set up the right environment, but encountered the following problem. When I add "tensorflow/core/kernels/cwise_op_xxx.cc" to "tensorflow/contrib/makefile/tf_op_files.txt", other "Op 'xxx'" problems occur, so I don't think the author's solution is the best. If anyone can give some help, I would appreciate it!!!
No OpKernel was registered to support Op 'Equal' with these attrs. Registered devices: [CPU], Registered kernels:

 [[Node: Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/Equal = Equal[T=DT_INT32](Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/strided_slice, Postprocessor/

Not found: Op type not registered 'PlaceholderWithDefault' in binary running on iPhone. Make sure the Op and Kernel are registered in the binary running in this process.

Model retrained using : https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/#0

The final model generated is rounded_graph.pb and retrained_labels.txt.

The above model and labels are working fine in the sample application : https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/ios/camera

however when I put the pb and txt file with this sample application, on loading the graph I get the following error :
iOS_Tensorflow_ObjectDetection_Example/ex_SSD_Mobilenet_TF/tensorflow_utils.mm:379] Not found: Op type not registered 'PlaceholderWithDefault' in binary running on iPhone-ezy. Make sure the Op and Kernel are registered in the binary running in this process.

Tensorflow version : 1.4.0
OS : Mac 10.12

No OpKernel was registered to support Op 'Prod' with these attrs.

I'm trying to use your example with Tensorflow r1.4, and was able to successfully build and run with the ssd_mobilenet_v1_coco model with the ops_to_regsiter.h that includes ops for multiple models. However, when I tried to switch to faster_rcnn_resnet101_coco using the same build, I'm running into the following error when trying to load the model:

2017-10-30 18:00:28.307769: E [path]/tensorflow_utils.mm:209] Could not create TensorFlow Graph: Invalid argument: No OpKernel was registered to support Op 'Prod' with these attrs.  Registered devices: [CPU], Registered kernels:
  <no registered kernels>

	 [[Node: SecondStageBoxPredictor/Flatten/Prod = Prod[T=DT_INT32, Tidx=DT_INT32, keep_dims=false](SecondStageBoxPredictor/Flatten/Slice_1, SecondStageBoxPredictor/Flatten/Const)]]
2017-10-30 18:00:28.320101: F [path]/CameraExampleViewController.mm:495] Couldn't load model: Invalid argument: No OpKernel was registered to support Op 'Prod' with these attrs.  Registered devices: [CPU], Registered kernels:
  <no registered kernels>

	 [[Node: SecondStageBoxPredictor/Flatten/Prod = Prod[T=DT_INT32, Tidx=DT_INT32, keep_dims=false](SecondStageBoxPredictor/Flatten/Slice_1, SecondStageBoxPredictor/Flatten/Const)]]

From a closer examination, it looks like the "Prod" op isn't in the ops_to_register.h file, and there's no cwise_op_prod.cc file in the kernels to add to tf_op_files.txt. At the same time, I think the "Prod" op is already part of math_ops.cc(?), and I have included -DANDROID_TYPES=ANDROID_TYPES_FULL in the build to account for the expanded data types. So I have no clue why it's failing to load. Can anyone help?

'Undefined symbols for architecture x86_64' when running compile_ios_tensorflow.sh

I am trying to use the instruction in the README file to compile a Tensorflow static library that is compatible with my Mobilenet model file that is produced from tensorflow/examples/image_retraining/retrain.py. The error I get when I try to run this model file in the Tensorflow tf_camera_example app is No OpKernel was registered to support Op 'All' with these attrs.. I'm uncertain whether I should report this issue here or in the Tensorflow Issue tracker.

Here are all of the commands I execute, starting in Tensorflow root:

git checkout v1.4.0
export TF_ROOT=/Users/ashleysands/code/tensorflow-v1.4/
cd ~/code/iOS_Tensorflow_ObjectDetection_Example/config/
bash config.sh
cd $TF_ROOT
./configure
vim tensorflow/contrib/makefile/Makefile
bazel build tensorflow/python/tools:print_selective_registration_header
bazel-bin/tensorflow/python/tools/print_selective_registration_header \     --graphs=/Users/ashleysands/code/data/mobilenet_1.0_224_quantized.pb > ops_to_register.h
cp ops_to_register.h tensorflow/core/framework/
vim tensorflow/contrib/makefile/compile_ios_protobuf.sh
tensorflow/contrib/makefile/download_dependencies.sh
tensorflow/contrib/makefile/compile_ios_protobuf.sh
tensorflow/contrib/makefile/compile_ios_tensorflow.sh "-O3  -DANDROID_TYPES=ANDROID_TYPES_FULL -DSELECTIVE_REGISTRATION -DSUPPORT_SELECTIVE_REGISTRATION"

The very last command I get this error:

Undefined symbols for architecture x86_64:
  "nsync::nsync_mu_init(nsync::nsync_mu_s_*)", referenced from:
      tensorflow::mutex::mutex() in env.o
      tensorflow::mutex::mutex() in random.o
  "nsync::nsync_mu_lock(nsync::nsync_mu_s_*)", referenced from:
      tensorflow::mutex::lock() in env.o
      tensorflow::mutex::lock() in random.o
      tensorflow::mutex::lock() in histogram.o
  "nsync::nsync_mu_unlock(nsync::nsync_mu_s_*)", referenced from:
      tensorflow::mutex::unlock() in env.o
      tensorflow::mutex::unlock() in random.o
      tensorflow::mutex::unlock() in histogram.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [/Users/ashleysands/code/tensorflow-v1.4/tensorflow/contrib/makefile/gen/host_bin/proto_text] Error 1

+ '[' 2 -ne 0 ']'
+ echo 'armv7 compilation failed.'
armv7 compilation failed.
+ exit 1

I am using macOS 10.13.3 on a Macbook air (Mid 2012) with Bazel 0.8.1 and Xcode 9.2.

Here's my modified Makefile just in case I didn't change it correctly.
Makefile.txt

Here's my ops_to_register.h file:
ops_to_register.h.txt

I've been banging my head against this problem for over a week, and the README in this repo is the best resource I have found on the web for my exact problem.

No OpKernel was registered to support Op 'All' with these attrs

I built the demo following the Quick Start (tensorflow/contrib/makefile/build_all_ios_ssd.sh) and it shows this problem:

Invalid argument: No OpKernel was registered to support Op 'All' with these attrs. Registered devices: [CPU], Registered kernels:

 [[Node: assert_equal/All = All[Tidx=DT_INT32, keep_dims=false](assert_equal/Equal, assert_equal/Const)]]

How do I fix this error?
