
edgetpu-minimal-example's Introduction

Hi, I'm Nam 👋

Some info that GitHub suggests I put here:

💬 Ask me about my dog

📫 How to reach me: [email protected]

😄 Pronouns: nam burger

Check out my most recent tutorial:

Open In Colab

I also write about non-tech stuff:

edgetpu-minimal-example's People

Contributors

mc-requtech, namburger


edgetpu-minimal-example's Issues

std::bad_alloc in model_utils.cc

Hey there!
It's me with the annoying setup :D
Right now I'm still working inside a Docker container (ros:foxy) with Ubuntu 20.04.
I tried using this code inside ROS, which is actually quite easy because I can simply add a few ROS-specific lines to the CMakeLists.txt and I'm ready to go.
Or at least I thought so... My new issue is kind of interesting, and I'm not sure where the problem is, so I thought I'd ask here first.

Building this project as-is works fine; the output is as expected.
But when I try to build and run it with ROS, the code simply stops at a certain point. No error, no output; it just finishes without any remark.
I modified model_utils.cc for debugging purposes:

std::array<int, 3> GetInputShape(const tflite::Interpreter& interpreter, int index)
{
  std::cout << "hey" << std::endl;
  const int tensor_index = interpreter.inputs()[index];
  std::cout << "ho" << std::endl;
  const TfLiteIntArray* dims = interpreter.tensor(tensor_index)->dims;
  std::cout << "let's go!" << std::endl;
  return std::array<int, 3>{dims->data[1], dims->data[2], dims->data[3]};
}

The "hey" gets printed and after that dead silence.
I tried calling interpreter.inputs() like this:
auto test = interpreter.inputs();
to find out what´s wrong. With that i get the error
terminate called after throwing an instance of 'std::bad_alloc' what(): std::bad_alloc

which is probably my whole problem.

I also tried adding std::cout statements in the input() functions of interpreter.h and subgraph.h, these get printed.

Do you have any idea what the issue might be or how i could debug this?

Doesn't work on Raspberry Pi CM3+

Hi! I am working with a custom board with a CM3+ and an Edge TPU.
Compiling "edgetpu-minimal-example" master goes fine; I see all the compiled libraries.
But afterwards I can't compile my own code (in C) against the Edge TPU libraries. It fails with the following error:

/usr/bin/ld: /home/pi/edgetpu-minimal-example/build/tensorflow/src/tf/tensorflow/lite/tools/make/gen/rpi_armv7l/lib/libtensorflow-lite.a(densify.o): in function `tflite::ops::builtin::densify::Eval(TfLiteContext*, TfLiteNode*)':
densify.cc:(.text+0x388): undefined reference to `tflite::optimize::sparsity::FormatConverter<signed char>::FormatConverter(std::vector<int, std::allocator<int> > const&, TfLiteSparsity const&)'
/usr/bin/ld: densify.cc:(.text+0x394): undefined reference to `tflite::optimize::sparsity::FormatConverter<signed char>::SparseToDense(signed char const*)'
/usr/bin/ld: densify.cc:(.text+0x730): undefined reference to `tflite::optimize::sparsity::FormatConverter<float>::FormatConverter(std::vector<int, std::allocator<int> > const&, TfLiteSparsity const&)'
/usr/bin/ld: densify.cc:(.text+0x73c): undefined reference to `tflite::optimize::sparsity::FormatConverter<float>::SparseToDense(float const*)'
/usr/bin/ld: /home/pi/edgetpu-minimal-example/build/tensorflow/src/tf/tensorflow/lite/tools/make/gen/rpi_armv7l/lib/libtensorflow-lite.a(spectrogram.o): in function `tflite::internal::Spectrogram::ProcessCoreFFT()':
spectrogram.cc:(.text+0xc0): undefined reference to `rdft'
collect2: error: ld returned 1 exit status
make: *** [Makefile:32: rwc_main] Error 1

I also tried to launch "minimal" with a *_edgetpu.tflite model; it ended with the following error:

ERROR: Encountered unresolved custom op: edgetpu-custom-op.
ERROR: Node number 0 (edgetpu-custom-op) failed to prepare.

Failed to allocate tensors.
Segmentation fault

I checked the same on a Raspberry Pi 4 and everything works fine.
What could be the issue?
Also, the Python Edge TPU code works on the RPi 3+, but I need the C libraries running.
Thanks!
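Not a confirmed fix, but these particular undefined references (FormatConverter and rdft) typically mean that two sources libtensorflow-lite.a depends on never made it into the final link. A hedged sketch of a Makefile addition; both paths are assumptions inferred from the build tree shown in the error log and should be verified against your checkout:

```makefile
# Hypothetical sketch: compile the sources that define the missing symbols
# and link them alongside libtensorflow-lite.a. Verify the paths exist first.
TF_SRC := /home/pi/edgetpu-minimal-example/build/tensorflow/src/tf
TF_LIB := $(TF_SRC)/tensorflow/lite/tools/make/gen/rpi_armv7l/lib/libtensorflow-lite.a
EXTRA_SRCS := \
  $(TF_SRC)/tensorflow/lite/tools/optimize/sparsity/format_converter.cc \
  $(TF_SRC)/tensorflow/lite/tools/make/downloads/fft2d/fftsg.c

rwc_main: rwc_main.c $(EXTRA_SRCS)
	$(CXX) $^ $(TF_LIB) -lpthread -ldl -o $@
```

format_converter.cc provides the FormatConverter symbols and fftsg.c provides rdft; with a static archive, they must appear on the link line for the references inside the archive to resolve.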

Segmentation fault (core dumped)

System info

  • Ubuntu 18.04
  • Edge TPU Compiler version 2.1.302470888
  • TENSORFLOW COMMIT = "5d0b55dd4a00c74809e5b32217070a26ac6ef823"

Issue
@Namburger
I'm trying to write an application that classifies the video stream from my laptop's built-in camera using the Edge TPU. I use OpenCV to read, decode, and vectorize the input to the model. As a first step I wanted to check the compatibility of images decoded by OpenCV with TFLite interpreters, so I wrote the following code, adapted from minimal.cc. I have not changed anything about how the TF interpreter is built, but the TFLite interpreter doesn't build and I get the error below. I added a few debug messages to find where exactly the problem lies, nothing else; I could pinpoint that it fails while building the interpreter, but beyond that I have no idea. The minimal executable runs fine with the same interpreter-building code, which makes me think I have done something wrong here. Code snippet and error message for your reference.
The source file is attached here for your reference.

std::unique_ptr<tflite::FlatBufferModel> model =
    tflite::FlatBufferModel::BuildFromFile(model_path.c_str());
if (model == nullptr) {
  std::cerr << "Fail to build FlatBufferModel from file: " << model_path << std::endl;
  std::abort();
} else {
  std::cout << "model loaded successfully" << std::endl;
}

// Build interpreter.
std::shared_ptr<edgetpu::EdgeTpuContext> edgetpu_context =
    edgetpu::EdgeTpuManager::GetSingleton()->OpenDevice();

std::unique_ptr<tflite::Interpreter> interpreter;
if (!edgetpu_context) {
  interpreter = std::move(coral::BuildInterpreter(*model));
} else {
  std::cout << "opening of edgetpu successful, building interpreter" << std::endl;
  interpreter = std::move(coral::BuildEdgeTpuInterpreter(*model, edgetpu_context.get()));
}
std::cout << "Interpreter built";

[Screenshot from 2020-05-01 12-34-31: error output]

It would be nice if this issue could be resolved.
Edit

  • TENSORFLOW COMMIT = "d855adfc5a0195788bf5f92c3c7352e638aa1109"

../out/aarch64/minimal doesn't exist

Hi,

Thanks for making this repo. I have an issue running my own tflite model through the Edge TPU's C++ API:

ERROR: Internal: Unsupported data type in custom op handler: -119671996
ERROR: Node number 0 (edgetpu-custom-op) failed to prepare.

Failed to allocate tensors.

I found this repo through a GitHub post. I followed the instructions and the build was successful. However, I could not run the example detection. I used ../out/k8/minimal instead of ../out/aarch64/minimal because the latter does not exist, and a similar error occurred:

ERROR: Internal: Unsupported data type in custom op handler: 0
ERROR: Node number 0 (edgetpu-custom-op) failed to prepare.

Failed to allocate tensors.
Segmentation fault (core dumped)

Thank you!

compile error on Jetson nano aarch64 platform.

Hi, I ran into a compile error when trying to build this project natively on the Jetson Nano (aarch64). Here is part of the output. Any idea what the issue is?

download_dependencies.sh completed successfully.
[ 21%] Performing build step for 'tf'
[ 28%] Performing install step for 'tf'
[ 35%] Completed 'tf'
[ 57%] Built target tf
[ 71%] Built target model_utils
make[2]: *** No rule to make target 'tensorflow/src/tf/tensorflow/lite/tools/optimize/sparsity/format_converter.cc', needed by 'CMakeFiles/minimal.dir/tensorflow/src/tf/tensorflow/lite/tools/optimize/sparsity/format_converter.cc.o'.  Stop.
CMakeFiles/Makefile2:124: recipe for target 'CMakeFiles/minimal.dir/all' failed
make[1]: *** [CMakeFiles/minimal.dir/all] Error 2
Makefile:100: recipe for target 'all' failed
make: *** [all] Error 2
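A "No rule to make target" error here means the generated build references a file that is not present in the checked-out TensorFlow tree, i.e. the pinned TENSORFLOW_COMMIT and the source list in CMakeLists.txt are out of sync. A quick diagnostic (the path is taken from the error message above; run it from the repo root):

```shell
# Check whether the sparsity source the build expects was actually fetched.
SRC=build/tensorflow/src/tf/tensorflow/lite/tools/optimize/sparsity/format_converter.cc
if [ -f "$SRC" ]; then RESULT=present; else RESULT=missing; fi
echo "$RESULT"
```

If it prints "missing", the commit checked out under build/tensorflow/src/tf does not contain that file, and the fix is to align the pinned commit with the file list rather than to change compiler flags.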

seg fault after running multiple models within the same program

Hi, I've encountered a segfault in my modified version of this repo. What I'm changing is running two separate models sequentially within the same program, with two interpreters handling them. I got the following gdb trace; it gets stuck at the moment the output data is retrieved after the second model's inference.

I'm wondering whether such a scenario has been tested; knowing that might help me understand the root cause of the segfault in my situation.
Thanks!

Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x000055a5a7cfc03f in std::vector<std::unique_ptr<tflite::Subgraph, std::default_delete<tflite::Subgraph> >, std::allocator<std::unique_ptr<tflite::Subgraph, std::default_delete<tflite::Subgraph> > > >::begin() const ()
[Current thread is 1 (Thread 0x7f82fc180780 (LWP 2733))]
(gdb) bt
#0  0x000055a5a7cfc03f in std::vector<std::unique_ptr<tflite::Subgraph, std::default_delete<tflite::Subgraph> >, std::allocator<std::unique_ptr<tflite::Subgraph, std::default_delete<tflite::Subgraph> > > >::begin() const ()
#1  0x000055a5a7cfaeed in std::vector<std::unique_ptr<tflite::Subgraph, std::default_delete<tflite::Subgraph> >, std::allocator<std::unique_ptr<tflite::Subgraph, std::default_delete<tflite::Subgraph> > > >::front() const ()
#2  0x000055a5a7cfa72e in tflite::Interpreter::primary_subgraph() const ()
#3  0x000055a5a7cfa676 in tflite::Interpreter::outputs() const ()

tf cloning frozen

I face the following problem when I try to make this project:

~/edgetpu-minial-example/build $ make
[7%] Performing download step (git clone) for 'tf'
Cloning into 'tf' ...

And it just hangs forever. Is this part of the expected 30-45 minute build time?

My platform is aarch64. Thanks.

SegFault issue on build

Hi, I've run into a problem when building the minimal example:
[image: segfault error during the build]

To clarify: I cloned this repo onto a relatively clean Raspberry Pi 4 B (aarch64), so nothing related to TF Lite or the Edge TPU runtime was installed on the board prior to building the minimal program.
