
Comments (15)

moraval commented on July 18, 2024

I solved it by "downgrading" fblualib (commit bdcf94bb835c41b5d822371fb6f7b75c1d53efbc) and thpp (commit ebb4fcbb4a9b3310ac0a36eb37775a45f111448b).
I have a question concerning the receptive field: what is the width of the column that is sent to the convolutional layer? Is it really just 1 pixel, as stated in the paper?
Thanks in advance.
@bgshih


mpech commented on July 18, 2024

With a simple function such as

template <class T>
int naiveDecoding(lua_State* L) {
    fblualib::luaPushTensor(L, thpp::Tensor<T>({1,1}));
    fblualib::luaPushTensor(L, thpp::Tensor<T>({1,1}));
    return 2;
}

we do get a core dump. I don't know how to fix it (I'm all new to Lua and the fb* libraries), so hopefully someone more competent can help.

However, constructs such as

const thpp::Tensor<T> input     = *fblualib::luaGetTensorChecked<T>(L, 1);

are wrong, because

fblualib::luaGetTensorChecked<T>(L, 1)

actually instantiates a TensorPtr from a raw THType*:

template <class Tensor>
TensorPtr<Tensor>::TensorPtr(THType* th) noexcept
  : hasTensor_(th) {
  if (hasTensor_) {
    construct(th, true);
  }
}

and returns a TensorPtr. BUT when you dereference the TensorPtr,

Tensor& operator*() const noexcept { return *get(); }

you get the raw pointer (I presume it refers to the very same THType*).
Since your TensorPtr is a temporary (not held by anyone), it gets destroyed and its destructor is called:

template <class Tensor>
TensorPtr<Tensor>::~TensorPtr() {
  destroy();
}

template <class Tensor>
void TensorPtr<Tensor>::destroy() noexcept {
  if (hasTensor_) {
    tensor_.~Tensor();
    hasTensor_ = false;
  }
}

so your input variable ends up with an underlying THType* pointing to memory that may already have been released.

Better (at least somewhat working) code is:

const auto& input = fblualib::luaGetTensorChecked<T>(L, 1); // keep the TensorPtr alive
const int nFrame      = input->size(0);
const int inputLength = input->size(1);

or

const auto& dummy = fblualib::luaGetTensorChecked<T>(L, 1); // simply extends the lifetime of the underlying THType* instance
const thpp::Tensor<T>& input = *dummy;
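
As an illustration (my own minimal sketch, not part of the original comment), the same lifetime bug can be reproduced without thpp at all, using a toy owning pointer in place of TensorPtr:

#include <cstdio>

struct Payload { int value = 42; };

// Toy stand-in for thpp::TensorPtr: it owns its payload and frees it on destruction.
struct PayloadPtr {
    Payload* p;
    explicit PayloadPtr(Payload* q) : p(q) {}
    ~PayloadPtr() { delete p; }                // payload dies with the pointer
    Payload& operator*() const { return *p; }  // same shape as TensorPtr::operator*
};

// Stand-in for fblualib::luaGetTensorChecked: returns an owning pointer by value.
PayloadPtr getChecked() { return PayloadPtr(new Payload()); }

int main() {
    // BAD: the temporary PayloadPtr is destroyed at the end of this statement,
    // so 'bad' dangles: the same pattern as *fblualib::luaGetTensorChecked<T>(L, 1).
    const Payload& bad = *getChecked();
    (void)bad;  // reading bad.value here would be undefined behaviour

    // OK: bind the returned pointer to a named const reference first, which keeps
    // the owner alive for as long as the payload reference is used.
    const auto& owner = getChecked();
    const Payload& good = *owner;
    std::printf("%d\n", good.value);  // prints 42
    return 0;
}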

Long story short, the diff below works for me:

diff --git a/crnn/src/cpp/ctc.cpp b/crnn/src/cpp/ctc.cpp
index 97f9231..a752d2f 100644
--- a/crnn/src/cpp/ctc.cpp
+++ b/crnn/src/cpp/ctc.cpp
@@ -19,17 +19,19 @@ const int blankLabel = 0;
 
 template <class T>
 int forwardBackward(lua_State* L) {
-    const thpp::Tensor<T> input     = fblualib::luaGetTensorChecked<T>(L, 1);
-    const thpp::Tensor<int> targets = fblualib::luaGetTensorChecked<int>(L, 2);
+    const auto& input     = fblualib::luaGetTensorChecked<T>(L, 1);
+    const auto& dumtargets = fblualib::luaGetTensorChecked<int>(L, 2);
+    const thpp::Tensor<int>& targets = *dumtargets;
     const bool forwardOnly          = lua_toboolean(L, 3);
-    thpp::Tensor<T> gradInput       = fblualib::luaGetTensorChecked<T>(L, 4);
+    const auto& dumgrad       = fblualib::luaGetTensorChecked<T>(L, 4);
+    thpp::Tensor<T> gradInput = *dumgrad;
 
-    const int nFrame      = input.size(0);
-    const int inputLength = input.size(1);
-    const int nClasses    = input.size(2);
+    const int nFrame      = input->size(0);
+    const int inputLength = input->size(1);
+    const int nClasses    = input->size(2);
     const int maxTargetlength = targets.size(1);
     if (!forwardOnly) {
-        gradInput.resizeAs(input);
+        gradInput.resizeAs(*input);
         gradInput.fill(LogMath<T>::logZero);
     }
 
@@ -41,7 +43,7 @@ int forwardBackward(lua_State* L) {
 
     #pragma omp parallel for
     for (int i = 0; i < nFrame; ++i) {
-        const thpp::Tensor<T>& input_i = input[i];
+        const thpp::Tensor<T>& input_i = (*input)[i];
         const int* targetData_i = targets[i].data();
         const int targetLength = zeroPadArrayLength(targetData_i, maxTargetlength);
         const int nSegment = 2 * targetLength + 1;
@@ -156,11 +158,11 @@ int forwardBackward(lua_State* L) {
 
 template <class T>
 int naiveDecoding(lua_State* L) {
-    const thpp::Tensor<T>& input = fblualib::luaGetTensorChecked<T>(L, 1);
-    const int nFrame = input.size(0);
-    const int inputLength = input.size(1);
+    const auto& input = fblualib::luaGetTensorChecked<T>(L, 1);
+    const int nFrame = input->size(0);
+    const int inputLength = input->size(1);
 
-    thpp::Tensor<long> rawPred_ = input.max(2).second; // [nFrame x inputLength]
+    thpp::Tensor<long> rawPred_ = input->max(2).second; // [nFrame x inputLength]
     thpp::Tensor<int> rawPred({nFrame, inputLength});

from crnn.

umdreamer commented on July 18, 2024

The compile error is because the type thpp::Tensor<T>& is different from the type thpp::Tensor<T>::Ptr (i.e. thpp::TensorPtr<Tensor>), that is, a reference type versus a pointer type.
For example:
const thpp::Tensor<T> input = fblualib::luaGetTensorChecked<T>(L, 1);
fblualib::luaGetTensorChecked is defined as follows in torch/install/include/fblualib/LuaUtils.h:

template <class NT>
typename thpp::Tensor<NT>::Ptr luaGetTensorChecked(lua_State* L, int ud);

So the modification is to change every fblualib::luaGetTensorChecked call to *fblualib::luaGetTensorChecked, according to thpp/thpp/TensorPtr.h:

Tensor& operator*() const noexcept { return *get(); }
Tensor* operator->() const noexcept { return get(); }
Tensor* get() const noexcept;
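
For reference, here is an illustrative sketch (mine, not from the original comment) of these three accessors once the returned Ptr is kept in a named variable; the function name and body are hypothetical:

template <class T>
int accessorExample(lua_State* L) {
    // Keep the Ptr alive in a named variable; the reference and raw pointer
    // below are only valid while 'inputPtr' is.
    const auto& inputPtr = fblualib::luaGetTensorChecked<T>(L, 1);  // thpp::Tensor<T>::Ptr
    thpp::Tensor<T>& inputRef = *inputPtr;        // operator*
    const int nFrame = inputPtr->size(0);         // operator->
    thpp::Tensor<T>* rawTensor = inputPtr.get();  // get()
    (void)inputRef; (void)nFrame; (void)rawTensor;
    return 0;
}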

The ctc.cpp modification is as follows:

--- a/crnn/src/cpp/ctc.cpp
+++ b/crnn/src/cpp/ctc.cpp
@@ -19,10 +19,10 @@ const int blankLabel = 0;
 
 template <class T>
 int forwardBackward(lua_State* L) {
-    const thpp::Tensor<T> input     = fblualib::luaGetTensorChecked<T>(L, 1);
-    const thpp::Tensor<int> targets = fblualib::luaGetTensorChecked<int>(L, 2);
+    const thpp::Tensor<T> input     = *fblualib::luaGetTensorChecked<T>(L, 1);
+    const thpp::Tensor<int> targets = *fblualib::luaGetTensorChecked<int>(L, 2);
     const bool forwardOnly          = lua_toboolean(L, 3);
-    thpp::Tensor<T> gradInput       = fblualib::luaGetTensorChecked<T>(L, 4);
+    thpp::Tensor<T> gradInput       = *fblualib::luaGetTensorChecked<T>(L, 4);
 
     const int nFrame      = input.size(0);
     const int inputLength = input.size(1);
@@ -156,7 +156,7 @@ int forwardBackward(lua_State* L) {
 
 template <class T>
 int naiveDecoding(lua_State* L) {
-    const thpp::Tensor<T>& input = fblualib::luaGetTensorChecked<T>(L, 1);
+    const thpp::Tensor<T>& input = *fblualib::luaGetTensorChecked<T>(L, 1);
     const int nFrame = input.size(0);
     const int inputLength = input.size(1);

And then build. Done!
Cheers.


moraval commented on July 18, 2024

The problem was not compatibility with TH++ but with the newest fblualib library: fblualib::luaGetTensorChecked() now returns a TensorPtr, not a Tensor.
I fixed it by adding the * (dereference) operator, but now I can't run demo.lua; I get a segmentation fault when using anything from LogMath.
Does anyone know what the problem could be?
@bgshih


bgshih commented on July 18, 2024

Thanks for your feedback. I have just updated my fblualib to the latest version. However, the compilation goes well on my machine. Maybe it's due to the OS or compiler. Still looking into this issue by reviewing the thpp source.


moraval commented on July 18, 2024

I updated my comment above.
The newest version of fblualib has a different return type than the code expects (ctc.cpp, line 22: const thpp::Tensor<T> input = fblualib::luaGetTensorChecked<T>(L, 1);).
It really doesn't cause any problems in your compilation?

From fblualib, LuaUtils-inl.h:

template <class NT>
typename thpp::Tensor<NT>::Ptr luaGetTensorChecked(lua_State* L, int ud) {
  auto p = static_cast<typename thpp::Tensor<NT>::THType*>(
      luaT_toudata(L, ud, thpp::Tensor<NT>::kLuaTypeName));
  if (!p) {
    luaL_error(L, "Not a valid tensor");
  }
  return typename thpp::Tensor<NT>::Ptr(p);
}


bgshih commented on July 18, 2024

@moraval Glad to hear that. The widths are not 1 pixel. I have not calculated the precise width of the receptive fields; I guess it's 8 pixels. Note that the CNN takes the whole image as input, not the receptive fields.
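
(An aside, not part of the original answer: the receptive-field width of a feature column can be worked out from the convolution/pooling stack with the standard recurrence rf += (kernel - 1) * jump; jump *= stride. The C++ sketch below is mine, and the layer list in it is only a hypothetical example, not the actual crnn configuration.)

#include <cstdio>
#include <vector>

struct Layer { int kernel; int stride; };  // kernel width and stride along the width axis

// Receptive field of one output column with respect to the input width.
int receptiveField(const std::vector<Layer>& layers) {
    int rf = 1, jump = 1;                  // start from a single output column
    for (const Layer& l : layers) {
        rf += (l.kernel - 1) * jump;       // widen by the kernel, scaled by the accumulated stride
        jump *= l.stride;
    }
    return rf;
}

int main() {
    // Hypothetical stack: 3x3 conv, 2x2 pool, 3x3 conv (widths/strides along x only).
    // Substitute the real kernel widths and strides from the crnn model definition.
    std::vector<Layer> widthAxis = { {3, 1}, {2, 2}, {3, 1} };
    std::printf("receptive field along width: %d\n", receptiveField(widthAxis));  // prints 8 for this toy stack
    return 0;
}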


CatWang commented on July 18, 2024

@bgshih I have encountered the same problem and tried the suggested downgrading, but it doesn't seem to work now.
Maybe that solution worked at the time, but not anymore.
Could you provide some other solution?
Thanks in advance.


bgshih commented on July 18, 2024

@CatWang I tried again using the latest versions of Torch and THPP. However, I still could not reproduce any compile errors. Can you describe your environment, e.g. your operating system?


liuwenran commented on July 18, 2024

I met the same problem and solved it like @moraval did. Thanks.


liuwenran commented on July 18, 2024

But when I run the demo, I get this:

Loading model...
/home/bbnc/torch/install/bin/luajit: ./utilities.lua:253: assertion failed!
stack traceback:
[C]: in function 'assert'
./utilities.lua:253: in function 'loadModelState'
demo.lua:32: in main chunk
[C]: in function 'dofile'
...bbnc/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
[C]: at 0x00406670

How can I solve this? @moraval Thanks.


jmrichardson commented on July 18, 2024

I know this is an old thread, but I'm having the same issue when compiling:

[ 66%] Building CXX object CMakeFiles/crnn.dir/ctc.cpp.o
/home/john/Downloads/crnn/src/cpp/ctc.cpp: In instantiation of ‘int {anonymous}::forwardBackward(lua_State*) [with T = float; lua_State = lua_State]’:
/home/john/Downloads/crnn/src/cpp/ctc.cpp:194:16:   required from ‘const luaL_Reg {anonymous}::Registerer<float>::functions_ [3]’
/home/john/Downloads/crnn/src/cpp/ctc.cpp:203:24:   required from ‘static void {anonymous}::Registerer<T>::registerFunctions(lua_State*) [with T = float; lua_State = lua_State]’
/home/john/Downloads/crnn/src/cpp/ctc.cpp:210:24:   required from here
/home/john/Downloads/crnn/src/cpp/ctc.cpp:22:76: error: conversion from ‘thpp::TensorBase<float, thpp::Storage<float>, thpp::Tensor<float> >::Ptr {aka thpp::TensorPtr<thpp::Tensor<float> >}’ to non-scalar type ‘const thpp::Tensor<float>’ requested
     const thpp::Tensor<T> input     = fblualib::luaGetTensorChecked<T>(L, 1);

I have tried downgrading as suggested by @moraval, but I was unable to get those versions to compile due to other dependencies. Can you please help with a recommendation or by updating the code? I am actually trying to use this code inside TextBoxes++ but am also having the same issue with its modified crnn package. Thanks in advance.

By the way, I have all the fblualib dependencies compiled, Torch, etc. I am on Ubuntu 16.04.

Thank you


umdreamer commented on July 18, 2024

@jmrichardson Same problem here; have you solved it yet?

[ 25%] Building CXX object CMakeFiles/crnn.dir/ctc.cpp.o
/home/data1/github-clone/TextBoxes_plusplus/crnn/src/cpp/ctc.cpp: In instantiation of ‘int {anonymous}::forwardBackward(lua_State*) [with T = float; lua_State = lua_State]’:
/home/data1/github-clone/TextBoxes_plusplus/crnn/src/cpp/ctc.cpp:200:16:   required from ‘const luaL_Reg {anonymous}::Registerer<float>::functions_ [3]’
/home/data1/github-clone/TextBoxes_plusplus/crnn/src/cpp/ctc.cpp:209:24:   required from ‘static void {anonymous}::Registerer<T>::registerFunctions(lua_State*) [with T = float; lua_State = lua_State]’
/home/data1/github-clone/TextBoxes_plusplus/crnn/src/cpp/ctc.cpp:216:24:   required from here
/home/data1/github-clone/TextBoxes_plusplus/crnn/src/cpp/ctc.cpp:22:76: error: conversion from ‘thpp::TensorBase<float, thpp::Storage<float>, thpp::Tensor<float> >::Ptr {aka thpp::TensorPtr<thpp::Tensor<float> >}’ to non-scalar type ‘const thpp::Tensor<float>’ requested
     const thpp::Tensor<T> input     = fblualib::luaGetTensorChecked<T>(L, 1);
                                                                            ^
/home/data1/github-clone/TextBoxes_plusplus/crnn/src/cpp/ctc.cpp:23:78: error: conversion from ‘thpp::TensorBase<int, thpp::Storage<int>, thpp::Tensor<int> >::Ptr {aka thpp::TensorPtr<thpp::Tensor<int> >}’ to non-scalar type ‘const thpp::Tensor<int>’ requested
     const thpp::Tensor<int> targets = fblualib::luaGetTensorChecked<int>(L, 2);
                                                                              ^
/home/data1/github-clone/TextBoxes_plusplus/crnn/src/cpp/ctc.cpp:25:76: error: conversion from ‘thpp::TensorBase<float, thpp::Storage<float>, thpp::Tensor<float> >::Ptr {aka thpp::TensorPtr<thpp::Tensor<float> >}’ to non-scalar type ‘thpp::Tensor<float>’ requested
     thpp::Tensor<T> gradInput       = fblualib::luaGetTensorChecked<T>(L, 4);
                                                                            ^
/home/data1/github-clone/TextBoxes_plusplus/crnn/src/cpp/ctc.cpp: In instantiation of ‘int {anonymous}::naiveDecoding(lua_State*) [with T = float; lua_State = lua_State]’:
/home/data1/github-clone/TextBoxes_plusplus/crnn/src/cpp/ctc.cpp:200:16:   required from ‘const luaL_Reg {anonymous}::Registerer<float>::functions_ [3]’
/home/data1/github-clone/TextBoxes_plusplus/crnn/src/cpp/ctc.cpp:209:24:   required from ‘static void {anonymous}::Registerer<T>::registerFunctions(lua_State*) [with T = float; lua_State = lua_State]’
/home/data1/github-clone/TextBoxes_plusplus/crnn/src/cpp/ctc.cpp:216:24:   required from here
/home/data1/github-clone/TextBoxes_plusplus/crnn/src/cpp/ctc.cpp:159:73: error: invalid initialization of reference of type ‘const thpp::Tensor<float>&’ from expression of type ‘thpp::TensorBase<float, thpp::Storage<float>, thpp::Tensor<float> >::Ptr {aka thpp::TensorPtr<thpp::Tensor<float> >}’
     const thpp::Tensor<T>& input = fblualib::luaGetTensorChecked<T>(L, 1);
                                                                         ^
/home/data1/github-clone/TextBoxes_plusplus/crnn/src/cpp/ctc.cpp: In instantiation of ‘int {anonymous}::forwardBackward(lua_State*) [with T = double; lua_State = lua_State]’:
/home/data1/github-clone/TextBoxes_plusplus/crnn/src/cpp/ctc.cpp:200:16:   required from ‘const luaL_Reg {anonymous}::Registerer<double>::functions_ [3]’
/home/data1/github-clone/TextBoxes_plusplus/crnn/src/cpp/ctc.cpp:209:24:   required from ‘static void {anonymous}::Registerer<T>::registerFunctions(lua_State*) [with T = double; lua_State = lua_State]’
/home/data1/github-clone/TextBoxes_plusplus/crnn/src/cpp/ctc.cpp:217:25:   required from here
/home/data1/github-clone/TextBoxes_plusplus/crnn/src/cpp/ctc.cpp:22:76: error: conversion from ‘thpp::TensorBase<double, thpp::Storage<double>, thpp::Tensor<double> >::Ptr {aka thpp::TensorPtr<thpp::Tensor<double> >}’ to non-scalar type ‘const thpp::Tensor<double>’ requested
     const thpp::Tensor<T> input     = fblualib::luaGetTensorChecked<T>(L, 1);
                                                                            ^
/home/data1/github-clone/TextBoxes_plusplus/crnn/src/cpp/ctc.cpp:23:78: error: conversion from ‘thpp::TensorBase<int, thpp::Storage<int>, thpp::Tensor<int> >::Ptr {aka thpp::TensorPtr<thpp::Tensor<int> >}’ to non-scalar type ‘const thpp::Tensor<int>’ requested
     const thpp::Tensor<int> targets = fblualib::luaGetTensorChecked<int>(L, 2);
                                                                              ^
/home/data1/github-clone/TextBoxes_plusplus/crnn/src/cpp/ctc.cpp:25:76: error: conversion from ‘thpp::TensorBase<double, thpp::Storage<double>, thpp::Tensor<double> >::Ptr {aka thpp::TensorPtr<thpp::Tensor<double> >}’ to non-scalar type ‘thpp::Tensor<double>’ requested
     thpp::Tensor<T> gradInput       = fblualib::luaGetTensorChecked<T>(L, 4);
                                                                            ^
/home/data1/github-clone/TextBoxes_plusplus/crnn/src/cpp/ctc.cpp: In instantiation of ‘int {anonymous}::naiveDecoding(lua_State*) [with T = double; lua_State = lua_State]’:
/home/data1/github-clone/TextBoxes_plusplus/crnn/src/cpp/ctc.cpp:200:16:   required from ‘const luaL_Reg {anonymous}::Registerer<double>::functions_ [3]’
/home/data1/github-clone/TextBoxes_plusplus/crnn/src/cpp/ctc.cpp:209:24:   required from ‘static void {anonymous}::Registerer<T>::registerFunctions(lua_State*) [with T = double; lua_State = lua_State]’
/home/data1/github-clone/TextBoxes_plusplus/crnn/src/cpp/ctc.cpp:217:25:   required from here
/home/data1/github-clone/TextBoxes_plusplus/crnn/src/cpp/ctc.cpp:159:73: error: invalid initialization of reference of type ‘const thpp::Tensor<double>&’ from expression of type ‘thpp::TensorBase<double, thpp::Storage<double>, thpp::Tensor<double> >::Ptr {aka thpp::TensorPtr<thpp::Tensor<double> >}’
     const thpp::Tensor<T>& input = fblualib::luaGetTensorChecked<T>(L, 1);
                                                                         ^
CMakeFiles/crnn.dir/build.make:86: recipe for target 'CMakeFiles/crnn.dir/ctc.cpp.o' failed
make[2]: *** [CMakeFiles/crnn.dir/ctc.cpp.o] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/crnn.dir/all' failed
make[1]: *** [CMakeFiles/crnn.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2

I cloned TextBoxes_plusplus at HEAD and installed the dependencies:

  • fblualib: had to modify CMakeLists.txt for C++14
  • fbthrift: C++14 option
  • folly: HEAD at v2018.04.02
  • mstch: latest
  • thpp: CMakeLists.txt
  • torch: latest
  • wangle: latest
  • zstd: latest

These libraries need some modification to make them work, since I have to use the C++14 compilation option. My system is Ubuntu 16.04 x86_64, with CUDA 8.0 and cuDNN 5.1.

Now everything looks OK, but crnn still cannot be compiled. The problem seems to be that fblualib has changed its API. I need to know which version CRNN was built against.


AmirmasoudGhasemi commented on July 18, 2024

I did what @umdreamer suggested. The code built without errors, but when I run demo.lua in TextBoxes_plusplus, it stops at the line right after the changes (line 157):
const int nFrame = input.size(0);
That is the first place input is used in the code. Do I need to change anything else?


umdreamer commented on July 18, 2024

@mpech, thank you for your modification. The code compiles and I can run the demo successfully. I hadn't noticed that fblualib::luaGetTensorChecked<T>(L, 1) actually creates a new object, so at runtime a core was dumped to indicate the error. Now it works using your code.

Cheers.

