sobfu's Issues

it doesn't converge

At first I removed findMaxUpdateNorm. After 5 iterations, the energy goes to inf. Shouldn't the energy be decreasing? It then crashes at the next data-term estimation.

image

Or if I run the code with findMaxNorm enabled, it crashes at the 3rd iteration inside findMaxNorm.

image

I can't get most of the datasets

I have compiled sobfu with the solution provided in issue #15, but I was only able to run it with the "Snoopy" and "Hat" datasets, which I downloaded from this page: http://campar.in.tum.de/personal/slavcheva/deformable-dataset/index.html#groundtruth

I tried to download the rest of the datasets from this page: https://cloud9.cs.fau.de/index.php/s/46qcNZSNePHx08A?path=%2F
But I get an error when I try to use the PCL viewer. If I do not use it, I can see in the terminal that the program performs iterations for each image, but it always reaches the maximum number of iterations without converging.

I have probably downloaded the datasets from the wrong page, but could somebody help me? Where did you find the datasets? Does anybody else have this issue?

can you clarify variable names

Is global_phi the canonical TSDF volume?
What is phi_n in the paper's notation?
What's phi_n_psi?
What's phi_global_psi_inv?

Compilation in Ubuntu 20 fails [solved]

There are several issues related to compiling on an end-of-2021 system.

My setup:

  • NVIDIA GeForce RTX 3070
  • x86_64 GNU/Linux Ubuntu 20.04.3 LTS
  • gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
  • cuda_11.5.0_495.29.05
  • Ubuntu 20 lacks the "metslib" package, so I compiled it from source:
wget https://www.coin-or.org/download/source/metslib/metslib-0.5.3.tgz
tar xzvf metslib-0.5.3.tgz
cd metslib-0.5.3
./configure --prefix=/usr/local
make
sudo make install
  • The imported target "vtk" references the file "/usr/bin/vtk", but this file does not exist (and similarly for other VTK files). Workaround:
sudo update-alternatives --install /usr/bin/vtk vtk /usr/bin/vtk7 10
sudo update-alternatives --install /usr/bin/pvtk pvtk /usr/bin/python3 10
sudo touch /usr/lib/x86_64-linux-gnu/libvtkRenderingPythonTkWidgets.so
sudo touch /usr/bin/vtkParseOGLExt-7.1
  • CUDA_CUDA_LIBRARY (ADVANCED) -> NOTFOUND
    cmake -DCMAKE_LIBRARY_PATH=/usr/local/cuda/lib64/stubs -DCMAKE_EXPORT_COMPILE_COMMANDS=ON -DBUILD_TESTS=ON ..
  • CUB_COMPILER_DEPRECATION_SOFT(C++14, C++11);
    Edit CMakeLists.txt:
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++14 -MP -MD -pthread -fpermissive")
SET(CMAKE_C_COMPILER "/usr/bin/gcc-9")
SET(CMAKE_CXX_COMPILER "/usr/bin/g++-9")
  • Error compiling demo.cpp

src/apps/demo.cpp:311:43: error: "CV_LOAD_IMAGE_ANYDEPTH" was not declared in this scope
Edit demo.cpp and change that line to:

            depth = cv::imread(depths[i], cv::IMREAD_ANYDEPTH);
            image = cv::imread(images[i], cv::IMREAD_COLOR);

With these changes, compilation ends successfully, but "bin/app" complains:

Device 0:  "NVIDIA GeForce RTX 3070"  7960Mb
Can't determine number of cores. Unknown SM version 8.6!
, sm_86, 0 cores, Driver/Runtime ver.11.40/11.50

So, I tried replacing the distribution's PCL 1.10 with my own compiled PCL 1.12.0 (with CUDA and GPU support).

Then, I had to change include/kfusion/cuda/marching_cubes.hpp too, replacing

typedef boost::shared_ptr<MarchingCubes> Ptr;
with

typedef pcl::shared_ptr<MarchingCubes> Ptr;

But the resulting "app" still can't determine the number of cores.

So, I've edited src/kfusion/core.cpp following the example in helper_cuda_drvapi.h, i.e. added {0x80, 64} and {0x86, 128} to the gpuArchCoresPerSM array, and now it seems to be OK.

Device 0: "NVIDIA GeForce RTX 3070" 7960Mb, sm_86, 5888 cores, Driver/Runtime ver.11.40/11.50

You can find a quick n' dirty patch here:
sobfu.patch

With these changes, binaries are generated, and "app" works as expected with the "Snoopy" example.

Compilation succeeded, but I get a wrong result

Hi @dgrzech, thanks for kindly sharing the source code; it really helps me.
However, when I compile the code and run it on the umbrella dataset, a wrong result is produced.

Results at frame 149:
image

Does the data need any pre-processing?
Thanks for your reply in advance, I very much appreciate it.

Segmentation fault (core dumped)

Hi there. Thank you for your work. I have successfully compiled your project, but when I tried to run it with options on the umbrella data, it throws a segmentation fault (core dumped) on frame 0. I also tried to run it without any options; it just keeps running with iter. no. x and nothing else happens. Any ideas how I could fix this problem? Thanks.

opencv_viz is required but was not found

Hi, I'm working inside an Anaconda virtual environment, trying to install the software with source setup.sh -a, but I get the following output:

CMake Error at /home/davide/miniconda3/envs/sobfuEnv/share/OpenCV/OpenCVConfig.cmake:205 (message):
opencv_viz is required but was not found
Call Stack (most recent call first):
CMakeLists.txt:32 (find_package)
-- Configuring incomplete, errors occurred!

These are some of the package versions I've installed with conda install:

  • boost: 1.67
  • cuda: 9.1
  • opencv: 2.4.11
  • pcl: 1.8.1
  • vtk: 6.3

Dependencies versions conflict

Hello, it seems that you have done a pretty cool job, and I am interested in building and running your code; however, I ran into some trouble while building it:

It appears to me that you used at least Boost 1.58, PCL 1.8.1, and CUDA 9.0 (since __all_sync() and __ballot_sync() are used in your code), with unknown versions of OpenCV and VTK. When I try to build your code on my Ubuntu 16.04 LTS with the same dependency versions, I get an error from the use of __CUDACC_VER__ and I can't solve it. I suspect Boost 1.58.0 has some conflict with CUDA 9.0, so you probably used a later version of Boost, or maybe a different version of CUDA. Could you please tell me which versions you chose?

Thanks ! :)

why 48?

There is a magic number 48 in your code. Can you explain why you need a loop of 48 iterations here?
image

Convergence criteria not met...

@dgrzech

Thanks for uploading this, good work, man!

I've been trying to reproduce KillingFusion/SobolevFusion results for over a year now. I have an implementation on top of InfiniTAM.

Here is canonical in the first 65 frames of the Snoopy sequence: https://www.youtube.com/watch?v=0C2Djk4jo4I
Last frame:
image

Tracking kinda sucks and drifts a lot; that part is being worked on.
I've been capping the iterations at 200 per frame and using voxel hashing to speed up the processing.

What really escapes me, though, is that, according to Mira Slavcheva herself in her emails to me, non-rigid alignment should converge within 30-150 iterations. The max-warp threshold was set to "0.1 mm" for KillingFusion (seems more like 0.1 voxels IMHO).

It seems like you've run into the same issue, i.e. your optimizations here for Snoopy run up to (and are capped at) 2048 iterations.

  1. You're seeing oscillations in warps for a small percentage of voxels across the voxel grid, no?
  2. What is your take on this: is the whole convergence story just a story, i.e. would convergence only work with the max-warp-update threshold set unreasonably high, resulting in bad reconstruction quality / high drift?

Thanks!

Potential mathematics error

Hello, I'm re-reading the SobolevFusion paper and I was confused by the Sobolev approximation part, so I re-read your code to understand it better. I found the way you compute the Laplacian matrix odd:
    cv::Mat L_mat = -6.f * cv::Mat::eye(s3, s3, CV_32FC1);
    for (int i = 0; i <= static_cast<int>(pow(params.s, 3)) - 1; ++i) {
        int idx_z = i / (params.s * params.s);
        int idx_y = (i - idx_z * params.s * params.s) / params.s;
        int idx_x = i - params.s * (idx_y + params.s * idx_z);

        if (idx_x + 1 < params.s) {
            int pos                 = (idx_x + 1) + idx_y * params.s + idx_z * params.s * params.s;
            L_mat.at<float>(i, pos) = 1.f;
        }
        if (idx_x - 1 >= 0) {
            int pos                 = (idx_x - 1) + idx_y * params.s + idx_z * params.s * params.s;
            L_mat.at<float>(i, pos) = 1.f;
        }

        if (idx_y + 1 < params.s) {
            int pos                 = idx_x + (idx_y + 1) * params.s + idx_z * params.s * params.s;
            L_mat.at<float>(i, pos) = 1.f;
        }
        if (idx_y - 1 >= 0) {
            int pos                 = idx_x + (idx_y - 1) * params.s + idx_z * params.s * params.s;
            L_mat.at<float>(i, pos) = 1.f;
        }

        if (idx_z + 1 < params.s) {
            int pos                 = idx_x + idx_y * params.s + (idx_z + 1) * params.s * params.s;
            L_mat.at<float>(i, pos) = 1.f;
        }
        if (idx_z - 1 >= 0) {
            int pos                 = idx_x + idx_y * params.s + (idx_z - 1) * params.s * params.s;
            L_mat.at<float>(i, pos) = 1.f;
        }
    }

I checked the definition of the Laplacian matrix on Wikipedia:
image
Since the diagonal of the Laplacian matrix should be the row-wise sum of the corresponding adjacency matrix, it shouldn't be -6 for every row, and L_mat.at<float>(i, pos) should be -1.f, am I right?

The reconstruction effect is not very good.

Thank you for your great work. I successfully ran your program, but the results are not very good, and every frame takes about 5 s. Is there something wrong with my operation? Looking forward to your reply @dgrzech

function get_3d_sobolev_filter is not being used

You have two types of Sobolev filters: one is 3D data in get_3d_sobolev_filter, the other is 1D data in decompose_sobolev_filter. It looks like you decided to hard-code some data in the end without using get_3d_sobolev_filter. Can I ask the reason why?

image
image
