dgrzech / sobfu
real-time 3D reconstruction of non-rigidly deforming scenes using depth data
License: BSD 3-Clause "New" or "Revised" License
float4 Mat4f::data[4];
Is `data` column-major or row-major?
I have compiled sobfu with the solution provided in issue #15, but I was only able to run it with the "Snoopy" and "Hat" datasets, which I downloaded from this page: http://campar.in.tum.de/personal/slavcheva/deformable-dataset/index.html#groundtruth
I tried to download the rest of the datasets from this page: https://cloud9.cs.fau.de/index.php/s/46qcNZSNePHx08A?path=%2F
But I get an error if I use the pcl viewer. If I don't use it, I can see in the terminal that the program performs iterations for each image, but it always reaches the maximum number of iterations without converging.
I have probably downloaded the datasets from the wrong page. Could somebody help me? Where did you find the datasets? Does anybody else have this issue?
Is global_phi the canonical TSDF volume?
What is phi_n in the paper's notation?
What's phi_n_psi?
What's phi_global_psi_inv?
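My reading of how these names map to the SobolevFusion paper's notation (an assumption worth double-checking against the paper, not confirmed by the author):

```latex
% Assumed correspondence between code names and paper symbols:
%   global_phi          -> \phi_{global}, the canonical (reference) TSDF
%   phi_n               -> \phi_n, the TSDF built from the n-th depth frame
%   phi_n_psi           -> \phi_n(\psi), the live TSDF warped by the
%                          deformation field \psi into the canonical frame
%   phi_global_psi_inv  -> \phi_{global}(\psi^{-1}), the canonical TSDF warped
%                          by the inverse deformation into the live frame
% The data term then compares the warped live TSDF with the canonical one:
E_{data}(\psi) = \tfrac{1}{2} \int_{\Omega} \bigl( \phi_n(\psi) - \phi_{global} \bigr)^2 \, d\mathbf{x}
```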
There are several compilation issues on an end-of-2021 system.
My setup:
```shell
# build and install metslib
wget https://www.coin-or.org/download/source/metslib/metslib-0.5.3.tgz
tar xzvf metslib-0.5.3.tgz
cd metslib-0.5.3
./configure --prefix=/usr/local
make
sudo make install

# VTK 7 workarounds
sudo update-alternatives --install /usr/bin/vtk vtk /usr/bin/vtk7 10
sudo update-alternatives --install /usr/bin/pvtk pvtk /usr/bin/python3 10
sudo touch /usr/lib/x86_64-linux-gnu/libvtkRenderingPythonTkWidgets.so
sudo touch /usr/bin/vtkParseOGLExt-7.1

# configure the build
cmake -DCMAKE_LIBRARY_PATH=/usr/local/cuda/lib64/stubs -DCMAKE_EXPORT_COMPILE_COMMANDS=ON -DBUILD_TESTS=ON ..
```
```cmake
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++14 -MP -MD -pthread -fpermissive")
SET(CMAKE_C_COMPILER "/usr/bin/gcc-9")
SET(CMAKE_CXX_COMPILER "/usr/bin/g++-9")
```
```
src/apps/demo.cpp:311:43: error: "CV_LOAD_IMAGE_ANYDEPTH" was not declared in this scope
```
Edit src/apps/demo.cpp and change those loads to the current OpenCV constants:
```cpp
depth = cv::imread(depths[i], cv::IMREAD_ANYDEPTH);
image = cv::imread(images[i], cv::IMREAD_COLOR);
```
With these changes, compilation ends successfully, but "bin/app" complains:
```
Device 0: "NVIDIA GeForce RTX 3070" 7960Mb
Can't determine number of cores. Unknown SM version 8.6!
, sm_86, 0 cores, Driver/Runtime ver.11.40/11.50
```
So, I tried replacing PCL 1.10 from distribution with my own compiled PCL 1.12.0 (with CUDA and GPU support).
Then, I had to change include/kfusion/cuda/marching_cubes.hpp too, replacing
```cpp
typedef boost::shared_ptr<MarchingCubes> Ptr;
```
with
```cpp
typedef pcl::shared_ptr<MarchingCubes> Ptr;
```
But resulting "app" still can't determine number of cores.
So, I've edited src/kfusion/core.cpp following the example in
i.e., added {0x80, 64} and {0x86, 128} to the gpuArchCoresPerSM array, and now it seems to be OK:
```
Device 0: "NVIDIA GeForce RTX 3070" 7960Mb, sm_86, 5888 cores, Driver/Runtime ver.11.40/11.50
```
You can find a quick n' dirty patch here:
sobfu.patch
With these changes, binaries are generated, and "app" works as expected with the "Snoopy" example.
Hi @dgrzech, thanks for kindly sharing the source code; it really helps me.
However, when I compile the code and try to run it on the umbrella dataset, I find that incorrect results are produced.
Does the data need any pre-processing?
Thanks in advance for your reply; I very much appreciate it.
Hi there. Thank you for your work. I have successfully compiled your project, but when I try to run it with options on the umbrella data, it throws an error on frame 0: Segmentation fault (core dumped). I also tried to run it without any options, and it just keeps running with iter.no.x and nothing else happens. Any ideas how I could fix this problem? Thanks.
Hi, I'm working inside an Anaconda virtual environment, trying to install the software with source setup.sh -a, but I get the following output:
```
CMake Error at /home/davide/miniconda3/envs/sobfuEnv/share/OpenCV/OpenCVConfig.cmake:205 (message):
  opencv_viz is required but was not found
Call Stack (most recent call first):
  CMakeLists.txt:32 (find_package)
-- Configuring incomplete, errors occurred!
```
These are some of the package versions I've installed with conda install:
boost: 1.67
cuda: 9.1
opencv: 2.4.11
pcl: 1.8.1
vtk: 6.3
Hello, it seems that you have done a pretty cool job and I am interested in building and running your code; however, I ran into some trouble while building it.
It appears that you have chosen at least Boost 1.58, PCL 1.8.1, and CUDA 9.0 (since __all_sync() and __ballot_sync() are used in the code), with unknown versions of OpenCV and VTK. When I try to build the code on my Ubuntu 16.04 LTS with the same versions of the dependencies, I get an error from the use of __CUDACC_VER__ and I can't solve it. My guess is that Boost 1.58.0 conflicts with CUDA 9.0, so you probably used a later version of Boost, or maybe a different version of CUDA. Could you please tell me which versions you chose?
Thanks ! :)
Thanks for uploading this, good work, man!
I've been trying to reproduce Killing/SobolevFusion results for over a year now. I have an implementation over InfiniTAM
Here is canonical in the first 65 frames of the Snoopy sequence: https://www.youtube.com/watch?v=0C2Djk4jo4I
Last frame:
Tracking kinda sucks and drifts a lot; that part is being worked on.
I've been capping the iterations at 200 per frame and using voxel hashing to speed up the processing.
What really escapes me, though, is that, according to Mira Slavcheva herself in her emails to me, non-rigid alignment should converge within 30-150 iterations. Max-warp threshold was set to "0.1 mm" for KillingFusion (seems more like 0.1 voxel IMHO).
It seems like you've run into the same issue, i.e. your optimizations here for Snoopy run up to (and are capped at) 2048 iterations.
Thanks!
Hello, I'm re-reading the SobolevFusion paper and I was confused by the Sobolev approximation part. So I re-read your code for a better understanding, and I found the way you compute the Laplacian matrix odd:
```cpp
cv::Mat L_mat = -6.f * cv::Mat::eye(s3, s3, CV_32FC1);
for (int i = 0; i <= static_cast<int>(pow(params.s, 3)) - 1; ++i) {
    int idx_z = i / (params.s * params.s);
    int idx_y = (i - idx_z * params.s * params.s) / params.s;
    int idx_x = i - params.s * (idx_y + params.s * idx_z);
    if (idx_x + 1 < params.s) {
        int pos = (idx_x + 1) + idx_y * params.s + idx_z * params.s * params.s;
        L_mat.at<float>(i, pos) = 1.f;
    }
    if (idx_x - 1 >= 0) {
        int pos = (idx_x - 1) + idx_y * params.s + idx_z * params.s * params.s;
        L_mat.at<float>(i, pos) = 1.f;
    }
    if (idx_y + 1 < params.s) {
        int pos = idx_x + (idx_y + 1) * params.s + idx_z * params.s * params.s;
        L_mat.at<float>(i, pos) = 1.f;
    }
    if (idx_y - 1 >= 0) {
        int pos = idx_x + (idx_y - 1) * params.s + idx_z * params.s * params.s;
        L_mat.at<float>(i, pos) = 1.f;
    }
    if (idx_z + 1 < params.s) {
        int pos = idx_x + idx_y * params.s + (idx_z + 1) * params.s * params.s;
        L_mat.at<float>(i, pos) = 1.f;
    }
    if (idx_z - 1 >= 0) {
        int pos = idx_x + idx_y * params.s + (idx_z - 1) * params.s * params.s;
        L_mat.at<float>(i, pos) = 1.f;
    }
}
```
I checked the definition of the Laplacian matrix on Wikipedia.
Since the diagonal of the Laplacian matrix should be the row-wise sum of the corresponding adjacency matrix (the vertex degree), it shouldn't be -6 for all rows, and the off-diagonal entries L_mat.at<float>(i, pos)
should be -1.f. Am I right?
Thank you for your great work, first of all. I successfully ran your program, but the results are not very good, and processing takes about 5 s per frame. Is there something wrong with my setup? Looking forward to your reply @dgrzech