npbgpp's People

Contributors

rakhimovv


npbgpp's Issues

The inference of npbg

Hi, thanks for your great work! Could you provide the command line used to evaluate NPBG after fine-tuning on ScanNet? I replaced the checkpoint path with the last.ckpt produced by fine-tuning and added eval_only=true, but the results seem incorrect.

(attached screenshots: rendered output, raster)

Make datasets accessible

Hi,

Thanks a lot for your contributions! Currently, the datasets are hosted on yandex.disk, but it is not possible to download them in bulk: clicking "download all" on yandex.disk requires a premium subscription. Could you upload the datasets to a different data repository instead, such as Zenodo? Zenodo also supports licensing the dataset. See this repository for an example.

Reproducing the Results from the Paper

Hey, I got two questions/requests.

  1. I would like to reproduce the results from the paper for the 8 synthetic NeRF scenes. More precisely, I need the 8 * 200 test images generated by NPBG++. I see that you provide checkpoints and instructions, but to avoid unnecessary work, I wanted to ask whether it would be possible to upload the images you used for the paper.
    Alternatively, I would appreciate a quick explanation of how to generate the results for the synthetic NeRF dataset from the provided checkpoints. Are there any changes to the default commands I should make to get the exact results?

  2. I think support for the CO3D dataset would be a great addition. Have you considered adding it and if not, do you have any idea of how I could add support for the dataset myself? Specifically, I wonder how I would proceed if I wanted to optimize a NPBG++ model for a single scene of the CO3D dataset (CO3D contains around 50 object categories with hundreds of scenes each).

Error when running train_net.py

Hi, thanks for your wonderful work. When I run train_net.py with the following command line, I get this error:

python train_net.py trainer.gpus=1 hydra.run.dir=experiments/npbgpp_scannet datasets=scannet_pretrain datasets.n_point=6e6 system=npbgpp_sphere system.visibility_scale=0.5 trainer.max_epochs=39 dataloader.train_data_mode=each trainer.reload_dataloaders_every_n_epochs=1
In 'config': Could not find 'trainer/project'
Config search path:
provider=hydra, path=pkg://hydra.conf
provider=main, path=file:///public/home/luanzl/WorkSpace/npbgpp/configs
provider=schema, path=structured://
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.

By the way, when I try to debug train_net.py step by step in the PyCharm IDE on a remote server, I get the same error:

hydra.errors.MissingConfigException: In 'config': Could not find 'trainer/project'
Config search path:
provider=hydra, path=pkg://hydra.conf
provider=main, path=file:///public/home/luanzl/WorkSpace/npbgpp/configs
provider=schema, path=structured://

I think this is caused by an error in the config.yaml file, related to the trainer: project entry.

May I know how can I resolve these issues? Thanks in advance!
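For context on this error: Hydra resolves each entry in the defaults list of configs/config.yaml, such as `trainer: project`, to a YAML file at `<config search path>/trainer/project.yaml`. The following is a minimal sketch of that lookup (not Hydra's actual implementation), showing why a missing or renamed file produces exactly this message:

```python
from pathlib import Path

def resolve_config_group(config_root: str, group: str, option: str) -> Path:
    """Mimic Hydra's config-group lookup: a defaults entry like
    'trainer: project' must map to <config_root>/trainer/project.yaml."""
    candidate = Path(config_root) / group / f"{option}.yaml"
    if not candidate.is_file():
        # Hydra raises MissingConfigException with a similar message
        raise FileNotFoundError(
            f"In 'config': Could not find '{group}/{option}' "
            f"(expected {candidate})"
        )
    return candidate
```

So the error usually means configs/trainer/project.yaml is absent from the checkout (an incomplete clone, or a renamed option in the defaults list); restoring that file or fixing the defaults entry should resolve it.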

Question about setting up coordinates for another dataset (camera poses and lidar)

Hi @rakhimovv @TArdelean, thanks for your excellent work. I want to reproduce NPBG++ on the RobotCar Dataset, which provides lidar scans, RGB images, and camera poses (from IMUs) captured by a moving car with onboard sensors. I am curious how Metashape was used to generate the point clouds and camera poses in your work.

Note that I have tried converting the camera poses and lidar point clouds to the OpenGL coordinate convention, but the PSNR is quite low (around 5), so I suspect the coordinates are wrong. I appreciate any help you can provide!
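On the coordinate question, one common source of near-random PSNR is a wrong camera-axis convention. Converting camera-to-world poses from an OpenCV/lidar-style convention (x right, y down, z forward) to OpenGL (x right, y up, z backward) flips the y and z camera axes; whether RobotCar's IMU-derived poses actually use the OpenCV convention is an assumption to verify. A minimal sketch with 4x4 matrices as nested lists:

```python
def opencv_to_opengl_c2w(c2w):
    """Flip the y and z camera axes of a 4x4 camera-to-world matrix:
    equivalent to right-multiplying by diag(1, -1, -1, 1)."""
    sign = [1.0, -1.0, -1.0, 1.0]
    return [[c2w[r][c] * sign[c] for c in range(4)] for r in range(4)]
```

If the PSNR stays around 5 after the flip, the next things to check are the world scale and the alignment between the lidar point cloud and the poses.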

Dataset

Hi authors,

This is such awesome work! I would like to build on it, but I am currently running into a problem: when I look into your dataset, the dataset folder contains only point cloud files. Could you please provide the complete dataset, similar to the data in the example folder? I really appreciate your assistance.

Cheers,
Jiahao

Quick question about ScanNet and DTU data preprocessing

Dear authors,

Thank you so much for your great work and clean code. I had a quick question -- how do you preprocess your ScanNet and DTU data for pretraining? For ScanNet, I believe this would involve generating full.ply and images/ files for each scan in the dataset. For DTU, I believe this involves generating masks and point clouds.

Your help is greatly appreciated.

All the best,

How to generate the point cloud of scannet scenes

The paper mentions that the point clouds for the ScanNet dataset were reconstructed from depth maps following the NPBG setup. However, the NPBG code appears to reconstruct point clouds from RGB only, so the scale and coordinate system of the reconstructed point clouds do not match the ScanNet world coordinate system. What do I need to do to generate a point cloud like the one in the example?
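For reference, a depth-map-based ScanNet reconstruction fuses the sensor depth: each depth pixel is back-projected through the frame's intrinsics and camera-to-world pose, so the points land directly in the ScanNet world frame (unlike an RGB-only reconstruction, whose scale and frame are arbitrary). A minimal per-pixel sketch of that back-projection; real code would vectorize over whole depth maps, and the exact fusion used by the authors is an assumption:

```python
def unproject_pixel(u, v, depth, fx, fy, cx, cy, c2w):
    """Back-project one pixel with known depth into world coordinates.

    u, v: pixel coordinates; depth: metric depth at that pixel;
    fx, fy, cx, cy: pinhole intrinsics; c2w: 4x4 camera-to-world matrix
    (ScanNet provides per-frame poses and intrinsics)."""
    # pixel -> camera space via the pinhole model
    xc = (u - cx) * depth / fx
    yc = (v - cy) * depth / fy
    zc = depth
    # camera -> world with the homogeneous pose matrix (first 3 rows)
    p = [xc, yc, zc, 1.0]
    return [sum(c2w[r][k] * p[k] for k in range(4)) for r in range(3)]
```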

Segmentation fault (core dumped)

When I run the testing command, the following error always appears:

Segmentation fault (core dumped)

I debugged step by step and found that the program exits at this line of code, but I don't know why; I followed the instructions without any changes.

model: pl.LightningModule = instantiate(cfg.system.system_class, cfg)

Can anybody help me? Thanks very much!
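One way to get more information out of a hard crash like this is the standard-library faulthandler module, which prints the Python-level stack when the interpreter receives SIGSEGV. (In setups like this, a segfault at model instantiation often traces back to a mismatched CUDA extension or PyTorch build, though that is only a guess.)

```python
import faulthandler
import sys

# Print a Python traceback if the interpreter crashes (e.g. SIGSEGV).
# Equivalent to running the script as `python -X faulthandler train_net.py ...`;
# enable it before the call that crashes:
faulthandler.enable(file=sys.stderr)
```

The traceback it prints on the crash points at the C extension that actually faulted, which narrows down whether the problem is in the rasterizer extension, PyTorch itself, or elsewhere.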

Reproduce the quantitative evaluation on scannet

Hi!

Thank you for your great work and clean code! I'm interested in your work and trying to reproduce some results using the ScanNet epoch38 checkpoint given by you.

As far as I understand, the only holdout scene for ScanNet is scene0000_00, so my test result on scene0000_00 should match the performance reported in the paper:

(screenshot: metrics reported in the paper)

However, I can't reach that performance using the checkpoint you released.

Below is the performance I could reach:

(screenshot: metrics I obtained)

Here is the command I used to test scene0000_00:

python train_net.py trainer.gpus=1 hydra.run.dir=experiments/<your_test_experiment_folder> datasets=scannet_one_scene datasets.data_root=${hydra:runtime.cwd}/example/scannet datasets.scene_name=scene0000_00 system=npbgpp_sphere system.visibility_scale=0.5 weights_path=./<your_weights_ckpt_file> eval_only=true dataloader=small datasets.n_point=6e6

Did I miss or misunderstand something about the test command?

Thank you in advance!

Best regards!
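When comparing numbers like these, it can help to rule out a metric mismatch first: for images normalized to [0, 1], PSNR is 10 * log10(1 / MSE). A minimal sketch over flat pixel sequences (the repo presumably computes this on image tensors, so this is only for sanity-checking values):

```python
import math

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences."""
    mse = sum((p - g) ** 2 for p, g in zip(pred, gt)) / len(pred)
    return 10.0 * math.log10(max_val ** 2 / mse)

# e.g. a uniform per-pixel error of 0.1 gives MSE 0.01, i.e. PSNR 20 dB
```

Differences of a few dB between runs can also come from evaluating at a different resolution or on a different image crop than the paper used.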

Can you provide a Docker environment?

I encountered difficulties when configuring the environment on an A100 GPU according to the README. Could you please provide a Dockerfile or a link to a Docker image repository?
