
nerf-osr's Introduction

NeRF for Outdoor Scene Relighting

Viktor Rudnev, Mohamed Elgharib, William Smith, Lingjie Liu, Vladislav Golyanik, Christian Theobalt

Codebase for the ECCV 2022 paper "NeRF for Outdoor Scene Relighting".

Based on the NeRF++ codebase; it inherits the same training data preprocessing and format.

Data

Our datasets and the preprocessed Trevi dataset from PhotoTourism can be found here. Put the downloaded folders into the data/ sub-folder of the code directory.

See the NeRF++ sections on data and COLMAP for how to adapt a new dataset for training. In addition, we also support masking: add a mask directory with monochrome masks alongside the rgb directory in the dataset folder. For more details, refer to the provided datasets.
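For example, with a hypothetical scene called newdataset and placeholder filenames, the masks mirror the image names:

data/newdataset/rgb/IMG_0001.jpg
data/newdataset/mask/IMG_0001.jpg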

So, if you have an image dataset, you need to do the following (a condensed command summary follows the list):

  1. Set the path to your colmap binary in colmap_runner/run_colmap.py:13.
  2. Create a dataset directory in data/, e.g., data/newdataset, and create source and out subfolders, e.g., data/newdataset/source, data/newdataset/out.
  3. Copy all the images to data/newdataset/source.
  4. Run colmap_runner/run_colmap.py data/newdataset from the root folder.
  5. This sets the data up, undistorts the images to data/newdataset/rgb, and writes the calibrated camera parameters to data/newdataset/kai_cameras_normalized.json.
  6. Optionally, you can now generate masks using the data/newdataset/rgb/* images as the source, to filter out, e.g., people, bicycles, cars, or other dynamic objects. We used this repository to generate the masks. The grayscale masks should be placed in the data/newdataset/mask/ subfolder. You can use the provided datasets as a reference.
  7. Now that we have all the data and calibrations, we need to create train, val, and test splits. To do so, first create the corresponding subfolders: data/newdataset/{train,val,test}/rgb. Then split the images as you like by copying them from data/newdataset/rgb to the corresponding split's rgb folder, e.g., data/newdataset/train/rgb/.
  8. Generate the camera parameters for the splits by running colmap_runner/cvt.py from within the dataset directory. It automatically copies all camera parameters and masks to the split folders.
  9. The dataset folder is now ready, and you need to create the dataset config. You can copy the config from a provided dataset, e.g., here, to configs/newdataset.txt. Then set datadir to data, scene to newdataset, and choose an expname in the config.
  10. Launch the training with python ddp_train_nerf.py --config configs/newdataset.txt.
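For convenience, the steps above condense roughly to the following commands. Here newdataset is a placeholder name, the image split and mask generation remain manual steps, and invoking the helper scripts via python is an assumption:

mkdir -p data/newdataset/source data/newdataset/out
cp /path/to/your/images/* data/newdataset/source/
python colmap_runner/run_colmap.py data/newdataset
mkdir -p data/newdataset/{train,val,test}/rgb
# manually copy images from data/newdataset/rgb/ into the split rgb folders, then:
(cd data/newdataset && python ../../colmap_runner/cvt.py)
# copy a provided config and edit datadir, scene, and expname:
cp configs/<provided>.txt configs/newdataset.txt
python ddp_train_nerf.py --config configs/newdataset.txt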

Models

We provide pre-trained models here. Put the folders into the logs/ sub-directory. Use the scripts from the scripts/ subfolder for testing.

Create environment

conda env create --file environment.yml
conda activate nerfosr

Training and Testing

Use the scripts from the scripts/ subfolder for training and testing; a sketch of typical invocations follows below.
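For reference, a typical train/test pair looks like the following, patterned after the provided scripts with lk2 as the example scene. The split name and envmap path are placeholders; the exact values depend on the dataset:

python ddp_train_nerf.py --config configs/lk2/final.txt
python ddp_test_nerf.py --config configs/lk2/final.txt --render_splits static_path1_blend --test_env data/lk2/final/static_path1_blend/envmaps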

VR Demo

Precompiled binaries, source code, and the extracted Site 1 mesh can be found here.

To run the demo, make sure you have an OpenVR runtime such as SteamVR installed and launch run.bat in the hellovr_opengl directory.

To extract the mesh from another model, run

python ddp_mesh_nerf.py --config configs/lk2/final.txt
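For orientation, mesh extraction of this kind typically evaluates the trained model's density on a regular 3D grid and runs marching cubes on it. A minimal Python sketch of the idea, assuming a precomputed density grid (the filename and threshold are hypothetical, and this is not necessarily how ddp_mesh_nerf.py implements it):

import numpy as np
from skimage import measure

# density sampled from the trained model on an N x N x N grid (placeholder file)
density = np.load('density_grid.npy')

# extract the isosurface at a chosen density threshold
verts, faces, normals, values = measure.marching_cubes(density, level=10.0)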

The list of folder name correspondences can be found in the README of the dataset.

Note that in the VR demo executable, we also clip the model to keep only the main building (ll. 1446-1449 of the source). The bounds are hard-coded for Site 1.
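Clipping of this kind amounts to discarding geometry outside an axis-aligned box. An illustrative Python sketch of the idea (the demo itself does this in its C++ source; the bounds below are made up, and a full mesh clip would also re-index faces):

import numpy as np

def clip_to_aabb(vertices, lo, hi):
    # keep only vertices that fall inside the axis-aligned box [lo, hi]
    inside = np.all((vertices >= lo) & (vertices <= hi), axis=1)
    return vertices[inside]

mesh_vertices = np.random.rand(1000, 3) * 4 - 2  # stand-in for the extracted mesh
clipped = clip_to_aabb(mesh_vertices, np.array([-1.0, -1.0, -1.0]), np.array([1.0, 1.0, 1.0]))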

To recompile the code, refer to the OpenVR instructions, as the demo is based on one of the OpenVR samples.

Citation

Please cite our work if you use the code.

@InProceedings{rudnev2022nerfosr,
      title={NeRF for Outdoor Scene Relighting},
      author={Viktor Rudnev and Mohamed Elgharib and William Smith and Lingjie Liu and Vladislav Golyanik and Christian Theobalt},
      booktitle={European Conference on Computer Vision (ECCV)},
      year={2022}
}

License

Permission is hereby granted, free of charge, to any person or company obtaining a copy of this software and associated documentation files (the "Software") from the copyright holders to use the Software for any non-commercial purpose. Publication, redistribution and (re)selling of the software, of modifications, extensions, and derivatives of it, and of other software containing portions of the licensed Software, are not permitted. The copyright holder is permitted to publicly disclose and advertise the use of the software by any licensee.

Packaging or distributing parts or whole of the provided software (including code, models and data) as is or as part of other software is prohibited. Commercial use of parts or whole of the provided software (including code, models and data) is strictly prohibited. Using the provided software for promotion of a commercial entity or product, or in any other manner which directly or indirectly results in commercial gains is strictly prohibited.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


nerf-osr's Issues

Questions about relighting effects

The algorithm does not produce the desired effect on an indoor dataset.
As shown in the figure below, the relighting result is not very good: there is a "hole" on the surface of the chair.
I guess this is due to some inaccuracies in the geometry estimation.
I would appreciate your reply.
@r00tman

ori_img: [image]

relighting: [images: 000000, 000046, fg_normal_000046]

[Errno 2] No such file or directory: 'logs/trevi_final_masked_flipxzinitenv/train_images.json'

Hi @r00tman ,
Thanks for sharing the code and the wonderful work.
I've downloaded all the models and data, but when I try to run the test *.sh scripts, I get an error message like:

Traceback (most recent call last):
  File "/home/mi/anaconda3/envs/nerf/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
    fn(i, *args)
  File "/home/mi/PycharmProjects/NeRF-OSR/ddp_test_nerf.py", line 48, in ddp_test_nerf
    start, models = create_nerf(rank, args)
  File "/home/mi/PycharmProjects/NeRF-OSR/ddp_train_nerf.py", line 348, in create_nerf
    with open(f) as file:
FileNotFoundError: [Errno 2] No such file or directory: 'logs/trevi_final_masked_flipxzinitenv/train_images.json'

How can I get these JSON files, or how can I generate them?

Looking to try out the project but the download link is really slow

Hi there,

This is a very promising project, thanks for open-sourcing it.
We are trying to evaluate the project, but the model and data download link is really slow (< 100 kbps).
In the best interest of the project, can the data and models please be uploaded to some other cloud storage?

Thanks

Question about masks for evaluation

Hello. Thank you for sharing your excellent research and code.

I have a question about the dataset. Your paper mentions the following: "Furthermore, even though NeRF-OSR can synthesise the sky and vegetation (e.g., trees), it is not possible to evaluate their predictions due to their highly varying appearance, especially when recordings sessions span different weather seasons. Hence, we also estimate the masks of these regions and exclude them from our evaluation."

However, I noticed that there are no masks available under the 'scene_name/final/test' directory in the dataset, and the masks found in the 'scene_name/final/mask' directory do not seem to include masks for trees and the sky. In order to conduct a fair evaluation, could you please release the masks that you used for evaluation?

Weird Parameters?

Hi Viktor et al. - very excited about this code, but in all the test*.sh files there's a parameter --test_env, e.g.:
python ddp_test_nerf.py --config configs/europa/final.txt --render_splits static_path1_blend --test_env data/europa/final/static_path1_blend/envmaps
...but --test_env doesn't seem to be a supported parameter - I can't see it added anywhere in the code, and ddp_test_nerf.py never gets past the fact that it doesn't recognise one or more parameters. Am I missing something obvious?

Error when performing mesh extraction

The following error is generated when running python ddp_mesh_nerf.py --config configs/lk2/final.txt. Any ideas how to resolve it?

2023-02-12 15:56:18,287 [INFO] root: Found ckpts: ['logs/lk2_final_masked_flipxzinitenv/model_420000.pth']
2023-02-12 15:56:18,287 [INFO] root: Reloading from: logs/lk2_final_masked_flipxzinitenv/model_420000.pth
Traceback (most recent call last):
  File "ddp_mesh_nerf.py", line 102, in <module>
    mesh()
  File "ddp_mesh_nerf.py", line 97, in mesh
    join=True)
  File "/projappl/project_2007011/miniconda3/envs/nerfosr/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 200, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/projappl/project_2007011/miniconda3/envs/nerfosr/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 158, in start_processes
    while not context.join():
  File "/projappl/project_2007011/miniconda3/envs/nerfosr/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 119, in join
    raise Exception(msg)
Exception: 

-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "/projappl/project_2007011/miniconda3/envs/nerfosr/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
    fn(i, *args)
  File "/scratch/project_2007011/NeRF-OSR/ddp_mesh_nerf.py", line 32, in ddp_mesh_nerf
    start, models = create_nerf(rank, args)
  File "/scratch/project_2007011/NeRF-OSR/ddp_train_nerf.py", line 420, in create_nerf
    models[name].load_state_dict(to_load[name])
  File "/projappl/project_2007011/miniconda3/envs/nerfosr/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1045, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for DistributedDataParallel:
	Missing key(s) in state_dict: "module.env_params.01-08_07_30_IMG_6645-JPG", "module.env_params.01-08_07_30_IMG_6650-JPG",

Question about min_depth and max_depth in your code and data

Hi~ Thanks for your awesome work!
I am using your dataset to train my own NeRF. I see min_depth and max_depth in your dataloader, but I have not seen these in your provided dataset. Are min_depth and max_depth necessary? Will it cause an error in your code when min_depth is None?

Hoping for your reply. Thank you very much.

What is the env params of testing or validating images?

Hi!
Thanks for sharing your code - I think this is a wonderful work that tackles the problem of relighting outdoor scenes within a NeRF framework.
I have an issue with the env params consisting of 9*3 variables. I understand how the network processes the training images and optimizes the env params from their default values. However, I find that the network also takes the default values as the testing env params and does not optimize them, if I don't misunderstand this. Is this reasonable? How do you get the default values?

Looking forward to your answer,
Thanks again!
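For context on the question above: a 9*3 env parametrisation matches second-order spherical harmonics lighting, 9 coefficients per colour channel. A minimal sketch of evaluating such lighting for a unit surface normal, using the standard SH band constants (illustrative only, not the repository's exact code):

import numpy as np

def sh_shading(normal, coeffs):
    # normal: unit 3-vector; coeffs: (9, 3) array, one column of 9 SH coefficients per RGB channel
    x, y, z = normal
    basis = np.array([
        0.282095,
        0.488603 * y,
        0.488603 * z,
        0.488603 * x,
        1.092548 * x * y,
        1.092548 * y * z,
        0.315392 * (3 * z * z - 1),
        1.092548 * x * z,
        0.546274 * (x * x - y * y),
    ])
    return basis @ coeffs  # (3,) RGB shading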

How to obtain rendered images through Test scripts

I use
python ddp_test_nerf.py --config configs/lk2/final.txt --render_splits static_path1_blend --test_env data/lk2/final/static_end_blend/envmaps

and obtain the output

2023-07-22 19:33:57,538 [INFO] root: Reloading from: logs/lk2_final_masked_flipxzinitenv/model_1165000.pth
2023-07-22 19:33:57,657 [INFO] root: raw intrinsics_files: 0
2023-07-22 19:33:57,659 [INFO] root: raw pose_files: 0
2023-07-22 19:33:57,659 [INFO] root: Split static_path1_blend, # views: 0

Question about training iterations

Hello, I have a question about the training iterations.
In your paper, you train the model for 5 * 10^5 iterations. However, N_iters in config.txt is set to 5000001. Which number of iterations is correct?
