
Comments (6)

r00tman commented on June 7, 2024

Hi, thank you for your interest in the project!

The method was designed primarily for outdoor scene relighting, and we didn't test it on indoor scenes ourselves.
This means that we don't explicitly support view-dependent effects such as reflections, speculars, and non-natural illumination sources.
I'm quite surprised that the method still worked to some degree in your case, where these assumptions don't hold.
In the video you shared, compression even manages to hide most of the artifacts that are visible in the images, except for the semi-transparency of the chair seat.
Judging from the normals, it seems the method learned "reflected" geometry behind the surface to emulate reflections, as the original NeRF typically does.
As for the hole in the chair, it could be caused either by the strong speculars in that region or by not having enough training views/lighting conditions.
I suggest using data with fewer speculars in the current hole region and adding more training data.
More training data should also reduce the artifacts in the floor normals.

Again, it's quite exciting to see NeRF-OSR almost working on indoor scenes.
So if you make any progress or have more questions, please write here!

Best,
Viktor

r00tman commented on June 7, 2024

Also, it seems the rendered images are cropped a bit.
This is a known issue: the code assumes a 1044x775 resolution for views that have no ground-truth image.
A quick fix would be to change these default values here: https://github.com/r00tman/NeRF-OSR/blob/main/data_loader_split.py#L100
The rendered images would then cover the whole view, as in your ori_img.
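
For illustration, the change would look roughly like this; the variable names below are assumptions, so check the linked line for the actual ones:

```python
# data_loader_split.py, near the linked line (names are illustrative).
# Fallback resolution assumed for views without a ground-truth image:
# W, H = 1044, 775   # original hard-coded defaults
W, H = 1920, 1080    # e.g., match your own capture resolution
```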

Also, it seems the camera poses for the videos were generated using the transition_path.py script.
Please note that it only does a linear blend of the transforms instead of proper pose interpolation.
Hence, the resulting views might look sheared or distorted when the camera rotation changes significantly.
A quick fix would be to add one or two more in-between views to arr in such cases; then it would work better.
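
For reference, here is a rough sketch of what proper interpolation between two poses could look like. This is only an illustration, not what transition_path.py currently does; it assumes 4x4 camera-to-world matrices and uses SciPy's Slerp for the rotation part:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_poses(pose_a, pose_b, num_steps):
    """Interpolate between two 4x4 camera-to-world poses.

    Rotations are interpolated with SLERP and translations linearly,
    which avoids the shearing that a linear blend of the full matrices
    produces when the rotation changes significantly.
    """
    rots = Rotation.from_matrix(np.stack([pose_a[:3, :3], pose_b[:3, :3]]))
    slerp = Slerp([0.0, 1.0], rots)
    poses = []
    for t in np.linspace(0.0, 1.0, num_steps):
        pose = np.eye(4)
        pose[:3, :3] = slerp(t).as_matrix()
        pose[:3, 3] = (1.0 - t) * pose_a[:3, 3] + t * pose_b[:3, 3]
        poses.append(pose)
    return poses
```

With something like this, two keyframes stay rigid even across large rotations, so no extra in-between views are needed.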

wangmingyang4 commented on June 7, 2024

Thanks for your reply!

wangmingyang4 commented on June 7, 2024

At test time, your technique can synthesise novel images at arbitrary camera viewpoints and under arbitrary scene illumination; the user directly supplies the desired camera pose and the scene illumination, either from an environment map or directly via SH coefficients.
I see that the code you provided loads SH coefficients for testing. Is there any code for testing by loading an environment map (such as *.hdr or *.exr) instead?
I am looking for documentation, code, or an example of how to extract spherical harmonic coefficients from a light-probe HDR file.
Can anyone help?

qizipeng commented on June 7, 2024

Hi, I also think this is wonderful work for generating novel views and relit images from an optimised neural network. However, I have a similar issue: during testing and validation, the NeRF takes the default values as the environment parameters and does not optimise them, so the validation and test images are not really meaningful. That said, I find this work very inspiring!

r00tman commented on June 7, 2024

Hi. Thank you for the kind words!

You can change the environments used for testing with the --test_env argument (implemented here).
There you can provide either a folder with per-view SH environments or a path to a single SH environment stored as a .txt file.
Even if you provide a single SH environment, you can still rotate it around the building by using the --rotate_test_env argument in addition to --test_env.

The default value for the environment is taken from one of the runs of the method on our data. When we used completely random initialisation values, the model often diverged. Using these coefficients instead resulted in more consistent and better training results on the tested scenes. Coincidentally, they are also used for rendering views where no other envmap is found, i.e., validation and test views (when the --test_env argument is not provided).

To use an external LDR/HDR envmap, you would first need to convert it to SH coefficients. The conversion just fits the closest SH coefficients with least squares. The script for that is not yet in the repo, but I'll upload it soon, along with instructions on how to reproduce our numerical results from the paper. The latter involves using external environment maps and this SH conversion step too, so it should be helpful.
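
In the meantime, here is a rough sketch of such a least-squares fit, assuming a lat-long (equirectangular) HDR map, a z-up spherical parametrisation, and band-2 real SH (9 coefficients per RGB channel). This is not the exact script; the SH convention and output layout that the repo's .txt format expects may differ:

```python
import numpy as np
import imageio.v3 as iio  # reading .hdr may need an extra backend, e.g. freeimage

def sh_basis(dirs):
    """First 9 real SH basis functions (bands 0-2) at unit directions.

    dirs: (N, 3) array of unit vectors. Returns (N, 9).
    Constants follow the standard real SH normalisation
    (Ramamoorthi & Hanrahan, 2001).
    """
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),   # l=0
        0.488603 * y,                 # l=1, m=-1
        0.488603 * z,                 # l=1, m=0
        0.488603 * x,                 # l=1, m=1
        1.092548 * x * y,             # l=2, m=-2
        1.092548 * y * z,             # l=2, m=-1
        0.315392 * (3 * z**2 - 1),    # l=2, m=0
        1.092548 * x * z,             # l=2, m=1
        0.546274 * (x**2 - y**2),     # l=2, m=2
    ], axis=-1)

def envmap_to_sh(path):
    """Least-squares fit of 9 SH coefficients per RGB channel to a
    lat-long HDR environment map. Because the basis is orthonormal,
    the fit reduces to a solid-angle-weighted projection integral."""
    img = iio.imread(path).astype(np.float64)     # (H, W, 3), linear radiance
    h, w = img.shape[:2]
    theta = (np.arange(h) + 0.5) / h * np.pi      # polar angle per row
    phi = (np.arange(w) + 0.5) / w * 2 * np.pi    # azimuth per column
    theta, phi = np.meshgrid(theta, phi, indexing="ij")
    dirs = np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=-1).reshape(-1, 3)
    # Solid angle of each pixel: sin(theta) * dtheta * dphi.
    weights = (np.sin(theta) * (np.pi / h) * (2 * np.pi / w)).reshape(-1)
    rgb = img.reshape(-1, 3)
    coeffs = sh_basis(dirs).T @ (rgb * weights[:, None])  # (9, 3)
    return coeffs.T                                       # (3, 9): per channel

coeffs = envmap_to_sh("probe.hdr")  # "probe.hdr" is a placeholder path
print(coeffs.shape)                 # (3, 9)
```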
