
NeROIC's People

Contributors

holzers, krrish94, willbrennan, yenchenlin, zfkuang

NeROIC's Issues

Cannot train

[screenshot of the error]

ValueError: The provided lr scheduler "<torch.optim.lr_scheduler.StepLR object at 0x7f5c0c38e6d0>" is invalid
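This ValueError typically comes from a pytorch-lightning version mismatch: the installed version expects the scheduler to be returned in a different format than the code provides, so pinning the version listed in the repo's requirements is usually the fix. The schedule itself is simple; a pure-Python sketch of the decay rule that `torch.optim.lr_scheduler.StepLR` implements (for illustration only, not the project's code):

```python
# Sketch of StepLR's decay rule: the learning rate is multiplied
# by `gamma` once every `step_size` epochs.
def step_lr(base_lr, epoch, step_size=10, gamma=0.1):
    return base_lr * gamma ** (epoch // step_size)
```

With the defaults above, the learning rate stays at `base_lr` for epochs 0 through 9, then drops tenfold at epoch 10, and again at epoch 20.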

Question on Foreground Masks

The masks in the provided data seem perfect. I am working on my own data and wondering whether it can work with a noisy foreground mask coming from a saliency-detection model, or whether the foreground mask has to be perfect. Thanks!

Relighting problems

Thanks to the authors for your contributions. I have some questions about relighting:

  1. When I run test_relighting.py on the milkbox sample with an outdoor image (downloaded from Google), the relighting results look a little off. After checking the code, I found that the results depend heavily on the setting of args.test_env_maxthres. How should I choose the environment image and set args.test_env_maxthres to achieve better relighting results?

The outdoor image we used while testing the milkbox sample: [image]

The relighting results: [image]

  2. Section 3.5 of the paper says the purpose of the rendering network is to estimate the lighting of each input image and the material properties of the object, but self.env_lights in class NeROICRenderer keeps its initial value unchanged. How does the network learn the various illumination conditions?

Image names covered in brine, custom datasets and LLFF compatibility

Your dataset structure follows the common LLFF layout, and I understand that the main issue with custom datasets and LLFF is that file names are not presented to the model. I had similar issues with nvdiffrec, and a simple file list solves any number of memory leaks that happen when loading images/files rejected by COLMAP.

A "view_imgs.txt" is pretty important, I'd think, and I'm glad some of the example datasets use a poses.npy. I do not understand the reasoning behind using remove.bg masks, constructing datasets with another .db file, and pickling a list of files (instead of a readable .txt) that users might want to edit when making their own sets.

if os.path.exists(os.path.join(mask_dir, "%s-removebg-preview.png"%img_id)):
When I am handling masks in their own separate folder, should they just be b/w images out of any of the salient-object-matting repos, or images specifically from remove.bg at a different size (to then use bicubic filtering on), with the mask only in the alpha channel and carrying an entire unused image?

There are zero technological limitations on making a dataset that is renderable and testable in instant-ngp, meshable in nvdiffrec (and this repo), optimizable in AdaNeRF and R2L, and still created from a video shot on a phone. If you're planning a dataset-creation guide, please don't rely on remove.bg filenames, new .db files, or non-user-readable pickled file lists (it takes one extra line to parse a .txt file), and do support traditional b/w masks.
All that's needed is /images, /masks, imgs.txt, and a poses.npy (pts seems to be used to build a bounding box and isn't in all your example sets).
Lowering that barrier lets anyone who can run a script make datasets, and that's why instant-ngp worked: anyone could try it with ffmpeg and a script. Forks are being made to test datasets made from my colmap2poses script; if a simple colmap2NeROIC script is needed to read COLMAP data, I can push a more forgiving LLFF dataloader along with said script.
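A loader for the kind of plain-text view list suggested above is only a few lines. A minimal sketch, where the `view_imgs.txt` name and the `/images` layout are the assumptions from this issue, not NeROIC's actual format:

```python
import os

def load_view_list(datadir, list_name="view_imgs.txt"):
    """Read one image filename per line (blank lines and '#' comments
    are skipped) and keep only entries that exist under datadir/images,
    silently dropping files that e.g. COLMAP rejected."""
    with open(os.path.join(datadir, list_name)) as f:
        names = [ln.strip() for ln in f
                 if ln.strip() and not ln.lstrip().startswith("#")]
    img_dir = os.path.join(datadir, "images")
    return [n for n in names if os.path.exists(os.path.join(img_dir, n))]
```

Because the list is a readable .txt, users can hand-edit it to exclude bad frames before training.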

Error in dataset.py; fix by adding an additional check to the padding function in ./dataset/dataset.py

While training I was getting the error below:
[screenshot]

This happens because an image dimension is greater than sh, so an additional check is needed in the padding function:
import cv2
import numpy as np

def padding(img, sh, pad_value=0):
    # If the image is larger than the target shape, downscale it first,
    # preserving the aspect ratio. Note that int() must wrap the whole
    # product: int(ratio) alone truncates ratios below 1.0 to zero.
    if sh[0] < img.shape[0]:
        new_w, new_h = int(img.shape[1] / img.shape[0] * sh[0]), sh[0]
        img = cv2.resize(img.astype(np.uint8), (new_w, new_h))
    elif sh[1] < img.shape[1]:
        new_h, new_w = int(img.shape[0] / img.shape[1] * sh[1]), sh[1]
        img = cv2.resize(img.astype(np.uint8), (new_w, new_h))
    # Pad the (possibly resized) image up to the target shape.
    if img.ndim == 2:
        return np.pad(img, [(0, sh[0] - img.shape[0]), (0, sh[1] - img.shape[1])],
                      'constant', constant_values=pad_value)
    return np.pad(img, [(0, sh[0] - img.shape[0]), (0, sh[1] - img.shape[1]), (0, 0)],
                  'constant', constant_values=pad_value)
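One subtlety when computing the resized dimensions: the placement of int() relative to the multiplication matters. Truncating the aspect ratio before scaling collapses any ratio below 1.0 to zero, which silently produces a zero-width resize target. A small demonstration with assumed sizes:

```python
h, w, target = 1080, 720, 512  # portrait image; sizes chosen for illustration

bad_w  = int(w / h) * target   # int(0.666...) -> 0, so the width collapses to 0
good_w = int(w / h * target)   # scale first, then truncate: int(341.33...) -> 341
```

Rounding after the multiplication keeps the aspect ratio intact for both portrait and landscape inputs.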

Help! An error occurred while running the code

Running 'python train.py --config configs/milkbox_geometry.yaml --datadir ./data/milkbox_dataset' fails with 'AttributeError: module 'keras.backend' has no attribute 'is_tensor''. How do I fix this? Thanks!

Segmentation masks

Hi, what kind of method can be used to get the black-and-white masks needed to run this on a custom dataset?
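Any salient-object-segmentation or background-removal tool that outputs a grayscale or alpha matte can be turned into the black-and-white masks discussed in these issues by thresholding. A minimal sketch in pure Python for illustration (in practice you would do this with OpenCV or NumPy on real image arrays):

```python
def binarize_mask(matte, threshold=128):
    """Turn a grayscale/alpha matte (rows of 0-255 values) into a strict
    black-and-white mask: 255 for foreground, 0 for background."""
    return [[255 if px >= threshold else 0 for px in row] for row in matte]

matte = [[0, 40, 200],
         [130, 255, 10]]
mask = binarize_mask(matte)  # [[0, 0, 255], [255, 255, 0]]
```

The threshold of 128 is an arbitrary midpoint assumption; noisy mattes may need a different cutoff or morphological cleanup.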

Pre-trained Model

Hi, thank you for sharing the code. Is the pre-trained model available? Thanks!

Performs badly on the validation set

I used the milkbox dataset to train the model, and the training PSNR is about 29.
However, the images synthesized during validation are nearly blank with a little noise, and the PSNR is extremely low. Is it overfitting?

I run the geometry model like this:

python train.py --config configs/milkbox_geometry.yaml --datadir data/milkbox_dataset
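For context on the gap described above: PSNR is just log-scaled MSE, so a near-blank render against a textured ground truth pushes the MSE toward the image variance and the score far below the training value. The standard formula, assuming pixel values in [0, 1]:

```python
import math

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences."""
    mse = sum((p - g) ** 2 for p, g in zip(pred, gt)) / len(gt)
    return 10 * math.log10(max_val ** 2 / mse)
```

A uniform per-pixel error of about 0.035 already corresponds to roughly 29 dB, so a large train/validation PSNR gap on the same scene does point toward overfitting or a pose/data mismatch rather than normal noise.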

How long does this take to run?

I know it depends on the strength and number of GPUs used, so how long did it take on whatever hardware you used during NeROIC's creation and testing?

Can NeROIC run on CPU?

I modified train.py, generate_normal.py, and dataset.py to use the CPU device type.
Then I ran python train.py --config configs/milkbox_geometry.yaml --datadir ./data/milkbox_dataset.
But no 'epoch=29.ckpt' file was generated.

Results not as good on custom dataset

Hello!
I have been training the network on my own data, and the rendering results are far worse than those from training on the provided datasets. I trained on a dataset of 53 images and kept the same config parameters, changing only the image size and name, of course.

One of the input images: [image]

The result images taken from the logs folder: [images]

Visualizing the dense output from COLMAP: [screenshot]

What could be the possible reasons for such results?

Hope someone sets this up on Colab!

It would be great to have a Colab for this, where the user could upload three or more images from different viewpoints and then get the 3D model out.

It would be extremely useful.
