snap-research / NeROIC
License: Other
Traceback (most recent call last):
  File "train.py", line 341, in <module>
    train()
  File "train.py", line 313, in train
    checkpoint_callback = ModelCheckpoint(dirpath=os.path.join(args.basedir, args.expname),
TypeError: __init__() got an unexpected keyword argument 'every_n_val_epochs'
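This kind of error usually comes from a library version mismatch; a small compatibility shim like the sketch below can paper over it, assuming the cause is PyTorch Lightning renaming ModelCheckpoint's `every_n_val_epochs` argument to `every_n_epochs` (deprecated in 1.4 and later removed):

```python
def checkpoint_kwargs(every_n, pl_version):
    # Pick the ModelCheckpoint keyword that matches the installed
    # PyTorch Lightning version: `every_n_epochs` for >= 1.6, the older
    # `every_n_val_epochs` otherwise.
    major, minor = (int(x) for x in pl_version.split(".")[:2])
    key = "every_n_epochs" if (major, minor) >= (1, 6) else "every_n_val_epochs"
    return {every_n and key or key: every_n}
```

One could then call `ModelCheckpoint(dirpath=..., **checkpoint_kwargs(1, pytorch_lightning.__version__))`; alternatively, simply pin `pytorch-lightning` to the version the repo was developed against.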
The figure dataset is missing poses_bounds.npy; please check and share it. Thanks! @formyfamily
Hello, I came here from the Instant-NGP issue "export of 3D model is not clear ...". Is there any way to export clean 3D models from this paper's method that can run in general 3D software?
Can you tell us when the code will be open-sourced? We have been waiting a very long time and are really looking forward to your results.
The milkbox in the video I generated rotates very quickly. How can I reduce the rotation speed?
The masks in the provided data seem perfect. I am working on my own data and wondering whether the method can work with a noisy foreground mask coming from a saliency detection model, or whether it has to be a perfect foreground mask. Thanks!
Thanks to the authors for your contributions. I have some questions about relighting: I ran test_relighting.py on the milkbox sample with an outdoor image (downloaded from Google), but the relighting results seem a little bad. After checking the code, I found that the relighting results are highly dependent on the setting of args.test_env_maxthres. I am curious how to choose the environment image and how to set args.test_env_maxthres to achieve better relighting results. The outdoor image we used while testing the milkbox sample:
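For what it's worth, thresholds like this are often used to clip extreme highlights in an environment image before relighting, so that a few saturated pixels (e.g. the sun) don't dominate. The sketch below is only a hedged guess at that general idea, not a description of what NeROIC's args.test_env_maxthres actually does:

```python
import numpy as np

def normalize_env_map(env, maxthres):
    # Hypothetical normalization: clip pixel intensities at `maxthres`,
    # then rescale to [0, 1], so isolated bright spots are tamed.
    env = np.asarray(env, dtype=np.float32)
    return np.clip(env, 0.0, maxthres) / maxthres
```

Under this reading, a lower maxthres brightens the overall lighting but flattens highlights, which may explain why the results are sensitive to the setting.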
self.env_lights in class NeROICRenderer keeps its value unchanged after initialization. I am curious how the network learns the various illumination conditions.
It would be nice if we could try out the model on Hugging Face using Gradio.
Your dataset structure follows common LLFF, and I understand that the main issue with custom datasets and LLFF is the lack of file names being presented to the model. I had similar issues with nvdiffrec, and a simple list solves any number of memory leaks that happen when loading images/files rejected by COLMAP.
A "view_imgs.txt" is pretty important, I'd think, and I'm glad some of the example datasets use a poses.npy. I do not understand the reasoning behind using remove.bg masks, constructing datasets with another .db file, and pickling a list of files (instead of a readable .txt) that users might want to edit when making their own sets.
Line 243 in e535d50
There are zero technological limitations on making a dataset renderable and testable in instant-ngp, meshable in nvdiffrec (and this repo), optimizable in AdaNeRF and R2L, and still created from a video shot on a phone. If you're planning on making a dataset-creation guide, please don't use remove.bg filenames, don't use new .db files, don't use non-user-readable lists of files (it takes one extra line to parse a .txt file), and do support traditional b/w masks.
All that's needed is /images, /masks, imgs.txt, and a poses.npy (pts seems to be there to build a bounding box and isn't in all your example sets).
Lowering that barrier allows anyone who knows how to run a script to make datasets, and it's why instant-ngp worked: anyone could try it out with ffmpeg and a script. Forks are being made to test datasets made from my colmap2poses script; if a simple colmap2NeROIC script is needed to read COLMAP data, I can make a push with a more forgiving LLFF dataloader and said script.
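The plain-text layout proposed above could be consumed in a few lines. This sketch assumes the hypothetical imgs.txt the comment describes (one image filename per line), not anything in the current repo:

```python
from pathlib import Path

def load_image_list(root):
    # Read one image filename per line from <root>/imgs.txt and resolve
    # each name against <root>/images; blank lines are skipped.
    lines = (Path(root) / "imgs.txt").read_text().splitlines()
    return [Path(root) / "images" / name.strip() for name in lines if name.strip()]
```

A user-editable list like this also makes it trivial to drop frames that COLMAP rejected: just delete the line.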
Hi,
I'm running it on my own dataset using two GPUs and it stuck like this:
Any suggestions?
btw, here https://github.com/snap-research/NeROIC/tree/master/scripts it shouldn't be
cd utils/data_preproccess
It should be
cd scripts
instead.
When I was training, I got an error because the image dimensions were greater than sh, so an additional check is needed in the padding function. Note also that the original wrote int(img.shape[1]/img.shape[0])*sh[0], which truncates the aspect ratio to an integer before multiplying; the int() should wrap the whole product:

import cv2
import numpy as np

def padding(img, sh, pad_value=0):
    # If the image exceeds the target shape, downscale it first
    # (preserving aspect ratio), then pad up to (sh[0], sh[1]).
    if sh[0] < img.shape[0]:
        new_w, new_h = int(img.shape[1] / img.shape[0] * sh[0]), sh[0]
        img = cv2.resize(img.astype(np.uint8), (new_w, new_h))
    elif sh[1] < img.shape[1]:
        new_h, new_w = int(img.shape[0] / img.shape[1] * sh[1]), sh[1]
        img = cv2.resize(img.astype(np.uint8), (new_w, new_h))
    if img.ndim == 2:
        return np.pad(img, [(0, sh[0] - img.shape[0]), (0, sh[1] - img.shape[1])],
                      'constant', constant_values=pad_value)
    return np.pad(img, [(0, sh[0] - img.shape[0]), (0, sh[1] - img.shape[1]), (0, 0)],
                  'constant', constant_values=pad_value)
In the process of running 'python train.py --config configs/milkbox_geometry.yaml --datadir ./data/milkbox_dataset', an error occurs: 'AttributeError: module 'keras.backend' has no attribute 'is_tensor''. How do I fix this? Thanks!
Hi, what kind of a method can be used to get the black and white masks needed to run this on a custom dataset?
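One common route, assuming you already have RGBA cutouts (e.g. from remove.bg or a background-removal tool such as rembg), is to threshold the alpha channel into a black-and-white mask. A minimal sketch:

```python
import numpy as np

def alpha_to_mask(rgba, thresh=128):
    # rgba: uint8 image of shape (H, W, 4). Returns a binary 0/255 mask
    # built by thresholding the alpha channel.
    alpha = rgba[..., 3]
    return np.where(alpha >= thresh, 255, 0).astype(np.uint8)
```

A segmentation model (e.g. a saliency or matting network) would work too; the key point is that the training code expects a single-channel black/white image per frame.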
Hi, thank you for sharing the code. Is the pre-trained model available? Thanks!
I used the milkbox dataset to train the model and the training PSNR is about 29.
However, the images synthesized in validation are nearly blank with a little noise, and the PSNR is extremely low. Is it overfitting?
I just ran the geometry model like this:
python train.py --config configs/milkbox_geometry.yaml --datadir data\milkbox_dataset
Thank you for this great work!
I am wondering whether any model files, such as .obj files, are generated during the training or test procedure. I only find some .png files and .mp4 files in the logs and results directories, respectively. I have executed all steps of Quick Start and Testing in README.md.
Your help is greatly appreciated!
I trained NeROIC with a custom dataset and I don't know if it's possible to extract the generated 3D model. Any hints?
Thanks in advance.
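The repo doesn't advertise a mesh exporter, so the usual NeRF workaround applies: sample the trained density field on a 3D grid and extract geometry from it, typically with marching cubes (e.g. skimage.measure.marching_cubes). The sketch below shows a cruder fallback under the same assumption: it dumps above-threshold voxels as OBJ vertices, i.e. a point cloud. The function and its inputs are hypothetical, not part of NeROIC:

```python
import numpy as np

def density_grid_to_obj(density, threshold, path):
    # density: (D, H, W) array of densities sampled from the trained model.
    # Write every voxel whose density exceeds `threshold` as an OBJ
    # vertex line ("v x y z"), producing a viewable point cloud.
    idx = np.argwhere(density > threshold)
    with open(path, "w") as f:
        for z, y, x in idx:
            f.write(f"v {x} {y} {z}\n")
```

The resulting .obj opens in most 3D software; for a watertight surface you would run marching cubes on the same grid instead.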
What is the plan for releasing the code? Two months have passed.
I know it depends on the strength and number of GPUs used, so I'll just ask: how long did training take on whatever hardware you used in NeROIC's creation and testing?
Has any code been placed in this repo? I have been patiently waiting for months; is anyone working on it?
How to get the fbx or obj 3D file from this method? Thanks! @formyfamily
Hi, is there some documentation on how to run/train this model using a single image from an object?
I modified train.py, generate_normal.py and dataset.py to use the CPU device type.
Then I executed python train.py --config configs/milkbox_geometry.yaml --datadir ./data/milkbox_dataset.
But no 'epoch=29.ckpt' file was generated.
Hello!
I have been training the network on my own data and the rendering results are way worse than the ones from training on the provided datasets. I trained on a dataset that has 53 images. I kept the same parameters for the config files, changing the image size and name, of course.
One of the input images:
And these are the result images taken from the logs folder:
Visualizing the dense output from colmap:
What could be the possible reasons for such results?
I found some hyperparameters in configuration files inconsistent with those reported in the original paper. For example, lambda_tr is 1 in config while 0.01 in the original paper. Which one should I refer to?
For me training takes hours, and I don't know if things have gone wrong before training ends.
Looking forward to a code demo.
It would be great to have a Colab for this, where the user could upload three or more images from different viewpoints and then get the 3D model out.
It would be extremely useful.
Where can I find the configuration instructions for the *_geometry.yaml and *_rendering.yaml files in NeROIC?