scenepriors's Issues

cam_K.npy

I tried to reproduce your experiment results by training, and I got an error:

FileNotFoundError: [Errno 2] No such file or directory: '/home/shockjiang/workspace/scenepriors/datasets/3D-Front/3D-FRONT_renderings_improved_mat/cam_K.npy'

It seems that cam_K.npy is missing, isn't it?
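
For reference, a trivial check I can run (I am assuming cam_K.npy is meant to hold the shared camera intrinsics for the renderings; the path is copied from the error above):

import os
import numpy as np

# Path copied from the error message above; cam_K.npy is assumed to store
# the camera intrinsics shared by the rendered views.
path = '/home/shockjiang/workspace/scenepriors/datasets/3D-Front/3D-FRONT_renderings_improved_mat/cam_K.npy'
print(os.path.exists(path))        # False on my machine
if os.path.exists(path):
    print(np.load(path).shape)     # I would expect something like (3, 3)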

The shape of "pred_mask" is causing some errors while training

Hello, I'm running your training code following the process you provided, but an error occurs during rendering. I think it is related to a mismatch between the shapes of pred_mask and sizes (= centers) when computing the loss. How can I fix it? The log is below; the error occurs on the 4th batch.

[2023-08-21 19:32:07,869][root][INFO] - G_LR: [0.0001, 0.0001] | train | Epoch: [0][4/56] | Loss: {'box_cls_loss': 3.0503957271575928, 'box_loss': 1.0786898136138916, 'completeness_loss': 0.6710511445999146, 'frustum_loss': 1.1139416694641113, 'mask_loss': 0.0, 'total': 10.228837966918945}  Batch Time 6.234 | Data Time 2.340
shape of pred_mask before expanding:  torch.Size([64, 17])                                                                                                                                                                   
shape of pred_mask after expanding:  torch.Size([64, 17, 3])                                                                                                                                                                 
shape of sizes before filling:  torch.Size([64, 15, 3])                                                                                                                                                                      
shape of pred_mask before expanding:  torch.Size([64, 13])                                                                                                                                                                   
shape of pred_mask after expanding:  torch.Size([64, 13, 3])                                                                                                                                                                 
shape of sizes before filling:  torch.Size([64, 13, 3])                                                                                                                                                                      
shape of sizes after filling:  torch.Size([64, 13, 3])                                                                                                                                                                       
Error executing job with overrides: []                                                                                                                                                                                       
Process Process-2:                                                                                                                                                                                                           
Traceback (most recent call last):                                                                                                                                                                                           
  File "main.py", line 44, in main                                                                                                                                                                                           
    multi_proc_run(config.distributed.num_gpus, func=single_proc_run, func_args=(config,))                                                                                                                                   
Traceback (most recent call last):                                                                                                                                                                                           
  File "/root/dev/scenegen/ScenePriors/net_utils/distributed.py", line 100, in multi_proc_run                                                                                                                                
    p.join()                                                                                                                                                                                                                 
  File "/opt/conda/envs/sceneprior/lib/python3.8/multiprocessing/process.py", line 149, in join                                                                                                                              
    res = self._popen.wait(timeout)                                                                                                                                                                                          
  File "/opt/conda/envs/sceneprior/lib/python3.8/multiprocessing/popen_fork.py", line 47, in wait                                                                                                                            
    return self.poll(os.WNOHANG if timeout == 0.0 else 0)                                                                                                                                                                    
  File "/opt/conda/envs/sceneprior/lib/python3.8/multiprocessing/popen_fork.py", line 27, in poll                                                                                                                            
    pid, sts = os.waitpid(self.pid, flag)                                                                                                                                                                                    
  File "/root/dev/scenegen/ScenePriors/net_utils/distributed.py", line 61, in signal_handler                                                                                                                                 
    raise ChildException(self.error_queue.get())                                                                                                                                                                             
net_utils.distributed.ChildException: Traceback (most recent call last):                                                                                                                                                     
  File "/root/dev/scenegen/ScenePriors/net_utils/distributed.py", line 72, in run                                                                                                                                            
    fun(*fun_args, **fun_kwargs)                                                                                                                                                                                             
  File "main.py", line 16, in single_proc_run                                                                                                                                                                                
    trainer.run()                                                                                                                                                                                                            
  File "/root/dev/scenegen/ScenePriors/train.py", line 159, in run                                                                                                                                                           
    eval_loss_recorder = self.train_epoch(epoch, stage)                                                                                                                                                                      
  File "/root/dev/scenegen/ScenePriors/train.py", line 99, in train_epoch                                                                                                                                                    
    loss, extra_output = self.subtrainer.train_step(data, stage, start_deform=self.cfg.config.start_deform)                                                                                                                  
  File "/root/dev/scenegen/ScenePriors/models/ScenePriors/training.py", line 54, in train_step                                                                                                                               
    loss, extra_output = self.compute_loss(data, start_deform=start_deform, **kwargs)                                                                                                                                        
  File "/root/dev/scenegen/ScenePriors/models/ScenePriors/training.py", line 93, in compute_loss                                                                                                                             
    est_data = self.generator(latent_z, data, start_deform=start_deform, **kwargs)                                                                                                                                           
  File "/opt/conda/envs/sceneprior/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl                                                                                                         
    return forward_call(*input, **kwargs)                                                                                                                                                                                    
  File "/opt/conda/envs/sceneprior/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1008, in forward                                                                                                      
    output = self._run_ddp_forward(*inputs, **kwargs)                                                                                                                                                                        
  File "/opt/conda/envs/sceneprior/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 969, in _run_ddp_forward                                                                                              
    return module_to_run(*inputs[0], **kwargs[0])                                                                                                                                                                            
  File "/opt/conda/envs/sceneprior/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl                                                                                                         
    return forward_call(*input, **kwargs)                                                                                                                                                                                    
  File "/root/dev/scenegen/ScenePriors/models/ScenePriors/modules/network.py", line 162, in forward                                                                                                                          
    renderings = self.render(box3ds, meshes, data['cam_T'], data['cam_K'], data['image_size'],                                                                                                                               
  File "/opt/conda/envs/sceneprior/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl                                                                                                         
    return forward_call(*input, **kwargs)                                                                                                                                                                                    
  File "/root/dev/scenegen/ScenePriors/models/ScenePriors/modules/render.py", line 236, in forward                                                                                                                           
    sizes.masked_fill_(pred_mask, 0.)                                                                                                                                                                                        
RuntimeError: The expanded size of the tensor (15) must match the existing size (17) at non-singleton dimension 1.  Target sizes: [64, 15, 3].  Tensor sizes: [64, 17, 3]
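
For reference, the failing masked_fill_ call can be reproduced in isolation with just the shapes from the log above (a hypothetical minimal example, not the repo's code):

import torch

# masked_fill_ broadcasts the mask against the target tensor; a mask built for
# 17 boxes cannot be broadcast to a tensor that only holds 15 boxes.
sizes = torch.rand(64, 15, 3)
pred_mask = torch.zeros(64, 17, 3, dtype=torch.bool)
sizes.masked_fill_(pred_mask, 0.)  # RuntimeError: expanded size (15) vs existing size (17)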

I tried to patch the forward function in 'ScenePriors/models/ScenePriors/modules/render.py' as shown below, but another error occurred.

def forward(self, box3ds, meshes, cam_Ts, cam_Ks, image_sizes, render_mask_tr, start_deform=False,
                pred_gt_matching=None, pred_mask=None, **kwargs):
    '''
    Render generated boxes given cam params
    :param box3ds: n_batch x n_box x box_dim
    :param meshes: (n_batch * n_view) x pytorch3d mesh
    :param cam_Ts: n_batch x n_view x 4 x 4
    :param cam_Ks: n_batch x n_view x 3 x 3
    :param render_mask_tr: n_batch x n_view x im_height x im_width
    :param image_sizes: n_batch x n_view x 2
    :return:
    '''
    centers = box3ds[..., :3]
    sizes = box3ds[..., 3:6]
    classes_completeness = box3ds[..., 6:]

    if self.cfg.config.mode == 'train':
        if pred_mask is not None:
            pred_mask = torch.logical_not(LengthMask(pred_mask).bool_matrix)
            pred_mask = pred_mask[:, :, None].expand(-1, -1, 3)
            if pred_mask.shape == sizes.shape == centers.shape:
                sizes.masked_fill_(pred_mask, 0.)
                centers.masked_fill_(pred_mask, 0.)
            else:
                if sizes.shape != centers.shape:
                    raise ValueError("shapes of sizes and centers should be the same!")
                else:
                    print("shape of pred_mask after expanding: ", pred_mask.shape)
                    print("shape of sizes: ", sizes.shape)
                    print("shape of centers: ", sizes.shape)
                    pred_mask_trimmed = pred_mask[:sizes.shape[0], :sizes.shape[1], :sizes.shape[2]]
                    print("shape of trimmed pred_mask: ", pred_mask_trimmed.shape)
                    sizes.masked_fill_(pred_mask_trimmed, 0.)
                    centers.masked_fill_(pred_mask_trimmed, 0.)

The new error also seems to be caused by the shape of pred_mask (I suspect), as shown below:

Traceback (most recent call last):
  File "main.py", line 44, in main
    multi_proc_run(config.distributed.num_gpus, func=single_proc_run, func_args=(config,))
  File "/root/dev/scenegen/ScenePriors/net_utils/distributed.py", line 100, in multi_proc_run
    p.join()
  File "/opt/conda/envs/sceneprior/lib/python3.8/multiprocessing/process.py", line 149, in join
    res = self._popen.wait(timeout)
  File "/opt/conda/envs/sceneprior/lib/python3.8/multiprocessing/popen_fork.py", line 47, in wait
    return self.poll(os.WNOHANG if timeout == 0.0 else 0)
  File "/opt/conda/envs/sceneprior/lib/python3.8/multiprocessing/popen_fork.py", line 27, in poll
    pid, sts = os.waitpid(self.pid, flag)
  File "/root/dev/scenegen/ScenePriors/net_utils/distributed.py", line 61, in signal_handler
    raise ChildException(self.error_queue.get())
net_utils.distributed.ChildException: Traceback (most recent call last):
  File "/root/dev/scenegen/ScenePriors/net_utils/distributed.py", line 72, in run
    fun(*fun_args, **fun_kwargs)
  File "main.py", line 16, in single_proc_run
    trainer.run()
  File "/root/dev/scenegen/ScenePriors/train.py", line 159, in run
    eval_loss_recorder = self.train_epoch(epoch, stage) 
  File "/root/dev/scenegen/ScenePriors/train.py", line 99, in train_epoch
    loss, extra_output = self.subtrainer.train_step(data, stage, start_deform=self.cfg.config.start_deform)
  File "/root/dev/scenegen/ScenePriors/models/ScenePriors/training.py", line 54, in train_step
    loss, extra_output = self.compute_loss(data, start_deform=start_deform, **kwargs)
  File "/root/dev/scenegen/ScenePriors/models/ScenePriors/training.py", line 96, in compute_loss
    loss, extra_output = self.generator.module.loss(est_data, data, start_deform=start_deform, **kwargs)
  File "/root/dev/scenegen/ScenePriors/models/ScenePriors/modules/network.py", line 171, in loss
    render_loss, extra_output = self.render_loss(pred_data, gt_data, start_deform=start_deform, **kwargs)
  File "/root/dev/scenegen/ScenePriors/models/loss.py", line 258, in __call__
    view_losses, extra_output = self.views_loss(est_data, gt_data, start_deform,
  File "/root/dev/scenegen/ScenePriors/models/loss.py", line 169, in views_loss
    indices = self.matcher(pred, gt, pred_mask=pred_mask, gt_mask=gt_obj_view_mask)
  File "/opt/conda/envs/sceneprior/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/conda/envs/sceneprior/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/root/dev/scenegen/ScenePriors/net_utils/matcher_tracking.py", line 89, in forward
    C[torch.logical_not(pred_mask)] = max_thresh
IndexError: The shape of the mask [1088] at index 0 does not match the shape of the indexed tensor [960, 1088] at index 0
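
For context, the shape mismatch in the last frame can also be reproduced in isolation (a hypothetical minimal example using the sizes from the message, 960 = 64 x 15 and 1088 = 64 x 17):

import torch

# Boolean-mask indexing requires the mask to match the indexed dimensions;
# a flat mask with 1088 entries cannot index dimension 0 of a 960 x 1088 tensor.
C = torch.rand(960, 1088)
pred_mask = torch.ones(1088, dtype=torch.bool)
C[torch.logical_not(pred_mask)] = 100.0  # IndexError: mask shape [1088] vs tensor [960, 1088]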

Is this error caused by incorrect dataset preprocessing, or by the training code?

Thank you!

Inquiry about 3D-FRONT preprocessing (rendering with BlenderProc2)

Hello,

Thank you for providing the code for this awesome work!

I'm wondering whether my rendering process and results are correct.

In my case, I commented out only the 'bproc.renderer.enable_depth_output(activate_antialiasing=False)' line and left the 'bproc.renderer.enable_normals_output()' line untouched in 'render_dataset_improved_mat.py'.
Also, I ran this command:

CUDA_VISIBLE_DEVICES=0,1 python ./examples/datasets/front_3d_with_improved_mat/multi_render.py \
    ./examples/datasets/front_3d_with_improved_mat/render_dataset_improved_mat.py \
    ./examples/datasets/front_3d_with_improved_mat/3D-FRONT \
    ./examples/datasets/front_3d_with_improved_mat/3D-FUTURE-model \
    ./examples/datasets/front_3d_with_improved_mat/3D-FRONT-texture \
    --blender_path /root/dev/blender \
    --cc_material_folder ./resources/cctextures/ \
    --output_folder /root/hdd1/ScenePriors/3D-FRONT_renderings_normal/3D-FRONT_renderings_improved_mat \
    --n_processes 2

And the results in the output folder are as follows:

  • 6,190 folders of rendered scenes, each containing 101 hdf5 files
  • failed_scene_names.txt, containing 623 lines of scenes that failed to render

As far as I understand, the sum of rendered folders and failed scenes (6,190 + 623 = 6,813) should equal the number of items in the 3D-FUTURE-model folder (16,565), but it does not.

Did I do anything wrong in the preprocessing? If not, why are the numbers different?
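
For reference, this is roughly how I counted the outputs (a quick sketch using my local paths; failed_scene_names.txt sits in the output folder, as listed above):

import os

out_dir = '/root/hdd1/ScenePriors/3D-FRONT_renderings_normal/3D-FRONT_renderings_improved_mat'
# Count rendered scene folders and the scenes listed as failed.
rendered = [d for d in os.listdir(out_dir) if os.path.isdir(os.path.join(out_dir, d))]
with open(os.path.join(out_dir, 'failed_scene_names.txt')) as f:
    failed = [line for line in f if line.strip()]
print(len(rendered), '+', len(failed), '=', len(rendered) + len(failed))  # 6190 + 623 = 6813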

Thank you!

Cannot import pytorch3d

Hi! Thank you for releasing this wonderful repo. I followed the instructions to install the dependencies, including the pytorch3d provided in the external folder; however, I got an import error like this:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: cannot import name '_C' from 'pytorch3d' (/workspace/sceneprior/external/pytorch3d/pytorch3d/__init__.py)

May I ask which CUDA version you used? What happens if I install the official pre-built version instead - what is the difference?
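
For reference, this is the environment check I can run (a hypothetical diagnostic, not from the repo; as far as I understand, a missing '_C' module usually means the C++/CUDA extension was not compiled for the installed torch/CUDA combination):

import torch
import pytorch3d

# Versions the pytorch3d extension would have been compiled against.
print(torch.__version__, torch.version.cuda, torch.cuda.is_available())
print(pytorch3d.__version__, pytorch3d.__file__)

# This is the import that fails when the extension was never built.
from pytorch3d import _C  # noqa: F401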

Thank you very much!

Question on number of objects

Hi! I have a few questions on the number of objects to render and train: it looks like the maximum number of objects for ScanNet is set to 7 - what if the scene contains fewer than 7 objects? How do the Hungarian matching and batched mesh rendering work in that case? Conversely, when the scene contains more than 7 objects, does it mean the extra objects are simply ignored?
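
To make my question concrete, this is how I imagine padding plus Hungarian matching might interact (a rough sketch of the general idea, not your matcher; all names and numbers are mine):

import numpy as np
from scipy.optimize import linear_sum_assignment

max_n_obj, n_gt = 7, 4                                # 7 prediction slots, 4 GT objects
cost = np.random.rand(max_n_obj, n_gt)                # pairwise matching cost
valid = np.array([1, 1, 1, 1, 0, 0, 0], dtype=bool)   # hypothetical: only 4 real predictions
cost[~valid] = 1e6                                    # padded slots get a prohibitive cost
rows, cols = linear_sum_assignment(cost)              # Hungarian matching
keep = valid[rows]                                    # drop matches that used padded slots
print(list(zip(rows[keep], cols[keep])))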

Thank you very much!

Request for Details on 2D Orthographic Projection Rendering for Ground-Truth Scene

Hello,
I am attempting to reproduce your results. The training stage is successful, and I am now moving on to the quantitative evaluation stage. I am curious about the method you used to render 2D top-down orthographic projection images of the ground-truth scenes (for FID and SCA evaluation). Is it possible for you to provide details about your evaluation process?
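
For context, this is the kind of top-down orthographic projection I mean (a rough illustration with a hypothetical box layout, not your evaluation renderer):

import numpy as np
import matplotlib.pyplot as plt

# Drop the vertical (y) axis and draw each object's footprint on the ground plane.
boxes = [  # hypothetical (center_xyz, size_xyz) pairs, y-up convention
    (np.array([0.0, 0.4, 1.0]), np.array([1.0, 0.8, 0.6])),
    (np.array([2.0, 0.3, -0.5]), np.array([0.6, 0.6, 0.6])),
]
fig, ax = plt.subplots()
for center, size in boxes:
    x0, z0 = center[0] - size[0] / 2, center[2] - size[2] / 2
    ax.add_patch(plt.Rectangle((x0, z0), size[0], size[2], alpha=0.5))
ax.set_xlim(-2, 4)
ax.set_ylim(-2, 4)
ax.set_aspect('equal')
plt.savefig('topdown_gt.png')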
