
nef_code's People

Contributors

c-h-chien, yunfan1202


nef_code's Issues

Questions on evaluations

First of all, thanks for the amazing work! I have a few questions on how the 3D edges are evaluated:

  1. When sampling points on the parametric curves, the paper says that the points are down-sampled on voxel grids. What voxel-grid size do you use? (See the sketch after this list for what I assume this step looks like.)
  2. In Table 1 of the paper, are the numbers for IoU, precision, recall, etc. averages over all objects in the A-N and A-N-L datasets?
  3. In Figure 2 of the supplementary material, how do you choose the views? The supplementary says "all evenly distributed", but on what criterion is this based, e.g., the camera baselines? The order of the images in the dataset does not follow the order in which the camera moves.
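
For reference, here is a minimal sketch of how I imagine this evaluation step works: sampled curve points are down-sampled on a voxel grid and then compared against the ground truth with a distance threshold. The voxel size (0.01) and the threshold tau (0.02) are my own guesses, not values from the paper.

    import numpy as np

    def voxel_downsample(points, voxel_size=0.01):
        """Keep one representative point per occupied voxel (assumed voxel size)."""
        keys = np.floor(points / voxel_size).astype(np.int64)
        _, idx = np.unique(keys, axis=0, return_index=True)
        return points[np.sort(idx)]

    def precision_recall(pred, gt, tau=0.02):
        """Nearest-neighbour test in both directions at an assumed threshold tau."""
        d_pred_to_gt = np.min(np.linalg.norm(pred[:, None] - gt[None], axis=-1), axis=1)
        d_gt_to_pred = np.min(np.linalg.norm(gt[:, None] - pred[None], axis=-1), axis=1)
        precision = float(np.mean(d_pred_to_gt < tau))
        recall = float(np.mean(d_gt_to_pred < tau))
        return precision, recall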

Thanks!

The coordinate system used in the blender_dataset

Thanks for sharing the code!

I have a question about the released dataset. The code in datasets/blender.py, lines 46-48, shows that the transformation is defined in a camera-to-world manner. However, when I take a 3D point from the obj file, say point (-2.58, -6.47, -3.81) from the mesh 00000077_767e4372b5f94a88a7a17d90_trimesh_018.obj, and project it using camera_intrinsics @ inv(transform_matrix) @ point, I do not get the correct pixel position of this point in the image. Am I missing something?
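
In case it helps to pin down the convention: here is how I would expect the projection to work if the dataset follows the usual Blender/OpenGL camera convention (camera looking down -z, y up), which typically requires homogeneous coordinates and a y/z axis flip before applying the intrinsics. The axis flip and the names below are my own assumptions, not taken from the repository.

    import numpy as np

    def project_point(point_w, c2w, K):
        """Project a world-space 3D point with a camera-to-world matrix,
        assuming a Blender/OpenGL camera convention (looks down -z, y up)."""
        w2c = np.linalg.inv(c2w)                  # world -> camera (4x4)
        p_cam = w2c @ np.append(point_w, 1.0)     # homogeneous coordinates
        # Flip y and z to go from OpenGL-style camera axes to the usual
        # pinhole (OpenCV-style) convention before applying the intrinsics.
        x, y, z = p_cam[0], -p_cam[1], -p_cam[2]
        u = K[0, 0] * x / z + K[0, 2]
        v = K[1, 1] * y / z + K[1, 2]
        return u, v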

edge detection

Hi, I read your papers about edge detection, i.e. "STedge" and "Delving into Crispness". Do you plan to release the related code? I could not find your email address, so I am opening an issue here instead. Looking forward to your answer!

About how to use my own point cloud data (.ply) for testing

Hello, I have a question about the NEF paper: how can I use my own point cloud data (.ply) for testing? I originally assumed that the pre-trained models were interchangeable, so I generated multiple views of my own data, extracted edges, and tried to test with them. I then realized that the code for generating the point cloud requires a specific pre-trained model, and each pre-trained model corresponds to only one point cloud. Could you explain the general process for testing your model on my own data (.ply format)? I hope you can answer this question.

Missing attribute when using point-cloud-utils for visualization

Hi, I am sorry for bothering you again. While trying the visualization and evaluation part, I ran into an attribute error: module 'point_cloud_utils' has no attribute 'downsample_point_cloud_voxel_grid'. I understand that this error stems from point-cloud-utils itself, but I am wondering whether you have encountered it before. (This error has also been raised as an open issue there.)

Interestingly, I cannot find where downsample_point_cloud_voxel_grid is defined in point-cloud-utils at all.
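
As a temporary workaround, I swapped in Open3D's voxel down-sampling, which I believe is functionally similar; the voxel size below is a placeholder, not the value used by the repository's scripts.

    import numpy as np
    import open3d as o3d

    def voxel_downsample_ply(path, voxel_size=0.01):
        """Load a .ply point cloud and down-sample it on a voxel grid via Open3D.
        voxel_size is a placeholder, not the value the NEF scripts use."""
        pcd = o3d.io.read_point_cloud(path)
        down = pcd.voxel_down_sample(voxel_size=voxel_size)
        return np.asarray(down.points)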

Request on the ckpts of all ABC-NEF objects

Thanks for the amazing work!
My understanding of the code is that each object must be trained individually, i.e., a model trained on one object essentially does not generalize to another object.
However, training the model on one object takes many hours (> 6 hrs on my machine), and there are over 100 objects.
Would it be possible for you to share the checkpoints of all objects, so that I can quickly look into the details of the reconstruction across different objects?
Thank you!

"Transforms_test.json" missing

Hello! When I try the "Novel View Synthesis" command, it seems that the "transforms_test.json" file is missing. Could you please advise me on how to find or create it?
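
In case it is simply not shipped with the dataset, here is a minimal sketch of how I would generate one myself, assuming the loader expects the standard NeRF-synthetic (Blender) format with camera_angle_x and per-frame transform_matrix fields. The camera path, radius, and camera_angle_x value below are placeholders; camera_angle_x should probably be copied from transforms_train.json.

    import json
    import numpy as np

    def pose_spherical(theta_deg, phi_deg, radius):
        """Camera-to-world matrix on a sphere, looking toward the origin
        (assumed NeRF-synthetic-style pose)."""
        theta, phi = np.deg2rad(theta_deg), np.deg2rad(phi_deg)
        trans = np.eye(4); trans[2, 3] = radius
        rot_phi = np.array([[1, 0, 0, 0],
                            [0, np.cos(phi), -np.sin(phi), 0],
                            [0, np.sin(phi),  np.cos(phi), 0],
                            [0, 0, 0, 1]])
        rot_theta = np.array([[np.cos(theta), 0, -np.sin(theta), 0],
                              [0, 1, 0, 0],
                              [np.sin(theta), 0,  np.cos(theta), 0],
                              [0, 0, 0, 1]])
        return rot_theta @ rot_phi @ trans

    frames = [{"file_path": f"./test/r_{i}",
               "transform_matrix": pose_spherical(a, -30.0, 4.0).tolist()}
              for i, a in enumerate(np.linspace(-180, 180, 40, endpoint=False))]

    with open("transforms_test.json", "w") as f:
        json.dump({"camera_angle_x": 0.6911, "frames": frames}, f, indent=2)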
