yinyunie / depth_renderer
Rendering color and depth images for ShapeNet models.
License: MIT License
Hi, thanks for your helpful work. I would like to ask if there is a convenient way of outputting the correspondence between pixels on the rendered image and points on the corresponding 3D model. I notice that "visualization/draw_pc_from_depth.py" can generate partial point clouds from specific viewpoints. Is there any function to obtain such a "pixel-point" mapping? Looking forward to your reply!
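For reference, a depth map back-projection already yields this mapping implicitly: the 3D point recovered at pixel (u, v) is exactly the surface point imaged there. A minimal NumPy sketch (the intrinsics matrix `K` and depth values are toy examples, not taken from this repo):

```python
import numpy as np

def backproject(depth, K):
    """Back-project a depth map to 3D points in camera coordinates.

    Returns an (H, W, 3) array where points[v, u] is the 3D point seen
    at pixel (u, v) -- i.e. the pixel-to-point correspondence. Pixels
    with zero depth (background) map to the origin.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(depth)], axis=-1)  # homogeneous pixels
    rays = pix @ np.linalg.inv(K).T                       # K^-1 [u, v, 1]^T
    return rays * depth[..., None]                        # scale rays by depth

# toy example: 2x2 depth map with identity intrinsics
K = np.eye(3)
depth = np.array([[2.0, 2.0],
                  [0.0, 4.0]])
pts = backproject(depth, K)
print(pts[1, 1])  # -> [4. 4. 4.]
```

To go from a back-projected point to a point on the mesh itself you would still need a nearest-neighbor lookup against the model's vertices, since the depth buffer samples the surface, not the vertex set.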
Hi author! Thank you for this repo!
Using the code, I successfully get the rendered PNG images with black backgrounds. What if I want a transparent background instead?
I tried the following code, but it doesn't seem to work. Could you please help?
if g_background_image_path == 'TRANSPARENT':
    if bpy.app.version < (2, 80, 0):
        bpy.context.scene.render.alpha_mode = 'TRANSPARENT'
        bpy.context.scene.render.image_settings.color_mode = 'RGBA'
    else:
        # Blender 2.8+: film_transparent replaces alpha_mode ('SKY'/'TRANSPARENT')
        bpy.context.scene.render.image_settings.color_mode = 'RGBA'
        bpy.context.scene.render.film_transparent = True
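One possible cause, assuming the output format is the culprit: a format without an alpha channel (e.g. JPEG) silently drops the transparency even when `color_mode` is `'RGBA'`. A minimal Blender 2.8x sketch (the output path is illustrative; this is a guess at the fix, not the repo's own code):

```python
import bpy

scene = bpy.context.scene
# PNG (or OpenEXR) is required -- JPEG cannot store an alpha channel
scene.render.image_settings.file_format = 'PNG'
scene.render.image_settings.color_mode = 'RGBA'
scene.render.film_transparent = True        # Blender 2.8+ equivalent of alpha_mode='TRANSPARENT'
scene.render.filepath = '/tmp/render.png'   # illustrative output path
bpy.ops.render.render(write_still=True)
```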
Hi,
thanks for open-sourcing this project. I have used it to render some of the ShapeNet models, but I have observed that the 3D points do not lie exactly on the surface once back-projected to 3D. The cloud-to-mesh distances are on the order of 0.002.
Have you observed the same? Do you know if this is an inherent precision limit of Blender depth rendering, or if there is some precision loss occurring at another point in the pipeline?
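One candidate explanation, assuming the depth maps are stored at half-float precision (a common OpenEXR setting): float16 has a 10-bit mantissa, so the spacing between representable depth values near a unit-scale model is around 1e-3, the same magnitude as the observed error. A quick check:

```python
import numpy as np

# float16 spacing near depth d is roughly d * 2**-10; for a
# unit-scale ShapeNet model (camera distance ~1..2) that is ~1e-3,
# comparable to the reported 0.002 cloud-to-mesh distance.
d = np.float16(1.5)
print(np.spacing(d))  # distance to the next representable float16
```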
Zan
Hi, thanks for the great work. I'm wondering how to add lights from all directions.
Lines 236 to 245 in 57af771
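For reference, one way to approximate lighting from all directions in Blender 2.8x is to place point lights along the six axis directions around the object (a sketch under those assumptions, not the repo's own lighting code; radius and energy are illustrative):

```python
import bpy

# six axis-aligned directions around an object at the origin
directions = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
              (0, -1, 0), (0, 0, 1), (0, 0, -1)]

for d in directions:
    light_data = bpy.data.lights.new(name='omni', type='POINT')
    light_data.energy = 100.0                       # tune for your scene scale
    light_obj = bpy.data.objects.new('omni', light_data)
    light_obj.location = tuple(3.0 * c for c in d)  # radius 3 is illustrative
    bpy.context.collection.objects.link(light_obj)
```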
Hi!
I might have noticed a typo in the render_all.py script.
You set tile_x in line 78 and again in line 79, leaving tile_y out.
Line 79 in 57af771
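If the duplicated assignment is indeed a typo, the fix would presumably look like this (`tile_size` is a placeholder, not a variable name taken from the script):

```python
bpy.context.scene.render.tile_x = tile_size  # line 78
bpy.context.scene.render.tile_y = tile_size  # line 79: was tile_x assigned twice
```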
Would it be possible to add an option to render out surface normal images from Blender, in addition to the RGB and depth images?
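In the meantime, one way to get normal images out of Blender 2.8x is to enable the Normal render pass and route it through the compositor (a sketch assuming default render settings; the output path is illustrative):

```python
import bpy

# enable the Normal pass on the active view layer (Blender 2.8x API)
bpy.context.view_layer.use_pass_normal = True

# route the Normal output through the compositor to image files
scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
render_layers = tree.nodes.new('CompositorNodeRLayers')
file_output = tree.nodes.new('CompositorNodeOutputFile')
file_output.base_path = '/tmp/normals'  # illustrative output directory
tree.links.new(render_layers.outputs['Normal'], file_output.inputs[0])
```

Note that the pass stores normals in camera space; converting to world space requires multiplying by the camera rotation.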
(renderer) root@VGG-V100-LZ-2:~/lkq/depth_renderer# python depth_fusion.py
Traceback (most recent call last):
  File "depth_fusion.py", line 9, in <module>
    from external import pyfusion
  File "/root/lkq/depth_renderer/external/pyfusion/__init__.py", line 8, in <module>
    from cyfusion import *
ModuleNotFoundError: No module named 'cyfusion'
Many thanks for your great work. I followed the install steps, but I cannot run the code. Can you give me some help?
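The missing `cyfusion` module is a compiled Cython extension, so this error usually means the pyfusion build step was skipped or failed. Assuming the extension follows the typical pyfusion layout (a CMake build plus an in-place Cython compile; verify against this repo's install instructions), the steps would look roughly like:

```shell
# (re)build the pyfusion extension in place
cd external/pyfusion
mkdir -p build && cd build
cmake ..    # configure the C/CUDA fusion library
make        # build it
cd ..
python setup.py build_ext --inplace   # compile cyfusion.pyx into cyfusion.so
```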
Thanks a lot for the awesome work. I am a bit confused about the data preprocessing. I noticed that in the attached dataset each object has two folders, image and model, but in the original dataset each object only has a single file called XXX.obj. How can I convert XXX.obj into the form of the current dataset?