yifita / DSS: Differentiable Surface Splatting
I'm trying to use a custom renderer to generate the reference data instead of the create_mvr_data_from_mesh script. I have some questions.
Hi, I have a couple of questions about the results I am getting during training.
The val directory contains best.ply which, when visualized with open3d using the following code, shows an empty view:

```python
import open3d as o3d

path = "best.ply"
point_cloud = o3d.io.read_point_cloud(path)  # best.ply is a point cloud, not a mesh
o3d.visualization.draw_geometries([point_cloud])
```
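A quick sanity check (a minimal sketch using only standard open3d calls) shows whether any points were read at all, to rule out a viewer problem:

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("best.ply")
print(len(pcd.points))  # 0 here means the file itself is empty, not a viewer issue
if len(pcd.points) > 0:
    o3d.visualization.draw_geometries([pcd])
```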
I can see how the model trains using tensorboard, and I have noticed there are points surrounding the point cloud. Is this expected?
The directory vis is empty. Why?
In addition, how can I get the final mesh (the grey one you show in the paper)?
Finally, does the model handle texture in some way?
Hi,
I was testing Pix2Pix using a Sketchfab mesh and it returned a filtered silhouette of the image as a grey blob.
Did you use it as a mask, or are there settings that allow me to obtain the original filtered image directly?
Thank you, Pietro.
It was hard to get this project running, and after many failed attempts to get any results there is still an issue with writing the dense depth as an .exr file. Did anyone have this problem?
```
Traceback (most recent call last):
  File "scripts/create_mvr_data_from_mesh.py", line 251, in <module>
    imageio.imwrite(os.path.join(depth_dir, "%06d.exr" % idx),
  File "/home/admin/anaconda3/envs/DSS/lib/python3.8/site-packages/imageio/core/functions.py", line 303, in imwrite
    writer = get_writer(uri, format, "i", **kwargs)
  File "/home/admin/anaconda3/envs/DSS/lib/python3.8/site-packages/imageio/core/functions.py", line 226, in get_writer
    raise ValueError(
ValueError: Could not find a format to write the specified file in single-image mode
```
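One thing worth checking (an assumption about the cause, not a confirmed fix for this repo): imageio only gains .exr support once the optional FreeImage backend has been downloaded, which is a one-time step per environment:

```python
import imageio
import numpy as np

# the .exr format in imageio is provided by the FreeImage backend,
# which is not bundled and has to be fetched once per environment
imageio.plugins.freeimage.download()

# afterwards, writing a float32 EXR should find a format
depth = np.zeros((64, 64), dtype=np.float32)  # dummy stand-in for the real depth map
imageio.imwrite("depth_test.exr", depth)
```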
Also, will the denoising function be coming back?
And I did not get the use of this method. On the left side is the sculpture produced by the Poisson Reconstruction plugin in CloudCompare, and on the right side the result from your DSS project. I do not understand the benefit of splatting; the model just loses necessary details.
When running scripts/create_mvr_data_from_mesh.py for the first time, I have also encountered this error:
```
An exception occurred in telemetry logging.Disabling telemetry to prevent further exceptions.
Traceback (most recent call last):
  File "/home/mengqi/.conda/envs/pytorch3d/lib/python3.8/site-packages/iopath/common/file_io.py", line 946, in __log_tmetry_keys
    handler.log_event()
  File "/home/mengqi/.conda/envs/pytorch3d/lib/python3.8/site-packages/iopath/common/event_logger.py", line 97, in log_event
    del self._evt
AttributeError: _evt
```
It is more like a warning than an error, because the telemetry is disabled after that and the code continues.
Version
Python: 3.8
PyTorch: 1.6
PyTorch3D: 0.6.1
System
System='Linux'
release='4.15.0-188-generic'
version='#199-Ubuntu SMP Wed Jun 15 20:42:56 UTC 2022'
machine='x86_64'
Hi,
I'm trying to run any of the demo scripts and I get this error from the CUDA section. It is running in a fresh conda environment configured from the requirements file with CUDA 10. Any idea?
Thanks for making the project available.
```
Traceback (most recent call last):
  File "learn_shape_from_target.py", line 105, in <module>
    baseline=opt.baseline, benchmark=opt.benchmark)
  File "learn_shape_from_target.py", line 35, in trainShapeOnImage
    trainer.create_reference(refScene)
  File "/home/guygafni/projects/point_cloud_render/neural_textures/DSS/DSS/utils/trainer.py", line 147, in create_reference
    self.groundtruths = renderScene(refScene, self.opt, self.cameras)
  File "/home/guygafni/projects/point_cloud_render/neural_textures/DSS/DSS/utils/trainer.py", line 92, in renderScene
    result = splatter.render().detach()
  File "/home/guygafni/projects/point_cloud_render/neural_textures/DSS/DSS/core/renderer.py", line 882, in render
    self.local_occlusion = guided_scatter_maps(numPoint, occludedMap.unsqueeze(-1), pointIdxMap, boundingBoxes)
RuntimeError: "guided_scatter_maps_kernel" not implemented for 'Bool' (operator() at DSS/cuda/rasterize_forward_cuda_kernel.cu:339)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x47 (0x7fc9c4702e37 in /home/guygafni/anaconda3/envs/DSS/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #1: + 0x3733b (0x7fc99b53133b in /home/guygafni/projects/point_cloud_render/neural_textures/DSS/DSS/cuda/rasterize_forward.cpython-37m-x86_64-linux-gnu.so)
frame #2: guided_scatter_maps_cuda(long, at::Tensor const&, at::Tensor const&, at::Tensor const&) + 0x358 (0x7fc99b53178b in /home/guygafni/projects/point_cloud_render/neural_textures/DSS/DSS/cuda/rasterize_forward.cpython-37m-x86_64-linux-gnu.so)
frame #3: guided_scatter_maps(long, at::Tensor const&, at::Tensor const&, at::Tensor const&) + 0x9e (0x7fc99b51ba2e in /home/guygafni/projects/point_cloud_render/neural_textures/DSS/DSS/cuda/rasterize_forward.cpython-37m-x86_64-linux-gnu.so)
frame #4: + 0x30e7b (0x7fc99b52ae7b in /home/guygafni/projects/point_cloud_render/neural_textures/DSS/DSS/cuda/rasterize_forward.cpython-37m-x86_64-linux-gnu.so)
frame #5: + 0x2c8f0 (0x7fc99b5268f0 in /home/guygafni/projects/point_cloud_render/neural_textures/DSS/DSS/cuda/rasterize_forward.cpython-37m-x86_64-linux-gnu.so)
frame #28: __libc_start_main + 0xf0 (0x7fca04247830 in /lib/x86_64-linux-gnu/libc.so.6)
```
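A possible workaround (an untested sketch against the renderer.py context above, assuming the kernel dispatches fine on uint8) is to cast the boolean occlusion map before the call at DSS/core/renderer.py line 882:

```python
# hypothetical patch; occludedMap is the torch.bool tensor the kernel rejects
self.local_occlusion = guided_scatter_maps(
    numPoint, occludedMap.to(torch.uint8).unsqueeze(-1), pointIdxMap, boundingBoxes)
```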
Hi,
I am trying to compile the pytorch_points module, but got an error:
"pytorch_points/_ext/torch_batch_svd.cpp:39:16: error: ‘TORCH_CHECK’ was not declared in this scope"
Could you please give me some help?
Hi, @yifita. I recently found your great work.
But I am a little confused about the gradient of the regularization terms. In the paper, Section 4.2 (Alternating normal and point update), you say the point and normal are updated by the gradient of the regularization terms. I found the relevant code at
Line 182 in d96260c
As I was trying to use DSS to render very big (ScanNet) scenes with ca. 300K vertices, I was getting CUDA out-of-memory issues (>10 GB) in the computation of rho.
I think in DSS/core/renderer.py, in the function PickRenderablePoints, there might be two independent causes of the memory inefficiency:
1. To check whether a point is out of the camera's scope, WIDTH is used twice, i.e. both for x and y. For rectangular images this causes extra points to be considered; the canvas HEIGHT isn't used.
2. As absolute values are used to check this condition, only half of the image size is actually needed.
So, the check can be written as follows, and afaik should produce the same results (please correct me if I'm wrong):

```python
render_point = render_point & (torch.abs(cameraPoints[:, :, 0] / cameraPoints[:, :, 2])
                               < (0.5 * self.camera.width / self.camera.focalLength / self.camera.sv))   # added 0.5 factor
render_point = render_point & (torch.abs(cameraPoints[:, :, 1] / cameraPoints[:, :, 2])
                               < (0.5 * self.camera.height / self.camera.focalLength / self.camera.sv))  # changed to height, added 0.5 factor
```

This cuts memory usage by more than half for me.
Thanks for your great work!
If I try to preprocess the Kangaroo point cloud (example_data/pointclouds/Kangaroo_V10k.ply) I get:
```
Traceback (most recent call last):
  File "scripts/create_mvr_data_from_mesh.py", line 106, in <module>
    verts, faces = load_ply(mesh_path)
  File "/home/bpietro/.conda/envs/DSS/lib/python3.8/site-packages/pytorch3d/io/ply_io.py", line 656, in load_ply
    raise ValueError("Invalid vertices in file.")
ValueError: Invalid vertices in file.
```
As noted in facebookresearch/pytorch3d#363, I downloaded the latest stable version of pytorch3d with:

```
pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'
```

and tried their solution to at least load the point cloud, but then the code breaks as in #19 and the data cannot be loaded:
```
Traceback (most recent call last):
  File "scripts/create_mvr_data_from_mesh.py", line 106, in <module>
    pointcloud = IO().load_pointcloud(path=mesh_path)
  File "/home/bpietro/.conda/envs/DSS/lib/python3.8/site-packages/pytorch3d/io/pluggable.py", line 187, in load_pointcloud
    pointcloud = pointcloud_interpreter.read(
  File "/home/bpietro/.conda/envs/DSS/lib/python3.8/site-packages/pytorch3d/io/ply_io.py", line 1442, in read
    data = _load_ply(f=path, path_manager=path_manager)
  File "/home/bpietro/.conda/envs/DSS/lib/python3.8/site-packages/pytorch3d/io/ply_io.py", line 1029, in _load_ply
    raise ValueError("Unexpected form of faces data.")
ValueError: Unexpected form of faces data.
```
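One workaround that sometimes helps with PLY files pytorch3d refuses to parse (an assumption, not a confirmed fix for this particular file) is to round-trip the file through open3d so the header and element layout get rewritten in a vanilla form:

```python
import open3d as o3d

# read with open3d (whose PLY parser is more permissive) and write the file
# back out, so the header and element layout are rewritten in a standard form
pcd = o3d.io.read_point_cloud("example_data/pointclouds/Kangaroo_V10k.ply")
o3d.io.write_point_cloud("Kangaroo_V10k_clean.ply", pcd, write_ascii=True)
```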
```python
class RasterizeAutograd(torch.autograd.Function):
    @staticmethod
    def forward(ctx, rho, rhoValues, Ws, projPoints, boundingBoxes, inplane, Ms, cameraPoints,
                width, height, localWidth, localHeight, camFar, focalLength, mergeThreshold, considerZ, topK):
        """
        input:
            rho            BxNxhxw   rho evaluated in a BB of (h, w) around projected points
            rhoValues      BxNx1     normalizing term of rho
            Ws             BxNx3     point color
            projPoints     BxNx2or3  projected point location
            boundingBoxes  BxNx4     bounding boxes (w0, h0, width, height)
            inplane        BxNxhxwx3 window back projected in space
            Ms             BxNx2x2   inverse of gaussian variance
            cameraPoints   BxNx3     point location in camera space
            width          scalar
            height         scalar
            localWidth     scalar    used to limit gradient computation to local window
            localHeight    scalar
            camFar         scalar
            focalLength    scalar
            mergeThreshold scalar    depth merging threshold T
            considerZ      bool      consider Z gradient
            topK           int       front-most points to consider during forward/backward rendering
        returns:
            pixels      BxHxWx3
            pointIdxMap BxHxWx5
            rhoMap      BxHxWx5
            WsMap       BxHxWx5x3
            isBehind    BxHxWx5
        """
        batchSize, numPoints, bbHeight, bbWidth = rho.shape
        # compute visibility, return per pixel the top-5 contributors sorted by their depthValue
        # pointIdxMap BxHxWx5, index inside the splat's bounding box window which is to be rendered at pixel (h,w)
        # depthMap BxHxWx5, depths of splats that are rendered at pixel (h,w)
        # rhoMap BxHxWx5
        pointIdxMap = torch.full((batchSize, height, width, topK), -1, dtype=torch.int64, device=rho.device)
        depthMap = torch.full((batchSize, height, width, topK), camFar, dtype=rho.dtype, device=rho.device)
        bbPositionMap = torch.full((batchSize, height, width, topK, 2), -1, dtype=torch.int64, device=rho.device)
        with torch.cuda.device(rho.device):
            # call visibility kernel, outputs depthMap, pointIdxMap which store the depth and index of
            # the 5 closest points for each pixel; if fewer than 5 points paint the pixel, set idxMap to -1
            _compute_visibility_maps(boundingBoxes[:, :, :2].contiguous(), inplane, pointIdxMap, bbPositionMap, depthMap)
        # gather rho, wk
        WsMap = _gather_maps(Ws, pointIdxMap, 0.0)
        # per-batch indices for rhos Bx(Nxhxw)
        # gather rho, Ws values from pointIdxMap and bbPositionMap; if idx < 0 (unset), then set rho=0 Ws=0
        totalIdxMap = pointIdxMap*bbHeight*bbWidth+bbPositionMap[:, :, :, :, 0]*bbWidth+bbPositionMap[:, :, :, :, 1]
        validMaps = totalIdxMap >= 0
        totalIdxMap = torch.where(validMaps, totalIdxMap, torch.full_like(totalIdxMap, -1))
        rhoMap = _gather_maps(rho.reshape(batchSize, -1, 1), totalIdxMap, 0.0).squeeze(-1)
        # check depth jump
        isBehind = torch.zeros(depthMap.shape, dtype=torch.uint8, device=depthMap.device)
        isBehind[:, :, :, 1:] = (depthMap[:, :, :, 1:]-depthMap[:, :, :, :1]) > mergeThreshold
        rhoMap_filtered = torch.where(isBehind, torch.zeros(1, 1, 1, 1, device=rhoMap.device, dtype=rhoMap.dtype), rhoMap)
        # WsMap[:, :, :, 1:, :] = torch.where(isBehind.unsqueeze(-1), torch.zeros(1, 1, 1, 1, 1, device=WsMap.device, dtype=WsMap.dtype), WsMap[:, :, :, 1:])
        # normalize rho
        sumRho = torch.sum(rhoMap_filtered, dim=-1, keepdim=True)
        sumRho = torch.where(sumRho == 0, torch.ones_like(sumRho), sumRho)
        rhoMap_normalized = rhoMap_filtered/sumRho
        # rho * w
        pixels = torch.sum(WsMap * rhoMap_normalized.unsqueeze(-1), dim=3)
        # accumulated = WsMap[:, :, :, 0, :]
        ctx.save_for_backward(pointIdxMap, bbPositionMap, isBehind, WsMap, rhoMap, depthMap, Ws, rhoValues, projPoints, cameraPoints, boundingBoxes, pixels, Ms)
        ctx.numPoint = numPoints
        ctx.bbWidth = bbWidth
        ctx.bbHeight = bbHeight
        ctx.localHeight = localHeight
        ctx.localWidth = localWidth
        ctx.mergeThreshold = mergeThreshold
        ctx.focalLength = focalLength
        ctx.considerZ = considerZ
        ctx.rho_requires_grad = rho.requires_grad
        ctx.w_requires_grad = Ws.requires_grad
        ctx.xyz_requires_grad = projPoints.requires_grad
        ctx.mark_non_differentiable(pointIdxMap, rhoMap, WsMap, isBehind)
        return pixels, pointIdxMap, rhoMap_normalized, WsMap, isBehind

    @staticmethod
    def backward(ctx, gradPixels, dpointIdxMap, gradRhoMap, gradWsMap, gradIsBehind):
        """
        input
            gradPixels (BxHxWx3)
        output
            dRho  (BxNxbbHxbbW)
            dW    (BxNx3)
            dP    (BxNx2) derivative wrt projected points
            dcamP (BxNx3) derivative wrt camera points (only z-dim is nonzero)
        """
        pointIdxMap, bbPositionMap, isBehind, WsMap, rhoMap, depthMap, Ws, rhoValues, projPoints, cameraPoints, boundingBoxes, pixels, Ms = ctx.saved_tensors
        mergeThreshold = ctx.mergeThreshold
        focalLength = ctx.focalLength
        numPoint = ctx.numPoint
        considerZ = ctx.considerZ
        bbWidth = ctx.bbWidth
        bbHeight = ctx.bbHeight
        batchSize, height, width, topK, C = WsMap.shape
        if ctx.needs_input_grad[0]:  # rho will not be backpropagated
            WsMap_ = torch.where(isBehind.unsqueeze(-1), torch.zeros(1, 1, 1, 1, 1, device=WsMap.device, dtype=WsMap.dtype), WsMap)
            totalIdxMap = pointIdxMap*bbHeight*bbWidth+bbPositionMap[:, :, :, :, 0]*bbWidth+bbPositionMap[:, :, :, :, 1]
            # TODO check dNormalizeddRho
            rhoMap_filtered = torch.where(isBehind, torch.zeros(1, 1, 1, 1, device=rhoMap.device, dtype=rhoMap.dtype), rhoMap)
            sumRho = torch.sum(rhoMap_filtered, dim=-1, keepdim=True)
            dNormalizeddRho = torch.where(rhoMap > 0, 1/sumRho-rhoMap/sumRho, rhoMap)
            dRho = _guided_scatter_maps(numPoint*bbWidth*bbHeight, dNormalizeddRho.unsqueeze(-1)*gradPixels.unsqueeze(3)*WsMap_, totalIdxMap, boundingBoxes)
            dRho = torch.sum(dRho, dim=-1)
            dRho = torch.reshape(dRho, (batchSize, numPoint, bbHeight, bbWidth))
        else:
            dRho = None
        if ctx.needs_input_grad[2]:
            # dPixels/dWs = Rho
            rhoMap_filtered = torch.where(isBehind, torch.zeros(1, 1, 1, 1, device=rhoMap.device, dtype=rhoMap.dtype), rhoMap)
            sumRho = torch.sum(rhoMap_filtered, dim=-1, keepdim=True)
            # guard empty pixels against 0/0, matching the normalization in forward()
            sumRho = torch.where(sumRho == 0, torch.ones_like(sumRho), sumRho)
            rhoMap_normalized = rhoMap_filtered/sumRho
            # BxHxWx3 -> BxHxWxKx3 -> BxNx3
            dWs = _guided_scatter_maps(numPoint, gradPixels.unsqueeze(3)*rhoMap_normalized.unsqueeze(-1), pointIdxMap, boundingBoxes)
        else:
            dWs = None
        if ctx.needs_input_grad[3]:
            localWidth = ctx.localWidth
            localHeight = ctx.localHeight
            depthValues = cameraPoints[:, :, 2].contiguous()
            # B,N,1
            dIdp = torch.zeros_like(projPoints, device=gradPixels.device, dtype=gradPixels.dtype)
            dIdz = torch.zeros(1, numPoint, device=gradPixels.device, dtype=gradPixels.dtype)
            outputs = _visibility_backward(focalLength, mergeThreshold, considerZ,
                                           localHeight, localWidth,
                                           gradPixels, pointIdxMap, rhoMap, WsMap, depthMap, isBehind,
                                           pixels, boundingBoxes, projPoints, Ws, depthValues, rhoValues, dIdp, dIdz)
            dIdp, dIdz = outputs
            # outputs = _visibility_debug_backward(mergeThreshold, focalLength, considerZ,
            #                                      localHeight, localWidth, 0,
            #                                      gradPixels, pointIdxMap, rhoMap, WsMap, depthMap, isBehind,
            #                                      pixels, boundingBoxes, projPoints, Ws, depthValues, rhoValues, dIdp, dIdz)
            # dIdp, dIdz, debugTensor = outputs
            dIdcam = torch.zeros_like(cameraPoints)
            dIdcam[:, :, 2] = dIdz
            # saved_variables["dI"] = gradPixels.detach().cpu()
            # saved_variables["dIdp"] = saved_variables["dIdp"].scatter_(1, saved_variables["renderable_idx"].expand(-1, -1, dIdp.shape[-1]),
            #                                                            dIdp.cpu().detach())
            # saved_variables["projPoints"] = saved_variables["projPoints"].scatter_(1, saved_variables["renderable_idx"].expand(-1, -1, dIdp.shape[-1]),
            #                                                                        projPoints.cpu().detach())
            # saved_variables["dIdpMap"] = debugTensor[:,:,:,:2].cpu().detach()
        else:
            dIdp = dIdcam = None
        return (None, None, dWs, dIdp, None, None, dIdcam, None, None, None, None, None, None, None, None, None, None)
```
Hi, I am confused by the seventh return item of the backward function, which is named dIdcam and described as "(BxNx3) derivative wrt camera points (only z-dim is nonzero)". Why is its corresponding input item of the forward function Ms? Is something wrong here?
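For reference, the top-K depth merge and normalization that forward() performs after the visibility kernel can be reproduced in isolation; a minimal sketch with dummy shapes (plain PyTorch, not the repo's CUDA path):

```python
import torch

B, H, W, K = 1, 4, 4, 5
rhoMap = torch.rand(B, H, W, K)           # gathered splat weights per pixel
WsMap = torch.rand(B, H, W, K, 3)         # gathered splat colors per pixel
depthMap, _ = torch.sort(torch.rand(B, H, W, K), dim=-1)  # ascending depths
mergeThreshold = 0.05

# a splat is discarded if it lies more than T behind the front-most splat
isBehind = torch.zeros(B, H, W, K, dtype=torch.bool)
isBehind[..., 1:] = (depthMap[..., 1:] - depthMap[..., :1]) > mergeThreshold
rho_f = torch.where(isBehind, torch.zeros_like(rhoMap), rhoMap)

# normalize the surviving weights and blend the colors
sumRho = rho_f.sum(dim=-1, keepdim=True).clamp_min(1e-8)
pixels = (WsMap * (rho_f / sumRho).unsqueeze(-1)).sum(dim=3)  # BxHxWx3
```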
It looks like the scaler positional argument is missing when creating the empty PointFragment here. This throws an error when running a non-renderable scene. I am not sure how to fix it because I don't know what an empty PointFragment should be.
Thanks!
The Iso-Points code does not contain the configs directory. Could you please provide it?
The word "differential" should be replaced by "differentiable" in the tags and the description.
Hi, thanks for sharing this great project. I tried to compile DSS with pytorch 1.10, but at runtime there is an undefined symbol error. Will this project support pytorch 1.10 or later? Thanks.
```
/home/ray/anaconda3/envs/dss/lib/python3.8/site-packages/pytorch3d/structures/meshes.py:1108: UserWarning:
__floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
Traceback (most recent call last):
  File "train_mvr.py", line 74, in <module>
    model = config.create_model(
  File "/mnt/Datasets/projects/DSS/config.py", line 184, in create_model
    renderer = create_renderer(cfg.renderer).to(device)
  File "/mnt/Datasets/projects/DSS/config.py", line 244, in create_renderer
    Raster = get_class_from_string(render_opt.raster_type)
  File "/mnt/Datasets/projects/DSS/DSS/utils/__init__.py", line 71, in get_class_from_string
    mod = importlib.import_module(cls_str[:i])
  File "/home/ray/anaconda3/envs/dss/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 843, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/mnt/Datasets/projects/DSS/DSS/core/rasterizer.py", line 21, in <module>
    from .. import _C, logger_py
ImportError: /mnt/Datasets/projects/DSS/DSS/_C.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZNK2at6Tensor7is_cudaEv
```
I have run into the following error when running scripts/create_mvr_data_from_mesh.py for the first time:
```
  File "scripts/create_mvr_data_from_mesh.py", line 207, in <module>
    images = renderer.shader(fragments, meshes_batch, lights=lights, cameras=cams)
  File "/home/mengqi/.conda/envs/pytorch3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/mengqi/.conda/envs/pytorch3d/lib/python3.8/site-packages/pytorch3d/renderer/mesh/shader.py", line 376, in forward
    images = hard_rgb_blend(colors, fragments, blend_params)
  File "/home/mengqi/.conda/envs/pytorch3d/lib/python3.8/site-packages/pytorch3d/renderer/blending.py", line 84, in hard_rgb_blend
    return torch.cat([pixel_colors, alpha], dim=-1)  # (N, H, W, 4)
RuntimeError: Tensors must have same number of dimensions: got 4 and 5
```
As you can see, the error is raised at line 207 of the script and is caused by this call:

```python
images = renderer.shader(fragments, meshes_batch, lights=lights, cameras=cams)
```

Can you help me with this?
Version
Python: 3.8
PyTorch: 1.6
PyTorch3D: 0.6.1
System
System='Linux'
release='4.15.0-188-generic'
version='#199-Ubuntu SMP Wed Jun 15 20:42:56 UTC 2022'
machine='x86_64'
Note
In the README.md file you say to run external/prefix, but I could only find external/prefix_sum.
I have successfully run all the setup.py scripts.
The learning rate for Adam is hardcoded, but a line in the script also tries to retrieve it from the yaml config.
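A minimal sketch of what reading it from the config could look like (the key names here are hypothetical; check how dss.yml is actually laid out):

```python
import torch
import yaml

with open("configs/dss.yml") as f:
    cfg = yaml.safe_load(f)

# "training"/"learning_rate" are hypothetical key names; adjust to dss.yml
lr = cfg.get("training", {}).get("learning_rate", 1e-3)
model = torch.nn.Linear(3, 3)  # stand-in for the DSS model
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
```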
When I try to run the demo in the readme, I get the following error:
```
Traceback (most recent call last):
  File "train_mvr.py", line 75, in <module>
    cfg, camera_model=train_dataset.get_cameras(), device=device)
  File "/home/llx/packages/DSS/config.py", line 184, in create_model
    renderer = create_renderer(cfg.renderer).to(device)
  File "/home/llx/packages/DSS/config.py", line 244, in create_renderer
    Raster = get_class_from_string(render_opt.raster_type)
  File "/home/llx/packages/DSS/DSS/utils/__init__.py", line 71, in get_class_from_string
    mod = importlib.import_module(cls_str[:i])
  File "/home/llx/anaconda3/envs/vibe/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 728, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/llx/packages/DSS/DSS/core/rasterizer.py", line 21, in <module>
    from .. import _C, logger_py
ImportError: /home/llx/packages/DSS/DSS/_C.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _Z25RasterizePointsCoarseCudaRKN2at6TensorES2_S2_S2_iii
```
My environment is python 3.7 and pytorch 1.4. I followed the installation instructions, except:

- my pytorch3d is installed from source (in my test it works)
- my pytorch version is 1.4 and my cuda version is 10.0
- I installed fvcore and iopath with pip

Do these matter? How can I solve this problem?
Hi, thanks for sharing this great project. I ran DSS with your example data and also with my own point cloud; in both situations a few points' GVdets are negative in the function _get_per_point_info in the SurfaceSplatting class, which means the projected second-order curve is not an ellipse. Is this behavior correct? Thanks
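If it helps to check this numerically, the ellipse test on a 2x2 symmetric conic matrix is just a determinant sign check; a toy sketch (shapes and names illustrative, not DSS internals):

```python
import torch

# stand-in 2x2 conic matrices, one per point (cf. Ms, the inverse Gaussian
# variances in the rasterizer); built PSD here only for the demo
M = torch.randn(100, 2, 2)
M = M @ M.transpose(-1, -2)
det = M[:, 0, 0] * M[:, 1, 1] - M[:, 0, 1] * M[:, 1, 0]
# the level sets x^T M x = c are ellipses exactly when det(M) > 0;
# a negative determinant means the curve degenerates into a hyperbola
is_ellipse = det > 0
print(is_ellipse.float().mean())
```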
I keep getting this error:

```
RuntimeError: "guided_scatter_maps_kernel" not implemented for 'Bool' (operator() at DSS/cuda/rasterize_forward_cuda_kernel.cu:339)
```
When I set learn color to true in the config file, the resulting point cloud has color that looks like random noise. When printing out the "color" in get_point_cloud in point_modeling.py, the values seem to be initialized to 1 in the first loop, but quickly become random numbers later (including negative values). img_pred generated by calling self.model(...) in trainer.py also seems to have very strange results.
I am interested in using this repo for my research and have been digging through the DSS rendering code. I found that when using the SmapeLoss in a toy rendering problem with a single point and single normal (similar to Figure 6 of your paper), the gradient of the loss with respect to the normal came back as None. After some more digging, I noticed that in the render() function, lines 855-859, the projPoints, cameraPoints, and cameraNormals have their gradients detached when computing rho. Therefore, it looks like the partial derivatives drho/dn and drho/dp from equations 8 and 9 of your paper are not being computed.
Also, I dug into the DSS rasterize backward function, and it looks like dRho is always returned as None. It looks like only dI/dw and (dI/dh)*(dh/dp) from equations 8 and 9 are returned.
It would be very helpful if I could get assistance with recovering these gradients or clarifying my understanding of the code.
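A toy sketch of the pattern I used to confirm which inputs end up with no gradient (standalone PyTorch, not DSS code):

```python
import torch

points = torch.rand(1, 10, 3, requires_grad=True)
normals = torch.rand(1, 10, 3, requires_grad=True)

# stand-in for the renderer: normals is detached on purpose, mimicking the
# detach() calls around the rho computation in render()
image = (points * normals.detach()).sum()

grads = torch.autograd.grad(image, [points, normals], allow_unused=True)
print(grads[0] is None, grads[1] is None)  # False, True -> normals receives no gradient
```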
Thank you!
Hi, I would like to reproduce the results shown in Fig. 3 of the paper.
Could you give me a link to the mesh and the necessary config information to reproduce the results?
Thanks!
When running `python scripts/create_mvr_data_from_mesh.py --points example_data/mesh/yoga6.ply --output example_data/images --num_cameras 128 --image-size 512 --tri_color_light --point_lights --has_specular`, the following error pops up.
```
Traceback (most recent call last):
  File "scripts/create_mvr_data_from_mesh.py", line 190, in <module>
    for c_idx, cams in tqdm(camera_sampler):
ValueError: too many values to unpack (expected 2)
```
The error above was fixed by removing c_idx; then the following error pops up.
```
Traceback (most recent call last):
  File "scripts/create_mvr_data_from_mesh.py", line 206, in <module>
    images = renderer.shader(
  File "/home/chx/anaconda3/envs/pytorch3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/chx/anaconda3/envs/pytorch3d/lib/python3.8/site-packages/pytorch3d/renderer/mesh/shader.py", line 377, in forward
    images = hard_rgb_blend(colors, fragments, blend_params)
  File "/home/chx/anaconda3/envs/pytorch3d/lib/python3.8/site-packages/pytorch3d/renderer/blending.py", line 77, in hard_rgb_blend
    return torch.cat([pixel_colors, alpha], dim=-1)  # (N, H, W, 4)
RuntimeError: Tensors must have same number of dimensions: got 4 and 5
```
The error was caused by the ambient color of the light created by get_tri_color_lights_for_view in common.py having one extra dimension, and can be solved by removing that dimension from the ambient color.
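Concretely, the fix could look something like this before the shader call (an assumption about where the extra dimension sits; PointLights and ambient_color are the standard pytorch3d names):

```python
from pytorch3d.renderer import PointLights

lights = PointLights(ambient_color=((0.3, 0.3, 0.3),))
# hypothetical: if an extra dimension sneaks in, flatten back to (N, 3),
# the shape hard_rgb_blend expects for the blended colors
if lights.ambient_color.ndim > 2:
    lights.ambient_color = lights.ambient_color.reshape(-1, 3)
```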
When running `python train_mvr.py --config configs/dss.yml`, this error pops up in the log:

```
ERROR - Could not plot gradient: TypeError("transform_points_screen() got multiple values for argument 'eps'").
```
The error is fixed by replacing, at line 531 in trainer.py,

```python
pts_ndc = _cams.transform_points_screen(pts_world.view(
    1, -1, 3), ((W, H),), eps=1e-17).view(-1, 3)[..., :2]
pts_grad_ndc = _cams.transform_points_screen(
    (pts_world + pts_world_grad).view(1, -1, 3), ((W, H),), eps=1e-8).view(-1, 3)[..., :2]
```

with

```python
pts_ndc = _cams.transform_points_screen(pts_world.view(
    1, -1, 3), eps=1e-17, image_size=((W, H),)).view(-1, 3)[..., :2]
pts_grad_ndc = _cams.transform_points_screen(
    (pts_world + pts_world_grad).view(1, -1, 3), eps=1e-8, image_size=((W, H),)).view(-1, 3)[..., :2]
```
Then the following error appears:

```
Traceback (most recent call last):
  File "train_mvr.py", line 199, in <module>
    eval_dict = trainer.evaluate_3d(
  File "/home/chx/Forest/DSS/DSS/training/trainer.py", line 167, in evaluate_3d
    if not pointcloud.is_empty:
AttributeError: 'PointClouds3D' object has no attribute 'is_empty'
```
and after defining is_empty for PointClouds3D in cloud.py,

```
Traceback (most recent call last):
  File "train_mvr.py", line 199, in <module>
    eval_dict = trainer.evaluate_3d(
  File "/home/chx/Forest/DSS/DSS/training/trainer.py", line 169, in evaluate_3d
    np.array(pointcloud.vertices)[None, ...], global_step=it)
AttributeError: 'PointClouds3D' object has no attribute 'vertices'
```

which I solved by defining vertices to be the "points" tensor passed into the constructor of PointClouds3D; I am unsure whether that is correct.
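For reference, a sketch of the two stop-gap properties described above, assuming DSS's PointClouds3D behaves like pytorch3d's Pointclouds (the property bodies are my guesses, not the repo's code):

```python
from pytorch3d.structures import Pointclouds

class PatchedClouds(Pointclouds):
    @property
    def is_empty(self) -> bool:
        return self.isempty()        # Pointclouds already provides isempty()

    @property
    def vertices(self):
        return self.points_packed()  # the points passed to the constructor
```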
After fixing the errors above, the following error appears:

```
Traceback (most recent call last):
  File "train_mvr.py", line 205, in <module>
    metric_val = eval_dict[model_selection_metric]
KeyError: 'chamfer'
```
I printed out eval_dict:

```
{'chamfer_point': 0.027734989300370216, 'chamfer_normal': 0.5018584728240967}
```

and I am not sure how to fix this; presumably model_selection_metric in the config has to match one of the keys actually returned (e.g. chamfer_point).
The point cloud provided by ScanNet is very noisy, so how can I run learn_image_filter.py on a ScanNet scene?
Any toy example or suggestions?
Why is it not possible to run the demo? Why does it raise this issue? What is required as input?
```
python scripts/create_mvr_data_from_mesh.py --points example_data/mesh/yoga6.ply --output example_data/images --num_cameras 128 --image-size 512 --tri_color_light --point_lights --has_specular
  0%|          | 0/128 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "scripts/create_mvr_data_from_mesh.py", line 241, in <module>
    imageio.imwrite(os.path.join(depth_dir, "%06d.exr" % idx),
  File "/home/admin/anaconda3/envs/DSS/lib/python3.8/site-packages/imageio/core/functions.py", line 303, in imwrite
    writer = get_writer(uri, format, "i", **kwargs)
  File "/home/admin/anaconda3/envs/DSS/lib/python3.8/site-packages/imageio/core/functions.py", line 226, in get_writer
    raise ValueError(
ValueError: Could not find a format to write the specified file in single-image mode
```
Is there any other detailed documentation? And will the denoising function be available?