shichenliu / SoftRas
Project page of the paper "Soft Rasterizer: A Differentiable Renderer for Image-based 3D Reasoning"
License: MIT License
Can you explain why you have this clipping step before calculating the z buffer value?
Hi, thanks for the good work!
In soft_renderer/rasterize.py, a function, srf.rasterize, is called. But I cannot figure out where it is defined. Can you give me some help?
SoftRas/soft_renderer/functional/look.py
Line 20 in a238ad7
Should be: direction = direction.to(device)
And you might want to process "up" similarly to lines 15 to 27.
Hi, I installed the code and ran the demo; however, it required the neural_renderer package in soft_renderer/functional/load_obj.py, line 7. I modified it to import soft_renderer.cuda.load_textures as load_textures_cuda and it worked.
Also, it might be better to include gcc-related requirements in your README. Current CUDA (even 10.0) does not support gcc8, but the latest OSes (I'm using the latest Fedora) use gcc8 as the default compiler. I manually installed gcc7 and added a soft link to force my machine to use gcc7 as the default compiler. After changing to gcc7, it worked.
Hi,
I want to use your SoftRas in a segmentation project. I have a CNN that calculates a polygon describing the boundary of my segmented object. Now I want to extract the object from the input image, so I want to create a mask with ONES inside the polygon and ZEROS outside. Multiplying this mask with my input image pointwise yields the segmented object. Since I want to use this segmented object to calculate the loss, the mask has to be differentiable with respect to the polygon that the CNN calculates, which is why I want to use your rasterizer. Is it possible to use one of your functions to rasterize 2D polygons to 2D images, or should I implement my own SoftRas for this problem?
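For what it's worth, the soft-rasterization idea transfers directly to 2D: a soft occupancy mask can be built from a sigmoid over the signed squared distance to the polygon boundary, which is differentiable with respect to the polygon vertices. Below is a minimal NumPy sketch of that idea (the function names and the simple even-odd inside test are my own illustration, not part of the SoftRas API):

```python
import numpy as np

def point_segment_dist(p, a, b):
    # Distance from points p (N, 2) to the segment a-b.
    ab = b - a
    t = np.clip(((p - a) @ ab) / (ab @ ab), 0.0, 1.0)
    proj = a + t[:, None] * ab
    return np.linalg.norm(p - proj, axis=1)

def soft_polygon_mask(poly, size, sigma=1e-3):
    """Soft occupancy mask for a polygon, in the spirit of SoftRas'
    probability maps: sigmoid(sign * d^2 / sigma)."""
    ys, xs = np.mgrid[0:size, 0:size]
    p = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float64)
    p = (p + 0.5) / size  # pixel centers in [0, 1]
    # unsigned distance to the polygon boundary
    d = np.min([point_segment_dist(p, poly[i], poly[(i + 1) % len(poly)])
                for i in range(len(poly))], axis=0)
    # inside test by ray casting (even-odd rule)
    inside = np.zeros(len(p), dtype=bool)
    for i in range(len(poly)):
        a, b = poly[i], poly[(i + 1) % len(poly)]
        cond = (a[1] > p[:, 1]) != (b[1] > p[:, 1])
        xint = a[0] + (p[:, 1] - a[1]) / (b[1] - a[1] + 1e-12) * (b[0] - a[0])
        inside ^= cond & (p[:, 0] < xint)
    sign = np.where(inside, 1.0, -1.0)
    mask = 1.0 / (1.0 + np.exp(-sign * d ** 2 / sigma))
    return mask.reshape(size, size)

poly = np.array([[0.2, 0.2], [0.8, 0.2], [0.8, 0.8], [0.2, 0.8]])
mask = soft_polygon_mask(poly, 32)
```

As sigma goes to 0 this converges to a hard binary mask; in an actual pipeline the same computation would be written with torch ops so gradients flow back to the polygon vertices.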
demo_render.py works fine
but in demo_deform.py I get this error (Windows 10, Python 3.7, CUDA 10):
set CUDA_VISIBLE_DEVICES=0 & python examples/demo_deform.py
Loss: 0.8354: 0%| | 0/20000 [00:00<?, ?it/s]THCudaCheck FAIL file=C:/w/1/s/windows/pytorch/aten/src\THC/THCReduceAll.cuh line=327 error=74 : misaligned address
Traceback (most recent call last):
File "examples/demo_deform.py", line 111, in
main()
File "examples/demo_deform.py", line 100, in main
loss.backward()
File "D:\apps\python3\lib\site-packages\torch\tensor.py", line 107, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "D:\apps\python3\lib\site-packages\torch\autograd\__init__.py", line 93, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: cuda runtime error (74) : misaligned address at C:/w/1/s/windows/pytorch/aten/src\THC/THCReduceAll.cuh:327
Can this code be used for training? I printed gradients of weights of the last layer, and they are always zero.
Hello @ShichenLiu @chenweikai ,
Could you please provide some details about texture reconstruction? I am primarily interested in the texture_type used for generating the texture. Thanks.
Hi,
I use the 3DDFA output mesh to render a face, but I only got black images. I noticed that the vertex values of the given sample are between -1 and 1; should I normalize mine too?
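In case it helps: the sample meshes indeed appear to sit roughly in [-1, 1], so normalizing your own vertices the same way is a reasonable first try. A hedged NumPy sketch (normalize_vertices is a hypothetical helper, not part of SoftRas):

```python
import numpy as np

def normalize_vertices(vertices):
    """Center a vertex array (N, 3) at the origin and scale it so the
    largest extent fits in [-1, 1] (a common convention; this is an
    illustration, not the normalization SoftRas itself applies)."""
    v = np.asarray(vertices, dtype=np.float64)
    center = (v.max(axis=0) + v.min(axis=0)) / 2.0
    v = v - center
    scale = np.abs(v).max()
    return v / scale

verts = np.array([[0.0, 0.0, 0.0], [10.0, 2.0, 4.0], [5.0, -2.0, 8.0]])
out = normalize_vertices(verts)
```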
Can you share the code for the pose optimization example with the cube?
Thanks!
Does texture_res do anything when texture_type is vertex? If so, why is it set via self.texture_res = int(np.sqrt(self._textures.shape[2])) in mesh.py?
Could you add code showing how to learn/optimize the texture (vertex and/or surface), as in your paper?
Thanks!
Thanks for sharing this great work. I have some confusion about the code. As you mentioned in the paper, you are working on single-view reconstruction, but the code uses 2 viewpoints.
Besides, in model.py, you concatenate viewpoints = torch.cat((viewpoint_a, viewpoint_a, viewpoint_b, viewpoint_b), dim=0) and vertices = torch.cat((vertices, vertices), dim=0). What is the purpose of this operation?
Looking forward to your reply! Thanks.
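My reading of that concatenation pattern (only a guess from the shapes, not confirmed by the authors) is that it pairs every predicted mesh with both viewpoints in a single batch, so each shape is supervised from two views. A toy illustration:

```python
# Hedged guess at the intent of the torch.cat pattern in model.py:
# meshes predicted from image a and image b are each rendered under
# both viewpoints, giving cross-view supervision in one batch.
mesh_a, mesh_b = "mesh_a", "mesh_b"      # stand-ins for vertex tensors
view_a, view_b = "view_a", "view_b"      # stand-ins for viewpoints

vertices = [mesh_a, mesh_b, mesh_a, mesh_b]    # vertices repeated twice
viewpoints = [view_a, view_a, view_b, view_b]  # each viewpoint repeated
pairs = list(zip(vertices, viewpoints))
# Every mesh meets every viewpoint exactly once:
# [(mesh_a, view_a), (mesh_b, view_a), (mesh_a, view_b), (mesh_b, view_b)]
```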
Thanks for your code! I notice the applications include non-rigid shape fitting. Is it implemented in this public code?
Hi! Thanks for sharing your nice work!
I am doing a human mesh reconstruction task (with the SMPL model). Before I used the mask to supervise our network (i.e., before using the soft rasterizer), 1 epoch finished in about 10 minutes, but after adding SoftRas it becomes very slow (about 3 hours per epoch). I don't know whether this is normal. Does rasterization generally affect speed a lot? (In particular, I use 6890 vertices and 13776 faces.) Thanks!
Hi!
I'm not able to find any method for rendering depth. The current routines return an RGBA image. Can you please explain how to render depth?
I'll switch from NeuralMeshRenderer to SoftRas for my research if I can render a DepthMap.
Thank you!
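One workaround people use with differentiable renderers (not, as far as I can tell, a built-in SoftRas API) is to bake per-vertex depth into vertex colors with texture_type="vertex" and read the depth back from one color channel; the renderer then just barycentrically interpolates the depths across each face. The interpolation step looks like this in plain NumPy:

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of 2D point p in triangle (a, b, c)."""
    m = np.array([[b[0] - a[0], c[0] - a[0]],
                  [b[1] - a[1], c[1] - a[1]]])
    w1, w2 = np.linalg.solve(m, np.asarray(p) - np.asarray(a))
    return np.array([1.0 - w1 - w2, w1, w2])

# Screen-space triangle with per-vertex depths: interpolating the depths
# with barycentric weights is exactly what rendering them as vertex
# "colors" would produce at each covered pixel.
tri = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
z = np.array([2.0, 4.0, 6.0])
w = barycentric([0.25, 0.25], *tri)
depth = float(w @ z)   # 0.5*2 + 0.25*4 + 0.25*6 = 3.5
```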
A bug causes an error when texture_type="vertex" in the function save_obj. Maybe it should be changed to this:
if textures is not None and texture_type == 'surface':
    f.write('mtllib %s\n\n' % os.path.basename(filename_mtl))
The dataset script has not been uploaded.
Hi,
Thanks for your amazing work. Is there a plan to support rectangular images?
Hi! @ShichenLiu
I just want to run test.py so I need 'data/results/models/recon/checkpoint_0200000.pth.tar'
Can you upload your 'checkpoint_0200000.pth.tar' file?
Thanks!
I am trying to understand what the dimensions of source.npy represent: (120, 4, 64, 64). I understand that 120 is the number of examples and 64×64 is the size of the image, but what does the 4 stand for? I am asking because I am trying to replicate and use SoftRas with another dataset of silhouette images.
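If it helps: the 4 is most likely the channel dimension of the renderer's RGBA output, i.e. 3 color channels plus an alpha/silhouette channel. A quick sanity check on a stand-in array (the actual contents of source.npy are assumed here, not verified):

```python
import numpy as np

# Hypothetical stand-in for source.npy: (examples, channels, H, W).
data = np.zeros((120, 4, 64, 64), dtype=np.float32)
data[:, 3] = 1.0                 # pretend the alpha channel is filled

rgb = data[:, :3]                # color channels
alpha = data[:, 3]               # silhouette / alpha channel
```

For a silhouette-only dataset, the alpha channel alone would be the part to match.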
Hi, thanks for open-sourcing this awesome project. For the image reconstruction part, may I ask what parameters you use in the renderer (gamma_val, aggr_func_rgb, etc.)? Also, would you please shed some light on why we need to aggregate all faces when computing color? Because it seems only the face with the smallest z matters for the color.
Hi,
I want to import a camera parameter file in JSON format, which includes the camera eye position, focal position, up vector, focal length, skew, and principal point. Is there any support for importing this?
SoftRas/soft_renderer/losses.py
Line 90 in f644279
I don't understand this loss exactly; it is not a standard loss. It looks like you are trying to make the angle between opposite vertices of faces sharing an edge as close as possible to 180 degrees. Can you explain what this loss means?
Thank you
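For what it's worth, that reading matches a common "flatness" regularizer: penalize the dihedral angle between faces sharing an edge so that it approaches 180 degrees. A NumPy sketch of that idea (my own illustration; the exact formula in losses.py may differ):

```python
import numpy as np

def normals_cos(a, b, c, d):
    """cos of the angle between the normals of triangles (a, b, c) and
    (a, d, b), which share edge a-b. +1 means the faces are coplanar."""
    n1 = np.cross(b - a, c - a)
    n2 = np.cross(d - a, b - a)
    return float(n1 @ n2 / (np.linalg.norm(n1) * np.linalg.norm(n2)))

# Two coplanar triangles sharing edge a-b: the penalty should be ~0.
a, b = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
c, d = np.array([0.5, 1.0, 0.0]), np.array([0.5, -1.0, 0.0])
cos_t = normals_cos(a, b, c, d)
flatten_penalty = (cos_t - 1.0) ** 2   # ~0 when the faces are flat
```

Bending the shared edge moves cos_t away from 1 and the penalty grows, which pushes the mesh toward smooth surfaces.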
Could this model build the mesh for any object image without the 3D mesh label data?
Hi,
I was trying your code and when running "python setup.py install" I got this error. Do you have any idea why this happened?
Thanks.
I cloned the git repo and installed as per the instructions in the README. It ran with some warnings and ended with:
Using /home/users/piyushb/anaconda3/envs/anthro/lib/python3.7/site-packages
Finished processing dependencies for soft-renderer==1.0.0
Further, on trying to import, it gives the following error message:
>>> import soft_renderer
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/users/piyushb/projects/SoftRas/soft_renderer/__init__.py", line 1, in <module>
from . import functional
File "/home/users/piyushb/projects/SoftRas/soft_renderer/functional/__init__.py", line 4, in <module>
from .load_obj import load_obj
File "/home/users/piyushb/projects/SoftRas/soft_renderer/functional/load_obj.py", line 7, in <module>
import soft_renderer.cuda.load_textures as load_textures_cuda
ModuleNotFoundError: No module named 'soft_renderer.cuda.load_textures'
There is no package in the respective directory. Any help?
As described in the title
Hello, ShichenLiu! Thank you for your work. When I load data during training, it takes up a lot of my memory; my machine only has 16 GB. How should I set things up to avoid this situation? I look forward to your reply.
Hi, I wanted to point out a rather strange bug/issue when the program executes line 124, i.e. the loss.backward() call. The program sits there for about 0.5 seconds, and then the terminal stops execution without any prompt, as if the program had completed successfully, even though it has not finished even one iteration. I tried this on my custom dataset and then on the mesh reconstruction.zip dataset described in the README, with the same result: the program just stops. I have no idea what could be causing it. I checked whether the backward function works on a random sample tensor, and it did; in fact, I used it in a different PyTorch-based machine learning project without issues. Since backward is a method of the Tensor class, I doubt it is linked to the installed soft_renderer modules. The closest thing I found online was a forum post where someone reported this behavior and later replied that it occurred on Windows. I would really appreciate any insight into this matter. Has anyone using this repo tried it on Windows?
Many Thanks!
Hello, I was trying to render an obj file with vertex colors. I thought texture_type="vertex" in SoftRas could handle this, but the rendered image doesn't have color. I use the following code:
import soft_renderer as sr
import imageio
import numpy as np
import matplotlib.pyplot as plt
mesh = sr.Mesh.from_obj("00336.obj", load_texture=True, texture_type="vertex")
renderer = sr.SoftRenderer(camera_mode="look_at", texture_type="vertex")
renderer.transform.set_eyes_from_angles(-380, 0, 0)
images = renderer.render_mesh(mesh)
image = images.detach().cpu().numpy()[0].transpose((1,2,0))
image = (255*image).astype(np.uint8)
plt.imshow(image)
plt.show()
I have also attached my obj file, thanks!
00336.zip
Thanks for sharing your nice work!
My question is why the sigma value in test.py is larger than the one in train.py.
In train.py:
SoftRas/examples/recon/train.py
Line 36 in f644279
And it is decayed to be smaller during training. But in test.py:
SoftRas/examples/recon/test.py
Line 27 in f644279
Maybe the sigma value in test.py should be smaller than in train.py.
Thanks.
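Under the hedged assumption that sigma is annealed multiplicatively during training (the exact schedule in train.py may differ), the final training sigma ends up orders of magnitude below its initial value, which is why a larger test-time sigma does look surprising:

```python
# Hypothetical annealing schedule, for illustration only.
sigma0 = 1e-4
decay = 0.3
sigma = sigma0
for step in range(5):
    sigma *= decay        # sigma shrinks each decay step
final_sigma = sigma       # 1e-4 * 0.3**5, far below sigma0
```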
Hi, I have read your rasterization code, and it seems that it only supports "square" image rendering, because the parameter image_size=256 is a scalar rather than a 2D vector.
I am writing an "SMPL overlay" program with your soft rasterization, but my input images are not square, so it is hard to overlay the 3D body model on the original image with your code. Do you have any suggestions for that?
BTW, for square images, everything works fine. And it is exciting that your rasterization is theoretically differentiable, enabling pixel-level gradient backpropagation.
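Until rectangular rendering is supported, one workaround (an assumption on my part, not a SoftRas feature) is to pad the input to a square, render at that square resolution, and crop the result back to the original aspect ratio:

```python
import numpy as np

def pad_to_square(img):
    """Pad an (H, W, C) image to (S, S, C) with S = max(H, W),
    remembering the offsets so the render can be cropped back."""
    h, w = img.shape[:2]
    s = max(h, w)
    top, left = (s - h) // 2, (s - w) // 2
    out = np.zeros((s, s) + img.shape[2:], dtype=img.dtype)
    out[top:top + h, left:left + w] = img
    return out, (top, left)

def crop_back(square, offsets, h, w):
    # Undo the padding on the rendered square image.
    top, left = offsets
    return square[top:top + h, left:left + w]

img = np.ones((480, 640, 3), dtype=np.float32)
sq, off = pad_to_square(img)            # render at 640x640, then:
restored = crop_back(sq, off, 480, 640)
```

The camera intrinsics would also need adjusting for the padded principal point; this sketch only handles the image geometry.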
Did you train your model with multiple GPUs? When I train my model with your module in a multi-GPU environment, I encounter the error below. I used nn.DataParallel to wrap my model for multi-GPU training.
RuntimeError: CUDA error: an illegal memory access was encountered (block at /opt/conda/conda-bld/pytorch_1544176307774/work/aten/src/ATen/cuda/CUDAEvent.h:96)
Can you give me some help?
Hi:
I am trying the render example examples/demo_render.py, and the function sr.Mesh.from_obj fails with a "segmentation fault". Does anyone have the same issue?
My env:
pytorch 1.1.0; cuda 9.0; Tesla M40 GPU
Thanks~
Hi, the mesh reconstruction code doesn't seem to include the texture prediction part; could you add it?
Hello,
I did not understand the meaning behind this transformation (lines 40 to 45) in the deformation experiment. Could anyone please explain what's going on in it?
I use Python 3.6.5 and PyTorch 1.1.0, and after running "sudo python setup.py install" I found this error. In load_obj.py there is a line "import soft_renderer.cuda.load_textures as load_textures_cuda", but there is only a load_textures_cuda.cpp file in that folder. Is there an error?
Is it possible to train the network to reconstruct human mesh? It seems that this is possible but maybe you've tried that already? Any information about the result if so? Thanks.
The intrinsic matrix K should be applied after applying distortion.
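Agreed, that matches the usual pinhole convention (e.g. OpenCV): distortion acts on the normalized camera coordinates, and the intrinsics K then map the distorted coordinates to pixels. A minimal sketch with a radial k1 term only (an illustration of the ordering, not SoftRas's actual camera code):

```python
import numpy as np

def project(point_cam, K, k1=0.0):
    """Project a 3D camera-space point to pixels: normalize, distort
    (radial k1 only here), THEN apply the intrinsics K."""
    x, y = point_cam[0] / point_cam[2], point_cam[1] / point_cam[2]
    r2 = x * x + y * y
    d = 1.0 + k1 * r2                 # distortion on normalized coords
    u = K[0, 0] * (d * x) + K[0, 2]   # intrinsics applied last
    v = K[1, 1] * (d * y) + K[1, 2]
    return np.array([u, v])

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
uv = project(np.array([0.2, 0.1, 1.0]), K, k1=0.1)
```

Applying K first and distorting pixel coordinates afterwards would distort around the image origin instead of the principal point, giving wrong results for any non-centered principal point.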
Hello, ShichenLiu! I find there are no hash names in your datasets, which are .npz files. How can I sort them in order?
Thanks for kindly open-sourcing your code. But the L_g term in Eq. 5 is a little ambiguous. Can you tell me the exact form?
I am trying to reproduce the model for unsupervised mesh reconstruction (using the same script as in the examples directory). But during training I get the following recurring errors (the code doesn't terminate due to these):
Error in forward_transform_inv_triangle: invalid device function
Error in forward_soft_rasterize: invalid device function
Error in backward_soft_rasterize: invalid device function
I am using Pytorch 1.3 and CUDA 10.0
Hi! Thanks for sharing this wonderful work! I tried to use it to render face models but obtained a weird rendering for the BFM. Basically, I observed that some parts (the nose) are partially transparent and there is a black border around the edge of the face. I am wondering whether that is normal and, if not, how to properly use the soft rasterizer?
I attached the rendering results for your reference.
Can you share the code for generating the gif of the cube with varying sigma and gamma? Are you plotting the RGBA values or just the RGB?
Thank you for sharing your great work. I'm a little confused about the back-propagation process; could you help me? For I = sum(wj * cj), in your implementation it seems that only wj affects the x, y coordinates, but actually cj depends on the x, y coordinates as well. If my understanding is correct, what is the meaning of this simplification? Or is my understanding wrong? Thanks.
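For reference, the full product rule for the aggregation can be written out as:

```latex
I = \sum_j w_j c_j
\qquad\Longrightarrow\qquad
\frac{\partial I}{\partial x}
  = \sum_j \frac{\partial w_j}{\partial x}\, c_j
  + \sum_j w_j\, \frac{\partial c_j}{\partial x}
```

The simplification described in the question amounts to keeping only the first sum, i.e. treating each face color c_j as constant with respect to the screen-space position.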
I'm working on a project where I need to find optimal camera parameters, but I am getting some strange rendering artifacts when I try to use this renderer from some viewpoints.
These glitches do not appear when I view the mesh in MeshLab.
Here is a zip of the files needed to reproduce this issue: test.zip
I've been working on this for a few days but can't figure out what the problem is. Do you have any ideas about what could be causing this?
Is it possible to use this model in Azure? (I have to use Azure, not other cloud services...)
Thank you.