benjiebob / wldo
Code for the ECCV 2020 paper: Who Left the Dogs Out? 3D Animal Reconstruction with Expectation Maximization in the Loop.
Hey, I want to know how I can modify your code so that it only outputs the 3D coordinates of the dog keyjoints.
Good afternoon,
May I ask how you installed the neural renderer dependency? A regular pip install fails for me. I tried to work around some issues and install it manually by copying the source code and running its setup.py. The WLDO code then seems to run, but the results are pretty bad. Comparing against the image in the README, my results (with the pre-trained network) are worse than they probably should be.
I am not 100% sure, but I suspect the problem is the neural renderer dependency. Which version did you use, and how did you install it?
Thank you.
I noticed that you have defined a custom dataset format and converted Animal Pose into it. I'm curious how you did that. Would you consider providing the conversion script, e.g. a "changejsons.py" file?
Hi Benjamin, your WLDO is excellent. I applied it to other animals and it estimates their pose very well. However, I have a question about the keypoints you chose. For the dogs, I only see one keypoint on the whole torso, the tail-start. If there were another point around the neck, it would seem reasonable; however, there are only a few points on the head. How does this one point define the direction of the torso? Or can the tail-start point together with the four points (points 4, 2, 5, 11) define the whole body? Thank you.
Hi, thanks for your great work. Are you going to release the training code?
self.model_renderer = NeuralRenderer(
config.IMG_RES, proj_type=config.PROJECTION,
norm_f0=config.NORM_F0,
norm_f=config.NORM_F,
norm_z=config.NORM_Z)
synth_rgb, synth_silhouettes = self.model_renderer(
verts, faces, pred_camera)
synth_rgb = torch.clamp(synth_rgb, 0.0, 1.0)
TypeError: clamp(): argument 'input' (position 1) must be Tensor, not tuple
Is there a problem with the torch version? Or is the library now incompatible?
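For what it's worth, that TypeError usually means the renderer call returned a tuple rather than a bare tensor, which happens when library versions change their return signature. A minimal defensive sketch (the `clamp_rendered_rgb` helper is hypothetical, not part of WLDO, and the assumption that the RGB image is the tuple's first element would need checking against your neural_renderer version):

```python
import torch

def clamp_rendered_rgb(render_output):
    """Return the rendered RGB image clamped to [0, 1].

    Hypothetical workaround: some renderer versions return a bare
    tensor, others a (rgb, ...) tuple. torch.clamp only accepts a
    Tensor, so normalise both cases before clamping. The assumption
    that rgb is the first tuple element is unverified.
    """
    rgb = render_output[0] if isinstance(render_output, tuple) else render_output
    return torch.clamp(rgb, 0.0, 1.0)

# Simulate both return styles with a dummy over-range image.
img = torch.full((1, 3, 4, 4), 1.5)
print(clamp_rendered_rgb(img).max().item())                        # tensor case
print(clamp_rendered_rgb((img, torch.ones(1, 4, 4))).max().item()) # tuple case
```

Pinning the neural_renderer version the authors used would be the cleaner fix; this only papers over the signature mismatch.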
Issue reported by Le Jiang:
There are two functions, "load_model_from_disk()" and "Model()", which are used in both the demo and eval. One takes the "shape_family_id" and "load_from_disk" arguments and the other doesn't. load_model_from_disk is defined by you, so it can differ, but Model() is, I think, the function from neural_renderer, and it needs shape_family_id and load_from_disk. I cannot run it now. Are you able to use the demo, or do you have any update on this? Thank you so much!