fishermatch's People

Contributors

yd-yin


fishermatch's Issues

Code for visualization

Could you share the details of how you visualize the Fisher distribution on SO(3), as shown in Figure 1?

Pretrained model

Thanks for the amazing work! I couldn't find any public weights files (e.g. `.ckpt`, `.tar`). Are pretrained models available for testing or evaluation? A download link would be much appreciated.

Supervision loss in code and paper

Hi, thanks for your work — it's very interesting. I noticed that the supervised loss function differs between the code and the paper. In the paper, the supervised loss is simply the negative log-likelihood of the ground-truth rotation under the predicted distribution. Why is the loss function in the code different? Is the code's loss function mentioned anywhere in the paper?

Thanks!
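For context, if the paper's supervised objective is the negative log-likelihood of the ground truth under a matrix Fisher density $p(R \mid A) \propto \exp(\operatorname{tr}(A^\top R))$ (my reading of the question — the symbols here are assumptions, not taken from the code), it would expand to

$$\mathcal{L}_{\text{sup}} = -\log p(R_{\text{gt}} \mid A) = \log F(A) - \operatorname{tr}\!\left(A^\top R_{\text{gt}}\right),$$

where $F(A)$ is the distribution's normalizing constant.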

Why not filter by the negative log-likelihood?

Hi, nice to see your interesting work, but I am curious about the entropy-based filtering part.
There seems to be a strong correlation between the Fisher entropy $H(A)$ (Equation 9) and the negative log-likelihood of the mode with respect to its parameter, $-\log p(R^{\text{mode}} \mid A)$, where $R^{\text{mode}} = \text{procrustes}(A)$. The negative log-likelihood can also indicate prediction confidence, so why not filter by the likelihood instead?
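For reference, the $\text{procrustes}(A)$ step mentioned above is the standard special orthogonal Procrustes projection via SVD. A minimal NumPy sketch (the function name is illustrative, not the repo's API):

```python
import numpy as np

def procrustes_mode(A):
    """Rotation in SO(3) closest to A (special orthogonal Procrustes),
    i.e. the mode of a matrix Fisher distribution with parameter A."""
    U, _, Vt = np.linalg.svd(A)
    # Flip the last singular direction if needed so det(R) = +1
    S = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])
    return U @ S @ Vt

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))     # an arbitrary parameter matrix
R = procrustes_mode(A)
assert np.allclose(R @ R.T, np.eye(3), atol=1e-8)  # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)           # proper rotation
```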

Why use the best_median_error?

Dear author, thank you for your excellent work! I have a question about the code in train.py: why is best_median_error used for filtering and choosing the best model? What would happen if best_mean_error were used as the indicator instead?
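One consideration behind such a choice (my observation, not the authors' stated reason): the median is robust to a handful of badly wrong predictions, while the mean is pulled toward them. A tiny sketch with illustrative error values:

```python
import statistics

# Hypothetical per-sample rotation errors in degrees:
# mostly accurate, plus two near-flipped predictions.
errors = [3.2, 4.1, 2.8, 3.9, 4.4, 170.0, 165.0]

median_err = statistics.median(errors)  # stays near the typical error
mean_err = statistics.mean(errors)      # dragged up by the two outliers
assert median_err < 5.0
assert mean_err > 40.0
```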

Why is entropy confidence negative?

Hi! Thank you for your inspiring work!

I noticed that the entropy threshold $\tau$ is set to -5.3. Why is it negative? Shouldn't entropy always be positive?
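One likely factor (my note, not the authors' answer): the Fisher entropy here is a differential entropy of a continuous density, and differential entropy, unlike discrete Shannon entropy, can be negative when the density is concentrated. A quick sanity check with a 1-D Gaussian, whose differential entropy is $\tfrac{1}{2}\log(2\pi e \sigma^2)$:

```python
import math

def gaussian_diff_entropy(sigma):
    """Differential entropy of N(0, sigma^2): 0.5 * log(2*pi*e*sigma^2)."""
    return 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)

assert gaussian_diff_entropy(1.0) > 0    # wide density: positive entropy
assert gaussian_diff_entropy(0.01) < 0   # concentrated density: negative entropy
```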

How to get averaged results of 6 categories in Pascal3D+ dataset?

@yd-yin
Hi, Yingda:

Would you mind sharing how you compute the averaged results over the 6 categories of the Pascal3D+ dataset? I trained each category and computed the average following the pascal.yml setting (ss_ratio: 20), but I cannot reproduce the results reported in your paper.
My reproduced results: [screenshot of reproduced per-category results]

Your results in Table 2: [screenshot of Table 2 from the paper]
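For what it's worth, a plain arithmetic mean over the six per-category numbers is one natural reading of "averaged results". A minimal sketch with illustrative values (not the paper's numbers):

```python
# Hypothetical per-category median errors in degrees; the real values come
# from running pascal.yml (ss_ratio: 20) for each of the 6 categories.
per_category_errors = [5.1, 7.3, 9.0, 3.2, 4.0, 8.5]

avg_error = sum(per_category_errors) / len(per_category_errors)
assert abs(avg_error - 37.1 / 6) < 1e-9
```

If the paper instead weights categories by test-set size, a plain mean would not match its numbers — that difference alone could explain a small discrepancy.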
