
Comments (5)

Tord-Zhang commented on May 22, 2024

@ranftlr When I ran a zero-shot test on my own dataset, I found that the results produced by MidasNet are sometimes over-smooth. In my view, while the trimmed loss can exclude points with imperfect ground-truth depth, it may also lead to over-smooth boundaries in the predicted depth maps, since depth near semantic boundaries is harder to learn; in other words, points on semantic boundaries may be excluded by the trimming. So I tried fine-tuning MidasNet with the BerHu loss. The outcome is consistent with your view: while depth at semantic boundaries becomes much sharper, it also produces some outrageously bad cases.
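For concreteness, here is a minimal NumPy sketch of the trimming idea being discussed: drop the largest residuals before averaging, so ground-truth outliers don't dominate the loss. The function name and `trim_ratio` parameter are my own; this single-image form also omits the scale-and-shift alignment MiDaS applies before computing residuals.

```python
import numpy as np

def trimmed_mae(pred, gt, trim_ratio=0.2):
    """Trimmed MAE sketch: average only the smallest (1 - trim_ratio)
    fraction of per-pixel absolute residuals, discarding the largest
    ones, which often stem from imperfect ground truth."""
    residuals = np.sort(np.abs(pred - gt).ravel())
    keep = int(np.ceil(residuals.size * (1.0 - trim_ratio)))
    return residuals[:keep].mean()

# A single large outlier residual falls in the trimmed 20% and is ignored.
pred = np.array([1.0, 1.0, 1.0, 1.0, 10.0])
gt = np.ones(5)
print(trimmed_mae(pred, gt))  # -> 0.0
```

This also illustrates the trade-off raised above: pixels near depth discontinuities tend to have large residuals too, so the trimming can exclude exactly the boundary pixels that are hardest to learn.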

from midas.

ranftlr commented on May 22, 2024

No, we tried neither normal nor BerHu losses. Other than the losses reported in the paper, we experimented with a Cauchy loss for a bit. We found it tricky to set the hyper-parameters correctly, however, and weren't able to achieve better results than the trimming.


Tord-Zhang commented on May 22, 2024

@ranftlr Thanks for your quick response! Could you share your insight into why you chose L1 rather than BerHu? Some other works conclude that a normal loss and the BerHu loss effectively improve performance. Thanks a lot.


ranftlr commented on May 22, 2024

Conceptually, BerHu does the opposite of what we try to achieve with the trimming. It behaves like L1 for small errors but amplifies large errors. According to our observations, these large errors are mostly due to imperfections in the ground truth and thus do not provide a relevant signal for learning. I'm also not aware of work that shows a significant advantage of BerHu over L1. Can you point me to it?
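To make the contrast concrete, here is a hedged NumPy sketch of the reverse Huber (BerHu) loss: L1 below a threshold `c`, quadratic above it, so large residuals are amplified rather than trimmed. The default of setting `c` to a fraction of the largest residual is one common convention, not something from this thread.

```python
import numpy as np

def berhu(pred, gt, c=None):
    """Reverse Huber (BerHu) sketch: L1 for |e| <= c, quadratic
    penalty (e^2 + c^2) / (2c) for |e| > c (continuous at e = c),
    so large residuals dominate the loss instead of being discarded."""
    e = np.abs(pred - gt)
    if c is None:
        c = max(0.2 * e.max(), 1e-8)  # common heuristic; guards c = 0
    quadratic = (e**2 + c**2) / (2.0 * c)
    return np.where(e <= c, e, quadratic).mean()
```

With a residual of 1.0 and `c = 0.2`, the per-pixel penalty is 2.6 instead of 1.0, which shows why ground-truth outliers pull the optimization in the opposite direction from the trimmed loss.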

As for the normal loss: the gradient matching term implicitly constrains the normals and is closely related to standard normal losses. A common formulation of a normal loss measures the cosine distance between normals, which are themselves a function of the gradient (see equation 5 in this paper, for example). The resulting loss is a function of the gradient of the estimate and the gradient of the ground truth, and so is the gradient matching term. The difference is that the gradient matching term also considers the magnitude of the gradient, not only its direction. We didn't try the normal loss, so it might or might not improve results. However, even if it does, I don't believe the difference would be significant.
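The gradient matching term mentioned above can be sketched as follows: an L1 penalty on the finite-difference gradients of the residual. This is a simplified single-scale version under my own function name; the MiDaS paper applies it at multiple scales and on scale-and-shift-aligned disparity.

```python
import numpy as np

def gradient_matching(pred, gt):
    """Single-scale sketch of a gradient matching term: L1 penalty on
    the x/y finite-difference gradients of the residual r = pred - gt,
    penalizing both the direction and the magnitude of edge mismatches."""
    r = pred - gt
    grad_x = np.abs(r[:, 1:] - r[:, :-1])  # horizontal differences
    grad_y = np.abs(r[1:, :] - r[:-1, :])  # vertical differences
    return grad_x.mean() + grad_y.mean()
```

Note that a constant offset between prediction and ground truth incurs no penalty here, since the term only looks at gradients of the residual; that is what pushes predicted depth edges to align with ground-truth edges.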


ranftlr commented on May 22, 2024

Thanks for sharing this information. Yes, we do observe over-smooth results in some cases. The gradient matching term counters this to some degree, but it cannot help where the ground truth is already missing values.

