Comments (5)
@ranftlr When I attempted a zero-shot test on my own dataset, I found that the depth maps produced by MidasNet are sometimes over-smooth. My understanding is that while the trimmed loss can exclude points with imperfect ground-truth depth, it may also lead to over-smooth boundaries in the predicted depth maps: depth near semantic boundaries is harder to learn, so those points are the ones most likely to be excluded by the trimming. So I tried fine-tuning MidasNet with the BerHu loss. The outcome is consistent with your opinion: while depth at semantic boundaries becomes much sharper, the model also produces some outrageously bad cases.
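The trimming being discussed can be sketched in a few lines. This is a minimal NumPy illustration of the idea only, not the MiDaS implementation (the paper applies the trim per image to scale-and-shift-aligned disparities, which is omitted here); the function name and the 20% default are assumptions for illustration:

```python
import numpy as np

def trimmed_l1_loss(pred, target, trim=0.2):
    """Trimmed L1: discard the largest `trim` fraction of per-pixel
    absolute residuals before averaging, so gross ground-truth errors
    (e.g. near object boundaries) do not dominate the gradient."""
    res = np.abs(pred - target).ravel()
    res.sort()
    keep = int(np.ceil(res.size * (1.0 - trim)))
    return res[:keep].mean()
```

With residuals `[1, 1, 1, 1, 10]` the plain L1 mean is 2.8, but the trimmed version drops the outlier and returns 1.0, which is exactly the behavior that can also discard hard-but-valid boundary pixels.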
from midas.
No, we tried neither a normal loss nor BerHu. Other than the losses reported in the paper, we experimented with a Cauchy loss for a bit. We found it tricky to set its hyper-parameters correctly, however, and weren't able to achieve better results than with the trimming.
@ranftlr Thanks for your quick response! Could you share your insight into why you chose L1 rather than BerHu? Some other works conclude that a normal loss and the BerHu loss effectively improve performance. Thanks a lot.
Conceptually, BerHu does the opposite of what we try to achieve with the trimming. It behaves like L1 for small errors but amplifies large errors. According to our observations, these large errors are mostly due to imperfections in the ground truth and thus do not provide a relevant signal for learning. I'm also not aware of work that shows a significant advantage of BerHu over L1. Can you point me to it?
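To make the contrast concrete: BerHu (reverse Huber) is L1 below a threshold c and quadratic above it, so it weights large residuals more rather than less. A hedged NumPy sketch, not code from this repo; the choice c = 0.2 · max residual is a common convention in the literature, assumed here:

```python
import numpy as np

def berhu_loss(pred, target, c=None):
    """Reverse-Huber (BerHu): L1 for residuals <= c, quadratic above.
    The quadratic branch (r^2 + c^2) / (2c) equals c at r = c, so the
    loss is continuous at the threshold."""
    r = np.abs(pred - target)
    if c is None:
        c = 0.2 * r.max()  # common heuristic; assumes a nonzero residual
    quad = (r ** 2 + c ** 2) / (2.0 * c)
    return np.where(r <= c, r, quad).mean()
```

For residuals `[0.1, 1.0]` with c = 0.5, the small residual contributes 0.1 (L1) while the large one contributes (1.0 + 0.25) / 1.0 = 1.25, i.e. the outlier is amplified, which is exactly the opposite of trimming it away.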
As for the normal loss: the gradient matching term implicitly constrains the normals and is closely related to standard normal losses. A common formulation of a normal loss measures the cosine distance between the normals, which are themselves a function of the gradient (see equation 5 in this paper for example). The resulting loss is a function of the gradients of the estimate and of the ground truth, and so is the gradient matching term. The difference is that the gradient matching term also considers the magnitude of the gradient, and not only its direction. We didn't try the normal loss, so it might or might not improve results. However, even if it does, I don't believe the difference would be significant.
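The distinction described above can be sketched side by side. This is a single-scale NumPy illustration under simplifying assumptions (the paper's gradient matching term is multi-scale and operates on residuals in aligned disparity space; both helper names are hypothetical):

```python
import numpy as np

def grad(d):
    """Forward-difference spatial gradients, cropped to a common shape."""
    gx = np.diff(d, axis=1)[:-1, :]
    gy = np.diff(d, axis=0)[:, :-1]
    return gx, gy

def gradient_matching(pred, gt):
    """L1 on the gradient of the residual: sensitive to both the
    direction and the magnitude of the surface gradient."""
    pgx, pgy = grad(pred)
    tgx, tgy = grad(gt)
    return (np.abs(pgx - tgx) + np.abs(pgy - tgy)).mean()

def normal_cosine(pred, gt):
    """Cosine distance between normals n ~ (-gx, -gy, 1): because the
    normals are unit-normalized, this depends only on the gradient
    direction, not its magnitude."""
    def normals(d):
        gx, gy = grad(d)
        n = np.stack([-gx, -gy, np.ones_like(gx)], axis=-1)
        return n / np.linalg.norm(n, axis=-1, keepdims=True)
    n_pred, n_gt = normals(pred), normals(gt)
    return (1.0 - (n_pred * n_gt).sum(axis=-1)).mean()
```

Both losses vanish when the maps agree; the key difference is that scaling the prediction's gradients changes `gradient_matching` but only rotates the normals in `normal_cosine`, illustrating the magnitude-vs-direction point above.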
Thanks for sharing this information. Yes, we do observe issues with over-smooth results in some cases. The gradient matching term counters this to some degree, but it cannot help where the ground truth is already missing values.