
Comments (5)

tuanhungvu commented on August 15, 2024

Hello @HanqingXu ,
Sorry for the late response. In some sense, the two aforementioned losses act as regularization terms that bring an "adaptation effect" to the final model. Their weighting factors are indeed sensitive to the particular adaptation setting; normally we cross-validate to choose proper values. Regarding the direct entropy loss concern, it is possible that the model gets "corrupted" as you described, especially when using too large an entropy weight. Pre-training only on the source domain might mitigate such adverse effects. However, since we use unsupervised losses on the target samples, there is no guarantee of avoiding the corrupted scenario. Again, there are no magic hyper-parameters that work for all cases; you will need to cross-validate to find a good set.
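For context, the direct entropy term penalizes high per-pixel prediction entropy on target images and is added to the supervised source loss with a weighting factor. Here is a minimal NumPy sketch of that idea (names, shapes, and the default weight are illustrative, not the repo's actual code):

```python
import numpy as np

def entropy_loss(probs, eps=1e-12):
    """Mean per-pixel Shannon entropy, normalized by log(C) so it lies in [0, 1].

    probs: softmax output of shape (B, C, H, W), values in (0, 1].
    """
    num_classes = probs.shape[1]
    # Per-pixel entropy: sum over the class axis.
    ent = -np.sum(probs * np.log(probs + eps), axis=1)  # (B, H, W)
    return ent.mean() / np.log(num_classes)

def total_loss(seg_loss_source, probs_target, lambda_ent=0.001):
    """Supervised loss on source + weighted entropy on target.

    lambda_ent is the sensitive weighting factor discussed above;
    it must be cross-validated for each adaptation setting.
    """
    return seg_loss_source + lambda_ent * entropy_loss(probs_target)
```

With too large a `lambda_ent`, the entropy term dominates and pushes target predictions toward confident but possibly wrong labels, which is the "corrupted model" failure mode described above.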
T-H

from advent.

HanqingXu commented on August 15, 2024

Thank you for the reply. I tried the entropy loss term on my dataset. For the direct loss, even after tuning the weighting factor, it still hurts the performance. As for the adversarial loss, it doesn't even converge. I saw several methods in your paper and tried weighting the unsupervised loss and training on specific ranges; neither worked. Any suggestions would be appreciated.


tuanhungvu commented on August 15, 2024

Can you specify which dataset and which task you are working on?


HanqingXu commented on August 15, 2024

I tried the KITTI dataset, which has only 200 images (intentional, since I wanted to fully exploit the advantage of domain adaptation), and my own private lane-marking dataset (8 classes) collected in a parking lot, which also has only around 200 images.


tuanhungvu commented on August 15, 2024

Ok, I see. It's actually difficult to give you suggestions on a set-up that we've never seen. Maybe some guidelines for you:

  • make sure that your baseline performance on the target set (no UDA) is "OKish",
  • pretrain the model without UDA only on source,
  • do more data augmentation, as your datasets are quite small.
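On the last point, with only ~200 images even simple geometric augmentations help; a minimal NumPy sketch of a joint image/mask augmentation (a hypothetical helper, not from the ADVENT repo) could look like:

```python
import numpy as np

def augment(image, mask, crop=256, rng=None):
    """Random horizontal flip + random crop, applied jointly so the
    segmentation mask stays aligned with the image.

    image: (H, W, 3) array; mask: (H, W) label array.
    """
    rng = rng or np.random.default_rng()
    if rng.random() < 0.5:
        # Flip both along the width axis together.
        image, mask = image[:, ::-1], mask[:, ::-1]
    h, w = mask.shape
    top = rng.integers(0, h - crop + 1)
    left = rng.integers(0, w - crop + 1)
    return (image[top:top + crop, left:left + crop],
            mask[top:top + crop, left:left + crop])
```

Random crops effectively multiply the number of distinct training views per image, which matters most exactly when the dataset is this small.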
T-H

