
ovanet's People

Contributors

ksaito-ut


ovanet's Issues

evaluation metric questions

I want to ask whether the calculation of 'known_acc' on line 169 of eval.py is correct.

In my understanding, 'per_class_acc' in the 'test' function has length (n_share + 1). And since 'open_class' is defined as "open_class = int(out_t.size(1))" on line 108, it equals "num_class = n_share + n_source_private".

However, the known accuracy is calculated by "known_acc = per_class_acc[:open_class - 1].mean()" on line 169.

Thus, when n_source_private > 0, I think the slice covers all dimensions of per_class_acc, including the unknown class.

In my opinion, line 169 should be "known_acc = per_class_acc[:-1].mean()".
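The slice behavior can be checked with a minimal standalone sketch (plain Python lists shown here for brevity; NumPy arrays clip out-of-range slice bounds the same way, and the class counts follow the OfficeHome 10/5/50 split discussed in the other issues):

```python
n_share, n_source_private = 10, 5          # e.g. the OfficeHome 10/5/50 UniDA split
open_class = n_share + n_source_private    # 15, matching "open_class = int(out_t.size(1))"

# per_class_acc has one entry per shared class plus one trailing "unknown" entry
per_class_acc = [0.5] * (n_share + 1)      # dummy accuracies, length 11

# Current code: [:14] on an 11-element sequence is silently clipped to the
# whole sequence, so the "unknown" entry leaks into the known-class mean.
buggy_slice = per_class_acc[:open_class - 1]
assert len(buggy_slice) == n_share + 1     # covers everything, unknown included

# Proposed fix: drop only the trailing "unknown" entry.
fixed_slice = per_class_acc[:-1]
assert len(fixed_slice) == n_share         # exactly the known classes
```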

Please clarify my concerns.
Thank you.

A bug in HScore computation

After running the scripts, I found a bug in the HScore computation.

In detail, open_class is defined as an extra class with a predefined classifier, so it should be 15 under the UniDA setting of OfficeHome (10/5/50). But known_acc is produced with "known_acc = per_class_acc[:open_class - 1].mean()".
The problem is that when source-private classes exist in the UniDA setting, this slice runs past the end of per_class_acc, so known_acc equals 'acc per class' (the mean over all classes, including unknown). HScore is therefore computed with the wrong known_acc.

I ran an experiment and report the result here. It was performed on OfficeHome Art → Clipart (10/5/50); I did not change any parameters.

['step 10000', 'closed perclass', [0.8, 0.16071428571428573, 0.296875, 0.15306122448979592, 0.898989898989899, 0.40404040404040403, 0.4931506849315068, 0.9130434782608695, 0.2948717948717949, 0.3333333333333333, 0.8319482917820868], 'acc per class 0.5072753087649069', 'acc 76.02586421288237', 'acc close all 65.41450777202073', 'h score 0.6302559578815994', 'roc 0.7201107714290726', 'roc ent 0.7298303822459522', 'roc softmax 0.721641334041403', 'best hscore 0.6167971843264333', 'best thr 0.8421052631578947']

The correct HScore is 0.6046, but the log reports "h score 0.6302559578815994".
If I instead use 'acc per class' as the accuracy of the common label set, I get exactly the HScore reported in the log.
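Both numbers can be reproduced from the per-class accuracies in the log above with a quick standalone check (plain Python, not part of eval.py):

```python
# Closed-set per-class accuracies from the log: 10 known classes + "unknown"
per_class_acc = [0.8, 0.16071428571428573, 0.296875, 0.15306122448979592,
                 0.898989898989899, 0.40404040404040403, 0.4931506849315068,
                 0.9130434782608695, 0.2948717948717949, 0.3333333333333333,
                 0.8319482917820868]

def h_score(known_acc, unknown_acc):
    # Harmonic mean of known-class accuracy and unknown-class accuracy
    return 2 * known_acc * unknown_acc / (known_acc + unknown_acc)

unknown = per_class_acc[-1]

# Buggy known_acc: the clipped slice covers all 11 entries, i.e. 'acc per class'
buggy_known = sum(per_class_acc) / len(per_class_acc)
print(round(h_score(buggy_known, unknown), 4))   # 0.6303, the value in the log

# Corrected known_acc: mean over the 10 known classes only
known = sum(per_class_acc[:-1]) / (len(per_class_acc) - 1)
print(round(h_score(known, unknown), 4))         # 0.6046
```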

I'm looking forward to the final results of VisDA2021. Thank you for your contribution. 😄

About the metrics

Thanks for your great work!
I have questions about the metrics 'Acc close', 'AUROC', and 'UNK' in the paper. I think the first two correspond to 'acc close all' and 'roc' in the output of the test function, but I cannot find which output corresponds to 'UNK'.

How to report the results

Hi, thank you very much for open-sourcing such wonderful work!

I have some questions about how to report the results.

(1) Should I choose the result of the last step, or the best result across all steps?
(2) In addition, did you repeat the experiments several times and report the mean results?

I just want to know some details so I can do further research based on your wonderful work.

Results of Fig. 5 and Fig. 6

I am trying to reproduce the results of Fig. 5 and Fig. 6.

When I add my data-loading code to OVANet, the results are more or less the same as in the figures. However, when I transfer the code to DANCE, the results are very different from those shown in the figures. I checked the data-loading code and it seems to have no problem, and the H-score calculation is also transferred from OVANet:

known_acc = per_class_acc[:len(class_list)-1].mean()  # mean accuracy over the known classes
unknown = per_class_acc[-1]                           # accuracy on the "unknown" class
h_score = 2 * known_acc * unknown / (known_acc + unknown)  # harmonic mean of the two

Is there anything else that I haven't changed?
