Comments (7)
Hi,
Thanks for making the code public.
I tried to reproduce the numbers for the open-set setting of Office-Home, and the numbers I get are considerably lower than those reported in the paper. I have already tried several torch and torchvision environments, but every one of them gives lower numbers.
Would it be possible for you to upload the open-set source-only model checkpoints (source_F.pt, source_B.pt and source_C.pt)? I then hope to reproduce the numbers for SHOT-IM and SHOT starting from your source-only checkpoints. It would be very helpful.
Thanks in advance.
Hi Roy, so the source-only models you trained performed worse than those in our paper? If so, I will send these models to your email.
Best
from shot.
Hi @tim-learn , yes, the source models performed worse than in your paper. If you could please send them to this email [email protected], that would be great. Thank you.
Hi @tim-learn , a gentle reminder to please send me the checkpoints to the above email address as we discussed.
Hi Roy, sorry for the delay. I have re-trained SHOT today, and the average accuracy for ODA on Office-Home is 73.0%. The associated models are uploaded at https://drive.google.com/drive/folders/14GIyQ-Dj7Mr8_FJdPl4EBhFMgxQ2LXnq. You can try again and tell me whether it works for you.
Best
Hi @tim-learn , thank you for sending the checkpoints.
I used your source-trained checkpoints and simply ran them to compute the source-only numbers for ODA. The numbers I get are very poor (almost at random-guess level). I am using the default run command for ODA, attached below.
I am not sure why the numbers are so bad. When I trained my own model, the numbers were better (still not close to what you report, but decent). I wonder whether there is some difference in the dataset list .txt files or in the installed packages.
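As a sanity check independent of the repo, here is a minimal sketch of per-class average accuracy, one common way ODA results are scored (the function name is mine, and it assumes all unknown classes have already been mapped to a single "unknown" label; the repo's exact metric may differ):

```python
from collections import defaultdict

def per_class_avg_acc(y_true, y_pred):
    """Mean of per-class accuracies over the classes present in y_true.
    With unknowns collapsed into one label, chance level is roughly
    1 / (num_known_classes + 1), which helps spot random-guess outputs."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += int(t == p)
    return sum(correct[c] / total[c] for c in total) / len(total)

# 3 classes: class 0 fully correct, class 1 half correct, class 2 wrong
acc = per_class_avg_acc([0, 0, 1, 1, 2], [0, 0, 1, 2, 0])  # 0.5
```

If this number sits near chance level on the checkpoint's predictions, the problem is almost certainly upstream (labels or file lists), not the metric.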
Would it be possible to share the dataset_list.txt for each domain of Office-Home that you used for the experiments, along with the list of packages (and their versions)? It would be very helpful. Thank you again.
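To make comparing environments easier, here is a small stdlib-only snippet (helper name and package list are my own choices) that reports installed versions:

```python
from importlib import metadata

def report_versions(packages=("torch", "torchvision", "numpy")):
    """Return {package: version or 'not installed'} for the given
    distribution names, so two environments can be diffed line by line."""
    versions = {}
    for name in packages:
        try:
            versions[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            versions[name] = "not installed"
    return versions

if __name__ == "__main__":
    for name, ver in report_versions().items():
        print(f"{name}=={ver}")
```

Pasting the output of this from both machines into the issue would rule package versions in or out quickly.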
How about the performance for the other settings in this code, like closed-set UDA or partial-set UDA? If those results are okay, then both the library package versions and the data list files should be fine.
Hi @tim-learn , the numbers now match your paper when I re-run your code. The problem was actually in the file list: we were using different file lists for the open-set setting, and that is why the numbers differed. Thanks for your help. Closing the issue.
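For anyone hitting the same mismatch: a quick way to compare two image-list files is to diff them by path and label. This is a hypothetical helper (names are mine), assuming the SHOT-style format where each line is `<image path> <label>`:

```python
def load_list(path):
    """Read an image list where each line is '<image path> <label>'.
    rsplit tolerates spaces inside the path (e.g. 'Real World' folders)."""
    entries = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            img, label = line.rsplit(maxsplit=1)
            entries[img] = label
    return entries

def diff_lists(path_a, path_b):
    """Return (paths only in A, paths only in B, shared paths whose labels differ)."""
    a, b = load_list(path_a), load_list(path_b)
    only_a = sorted(set(a) - set(b))
    only_b = sorted(set(b) - set(a))
    relabeled = sorted(k for k in set(a) & set(b) if a[k] != b[k])
    return only_a, only_b, relabeled
```

Any non-empty result explains a reproduction gap before touching the model code, since open-set lists also encode which classes count as unknown.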