clovaai / rebias
Official PyTorch implementation of ReBias (Learning De-biased Representations with Biased Representations), ICML 2020
License: MIT License
Dear authors,
When I ran my experiments using your HSIC implementation, I found that the HSIC value could be negative. Is this reasonable? Did you observe this phenomenon in your experiments?
Thanks in advance!
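For context, one common source of negative values is that the *unbiased* HSIC estimator of Song et al. (2012) can dip below zero on finite samples even when the population HSIC is zero. The sketch below (not necessarily the exact estimator used in this repo) demonstrates this on independent data:

```python
import numpy as np

def rbf_kernel(x, sigma=1.0):
    # Pairwise RBF kernel matrix for a 1-D sample.
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic_unbiased(K, L):
    # Unbiased HSIC estimator (Song et al., 2012); requires n >= 4
    # and kernel matrices with zeroed diagonals.
    n = K.shape[0]
    K = K - np.diag(np.diag(K))
    L = L - np.diag(np.diag(L))
    term1 = np.trace(K @ L)
    term2 = K.sum() * L.sum() / ((n - 1) * (n - 2))
    term3 = 2.0 * (K.sum(0) @ L.sum(1)) / (n - 2)
    return (term1 + term2 - term3) / (n * (n - 3))

rng = np.random.default_rng(0)
vals = []
for _ in range(300):
    x = rng.normal(size=32)   # x and y independent => population HSIC is 0
    y = rng.normal(size=32)
    vals.append(hsic_unbiased(rbf_kernel(x), rbf_kernel(y)))
# The estimator is unbiased around 0, so individual draws fall on both sides of zero.
```

Since the estimator averages to the true value of 0 under independence, negative finite-sample values are expected rather than a bug in the estimator itself.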
While executing main_imagenet.py, a Python type error was raised.
Thank you for such great work! While reading the paper and code, I had the following questions.
According to the definition of HSIC, it measures the level of dependence, and HSIC(U, V) = 0 indicates that U and V are independent. A larger HSIC value indicates that U and V are dependent to some extent.
So, to debias the representation of network f using a biased network g, shouldn't we minimize HSIC(f, g)?
Besides, the for loop at line 62 seems redundant, because g_dim will be overwritten by the last iteration, right?
Looking forward to your reply. Thanks!
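As a point of reference (not the authors' answer), the biased HSIC estimator trace(HKH @ HLH) / (n - 1)^2 is near zero for independent representations and grows with dependence, which is what makes it usable as a minimization target for f. A minimal sketch with assumed linear kernels and synthetic features:

```python
import numpy as np

def hsic_biased(X, Y):
    # Biased HSIC estimator with linear kernels:
    # trace(HKH @ HLH) / (n - 1)^2, where H is the centering matrix.
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    K, L = X @ X.T, Y @ Y.T
    return np.trace(H @ K @ H @ (H @ L @ H)) / (n - 1) ** 2

rng = np.random.default_rng(0)
f_feats = rng.normal(size=(200, 8))                  # stand-in for f's features
g_indep = rng.normal(size=(200, 8))                  # independent of f_feats
g_dep = f_feats + 0.1 * rng.normal(size=(200, 8))    # strongly dependent on f_feats

h_indep = hsic_biased(f_feats, g_indep)
h_dep = hsic_biased(f_feats, g_dep)
# h_dep is much larger than h_indep: driving this quantity down w.r.t. f
# pushes f's representation toward independence from g's (under these kernels).
```

This only illustrates the penalty's behavior; how the full objective trades it off between f and g is a question for the paper itself.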
Description:
I encountered discrepancies while attempting to reproduce the results presented in your paper titled "Learning De-biased Representations with Biased Representations." Despite following the provided methodology and using the suggested code and parameters on GitHub, the obtained results differ from what was reported in the paper.
Expected Results:
RUBi 0.99:
ReBias 0.999:
Actual Results:
RUBi 0.99:
ReBias 0.999:
Steps to Reproduce:
```shell
git clone https://github.com/clovaai/rebias.git
cd rebias
pip install -r requirements.txt
python main_biased_mnist.py --root data/ --train_correlation 0.99 --outer_criterion RUBi --g_lambda_inner 0
python main_biased_mnist.py --root data/ --train_correlation 0.999
```
I would greatly appreciate your assistance in understanding the reasons behind these inconsistencies.
Thank you for your time, and I look forward to your response.
Sincerely,
Alexandre DEVILLERS
Hello, I'm trying to run your LearnedMixin implementation on MNIST but I'm getting the following error:
```
[2021-07-03 06:58:30] start training
Traceback (most recent call last):
  File "main_biased_mnist.py", line 138, in <module>
    fire.Fire(main)
  File "/home/barbano/.pyenv/versions/rebias/lib/python3.7/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/home/barbano/.pyenv/versions/rebias/lib/python3.7/site-packages/fire/core.py", line 471, in _Fire
    target=component.__name__)
  File "/home/barbano/.pyenv/versions/rebias/lib/python3.7/site-packages/fire/core.py", line 681, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "main_biased_mnist.py", line 134, in main
    save_dir=save_dir)
  File "/home/barbano/rebias/trainer.py", line 390, in train
    self._train_epoch(tr_loader, cur_epoch)
  File "/home/barbano/rebias/trainer.py", line 362, in _train_epoch
    self._update_f(x, labels, loss_dict=loss_dict, prefix='train__')
  File "/home/barbano/rebias/trainer.py", line 340, in _update_f
    _f_loss_indep = self.outer_criterion(f_feats, _g_feats, labels=labels, f_pred=preds)
  File "/home/barbano/.pyenv/versions/rebias/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/barbano/rebias/criterions/comparison_methods.py", line 92, in forward
    loss = F.cross_entropy(f_pred+g_pred, labels)
RuntimeError: The size of tensor a (10) must match the size of tensor b (128) at non-singleton dimension 1
```
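For what it's worth, the error itself is a shape constraint of `F.cross_entropy`: for a logit-sum fusion like `f_pred + g_pred`, both operands must be `[batch, num_classes]` logits, so passing a `[batch, feat_dim]` feature tensor as g's output triggers exactly this broadcast failure. A minimal reproduction with hypothetical shapes (not the repo's actual tensors):

```python
import torch
import torch.nn.functional as F

B, C, D = 8, 10, 128
f_pred = torch.randn(B, C)     # classifier logits: [batch, num_classes]
g_pred = torch.randn(B, C)     # logits from g: what a logit-sum fusion expects
g_feats = torch.randn(B, D)    # raw features from g: the wrong operand for the sum
labels = torch.randint(0, C, (B,))

loss = F.cross_entropy(f_pred + g_pred, labels)    # OK: [8, 10] + [8, 10]

err = None
try:
    F.cross_entropy(f_pred + g_feats, labels)      # [8, 10] + [8, 128] cannot broadcast
except RuntimeError as e:
    err = str(e)                                   # same "size of tensor a ... b" message
```

This only isolates the shape mismatch; whether the fix belongs in the criterion or in what the trainer passes as g's output is for the maintainers to say.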
Dear authors,
Hi, thanks for your great work; this ICML paper gave me many insights.
I am now trying to follow your work, and I want to obtain the clusters of texture features for the ImageNet training set.
Could you please provide the texture-feature clustering code for reference?
Thanks a lot for your kind help.
Best,
Tan
Thanks to the authors for releasing the code.
I changed "StepLR" to "CosineAnnealingLR" in the color-MNIST experiments (vanilla approach). The accuracy is much better than that reported in the paper. Could the authors explain how the hyperparameters were selected (e.g., StepLR vs. CosineAnnealingLR)?
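For reference, the scheduler swap described above can be sketched as follows (a hypothetical toy optimizer setup, not the repo's actual training loop or hyperparameters):

```python
import torch

model = torch.nn.Linear(10, 10)                      # stand-in model
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# The reported setup uses StepLR (piecewise-constant decay); the variant
# tried here instead anneals the learning rate along a cosine curve
# over the full training run.
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=80)

lrs = []
for epoch in range(80):
    opt.step()                                       # one training epoch would go here
    sched.step()
    lrs.append(opt.param_groups[0]["lr"])
# lrs decays smoothly from ~0.1 down toward 0 by the final epoch,
# unlike StepLR's abrupt drops at fixed milestones.
```

The smoother late-training decay is one plausible reason a cosine schedule can behave differently here, but that is speculation; only the authors can say how the original schedule was chosen.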
Dear Authors,
Thanks for sharing your code. I would like to reproduce the 9-class ImageNet clustering: I saw your .pth checkpoints, but there is no reference to the images, and I am not sure whether I am using the very same split as yours.
Could you please share a .csv file with filenames and associated cluster labels?
Thanks in advance
Hello, I wonder whether the code 'KH = K - K.mean(0, keepdim=True)' is right? I think it should be 'KH = K - K.mean(1, keepdim=True)'.
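Not an answer from the authors, but a quick numerical check suggests the two centerings give the same HSIC value when it is computed as a trace: with the centering matrix H = I - (1/n)11^T, `K - K.mean(0)` equals H @ K while `K - K.mean(1)` equals K @ H, and because H is symmetric and idempotent and the trace is cyclic, trace(HK @ HL) = trace(KH @ LH) = trace(HKH @ HLH). A sketch in NumPy (note `keepdims`, vs. PyTorch's `keepdim`):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
X, Y = rng.normal(size=(n, 4)), rng.normal(size=(n, 4))
K, L = X @ X.T, Y @ Y.T                     # symmetric linear kernel matrices
H = np.eye(n) - np.ones((n, n)) / n         # centering matrix: symmetric, H @ H == H

K_col = K - K.mean(0, keepdims=True)        # equals H @ K  (subtracts column means)
K_row = K - K.mean(1, keepdims=True)        # equals K @ H  (subtracts row means)
L_col = L - L.mean(0, keepdims=True)
L_row = L - L.mean(1, keepdims=True)

t_col = np.trace(K_col @ L_col)             # trace(HK @ HL)
t_row = np.trace(K_row @ L_row)             # trace(KH @ LH)
t_full = np.trace(H @ K @ H @ (H @ L @ H))  # fully centered reference
# All three traces agree, so either one-sided centering yields the same HSIC value.
```

So inside the trace the choice of axis should not change the loss value, though the two expressions do differ as matrices.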