
Counterfactual VQA (CF-VQA)

This repository is the PyTorch implementation of our CVPR 2021 paper "Counterfactual VQA: A Cause-Effect Look at Language Bias". The code is implemented as a fork of RUBi.

CF-VQA is proposed to capture and mitigate language bias in VQA from the view of causality. CF-VQA (1) captures the language bias as the direct causal effect of questions on answers, and (2) reduces the language bias by subtracting the direct language effect from the total causal effect.
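To make the subtraction concrete, here is a minimal, hypothetical PyTorch-style sketch of the debiased inference step; the function and variable names are ours for illustration only, and the actual fusion in this repository is learned and more involved (see the network files discussed in the issues below):

import torch

def debiased_inference(logits_total, logits_language_only):
    # Hypothetical sketch of CF-VQA-style inference: subtract the direct
    # language effect (question-only logits) from the total effect
    # (full-model logits) and predict from the remaining scores.
    te = logits_total             # total effect of (question, image) on the answer
    nde = logits_language_only    # direct effect of the question alone, i.e. the language bias
    tie = te - nde                # total indirect effect used for the debiased prediction
    return tie.argmax(dim=-1)

# toy example: 2 questions, 4 candidate answers
te = torch.tensor([[2.0, 0.5, 0.1, 0.0],
                   [0.3, 1.2, 0.2, 0.1]])
nde = torch.tensor([[1.8, 0.1, 0.0, 0.0],   # strong language prior toward answer 0
                    [0.0, 0.1, 0.0, 0.0]])
print(debiased_inference(te, nde))          # the biased prior toward answer 0 is suppressed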

If you find this paper helpful for your research, please consider citing it in your publications.

@inproceedings{niu2020counterfactual,
  title={Counterfactual VQA: A Cause-Effect Look at Language Bias},
  author={Niu, Yulei and Tang, Kaihua and Zhang, Hanwang and Lu, Zhiwu and Hua, Xian-Sheng and Wen, Ji-Rong},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2021}
}


Installation

1. Setup and dependencies

Install the Anaconda or Miniconda distribution based on Python 3+ from their download site.

conda create --name cfvqa python=3.7
source activate cfvqa
pip install -r requirements.txt

2. Download datasets

Download annotations, images and features for VQA experiments:

bash cfvqa/datasets/scripts/download_vqa2.sh
bash cfvqa/datasets/scripts/download_vqacp2.sh

Quick start

Train a model

The bootstrap/run.py file loads the options contained in a yaml file, creates the corresponding experiment directory, and starts the training procedure. For instance, you can train our best model on VQA-CP v2 (CFVQA+SUM+SMRL) by running:

python -m bootstrap.run -o cfvqa/options/vqacp2/smrl_cfvqa_sum.yaml

Then, several files are going to be created in logs/vqacp2/smrl_cfvqa_sum/:

  • [options.yaml] (copy of options)
  • [logs.txt] (history of printed output)
  • [logs.json] (batch and epoch statistics)
  • [_vq_val_oe.json] (statistics for the language-prior based strategy, e.g., RUBi)
  • [_cfvqa_val_oe.json] (statistics for CF-VQA)
  • [_q_val_oe.json] (statistics for language-only branch)
  • [_v_val_oe.json] (statistics for vision-only branch)
  • [_all_val_oe.json] (statistics for the ensembled branch)
  • ckpt_last_engine.pth.tar (checkpoints of last epoch)
  • ckpt_last_model.pth.tar
  • ckpt_last_optimizer.pth.tar

Many options are available in the options directory. CFVQA represents the complete causal graph while cfvqas represents the simplified causal graph.
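For example, assuming the same directory layout as above, the simplified variant referenced further down this page (smrl_cfvqasimple_sum.yaml) would be trained with:

python -m bootstrap.run -o cfvqa/options/vqacp2/smrl_cfvqasimple_sum.yaml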

Evaluate a model

There is no test set for VQA-CP v2, our main dataset, so the evaluation is done on the validation set. For a model trained on VQA v2, you can evaluate on the test set. In this example, bootstrap/run.py loads the options from your experiment directory, resumes the best checkpoint on the validation set, and starts an evaluation on the testing set instead of the validation set while skipping the training set (train_split is empty). Thanks to --misc.logs_name, the logs will be written to new logs_test.txt and logs_test.json files instead of being appended to logs.txt and logs.json.

python -m bootstrap.run \
-o ./logs/vqacp2/smrl_cfvqa_sum/options.yaml \
--exp.resume last \
--dataset.train_split '' \
--dataset.eval_split val \
--misc.logs_name test 

Useful commands

Use a specific GPU

For a specific experiment:

CUDA_VISIBLE_DEVICES=0 python -m bootstrap.run -o cfvqa/options/vqacp2/smrl_cfvqa_sum.yaml

For the current terminal session:

export CUDA_VISIBLE_DEVICES=0

Overwrite an option

The bootstrap.pytorch framework makes it easy to overwrite a hyperparameter. In this example, we run an experiment with a non-default learning rate, and therefore also overwrite the experiment directory path:

python -m bootstrap.run -o cfvqa/options/vqacp2/smrl_cfvqa_sum.yaml \
--optimizer.lr 0.0003 \
--exp.dir logs/vqacp2/smrl_cfvqa_sum_lr,0.0003

Resume training

If a problem occurs, it is easy to resume the last epoch by specifying the options file from the experiment directory while overwriting the exp.resume option (default is None):

python -m bootstrap.run -o logs/vqacp2/smrl_cfvqa_sum/options.yaml \
--exp.resume last

Acknowledgment

Special thanks to the authors of RUBi, BLOCK, and bootstrap.pytorch, and the datasets used in this research project.


cfvqa's Issues

The weight in the loss computation

What should the weights of the different loss terms be? I've found that training degrades when the KL-divergence loss weight is not 1, and the larger the KL-divergence weight, the worse the results.
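For context, a weighted multi-branch loss of the kind discussed here often looks like the hypothetical sketch below; the function and variable names are placeholders and do not necessarily match how this repository composes its losses.

import torch
import torch.nn.functional as F

def total_loss(logits_all, logits_q, log_probs_nde, probs_te, answers, w_kl=1.0):
    # Hypothetical sketch of a weighted multi-part VQA training loss.
    # w_kl is the KL-divergence weight the question above refers to.
    loss_cls = F.cross_entropy(logits_all, answers)   # main answer classification loss
    loss_q = F.cross_entropy(logits_q, answers)       # language-only branch loss
    loss_kl = F.kl_div(log_probs_nde, probs_te, reduction="batchmean")  # KL term
    return loss_cls + loss_q + w_kl * loss_kl         # the issue reports that w_kl != 1 hurts

# toy usage with random inputs: 8 questions, 10 candidate answers
logits_all = torch.randn(8, 10)
logits_q = torch.randn(8, 10)
probs_te = torch.softmax(torch.randn(8, 10), dim=-1)
log_probs_nde = torch.log_softmax(torch.randn(8, 10), dim=-1)
answers = torch.randint(0, 10, (8,))
print(total_loss(logits_all, logits_q, log_probs_nde, probs_te, answers))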

How to extract image features?

Could you please share the code to extract image features "2018-04-27_bottom-up-attention_fixed_36" or the pre-trained RCNN?

Maybe something wrong in cfvqasimple.py

Hello, thanks for sharing your code!
I found a possible bug in cfvqasimple.py.
Shouldn't out['logits_all'] = z_qkv # for optimization be out['logits_all'] = z_qk # for optimization? Or did I misread it?

when to update

Thank you for your outstanding work. When will the code be updated?

Questions about the core idea

Hi @yuleiniu Thank you for your great work! I have two quick questions:

  1. It seems to me that the core idea is very similar to Tang's unbiased SGG (CVPR'20), in that both works aim to remove harmful co-occurrence bias by subtracting the result obtained with certain inputs blocked out (the image modality / image patches). Am I misunderstanding something here?
  2. The discussion of "good" and "bad" biases: it seems that the proposed method removes the "bad" language bias, but it also appears to remove the "good" one. Beyond the Introduction, I could not find a detailed discussion of, or experimental evidence for, the motivation of removing the bad biases while retaining the good ones. How do the good biases remain? Could you please elaborate on this?

smrl_cfvqa_rubi is TOO slow to train

I can train all the other versions with batch_size = 256, but for smrl_cfvqa_rubi I have to reduce it to 64 to prevent CUDA out-of-memory errors. Even then, training is too slow: it takes a day to train one epoch with three 3090s.

I wonder what makes smrl_cfvqa_rubi so different from the other versions, whether it is normal for it to train this slowly, or whether I did something wrong.

Typo on Command Line

Hi, thanks for sharing your code! In the README, I think boostrap.run under the Use a Specific GPU section should be bootstrap.run.

How to make only c update when minimizing the KL divergence?

Hello Yulei Niu. Thank you very much for your inspiring work; it has been a great inspiration to me. I have a question that I hope you can answer: in your paper, below Equation 17, there is the sentence "Only c is updated when minimizing L_kl", but I don't see how this is implemented in the code; it seems that L_kl is simply added to the overall loss.
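For what it's worth, one common way to realize "only c is updated when minimizing L_kl" is to detach every other input to the counterfactual branch before computing the KL term, so that backpropagation through L_kl reaches only the learnable constant. A hypothetical sketch, not this repository's actual code:

import torch
import torch.nn.functional as F

c = torch.nn.Parameter(torch.zeros(10))              # learnable constant logits, one per answer
logits_te = torch.randn(8, 10, requires_grad=True)   # stand-in for the total-effect logits

# Detach the total-effect logits so that L_kl cannot update the main model,
# leaving c as the only parameter that receives gradients from this term.
p_te = torch.softmax(logits_te.detach(), dim=-1)
log_p_nde = torch.log_softmax(c.unsqueeze(0).expand(8, -1), dim=-1)

loss_kl = F.kl_div(log_p_nde, p_te, reduction="batchmean")
loss_kl.backward()
print(c.grad is not None, logits_te.grad is None)    # True True: only c is updated by L_kl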

Question about evaluation strategy

Hi,
Recently I have been doing my own VQA work. In my case, the convergence speeds of the three categories (Y/N, Num., Other) are not the same, so the best result for each category may appear in three different epochs. I'm a little confused about how to choose the best result for each category.

So, in your work, how do you choose the best result for each category?
Do you choose the highest result for each category across all epochs?
Or do you choose the single epoch with the highest All score and take all category results from that epoch only?
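For reference, the second option describes the common convention of picking a single best epoch by overall score; a rough sketch of what that selection could look like is below. The key name is a guess based on the "eval_epoch.accuracy_top1" strategy that appears in a log excerpt further down this page, and the real logs.json layout may differ.

import json

with open("logs/vqacp2/smrl_cfvqa_sum/logs.json") as f:
    logs = json.load(f)

# Assumed key: per-epoch overall validation accuracy.
overall = logs["eval_epoch.accuracy_top1"]
best_epoch = max(range(len(overall)), key=lambda e: overall[e])
print("best epoch:", best_epoch)
# Per-category numbers (Y/N, Num., Other) would then all be taken from this
# single epoch rather than cherry-picked from different epochs.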

Where is the "block" module?

I found that files in the networks folder import a block module in their headers, such as "from block.models.networks.mlp import MLP". How do I get the block module?
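Judging from the traceback further down this page (.../site-packages/block.bootstrap.pytorch-0.1.6-py3.7.egg/...), the block module appears to be provided by the block.bootstrap.pytorch pip package (the BLOCK code acknowledged above), so installing it, if it is not already pulled in by requirements.txt, should make the import work:

pip install block.bootstrap.pytorch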

TypeError: Object of type Tensor is not JSON serializable

After setting up the environment and running the code, json.dump() in bootstrap/lib/logger.py raises TypeError: Object of type Tensor is not JSON serializable. Did the authors run into this problem? Is it OK to leave it unresolved? After commenting out logger.flush(), both the train and eval results are still produced.
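As a general workaround (not specific to this repository), tensors can be converted to plain Python values before they reach json.dump; a minimal sketch:

import json
import torch

def to_serializable(obj):
    # Recursively convert torch tensors into plain Python values so that
    # json.dump can handle them. A generic workaround, not this repo's fix.
    if isinstance(obj, torch.Tensor):
        return obj.item() if obj.numel() == 1 else obj.tolist()
    if isinstance(obj, dict):
        return {k: to_serializable(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [to_serializable(v) for v in obj]
    return obj

stats = {"loss": torch.tensor(0.42), "accuracy_top1": torch.tensor([61.2])}
print(json.dumps(to_serializable(stats)))   # both tensors are now serializable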

Requesting some clarification regarding the core idea

Hi @yuleiniu Thank you for your great work!

I have some questions related to the core idea; I hope answering them will make the paper clearer to me.

1- In Equations 11, 12, and 13 you replace the learned embeddings with a learnable constant. How should this constant be interpreted? What does it imply?
2- Why is this constant fixed across the whole dimension, i.e. multiplied by a vector of ones?
3- Following this,

        z_qkv = self.fusion(logits, q_pred, v_pred, q_fact=True,  k_fact=True, v_fact=True) # te
        z_q = self.fusion(logits, q_pred, v_pred, q_fact=True,  k_fact=False, v_fact=False) # nie
        logits_cfvqa = z_qkv - z_q

if we neglect the non-linearity (z = torch.log(torch.sigmoid(z) + eps)), (z_qkv - z_q) can be interpreted as (z_k + z_q + z_v) - (2C + z_q), which would mean we could rely on z_k + z_v from the beginning and remove the QA branch?! I think I misunderstand something here :D
4- Is it possible to replace the constant with another real example, such as an augmented version of the input, or something like that? What do you think?

Thanks in advance!

Enquiries on reproducing the results

Hi,

First of all, great work and a good paper!

I just want to clarify a few things! I followed the readme file and re-trained the following variants:
i) vqacp2 (smrl_baseline.yaml / smrl_cfvqa_sum.yaml /smrl_cfvqasimple_sum.yaml)
ii) vqa2 (smrl_baseline.yaml / smrl_cfvqa_sum.yaml /smrl_cfvqasimple_sum.yaml)
However, the evaluation results were different from the results reported in the paper.

  1. Were the models originally trained for 22 epochs (as indicated in the YAML files)? If not, what is the recommended number of epochs to achieve results similar to those stated in the paper?
  2. The paper reports the model's performance on the VQA-CP v2 test set. However, the readme states that there is no test set for VQA-CP v2. Can I assume that the eval set and the test set are the same for VQA-CP v2?
  3. After training, the model reports results for logits_all, logits_vq, logits_cfvqa, logits_q and logits_v. How do I relate these results to the ones reported in the table?

Thank you for your time.

ModuleNotFoundError: No module named 'block.external'

Could you please tell me whether this error affects the experimental results?

[I 2021-10-12 03:55:06] ...trap/engines/engine.py.126: Saving best checkpoint for strategy eval_epoch.accuracy_top1
[I 2021-10-12 03:55:06] ...trap/engines/engine.py.420: Saving model...
Traceback (most recent call last):
  File "/data/gaokuofeng/anaconda3/envs/cfvqa/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/data/gaokuofeng/anaconda3/envs/cfvqa/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/data/gaokuofeng/anaconda3/envs/cfvqa/lib/python3.7/site-packages/block.bootstrap.pytorch-0.1.6-py3.7.egg/block/models/metrics/compute_oe_accuracy.py", line 8, in <module>
ModuleNotFoundError: No module named 'block.external'
[I 2021-10-12 03:55:07] ...trap/engines/engine.py.424: Saving optimizer...
[I 2021-10-12 03:55:10] ...trap/engines/engine.py.428: Saving engine...
[I 2021-10-12 03:55:10] ...trap/engines/engine.py.129: Saving last checkpoint
[I 2021-10-12 03:55:10] ...trap/engines/engine.py.420: Saving model...
[I 2021-10-12 03:55:11] ...trap/engines/engine.py.424: Saving optimizer...
[I 2021-10-12 03:55:14] ...trap/engines/engine.py.428: Saving engine...
[I 2021-10-12 03:55:14] ...trap/engines/engine.py.133: Ending training procedures
