VLCI

This is the implementation of Cross-Modal Causal Intervention for Medical Report Generation. It contains the code for Visual-Linguistic Pre-training (VLP) and for fine-tuning via Visual-Linguistic Causal Intervention (VLCI) on the IU X-ray and MIMIC-CXR datasets.

Requirements

All the requirements are listed in the requirements.yaml file. Use the following commands to create a new environment and activate it:

conda env create -f requirements.yaml
conda activate mrg
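
As a quick optional check that the environment works, here is a minimal sketch, assuming requirements.yaml installs PyTorch (which this repo relies on):

```python
# Optional sanity check for the new "mrg" environment. Assumes
# requirements.yaml installs PyTorch; adjust if your environment differs.
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```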

Preparation

  1. Datasets: Download the datasets via data/datadownloader.py, or from the R2Gen repository. Then unzip the files into data/iu_xray and data/mimic_cxr, respectively.
  2. Models: We provide trained VLCI models for inference; you can download them from here.
  3. Remember to change the paths to the data and models in the config files (config/*.json), as sketched below.
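
A minimal sketch of patching those paths programmatically instead of editing the JSON by hand; the key names used here ("data_dir", "model_path") are assumptions, so check the actual field names in config/iu_xray/vlci.json:

```python
import json

# Hypothetical sketch: point a VLCI config at local data and a checkpoint.
# "data_dir" and "model_path" are assumed key names; inspect the real
# config file before running this.
cfg_file = "config/iu_xray/vlci.json"

with open(cfg_file) as f:
    cfg = json.load(f)

cfg["data_dir"] = "data/iu_xray"            # assumed key: dataset root
cfg["model_path"] = "results/vlci_iu.pth"   # assumed key: checkpoint to load

with open(cfg_file, "w") as f:
    json.dump(cfg, f, indent=2)
```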

Evaluation

  • For VLCI on the IU X-ray dataset:
python main.py -c config/iu_xray/vlci.json

B@n = BLEU-n, C = CIDEr, R = ROUGE-L, M = METEOR; "/" marks a metric not reported for that model.

| Model            | B@1   | B@2   | B@3   | B@4   | C     | R     | M     |
|------------------|-------|-------|-------|-------|-------|-------|-------|
| R2Gen            | 0.470 | 0.304 | 0.219 | 0.165 | /     | 0.371 | 0.187 |
| CMCL             | 0.473 | 0.305 | 0.217 | 0.162 | /     | 0.378 | 0.186 |
| PPKED            | 0.483 | 0.315 | 0.224 | 0.168 | 0.351 | 0.376 | 0.190 |
| CA               | 0.492 | 0.314 | 0.222 | 0.169 | /     | 0.381 | 0.193 |
| AlignTransformer | 0.484 | 0.313 | 0.225 | 0.173 | /     | 0.379 | 0.204 |
| M2TR             | 0.486 | 0.317 | 0.232 | 0.173 | /     | 0.390 | 0.192 |
| MGSK             | 0.496 | 0.327 | 0.238 | 0.178 | 0.382 | 0.381 | /     |
| RAMT             | 0.482 | 0.310 | 0.221 | 0.165 | /     | 0.377 | 0.195 |
| MMTN             | 0.486 | 0.321 | 0.232 | 0.175 | 0.361 | 0.375 | /     |
| DCL              | /     | /     | /     | 0.163 | 0.586 | 0.383 | 0.193 |
| VLCI             | 0.505 | 0.334 | 0.245 | 0.189 | 0.456 | 0.397 | 0.204 |
  • For VLCI on the MIMIC-CXR dataset:
python main.py -c config/mimic_cxr/vlci.json

CE-P, CE-R, and CE-F1 are clinical efficacy precision, recall, and F1; other abbreviations as above.

| Model            | B@1   | B@2   | B@3   | B@4   | C     | R     | M     | CE-P  | CE-R  | CE-F1 |
|------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| R2Gen            | 0.353 | 0.218 | 0.145 | 0.103 | /     | 0.277 | 0.142 | 0.333 | 0.273 | 0.276 |
| CMCL             | 0.334 | 0.217 | 0.140 | 0.097 | /     | 0.281 | 0.133 | /     | /     | /     |
| PPKED            | 0.360 | 0.224 | 0.149 | 0.106 | 0.237 | 0.284 | 0.149 | /     | /     | /     |
| CA               | 0.350 | 0.219 | 0.152 | 0.109 | /     | 0.283 | 0.151 | 0.352 | 0.298 | 0.303 |
| AlignTransformer | 0.378 | 0.235 | 0.156 | 0.112 | /     | 0.283 | 0.158 | /     | /     | /     |
| M2TR             | 0.378 | 0.232 | 0.154 | 0.107 | /     | 0.272 | 0.145 | 0.240 | 0.428 | 0.308 |
| MGSK             | 0.363 | 0.228 | 0.156 | 0.115 | 0.203 | 0.284 | /     | 0.458 | 0.348 | 0.371 |
| RAMT             | 0.362 | 0.229 | 0.157 | 0.113 | /     | 0.284 | 0.153 | 0.380 | 0.342 | 0.335 |
| MMTN             | 0.379 | 0.238 | 0.159 | 0.116 | /     | 0.283 | 0.161 | /     | /     | /     |
| DCL              | /     | /     | /     | 0.109 | 0.281 | 0.284 | 0.150 | 0.471 | 0.352 | 0.373 |
| VLCI             | 0.400 | 0.245 | 0.165 | 0.119 | 0.190 | 0.280 | 0.150 | 0.489 | 0.340 | 0.401 |

Citation

If you use this code for your research, please cite our paper:

@misc{chen2023crossmodal,
      title={Cross-Modal Causal Intervention for Medical Report Generation}, 
      author={Weixing Chen and Yang Liu and Ce Wang and Jiarui Zhu and Guanbin Li and Liang Lin},
      year={2023},
      eprint={2303.09117},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Contact

If you have any questions about this code, feel free to reach me ([email protected]).

Acknowledgements

We thank the authors of R2Gen for their open-source work.

Issues

Download link for VLCI model not working

Hi there, I'm trying to download the well-trained models of VLCI for inference from the link provided in the README, but the link appears to be broken. When I click on it, I receive an error message saying the page cannot be found. Could you please update the download link or provide an alternate method for accessing the models? Thanks!

Training code

Nice work! Can you release the code for training? Thank you very much!

CE metric: CheXpert or CheXbert?

Hi Weixing,
Thank you for generously sharing the open-source code. However, I have had trouble replicating the clinical metrics (i.e., F1) reported in your paper using the provided checkpoint on the MIMIC-CXR dataset.

I suspect I went wrong at some step while computing the CE metrics.

To elaborate, using your pretrained VLCI model on the MIMIC-CXR dataset, we obtained the following NLP and clinical metrics:
BLEU4: 0.113
METEOR: 0.144
ROUGE_L: 0.276
CIDEr: 0.174

Precision: 0.314
Recall: 0.181
F1: 0.179

Here are a few key points:

  1. I use CheXbert to extract labels from the ground-truth (gt) and predicted (pred) reports. (I will use CheXpert to extract labels and compute the CE metrics later.)
  2. I use compute_ce.py from R2Gen (https://github.com/zhjohnchan/R2Gen/blob/main/compute_ce.py) to compute the CE metrics.
    The extracted label CSV files for gt and pred are attached.

labeled_reports_gts.csv
labeled_reports_res.csv

I eagerly await your insights on this matter.
Best

Poor MRG results after training

Hello, after pre-training and then fine-tuning ('task_name': 'funtune_vlci_iuxray'), my validation and test results are not good:

val:  BLEU-1 0.138, BLEU-2 0.092, BLEU-3 0.062, BLEU-4 0.042, METEOR 0.084, ROUGE-L 0.154, CIDEr 0.004
test: BLEU-1 0.136, BLEU-2 0.089, BLEU-3 0.060, BLEU-4 0.042, METEOR 0.090, ROUGE-L 0.155, CIDEr 0.003

Could I get a way to contact you, and take a little of your time to ask some questions?

CE metrics

This is great work and it has inspired me a lot. However, I'm not clear on how the CE metrics (precision, recall, F1-score) in the paper are calculated, and I haven't been able to find this in the source code either. In particular, how are missing values handled after CheXpert extracts labels? From what I've found, there are two methods: one treats missing values as 0 and maps all remaining values (1, -1, 0) to 1 (as shown in https://github.com/MIT-LCP/mimic-cxr/blob/master/txt/validation/compare_negbio_and_chexpert.ipynb), while the other treats missing values as 0 and keeps the remaining labels unchanged. There may also be other ways of handling this. Could you please let me know which method was used, or provide the relevant code?
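
For concreteness, a minimal sketch of the first convention described above (missing -> 0; any extracted value of 1, -1, or 0 -> 1), micro-averaged over all observations with pandas and scikit-learn. It assumes the standard chexpert-labeler CSV layout with the report text in the first column; whether the paper actually used this convention is exactly the open question here:

```python
import pandas as pd
from sklearn.metrics import precision_recall_fscore_support

def binarize(csv_path):
    """First convention above: blank (missing) -> 0, any extracted value
    (1, -1, or 0) -> 1, i.e. "observation mentioned" vs. "not mentioned"."""
    labels = pd.read_csv(csv_path).iloc[:, 1:]  # column 0 holds the report text
    return labels.notna().astype(int).values

gt = binarize("labeled_reports_gts.csv")
pred = binarize("labeled_reports_res.csv")

# Micro-average by flattening all (report, observation) pairs.
p, r, f1, _ = precision_recall_fscore_support(
    gt.ravel(), pred.ravel(), average="binary"
)
print(f"CE precision {p:.3f}  recall {r:.3f}  F1 {f1:.3f}")
```

The choice of convention changes the scores substantially, which may explain gaps like the one reported in the issue above.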

Output of the training script

Sorry to bother you. Is the training script run with the command python main.py -c config/vlp.json? Why is my result negative infinity?
epoch: 30 32/33 lv: 0.146 lt: 0.670
Epoch 30 mean_loss: 0.1476 0.7979 time: 22.8444s
Saving checkpoint: results/pretrain/current_checkpoint.pth ...
Best results (w.r.t BLEU_4) in validation set:
val_BLEU_4 : -inf
Best results (w.r.t BLEU_4) in test set:
test_BLEU_4 : -inf
I hope you can find the time to answer my question. Many thanks!

MIMIC-CXR Annotation JSON

Hi!

Thanks for the paper and the implementation; both are excellent! I am trying to run inference on my own X-ray dataset, but the tokenizer module needs annotation.json files for both the MIMIC-CXR and IU X-ray datasets. I found the IU X-ray file, but could not find the MIMIC-CXR one. I have requested dataset access, but it would be wonderful if you could provide ONLY the annotation.json file.

Thanks
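
For anyone blocked on the same thing, here is a minimal sketch of an R2Gen-style annotation.json skeleton for a custom dataset. The field names ("id", "report", "image_path") are assumptions based on the R2Gen format, so verify them against data/iu_xray/annotation.json:

```python
import json
import os

# Hypothetical R2Gen-style annotation skeleton for a custom X-ray dataset.
# Field names are assumptions; compare with data/iu_xray/annotation.json.
annotation = {
    "train": [],
    "val": [],
    "test": [
        {
            "id": "case-0001",
            "report": "no acute cardiopulmonary abnormality .",
            "image_path": ["case-0001/view1.png"],
        }
    ],
}

os.makedirs("data/custom", exist_ok=True)
with open("data/custom/annotation.json", "w") as f:
    json.dump(annotation, f, indent=2)
```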

MixTokenizer MIMIC-CXR Dataset Annotation Path Bug

Hi!

Thanks for the implementation. While working with your repository, I noticed that MixTokenizer reads only the IU X-ray annotations. Even though the MIMIC-CXR annotation.json path is defined a few lines above, the IU X-ray JSON path is passed for MIMIC-CXR as well. Your pretrained models were also trained with this bug, so they only load under this configuration. You can see the offending line at the link below.

https://github.com/WissingChen/VLCI/blob/216038fe28e1fb9e3d2fee5b7c76b8dc843c4397/utils/tokenizers.py#LL132C53-L132C53
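
To illustrate the kind of fix being asked for, a hypothetical sketch (not the repository's actual code; the real logic lives in utils/tokenizers.py at the line linked above):

```python
import json
from collections import Counter

def build_vocab(ann_path, threshold=3):
    """Count tokens in the 'train' split of an R2Gen-style annotation.json
    and keep those appearing at least `threshold` times."""
    with open(ann_path) as f:
        ann = json.load(f)
    counts = Counter(
        tok for ex in ann["train"] for tok in ex["report"].lower().split()
    )
    return {tok for tok, c in counts.items() if c >= threshold}

# The reported bug: both vocabularies were built from the IU X-ray file.
# mimic_vocab = build_vocab("data/iu_xray/annotation.json")  # wrong path
iu_vocab = build_vocab("data/iu_xray/annotation.json")
mimic_vocab = build_vocab("data/mimic_cxr/annotation.json")  # corrected path
```

Note that, as the issue points out, the released checkpoints were trained under the buggy configuration, so they only load if the old path is kept.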

Visualization of the attention map

Hi, thanks for your work. Do you provide the code for the attention visualization in your paper? I could not find it. If so, could you please share the code for that part?
