
dodnet's Introduction

DoDNet

This repo holds the PyTorch implementation of DoDNet and TransDoDNet:

DoDNet: Learning to segment multi-organ and tumors from multiple partially labeled datasets (https://arxiv.org/pdf/2011.10217.pdf)
Learning from partially labeled data for multi-organ and tumor segmentation (https://arxiv.org/pdf/2211.06894.pdf)

Usage

1. MOTS Dataset Preparation

Before starting, MOTS should be re-built from several medical organ and tumor segmentation datasets.

| Partial-label task | Data source |
| --- | --- |
| Liver | data |
| Kidney | data |
| Hepatic Vessel | data |
| Pancreas | data |
| Colon | data |
| Lung | data |
| Spleen | data |
  • Preprocessed data will be available soon.

2. Training/Testing/Evaluation

sh run_script.sh

3. Citation

If this code is helpful for your study, please cite:

@inproceedings{zhang2021dodnet,
  title={DoDNet: Learning to segment multi-organ and tumors from multiple partially labeled datasets},
  author={Zhang, Jianpeng and Xie, Yutong and Xia, Yong and Shen, Chunhua},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2021}
}
@article{xie2023learning,
  title={Learning from partially labeled data for multi-organ and tumor segmentation},
  author={Xie, Yutong and Zhang, Jianpeng and Xia, Yong and Shen, Chunhua},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2023}
}

dodnet's People

Contributors

jianpengz


dodnet's Issues

Help, please!

Traceback (most recent call last):
File "/home/shiya.xu/papers/DoDNet/a_DynConv/train.py", line 250, in
main()
File "/home/shiya.xu/papers/DoDNet/a_DynConv/train.py", line 185, in main
preds = model(images, task_ids)
File "/home/shiya.xu/anaconda3/envs/pyy/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/home/shiya.xu/anaconda3/envs/pyy/lib/python3.10/site-packages/apex/parallel/distributed.py", line 564, in forward
result = self.module(*inputs, **kwargs)
File "/home/shiya.xu/anaconda3/envs/pyy/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/home/shiya.xu/papers/DoDNet/a_DynConv/unet3D_DynConv882.py", line 224, in forward
x = x + skip1
RuntimeError: The size of tensor a (96) must match the size of tensor b (95) at non-singleton dimension 3
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 26535 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 1 (pid: 26536) of binary: /home/shiya.xu/anaconda3/envs/pyy/bin/python
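A likely cause of this mismatch is an input crop whose spatial size is not divisible by the network's total downsampling factor, so an upsampled feature map comes back one voxel smaller than its skip connection (96 vs. 95). As a hedged sketch (the multiple of 8 is an assumption suggested by the `882` in `unet3D_DynConv882`; check the actual encoder strides), the needed right-padding per axis can be computed before the forward pass:

```python
def pad_amounts(spatial_shape, multiple=8):
    """Right-padding per axis needed to make each spatial dim divisible
    by `multiple`, so encoder/decoder shapes match at skip connections."""
    return tuple((multiple - s % multiple) % multiple for s in spatial_shape)

# A (D, H, W) = (32, 96, 95) crop needs (0, 0, 1) extra voxels:
pad_d, pad_h, pad_w = pad_amounts((32, 96, 95))
# Apply with torch.nn.functional.pad, which pads the last dims first:
#   x = F.pad(x, (0, pad_w, 0, pad_h, 0, pad_d))
```

Alternatively, constraining the patch size in the data loader to multiples of the downsampling factor avoids the padding entirely.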

How to adapt the code to do segmentation of multiple organs?

Hello,

I am running the code on multiple partially labelled datasets, including BTCV and FLARE22, with the goal of segmenting multiple organs with a single model. Since each dataset has only a subset of categories annotated, I think your method suits this situation well. These datasets differ slightly from the ones used in the paper, as each image has more than two organs annotated. I therefore changed the data loader and made sure that in each iteration only one foreground class is selected, with the task_id equal to the corresponding class label. I expected the model to work well, but found that it converges slowly: after more than 150 epochs (an alternative method I used for the same task already reaches reasonable performance after that many epochs), a majority of organs were still not correctly segmented, and some were even predicted as all zeros. Could you please share your experience on how to adapt the code for multi-organ segmentation to maximize its performance? And is a slow convergence rate expected with this method? Thanks!

Liver and Kidney Preprocessing

The link given for the 0Liver data does not follow the imagesTr/labelsTr/imagesTs structure that the other datasets have, yet the train.txt file assumes that structure for 0Liver. Where can I find the 0Liver data organized this way?

Also, for the 1Kidney preprocessing, should line 28 of respacing.py use dirs1 rather than i_dirs1, so that the 1Kidney preprocessing goes through this loop (i_dirs1 is each specific case)? Also, when going through this loop, the data is not saved in the spacing folder with origin as a subfolder. Is this an issue, or can I just delete the origin folder from the train.txt file?

Dataset request

Hello, I would like to request the dataset from you. When I download it through the links given in the code, the connection keeps dropping, which wastes a lot of time, so I would like to request the dataset directly.

Results seem different

Hi,

I downloaded the checkpoint and ran inference, but found that the Dice of Kidney (and kidney tumor) is close to 0, as the following figure shows. Did anyone meet the same issue? Can anyone help explain the reason? Thanks a lot.

(screenshot omitted)

cannot import name 'parse_devices' from 'utils.pyt_utils'

It is a great honor for me to read your paper. When I run your code it shows "cannot import name 'parse_devices' from 'utils.pyt_utils'". I checked pyt_utils.py but did not find the definition of parse_devices. I don't know how to solve it. Thanks for your help.
(screenshot omitted)

Error while training with checkpoint

File "train.py", line 266, in <module> main() File "train.py", line 151, in main args.reload_path, map_location=torch.device('cpu'))) File "C:\Users\shafi\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 1407, in load_state_dict self.__class__.__name__, "\n\t".join(error_msgs))) RuntimeError: Error(s) in loading state_dict for unet3D: Missing key(s) in state_dict: "conv1.weight", "layer0.0.gn1.weight", "layer0.0.gn1.bias", "layer0.0.conv1.weight", "layer0.0.gn2.weight", "layer0.0.gn2.bias", "layer0.0.conv2.weight", "layer1.0.gn1.weight", "layer1.0.gn1.bias", "layer1.0.conv1.weight", "layer1.0.gn2.weight", "layer1.0.gn2.bias", "layer1.0.conv2.weight", "layer1.0.downsample.0.weight", "layer1.0.downsample.0.bias", "layer1.0.downsample.2.weight", "layer1.1.gn1.weight", "layer1.1.gn1.bias", "layer1.1.conv1.weight", "layer1.1.gn2.weight", "layer1.1.gn2.bias", "layer1.1.conv2.weight", "layer2.0.gn1.weight", "layer2.0.gn1.bias", "layer2.0.conv1.weight", "layer2.0.gn2.weight", "layer2.0.gn2.bias", "layer2.0.conv2.weight", "layer2.0.downsample.0.weight", "layer2.0.downsample.0.bias", "layer2.0.downsample.2.weight", "layer2.1.gn1.weight", "layer2.1.gn1.bias", "layer2.1.conv1.weight", "layer2.1.gn2.weight", "layer2.1.gn2.bias", "layer2.1.conv2.weight", "layer3.0.gn1.weight", "layer3.0.gn1.bias", "layer3.0.conv1.weight", "layer3.0.gn2.weight", "layer3.0.gn2.bias", "layer3.0.conv2.weight", "layer3.0.downsample.0.weight", "layer3.0.downsample.0.bias", "layer3.0.downsample.2.weight", "layer3.1.gn1.weight", "layer3.1.gn1.bias", "layer3.1.conv1.weight", "layer3.1.gn2.weight", "layer3.1.gn2.bias", "layer3.1.conv2.weight", "layer4.0.gn1.weight", "layer4.0.gn1.bias", "layer4.0.conv1.weight", "layer4.0.gn2.weight", "layer4.0.gn2.bias", "layer4.0.conv2.weight", "layer4.0.downsample.0.weight", "layer4.0.downsample.0.bias", "layer4.0.downsample.2.weight", "layer4.1.gn1.weight", "layer4.1.gn1.bias", "layer4.1.conv1.weight", "layer4.1.gn2.weight", 
"layer4.1.gn2.bias", "layer4.1.conv2.weight", "fusionConv.0.weight", "fusionConv.0.bias", "fusionConv.2.weight", "x8_resb.0.gn1.weight", "x8_resb.0.gn1.bias", "x8_resb.0.conv1.weight", "x8_resb.0.gn2.weight", "x8_resb.0.gn2.bias", "x8_resb.0.conv2.weight", "x8_resb.0.downsample.0.weight", "x8_resb.0.downsample.0.bias", "x8_resb.0.downsample.2.weight", "x4_resb.0.gn1.weight", "x4_resb.0.gn1.bias", "x4_resb.0.conv1.weight", "x4_resb.0.gn2.weight", "x4_resb.0.gn2.bias", "x4_resb.0.conv2.weight", "x4_resb.0.downsample.0.weight", "x4_resb.0.downsample.0.bias", "x4_resb.0.downsample.2.weight", "x2_resb.0.gn1.weight", "x2_resb.0.gn1.bias", "x2_resb.0.conv1.weight", "x2_resb.0.gn2.weight", "x2_resb.0.gn2.bias", "x2_resb.0.conv2.weight", "x2_resb.0.downsample.0.weight", "x2_resb.0.downsample.0.bias", "x2_resb.0.downsample.2.weight", "x1_resb.0.gn1.weight", "x1_resb.0.gn1.bias", "x1_resb.0.conv1.weight", "x1_resb.0.gn2.weight", "x1_resb.0.gn2.bias", "x1_resb.0.conv2.weight", "precls_conv.0.weight", "precls_conv.0.bias", "precls_conv.2.weight", "precls_conv.2.bias", "GAP.0.weight", "GAP.0.bias", "controller.weight", "controller.bias". 
Unexpected key(s) in state_dict: "module.conv1.weight", "module.layer0.0.gn1.weight", "module.layer0.0.gn1.bias", "module.layer0.0.conv1.weight", "module.layer0.0.gn2.weight", "module.layer0.0.gn2.bias", "module.layer0.0.conv2.weight", "module.layer1.0.gn1.weight", "module.layer1.0.gn1.bias", "module.layer1.0.conv1.weight", "module.layer1.0.gn2.weight", "module.layer1.0.gn2.bias", "module.layer1.0.conv2.weight", "module.layer1.0.downsample.0.weight", "module.layer1.0.downsample.0.bias", "module.layer1.0.downsample.2.weight", "module.layer1.1.gn1.weight", "module.layer1.1.gn1.bias", "module.layer1.1.conv1.weight", "module.layer1.1.gn2.weight", "module.layer1.1.gn2.bias", "module.layer1.1.conv2.weight", "module.layer2.0.gn1.weight", "module.layer2.0.gn1.bias", "module.layer2.0.conv1.weight", "module.layer2.0.gn2.weight", "module.layer2.0.gn2.bias", "module.layer2.0.conv2.weight", "module.layer2.0.downsample.0.weight", "module.layer2.0.downsample.0.bias", "module.layer2.0.downsample.2.weight", "module.layer2.1.gn1.weight", "module.layer2.1.gn1.bias", "module.layer2.1.conv1.weight", "module.layer2.1.gn2.weight", "module.layer2.1.gn2.bias", "module.layer2.1.conv2.weight", "module.layer3.0.gn1.weight", "module.layer3.0.gn1.bias", "module.layer3.0.conv1.weight", "module.layer3.0.gn2.weight", "module.layer3.0.gn2.bias", "module.layer3.0.conv2.weight", "module.layer3.0.downsample.0.weight", "module.layer3.0.downsample.0.bias", "module.layer3.0.downsample.2.weight", "module.layer3.1.gn1.weight", "module.layer3.1.gn1.bias", "module.layer3.1.conv1.weight", "module.layer3.1.gn2.weight", "module.layer3.1.gn2.bias", "module.layer3.1.conv2.weight", "module.layer4.0.gn1.weight", "module.layer4.0.gn1.bias", "module.layer4.0.conv1.weight", "module.layer4.0.gn2.weight", "module.layer4.0.gn2.bias", "module.layer4.0.conv2.weight", "module.layer4.0.downsample.0.weight", "module.layer4.0.downsample.0.bias", "module.layer4.0.downsample.2.weight", "module.layer4.1.gn1.weight", 
"module.layer4.1.gn1.bias", "module.layer4.1.conv1.weight", "module.layer4.1.gn2.weight", "module.layer4.1.gn2.bias", "module.layer4.1.conv2.weight", "module.fusionConv.0.weight", "module.fusionConv.0.bias", "module.fusionConv.2.weight", "module.x8_resb.0.gn1.weight", "module.x8_resb.0.gn1.bias", "module.x8_resb.0.conv1.weight", "module.x8_resb.0.gn2.weight", "module.x8_resb.0.gn2.bias", "module.x8_resb.0.conv2.weight", "module.x8_resb.0.downsample.0.weight", "module.x8_resb.0.downsample.0.bias", "module.x8_resb.0.downsample.2.weight", "module.x4_resb.0.gn1.weight", "module.x4_resb.0.gn1.bias", "module.x4_resb.0.conv1.weight", "module.x4_resb.0.gn2.weight", "module.x4_resb.0.gn2.bias", "module.x4_resb.0.conv2.weight", "module.x4_resb.0.downsample.0.weight", "module.x4_resb.0.downsample.0.bias", "module.x4_resb.0.downsample.2.weight", "module.x2_resb.0.gn1.weight", "module.x2_resb.0.gn1.bias", "module.x2_resb.0.conv1.weight", "module.x2_resb.0.gn2.weight", "module.x2_resb.0.gn2.bias", "module.x2_resb.0.conv2.weight", "module.x2_resb.0.downsample.0.weight", "module.x2_resb.0.downsample.0.bias", "module.x2_resb.0.downsample.2.weight", "module.x1_resb.0.gn1.weight", "module.x1_resb.0.gn1.bias", "module.x1_resb.0.conv1.weight", "module.x1_resb.0.gn2.weight", "module.x1_resb.0.gn2.bias", "module.x1_resb.0.conv2.weight", "module.precls_conv.0.weight", "module.precls_conv.0.bias", "module.precls_conv.2.weight", "module.precls_conv.2.bias", "module.GAP.0.weight", "module.GAP.0.bias", "module.controller.weight", "module.controller.bias".
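The mismatch above is the classic DataParallel/DistributedDataParallel symptom: the checkpoint was saved from a wrapped model, which prefixes every parameter name with `module.`, while a bare unet3D expects unprefixed keys. A common workaround (a sketch, not part of the original code) is to strip the prefix before loading:

```python
def strip_module_prefix(state_dict):
    """Drop the 'module.' prefix that (Distributed)DataParallel adds to
    parameter names when a wrapped model is saved."""
    return {k[len("module."):] if k.startswith("module.") else k: v
            for k, v in state_dict.items()}

# Usage (assuming torch is available and `model` is the bare unet3D):
#   state = torch.load(args.reload_path, map_location="cpu")
#   model.load_state_dict(strip_module_prefix(state))
```

The reverse situation (loading an unprefixed checkpoint into a wrapped model) is solved by loading into `model.module` instead.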

How to directly run train.py ?

@jianpengz I can run your code using your command in the PyCharm terminal, but when I run train.py directly it shows 'RuntimeError:' as the screenshots show. Can you tell me how to solve it? Thanks for your time and kindness.
(screenshots omitted)

Question About Training Process

Hello, thank you very much for the code you provided. I encountered some problems during training. During the training process, a dimensional abnormality error occurred at a certain epoch. What is the reason for this?
torch.Size([1, 64, 32, 96, 96])
skip1 : shape torch.Size([1, 64, 32, 96, 95])
16 22:01:46 WRN A exception occurred during Engine initialization, give up running process
Traceback (most recent call last):
File "train.py", line 253, in
main()
File "train.py", line 188, in main
preds = model(images, task_ids)
File "/home/test/anaconda3/envs/qy/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/home/test/anaconda3/envs/qy/lib/python3.6/site-packages/apex-0.1-py3.6.egg/apex/parallel/distributed.py", line 560, in forward
result = self.module(*inputs, **kwargs)
File "/home/test/anaconda3/envs/qy/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/data/qy/code/DoDNet/a_DynConv/unet3D_DynConv882.py", line 230, in forward
x = x + skip1
RuntimeError: The size of tensor a (96) must match the size of tensor b (95) at non-singleton dimension 4

inconsistent data numbers on 2HepaticVessel

Hi, thank you very much for your contribution.
I am trying to reproduce the results but find that for the 2Hippocampus dataset you have 242 + 61 = 303 images, while in the imagesTr folder under Task04_Hippocampus downloaded from MSD I can only find 260 in total. How do you get labeled data from the cases in imagesTs?

Question About re_spacing.py

Hello, thank you very much for the code. I have a question about re_spacing.py: the code does not process the 0Liver dataset. Does that mean no additional processing of the downloaded data is needed, and I can just modify the file names directly?

Data download

Hi @jianpengz,
Thanks very much for your work. I have just started using this repo and ran into some problems when downloading the datasets per your instructions.
1) I cannot find any data related to the Kidney dataset;
2) I downloaded LiTS and found it differs from your data folder format in lits/MOTS/MOTS_train/test.txt.

The training error

The MSD Liver dataset was used for training:
DoDNet/a_DynConv/MOTSDataset.py", line 471, in my_collate
data_dict = tr_transforms(**data_dict)
TypeError: __call__() got an unexpected keyword argument 'image'
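The batchgenerators-style transforms applied in my_collate are called with keyword arguments and typically expect the batch dict to use the `data`/`seg` keys, so passing an `image` key raises this TypeError. A hedged sketch of renaming keys before applying the transforms (the exact key names in your loader are an assumption):

```python
def to_batchgenerators_keys(data_dict):
    """Rename common keys to the 'data'/'seg' names that batchgenerators
    transforms accept as keyword arguments; other keys pass through."""
    mapping = {"image": "data", "label": "seg"}
    return {mapping.get(k, k): v for k, v in data_dict.items()}

# In my_collate, call
#   data_dict = tr_transforms(**to_batchgenerators_keys(data_dict))
# instead of tr_transforms(**data_dict).
```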

Broken Pipe Training Error

Error while training on Hepatic Vessels and Pancreas:

Traceback (most recent call last):
  File "train.py", line 250, in <module>
    main()
  File "train.py", line 177, in main
    for iter, batch in enumerate(trainloader):
  File "C:\Users\shafi\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\utils\data\dataloader.py", line 359, in __iter__
    return self._get_iterator()
  File "C:\Users\shafi\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\utils\data\dataloader.py", line 305, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\shafi\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\utils\data\dataloader.py", line 918, in __init__
    w.start()
  File "C:\Users\shafi\AppData\Local\Programs\Python\Python37\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "C:\Users\shafi\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\shafi\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\shafi\AppData\Local\Programs\Python\Python37\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\shafi\AppData\Local\Programs\Python\Python37\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe
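On Windows, DataLoader workers are started with `spawn` rather than `fork`: each worker re-imports the training script, and an unguarded top-level entry point breaks the pipe to the parent process. Two common workarounds, sketched here (not from the original code):

```python
import platform

def safe_num_workers(requested):
    """Fall back to single-process data loading on Windows, where spawned
    DataLoader workers commonly trigger BrokenPipeError."""
    return 0 if platform.system() == "Windows" else requested

# Pass safe_num_workers(args.num_workers) to DataLoader(num_workers=...),
# and guard the entry point so spawned workers can import train.py safely:
#   if __name__ == "__main__":
#       main()
```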

KiTS19

Hi, you really did an excellent job, and thanks for your sharing.
I have already prepared all datasets except KiTS19; the other datasets work well with your code and all give correct results. I followed the steps for KiTS19, but the result produced by evaluate.py and postp.py is wrong. Would you please give more details on how to prepare KiTS19 correctly?
Thanks!

Questions when running the code

Hello Mr. Zhang, there is no train_Dynconv.py file in the a_DynConv folder of the code you provided; should I run train.py instead? After running python train.py I get:

Traceback (most recent call last):
  File "train.py", line 30, in <module>
    from engine import Engine
  File "../engine.py", line 9, in <module>
    from utils.logger import get_logger
ModuleNotFoundError: No module named 'utils.logger'

(printed twice, interleaved, by the two launched processes)

But pip install utils.logger cannot find such a module:

ERROR: Could not find a version that satisfies the requirement utils.logger
ERROR: No matching distribution found for utils.logger

I wonder whether utils.logger is a file that has not yet been made public?

Looking forward to your reply, thank you.
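utils here is a package inside the repository, not a pip package, so it cannot be installed; the import fails because train.py is launched from inside a_DynConv/ and the repo root is not on sys.path. A hedged sketch of the usual fix, added near the top of train.py (the directory layout is assumed from the paths in the traceback):

```python
import os
import sys

def repo_root(script_path):
    """Parent of the directory holding the script: a_DynConv/train.py -> repo root."""
    return os.path.dirname(os.path.dirname(os.path.abspath(script_path)))

# At the top of train.py, before `from engine import Engine`:
#   sys.path.insert(0, repo_root(__file__))
```

Launching via run_script.sh presumably sets the working directory (or PYTHONPATH) so the import resolves, which would explain why the script works but a direct `python train.py` does not.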
