
segwithdistmap's Introduction

3D Medical Image Segmentation With Distance Transform Maps

Motivation: How Distance Transform Maps Boost Segmentation CNNs (MIDL 2020)

Incorporating the distance transform maps of image segmentation labels into CNN-based segmentation has received significant attention in 2019. These methods can be classified into two main classes according to how the distance transform maps are used (a minimal sketch of such a map follows the list):

  • Designing new loss functions
  • Adding an auxiliary task, e.g. distance map regression
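
As a minimal, self-contained sketch (illustrative, not the repository's exact preprocessing), this is what a foreground distance transform map (DTM) and a signed distance function (SDF) look like for a toy binary label:

import numpy as np
from scipy.ndimage import distance_transform_edt

# Toy 2D binary label: a filled square in the centre of the image.
label = np.zeros((64, 64), dtype=bool)
label[20:44, 20:44] = True

# Foreground DTM: for every foreground pixel, the Euclidean distance
# to the nearest background pixel (zero elsewhere).
fg_dtm = distance_transform_edt(label)

# SDF: negative inside the object, positive outside, ~0 on the boundary.
sdf = distance_transform_edt(~label) - distance_transform_edt(label)

print(fg_dtm.max(), sdf.min(), sdf.max())

The surveyed methods either plug such maps into the loss (e.g. as voxel-wise weights) or ask the network to regress them alongside the segmentation.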

Overview

However, given these new methods on the one hand and the diversity of specific implementations and dataset-related challenges on the other, it is hard to tell which designs generalize well beyond the experiments in the original papers. In this repository, we re-implement these methods (published in 2019) and evaluate them on the same 3D segmentation tasks (heart and liver tumor segmentation).

Experiments

Task | LA Contributor | GPU | LiTS Contributor | GPU
Boundary loss | Yiwen Zhang | 2080ti | Mengzhang Li | TITAN RTX
Hausdorff loss | Yiwen Zhang | 2080ti | Mengzhang Li | TITAN RTX
Signed distance map loss (AAAI 2020) | Zhan Wei | 1080ti | cancelled | -
Multi-Head: FG DTM regression-L1 | Yiwen Zhang | 2080ti | cancelled | -
Multi-Head: FG DTM regression-L2 | Jianan Liu | 2080ti | cancelled | -
Multi-Head: FG DTM regression-L1+L2 | Gaoxiang Chen | 2080ti | cancelled | -
Multi-Head: SDF regression-L1 | Feng Cheng | TITAN X | Chao Peng | TITAN RTX
Multi-Head: SDF regression-L2 | Rongfei Lv | TITAN RTX | Rongfei Lv | TITAN RTX
Multi-Head: SDF regression-L1+L2 | Yixin Wang | P100 | cancelled | -
Add-Branch: FG DTM regression-L1 | Yaliang Zhao | TITAN RTX | cancelled | -
Add-Branch: FG DTM regression-L2 | Mengzhang Li | TITAN RTX | cancelled | -
Add-Branch: FG DTM regression-L1+L2 | Yixin Wang | P100 | cancelled | -
Add-Branch: SDF regression-L1 | Feng Cheng | TITAN X | Yixin Wang | TITAN RTX
Add-Branch: SDF regression-L2 | Feng Cheng | TITAN X | Yixin Wang | P100
Add-Branch: SDF regression-L1+L2 | Yixin Wang | P100 | Yunpeng Wang | TITAN XP

The code is available here, and trained models can be downloaded from Baidu Disk (pw: mgn0).

Related Work in 2019

New loss functions

Date | First author | Title | Official Code | Publication
2019 | Yuan Xue | Shape-Aware Organ Segmentation by Predicting Signed Distance Maps | None | AAAI 2020
2019 | Hoel Kervadec | Boundary loss for highly unbalanced segmentation | pytorch | MIDL 2019
2019 | Davood Karimi | Reducing the Hausdorff Distance in Medical Image Segmentation with Convolutional Neural Networks (arXiv) | None | TMI 2019

Auxiliary tasks

Date | First author | Title | Official Code | Publication
2019 | Yan Wang | Deep Distance Transform for Tubular Structure Segmentation in CT Scans | None | CVPR 2020
2019 | Shusil Dangi | A Distance Map Regularized CNN for Cardiac Cine MR Image Segmentation (arXiv) | None | Medical Physics
2019 | Fernando Navarro | Shape-Aware Complementary-Task Learning for Multi-organ Segmentation (arXiv) | None | MICCAI MLMI 2019
2019 | Balamurali Murugesan | Psi-Net: Shape and boundary aware joint multi-task deep network for medical image segmentation (arXiv) | None | EMBC
2019 | Balamurali Murugesan | Conv-MCD: A Plug-and-Play Multi-task Module for Medical Image Segmentation (arXiv) | Pytorch | MLMI

Acknowledgments

The authors would like to thank the organization teams of the MICCAI 2017 Liver Tumor Segmentation Challenge and the MICCAI 2018 Left Atrial Segmentation Challenge for the publicly available datasets. We also thank the reviewers for their valuable comments and suggestions. We appreciate Cheng Chen, Feng Cheng, Mengzhang Li, Chengwei Su, Chengfeng Zhou and Yaliang Zhao for helping us finish some of the experiments. Last but not least, we thank Lequan Yu for his great PyTorch implementation of V-Net and Fabian Isensee for his great PyTorch implementation of nnU-Net.

Including the following citation in your work would be highly appreciated.

@inproceedings{ma-MIDL2020-SegWithDist,
  title={How Distance Transform Maps Boost Segmentation CNNs: An Empirical Study},
  author={Ma, Jun and Wei, Zhan and Zhang, Yiwen and Wang, Yixin and Lv, Rongfei and Zhu, Cheng and Chen, Gaoxiang and Liu, Jianan and Peng, Chao and Wang, Lei and Wang, Yunpeng and Chen, Jianan},
  booktitle={Medical Imaging with Deep Learning},
  pages = {479--492},
  volume = {121},
  month = {06--08 Jul},
  year={2020},
  series = {Proceedings of Machine Learning Research},
  editor = {Tal Arbel and Ismail Ben Ayed and Marleen de Bruijne and Maxime Descoteaux and Herve Lombaert and Christopher Pal},
  publisher = {PMLR},
  url = {http://proceedings.mlr.press/v121/ma20b.html}
}

segwithdistmap's People

Contributors

junma11


segwithdistmap's Issues

Questions about the hd loss

Thanks for sharing your great work. I have some questions about the Hausdorff loss.

I saw your implementation of the Hausdorff loss in train_LA_HD.py.
In train_LA_HD.py, two arguments (seg_dtm and gt_dtm) of the function hd_loss are calculated using numpy and scipy.
To my knowledge, using numpy and scipy (or torch.no_grad()) breaks the PyTorch backward graph for these arguments, so I think they can only act as constant weights multiplying the gradients that flow through the other inputs (seg_soft, gt).

My questions are:
If I'm right, is it okay to calculate the Hausdorff loss this way in train_LA_HD.py?
Are numpy and scipy used because the distance transform mapping is non-differentiable?
If the distance transform mapping is differentiable (I think it is, based on Karimi's paper, but I'm not sure), could calculating seg_dtm and gt_dtm with PyTorch operations help improve the model?
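
For context, here is a minimal sketch of the Karimi-style formulation (illustrative only; the helper names are hypothetical and this is not the repository's exact hd_loss). The distance transform maps are computed with scipy, carry no gradients, and only act as fixed voxel-wise weights on the squared error:

import numpy as np
import torch
from scipy.ndimage import distance_transform_edt

def foreground_dtm(mask_np):
    # Approximate unsigned distance of every voxel to the object boundary of a binary mask.
    mask_np = mask_np.astype(bool)
    if not mask_np.any():
        return np.zeros(mask_np.shape, dtype=np.float64)
    return distance_transform_edt(mask_np) + distance_transform_edt(~mask_np)

def hd_style_loss(seg_soft, gt, alpha=2.0):
    # seg_soft: (B, X, Y, Z) foreground probabilities; gt: (B, X, Y, Z) binary labels.
    with torch.no_grad():  # the DTMs are constants w.r.t. the network parameters
        seg_dtm = np.stack([foreground_dtm(m) for m in (seg_soft > 0.5).cpu().numpy()])
        gt_dtm = np.stack([foreground_dtm(m) for m in gt.cpu().numpy()])
        seg_dtm = torch.from_numpy(seg_dtm).to(seg_soft)
        gt_dtm = torch.from_numpy(gt_dtm).to(seg_soft)
    delta = (seg_soft - gt.float()) ** 2             # gradients flow only through this term
    weight = seg_dtm ** alpha + gt_dtm ** alpha      # fixed voxel-wise weights
    return (delta * weight).mean()

# e.g. seg_soft = torch.rand(2, 24, 24, 24, requires_grad=True); gt = torch.rand(2, 24, 24, 24) > 0.5
# hd_style_loss(seg_soft, gt).backward() works because only `delta` requires grad.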

Code Error in VNet-RecBranch

Hi, I want to use residual convolutions in VNet. I looked at your implementation in vnet_rec.py, but the class VNetRec only uses ConvBlock instead of ResidualConvBlock.

Is this a code error? Looking forward to your reply, thanks!

Heart MRI data from the MICCAI 2018 Atrial Segmentation Challenge

Hi Jun Ma,
Thank you for sharing. I can't download the heart MRI data from the MICCAI 2018 Atrial Segmentation Challenge; could you provide it?
I also have rectal data in our lab and want to run it with your code. How should I modify my data?

Best wishes!

About Loss Divergence

Hi, thanks for your great work!
When I run train_LA_AAAISDF.py, I use TensorBoard to watch the total loss and found that it diverges after about 1.4k iterations. So I used the 1k-iteration checkpoint for testing and got unexpected results. Could you tell me how to fix this? Thanks a lot!

question about the AAAI sdf loss equation

Hello,

I looked at the L_product loss equation and noticed a strange property: this function seems non-convex. E.g., for GT = 0.05 in 1D it looks like this (when combined with the L1 loss):

[plot of the combined loss for GT = 0.05]

I know this actually comes from another paper, but maybe you encountered and analysed this aspect while working on your study.

I wonder if this might be related to the divergence issues, e.g. #19, because it could be a problem even if it works most of the time.
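
For what it's worth, a quick numeric check of this observation (a sketch only: it assumes the per-voxel product term has the form -(p*g + eps)/(p*g + p**2 + g**2 + eps), i.e. the negative of the ratio used in the code snippet quoted in the summation-order issue below, which may differ in sign or weighting from the repository's actual loss):

import numpy as np

g = 0.05       # ground-truth SDF value of a single voxel
eps = 1e-5
p = np.linspace(-1.0, 1.0, 2001)  # candidate predictions

l_product = -(p * g + eps) / (p * g + p ** 2 + g ** 2 + eps)
combined = l_product + np.abs(p - g)  # product term plus L1

# A convex function has non-negative second differences everywhere;
# negative values here indicate the combined loss is not convex in p.
print('min second difference:', np.diff(combined, 2).min())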

A question about training

Hello,
I am very interested in your work and am trying to use this code for my own segmentation task, but I have some questions. I tried code like train_LA_MultiHead_FGDTM_L1 on my task with 10000 iterations, but when I load the saved model and run it on the test data, the predictions are all 0, and so is the distance map. I printed the predicted y before the argmax and found that y[:,0,:,:,:] is always larger than y[:,1,:,:,:], which means the network cannot distinguish foreground from background. Could you please give me some advice? Thanks.

About the data

Hi, the link you provided for the MICCAI 2018 Atrial Segmentation Challenge is not in the h5 format you mentioned. Could you re-provide the MICCAI 2018 Atrial Segmentation Challenge data in h5 format?

reproduce LA dataset Multi-heads: SDF-L2

Hi JunMa,

Thanks for your great work! Can you give me some advice on tuning the model?

I tried the code/train_LA_MultiHead_SDF_L2.py with the default settings and no change.

After training is done, use code/test_LA_MultiHead_SDF.py to test the model, the results are:
root@475e8e50b861:/Documents/SegWithDistMap/code# python test_LA_MultiHead_SDF.py
init weight from ../model_la/vnet_dp_la_MH_SDFL2/iter_10000.pth
100%| ... | 20/20 [00:54<00:00, 2.74s/it]
average metric is [ 0.84311185 0.73323151 22.54856166 5.7714824 ]

There is still a gap to the results reported in the paper:
Multi-heads: SDF-L2 | 87.0 (3.49) | 77.2 (5.49) | 16.1 (13.5) | 3.97 (3.14)

Do you have any idea about the settings at the training stage, like learning rate, batch size, number of iterations, etc.?

Thanks

Dimensions for Loss and sdf

Hello, @JunMa11 thanks for the nice codes,
Can you please clarify what the (x, y, z) dimensions mean for the loss functions and the SDF/DT? I am working with 2D images, so I am confused about depth, height and width.

1. net_output: net logits; shape = (batch_size, class, x, y, z) → (batch_size, class, height, width, depth), or
2. net_output: net logits; shape = (batch_size, class, x, y, z) → (batch_size, class, depth, height, width),
if I follow the dimension order PyTorch requires?

Thanks
Cheers
Abbas
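
For reference, a minimal sketch of the 2D convention one might assume (PyTorch's 2D layout is (batch, class, height, width), and scipy's Euclidean distance transform works for any number of spatial dimensions, so dropping the depth axis is enough):

import numpy as np
import torch
from scipy.ndimage import distance_transform_edt

net_output = torch.randn(2, 2, 128, 128)      # (batch_size, class, H, W) in 2D
gt = np.zeros((2, 128, 128), dtype=bool)      # (batch_size, H, W) binary labels
gt[:, 40:90, 40:90] = True

# One foreground DTM per sample; in 3D the same call simply receives (D, H, W) arrays.
fg_dtm = np.stack([distance_transform_edt(m) for m in gt])
print(net_output.shape, fg_dtm.shape)         # torch.Size([2, 2, 128, 128]) (2, 128, 128)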

How to use the SDF for mutli-label mask?

I want to regress the signed distance map with a new head, but in your paper it is defined for a binary mask. What should I do if I want to use it for a multi-label mask?
Is it possible to get the SDF for a three-label ground truth, like the BraTS dataset?
Hoping for your reply.
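
For reference, a hedged sketch (not from the repository; the function name and normalization are illustrative) of one straightforward extension: treat each label as its own binary foreground and regress one SDF channel per class.

import numpy as np
from scipy.ndimage import distance_transform_edt

def multi_label_sdf(label, num_classes):
    # label: integer mask of shape (D, H, W); returns (num_classes, D, H, W),
    # one normalized SDF per foreground class (channel 0 / background left as zeros).
    sdf = np.zeros((num_classes,) + label.shape, dtype=np.float32)
    for c in range(1, num_classes):
        posmask = (label == c)
        if not posmask.any():
            continue
        posdis = distance_transform_edt(posmask)    # distance inside class c
        negdis = distance_transform_edt(~posmask)   # distance outside class c
        sdf[c] = negdis / (negdis.max() + 1e-8) - posdis / (posdis.max() + 1e-8)
    return sdf

For a BraTS-style ground truth with labels {1, 2, 4}, one would first remap them to consecutive indices, and the regression head would then output one SDF channel per foreground class.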

The problem of processing my own data into h5 format

Hello author! I am very interested in your work; I think it is at the frontier and very challenging. I would like to use the code here to segment my own 3D contrast-enhanced CT images (3D images in NIfTI format). How do you process the data into h5 format? Thank you!
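
For reference, a minimal conversion sketch (the file names are placeholders, and the dataset keys 'image' and 'label' are an assumption; check the repository's data loader for the exact keys and any preprocessing such as cropping or normalization it expects):

import h5py
import nibabel as nib
import numpy as np

# Load a NIfTI image/label pair (placeholder file names).
image = nib.load('case_001_image.nii.gz').get_fdata().astype(np.float32)
label = nib.load('case_001_label.nii.gz').get_fdata().astype(np.uint8)

# Write both volumes into one h5 file.
with h5py.File('case_001.h5', 'w') as f:
    f.create_dataset('image', data=image, compression='gzip')
    f.create_dataset('label', data=label, compression='gzip')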

data h5

Hi, I see that you have converted the 2017 LiTS data into h5, but there are only 118 cases, while LiTS has 131 cases. Did you screen some of them out? Is there any benefit to this?

sdf aaai loss implementation - summation order

Hello,

I'm looking at the function

https://github.com/JunMa11/SegWithDistMap/blob/153dabf3bc5d9e48058e1497857ac6b00c7abab8/code/train_LA_AAAISDF.py#L95C1-L108C1

and I don't understand why the summation is ordered this way, i.e. each part of the equation (intersect, pd_sum, gt_sum) is summed in isolation, while according to the equation from the paper:

[equation for L_product from the paper]

the summation should be outside.

It does matter to the calculation:

import torch

smooth = 1e-5

net_output = torch.Tensor([0.3, 0.4])
gt_sdm = torch.Tensor([0.05, -0.05])

# original: sum each part first, then take the ratio
intersect = torch.sum(net_output * gt_sdm)
pd_sum = torch.sum(net_output ** 2)
gt_sum = torch.sum(gt_sdm ** 2)

L_product_orig = (intersect + smooth) / (intersect + pd_sum + gt_sum + smooth)

# per-element ratios, summed at the end according to the equation
intersect = net_output * gt_sdm
pd_sum = net_output ** 2
gt_sum = gt_sdm ** 2

L_product_changed = (intersect + smooth) / (intersect + pd_sum + gt_sum + smooth)

print(L_product_orig, L_product_changed.sum())

shows:

tensor(-0.0200) tensor(-0.0007)

Data Preprocess

Hi, thanks for the great work.

In your paper, you said that "all the cases were cropped centering at the heart or liver region for better comparison ... "

May I ask what the final cropped image sizes are in your experiments? When I ran your code, I found that the size of the cropped image is critical to the results of the experiments, but I couldn't find the crop size in your paper or code, so I'm confused about it.

If you used any other data preprocessing techniques, or if I have misunderstood anything, please let me know.

Thank you very much!

A question about the AAAI SDF implementation

Hi there,

Thank you so much for your awesome work!

I have one question related to the training of the signed distance map described in "Shape-Aware Organ Segmentation by Predicting Signed Distance Maps".

As described in the paper, after applying the smooth approximation to the Heaviside function, the boundary value of the SDM is converted from 0 to 0.5 (inside the boundary the value should be 1, and outside it should be 0). I am just not sure how such a converted map can overlap with the binary ground truth?

Thank you for your advice.

Qi

Terrific work

Not an issue, but I don't really know where else to post this.

I think you did an awesome job comparing all those settings, and I am happy that your paper got accepted at MIDL 2020.

The code that you provide is very useful, especially the implementation of Karimi, D. and Salcudean, S.E., 2019. Reducing the Hausdorff distance in medical image segmentation with convolutional neural networks. IEEE Transactions on Medical Imaging.

Best,

Hoel

Why the for loop over c in range(out_shape[1]) in compute_sdf()?

Hi,

Thanks for the great work of the surface losses.

May I ask why "for c in range(1, out_shape[1])" is necessary in def compute_sdf1_1(img_gt, out_shape)? Based on my understanding, the SDF is the distance map of each sample with shape (x, y, z), so there is no need to loop over c, and "normalized_sdf[b] = sdf" would be enough (b is the index of a sample in the batch).

If I misunderstand anything, please let me know. Thank you very much!

Regards,
Cathy


The following is the function I mentioned.

import numpy as np
from scipy.ndimage import distance_transform_edt as distance
from skimage import segmentation as skimage_seg

def compute_sdf1_1(img_gt, out_shape):
    """
    compute the normalized signed distance map of binary mask
    input: segmentation, shape = (batch_size, x, y, z)
    output: the Signed Distance Map (SDM)
    sdf(x) = 0; x in segmentation boundary
             -inf|x-y|; x in segmentation
             +inf|x-y|; x out of segmentation
    normalize sdf to [-1, 1]
    """
    img_gt = img_gt.astype(np.uint8)
    normalized_sdf = np.zeros(out_shape)

    for b in range(out_shape[0]):  # batch size
        for c in range(1, out_shape[1]):  # ignore background channel 0
            posmask = img_gt[b].astype(bool)
            if posmask.any():
                negmask = ~posmask
                posdis = distance(posmask)  # distance to the boundary inside the object
                negdis = distance(negmask)  # distance to the boundary outside the object
                boundary = skimage_seg.find_boundaries(posmask, mode='inner').astype(np.uint8)
                sdf = (negdis - np.min(negdis)) / (np.max(negdis) - np.min(negdis)) \
                      - (posdis - np.min(posdis)) / (np.max(posdis) - np.min(posdis))
                sdf[boundary == 1] = 0
                normalized_sdf[b][c] = sdf

    return normalized_sdf
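
For illustration, a hypothetical toy usage of the function above (assuming numpy and the scipy/skimage imports used by compute_sdf1_1 are in scope):

import numpy as np

gt = np.zeros((1, 32, 32, 32), dtype=np.uint8)   # (batch_size, x, y, z)
gt[0, 8:24, 8:24, 8:24] = 1                      # a cube as the foreground object
sdf = compute_sdf1_1(gt, out_shape=(1, 2, 32, 32, 32))
print(sdf.shape, sdf.min(), sdf.max())           # channel 1 holds values normalized to [-1, 1]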
