
cldice's Introduction

clDice - a Novel Topology-Preserving Loss Function for Tubular Structure Segmentation

paper | poster

CVPR 2021

Authors: Suprosanna Shit and Johannes C. Paetzold et al.

@inproceedings{cldice2021,
  title={clDice-a Novel Topology-Preserving Loss Function for Tubular Structure Segmentation},
  author={Shit, Suprosanna and Paetzold, Johannes C and Sekuboyina, Anjany and Ezhov, Ivan and Unger, Alexander and Zhylka, Andrey and Pluim, Josien PW and Bauer, Ulrich and Menze, Bjoern H},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={16560--16569},
  year={2021}
}

Abstract

Accurate segmentation of tubular, network-like structures, such as vessels, neurons, or roads, is relevant to many fields of research. For such structures, the topology is their most important characteristic; particularly preserving connectedness: in the case of vascular networks, missing a connected vessel entirely alters the blood-flow dynamics. We introduce a novel similarity measure termed centerlineDice (short clDice), which is calculated on the intersection of the segmentation masks and their (morphological) skeleta. We theoretically prove that clDice guarantees topology preservation up to homotopy equivalence for binary 2D and 3D segmentation. Extending this, we propose a computationally efficient, differentiable loss function (soft-clDice) for training arbitrary neural segmentation networks. We benchmark the soft-clDice loss on five public datasets, including vessels, roads and neurons (2D and 3D). Training on soft-clDice leads to segmentation with more accurate connectivity information, higher graph similarity, and better volumetric scores.


clDice Metric

In our publication we show how clDice can be used as a metric to benchmark segmentation performance for tubular structures. The metric is computed on "hard" skeletons obtained with skeletonize from the scikit-image library; other, potentially more sophisticated skeletonization techniques could be integrated into the clDice metric as well. You can find a Python implementation in this repository.
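
A minimal sketch of the metric on binary numpy masks (skeletonize is the scikit-image function named above; cl_score and the argument order follow the paper's Tprec/Tsens definitions rather than any particular file in this repository):

    import numpy as np
    from skimage.morphology import skeletonize

    def cl_score(mask, skel):
        # Fraction of the skeleton skel that lies inside the mask.
        return np.sum(mask * skel) / np.sum(skel)

    def cl_dice_metric(v_p, v_l):
        # v_p: predicted binary mask, v_l: ground-truth binary mask.
        tprec = cl_score(v_l, skeletonize(v_p))  # topology precision
        tsens = cl_score(v_p, skeletonize(v_l))  # topology sensitivity
        return 2.0 * tprec * tsens / (tprec + tsens)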

clDice as a Loss function

To train neural networks with clDice we implemented a loss function. For stability reasons, and to ensure a good volumetric segmentation, we combine clDice with a regular Dice or binary cross-entropy loss. Moreover, we introduce a soft skeleton to make the skeletonization fully differentiable. In this repository you can find the following implementations (a sketch of the combined loss follows the list):

  1. PyTorch, 2D and 3D
  2. TensorFlow/Keras, 2D and 3D
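
A minimal PyTorch sketch of such a combination (soft_skel is the differentiable skeletonization sketched in the Soft Skeleton section below; alpha and the smoothing constant are illustrative defaults, not prescribed values):

    import torch
    import torch.nn as nn

    def soft_dice(y_true, y_pred, smooth=1.0):
        # Volumetric term: plain soft Dice on probability maps.
        inter = torch.sum(y_true * y_pred)
        return 1.0 - (2.0 * inter + smooth) / (torch.sum(y_true) + torch.sum(y_pred) + smooth)

    class SoftDiceClDice(nn.Module):
        def __init__(self, iters=3, alpha=0.5, smooth=1.0):
            super().__init__()
            self.iters, self.alpha, self.smooth = iters, alpha, smooth

        def forward(self, y_pred, y_true):
            # y_pred, y_true: probability maps in [0, 1], shape NCHW or NCDHW.
            dice = soft_dice(y_true, y_pred, self.smooth)
            skel_pred = soft_skel(y_pred, self.iters)
            skel_true = soft_skel(y_true, self.iters)
            tprec = (torch.sum(skel_pred * y_true) + self.smooth) / (torch.sum(skel_pred) + self.smooth)
            tsens = (torch.sum(skel_true * y_pred) + self.smooth) / (torch.sum(skel_true) + self.smooth)
            cl_dice = 1.0 - 2.0 * (tprec * tsens) / (tprec + tsens)
            return (1.0 - self.alpha) * dice + self.alpha * cl_dice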

Soft Skeleton

To use clDice as a loss function we introduce a differentiable soft skeletonization, in which iterative min- and max-pooling act as a proxy for morphological erosion and dilation.
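
A minimal 2D PyTorch sketch of the idea (the repository also ships 3D and TensorFlow/Keras versions; the kernel sizes and NCHW layout here are assumptions of the sketch):

    import torch
    import torch.nn.functional as F

    def soft_erode(img):
        # Min-pooling (negated max-pooling) with separable kernels approximates erosion.
        p1 = -F.max_pool2d(-img, (3, 1), (1, 1), (1, 0))
        p2 = -F.max_pool2d(-img, (1, 3), (1, 1), (0, 1))
        return torch.min(p1, p2)

    def soft_dilate(img):
        # Max-pooling approximates dilation.
        return F.max_pool2d(img, (3, 3), (1, 1), (1, 1))

    def soft_open(img):
        return soft_dilate(soft_erode(img))

    def soft_skel(img, iters):
        # Peel the mask iteratively; the residue of each opening joins the skeleton.
        skel = F.relu(img - soft_open(img))
        for _ in range(iters):
            img = soft_erode(img)
            delta = F.relu(img - soft_open(img))
            skel = skel + F.relu(delta - skel * delta)
        return skel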



cldice's Issues

3D soft_skel gives discontinuous output and different results on PyTorch vs TensorFlow

First of all, thanks for a great paper about clDice; it's a really interesting approach.

I wanted to test the idea on a 3D dataset.

I have a synthetic 3D shape on which I just run soft_skeletonize, and I assume it should leave a single connected component intact. Unfortunately it doesn't. See the following summary showing the input image on the left, and two iterations of soft_skel, for both TensorFlow and PyTorch.

[screenshot: input shape next to two soft_skel iterations, TensorFlow and PyTorch]

I've created a reproducible Colab notebook for the case: https://gist.github.com/kretes/84f6025e7e1ded19591a54b62abcc539

clDice Metrics

Hello, nice work!

I was wondering if this is the right implementation of the clDice metric. Your paper indicates high performance for this metric. However, computing the clDice score of a vessel mask against itself resulted in 91%.

I used your implementation, by the way; the other metrics reported 100%.
Any idea what the bug could be?

Evaluation Metrics

Hi, I really appreciate your work. Could you please also share the implementation of the topology-based evaluation metrics that you used in your paper? That would be very helpful. I will look forward to your response. Thanks.

Training

Can you provide an example of using clDice as a loss function to train a network? I used clDice as a loss function for training, but I could not get the network to train. The training loss curve is also very strange: it oscillates strongly, even takes negative values, and does not converge.

about cldice loss

Thank you very much for your article. What I want to ask is whether clDice is meant to be used as a module. If I have multiple tasks, how should I combine multiple loss functions for training?

Thin vessels disappear when using soft_skeleton

Hello, when I use soft_skeleton I find that only large vessels remain, no matter how I set the number of iterations. Please tell me what I should do; I'm a little confused about the theory.
Thanks!

adding cldice degrades the performance

I am using clDice for a vessel segmentation task, and I found that when I use clDice the performance (Dice, Jaccard, HD, ASD) degrades. Besides, the loss becomes negative.
(Plus, I split the loss into Dice and clDice with the default settings.)

[screenshots: training results]

I wonder whether I got a setting wrong or am using the loss incorrectly?

Boundary mask of CREMI?

Hello, how did you get the boundary mask of CREMI? It only provides the neuron_ids. I can extract the boundaries by applying skimage.segmentation.find_boundaries on neuron_ids, but the boundaries I get seem thinner than what you show in the paper. Did you add something such as a Gaussian blur? Thanks!
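
For reference, a sketch of the extraction this question describes (find_boundaries is the scikit-image function named above; the label array is a stand-in for CREMI data, and the 'thick' mode is a guess at how wider boundaries might arise, not the authors' recipe):

    import numpy as np
    from skimage.segmentation import find_boundaries

    neuron_ids = np.random.randint(0, 5, size=(64, 64))  # stand-in for CREMI labels
    thin = find_boundaries(neuron_ids, mode='inner')     # one-pixel-wide boundaries
    thick = find_boundaries(neuron_ids, mode='thick')    # marks both sides of each boundary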

soft_skel function memory issue

Hello, I apply the soft-clDice loss to a segmentation network with Cityscapes as input (2048x1024x3).
With a batch size of 16, the network trains well with other losses like cross-entropy.
However, I hit "CUDA out of memory" when I use the soft-clDice loss, even with a smaller batch size (even 1 or 2).
I use PyTorch and an NVIDIA 1080 Ti with 12GB memory.

I traced the memory allocation of my graphics card with the command "watch -n 1 nvidia-smi", and found that the issue occurs in the for loop of the soft_skel function:

def soft_skel(img, iter_):

How can I solve this problem?

predicted mask format

During training, should I convert the output of the U-Net to a binary mask before feeding it into soft_dice_cldice? I used raw logits, but the training result is really bad.
However, when I tried converting to a binary mask, I lost the grad on the tensor.
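
A common pattern, offered as an assumption rather than a maintainer reply: the soft losses operate on probabilities in [0, 1], so one would squash the logits instead of thresholding them, which keeps the tensor differentiable:

    import torch

    logits = torch.randn(2, 1, 64, 64, requires_grad=True)  # raw U-Net output (illustrative shape)
    target = (torch.rand(2, 1, 64, 64) > 0.5).float()       # binary ground truth

    probs = torch.sigmoid(logits)  # in [0, 1] and differentiable; no hard threshold
    # loss = soft_dice_cldice(probs, target)  # hypothetical instance of this repository's loss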

about loss value

Hello, I successfully added clDice to the network, but the final result is very unsatisfactory and the loss values are very strange. I would like to ask for some suggestions or solutions.

[screenshot: loss values]

about clDice

Thanks for a great paper about clDice.
I am a newbie who has just started deep learning. I want to add clDice as the loss function of my network; how should I implement it? Also, can it be combined with other loss functions?

clDice loss cannot be used alone, and stays 0 all the time

When I use clDice to train a vessel segmentation network, I find that the clDice loss stays 0 all the time.

    class soft_cldice(nn.Module):
        def __init__(self, iter_=3, smooth=1.):
            super(soft_cldice, self).__init__()
            self.iter = iter_
            self.smooth = smooth

        def forward(self, y_pred, y_true):
            y_true = y_true.contiguous().unsqueeze(1).to(float)  # to get the true label
            y_pred = (y_pred > 0.5).contiguous().to(float).requires_grad_()  # to get the pred mask
            skel_pred = soft_skel(y_pred, self.iter)
            skel_true = soft_skel(y_true, self.iter)
            tprec = (torch.sum(torch.multiply(skel_pred, y_true)[:,1:,...])+self.smooth)/(torch.sum(skel_pred[:,1:,...])+self.smooth)
            tsens = (torch.sum(torch.multiply(skel_true, y_pred)[:,1:,...])+self.smooth)/(torch.sum(skel_true[:,1:,...])+self.smooth)
            cl_dice = 1.- 2.0*(tprec*tsens)/(tprec+tsens)
            return cl_dice
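
One likely culprit in the snippet above, offered as an observation rather than a maintainer reply: after unsqueeze(1) the tensors have a single channel, so the [:,1:,...] slice selects nothing, every sum is 0, and with the smoothing terms the loss collapses to exactly 0; the hard threshold on y_pred additionally blocks gradients.

    import torch

    x = torch.zeros(2, 1, 64, 64)  # single-channel tensor, as after unsqueeze(1)
    print(x[:, 1:, ...].shape)     # torch.Size([2, 0, 64, 64]): the slice is empty
    print(x[:, 1:, ...].sum())     # tensor(0.), so tprec = tsens = smooth/smooth = 1
    # hence cl_dice = 1 - 2*(1*1)/(1+1) = 0, independent of the network output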

run issue

Hello, your PyTorch version of clDice does not run successfully. Could you republish a PyTorch version of clDice when you have some free time?

about unstable loss values

Sorry to disturb you again. I tried many experiments, but the final result was far from ideal; sometimes the output was a completely black picture, and the test result did not yield the edge map. My loss curve is very turbulent, with large swings and sometimes even negative values; the curve is shown below.

[screenshot: loss curve]

I trained for 150 epochs; the abscissa is the epoch and the ordinate is the loss value.
The way I call the clDice class is as follows:

[screenshot: loss invocation]

Can you give me some suggestions?

sensitivity and precision

tsens = cl_score(v_l,skeletonize_3d(v_p))

The definitions of precision and sensitivity do not seem consistent with the clDice loss and the common definitions, although this does not affect the final clDice score. Presumably they should be:
tprec = cl_score(v_l,skeletonize_3d(v_p))
tsens = cl_score(v_p,skeletonize_3d(v_l))

What does each index of [:,1:,:,:,:] mean?

Can you explain what each index represents? My task is binary segmentation (foreground and background), and the output of my network is 4-dimensional (batch size, 2, H, W). I want to try the loss function proposed in the paper, but I haven't figured out why the indexing is 5-dimensional.

soft cldice code question

Hi there. I don't understand why the first channel is discarded when computing tprec and tsens after the multiplication in the numerator, in class soft_cldice in cldice.py. Thanks for your answer!

soft skeleton

Hi, thanks for the great package.

I would like to understand why you perform soft erosion using a sequence of three min-poolings with separable filters. Couldn't we instead just use one big min-pooling (using -F.max_pool3d(-img, kernel_size=(3,3,3), stride=1, padding=1))?
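
For what it's worth, the two operations are not equivalent. A sketch contrasting them, assuming 5D NCDHW probability tensors and that the separable poolings are combined by a voxel-wise minimum: the separable version erodes with a 7-voxel cross (6-neighborhood), while a single 3x3x3 min-pooling erodes with the full 27-voxel cube (26-neighborhood) and so removes at least as much, and generally more.

    import torch
    import torch.nn.functional as F

    def erode_cross(img):
        # Voxel-wise min of three separable min-poolings: erosion with a
        # 7-voxel cross (6-neighborhood) structuring element.
        p1 = -F.max_pool3d(-img, (3, 1, 1), stride=1, padding=(1, 0, 0))
        p2 = -F.max_pool3d(-img, (1, 3, 1), stride=1, padding=(0, 1, 0))
        p3 = -F.max_pool3d(-img, (1, 1, 3), stride=1, padding=(0, 0, 1))
        return torch.min(torch.min(p1, p2), p3)

    def erode_cube(img):
        # One big min-pooling, as proposed in the question: erosion with the
        # full 3x3x3 cube (26-neighborhood).
        return -F.max_pool3d(-img, kernel_size=(3, 3, 3), stride=1, padding=1)

    img = (torch.rand(1, 1, 16, 16, 16) > 0.7).float()
    print((erode_cross(img) - erode_cube(img)).abs().max())  # generally non-zero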
