
seglossodyssey's Introduction

Loss functions for image segmentation

A collection of loss functions for medical image segmentation

@article{LossOdyssey,
title = {Loss Odyssey in Medical Image Segmentation},
journal = {Medical Image Analysis},
volume = {71},
pages = {102035},
year = {2021},
author = {Jun Ma and Jianan Chen and Matthew Ng and Rui Huang and Yu Li and Chen Li and Xiaoping Yang and Anne L. Martel},
doi = {https://doi.org/10.1016/j.media.2021.102035},
url = {https://www.sciencedirect.com/science/article/pii/S1361841521000815}
}

Take-home message: compound loss functions are the most robust losses, especially for highly imbalanced segmentation tasks.

Some recent side evidence: the winner in MICCAI 2020 HECKTOR Challenge used DiceFocal loss; the winner and runner-up in MICCAI 2020 ADAM Challenge used DiceTopK loss.
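As an illustration of such a compound loss, a Dice + cross-entropy combination can be sketched in PyTorch (a minimal sketch with placeholder names, not any challenge winner's exact implementation):

```python
import torch
import torch.nn.functional as F

def soft_dice_loss(logits, target_onehot, eps=1e-5):
    """Soft Dice averaged over batch and classes; logits: (B, C, ...)."""
    probs = torch.softmax(logits, dim=1)
    dims = tuple(range(2, logits.ndim))              # spatial dimensions
    inter = (probs * target_onehot).sum(dims)
    denom = probs.sum(dims) + target_onehot.sum(dims)
    dice = (2 * inter + eps) / (denom + eps)
    return 1 - dice.mean()

def dice_ce_loss(logits, target):
    """Compound Dice + cross-entropy; target: (B, ...) integer labels."""
    onehot = F.one_hot(target, logits.shape[1]).movedim(-1, 1).float()
    return soft_dice_loss(logits, onehot) + F.cross_entropy(logits, target)
```

The Dice term stays between 0 and 1 and the cross-entropy term is non-negative, which makes the compound value easy to monitor during training.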

Date First Author Title Conference/Journal
20231101 Bingyuan Liu Do we really need dice? The hidden region-size biases of segmentation losses (pytorch) MedIA
2023 MICCAI Alvaro Gonzalez-Jimenez Robust T-Loss for Medical Image Segmentation (pytorch) MICCAI23
2023 MICCAI Zifu Wang Dice Semimetric Losses: Optimizing the Dice Score with Soft Labels (pytorch) MICCAI23
2023 MICCAI Fan Sun Boundary Difference Over Union Loss For Medical Image Segmentation (pytorch) MICCAI23
20220517 Florian Kofler blob loss: instance imbalance aware loss functions for semantic segmentation (pytorch) IPMI23
20220426 Zhaoqi Leng PolyLoss: A Polynomial Expansion Perspective of Classification Loss Functions (pytorch) ICLR
20211109 Litao Yu Distribution-Aware Margin Calibration for Semantic Segmentation in Images (pytorch) IJCV
20211013 Pei Wang Relax and Focus on Brain Tumor Segmentation MedIA
20210418 Bingyuan Liu The hidden label-marginal biases of segmentation losses (pytorch) arxiv
20210330 Suprosanna Shit and Johannes C. Paetzold clDice - a Novel Topology-Preserving Loss Function for Tubular Structure Segmentation (keras and pytorch) CVPR 2021
20210325 Attila Szabo, Hadi Jamali-Rad Tilted Cross Entropy (TCE): Promoting Fairness in Semantic Segmentation CVPR21 Workshop
20210318 Xiaoling Hu Topology-Aware Segmentation Using Discrete Morse Theory arxiv ICLR 2021
20210211 Hoel Kervadec Beyond pixel-wise supervision: semantic segmentation with higher-order shape descriptors Submitted to MIDL 2021
20210210 Rosana EL Jurdi A Surprisingly Effective Perimeter-based Loss for Medical Image Segmentation Submitted to MIDL 2021
20201222 Zeju Li Analyzing Overfitting Under Class Imbalance in Neural Networks for Image Segmentation TMI
20210129 Nick Byrne A Persistent Homology-Based Topological Loss Function for Multi-class CNN Segmentation of Cardiac MRI arxiv STACOM 2020
20201019 Hyunseok Seo Closing the Gap Between Deep Neural Network Modeling and Biomedical Decision-Making Metrics in Segmentation via Adaptive Loss Functions TMI
20200929 Stefan Gerl A Distance-Based Loss for Smooth and Continuous Skin Layer Segmentation in Optoacoustic Images MICCAI 2020
20200821 Nick Byrne A persistent homology-based topological loss function for multi-class CNN segmentation of cardiac MRI arxiv STACOM
20200720 Boris Shirokikh Universal Loss Reweighting to Balance Lesion Size Inequality in 3D Medical Image Segmentation arxiv (pytorch) MICCAI 2020
20200708 Gonglei Shi Marginal loss and exclusion loss for partially supervised multi-organ segmentation (arXiv) MedIA
20200706 Yuan Lan An Elastic Interaction-Based Loss Function for Medical Image Segmentation (pytorch) (arXiv) MICCAI 2020
20200615 Tom Eelbode Optimization for Medical Image Segmentation: Theory and Practice when evaluating with Dice Score or Jaccard Index TMI
20200605 Guotai Wang Noise-robust Dice loss: A Noise-robust Framework for Automatic Segmentation of COVID-19 Pneumonia Lesions from CT Images (pytorch) TMI
202004 J. H. Moltz Contour Dice coefficient (CDC) Loss: Learning a Loss Function for Segmentation: A Feasibility Study ISBI
201912 Yuan Xue Shape-Aware Organ Segmentation by Predicting Signed Distance Maps (arxiv) (pytorch) AAAI 2020
201912 Xiaoling Hu Topology-Preserving Deep Image Segmentation (paper) (pytorch) NeurIPS
201910 Shuai Zhao Region Mutual Information Loss for Semantic Segmentation (paper) (pytorch) NeurIPS 2019
201910 Shuai Zhao Correlation Maximized Structural Similarity Loss for Semantic Segmentation (paper) arxiv
201908 Pierre-Antoine Ganaye Removing Segmentation Inconsistencies with Semi-Supervised Non-Adjacency Constraint (paper) (official pytorch) Medical Image Analysis
201906 Xu Chen Learning Active Contour Models for Medical Image Segmentation (paper) (official-keras) CVPR 2019
20190422 Davood Karimi Reducing the Hausdorff Distance in Medical Image Segmentation with Convolutional Neural Networks (pytorch) TMI 201907
20190417 Francesco Caliva Distance Map Loss Penalty Term for Semantic Segmentation (paper) MIDL 2019
20190411 Su Yang Major Vessel Segmentation on X-ray Coronary Angiography using Deep Networks with a Novel Penalty Loss Function (paper) MIDL 2019
20190405 Boah Kim Mumford–Shah Loss Functional for Image Segmentation With Deep Learning TIP
201901 Seyed Raein Hashemi Asymmetric Loss Functions and Deep Densely Connected Networks for Highly Imbalanced Medical Image Segmentation: Application to Multiple Sclerosis Lesion Detection (paper) IEEE Access
201812 Hoel Kervadec Boundary loss for highly unbalanced segmentation (paper), (pytorch 1.0) MIDL 2019
201810 Nabila Abraham A Novel Focal Tversky loss function with improved Attention U-Net for lesion segmentation (paper) (keras) ISBI 2019
201809 Fabian Isensee CE+Dice: nnU-Net: Self-adapting Framework for U-Net-Based Medical Image Segmentation (paper) Nature Methods
20180831 Ken C. L. Wong 3D Segmentation with Exponential Logarithmic Loss for Highly Unbalanced Object Sizes (paper) MICCAI 2018
20180815 Wentao Zhu Dice+Focal: AnatomyNet: Deep Learning for Fast and Fully Automated Whole-volume Segmentation of Head and Neck Anatomy (arxiv) (pytorch) Medical Physics
201806 Javier Ribera Weighted Hausdorff Distance: Locating Objects Without Bounding Boxes (paper), (pytorch) CVPR 2019
201805 Saeid Asgari Taghanaki Combo Loss: Handling Input and Output Imbalance in Multi-Organ Segmentation (arxiv) (keras) Computerized Medical Imaging and Graphics
201709 S M Masudur Rahman AL ARIF Shape-aware deep convolutional neural network for vertebrae segmentation (paper) MICCAI 2017 Workshop
201708 Tsung-Yi Lin Focal Loss for Dense Object Detection (paper), (code) ICCV, TPAMI
20170711 Carole Sudre Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations (paper) DLMIA 2017
20170703 Lucas Fidon Generalised Wasserstein Dice Score for Imbalanced Multi-class Segmentation using Holistic Convolutional Networks (paper) MICCAI 2017 BrainLes
201705 Maxim Berman The Lovász-Softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks (paper), (code) CVPR 2018
201701 Seyed Sadegh Mohseni Salehi Tversky loss function for image segmentation using 3D fully convolutional deep networks (paper) MICCAI 2017 MLMI
201612 Md Atiqur Rahman Optimizing Intersection-Over-Union in Deep Neural Networks for Image Segmentation (paper) 2016 International Symposium on Visual Computing
201608 Michal Drozdzal "Dice Loss (without square)" The Importance of Skip Connections in Biomedical Image Segmentation (arxiv) DLMIA 2016
201606 Fausto Milletari "Dice Loss (with square)" V-net: Fully convolutional neural networks for volumetric medical image segmentation (arxiv), (caffe code) International Conference on 3D Vision
201605 Zifeng Wu TopK loss Bridging Category-level and Instance-level Semantic Image Segmentation (paper) arxiv
201511 Tom Brosch "Sensitivity-Specifity loss" Deep Convolutional Encoder Networks for Multiple Sclerosis Lesion Segmentation (code) MICCAI 2015
201505 Olaf Ronneberger "Weighted cross entropy" U-Net: Convolutional Networks for Biomedical Image Segmentation (paper) MICCAI 2015
201309 Gabriela Csurka What is a good evaluation measure for semantic segmentation? (paper) BMVA 2013

Most of the corresponding TensorFlow code can be found here.

seglossodyssey's People

Contributors

clementpoiret · junma11 · neuronflow


seglossodyssey's Issues

Boundary Loss for Multi Class

The boundary loss and the sample code for computing the distance map (the input to the boundary loss) both mention that they are for binary masks.

How would I use the Boundary loss in a multi class scenario?
How would I compute the sdf map for multi class?

Thank you!
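One common approach is to treat each foreground class as its own binary mask and compute a distance map per class. A sketch of that idea using SciPy (the function name is a placeholder, not the repository's API):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def multiclass_signed_distance(label_map, num_classes):
    """Per-class signed distance maps: negative inside, positive outside.

    label_map: integer array of shape (H, W) or (D, H, W).
    Returns an array of shape (num_classes, *label_map.shape).
    """
    sdfs = np.zeros((num_classes,) + label_map.shape, dtype=np.float32)
    for c in range(num_classes):
        mask = label_map == c
        if mask.any() and not mask.all():
            outside = distance_transform_edt(~mask)  # distance to the object
            inside = distance_transform_edt(mask)    # distance to background
            sdfs[c] = outside - inside
    return sdfs
```

The negative-inside / positive-outside convention follows the boundary loss paper; an all-empty or all-full class channel is left at zero, since the distance transform is not meaningful there.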

when `apply_nonlin=None` calculation of GDiceLoss fails because softmax_output is not created

when apply_nonlin=None calculation of GDiceLoss fails:

Traceback (most recent call last):
  File "path/loser.py", line 181, in <module>
    loss = criterion(prediction, label)
  File "path/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "path/SegLoss/losses_pytorch/dice_loss.py", line 123, in forward
    intersection: torch.Tensor = w * einsum("bcxyz, bcxyz->bc", softmax_output, y_onehot)
UnboundLocalError: local variable 'softmax_output' referenced before assignment

Prediction Input Shape

Hi @JunMa11,

I would like to ask whether your prediction tensor in IoULoss and DiceLoss consists of only one class. For multi-class segmentation, can I use a softmax function before the calculation of the loss? I am using a different architecture than you, so I don't know your input shape.

thanks
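For reference, a common multi-class setup (a hypothetical sketch, not necessarily this repository's expected shape) applies softmax over the channel dimension so each channel holds one class's probability map:

```python
import torch

# Hypothetical shapes: (batch, classes, H, W) logits from the network.
logits = torch.randn(2, 4, 64, 64)
probs = torch.softmax(logits, dim=1)   # softmax over the class channel
# At every pixel, the 4 class probabilities now sum to 1, so each
# channel can be fed to a per-class Dice/IoU-style loss.
```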

About Sensitivity Specificity loss function implementation

Hello, I have a doubt about the Sensitivity-Specificity loss function's implementation.
[image]
I think the correct way to translate this sensitivity into prediction and ground_truth terms for a loss function would be similar to:

    true_positives = prediction * ground_truth
    false_negatives = (1 - prediction) * ground_truth

[image]
But in the implementation:

    Sensitivity = (square(ground_truth - prediction) * (1 - one_hot)) / (1 - one_hot)

How should I interpret these expressions (the same occurs with the Specificity)?
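One way to reconcile the two views: the Brosch et al. formulation is not literal sensitivity/specificity but a squared-error surrogate, with one term averaged over foreground voxels and one over background voxels. A hedged sketch of that reading (not the repository's exact code):

```python
import torch

def sensitivity_specificity_loss(pred, gt, r=0.5, eps=1e-5):
    """Sensitivity-Specificity loss sketch (after Brosch et al., 2015).

    Both terms are squared errors, not true-positive counts: one is
    averaged over foreground voxels (gt == 1), the other over
    background voxels (gt == 0). pred and gt share a shape, values in [0, 1].
    """
    sq_err = (gt - pred) ** 2
    sens_term = (sq_err * gt).sum() / (gt.sum() + eps)              # fg error
    spec_term = (sq_err * (1 - gt)).sum() / ((1 - gt).sum() + eps)  # bg error
    return r * sens_term + (1 - r) * spec_term
```

Under this reading the multiplication by the ground truth (or its complement) selects which voxels each term averages over, rather than computing TP/FN counts directly.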

Polyloss

Thank you for your work. Can you please provide a PyTorch implementation of PolyLoss for image segmentation? Thank you.

About distance map

Hi JunMa,

Thanks a lot for your code.
I am just wondering, for the boundary loss: should the distance map be a signed distance map (negative inside and positive outside the boundary) or an unsigned distance map (positive both inside and outside)?
Thanks in advance.

Cheers.
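For the Kervadec et al. boundary loss, the map is the signed one: the loss is just the mean of the predicted probabilities weighted by the ground-truth level-set map, so the negative-inside sign is what rewards probability mass placed inside the object. A minimal sketch (an illustration of the idea, not the repository's exact code):

```python
import torch

def boundary_loss(probs, sdf):
    """Boundary loss sketch (after Kervadec et al., MIDL 2019).

    probs: foreground probabilities, e.g. shape (B, H, W).
    sdf:   signed distance map of the GT, same shape,
           negative inside the object, positive outside.
    Probability mass inside (sdf < 0) decreases the loss; mass
    outside increases it, so the sign carries the supervision.
    """
    return (probs * sdf).mean()
```

With an unsigned map, predictions inside and outside the object would be penalized alike, which defeats the purpose.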

Accuracy_reimplementation

Hi, @JunMa11,

I tried reimplementing the liver segmentation using the Dice loss. Everything goes well, though the performance is a little lower than the results in your paper. I use the SoftDiceLoss and the LiTS dataset, and I conducted the experiments using the same splits as provided. When I evaluate the liver segmentation, I combine the liver and liver tumor into one channel. The average Dice result is 94.51.

Do you have any idea about that or suggestions?
Best,
Xi

Focal Loss

Hello, @JunMa11

I want to know whether focal loss is suitable for 3D image segmentation, and, for segmentation with only two classes (background and foreground), whether the network can have a single-channel output and still use focal loss.

Hoping for your help, thank you!

Data preprocessing

Hi, @JunMa11,

Thanks for your great work. I have a question about data preprocessing. Since you've provided the detailed splits for the datasets, could you also provide the code for the liver CT dataset processing, so we can compare our results with yours directly?

Best,
Xi

multi focal loss alpha

Hi, I'd like to ask about focal loss with multiple classes. For example, say I have four categories: background, C1, C2, and C3. If I set alpha to [0.25, 1, 1, 0.5], does that mean my background weight is 0.25 and the three foreground classes are weighted 1, 1, and 0.5, so that C1 and C2 have the same weight and are twice as heavy as C3?
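Under that reading, alpha is a per-class weight multiplying each pixel's focal term. A hedged sketch of such a multi-class focal loss (an illustration, not the repository's exact implementation):

```python
import torch
import torch.nn.functional as F

def multiclass_focal_loss(logits, target, alpha, gamma=2.0):
    """Focal loss with per-class alpha weights.

    logits: (B, C, ...), target: (B, ...) integer labels,
    alpha:  length-C sequence of per-class weights, indexed by label.
    """
    logp = F.log_softmax(logits, dim=1)
    # per-pixel log-probability of the true class
    logpt = logp.gather(1, target.unsqueeze(1)).squeeze(1)
    pt = logpt.exp()
    at = torch.as_tensor(alpha, dtype=logits.dtype)[target]  # per-pixel alpha
    return (-at * (1 - pt) ** gamma * logpt).mean()
```

With alpha = [0.25, 1, 1, 0.5], every background pixel's term is scaled by 0.25, C1 and C2 pixels by 1, and C3 pixels by 0.5, matching the interpretation in the question.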

Dataset issue, can help!

Hey Jun. @JunMa11 I hope you are fine. I thought I would ask you, as you have already tried the dataset conversion from Fabian's repo. I downloaded the cardiac training data from ACDC, but when I ran the code for the ACDC data preparation from his repo, it threw an error. I can't use the Hackathon data, as it has only 30 cases. Can you please clarify whether this is the right process? I just need to convert the data and strip the 4D down to 3D.

about gradient

I'm sorry to bother you. In SegLoss/losses_pytorch/hausdorff.py, I see that the function distance_field uses torch.no_grad, which means this function has no gradient. Will this have an impact on training? I don't know much about the impact; can you explain it to me?
Thanks.
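For intuition: wrapping a distance transform of the ground truth in torch.no_grad() is harmless when that map acts as a constant weight, because gradients still flow through the prediction term outside the block. A tiny, hypothetical check:

```python
import torch

def weighted_map_loss(pred, const_map):
    """pred requires grad; const_map is treated as a constant weight."""
    with torch.no_grad():
        weight = const_map * 2.0   # stand-in for the distance transform
    return (pred * weight).mean()

pred = torch.rand(4, requires_grad=True)
loss = weighted_map_loss(pred, torch.ones(4))
loss.backward()
# pred.grad is populated: the no_grad block only froze the weight map
```

The caveat is different for terms where the distance field is computed from the prediction itself; there the no_grad block does cut that path, and only the parts of the loss outside the block carry gradient.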

about multi-organ segmentation loss: Dice+Focal loss

Hi, Jun, thank you very much for your valuable work.
When you perform multi-organ segmentation with nnU-Net, you observed that the combination of Dice loss and focal loss achieved the best DSC. Can you share the parameters you used in the focal loss, such as alpha, gamma, and the learning rate?
Many thanks, waiting for your reply.

Boundary Loss Training ?

I want to use the boundary loss with my dataset, but the conversion from masks to contours can be a pain. Is there a simpler way to do it?


How long will it take to train with DiceHD on a multi-organ dataset with nnU-Net?

When training with nnU-Net using the DiceHD or DiceBD loss, GPU usage seems to be around 0% most of the time and around 40% only occasionally, and one epoch takes about 30 minutes. Is this the normal situation?

I built my own dataset, which is similar to the Multi-organ Abdominal CT dataset, and preprocessed it according to the nnU-Net instructions. Then I copied https://github.com/JunMa11/SegLoss/blob/master/test/nnUNetV1/network_training/nnUNetTrainer_DiceBD.py and the loss functions into the nnunet folder, and replaced all nnUNetTrainer with nnUNetTrainerV2. Is this the right procedure?

Thank you very much!

is nnUNetTrainerV2.py available?

Hi,

Thank you for these loss functions; they are very helpful. I am trying to run with nnU-Net v2, but I can't seem to find nnUNetTrainerV2.py in this repository. Is there one available, or would I have to adapt nnUNetTrainer.py to suit nnU-Net v2 instead of v1?

Thank you in advance.

Some Code issues!

Hi Jun @JunMa11

Thanks for taking the time to collect all the loss functions in one place. You are awesome! 😉

Can you please help me in understanding why you did them? (One by one). Will be great if you help.

  1. In the get_tp_fp_fn function, why did you do this?

Starting the tuple from 2, and then in the next lines (I see you are checking that the masks return the proper identity without squares), can you please explain the x_i * multiplication with all rows and columns of the mask?

    axes = tuple(range(2, len(net_output.size())))
    ...
    tp = torch.stack(tuple(x_i * mask[:, 0] for x_i in torch.unbind(tp, dim=1)), dim=1)

  2. In the soft Dice loss, can you please explain this?

    if self.batch_dice:
        axes = [0] + list(range(2, len(shp_x)))
    else:
        axes = list(range(2, len(shp_x)))
    if self.apply_nonlin is not None:
        x = self.apply_nonlin(x)

Where is the code for DiceFocal loss?

Hello, it's an amazing repository, thank you for sharing your code.

I have two questions:

I am trying to use the DiceTopK loss function and I am not sure I understand how to use its parameters correctly. What is the meaning of these parameters:

  • batch_dice,
  • do_bg.

I would be very happy if you could explain them to me.

Also, I searched your code for a DiceFocal implementation and I cannot find it. Can you please point me to a link?

Thank you very much,
Aviad

Please add a license to this repository

Hello,

First of all, thanks for your repository! I would like to use some of the loss functions that are in it in my project, and this is why I'm opening this issue to ask you to add a license to your repository.

As explained in GitHub's help site on license choosing, not providing a license for your repository means that "the work is under exclusive copyright by default. Unless you include a license that specifies otherwise, nobody else can copy, distribute, or modify your work without being at risk of take-downs, shake-downs, or litigation."

As per the way the README of this repository is written, I'm assuming that the lack of a license is more of an oversight than a wish to restrict the use of this repository; though if I'm wrong, and you actually do not want to add a license to your repository, could you state so in the README?

Thank you for your time and for considering this request!

Learning rate for dice compound losses

Hi, @JunMa11,

Your paper mentions that grid search is used to determine the learning rates of different losses for a fair comparison. Could you provide the learning rates of the different Dice compound losses (i.e. Dice, DiceCE, DiceBD, DiceHD, DiceTopK, DiceFocal), so that we can reimplement them more easily?

Best,
Xi

about DC_and_HD_loss

Hi, Jun, I want to use DC+HD as the loss to train a multi-organ segmentation model, but I have two problems:

  1. nnUNetTrainer_DiceHDBinary.py contains from nnunet.training.loss_functions.boundary_loss import DC_and_HDBinary_loss, but there is no DC_and_HDBinary_loss in boundary_loss.py, only DC_and_HD_loss. Are they the same?
  2. In the definition of DC_and_HD_loss:

        def forward(self, net_output, target):
            dc_loss = self.dc(net_output, target)
            hd_loss = self.hd(net_output, target)
            if self.aggregate == "sum":
                with torch.no_grad():
                    alpha = hd_loss / (dc_loss + 1e-5)
                result = alpha * dc_loss + hd_loss
            else:
                raise NotImplementedError("nah son")
            return result

    Here, alpha = hd_loss / (dc_loss + 1e-5) and result = alpha * dc_loss + hd_loss, so alpha * dc_loss = hd_loss / (dc_loss + 1e-5) * dc_loss. Since 1e-5 is small enough to ignore, this is approximately hd_loss, and the final result equals roughly 2 * hd_loss, not hd_loss + dc_loss, doesn't it?
    Thanks, waiting for your reply~
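The value estimate in the question checks out numerically, with one caveat: since alpha is computed under torch.no_grad(), the gradient is alpha * ∇dc_loss + ∇hd_loss, so the Dice term still contributes to training even though the scalar value is roughly 2 * hd_loss. A quick check of the value claim (plain Python, hypothetical loss magnitudes):

```python
def rescaled_sum(dc_loss, hd_loss, eps=1e-5):
    """Mimics the aggregation questioned above: alpha-scaled Dice + HD."""
    alpha = hd_loss / (dc_loss + eps)
    return alpha * dc_loss + hd_loss

# With typical loss magnitudes the value is essentially 2 * hd_loss:
print(rescaled_sum(0.4, 0.7))   # approximately 1.4, i.e. ~2 * 0.7
```

So the alpha factor acts as a magnitude balancer (it makes the two terms equal in value), not as a way to literally add dc_loss and hd_loss.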

IoULoss

Hi jun,

I want to ask about IoULoss: in def forward(self, x, y, loss_mask=None), how can your output be negative? And if it is not meant to be negative, why did you use return -iou?

thanks
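A plausible explanation: returning -iou turns a score to maximize into a quantity to minimize; the common alternative 1 - IoU differs only by a constant and yields the same gradients. A minimal sketch (an illustration, not the repository's exact code):

```python
import torch

def soft_iou(pred, target, eps=1e-5):
    """Soft IoU on probabilities in [0, 1]."""
    inter = (pred * target).sum()
    union = pred.sum() + target.sum() - inter
    return (inter + eps) / (union + eps)

def iou_loss(pred, target):
    # Minimizing -IoU maximizes IoU; 1 - IoU would behave identically,
    # since an additive constant does not change the gradients.
    return -soft_iou(pred, target)
```

So the loss is expected to be negative: it approaches -1 for a perfect overlap and 0 for no overlap.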

'--ndet' flag

Hi Jun,

Thank you so much for sharing this with everyone. I'm coming from the nnU-Net repository after researching some of the issues posted.

I'm getting ready to implement your repository but noticed the --ndet flag at the end of the suggested run command python run/run_training.py 3d_fullres nnUNetTrainer_Dice TaskXX_MY_DATASET FOLD --ndet. I'm not familiar with what this flag does, and I can't seem to find any information about it anywhere. My best guess is that it is for non-deterministic training but I think nnU-Net does this by default.

Any info is much appreciated. Take care!

Andres

HausdorffERLoss is not differentiable

I wrote a doctest for the erosion-based Hausdorff loss and found that it is not differentiable:

    Examples:
        >>> image_preds = torch.tensor([[
        ...    [0, 0.1, 0.1, 0],
        ...    [0, 0.3, 0.6, 0],
        ...    [0, 0.4, 0.8, 0],
        ...    [0, 0, 0, 0],
        ... ]], requires_grad=True)[None]
        >>> image_gt = torch.tensor([[
        ...    [0, 1, 0, 0],
        ...    [1, 1, 1, 0],
        ...    [1, 1, 1, 0],
        ...    [0, 0, 0, 0],
        ... ]])[None]
        >>> hdloss = HausdorffERLoss(from_logits=False)
        >>> hdloss(image_preds, image_gt)
        tensor(0.0625)
        >>> hdloss(image_preds, image_gt).backward()

I would suspect something is missing from your implementation.

Request to include FCM loss.

Hi @JunMa11 ,

I greatly appreciate the effort you've put into creating a library of loss functions for the community. I'm reaching out to see if you might consider adding a loss function that we've developed, the FCM loss, which is specifically designed for both unsupervised and semi-supervised registration tasks.
Paper: Medical Physics
Code: FCM Loss

Thank you!
Junyu

instance segmentation loss

Hey! First of all, thanks for this really nice summary of all the different losses! That's amazing. Since you are kind of an expert on all these losses, just one question:
Which loss do you suggest for instance segmentation where the dataset contains negative samples (i.e. no instance in the scene)? In several works these samples would be filtered out and not used in the loss; however, in my current case this is not possible. Dice loss? However, it cannot handle negative samples (since the GT mask is empty, there is no intersection no matter what is predicted).
Would be nice to read your opinion on that!
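One common answer (a sketch, not a definitive recommendation): keep a smooth term in both the numerator and denominator of the Dice loss, so empty ground-truth masks are well-defined, optionally combined with BCE to penalize false positives directly:

```python
import torch

def soft_dice_loss(pred, target, smooth=1.0):
    """Dice loss with a smooth term so empty GT masks are well-defined.

    If target and pred are both all zeros, dice = smooth / smooth = 1
    and the loss is 0; false positives on an empty mask shrink the
    dice ratio and therefore raise the loss.
    """
    inter = (pred * target).sum()
    dice = (2 * inter + smooth) / (pred.sum() + target.sum() + smooth)
    return 1 - dice
```

With this formulation, negative samples supply a usable training signal (punishing false positives) instead of a degenerate 0/0.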

Violin plot

Could you please provide the code for the violin plot? Thanks!
