seglossodyssey's Issues

instance segmentation loss

Hey! First of all, thanks for this really nice summary of all the different losses! That's amazing. Since you are something of an expert on all these losses, just one question:
Which loss do you suggest for instance segmentation when the given dataset contains negative samples (i.e. no instance in the scene)? In several works these samples are filtered out and not used in the loss, but in my current case this is not possible. Dice loss? However, it cannot handle negative samples as-is: since the GT mask is empty, there is no intersection no matter what is predicted.
Would be nice to read your opinion on that!
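
A common workaround is to add a smoothing term to both the numerator and the denominator of the soft Dice, so that an empty ground-truth mask still yields a well-defined loss that rewards predicting nothing and penalizes false positives. A minimal sketch of that idea (my own, not code from this repository):

    import torch

    def soft_dice_loss(pred, target, smooth=1.0):
        # Soft Dice over flattened masks; the smooth term keeps the loss defined
        # (and equal to 0 for an all-zero prediction) when the GT mask is empty.
        pred = pred.reshape(-1)
        target = target.reshape(-1).float()
        intersection = (pred * target).sum()
        dice = (2.0 * intersection + smooth) / (pred.sum() + target.sum() + smooth)
        return 1.0 - dice

    empty_gt = torch.zeros(1, 16, 16)
    print(soft_dice_loss(torch.zeros(1, 16, 16), empty_gt))  # ~0: "no instance" is predicted correctly
    print(soft_dice_loss(torch.ones(1, 16, 16), empty_gt))   # ~1: spurious foreground is penalized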

HausdorffERLoss is not differentiable

I wrote a doctest for the erosion-based Hausdorff loss and found that it is not differentiable:

    Examples:
        >>> image_preds = torch.tensor([[
        ...    [0, 0.1, 0.1, 0],
        ...    [0, 0.3, 0.6, 0],
        ...    [0, 0.4, 0.8, 0],
        ...    [0, 0, 0, 0],
        ... ]], requires_grad=True)[None]
        >>> image_gt = torch.tensor([[
        ...    [0, 1, 0, 0],
        ...    [1, 1, 1, 0],
        ...    [1, 1, 1, 0],
        ...    [0, 0, 0, 0],
        ... ]])[None]
        >>> hdloss = HausdorffERLoss(from_logits=False)
        >>> hdloss(image_preds, image_gt)
        tensor(0.0625)
        >>> hdloss(image_preds, image_gt).backward()

I would suspect something is missing from your implementation.

Request to include FCM loss.

Hi @JunMa11 ,

I greatly appreciate the effort you've put into creating a library of loss functions for the community. I'm reaching out to see if you might consider adding a loss function that we've developed, the FCM loss, which is specifically designed for both unsupervised and semi-supervised registration tasks.
Paper: Medical Physics
Code: FCM Loss

Thank you!
Junyu

Polyloss

Thank you for your work. Could you please provide a PyTorch implementation of PolyLoss for image segmentation? Thank you!
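
In the meantime, here is a minimal Poly-1 sketch for per-pixel segmentation, following the formulation in the PolyLoss paper (cross-entropy plus epsilon * (1 - p_t)); the epsilon value and the (B, C, H, W) layout are assumptions, not settings taken from this repository:

    import torch
    import torch.nn.functional as F

    def poly1_cross_entropy(logits, target, epsilon=1.0):
        # Poly-1: per-pixel cross-entropy plus epsilon * (1 - p_t), averaged over all pixels.
        # logits: (B, C, H, W) raw scores, target: (B, H, W) integer labels.
        ce = F.cross_entropy(logits, target, reduction="none")      # (B, H, W)
        probs = torch.softmax(logits, dim=1)                        # (B, C, H, W)
        pt = probs.gather(1, target.unsqueeze(1)).squeeze(1)        # probability of the true class
        return (ce + epsilon * (1.0 - pt)).mean()

    logits = torch.randn(2, 3, 8, 8, requires_grad=True)
    target = torch.randint(0, 3, (2, 8, 8))
    poly1_cross_entropy(logits, target).backward()                  # differentiable end to end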

Please add a license to this repository

Hello,

First of all, thanks for your repository! I would like to use some of the loss functions that are in it in my project, and this is why I'm opening this issue to ask you to add a license to your repository.

As explained in GitHub's help site on license choosing, not providing a license for your repository means that "the work is under exclusive copyright by default. Unless you include a license that specifies otherwise, nobody else can copy, distribute, or modify your work without being at risk of take-downs, shake-downs, or litigation."

Judging by the way the README of this repository is written, I'm assuming that the lack of a license is more of an oversight than a wish to restrict the use of this repository; though if I'm wrong and you actually do not want to add a license, could you state so in the README?

Thank you for your time and for considering this request!

IoULoss

Hi Jun,

I want to ask about IoULoss: in def forward(self, x, y, loss_mask=None), how can the output be negative? And if it is not meant to be negative, why did you use return -iou?

thanks
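
For context, the IoU itself lies in [0, 1] and is something to maximize, so returning its negation (equivalently, 1 - IoU) gives the optimizer something to minimize; the printed loss is therefore negative by design. A minimal soft-IoU sketch of that convention (not necessarily identical to the repository's IoULoss):

    import torch

    def soft_iou_loss(pred, target, smooth=1e-5):
        # Negated soft IoU: minimizing this loss maximizes the overlap.
        pred = pred.reshape(pred.shape[0], -1)
        target = target.reshape(target.shape[0], -1).float()
        intersection = (pred * target).sum(dim=1)
        union = pred.sum(dim=1) + target.sum(dim=1) - intersection
        iou = (intersection + smooth) / (union + smooth)
        return -iou.mean()                           # in [-1, 0]; -1 means perfect overlap

    pred = torch.rand(2, 1, 8, 8, requires_grad=True)
    target = (torch.rand(2, 1, 8, 8) > 0.5).float()
    print(soft_iou_loss(pred, target))               # negative, matching `return -iou`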

about multi-organ segmentation loss: Dice+Focal loss

Hi, Jun, thank you very much for your valuable work.
When you performed multi-organ segmentation with nnU-Net, you observed that the combination of Dice loss and Focal loss achieved the best DSC. Could you share the parameters you used for the Focal loss, such as alpha, gamma, and the learning rate?
Many thanks, waiting for your reply.

Boundary Loss for Multi Class

The boundary loss and the example for computing the distance map (the input to the boundary loss) both mention that they are for binary masks.

How would I use the boundary loss in a multi-class scenario?
How would I compute the SDF map for multiple classes?

Thank you!
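
One common extension to several classes (a sketch of the usual recipe, not the boundary-loss authors' reference code) is to compute a signed distance map per foreground class, negative inside the object and positive outside, and then sum the per-class boundary terms, each being the mean of the predicted probability map for that class multiplied by its signed distance map:

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def multiclass_sdf(label_map, num_classes):
        # Signed distance map per class: negative inside the object, positive outside.
        # label_map: (H, W) or (D, H, W) integer array; returns (num_classes, *label_map.shape).
        sdfs = np.zeros((num_classes,) + label_map.shape, dtype=np.float32)
        for c in range(1, num_classes):                  # channel 0 (background) is left at zero
            mask = label_map == c
            if mask.any():
                outside = distance_transform_edt(~mask)  # distance to the object, from outside
                inside = distance_transform_edt(mask)    # distance to the background, from inside
                sdfs[c] = outside - inside               # < 0 inside, > 0 outside
        return sdfs

    label = np.zeros((32, 32), dtype=np.int64)
    label[8:16, 8:16] = 1
    label[20:28, 20:28] = 2
    print(multiclass_sdf(label, num_classes=3).shape)    # (3, 32, 32)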

about gradient

Sorry to bother you. In SegLoss/losses_pytorch/hausdorff.py, I see that the function distance_field uses torch.no_grad, which means it has no gradient. Will this have an impact on training? I don't know much about the implications. Could you explain it to me?
Thanks.
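
My understanding, illustrated with a sketch of the general pattern rather than the exact hausdorff.py code: the distance field is computed from thresholded masks with a NumPy distance transform, which is non-differentiable anyway, so torch.no_grad simply marks it as a constant per-pixel weight. The gradient still reaches the network through the (pred - target)^2 term that the weight multiplies, so training should not be affected:

    import numpy as np
    import torch
    from scipy.ndimage import distance_transform_edt

    @torch.no_grad()                                 # the EDT is NumPy-based and has no gradient anyway
    def distance_field(mask):
        # Distance transform of a thresholded mask, returned as a constant tensor.
        np_mask = mask.detach().cpu().numpy() > 0.5
        dist = distance_transform_edt(np_mask) + distance_transform_edt(~np_mask)
        return torch.from_numpy(dist.astype(np.float32))

    pred = torch.rand(1, 1, 16, 16, requires_grad=True)
    target = (torch.rand(1, 1, 16, 16) > 0.5).float()

    weight = distance_field(pred) ** 2 + distance_field(target) ** 2   # fixed per-pixel weights
    loss = ((pred - target) ** 2 * weight).mean()                      # gradient flows through (pred - target)^2
    loss.backward()
    print(pred.grad is not None)                                       # True: the network still receives gradients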

Dataset issue, can you help?

Hey Jun @JunMa11, I hope you are doing well. I thought I would ask you since you have already tried the dataset conversion from Fabian's repo. I downloaded the cardiac training data from ACDC, but when I ran the ACDC data-preparation code from his repo, it threw an error. I can't use the Hackathon data, as it has only 30 cases. Can you please clarify whether this is the right process? I just need to convert the data and strip the 4D volumes down to 3D.

About Sensitivity Specificity loss function implementation

Hello, I have a question about the implementation of the Sensitivity-Specificity loss function.
[screenshot of the Sensitivity equation]
I think the correct way to translate this Sensitivity into **prediction** and **ground_truth** terms for a loss function would be something like:
true_positives = prediction * ground_truth
false_negatives = (1 - prediction) * ground_truth
[screenshot of the implementation]
But in the implementation:
Sensitivity = (square(ground_truth - prediction) * (1 - one_hot)) / (1 - one_hot)
How should I interpret these expressions (the same question applies to the Specificity)?
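
For comparison, here is how the sensitivity-specificity loss is usually written (a sketch following Brosch et al., not a line-by-line copy of this repository's code): the squared error (ground_truth - prediction)^2 acts as the per-pixel error, and multiplying it by the one-hot ground truth (respectively by 1 - one_hot) restricts it to the foreground pixels for the sensitivity term (respectively the background pixels for the specificity term), before normalizing by the number of such pixels:

    import torch

    def sensitivity_specificity_loss(pred, target_onehot, r=0.05, eps=1e-5):
        # r * (error on GT-foreground pixels) + (1 - r) * (error on GT-background pixels).
        # pred, target_onehot: (B, C, ...) with pred already softmaxed.
        sq_err = (target_onehot - pred) ** 2
        axes = tuple(range(2, pred.dim()))           # spatial axes
        sens = (sq_err * target_onehot).sum(axes) / (target_onehot.sum(axes) + eps)
        spec = (sq_err * (1 - target_onehot)).sum(axes) / ((1 - target_onehot).sum(axes) + eps)
        return (r * sens + (1 - r) * spec).mean()

    pred = torch.softmax(torch.randn(2, 3, 8, 8), dim=1)
    target = torch.nn.functional.one_hot(torch.randint(0, 3, (2, 8, 8)), 3).permute(0, 3, 1, 2).float()
    print(sensitivity_specificity_loss(pred, target))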

Prediction Input Shape

Hi @JunMa11,

I would like to ask whether the prediction tensor in IoULoss and DiceLoss is expected to contain only one class. For multi-class segmentation, can I apply a softmax before computing the loss? I am using a different architecture than you, so I don't know your input shape.

thanks
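
As a small illustration of the usual convention (an assumption about shapes on my part, not a statement about your architecture): these losses generally take a (batch, class, spatial...) prediction, and for multi-class segmentation you can either apply the softmax yourself before calling the loss or pass it through the loss's apply_nonlin argument where available:

    import torch

    logits = torch.randn(2, 4, 64, 64)          # (batch, num_classes, H, W) network output
    probs = torch.softmax(logits, dim=1)        # per-pixel class probabilities, summing to 1 over dim=1
    target = torch.randint(0, 4, (2, 64, 64))   # integer labels, one-hot encoded inside the loss
    print(probs.shape, bool(torch.allclose(probs.sum(dim=1), torch.ones(2, 64, 64))))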

Accuracy_reimplementation

Hi, @JunMa11,

I tried reimplementing the liver segmentation using Dice loss. Everything went well, though the performance is a little lower than the results in your paper. I used the SoftDiceLoss and the LiTS dataset, and conducted the experiments with the same splits as provided. When I evaluate the liver segmentation, I combine the liver and liver tumor into one channel. The average Dice result is 94.51.

Do you have any idea about that or suggestions?
Best,
Xi

multi focal loss alpha

Hi, I'd like to ask about Focal Loss with multiple classes. For example, I have four categories: background, C1, C2, C3. If I set alpha to [0.25, 1, 1, 0.5], does that mean that my background weight is 0.25 and the three foreground classes are weighted 1, 1, 0.5, so that C1 and C2 have the same weight and are twice as heavy as C3?
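
The usual reading of a list-valued alpha is a per-class weight indexed by the true label, so [0.25, 1, 1, 0.5] would weight the background term by 0.25 and C1/C2/C3 by 1/1/0.5, making C1 and C2 twice as heavy as C3. A minimal sketch of that convention (not necessarily identical to this repository's FocalLoss):

    import torch
    import torch.nn.functional as F

    def focal_loss(logits, target, alpha, gamma=2.0):
        # Multi-class focal loss with a per-class alpha weight, indexed by the true label.
        # logits: (B, C, H, W), target: (B, H, W), alpha: sequence of length C.
        log_probs = F.log_softmax(logits, dim=1)
        logpt = log_probs.gather(1, target.unsqueeze(1)).squeeze(1)   # log p_t of the true class
        pt = logpt.exp()
        at = torch.as_tensor(alpha, dtype=logits.dtype)[target]       # per-pixel weight = alpha[label]
        return (-at * (1 - pt) ** gamma * logpt).mean()

    logits = torch.randn(2, 4, 8, 8)
    target = torch.randint(0, 4, (2, 8, 8))
    print(focal_loss(logits, target, alpha=[0.25, 1.0, 1.0, 0.5]))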

about DC_and_HD_loss

Hi, Jun, I want to use DC+HD as the loss to train a multi-organ segmentation model, but I ran into two problems:

  1. nnUNetTrainer_DiceHDBinary.py does from nnunet.training.loss_functions.boundary_loss import DC_and_HDBinary_loss,
    but there is no DC_and_HDBinary_loss in boundary_loss.py, only DC_and_HD_loss. Are they the same?
  2. In the definition of DC_and_HD_loss:

        def forward(self, net_output, target):
            dc_loss = self.dc(net_output, target)
            hd_loss = self.hd(net_output, target)
            if self.aggregate == "sum":
                with torch.no_grad():
                    alpha = hd_loss / (dc_loss + 1e-5)
                result = alpha * dc_loss + hd_loss
            else:
                raise NotImplementedError("nah son")
            return result
    Here,
    alpha = hd_loss / (dc_loss + 1e-5) and result = alpha * dc_loss + hd_loss,
    so alpha * dc_loss = hd_loss / (dc_loss + 1e-5) * dc_loss, where 1e-5 is small enough to ignore, which makes alpha * dc_loss roughly hd_loss. The final result then equals about 2 * hd_loss, not hd_loss + dc_loss, doesn't it?
    Thanks, waiting for your reply~
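
A quick numeric check of the arithmetic in point 2, with made-up loss values:

    dc_loss, hd_loss = 0.3, 0.12                     # made-up values
    alpha = hd_loss / (dc_loss + 1e-5)
    result = alpha * dc_loss + hd_loss
    print(result, 2 * hd_loss, dc_loss + hd_loss)    # ~0.24 vs 0.24 vs 0.42: result tracks 2 * hd_loss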

How long will it take to train with DiceHD on a multi-organ dataset with nnU-Net?

When training with nnU-Net using the DiceHD or DiceBD loss, the GPU usage seems to be around 0% most of the time and only occasionally around 40%. It looks like one epoch takes around 30 minutes. Is this the normal situation?

I built my own dataset, which is similar to Multi-organ Abdominal CT, and preprocessed it according to the nnU-Net instructions. Then I copied https://github.com/JunMa11/SegLoss/blob/master/test/nnUNetV1/network_training/nnUNetTrainer_DiceBD.py and the loss functions into the nnU-Net folder, and replaced all occurrences of nnUNetTrainer with nnUNetTrainerV2. Is this the right procedure?

Thank you very much!

Some Code issues!

Hi Jun @JunMa11

Thanks for taking the time to collect all the loss functions in one place. You are awesome! 😉

Can you please help me understand why you did the following (one by one)? It would be great if you could help.

  1. In the get_tp_fp_fn function, why did you do this?

You start the axes tuple from 2, and then in the next lines you seem to check that the masks are applied properly (without squaring). Can you please explain the multiplication of each x_i with all the rows and columns of the mask (x_i * mask[:, 0])?

    axes = tuple(range(2, len(net_output.size())))
    ......
    tp = torch.stack(tuple(x_i * mask[:, 0] for x_i in torch.unbind(tp, dim=1)), dim=1)

  2. In the soft Dice loss, can you please explain this?

    if self.batch_dice:
        axes = [0] + list(range(2, len(shp_x)))
    else:
        axes = list(range(2, len(shp_x)))

    if self.apply_nonlin is not None:
        x = self.apply_nonlin(x)
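
For what it is worth, here is a small sketch of my reading of those lines, assuming the usual (B, C, X, Y, Z) layout: axes starting at 2 are the spatial axes, so summing tp/fp/fn over them gives one value per sample and per class; with batch_dice the batch axis 0 is included as well, so the whole batch is treated as a single volume; apply_nonlin just turns raw logits into probabilities first; and the x_i * mask[:, 0] multiplication zeroes out voxels excluded by loss_mask before the sums:

    import torch

    net_output = torch.softmax(torch.randn(2, 3, 4, 4, 4), dim=1)      # (B, C, X, Y, Z) probabilities
    y_onehot = torch.nn.functional.one_hot(
        torch.randint(0, 3, (2, 4, 4, 4)), 3).permute(0, 4, 1, 2, 3).float()
    loss_mask = torch.ones(2, 1, 4, 4, 4)                               # a 0 here means "ignore this voxel"

    spatial_axes = tuple(range(2, net_output.dim()))                    # (2, 3, 4): sum out X, Y, Z only
    batch_axes = (0,) + spatial_axes                                    # batch_dice=True sums out B as well

    tp = (net_output * y_onehot * loss_mask).sum(spatial_axes)          # (B, C): soft TP per sample and class
    tp_batch = (net_output * y_onehot * loss_mask).sum(batch_axes)      # (C,): soft TP per class over the batch
    print(tp.shape, tp_batch.shape)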

Data preprocessing

Hi, @JunMa11,

Thanks for your great work. I have a question about data preprocessing: since you've provided the detailed splits for the datasets, could you also provide the code for processing the liver CT dataset, so we can compare our results with yours directly?

Best,
Xi

About distance map

Hi JunMa,

Thanks a lot for your code.
I am just wondering, for the boundary loss, should the distance map be a signed distance map (negative inside and positive outside the boundary) or an unsigned distance map (positive both inside and outside)?
Thanks in advance.

Cheers.

is nnUNetTrainerV2.py available?

Hi,

Thank you for these loss functions, they are very helpful. I am trying to run with nnUNetv2, but I can't seem to find nnUNetTrainerV2.py in this repository. Is there one available, or would I have to change nnUNetTrainer.py to suit nnUNetv2 instead of nnUNetv1?

Thank you in advance.

Violin plot

Could you please provide the code for the violin plot? Thanks!
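
Not the authors' plotting script, but a minimal sketch of how such a per-loss DSC violin plot is typically drawn with seaborn; the loss names and the random DSC values below are placeholders:

    import numpy as np
    import pandas as pd
    import seaborn as sns
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    df = pd.DataFrame({                                      # placeholder: one DSC value per case per loss
        "loss": np.repeat(["Dice", "DiceCE", "DiceFocal", "DiceTopK"], 50),
        "DSC": np.clip(rng.normal(0.93, 0.03, 200), 0, 1),
    })

    ax = sns.violinplot(data=df, x="loss", y="DSC", cut=0)   # cut=0 keeps the violins inside the data range
    ax.set_ylabel("DSC")
    plt.tight_layout()
    plt.savefig("violin_dsc.png", dpi=300)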

Learning rate for dice compound losses

Hi, @JunMa11,

Your paper mentions that a grid search was used to determine the learning rates of the different losses for a fair comparison. Could you provide the learning rates of the different Dice compound losses (i.e. Dice, DiceCE, DiceBD, DiceHD, DiceTopK, DiceFocal), so that reimplementation is easier?

Best,
Xi

when `apply_nonlin=None` calculation of GDiceLoss fails because softmax_output is not created

When apply_nonlin=None, the GDiceLoss calculation fails:

Traceback (most recent call last):
  File "path/loser.py", line 181, in <module>
    loss = criterion(prediction, label)
  File "path/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "path/SegLoss/losses_pytorch/dice_loss.py", line 123, in forward
    intersection: torch.Tensor = w * einsum("bcxyz, bcxyz->bc", softmax_output, y_onehot)
UnboundLocalError: local variable 'softmax_output' referenced before assignment
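
A minimal sketch of one way to guard against this (a hypothetical helper that mirrors the structure implied by the traceback, not a verified patch of dice_loss.py): when apply_nonlin is None, fall back to the raw network output, i.e. assume the caller already applied a softmax or sigmoid:

    import torch

    def _maybe_apply_nonlin(net_output, apply_nonlin=None):
        # The guard GDiceLoss.forward would need before using softmax_output.
        if apply_nonlin is not None:
            softmax_output = apply_nonlin(net_output)
        else:
            softmax_output = net_output      # assume probabilities were produced by the caller
        return softmax_output

    x = torch.randn(1, 3, 8, 8)
    print(_maybe_apply_nonlin(x, None).shape)                                  # no UnboundLocalError
    print(_maybe_apply_nonlin(x, lambda t: torch.softmax(t, dim=1)).shape)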

'--ndet' flag

Hi Jun,

Thank you so much for sharing this with everyone. I'm coming from the nnU-Net repository after researching some of the issues posted.

I'm getting ready to use your repository, but I noticed the --ndet flag at the end of the suggested run command python run/run_training.py 3d_fullres nnUNetTrainer_Dice TaskXX_MY_DATASET FOLD --ndet. I'm not familiar with what this flag does, and I can't seem to find any information about it anywhere. My best guess is that it is for non-deterministic training, but I think nnU-Net does this by default.

Any info is much appreciated. Take care!

Andres

Boundary Loss Training?

I want to use the boundary loss with my dataset, but the conversion from masks to contours can be a pain. Is there a simpler way to do it?

Where is the code for DiceFocal loss?

Hello, this is an amazing repository, thank you for sharing your code.

I have two questions:

I am trying to use the DiceTopK loss function and I am not sure I understand how to use its parameters correctly.
What is the meaning of these parameters:

  • batch_dice,
  • do_bg.

I would be very happy if you could explain them to me.

I also searched your code for an implementation of DiceFocal but could not find it. Can you please point me to a link?

Thank you very much,
Aviad

Focal Loss

Hello, @JunMa11

I want to know whether focal loss is suitable for 3D image segmentation, and, for segmentation with only two classes (background and foreground), whether the network can have a single-channel output and still use focal loss.

Hoping for your help, thank you!
