
acsconv's People

Contributors

a1302z, danilown, duducheng, sarthakpati, seanywang0408


acsconv's Issues

Mismatch Between Shapes of Kernels in Paper and Code

Hello,

Though the kernel that is supposed to map axial features is said to have dimensions (Co, Ci, k, k, 1) in the paper, the implementation weight_a = weight[0 : acs_kernel_split[0]].unsqueeze(2) makes this weight (Co, Ci, 1, k, k) instead. Only weight_c matches the paper, since weight_s is also not unsqueezed along the proper dimension.

I do not believe this matters for the experiments you have conducted, but for my experimental purposes I need to match the kernel partitions to views of the 2D slices, so I noticed this when I went to verify what is happening.
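The mismatch is easy to reproduce directly. A minimal sketch with a hypothetical kernel (Co=6, Ci=4, k=3 are arbitrary sizes, and the split index 2 is made up for illustration):

```python
import torch

Co, Ci, k = 6, 4, 3  # arbitrary sizes for illustration
weight = torch.randn(Co, Ci, k, k)  # a 2D conv kernel

# what the code does: unsqueeze(2) inserts the singleton dimension
# *before* the spatial dims, giving (Co, Ci, 1, k, k)
weight_code = weight[0:2].unsqueeze(2)

# what the paper states for the axial kernel: (Co, Ci, k, k, 1),
# i.e. the singleton appended as the last dimension
weight_paper = weight[0:2].unsqueeze(-1)

print(weight_code.shape)   # torch.Size([2, 4, 1, 3, 3])
print(weight_paper.shape)  # torch.Size([2, 4, 3, 3, 1])
```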

Waiting,

Hello

Really excited about this. Thanks for sharing your work; looking forward to more updates.

Can you share the code of i3d?

I'd like to try i3d as well as your method for medical image segmentation. Would you be willing to share the i3d pipeline? Thanks.

typo in eq1

[screenshot of Eq. 1 from the paper]
I believe you mean a, not v.
BTW, has this paper been published in any conference or journal?

Transpose Conv conversion

Hi, I'm using your library to convert segmentation models from 2D to 3D. Transposed convolution layers are common in segmentation, and your library does not seem to include this conversion. We forked your repo and implemented some changes to support these layers in the following repo. However, this code does not work. I have tried to debug it, and the problem seems to be in the file functional.py, in the acs_conv_f function, where the kernel is split into three parts (a, c, s) and the outputs of the three 3D convolutions are concatenated. When the same is done for transposed convolutions, the sizes of the produced outputs do not match, and an error is thrown because arrays with different dimensions cannot be concatenated.

Would you be able to shed some light on what we are implementing wrong?

Thank you in advance!
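One thing worth checking (an assumption on my part, not verified against the fork): PyTorch stores transposed-conv weights with the channel dimensions swapped relative to ordinary convolutions, so a split along dim 0, as acs_conv_f does, would partition input channels instead of output channels, which could make the three concatenated outputs disagree in size:

```python
import torch.nn as nn

conv = nn.Conv2d(4, 8, 3)
deconv = nn.ConvTranspose2d(4, 8, 3)

# Conv2d weight layout: (out_channels, in_channels, kH, kW)
print(conv.weight.shape)    # torch.Size([8, 4, 3, 3])

# ConvTranspose2d weight layout: (in_channels, out_channels, kH, kW)
print(deconv.weight.shape)  # torch.Size([4, 8, 3, 3])
```

If that is the issue, the a/c/s split for transposed convolutions would need to be taken along dim 1, or the weight transposed before splitting.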

Add conda recipe

It would be great if this was available on conda in addition to pip for the wider community.

Happy to work on this, if you want.

What is the default input shape for ConvNext?

Hi there.

Thanks for your excellent work!

I'm trying to modify the ConvNext in your repo into a feature extraction module. My inputs are 3 slices of grayscale medical images, concatenated into a 3 x 512 x 512 tensor. Thus, my dataloader returns data of shape [batch, 3, 512, 512].

This setting works with normal ConvNeXts, but for your version with ACSConv an error occurs because I don't know the expected input shape. I would like to know the default input size so that I can make use of the 3D context information.

My questions are as follows:

  • Does it have a shape of [Batch, Channels, Dimension, H, W] ? If not, what should it look like?

If so,

  • Should the channel be 1 if my images are in grayscale?
  • Does 'Dimension' mean the 3D context of the input images?

This question has been bothering me for a long time. Thanks for your consideration!
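For reference, PyTorch 3D layers, which the converted model is built from, take 5D input. A minimal sketch with a plain Conv3d and arbitrary sizes (not the actual ConvNeXt configuration):

```python
import torch

# a 3D network processes volumes shaped [batch, channels, D, H, W];
# for grayscale volumes the channel count is 1, and D is the number
# of slices, i.e. the 3D context
x = torch.randn(2, 1, 16, 64, 64)
conv3d = torch.nn.Conv3d(1, 8, kernel_size=3, padding=1)
print(conv3d(x).shape)  # torch.Size([2, 8, 16, 64, 64])
```

Under that layout, three grayscale slices would enter as [batch, 1, 3, 512, 512] rather than [batch, 3, 512, 512], with the slice axis treated as depth rather than channels.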

Raise NotImplementedError

Hi, I am trying to run the train_3d experiment. I found that in base_converter.py, calling convert_module raises NotImplementedError no matter what input module is given. This is quite confusing, as it terminates the program without any expected output. Would you mind telling me how to solve this?

'Conv2d' object has no attribute 'device'

Hello, thank you for sharing your work. I wanted to try your program but I got an AttributeError.

from torchvision.models import resnet18
from acsconv.converters import ACSConverter
model_2d = resnet18(pretrained=True)
model_3d = ACSConverter(model_2d)

with this error:

Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/IPython/core/interactiveshell.py", line 3427, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-126-2824c693a04c>", line 1, in <module>
    model_3d = ACSConverter(model_2d)
  File "/home/loiseau/.local/lib/python3.9/site-packages/acsconv/converters/acsconv_converter.py", line 29, in __init__
    model = self.convert_module(model)
  File "/home/loiseau/.local/lib/python3.9/site-packages/acsconv/converters/base_converter.py", line 30, in convert_module
    kwargs = {k: getattr(child, k) for k in arguments}
  File "/home/loiseau/.local/lib/python3.9/site-packages/acsconv/converters/base_converter.py", line 30, in <dictcomp>
    kwargs = {k: getattr(child, k) for k in arguments}
  File "/home/loiseau/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'Conv2d' object has no attribute 'device'
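A likely cause (an assumption, not verified against these exact versions): newer PyTorch adds `device`/`dtype` parameters to `Conv2d.__init__`, and `co_varnames` also picks up local variables, so the converter's introspection asks for names that are never stored as module attributes. A defensive sketch of the kwargs line that skips such names:

```python
import torch.nn as nn

child = nn.Conv2d(3, 8, kernel_size=3)

# constructor parameter names, as the converter extracts them
arguments = nn.Conv2d.__init__.__code__.co_varnames[1:]

# keep only names that actually exist on the module, skipping
# constructor-only names such as ``device``/``dtype`` or locals
kwargs = {k: getattr(child, k) for k in arguments if hasattr(child, k)}
print(kwargs["in_channels"], kwargs["out_channels"])  # 3 8
```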

AdaptiveMaxPooling always returns indices after conversion

Hi, great repository. I've found one issue: when an AdaptiveMaxPool layer is converted, it always returns the pooling indices afterwards.

Example code:

>>> import torch
>>> from acsconv import converters
The ``converters`` are currently experimental. It may not support operations including (but not limited to) Functions in ``torch.nn.functional`` that involved data dimension
>>> x2 = torch.randn(4,3,128,128)
>>> x3 = torch.randn(4,3,128,128,128)
>>> model2 = torch.nn.Sequential(torch.nn.Conv2d(3, 9, 3), torch.nn.AdaptiveMaxPool2d((32,32)))
>>> model2(x2).shape
torch.Size([4, 9, 32, 32])
>>> converter = converters.ACSConverter
>>> model3 = converter(model2)
>>> model3(x3).shape
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'tuple' object has no attribute 'shape'
>>> 

The problem, as far as I can see, is that _triple_same in the base converter file converts return_indices=False to return_indices=(False, False, False), which is not False and is interpreted by AdaptiveMaxPool3d as True.

A simple bugfix would be to add if not isinstance(kwargs[k], bool): inside the for loop over kwargs. I tested it and it seems to solve the problem. I'm happy to provide a pull request if you agree.
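The proposed guard can be sketched in isolation (a hypothetical stand-in for the converter's _triple_same helper, not the library's actual code):

```python
def triple_same(value):
    # expand a scalar argument to a 3-tuple for the 3D counterpart,
    # but pass booleans (e.g. return_indices=False) through unchanged:
    # (False, False, False) is truthy and would flip the flag on
    if isinstance(value, (bool, tuple)):
        return value
    return (value, value, value)

print(triple_same(2))      # (2, 2, 2)
print(triple_same(False))  # False
```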

LIDC-IDRI experiment - doubt

After training with "python train_segmentation.py", the README file does not describe the evaluation step or how to run the demo. I don't know how to evaluate the model or run the demo. Could you provide the details?

Pretrained resnet conversion AttributeError

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-4-2a0f9d424028> in <module>
      4 output_2d = model_2d(input_2d)
      5 
----> 6 model_3d = ACSConverter(model_2d)
      7 # once converted, model_3d is using ACSConv and capable of processing 3D volumes.
      8 B, C_in, D, H, W = (1, 3, 64, 64, 64)

~/miniconda3/envs/rENE/lib/python3.6/site-packages/ACSConv-0.1.0-py3.6.egg/acsconv/converters/acsconv_converter.py in __init__(self, model)
     27         """ Save the weights, convert the model to ACS counterpart, and then reload the weights """
     28         preserve_state_dict = model.state_dict()
---> 29         model = self.convert_module(model)
     30         model.load_state_dict(preserve_state_dict,strict=False) #
     31         self.model = model

~/miniconda3/envs/rENE/lib/python3.6/site-packages/ACSConv-0.1.0-py3.6.egg/acsconv/converters/base_converter.py in convert_module(self, module)
     24             if isinstance(child, nn.Conv2d):
     25                 arguments = nn.Conv2d.__init__.__code__.co_varnames[1:]
---> 26                 kwargs = {k: getattr(child, k) for k in arguments}
     27                 kwargs = self.convert_conv_kwargs(kwargs)
     28                 setattr(module, child_name, self.__class__.target_conv(**kwargs))

~/miniconda3/envs/rENE/lib/python3.6/site-packages/ACSConv-0.1.0-py3.6.egg/acsconv/converters/base_converter.py in <dictcomp>(.0)
     24             if isinstance(child, nn.Conv2d):
     25                 arguments = nn.Conv2d.__init__.__code__.co_varnames[1:]
---> 26                 kwargs = {k: getattr(child, k) for k in arguments}
     27                 kwargs = self.convert_conv_kwargs(kwargs)
     28                 setattr(module, child_name, self.__class__.target_conv(**kwargs))

~/miniconda3/envs/rENE/lib/python3.6/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
    946                 return modules[name]
    947         raise AttributeError("'{}' object has no attribute '{}'".format(
--> 948             type(self).__name__, name))
    949 
    950     def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:

AttributeError: 'Conv2d' object has no attribute 'kernel_size_'

model_2d = resnet18(pretrained=True)

The pretrained resnet is not converted properly.

lidc conversion to npz files

Hello

Thanks for sharing your work, could you please share as to how you generate the npz files for lidc dataset?

EDIT: I see that the npz files contain four items; I need some clarification:
1. Does every (80, 80, 80) volume have a nodule?
2. What are answer1, answer2, answer3, and answer4?

Thanks in advance.
