
Comments (14)

kathyliu579 commented on July 30, 2024

Also, when I run the 2nd step of the fundus task, there are some problems:

(torch17) qianying@merig:~/PycharmProjects/segtran-master/code$ python3 train2d.py --split all --maxiter 3000 --task fundus --net unet-scratch --ds train,valid,test --polyformer source --cp ../model/unet-scratch-refuge-train,valid,test-06072104/iter_7000.pth --sourceopt allpoly

Traceback (most recent call last):
  File "train2d.py", line 939, in <module>
    net = VanillaUNet(n_channels=3, num_classes=args.num_classes, 
  File "/home/qianying/PycharmProjects/segtran-master/code/networks/unet2d/unet_model.py", line 32, in __init__
    self.polyformer = Polyformer(feat_dim=64, args=polyformer_args)
  File "/home/qianying/PycharmProjects/segtran-master/code/networks/polyformer.py", line 117, in __init__
    polyformer_layers.append( PolyformerLayer(str(i), config) )
  File "/home/qianying/PycharmProjects/segtran-master/code/networks/polyformer.py", line 24, in __init__
    self.in_ator_trans  = CrossAttFeatTrans(config, name + '-in-squeeze')
  File "/home/qianying/PycharmProjects/segtran-master/code/networks/segtran_shared.py", line 502, in __init__
    self.out_trans  = ExpandedFeatTrans(config,  name)
  File "/home/qianying/PycharmProjects/segtran-master/code/networks/segtran_shared.py", line 344, in __init__
    if not config.use_mince_transformer or config.mince_scales is None:
AttributeError: 'EasyDict' object has no attribute 'use_mince_transformer'

And I found that these options are defined earlier in the code:

############## Mince transformer settings ##############
parser.add_argument("--mince", dest='use_mince_transformer', action='store_true',
                    help='Use Mince (Multi-scale) Transformer to save GPU RAM.')
parser.add_argument("--mincescales", dest='mince_scales', type=str, default=None, 
                    help='A list of numbers indicating the mince scales.')
parser.add_argument("--minceprops", dest='mince_channel_props', type=str, default=None, 
                    help='A list of numbers indicating the relative proportions of channels of each scale.')

Emmm, so what's happening here?
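(For context, flags like --mincescales take a comma-separated string. A minimal sketch of how such a string could be parsed into a list of ints; this is my own illustration, not code from the repo:)

```python
def parse_int_list(s):
    """Parse a comma-separated flag value like "1,2,3,4" into [1, 2, 3, 4].

    Returns None when the flag was left at its default (None).
    """
    if s is None:
        return None
    return [int(tok) for tok in s.split(',')]

print(parse_int_list("1,2,3,4"))  # [1, 2, 3, 4]
```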

from segtran.

kathyliu579 commented on July 30, 2024

Ohh, I saw the previous issue.

        # if not config.use_mince_transformer or config.mince_scales is None:
        self.num_scales     = 0
        self.mince_scales   = None
        self.mince_channels = None
        # else:
        #     # mince_scales: [1, 2, 3, 4...]
        #     self.mince_scales   = config.mince_scales
        #     self.num_scales     = len(self.mince_scales)
        #     self.mince_channel_props = config.mince_channel_props
        #     self.mince_channel_indices, mince_channel_nums = \
        #         fracs_to_indices(self.feat_dim, self.mince_channel_props)

Now I've revised it like this. Is it right?

And then some other errors occur:

File "/home/qianying/PycharmProjects/segtran-master/code/networks/segtran_shared.py", line 506, in __init__
    self.keep_attn_scores = config.use_attn_consist_loss
AttributeError: 'EasyDict' object has no attribute 'use_attn_consist_loss'

How do I fix it?
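One workaround I can think of (a sketch of my own, using a plain namespace as a stand-in for EasyDict; the attribute names come from the tracebacks above) is to backfill defaults on the config before building the model, instead of commenting code out:

```python
from types import SimpleNamespace  # plain stand-in for EasyDict in this sketch

config = SimpleNamespace(feat_dim=64)  # pretend this is the polyformer config

# Backfill attributes that the model code reads but the caller never set.
defaults = {
    'use_mince_transformer': False,
    'mince_scales': None,
    'use_attn_consist_loss': False,
}
for name, value in defaults.items():
    if not hasattr(config, name):
        setattr(config, name, value)

print(config.use_attn_consist_loss)  # False
```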


kathyliu579 commented on July 30, 2024

I also have a question on fine-tuning "k".
A polyformer layer consists of two sub-transformers, 1 and 2. Does this paper only fine-tune the k of sub-transformer 1?
Because in the code, I only see:

            for poly_opt_mode in poly_opt_modes:
                if poly_opt_mode == 'allpoly':
                    optimized_params += [ translayers.named_parameters() ]
                elif poly_opt_mode == 'inator':
                    optimized_params += [ translayer.in_ator_trans.named_parameters() for translayer in translayers ]
                elif poly_opt_mode == 'k':
                    optimized_params += [ translayer.in_ator_trans.key.named_parameters()   for translayer in translayers ]
                elif poly_opt_mode == 'v':
                    optimized_params += [ translayer.in_ator_trans.out_trans.first_linear.named_parameters() for translayer in translayers ]
                elif poly_opt_mode == 'q':
                    optimized_params += [ translayer.in_ator_trans.query.named_parameters() for translayer in translayers ]

And in_ator_trans is sub-transformer 1, right?
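(For intuition, selecting only the "k" parameters amounts to filtering named parameters by their submodule prefix. A toy sketch, with a plain dict standing in for a model's named_parameters(); the names are made up for illustration:)

```python
# Toy parameter names, loosely modeled on a polyformer layer.
params = {
    'in_ator_trans.key.weight':   'K1',
    'in_ator_trans.query.weight': 'Q1',
    'ator_out_trans.key.weight':  'K2',
}

def select(params, prefix):
    """Keep only parameters whose name starts with the given prefix."""
    return {n: p for n, p in params.items() if n.startswith(prefix)}

print(select(params, 'in_ator_trans.key'))  # {'in_ator_trans.key.weight': 'K1'}
```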


askerlee commented on July 30, 2024

Thanks for reporting the bug. I've just corrected "refuge" to "fundus". Also I've simplified the polyformer config.
Yes, you are right. Fine-tuning k only fine-tunes the k of sub-transformer 1.



askerlee commented on July 30, 2024

"Also I've simplified the polyformer config." So which file should I replace?

You can just do a "git pull origin master" to update the code.

In addition, may I ask why q and k are shared in training?

Yes correct. It's explained in the IJCAI paper, page 3:

  • In traditional transformers, the key and query projections are independently learned, enabling them to capture asymmetric relationships between tokens in natural language. However, the relationships between image units are often symmetric, such as whether two pixels belong to the same segmentation class... In both blocks, the query projections and key projections are tied to make the attention symmetric, for better modeling of the symmetric relationships between image units.
  • Empirically, tying q and k leads to better performance on medical image segmentation (I haven't tried more general segmentation tasks other than medical).
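The symmetry can be checked directly: with tied projections W_q = W_k = W, the raw score matrix S = (XW)(XW)^T is symmetric. A tiny numerical sketch in plain Python (made-up 2x2 numbers, no framework):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(row) for row in zip(*A)]

X = [[1.0, 2.0], [3.0, 4.0]]    # two tokens, two features
W = [[0.5, -1.0], [2.0, 0.25]]  # the shared q/k projection

P = matmul(X, W)                # q == k because the projection is tied
S = matmul(P, transpose(P))     # attention scores before softmax

assert S[0][1] == S[1][0]       # symmetric: score(i, j) == score(j, i)
```

With untied W_q != W_k, S = (XW_q)(XW_k)^T is generally not symmetric, which is the asymmetry the paper argues image units don't need.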

Why only fine-tune k of transformer 1?

Because empirically when I just fine-tune k of transformer 1, it already performs well. I didn't try to fine-tune both, and I'm not sure how to intuitively understand the benefits of fine-tuning both layers for domain adaptation.


askerlee commented on July 30, 2024

In addition, the poly dataset I downloaded has different folders... can you please upload your processed data?

You mean polyp?
For people in China:
https://pan.baidu.com/s/1TuiPyQirN4J2hQfQxMMHkQ?pwd=whjl
For people in other countries:
https://www.dropbox.com/s/s5v2kotxtvruigp/polyp.tar?dl=0



kathyliu579 commented on July 30, 2024

In addition, can you update the commands for the polyp dataset?
I am not sure about the commands for steps 3 and 4 (training and testing on the target domain). Can you have a look? If they are right, you could add them to the README.

python3 train2d.py --task polyp --ds CVC-300 --split train --samplenum 5 --maxiter 1600 --saveiter 40 --net unet-scratch --cp ../model/unet-scratch-polyp-CVC-ClinicDB-train,Kvasir-train-06101057/iter_500.pth --polyformer target --targetopt k --bnopt affine --adv feat --sourceds CVC-ClinicDB-train,Kvasir-train --domweight 0.002 --bs 3 --sourcebs 2 --targetbs 2

I am especially not sure about "--sourceds".

python3 test2d.py --gpu 1 --ds CVC-300 --split test --samplenum 5 --bs 6 --task polyp --cpdir .. --net unet-scratch --polyformer target --nosave --iters 40-1600,40

I am especially not sure about "--split".


kathyliu579 commented on July 30, 2024

Also, I have 2 other questions.

  1. I noticed that only the ClinicDB and Kvasir training sets seem to be used for training. Don't we use the test sets?
  2. Why for fundus do we train on source data with "ds" = train,valid,test, but when training on the target domain (step 3), "sourceds" only includes "train", not "train,valid,test"?


nguyenlecong commented on July 30, 2024

Also, I have 2 other questions.

  1. I noticed that only the ClinicDB and Kvasir training sets seem to be used for training. Don't we use the test sets?
  2. Why for fundus do we train on source data with "ds" = train,valid,test, but when training on the target domain (step 3), "sourceds" only includes "train", not "train,valid,test"?

Excuse me, do you have a problem with the loss function when training polyformer (source), as shown?
[image]

If not, can you show the loss function and the command line you use?
This is my command line:
!python3 train2d.py --task polyp --split all --maxiter 3000 --net unet-scratch --polyformer source --modes 2 --ds CVC-ClinicDB-train,Kvasir-train --cp ../model/unet-scratch-polyp-CVC-ClinicDB-train,Kvasir-train-06111827/iter_14000.pth --sourceopt allpoly
Thank you!


kathyliu579 commented on July 30, 2024

Excuse me, do you have a problem with the loss function when training polyformer (source), as shown? [image]

If not, can you show the loss function and the command line you use? This is my command line: !python3 train2d.py --task polyp --split all --maxiter 3000 --net unet-scratch --polyformer source --modes 2 --ds CVC-ClinicDB-train,Kvasir-train --cp ../model/unet-scratch-polyp-CVC-ClinicDB-train,Kvasir-train-06111827/iter_14000.pth --sourceopt allpoly Thank you!

Hi, may I ask how you drew the loss function? I haven't seen it. If you tell me, I can check mine.


nguyenlecong commented on July 30, 2024


Hi, may I ask how you drew the loss function? I haven't seen it. If you tell me, I can check mine.

You can load the log from the directory ../model/*/log/event... like this:

import pandas as pd
from tensorboard.backend.event_processing import event_accumulator

ea = event_accumulator.EventAccumulator(logdir)
ea.Reload()
loss = pd.DataFrame(ea.Scalars('loss/loss'))

There are also loss/total_ce_loss and loss/total_dice_loss.

