Comments (20)
Hi, I really appreciate this wonderful work, but I ran into problems training on my custom dataset. It has only 2 classes (0: background; 1: target), and class 0 accounts for 95%+ of the pixels. I modified the config file as follows:
but the mIoU stays unchanged after a few epochs of training; the log is as follows:
Is there anything I missed or any config I overlooked? I'd appreciate any suggestions, thanks in advance!!
Hi, I was about to train on my own dataset too. It also has 2 classes, and the negative one is above 95% like yours.
I think we could share some training experience.
What do you say?
from swin-transformer-semantic-segmentation.
Hi, I'd like to share some of my configs with you. Besides the configs I mentioned above, I also tried changing the loss to LovaszLoss, but a new bug appeared: the loss became NaN. I also tried increasing the learning rate to 1e-4, but the training results were the same as the ones I posted above. However, another repository, https://github.com/JIA-HONG-CHU/Swin-Transformer-add-EncNet-DaNet-DraNet-for-semantic-segmentation-on-Statelite-Dataset, shows that Swin Transformer can be fine-tuned on datasets similar to mine. How is your training process going? : )
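For reference, a sketch of what a LovaszLoss fragment for the decode head might look like. Note that mmsegmentation's LovaszLoss requires `reduction='none'` (it handles its own reduction), and a missing setting like that, or a too-high learning rate, is a common source of NaN losses. The exact keys below assume a recent mmsegmentation; treat this as an illustration, not a verified fix:

```python
# Hypothetical loss_decode fragment; keys follow mmsegmentation's LovaszLoss.
loss_decode = dict(
    type='LovaszLoss',
    loss_type='multi_class',  # per-pixel multi-class variant
    classes='present',        # average only over classes present in the image
    per_image=False,
    reduction='none',         # required: LovaszLoss does its own reduction
    loss_weight=1.0)
```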
I've just looked into the code. Since I'm not familiar with mmcv, I have no information for you right now. Also, my images aren't at the pretrained 512x512 resolution, and they're infrared-like, so I think I'll train from scratch. Thanks for kindly sharing; I'll share my experience with you if I make any progress.
For your dataset with only two classes, maybe you can change "use_sigmoid=False" to "use_sigmoid=True". However, when I train on my own dataset, also with two classes, the loss always stays zero.
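A loss that is exactly zero often means every pixel is being ignored, e.g. when a label remapping (such as reduce_zero_label on a 0/1 mask) sends all or almost all labels to the ignore index. A small self-contained sketch in plain NumPy (not mmseg code) showing how a fully-ignored batch yields exactly 0:

```python
import numpy as np

def ce_loss(logits, labels, ignore_index=255):
    """Mean cross-entropy over non-ignored pixels; exactly 0.0 if all ignored."""
    mask = labels != ignore_index
    if not mask.any():
        return 0.0                        # nothing left to supervise
    z = logits[mask]
    z = z - z.max(axis=1, keepdims=True)  # stabilize the softmax
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-logp[np.arange(int(mask.sum())), labels[mask]].mean())

logits = np.array([[2.0, 0.0], [0.0, 2.0]])
print(ce_loss(logits, np.array([255, 255])))  # all pixels ignored -> 0.0
print(ce_loss(logits, np.array([0, 1])))      # normal supervision -> positive
```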
I've also tried your configs, and I ran into the 0-loss problem too.
So you have solved it? Could you tell me how you fixed the problem? Thank you!
I also have a custom dataset with 2 classes.
Could you help me?
- What config file did you use?
- What are the format and directory structure of the dataset for ADE20KDataset?
I have pairs of input images and mask images like this.
Thanks in advance.
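For what it's worth, mmseg-style datasets usually follow an images/annotations directory split. Here is a sketch of a layout and config fragment; the paths and dataset name are placeholders, and since ADE20KDataset hardcodes reduce_zero_label=True, a plain CustomDataset with explicit classes may be a safer fit for 2-class data:

```python
# Hypothetical layout (masks are single-channel PNGs whose pixel values
# are the class indices 0/1):
#
# data/my_dataset/
#   img_dir/train/   xxx.jpg ...
#   img_dir/val/
#   ann_dir/train/   xxx.png ...
#   ann_dir/val/
#
# Config fragment sketch, assuming mmseg's CustomDataset conventions:
data = dict(
    train=dict(
        type='CustomDataset',
        data_root='data/my_dataset',
        img_dir='img_dir/train',
        ann_dir='ann_dir/train',
        classes=('background', 'target'),
        reduce_zero_label=False))
```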
I tried focal loss on my own dataset and got some improvement.
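For anyone wanting to try the same, a hedged sketch of a focal-loss fragment. FocalLoss only exists in newer mmsegmentation releases, and these hyperparameters are illustrative, not tuned:

```python
# Hypothetical loss_decode fragment; focal loss down-weights easy
# (mostly background) pixels, which can help with 95%+ class imbalance.
loss_decode = dict(
    type='FocalLoss',
    use_sigmoid=True,  # mmseg's FocalLoss supports only the sigmoid form
    gamma=2.0,         # focusing parameter: higher = focus on hard pixels
    alpha=0.5,
    loss_weight=1.0)
```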
If you have 2 classes, background and target, and you use an ADE-style custom dataset, I think num_classes = 1 and use_sigmoid=True.
I got the same problem with my dataset (ADE20K config); after changing to reduce_zero_label=False, everything worked.
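That matches the dataset defaults: the ADE20K dataset class applies reduce_zero_label=True, which treats label 0 as "ignore" and shifts the remaining labels down by one, so in a 0/1 mask the background is ignored and the target becomes class 0, leaving class 1 unused. A small NumPy sketch mimicking that remapping (not the actual mmseg code):

```python
import numpy as np

def reduce_zero_label(seg, ignore_index=255):
    """Mimic mmseg's reduce_zero_label: 0 -> ignore, others shift down by 1."""
    seg = seg.astype(np.int64)
    seg[seg == 0] = ignore_index
    seg = seg - 1
    seg[seg == ignore_index - 1] = ignore_index
    return seg

mask = np.array([[0, 1], [1, 0]])  # binary background/target mask
print(reduce_zero_label(mask))
# background pixels become 255 (ignored) and the target becomes class 0,
# so with num_classes=2 class 1 never appears in the labels
```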
Did you also set num_classes = 1 and sigmoid=True?
@Jay-IPL Only num_classes = 2, sigmoid left untouched. Here is my config:
model = dict(
    backbone=dict(
        embed_dim=128,
        depths=[2, 2, 18, 2],
        num_heads=[4, 8, 16, 32],
        window_size=7,
        ape=False,
        drop_path_rate=0.3,
        patch_norm=True,
        use_checkpoint=False),
    decode_head=dict(
        in_channels=[128, 256, 512, 1024],
        num_classes=2,
        loss_decode=dict(
            type='CrossEntropyLoss',
            use_sigmoid=False,
            loss_weight=1.0,
            class_weight=[0.01, 1])),
    auxiliary_head=dict(
        in_channels=512,
        num_classes=2))
Got it. But in custom_dataset.py, did you set CLASSES = ('background', 'target') or CLASSES = ('target',)?
But I think using num_classes = 1 with sigmoid is the same as using num_classes = 2 with softmax; it's basically the same mathematically.
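That equivalence is easy to check numerically: a 2-way softmax over logits (z0, z1) gives the positive class exactly sigmoid(z1 - z0), so a single sigmoid output carries the same information as two softmax logits. A quick NumPy check:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax_pos(z0, z1):
    # probability of the positive class under a 2-way softmax
    m = np.maximum(z0, z1)                # subtract the max for stability
    e0, e1 = np.exp(z0 - m), np.exp(z1 - m)
    return e1 / (e0 + e1)

z = np.linspace(-6.0, 6.0, 13)
# softmax over [0, z] equals sigmoid(z); only the difference z1 - z0 matters
assert np.allclose(softmax_pos(np.zeros_like(z), z), sigmoid(z))
assert np.allclose(softmax_pos(z, z + 2.0), sigmoid(np.full_like(z, 2.0)))
```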
Yes, you are right. But in the original mmseg/datasets/ade.py, 'background' is not in CLASSES; CLASSES has 150 targets, and the ADE config file sets num_classes=150. So my question is: if you use num_classes=2 in your config file, your CLASSES should only have 1 target, right?
You're probably right; that makes sense.
I used classes=['background','target']
Have you solved the problem of the loss always staying zero? I met the same problem when training my own dataset.
Have you found any method to improve the results? I met the same problem.
Related Issues (20)
- How to convert pytorch to onnx? HOT 1
- key, val = kv.split('=', maxsplit=1) ValueError: not enough values to unpack (expected 2, got 1)
- where is get_started.md ? how to install the environment? HOT 4
- How to convert a trained swin.pth file to TensorRT for inference? Object detection has such a script; is there none for segmentation?
- question about the weight decay
- How to specify dataset path for training and validation
- I have a question, urgently asking for help!
- KeyError: "EncoderDecoder: 'SwinTransformer is not in the backbone registry'",hope help
- Error:lib2to3.pgen2.parse.ParseError: bad input: type=20, value='<', context=('', (61, 4)) HOT 3
- I can not train swinTransformer on ADE datasets with multi-gpus
- size
- When will SwinV2 be supported for segmentation?
- remote sensing image HOT 1
- if the input and output file is 'png', 'tif' format, what do I need to modify? And what about 16bit file?
- The training results can not reach the author's effect HOT 1
- Training a model with a new dataset HOT 2
- Welcome update to OpenMMLab 2.0
- Why this Segmentation task related code,
- Downloading pretrained files from Baidu
- Why mmcv.runner doesn't import