shachoi / hanet
Official PyTorch implementation of HANet (CVPR 2020)
License: Other
Hi,
I am just wondering what the easiest way is to feed the network images from a different dataset and obtain the output masks. I'd appreciate any help.
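The repo's own evaluation scripts are the authoritative path, but a minimal single-image inference loop generally looks like the sketch below. Note the `segment` helper and the dummy 1x1-conv network are hypothetical stand-ins: the real HANet forward also expects extra keyword arguments such as `pos=(pos_h, pos_w)`, and you would build the released architecture and load a snapshot instead.

```python
import torch
import torch.nn as nn

def segment(net, image_tensor):
    """image_tensor: (3, H, W) float tensor, normalized the same way as the
    training data; returns an (H, W) mask of predicted class indices."""
    net.eval()
    with torch.no_grad():
        logits = net(image_tensor.unsqueeze(0))  # (1, num_classes, H, W)
    return logits.argmax(dim=1).squeeze(0)

# Dummy 1x1-conv "network" just to demonstrate the shapes; a real run would
# construct DeepR101V3PlusD_HANet_OS8 and load a released checkpoint.
dummy = nn.Conv2d(3, 19, kernel_size=1)
mask = segment(dummy, torch.randn(3, 64, 64))
```

The returned mask holds Cityscapes trainIds (0-18), which you can then color-map or save as a PNG.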
Dear Sir,
As mentioned in the question: how is the location code calculated? Will it improve the results? It looks like you didn't use it in training.
How to add the location code in HANet
How to download the BDD-100K for segmentation (7000 for training, 1000 for validation)?
ImportError: cannot import name 'cfg' from 'config' (/home/jbd/anaconda3/lib/python3.8/site-packages/config/__init__.py)
So, what are `config` and `cfg`? Are they supposed to come from a config.py file in this repo, or from an installed package?
How do I run a test? Via the scripts/submit scripts?
Because I want to train with my own data, I'd like to ask what the data composing those tfrecords are.
I have downloaded gtFine first,
but not leftImg8bit (it's too large).
May I ask whether the images under train/test/val are the raw RGB .png images?
cityscapes
└ leftImg8bit_trainvaltest
└ leftImg8bit
└ train
└ val
└ test
└ gtFine_trainvaltest
└ gtFine
└ train
└ val
└ test
How do you calculate the average number of pixels assigned to each class contained in a single image? Could you please give the precise numbers in Figure 1(a)? Thanks a lot!
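For anyone wanting to reproduce such a statistic themselves, one plausible reading (an assumption, not necessarily the paper's exact protocol) is to average each class's pixel count over the images in which that class actually appears:

```python
import numpy as np

def avg_pixels_per_class(label_maps, num_classes=19):
    """Average pixel count per class over the images where the class appears.

    `label_maps` is an iterable of 2-D arrays of class indices (e.g. Cityscapes
    trainIds; an ignore label such as 255 is simply never counted).
    """
    totals = np.zeros(num_classes, dtype=np.float64)
    counts = np.zeros(num_classes, dtype=np.float64)
    for lbl in label_maps:
        lbl = np.asarray(lbl)
        for c in range(num_classes):
            n = int((lbl == c).sum())
            if n:
                totals[c] += n
                counts[c] += 1
    return totals / np.maximum(counts, 1)  # classes never seen average to 0
```

Averaging over all images (including those where the class is absent) is the other natural convention and gives smaller numbers for rare classes.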
Hi, sorry to bother. I want to confirm some details about the ablation studies in your paper concerning Coordconv. It says:
To compare with CoordConv [26], we conduct experiments by replacing standard convolutional layers after the backbone with CoordConv.
Hello! I would like to know whether the number 128 in HAconv and PosEncoding1D corresponds to the 128 in pos_h?
Lines 55 to 64 in 9c31ef2
Lines 51 to 56 in 9c31ef2
Line 740 in 9c31ef2
self.pos_h = torch.arange(0, 64).unsqueeze(0).unsqueeze(2).expand(-1,-1,2048) # 0 ~ 127
Also, it looks like you set up two optimizers and schedulers: one for the model's parameters (optimizer, scheduler) and one for HANet's parameters (optimizer_at, scheduler_at). But in terms of configuration they are identical except for weight_decay. Why is this? Is it really necessary to set a different weight_decay for the HA module?
Have you ever tried to experiment on a single GPU?
In your code:
self.attention_first = nn.Sequential(
    nn.Conv1d(in_channels=in_channel, out_channels=mid_1_channel,
              kernel_size=1, stride=1, padding=0, bias=False),
    Norm2d(mid_1_channel),
    nn.ReLU(inplace=True))
By default, Norm2d is BatchNorm2d, and BatchNorm2d expects a 4D input, but here it seems to receive the 3D output of the Conv1d.
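The dimensionality mismatch is easy to reproduce in isolation. A common fix (an assumption about intent, not the authors' code) is to use BatchNorm1d after a Conv1d, or to add a dummy spatial axis so BatchNorm2d sees 4D input:

```python
import torch
import torch.nn as nn

x = torch.randn(2, 32, 10)  # (N, C, L): typical 3D output of a Conv1d

bn1d = nn.BatchNorm1d(32)   # BatchNorm1d accepts (N, C, L) directly
y1 = bn1d(x)

bn2d = nn.BatchNorm2d(32)   # BatchNorm2d needs (N, C, H, W)
y2 = bn2d(x.unsqueeze(-1)).squeeze(-1)  # add/remove a dummy width axis
```

Both variants normalize per channel over the remaining axes, so the outputs have the same shape as the input.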
I want to try the model on Windows, which does not support distributed training. I changed
net = torch.nn.SyncBatchNorm.convert_sync_batchnorm(net)
into
net = torch.nn.BatchNorm2d(net)
but I got an error: "TypeError: new(): data must be a sequence (got DeepV3PlusHANet)".
What should I do to run the code on Windows, and how do I replace SyncBatchNorm with BatchNorm2d?
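BatchNorm2d is a layer class, not a converter, so `torch.nn.BatchNorm2d(net)` cannot work. The simplest route on Windows is to skip the `convert_sync_batchnorm` call entirely, since the model is built with ordinary BatchNorm2d before that conversion. If you need to revert an already-converted model, a recursive swap along these lines is a common sketch (`revert_sync_batchnorm` is not a PyTorch API; it just mirrors the structure of `convert_sync_batchnorm`):

```python
import torch
import torch.nn as nn

def revert_sync_batchnorm(module):
    # Recursively swap every SyncBatchNorm for an equivalent BatchNorm2d,
    # carrying over the affine parameters and running statistics.
    mod = module
    if isinstance(module, nn.SyncBatchNorm):
        mod = nn.BatchNorm2d(module.num_features, module.eps, module.momentum,
                             module.affine, module.track_running_stats)
        if module.affine:
            mod.weight.data = module.weight.data.clone()
            mod.bias.data = module.bias.data.clone()
        mod.running_mean = module.running_mean
        mod.running_var = module.running_var
        mod.num_batches_tracked = module.num_batches_tracked
    for name, child in module.named_children():
        mod.add_module(name, revert_sync_batchnorm(child))
    return mod
```

Usage: `net = revert_sync_batchnorm(net)` in place of the `convert_sync_batchnorm` line; you would also drop the `--syncbn` flag and the distributed launcher.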
I want to test my own pictures instead of pictures from Cityscapes. Where should I change the picture paths?
Great work! Thanks for open-sourcing it :thumbsup:
However, I have some questions about the parameters:
how do I obtain the parameters `hanet_set`, `hanet_pos`, `pos_rfactor`, and `no_pos_dataset`,
and what do they mean?
HANet/scripts/train_r101_os8_hanet_best.sh
Lines 26 to 28 in 1d478fb
Lines 642 to 697 in 1d478fb
Will omitting the parameters pos_h and pos_w affect HANet's results? And could you tell me how to derive these parameters for another dataset?
Hi, Dear Sir. I'm still confused as to why we need to cross blocks to multiply the attention map.
This is your pipeline and HANet model. As stated in the paper, there are lower- and higher-level feature maps in semantic segmentation networks.
I want to know why we can't compute the attention within a single feature map, as SENet does.
Is there any implied meaning behind this, or has it been experimentally proven to be ineffective?
I ran the command:
CUDA_VISIBLE_DEVICES=0,1,2,3 ./scripts/train_r101_o
and the following is the output. The error is CUDA out of memory. I changed the per-GPU training batch size to 2 and it still fails; if I change it to 1, there is another error. Please help me, thanks!
Total world size: 4
My Rank: 0
args.dist_url:tcp://127.0.0.1:8043
Using pytorch sync batch norm
######## CityScapesUniformWithPos #########
Logging : ./logs/0103/r101_os8_hanet_64_01/08_25_21/log_2020_08_25_21_00_32_rank_0.log
08-25 21:00:33.038 train fine cities: ['train/aachen', 'train/bochum', 'train/bremen', 'train/cologne', 'train/darmstadt', 'train/dusseldorf', 'train/erfurt', 'train/hamburg', 'train/hanover', 'train/jena', 'train/krefeld', 'train/monchengladbach', 'train/strasbourg', 'train/stuttgart', 'train/tubingen', 'train/ulm', 'train/weimar', 'train/zurich']
08-25 21:00:33.058 Cityscapes-train: 2975 images
Unifrom : coarse images
08-25 21:00:33.681 Class Uniform Percentage: 0.5
08-25 21:00:33.682 Class Uniform items per Epoch:2975
08-25 21:00:33.683 cls 0 len 5866
08-25 21:00:33.683 cls 1 len 5184
08-25 21:00:33.683 cls 2 len 5678
08-25 21:00:33.683 cls 3 len 1312
08-25 21:00:33.683 cls 4 len 1723
08-25 21:00:33.683 cls 5 len 5656
08-25 21:00:33.683 cls 6 len 2769
08-25 21:00:33.683 cls 7 len 4860
08-25 21:00:33.683 cls 8 len 5388
08-25 21:00:33.683 cls 9 len 2440
08-25 21:00:33.683 cls 10 len 4722
08-25 21:00:33.683 cls 11 len 3719
08-25 21:00:33.683 cls 12 len 1239
08-25 21:00:33.683 cls 13 len 5075
08-25 21:00:33.683 cls 14 len 444
08-25 21:00:33.683 cls 15 len 348
08-25 21:00:33.683 cls 16 len 188
08-25 21:00:33.684 cls 17 len 575
08-25 21:00:33.684 cls 18 len 2238
08-25 21:00:33.686 val fine cities: ['val/lindau', 'val/frankfurt', 'val/munster']
08-25 21:00:33.688 Cityscapes-val: 500 images
standard cross entropy
Model : DeepLabv3+, Backbone : ResNet-101
########### pretrained ##############
output_stride = 8
use PosEncoding1D
08-25 21:00:37.312 Model params = 64.807M
-----------use get_optimizer_attention function------------
############# HANet Number 4
-----gpuid:0-------
args.snapshot:None
args.snapshot_pe:None
i:0
0
length of train loader:186
Traceback (most recent call last):
  File "train.py", line 632, in <module>
    main()
  File "train.py", line 280, in main
    i = train(train_loader, net, optim, epoch, writer, scheduler, args.max_iter, optim_at, scheduler_at)
  File "train.py", line 352, in train
    outputs = net(inputs, gts=gts, aux_gts=aux_gts, pos=(pos_h, pos_w), attention_loss=requires_attention)
  File "/home/xuangen/anaconda3/envs/hanet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/xuangen/anaconda3/envs/hanet/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 447, in forward
    output = self.module(*inputs[0], **kwargs[0])
  File "/home/xuangen/anaconda3/envs/hanet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/xuangen/myproject/HANet-master/network/deepv3.py", line 416, in forward
    x = self.layer3(x)  # 100
  File "/home/xuangen/anaconda3/envs/hanet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/xuangen/anaconda3/envs/hanet/lib/python3.6/site-packages/torch/nn/modules/container.py", line 100, in forward
    input = module(input)
  File "/home/xuangen/anaconda3/envs/hanet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/xuangen/myproject/HANet-master/network/Resnet.py", line 126, in forward
    out = self.bn3(out)
  File "/home/xuangen/anaconda3/envs/hanet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/xuangen/anaconda3/envs/hanet/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py", line 473, in forward
    self.eps, exponential_average_factor, process_group, world_size)
  File "/home/xuangen/anaconda3/envs/hanet/lib/python3.6/site-packages/torch/nn/modules/_functions.py", line 54, in forward
    out = torch.batch_norm_elemt(input, weight, bias, mean, invstd, eps)
RuntimeError: CUDA out of memory. Tried to allocate 144.00 MiB (GPU 0; 10.92 GiB total capacity; 10.22 GiB already allocated; 124.62 MiB free; 10.26 GiB reserved in total by PyTorch)
Traceback (most recent call last):
  File "/home/xuangen/anaconda3/envs/hanet/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/xuangen/anaconda3/envs/hanet/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/xuangen/anaconda3/envs/hanet/lib/python3.6/site-packages/torch/distributed/launch.py", line 263, in <module>
    main()
  File "/home/xuangen/anaconda3/envs/hanet/lib/python3.6/site-packages/torch/distributed/launch.py", line 259, in main
    cmd=cmd)
subprocess.CalledProcessError: Command '['/home/xuangen/anaconda3/envs/hanet/bin/python', '-u', 'train.py', '--local_rank=3', '--dataset', 'cityscapes', '--arch', 'network.deepv3.DeepR101V3PlusD_HANet_OS8', '--city_mode', 'train', '--lr_schedule', 'poly', '--lr', '0.01', '--poly_exp', '0.9', '--hanet_lr', '0.01', '--hanet_poly_exp', '0.9', '--max_cu_epoch', '10000', '--class_uniform_pct', '0.5', '--class_uniform_tile', '1024', '--syncbn', '--sgd', '--crop_size', '768', '--scale_min', '0.5', '--scale_max', '2.0', '--rrotate', '0', '--color_aug', '0.25', '--gblur', '--max_iter', '40000', '--bs_mult', '2', '--hanet', '1', '1', '1', '1', '0', '--hanet_set', '3', '64', '3', '--hanet_pos', '2', '1', '--pos_rfactor', '2', '--dropout', '0.5', '--pos_noise', '0.5', '--aux_loss', '--date', '0103', '--exp', 'r101_os8_hanet_64_01', '--ckpt', './logs/', '--tb_path', './logs/']' returned non-zero exit status 1.
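Looking at the failing command, the run uses --crop_size 768 with --bs_mult 2 on ~11 GiB cards, which sits right at the memory limit. A hedged workaround (illustrative values, not an official configuration) is to shrink the crop while keeping the per-GPU batch at 2, since BatchNorm statistics degrade at batch size 1:

```shell
# Same launch, smaller crops to fit in ~11 GiB per GPU (values illustrative).
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 \
    train.py --dataset cityscapes \
    --arch network.deepv3.DeepR101V3PlusD_HANet_OS8 \
    --crop_size 512 --bs_mult 2 \
    --syncbn --sgd --lr 0.01 --max_iter 40000
```

Expect some accuracy loss relative to the paper's 768-crop setting; the remaining flags from the original script can be carried over unchanged.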
Dear authors, I noticed there is no ResNet-101 OS8 model for the BDD100K dataset in your model zoo. Could you please share it at your convenience?
Dear shachoi:
I downloaded the model HANet_r101_os8_city_0.80293.pth from the Google Drive you provide. It achieves 82.05 mIoU on the Cityscapes val set, the same as reported in the paper. But when I test with the submit_r101_os8.sh you provide and submit the results in the pred folder to the server, it only achieves 72.09 mIoU, which is far too low. Why might this happen?