hanet's Issues

Inference on new images

Hi,

I am just wondering what the easiest way is to feed the network images from a different dataset and obtain the output masks. I'd appreciate any help.
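
A minimal sketch of one way to do this, assuming a trained model has already been loaded (the `net` below is only a stand-in, and note that the repo's forward additionally takes pos=(pos_h, pos_w)):

import torch
import torch.nn as nn
from PIL import Image
from torchvision import transforms

net = nn.Conv2d(3, 19, 1)  # stand-in for a loaded DeepV3PlusHANet snapshot
net.eval()

# Preprocess roughly the way the Cityscapes loaders do (ImageNet statistics).
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open('my_image.png').convert('RGB')   # any RGB image
x = preprocess(img).unsqueeze(0)                  # (1, 3, H, W)
with torch.no_grad():
    logits = net(x)                               # (1, 19, H, W) class scores
mask = logits.argmax(dim=1).squeeze(0)            # predicted trainId per pixel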

What's the meaning of attention_loss?

Dear Sir,
as the title says: what is the meaning of attention_loss, and how is it calculated? Will it improve the results? It looks like you didn't use it during training?

About the BDD-100K.

How do I download BDD-100K for segmentation (7,000 images for training, 1,000 for validation)?

cannot import name 'cfg' from 'config'

ImportError: cannot import name 'cfg' from 'config' (/home/jbd/anaconda3/lib/python3.8/site-packages/config/__init__.py)
So what are config and cfg? Is config a file (config.py) or a module?
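
The path in the error suggests a third-party `config` package installed in site-packages is shadowing the repo's local config.py. A quick, repo-agnostic way to check which module Python resolves:

# If this prints a site-packages path (as in the error above), uninstall the
# pip-installed `config` package or run from the repo root so that the local
# config.py is found first on sys.path.
import config
print(config.__file__)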

Main contents of the Cityscapes files

I want to train on my own data,
so I'd like to ask what the files that make up the tfrecord are.
I have already downloaded gtFine,
but not leftImg8bit (it is too large).
May I ask whether the images under train/test/val are the raw RGB .png images?
cityscapes
└ leftImg8bit_trainvaltest
└ leftImg8bit
└ train
└ val
└ test
└ gtFine_trainvaltest
└ gtFine
└ train
└ val
└ test
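
For reference, my understanding of the standard Cityscapes layout (not specific to this repo): leftImg8bit holds the raw RGB .png frames and gtFine the label maps, paired by a shared filename prefix, e.g. aachen_000000_000019_leftImg8bit.png and aachen_000000_000019_gtFine_labelIds.png. A small sketch to verify the pairing once both archives are extracted:

import glob, os

# Assumes a root where leftImg8bit/ and gtFine/ are siblings; adjust the glob
# if you keep the leftImg8bit_trainvaltest / gtFine_trainvaltest wrappers above.
for img_path in glob.glob('cityscapes/leftImg8bit/train/*/*_leftImg8bit.png'):
    mask_path = (img_path
                 .replace('leftImg8bit', 'gtFine', 1)          # swap directory
                 .replace('_leftImg8bit.png', '_gtFine_labelIds.png'))
    assert os.path.exists(mask_path), mask_path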

About Figure 1 in the paper

How is the average number of pixels assigned to each class in a single image calculated? Could you please provide the precise numbers in Figure 1(a)? Thanks a lot!
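
A sketch of how such a statistic could be computed (my reading of Fig. 1(a), not necessarily the paper's exact script):

import numpy as np
from PIL import Image

# For each image, count the pixels carrying each trainId label, then average
# the per-class counts over all images.
num_classes = 19
paths = [...]  # list of gtFine *_labelTrainIds.png files
totals = np.zeros(num_classes)
for p in paths:
    mask = np.array(Image.open(p))
    for c in range(num_classes):
        totals[c] += (mask == c).sum()
avg_pixels_per_class = totals / len(paths)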

About Ablation Studies of Coordconv

Hi, sorry to bother you. I want to confirm some details about the ablation studies on CoordConv in your paper. It says:

To compare with CoordConv [26], we conduct experiments by replacing standard convolutional layers after the backbone with CoordConv.

  1. Does this mean that in this experiment you only replaced the two convolution layers after ASPP (in the decoder)?
  2. If so, why replace only the standard convolutions in the decoder rather than in the encoder?
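
For context, a sketch of the CoordConv idea the ablation refers to ([26], Liu et al.): normalized coordinate channels are concatenated to the input before a standard convolution. Illustrative code, not the repo's implementation:

import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """Conv2d applied to the input concatenated with normalized y/x coordinate maps."""
    def __init__(self, in_ch, out_ch, **kw):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + 2, out_ch, **kw)

    def forward(self, x):
        n, _, h, w = x.shape
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(n, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(n, 1, h, w)
        return self.conv(torch.cat([x, ys, xs], dim=1))

# e.g. a drop-in for a 3x3 decoder convolution:
layer = CoordConv2d(256, 256, kernel_size=3, padding=1)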

Params in HAconv

Hello! I would like to know whether the number 128 in HAconv and PosEncoding1D corresponds to the 128 in pos_h?

pos_enc = (get_sinusoid_encoding_table((128//pos_rfactor)+1, dim) + 1)
self.pos_layer = nn.Embedding.from_pretrained(embeddings=pos_enc, freeze=True)
self.pos_noise = pos_noise
self.noise_clamp = 16 // pos_rfactor  # 4: 4, 8: 2, 16: 1
self.pos_rfactor = pos_rfactor
if pos_noise > 0.0:
    self.min = 0.0  # torch.tensor([0]).cuda()
    self.max = 128//pos_rfactor  # torch.tensor([128//pos_rfactor]).cuda()
    self.noise = torch.distributions.normal.Normal(torch.tensor([0.0]), torch.tensor([pos_noise]))

HANet/network/HANet.py

Lines 51 to 56 in 9c31ef2

if self.pooling == 'mean':
    # print("##### average pooling")
    self.rowpool = nn.AdaptiveAvgPool2d((128//pos_rfactor, 1))
else:
    # print("##### max pooling")
    self.rowpool = nn.AdaptiveMaxPool2d((128//pos_rfactor, 1))

self.pos_h = torch.arange(0, 1024).unsqueeze(0).unsqueeze(2).expand(-1,-1,2048)//8 # 0 ~ 127

And if I set pos_h to 64 (as below), should the corresponding 128 in HAconv and PosEncoding1D be changed to 64?

self.pos_h = torch.arange(0, 64).unsqueeze(0).unsqueeze(2).expand(-1,-1,2048)  # 0 ~ 63

Also, it looks like you set up two optimizers and schedulers: one pair for the model's parameters (optimizer, scheduler) and one for HANet's parameters (optimizer_at, scheduler_at). But in the config they are identical except for weight_decay. Why is that? And is it necessary to set a different weight_decay for the HA module?
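
For reference, a minimal sketch of the two-optimizer split being asked about (toy module names, placeholder hyperparameter values, not the repo's exact code): the main network and the HANet modules get separate SGD instances so that settings such as weight_decay can differ between them.

import torch
import torch.nn as nn

# Toy model standing in for DeepV3PlusHANet: one "main" branch, one "hanet" branch.
net = nn.ModuleDict({
    'backbone': nn.Conv2d(3, 8, 3),
    'hanet0':   nn.Conv1d(8, 8, 1),
})

main_params  = [p for n, p in net.named_parameters() if not n.startswith('hanet')]
hanet_params = [p for n, p in net.named_parameters() if n.startswith('hanet')]

# Mirrors the optim / optim_at split in train.py: identical except weight_decay
# (the values here are illustrative, not the repo's config).
optim    = torch.optim.SGD(main_params,  lr=0.01, momentum=0.9, weight_decay=5e-4)
optim_at = torch.optim.SGD(hanet_params, lr=0.01, momentum=0.9, weight_decay=1e-4)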

Question about norm2d

Have you ever tried to experiment on a single GPU?
In your code:

self.attention_first = nn.Sequential(
    nn.Conv1d(in_channels=in_channel, out_channels=mid_1_channel,
              kernel_size=1, stride=1, padding=0, bias=False),
    Norm2d(mid_1_channel),
    nn.ReLU(inplace=True))

By default, Norm2d is BatchNorm2d, and BatchNorm2d expects 4D input, but here it seems to receive the 3D output of Conv1d.
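
For comparison: BatchNorm2d expects (N, C, H, W), while Conv1d produces (N, C, L); on a single GPU the shape-matching norm is BatchNorm1d. A standalone sketch (not the repo's fix):

import torch
import torch.nn as nn

# BatchNorm1d normalizes (N, C, L) tensors, matching Conv1d's output shape;
# BatchNorm2d would raise on this 3D input.
block = nn.Sequential(
    nn.Conv1d(in_channels=32, out_channels=16, kernel_size=1, bias=False),
    nn.BatchNorm1d(16),
    nn.ReLU(inplace=True),
)
x = torch.randn(4, 32, 128)   # (N, C, L)
print(block(x).shape)         # torch.Size([4, 16, 128])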

how to change SyncBatchNorm?

I want to try the model on Windows, which doesn't support distributed training. I changed
net = torch.nn.SyncBatchNorm.convert_sync_batchnorm(net) into
net = torch.nn.BatchNorm2d(net)
but I got an error: "TypeError: new(): data must be a sequence (got DeepV3PlusHANet)".
What should I do to run the code on Windows, and how do I replace SyncBatchNorm with BatchNorm2d?
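
Two hedged suggestions, neither from the repo: the simplest route may be not to call convert_sync_batchnorm at all (it only converts existing BatchNorm layers for distributed training, so skipping it leaves plain BatchNorm2d in place). The nn.BatchNorm2d(net) call fails because BatchNorm2d expects a channel count (num_features), not a module. If a model has already been converted, a recursive replacement along these lines should revert it:

import torch.nn as nn

def revert_sync_batchnorm(module):
    """Recursively rebuild every SyncBatchNorm as a plain BatchNorm2d."""
    out = module
    if isinstance(module, nn.SyncBatchNorm):
        out = nn.BatchNorm2d(module.num_features, module.eps, module.momentum,
                             module.affine, module.track_running_stats)
        if module.affine:
            out.weight = module.weight
            out.bias = module.bias
        out.running_mean = module.running_mean
        out.running_var = module.running_var
        out.num_batches_tracked = module.num_batches_tracked
    for name, child in module.named_children():
        out.add_module(name, revert_sync_batchnorm(child))
    return out

# net = revert_sync_batchnorm(net)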

Path of pictures

I want to test my own pictures instead of images from Cityscapes. Where should I change the image paths?

about parameter

Great work! Thanks for the open source. :thumbsup:
However, I have some questions about the parameters:
how are the parameters "hanet_set", "hanet_pos", "pos_rfactor", and "no_pos_dataset" obtained,
and what do they mean?

--hanet_set 3 64 3 \
--hanet_pos 2 1 \
--pos_rfactor 8 \

Are (pos_h, pos_w) something like values derived from a statistical analysis of the dataset?
def __getitem__(self, index):
    elem = self.imgs_uniform[index]
    centroid = None
    if len(elem) == 4:
        img_path, mask_path, centroid, class_id = elem
    else:
        img_path, mask_path = elem
    img, mask = Image.open(img_path).convert('RGB'), Image.open(mask_path)
    img_name = os.path.splitext(os.path.basename(img_path))[0]
    mask = np.array(mask)
    mask_copy = mask.copy()
    for k, v in id_to_trainid.items():
        mask_copy[mask == k] = v
    mask = Image.fromarray(mask_copy.astype(np.uint8))
    # position information
    pos_h = self.pos_h
    pos_w = self.pos_w
    # position information
    # Image Transformations
    if self.joint_transform_list is not None:
        for idx, xform in enumerate(self.joint_transform_list):
            if idx == 0 and centroid is not None:
                # HACK
                # We assume that the first transform is capable of taking
                # in a centroid
                img, mask, (pos_h, pos_w) = xform(img, mask, centroid, pos=(pos_h, pos_w))
            else:
                img, mask, (pos_h, pos_w) = xform(img, mask, pos=(pos_h, pos_w))
    # Debug
    if self.dump_images and centroid is not None:
        outdir = '../../dump_imgs_{}'.format(self.mode)
        os.makedirs(outdir, exist_ok=True)
        dump_img_name = trainid_to_name[class_id] + '_' + img_name
        out_img_fn = os.path.join(outdir, dump_img_name + '.png')
        out_msk_fn = os.path.join(outdir, dump_img_name + '_mask.png')
        mask_img = colorize_mask(np.array(mask))
        img.save(out_img_fn)
        mask_img.save(out_msk_fn)
    if self.transform is not None:
        img = self.transform(img)
    if self.target_aux_transform is not None:
        mask_aux = self.target_aux_transform(mask)
    else:
        mask_aux = torch.tensor([0])
    if self.target_transform is not None:
        mask = self.target_transform(mask)
    pos_h = torch.from_numpy(np.array(pos_h, dtype=np.uint8))  # // self.pos_rfactor
    pos_w = torch.from_numpy(np.array(pos_w, dtype=np.uint8))  # // self.pos_rfactor
    return img, mask, img_name, mask_aux, (pos_h, pos_w)

Without the parameters pos_h and pos_w, would HANet's effectiveness suffer? And could you tell me how to obtain these parameters for another dataset?
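
For what it's worth, my reading of the quoted loader is that pos_h / pos_w are plain coordinate grids (each pixel's row and column index), not dataset statistics, so for another dataset they can be rebuilt from the image size alone. A sketch (whether to divide by pos_rfactor here or later is something to verify against the repo):

import torch

H, W = 1024, 2048   # replace with your dataset's image size
pos_h = torch.arange(0, H).unsqueeze(0).unsqueeze(2).expand(-1, -1, W)  # (1, H, W) row indices
pos_w = torch.arange(0, W).unsqueeze(0).unsqueeze(1).expand(-1, H, -1)  # (1, H, W) column indices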

Why we need cross the segmentation main networks to gain attention?

Hi, Dear Sir. I'm still confused about why the attention map has to be computed from one block and then multiplied into a later one.
Below are your pipeline and the HANet module. As stated in the paper, there are lower- and higher-level feature maps in semantic segmentation networks.
[screenshots: the pipeline figure and the HANet module diagram]

I want to know why we can't compute the attention from a single feature map, as SENet does.

Is there any implied meaning behind this, or has it been experimentally shown to be ineffective?
[screenshot: SENet block diagram]
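
A conceptual sketch of the contrast being asked about (illustrative code, not the repo's implementation): SE computes channel weights from the same feature map it gates, whereas HANet-style attention computes per-row weights from a lower-level map and applies them to a higher-level one.

import torch
import torch.nn as nn

def se_gate(x, fc):                          # x: (N, C, H, W)
    w = fc(x.mean(dim=(2, 3)))               # squeeze over H and W -> (N, C)
    return x * torch.sigmoid(w)[:, :, None, None]

def hanet_gate(x_low, x_high, fc):           # height-wise attention across levels
    rows = x_low.mean(dim=3)                 # width pooling -> (N, C_low, H)
    w = fc(rows)                             # per-row channel weights -> (N, C_high, H)
    return x_high * torch.sigmoid(w)[:, :, :, None]

x = torch.randn(2, 64, 32, 32)
print(se_gate(x, nn.Linear(64, 64)).shape)                     # (2, 64, 32, 32)

x_low, x_high = torch.randn(2, 64, 32, 32), torch.randn(2, 128, 32, 32)
print(hanet_gate(x_low, x_high, nn.Conv1d(64, 128, 1)).shape)  # (2, 128, 32, 32)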

CUDA out of memory

I run the command:
CUDA_VISIBLE_DEVICES=0,1,2,3 ./scripts/train_r101_o

and the outputs follow. The error is CUDA out of memory. I changed the per-GPU training batch size to 2 and it still fails; if I change it to 1, there is a different error. Please help me, thanks!


Total world size: 4
My Rank: 0
args.dist_url:tcp://127.0.0.1:8043
(ranks 1-3 print the same world-size / rank / dist_url lines)
Using pytorch sync batch norm
######## CityScapesUniformWithPos #########
Logging : ./logs/0103/r101_os8_hanet_64_01/08_25_21/log_2020_08_25_21_00_32_rank_0.log
08-25 21:00:33.038 train fine cities: ['train/aachen', 'train/bochum', 'train/bremen', 'train/cologne', 'train/darmstadt', 'train/dusseldorf', 'train/erfurt', 'train/hamburg', 'train/hanover', 'train/jena', 'train/krefeld', 'train/monchengladbach', 'train/strasbourg', 'train/stuttgart', 'train/tubingen', 'train/ulm', 'train/weimar', 'train/zurich']
08-25 21:00:33.058 Cityscapes-train: 2975 images
Unifrom : coarse images
08-25 21:00:33.681 Class Uniform Percentage: 0.5
08-25 21:00:33.682 Class Uniform items per Epoch:2975
08-25 21:00:33.683 cls 0 len 5866
08-25 21:00:33.683 cls 1 len 5184
08-25 21:00:33.683 cls 2 len 5678
08-25 21:00:33.683 cls 3 len 1312
08-25 21:00:33.683 cls 4 len 1723
08-25 21:00:33.683 cls 5 len 5656
08-25 21:00:33.683 cls 6 len 2769
08-25 21:00:33.683 cls 7 len 4860
08-25 21:00:33.683 cls 8 len 5388
08-25 21:00:33.683 cls 9 len 2440
08-25 21:00:33.683 cls 10 len 4722
08-25 21:00:33.683 cls 11 len 3719
08-25 21:00:33.683 cls 12 len 1239
08-25 21:00:33.683 cls 13 len 5075
08-25 21:00:33.683 cls 14 len 444
08-25 21:00:33.683 cls 15 len 348
08-25 21:00:33.683 cls 16 len 188
08-25 21:00:33.684 cls 17 len 575
08-25 21:00:33.684 cls 18 len 2238
08-25 21:00:33.686 val fine cities: ['val/lindau', 'val/frankfurt', 'val/munster']
08-25 21:00:33.688 Cityscapes-val: 500 images
Unifrom : coarse images
standard cross entropy
Model : DeepLabv3+, Backbone : ResNet-101
HANet layers 4
########### pretrained ##############
output_stride = 8
use PosEncoding1D
(these initialization lines repeat for each of the four ranks; "use PosEncoding1D" is printed 16 times in total)
08-25 21:00:37.312 Model params = 64.807M
-----------use get_optimizer_attention function------------
############# HANet Number 4
-----gpuid:0-------
args.snapshot:None
args.snapshot_pe:None

iteration 0

i:0
0
length of train loader:186
(ranks 1-3 emit the same optimizer / snapshot / loader lines)
Traceback (most recent call last):
  File "train.py", line 632, in <module>
    main()
  File "train.py", line 280, in main
    i = train(train_loader, net, optim, epoch, writer, scheduler, args.max_iter, optim_at, scheduler_at)
  File "train.py", line 352, in train
    outputs = net(inputs, gts=gts, aux_gts=aux_gts, pos=(pos_h, pos_w), attention_loss=requires_attention)
  File "/home/xuangen/anaconda3/envs/hanet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/xuangen/anaconda3/envs/hanet/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 447, in forward
    output = self.module(*inputs[0], **kwargs[0])
  File "/home/xuangen/anaconda3/envs/hanet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/xuangen/myproject/HANet-master/network/deepv3.py", line 416, in forward
    x = self.layer3(x)  # 100
  File "/home/xuangen/anaconda3/envs/hanet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/xuangen/anaconda3/envs/hanet/lib/python3.6/site-packages/torch/nn/modules/container.py", line 100, in forward
    input = module(input)
  File "/home/xuangen/anaconda3/envs/hanet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/xuangen/myproject/HANet-master/network/Resnet.py", line 126, in forward
    out = self.bn3(out)
  File "/home/xuangen/anaconda3/envs/hanet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/xuangen/anaconda3/envs/hanet/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py", line 473, in forward
    self.eps, exponential_average_factor, process_group, world_size)
  File "/home/xuangen/anaconda3/envs/hanet/lib/python3.6/site-packages/torch/nn/modules/_functions.py", line 54, in forward
    out = torch.batch_norm_elemt(input, weight, bias, mean, invstd, eps)
RuntimeError: CUDA out of memory. Tried to allocate 144.00 MiB (GPU 0; 10.92 GiB total capacity; 10.22 GiB already allocated; 124.62 MiB free; 10.26 GiB reserved in total by PyTorch)
(the same traceback and OOM error are raised on GPUs 1, 2, and 3, each with ~119-125 MiB free)
Traceback (most recent call last):
  File "/home/xuangen/anaconda3/envs/hanet/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/xuangen/anaconda3/envs/hanet/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/xuangen/anaconda3/envs/hanet/lib/python3.6/site-packages/torch/distributed/launch.py", line 263, in <module>
    main()
  File "/home/xuangen/anaconda3/envs/hanet/lib/python3.6/site-packages/torch/distributed/launch.py", line 259, in main
    cmd=cmd)
subprocess.CalledProcessError: Command '['/home/xuangen/anaconda3/envs/hanet/bin/python', '-u', 'train.py', '--local_rank=3', '--dataset', 'cityscapes', '--arch', 'network.deepv3.DeepR101V3PlusD_HANet_OS8', '--city_mode', 'train', '--lr_schedule', 'poly', '--lr', '0.01', '--poly_exp', '0.9', '--hanet_lr', '0.01', '--hanet_poly_exp', '0.9', '--max_cu_epoch', '10000', '--class_uniform_pct', '0.5', '--class_uniform_tile', '1024', '--syncbn', '--sgd', '--crop_size', '768', '--scale_min', '0.5', '--scale_max', '2.0', '--rrotate', '0', '--color_aug', '0.25', '--gblur', '--max_iter', '40000', '--bs_mult', '2', '--hanet', '1', '1', '1', '1', '0', '--hanet_set', '3', '64', '3', '--hanet_pos', '2', '1', '--pos_rfactor', '2', '--dropout', '0.5', '--pos_noise', '0.5', '--aux_loss', '--date', '0103', '--exp', 'r101_os8_hanet_64_01', '--ckpt', './logs/', '--tb_path', './logs/']' returned non-zero exit status 1.

Bad results on the cityscapes test dataset

Dear shachoi,
I downloaded the model from the Google Drive link you provide and used HANet_r101_os8_city_0.80293.pth. It achieves 82.05 mIoU on the Cityscapes val set, the same as reported in the paper. But when I run the test with the submit_r101_os8.sh you provide and submit the results in the pred folder to the evaluation server, it only achieves 72.09 mIoU, which is far too low. I just want to ask why this might happen?
