beyond-part-models's Issues

Average pooling

Hi,

In the paper there is an average pooling layer before the 1×1 conv, but I can't find it in your model. Where is it?

self.base = resnet50(
    pretrained=True,
    last_conv_stride=last_conv_stride,
    last_conv_dilation=last_conv_dilation)
self.num_stripes = num_stripes

self.local_conv_list = nn.ModuleList()
for _ in range(num_stripes):
    self.local_conv_list.append(nn.Sequential(
        nn.Conv2d(2048, local_conv_out_channels, 1),
        nn.BatchNorm2d(local_conv_out_channels),
        nn.ReLU(inplace=True)
    ))
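For reference, the pooling described in the paper can be reproduced in a few lines; this is only a sketch of the operation, not the repository's actual forward code, and the tensor sizes assume the repo's default setup (a 384×128 input through ResNet-50 with last_conv_stride=1):

```python
import torch
import torch.nn.functional as F

# Toy backbone output: ResNet-50 with last_conv_stride=1 maps a 384x128
# image to a 2048 x 24 x 8 feature tensor (downsampling factor 16).
feat = torch.randn(1, 2048, 24, 8)
num_stripes = 6
stripe_h = feat.size(2) // num_stripes  # 24 / 6 = 4 rows per stripe

# Average pooling over each horizontal stripe, before any 1x1 conv:
local_feats = F.avg_pool2d(feat, (stripe_h, feat.size(3)))
print(local_feats.shape)  # torch.Size([1, 2048, 6, 1])
```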

A question about dropout

I use TensorFlow, so I'm not sure whether PyTorch inserts dropout implicitly. Looking at your PCB model, I don't seem to see any dropout layer?
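PyTorch does not insert dropout implicitly; a dropout layer exists only if the model constructs an explicit nn.Dropout module. A hypothetical variant of the per-stripe head with dropout added (the head in this repository has none; the p value here is arbitrary) would look like:

```python
import torch.nn as nn

local_conv_out_channels = 256  # the repo's default

# Hypothetical: inserting dropout before the 1x1 conv. nn.Dropout2d is the
# usual choice for convolutional feature maps.
head = nn.Sequential(
    nn.Dropout2d(p=0.5),
    nn.Conv2d(2048, local_conv_out_channels, 1),
    nn.BatchNorm2d(local_conv_out_channels),
    nn.ReLU(inplace=True),
)
```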

About loss function

@huanghoujing Thanks for your work, and sorry to bother you, but I'm confused: where in the code are the PCB and RPP ideas implemented, and where is the loss function? Looking forward to your reply!

ReLU after 1x1 conv

I am curious about the ReLU you apply after the conv: it forces the features to be all positive values. Could you explain the reason for using ReLU in the code below? I found no explicit mention of this in the original paper. Thanks!
for _ in range(num_stripes):
    self.local_conv_list.append(nn.Sequential(
        nn.Conv2d(2048, local_conv_out_channels, 1),
        nn.BatchNorm2d(local_conv_out_channels),
        nn.ReLU(inplace=True)
    ))
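One way to see what the ReLU changes: BatchNorm output is roughly zero-centered per channel, so without the ReLU the local features carry signed values, while with it they are clamped to be non-negative. A small sketch (toy tensors, not the repo's code) contrasting the two:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(2, 2048, 1, 1)  # stand-in for two pooled stripe features
conv_bn = nn.Sequential(nn.Conv2d(2048, 256, 1), nn.BatchNorm2d(256))

with_relu = nn.ReLU()(conv_bn(x))   # as in the head above
without_relu = conv_bn(x)           # hypothetical head with no ReLU

print(with_relu.min().item() >= 0)      # True: features forced non-negative
print(without_relu.min().item() >= 0)   # False: signed features survive
```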

train combine mode error

python script/experiment/train_pcb.py \
-d '(0,)' \
--only_test false \
--dataset combined \
--trainset_part trainval \
--exp_dir /data/exp_directory \
--steps_per_log 20 \
--epochs_per_val 2

the error is in bpm/dataset/__init__.py:

elif name == 'combined':
    assert part in ['trainval'], "Only trainval part of the combined dataset is available now."

about the im_mean and im_std

From train_pcb.py:

[screenshot of the im_mean / im_std settings in train_pcb.py]

You set im_mean and im_std to fixed arrays. Where do they come from? And if I want to test .png images, how should I set these two parameters?
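For context, those values are essentially the per-channel ImageNet statistics that torchvision's pre-trained models use; they come from the backbone's pre-training data, not from the re-ID dataset. A .png needs no special values, only conversion to a 3-channel RGB array in [0, 1] before the same normalization. A sketch with numpy (the random array stands in for a decoded image):

```python
import numpy as np

# The values set in train_pcb.py: (approximately) the ImageNet channel
# statistics matching the pre-trained ResNet-50 backbone.
im_mean = np.array([0.486, 0.459, 0.408])
im_std = np.array([0.229, 0.224, 0.225])

# Stand-in for a decoded RGBA .png scaled to [0, 1]; a real image would
# come from e.g. PIL.Image.open(path).convert('RGB').
img = np.random.rand(384, 128, 4)
rgb = img[..., :3]                  # drop the alpha channel if present
normalized = (rgb - im_mean) / im_std
```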

Validation set

Hi, I'd like to ask: the program uses a validation set. What role does it play in this project?
After training for about 20 epochs, mAP and rank-1 on the validation set already reach 100%, but on the final test set they are only about 78% and 92%. Is this overfitting?

About an issue training the model on the combined dataset

When training reaches step 560, I get an error. The log:
Step 20/Ep 1, 2.56s, loss 45.9606
Step 40/Ep 1, 2.56s, loss 44.4427
Step 60/Ep 1, 2.57s, loss 43.4606
Step 80/Ep 1, 2.57s, loss 42.1291
Step 100/Ep 1, 2.57s, loss 42.6228
Step 120/Ep 1, 2.56s, loss 41.2495
Step 140/Ep 1, 2.57s, loss 39.7805
Step 160/Ep 1, 2.57s, loss 38.2572
Step 180/Ep 1, 2.56s, loss 38.0040
Step 200/Ep 1, 2.58s, loss 37.0701
Step 220/Ep 1, 2.57s, loss 36.9859
Step 240/Ep 1, 2.56s, loss 31.4023
Step 260/Ep 1, 2.57s, loss 34.4426
Step 280/Ep 1, 2.56s, loss 33.4842
Step 300/Ep 1, 2.57s, loss 32.7389
Step 320/Ep 1, 2.56s, loss 33.1874
Step 340/Ep 1, 2.57s, loss 30.5525
Step 360/Ep 1, 2.57s, loss 32.8995
Step 380/Ep 1, 2.58s, loss 30.6072
Step 400/Ep 1, 2.57s, loss 29.6854
Step 420/Ep 1, 2.55s, loss 27.2513
Step 440/Ep 1, 2.56s, loss 29.8557
Step 460/Ep 1, 2.56s, loss 30.4018
Step 480/Ep 1, 2.56s, loss 29.9007
Step 500/Ep 1, 2.56s, loss 27.1777
Step 520/Ep 1, 2.56s, loss 27.2997
Step 540/Ep 1, 2.57s, loss 27.3183
Step 560/Ep 1, 2.56s, loss 25.4691
Ep 1, 1476.23s, loss 34.8283
Traceback (most recent call last):
File "script/experiment/train_pcb.py", line 498, in <module>
main()
File "script/experiment/train_pcb.py", line 469, in main
mAP, Rank1 = validate()
File "script/experiment/train_pcb.py", line 389, in validate
if val_set.extract_feat_func is None:
AttributeError: 'TrainSet' object has no attribute 'extract_feat_func'

about finetune and scratch

I don't understand the code param_groups = [{'params': finetuned_params, 'lr': cfg.finetuned_params_lr}, {'params': new_params, 'lr': cfg.new_params_lr}]. Do you train the two kinds of parameters separately so as to compare their mAP? Thanks.
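As I read it, this is not two runs to compare: it is a single optimizer with two per-parameter-group learning rates, a small LR for the pre-trained ResNet base and a larger one for the newly initialized layers. A minimal sketch of the same pattern (toy modules standing in for the real parameter lists):

```python
import torch
import torch.nn as nn

backbone = nn.Linear(8, 8)   # stand-in for the pre-trained (finetuned) params
new_head = nn.Linear(8, 4)   # stand-in for the newly initialized params

# One optimizer, two groups, two learning rates (repo defaults: 0.01 / 0.1):
optimizer = torch.optim.SGD(
    [{'params': backbone.parameters(), 'lr': 0.01},
     {'params': new_head.parameters(), 'lr': 0.1}],
    momentum=0.9, weight_decay=0.0005)
```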

Testing results on CUHK03

Hi, I tried to train the model on CUHK03 and reproduce the experiment, but the results I got are lower than those reported in the paper, i.e.
Single Query: [mAP: 41.74%], [cmc1: 46.36%], [cmc5: 67.43%], [cmc10: 75.71%]
Re-ranked Single Query: [mAP: 58.11%], [cmc1: 56.71%], [cmc5: 69.71%], [cmc10: 77.00%]

Do you know any reasons for that?
Thanks.

test error on Market-1501

Hi, I downloaded your trained model for the Market-1501 dataset, but when I test with it the results are poor. I don't know what the problem is. Do you know the cause? Thanks!

[screenshot: 2018-03-23 16-24-08]

About the code execution process

Dear huanghoujing, thanks for your work and happy new year. I'm very interested in your work, but my PyTorch version is 1.0, so when I run your code I get the error 'RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated'. Could you please tell me what should be changed in your code?

RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated

I hit the problem below when running the code. Have you encountered it?

----------------------------------------
NO. Images: 31969
NO. IDs: 751
NO. Query Images: 3368
NO. Gallery Images: 15913
NO. Multi-query Images: 12688

./bpm/model/PCBModel.py:38: UserWarning: nn.init.normal is now deprecated in favor of nn.init.normal_.
init.normal(fc.weight, std=0.001)
./bpm/model/PCBModel.py:39: UserWarning: nn.init.constant is now deprecated in favor of nn.init.constant_.
init.constant(fc.bias, 0)
Traceback (most recent call last):
File "script/experiment/train_pcb.py", line 499, in <module>
main()
File "script/experiment/train_pcb.py", line 439, in main
torch.cat([criterion(logits, labels_var) for logits in logits_list]))
RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated
*** Error in `python': realloc(): invalid pointer: 0x00007fa5309148a0 ***
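For anyone hitting this on PyTorch ≥ 0.4: losses became 0-dimensional tensors there, and torch.cat refuses 0-dim inputs. One possible fix for the failing line is torch.stack, which accepts 0-dim tensors. A sketch with stand-in losses (not the repo's actual variables):

```python
import torch

# Stand-ins for the per-stripe cross-entropy losses built from logits_list:
losses = [torch.tensor(1.0), torch.tensor(2.0), torch.tensor(0.5)]

# torch.cat(losses) raises "zero-dimensional tensor (at position 0) cannot
# be concatenated" on PyTorch >= 0.4; torch.stack works:
total = torch.sum(torch.stack(losses))
print(total.item())  # 3.5
```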

Different CMC for model_weight_file

I ran the test mode once with config.model_weight_file = "my model path", exp_dir = "", and once with config.model_weight_file = '', exp_dir = "". I got two noticeably different CMC results. I only have one model file, and I checked the loading code and found nothing different about how the model is loaded. What could the problem be?

Testing result on Market1501

Hi,

I downloaded your pre-trained model and tested it on the Market-1501 dataset following your README, using the default config of train_pcb.py with the "only_test" parameter. However, the resulting scores are lower than those in the release table of your README. The logs:
=========> Test on dataset: market1501 <=========

Extracting feature...
1000/1000 batches done, +2.88s, total 139.10s
Done, 139.30s
Computing distance...
Done, 2.00s
Computing scores...
Done, 10.08s
Single Query: [mAP: 64.96%], [cmc1: 88.66%], [cmc5: 94.95%], [cmc10: 96.35%]
Multi Query, Computing distance...
Done, 1.99s
Multi Query, Computing scores...
Done, 10.06s
Multi Query: [mAP: 73.32%], [cmc1: 91.69%], [cmc5: 96.53%], [cmc10: 97.68%]
Re-ranking distance...
Done, 73.66s
Computing scores for re-ranked distance...
Done, 10.90s
Re-ranked Single Query: [mAP: 85.22%], [cmc1: 91.57%], [cmc5: 94.98%], [cmc10: 96.20%]
Multi Query, Re-ranking distance...
Done, 65.32s
Multi Query, Computing scores for re-ranked distance...
Done, 10.88s
Re-ranked Multi Query: [mAP: 88.03%], [cmc1: 92.99%], [cmc5: 96.17%], [cmc10: 96.79%]

Issue of model

When I train a model on Market-1501, the training process seems fine.
In the training log, I get mAP 70%, cmc1 89% on the test dataset.

But when I run your test script and load my trained model for testing, I only get mAP 3%, cmc1 11%.

If I load your model, I can get your results.

Do you have any idea what causes this gap? One issue I notice is that my ckpt.pth model is ~200M, which is twice the size of your model (100M).
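The size difference is consistent with ckpt.pth being a full training checkpoint rather than bare weights: judging by the checkpoint keys that appear elsewhere in these issues ('state_dicts', 'scores', 'ep'), it bundles the optimizer state alongside the model, which for SGD with momentum roughly doubles the file. A sketch of that structure with a hypothetical toy model (not the repo's saving code):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# Hypothetical training checkpoint bundling model + optimizer states:
ckpt = {'state_dicts': [model.state_dict(), optimizer.state_dict()],
        'ep': 60, 'scores': None}

# To evaluate, unpack the model weights; loading ckpt itself as a state_dict
# fails with "Keys not found" for every parameter.
model.load_state_dict(ckpt['state_dicts'][0])
```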

About the input image size

Hello. If the input image is resized to 512×256, it reports an error. So can the height-to-width ratio only be 3:1? If I want a size of 512×256, what should I do?

The error:
Traceback (most recent call last):
File "script/experiment/train_pcb.py", line 498, in <module>
main()
File "script/experiment/train_pcb.py", line 436, in main
_, logits_list = model_w(ims_var)
File "/home/xiaozhenzhen/anaconda2/envs/pytorch-PCB/lib/python2.7/site-packages/torch/nn/modules/module.py", line 325, in __call__
result = self.forward(*input, **kwargs)
File "/home/xiaozhenzhen/anaconda2/envs/pytorch-PCB/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 68, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/home/xiaozhenzhen/anaconda2/envs/pytorch-PCB/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 78, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/home/xiaozhenzhen/anaconda2/envs/pytorch-PCB/lib/python2.7/site-packages/torch/nn/parallel/parallel_apply.py", line 67, in parallel_apply
raise output
AssertionError
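A likely reading of the AssertionError (assuming the model asserts that the feature-map height divides evenly into num_stripes, which matches the PCB design): ResNet-50 with last_conv_stride=1 downsamples by 16, so the feature-map height must be a multiple of 6. The ratio does not have to be 3:1; the height just has to survive the divisibility check:

```python
# ResNet-50 with last_conv_stride=1 downsamples the input by a factor of 16;
# PCB then needs the feature-map height to split evenly into num_stripes.
def height_ok(height, downsample=16, num_stripes=6):
    feat_h = height // downsample
    return feat_h % num_stripes == 0

print(height_ok(384))  # True:  384/16 = 24 -> 6 stripes of 4 rows
print(height_ok(512))  # False: 512/16 = 32 is not divisible by 6
print(height_ok(480))  # True:  480/16 = 30 -> 6 stripes of 5 rows
```

So for an input around 512×256, a height such as 480 (or a change to num_stripes) would pass the check; the width is unconstrained by this assertion.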

partition custom dataset

Hello, I wish to test your model on a custom dataset I created. How can I do that? How do I create the partitions file?

train on duke error

I trained on Duke. When I test, the result is:
Single Query: [mAP: 2.83%], [cmc1: 9.87%], [cmc5: 17.91%], [cmc10: 22.44%]
I don't know what's wrong. I changed nothing except for some errors caused by differences between Python 2 and Python 3.

I also trained on CUHK03 and Market-1501; all test results are very low.

the test output:
------------------------------------------------------------
cfg.__dict__
{'ckpt_file': '/beyond-part-models/exp_directory/ckpt.pth',
'crop_prob': 0,
'crop_ratio': 1,
'dataset': 'duke',
'epochs_per_val': 1,
'exp_dir': '/beyond-part-models/exp_directory',
'finetuned_params_lr': 0.01,
'im_mean': [0.486, 0.459, 0.408],
'im_std': [0.229, 0.224, 0.225],
'last_conv_stride': 1,
'local_conv_out_channels': 256,
'log_to_file': True,
'model_weight_file': '/beyond-part-models/exp_directory/duke/ckpt.pth',
'momentum': 0.9,
'new_params_lr': 0.1,
'num_stripes': 6,
'only_test': True,
'prefetch_threads': 2,
'resize_h_w': (384, 128),
'resume': False,
'run': 1,
'scale_im': True,
'seed': None,
'staircase_decay_at_epochs': (41,),
'staircase_decay_multiply_factor': 0.1,
'stderr_file': '/beyond-part-models/exp_directory/stderr_2018-03-26_14:31:06.txt',
'stdout_file': '/beyond-part-models/exp_directory/stdout_2018-03-26_14:31:06.txt',
'steps_per_log': 20,
'sys_device_ids': (0,),
'test_batch_size': 32,
'test_final_batch': True,
'test_mirror_type': None,
'test_set_kwargs': {'batch_dims': 'NCHW',
'batch_size': 32,
'final_batch': True,
'im_mean': [0.486, 0.459, 0.408],
'im_std': [0.229, 0.224, 0.225],
'mirror_type': None,
'name': 'duke',
'num_prefetch_threads': 2,
'part': 'test',
'prng': <module 'numpy.random' from 'anaconda3/lib/python3.6/site-packages/numpy/random/__init__.py'>,
'resize_h_w': (384, 128),
'scale': True,
'shuffle': False},
'test_shuffle': False,
'total_epochs': 60,
'train_batch_size': 64,
'train_final_batch': True,
'train_mirror_type': 'random',
'train_set_kwargs': {'batch_dims': 'NCHW',
'batch_size': 64,
'crop_prob': 0,
'crop_ratio': 1,
'final_batch': True,
'im_mean': [0.486, 0.459, 0.408],
'im_std': [0.229, 0.224, 0.225],
'mirror_type': 'random',
'name': 'duke',
'num_prefetch_threads': 2,
'part': 'trainval',
'prng': <module 'numpy.random' from 'anaconda3/lib/python3.6/site-packages/numpy/random/__init__.py'>,
'resize_h_w': (384, 128),
'scale': True,
'shuffle': True},
'train_shuffle': True,
'trainset_part': 'trainval',
'val_set_kwargs': {'batch_dims': 'NCHW',
'batch_size': 32,
'final_batch': True,
'im_mean': [0.486, 0.459, 0.408],
'im_std': [0.229, 0.224, 0.225],
'mirror_type': None,
'name': 'duke',
'num_prefetch_threads': 2,
'part': 'val',
'prng': <module 'numpy.random' from 'anaconda3/lib/python3.6/site-packages/numpy/random/__init__.py'>,
'resize_h_w': (384, 128),
'scale': True,
'shuffle': False},
'weight_decay': 0.0005}


duke trainval set

NO. Images: 16522
NO. IDs: 702


duke val set

NO. Images: 2401
NO. IDs: 100
NO. Query Images: 321
NO. Gallery Images: 2080
NO. Multi-query Images: 0


duke test set

NO. Images: 19889
NO. IDs: 1110
NO. Query Images: 2228
NO. Gallery Images: 17661
NO. Multi-query Images: 0

Keys not found in source state_dict:
base.conv1.weight, base.bn1.{weight,bias,running_mean,running_var},
base.layer1–layer4 (every block's conv1–3.weight, bn1–3.{weight,bias,running_mean,running_var}, and downsample.{0.weight,1.weight,1.bias,1.running_mean,1.running_var}),
local_conv.{weight,bias}, local_bn.{weight,bias,running_mean,running_var},
fc_list.0–5.{weight,bias}
(several hundred keys in total: every parameter and buffer of the model is reported as missing)
Keys not found in destination state_dict:
state_dicts
scores
ep
Loaded model weights from /beyond-part-models/exp_directory/duke/ckpt.pth

=========> Test on dataset: duke <=========

Extracting feature...
20/622 batches done, +12.88s, total 12.88s
40/622 batches done, +2.19s, total 15.07s
60/622 batches done, +2.06s, total 17.13s
80/622 batches done, +2.06s, total 19.19s
100/622 batches done, +2.01s, total 21.21s
120/622 batches done, +2.11s, total 23.32s
140/622 batches done, +2.17s, total 25.48s
160/622 batches done, +2.14s, total 27.62s
180/622 batches done, +1.96s, total 29.58s
200/622 batches done, +2.11s, total 31.69s
220/622 batches done, +2.17s, total 33.85s
240/622 batches done, +2.10s, total 35.95s
260/622 batches done, +2.10s, total 38.05s
280/622 batches done, +2.10s, total 40.15s
300/622 batches done, +2.14s, total 42.29s
320/622 batches done, +2.20s, total 44.49s
340/622 batches done, +2.06s, total 46.55s
360/622 batches done, +2.01s, total 48.56s
380/622 batches done, +2.01s, total 50.56s
400/622 batches done, +2.08s, total 52.65s
420/622 batches done, +1.98s, total 54.63s
440/622 batches done, +2.12s, total 56.74s
460/622 batches done, +2.11s, total 58.85s
480/622 batches done, +1.98s, total 60.83s
500/622 batches done, +2.07s, total 62.91s
520/622 batches done, +2.10s, total 65.00s
540/622 batches done, +2.20s, total 67.20s
560/622 batches done, +2.06s, total 69.27s
580/622 batches done, +2.05s, total 71.32s
600/622 batches done, +1.98s, total 73.30s
620/622 batches done, +2.12s, total 75.42s
Done, 75.74s
Computing distance...
Done, 0.53s
Computing scores...
User Warning: Version 0.18.1 is required for package scikit-learn, your current version is 0.19.1. As a result, the mAP score may not be totally correct. You can try pip uninstall scikit-learn and then pip install scikit-learn==0.18.1
Done, 11.04s
Single Query: [mAP: 2.83%], [cmc1: 9.87%], [cmc5: 17.91%], [cmc10: 22.44%]
Re-ranking distance...
Done, 76.46s
Computing scores for re-ranked distance...
User Warning: Version 0.18.1 is required for package scikit-learn, your current version is 0.19.1. As a result, the mAP score may not be totally correct. You can try pip uninstall scikit-learn and then pip install scikit-learn==0.18.1
Done, 11.38s
Re-ranked Single Query: [mAP: 5.20%], [cmc1: 12.07%], [cmc5: 17.55%], [cmc10: 20.69%]

my train log:
------------------------------------------------------------
cfg.__dict__
{'ckpt_file': '/data/home/lalai/experiments/beyond-part-models/exp_directory/duke/ckpt.pth',
'crop_prob': 0,
'crop_ratio': 1,
'dataset': 'duke',
'epochs_per_val': 2,
'exp_dir': '/data/home/lalai/experiments/beyond-part-models/exp_directory/duke',
'finetuned_params_lr': 0.01,
'im_mean': [0.486, 0.459, 0.408],
'im_std': [0.229, 0.224, 0.225],
'last_conv_stride': 1,
'local_conv_out_channels': 256,
'log_to_file': True,
'model_weight_file': '',
'momentum': 0.9,
'new_params_lr': 0.1,
'num_stripes': 6,
'only_test': False,
'prefetch_threads': 2,
'resize_h_w': (384, 128),
'resume': False,
'run': 1,
'scale_im': True,
'seed': None,
'staircase_decay_at_epochs': (41,),
'staircase_decay_multiply_factor': 0.1,
'stderr_file': '/data/home/lalai/experiments/beyond-part-models/exp_directory/duke/stderr_2018-03-24_14:35:29.txt',
'stdout_file': '/data/home/lalai/experiments/beyond-part-models/exp_directory/duke/stdout_2018-03-24_14:35:29.txt',
'steps_per_log': 20,
'sys_device_ids': (2,),
'test_batch_size': 32,
'test_final_batch': True,
'test_mirror_type': None,
'test_set_kwargs': {'batch_dims': 'NCHW',
'batch_size': 32,
'final_batch': True,
'im_mean': [0.486, 0.459, 0.408],
'im_std': [0.229, 0.224, 0.225],
'mirror_type': None,
'name': 'duke',
'num_prefetch_threads': 2,
'part': 'test',
'prng': <module 'numpy.random' from '/home/lalai/publib/anaconda3/lib/python3.6/site-packages/numpy/random/__init__.py'>,
'resize_h_w': (384, 128),
'scale': True,
'shuffle': False},
'test_shuffle': False,
'total_epochs': 60,
'train_batch_size': 64,
'train_final_batch': True,
'train_mirror_type': 'random',
'train_set_kwargs': {'batch_dims': 'NCHW',
'batch_size': 64,
'crop_prob': 0,
'crop_ratio': 1,
'final_batch': True,
'im_mean': [0.486, 0.459, 0.408],
'im_std': [0.229, 0.224, 0.225],
'mirror_type': 'random',
'name': 'duke',
'num_prefetch_threads': 2,
'part': 'trainval',
'prng': <module 'numpy.random' from '/home/lalai/publib/anaconda3/lib/python3.6/site-packages/numpy/random/__init__.py'>,
'resize_h_w': (384, 128),
'scale': True,
'shuffle': True},
'train_shuffle': True,
'trainset_part': 'trainval',
'val_set_kwargs': {'batch_dims': 'NCHW',
'batch_size': 32,
'final_batch': True,
'im_mean': [0.486, 0.459, 0.408],
'im_std': [0.229, 0.224, 0.225],
'mirror_type': None,
'name': 'duke',
'num_prefetch_threads': 2,
'part': 'val',
'prng': <module 'numpy.random' from '/home/lalai/publib/anaconda3/lib/python3.6/site-packages/numpy/random/__init__.py'>,
'resize_h_w': (384, 128),
'scale': True,
'shuffle': False},
'weight_decay': 0.0005}


duke trainval set

NO. Images: 16522
NO. IDs: 702


duke val set

NO. Images: 2401
NO. IDs: 100
NO. Query Images: 321
NO. Gallery Images: 2080
NO. Multi-query Images: 0


duke test set

NO. Images: 19889
NO. IDs: 1110
NO. Query Images: 2228
NO. Gallery Images: 17661
NO. Multi-query Images: 0

Step 20/Ep 1, 7.21s, loss 37.1529
Step 40/Ep 1, 8.47s, loss 36.8084
Step 60/Ep 1, 6.29s, loss 35.1561
Step 80/Ep 1, 8.03s, loss 33.5103
Step 100/Ep 1, 7.31s, loss 29.4746
Step 120/Ep 1, 7.33s, loss 26.2669
Step 140/Ep 1, 5.22s, loss 27.0751
Step 160/Ep 1, 1.23s, loss 27.4869
Step 180/Ep 1, 1.22s, loss 24.4910
Step 200/Ep 1, 1.09s, loss 22.6488
Step 220/Ep 1, 0.73s, loss 21.1609
Step 240/Ep 1, 0.98s, loss 21.7364

Ep 1, 1130.23s, loss 28.9022
Step 20/Ep 2, 1.03s, loss 17.9511
Step 40/Ep 2, 0.61s, loss 16.5232
Step 60/Ep 2, 1.13s, loss 14.6668
Step 80/Ep 2, 0.86s, loss 15.4651
Step 100/Ep 2, 1.06s, loss 14.9441
Step 120/Ep 2, 1.11s, loss 14.2632
Step 140/Ep 2, 1.16s, loss 13.4817
Step 160/Ep 2, 1.18s, loss 14.0035
Step 180/Ep 2, 1.49s, loss 14.5250
Step 200/Ep 2, 1.26s, loss 12.7967
Step 220/Ep 2, 1.26s, loss 12.6710
Step 240/Ep 2, 1.11s, loss 12.8433
Ep 2, 298.14s, loss 14.8193

===== Test on validation set =====

Extracting feature...
20/76 batches done, +4.90s, total 4.90s
40/76 batches done, +5.04s, total 9.94s
60/76 batches done, +5.29s, total 15.23s
Done, 19.23s
Computing distance...
Done, 0.16s
Computing scores...
User Warning: Version 0.18.1 is required for package scikit-learn, your current version is 0.19.1. As a result, the mAP score may not be totally correct. You can try pip uninstall scikit-learn and then pip install scikit-learn==0.18.1
Done, 0.34s
Single Query: [mAP: 73.16%], [cmc1: 84.74%], [cmc5: 91.90%], [cmc10: 94.08%]

Step 20/Ep 3, 1.23s, loss 8.8453
Step 40/Ep 3, 0.98s, loss 9.3370
Step 60/Ep 3, 1.22s, loss 8.8667
Step 80/Ep 3, 1.06s, loss 11.2704
Step 100/Ep 3, 1.05s, loss 10.0581
Step 120/Ep 3, 1.36s, loss 9.3945
Step 140/Ep 3, 1.26s, loss 11.2640
Step 160/Ep 3, 1.30s, loss 9.3276
Step 180/Ep 3, 1.29s, loss 8.6810
Step 200/Ep 3, 1.15s, loss 8.6792
Step 220/Ep 3, 0.98s, loss 8.5763
Step 240/Ep 3, 1.07s, loss 8.2343

Ep 3, 313.73s, loss 9.1303
Step 20/Ep 4, 1.24s, loss 6.6244
Step 40/Ep 4, 1.39s, loss 5.3875
Step 60/Ep 4, 1.11s, loss 6.1857
Step 80/Ep 4, 1.26s, loss 5.7646
Step 100/Ep 4, 1.14s, loss 7.1421
Step 120/Ep 4, 1.41s, loss 6.9737
Step 140/Ep 4, 1.47s, loss 5.8727
Step 160/Ep 4, 1.56s, loss 6.3184
Step 180/Ep 4, 1.53s, loss 7.8259
Step 200/Ep 4, 1.22s, loss 7.4525
Step 220/Ep 4, 1.35s, loss 5.0663
Step 240/Ep 4, 1.20s, loss 5.5142
Ep 4, 328.72s, loss 6.3097

===== Test on validation set =====

Extracting feature...
20/76 batches done, +5.85s, total 5.85s
40/76 batches done, +6.28s, total 12.14s
60/76 batches done, +6.13s, total 18.27s
Done, 23.35s
Computing distance...
Done, 0.12s
Computing scores...
User Warning: Version 0.18.1 is required for package scikit-learn, your current version is 0.19.1. As a result, the mAP score may not be totally correct. You can try pip uninstall scikit-learn and then pip install scikit-learn==0.18.1
Done, 0.32s
Single Query: [mAP: 81.02%], [cmc1: 91.28%], [cmc5: 95.33%], [cmc10: 96.88%]

Step 20/Ep 5, 1.38s, loss 6.7970
Step 40/Ep 5, 1.39s, loss 4.7243
Step 60/Ep 5, 1.41s, loss 4.6256
Step 80/Ep 5, 1.34s, loss 3.7932
Step 100/Ep 5, 1.41s, loss 4.1115
Step 120/Ep 5, 1.40s, loss 4.3463
Step 140/Ep 5, 1.30s, loss 4.1978
Step 160/Ep 5, 1.33s, loss 5.5110
Step 180/Ep 5, 1.13s, loss 4.9879
Step 200/Ep 5, 1.11s, loss 5.5221
Step 220/Ep 5, 1.39s, loss 4.6285
Step 240/Ep 5, 1.24s, loss 4.8592

Ep 5, 335.30s, loss 4.7707
Step 20/Ep 6, 1.32s, loss 4.5329
Step 40/Ep 6, 1.23s, loss 4.2338
Step 60/Ep 6, 1.33s, loss 4.3702
Step 80/Ep 6, 1.44s, loss 4.1733
Step 100/Ep 6, 1.46s, loss 2.2477
Step 120/Ep 6, 1.28s, loss 3.6955
Step 140/Ep 6, 1.52s, loss 5.0271
Step 160/Ep 6, 1.31s, loss 3.3362
Step 180/Ep 6, 1.43s, loss 4.1300
Step 200/Ep 6, 1.27s, loss 4.7450
Step 220/Ep 6, 1.24s, loss 4.1717
Step 240/Ep 6, 1.44s, loss 3.9225
Ep 6, 341.12s, loss 3.8365

===== Test on validation set =====

Extracting feature...
20/76 batches done, +7.81s, total 7.81s
40/76 batches done, +7.32s, total 15.13s
60/76 batches done, +7.79s, total 22.92s
Done, 28.82s
Computing distance...
Done, 0.16s
Computing scores...
User Warning: Version 0.18.1 is required for package scikit-learn, your current version is 0.19.1. As a result, the mAP score may not be totally correct. You can try pip uninstall scikit-learn and then pip install scikit-learn==0.18.1
Done, 0.33s
Single Query: [mAP: 85.10%], [cmc1: 93.46%], [cmc5: 97.51%], [cmc10: 97.51%]

Step 20/Ep 7, 1.48s, loss 3.9880
Step 40/Ep 7, 1.33s, loss 2.2409
Step 60/Ep 7, 1.56s, loss 3.0331
Step 80/Ep 7, 1.56s, loss 1.9485
Step 100/Ep 7, 1.46s, loss 3.5058
Step 120/Ep 7, 1.63s, loss 3.1262
Step 140/Ep 7, 1.47s, loss 2.9111
Step 160/Ep 7, 1.55s, loss 3.8994
Step 180/Ep 7, 1.42s, loss 3.1996
Step 200/Ep 7, 1.66s, loss 3.2328
Step 220/Ep 7, 1.51s, loss 3.3790
Step 240/Ep 7, 5.99s, loss 3.1472

Ep 7, 501.70s, loss 3.0710
Step 20/Ep 8, 4.73s, loss 2.3981
Step 40/Ep 8, 5.56s, loss 1.7520
Step 60/Ep 8, 5.06s, loss 1.9629
Step 80/Ep 8, 5.68s, loss 2.1359
Step 100/Ep 8, 5.65s, loss 2.4756
Step 120/Ep 8, 5.40s, loss 2.4838
Step 140/Ep 8, 5.00s, loss 2.5320
Step 160/Ep 8, 2.93s, loss 2.5898
Step 180/Ep 8, 1.96s, loss 2.8185
Step 200/Ep 8, 1.75s, loss 2.2167
Step 220/Ep 8, 1.83s, loss 2.7298
Step 240/Ep 8, 4.88s, loss 2.9156
Ep 8, 1127.06s, loss 2.4928

===== Test on validation set =====

Extracting feature...
20/76 batches done, +28.79s, total 28.79s
40/76 batches done, +28.92s, total 57.71s
60/76 batches done, +34.64s, total 92.35s
Done, 115.06s
Computing distance...
Done, 0.14s
Computing scores...
User Warning: Version 0.18.1 is required for package scikit-learn, your current version is 0.19.1. As a result, the mAP score may not be totally correct. You can try pip uninstall scikit-learn and then pip install scikit-learn==0.18.1
Done, 0.32s
Single Query: [mAP: 90.42%], [cmc1: 95.64%], [cmc5: 98.75%], [cmc10: 99.69%]

Step 20/Ep 9, 1.72s, loss 1.5292
Step 40/Ep 9, 1.72s, loss 2.7577
Step 60/Ep 9, 1.77s, loss 2.5052
Step 80/Ep 9, 1.84s, loss 1.9735
Step 100/Ep 9, 1.88s, loss 2.3530
Step 120/Ep 9, 1.83s, loss 1.8199
Step 140/Ep 9, 1.85s, loss 1.8501
Step 160/Ep 9, 1.57s, loss 2.0554
Step 180/Ep 9, 1.56s, loss 2.2496
Step 200/Ep 9, 1.59s, loss 2.2351
Step 220/Ep 9, 1.56s, loss 2.7041
Step 240/Ep 9, 1.48s, loss 2.2564

Ep 9, 441.08s, loss 2.1062
Step 20/Ep 10, 1.52s, loss 2.4145
Step 40/Ep 10, 1.57s, loss 1.4760
Step 60/Ep 10, 1.62s, loss 1.4859
Step 80/Ep 10, 1.56s, loss 2.5719
Step 100/Ep 10, 1.48s, loss 1.7289
Step 120/Ep 10, 5.73s, loss 0.8954
Step 140/Ep 10, 6.19s, loss 1.7027
Step 160/Ep 10, 6.96s, loss 1.9217
Step 180/Ep 10, 5.15s, loss 1.9179
Step 200/Ep 10, 1.78s, loss 1.5794
Step 220/Ep 10, 1.77s, loss 2.4520
Step 240/Ep 10, 1.83s, loss 2.1151
Ep 10, 733.30s, loss 1.9265

===== Test on validation set =====

Extracting feature...
20/76 batches done, +10.20s, total 10.20s
40/76 batches done, +9.98s, total 20.18s
60/76 batches done, +10.01s, total 30.19s
Done, 37.87s
Computing distance...
Done, 0.15s
Computing scores...
Done, 0.29s
Single Query: [mAP: 90.62%], [cmc1: 96.57%], [cmc5: 98.75%], [cmc10: 100.00%]

Step 20/Ep 11, 1.86s, loss 1.3455
Step 40/Ep 11, 1.89s, loss 1.3079
Step 60/Ep 11, 1.82s, loss 1.2694
Step 80/Ep 11, 1.84s, loss 1.4525
Step 100/Ep 11, 1.85s, loss 1.1908
Step 120/Ep 11, 1.84s, loss 1.6790
Step 140/Ep 11, 1.75s, loss 1.7347
Step 160/Ep 11, 1.77s, loss 1.8032
Step 180/Ep 11, 1.75s, loss 1.5180
Step 200/Ep 11, 1.65s, loss 1.4147
Step 220/Ep 11, 1.62s, loss 1.1507
Step 240/Ep 11, 1.51s, loss 1.5878

Ep 11, 445.55s, loss 1.6031
Step 20/Ep 12, 6.93s, loss 1.4593
Step 40/Ep 12, 6.40s, loss 0.9982
Step 60/Ep 12, 6.67s, loss 0.8486
Step 80/Ep 12, 6.05s, loss 0.8264
Step 100/Ep 12, 3.63s, loss 0.8150
Step 120/Ep 12, 6.04s, loss 1.0049
Step 140/Ep 12, 6.99s, loss 1.3145
Step 160/Ep 12, 5.42s, loss 1.0128
Step 180/Ep 12, 5.75s, loss 1.2988
Step 200/Ep 12, 1.52s, loss 1.4204
Step 220/Ep 12, 1.52s, loss 1.2283
Step 240/Ep 12, 1.66s, loss 1.3039
Ep 12, 1176.28s, loss 1.3498

===== Test on validation set =====

Extracting feature...
20/76 batches done, +8.48s, total 8.48s
40/76 batches done, +8.42s, total 16.89s
60/76 batches done, +8.38s, total 25.27s
Done, 31.77s
Computing distance...
Done, 0.13s
Computing scores...
Done, 0.32s
Single Query: [mAP: 93.44%], [cmc1: 98.13%], [cmc5: 99.38%], [cmc10: 100.00%]

Step 20/Ep 13, 1.50s, loss 1.4066
Step 40/Ep 13, 1.48s, loss 1.2050
Step 60/Ep 13, 1.50s, loss 0.7747
Step 80/Ep 13, 1.63s, loss 0.8701
Step 100/Ep 13, 1.58s, loss 1.3612
Step 120/Ep 13, 5.54s, loss 0.9830
Step 140/Ep 13, 6.34s, loss 0.9768
Step 160/Ep 13, 6.41s, loss 0.8179
Step 180/Ep 13, 5.88s, loss 1.3573
Step 200/Ep 13, 5.62s, loss 1.4278
Step 220/Ep 13, 4.59s, loss 1.7004
Step 240/Ep 13, 5.99s, loss 1.5690

Ep 13, 1136.21s, loss 1.2088
Step 20/Ep 14, 6.87s, loss 1.7844
Step 40/Ep 14, 1.80s, loss 1.5293
Step 60/Ep 14, 1.82s, loss 0.9275
Step 80/Ep 14, 1.81s, loss 1.2436
Step 100/Ep 14, 1.76s, loss 1.1813
Step 120/Ep 14, 1.85s, loss 0.9720
Step 140/Ep 14, 1.78s, loss 1.0907
Step 160/Ep 14, 1.81s, loss 1.6212
Step 180/Ep 14, 1.83s, loss 1.4766
Step 200/Ep 14, 1.86s, loss 1.3269
Step 220/Ep 14, 1.83s, loss 1.2039
Step 240/Ep 14, 1.80s, loss 1.5747
Ep 14, 624.82s, loss 1.2893

===== Test on validation set =====

Extracting feature...
20/76 batches done, +9.89s, total 9.89s
40/76 batches done, +9.91s, total 19.80s
60/76 batches done, +9.73s, total 29.53s
Done, 36.87s
Computing distance...
Done, 0.15s
Computing scores...
Done, 0.29s
Single Query: [mAP: 95.71%], [cmc1: 99.07%], [cmc5: 100.00%], [cmc10: 100.00%]

Step 20/Ep 15, 1.83s, loss 1.4634
Step 40/Ep 15, 1.77s, loss 1.1374
Step 60/Ep 15, 1.80s, loss 0.6987
Step 80/Ep 15, 1.84s, loss 0.9451
Step 100/Ep 15, 1.81s, loss 1.2228
Step 120/Ep 15, 1.80s, loss 1.4969
Step 140/Ep 15, 1.81s, loss 0.7959
Step 160/Ep 15, 1.76s, loss 0.7415
Step 180/Ep 15, 1.83s, loss 1.1018
Step 200/Ep 15, 1.76s, loss 0.6245
Step 220/Ep 15, 1.76s, loss 1.1938
Step 240/Ep 15, 1.79s, loss 0.9658

Ep 15, 463.00s, loss 1.0809
Step 20/Ep 16, 1.76s, loss 1.3684
Step 40/Ep 16, 1.72s, loss 0.6503
Step 60/Ep 16, 1.83s, loss 0.9544
Step 80/Ep 16, 1.73s, loss 1.2419
Step 100/Ep 16, 1.71s, loss 1.1132
Step 120/Ep 16, 1.79s, loss 1.4263
Step 140/Ep 16, 1.84s, loss 1.0175
Step 160/Ep 16, 1.79s, loss 0.9242
Step 180/Ep 16, 1.80s, loss 1.3025
Step 200/Ep 16, 1.72s, loss 1.6961
Step 220/Ep 16, 1.73s, loss 1.0583
Step 240/Ep 16, 1.77s, loss 0.8018
Ep 16, 458.77s, loss 1.0979

===== Test on validation set =====

Extracting feature...
20/76 batches done, +9.80s, total 9.80s
40/76 batches done, +9.35s, total 19.15s
60/76 batches done, +9.60s, total 28.75s
Done, 36.05s
Computing distance...
Done, 0.12s
Computing scores...
Done, 0.29s
Single Query: [mAP: 95.71%], [cmc1: 99.69%], [cmc5: 100.00%], [cmc10: 100.00%]

Step 20/Ep 17, 1.82s, loss 1.7658
Step 40/Ep 17, 1.86s, loss 0.9665
Step 60/Ep 17, 1.82s, loss 1.3321
Step 80/Ep 17, 1.82s, loss 1.0438
Step 100/Ep 17, 1.87s, loss 1.0985
Step 120/Ep 17, 1.77s, loss 1.6692
Step 140/Ep 17, 1.79s, loss 1.2997
Step 160/Ep 17, 1.81s, loss 1.4560
Step 180/Ep 17, 1.80s, loss 1.5988
Step 200/Ep 17, 1.77s, loss 0.8912
Step 220/Ep 17, 1.82s, loss 2.2629
Step 240/Ep 17, 1.78s, loss 0.7568

Ep 17, 463.22s, loss 1.2547
Step 20/Ep 18, 1.85s, loss 1.4696
Step 40/Ep 18, 1.73s, loss 1.0148
Step 60/Ep 18, 1.82s, loss 0.6331
Step 80/Ep 18, 1.77s, loss 1.6283
Step 100/Ep 18, 1.72s, loss 0.6603
Step 120/Ep 18, 1.77s, loss 0.8993
Step 140/Ep 18, 1.74s, loss 0.9771
Step 160/Ep 18, 1.77s, loss 0.9187
Step 180/Ep 18, 1.86s, loss 0.9614
Step 200/Ep 18, 1.81s, loss 0.7266
Step 220/Ep 18, 6.24s, loss 1.2907
Step 240/Ep 18, 1.76s, loss 1.1530
Ep 18, 565.67s, loss 1.1337

===== Test on validation set =====

Extracting feature...
20/76 batches done, +10.06s, total 10.06s
40/76 batches done, +10.20s, total 20.25s
60/76 batches done, +9.76s, total 30.01s
Done, 37.65s
Computing distance...
Done, 0.16s
Computing scores...
Done, 0.32s
Single Query: [mAP: 96.66%], [cmc1: 99.07%], [cmc5: 100.00%], [cmc10: 100.00%]

Step 20/Ep 19, 1.82s, loss 1.4631
Step 40/Ep 19, 1.58s, loss 1.0681
Step 60/Ep 19, 1.55s, loss 1.0982
Step 80/Ep 19, 1.56s, loss 0.4282
Step 100/Ep 19, 1.59s, loss 0.6797
Step 120/Ep 19, 6.49s, loss 0.8900
Step 140/Ep 19, 5.68s, loss 0.7002
Step 160/Ep 19, 6.40s, loss 0.7334
Step 180/Ep 19, 3.80s, loss 0.7241
Step 200/Ep 19, 5.70s, loss 1.2490
Step 220/Ep 19, 5.93s, loss 0.6562
Step 240/Ep 19, 4.90s, loss 0.8398

Ep 19, 956.42s, loss 0.9120
Step 20/Ep 20, 5.76s, loss 1.6519
Step 40/Ep 20, 6.70s, loss 1.1966
Step 60/Ep 20, 5.60s, loss 0.7015
Step 80/Ep 20, 4.90s, loss 0.8888
Step 100/Ep 20, 6.05s, loss 0.6650
Step 120/Ep 20, 6.26s, loss 1.0624
Step 140/Ep 20, 6.83s, loss 0.7333
Step 160/Ep 20, 5.22s, loss 0.7053
Step 180/Ep 20, 5.55s, loss 0.7561
Step 200/Ep 20, 7.12s, loss 1.2603
Step 220/Ep 20, 5.59s, loss 1.3975
Step 240/Ep 20, 6.27s, loss 1.5382
Ep 20, 1535.54s, loss 1.1708

===== Test on validation set =====

Extracting feature...
20/76 batches done, +41.86s, total 41.86s
40/76 batches done, +37.85s, total 79.71s
60/76 batches done, +39.37s, total 119.08s
Done, 141.53s
Computing distance...
Done, 0.09s
Computing scores...
Done, 0.29s
Single Query: [mAP: 96.57%], [cmc1: 99.07%], [cmc5: 99.69%], [cmc10: 100.00%]

Step 20/Ep 21, 4.60s, loss 1.3126
Step 40/Ep 21, 4.54s, loss 0.8253
Step 60/Ep 21, 4.65s, loss 1.2886
Step 80/Ep 21, 3.58s, loss 1.0457
Step 100/Ep 21, 4.79s, loss 0.5526
Step 120/Ep 21, 4.95s, loss 0.7717
Step 140/Ep 21, 5.09s, loss 1.2575
Step 160/Ep 21, 2.61s, loss 0.6418
Step 180/Ep 21, 3.93s, loss 1.4424
Step 200/Ep 21, 4.13s, loss 0.8641
Step 220/Ep 21, 3.79s, loss 0.5314
Step 240/Ep 21, 4.01s, loss 0.7727

Ep 21, 998.29s, loss 0.9790
Step 20/Ep 22, 3.74s, loss 1.2631
Step 40/Ep 22, 2.56s, loss 1.0863
Step 60/Ep 22, 1.56s, loss 1.2666
Step 80/Ep 22, 3.10s, loss 0.7895
Step 100/Ep 22, 4.87s, loss 0.7550
Step 120/Ep 22, 3.28s, loss 0.9252
Step 140/Ep 22, 4.11s, loss 0.5022
Step 160/Ep 22, 3.97s, loss 0.5220
Step 180/Ep 22, 4.79s, loss 0.7942
Step 200/Ep 22, 1.55s, loss 0.8698
Step 220/Ep 22, 6.53s, loss 0.6452
Step 240/Ep 22, 0.57s, loss 0.5758
Ep 22, 745.33s, loss 0.8412

===== Test on validation set =====

Extracting feature...
20/76 batches done, +2.03s, total 2.03s
40/76 batches done, +1.90s, total 3.93s
60/76 batches done, +1.91s, total 5.84s
Done, 7.30s
Computing distance...
Done, 0.03s
Computing scores...
Done, 0.29s
Single Query: [mAP: 98.02%], [cmc1: 100.00%], [cmc5: 100.00%], [cmc10: 100.00%]

Step 20/Ep 23, 0.57s, loss 1.2071
Step 40/Ep 23, 0.58s, loss 0.8693
Step 60/Ep 23, 0.58s, loss 1.1442
Step 80/Ep 23, 0.58s, loss 0.7959
Step 100/Ep 23, 0.58s, loss 0.7636
Step 120/Ep 23, 0.57s, loss 1.2750
Step 140/Ep 23, 0.57s, loss 0.5597
Step 160/Ep 23, 0.58s, loss 0.4942
Step 180/Ep 23, 0.60s, loss 0.6408
Step 200/Ep 23, 0.58s, loss 0.9767
Step 220/Ep 23, 0.62s, loss 0.5247
Step 240/Ep 23, 0.58s, loss 0.9156

Ep 23, 149.57s, loss 0.9495
Step 20/Ep 24, 0.58s, loss 1.2844
Step 40/Ep 24, 0.58s, loss 1.1613
Step 60/Ep 24, 0.58s, loss 0.4354
Step 80/Ep 24, 0.58s, loss 0.6540
Step 100/Ep 24, 0.58s, loss 0.3444
Step 120/Ep 24, 0.58s, loss 0.6317
Step 140/Ep 24, 0.58s, loss 0.7375
Step 160/Ep 24, 0.58s, loss 0.6753
Step 180/Ep 24, 0.58s, loss 0.5427
Step 200/Ep 24, 0.58s, loss 0.5830
Step 220/Ep 24, 0.59s, loss 0.6239
Step 240/Ep 24, 0.58s, loss 0.6407
Ep 24, 149.60s, loss 0.7288

===== Test on validation set =====

Extracting feature...
20/76 batches done, +2.02s, total 2.02s
40/76 batches done, +1.92s, total 3.95s
60/76 batches done, +1.92s, total 5.87s
Done, 7.33s
Computing distance...
Done, 0.04s
Computing scores...
Done, 0.29s
Single Query: [mAP: 98.60%], [cmc1: 100.00%], [cmc5: 100.00%], [cmc10: 100.00%]

Step 20/Ep 25, 0.57s, loss 0.8397
Step 40/Ep 25, 0.58s, loss 0.4416
Step 60/Ep 25, 0.57s, loss 0.6702
Step 80/Ep 25, 0.58s, loss 0.4207
Step 100/Ep 25, 0.57s, loss 0.3993
Step 120/Ep 25, 0.58s, loss 0.8637
Step 140/Ep 25, 0.58s, loss 0.5166
Step 160/Ep 25, 0.58s, loss 0.4670
Step 180/Ep 25, 0.58s, loss 0.8825
Step 200/Ep 25, 0.57s, loss 0.9797
Step 220/Ep 25, 0.58s, loss 0.6011
Step 240/Ep 25, 0.58s, loss 0.8492

Ep 25, 149.00s, loss 0.6469
Step 20/Ep 26, 0.57s, loss 0.4974
Step 40/Ep 26, 0.58s, loss 0.3895
Step 60/Ep 26, 0.58s, loss 0.2976
Step 80/Ep 26, 0.59s, loss 0.5827
Step 100/Ep 26, 0.58s, loss 0.4321
Step 120/Ep 26, 0.59s, loss 0.4128
Step 140/Ep 26, 0.58s, loss 0.3708
Step 160/Ep 26, 0.57s, loss 0.5052
Step 180/Ep 26, 0.58s, loss 0.6423
Step 200/Ep 26, 0.58s, loss 0.4070
Step 220/Ep 26, 0.58s, loss 0.4688
Step 240/Ep 26, 0.58s, loss 0.4549
Ep 26, 149.93s, loss 0.4671

===== Test on validation set =====

Extracting feature...
20/76 batches done, +2.04s, total 2.04s
40/76 batches done, +1.93s, total 3.97s
60/76 batches done, +1.93s, total 5.90s
Done, 7.37s
Computing distance...
Done, 0.03s
Computing scores...
Done, 0.31s
Single Query: [mAP: 99.32%], [cmc1: 100.00%], [cmc5: 100.00%], [cmc10: 100.00%]

Step 20/Ep 27, 0.59s, loss 0.7168
Step 40/Ep 27, 0.60s, loss 0.9161
Step 60/Ep 27, 0.57s, loss 0.5513
Step 80/Ep 27, 0.58s, loss 0.3205
Step 100/Ep 27, 0.57s, loss 0.5772
Step 120/Ep 27, 0.57s, loss 0.2488
Step 140/Ep 27, 0.57s, loss 0.2286
Step 160/Ep 27, 0.58s, loss 0.2414
Step 180/Ep 27, 0.58s, loss 0.5102
Step 200/Ep 27, 0.57s, loss 0.2667
Step 220/Ep 27, 0.58s, loss 0.2900
Step 240/Ep 27, 0.58s, loss 0.1698

Ep 27, 149.58s, loss 0.4515
Step 20/Ep 28, 0.58s, loss 1.8641
Step 40/Ep 28, 0.58s, loss 0.7913
Step 60/Ep 28, 0.59s, loss 1.3012
Step 80/Ep 28, 0.60s, loss 0.8288
Step 100/Ep 28, 0.58s, loss 0.7013
Step 120/Ep 28, 0.58s, loss 0.8126
Step 140/Ep 28, 0.58s, loss 0.6532
Step 160/Ep 28, 0.58s, loss 0.6512
Step 180/Ep 28, 0.58s, loss 0.8726
Step 200/Ep 28, 0.58s, loss 0.7502
Step 220/Ep 28, 0.57s, loss 0.4386
Step 240/Ep 28, 0.58s, loss 0.5672
Ep 28, 149.87s, loss 0.7729

===== Test on validation set =====

Extracting feature...
20/76 batches done, +2.06s, total 2.06s
40/76 batches done, +1.93s, total 3.99s
60/76 batches done, +1.93s, total 5.93s
Done, 7.40s
Computing distance...
Done, 0.05s
Computing scores...
Done, 0.28s
Single Query: [mAP: 98.71%], [cmc1: 100.00%], [cmc5: 100.00%], [cmc10: 100.00%]

Step 20/Ep 29, 0.58s, loss 0.3540
Step 40/Ep 29, 0.58s, loss 0.6476
Step 60/Ep 29, 0.57s, loss 0.7212
Step 80/Ep 29, 0.58s, loss 0.5859
Step 100/Ep 29, 0.58s, loss 0.4783
Step 120/Ep 29, 0.58s, loss 0.7391
Step 140/Ep 29, 0.57s, loss 0.3695
Step 160/Ep 29, 0.58s, loss 0.8245
Step 180/Ep 29, 0.58s, loss 0.6285
Step 200/Ep 29, 0.58s, loss 0.8962
Step 220/Ep 29, 0.57s, loss 0.6171
Step 240/Ep 29, 0.57s, loss 0.7354

Ep 29, 149.03s, loss 0.6326
Step 20/Ep 30, 0.57s, loss 0.9624
Step 40/Ep 30, 0.58s, loss 0.8231
Step 60/Ep 30, 0.58s, loss 0.8217
Step 80/Ep 30, 0.57s, loss 0.5845
Step 100/Ep 30, 0.57s, loss 1.1472
Step 120/Ep 30, 0.58s, loss 0.8399
Step 140/Ep 30, 0.58s, loss 0.9641
Step 160/Ep 30, 0.58s, loss 0.6502
Step 180/Ep 30, 0.58s, loss 1.1148
Step 200/Ep 30, 0.58s, loss 0.9405
Step 220/Ep 30, 0.58s, loss 1.0024
Step 240/Ep 30, 0.58s, loss 0.9224
Ep 30, 149.22s, loss 1.0070

===== Test on validation set =====

Extracting feature...
20/76 batches done, +2.04s, total 2.04s
40/76 batches done, +1.93s, total 3.97s
60/76 batches done, +1.90s, total 5.87s
Done, 7.31s
Computing distance...
Done, 0.05s
Computing scores...
Done, 0.40s
Single Query: [mAP: 97.42%], [cmc1: 99.38%], [cmc5: 99.69%], [cmc10: 100.00%]

Step 20/Ep 31, 0.58s, loss 2.6566
Step 40/Ep 31, 0.58s, loss 1.8471
Step 60/Ep 31, 0.58s, loss 1.6894
Step 80/Ep 31, 0.58s, loss 1.0760
Step 100/Ep 31, 0.59s, loss 2.1598
Step 120/Ep 31, 0.58s, loss 1.6868
Step 140/Ep 31, 0.58s, loss 1.0951
Step 160/Ep 31, 0.58s, loss 1.6602
Step 180/Ep 31, 0.58s, loss 1.1196
Step 200/Ep 31, 0.57s, loss 1.5896
Step 220/Ep 31, 0.58s, loss 1.6249
Step 240/Ep 31, 0.61s, loss 1.7398

Ep 31, 149.67s, loss 1.6903
Step 20/Ep 32, 0.58s, loss 2.8480
Step 40/Ep 32, 0.57s, loss 2.7165
Step 60/Ep 32, 0.57s, loss 1.8458
Step 80/Ep 32, 0.58s, loss 1.6278
Step 100/Ep 32, 0.58s, loss 1.6981
Step 120/Ep 32, 0.58s, loss 1.7396
Step 140/Ep 32, 0.58s, loss 1.4146
Step 160/Ep 32, 0.58s, loss 1.8949
Step 180/Ep 32, 0.58s, loss 1.7310
Step 200/Ep 32, 0.58s, loss 1.1906
Step 220/Ep 32, 0.58s, loss 0.8951
Step 240/Ep 32, 0.58s, loss 1.4697
Ep 32, 149.30s, loss 1.6267

===== Test on validation set =====

Extracting feature...
20/76 batches done, +2.05s, total 2.05s
40/76 batches done, +1.94s, total 3.99s
60/76 batches done, +1.94s, total 5.92s
Done, 7.38s
Computing distance...
Done, 0.05s
Computing scores...
Done, 0.30s
Single Query: [mAP: 96.76%], [cmc1: 98.44%], [cmc5: 99.69%], [cmc10: 100.00%]

Step 20/Ep 33, 0.58s, loss 1.1986
Step 40/Ep 33, 0.58s, loss 0.6627
Step 60/Ep 33, 0.58s, loss 0.7759
Step 80/Ep 33, 0.58s, loss 0.7220
Step 100/Ep 33, 0.57s, loss 0.7438
Step 120/Ep 33, 0.59s, loss 0.6115
Step 140/Ep 33, 0.57s, loss 0.8868
Step 160/Ep 33, 0.58s, loss 0.6064
Step 180/Ep 33, 0.58s, loss 0.8736
Step 200/Ep 33, 0.59s, loss 0.6104
Step 220/Ep 33, 0.58s, loss 0.8025
Step 240/Ep 33, 0.58s, loss 0.5442

Ep 33, 149.37s, loss 0.7536
Step 20/Ep 34, 0.58s, loss 0.8948
Step 40/Ep 34, 0.58s, loss 1.0146
Step 60/Ep 34, 0.57s, loss 0.9390
Step 80/Ep 34, 0.59s, loss 0.7427
Step 100/Ep 34, 0.57s, loss 0.4940
Step 120/Ep 34, 0.57s, loss 0.4074
Step 140/Ep 34, 0.58s, loss 0.4536
Step 160/Ep 34, 0.58s, loss 0.7832
Step 180/Ep 34, 0.58s, loss 0.2418
Step 200/Ep 34, 0.58s, loss 0.4310
Step 220/Ep 34, 0.57s, loss 0.3548
Step 240/Ep 34, 0.58s, loss 0.4576
Ep 34, 149.39s, loss 0.5633

===== Test on validation set =====

Extracting feature...
20/76 batches done, +2.08s, total 2.08s
40/76 batches done, +1.99s, total 4.07s
60/76 batches done, +1.98s, total 6.06s
Done, 7.65s
Computing distance...
Done, 0.03s
Computing scores...
Done, 0.31s
Single Query: [mAP: 99.29%], [cmc1: 100.00%], [cmc5: 100.00%], [cmc10: 100.00%]

Step 20/Ep 35, 0.59s, loss 0.9558
Step 40/Ep 35, 0.60s, loss 0.3903
Step 60/Ep 35, 0.59s, loss 0.3868
Step 80/Ep 35, 0.61s, loss 0.3436
Step 100/Ep 35, 0.59s, loss 0.2549
Step 120/Ep 35, 0.61s, loss 0.2884
Step 140/Ep 35, 0.59s, loss 0.1748
Step 160/Ep 35, 0.60s, loss 0.1456
Step 180/Ep 35, 0.60s, loss 0.2465
Step 200/Ep 35, 0.60s, loss 0.2020
Step 220/Ep 35, 0.57s, loss 0.1251
Step 240/Ep 35, 0.59s, loss 0.5188

Ep 35, 153.27s, loss 0.2922
Step 20/Ep 36, 0.59s, loss 0.3976
Step 40/Ep 36, 0.59s, loss 0.1855
Step 60/Ep 36, 0.61s, loss 0.1755
Step 80/Ep 36, 0.57s, loss 0.1151
Step 100/Ep 36, 0.60s, loss 0.1553
Step 120/Ep 36, 0.59s, loss 0.2472
Step 140/Ep 36, 0.59s, loss 0.3345
Step 160/Ep 36, 0.58s, loss 0.1930
Step 180/Ep 36, 0.59s, loss 0.1125
Step 200/Ep 36, 0.59s, loss 0.0912
Step 220/Ep 36, 0.60s, loss 0.1325
Step 240/Ep 36, 0.60s, loss 0.1705
Ep 36, 153.33s, loss 0.2195

===== Test on validation set =====

Extracting feature...
20/76 batches done, +2.08s, total 2.08s
40/76 batches done, +2.01s, total 4.08s
60/76 batches done, +2.00s, total 6.08s
Done, 7.76s
Computing distance...
Done, 0.03s
Computing scores...
Done, 0.32s
Single Query: [mAP: 99.77%], [cmc1: 100.00%], [cmc5: 100.00%], [cmc10: 100.00%]

Step 20/Ep 37, 0.57s, loss 0.3761
Step 40/Ep 37, 0.58s, loss 0.0988
Step 60/Ep 37, 0.58s, loss 0.1217
Step 80/Ep 37, 0.60s, loss 0.2060
Step 100/Ep 37, 0.58s, loss 0.1329
Step 120/Ep 37, 0.59s, loss 0.1196
Step 140/Ep 37, 0.61s, loss 0.0908
Step 160/Ep 37, 0.61s, loss 0.1273
Step 180/Ep 37, 0.59s, loss 0.1578
Step 200/Ep 37, 0.59s, loss 0.1790
Step 220/Ep 37, 0.59s, loss 0.0905
Step 240/Ep 37, 0.68s, loss 0.1255

Ep 37, 154.28s, loss 0.1501
Step 20/Ep 38, 0.58s, loss 0.1560
Step 40/Ep 38, 0.61s, loss 0.2213
Step 60/Ep 38, 0.61s, loss 0.1524
Step 80/Ep 38, 0.60s, loss 0.1111
Step 100/Ep 38, 0.58s, loss 0.2225
Step 120/Ep 38, 0.61s, loss 0.2908
Step 140/Ep 38, 0.61s, loss 0.2887
Step 160/Ep 38, 0.59s, loss 0.2227
Step 180/Ep 38, 0.60s, loss 0.1886
Step 200/Ep 38, 0.60s, loss 0.1447
Step 220/Ep 38, 0.62s, loss 0.1576
Step 240/Ep 38, 0.60s, loss 0.1302
Ep 38, 157.65s, loss 0.1481

===== Test on validation set =====

Extracting feature...
20/76 batches done, +2.07s, total 2.07s
40/76 batches done, +1.97s, total 4.05s
60/76 batches done, +2.02s, total 6.07s
Done, 7.63s
Computing distance...
Done, 0.03s
Computing scores...
Done, 0.35s
Single Query: [mAP: 99.94%], [cmc1: 100.00%], [cmc5: 100.00%], [cmc10: 100.00%]

Step 20/Ep 39, 0.60s, loss 0.1304
Step 40/Ep 39, 0.59s, loss 0.0888
Step 60/Ep 39, 0.61s, loss 0.1322
Step 80/Ep 39, 0.62s, loss 0.1618
Step 100/Ep 39, 0.61s, loss 0.1251
Step 120/Ep 39, 0.58s, loss 0.2243
Step 140/Ep 39, 0.58s, loss 0.1110
Step 160/Ep 39, 0.59s, loss 0.0727
Step 180/Ep 39, 0.58s, loss 0.1053
Step 200/Ep 39, 0.61s, loss 0.1669
Step 220/Ep 39, 0.78s, loss 0.0773
Step 240/Ep 39, 0.58s, loss 0.1061

Ep 39, 155.61s, loss 0.1410
Step 20/Ep 40, 0.59s, loss 0.2788
Step 40/Ep 40, 0.61s, loss 0.3846
Step 60/Ep 40, 0.62s, loss 0.1863
Step 80/Ep 40, 0.59s, loss 0.2020
Step 100/Ep 40, 0.65s, loss 0.1071
Step 120/Ep 40, 0.61s, loss 0.1945
Step 140/Ep 40, 0.59s, loss 0.1115
Step 160/Ep 40, 0.58s, loss 0.0878
Step 180/Ep 40, 0.58s, loss 0.1581
Step 200/Ep 40, 0.58s, loss 0.1706
Step 220/Ep 40, 0.58s, loss 0.1358
Step 240/Ep 40, 0.59s, loss 0.1271
Ep 40, 155.86s, loss 0.1821

===== Test on validation set =====

Extracting feature...
20/76 batches done, +2.13s, total 2.13s
40/76 batches done, +2.00s, total 4.12s
60/76 batches done, +2.06s, total 6.19s
Done, 7.80s
Computing distance...
Done, 0.03s
Computing scores...
Done, 0.36s
Single Query: [mAP: 99.94%], [cmc1: 100.00%], [cmc5: 100.00%], [cmc10: 100.00%]

=====> Param group 0: lr adjusted to 0.001
=====> Param group 1: lr adjusted to 0.01
Step 20/Ep 41, 0.58s, loss 0.1239
Step 40/Ep 41, 0.59s, loss 0.0585
Step 60/Ep 41, 0.58s, loss 0.0487
Step 80/Ep 41, 0.60s, loss 0.0555
Step 100/Ep 41, 0.58s, loss 0.0566
Step 120/Ep 41, 0.60s, loss 0.1117
Step 140/Ep 41, 0.59s, loss 0.0532
Step 160/Ep 41, 0.60s, loss 0.1131
Step 180/Ep 41, 0.59s, loss 0.0902
Step 200/Ep 41, 0.61s, loss 0.0864
Step 220/Ep 41, 0.58s, loss 0.1161
Step 240/Ep 41, 0.60s, loss 0.1248
Ep 41, 156.11s, loss 0.0873
Step 20/Ep 42, 0.99s, loss 0.0511
Step 40/Ep 42, 0.81s, loss 0.0567
Step 60/Ep 42, 0.98s, loss 0.0792
Step 80/Ep 42, 1.00s, loss 0.0447
Step 100/Ep 42, 1.07s, loss 0.0755
Step 120/Ep 42, 0.87s, loss 0.0544
Step 140/Ep 42, 0.91s, loss 0.0523
Step 160/Ep 42, 0.97s, loss 0.0823
Step 180/Ep 42, 1.17s, loss 0.0497
Step 200/Ep 42, 1.27s, loss 0.1415
Step 220/Ep 42, 1.23s, loss 0.0573
Step 240/Ep 42, 1.24s, loss 0.0435
Ep 42, 263.17s, loss 0.0631

===== Test on validation set =====

Extracting feature...
20/76 batches done, +5.96s, total 5.96s
40/76 batches done, +6.65s, total 12.61s
60/76 batches done, +6.46s, total 19.07s
Done, 23.85s
Computing distance...
Done, 0.03s
Computing scores...
Done, 0.28s
Single Query: [mAP: 99.97%], [cmc1: 100.00%], [cmc5: 100.00%], [cmc10: 100.00%]

Step 20/Ep 43, 1.32s, loss 0.0643
Step 40/Ep 43, 1.28s, loss 0.0546
Step 60/Ep 43, 1.36s, loss 0.0553
Step 80/Ep 43, 1.36s, loss 0.0500
Step 100/Ep 43, 1.29s, loss 0.0509
Step 120/Ep 43, 1.26s, loss 0.0457
Step 140/Ep 43, 1.39s, loss 0.0600
Step 160/Ep 43, 1.34s, loss 0.0494
Step 180/Ep 43, 1.29s, loss 0.0636
Step 200/Ep 43, 1.20s, loss 0.0638
Step 220/Ep 43, 1.33s, loss 0.0661
Step 240/Ep 43, 1.34s, loss 0.0416

Ep 43, 327.62s, loss 0.0582
Step 20/Ep 44, 1.30s, loss 0.0609
Step 40/Ep 44, 1.39s, loss 0.0514
Step 60/Ep 44, 1.20s, loss 0.0526
Step 80/Ep 44, 1.30s, loss 0.0723
Step 100/Ep 44, 1.24s, loss 0.0609
Step 120/Ep 44, 1.31s, loss 0.0429
Step 140/Ep 44, 1.31s, loss 0.0607
Step 160/Ep 44, 1.25s, loss 0.0522
Step 180/Ep 44, 1.58s, loss 0.0669
Step 200/Ep 44, 1.53s, loss 0.0583
Step 220/Ep 44, 1.65s, loss 0.0540
Step 240/Ep 44, 1.37s, loss 0.0747
Ep 44, 362.51s, loss 0.0572

===== Test on validation set =====

Extracting feature...
20/76 batches done, +7.66s, total 7.66s
40/76 batches done, +8.27s, total 15.93s
60/76 batches done, +8.46s, total 24.39s
Done, 31.15s
Computing distance...
Done, 0.11s
Computing scores...
Done, 0.31s
Single Query: [mAP: 99.98%], [cmc1: 100.00%], [cmc5: 100.00%], [cmc10: 100.00%]

Step 20/Ep 45, 1.54s, loss 0.0826
Step 40/Ep 45, 1.46s, loss 0.0622
Step 60/Ep 45, 1.75s, loss 0.0482
Step 80/Ep 45, 1.83s, loss 0.0514
Step 100/Ep 45, 1.71s, loss 0.0594
Step 120/Ep 45, 1.72s, loss 0.0521
Step 140/Ep 45, 1.83s, loss 0.0519
Step 160/Ep 45, 1.82s, loss 0.0652
Step 180/Ep 45, 1.80s, loss 0.0636
Step 200/Ep 45, 1.75s, loss 0.0581
Step 220/Ep 45, 1.88s, loss 0.0791
Step 240/Ep 45, 1.75s, loss 0.0669

Ep 45, 445.55s, loss 0.0588
Step 20/Ep 46, 1.81s, loss 0.0532
Step 40/Ep 46, 1.63s, loss 0.0650
Step 60/Ep 46, 1.76s, loss 0.0478
Step 80/Ep 46, 1.82s, loss 0.0813
Step 100/Ep 46, 1.77s, loss 0.0458
Step 120/Ep 46, 1.73s, loss 0.0577
Step 140/Ep 46, 1.77s, loss 0.0732
Step 160/Ep 46, 1.78s, loss 0.0709
Step 180/Ep 46, 1.66s, loss 0.0446
Step 200/Ep 46, 1.81s, loss 0.0472
Step 220/Ep 46, 1.86s, loss 0.0620
Step 240/Ep 46, 1.82s, loss 0.0524
Ep 46, 451.01s, loss 0.0594

===== Test on validation set =====

Extracting feature...
20/76 batches done, +10.29s, total 10.29s
40/76 batches done, +10.23s, total 20.52s
60/76 batches done, +10.32s, total 30.84s
Done, 38.67s
Computing distance...
Done, 0.17s
Computing scores...
Done, 0.30s
Single Query: [mAP: 99.99%], [cmc1: 100.00%], [cmc5: 100.00%], [cmc10: 100.00%]

Step 20/Ep 47, 1.85s, loss 0.0489
Step 40/Ep 47, 1.89s, loss 0.0465
Step 60/Ep 47, 0.58s, loss 0.0538
Step 80/Ep 47, 0.58s, loss 0.0684
Step 100/Ep 47, 0.62s, loss 0.0660
Step 120/Ep 47, 0.58s, loss 0.0597
Step 140/Ep 47, 0.58s, loss 0.0565
Step 160/Ep 47, 0.59s, loss 0.0817
Step 180/Ep 47, 0.59s, loss 0.0568
Step 200/Ep 47, 0.58s, loss 0.0700
Step 220/Ep 47, 0.60s, loss 0.0471
Step 240/Ep 47, 0.58s, loss 0.0552

Ep 47, 217.33s, loss 0.0629
Step 20/Ep 48, 0.58s, loss 0.0650
Step 40/Ep 48, 0.59s, loss 0.0587
Step 60/Ep 48, 0.60s, loss 0.0616
Step 80/Ep 48, 0.58s, loss 0.0532
Step 100/Ep 48, 0.59s, loss 0.0559
Step 120/Ep 48, 0.59s, loss 0.0492
Step 140/Ep 48, 0.58s, loss 0.0561
Step 160/Ep 48, 0.59s, loss 0.0549
Step 180/Ep 48, 0.60s, loss 0.0590
Step 200/Ep 48, 0.60s, loss 0.0461
Step 220/Ep 48, 0.59s, loss 0.0726
Step 240/Ep 48, 0.59s, loss 0.0683
Ep 48, 152.72s, loss 0.0635

===== Test on validation set =====

Extracting feature...
20/76 batches done, +2.15s, total 2.15s
40/76 batches done, +2.06s, total 4.20s
60/76 batches done, +2.12s, total 6.32s
Done, 7.90s
Computing distance...
Done, 0.03s
Computing scores...
Done, 0.29s
Single Query: [mAP: 99.98%], [cmc1: 100.00%], [cmc5: 100.00%], [cmc10: 100.00%]

Step 20/Ep 49, 0.58s, loss 0.0650
Step 40/Ep 49, 0.58s, loss 0.0567
Step 60/Ep 49, 0.58s, loss 0.0575
Step 80/Ep 49, 0.58s, loss 0.0677
Step 100/Ep 49, 0.61s, loss 0.0554
Step 120/Ep 49, 0.58s, loss 0.0589
Step 140/Ep 49, 0.59s, loss 0.0942
Step 160/Ep 49, 0.60s, loss 0.0650
Step 180/Ep 49, 0.59s, loss 0.0844
Step 200/Ep 49, 0.58s, loss 0.0572
Step 220/Ep 49, 0.61s, loss 0.0674
Step 240/Ep 49, 0.60s, loss 0.0720

Ep 49, 153.30s, loss 0.0685
Step 20/Ep 50, 0.58s, loss 0.0639
Step 40/Ep 50, 0.60s, loss 0.0741
Step 60/Ep 50, 0.61s, loss 0.0835
Step 80/Ep 50, 0.66s, loss 0.0506
Step 100/Ep 50, 0.63s, loss 0.0669
Step 120/Ep 50, 0.64s, loss 0.0887
Step 140/Ep 50, 0.62s, loss 0.0759
Step 160/Ep 50, 0.63s, loss 0.0593
Step 180/Ep 50, 0.62s, loss 0.0695
Step 200/Ep 50, 0.58s, loss 0.0788
Step 220/Ep 50, 0.58s, loss 0.0754
Step 240/Ep 50, 0.59s, loss 0.0577
Ep 50, 170.46s, loss 0.0683

===== Test on validation set =====

Extracting feature...
20/76 batches done, +2.09s, total 2.09s
40/76 batches done, +2.12s, total 4.21s
60/76 batches done, +1.97s, total 6.18s
Done, 7.65s
Computing distance...
Done, 0.03s
Computing scores...
Done, 0.34s
Single Query: [mAP: 99.99%], [cmc1: 100.00%], [cmc5: 100.00%], [cmc10: 100.00%]

Step 20/Ep 51, 0.58s, loss 0.0793
Step 40/Ep 51, 0.58s, loss 0.0684
Step 60/Ep 51, 0.58s, loss 0.0953
Step 80/Ep 51, 0.58s, loss 0.0784
Step 100/Ep 51, 0.58s, loss 0.0535
Step 120/Ep 51, 0.58s, loss 0.0521
Step 140/Ep 51, 0.57s, loss 0.0566
Step 160/Ep 51, 0.58s, loss 0.0725
Step 180/Ep 51, 0.58s, loss 0.0566
Step 200/Ep 51, 0.58s, loss 0.0652
Step 220/Ep 51, 0.59s, loss 0.0524
Step 240/Ep 51, 0.59s, loss 0.0628

Ep 51, 151.36s, loss 0.0683
Step 20/Ep 52, 0.60s, loss 0.0827
Step 40/Ep 52, 0.58s, loss 0.0586
Step 60/Ep 52, 0.58s, loss 0.0713
Step 80/Ep 52, 0.58s, loss 0.0776
Step 100/Ep 52, 0.58s, loss 0.0578
Step 120/Ep 52, 0.59s, loss 0.0614
Step 140/Ep 52, 0.62s, loss 0.0514
Step 160/Ep 52, 0.86s, loss 0.0887
Step 180/Ep 52, 0.83s, loss 0.0672
Step 200/Ep 52, 0.87s, loss 0.0909
Step 220/Ep 52, 0.86s, loss 0.0376
Step 240/Ep 52, 0.85s, loss 0.0709
Ep 52, 173.05s, loss 0.0684

===== Test on validation set =====

Extracting feature...
20/76 batches done, +2.01s, total 2.01s
40/76 batches done, +1.89s, total 3.90s
60/76 batches done, +2.07s, total 5.98s
Done, 7.45s
Computing distance...
Done, 0.03s
Computing scores...
Done, 0.29s
Single Query: [mAP: 100.00%], [cmc1: 100.00%], [cmc5: 100.00%], [cmc10: 100.00%]

Step 20/Ep 53, 1.55s, loss 0.0575
Step 40/Ep 53, 1.53s, loss 0.0756
Step 60/Ep 53, 1.46s, loss 0.0695
Step 80/Ep 53, 1.48s, loss 0.0543
Step 100/Ep 53, 1.52s, loss 0.0594
Step 120/Ep 53, 1.53s, loss 0.0803
Step 140/Ep 53, 1.50s, loss 0.0664
Step 160/Ep 53, 1.55s, loss 0.0660
Step 180/Ep 53, 1.56s, loss 0.0601
Step 200/Ep 53, 1.49s, loss 0.0692
Step 220/Ep 53, 1.53s, loss 0.0850
Step 240/Ep 53, 1.55s, loss 0.0701

Ep 53, 343.79s, loss 0.0711
Step 20/Ep 54, 1.55s, loss 0.0431
Step 40/Ep 54, 1.45s, loss 0.0649
Step 60/Ep 54, 1.56s, loss 0.0689
Step 80/Ep 54, 1.64s, loss 0.0611
Step 100/Ep 54, 1.63s, loss 0.0845
Step 120/Ep 54, 1.73s, loss 0.0698
Step 140/Ep 54, 1.75s, loss 0.0726
Step 160/Ep 54, 1.76s, loss 0.0644
Step 180/Ep 54, 1.71s, loss 0.0760
Step 200/Ep 54, 1.73s, loss 0.0767
Step 220/Ep 54, 1.68s, loss 0.0540
Step 240/Ep 54, 1.80s, loss 0.1006
Ep 54, 421.63s, loss 0.0713

===== Test on validation set =====

Extracting feature...
20/76 batches done, +9.18s, total 9.18s
40/76 batches done, +9.09s, total 18.27s
60/76 batches done, +8.73s, total 27.00s
Done, 33.73s
Computing distance...
Done, 0.08s
Computing scores...
User Warning: Version 0.18.1 is required for package scikit-learn, your current version is 0.19.1. As a result, the mAP score may not be totally correct. You can try pip uninstall scikit-learn and then pip install scikit-learn==0.18.1
Done, 0.31s
Single Query: [mAP: 99.99%], [cmc1: 100.00%], [cmc5: 100.00%], [cmc10: 100.00%]

Step 20/Ep 55, 9.66s, loss 0.0644
Step 40/Ep 55, 12.66s, loss 0.0597
Step 60/Ep 55, 7.02s, loss 0.0781
Step 80/Ep 55, 9.40s, loss 0.0846
Step 100/Ep 55, 8.76s, loss 0.0634
Step 120/Ep 55, 11.46s, loss 0.0692
Step 140/Ep 55, 8.49s, loss 0.0742
Step 160/Ep 55, 9.89s, loss 0.0872
Step 180/Ep 55, 7.41s, loss 0.0615
Step 200/Ep 55, 9.89s, loss 0.0757
Step 220/Ep 55, 7.41s, loss 0.0588
Step 240/Ep 55, 10.02s, loss 0.0740

Ep 55, 2253.52s, loss 0.0714
Step 20/Ep 56, 10.45s, loss 0.0653
Step 40/Ep 56, 7.42s, loss 0.0507
Step 60/Ep 56, 9.47s, loss 0.0762
Step 80/Ep 56, 8.14s, loss 0.1070
Step 100/Ep 56, 7.93s, loss 0.0661
Step 120/Ep 56, 6.00s, loss 0.0769
Step 140/Ep 56, 10.50s, loss 0.0592
Step 160/Ep 56, 7.77s, loss 0.0698
Step 180/Ep 56, 10.03s, loss 0.0651
Step 200/Ep 56, 9.42s, loss 0.0680
Step 220/Ep 56, 9.20s, loss 0.0605
Step 240/Ep 56, 6.21s, loss 0.0607
Ep 56, 2300.39s, loss 0.0730

===== Test on validation set =====

Extracting feature...
20/76 batches done, +44.58s, total 44.58s
40/76 batches done, +49.01s, total 93.59s
60/76 batches done, +43.04s, total 136.63s
Done, 181.25s
Computing distance...
Done, 0.64s
Computing scores...
User Warning: Version 0.18.1 is required for package scikit-learn, your current version is 0.19.1. As a result, the mAP score may not be totally correct. You can try pip uninstall scikit-learn and then pip install scikit-learn==0.18.1
Done, 0.29s
Single Query: [mAP: 100.00%], [cmc1: 100.00%], [cmc5: 100.00%], [cmc10: 100.00%]

Step 20/Ep 57, 2.17s, loss 0.1029
Step 40/Ep 57, 2.28s, loss 0.0665
Step 60/Ep 57, 2.25s, loss 0.0782
Step 80/Ep 57, 2.20s, loss 0.0869
Step 100/Ep 57, 2.32s, loss 0.0539
Step 120/Ep 57, 2.29s, loss 0.0715
Step 140/Ep 57, 2.29s, loss 0.0842
Step 160/Ep 57, 2.27s, loss 0.0626
Step 180/Ep 57, 2.26s, loss 0.0653
Step 200/Ep 57, 2.33s, loss 0.0825
Step 220/Ep 57, 2.18s, loss 0.0775
Step 240/Ep 57, 2.21s, loss 0.0656

Ep 57, 585.82s, loss 0.0723
Step 20/Ep 58, 2.26s, loss 0.0684
Step 40/Ep 58, 2.21s, loss 0.0677
Step 60/Ep 58, 6.89s, loss 0.0679
Step 80/Ep 58, 9.03s, loss 0.0606
Step 100/Ep 58, 10.00s, loss 0.0598
Step 120/Ep 58, 4.76s, loss 0.0822
Step 140/Ep 58, 7.33s, loss 0.0647
Step 160/Ep 58, 10.61s, loss 0.0916
Step 180/Ep 58, 9.37s, loss 0.0769
Step 200/Ep 58, 8.81s, loss 0.0539
Step 220/Ep 58, 13.00s, loss 0.0705
Step 240/Ep 58, 12.67s, loss 0.0526
Ep 58, 2319.96s, loss 0.0704

===== Test on validation set =====

Extracting feature...
20/76 batches done, +27.16s, total 27.16s
40/76 batches done, +39.55s, total 66.71s
60/76 batches done, +34.20s, total 100.92s
Done, 144.13s
Computing distance...
Done, 1.21s
Computing scores...
User Warning: Version 0.18.1 is required for package scikit-learn, your current version is 0.19.1. As a result, the mAP score may not be totally correct. You can try pip uninstall scikit-learn and then pip install scikit-learn==0.18.1
Done, 0.30s
Single Query: [mAP: 100.00%], [cmc1: 100.00%], [cmc5: 100.00%], [cmc10: 100.00%]

Step 20/Ep 59, 5.51s, loss 0.0632
Step 40/Ep 59, 8.44s, loss 0.0819
Step 60/Ep 59, 6.65s, loss 0.0701
Step 80/Ep 59, 10.00s, loss 0.0816
Step 100/Ep 59, 9.75s, loss 0.0611
Step 120/Ep 59, 9.66s, loss 0.0696
Step 140/Ep 59, 9.40s, loss 0.0624
Step 160/Ep 59, 9.17s, loss 0.0679
Step 180/Ep 59, 5.24s, loss 0.0747
Step 200/Ep 59, 10.17s, loss 0.0704
Step 220/Ep 59, 5.08s, loss 0.0596
Step 240/Ep 59, 9.89s, loss 0.0907

Ep 59, 2208.26s, loss 0.0713
Step 20/Ep 60, 5.06s, loss 0.0728
Step 40/Ep 60, 9.74s, loss 0.0866
Step 60/Ep 60, 10.04s, loss 0.0615
Step 80/Ep 60, 9.64s, loss 0.0832
Step 100/Ep 60, 11.49s, loss 0.0668
Step 120/Ep 60, 10.06s, loss 0.0555
Step 140/Ep 60, 10.52s, loss 0.0884
Step 160/Ep 60, 9.91s, loss 0.0720
Step 180/Ep 60, 7.34s, loss 0.0914
Step 200/Ep 60, 10.57s, loss 0.0556
Step 220/Ep 60, 10.08s, loss 0.0716
Step 240/Ep 60, 7.55s, loss 0.0697
Ep 60, 2399.77s, loss 0.0717

===== Test on validation set =====

Extracting feature...
20/76 batches done, +50.25s, total 50.25s

the performance of model trained on combined dataset

Hi
The performance you report on the 3 datasets comes from separate models, each trained on its corresponding dataset, right?
Could we get the same performance from a single model trained on all 3 datasets combined? If you have tried this, what performance did you get?

test error in resnet.py

python script/experiment/train_pcb.py
-d '(0,)'
--only_test true
--dataset duke
--exp_dir /data/exp_directory
--model_weight_file /data/pcb_model_weights/duke/model_weight.pth

the error is:
duke test set

NO. Images: 31969
NO. IDs: 751
NO. Query Images: 3368
NO. Gallery Images: 15913
NO. Multi-query Images: 12688

Traceback (most recent call last):
File "script/experiment/train_pcb.py", line 495, in
main()
File "script/experiment/train_pcb.py", line 304, in main
train_set = create_dataset(**cfg.train_set_kwargs)
File "./bpm/model/PCBModel.py", line 19, in init
self.base = resnet50(pretrained=True, last_conv_stride=last_conv_stride)
File "./bpm/model/resnet.py", line 190, in resnet50
model.load_state_dict(remove_fc(model_zoo.load_url(model_urls['resnet50'])))
File "./bpm/model/resnet.py", line 152, in remove_fc
for key, value in state_dict.items():
RuntimeError: OrderedDict mutated during iteration
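
In case it helps others hitting this: under Python 3, `state_dict.items()` returns a view, so deleting keys inside the loop raises exactly this `OrderedDict mutated during iteration` error. A minimal sketch of a fix (assuming `remove_fc` only needs to drop the `fc.*` entries of the pretrained weights, as its name suggests) is to iterate over a copied list of the keys:

```python
from collections import OrderedDict


def remove_fc(state_dict):
    """Drop the final fully-connected layer from a pretrained state dict."""
    # Iterate over a *copy* of the keys so entries can be deleted
    # without mutating the dict during iteration (which Python 3
    # reports as "OrderedDict mutated during iteration").
    for key in list(state_dict.keys()):
        if key.startswith('fc.'):
            del state_dict[key]
    return state_dict
```

The same pattern applies anywhere a dict is pruned while being iterated.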

About the performance of model trained on market1501

Thanks for your work, and sorry to bother you.
I tried to train PCB on Market-1501, but the performance I get is very different from yours (cmc1 = 76.69%, cmc5 = 85.72%). Meanwhile, testing with the `model_weight.pth` you provided reaches about cmc1 = 93.02%. Do you know the reason?

About the README

The README states:

| | Rank-1 (%) | mAP (%) | R.R. Rank-1 (%) | R.R. mAP (%) |
| --- | --- | --- | --- | --- |
| Market1501 (Paper) | 92.40 | 77.30 | - | - |

But the paper itself says: "In this paper, we report mAP = 81.6%, 69.2%, 57.5% and Rank-1 = 93.8%, 83.3% and 63.7% for Market-1501, Duke and CUHK03, respectively, setting new state of the art on the three datasets. All the results are achieved under the single-query mode without re-ranking. Re-ranking methods will further boost the performance especially mAP. For example, when "PCB+RPP" is combined with the method in [44], mAP and Rank-1 accuracy on Market-1501 increases to 91.9% and 95.1%, respectively."

Why do the numbers in your README differ from the paper's? Or am I misreading it?

Error on inference

I get this error when trying to run inference on a single image:
`ValueError: Expected more than 1 value per channel when training, got input size [1, 256, 1, 1]`

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-14-e0592f495007> in <module>()
      1 img_variable1 = Variable(img_tensor1)
----> 2 fc_out1 = model(img_variable1)
      3 
      4 # global_feat1, local_feat1 = fc_out1
      5 # print(global_feat1.size())

/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    355             result = self._slow_forward(*input, **kwargs)
    356         else:
--> 357             result = self.forward(*input, **kwargs)
    358         for hook in self._forward_hooks.values():
    359             hook_result = hook(self, input, result)

~/meet-up/internship/Person-ReId/beyond-part-models/bpm/model/PCBModel.py in forward(self, x)
     58         (stripe_h, feat.size(-1)))
     59       # shape [N, c, 1, 1]
---> 60       local_feat = self.local_conv_list[i](local_feat)
     61       # shape [N, c]
     62       local_feat = local_feat.view(local_feat.size(0), -1)

/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    355             result = self._slow_forward(*input, **kwargs)
    356         else:
--> 357             result = self.forward(*input, **kwargs)
    358         for hook in self._forward_hooks.values():
    359             hook_result = hook(self, input, result)

/anaconda3/lib/python3.6/site-packages/torch/nn/modules/container.py in forward(self, input)
     65     def forward(self, input):
     66         for module in self._modules.values():
---> 67             input = module(input)
     68         return input
     69 

/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    355             result = self._slow_forward(*input, **kwargs)
    356         else:
--> 357             result = self.forward(*input, **kwargs)
    358         for hook in self._forward_hooks.values():
    359             hook_result = hook(self, input, result)

/anaconda3/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py in forward(self, input)
     35         return F.batch_norm(
     36             input, self.running_mean, self.running_var, self.weight, self.bias,
---> 37             self.training, self.momentum, self.eps)
     38 
     39     def __repr__(self):

/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py in batch_norm(input, running_mean, running_var, weight, bias, training, momentum, eps)
   1009         size = list(input.size())
   1010         if reduce(mul, size[2:], size[0]) == 1:
-> 1011             raise ValueError('Expected more than 1 value per channel when training, got input size {}'.format(size))
   1012     f = torch._C._functions.BatchNorm(running_mean, running_var, training, momentum, eps, torch.backends.cudnn.enabled)
   1013     return f(input, weight, bias)

ValueError: Expected more than 1 value per channel when training, got input size [1, 256, 1, 1]

Please help. PyTorch version is 0.3.
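
For anyone else who hits this: `nn.BatchNorm2d` in training mode needs more than one value per channel to compute batch statistics, and with a batch of one image each stripe yields a single 1×1 feature map, i.e. exactly one value per channel. The standard PyTorch fix (not specific to this repo) is to switch the model to evaluation mode before inference, so BatchNorm uses its running statistics instead. A minimal sketch of just the BatchNorm behavior:

```python
import torch
from torch import nn

# One 1x1 feature map per channel, as produced for one stripe of a
# single image at inference time.
bn = nn.BatchNorm2d(256)
x = torch.randn(1, 256, 1, 1)

# Calling bn(x) while bn.training is True would raise:
#   ValueError: Expected more than 1 value per channel when training, ...
# because batch statistics cannot be computed from one value per channel.

bn.eval()      # use running mean/var instead of batch statistics
out = bn(x)    # shape stays [1, 256, 1, 1]
```

So before `model(img_variable1)`, call `model.eval()`; alternatively, batch more than one image together if you really want training-mode statistics.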
