Comments (6)
I changed the network to ResNet50 in the PyTorch version of HashNet, and I cannot get an acceptable mAP on the CUB dataset. I have tried some fine-tuning, and the best mAP I can get is around 40 with 64 bits. SGD and Adam don't work; that result is from RMSprop.
Any idea what the problem is and how I can fix it?
Hi raymongL, I've also tried to adapt the PyTorch code to the CUB200 dataset with a fine-tuned ResNet50, but I can't make the loss converge. I've tried different optimizers (SGD, Adam, RMSprop), different class_num values (1.0 and 200.0), and different lr values from 1e-5 to 1e-3.
Here is a set of parameters I tried:
python train.py \
--dataset cub200 \
--prefix resnet50_hashnet \
--hash_bit 64 \
--net ResNet50 \
--lr 1e-5 \
--class_num 1.0
{'l_weight': 1.0, 'q_weight': 0, 'l_threshold': 15.0, 'sigmoid_param': 0.15625, 'class_num': 1.0}
{'type': 'RMSprop', 'optim_params': {'lr': 1.0, 'weight_decay': 1e-05},
 'lr_type': 'step', 'lr_param': {'init_lr': 1e-05, 'gamma': 0.5, 'step': 2000}}
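For what it's worth, the lr_type: 'step' entry in the config above is plain step decay; a minimal sketch (the function name is mine, the values come from the lr_param dict shown):

```python
def step_lr(it, init_lr=1e-5, gamma=0.5, step=2000):
    # Step decay matching the lr_param dict above:
    # multiply the learning rate by gamma (here, halve it)
    # every `step` iterations.
    return init_lr * (gamma ** (it // step))
```

So over the 10000 iterations in this run the learning rate only drops by a factor of 32, if my reading of the schedule is right.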
But the training loss stays around 0.69, and the mAP is extremely low, about 0.04.
Could you please give me some hints, or share your training script with me? Thanks.
from hashnet.
I used RMSprop with lr=1e-5, and added a tanh after the last fc layer of the ResNet to output the network features. Everything else is identical to the source code. Class_num is 200; why change it to 1? I don't quite understand.
- About tanh after fc
I think the original code already has a tanh after the fc layer. Are you using the original code?
https://github.com/thuml/HashNet/blob/master/pytorch/src/network.py#L84
- About class_num
I'm quite confused about class_num, because for the COCO dataset it's 1.0 even though COCO has 80 classes. I've also run an experiment with the parameters you mentioned above.
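For context, the config parameters feed the weighted pairwise logistic loss from the HashNet paper, where sigmoid_param is the scale alpha on the inner product of the continuous codes. My (unverified) reading of the thuml code is that class_num acts as an extra weight on similar pairs, to counter the similar/dissimilar imbalance, which would explain why it is not simply the number of classes. A NumPy sketch of that loss (function name is mine):

```python
import numpy as np

def hashnet_pair_loss(u, v, s, alpha=0.15625, class_num=1.0):
    """Weighted pairwise logistic loss as in the HashNet paper.

    u, v : (n, bits) continuous codes in (-1, 1) (tanh outputs)
    s    : (n,) pair labels, 1 = similar, 0 = dissimilar
    alpha matches sigmoid_param in the config; class_num is assumed
    here to up-weight the (rare) similar pairs.
    """
    ip = np.sum(u * v, axis=1)               # code inner products
    log_term = np.logaddexp(0.0, alpha * ip)  # stable log(1 + exp(alpha * ip))
    loss = log_term - alpha * s * ip          # pairwise logistic loss
    w = np.where(s == 1, class_num, 1.0)      # imbalance weighting (assumed role)
    return float(np.mean(w * loss))
```

Under that reading, class_num only rescales how much similar pairs count, so 1.0 vs 200.0 changes the loss magnitude and gradient balance, not what is being learned.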
python train.py \
--prefix resnet50_hashnet \
--dataset cub200 \
--hash_bit 64 \
--net ResNet50 \
--lr 1e-5 \
--class_num 200
{'l_weight': 1.0, 'q_weight': 0, 'l_threshold': 15.0, 'sigmoid_param': 0.15625, 'class_num': 200.0}
{'type': 'RMSprop', 'optim_params': {'lr': 1.0, 'weight_decay': 1e-05},
 'lr_type': 'step', 'lr_param': {'init_lr': 1e-05, 'gamma': 0.5, 'step': 2000}}
The loss oscillates between 0.3 and 0.5; here is a glance at it:
Iter: 02680, loss: 0.57731044
Iter: 02690, loss: 0.37092209
Iter: 02700, loss: 0.46045765
Iter: 02710, loss: 0.41415858
...
Iter: 09996, loss: 0.42658427
Iter: 09997, loss: 0.40386644
Iter: 09998, loss: 0.39876112
Iter: 09999, loss: 0.43418783
MAP: 0.043476254169203095
But the final mAP is still very low.
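One thing worth ruling out is the evaluation itself. Below is a self-contained sketch of mAP by Hamming ranking over ±1 codes (all names are mine; it assumes single-label data like CUB200). If an independent implementation like this also reports ~0.04 on your saved codes, the problem is in training rather than in the eval script:

```python
import numpy as np

def hamming_map(query_codes, db_codes, query_labels, db_labels, topk=None):
    """mAP by Hamming ranking for binary codes in {-1, +1}.
    Assumes single-label data: a database item is relevant iff
    its label equals the query's label."""
    aps = []
    for q, ql in zip(query_codes, query_labels):
        # Hamming distance from inner product of +/-1 codes.
        dist = 0.5 * (db_codes.shape[1] - db_codes @ q)
        order = np.argsort(dist)
        if topk is not None:
            order = order[:topk]
        rel = (db_labels[order] == ql).astype(float)
        if rel.sum() == 0:
            continue  # query with no relevant items is skipped
        prec = np.cumsum(rel) / np.arange(1, len(rel) + 1)
        aps.append((prec * rel).sum() / rel.sum())
    return float(np.mean(aps)) if aps else 0.0
```

Note that for CUB200, random 64-bit codes would give mAP near chance level (about 1/200 = 0.005 per class), so 0.04 means the codes carry a little signal but the network is nowhere near converged.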
Would you mind sharing your code on your GitHub?
OK, can I have your email?
Thanks
Can anybody tell me the reason for the low mAP? I am getting around 0.03. Any suggestions would be great. Thanks.