Comments (6)

zhouxiaohang avatar zhouxiaohang commented on June 7, 2024

I changed the network to ResNet50 in the PyTorch version of HashNet, and I cannot produce an acceptable mAP on the CUB dataset. I have tried some fine-tuning, and the best mAP I can get is around 40 with 64 bits. SGD and Adam won't work; this result was obtained with RMSprop.

Any idea what the problem is and how I can fix it?

Hi rayLemond, I've also tried to adapt the PyTorch code to the CUB200 dataset with a fine-tuned ResNet50, but I can't make the loss converge. I've tried different optimizers (SGD, Adam, RMSprop), different class_num values (1.0 and 200.0), and different lr values from 1e-5 to 1e-3.

Here is one set of parameters I tried:

python train.py \
    --dataset cub200 \
    --prefix resnet50_hashnet \
    --hash_bit 64 \
    --net ResNet50 \
    --lr 1e-5 \
    --class_num 1.0

{'l_weight': 1.0, 'q_weight': 0, 'l_threshold': 15.0, 'sigmoid_param': 0.15625, 'class_num': 1.0}
{'type': 'RMSprop', 'optim_params': {'lr': 1.0, 'weight_decay': 1e-05}, 'lr_type': 'step', 'lr_param': {'init_lr': 1e-05, 'gamma': 0.5, 'step': 2000}}

But the training loss always stays around 0.69, and the mAP is extremely low, about 0.04.
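For reference, this is the pairwise loss as I understand it from the HashNet paper (a sketch of the math, not the repo's exact code):

    L = \sum_{s_{ij} \in S} w_{ij} \Big( \log\big(1 + e^{\alpha \langle h_i, h_j \rangle}\big) - \alpha \, s_{ij} \langle h_i, h_j \rangle \Big)

With uncorrelated codes, ⟨h_i, h_j⟩ ≈ 0 and every term reduces to log 2 ≈ 0.693, which matches the stuck loss, so the network seems not to be learning at all. sigmoid_param is α above (0.15625 = 10/64 for 64 bits), and class_num appears to scale w_{ij} on similar pairs to counter the positive/negative imbalance; that reading of the code is my assumption, not something I've verified.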

Could you please give me some hints or share your training script with me? Thanks.

rayLemond avatar rayLemond commented on June 7, 2024

I used RMSprop with lr=1e-5 and added a tanh after ResNet's last fc layer to output the features. Everything else is identical to the original code. class_num is 200; why change it to 1? I don't quite get it…
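Here is a minimal sketch of that head, assuming torchvision's resnet50; the class and attribute names are illustrative, not the repo's exact code:

import torch
import torch.nn as nn
from torchvision import models

class ResNet50Hash(nn.Module):
    # ResNet-50 backbone with an fc -> tanh hash head, as described above.
    def __init__(self, hash_bit=64):
        super().__init__()
        backbone = models.resnet50(pretrained=True)
        # Keep everything up to the global average pool; drop the 1000-way classifier.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.hash_layer = nn.Linear(2048, hash_bit)  # last fc maps 2048-d features to hash_bit dims
        self.activation = nn.Tanh()                  # squash outputs into (-1, 1)

    def forward(self, x):
        f = self.features(x).flatten(1)              # (N, 2048)
        return self.activation(self.hash_layer(f))   # (N, hash_bit)

At retrieval time the continuous outputs are binarized, e.g. codes = torch.sign(model(images)).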

zhouxiaohang avatar zhouxiaohang commented on June 7, 2024

  1. About tanh after fc

I think the original code already has a tanh after the fc layer. Are you using the original code?
https://github.com/thuml/HashNet/blob/master/pytorch/src/network.py#L84

  2. About class_num

I'm quite confused about class_num, because for the COCO dataset it is set to 1.0 even though COCO has 80 classes. I've also run an experiment with the parameters you mentioned above:

python train.py \
    --prefix resnet50_hashnet \
    --dataset cub200 \
    --hash_bit 64 \
    --net ResNet50 \
    --lr 1e-5 \
    --class_num 200

{'l_weight': 1.0, 'q_weight': 0, 'l_threshold': 15.0, 'sigmoid_param': 0.15625, 'class_num': 200.0}
{'type': 'RMSprop', 'optim_params': {'lr': 1.0, 'weight_decay': 1e-05}, 'lr_type': 'step', 'lr_param': {'init_lr': 1e-05, 'gamma': 0.5, 'step': 2000}}

The loss oscillates between 0.3 and 0.5; here is a glance at it:

Iter: 02680, loss: 0.57731044
Iter: 02690, loss: 0.37092209
Iter: 02700, loss: 0.46045765
Iter: 02710, loss: 0.41415858
...
Iter: 09996, loss: 0.42658427
Iter: 09997, loss: 0.40386644
Iter: 09998, loss: 0.39876112
Iter: 09999, loss: 0.43418783

MAP: 0.043476254169203095

But the final mAP is still very low.
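In case the evaluation side is at fault, here is a minimal sketch of mAP by Hamming ranking as a sanity check (my own illustration, assuming single-label annotations and ±1 codes; not the repo's evaluation code):

import numpy as np

def mean_average_precision(query_codes, db_codes, query_labels, db_labels, top_k=None):
    # query_codes: (Q, bits), db_codes: (M, bits), entries in {-1, +1};
    # query_labels: (Q,), db_labels: (M,) integer class ids.
    aps = []
    for q_code, q_label in zip(query_codes, query_labels):
        # Hamming distance via inner product: d = (bits - <q, db>) / 2
        dist = 0.5 * (db_codes.shape[1] - db_codes @ q_code)
        order = np.argsort(dist)
        if top_k is not None:
            order = order[:top_k]
        relevant = (db_labels[order] == q_label).astype(np.float64)
        if relevant.sum() == 0:
            continue  # no relevant item retrieved for this query
        precision_at_k = np.cumsum(relevant) / (np.arange(len(relevant)) + 1)
        aps.append((precision_at_k * relevant).sum() / relevant.sum())
    return float(np.mean(aps))

If this sanity check also returns about 0.04 on roughly 200 balanced classes, the codes really are close to random: chance-level mAP there is roughly 1/200 = 0.005, and 0.04 is only modestly above it.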

Would you mind sharing your code on GitHub?

rayLemond avatar rayLemond commented on June 7, 2024

OK, can I have your email?

zhouxiaohang avatar zhouxiaohang commented on June 7, 2024

Thanks

sagar-dutta avatar sagar-dutta commented on June 7, 2024

Can anybody tell me the reason for the low mAP? I am getting around 0.03. Any suggestions would be great. Thanks.
