osr_closed_set_all_you_need's People

Contributors: sgvaze

osr_closed_set_all_you_need's Issues

About checkpoints

Thanks for your wonderful work.

I want to know why no checkpoint is saved when I train with only the CE loss. I read the code but could not find what should be modified.

Can the trained weights be provided?

Hi,

I know this is a long shot, but is there a chance that the ResNet50 weights trained with the standard cross-entropy loss on the fine-grained datasets (CUB, Aircraft, and Cars) could be provided?

Thanks

There seem to be overlapping classes in the CUB dataset

Hi, I was testing the validity of the CUB dataset splits generated by the given code.
There are two issues:

  1. The download URL became inactive:
    tristandeleu/pytorch-meta#104 (comment)
    I solved this by using the code from the pytorch-meta repository:
    https://github.com/tristandeleu/pytorch-meta/blob/d55d89ebd47f340180267106bde3e4b723f23762/torchmeta/datasets/cub.py

  2. After downloading the CUB data and running cub.py, I found that 20 classes overlap between the known and unknown splits:
{161, 162, 166, 168, 169, 171, 174, 178, 179, 181, 182, 187, 188, 190, 192, 194, 196, 197, 198, 199}

Below are the splits I obtained by running:
x = get_cub_datasets(None, None, split_train_val=False, train_classes=np.random.choice(range(200), size=100, replace=False))

CUB_test_known.csv
CUB_test_unknown.csv
CUB_train.csv
CUB_val.csv

Is this something expected?
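
For reference, a minimal sketch (not part of the repo) that reproduces the overlap check above; the column name "class" is an assumption about the exported CSV schema and may need adjusting:

import pandas as pd

# Load the class labels of the known and unknown test splits.
known = set(pd.read_csv("CUB_test_known.csv")["class"])      # "class" column is assumed
unknown = set(pd.read_csv("CUB_test_unknown.csv")["class"])

overlap = known & unknown
print(f"{len(overlap)} overlapping classes: {sorted(overlap)}")  # should be 0 for valid splits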

Question about the fine-grained datasets CUB and Aircraft

Hello
When I tried to reproduce the AUROC results on the Easy and Medium splits of the CUB and Aircraft datasets, I could not match the paper: for CUB, Easy and Medium reached only 87.5 and 81.8; for Aircraft, 89.0 and 85.4.
I trained with bash_scripts/osr_finegrained_train.sh, fixed the corresponding optimal hyper-parameters, and used the places_moco pretrained model, but the results in the paper could not be reproduced. What could be the reason?


Running pretrained configuration

Hi,

Thanks again for the code.

I tried to run with pretrained weights. Two questions about it please:

  1. Do you have any suggested configuration (hyperparameters, etc.) for running with one of the pretrained models?

  2. As you suggested in your code, I tried to download a model from:

https://github.com/nanxuanzhao/Good_transfer

Yet, I got the following error:

Traceback (most recent call last):
  File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/cs/labs/peleg/nivc/multimodal_ad/osr_closed_set_all_you_need/methods/ARPL/osr.py", line 354, in <module>
    res = main_worker(options, args)
  File "/cs/labs/peleg/nivc/multimodal_ad/osr_closed_set_all_you_need/methods/ARPL/osr.py", line 153, in main_worker
    net = get_model(args, wrapper_class=wrapper_class)
  File "/cs/labs/peleg/nivc/multimodal_ad/osr_closed_set_all_you_need/models/model_utils.py", line 251, in get_model
    model.load_state_dict(state_dict)
  File "/cs/labs/yedid/nivc/nn_code/envs/fa_env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1407, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for ResNetABN:
Missing key(s) in state_dict: "bn1.bns.0.weight", "bn1.bns.0.bias", "bn1.bns.0.running_mean", "bn1.bns.0.running_var", "bn1.bns.1.weight", "bn1.bns.1.bias", "bn1.bns.1.running_mean", "bn1.bns.1.running_var", "layer1.0.bn1.bns.0.weight", "layer1.0.bn1.bns.0.bias", "layer1.0.bn1.bns.0.running_mean", "layer1.0.bn1.bns.0.running_var", "layer1.0.bn1.bns.1.weight", "layer1.0.bn1.bns.1.bias", "layer1.0.bn1.bns.1.running_mean", "layer1.0.bn1.bns.1.running_var", "layer1.0.bn2.bns.0.weight", "layer1.0.bn2.bns.0.bias", "layer1.0.bn2.bns.0.running_mean", "layer1.0.bn2.bns.0.running_var", "layer1.0.bn2.bns.1.weight", "layer1.0.bn2.bns.1.bias", "layer1.0.bn2.bns.1.running_mean", "layer1.0.bn2.bns.1.running_var", "layer1.0.bn3.bns.0.weight", "layer1.0.bn3.bns.0.bias", "layer1.0.bn3.bns.0.running_mean", "layer1.0.bn3.bns.0.running_var", "layer1.0.bn3.bns.1.weight", "layer1.0.bn3.bns.1.bias ...

Could you please advise which model I should download, or how to resolve this error?

Niv.

Random Augment M & N

Thanks for this amazing codebase.

I think the M & N for Random Augment have been interchanged between the README and the code: M should be N and vice versa.

Lines 277-278 of data/augmentations/randaugment.py correspond to the correct setting. Please let me know if this is wrong; maybe just update the README to avoid any confusion.
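
For context, a minimal sketch of the usual RandAugment convention, where N is the number of ops applied per image and M their shared magnitude. The op interface (magnitude=) is a hypothetical stand-in, not the repo's implementation:

import random

def rand_augment(img, ops, n, m):
    # Apply N randomly chosen ops, each at the global magnitude M.
    for op in random.choices(ops, k=n):
        img = op(img, magnitude=m)   # hypothetical op signature
    return img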

Use of logits instead of softmax activations for OS scoring

Hi again,

I read in the paper that "[...] we propose the use of the maximum logit rather than softmax probability for the open-set scoring rule. Logits are the raw outputs of the final linear layer in a deep classifier, while the softmax operation involves a normalization such that the outputs can be interpreted as a probability vector summing to one. As the softmax operation normalizes out much of the feature magnitude information present in the logits, we find logits lead to better open-set detection results". Then Figure 6c shows how AUROC on the test set(s) evolves over training, using both max-logit and max-softmax for scoring, suggesting it might be better to use the maximum logit.

However, the ARPL code for the Softmax loss (found here), which you inherit and use for testing, is a bit odd: it refers to the post-softmax activations as logits, see here.

Since you take these (false) logits from calling the criterion (here) during testing, and a few lines below there is the option of (re-)applying softmax to them when running with use_softmax_in_eval, I am wondering whether what your paper's experiments call "logits" are actually softmax(logits), and what you call softmax activations are in fact softmax(softmax(logits))?

Thanks!

Adrian
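
To make the distinction concrete, a minimal sketch of the two scoring rules under discussion, independent of the repo's code:

import torch
import torch.nn.functional as F

def open_set_scores(logits):
    # Return (max-logit, max-softmax) open-set scores for a batch of raw logits.
    mls = logits.max(dim=1).values                    # maximum logit score (MLS)
    msp = F.softmax(logits, dim=1).max(dim=1).values  # maximum softmax probability (MSP)
    return mls, msp

# Example: random logits for a batch of 4 samples over 10 known classes.
mls, msp = open_set_scores(torch.randn(4, 10))

If the criterion had already applied softmax internally, the "mls" branch above would in fact compute max(softmax(logits)), which is exactly the confusion raised in this issue.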

Error when trying to run bash_scripts/osr_train_tinyimagenet.sh

Hi,

Thank you for publishing the code!

I tried to follow the instructions and run your method as described.

I received the error trace below.

Your assistance would be greatly appreciated.

Niv.


6
Traceback (most recent call last):
  File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/cs/labs/peleg/nivc/multimodal_ad/osr_closed_set_all_you_need/methods/ARPL/osr.py", line 277, in <module>
    cifar_plus_n=args.out_num, args=args)
TypeError: get_class_splits() got an unexpected keyword argument 'args'

(The identical traceback repeats for indices 7 through 10.)

How to classify a sample as unknown?

Hello! I was glad to find this work while encountering OSR problems in practice.
I am wondering how to assign the "unknown" label to a sample using a classifier trained on the closed set.

Low accuracy when training on the CUB dataset

Hello, thank you for maintaining this repo!

I have a question regarding low accuracy on the CUB dataset.

Here is the bash file that I am using:

LOSS='ARPLoss'          # For TinyImageNet, ARPLoss and Softmax loss have the same
                        # RandAug and Label Smoothing hyper-parameters, but different learning rates

# Fixed hyper params for both ARPLoss and Softmax
DATASET='cub'
AUG_M=30
AUG_N=2
LABEL_SMOOTHING=0.1

# LR different for ARPLoss and Softmax
if [ $LOSS = "Softmax" ]; then
   LR=0.01
elif [ $LOSS = "ARPLoss" ]; then
   LR=0.001
fi

# GPU0-0 MIG-GPU-7391bfa5-fd39-632b-ac8a-cbd1359e940b/5/0


# tinyimagenet
for SPLIT_IDX in 0 1 2 3 4; do

  EXP_NUM=$(ls ${SAVE_DIR} | wc -l)
  EXP_NUM=$((${EXP_NUM}+1))
  echo $EXP_NUM

  ${PYTHON} -m methods.ARPL.osr --lr=0.001 --seed=0 \
                             --transform='rand-augment' \
                            --rand_aug_m=${AUG_M} --rand_aug_n=${AUG_N} --loss=${LOSS} --label_smoothing=${LABEL_SMOOTHING} \
                            --dataset=${DATASET} --image_size=448 --cs --num_restarts=2 --gpus 0 --split_idx=${SPLIT_IDX} \
                            --scheduler='cosine_warm_restarts_warmup' --split_train_val='True' --batch_size=32 --num_workers=16 --max-epoch=600 \
> ${SAVE_DIR}logfile_${DATASET}_cs_${LOSS}_${EXP_NUM}.out
done
Batch 150/150    Net 4.238 (4.178) G 19.857 (19.787) D 0.000 (0.000)

I believe this is because the loss of D (the discriminator) becomes 0.

However, I was not able to get an accuracy above 20%. Could you point out what I am doing wrong?

Thank you!

Questions about the ensembling

Hi,

Thank you for sharing this great work! I'd like to know the configuration of the model ensemble.

In the paper and supplementary material, it is only described as "by bootstrapping the training data and training K = 5 ensembles." And in this code repo (line 151 and line 183), it seems that for each of the 5 dataset splits only a single model is used?

This is confusing because line 183 uses split_idx as the index into all_paths_combined, which is a list of models from 5 experiments exp_ids. So do you mean each exp_id corresponds to a single model trained on one dataset split? Correct me if I misunderstood this part.

Looking forward to your reply.
Thank you!
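
For what it's worth, a hedged sketch of one plausible reading of the question above (one model per bootstrap of the training data, max-logit scores averaged over the ensemble); this is an illustration, not the repo's code:

import torch

def ensemble_mls(models, x):
    # Average the max-logit open-set score over K independently trained models
    # (e.g., one per bootstrap of the training data).
    scores = [m(x).max(dim=1).values for m in models]
    return torch.stack(scores).mean(dim=0)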

Where is the paper?

Hello, I would like to know where the paper is. I have been reading open-set papers recently and am considering whether these methods are useful.

How to predict whether a single image is known or unknown?

Hello again!
I'm working on a task using your code. I have seen that all methods are evaluated by AUROC, a threshold-independent metric, but now I need to predict labels via a threshold. Is there any function in the code to do that?
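
In case it helps: a minimal sketch of thresholded rejection, not a function from this repo. The threshold must be calibrated separately (e.g. on held-out validation data), and -1 is just a placeholder label for "unknown":

import torch

def predict_with_reject(logits, threshold):
    # Predict the argmax class, but emit -1 ("unknown") when the
    # max-logit open-set score falls below the chosen threshold.
    scores, preds = logits.max(dim=1)
    preds = preds.clone()
    preds[scores < threshold] = -1
    return preds

preds = predict_with_reject(torch.randn(4, 10), threshold=0.5)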

Any plans to include Imagenet?

Hello again,

I have been able to train models for all benchmarks, old and newly proposed, so thanks for that!

I am just missing the ImageNet experiment, which appears in Table 3 of the paper (with an easy and a hard split). The ImageNet splits seem to be present in data/open_set_split/imagenet_osr_splits.pkl, but there is no PyTorch dataset that implements them, nor do ImageNet hyperparameters appear in utils/paper_hyperparameters.csv. I was wondering whether you have any plans to release the ImageNet experimental setup soon. Thanks!!

Also, may I email you a separate question about your work that does not fit in a GitHub issue? Thank you very much!!

Adrian

The difference between MSP and MLS in the code

Is the difference between MSP and MLS in the code solely the value of the use_softmax_in_eval hyperparameter?

Also, when using classifier32__Softmax__cifar-10-10 with default_parameters, the highest AUROC achieved during training is 94.2567, yet when testing with openset_test.py the value only reaches 60.12. Could you please tell me why this happens?

By the way, which osr_mode should I choose when using openset_test.py?

OpenHybrid

Do the authors have an implementation of OpenHybrid in the codebase?

Thank you

About training results

Hi,

I found that the evaluation metrics are unstable across epochs when I run experiments on a custom dataset. I would like to ask whether you report the last-epoch or the best-epoch results in your training experiments.

Clarification on ImageNet-21K

Hi,

Thanks for making the code public; it helps a lot. I'm wondering whether you used the fall11 version of ImageNet-21K for the open-set splits. I'm now using the winter21 version and some of the classes in the 'Hard' split no longer exist (e.g., 'n10506915').

If so, a problem is that the ImageNet website no longer seems to host the fall11 version (correct me if I'm wrong). In that case, would it be possible for you to update the 'Hard' split based on the available winter21 version?

Thanks

Pretrained model doesn't match

Thanks for your work.

When I train the model with the parameters "--model=timm_resnet50_pretrained --resnet50_pretrain=places" and download the pretrained model from https://github.com/nanxuanzhao/Good_transfer, I get the error: "Missing key(s) in state_dict: "conv1.weight", "bn1.weight", "bn1.bias", "bn1.running_mean", ...".

I think this error is caused by a mismatch between the weights and the model. Could you please tell me the right way to train the model with pretrained weights, or the right link for downloading them?

Reproducing results

Hi! First, thank you very much for this work; it is very refreshing to see recent OSR methods put to the test and to find that they are mostly over-hyped, over-complex approaches, and that cross-entropy alone is so competitive when the baselines are trained with some care. Congratulations :)

I am trying to reproduce your results, but I am struggling to understand how to do it. I am starting with Tiny ImageNet, which I have been able to re-train successfully, after:

  1. running the create_val_img_folder function on the dataset folder, and
  2. correcting line 18 of methods/ARPL/core/train.py, as well as lines 25 and 42 of methods/ARPL/core/test.py, because options['use_gpu'] does not exist; those lines should probably be replaced by if not options['use_cpu'], which works fine (a sketch of the fix follows this list).
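
For concreteness, a sketch of the fix described in point 2; the options dict and the device choice here are stand-ins for the actual context in train.py/test.py:

options = {'use_cpu': False}    # stand-in for the options dict built in config.py

if not options['use_cpu']:      # replaces the stale check: if options['use_gpu']:
    device = 'cuda'
else:
    device = 'cpu'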

Now, after suitably editing config.py and bash_scripts/osr_train_tinyimagenet.sh, I carry out the entire training and end up with a directory called, in this case, methods/ARPL/log/(03.01.2022_|_32.677). Within this directory one can find some tensorboard-related files and two directories, namely checkpoints/ and arpl_models/tinyimagenet/checkpoints/. The former is empty and I guess it is created by mistake, whereas the latter contains a bunch of checkpoints, as it seems you are storing a model checkpoint (and a "criterion checkpoint", which, by the way, I don't know what it is) every twenty epochs.

My question is, how exactly do I evaluate the final performance of this experiment? I.e.:

  • How do I know which checkpoint has the highest closed-set performance, which I should then use to compute accuracy on the closed set, AUROC on the open classes, and the OSCR score, as in Table 5 or Table 3?
  • Which piece of code should I use to evaluate that checkpoint, and how do I go about it?

I suspect it might have something to do with methods/tests/openset_test.py, but I am not sure, since there seem to be some hard-coded experiment names in there, and it seems only useful for evaluating the performance of an ensemble of five models. Could you please provide some instructions on how to assess the final performance of a trained model?

Thanks!!

Adrian

P.S.: In the next days or weeks I might be asking some more questions about your work, thanks for the patience!

Longer training and stronger augmentation do not work for CIFAR-100

Hi,

I am trying to use RandAug instead of RandCrop, together with a cosine learning-rate schedule, to train on the whole CIFAR-100 dataset, but they do not work. The baseline is RandCrop with a step learning-rate schedule (initial learning rate 0.1, divided by 2 at epochs [60, 120, 160], 300 epochs in total). The closed-set accuracy of the baseline is 0.75, and RandAug with the cosine schedule (restarting 0 or 2 times over 600 epochs) does not improve on it. Can you give me some advice?

Best regards.
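
For reference, a sketch of the two schedules being compared, written with standard PyTorch schedulers (the linear model is a stand-in; in practice you would create only one scheduler per optimizer):

import torch
from torch.optim.lr_scheduler import MultiStepLR, CosineAnnealingWarmRestarts

model = torch.nn.Linear(10, 10)   # stand-in model
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Baseline: step schedule, LR halved at epochs 60, 120, 160 (300 epochs total).
baseline = MultiStepLR(opt, milestones=[60, 120, 160], gamma=0.5)

# Alternative: cosine schedule restarting twice over 600 epochs (T_0=200, T_mult=1).
cosine = CosineAnnealingWarmRestarts(opt, T_0=200, T_mult=1)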

missing license info

Hello,

Can you please add licensing info? Can we assume, based on the ARPL repo, that this is also MIT-licensed?

ResNetABN key matching error

Hello,

If I add the --cs flag to this otherwise working script:

${PYTHON} -m methods.ARPL.osr --lr=0.001 --model='timm_resnet50_pretrained' --resnet50_pretrain='places_moco' \
                             --transform='rand-augment' \
                            --rand_aug_m=${AUG_M} --rand_aug_n=${AUG_N} --loss=${LOSS} --label_smoothing=${LABEL_SMOOTHING} \
                            --dataset=${DATASET} --image_size=448 \
                            --scheduler='cosine_warm_restarts_warmup' --split_train_val='False' --batch_size=32 --num_workers=16 --max-epoch=600 \
                             --num_restarts=2 --seed=${SEED} --gpus 0 --feat_dim=2048 \

I am getting a key matching error:

RuntimeError: Error(s) in loading state_dict for ResNetABN:
        Missing key(s) in state_dict: "bn1.bns.0.weight", "bn1.bns.0.bias", "bn1.bns.0.running_mean", "bn1.bns.0.running_var", ...

Could you help me figure out how to handle this error? I downloaded the pretrained weights from https://github.com/nanxuanzhao/Good_transfer.
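
Not a fix, but a hedged diagnostic that often narrows down such state_dict errors: compare the checkpoint's keys against the keys the instantiated model expects. The tiny model and dict below are stand-ins for the ResNetABN instance and the downloaded weights:

import torch

model = torch.nn.Linear(4, 2)                 # stand-in for the ResNetABN instance
state_dict = {"weight": torch.zeros(2, 4)}    # stand-in for the loaded checkpoint

missing = set(model.state_dict()) - set(state_dict)       # keys the checkpoint lacks
unexpected = set(state_dict) - set(model.state_dict())    # keys the model does not define
print("missing:", sorted(missing), "unexpected:", sorted(unexpected))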
