
Out-of-Distribution Detection in Long-Tailed Recognition with Calibrated Outlier Class Learning

This is the official implementation of the AAAI'24 paper "Out-of-Distribution Detection in Long-Tailed Recognition with Calibrated Outlier Class Learning" (CoCL).

Dataset Preparation

In-distribution dataset

Please download CIFAR10, CIFAR100, and ImageNet-LT, and place them in ./datasets

Auxiliary/Out-of-distribution dataset

For CIFAR10-LT and CIFAR100-LT, please download TinyImages 300K Random Images as the auxiliary dataset and place it in ./datasets

For CIFAR10-LT and CIFAR100-LT, please download the SC-OOD benchmark as the out-of-distribution test data and place it in ./datasets

For ImageNet-LT, please download the ImageNet10k_eccv2010 benchmark, which serves as both the auxiliary and the out-of-distribution test data, and place it in ./datasets

All dataset splits follow PASCL.
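
Before training, it can help to confirm that everything sits under the directory you will pass via --drp. The snippet below is only a convenience check; the exact subfolder names the dataloaders expect are defined by this repo and PASCL, not by the snippet:

# List what is under the dataset root so you can verify that the in-distribution,
# auxiliary, and OOD data are where the dataloaders expect them.
import os

data_root = "./datasets"  # use the same path you pass via --drp
for entry in sorted(os.listdir(data_root)):
    kind = "dir " if os.path.isdir(os.path.join(data_root, entry)) else "file"
    print(kind, entry)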

Training

CIFAR10-LT:

python train.py --gpu 0 --ds cifar10 --Lambda1 0.05 --Lambda2 0.05 --Lambda3 0.1 --drp <where_you_store_all_your_datasets> --srp <where_to_save_the_ckpt>

CIFAR100-LT:

python train.py --gpu 0 --ds cifar100 --Lambda1 0.05 --Lambda2 0.05 --Lambda3 0.1  --drp <where_you_store_all_your_datasets> --srp <where_to_save_the_ckpt>

ImageNet-LT:

python stage1.py --gpu 0,1,2,3 --ds imagenet --md ResNet50 --lr 0.1 --Lambda1 0.02 --Lambda2 0.01 --Lambda3 0.01 --drp <where_you_store_all_your_datasets> --srp <where_to_save_the_ckpt>
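
For intuition, the outlier class learning component referred to as Eq. 2 in the paper treats the auxiliary OOD data as one extra class on top of the K in-distribution classes. The following is a deliberately simplified sketch of that idea only; it is not the full CoCL objective implemented in train.py:

# Simplified sketch of outlier class learning (an illustration, NOT the exact
# loss in train.py): in-distribution samples keep their K class labels, while
# every auxiliary OOD sample is mapped to an extra (K+1)-th "outlier" class.
import torch
import torch.nn.functional as F

def outlier_class_loss(logits_id, labels_id, logits_aux, num_classes):
    # logits_* have shape (batch, num_classes + 1); index num_classes is the outlier class
    loss_id = F.cross_entropy(logits_id, labels_id)
    aux_labels = torch.full((logits_aux.size(0),), num_classes,
                            dtype=torch.long, device=logits_aux.device)
    loss_aux = F.cross_entropy(logits_aux, aux_labels)
    return loss_id + loss_aux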

Testing

CIFAR10-LT:

for dout in texture svhn cifar tin lsun places365
do
python test.py --gpu 0 --ds cifar10 --dout $dout \
    --drp <where_you_store_all_your_datasets> \
    --ckpt_path <where_you_save_the_ckpt>
done

CIFAR100-LT:

for dout in texture svhn cifar tin lsun places365
do
python test.py --gpu 0 --ds cifar100 --dout $dout \
    --drp <where_you_store_all_your_datasets> \
    --ckpt_path <where_you_save_the_ckpt>
done

ImageNet-LT:

python test_imagenet.py --gpu 0 \
    --drp <where_you_store_all_your_datasets> \
    --ckpt_path <where_you_save_the_ckpt>
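
With a (K+1)-way outlier-class head, a natural test-time OOD score is the predicted probability of the outlier class. Whether test.py uses exactly this score or a calibrated variant is not documented here, so treat the snippet below as an illustration:

# Illustrative OOD scoring with a (K+1)-way head: more probability mass on the
# outlier class index means "more likely OOD". This mirrors the outlier class
# learning sketch above and is not necessarily the exact score used by test.py.
import torch
import torch.nn.functional as F

@torch.no_grad()
def ood_score(model, x, num_classes):
    logits = model(x)                      # shape (batch, num_classes + 1)
    probs = F.softmax(logits, dim=1)
    return probs[:, num_classes]           # probability of the outlier class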

Acknowledgment

Parts of our code are adapted from the following repositories:

Outlier-Exposure - https://github.com/hendrycks/outlier-exposure - Apache-2.0 license

PASCL - https://github.com/amazon-science/long-tailed-ood-detection - Apache-2.0 license

Open-Sampling - https://github.com/hongxin001/open-sampling - Apache-2.0 license

Long-Tailed-Recognition.pytorch - https://github.com/KaihuaTang/Long-Tailed-Recognition.pytorch - GPL-3.0 license

License

This project is licensed under the Apache-2.0 License.

Citation

If you use this package and find it useful, please cite our paper using the following BibTeX entry.

@inproceedings{miao2024out,
  title={Out-of-distribution detection in long-tailed recognition with calibrated outlier class learning},
  author={Miao, Wenjun and Pang, Guansong and Bai, Xiao and Li, Tianqi and Zheng, Jin},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={38},
  number={5},
  pages={4216--4224},
  year={2024}
}


Issues

Results different from the paper

Hello, I trained CIFAR10-LT and CIFAR100-LT with the default parameters in the code, but the results I obtained are quite different from those in the paper.
Could you please tell me how I can reproduce the results in the paper? Could you provide your commands, or are there any other parameters or settings that need attention?
These are the commands I used:

  • Train: python train.py --gpu 0 --ds cifar100 --Lambda1 0.05 --Lambda2 0.05 --Lambda3 0.1 --drp ../long-tailed-ood-detection/SCOOD_dataset/data/images --srp ./checkpoints
  • Test: python test.py --gpu 0 --ds cifar100 --dout shvn --drp ../long-tailed-ood-detection/SCOOD_dataset/data/images --ckpt_path ./checkpoints/cifar100-0.01-OOD300000/ResNet18/e100-b128-256-adam-lr0.001-wd0.0005_Lambda10.05-Lambda20.05-Lambda30.1/replay3
    This is the result I got:
    (screenshot of the reported results)

Some questions about the use of OOD samples

Hi, thank you for your great work! I have a few questions regarding my understanding of the paper:
1. According to Figure 1, I understand that auxiliary OOD data are also used during training, similar to outlier exposure. Is that correct?
2. In the "Debiased Large Margin Learning Calibration" module, the schematic diagram does not illustrate the operation of pulling OOD categories. However, the text mentions "since we have already pulled OOD samples together in the joint LTR and outlier class learning in Eq. 2"; does this mean there is an operation that pulls the entire OOD category together?
If so, considering that OOD data have diverse categories and their distribution is theoretically unknown, how can we ensure they cluster together in the feature space?

I would greatly appreciate it if you could provide some clarification.
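
As an aside on question 2, the pulling effect can be seen in a small toy experiment (a sketch assuming a plain linear (K+1)-way head; it is not taken from the paper or this repo): cross-entropy toward a single outlier class drives heterogeneous OOD features toward the same classifier weight vector, so they cluster even though their semantics differ.

# Toy illustration (not the paper's argument): optimizing cross-entropy toward
# one shared outlier class pulls random "OOD" features toward the outlier
# class weight W[num_classes], i.e. toward a common direction in feature space.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
num_classes, dim = 10, 32
W = torch.randn(num_classes + 1, dim, requires_grad=True)    # linear (K+1)-way head
feats = torch.randn(64, dim, requires_grad=True)              # stand-in "OOD" features
labels = torch.full((64,), num_classes)                       # all mapped to the outlier class

for _ in range(200):
    loss = F.cross_entropy(feats @ W.t(), labels)
    loss.backward()
    with torch.no_grad():
        feats -= 0.1 * feats.grad
        W -= 0.1 * W.grad
        feats.grad.zero_()
        W.grad.zero_()

# After optimization, the features align with the outlier class weight:
# their average cosine similarity to W[num_classes] is clearly positive.
cos = F.cosine_similarity(feats, W[num_classes].unsqueeze(0), dim=1)
print(cos.mean().item())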
