
panda's People

Contributors

talreiss


panda's Issues

Question about early stopping

Hi, thanks for your great work. I noticed the paper says: "A weakness of the simple early-stopping approach is the reliance on a hyper-parameter that may not generalize to new datasets. Although the optimal stopping epoch can be determined with a validation set containing anomalies, it is not available in our setting." I don't understand why simple early stopping with a validation set is unavailable in the one-class setting. The validation set could be a held-out portion of the normal images, and training could stop once the validation loss changes very little over several epochs. Also, I don't see early stopping implemented in the code.
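For context, a minimal sketch of the patience-based early stopping the question proposes, as a self-contained helper; the class, thresholds, and the commented helpers are illustrative, not part of the PANDA code:

```python
class EarlyStopping:
    """Stop training when the loss on a held-out set of *normal*
    images stops improving for `patience` consecutive epochs."""

    def __init__(self, patience: int = 5, min_delta: float = 1e-4):
        self.patience = patience      # epochs to wait before stopping
        self.min_delta = min_delta    # tolerance against noisy losses
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss: float) -> bool:
        """Feed one validation loss per epoch; returns True to stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

# Inside a hypothetical training loop:
# stopper = EarlyStopping(patience=5)
# for epoch in range(max_epochs):
#     val_loss = validate(model, normal_val_loader)  # hypothetical helper
#     if stopper.step(val_loss):
#         break
```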

About results of SPADE on MVTec

Dear authors,
Thank you for your excellent work. I am slightly confused about the SPADE method and its results on MVTec.
In Table 4 you report a segmentation AUROC of 96.2 and a PRO of 91.7. However, in Table 8 you report a PRO value of 97.3, which I think is impossible; 97.3 doesn't match any metric reported in the paper.
You compared concatenation ("cat") and ensemble methods for SPADE, and the ensemble method appears to improve segmentation AUROC by 0.2 points over the previous arXiv version (https://arxiv.org/pdf/2005.02357.pdf). However, you also changed the kernel size of the Gaussian filter from 4 to 5, and I am not sure about the influence of this change.
Also, since the images were resized to 256 and then cropped to 224, why were the metrics still calculated at 256×256 resolution? Does this setting improve the segmentation performance?
I am looking forward to your reply and thank you in advance!
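For reference, a minimal sketch of the two post-processing steps the question touches on: upsampling a 224×224 anomaly map back to 256×256 before computing pixel metrics, and SPADE-style Gaussian smoothing of the pixel scores. The data here are random placeholders, and reading the 4-to-5 change as the filter's sigma is an assumption:

```python
import torch
import torch.nn.functional as F
from scipy.ndimage import gaussian_filter

# Placeholder 224x224 anomaly map, as produced after the center crop.
amap = torch.rand(1, 1, 224, 224)

# One way the 256x256 evaluation could work: upsample the map back to
# the pre-crop resolution before computing pixel-level AUROC/PRO.
amap_256 = F.interpolate(amap, size=(256, 256),
                         mode="bilinear", align_corners=False)

# Gaussian smoothing of the pixel scores; the issue asks about the
# effect of widening the filter from 4 to 5 (assumed to be sigma).
smoothed_4 = gaussian_filter(amap_256.squeeze().numpy(), sigma=4)
smoothed_5 = gaussian_filter(amap_256.squeeze().numpy(), sigma=5)
```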

Question about center update

Hi, two more questions. Why is the center of the criterion (`torch.FloatTensor(feature_space).mean(dim=0)`) not updated during training? And why does the training loss increase no matter which dataset I use for training?
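For context, a minimal sketch of a compactness loss whose center is computed once from the initial pretrained features and then held fixed, the DeepSVDD-style convention the question asks about (updating the center during training would let the objective drift toward trivial solutions). The names here are illustrative, not the repository's:

```python
import torch

class CompactnessLoss(torch.nn.Module):
    """Mean squared distance of features to a fixed center."""

    def __init__(self, center: torch.Tensor):
        super().__init__()
        # A buffer moves with the model's device but receives no
        # gradient and is never updated by the optimizer.
        self.register_buffer("center", center)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return ((features - self.center) ** 2).sum(dim=1).mean()

# The center is computed once from the initial feature space and frozen:
feature_space = torch.randn(1000, 512)                # placeholder features
criterion = CompactnessLoss(feature_space.mean(dim=0))
loss = criterion(torch.randn(32, 512))                # placeholder batch
```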

Outlier Exposure

Hi, I am experimenting with the outlier exposure code. It seems that the outlier exposure code does not use the kNN algorithm and is simply a binary classification. Is this supposed to be the case? Thank you! Great paper :)
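For context, a minimal sketch of outlier exposure framed as binary classification, which matches what the question observes: normal images get label 0, auxiliary outlier images get label 1, and the network is fine-tuned with cross-entropy rather than scored with kNN. This is an illustrative sketch, not the repository's exact code:

```python
import torch
import torch.nn as nn

# Hypothetical backbone plus a 2-way head: normal (0) vs. outlier (1).
model = nn.Sequential(nn.Flatten(),
                      nn.Linear(3 * 224 * 224, 512), nn.ReLU(),
                      nn.Linear(512, 2))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# One illustrative step with random placeholder batches.
normal = torch.randn(16, 3, 224, 224)    # in-distribution images
outlier = torch.randn(16, 3, 224, 224)   # outlier-exposure images
images = torch.cat([normal, outlier])
labels = torch.cat([torch.zeros(16, dtype=torch.long),
                    torch.ones(16, dtype=torch.long)])

loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
# At test time, the outlier-class probability can serve directly as
# the anomaly score, so no kNN lookup is needed.
```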

Caltech Birds Dataset

Thank you for uploading the code. It is well written and readable, and I was able to reproduce the CIFAR-10 results.

I am more interested in the fine-grained datasets you report in Table 2 of your paper. I have created a fork and added the Caltech Birds dataset (diff). I am using the first 20 classes as normal classes and the entire test set for evaluation, as described in Appendix C. However, I am not able to reproduce your results: I only get an initial AUROC of 76.9%, and it gets worse during training, both with and without --ewc. This is still higher than the compared methods, but much lower than the reported 95.3%.

Could you please take a look and see whether I got something wrong, or perhaps provide your code for the datasets in Table 2?
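For concreteness, a minimal sketch of the split described above (first 20 classes as normal for training, the entire test set for evaluation), assuming each dataset item is an (image, label) pair; this is illustrative, not the fork's actual code, and for large datasets the labels would be read from annotation files rather than by iteration:

```python
import numpy as np
from torch.utils.data import Subset

def one_class_split(train_set, test_set, normal_classes=range(20)):
    """Keep only normal-class images for training and relabel the
    full test set as normal (0) vs. anomalous (1)."""
    normal = list(normal_classes)

    train_labels = np.array([label for _, label in train_set])
    train_idx = np.where(np.isin(train_labels, normal))[0]
    train_subset = Subset(train_set, train_idx.tolist())

    test_labels = np.array([label for _, label in test_set])
    anomaly_labels = (~np.isin(test_labels, normal)).astype(int)
    return train_subset, anomaly_labels
```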

Question about loss increase

Hi, thanks for your excellent work! I have a question to confirm.

During training on some datasets, I observe that both loss values and AUROC increase. I am wondering whether this phenomenon is reasonable. Could you please help explain it?

Thanks a lot!

Performance on MVTec

Hi, do you have image-level anomaly detection results for PANDA on the MVTec dataset?

About self-supervised and pre-trained implement

Hi,

Thank you for sharing the code accompanying the paper.

As I try to get familiar with this field, I have a few questions about training and testing on one-class CIFAR-10. In Table 2 of Section 4.3, you mention that self-supervised methods did not outperform the pre-trained ones, which is why you consider using pre-trained features one of the cores of your work. In that case, is it legitimate to train a model on regular CIFAR-10 (standard CIFAR-10 image classification) and then test it on one-class CIFAR-10 (with the criterion and average ROC AUC following your code)?

Also, following that process, would the resulting model count as self-supervised or pre-trained, in your view?

Finally, correct me if I am wrong, but I notice that the implemented code only evaluates one specific class at a time. Therefore, to get the same average ROC AUC as in Table 2, I have to average the ROC AUC over every class, right? For example, with class 1: 90%, class 2: 91%, class 3: 90%, and class 4: 91%, the average ROC AUC is (90% + 91% + 90% + 91%)/4 = 90.5%.

Many Thanks
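For reference, the averaging from the last paragraph as a minimal sketch: compute ROC AUC once per one-class experiment with `sklearn.metrics.roc_auc_score`, then take a plain mean. The scores and labels here are random placeholders:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

per_class_auroc = []
for cls in range(10):  # CIFAR-10: one one-class experiment per class
    # Placeholders; in practice class `cls` is normal (label 0) and
    # all other classes are anomalous (label 1).
    labels = np.random.randint(0, 2, size=1000)
    scores = np.random.rand(1000)      # higher score = more anomalous
    per_class_auroc.append(roc_auc_score(labels, scores))

mean_auroc = float(np.mean(per_class_auroc))   # the "Average ROC AUC"
# e.g. np.mean([0.90, 0.91, 0.90, 0.91]) == 0.905, as in the example above
```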

How to create the Fisher matrix for other models?

Hello :) I'm very interested in your work, writing from Korea.

In your repo, the Fisher matrix you share is for ResNet only, and I want to try PANDA (PANDA-EWC) with EfficientNet and other recent models.
Your paper does not specify how the .pth file you shared was created.
Could you tell me how to create it, or point me to the relevant keywords?

Thanks😊
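For reference, a minimal sketch of the standard empirical diagonal-Fisher recipe used for EWC, which applies to any backbone (EfficientNet included) and can be saved as a .pth file. This is an assumed reconstruction of the recipe, not necessarily the exact script the authors used, and the loader is a placeholder:

```python
import torch
import torch.nn.functional as F

def compute_diag_fisher(model, data_loader, device="cpu"):
    """Empirical diagonal Fisher: mean squared per-sample gradient of
    the log-likelihood. Use a loader with batch_size=1 so every
    backward pass yields a per-sample gradient."""
    model.eval()
    fisher = {name: torch.zeros_like(p)
              for name, p in model.named_parameters() if p.requires_grad}
    n = 0
    for image, label in data_loader:
        model.zero_grad()
        log_probs = F.log_softmax(model(image.to(device)), dim=1)
        F.nll_loss(log_probs, label.to(device)).backward()
        for name, p in model.named_parameters():
            if p.grad is not None:
                fisher[name] += p.grad.detach() ** 2
        n += 1
    return {name: f / n for name, f in fisher.items()}

# Hypothetical usage with a torchvision EfficientNet:
# from torchvision.models import efficientnet_b0
# model = efficientnet_b0(weights="IMAGENET1K_V1")
# fisher = compute_diag_fisher(model, pretraining_loader)  # placeholder loader
# torch.save(fisher, "fisher_efficientnet_b0.pth")
```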
