
ttt_cifar_release's People

Contributors

test-time-training, yueatsprograms


ttt_cifar_release's Issues

Scripts to generate corrupted data

First of all, thank you so much for your work! The paper is really impressive!
Could you also provide the scripts to produce corrupted CIFAR data for testing?
Also, is it possible to test its adaptability with other datasets (train on CIFAR-10 and test on a different dataset), say the 32 x 32 ImageNet dataset?
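
In the meantime, a minimal sketch of one corruption type (Gaussian noise) can be written with NumPy alone. The severity constants below are illustrative, not the exact values used by the CIFAR-10-C benchmark:

```python
import numpy as np

def gaussian_noise(images, severity=3):
    """Apply Gaussian-noise corruption to a batch of uint8 images.

    `stds` is a hypothetical severity schedule in the spirit of
    CIFAR-10-C; it is not the benchmark's official constants.
    """
    stds = [0.04, 0.06, 0.08, 0.09, 0.10]
    x = images.astype(np.float32) / 255.0
    noisy = x + np.random.normal(scale=stds[severity - 1], size=x.shape)
    return (np.clip(noisy, 0.0, 1.0) * 255).astype(np.uint8)

# Example on a dummy CIFAR-shaped batch (N, 32, 32, 3):
batch = np.zeros((4, 32, 32, 3), dtype=np.uint8)
corrupted = gaussian_noise(batch, severity=5)
```

The same shape-preserving pattern extends to the other corruption families (blur, weather, digital) if the official generation scripts are unavailable.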

Standard versus online version

Thanks for the code! I tried both the "slow" (standard) and "online" settings on CIFAR-10.1.

But I found that the improvement from the standard version is only 0.1% (15.4 vs. 15.3), while the improvement from the online version is 1.6% (15.4 vs. 13.8). I wonder whether this result is expected.

(I am using a newer PyTorch version, 1.7.1; I am not sure if this makes a big difference.)
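
For context, the two settings differ only in whether the adapted weights are reset between test inputs. Below is a toy sketch of that difference; `extractor`, `ssh_head`, the dummy rotation labels, and this `adapt_single` are illustrative stand-ins, not the repo's actual API:

```python
import copy
import torch
import torch.nn as nn

# Toy stand-ins for the paper's shared extractor + rotation head.
extractor = nn.Linear(8, 8)
ssh_head = nn.Linear(8, 4)  # 4-way rotation prediction
source = copy.deepcopy((extractor.state_dict(), ssh_head.state_dict()))

def adapt_single(x, steps=1):
    # One self-supervised update on a single test input (dummy labels).
    opt = torch.optim.SGD(
        list(extractor.parameters()) + list(ssh_head.parameters()), lr=0.001)
    for _ in range(steps):
        labels = torch.zeros(x.size(0), dtype=torch.long)
        loss = nn.functional.cross_entropy(ssh_head(extractor(x)), labels)
        opt.zero_grad(); loss.backward(); opt.step()

stream = [torch.randn(4, 8) for _ in range(3)]

# Standard TTT: reset to the source weights before every test input,
# so each input is adapted to independently.
for x in stream:
    extractor.load_state_dict(source[0]); ssh_head.load_state_dict(source[1])
    adapt_single(x)

# Online TTT: never reset, so adaptation accumulates over the stream.
extractor.load_state_dict(source[0]); ssh_head.load_state_dict(source[1])
for x in stream:
    adapt_single(x)
```

Because online TTT accumulates updates across the whole test stream, a larger gap between the two settings (as you observed) is plausible when the distribution shift is consistent.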

No "training alteration" case

Hi,
Thanks for sharing your code, and also for the nice YouTube video presentation of the paper. I have a question about a scenario in the TTT method. In my understanding, applying TTT to an existing pre-trained model requires repeating the training on the source-domain data with the self-supervised head (ssh) for rotation prediction. After training, the standard or online version of TTT can be applied to the test-domain data by initializing the ssh weights with the ones obtained during training. This means "training alteration" is necessary for TTT.

My question is: what would happen if I did not alter the training and did not retrain with an ssh, and instead, at test time, simply started the ssh weights from a random initialization? How would that affect the results? Have you tried it? I assume the results would be worse than standard TTT.

Thanks,
Sorour
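
For illustration, the difference between the two setups amounts to whether the ssh head is loaded from the joint-training checkpoint or left at a random initialization. The checkpoint path and module shapes below are hypothetical:

```python
import torch
import torch.nn as nn

# Illustrative rotation-prediction head; shapes are hypothetical.
ssh_head = nn.Linear(16, 4)

# TTT as described: initialize from the joint-training checkpoint.
# (Hypothetical path, shown for contrast only.)
# ssh_head.load_state_dict(torch.load('ssh_checkpoint.pth'))

# The scenario asked about: skip joint training, keep a random init.
# The test-time rotation loss then back-propagates through an untrained
# head, so its early gradients into the shared extractor carry little
# useful signal.
nn.init.kaiming_uniform_(ssh_head.weight)
nn.init.zeros_(ssh_head.bias)
```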

Parameter updates during test time

Hello!
Maybe this is a stupid question, but I could not find where the parameters are updated at test time.
Could you please point me to them?
Thank you!
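
The update happens inside `adapt_single()` in `test_adapt.py`: the rotation-prediction loss is back-propagated and the optimizer's `step()` modifies the shared extractor and the ssh head. A toy sketch of that mechanism (the module names and shapes here are illustrative stand-ins, not the repo's):

```python
import torch
import torch.nn as nn

# Illustrative stand-ins for the shared extractor and rotation head.
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 16))
rotation_head = nn.Linear(16, 4)
optimizer = torch.optim.SGD(
    list(feature_extractor.parameters()) + list(rotation_head.parameters()),
    lr=0.001)

def rotate_batch(x):
    """Create the four rotations of each image and their labels."""
    rots = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
    labels = torch.arange(4).repeat_interleave(x.size(0))
    return torch.cat(rots), labels

def adapt_single(image):
    x, y = rotate_batch(image)
    loss = nn.functional.cross_entropy(rotation_head(feature_extractor(x)), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()  # <- this is where parameters change at test time

before = feature_extractor[1].weight.detach().clone()
adapt_single(torch.randn(1, 3, 32, 32))
```

The main classifier branch is never touched by a supervised loss at test time; only the self-supervised rotation loss drives the update.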

About results

Thanks for the code!
I tried to re-implement TTT-online.
On the original test set of CIFAR-10, I got an 8.58% error rate under the joint-training setting.
The improvement seems smaller than the results in the paper.
My results: [screenshot omitted]
The results in the paper: [screenshot omitted]

I wonder whether this result is expected.
Environment:
Python: 3.8.13
PyTorch: 1.12.0+cu116
Torchvision: 0.13.0+cu116
CUDA: 11.6
CUDNN: 8302
NumPy: 1.22.3
PIL: 9.0.1

Thanks

How adaptation works in test_adapt.py

Hello, thank you for your great work.

However, I am wondering how adaptation works in test_adapt.py. The weights of the trained ssh do not seem to be loaded in test_adapt.py, and after adapt_single(image) on line 114 there is no step that syncs the weights between ssh and net.

Thanks in advance for any suggestions!
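
One plausible answer (an assumption about how the repo builds its models, not a confirmed reading of the code): if `ssh` and `net` are constructed around the same extractor module instance, no explicit sync step is needed, because updating the shared layers through `ssh` updates `net` as well:

```python
import torch
import torch.nn as nn

# Minimal illustration of weight sharing between two branches.
ext = nn.Linear(8, 8)                       # shared feature extractor
net = nn.Sequential(ext, nn.Linear(8, 10))  # main classification branch
ssh = nn.Sequential(ext, nn.Linear(8, 4))   # self-supervised branch

# Both branches hold the *same* module object, i.e. one set of tensors,
# so any optimizer step through ssh's shared layers is seen by net.
print(net[0] is ssh[0])
```

Under this pattern, `adapt_single(image)` updating `ssh` would transparently update `net`'s features, which would explain the absence of a sync step.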
