yueatsprograms / ttt_cifar_release
TTT Code Release
Home Page: https://test-time-training.github.io/
Hi, I'm very interested in this work. Have you ever tested this method on a semantic segmentation task?
I think the load_state_dict call should set strict=False, since GroupNorm has no running_mean and running_var buffers, unlike the BatchNorm layer used during training.
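For illustration, a minimal sketch of the suggested strict=False load; the checkpoint path, its 'state_dict' key, and the toy GroupNorm model are all assumptions, not the repo's actual code:

```python
import torch
import torch.nn as nn

# Toy model with GroupNorm standing in for the repo's ResNet.
net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.GroupNorm(8, 16), nn.ReLU())

# 'ckpt.pth' and the 'state_dict' key are hypothetical; use whatever the training script saved.
ckpt = torch.load('ckpt.pth', map_location='cpu')

# strict=False lets loading succeed even when the key sets differ, e.g. if the
# checkpoint carries BatchNorm running statistics (running_mean / running_var)
# that a GroupNorm model does not have.
missing, unexpected = net.load_state_dict(ckpt['state_dict'], strict=False)
print('missing keys:', missing)
print('unexpected keys:', unexpected)
```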
First of all, thank you so much for your work! The paper is really impressive!
Could you also provide the scripts to produce corrupted CIFAR data for testing?
Also, is it possible to test its adaptability with other datasets (train on CIFAR-10 and test on a different dataset), say the 32 x 32 ImageNet dataset?
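Not the official corruption scripts, but here is a minimal sketch of producing one corrupted copy of the CIFAR-10 test set (Gaussian noise). The per-severity noise scales below are illustrative assumptions; the real CIFAR-10-C benchmark defines its own constants and many more corruption types:

```python
import numpy as np
import torchvision

# Load the clean CIFAR-10 test set; .data is a (10000, 32, 32, 3) uint8 array.
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True)
images = testset.data.astype(np.float32) / 255.0

def gaussian_noise(x, severity=3):
    # Illustrative per-severity noise scales (assumed, not the official constants).
    scale = [0.04, 0.06, 0.08, 0.09, 0.10][severity - 1]
    noisy = x + np.random.normal(scale=scale, size=x.shape)
    return np.clip(noisy, 0.0, 1.0)

corrupted = (gaussian_noise(images, severity=3) * 255).astype(np.uint8)
np.save('cifar10_test_gaussian_noise.npy', corrupted)  # feed this to the evaluation code
```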
Thanks for the code! I tried both the "slow" (standard) and the "online" setting on CIFAR-10.1.
But I found that the improvement of the standard version is only 0.1% (15.4 vs. 15.3), while the improvement of the online version is 1.6% (15.4 vs. 13.8). I wonder whether this result makes sense.
(I am using a newer PyTorch version, 1.7.1; not sure if this makes a big difference.)
Hi,
Thanks for sharing your code and also the nice YouTube video presentation of the paper. I have a question about a specific scenario in the TTT method. In my understanding, applying TTT to an existing pre-trained model requires repeating the training on the source-domain data with the self-supervised head (ssh) for rotation prediction. After training, the standard or online version of TTT can be applied to the test-domain data by initializing the ssh weights with the ones obtained during training. This means "training alteration" is necessary for TTT (see the sketch after this message).
My question is: what would happen if I did not alter the training and did not retrain with an ssh, so that at test time the ssh weights simply start from a random initialization? How would that affect the results? Have you tried that? I assume the results would be worse than with standard TTT.
Thanks,
Sorour
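For context, a minimal sketch of the test-time update described above: each test image is rotated four ways, and the rotation-prediction loss is used to update the shared extractor and ssh head. The names adapt_single, ext, and head echo how the method is discussed in this thread, but the code itself is an assumption, not the repo's implementation:

```python
import torch
import torch.nn as nn

def rotate_batch(x):
    # Build the 4 rotations (0/90/180/270 degrees) of each image and the matching
    # rotation labels; predicting the rotation is the self-supervised task.
    inputs = torch.cat([torch.rot90(x, k, dims=(2, 3)) for k in range(4)], dim=0)
    labels = torch.arange(4).repeat_interleave(x.size(0))
    return inputs, labels

def adapt_single(x, ext, head, lr=0.001, niter=1):
    # Test-time training: minimize the rotation-prediction loss on the single test
    # image x, updating the shared extractor ext and the ssh head.
    optimizer = torch.optim.SGD(list(ext.parameters()) + list(head.parameters()), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for _ in range(niter):
        inputs, labels = rotate_batch(x)
        loss = criterion(head(ext(inputs)), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Toy modules standing in for the repo's ResNet extractor and rotation head.
ext = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = nn.Linear(8, 4)
adapt_single(torch.randn(1, 3, 32, 32), ext, head)
```

If the ssh head started from random weights instead of the jointly trained ones, this update would presumably push the shared features toward an arbitrary rotation classifier, which is the intuition behind the question's expectation of worse results.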
Hello!
Maybe this is a stupid question, but I could not find where the parameters are updated at test time.
Could you please point it out for me?
Thank you!
Thanks for the code!
I tried to re-implement TTT-online.
On the original CIFAR-10 test set, I got an 8.58% error rate under the joint-training setting.
The improvement seems smaller than the results reported in the paper.
My results: (attachment not shown)
The results in the paper: (attachment not shown)
I wonder whether this result makes sense.
Environment:
Python: 3.8.13
PyTorch: 1.12.0+cu116
Torchvision: 0.13.0+cu116
CUDA: 11.6
CUDNN: 8302
NumPy: 1.22.3
PIL: 9.0.1
Thanks
Hello, Thank you for your great work.
However, I am wondering how adaptation works in test_adapt.py.
The weights of the trained ssh do not seem to be loaded in test_adapt.py, and after adapt_single(image)
on line 114 there is no step that syncs the weights between ssh and net (see the sketch below).
Thanks in advance for any suggestions!
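For what it's worth, a minimal sketch of why an explicit sync step may not be needed, under the assumption that ssh and net wrap the very same extractor module object (whether the repo is actually built this way is not verified here):

```python
import torch
import torch.nn as nn

# A shared feature extractor wrapped by two heads: the main classifier (net)
# and the self-supervised rotation head (ssh).
ext = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Flatten())
net = nn.Sequential(ext, nn.Linear(8 * 32 * 32, 10))   # main task
ssh = nn.Sequential(ext, nn.Linear(8 * 32 * 32, 4))    # rotation prediction

# Both wrappers hold references to the *same* ext object, so updating ssh's
# extractor parameters also changes the features net sees; no copy/sync needed.
opt = torch.optim.SGD(ssh.parameters(), lr=0.1)
before = ext[0].weight.clone()

loss = ssh(torch.randn(4, 3, 32, 32)).sum()
opt.zero_grad()
loss.backward()
opt.step()

print(torch.equal(before, ext[0].weight))   # False: the shared extractor was updated
print(net[0][0].weight is ext[0].weight)    # True: net and ssh share the same tensor
```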