Comments (20)
Any news?
from pytorch-randaugment.
Great! Let me know how everything goes. The open sourcing will be done in a week or two!
Hi, @ildoonet ,
Thanks for contributing this nice code! I wonder if the performance gap is related to two possible misalignments?
(a) It seems you put RandAugment before RandomResizedCrop. If I understand correctly, the original TensorFlow repo puts RandAugment after RandomResizedCrop.
(b) It seems some of the black pixels, which are originally outside the image but end up inside the crop due to transformations such as ShearX, should be filled with a value like 128 or the pixel mean via the fillcolor parameter?
These two gaps are my impression from reading both repos, but I am not sure whether they are real, or, if they are, how the performance would be affected.
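Both points can be checked with a small sketch. The `shear_x` below is a hypothetical stand-in for the repo's ShearX op (names are mine); it illustrates PIL's `fillcolor` argument from point (b), and the commented pipeline shows the ordering from point (a), where `RandAugment(n, m)` is an assumed class, not a real torchvision transform.

```python
from PIL import Image

def shear_x(img, magnitude, fillcolor=(128, 128, 128)):
    # (b): pixels exposed by the shear are filled with gray (128) via
    # fillcolor, instead of PIL's default black.
    return img.transform(img.size, Image.AFFINE,
                         (1, magnitude, 0, 0, 1, 0),
                         fillcolor=fillcolor)

# (a): apply RandAugment *after* RandomResizedCrop, as in the TF repo.
# train_transform = transforms.Compose([
#     transforms.RandomResizedCrop(224),
#     transforms.RandomHorizontalFlip(),
#     RandAugment(n=2, m=9),   # hypothetical augmentation class
#     transforms.ToTensor(),
# ])
```

With a positive magnitude, the pixels near the bottom-right corner sample from outside the image and take the fill color, while in-bounds pixels are unchanged.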
The code is now updated with randaugment and autoaugment: https://github.com/tensorflow/tpu/tree/master/models/official/resnet
Yes that is the code I mentioned. Also the code for ResNet-50 will be opensourced soon!
No, I believe that is what the magnitude hyperparameter controls. It is the threshold value that does not change.
@HobbitLong Thanks! Those are certainly things that could cause differences. As @BarretZoph mentioned, TensorFlow's RandAugment will be open-sourced this week or next; I will examine the code and share what the problem was.
I fixed the above problem and got a 23.22 top-1 error with ResNet-50 and N=2, M=9.
I guess there are more things to match, but due to my current work, I will have to look into this after a few weeks.
According to the author's email reply, I changed some parameters.
With 180 epochs (the RandAugment paper's epoch count):
M\N | 1 | 2 | 3 |
---|---|---|---|
5 | 0.2284 | 0.2298 | 0.2321 |
7 | 0.2288 | 0.2286 | 0.2298 |
9 | 0.2286 | 0.2287 | 0.2308 |
11 | 0.2262 | 0.2265 | 0.2316 |
13 | 0.2283 | 0.2264 | 0.2299 |
15 | 0.2286 | 0.2264 | 0.2304 |
The performance with N=2, M=9 (the reported optimal values) did not match the reported performance (top-1 error = 22.4). Also, no model performs as well as the paper claims, even though we evaluated all models over the same search space.
With 270 epochs (AutoAugment's epoch count):
M\N | 1 | 2 | 3 |
---|---|---|---|
5 | 0.2271 | 0.2284 | 0.2312 |
7 | 0.2271 | 0.2287 | 0.2263 |
9 | 0.2262 | 0.2287 | 0.2286 |
11 | 0.2255 | 0.2253 | 0.2276 |
13 | 0.2241 | 0.2250 | 0.2275 |
15 | 0.2224 | 0.2246 | 0.2271 |
If we increase training to 270 epochs, some of the results outperform both 'AutoAugment' and 'RandAugment'.
Have you tried using the randaugment code in https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet? This was the code used for training the ResNet model. Also, we are almost done open-sourcing the ResNet model.
@BarretZoph Is this the code you mentioned? Some parts are mismatched with this code; I will change them and conduct experiments soon.
@BarretZoph Is this correct?
Since SolarizeAdd has an addition parameter of 0 (zero), it doesn't affect the image.
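For reference, here is a rough NumPy/PIL sketch of how a SolarizeAdd-style op behaves; the function name, signature, and default threshold of 128 are my assumptions, not the TF repo's exact API. It makes the point above concrete: with `addition=0`, the image comes back unchanged.

```python
import numpy as np
from PIL import Image

def solarize_add(img, addition=0, threshold=128):
    # Pixels below `threshold` get `addition` added (clipped to [0, 255]);
    # pixels at or above the threshold are left alone. With addition=0
    # the output equals the input.
    arr = np.asarray(img).astype(np.int64)
    added = np.clip(arr + addition, 0, 255)
    out = np.where(arr < threshold, added, arr).astype(np.uint8)
    return Image.fromarray(out)
```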
@BarretZoph Thanks for opensourcing. I really look forward to it!
Also, about SolarizeAdd: sorry for my misunderstanding. I will update this repo and start new experiments. Thanks.
I need to examine more, since the performance doesn't match after changing the augmentation search space to match RandAugment's code. The best top-1 error is 22.65 after training for 180 epochs.
cc @BarretZoph
M\N | 1 | 2 | 3 |
---|---|---|---|
5 | 0.2318 | 0.2334 | 0.2371 |
7 | 0.2272 | 0.2330 | 0.2365 |
9 | 0.2287 | 0.2295 | 0.2338 |
11 | 0.2279 | 0.2287 | 0.2352 |
13 | 0.2285 | 0.2265 | 0.2337 |
15 | 0.2262 | 0.2280 | 0.2315 |
Hmm well the code in https://github.com/tensorflow/tpu/tree/master/models/official/resnet will be opensourced shortly. Hopefully that will resolve all of your issues.
@BarretZoph Thanks, I will look into your open-sourced code.
From checking the code you provided, I'm not sure what is different. The items below are a few things that might hurt the performance.
- In your paper, "The image size was 224 by 244" seems to be a typo. The image size should be 224 by 224, right?
- Is this the base configuration you used? : https://github.com/tensorflow/tpu/blob/master/models/official/resnet/configs/resnet_config.py
- Parameters (learning rate, epochs, ...) differ from the paper. (I guess you modified them before training the model.)
- DropBlock is used, which is not in this repo.
- Precision is fp16, whereas mine uses fp32.
- Do you use the LARS optimizer when you increase the batch size (e.g. 4k)?
I guess if you open-source your code as well as the configuration, it would be very helpful. Many thanks!
Thanks I just fixed the image size in the paper! Yes it should be 224x224.
Yes that is the base config but a few things were changed (180 epochs and batch size of 4096 with 32 replicas).
DropBlock is actually not used in this. Since 'dropblock_groups': '' specifies no groups, it will not be applied. https://github.com/tensorflow/tpu/blob/master/models/official/resnet/resnet_main.py#L88
I believe that changing the precision will not make much of a difference.
No LARS is not used with the 4K batch size.
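A minimal sketch of why the empty flag disables DropBlock; the real flag handling lives in the linked resnet_main.py, and this only mirrors the idea of parsing a comma-separated group string.

```python
# Base-config value: no block groups specified.
dropblock_groups = ''

# Parse the comma-separated group list; an empty string yields no groups,
# so DropBlock is applied to zero block groups, i.e. effectively disabled.
groups = [int(g) for g in dropblock_groups.split(',') if g]
```

With a value like `'3,4'` the same parse would yield `[3, 4]`, enabling DropBlock on those block groups.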
@BarretZoph Thanks for the update. It will help a lot.
I found that the preprocessing for ResNet/EfficientNet implemented by Google is a bit different from the one most people use. It regularizes harder by using a smaller cropping region during training, and it uses a center-cropped image with the same aspect ratio at test time.
I believe this discrepancy causes some degradation. I will try it with this preprocessing.
- Pytorch's most favored random-crop preprocessing : https://pytorch.org/docs/stable/_modules/torchvision/transforms/transforms.html#RandomResizedCrop
- Yours : https://github.com/tensorflow/tpu/blob/master/models/official/resnet/resnet_preprocessing.py#L86
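Both linked implementations are variants of Inception-style distorted-bounding-box crop sampling; a rough sketch is below. The function name, parameter names, and the center-crop fallback are illustrative assumptions, not the actual torchvision or TF APIs, but they expose the knobs (area range, aspect-ratio range, attempt count, fallback behavior) where the two implementations can quietly diverge.

```python
import math
import random

def sample_distorted_crop(height, width,
                          area_range=(0.08, 1.0),
                          aspect_range=(3 / 4, 4 / 3),
                          max_attempts=10):
    # Sample a crop by target area and log-uniform aspect ratio, as in
    # Inception-style preprocessing; return (y, x, h, w).
    area = height * width
    for _ in range(max_attempts):
        target_area = random.uniform(*area_range) * area
        log_ratio = (math.log(aspect_range[0]), math.log(aspect_range[1]))
        aspect = math.exp(random.uniform(*log_ratio))
        w = int(round(math.sqrt(target_area * aspect)))
        h = int(round(math.sqrt(target_area / aspect)))
        if w <= width and h <= height:
            x = random.randint(0, width - w)
            y = random.randint(0, height - h)
            return y, x, h, w
    # Fallback after max_attempts failures: center crop of the shorter side.
    s = min(height, width)
    return (height - s) // 2, (width - s) // 2, s, s
```

Differences in the attempt count, the fallback, and any minimum-coverage constraint change how aggressive the effective crop distribution is, which is one plausible source of the train-time gap noted above.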
Any news?
Perhaps this is related to the issue I just posted about lower magnitudes leading to greater distortions? #24
Has the code been updated for ImageNet reproduction?