Comments (6)
Hi Edward, thank you very much; your advice saved me. A larger learning rate exposed the problem: the plot showed jitter. I debugged and fixed the issue, and now the plot is smooth and grows steadily. Next I'm going to try μTransfer with a large learning rate on this model.
Although it works well now, the underlying principle is still hard for me to follow. muP is really amazing.
from mup.
Thanks for your patience, Jianbin.
There are many considerations when training a very large model. In some sense, mup is a necessary but insufficient condition for successful training of large models. Other factors include the use of weight decay and floating point precision. Hope this can help with your investigation!
Another question: the transformer example and mutransformers use different initialization methods, (init_std / d_model) ** 0.5 vs. init_std * width_mult ** -0.5. Are these two formulas equivalent in some sense, and does either have pros and cons?
They are equivalent up to a constant.
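To see why, here is a tiny self-contained check. The values init_std = 0.02 and base_d_model = 256 are assumptions chosen for illustration, not values taken from either repo: both rules scale as d_model ** -0.5, so their ratio does not depend on width.

```python
# Hypothetical numbers for illustration only; init_std and the base
# width (the width at which width_mult == 1) are assumptions, not
# values from the transformer example or mutransformers.
init_std = 0.02
base_d_model = 256

def init_a(d_model):
    # Transformer-example style: (init_std / d_model) ** 0.5
    return (init_std / d_model) ** 0.5

def init_b(d_model):
    # mutransformers style: init_std * width_mult ** -0.5
    width_mult = d_model / base_d_model
    return init_std * width_mult ** -0.5

# Both rules are proportional to d_model ** -0.5, so the ratio between
# them is the same at every width: they differ only by a constant factor.
ratios = [init_a(d) / init_b(d) for d in (256, 512, 1024, 4096)]
print(ratios)
```

Since the constant factor can be absorbed into init_std itself, neither rule has an inherent advantage; they just place the tunable constant differently.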
Hi shjwudp,
Thanks for your interest in our work!
Your coordinate check plots seem identical across time steps, which is a sign that the learning rate is too small for the function to change. Can you try rerunning with a larger learning rate? It's possible that with a moderately larger learning rate, the muP run might blow up after a couple steps, in which case we can look into it further.
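For readers following along, the symptom described here can be reproduced in a toy setting. The sketch below illustrates the idea of a coordinate check in plain Python; it is not the mup library's coord_check implementation, and the width, learning rates, and toy loss are all made up for illustration.

```python
import random

# Idea of a coordinate check: record the mean absolute coordinate of a
# layer's output at every training step. If the curve is flat across
# steps, the learning rate is too small for the function to change.
random.seed(0)

def coord_trajectory(width, lr, nsteps=5, target=10.0):
    # One linear layer y = W x with W ~ N(0, 1/width), trained by SGD
    # on the toy loss 0.5 * sum((y_i - target) ** 2).
    W = [[random.gauss(0.0, width ** -0.5) for _ in range(width)]
         for _ in range(width)]
    x = [1.0] * width
    sizes = []
    for _ in range(nsteps):
        y = [sum(W[i][j] * x[j] for j in range(width)) for i in range(width)]
        sizes.append(sum(abs(v) for v in y) / width)  # mean |coordinate|
        for i in range(width):
            g = y[i] - target              # dL/dy_i
            for j in range(width):
                W[i][j] -= lr * g * x[j]   # SGD step
    return sizes

tiny = coord_trajectory(64, lr=1e-7)   # near-flat curve: lr too small to see movement
large = coord_trajectory(64, lr=1e-2)  # coordinates move visibly from step to step
```

With the tiny learning rate the recorded coordinate sizes are nearly identical across steps, which is exactly the "identical plots" symptom; with the larger rate the trajectory moves, so any parametrization bug becomes visible.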
Hi @edwardjhu, I've recently run some experiments extending the previous discussion. I found that transferring the same hyperparameters from a 350M model to 1.3B scale works fine, but transferring to a larger 2.7B model blows up. Does that mean my hyperparameters are too aggressive? How can I avoid this?
My coord check plots:
The same hyperparameters, 1.3B model and 2.7B model comparison: https://tensorboard.dev/experiment/RirdggEZS8O2rRU9clEy0g/#scalars
Hi Jianbin, could you share what caused the jitter in your coord check plots? It's possible that I have a similar issue (#58).
Thanks!