Comments (9)
Thanks~~, it works.
Based on what you said, my overall understanding is:
- init_checkpoint determines which parameters can be restored from the checkpoint, i.e. the ones marked INIT_FROM_CKPT in the later log.
- The model saved in output_dir can be used to fill the init_checkpoint parameter. If both parameters are set, the checkpoint should first determine which parameters can be restored, and then at restore time the parameters marked INIT_FROM_CKPT should be restored from the model in output_dir.
Is this understanding correct?
from bert-gpu.
It is a good question, and you should be careful with these parameters. The one thing to keep in mind is that when you build on this project, you must make sure the model is loaded correctly.
In fact I do not know the exact answer. This may not be entirely right, but it could be helpful:
https://blog.csdn.net/guotong1988/article/details/100539565
In summary, I guess init_checkpoint is for the beginning of training, and output_dir is for predicting. You should remove output_dir for training and remove init_checkpoint for predicting, to make sure everything is exactly right.
Thx~, I see. That helps me understand the function.
But is there a right way to restore an interrupted model and continue training (e.g. changing num_train_steps = 300000 to num_train_steps = 600000)?
When I fill in both parameters, with init_checkpoint = 'google_bert_model.ckpt' and output_dir = 'my_finetuning_bertmodel.ckpt', I find that the loss is not cut off but continues from where it was, yet it climbs higher step by step. I am not sure whether this is right. Do you have any suggestions?
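One possible reason for the climbing loss, offered only as a guess: initializing weights from init_checkpoint does not bring back global_step or Adam's optimizer slots, so the learning-rate schedule in optimization.py (linear warmup, then linear decay to zero) starts over from step 0. A stdlib-only sketch of that schedule, with illustrative hyperparameters:

```python
def bert_lr(step, init_lr=1e-4, num_train_steps=600000, warmup_steps=10000):
    """Linear warmup followed by linear (power-1.0 polynomial) decay to zero,
    mirroring the shape of the schedule in BERT's optimization.py."""
    if step < warmup_steps:
        return init_lr * step / warmup_steps         # ramp up from 0
    return init_lr * (1.0 - step / num_train_steps)  # decay to 0

# A run interrupted at step 300000 was training at half the peak rate,
# but a restart that resets global_step ramps back up toward the full init_lr.
rate_before_interrupt = bert_lr(300000)  # 0.5 * init_lr
rate_after_restart = bert_lr(10000)      # back near the full init_lr
```

That jump back to the peak rate, combined with losing Adam's moment estimates, could plausibly push the loss up after resuming; restoring from the step-suffixed checkpoint in output_dir keeps global_step and the optimizer state together.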
See https://blog.csdn.net/guotong1988/article/details/88711524
It seems you need to add something like -300000.
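The "-300000" above is the global-step suffix TensorFlow appends to saved checkpoints: output_dir contains files like model.ckpt-300000.index and model.ckpt-300000.data-00000-of-00001, and init_checkpoint should point at the prefix model.ckpt-300000, not at the bare data files. tf.train.latest_checkpoint(output_dir) resolves this properly; here is a stdlib-only sketch of the same idea (the filenames are illustrative):

```python
import re

def latest_checkpoint_prefix(filenames):
    """Pick the checkpoint prefix with the highest step suffix,
    e.g. 'model.ckpt-300000', from its .index/.meta/.data files.
    (tf.train.latest_checkpoint does this via the 'checkpoint' file.)"""
    steps = {}
    for name in filenames:
        m = re.match(r"(model\.ckpt-(\d+))\.(index|meta|data-\d+-of-\d+)$", name)
        if m:
            steps[int(m.group(2))] = m.group(1)
    return steps[max(steps)] if steps else None

files = ["model.ckpt-200000.index", "model.ckpt-300000.index",
         "model.ckpt-300000.data-00000-of-00001", "checkpoint"]
latest = latest_checkpoint_prefix(files)  # → "model.ckpt-300000"
```

Passing that full prefix as init_checkpoint is what the suffix suggestion amounts to.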
All I can say is give it a try; I remember running into this problem myself, but I never dug into it.
I don't know; go by what you see in practice.
"It works"? What exactly works?
OK, it doesn't actually work after all... Thx~