Switching from TPUEstimator to Estimator solves the input_fn issue; it seems the two classes have different self._call_input_fn signatures. So I just switched all estimators and dependent objects to their non-TPU counterparts, along with the relevant params.
This still left one issue: the BERT optimizer in its current form is not multi-GPU friendly, so I adapted this implementation for multi-GPU use in models/bert/optimization.py: https://github.com/HaoyuHu/bert-multi-gpu/blob/master/custom_optimization.py, and now I am able to train properly.
In case anybody else is following this solution path: train_batch_size should now be specified as the per-GPU batch size, and then, when computing num_train_steps internally, you should multiply by the number of GPUs to use your effective batch size. I was unable to get the gradient accumulation wrapper in its current form to work in the multi-GPU setting, but I can update with a solution if I end up making it work. Anyway, I will close this issue for now.
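The batch-size bookkeeping described above can be sketched with a few lines of arithmetic. The numbers and variable names here are purely illustrative, not from the tapas codebase:

```python
# Hypothetical numbers to illustrate the per-GPU vs. effective batch-size
# bookkeeping; all names below are my own, not from the tapas code.
num_train_examples = 100_000
num_epochs = 3
per_gpu_batch_size = 16   # what train_batch_size now means
n_gpus = 4

# Effective (global) batch size is per-GPU batch size times GPU count.
effective_batch_size = per_gpu_batch_size * n_gpus

# num_train_steps should be computed from the effective batch size;
# otherwise you train for n_gpus times too many steps.
num_train_steps = (num_train_examples * num_epochs) // effective_batch_size

print(effective_batch_size)   # 64
print(num_train_steps)        # 4687
```

The key point is simply that each step now consumes `n_gpus` times more examples, so the step count shrinks by the same factor.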
Thanks!
from tapas.
@sarahpanda It would be great if you could share your solution here!
I recommend you ask in their repo. It does seem that changing TPUEstimator to Estimator and TPUEstimatorSpec to EstimatorSpec fixed the signature issue, so consider double-checking that you didn't miss any instance of TPUEstimator.
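One quick way to do that double-check is a plain grep over the source tree; the pattern and path here are just illustrative:

```shell
# List any remaining TPU-specific symbols that still need to be swapped
# out for their plain Estimator counterparts; "|| true" keeps the command
# from failing when there are no matches left.
grep -rnE "TPUEstimator|TPURunConfig" --include="*.py" . || true
```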
Hello @nmallinar, thanks for the question. While we haven't tried multi-GPU training, following the docs the second approach is the correct one, using the arguments in RunConfig, since the scope-based approach is used in Keras models, not in Estimator-based ones. I do wonder whether this is supported in the TPUEstimator/TPURunConfig we are using, but if not, it should be easy to change.
I found this guide for multi-GPU training on TF1 that might be useful; make sure to check the Estimator section. Apparently there is a JSON environment variable that has to be properly set up.
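The RunConfig-based approach mentioned above might look roughly like the sketch below. This is a configuration sketch only, assuming tensorflow-gpu 1.14 as elsewhere in this thread; the model_fn stub and model_dir path are placeholders, not tapas code:

```python
import tensorflow as tf  # assumes tensorflow-gpu 1.14, as in this thread


def model_fn(features, labels, mode, params):
    # Stand-in model_fn just to make the sketch self-contained; in
    # practice this would be the existing tapas model_fn.
    loss = tf.reduce_mean(tf.square(features["x"]))
    train_op = tf.train.GradientDescentOptimizer(0.1).minimize(
        loss, global_step=tf.train.get_or_create_global_step())
    return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)


# The "second approach": hand the strategy to RunConfig via train_distribute,
# rather than wrapping model code in a strategy scope (the Keras pattern).
strategy = tf.distribute.MirroredStrategy()
run_config = tf.estimator.RunConfig(
    model_dir="/tmp/tapas_model",   # illustrative path
    train_distribute=strategy,
)
estimator = tf.estimator.Estimator(model_fn=model_fn, config=run_config)
```

With train_distribute set, the Estimator replicates the model across the visible GPUs and aggregates gradients itself, which is why no scope is needed around the model code.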
The error you mention seems strange, since the third argument for self._call_input_fn is declared here. Can you run pip show to get the versions of tensorflow and tensorflow-estimator on your runtime? They should both be 1.14.
@eisenjulian Yes, I was thinking about switching from TPUEstimator to Estimator in the absence of use_tpu; I have seen similar designs in TF multi-GPU BERT training code in other repos. However, the error seems to indicate that the problem is not with passing in the distribution strategy object but rather with how input_fn is constructed and called, so such switches may not be needed (or, even with the switches, the input_fn may still throw this error with an Estimator; I'll update once I get around to checking this).
Results of pip show:
Name: tensorflow-gpu
Version: 1.14.0
Summary: TensorFlow is an open source machine learning framework for everyone.
Home-page: https://www.tensorflow.org/
Author: Google Inc.
Author-email: [email protected]
License: Apache 2.0
Location: /mydata/repos/tapas2/venv/lib/python3.7/site-packages
Requires: protobuf, tensorboard, keras-applications, numpy, grpcio, wheel, tensorflow-estimator, wrapt, google-pasta, termcolor, astor, keras-preprocessing, six, absl-py, gast
Required-by: tapas
Name: tensorflow-estimator
Version: 1.14.0
Summary: TensorFlow Estimator.
Home-page: https://www.tensorflow.org/
Author: Google Inc.
Author-email: UNKNOWN
License: Apache 2.0
Location: /mydata/repos/tapas2/venv/lib/python3.7/site-packages
Requires:
Required-by: tensorflow-gpu
I will look further into this guide you posted as well, thanks for the reference.
Also running into the same problem on multi-GPU; single GPU works fine but is much, much slower than TPU. I have changed all TPUEstimator objects to Estimator, and also tried to adapt the BERT optimizer as per the link shared. I am using MirroredStrategy and have tried with and without specifying the devices. But either the process doesn't run on the GPUs, or, if it shows as running, volatile utilization shows 0% on both, so I believe it doesn't work. I would appreciate it if you could share more about your workaround.
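Before debugging MirroredStrategy itself, it may be worth confirming that TensorFlow actually sees the GPUs at all; 0% volatile utilization can simply mean the devices were never picked up (e.g. CUDA_VISIBLE_DEVICES unset, or a CPU-only wheel installed). A minimal check, assuming the TF 1.14 runtime used in this thread (so this is environment-dependent and not runnable without GPUs):

```python
# Sanity check that TensorFlow 1.14 sees your GPUs before debugging
# the distribution strategy itself.
import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.test.is_gpu_available())
print([d.name for d in device_lib.list_local_devices()
       if d.device_type == "GPU"])
```

If the device list comes back empty, the problem is the CUDA/driver setup or the installed wheel, not the Estimator code.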
Hi @nmallinar, can you share your work on where we need to change from TPUEstimator to Estimator? I have the same issue as you. Thanks.
Hello @dhuy237, my implementation is based on a slightly outdated version of the Tapas codebase, and unfortunately I got too busy at the time to submit the code. I plan to resolve those diffs and host a fork or submit a PR accordingly. If there is anything specific you are having trouble with on your end, I may be able to help you debug, as I ran into many errors along the way and might be able to help you avoid some of the same mistakes.
@nmallinar I still have this error when trying to train the model with a single GPU:
TypeError: _call_input_fn() takes 3 positional arguments but 4 were given
And I don't know where to start fixing this bug.
@sarahpanda I hope your solution can help me.
@dhuy237 so these are the non-TPU version of objects in run_task_main.py that I use:
run_config = tf.estimator.RunConfig(...)
estimator = tf.estimator.Estimator(...,
model_fn=model_fn,
config=run_config)
and in tapas_classifier_model.py:
output_spec = tf.estimator.EstimatorSpec(...)
I could not get it to work with the tf.*.tpu classes; I think I ran into this error whenever I still had the TPU version of one of these in place.
@nmallinar I didn't notice that this repo is tapas. I am trying to run this repo: https://github.com/zihangdai/xlnet. I will check your solution with my project. Thanks for your help.
After changing my code as @nmallinar suggested:
output_spec = tf.estimator.EstimatorSpec(
mode=mode,
loss=loss,
train_op=train_op,
scaffold=scaffold_fn)
run_config = tf.estimator.RunConfig(FLAGS)
estimator = tf.estimator.Estimator(
model_fn=model_fn,
config=run_config,
params={'batch_size': 8})
I don't get this error anymore:
TypeError: _call_input_fn() takes 3 positional arguments but 4 were given
But I got this error:
Traceback (most recent call last):
File "run_coqa.py", line 1775, in <module>
tf.app.run()
File "/home/huytran/miniconda3/envs/TF/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "/home/huytran/miniconda3/envs/TF/lib/python3.7/site-packages/absl/app.py", line 299, in run
_run_main(main, args)
File "/home/huytran/miniconda3/envs/TF/lib/python3.7/site-packages/absl/app.py", line 250, in _run_main
sys.exit(main(argv))
File "run_coqa.py", line 1714, in main
estimator.train(input_fn=train_input_fn, max_steps=2000) # max_steps=FLAGS.train_steps)
File "/home/huytran/miniconda3/envs/TF/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 367, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/home/huytran/miniconda3/envs/TF/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1158, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/home/huytran/miniconda3/envs/TF/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1192, in _train_model_default
saving_listeners)
File "/home/huytran/miniconda3/envs/TF/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1420, in _train_with_estimator_spec
scaffold=estimator_spec.scaffold)
File "/home/huytran/miniconda3/envs/TF/lib/python3.7/site-packages/tensorflow/python/training/basic_session_run_hooks.py", line 546, in __init__
self._save_path = os.path.join(checkpoint_dir, checkpoint_basename)
File "/home/huytran/miniconda3/envs/TF/lib/python3.7/posixpath.py", line 80, in join
a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not FlagValues
Do you guys know why I got this?
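The traceback above suggests the whole FLAGS object is being passed where RunConfig expects a model_dir string, since RunConfig's first positional parameter is model_dir and os.path.join then fails on the non-path object. A minimal stand-alone reproduction, using a fake flags class (the class and attribute names are illustrative; absl's real FlagValues is not needed to show the failure):

```python
import os


class FakeFlags:
    """Stand-in for absl.flags.FLAGS; the attribute name is illustrative."""
    output_dir = "/tmp/model"


flags = FakeFlags()

# Passing the flags object itself where a path string is expected
# reproduces the TypeError from the traceback above:
try:
    os.path.join(flags, "model.ckpt")
except TypeError as err:
    print(type(err).__name__)          # TypeError

# The likely fix: pass the string attribute, e.g.
# tf.estimator.RunConfig(model_dir=FLAGS.output_dir), not RunConfig(FLAGS).
print(os.path.join(flags.output_dir, "model.ckpt"))   # /tmp/model/model.ckpt
```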
Hello @dhuy237, I am a bit confused by your stack trace since I don't recognize the file paths, especially the run_coqa.py one. Can you confirm you are running the correct binary?
@eisenjulian I am trying to run this repo: https://github.com/stevezheng23/xlnet_extension_tf. So it is different from this repo, but I got the same error as @nmallinar had before. After I changed my code as @nmallinar suggested, I got the TypeError. I posted the error here just hoping someone can help me solve it. If it is not appropriate for this post, I can delete my comment. Thanks.