Comments (12)
I found that if I set CUDA_VISIBLE_DEVICES=0, i.e. use only one GPU, the code works.
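For anyone who prefers to apply the same single-GPU workaround from inside the script rather than on the command line, here is a minimal sketch (pure stdlib). Note the environment variable must be set before torch initializes CUDA:

```python
import os

# Equivalent of prefixing the command with `CUDA_VISIBLE_DEVICES=0`.
# Must run before `import torch` triggers CUDA initialization, so put
# it at the very top of the training script.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
print(os.environ["CUDA_VISIBLE_DEVICES"])  # 0
```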
The error arises from trainer.py:
# Our model outputs do not work with DataParallel, so forcing return tuple.
if isinstance(model, nn.DataParallel):
    inputs["return_tuple"] = True
These lines cause the error when the model forwards the data:
result = self.forward(*input, **kwargs)
from question_generation.
When the device is the CPU, the assertion error disappears.
Hi @judywxy, what is your transformers version?
It runs fine with version 3.0.0. Try pip install -U transformers==3.0.0
@patil-suraj Oh, Thanks a lot.
I have the following installed:
tokenizers 0.8.1rc1
torch 1.6.0+cu101
torchfile 0.1.0
torchvision 0.7.0+cu101
tornado 6.0.3
tqdm 4.32.1
traitlets 4.3.2
transformers 3.0.2
typing 3.6.4
urllib3 1.24.2
visdom 0.1.8.8
wandb 0.9.5
So, you mean downgrade transformers from 3.0.2 to 3.0.0?
Yes, I haven't tried it with 3.0.2 yet.
@patil-suraj Thanks for prompt reply. I will try with 3.0.0.
By the way, I trained a t5-small single-task QG model with transformers' Trainer:
08/24/2020 06:24:24 - INFO - qgtrain - ***** Eval results *****
08/24/2020 06:24:24 - INFO - qgtrain - epoch = 9.999269539810081
08/24/2020 06:24:24 - INFO - qgtrain - eval_loss = 1.6273178581207517
It looks good, as the following three metrics show:
BLEU_4 | METEOR | ROUGE_L
0.189037 | 0.252798 | 0.406141
Slightly better than the published counterpart model.
So I want to train a multi-task model like t5-multi, and want to change the following config.
Besides changing train_file_path, valid_file_path, and output_dir, should I also change model_name_or_path from t5-small to t5-base? What about tokenizer_name_or_path?
args = {
"model_name_or_path": "t5-small",
"model_type": "t5",
"tokenizer_name_or_path": "t5_qg_tokenizer",
"output_dir": "../QG_models03/t5-small-qg-hl",
"train_file_path": "../QG_data/train_data_qg_highlight_qg_format_t5.pt",
"valid_file_path": "../QG_data/valid_data_qg_highlight_qg_format_t5.pt",
"qg_format": "highlight_qg_format",
"per_device_train_batch_size": 32,
"per_device_eval_batch_size": 24,
"gradient_accumulation_steps": 8,
"learning_rate": 1e-4,
"num_train_epochs": 12,
"no_cuda": True,
"seed": 1, # Default 42
"do_train": True,
"do_eval": True,
"evaluate_during_training": True,
"logging_steps" :100
}
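One thing worth double-checking in a config like the one above: with gradient accumulation, the effective train batch size is the product of the per-device batch size, the accumulation steps, and the number of devices. A quick sketch of the arithmetic (variable names are mine, not from the training script):

```python
# Effective train batch size implied by the config above.
per_device_train_batch_size = 32
gradient_accumulation_steps = 8
num_devices = 1  # with "no_cuda": True this runs in a single CPU process

effective_batch_size = (per_device_train_batch_size
                        * gradient_accumulation_steps
                        * num_devices)
print(effective_batch_size)  # 256
```

Keeping this product constant is what lets one compare runs across different GPU counts or accumulation settings.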
@patil-suraj
After changing transformers from 3.0.2 to 3.0.0 and setting "no_cuda": False, the assertion errors appear again!
tokenizers 0.8.0rc4
torch 1.6.0+cu101
torchfile 0.1.0
torchvision 0.7.0+cu101
tornado 6.0.3
tqdm 4.32.1
traitlets 4.3.2
transformers 3.0.0
typing 3.6.4
urllib3 1.24.2
visdom 0.1.8.8
wandb 0.9.5
The assertion error is related to the following code in the Trainer class in trainer.py:
# Our model outputs do not work with DataParallel, so forcing return tuple.
if isinstance(model, nn.DataParallel):
inputs["return_tuple"] = True
Hey @judywxy ,
could you check again whether your version is correct? That change was added to Trainer after 3.0.0.
You can see the Trainer at v3.0.0 here.
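Since the thread establishes that the return_tuple workaround only exists in releases after 3.0.0, a stdlib-only check (helper names are mine, hypothetical) can confirm which side of that change an installed version string falls on:

```python
def parse_version(version):
    # "3.0.2" -> (3, 0, 2); strips non-digit suffixes for simplicity
    parts = []
    for piece in version.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

def has_return_tuple_change(version):
    # The Trainer code forcing inputs["return_tuple"] = True under
    # nn.DataParallel landed after v3.0.0, so only newer releases hit it.
    return parse_version(version) > (3, 0, 0)

print(has_return_tuple_change("3.0.0"))  # False
print(has_return_tuple_change("3.0.2"))  # True
```

In practice, also verify the version the interpreter actually imports (transformers.__version__), since pip's listing and the active environment can disagree.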
@patil-suraj
here is what is installed; transformers is at version 3.0.0:
tokenizers 0.8.0rc4
torch 1.6.0+cu101
torchfile 0.1.0
torchvision 0.7.0+cu101
tornado 6.0.3
tqdm 4.32.1
traitlets 4.3.2
transformers 3.0.0
typing 3.6.4
urllib3 1.24.2
visdom 0.1.8.8
wandb 0.9.5
Is this issue resolved? I am still facing issues while training.
I've got the same problem recently. My workaround was using only one GPU.
CUDA_VISIBLE_DEVICES=0 python run_qg.py \
--model_name_or_path t5-small \
--model_type t5 \
--tokenizer_name_or_path t5_qg_tokenizer \
--output_dir t5-small-qg-hl \
--train_file_path data/train_data_qg_hl_t5.pt \
--valid_file_path data/valid_data_qg_hl_t5.pt \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 32 \
--gradient_accumulation_steps 8 \
--learning_rate 1e-4 \
--num_train_epochs 10 \
--seed 42 \
--do_train \
--do_eval \
--evaluate_during_training \
--logging_steps 100