11/29/2021 06:01:22 - INFO - pytorch_transformers.modeling_utils - loading weights file checkpoints/1//bert-base-cased/pytorch_model.bin
11/29/2021 06:04:45 - INFO - pytorch_transformers.modeling_utils - Weights of SyntaxBertForTokenClassification not initialized from pretrained model: ['transitions', 'bert.syntax_encoder.emb.weight', 'bert.syntax_encoder.embed_pos.weight', 'bert.syntax_encoder.syntax_encoder.layers.0.syntax_attention.query.weight', 'bert.syntax_encoder.syntax_encoder.layers.0.syntax_attention.query.bias', 'bert.syntax_encoder.syntax_encoder.layers.0.syntax_attention.key.weight', 'bert.syntax_encoder.syntax_encoder.layers.0.syntax_attention.key.bias', 'bert.syntax_encoder.syntax_encoder.layers.0.syntax_attention.value.weight', 'bert.syntax_encoder.syntax_encoder.layers.0.syntax_attention.value.bias', 'bert.syntax_encoder.syntax_encoder.layers.0.finishing_linear_layer.weight', 'bert.syntax_encoder.syntax_encoder.layers.0.finishing_linear_layer.bias', 'bert.syntax_encoder.syntax_encoder.layers.0.ln_2.weight', 'bert.syntax_encoder.syntax_encoder.layers.0.ln_2.bias', 'bert.syntax_encoder.syntax_encoder.layers.0.feed_forward.W_1.weight', 'bert.syntax_encoder.syntax_encoder.layers.0.feed_forward.W_1.bias', 'bert.syntax_encoder.syntax_encoder.layers.0.feed_forward.W_2.weight', 'bert.syntax_encoder.syntax_encoder.layers.0.feed_forward.W_2.bias', 'bert.syntax_encoder.syntax_encoder.layers.0.ln_3.weight', 'bert.syntax_encoder.syntax_encoder.layers.0.ln_3.bias', 'bert.syntax_encoder.syntax_encoder.layers.1.syntax_attention.query.weight', 'bert.syntax_encoder.syntax_encoder.layers.1.syntax_attention.query.bias', 'bert.syntax_encoder.syntax_encoder.layers.1.syntax_attention.key.weight', 'bert.syntax_encoder.syntax_encoder.layers.1.syntax_attention.key.bias', 'bert.syntax_encoder.syntax_encoder.layers.1.syntax_attention.value.weight', 'bert.syntax_encoder.syntax_encoder.layers.1.syntax_attention.value.bias', 'bert.syntax_encoder.syntax_encoder.layers.1.finishing_linear_layer.weight', 'bert.syntax_encoder.syntax_encoder.layers.1.finishing_linear_layer.bias', 
'bert.syntax_encoder.syntax_encoder.layers.1.ln_2.weight', 'bert.syntax_encoder.syntax_encoder.layers.1.ln_2.bias', 'bert.syntax_encoder.syntax_encoder.layers.1.feed_forward.W_1.weight', 'bert.syntax_encoder.syntax_encoder.layers.1.feed_forward.W_1.bias', 'bert.syntax_encoder.syntax_encoder.layers.1.feed_forward.W_2.weight', 'bert.syntax_encoder.syntax_encoder.layers.1.feed_forward.W_2.bias', 'bert.syntax_encoder.syntax_encoder.layers.1.ln_3.weight', 'bert.syntax_encoder.syntax_encoder.layers.1.ln_3.bias', 'bert.syntax_encoder.syntax_encoder.layers.2.syntax_attention.query.weight', 'bert.syntax_encoder.syntax_encoder.layers.2.syntax_attention.query.bias', 'bert.syntax_encoder.syntax_encoder.layers.2.syntax_attention.key.weight', 'bert.syntax_encoder.syntax_encoder.layers.2.syntax_attention.key.bias', 'bert.syntax_encoder.syntax_encoder.layers.2.syntax_attention.value.weight', 'bert.syntax_encoder.syntax_encoder.layers.2.syntax_attention.value.bias', 'bert.syntax_encoder.syntax_encoder.layers.2.finishing_linear_layer.weight', 'bert.syntax_encoder.syntax_encoder.layers.2.finishing_linear_layer.bias', 'bert.syntax_encoder.syntax_encoder.layers.2.ln_2.weight', 'bert.syntax_encoder.syntax_encoder.layers.2.ln_2.bias', 'bert.syntax_encoder.syntax_encoder.layers.2.feed_forward.W_1.weight', 'bert.syntax_encoder.syntax_encoder.layers.2.feed_forward.W_1.bias', 'bert.syntax_encoder.syntax_encoder.layers.2.feed_forward.W_2.weight', 'bert.syntax_encoder.syntax_encoder.layers.2.feed_forward.W_2.bias', 'bert.syntax_encoder.syntax_encoder.layers.2.ln_3.weight', 'bert.syntax_encoder.syntax_encoder.layers.2.ln_3.bias', 'bert.syntax_encoder.syntax_encoder.layers.3.syntax_attention.query.weight', 'bert.syntax_encoder.syntax_encoder.layers.3.syntax_attention.query.bias', 'bert.syntax_encoder.syntax_encoder.layers.3.syntax_attention.key.weight', 'bert.syntax_encoder.syntax_encoder.layers.3.syntax_attention.key.bias', 
'bert.syntax_encoder.syntax_encoder.layers.3.syntax_attention.value.weight', 'bert.syntax_encoder.syntax_encoder.layers.3.syntax_attention.value.bias', 'bert.syntax_encoder.syntax_encoder.layers.3.finishing_linear_layer.weight', 'bert.syntax_encoder.syntax_encoder.layers.3.finishing_linear_layer.bias', 'bert.syntax_encoder.syntax_encoder.layers.3.ln_2.weight', 'bert.syntax_encoder.syntax_encoder.layers.3.ln_2.bias', 'bert.syntax_encoder.syntax_encoder.layers.3.feed_forward.W_1.weight', 'bert.syntax_encoder.syntax_encoder.layers.3.feed_forward.W_1.bias', 'bert.syntax_encoder.syntax_encoder.layers.3.feed_forward.W_2.weight', 'bert.syntax_encoder.syntax_encoder.layers.3.feed_forward.W_2.bias', 'bert.syntax_encoder.syntax_encoder.layers.3.ln_3.weight', 'bert.syntax_encoder.syntax_encoder.layers.3.ln_3.bias', 'bert.syntax_encoder.syntax_encoder.ln.weight', 'bert.syntax_encoder.syntax_encoder.ln.bias', 'bert.syntax_encoder.out_mlp.0.weight', 'bert.syntax_encoder.out_mlp.0.bias', 'bert.pooler.out_mlp.0.weight', 'bert.pooler.out_mlp.0.bias', 'classifier.weight', 'classifier.bias', 'loss_func.model.transitions', 'loss_func.model.transition_mask', 'crf.transitions', 'crf.transition_mask']
11/29/2021 06:04:45 - INFO - pytorch_transformers.modeling_utils - Weights from pretrained model not used in SyntaxBertForTokenClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'bert.pooler.dense.weight', 'bert.pooler.dense.bias']
11/29/2021 06:04:45 - INFO - main - Training/evaluation parameters Namespace(add_masked_ne_tokens=False, cache_dir='', config_name='', config_name_or_path='config/srl/bert-base/joint_fusion.json', data_dir='checkpoints/1//conll2005_srl_udv2', device=device(type='cpu'), do_eval=True, do_lower_case=False, do_train=False, evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, local_rank=-1, logging_steps=50, max_grad_norm=1.0, max_seq_length=512, max_steps=-1, model_name_or_path='checkpoints/1//bert-base-cased', model_type='syntax_bert_tok', n_gpu=0, no_cuda=False, no_pretrained=False, num_labels=107, num_train_epochs=20.0, output_dir='/conll2005_srl_udv2/', output_mode='token_classification', overwrite_cache=False, overwrite_output_dir=True, per_gpu_eval_batch_size=32, per_gpu_train_batch_size=16, sample_rate=1.0, save_steps=1000, seed=42, task_name='conll2005brown_srl', tokenizer_name='', update_config_str=None, use_swa=False, wordpiece_aligned_dep_graph=True, write_eval_results=False)
11/29/2021 06:04:45 - ERROR - pytorch_transformers.modeling_utils - Model name '/conll2005_srl_udv2/' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc). We assumed '/conll2005_srl_udv2/' was a path or url but couldn't find any file associated to this path or url.
Traceback (most recent call last):
  File "main.py", line 426, in <module>
    main()
  File "main.py", line 403, in main
    model = model_class.from_pretrained(checkpoint)
  File "/usr/local/lib/python3.7/dist-packages/pytorch_transformers/modeling_utils.py", line 492, in from_pretrained
    **kwargs
  File "/usr/local/lib/python3.7/dist-packages/pytorch_transformers/modeling_utils.py", line 194, in from_pretrained
    raise e
  File "/usr/local/lib/python3.7/dist-packages/pytorch_transformers/modeling_utils.py", line 180, in from_pretrained
    resolved_config_file = cached_path(config_file, cache_dir=cache_dir, force_download=force_download, proxies=proxies)
  File "/usr/local/lib/python3.7/dist-packages/pytorch_transformers/file_utils.py", line 124, in cached_path
    raise EnvironmentError("file {} not found".format(url_or_filename))
OSError: file /conll2005_srl_udv2/ not found
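Likely cause (a guess from the log, not confirmed): the Namespace shows output_dir='/conll2005_srl_udv2/', an absolute path at the filesystem root, and from_pretrained is being called on that directory at eval time. In pytorch_transformers, from_pretrained treats a local path as a checkpoint directory and looks for config.json (and pytorch_model.bin) inside it; since '/conll2005_srl_udv2/' does not exist, cached_path raises the OSError above. A minimal sketch to verify this before calling from_pretrained (the helper name find_missing_checkpoint_files is hypothetical, not part of the library):

```python
import os

# Hypothetical pre-flight check: from_pretrained(path) on a local directory
# expects it to contain config.json and pytorch_model.bin. The log suggests
# output_dir should probably have been a path under checkpoints/, e.g.
# 'checkpoints/1/conll2005_srl_udv2/', rather than '/conll2005_srl_udv2/'.
def find_missing_checkpoint_files(checkpoint_dir):
    """Return the checkpoint files from_pretrained would fail to find."""
    required = ["config.json", "pytorch_model.bin"]
    return [name for name in required
            if not os.path.isfile(os.path.join(checkpoint_dir, name))]

# With the directory from the log, both required files are reported missing,
# matching the OSError in the traceback.
print(find_missing_checkpoint_files("/conll2005_srl_udv2/"))
```

If the check reports missing files, point --output_dir at the directory where training actually saved the checkpoint (the one containing config.json and pytorch_model.bin) before re-running with --do_eval.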