bert-train2deploy's Issues

TensorFlow-gpu raises an error, but TensorFlow-cpu does not

InternalError (see above for traceback): Blas GEMM launch failed : a.shape=(1024, 2), b.shape=(2, 768), m=1024, n=768, k=2
[[node bert/embeddings/MatMul (defined at D:\PycharmProjects\GitHubProjects\BERT-train2deploy-master\BERT-train2deploy-master\modeling.py:486) ]]
[[node mean/broadcast_weights/assert_broadcastable/is_valid_shape/has_valid_nonscalar_shape/has_invalid_dims/concat (defined at D:/PycharmProjects/GitHubProjects/BERT-train2deploy-master/BERT-train2deploy-master/run_mobile.py:756) ]]
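
In many reports this "Blas GEMM launch failed" error comes from the GPU memory already being exhausted or held by another process (a CUDA/cuDNN version mismatch can also surface this way), rather than from the model itself, which is why the CPU build works. A minimal sketch, assuming TensorFlow 1.x as used by this project, that lets TensorFlow grow GPU memory on demand; with the Estimator in run_mobile.py the same ConfigProto can be passed through tf.estimator.RunConfig(session_config=...):

# Sketch only: allow_growth is a common workaround when "Blas GEMM launch failed"
# is caused by GPU memory already being fully allocated.
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory on demand

run_config = tf.estimator.RunConfig(session_config=config)
# pass run_config when constructing the Estimator used for training/evaluation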

The server accepts the data but never returns a result

  • Serving Flask app "bert_base.server.http" (lazy loading)
  • Environment: production
    WARNING: This is a development server. Do not use it in a production deployment.
    Use a production WSGI server instead.
  • Debug mode: off
  • Running on http://0.0.0.0:8091/ (Press CTRL+C to quit)
    I:WORKER-0:[__i:gen:537]:ready and listening!
    I:PROXY:[htt:enc: 47]:new request from 172.17.0.1
    {'id': 111, 'texts': ['总的来说,这款手机性价比是特别高的。', '槽糕的售后服务!!!店大欺客'], 'is_tokenized': False}
    I:VENTILATOR:[__i:_ru:215]:new encode request req id: 1 size: 2 client: b'ddbf9d19-b839-4ba6-96c7-330586777d17'
    I:SINK:[__i:_ru:369]:job register size: 2 job id: b'ddbf9d19-b839-4ba6-96c7-330586777d17#1'
    I:WORKER-0:[__i:gen:545]:new job

The evaluation code is buggy

Using the provided evaluation data, the reported results look wrong.

eval_accuracy = 0.86040765
eval_f1 = 0.9527646
eval_loss = 0.5360181
eval_precision = 0.9510234
eval_recall = 0.95451

With precision and recall both around 0.95, accuracy should in theory also be around 0.95.
The author's evaluation code is also broken for the multi-class case.
In addition, because of how TensorFlow's tf.metrics is implemented, the computation also goes wrong when the evaluation set is large.
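
One way to cross-check the numbers is to recompute the metrics offline from saved predictions instead of relying on tf.metrics, whose streaming counters are exactly where the large-dataset problems mentioned above come from. A sketch, assuming the gold labels and predictions have been dumped to plain-text files (the file names here are hypothetical) and that scikit-learn is available (it is not a project dependency):

# Sketch: recompute accuracy / precision / recall / F1 offline, independent of tf.metrics.
# eval_labels.txt / eval_preds.txt are hypothetical dumps with one integer label per line.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = np.loadtxt("eval_labels.txt", dtype=int)
y_pred = np.loadtxt("eval_preds.txt", dtype=int)

print("accuracy =", accuracy_score(y_true, y_pred))
# average="macro" also covers the multi-class case mentioned above
p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="macro")
print("precision =", p, "recall =", r, "f1 =", f1)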

No response from the server after it receives a request

Server side:

 * Serving Flask app 'bert_base.server.http' (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on all addresses.
   WARNING: This is a development server. Do not use it in a production deployment.
 * Running on http://127.0.0.1:8091/ (Press CTRL+C to quit)
Process BertWorker-3:
Traceback (most recent call last):
  File "/home/long/anaconda3/envs/py36/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/home/long/anaconda3/envs/py36/lib/python3.6/site-packages/bert_base-0.0.9-py3.6.egg/bert_base/server/__init__.py", line 490, in run
    self._run()
  File "/home/long/anaconda3/envs/py36/lib/python3.6/site-packages/pyzmq-22.3.0-py3.6-linux-x86_64.egg/zmq/decorators.py", line 76, in wrapper
    return func(*args, **kwargs)
  File "/home/long/anaconda3/envs/py36/lib/python3.6/site-packages/bert_base-0.0.9-py3.6.egg/bert_base/server/zmq_decor.py", line 27, in wrapper
    return func(*args, **kwargs)
  File "/home/long/anaconda3/envs/py36/lib/python3.6/site-packages/bert_base-0.0.9-py3.6.egg/bert_base/server/__init__.py", line 508, in _run
    for r in estimator.predict(input_fn=self.input_fn_builder(receivers, tf), yield_single_examples=False):
  File "/home/long/anaconda3/envs/py36/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 622, in predict
    features, None, ModeKeys.PREDICT, self.config)
  File "/home/long/anaconda3/envs/py36/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1149, in _call_model_fn
    model_fn_results = self._model_fn(features=features, **kwargs)
  File "/home/long/anaconda3/envs/py36/lib/python3.6/site-packages/bert_base-0.0.9-py3.6.egg/bert_base/server/__init__.py", line 466, in classification_model_fn
    pred_probs = tf.import_graph_def(graph_def, name='', input_map=input_map, return_elements=['pred_prob:0'])
  File "/home/long/anaconda3/envs/py36/lib/python3.6/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/home/long/anaconda3/envs/py36/lib/python3.6/site-packages/tensorflow_core/python/framework/importer.py", line 405, in import_graph_def
    producer_op_list=producer_op_list)
  File "/home/long/anaconda3/envs/py36/lib/python3.6/site-packages/tensorflow_core/python/framework/importer.py", line 535, in _import_graph_def_internal
    ', '.join(missing_unused_input_keys))
ValueError: Attempted to map inputs that were not found in graph_def: [segment_ids:0]
I:PROXY:[htt:enc: 47]:new request from 127.0.0.1
{'id': 111, 'texts': ['总的来说,这款手机性价比是特别高的。', '槽糕的售后服务!!!店大欺客'], 'is_tokenized': False}
I:VENTILATOR:[__i:_ru:215]:new encode request	req id: 1	size: 2	client: b'a70d9fe9-5fa6-487f-9e0d-6063053bd11b'
I:SINK:[__i:_ru:369]:job register	size: 2	job id: b'a70d9fe9-5fa6-487f-9e0d-6063053bd11b#1'

Client side:
curl -X POST http://127.0.0.1:8091/encode -H 'content-type: application/json' -d '{"id": 111,"texts": ["总的来说,这款手机性价比是特别高的。","槽糕的售后服务!!!店大欺客"], "is_tokenized": false}'
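
The ValueError above means the frozen classification_model.pb contains no placeholder named segment_ids, so the server cannot wire up its input_map (the traceback shows it tries to map at least segment_ids:0). A small sketch, assuming TensorFlow 1.x, to list the placeholders actually present in the .pb and compare them with what the server expects; adjust the path to your own file:

# Sketch: print the input placeholders of the frozen graph.
import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile("output/mobile_0/classification_model.pb", "rb") as f:  # adjust path
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    if node.op == "Placeholder":
        print(node.name)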

ModuleNotFoundError: No module named 'optimization'

Train the model with the following commands; adjust the directory arguments to your own setup:

cd /mnt/sda1/transdat/bert-demo/bert/
export BERT_BASE_DIR=/mnt/sda1/transdat/bert-demo/bert/chinese_L-12_H-768_A-12
export GLUE_DIR=/mnt/sda1/transdat/bert-demo/bert/data
export TRAINED_CLASSIFIER=/mnt/sda1/transdat/bert-demo/bert/output
export EXP_NAME=mobile_0

sudo python run_mobile.py \
--task_name=setiment \
--do_train=true \
--do_eval=true \
--data_dir=$GLUE_DIR/$EXP_NAME \
--vocab_file=$BERT_BASE_DIR/vocab.txt \
--bert_config_file=$BERT_BASE_DIR/bert_config.json \
--init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt \
--max_seq_length=128 \
--train_batch_size=32 \
--learning_rate=2e-5 \
--num_train_epochs=5.0 \
--output_dir=$TRAINED_CLASSIFIER/$EXP_NAME

Based on this, the command I run locally on Windows 10 is:
python run_mobile.py --task_name=setiment --do_train=true --do_eval=true --data_dir=C:/Workspace/mnt/sda1/transdat/bert-demo/bert/data/mobile_0 --vocab_file=C:/Workspace/mnt/sda1/transdat/bert-demo/bert/chinese_L-12_H-768_A-12/vocab.txt --bert_config_file=C:/Workspace/mnt/sda1/transdat/bert-demo/bert/chinese_L-12_H-768_A-12/bert_config.json --init_checkpoint=C:/Workspace/mnt/sda1/transdat/bert-demo/bert/chinese_L-12_H-768_A-12/bert_model.ckpt --max_seq_length=80 --train_batch_size=16 --learning_rate=2e-5 --num_train_epochs=5.0 --output_dir=C:/Workspace/mnt/sda1/transdat/bert-demo/bert/output/mobile_0

My working directory is as follows:
[screenshot: rundirectory]

The error:
Traceback (most recent call last):
  File "run_mobile.py", line 25, in <module>
    import optimization
ModuleNotFoundError: No module named 'optimization'

Do I need to install some extra module? Thanks.
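
optimization.py is not a pip package; like modeling.py it is a file that ships with the BERT / BERT-train2deploy source, so the import only works when Python can find that directory. Running python run_mobile.py from the repository root (where optimization.py sits next to run_mobile.py) is usually enough; alternatively the checkout can be put on sys.path explicitly, as in this sketch (the path below is a hypothetical example, not your actual location):

# Sketch: make the directory containing modeling.py / optimization.py importable.
import sys

REPO_ROOT = r"C:\Workspace\BERT-train2deploy-master"  # hypothetical checkout path
sys.path.insert(0, REPO_ROOT)

import modeling       # noqa: E402  (imports after the sys.path tweak)
import optimization   # noqa: E402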

Error when starting the NER service after converting my own trained model to a .pb file

Converting to the .pb file completes successfully, but starting the service fails.
Here is my startup command:

bert-base-serving-start -model_dir $TRAINED_CLASSIFIER/$EXP_NAME -bert_model_dir $BERT_BASE_DIR -model_pb_dir $TRAINED_CLASSIFIER/$EXP_NAME -mode NER -max_seq_len 128 -http_port 8091 -port 5575 -port_out 5576 -device_map 1

.pb file name: classification_model.pb
The error output is as follows:

E:NER_MODEL, Lodding...:[gra:opt:306]:fail to optimize the graph! float division by zero
Traceback (most recent call last):
  File "/root/anaconda3/lib/python3.7/site-packages/bert_base/server/graph.py", line 289, in optimize_ner_model
    labels=None, num_labels=num_labels, use_one_hot_embeddings=False, dropout_rate=1.0)
  File "/root/anaconda3/lib/python3.7/site-packages/bert_base/train/models.py", line 101, in create_model
    rst = blstm_crf.add_blstm_crf_layer(crf_only=True)
  File "/root/anaconda3/lib/python3.7/site-packages/bert_base/train/lstm_crf_layer.py", line 60, in add_blstm_crf_layer
    loss, trans = self.crf_layer(logits)
  File "/root/anaconda3/lib/python3.7/site-packages/bert_base/train/lstm_crf_layer.py", line 160, in crf_layer
    initializer=self.initializers.xavier_initializer())
  File "/root/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/variable_scope.py", line 1496, in get_variable
    aggregation=aggregation)
  File "/root/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/variable_scope.py", line 1239, in get_variable
    aggregation=aggregation)
  File "/root/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/variable_scope.py", line 562, in get_variable
    aggregation=aggregation)
  File "/root/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/variable_scope.py", line 514, in _true_getter
    aggregation=aggregation)
  File "/root/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/variable_scope.py", line 929, in _get_single_variable
    aggregation=aggregation)
  File "/root/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/variables.py", line 259, in __call__
    return cls._variable_v1_call(*args, **kwargs)
  File "/root/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/variables.py", line 220, in _variable_v1_call
    shape=shape)
  File "/root/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/variables.py", line 198, in <lambda>
    previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
  File "/root/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/variable_scope.py", line 2511, in default_variable_creator
    shape=shape)
  File "/root/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/variables.py", line 263, in __call__
    return super(VariableMetaclass, cls).__call__(*args, **kwargs)
  File "/root/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/variables.py", line 1568, in __init__
    shape=shape)
  File "/root/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/variables.py", line 1698, in _init_from_args
    initial_value(), name="initial_value", dtype=dtype)
  File "/root/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/variable_scope.py", line 901, in <lambda>
    partition_info=partition_info)
  File "/root/anaconda3/lib/python3.7/site-packages/tensorflow/contrib/layers/python/layers/initializers.py", line 143, in _initializer
    limit = math.sqrt(3.0 * factor / n)
ZeroDivisionError: float division by zero
Traceback (most recent call last):
  File "/root/anaconda3/bin/bert-base-serving-start", line 10, in <module>
    sys.exit(start_server())
  File "/root/anaconda3/lib/python3.7/site-packages/bert_base/runs/__init__.py", line 17, in start_server
    server = BertServer(args)
  File "/root/anaconda3/lib/python3.7/site-packages/bert_base/server/__init__.py", line 102, in __init__
    raise FileNotFoundError('graph optimization fails and returns empty result')
FileNotFoundError: graph optimization fails and returns empty result
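
The ZeroDivisionError comes out of the Xavier initializer while the CRF transition matrix is being created, which typically means the NER graph is being rebuilt with num_labels equal to 0; that in turn suggests the label2id.pkl next to the model (or the .pb itself, which is named classification_model.pb rather than an NER graph) does not match what -mode NER expects. A quick sketch to inspect the label set the server will see; adjust the directory to the -model_dir passed to bert-base-serving-start:

# Sketch: check how many labels are stored next to the model before starting the server.
import os
import pickle

model_dir = "/path/to/model_dir"  # hypothetical; use the -model_dir from the start script
with open(os.path.join(model_dir, "label2id.pkl"), "rb") as rf:
    label2id = pickle.load(rf)

print("num_labels =", len(label2id))
print(label2id)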

A coding error

old:

def init_predict_var(path):
    label2id_file = os.path.join(path, 'label2id.pkl')
    if os.path.exists(label2id_file):
        with open(label2id_file, 'rb') as rf:
            label2id = pickle.load(rf)
            id2label = {value: key for key, value in label2id.items()}
            num_labels = len(label2id.items())
    return num_labels, label2id, id2label

new:

def init_predict_var(path):
    # default to None so a 3-tuple is still returned when label2id.pkl is missing
    num_labels, label2id, id2label = [None] * 3
    label2id_file = os.path.join(path, 'label2id.pkl')
    if os.path.exists(label2id_file):
        with open(label2id_file, 'rb') as rf:
            label2id = pickle.load(rf)
            id2label = {value: key for key, value in label2id.items()}
            num_labels = len(label2id.items())
    return num_labels, label2id, id2label

Then you also need to import pickle:
import pickle

How do I deploy on CPU?

Server deployment and startup:
cd /mnt/sda1/transdat/bert-demo/bert/bert_svr

export BERT_BASE_DIR=/mnt/sda1/transdat/bert-demo/bert/chinese_L-12_H-768_A-12
export TRAINED_CLASSIFIER=/mnt/sda1/transdat/bert-demo/bert/output
export EXP_NAME=mobile_0
export CUDA_VISIBLE_DEVICES=-1
bert-base-serving-start \
-model_dir $TRAINED_CLASSIFIER/$EXP_NAME \
-bert_model_dir $BERT_BASE_DIR \
-model_pb_dir $TRAINED_CLASSIFIER/$EXP_NAME \
-mode CLASS \
-max_seq_len 128 \
-http_port 8091 \
-port 5575 \
-port_out 5576
(with -device_map 1 commented out)

It still ends up using GPU 1.
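
CUDA_VISIBLE_DEVICES only hides the GPUs if it is present in the environment of the server process before TensorFlow initializes CUDA, i.e. exported in the same shell that launches bert-base-serving-start (or set inside the process before TensorFlow is imported). A quick sketch to confirm what the TensorFlow in that environment actually sees:

# Sketch: verify that CUDA_VISIBLE_DEVICES=-1 really hides the GPUs.
# Run this with the same Python environment the server uses.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"  # must happen before TensorFlow touches CUDA

from tensorflow.python.client import device_lib
print([d.name for d in device_lib.list_local_devices()])  # expect only a CPU device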

ImportError: No module named six

(tf) yang@yang-Precision-Tower-7810:~/桌面/BERT-train2deploy-master$ cd /home/yang/桌面/BERT-train2deploy-master
(tf) yang@yang-Precision-Tower-7810:~/桌面/BERT-train2deploy-master$ export BERT_BASE_DIR=/home/yang/桌面/BERT-train2deploy-master/chinese_L-12_H-768_A-12
(tf) yang@yang-Precision-Tower-7810:~/桌面/BERT-train2deploy-master$ export GLUE_DIR=/home/yang/桌面/BERT-train2deploy-master/data
(tf) yang@yang-Precision-Tower-7810:~/桌面/BERT-train2deploy-master$ export TRAINED_CLASSIFIER=/home/yang/桌面/BERT-train2deploy-master/output
(tf) yang@yang-Precision-Tower-7810:~/桌面/BERT-train2deploy-master$ export EXP_NAME=mobile_0
(tf) yang@yang-Precision-Tower-7810:~/桌面/BERT-train2deploy-master$ sudo python run_mobile.py \
--task_name=setiment \
--do_train=true \
--do_eval=true \
--data_dir=$GLUE_DIR/$EXP_NAME \
--vocab_file=$BERT_BASE_DIR/vocab.txt \
--bert_config_file=$BERT_BASE_DIR/bert_config.json \
--init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt \
--max_seq_length=128 \
--train_batch_size=32 \
--learning_rate=2e-5 \
--num_train_epochs=5.0 \
--output_dir=$TRAINED_CLASSIFIER/$EXP_NAME
[sudo] password for yang:
Traceback (most recent call last):
  File "run_mobile.py", line 24, in <module>
    import modeling
  File "/home/yang/桌面/BERT-train2deploy-master/modeling.py", line 26, in <module>
    import six
ImportError: No module named six
I ran this in PyCharm's terminal, following the OP's steps exactly. six is already installed, yet I still get this error. What could be going on? Any help would be appreciated.
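
A likely culprit is sudo: it usually runs the system /usr/bin/python rather than the python of the activated (tf) conda environment, and the system interpreter has no six installed. A quick check (a sketch; run it both with and without sudo):

# Sketch: confirm which interpreter is running and whether it can import six.
import sys
print(sys.executable)  # should point into the (tf) environment, not /usr/bin/python

try:
    import six
    print("six", six.__version__, "found at", six.__file__)
except ImportError:
    print("six is not importable from this interpreter")

If sys.executable is not the environment's python, either drop sudo or invoke the environment's interpreter explicitly.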

Starting multiple services on the same machine gets stuck

Has anyone tried starting several services on one machine (on different ports)? On one machine I can start at most five; any further instance stays stuck at the final "load pb file" step and never prints "ready and listening". There is still plenty of free GPU memory and system RAM, so I don't understand why.

"ready and listening!" never appears

The service comes up fine locally, but after packaging it into Docker the "ready and listening!" message never appears, which means the HTTP service is not actually up. Could you help figure out why?
I:VENTILATOR:[__i:_ge:239]:get devices
I:VENTILATOR:[__i:_ge:271]:device map:
worker 0 -> cpu
I:SINK:[__i:_ru:317]:ready
I:VENTILATOR:[__i:_ru:180]:start http proxy
I:WORKER-0:[__i:_ru:497]:use device cpu, load graph from /usr/src/app/models/pbModelDir/classification_model.pb
I:VENTILATOR:[__i:_ru:199]:new config request req id: 0 client: b'e00184bb-7360-4fea-9c19-d9e3321bf9bb'
I:SINK:[__i:_ru:372]:send config client b'e00184bb-7360-4fea-9c19-d9e3321bf9bb'
I:VENTILATOR:[__i:_ru:199]:new config request req id: 0 client: b'a654003f-0eca-4e5c-ba56-30f8f07ac053'
I:SINK:[__i:_ru:372]:send config client b'a654003f-0eca-4e5c-ba56-30f8f07ac053'
I:VENTILATOR:[__i:_ru:199]:new config request req id: 0 client: b'9b5d0dfc-e3de-4ac7-8f54-f791ba56c3ea'
I:SINK:[__i:_ru:372]:send config client b'9b5d0dfc-e3de-4ac7-8f54-f791ba56c3ea'
I:VENTILATOR:[__i:_ru:199]:new config request req id: 0 client: b'0a23f5f7-b79c-4f2e-94b1-891ef5477618'
I:SINK:[__i:_ru:372]:send config client b'0a23f5f7-b79c-4f2e-94b1-891ef5477618'
I:VENTILATOR:[__i:_ru:199]:new config request req id: 0 client: b'60ba3b85-1538-414c-9425-915b057ae35d'
I:SINK:[__i:_ru:372]:send config client b'60ba3b85-1538-414c-9425-915b057ae35d'
I:VENTILATOR:[__i:_ru:199]:new config request req id: 0 client: b'bda5c53a-850f-419f-9c9e-9b907a31f99d'
I:SINK:[__i:_ru:372]:send config client b'bda5c53a-850f-419f-9c9e-9b907a31f99d'
I:VENTILATOR:[__i:_ru:199]:new config request req id: 0 client: b'c6f526e8-30eb-46a9-95b1-6a6b0ca3a887'
I:SINK:[__i:_ru:372]:send config client b'c6f526e8-30eb-46a9-95b1-6a6b0ca3a887'
I:VENTILATOR:[__i:_ru:199]:new config request req id: 0 client: b'2735bc9e-0eb5-4c68-b896-500e60c42e56'
I:SINK:[__i:_ru:372]:send config client b'2735bc9e-0eb5-4c68-b896-500e60c42e56'
I:VENTILATOR:[__i:_ru:199]:new config request req id: 0 client: b'0378aecd-e1e0-4304-9f44-4265098f6533'
I:SINK:[__i:_ru:372]:send config client b'0378aecd-e1e0-4304-9f44-4265098f6533'
I:VENTILATOR:[__i:_ru:199]:new config request req id: 0 client: b'f62f81bc-1f96-4ee0-93db-3bc530eecfb6'
I:SINK:[__i:_ru:372]:send config client b'f62f81bc-1f96-4ee0-93db-3bc530eecfb6'

  • Serving Flask app "bert_base.server.http" (lazy loading)
  • Environment: production
    WARNING: Do not use the development server in a production environment.
    Use a production WSGI server instead.
  • Debug mode: off
  • Running on http://0.0.0.0:8091/ (Press CTRL+C to quit)

Prediction is very slow

Has anyone else found classification prediction to be very slow with do_predict=True? How can this be fixed?

Mine is an English NER BERT model, so things are much the same, but when I run freeze_graph.py I hit the problem below. Could you advise?

tensorflow.python.framework.errors_impl.NotFoundError: Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Key output_bias not found in checkpoint
[[node save/RestoreV2 (defined at freeze_graph.py:191) = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]
[[{{node save/RestoreV2/_393}} = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_397_save/RestoreV2", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"]]

Then I opened the checkpoint file, which contains:
model_checkpoint_path: "model.ckpt-1136"
all_model_checkpoint_paths: "model.ckpt-0"
all_model_checkpoint_paths: "model.ckpt-1000"
all_model_checkpoint_paths: "model.ckpt-1136"

Is there something wrong here?
It says the bias cannot be found, but shouldn't it be in the ckpt?
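
output_bias is the bias variable of BERT's classification head (as created in the reference create_model that freeze_graph.py rebuilds); an NER checkpoint trained with a BiLSTM/CRF head may simply never have created a variable with that name, in which case restoring fails exactly like this. Listing what the checkpoint actually contains makes that easy to verify (a sketch, TensorFlow 1.x assumed):

# Sketch: list every variable stored in the checkpoint and look for 'output_bias'.
import tensorflow as tf

ckpt = "model.ckpt-1136"  # checkpoint prefix from the checkpoint file shown above
for name, shape in tf.train.list_variables(ckpt):
    print(name, shape)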

Bug in freeze_graph.py

In optimize_class_model, num_labels is passed when calling create_classification_model, but num_labels is never defined beforehand.
#############################################################

Added: read num_labels from label2id.pkl so the num_labels argument no longer has to be specified; 2019/4/17

    if not args.num_labels:
        num_labels, label2id, id2label = init_predict_var(tmp_dir)

#############################################################
If the num_labels argument is supplied when running the script, the block above is skipped, so the local variable num_labels is never defined.
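
A possible patch (a sketch against the snippet above, not the author's official fix) is simply to give num_labels a value in the other branch as well:

# Sketch of a possible fix: make sure num_labels is always defined.
if args.num_labels:
    num_labels = args.num_labels
    # label2id / id2label may still need to be loaded if they are used further down
else:
    num_labels, label2id, id2label = init_predict_var(tmp_dir)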

About the frozen pb

Hi, in the freeze_graph step, line 17 does import modeling. Is this module provided by the project? I keep getting an error saying modeling cannot be found.

Error when running bert-base-serving-start

I:VENTILATOR:lodding classification predict, could take a while...
I:VENTILATOR:contain 0 labels:dict_values(['0', '1'])
2020-01-14 21:09:35.241239: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
pb_file exits F:\学习资料\毕设\code\bert-master\bert-master\output\classification_model.pb
I:VENTILATOR:optimized graph is stored at: F:\学习资料\毕设\code\bert-master\bert-master\output\classification_model.pb
I:VENTILATOR:bind all sockets
I:VENTILATOR:open 8 ventilator-worker sockets, tcp://127.0.0.1:64609,tcp://127.0.0.1:64610,tcp://127.0.0.1:64611,tcp://127.0.0.1:64612,tcp://127.0.0.1:64613,tcp://127.0.0.1:64614,tcp://127.0.0.1:64615,tcp://127.0.0.1:64616
I:VENTILATOR:start the sink
2020-01-14 21:09:37.534152: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
I:SINK:ready
I:VENTILATOR:get devices
I:VENTILATOR:device map:
worker 0 -> gpu 0
2020-01-14 21:09:39.903511: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
I:WORKER-0:use device gpu: 0, load graph from F:\学习资料\毕设\code\bert-master\bert-master\output\classification_model.pb
WARNING:tensorflow:From d:\anaconda\lib\site-packages\bert_base-0.0.9-py3.7.egg\bert_base\server\helper.py:161: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.

WARNING:tensorflow:From d:\anaconda\lib\site-packages\bert_base-0.0.9-py3.7.egg\bert_base\server\helper.py:161: The name tf.logging.ERROR is deprecated. Please use tf.compat.v1.logging.ERROR instead.
Process BertWorker-3:
Traceback (most recent call last):
  File "D:\Anaconda\lib\multiprocessing\process.py", line 297, in _bootstrap
    self.run()
  File "d:\anaconda\lib\site-packages\bert_base-0.0.9-py3.7.egg\bert_base\server\__init__.py", line 490, in run
    self._run()
  File "d:\anaconda\lib\site-packages\zmq\decorators.py", line 75, in wrapper
    return func(*args, **kwargs)
  File "d:\anaconda\lib\site-packages\bert_base-0.0.9-py3.7.egg\bert_base\server\zmq_decor.py", line 27, in wrapper
    return func(*args, **kwargs)
  File "d:\anaconda\lib\site-packages\bert_base-0.0.9-py3.7.egg\bert_base\server\__init__.py", line 508, in _run
    for r in estimator.predict(input_fn=self.input_fn_builder(receivers, tf), yield_single_examples=False):
  File "d:\anaconda\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 622, in predict
    features, None, ModeKeys.PREDICT, self.config)
  File "d:\anaconda\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1149, in _call_model_fn
    model_fn_results = self._model_fn(features=features, **kwargs)
  File "d:\anaconda\lib\site-packages\bert_base-0.0.9-py3.7.egg\bert_base\server\__init__.py", line 466, in classification_model_fn
    pred_probs = tf.import_graph_def(graph_def, name='', input_map=input_map, return_elements=['pred_prob:0'])
  File "d:\anaconda\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "d:\anaconda\lib\site-packages\tensorflow_core\python\framework\importer.py", line 405, in import_graph_def
    producer_op_list=producer_op_list)
  File "d:\anaconda\lib\site-packages\tensorflow_core\python\framework\importer.py", line 535, in _import_graph_def_internal
    ', '.join(missing_unused_input_keys))
ValueError: Attempted to map inputs that were not found in graph_def: [segment_ids:0]

freeze_graph.py problem 2

Line 195 reads latest_checkpoint = tf.train.latest_checkpoint(args.model_dir).
The argument should be args.bert_model_dir.
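
The one-line change being suggested, as a sketch:

# freeze_graph.py, around line 195 (per the report above)
# before: latest_checkpoint = tf.train.latest_checkpoint(args.model_dir)
latest_checkpoint = tf.train.latest_checkpoint(args.bert_model_dir)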
