
people_relation_extract's Issues

Position of the person entities

When using BERT for feature extraction, does each token's relative position to the person entities need to be taken into account?
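For reference, a common scheme in relation extraction (not necessarily what this repository does; the helper below is hypothetical) is to record each token's signed distance to each entity and feed those distances alongside the BERT vectors, typically via a learned position embedding:

```python
def relative_positions(num_tokens, entity_index):
    """Signed distance of every token position from the entity at entity_index."""
    return [i - entity_index for i in range(num_tokens)]

# Distances to two person entities at positions 0 and 4 in a 6-token sentence:
print(relative_positions(6, 0))  # [0, 1, 2, 3, 4, 5]
print(relative_positions(6, 4))  # [-4, -3, -2, -1, 0, 1]
```

Each token then carries two extra features, one distance per entity, which lets the classifier distinguish words near an entity from words far away.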

Hello

I have read several of your NLP articles, but as a beginner I don't know where to start when faced with this project in practice. For example, how do I get the training pipeline running end to end? Do you have any suggestions?

Missing BERT files

Hello, when running your code I noticed that it references these files:
model_dir = os.path.join(file_path, 'chinese_L-12_H-768_A-12')
config_name = os.path.join(model_dir, 'bert_config.json')
ckpt_name = os.path.join(model_dir, 'bert_model.ckpt')
vocab_file = os.path.join(model_dir, 'vocab.txt')
but none of them are included in the repository.
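The chinese_L-12_H-768_A-12 directory is Google's pretrained Chinese BERT-Base release, which is downloaded separately rather than shipped with the repository. A small sanity check before training (the helper name is made up; the `.index`/`.data` suffixes are how TF checkpoints are stored on disk):

```python
import os

# Files expected inside the pretrained chinese_L-12_H-768_A-12 directory.
REQUIRED = ['bert_config.json', 'vocab.txt',
            'bert_model.ckpt.index', 'bert_model.ckpt.data-00000-of-00001']

def missing_bert_files(model_dir):
    """Return the required pretrained-BERT files absent from model_dir."""
    return [f for f in REQUIRED if not os.path.exists(os.path.join(model_dir, f))]

print(missing_bert_files('chinese_L-12_H-768_A-12'))
```

If anything is listed, the model archive needs to be downloaded from Google's BERT release page and unpacked next to the training scripts.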

model_predict.py input_1 shape error

I:BERT_VEC:[graph:opt:144]:write graph to a tmp file: ./tmp_graph11
Traceback (most recent call last):
File "model_predict.py", line 48, in
predicted = model.predict(x_train)

ValueError: Error when checking input: expected input_1 to have shape (128, 768) but got array with shape (80, 768)
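The error says the model was built for a fixed sequence length of 128 while this sentence produced only 80 vectors. One possible fix (a sketch, assuming the input is a (seq_len, 768) array of BERT vectors) is to zero-pad or truncate every sentence to the model's max length before calling predict:

```python
import numpy as np

MAX_SEQ_LEN, DIM = 128, 768

def pad_to_max(vec, max_len=MAX_SEQ_LEN):
    """Zero-pad (or truncate) a (seq_len, dim) array to (max_len, dim)."""
    if vec.shape[0] >= max_len:
        return vec[:max_len]
    pad = np.zeros((max_len - vec.shape[0], vec.shape[1]), dtype=vec.dtype)
    return np.vstack([vec, pad])

print(pad_to_max(np.ones((80, DIM))).shape)  # (128, 768)
```

The same max length must of course be used at training and prediction time.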

Sentences where the number of ### slots exceeds the number of entities

Hello percent4, I have learned a lot from your repository. While using it I ran into a problem:

(screenshot omitted)

In the dataset, ### marks an entity, but some sentences contain more than two such occurrences, which makes it impossible to determine the entities' exact positions in the sentence. Could you provide the original text data so it can be filtered, keeping only sentences with exactly two ### slots and discarding those with more?

Thank you!
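If the original text were available, the proposed filtering could be sketched like this (the dataset's exact field layout is an assumption; only the sentence string is checked here):

```python
def keep_sample(sentence):
    """Keep only sentences with exactly two '###' entity slots."""
    return sentence.count('###') == 2

samples = ['###认识###。', '###、###和###是朋友。', '没有实体。']
print([s for s in samples if keep_sample(s)])  # ['###认识###。']
```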

Which TensorFlow version does the source code use?

Running model_train.py raises the error: module 'tensorflow' has no attribute 'logging'. The cause is that the TensorFlow 2.0+ I am using no longer has tf.logging, so I would like to ask which TensorFlow version the source code targets.
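`tf.logging` exists only in TensorFlow 1.x, so the code appears to predate TF 2. A possible environment fix, with a version number that is an assumption since the repository does not pin one:

```shell
# Downgrade to a TF 1.x release that still provides tf.logging
# (1.14.0 is a guess; any 1.x release should have the attribute).
pip install "tensorflow==1.14.0"
```

Alternatively, staying on TF 2.x, replacing `import tensorflow as tf` with `import tensorflow.compat.v1 as tf` restores the `tf.logging` attribute through the compatibility shim.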

KeyError: 'val_accuracy'

RuntimeWarning: Early stopping conditioned on metric val_accuracy which is not available. Available metrics are: val_loss,val_acc,loss,acc
(self.monitor, ','.join(list(logs.keys()))), RuntimeWarning
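This is a Keras version mismatch: older Keras releases log the validation accuracy as `val_acc`, newer ones as `val_accuracy`. The quick fix is to pass `monitor='val_acc'` to the EarlyStopping callback; a version-agnostic sketch (the helper is hypothetical) picks whichever key the running Keras actually logs:

```python
def pick_monitor(available_metrics):
    """Choose the accuracy metric name this Keras version actually logs."""
    for name in ('val_accuracy', 'val_acc'):
        if name in available_metrics:
            return name
    return 'val_loss'  # fall back to loss if no accuracy metric is logged

print(pick_monitor(['val_loss', 'val_acc', 'loss', 'acc']))  # val_acc
```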

How is BERT used for feature extraction?

From the article I understand that BERT is used to extract features, but how exactly is that done? I would like to learn.

Also, how would the fine-tuning approach be done instead? Many thanks!
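Judging from the BERT_VEC log lines quoted in another issue here, feature extraction appears to go through bert-as-service: a BertServer is started with the chinese_L-12_H-768_A-12 model, the client encodes each sentence into fixed-size vectors, and only a small downstream classifier trained on those frozen vectors is updated. Fine-tuning, by contrast, loads the BERT checkpoint into the model itself and updates BERT's own weights during training. A minimal client-side sketch (the bert-as-service setup is an assumption; `extract_features` is a made-up helper that works with any object exposing `encode`):

```python
def extract_features(encoder, sentences):
    """Turn sentences into fixed-size feature vectors via an encoding client.

    `encoder` is anything with an encode(list-of-str) -> vectors method,
    e.g. bert_serving.client.BertClient when a BertServer is running
    (an assumption about this repo's setup). The returned vectors are
    frozen features: only the classifier built on top of them is trained,
    which is what distinguishes feature extraction from fine-tuning.
    """
    return encoder.encode(sentences)
```

With bert-as-service this would be used as `extract_features(BertClient(), ['小明认识小红'])`.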

pip install h5py==2.10

Hello, when I run your code I hit the following problem; could you give me some help? Thank you very much!
line 72, in as_bytes(bytes_or_text,))
TypeError: Expected binary or unicode string, got None

Originally posted by @longkeyy in #9 (comment)
