
autophrasex's Introduction

AutoPhraseX


Automated Phrase Mining from Massive Text Corpora in Python.

The implementation follows the ideas of shangjingbo1226/AutoPhrase, but does not reproduce it exactly.

Installation

pip install -U autophrasex

Basic Usage

from autophrasex import *

# Build the AutoPhrase instance
autophrase = AutoPhrase(
    reader=DefaultCorpusReader(tokenizer=JiebaTokenizer()),
    selector=DefaultPhraseSelector(),
    extractors=[
        NgramsExtractor(N=4), 
        IDFExtractor(), 
        EntropyExtractor()
    ]
)

# Start mining
predictions = autophrase.mine(
    corpus_files=['data/answers.10000.txt'],
    quality_phrase_files='data/wiki_quality.txt',
    callbacks=[
        LoggingCallback(),
        ConstantThresholdScheduler(),
        EarlyStopping(patience=2, min_delta=3)
    ])

# Print the mined results
for pred in predictions:
    print(pred)

Advanced Usage

Every key step of this project is extensible, so you are free to plug in your own logic.

The project is roughly divided into these main modules:

  • tokenizer: tokenization
  • reader: corpus reading
  • selector: selection of high-frequency phrase candidates
  • extractors: feature extractors that produce the features the classifier needs
  • callbacks: lifecycle callbacks for the mining process

Advanced usage of each module is described below.

tokenizer

The tokenizer splits text into tokens. You can subclass AbstractTokenizer to implement your own; the library ships with JiebaTokenizer.

For example, you can use baidu/LAC for Chinese word segmentation. Such a tokenizer could be implemented like this:

# pip install lac
from LAC import LAC

class BaiduLacTokenizer(AbstractTokenizer):

    def __init__(self, custom_vocab_path=None, model_path=None, mode='seg', use_cuda=False, **kwargs):
        self.lac = LAC(model_path=model_path, mode=mode, use_cuda=use_cuda)
        if custom_vocab_path:
            self.lac.load_customization(custom_vocab_path)

    def tokenize(self, text, **kwargs):
        text = self._uniform_text(text, **kwargs)
        results = self.lac.run(text)
        results = [x.strip() for x in results if x.strip()]
        return results

Then use BaiduLacTokenizer when constructing the reader:

reader = DefaultCorpusReader(tokenizer=BaiduLacTokenizer())

reader

The reader loads the corpus. You can subclass AbstractCorpusReader to implement your own; the library ships with DefaultCorpusReader.

Because the current extractors depend on the reader (concretely, each extractor implements the reader's lifecycle callback interface), overriding the reader may also require changing the extractor implementations. The customization cost is therefore relatively high, and overriding the reader is not recommended for now.

selector

The selector picks high-frequency phrase candidates. You can subclass AbstractPhraseSelector to implement your own; the library ships with DefaultPhraseSelector.

A selector can hold multiple phrase_filters to filter phrases. The filter interface is open: you can subclass AbstractPhraseFilter to implement your own. The library ships with DefaultPhraseFilter, which is used by default.

To disable the default filters and use your own instead, configure the selector at construction time:

# Custom phrase filter
class MyPhraseFilter(AbstractPhraseFilter):

    def apply(self, pair, **kwargs):
        phrase, freq = pair
        # return True to filter this phrase
        if is_verb(phrase):
            return True
        return False

selector = DefaultPhraseSelector(
    phrase_filters=[MyPhraseFilter()], 
    use_default_phrase_filters=False
)

Since some filtering steps are much faster when processed in batches (for example, computing part-of-speech tags with a deep learning model), phrase_filter also provides a batch_apply method.

For example, using baidu/LAC to compute part-of-speech tags and filter out verb phrases:

from LAC import LAC

class VerbPhraseFilter(AbstractPhraseFilter):

    def __init__(self, batch_size=100):
        super().__init__()
        self.lac = LAC()
        self.batch_size = batch_size

    def batch_apply(self, batch_pairs, **kwargs):
        predictions = []
        for i in range(0, len(batch_pairs), self.batch_size):
            batch_texts = [x[0] for x in batch_pairs[i: i + self.batch_size]]
            batch_preds = self.lac.run(batch_texts)
            predictions.extend(batch_preds)
        candidates = []
        for i in range(len(predictions)):
            _, pos_tags = predictions[i]
            if any(pos in ['v', 'vn', 'vd'] for pos in pos_tags):
                continue
            candidates.append(batch_pairs[i])
        return candidates

selector = DefaultPhraseSelector(
    phrase_filters=[VerbPhraseFilter()], 
    use_default_phrase_filters=False
)

extractor

The extractors compute the classifier's features. A feature extractor collects the necessary statistics while the reader reads the corpus; in other words, each extractor implements the reader's callback interface. A custom feature extractor must therefore inherit both AbstractCorpusReadCallback and AbstractFeatureExtractor.

The library ships with the following feature extractors:

  • NgramsExtractor: an n-gram feature extractor that computes the pmi feature of a phrase
  • IDFExtractor: computes the doc_freq and idf features of a phrase
  • EntropyExtractor: computes the left and right entropy features of a phrase
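As a rough, self-contained illustration of the left/right entropy idea (a sketch, not the library's actual implementation): the entropy of the distribution of tokens adjacent to a candidate is high when the candidate appears in many different contexts, which suggests it is a free-standing phrase.

```python
import math
from collections import Counter

def neighbor_entropy(neighbors):
    """Shannon entropy (base 2) of the distribution of adjacent tokens.
    Computed separately over the left and right neighbors of a candidate;
    high values on both sides suggest a free-standing phrase."""
    counts = Counter(neighbors)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Tokens observed immediately to the left of some candidate phrase:
left_neighbors = ['的', '个', '了', '的']
print(round(neighbor_entropy(left_neighbors), 4))  # 1.5
```

A candidate that always follows the same token (e.g. half of a longer fixed expression) gets entropy 0 on that side, which is why low entropy is evidence against phrase quality.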

All of the built-in extractors above are based on n-gram statistics, so they all support selecting which n-grams to count: you can pass a custom ngram_filter to skip n-grams you do not want to track. The library ships with DefaultNgramFilter, enabled by default. You can implement your own ngram_filter to pick suitable n-grams flexibly.

For example, to filter out n-grams containing punctuation:

CHARACTERS = set('!"#$%&\'()*+,-./:;?@[\\]^_`{|}~ \t\n\r\x0b\x0c,。?:“”【】「」')

class MyNgramFilter(AbstractNgramFilter):

    def apply(self, ngram, **kwargs):
        if any(x in CHARACTERS for x in ngram):
            return True
        return False

autophrase = AutoPhrase(
    reader=DefaultCorpusReader(tokenizer=JiebaTokenizer()),
    selector=DefaultPhraseSelector(),
    extractors=[
        NgramsExtractor(N=4, ngram_filters=[MyNgramFilter()]), 
        IDFExtractor(ngram_filters=[MyNgramFilter()]), 
        EntropyExtractor(ngram_filters=[MyNgramFilter()]),
    ]
)
# Start mining
...

You can subclass AbstractFeatureExtractor to implement your own feature computation. Just pass the extractor when constructing the AutoPhrase instance; no other steps are needed.

For example, adding a feature that indicates whether a phrase is a unigram:

class UnigramFeatureExtractor(AbstractFeatureExtractor, AbstractCorpusReadCallback):

    def __init__(self, **kwargs):
        super().__init__(**kwargs)

    def extract(self, phrase, **kwargs):
        parts = phrase.split(' ')
        features = {
            'is_unigram': 1 if len(parts) == 1 else 0
        }
        return features


autophrase = AutoPhrase(
    reader=DefaultCorpusReader(tokenizer=JiebaTokenizer()),
    selector=DefaultPhraseSelector(),
    extractors=[
NgramsExtractor(N=4), 
        IDFExtractor(), 
        EntropyExtractor(),
        UnigramFeatureExtractor(),
    ]
)

# Ready to start mining
...

callback

The callback interface hooks into the lifecycle of the phrase-mining process, enabling moderately complex behavior such as EarlyStopping and threshold scheduling.

The library ships with the following callbacks:

  • LoggingCallback: logs progress during the mining process
  • ConstantThresholdScheduler: adjusts the score threshold during training
  • EarlyStopping: stops training early when the metric stops improving

You can subclass Callback to implement your own logic.
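For instance, here is a minimal sketch of a custom callback that times each mining epoch. The hook names on_epoch_begin/on_epoch_end are illustrative assumptions; check the Callback base class for the actual lifecycle methods, and subclass it rather than a plain class in real use.

```python
import time

class EpochTimerCallback:
    """Hypothetical callback that records how long each mining epoch takes.
    In real use, subclass autophrasex's Callback and verify the hook names."""

    def __init__(self):
        self.durations = []

    def on_epoch_begin(self, epoch, **kwargs):
        # Called by the miner at the start of each epoch (assumed hook name).
        self._start = time.perf_counter()

    def on_epoch_end(self, epoch, **kwargs):
        # Called at the end of each epoch (assumed hook name).
        self.durations.append(time.perf_counter() - self._start)

cb = EpochTimerCallback()
cb.on_epoch_begin(1)
cb.on_epoch_end(1)
print(len(cb.durations))  # 1
```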

Example Results

The results below come from an early version of this library. The code has since changed substantially, so current output will not match; they are for reference only.

Sample extraction results on a news corpus:

成品油价格, 0.992766816097071
股份制银行, 0.992766816097071
公务船, 0.992766816097071
**留学生, 0.992766816097071
贷款基准, 0.992766816097071
欧足联, 0.992766816097071
新局面, 0.992766816097071
淘汰赛, 0.992766816097071
反动派, 0.992766816097071
生命危险, 0.992766816097071
新台阶, 0.992766816097071
知名度, 0.992766816097071
新兴产业, 0.9925660976153782
安全感, 0.9925660976153782
战斗力, 0.9925660976153782
战略性, 0.9925660976153782
私家车, 0.9925660976153782
环球网, 0.9925660976153782
副校长, 0.9925660976153782
流行语, 0.9925660976153782
债务危机, 0.9925660976153782
保险资产, 0.9920376397372204
保险机构, 0.9920376397372204
豪华车, 0.9920376397372204
环境质量, 0.9920376397372204
瑞典队, 0.9919345469537152
交强险, 0.9919345469537152
马卡报, 0.9919345469537152
生产力, 0.9911077251879798

Sample extraction results on a medical dialogue corpus:

左眉弓, 1.0
支原体, 1.0
mri, 1.0
颈动脉, 0.9854149008885851
结核病, 0.9670815675552518
手术室, 0.9617546444783288
平扫示, 0.9570324222561065
左手拇指, 0.94
双膝关节, 0.94
右手中指, 0.94
拇指末节, 0.94
cm皮肤, 0.94
肝胆脾, 0.94
抗体阳性, 0.94
igm抗体阳性, 0.94
左侧面颊, 0.94
膀胱结石, 0.94
左侧基底节, 0.94
腰椎正侧, 0.94
软组织肿胀, 0.94
手术瘢痕, 0.94
枕顶部, 0.94
左膝关节正侧, 0.94
膝关节正侧位, 0.94
腰椎椎体, 0.94
承德市医院, 0.94
性脑梗塞, 0.94
颈椎dr, 0.94
泌尿系超声, 0.94
双侧阴囊, 0.94
右颞部, 0.94
肺炎支原体, 0.94

autophrasex's People

Contributors

jianfengzhai, luozhouyang


autophrasex's Issues

ZeroDivisionError: float division by zero

2021-04-16 11:12:39,442    INFO        autophrase.py   33] Load quality phrases finished. There are 10386 quality phrases in total.
2021-04-16 11:12:39,937    INFO        autophrase.py   36] Selected 1000 frequent phrases.
2021-04-16 11:12:39,938    INFO        autophrase.py   39] Size of initial positive pool: 118
2021-04-16 11:12:39,939    INFO        autophrase.py   40] Size of initial negative pool: 782
2021-04-16 11:12:39,940    INFO        autophrase.py   46] Starting to train model at epoch 1 ...
---------------------------------------------------------------------------
ZeroDivisionError                         Traceback (most recent call last)
<ipython-input-28-7f2482f60f66> in <module>
      4     strategy=strategy,
      5     N=4,
----> 6     epochs=10)
      7 
      8 for pred in predictions:

~/.pylib/Lib/site-packages/autophrasex/autophrase.py in mine(self, input_doc_files, quality_phrase_files, strategy, N, **kwargs)
     45         for epoch in range(kwargs.pop('epochs', 5)):
     46             logging.info('Starting to train model at epoch %d ...', epoch + 1)
---> 47             x, y = strategy.compose_training_data(pos_pool, neg_pool, **kwargs)
     48             self.classifier.fit(x, y)
     49             logging.info('Finished to train model at epoch %d', epoch + 1)

~/.pylib/Lib/site-packages/autophrasex/strategy.py in compose_training_data(self, pos_pool, neg_pool, **kwargs)
    155         for p in pos_pool:
    156             p = ' '.join(self.tokenizer.tokenize(p))
--> 157             examples.append((self.build_input_features(p), 1))
    158         for p in neg_pool:
    159             p = ' '.join(self.tokenizer.tokenize(p))

~/.pylib/Lib/site-packages/autophrasex/strategy.py in build_input_features(self, phrase, **kwargs)
    210         doc_freq = self.idf_callback.doc_freq_of(phrase)
    211         idf = self.idf_callback.idf_of(phrase)
--> 212         pmi = self.ngrams_callback.pmi_of(phrase)
    213         left_entropy = self.entropy_callback.left_entropy_of(phrase)
    214         right_entropy = self.entropy_callback.right_entropy_of(phrase)

~/.pylib/Lib/site-packages/autophrasex/callbacks.py in pmi_of(self, ngram)
     79         ngram_total_occur = sum(self.ngrams_freq[n].values())
     80         freq = self.ngrams_freq[n].get(''.join(ngram.split(' ')), 0)
---> 81         return self._pmi_of(ngram, n, freq, unigram_total_occur, ngram_total_occur)
     82 
     83     def pmi(self):

~/.pylib/Lib/site-packages/autophrasex/callbacks.py in _pmi_of(self, ngram, n, freq, unigram_total_occur, ngram_total_occur)
     61         indep_prob = reduce(
     62             mul, [self.ngrams_freq[1][unigram] for unigram in ngram.split(' ')]) / (unigram_total_occur ** n)
---> 63         pmi = math.log((joint_prob + self.epsilon) / (indep_prob + self.epsilon), 2)
     64         return pmi
     65 

ZeroDivisionError: float division by zero

Debugging shows the cause: epsilon defaults to 0 in the callback. It would be friendlier to users to change the default to a tiny value, or to pass something like epsilon=1e-9 explicitly in the getting-started example.
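A minimal standalone reproduction of the failure mode (simplified from the traceback above; the real _pmi_of derives the joint and independence probabilities from n-gram counts): with epsilon=0, a phrase whose probabilities are both zero triggers the float division by zero inside the log.

```python
import math

def pmi(joint_prob, indep_prob, epsilon=0.0):
    # Simplified form of callbacks._pmi_of; epsilon=0.0 mirrors the current default.
    return math.log((joint_prob + epsilon) / (indep_prob + epsilon), 2)

try:
    pmi(0.0, 0.0)                      # epsilon=0 -> ZeroDivisionError
except ZeroDivisionError as e:
    print('crash:', e)

print(pmi(0.0, 0.0, epsilon=1e-9))     # smoothed: log2(1) == 0.0
```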

Import error; also, could you provide the data used in the sample code?

Environment: win10, core-i7, python3.

This import fails: from autophrasex import AutoPhrase, BaiduLacTokenizer, Strategy

Could the LAC module be replaced with another package providing the same functionality? Import errors have also been reported in the baidu LAC repo.

...autophrasex_demo.py", line 11, in <module>
    from autophrasex import AutoPhrase, BaiduLacTokenizer, Strategy
  File "D:\ProgramData\Anaconda3\lib\site-packages\autophrasex\__init__.py", line 3, in <module>
    from .autophrase import AutoPhrase
  File "D:\ProgramData\Anaconda3\lib\site-packages\autophrasex\autophrase.py", line 8, in <module>
    from .strategy import AbstractStrategy
  File "D:\ProgramData\Anaconda3\lib\site-packages\autophrasex\strategy.py", line 6, in <module>
    from LAC import LAC
  File "D:\ProgramData\Anaconda3\lib\site-packages\LAC\__init__.py", line 23, in <module>
    from .lac import LAC
  File "D:\ProgramData\Anaconda3\lib\site-packages\LAC\lac.py", line 28, in <module>
    import paddle.fluid as fluid
  File "D:\ProgramData\Anaconda3\lib\site-packages\paddle\__init__.py", line 29, in <module>
    from .fluid import monkey_patch_variable
  File "D:\ProgramData\Anaconda3\lib\site-packages\paddle\fluid\__init__.py", line 35, in <module>
    from . import framework
  File "D:\ProgramData\Anaconda3\lib\site-packages\paddle\fluid\framework.py", line 34, in <module>
    from .proto import framework_pb2
  File "D:\ProgramData\Anaconda3\lib\site-packages\paddle\fluid\proto\framework_pb2.py", line 11, in <module>
    from google.protobuf import descriptor_pb2
  File "D:\ProgramData\Anaconda3\lib\site-packages\google\protobuf\descriptor_pb2.py", line 1840, in <module>
    __module__ = 'google.protobuf.descriptor_pb2'
TypeError: expected bytes, Descriptor found

Usage of the corpus_files and quality_phrase_files parameters

Hi, I have some questions about the corpus_files and quality_phrase_files parameters in practice.

  1. If I want to run AutoPhraseX in a for loop (e.g. the corpus is split into n parts and each part is mined in turn), does corpus_files only accept files? I tried passing an array or a string instead and got errors. When writing the processed corpus to txt files is inconvenient (i.e. the corpus is split into n parts with large n), how can I use AutoPhraseX in a for loop? Thanks a lot!
  2. When I use a simple quality_phrase_files='userDic.txt' (e.g. userDic.txt contains "知识图谱"), "知识图谱" does not appear in the mined results; after deleting "知识图谱" from userDic.txt, the phrase does appear. After trying several examples I got the impression that quality_phrase_files behaves like a stop-word list. I am not sure whether this is caused by the small corpus or by incorrect usage.

The code used in practice:

from autophrasex import *

# Build the AutoPhrase instance
autophrase = AutoPhrase(
    reader=DefaultCorpusReader(tokenizer=JiebaTokenizer()),
    selector=DefaultPhraseSelector(),
    extractors=[
        NgramsExtractor(N=4),
        IDFExtractor(),
        EntropyExtractor()
    ]
)

# Start mining
predictions = autophrase.mine(
    corpus_files=['answers.txt'],
    quality_phrase_files='userDic.txt',  # quality_phrase_files?? seems to act like a stop-word list
    callbacks=[
        LoggingCallback(),
        ConstantThresholdScheduler(),
        EarlyStopping(patience=2, min_delta=3)
        # EarlyStopping()
    ]
)

# Print the mined results
for pred in predictions:
    print(pred)

Thanks a lot for any help!

IndexError: index 1 is out of bounds for axis 0 with size 1

def _predict_proba(self, phrases):
    features = [self._compose_feature(phrase) for phrase in phrases]
    pos_probs = [prob[1] for prob in self.classifier.predict_proba(features)]  # <- prob[1] fails here
    pairs = [(phrase, prob) for phrase, prob in zip(phrases, pos_probs)]
    return pairs

Extraction quality on English falls short of the original paper

Hi, I ran your code on an English dataset, but the results are not good; the highest score is only about 0.3. I have already replaced wiki_quality.txt with an English version and switched the tokenizer to a spaCy implementation, and I cannot find the problem. Have you tested on English datasets, and is there any corresponding code you could provide? Thanks!

IndexError: index 1 is out of bounds for axis 0 with size 1

Hi, what format should the input file take? I hit the bug below when running; my guess is that the input features caused an output that should be 2-D to become 1-D. My input file is one plain sentence per line with no spaces, e.g.:

我是一个人。
哈哈哈哈哈。
那里有个苹果。
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-3-097136691c2f> in <module>
      6         LoggingCallback(),
      7         ConstantThresholdScheduler(),
----> 8         EarlyStopping(patience=2, min_delta=3)
      9     ])

/usr/local/lib/python3.6/dist-packages/autophrasex/autophrase.py in mine(self, corpus_files, quality_phrase_files, N, epochs, callbacks, topk, filter_fn, **kwargs)
    122 
    123             callback.on_epoch_reorganize_phrase_pools_begin(epoch, pos_pool, neg_pool)
--> 124             pos_pool, neg_pool = self._reorganize_phrase_pools(pos_pool, neg_pool, **kwargs)
    125             callback.on_epoch_reorganize_phrase_pools_end(epoch, pos_pool, neg_pool)
    126 

/usr/local/lib/python3.6/dist-packages/autophrasex/autophrase.py in _reorganize_phrase_pools(self, pos_pool, neg_pool, **kwargs)
    157         new_pos_pool.extend(deepcopy(pos_pool))
    158 
--> 159         pairs = self._predict_proba(neg_pool)
    160         pairs = sorted(pairs, key=lambda x: x[1], reverse=True)
    161         # print(pairs[:10])

/usr/local/lib/python3.6/dist-packages/autophrasex/autophrase.py in _predict_proba(self, phrases)
    184     def _predict_proba(self, phrases):
    185         features = [self._compose_feature(phrase) for phrase in phrases]
--> 186         pos_probs = [prob[1] for prob in self.classifier.predict_proba(features)]
    187         pairs = [(phrase, prob) for phrase, prob in zip(phrases, pos_probs)]
    188         return pairs

/usr/local/lib/python3.6/dist-packages/autophrasex/autophrase.py in <listcomp>(.0)
    184     def _predict_proba(self, phrases):
    185         features = [self._compose_feature(phrase) for phrase in phrases]
--> 186         pos_probs = [prob[1] for prob in self.classifier.predict_proba(features)]
    187         pairs = [(phrase, prob) for phrase, prob in zip(phrases, pos_probs)]
    188         return pairs

IndexError: index 1 is out of bounds for axis 0 with size 1
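For what it's worth, this error pattern is reproducible with any scikit-learn-style classifier (which the predict_proba call in the traceback suggests the library uses): predict_proba returns one column per class seen during fit, so if the training pools collapse to a single class the output has shape (n, 1) and prob[1] is out of bounds. A standalone sketch, assuming a scikit-learn classifier:

```python
from sklearn.ensemble import RandomForestClassifier

X = [[0.1], [0.2], [0.3]]
y = [1, 1, 1]  # every training phrase ended up with the same label

clf = RandomForestClassifier(n_estimators=5, random_state=0).fit(X, y)
proba = clf.predict_proba([[0.15]])
# One column per class seen in fit -> shape (1, 1), so proba[0][1] raises IndexError
print(proba.shape)
```

This points at the phrase pools: if the positive or negative pool is empty (e.g. too little corpus overlap with the quality phrase file), the classifier only ever sees one label.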
