Comments (8)
I have just done a comparison between the different methods: splitting based on punctuation, Unicode segmentation, NLTK's Punkt model and spaCy. I used the Brown corpus as the benchmarking dataset.
Here are my results:
Method | Precision | Recall | F1 |
---|---|---|---|
Punctuation splitter | 0.896 | 0.915 | 0.906 |
Unicode segmentation | 0.938 | 0.912 | 0.925 |
NLTK Punkt | 0.907 | 0.875 | 0.891 |
spaCy | 0.924 | 0.908 | 0.916 |
Jupyter notebook with full analysis
Interestingly, each method scores very similarly, presumably because most sentences are easy cases (a full stop followed by a space) and there are very few difficult ones (for example quotes, colons, etc.). Surprisingly, Unicode segmentation has the best F1 score and has the added benefit of being language independent (as previously suggested by @jbowles in #52).
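For reference, a minimal sketch (not the notebook's code) of how boundary-level precision/recall/F1 can be computed, treating each segmenter's output as a set of sentence-start offsets; the offsets in `main` are made up:

```rust
use std::collections::HashSet;

/// Boundary-level scores: reduce each segmenter's output to the set of
/// character offsets where a sentence starts, then compare against gold.
fn boundary_scores(predicted: &[usize], gold: &[usize]) -> (f64, f64, f64) {
    let pred: HashSet<usize> = predicted.iter().copied().collect();
    let gold: HashSet<usize> = gold.iter().copied().collect();
    let tp = pred.intersection(&gold).count() as f64;
    let precision = if pred.is_empty() { 0.0 } else { tp / pred.len() as f64 };
    let recall = if gold.is_empty() { 0.0 } else { tp / gold.len() as f64 };
    let f1 = if precision + recall == 0.0 {
        0.0
    } else {
        2.0 * precision * recall / (precision + recall)
    };
    (precision, recall, f1)
}

fn main() {
    // Hypothetical sentence-start offsets: predicted vs. gold standard.
    let (p, r, f1) = boundary_scores(&[0, 13, 30, 45], &[0, 13, 28, 45]);
    println!("precision={:.3} recall={:.3} F1={:.3}", p, r, f1);
}
```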
I will open a PR to incorporate UnicodeSegmentation in the coming days.
spaCy has two sentence segmentation implementations. The default is based on the dependency parser, which requires a statistical model. The second implementation is a simpler splitter based on punctuation (default is ".", "!", "?"). (https://spacy.io/usage/linguistic-features#sbd)
On another note, it looks like the Unicode sentence boundaries from unicode-rs/unicode-segmentation#24 have been implemented. I could look at how to incorporate this into this library?
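For context, a rough sketch of what that crate's sentence API looks like, assuming a unicode-segmentation release that includes the methods added for that issue (`split_sentence_bounds` and `unicode_sentences`):

```rust
use unicode_segmentation::UnicodeSegmentation;

fn main() {
    let text = "Here is one. Here is another! This trailing text is one more";

    // split_sentence_bounds yields contiguous subslices whose concatenation
    // reproduces the original string (whitespace stays attached).
    let bounds: Vec<&str> = text.split_sentence_bounds().collect();
    println!("{:?}", bounds);

    // unicode_sentences keeps only the segments that contain alphabetic characters.
    let sentences: Vec<&str> = text.unicode_sentences().collect();
    println!("{:?}", sentences);
}
```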
> The second implementation is a simpler splitter based on punctuation (default is ".", "!", "?").
I think you can do this already with the `RegexpTokenizer`, using something like:
```rust
let tokenizer = RegexpTokenizerParams::default()
    .pattern(r"[^\.!\?]".to_string())
    .build()
    .unwrap();
let sentences: Vec<&str> = tokenizer.tokenize("some string. another one").collect();
```
(I haven't checked that the regexp is correct.) So I'm not sure we need a separate object for it; maybe just documenting the appropriate regexp for sentence tokenization would be enough?
For the sentence boundaries from the unicode_segmentation crate: yes, that would be great if you are interested in looking into it! I would also be interested to know how it compares to the spaCy tokenizer that uses a language model.
Thanks! Sounds great. It's interesting indeed that Unicode segmentation is competitive even compared to spaCy, and I imagine it's much faster.
PR #66 implements the thin wrapper around the Unicode sentence segmentation.
Regarding the "simple punctuation splitter": using a regex like `[^\.!\?]` doesn't work as you would lose the punctuation at the end of each sentence. I also tried `(.*?[\.\?!]\s?)`, but there you would lose the trailing sentence if it didn't end with punctuation. For example:
Input = ["Here is one. Here is another! This trailing text is one more"]
Desired Output = ["Here is one.", "Here is another!", "This trailing text is one more"]
I don't think it's possible with Regex. Do you have any ideas?
Another tactic would be to create an iterator, similar to what spaCy does and what I did in the Jupyter notebook. For example:
```python
import re

def split_on_punct(doc: str):
    """Split document by sentences using punctuation ".", "!", "?"."""
    punct_set = {'.', '!', '?'}
    start = 0
    seen_period = False
    for i, token in enumerate(doc):
        is_punct = token in punct_set
        if seen_period and not is_punct:
            if re.match(r'\s', token):
                # keep the whitespace attached to the preceding sentence
                yield doc[start : i + 1]
                start = i + 1
            else:
                yield doc[start : i]
                start = i
            seen_period = False
        elif is_punct:
            seen_period = True
    if start < len(doc):
        # trailing text without a final punctuation mark
        yield doc[start : len(doc)]
```
FYI I'm going to look into implementing the "simple punctuation splitter" using a Rust iterator.
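A rough sketch of what such an iterator could look like, mirroring the Python logic above; `split_on_punct` here is a hypothetical name, not vtext's actual API:

```rust
/// Hypothetical punctuation-based sentence splitter: yields subslices of the
/// input, cutting after '.', '!' or '?' once a non-punctuation character is
/// seen, mirroring the Python version above.
fn split_on_punct<'a>(doc: &'a str) -> impl Iterator<Item = &'a str> + 'a {
    let punct = ['.', '!', '?'];
    let mut sentences = Vec::new();
    let mut start = 0;
    let mut seen_period = false;
    for (i, ch) in doc.char_indices() {
        let is_punct = punct.contains(&ch);
        if seen_period && !is_punct {
            if ch.is_whitespace() {
                // keep the whitespace attached to the preceding sentence
                sentences.push(&doc[start..i + ch.len_utf8()]);
                start = i + ch.len_utf8();
            } else {
                sentences.push(&doc[start..i]);
                start = i;
            }
            seen_period = false;
        } else if is_punct {
            seen_period = true;
        }
    }
    if start < doc.len() {
        // trailing text without a final punctuation mark
        sentences.push(&doc[start..]);
    }
    sentences.into_iter()
}

fn main() {
    let text = "Here is one. Here is another! This trailing text is one more";
    let sentences: Vec<&str> = split_on_punct(text).collect();
    assert_eq!(
        sentences,
        ["Here is one. ", "Here is another! ", "This trailing text is one more"]
    );
}
```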
> FYI I'm going to look into implementing the "simple punctuation splitter" using a Rust iterator.
Thanks, that would be great!
> Regarding the "simple punctuation splitter": using a regex like `[^\.!\?]` doesn't work as you would lose the punctuation at the end of each sentence.
I think using the regex crate would still work, but using `split` instead of `find_iter` to avoid the issue of the last sentence. Though I agree you would have to make a separate tokenizer for it.
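For illustration, a quick sketch of the `split` behaviour with the regex crate; note that `split` keeps the trailing text but drops the matched punctuation from the output:

```rust
use regex::Regex;

fn main() {
    let re = Regex::new(r"[.!?]").unwrap();
    let text = "Here is one. Here is another! This trailing text is one more";

    // split keeps the text after the last match, so the trailing sentence is
    // not lost; the matched punctuation itself is removed from the pieces.
    let parts: Vec<&str> = re.split(text).collect();
    println!("{:?}", parts);
    // ["Here is one", " Here is another", " This trailing text is one more"]
}
```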
Closing as resolved in #66 and #70, thanks again @joshlk!