Hi @roemmele,

Thanks for using EASSE in your research. My apologies: the example in the README has not been updated for the latest version of the package.

For your particular example, since you are only evaluating one sentence, I'd recommend using `sentence_sari` instead of `corpus_sari`. In addition, I'd suggest making it explicit that the input is already tokenized, so that EASSE doesn't try to tokenize it internally.
```python
sentence_sari(orig_sent="About 95 species are currently accepted .",
              sys_sent="This is my simplified sentence that has no token overlap with the source or reference sentences.",
              ref_sents=["About 95 species are currently known .", "About 95 species are now accepted .", "95 species are now accepted ."],
              tokenizer='none')
Out[2]: 16.078042328042326
```
```python
sentence_sari(orig_sent="About 95 species are currently accepted .",
              sys_sent="species accepted .",
              ref_sents=["About 95 species are currently known .", "About 95 species are now accepted .", "95 species are now accepted ."],
              tokenizer='none')
Out[3]: 24.175824175824175
```
If you'd still like to use `corpus_sari` for just one sentence, the calls should be:
```python
corpus_sari(orig_sents=["About 95 species are currently accepted ."],
            sys_sents=["This is my simplified sentence that has no token overlap with the source or reference sentences."],
            refs_sents=[["About 95 species are currently known ."], ["About 95 species are now accepted ."], ["95 species are now accepted ."]],
            tokenizer='none', use_f1_for_deletion=False)
Out[4]: 16.078042328042326
```
```python
corpus_sari(orig_sents=["About 95 species are currently accepted ."],
            sys_sents=["species accepted ."],
            refs_sents=[["About 95 species are currently known ."], ["About 95 species are now accepted ."], ["95 species are now accepted ."]],
            tokenizer='none', use_f1_for_deletion=False)
Out[5]: 24.175824175824175
```
It's important to note that `corpus_sari` calculates a system-level score. As such, you should set `use_f1_for_deletion=True` (the default value) so that it behaves the same way as the original Java implementation, which is used in almost all papers to compute the score at the system level. The original Python implementation (as in https://github.com/XingxingZhang/pysari) is for sentence-level SARI, where the deletion operation should be computed using only precision. This is transparent in EASSE: if you are calculating a sentence-level score (i.e. for a single sentence), use `sentence_sari`; if you are calculating a system-level score, use `corpus_sari`. The default value of `use_f1_for_deletion` is already set appropriately for each function.
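To make the difference concrete, here is a toy illustration (not EASSE's actual implementation) of how the deletion component changes depending on this flag: with `use_f1_for_deletion=False` the score is the precision of the deleted n-grams alone, while with `True` it is combined with recall via the F1 harmonic mean.

```python
def f1(precision, recall):
    # Harmonic mean of precision and recall; 0 when both are 0.
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy numbers for the deletion operation on some hypothetical sentence:
p, r = 0.8, 0.5
precision_only = p      # sentence-level behaviour: 0.8
with_f1 = f1(p, r)      # system-level behaviour: ~0.615
```

Whenever recall is lower than precision (deleting too little), the F1 variant pulls the deletion score down, which is why the two settings generally produce different numbers on the same data.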
Also, notice how the references are passed to `corpus_sari`: one list per reference set, each covering all sentences. Here is another example with more sentences for better illustration:
```python
corpus_sari(orig_sents=["About 95 species are currently accepted .", "The cat perched on the mat ."],
            sys_sents=["species accepted .", "cat on mat ."],
            refs_sents=[["About 95 species are currently known .", "The cat sat on the mat ."],
                        ["About 95 species are now accepted .", "The cat is on the mat ."],
                        ["95 species are now accepted .", "The cat sat ."]])
```
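If your data is instead grouped the other way around, as one list of references per sentence, it can be transposed into this layout with a one-liner. This is just a small sketch; `per_sentence_refs` is a hypothetical variable name, not part of the EASSE API:

```python
# References grouped per sentence (one inner list per source sentence).
per_sentence_refs = [
    ["About 95 species are currently known .",
     "About 95 species are now accepted .",
     "95 species are now accepted ."],
    ["The cat sat on the mat .",
     "The cat is on the mat .",
     "The cat sat ."],
]

# Transpose into the layout corpus_sari expects:
# refs_sents[i][j] = i-th reference of the j-th sentence.
refs_sents = [list(refs) for refs in zip(*per_sentence_refs)]
```

This assumes every sentence has the same number of references; with a varying number of references per sentence the lists would need padding or a different strategy.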
I hope this helps. If you have any more issues or require clarifications, please let us know.
Oh, I see. Ok, I will try that, thank you!