MTEval Toolkit

MTEval - Collection of evaluation metrics and algorithms for machine translation.

About

This software implements several major machine translation evaluation metrics and evaluation algorithms in an easy-to-use form.

If you use the MTEval toolkit, please cite this software in your documents using the link below:

Install

MTEval depends on the following libraries:

  • Boost 1.48 or later

It also uses the following toolkit to build the libraries and executables:

  • CMake 3.1.0 or later

First, we create the root directory of the build tree:

$ cd /path/to/mteval
$ mkdir build
$ cd build

Then we build the tool:

$ cmake ..
$ make -j <threads>

Optionally, you can run the unit tests:

$ make test

All executables (see next section) are stored in the build/bin directory.

Usage

MTEval currently has three executables:

  • mteval-corpus - corpus-wise evaluation
  • mteval-sentence - sentence-wise evaluation
  • mteval-pairwise - pairwise bootstrap resampling

mteval-corpus and mteval-sentence require one reference corpus and one hypothesis corpus to compute the quality of the hypothesis.

mteval-pairwise requires one reference corpus and two hypothesis corpora to compute the statistical significance of the first hypothesis against the second.

For the examples, we use the small data set described below:

data/ref:
a b c d e
a b c d e
a b c d e
a b c d e

data/hyp1:
a b c d e
a b c d e f
a c d e
a b x d e

data/hyp2:
a b c d e
a b c d e f g
a b c d e
a b x d e

Then we run the example commands below and obtain the following results:

$ cd /path/to/mteval
$ build/bin/mteval-corpus -e BLEU RIBES -r data/ref -h data/hyp1
BLEU=0.666113	RIBES=0.969124

$ build/bin/mteval-sentence -e BLEU RIBES -r data/ref -h data/hyp1
BLEU=1.000000	RIBES=1.000000
BLEU=0.759836	RIBES=0.955443
BLEU=0.000000	RIBES=0.975310
BLEU=0.000000	RIBES=0.945742

$ build/bin/mteval-pairwise -i 1000 -s 100 -e BLEU RIBES -r data/ref -h data/hyp{1,2}
BLEU: p=0.986000 (14/1000) RIBES: p=0.089000 (911/1000)

(Note that the results of mteval-pairwise vary randomly within a certain range.)
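
For intuition, this is a paired bootstrap test in the style of Koehn (2004): assuming -i sets the number of iterations and -s the sample size, it draws -i random samples of -s sentences each (with replacement), scores both hypotheses on every sample, and reports how often the first hypothesis wins; in the output above, hypothesis 1 won 14 of 1000 BLEU samples, giving p = 1 - 14/1000 = 0.986. The following minimal C++ sketch illustrates the idea; for simplicity it compares mean per-sentence scores, whereas mteval aggregates each metric's own statistics, so the function and its name are illustrative, not mteval's API.

// Minimal sketch of pairwise bootstrap resampling (after Koehn, 2004).
// Illustrative only; not mteval's actual implementation.
#include <cstddef>
#include <random>
#include <vector>

// Returns p: the fraction of bootstrap samples in which system 1 does
// NOT outperform system 2 (smaller p favors system 1).
// scores1 and scores2 hold one per-sentence score per system and are
// assumed to have equal length.
double pairedBootstrap(const std::vector<double>& scores1,
                       const std::vector<double>& scores2,
                       int iterations,    // assumed meaning of -i
                       int sampleSize) {  // assumed meaning of -s
  std::mt19937 rng(std::random_device{}());
  std::uniform_int_distribution<std::size_t> pick(0, scores1.size() - 1);
  int wins = 0;
  for (int i = 0; i < iterations; ++i) {
    double sum1 = 0.0, sum2 = 0.0;
    for (int j = 0; j < sampleSize; ++j) {
      const std::size_t k = pick(rng);  // same sentences for both systems
      sum1 += scores1[k];
      sum2 += scores2[k];
    }
    if (sum1 > sum2) ++wins;
  }
  return 1.0 - static_cast<double>(wins) / iterations;
}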

Some evaluation metrics have parameters (e.g., the maximum n-gram length or the smoothing constant for BLEU). You can set these parameters using the :param=value notation:

$ build/bin/mteval-corpus -e BLEU:ngram=5:smooth=1 -r data/ref -h data/hyp1
BLEU=0.676009

$ build/bin/mteval-corpus -e BLEU:smooth=1 -r data/ref -h data/hyp1
BLEU=0.696471

Omitted parameters take their default values.
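
Judging from the statistics shown in the next example, smooth appears to add its value to both the match and total counts of every n-gram precision with n > 1: with smooth=1, the precisions become 18/20, (12+1)/(16+1), (7+1)/(12+1), and (4+1)/(8+1), and their geometric mean (the brevity penalty is 1 here, since the hypothesis and reference lengths are equal) reproduces BLEU=0.696471.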

If you need the internal statistics of each evaluation metric, you can use the --output-stats option of mteval-corpus and mteval-sentence:

$ build/bin/mteval-corpus --output-stats -e BLEU -r data/ref -h data/hyp1 \
  | tr '\t' '\n'
BLEU=0.666113
BLEU:len:hyp=20
BLEU:len:ref=20
BLEU:ngram:1:hyp=20
BLEU:ngram:1:match=18
BLEU:ngram:2:hyp=16
BLEU:ngram:2:match=12
BLEU:ngram:3:hyp=12
BLEU:ngram:3:match=7
BLEU:ngram:4:hyp=8
BLEU:ngram:4:match=4
BLEU:samples=4
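
These statistics are sufficient to reproduce the corpus score. As a sanity check, the stand-alone C++ sketch below (hypothetical code, not part of mteval) recomputes the BLEU value above under the standard definition: the geometric mean of the four n-gram precisions times a brevity penalty.

// Recompute the corpus BLEU above from its --output-stats values.
// Hypothetical stand-alone sketch; not part of mteval itself.
#include <cmath>
#include <cstdio>

int main() {
  const double lenHyp = 20.0, lenRef = 20.0;       // BLEU:len:{hyp,ref}
  const double hyp[]   = {20.0, 16.0, 12.0, 8.0};  // BLEU:ngram:%d:hyp
  const double match[] = {18.0, 12.0,  7.0, 4.0};  // BLEU:ngram:%d:match

  // Geometric mean of the n-gram precisions, accumulated in log space.
  double logPrec = 0.0;
  for (int n = 0; n < 4; ++n) logPrec += std::log(match[n] / hyp[n]) / 4.0;

  // Brevity penalty: 1 if the hypothesis is no shorter than the reference.
  const double bp =
      lenHyp >= lenRef ? 1.0 : std::exp(1.0 - lenRef / lenHyp);

  std::printf("BLEU=%f\n", bp * std::exp(logPrec));  // prints BLEU=0.666113
  return 0;
}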

Type mteval-*** --help to see more information about each command.

List of Metrics

  • BLEU

    • Identifier: BLEU
    • Parameters:
      • ngram: maximum n-gram length (default: 4)
      • smooth: additional counts for >1-gram (default: 0)
    • Statistics:
      • len:hyp: number of words in hypothesis sentences.
      • len:ref: number of words in reference sentences.
      • ngram:%d:hyp: number of n-grams in the hypothesis sentence.
      • ngram:%d:match: number of matched n-grams.
      • samples: number of evaluation samples.
  • NIST

    • Identifier: NIST
    • Parameters:
      • ngram: maximum n-gram length (default: 5)
    • Statistics:
      • len:hyp: number of words in hypothesis sentences.
      • len:ref: number of words in reference sentences.
      • ngram:%d:hyp: number of n-grams in the hypothesis sentence.
      • ngram:%d:match: cumulative weighted n-gram matches.
      • samples: number of evaluation samples.
  • RIBES

    • Identifier: RIBES
    • Parameters:
      • alpha: weight of unigram precision (default: 0.25)
      • beta: weight of brevity penalty (default: 0.1)
    • Statistics:
      • brevity: cumulative brevity penalty for each evaluation sample.
      • nkt: cumulative Kendall's tau for each evaluation sample.
      • prec: cumulative unigram precision for each evaluation sample.
      • samples: number of evaluation samples.
      • score: cumulative RIBES score for each evaluation sample.
  • Word Error Rate

    • Identifier: WER
    • Parameters (see the edit-distance sketch after this list):
      • substitute: weight of substituting a hyp word for a ref word (default: 1.0)
      • insert: weight of inserting a hyp word (default: 1.0)
      • delete: weight of deleting a hyp word (default: 1.0)
    • Statistics:
      • distance: cumulative Levenshtein distance for each evaluation sample.
      • samples: number of evaluation samples.
      • score: cumulative WER score for each evaluation sample.
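
The WER parameters above map onto a weighted edit (Levenshtein) distance between the reference and the hypothesis. Below is a minimal dynamic-programming sketch under the usual definition; the function is hypothetical, and mteval's internals (for instance, how the score is normalized) may differ.

// Weighted Levenshtein distance between tokenized sentences, using the
// per-operation weights listed above. Minimal illustrative sketch.
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

double weightedEditDistance(const std::vector<std::string>& ref,
                            const std::vector<std::string>& hyp,
                            double substitute = 1.0, double insert = 1.0,
                            double del = 1.0) {
  const std::size_t R = ref.size(), H = hyp.size();
  // d[i][j]: minimum cost of transforming ref[0..i) into hyp[0..j).
  std::vector<std::vector<double>> d(R + 1, std::vector<double>(H + 1, 0.0));
  for (std::size_t i = 1; i <= R; ++i) d[i][0] = d[i - 1][0] + del;
  for (std::size_t j = 1; j <= H; ++j) d[0][j] = d[0][j - 1] + insert;
  for (std::size_t i = 1; i <= R; ++i) {
    for (std::size_t j = 1; j <= H; ++j) {
      const double sub =
          d[i - 1][j - 1] + (ref[i - 1] == hyp[j - 1] ? 0.0 : substitute);
      d[i][j] = std::min({sub,                   // substitution (or a match)
                          d[i][j - 1] + insert,  // insert a hyp word
                          d[i - 1][j] + del});   // deletion
    }
  }
  // WER is conventionally this distance divided by the reference length.
  return d[R][H];
}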

Contributors

  • Yusuke Oda (@odashi) - Most coding

We are counting on more contributions from you.

Contact

If you find an issue, please contact Y. Oda:

  • @odashi_t on Twitter (faster than E-Mail)
  • yus.takara (at) gmail.com


Issues

Problem: BLEU becomes 0

ref: This is a .
hyp: This is a test .
When I compute BLEU for this pair with the mteval-sentence tool, I run into the problem that BLEU comes out as 0 (more precisely, 0.537285). Is there any workaround?

Documentation Required

Currently there is no documentation showing how to use mteval.
Specifically, it would be nice to have a description of the strings that can be given to the "-e" option.
