
CodeBLEU


This repository contains an unofficial CodeBLEU implementation that supports Linux, macOS (including M-series chips), and Windows. It is available through PyPI and the evaluate library.

Available for: Python, C, C#, C++, Java, JavaScript, PHP, Go, Ruby, Rust.


The code is based on the original CodeXGLUE/CodeBLEU and the updated version from XLCoST/CodeBLEU. It has been refactored, tested, built for macOS and Windows, and improved in several ways to enhance usability.

Metric Description

An ideal evaluation metric should consider the grammatical correctness and the logic correctness. We propose weighted n-gram match and syntactic AST match to measure grammatical correctness, and introduce semantic data-flow match to calculate logic correctness.
[from the CodeXGLUE repo]

In a nutshell, CodeBLEU is a weighted combination of n-gram match (BLEU), weighted n-gram match (BLEU-weighted), AST match and data-flow match scores.

The metric has shown higher correlation with human evaluation than BLEU and accuracy metrics.
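The combination is a plain weighted sum of the four sub-scores. A minimal sketch, using the default equal weights and the sub-score values printed in the usage example later in this README (the final score matches up to the rounding of those printed values):

```python
# Sketch: CodeBLEU as a weighted sum of its four sub-scores,
# using the library's default equal weights.
weights = (0.25, 0.25, 0.25, 0.25)
scores = (
    0.1041,  # ngram_match_score (BLEU)
    0.1109,  # weighted_ngram_match_score (BLEU-weighted)
    1.0,     # syntax_match_score (AST match)
    1.0,     # dataflow_match_score
)
codebleu = sum(w * s for w, s in zip(weights, scores))
print(round(codebleu, 4))
```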

Installation

This library requires compiling shared-object (.so) files with tree-sitter, so it is platform-dependent.
It is currently available for Linux (manylinux), macOS, and Windows with Python 3.8+.

The metric is available as a pip package and can be installed as follows:

pip install codebleu

or directly from the git repo (requires an internet connection to download tree-sitter):

pip install git+https://github.com/k4black/codebleu.git

You also have to install the tree-sitter grammar for each language you need (e.g. python, rust, etc.):

pip install tree-sitter-python

Or you can install all languages:

pip install codebleu[all]

Note: At the moment (May 2024) precompiled languages are NOT available for arm64 (M1) macOS, so you have to install and build the tree-sitter languages manually, for example:

pip install git+https://github.com/tree-sitter/tree-sitter-python.git

Usage

from codebleu import calc_codebleu

prediction = "def add ( a , b ) :\n return a + b"
reference = "def sum ( first , second ) :\n return second + first"

result = calc_codebleu([reference], [prediction], lang="python", weights=(0.25, 0.25, 0.25, 0.25), tokenizer=None)
print(result)
# {
#   'codebleu': 0.5537, 
#   'ngram_match_score': 0.1041, 
#   'weighted_ngram_match_score': 0.1109, 
#   'syntax_match_score': 1.0, 
#   'dataflow_match_score': 1.0
# }

where calc_codebleu takes the following arguments:

  • references (list[str] or list[list[str]]): reference code
  • predictions (list[str]): predicted code
  • lang (str): code language; see codebleu.AVAILABLE_LANGS for the available languages (python, c, c_sharp, cpp, javascript, java, php, go, and ruby at the moment)
  • weights (tuple[float, float, float, float]): weights of ngram_match, weighted_ngram_match, syntax_match, and dataflow_match respectively; defaults to (0.25, 0.25, 0.25, 0.25)
  • tokenizer (callable): splits a code string into tokens; defaults to s.split()

and returns a dict[str, float] with the following fields:

  • codebleu: the final CodeBLEU score
  • ngram_match_score: ngram_match score (BLEU)
  • weighted_ngram_match_score: weighted_ngram_match score (BLEU-weighted)
  • syntax_match_score: syntax_match score (AST match)
  • dataflow_match_score: dataflow_match score
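Since the default tokenizer is plain whitespace splitting, a custom callable can be passed to normalize tokens before matching. A minimal sketch, where the lowercasing tokenizer is a hypothetical example and not part of the library:

```python
def default_tokenizer(s):
    # Equivalent to the library default: split on whitespace.
    return s.split()

def lowercase_tokenizer(s):
    # Hypothetical custom tokenizer: lowercase first, so that
    # n-gram matching becomes case-insensitive.
    return s.lower().split()

code = "def Add ( a , b ) :"
print(default_tokenizer(code))    # ['def', 'Add', '(', 'a', ',', 'b', ')', ':']
print(lowercase_tokenizer(code))  # ['def', 'add', '(', 'a', ',', 'b', ')', ':']
```

Either function can then be passed as the tokenizer argument to calc_codebleu.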

Alternatively, you can use the k4black/codebleu metric from HuggingFace Spaces (the codebleu package is required):

import evaluate
metric = evaluate.load("k4black/codebleu")

prediction = "def add ( a , b ) :\n return a + b"
reference = "def sum ( first , second ) :\n return second + first"

result = metric.compute(references=[reference], predictions=[prediction], lang="python", weights=(0.25, 0.25, 0.25, 0.25))

Feel free to check the HF Space with online example: k4black/codebleu

Contributing

Contributions are welcome!
If you have any questions, suggestions, or bug reports, please open an issue on GitHub.

Make your own fork and clone it:

git clone https://github.com/k4black/codebleu

For development, you need to install the library with all precompiled languages and the test extras
(requires an internet connection to download tree-sitter):

python -m pip install -e .[all,test]
python -m pip install -e .\[all,test\]  # for macOS (zsh requires escaping the brackets)

To run the tests, use pytest:

python -m pytest

To perform a style check, run:

python -m isort codebleu --check
python -m black codebleu --check
python -m ruff codebleu
python -m mypy codebleu

License

This project is licensed under the terms of the MIT license.

Citation

The official CodeBLEU paper can be cited as follows:

@misc{ren2020codebleu,
      title={CodeBLEU: a Method for Automatic Evaluation of Code Synthesis}, 
      author={Shuo Ren and Daya Guo and Shuai Lu and Long Zhou and Shujie Liu and Duyu Tang and Neel Sundaresan and Ming Zhou and Ambrosio Blanco and Shuai Ma},
      year={2020},
      eprint={2009.10297},
      archivePrefix={arXiv},
      primaryClass={cs.SE}
}

codebleu's People

Contributors

dependabot[bot], fasterinnerlooper, k4black, maximus12793, yijunyu


codebleu's Issues

Can CodeBLEU be used to evaluate the similarity between codes in different languages?

Thank you very much for your implementation of the CodeBLEU evaluation method, it helped me a lot!
At the same time I have some questions:

  1. Can CodeBLEU be used to evaluate the similarity between codes in different languages?
  2. Is CodeBLEU evaluated at the corpus level like BLEU? If it is used to compare just two code snippets, is it accurate?

Looking forward to your reply! Thanks!

The dataflow score is 0

WARNING:root:WARNING: There is no reference data-flows extracted from the whole corpus, and the data-flow match score degenerates to 0. Please consider ignoring this score.

I get this warning when calculating the score, and I want to know why it happens.
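As the warning itself suggests, the dataflow component can be ignored when no reference data-flows are extracted. One hedged workaround (an editorial sketch, not an official recommendation of the library) is to redistribute the dataflow weight over the remaining components:

```python
def drop_dataflow_weight(weights):
    # Zero out the dataflow weight and renormalize the remaining
    # three weights so they still sum to 1.0.
    w_ngram, w_weighted, w_syntax, _ = weights
    total = w_ngram + w_weighted + w_syntax
    return (w_ngram / total, w_weighted / total, w_syntax / total, 0.0)

print(drop_dataflow_weight((0.25, 0.25, 0.25, 0.25)))
```

The resulting tuple can then be passed as the weights argument to calc_codebleu.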

Failure during HuggingFace Training loop (calc_codeblue doesn't exist)

Hi @k4black ,

I have the following error when performing evaluation using CodeBleu in my HuggingFace training loop. Do you know what the issue could be?

[cut from longer output]
   result = metric.compute(predictions=decoder_preds, references=decoder_labels, lang=lang)
  File "/usr/local/lib/python3.9/dist-packages/evaluate/module.py", line 462, in compute
    output = self._compute(**inputs, **compute_kwargs)
  File "/notebooks/cache/huggingface/modules/evaluate_modules/metrics/k4black--codebleu/0510675d8d105d7f64b0458864c7f4b7ec3995ff230e89d27d92b9a7a635654d/codebleu.py", line 113, in _compute
    return self.codebleu_package.calc_codebleu(
AttributeError: module 'codebleu' has no attribute 'calc_codebleu'

wrong codeBLEU metric value

Hi,

I am comparing the code lines:

codeline1= 'a = !((f >> 4) & 0x01);'
codeline2= 'a=!(((f >> 4) & 1U)!=0?true:false;)'

the metric is
CodeBLEU score: {'codebleu': 0.4474481492943273, 'ngram_match_score': 0.21711852081087685, 'weighted_ngram_match_score': 0.21711852081087685, 'syntax_match_score': 0.5555555555555556, 'dataflow_match_score': 0.8}

When I change the code to
codeline2='a=!(((f >> 4) & 1U)!=0?false:true;)'
the metric is
CodeBLEU score: {'codebleu': 0.4474481492943273, 'ngram_match_score': 0.21711852081087685, 'weighted_ngram_match_score': 0.21711852081087685, 'syntax_match_score': 0.5555555555555556, 'dataflow_match_score': 0.8}

I expected the dataflow match score to change, as the functionality is changed
