
disagree's Introduction

disagree - Assessing Annotator Disagreements in Python

This library aims to address annotation disagreements in manually labelled data.

I started it as a project to develop some understanding of Python packaging and workflow. (This is the primary reason for the messy release history and commit logs, for which I apologise.) But I hope this will be useful for a wider audience as well.

Install

To install, set up a virtualenv and run:

$ python3 -m pip install --index-url https://pypi.org/simple/ disagree

or

$ pip3 install disagree

To update to the latest version do:

$ pip3 install --upgrade disagree

Build from source

# test first
python -m unittest discover test
# then build
python setup.py sdist

Background

Whilst working in NLP, I have repeatedly worked with manually labelled datasets, and have thus had to evaluate the quality of the agreement between the annotators. In my (limited) experience of doing this, I have encountered a number of approaches that have been helpful. In this library, I aim to group those together for people to use.

Please suggest any additions/functionalities, and I will try my best to add them.

Summary of features

  • Visualisations

    • Ability to visualise bidisagreements between annotators
    • Ability to visualise agreement statistics
    • Retrieve summaries of numbers of disagreements and their extent
  • Annotation statistics:

    • Joint probability
    • Cohen's kappa
    • Fleiss' kappa
    • Pearson, Spearman, Kendall correlations
    • Krippendorff's alpha

Python examples

Worked examples are provided in the Jupyter notebooks directory.

Documentation

disagree.BiDisagreements(df)

The BiDisagreements class is primarily there for you to visualise disagreements in the form of a matrix, but it has some other small functionalities.

  • df: Pandas DataFrame containing annotator labels

    • Rows: Instances of the data that is labelled
    • Columns: Annotators
    • Element [i, j] is annotator j's label for data instance i.
    • Entries must be integers, floats, strings, or pandas NaN values
  • Methods:

    • agreements_summary()
      • This will print out statistics on the number of instances with no disagreements, the number of bidisagreements, the number of tridisagreements, and the number of instances with worse cases (i.e. four or more distinct labels).
    • agreements_matrix()
      • This will return a matrix of bidisagreements. Do with this what you will! The intention is that you use something like matplotlib to visualise them properly.
      • Element $(i, j)$ is the number of times there is a bidisagreement involving label $i$ and label $j$.
    • labels_to_index()
      • Returns a dictionary mapping label names to indexes used in the agreements_matrix().
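To make the expected input format concrete, here is a hypothetical toy DataFrame in the layout described above (rows are instances, columns are annotators, NaN marks an unlabelled instance), with the bidisagreement count computed directly in pandas as a rough sketch of what agreements_summary() reports:

```python
import pandas as pd

# Hypothetical toy dataset in the documented format:
# rows are labelled instances, columns are annotators,
# None/NaN marks an instance the annotator did not label.
df = pd.DataFrame({
    "a": [0, 1, 1, 0, None],
    "b": [0, 1, 0, 0, 1],
    "c": [0, 1, 1, 0, 1],
})

# A row is a bidisagreement when the annotators who labelled it
# used exactly two distinct labels.
distinct = df.apply(lambda row: row.dropna().nunique(), axis=1)
print((distinct == 1).sum())  # instances with full agreement: 4
print((distinct == 2).sum())  # bidisagreements: 1
```

Note that row 4 still counts as full agreement: the missing label from annotator "a" is dropped before the distinct labels are counted.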

disagree.metrics.Metrics(df)

This module gives you access to a number of metrics typically used for annotation disagreement statistics.

  • Methods:
    • joint_probability(ann1, ann2)

      • Parameter: ann1, string, name of one of the annotators from the DataFrame columns
      • Parameter: ann2, string, name of one of the annotators from the DataFrame columns
      • This gives the joint probability of agreement between ann1 and ann2. You should probably not use this measure for academic purposes, but it is included for completeness.
    • cohens_kappa(ann1, ann2):

      • Parameter: ann1, string, name of one of the annotators from the DataFrame columns
      • Parameter: ann2, string, name of one of the annotators from the DataFrame columns
    • fleiss_kappa()

      • No args
    • correlation(ann1, ann2, measure="pearson")

      • Parameter: ann1, string, name of one of the annotators from the DataFrame columns
      • Parameter: ann2, string, name of one of the annotators from the DataFrame columns
      • Parameter: measure, string, optional
        • Options: (pearson (default), kendall, spearman)
      • This gives you the Pearson, Kendall, or Spearman correlation statistic between the two annotators
    • metric_matrix(func)

      • Returns a matrix of size (num_annotators x num_annotators). Element $(i, j)$ is the statistic value for agreements between annotator $i$ and annotator $j$.
      • Parameter: func, name of function for the metric you want to visualise.
        • Options: (metrics.Metrics.cohens_kappa, metrics.Metrics.joint_probability)
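As an illustration of the statistic that cohens_kappa(ann1, ann2) computes, here is a minimal, self-contained sketch of Cohen's kappa for two annotator columns. The helper function and data are hypothetical, written here for intuition rather than taken from the library's source:

```python
import pandas as pd

def cohens_kappa(df, ann1, ann2):
    """Sketch of Cohen's kappa between two annotator columns,
    dropping instances that either annotator left unlabelled."""
    pair = df.dropna(subset=[ann1, ann2])
    n = pair.shape[0]
    p_o = (pair[ann1] == pair[ann2]).sum() / n    # observed agreement
    p1 = pair[ann1].value_counts(normalize=True)  # ann1's label marginals
    p2 = pair[ann2].value_counts(normalize=True)  # ann2's label marginals
    p_e = (p1 * p2).sum()                         # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Toy example: the two annotators agree on 4 of 5 instances.
df = pd.DataFrame({"a": [0, 0, 1, 1, 1], "b": [0, 0, 1, 1, 0]})
print(round(cohens_kappa(df, "a", "b"), 3))  # 0.615
```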

disagree.metrics.Krippendorff(df)

  • Methods
    • alpha(data_type="nominal")
      • In this library, Krippendorff's alpha can handle four data types, one of which may be passed as data_type:
        • nominal (default)
        • ordinal
        • interval
        • ratio
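For intuition about what alpha(data_type="nominal") measures, here is a minimal, self-contained sketch of Krippendorff's alpha for nominal data built from a coincidence matrix. This is an illustration of the statistic under the documented DataFrame layout, not the library's implementation:

```python
import itertools
from collections import Counter
import pandas as pd

def kripp_alpha_nominal(df):
    """Sketch of Krippendorff's alpha for nominal data:
    alpha = 1 - D_observed / D_expected, via coincidence weights."""
    o = Counter()  # coincidence weights o[(c, k)]
    for _, row in df.iterrows():
        vals = row.dropna().tolist()
        m = len(vals)
        if m < 2:  # units with fewer than two labels are skipped
            continue
        for v1, v2 in itertools.permutations(vals, 2):
            o[(v1, v2)] += 1 / (m - 1)
    n_c = Counter()  # marginal totals per label
    for (c, _), w in o.items():
        n_c[c] += w
    n = sum(n_c.values())
    d_o = sum(w for (c, k), w in o.items() if c != k) / n
    d_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n * (n - 1))
    return 1 - d_o / d_e

# Toy example: two annotators disagree on one of four units.
df = pd.DataFrame({"a": [0, 1, 0, 0], "b": [0, 1, 1, 0]})
print(round(kripp_alpha_nominal(df), 4))  # 0.5333
```

The ordinal, interval, and ratio variants differ only in the distance function applied to each disagreeing pair; the nominal variant simply treats every pair of distinct labels as equally different.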


disagree's Issues

Krippendorff's alpha returns 0 when there's only one disagreement

For a dataset like this:

dataset = {
  "a": [3, 3,    3,    3,    3],
  "b": [3, 3,    3,    3,    3],
  "c": [3, 3,    None, None, 3],
  "d": [3, 3,    3,    3,    1],
  "e": [3, None, 3,    3,    3],
}

Whether Krippendorff's alpha is computed as ordinal or nominal, it returns 0 in both cases.

I don't know if this is an issue of Krippendorff's alpha algorithm itself or the problem with the implementation.

Fleiss Kappa is negative

Fleiss' kappa is usually between 0 and 1. In your implementation, perfect agreement yields 0.0 and anything else is negative.

Issue with kripp.alpha(data_type="ordinal") ?

Hello:

Thank you very much for developing and releasing this package! I am using your latest release, v0.2.5

I am trying to use this to calculate Krippendorff's alpha for ordinal data. However, the code in the attached notebook generates a TypeError.

I have attached a small excel file with a simple test dataset, along with a jupyter notebook that demonstrates the error.

Here is the error output:


TypeError                                 Traceback (most recent call last)
in <module>()
----> 1 kalpha = kripp.alpha(data_type="ordinal")

~/anaconda/envs/py36/lib/python3.6/site-packages/disagree/metrics.py in alpha(self, data_type)
    477
    478         observed_disagreement = self.disagreement(obs_or_exp="observed",
--> 479                                                   data_type=data_type)
    480         expected_disagreement = self.disagreement(obs_or_exp="expected",
    481                                                   data_type=data_type)

~/anaconda/envs/py36/lib/python3.6/site-packages/disagree/metrics.py in disagreement(self, obs_or_exp, data_type)
    448                     delta = self.delta_nominal(str(v1), str(v2))
    449                 elif data_type == "ordinal":
--> 450                     delta = self.delta_ordinal(str(v1), str(v2))
    451                 elif data_type == "interval":
    452                     delta = self.delta_interval(str(v1), str(v2))

~/anaconda/envs/py36/lib/python3.6/site-packages/disagree/metrics.py in delta_ordinal(self, v1, v2)
    419         v1, v2 = float(v1), float(v2)
    420         val = 0
--> 421         for g in range(v1, v2 + 1):
    422             element1 = self.coincidence_matrix_sum[g]
    423             element2 = (self.coincidence_matrix_sum[v1] + self.coincidence_matrix_sum[v2]) / 2

TypeError: 'float' object cannot be interpreted as an integer

test_scores.xlsx
test_ordinal.ipynb.zip

utils missing

Hi,
I get the following error

ModuleNotFoundError                       Traceback (most recent call last)
in <module>
      1 import sys
      2 import disagree
----> 3 from disagree import metrics
      4 import pandas as pd
      5

~/.envs/researcherEnv/lib/python3.6/site-packages/disagree/metrics.py in <module>
     10 from collections import Counter
     11 from tqdm import tqdm
---> 12 from utils import convert_dataframe
     13
     14 from scipy.stats import pearsonr, kendalltau, spearmanr

ModuleNotFoundError: No module named 'utils'

simply when running the imports from your example:
import disagree
from disagree import metrics

How is multi-labelled data supposed to be formatted?

It is unclear if the Krippendorff’s alpha implemented by disagree is able to support multi-labelled data, and if so, in what way it should be formatted. I have tried to pass lists and sets to represent the labels, but I get an Unhashable type error.

Cohen's kappa calculation

In your calculation, you always take the total number of instances num_instances, not the number of instances that ann1 and ann2 have both annotated. So, by your example, for annotators b and c num_instances = 15, but there are only 5 commonly annotated instances.

df = self.df.dropna(subset=[ann1, ann2])
ann1_labels = df[ann1].values.tolist()
ann2_labels = df[ann2].values.tolist()
num_instances = self.df.shape[0]

I think that the last line of the above snippet should not be self.df.shape[0] but df.shape[0]

num_instances = df.shape[0]
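A small pandas demonstration of the difference, using made-up data in which annotators "b" and "c" only overlap on two instances:

```python
import pandas as pd

# Hypothetical data: five instances, but "b" and "c" have
# both labelled only two of them.
df = pd.DataFrame({
    "a": [0, 1, 1, 0, 1],
    "b": [0, 1, None, None, 1],
    "c": [None, 1, 1, 0, 1],
})

pair = df.dropna(subset=["b", "c"])
print(df.shape[0])    # 5 -- what the buggy num_instances counts
print(pair.shape[0])  # 2 -- the instances both annotators labelled
```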

disagree.metrics.Krippendorff(df)

hello,

I need to compute Krippendorff's α with ordinal annotations. I am wondering what the distance function is. Is it a Euclidean distance function? My second question is: can you share a notebook for disagree.metrics.Krippendorff(df)?
