
cleanvision's Introduction

cleanlab helps you clean data and labels by automatically detecting issues in an ML dataset. To facilitate machine learning with messy, real-world data, this data-centric AI package uses your existing models to estimate dataset problems that can be fixed to train even better models.

# cleanlab works with **any classifier**. Yup, you can use PyTorch/TensorFlow/OpenAI/XGBoost/etc.
cl = cleanlab.classification.CleanLearning(sklearn.YourFavoriteClassifier())

# cleanlab finds data and label issues in **any dataset**... in ONE line of code!
label_issues = cl.find_label_issues(data, labels)

# cleanlab trains a robust version of your model that works more reliably with noisy data.
cl.fit(data, labels)

# cleanlab estimates the predictions you would have gotten if you had trained with *no* label issues.
cl.predict(test_data)

# A universal data-centric AI tool, cleanlab quantifies class-level issues and overall data quality, for any dataset.
cleanlab.dataset.health_summary(labels, confident_joint=cl.confident_joint)

Get started with: tutorials, documentation, examples, and blogs.


Examples of various issues in a Cat/Dog dataset automatically detected by cleanlab via this code:

        lab = cleanlab.Datalab(data=dataset, label_name="column_name_for_labels")
        # Fit any ML model, get its feature_embeddings & pred_probs for your data
        lab.find_issues(features=feature_embeddings, pred_probs=pred_probs)
        lab.report()

So fresh, so cleanlab

cleanlab cleans your data's labels via state-of-the-art confident learning algorithms, published in this paper and blog. See some of the datasets cleaned with cleanlab at labelerrors.com. This data-centric AI tool helps you find data and label issues, so you can train reliable ML models.

cleanlab is:

  1. backed by theory -- with provable guarantees of exact label noise estimation, even with imperfect models.
  2. fast -- code is parallelized and scalable.
  3. easy to use -- one line of code to find mislabeled data, bad annotators, outliers, or train noise-robust models.
  4. general -- works with any dataset (text, image, tabular, audio,...) + any model (PyTorch, OpenAI, XGBoost,...)

Examples of incorrect given labels in various image datasets, found and corrected using cleanlab. While these examples come from image datasets, cleanlab also works for text, audio, and tabular data.

Run cleanlab

cleanlab supports Linux, macOS, and Windows and runs on Python 3.8+.

Practicing data-centric AI can look like this:

  1. Train initial ML model on original dataset.
  2. Utilize this model to diagnose data issues (via cleanlab methods) and improve the dataset.
  3. Train the same model on the improved dataset.
  4. Try various modeling techniques to further improve performance.

Most folks jump straight from Step 1 to Step 4, but you may achieve big gains without any change to your modeling code by using cleanlab! Continuously boost performance by iterating Steps 2-4 (and try to evaluate with cleaned data).

Use cleanlab with any model for most ML tasks

All features of cleanlab work with any dataset and any model. Yes, any model: PyTorch, TensorFlow, Keras, JAX, HuggingFace, OpenAI, XGBoost, scikit-learn, etc. If you use an sklearn-compatible classifier, all cleanlab methods work out-of-the-box.

It’s also easy to use your favorite non-sklearn-compatible model.

cleanlab can find label issues from any model's predicted class probabilities if you can produce them yourself.

Some cleanlab functionality may require your model to be sklearn-compatible. There's nothing you need to do if your model already has .fit(), .predict(), and .predict_proba() methods. Otherwise, just wrap your custom model into a Python class that inherits from sklearn.base.BaseEstimator:

from sklearn.base import BaseEstimator

class YourFavoriteModel(BaseEstimator):  # inherits the sklearn estimator API
    def __init__(self):
        pass  # ensure this re-initializes parameters for neural net models
    def fit(self, X, y, sample_weight=None):
        pass  # train the model on features X and labels y
    def predict(self, X):
        pass  # return predicted class labels for X
    def predict_proba(self, X):
        pass  # return predicted class probabilities for X
    def score(self, X, y, sample_weight=None):
        pass  # return a scalar metric (e.g. accuracy) on (X, y)

This inheritance lets you apply a wide range of sklearn functionality, like hyperparameter optimization, to your custom model. Now you can use your model with every method in cleanlab. Here's one example:

from cleanlab.classification import CleanLearning
cl = CleanLearning(clf=YourFavoriteModel())  # has all the same methods of YourFavoriteModel
cl.fit(train_data, train_labels_with_errors)
cl.predict(test_data)

Want to see a working example? Here’s a compliant PyTorch MNIST CNN class

More details are provided in documentation of cleanlab.classification.CleanLearning.

Note that some libraries give you sklearn-compatibility for free. For PyTorch, check out the skorch Python library, which wraps your PyTorch model into an sklearn-compatible model (example). For TensorFlow/Keras, check out our Keras wrapper. Many libraries also already offer a special scikit-learn API, for example: XGBoost or LightGBM.
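
For instance, wrapping a PyTorch network via skorch can look like this minimal sketch (MyTorchNet is a placeholder for your own torch.nn.Module; X and labels stand in for your feature array and noisy class labels):

from skorch import NeuralNetClassifier
from cleanlab.classification import CleanLearning

# MyTorchNet is a placeholder for your own torch.nn.Module classifier
model_skorch = NeuralNetClassifier(MyTorchNet, max_epochs=10)  # sklearn-compatible wrapper
cl = CleanLearning(clf=model_skorch)
label_issues = cl.find_label_issues(X, labels)  # X: features, labels: noisy class labels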


cleanlab is useful across a wide variety of Machine Learning tasks. Specific tasks this data-centric AI solution offers dedicated functionality for include:

  1. Binary and multi-class classification
  2. Multi-label classification (e.g. image/document tagging)
  3. Token classification (e.g. entity recognition in text)
  4. Regression (predicting a numerical column in a dataset)
  5. Image segmentation (images with per-pixel annotations)
  6. Object detection (images with bounding box annotations)
  7. Classification with data labeled by multiple annotators
  8. Active learning with multiple annotators (suggest which data to label or re-label to improve model most)
  9. Outlier detection (identify atypical data that appears out of distribution)

For other ML tasks, cleanlab can still help you improve your dataset if appropriately applied. Many practical applications are demonstrated in our Example Notebooks.

Citation and related publications

cleanlab is based on peer-reviewed research. Here are relevant papers to cite if you use this package:

Confident Learning (JAIR '21)
@article{northcutt2021confidentlearning,
    title={Confident Learning: Estimating Uncertainty in Dataset Labels},
    author={Curtis G. Northcutt and Lu Jiang and Isaac L. Chuang},
    journal={Journal of Artificial Intelligence Research (JAIR)},
    volume={70},
    pages={1373--1411},
    year={2021}
}
Rank Pruning (UAI '17)
@inproceedings{northcutt2017rankpruning,
    author={Northcutt, Curtis G. and Wu, Tailin and Chuang, Isaac L.},
    title={Learning with Confident Examples: Rank Pruning for Robust Classification with Noisy Labels},
    booktitle = {Proceedings of the Thirty-Third Conference on Uncertainty in Artificial Intelligence},
    series = {UAI'17},
    year = {2017},
    location = {Sydney, Australia},
    numpages = {10},
    url = {http://auai.org/uai2017/proceedings/papers/35.pdf},
    publisher = {AUAI Press},
}
Label Quality Scoring (ICML '22)
@inproceedings{kuan2022labelquality,
    title={Model-agnostic label quality scoring to detect real-world label errors},
    author={Kuan, Johnson and Mueller, Jonas},
    booktitle={ICML DataPerf Workshop},
    year={2022}
}
Out-of-Distribution Detection (ICML '22)
@inproceedings{kuan2022ood,
    title={Back to the Basics: Revisiting Out-of-Distribution Detection Baselines},
    author={Kuan, Johnson and Mueller, Jonas},
    booktitle={ICML Workshop on Principles of Distribution Shift},
    year={2022}
}
Token Classification Label Errors (NeurIPS '22)
@inproceedings{wang2022tokenerrors,
    title={Detecting label errors in token classification data},
    author={Wang, Wei-Chen and Mueller, Jonas},
    booktitle={NeurIPS Workshop on Interactive Learning for Natural Language Processing (InterNLP)},
    year={2022}
}
CROWDLAB for Data with Multiple Annotators (NeurIPS '22)
@inproceedings{goh2022crowdlab,
    title={CROWDLAB: Supervised learning to infer consensus labels and quality scores for data with multiple annotators},
    author={Goh, Hui Wen and Tkachenko, Ulyana and Mueller, Jonas},
    booktitle={NeurIPS Human in the Loop Learning Workshop},
    year={2022}
}
ActiveLab: Active learning with data re-labeling (ICLR '23)
@inproceedings{goh2023activelab,
    title={ActiveLab: Active Learning with Re-Labeling by Multiple Annotators},
    author={Goh, Hui Wen and Mueller, Jonas},
    booktitle={ICLR Workshop on Trustworthy ML},
    year={2023}
}
Incorrect Annotations in Multi-Label Classification (ICLR '23)
@inproceedings{thyagarajan2023multilabel,
    title={Identifying Incorrect Annotations in Multi-Label Classification Data},
    author={Thyagarajan, Aditya and Snorrason, Elías and Northcutt, Curtis and Mueller, Jonas},
    booktitle={ICLR Workshop on Trustworthy ML},
    year={2023}
}
Detecting Dataset Drift and Non-IID Sampling (ICML '23)
@inproceedings{cummings2023drift,
    title={Detecting Dataset Drift and Non-IID Sampling via k-Nearest Neighbors},
    author={Cummings, Jesse and Snorrason, Elías and Mueller, Jonas},
    booktitle={ICML Workshop on Data-centric Machine Learning Research},
    year={2023}
}
Detecting Errors in Numerical Data (ICML '23)
@inproceedings{zhou2023errors,
    title={Detecting Errors in Numerical Data via any Regression Model},
    author={Zhou, Hang and Mueller, Jonas and Kumar, Mayank and Wang, Jane-Ling and Lei, Jing},
    booktitle={ICML Workshop on Data-centric Machine Learning Research},
    year={2023}
}
ObjectLab: Mislabeled Images in Object Detection Data (ICML '23)
@inproceedings{tkachenko2023objectlab,
    title={ObjectLab: Automated Diagnosis of Mislabeled Images in Object Detection Data},
    author={Tkachenko, Ulyana and Thyagarajan, Aditya and Mueller, Jonas},
    booktitle={ICML Workshop on Data-centric Machine Learning Research},
    year={2023}
}
Label Errors in Segmentation Data (ICML '23)
@inproceedings{lad2023segmentation,
    title={Estimating label quality and errors in semantic segmentation data via any model},
    author={Lad, Vedang and Mueller, Jonas},
    booktitle={ICML Workshop on Data-centric Machine Learning Research},
    year={2023}
}

To understand/cite other cleanlab functionality not described above, check out our additional publications.

Other resources

Easy mode: No-code Data Improvement

While this open-source package finds data issues, its utility depends on you having a good existing ML model plus an interface to efficiently fix these issues in your dataset. Providing both, Cleanlab Studio is a Data Curation platform to find and fix problems in any {image, text, tabular} dataset. Cleanlab Studio automatically runs optimized algorithms from this package on top of AutoML & Foundation models fit to your data, and presents detected issues (plus AI-suggested fixes) in an intelligent data correction interface.

Try it for free! Adopting Cleanlab Studio enables users of this package to:

  • work 100x faster (1 min to analyze your raw data with zero code or ML work; optionally use Python API)
  • produce better-quality data (10x more types of issues auto-detected & corrected via built-in AI)
  • accomplish more (auto-label data, deploy ML instantly, audit LLM inputs/outputs, moderate content, ...)

The modern AI pipeline automated with Cleanlab Studio

Join our community

License

Copyright (c) 2017 Cleanlab Inc.

cleanlab is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

cleanlab is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

See GNU Affero General Public LICENSE for details. You can email us to discuss licensing: [email protected]

Commercial licensing

Commercial licensing is available for teams and enterprises that want to use cleanlab in production workflows, but are unable to open-source their code as is required by the current license. Please email us: [email protected]

cleanvision's People

Contributors

aenlemmea, bluelul, clu0, cmauck10, developer0hye, elisno, jwmueller, kadam-tushar, krmayankb, lemurpwned, manulpatel, sanjanag, smttsp, ulya-tkch, wirthual, yimingc9


cleanvision's Issues

create a mypy type for imagelab.info

imagelab.info is an Imagelab class attribute designed to be a nested dictionary containing information relevant to the computation of issue types.
Right now, the type assigned to imagelab.info is Dict[str, Any], which is very loose.
The task is to define a tighter type alias for this variable that replaces Any with the actual types that can appear in the dictionary.
Check this link for more details on how to create type aliases.
Refer to the documentation for more details on imagelab.info.
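
A minimal sketch of what such an alias might look like; the concrete value types listed here are assumptions that would need to be checked against what imagelab.info actually stores:

from typing import Dict, List, Union

import pandas as pd

# Assumed value types -- audit imagelab.info to confirm what actually appears
InfoValue = Union[int, float, str, List[str], pd.Series, pd.DataFrame]
IssueTypeInfo = Dict[str, InfoValue]  # info computed for a single issue type
ImagelabInfo = Dict[str, IssueTypeInfo]  # maps issue type name -> its info dict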

Add instructions on how to add a new issue type

Add detailed instructions on how a user/developer can add a new issue_type. Some points to consider:

  • Restrictions on where the new class is defined and when it is registered
  • Clarify that the user's self-defined IssueManager should have its issue_name attribute set to the same key string they specify here (otherwise some users may be confused and think "Custom" is a special built-in key with special properties, rather than self-defined)

add methods to list issue types

Add two helpful user-facing class methods to Imagelab:

  1. list_default_issue_types() -- prints out names of issues that will be checked for by default.
  2. list_possible_issue_types() -- prints out names of all possible issues this class can check for.

To see the value of 1, consider the case where a user wants to alter the hyperparameters of one specific issue type but run all the other issue types with default hyperparameters (we might even consider building more dedicated functionality to make such a workflow more convenient); see the sketch below.

CC @elisno
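
A rough sketch of what these two methods could look like (the registry attributes and issue-type strings below are placeholders, not actual cleanvision internals):

class Imagelab:
    # Assumed registries: the real class should derive these from its
    # registered IssueManagers rather than hardcoding strings like these
    _DEFAULT_ISSUE_TYPES = ["dark", "light", "blurry", "exact_duplicates", "near_duplicates"]
    _ALL_ISSUE_TYPES = _DEFAULT_ISSUE_TYPES + ["odd_aspect_ratio", "grayscale"]

    @classmethod
    def list_default_issue_types(cls) -> None:
        """Print names of issues that will be checked for by default."""
        print("\n".join(cls._DEFAULT_ISSUE_TYPES))

    @classmethod
    def list_possible_issue_types(cls) -> None:
        """Print names of all possible issues this class can check for."""
        print("\n".join(cls._ALL_ISSUE_TYPES))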

Resolve type check issue in sorted()

src/cleanvision/issue_managers/image_property_issue_manager.py:162: error: Returning Any from function declared to return "Union[SupportsDunderLT[Any], SupportsDunderGT[Any]]" [no-any-return]
src/cleanvision/issue_managers/duplicate_issue_manager.py:137: error: Returning Any from function declared to return "Union[SupportsDunderLT[Any], SupportsDunderGT[Any]]" [no-any-return]
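
These errors typically arise when a lambda sort key returns a value mypy infers as Any. One possible fix, sketched here with illustrative data rather than the actual cleanvision code, is to cast the key's return value:

from typing import cast

items = [{"score": 0.3}, {"score": 0.1}]  # stand-in for the untyped data being sorted

# Casting the key's return value keeps mypy from seeing it return Any
sorted_items = sorted(items, key=lambda x: cast(float, x["score"]))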

Table describing all types of issues cleanvision can identify

We want one source of truth that describes all the types of checks this library can run and the name of the "key" that a user can specify to solely detect that type of issue.

  • For now can temporarily put this table in the readme.
  • Later move this table to the documentation website.

add badges to readme

Once things like codecov are all set up, we can mostly reuse the same types of badges from the cleanlab library.

Don't forget to add a badge for pypi once that is set up as well.


Unit tests for duplicate check

Add unit tests to ensure we can:

  • separately search for exact-duplicates without searching for near-duplicates
  • separately search for near-duplicates without searching for exact-duplicates
  • search for both in a single call to imagelab.find_issues()
  • search for exact duplicates in initial call to imagelab.find_issues() and then search for near duplicates in subsequent call to this same function from the same imagelab
  • search for near duplicates in initial call to imagelab.find_issues() and then search for exact duplicates in subsequent call to this same function from the same imagelab
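
One of these tests might look like the following sketch (the issue_types keys, constructor argument, and issue_summary columns are assumptions about the public API):

from cleanvision import Imagelab

def test_exact_then_near_duplicates(tiny_dataset_path: str) -> None:
    # tiny_dataset_path: a pytest fixture pointing at a small folder of images
    lab = Imagelab(data_path=tiny_dataset_path)
    lab.find_issues(issue_types={"exact_duplicates": {}})
    assert "exact_duplicates" in lab.issue_summary["issue_type"].values
    # A subsequent call on the same Imagelab should add near-duplicate results
    lab.find_issues(issue_types={"near_duplicates": {}})
    assert "near_duplicates" in lab.issue_summary["issue_type"].values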

Performance optimization (improve efficiency / runtimes / memory usage)

Try to speed up the runtime of this library on large datasets. This can be done via:

  • speeding up individual checks
  • reusing more computation across checks
  • using parallelism

This is a great issue to get started contributing to this repo! There are many ways to achieve a speedup (speeding up individual checks will be the easiest). It is easy to verify a speedup via basic benchmarking that also ensures the results remain the same.
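
Verifying a speedup can be as simple as timing find_issues before and after a change on the same dataset; a minimal sketch (the data path is a placeholder):

import time

from cleanvision import Imagelab

lab = Imagelab(data_path="path/to/benchmark_images")  # placeholder path
start = time.perf_counter()
lab.find_issues()
print(f"find_issues took {time.perf_counter() - start:.2f}s")
# Re-run after your change and compare lab.issues to confirm results match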

Add mypy static typing to code

Current code pushes are not passing mypy type checking during CI because they lack type definitions. Add static type annotations to improve code readability and clarity.

Decide how to handle issues with very high prevalence

If an issue occurs in >= 50% of the images in a dataset, it may not be interesting to tell the user about it by default. So consider omitting such issues from the report() even though they would be very highly ranked if sorting by prevalence alone.
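
A sketch of the proposed filtering, assuming issue_summary has "issue_type" and "num_images" columns (the actual schema may differ):

import pandas as pd

MAX_PREVALENCE = 0.5  # omit issue types affecting >= 50% of images

def filter_report(issue_summary: pd.DataFrame, num_images: int) -> pd.DataFrame:
    # issue_summary is assumed to have "issue_type" and "num_images" columns
    prevalence = issue_summary["num_images"] / num_images
    return issue_summary[prevalence < MAX_PREVALENCE]  # only these get reported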

Add guide for contributing new IssueChecks

Adding new issue checks should be standardized into a step-by-step process. The process should be detailed and shared in DEVELOPMENT.md to make it more accessible for interested contributors.

Improve codecov by adding more unit tests

Help us add more unit tests for this package to increase the code coverage. You can look at recent codecov reports (from PRs or commits to main branch) to see what lines of code are not currently covered by any unit test, and add a new unit test that executes these lines.

Make sure all unit tests you add run very quickly (always use a tiny dataset)

Add compelling images to readme

Should show examples of a few different types of issues imagelab can detect (the most compelling ones like near-duplicate, blurry, dark, etc. -- whatever examples are listed in the readme).

In future should include an image example of each issue type in the documentation.

New issue type: Detect abnormally big/small images

Could also be based on width/height alone.

Threshold to determine is_issue (for overly large images) can be set at:

    size(image) > T * median(size(image) for image in dataset)

where, say, T = 10. We don't want the threshold to be percentage-based, since that would always flag some images in any dataset.

The score could just be the percent of images that are bigger/smaller than this image, or it could be size(image) / size(median_image), suitably renormalized to [0, 1].
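
A sketch of this check with illustrative names (T and the renormalization follow the proposal above):

import numpy as np

def odd_size_check(widths: np.ndarray, heights: np.ndarray, T: float = 10.0):
    areas = widths * heights
    ratio = areas / np.median(areas)
    is_issue = (ratio > T) | (ratio < 1.0 / T)  # abnormally big or small
    score = np.minimum(ratio, 1.0 / ratio)  # in (0, 1]; smaller = more anomalous
    return score, is_issue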

Show off the .issues, .info attributes of Imagelab in the demo

At the end of the “Run Imagelab with default settings” section of the demo notebook, it would be nice to showcase that:

If you want to learn more about your dataset, you can check out the following attributes:

print(imagelab.issue_summary)

This is a dataframe where each row is ... each column ... (explain the issue_summary DF)

print(imagelab.issues)

This is a dataframe where each row is ... each column ... (explain the issues DF)

print(imagelab.info["some_interesting_key"])

This is a dict full of miscellaneous information about your dataset. It is quite large so here we only show "some_interesting_key" but this dict contains various other dataset information in its other keys.

Expand issue checking to labeled images

Currently the package handles identifying issues in unlabeled datasets only. Add functionality to handle taking in labels and running label-image specific issue checks on them (i.e. spurious correlations).

[Discussion] Consider maybe switching to a Reporter class

Suggestion from @elisno (original link)

In the future, would you be open to exploring an OO approach where the report data (issues, issue_summary, filtered results) is encapsulated in a separate object? The class would still take sensible defaults and allow us to add more parameters to it, while this method keeps a shorter/more stable signature.

from dataclasses import dataclass
from typing import Any, Optional

import pandas as pd

@dataclass
class Reporter:
    issues: pd.DataFrame  # Imagelab.issues still keeps track of filepaths, right?
    issue_summary: pd.DataFrame
    ...

    def __post_init__(self) -> None:
        self._report: Any = None  # initialize an empty "report" object

    def report(self) -> "Reporter":
        ...

    def to_string(self) -> str:
        ...

    def to_html(self) -> str:
        ...

    def __repr__(self) -> str:
        ...

class Imagelab:

    def report(self, reporter: Optional[Reporter] = None) -> Reporter:
        ...
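
Hypothetical usage under this design (method names as sketched above):

reporter = imagelab.report()  # would return a Reporter instead of printing directly
print(reporter.to_string())
html = reporter.to_html()  # e.g. to render the report in a notebook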
