
Snorkel


Programmatically Build and Manage Training Data

Announcement

The Snorkel team is now focusing their efforts on Snorkel Flow, an end-to-end AI application development platform based on the core ideas behind Snorkel—you can check it out here or join us in building it!

The Snorkel project started at Stanford in 2015 with a simple technical bet: that it would increasingly be the training data, not the models, algorithms, or infrastructure, that decided whether a machine learning project succeeded or failed. Given this premise, we set out to explore the radical idea that you could bring mathematical and systems structure to the messy and often entirely manual process of training data creation and management, starting by empowering users to programmatically label, build, and manage training data.

To say that the Snorkel project succeeded and expanded beyond what we had ever expected would be an understatement. The basic goals of a research repo like Snorkel are to provide a minimum viable framework for testing and validating hypotheses. Four years later, we’ve been fortunate to do not just this, but to develop and deploy early versions of Snorkel in partnership with some of the world’s leading organizations like Google, Intel, Stanford Medicine, and many more; author over sixty peer-reviewed publications on our findings around Snorkel and related innovations in weak supervision modeling, data augmentation, multi-task learning, and more; be included in courses at top-tier universities; support production deployments in systems that you’ve likely used in the last few hours; and work with an amazing community of researchers and practitioners from industry, medicine, government, academia, and beyond.

However, we realized increasingly–from conversations with users in weekly office hours, workshops, online discussions, and industry partners–that the Snorkel project was just the very first step. The ideas behind Snorkel change not just how you label training data, but so much of the entire lifecycle and pipeline of building, deploying, and managing ML: how users inject their knowledge; how models are constructed, trained, inspected, versioned, and monitored; how entire pipelines are developed iteratively; and how the full set of stakeholders in any ML deployment, from subject matter experts to ML engineers, are incorporated into the process.

Over the last year, we have been building the platform to support this broader vision: Snorkel Flow, an end-to-end machine learning platform for developing and deploying AI applications. Snorkel Flow incorporates many of the concepts of the Snorkel project with a range of newer techniques around weak supervision modeling, data augmentation, multi-task learning, data slicing and structuring, monitoring and analysis, and more, all of which integrate in a way that is greater than the sum of its parts–and that we believe makes ML truly faster, more flexible, and more practical than ever before.

Moving forward, we will be focusing our efforts on Snorkel Flow. We are extremely grateful for all of you that have contributed to the Snorkel project, and are excited for you to check out our next chapter here.


Getting Started

The quickest way to familiarize yourself with the Snorkel library is to walk through the Get Started page on the Snorkel website, followed by the full-length tutorials in the Snorkel tutorials repository. These tutorials demonstrate a variety of tasks, domains, labeling techniques, and integrations that can serve as templates as you apply Snorkel to your own applications.

Installation

Snorkel requires Python 3.6 or later. To install Snorkel, we recommend using pip:

pip install snorkel

or conda:

conda install snorkel -c conda-forge

For information on installing from source and contributing to Snorkel, see our contributing guidelines.

Details on installing with conda

The following example commands give some more color on installing with conda. These commands assume that your conda installation is Python 3.6, and that you want to use a virtual environment called snorkel-env.

# [OPTIONAL] Create and activate a virtual environment called "snorkel-env"
conda create --yes -n snorkel-env python=3.6
conda activate snorkel-env

# We specify PyTorch here to ensure compatibility, but it may not be necessary.
conda install pytorch==1.1.0 -c pytorch
conda install snorkel==0.9.0 -c conda-forge

A quick note for Windows users

If you're using Windows, we highly recommend using Docker (you can find an example in our tutorials repo) or the Linux subsystem. We've done limited testing on Windows, so if you want to contribute instructions or improvements, feel free to open a PR!

Discussion

Issues

We use GitHub Issues for posting bugs and feature requests — anything code-related. Just make sure you search for related issues first and use our Issues templates. We may ask for contributions if a prompt fix doesn't fit into the immediate roadmap of the core development team.

Contributions

We welcome contributions from the Snorkel community! This is likely the fastest way to get a change you'd like to see into the library.

Small contributions can be made directly in a pull request (PR). If you would like to contribute a larger feature, we recommend first creating an issue with a proposed design for discussion. For ideas about what to work on, we've labeled specific issues as help wanted.

To set up a development environment for contributing back to Snorkel, see our contributing guidelines. All PRs must pass the continuous integration tests and receive approval from a member of the Snorkel development team before they will be merged.

Community Forum

For broader Q&A, discussions about using Snorkel, tutorial requests, etc., use the Snorkel community forum hosted on Spectrum. We hope this will be a venue for you to interact with other Snorkel users — please don't be shy about posting!

Announcements

To stay up-to-date on Snorkel-related announcements (e.g. version releases, upcoming workshops), subscribe to the Snorkel mailing list. We promise to respect your inboxes — communication will be sparse!

Twitter

Follow us on Twitter @SnorkelAI.


Snorkel's Issues

Create a RegexMatch entity mention operator

This will be almost identical to the existing DictionaryMatch operator.

This RegexMatch operator, if essentially copied from DictionaryMatch, will trivially support e.g. POS-tag sequence matches (using match_attrib=poses); however, that use case could also be wrapped and presented as a separate operator...?
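A minimal sketch of what such an operator could look like, assuming a DictionaryMatch-style interface where match_attrib selects which token attribute the pattern is applied to (the class, method, and parameter names here are illustrative, not the final API):

import re

class RegexMatch:
    """Match mentions whose selected attribute sequence matches a regex.

    Hypothetical sketch mirroring the DictionaryMatch interface: match_attrib
    picks the token-level attribute (e.g. "words" or "poses") to match on.
    """
    def __init__(self, pattern, match_attrib="words", flags=re.IGNORECASE):
        self.regex = re.compile(pattern, flags)
        self.match_attrib = match_attrib

    def apply(self, sentence):
        # Assumes sentence behaves like a dict of per-token attribute lists.
        seq = " ".join(sentence[self.match_attrib])
        for m in self.regex.finditer(seq):
            yield m.start(), m.end()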

Reload tags in MindTagger

When we reopen MindTagger, we can keep the same sample as before, but the tags are not reloaded. @netj is there a nice way to do this with the API, like how we retrieve the tags, or should we form and dump tags.json to the instance directory?

Write documentation

Example notebooks are the de facto documentation. Example notebooks are not actually documentation.

Error analysis workflow 1

  1. User gets random subsample of candidates in Mindtagger, and labels them
  2. User gets statistics over the labeling functions, some w.r.t. this label set (e.g. empirical accuracy, etc.)
  3. Learn model
  4. Get precision stats
  5. Log (?) and repeat

Stats to show for labeling function development (a sketch of computing these follows the list):

  • Coverage
  • Overlap
  • Conflict
  • Empirical accuracy
  • Show labeling functions and/or candidates that are conflict-heavy (plus labeling functions with low empirical accuracy)
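A rough sketch of how coverage, overlap, and conflict could be computed from a label matrix L of shape (candidates x labeling functions), where 0 means abstain; this is an illustration of the definitions above, not the library's implementation. Empirical accuracy additionally needs the gold labels from step 1:

import numpy as np

def lf_summary(L, gold=None):
    """Coverage, overlap, and conflict per labeling function; empirical
    accuracy too if a gold label vector is supplied. Sketch only."""
    n, m = L.shape
    labeled = L != 0
    coverage = labeled.mean(axis=0)                       # fraction of candidates labeled
    num_labels = labeled.sum(axis=1, keepdims=True)
    overlap = (labeled & (num_labels > 1)).mean(axis=0)   # labeled by this LF and at least one other
    conflict = np.zeros(m)
    accuracy = np.full(m, np.nan)
    for j in range(m):
        others = np.delete(L, j, axis=1)
        # Another LF assigned a different, non-abstain label to the same candidate.
        disagree = (others != 0) & (others != L[:, [j]])
        conflict[j] = (labeled[:, j] & disagree.any(axis=1)).mean()
        if gold is not None and labeled[:, j].any():
            accuracy[j] = (L[labeled[:, j], j] == gold[labeled[:, j]]).mean()
    return coverage, overlap, conflict, accuracy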

Questions:

  • Should we be prescriptive, and automatically (opaquely) split their label set into a "label fn. validation set" and a test set (as a default option which can be turned off)?
  • How to integrate ground truth that they bring in externally?

Add dependency tree helper functionality

E.g. the user should be able to access a path_between attribute of a Relation object, etc. This can currently be done with treedlib; however, I am trying to decouple these two repos. We could bring this back in under the covers in a more limited form (e.g. Relation objects initialize with several dependency path attributes like path_between, but don't expose the direct XPath mechanisms to the user).
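One possibility, shown here purely as an illustration (treedlib-free, using networkx, and assuming the dependency parse is available as a list of (head index, dependent index) pairs), is to precompute the path at construction time:

import networkx as nx

class Relation:
    """Candidate relation between two entity mentions in a sentence.

    Illustrative sketch: dependency-path attributes are computed once at
    construction, so the direct tree/XPath machinery is never exposed.
    dep_edges is assumed to be a list of (head_index, dependent_index) pairs.
    """
    def __init__(self, e1_idx, e2_idx, dep_edges):
        graph = nx.Graph(dep_edges)
        # Token indices on the shortest dependency path between the two entity heads.
        self.path_between = nx.shortest_path(graph, e1_idx, e2_idx)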

Create a simple DocParser class

Desired initial functionalities:

  • Take as input a directory filepath, a single filename, a filename pattern
  • Strip XML, HTML (i.e. strip tags without corrupting basic sentence structures)

Ideally there would be some simple way to extend this so that users could write basic XML/HTML parser modules (e.g. to grab metadata, preserve section structure, etc.) via some Python library (e.g. lxml, beautifulsoup). This kind of solution would not be performant, but could potentially be very simple... A rough sketch follows.
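The sketch below assumes BeautifulSoup for tag stripping; the class name matches the issue, but everything else is illustrative:

import glob
import os
from bs4 import BeautifulSoup

class DocParser:
    """Accepts a directory, a single filename, or a filename pattern, and
    yields (document name, tag-stripped text) pairs. Simple, not performant."""
    def __init__(self, path):
        if os.path.isdir(path):
            self.files = sorted(os.path.join(path, f) for f in os.listdir(path))
        elif os.path.isfile(path):
            self.files = [path]
        else:
            self.files = sorted(glob.glob(path))

    def parse(self):
        for fp in self.files:
            with open(fp, encoding="utf-8") as f:
                # A space separator strips tags without gluing adjacent
                # sentences together.
                text = BeautifulSoup(f.read(), "html.parser").get_text(" ")
            yield os.path.basename(fp), text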

Save MindTagger Output

Refine saving and loading of annotation dumps from MindTagger during DSR refinement. "items.csv" is dumped in the MindTagger directory under a unique folder ID, and tags can be fetched using get_mindtagger_tags() on the MindTagger instance, but the metrics associated with these values should be wrapped up in some sort of "classification_report"-style function.

Add "smart" Viewer sampling

Rather than a fully random sample, do we want a mix of the following? (A rough sampling sketch follows the list.)

  • Some all-abstained candidates?
  • Some high conflict candidates?
  • Some low conflict candidates?
  • Some candidates with probability close to 0.5?
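A rough sketch of mixing these buckets, assuming a label matrix L (0 = abstain) and model marginals; the bucket definitions, proportions, and function name are placeholders:

import numpy as np

def smart_sample(L, probs, n=60, seed=0):
    """Return candidate indices mixing abstained, high-conflict, and
    near-0.5-probability buckets in roughly equal parts. Sketch only."""
    rng = np.random.default_rng(seed)

    def pick(idx, k):
        k = min(k, len(idx))
        return rng.choice(idx, size=k, replace=False) if k else np.array([], dtype=int)

    abstained = np.where((L != 0).sum(axis=1) == 0)[0]
    # "High conflict": more than one distinct non-abstain label on the candidate.
    n_distinct = np.array([len(set(row[row != 0])) for row in L])
    high_conflict = np.where(n_distinct > 1)[0]
    uncertain = np.argsort(np.abs(probs - 0.5))[: n // 3]
    return np.concatenate([pick(abstained, n // 3), pick(high_conflict, n // 3), uncertain])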

Fix parentheses encoding

Not a big deal, but parentheses show up as -LRB- and -RRB- in MindTagger, which looks a lot like a gene to the underinformed.
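A minimal fix on the display side, assuming the standard PTB bracket escape tokens (the mapping is standard; the helper name is made up):

# Map PTB-escaped bracket tokens back to their characters before display.
PTB_BRACKETS = {"-LRB-": "(", "-RRB-": ")", "-LSB-": "[", "-RSB-": "]",
                "-LCB-": "{", "-RCB-": "}"}

def unescape_brackets(tokens):
    return [PTB_BRACKETS.get(t, t) for t in tokens]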

Add raw (untokenized) text as attribute to Sentence object

For regex matching, it would be very helpful to have access to the text of a single sentence without any tokenization. When tagging chemical names, for example, we frequently get these tokenization type artifacts:
Li ( 3 ) PS ( 4 ) vs. Li(3)PS(4)
Some of these can be fixed with modified regexes, but it would be nice to operate on the original text itself. As far as mapping back to tokens for entity tags, we could just consider a match as anything that overlaps with the original span.
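The overlap mapping described above might look like this, assuming per-token character offsets are available on the sentence (a sketch, not an existing helper):

def chars_to_tokens(match_start, match_end, token_offsets):
    """Map a character-level regex match on the raw sentence text back to
    token indices, counting any token that overlaps the matched span.

    token_offsets: list of (char_start, char_end) pairs, one per token."""
    return [i for i, (s, e) in enumerate(token_offsets)
            if s < match_end and e > match_start]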

Output marginals + calibration plots + histograms

We need to output the basic deepdive calibration plots (notebooks are perfect for this!), as well as potentially some other histograms which guide users towards correct error analysis / debugging procedures.

We also need to output the marginals, which is a minor sub-function to add in.
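A rough notebook-style sketch of the histogram and calibration plot (the helper and its signature are hypothetical, not DeepDive's implementation):

import matplotlib.pyplot as plt
import numpy as np

def plot_marginals(probs, gold=None, bins=10):
    """Histogram of predicted marginals, plus an accuracy-per-bucket
    calibration curve when gold labels are available. Sketch only."""
    n_axes = 2 if gold is not None else 1
    fig, axes = plt.subplots(1, n_axes, figsize=(5 * n_axes, 4))
    axes = np.atleast_1d(axes)
    axes[0].hist(probs, bins=bins)
    axes[0].set(xlabel="predicted probability", ylabel="# candidates")
    if gold is not None:
        edges = np.linspace(0, 1, bins + 1)
        centers, accs = [], []
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (probs >= lo) & (probs < hi)
            if mask.any():
                centers.append((lo + hi) / 2)
                accs.append(gold[mask].mean())
        axes[1].plot(centers, accs, marker="o")
        axes[1].plot([0, 1], [0, 1], linestyle="--")   # perfect-calibration reference
        axes[1].set(xlabel="predicted probability", ylabel="empirical accuracy")
    plt.tight_layout()
    return fig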

DB / DeepDive connectivity

One simple way to have db connectivity in the notebook is our favorite extension, ipython-sql. We could initially just build some helper functions around this (or any other psql connector).

However, in DDL we pass around an object containing the entire dataset (Relations); this would allow us to connect to the database in a way that is opaque to the user, turning this Relations object into essentially a cache for the DeepDive db...
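For the helper-function route, something as simple as the following would do (connection string, table, and function name are placeholders, not an existing API):

import pandas as pd
from sqlalchemy import create_engine

def load_relations(conn_str="postgresql://localhost/deepdive",
                   query="SELECT * FROM relations"):
    """Pull the Relations table into memory so it can act as a local cache
    for the DeepDive database. Hypothetical helper."""
    engine = create_engine(conn_str)
    return pd.read_sql(query, engine)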

What else?

Parallelize nltk CoreNLP parser in simple way

Emphasis on simple: this is not going to be an optimal preprocessing setup either way; we just want to make it a bit better through simple means that don't require any additional installs, configs, etc.
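The "simple means" could be as little as a process pool around the existing parse call, assuming that call is a picklable top-level function (nothing here is the actual implementation):

from multiprocessing import Pool

def parse_docs_parallel(docs, parse_fn, n_procs=4):
    """Split documents across worker processes, each running the existing,
    unchanged single-process parse function. No extra installs or configs."""
    with Pool(n_procs) as pool:
        return pool.map(parse_fn, docs)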

Refactor Extractions so it isn't a big state machine?

Would be good to separate the concepts of Extractions as a data container/operator and the learning algorithms it implements? Should also probably implement Relation and Entity using a proxy pattern.

@ajratner let's talk before deciding either way on this?

edit: Adding question marks so it sounds like a question?
