
mlops.net's Introduction

Hi there 👋

mlops.net's People

Contributors

anoojnair, aslotte, brett-parker, dcostea, dependabot-preview[bot], dependabot[bot], gitter-badger, lqdev, memsranga, seankilleen, ssa3512, walternative, willvelida


mlops.net's Issues

Rename Methods Returning Task<T>

Some method names don't make it clear that they're asynchronous. For example, the RetrieveEntity method returns Task<TEntity>. To make the asynchrony explicit, consider renaming it to RetrieveEntityAsync, and do the same for any other methods that return Task<T>.
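As a sketch (the parameter name here is an illustrative stand-in, not the actual signature):

// Before: the name hides that the call is asynchronous
Task<TEntity> RetrieveEntity(string partitionKey);

// After: the Async suffix signals a Task-returning method at the call site
Task<TEntity> RetrieveEntityAsync(string partitionKey);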

CreateExperiment creates duplicates in Azure Storage

Describe the bug
When calling CreateExperiment, duplicate experiments are created if one with the same name already exists in Azure Table Storage.

To Reproduce
Steps to reproduce the behavior:

  1. Call CreateExperiment two times in a row using Azure Storage

Expected behavior
I would expect that if an experiment with the same name already exists, no new experiment is created and the existing one's id is returned instead.
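A minimal sketch of that get-or-create behavior, assuming hypothetical GetExperimentByNameAsync and InsertExperimentAsync methods on the metadata store:

public async Task<Guid> CreateExperimentAsync(string experimentName)
{
    // Return the existing experiment's id instead of inserting a duplicate
    var existing = await metaDataStore.GetExperimentByNameAsync(experimentName); // hypothetical lookup
    if (existing != null) return existing.Id;

    return await metaDataStore.InsertExperimentAsync(experimentName); // hypothetical insert
}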

Add PR Template

Should contain

  • Which issue it fixes (e.g. Fixes #1234)
  • Description of change
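A minimal template (the wording is only a suggestion) could look like:

## Description
<!-- What does this change do, and why? -->

## Related issue
Fixes #<issue number>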

Add ability to get the best run in experiment based on a given metric

We need the ability to get the best run in an experiment based on a specific metric.
For example, let's say we value the F1 score for a given machine learning model we are training (contained in an experiment). Each time we train the model (run), we want to know if this model is better than a previously recorded model or not.

The reason we want this is so we don't need to upload every model that we don't care about.

I'm thinking something like this

public bool IsBestRun(Guid runId, string metricName)
{
    // 1. Fetch the given metric value for that run
    var candidateValue = GetMetricValue(runId, metricName); // hypothetical helper
    // 2. Given the experiment that run is a part of, fetch the best value for that metric
    var bestValue = GetBestMetricValueForExperiment(runId, metricName); // hypothetical helper
    // 3. Compare and return whether this is the best run
    return candidateValue > bestValue;
}

Add xmldoc inheritdoc to interface implementations

We should add /// <inheritdoc/> to all interface implementation methods/properties so that they pick up the base documentation in the XML doc comments when they are referenced directly instead of through the interface.
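For example (the interface and member here are illustrative stand-ins):

public interface IMetaDataStore
{
    /// <summary>Creates an experiment and returns its id.</summary>
    Task<Guid> CreateExperimentAsync(string experimentName);
}

public class AzureTableMetaDataStore : IMetaDataStore
{
    /// <inheritdoc/>
    public Task<Guid> CreateExperimentAsync(string experimentName)
        => throw new NotImplementedException(); // implementation elided
}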

Tracking hyperparameters associated with a run.

Is your feature request related to a problem? Please describe.
Along with capturing the run and the corresponding metrics, we should capture the run duration as well.
Also, is there a way to log the hyperparameters used to train a model the same way we log the metrics, using a generic method? Is there a method or property in ML.NET that returns a list of hyperparameters?

Describe the solution you'd like
Add two new properties on the Run class for capturing run duration and hyperparameters.
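A rough sketch of the shape this could take (property names and types are assumptions):

public class Run
{
    // Existing members elided

    // How long the training run took
    public TimeSpan RunDuration { get; set; }

    // Hyperparameters used for this run, as name/value pairs
    public Dictionary<string, string> HyperParameters { get; set; } = new Dictionary<string, string>();
}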

Describe alternatives you've considered
N/A

Additional context
N/A

Rename MLLifecycleManager to MLOpsContext

Is your feature request related to a problem? Please describe.
MLLifecycleManager is a mouthful. That's a problem.

Describe the solution you'd like
Easier terms to understand and enunciate are ModelContext and IModelContext.
As part of this issue, we would rename MLLifecycleManager to ModelContext

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Additional context
Add any other context or screenshots about the feature request here.

Add regression example

We should add an example solution showing how this SDK can be used. It should probably go hand in hand with a documentation page as well.
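A rough sketch of what such an example could cover: standard ML.NET training paired with this SDK's tracking calls. The lifecycle calls at the end reuse method names from the other issues here and are assumptions, not a final API; lifeCycleManager is assumed to be configured elsewhere.

using Microsoft.ML;
using Microsoft.ML.Data;

// Train a simple single-feature regression model with ML.NET
var mlContext = new MLContext();
var data = mlContext.Data.LoadFromTextFile<HouseData>("housing.csv", separatorChar: ',', hasHeader: true);

var pipeline = mlContext.Transforms.Concatenate("Features", nameof(HouseData.Size))
    .Append(mlContext.Regression.Trainers.Sdca(labelColumnName: nameof(HouseData.Price)));

var model = pipeline.Fit(data);
var metrics = mlContext.Regression.Evaluate(model.Transform(data), labelColumnName: nameof(HouseData.Price));

// Track the run and its metric with this SDK (assumed method names)
var runId = await lifeCycleManager.CreateRunAsync("HousePriceRegression");
await lifeCycleManager.LogMetricAsync(runId, "RSquared", metrics.RSquared);

public class HouseData
{
    [LoadColumn(0)] public float Size { get; set; }
    [LoadColumn(1)] public float Price { get; set; }
}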

Add summary comments on public variables and methods for intellisense

Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

Describe the solution you'd like
In order to provide good IntelliSense for our NuGet package, we need to add good XML documentation for our public properties and methods.
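For example (the member is illustrative):

/// <summary>
/// The unique identifier of the run.
/// </summary>
public Guid RunId { get; set; }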

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Additional context
Add any other context or screenshots about the feature request here.

Add multiclass classification example

Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

Describe the solution you'd like
Add an example of multiclass classification.

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Additional context
Add any other context or screenshots about the feature request here.

Add an MLOpsBuilder to create a configured instance of MLLifeCycleManager

Is your feature request related to a problem? Please describe.
The initial setup of MLLifeCycleManager for small test scenarios seems problematic in that you are required to set up a backing store before use, or it throws an exception. It seems to me there should be a default implementation that requires zero configuration, perhaps an in-memory, dictionary-based store.

Describe the solution you'd like
Add in-memory implementations of IMetaDataStore and IModelRepository; these will be useful for testing scenarios in which the user does not need to persist data to permanent storage. Make these the default implementations instead of throwing an exception via EnsureStorageProviderConfigured on every call to MLLifeCycleManager.
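A rough sketch of the builder shape this could take (UseInMemoryStores and the other names are assumptions, not a final API):

var lifeCycleManager = new MLOpsBuilder()
    .UseInMemoryStores() // hypothetical default: dictionary-backed IMetaDataStore/IModelRepository
    .Build();

// No storage configuration required before the first call
var experimentId = await lifeCycleManager.CreateExperimentAsync("MyExperiment");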

Describe alternatives you've considered
Alternatively, we could use the SQLite implementation as the default, as it does not require any configuration; however, it persists to disk, which might be undesirable for some testing scenarios.

Additional context
N/A

Add support to track the training time (run duration) for a model.

Is your feature request related to a problem? Please describe.
Along with capturing the run and the corresponding metrics, we should capture training time as well. Also refer to #76.

Describe the solution you'd like
Add a new property to capture training time.

Describe alternatives you've considered
N/A

Additional context
N/A

Add created date to IExperiment

Is your feature request related to a problem? Please describe.
I think it would be useful to know when one first started running an experiment.

Describe the solution you'd like
Add a new property for CreatedDate on IExperiment and its associated implementations.
Note that this property should be immutable; i.e., when we start a new run for an experiment, this property should not be updated.
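A minimal sketch, assuming a get-only property is enough to keep it immutable after creation:

public interface IExperiment
{
    // Set once when the experiment is first created; never updated by subsequent runs
    DateTime CreatedDate { get; }
}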

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Additional context
Add any other context or screenshots about the feature request here.

Sketch simple design for client UI

The purpose of this ticket is to draw a very simple sketch of what the UI for the web client could look like.

I'm thinking the simplest possible.

We have the following layers

  • Experiment, Runs, Metrics.

In the future we want to register models and deploy them, but we can think about that part of the UI later.

Add ability to create experiment and run in one call

Is your feature request related to a problem? Please describe.
Currently, we need two calls to create a run: one to create an experiment and one to create a new run.

Describe the solution you'd like
It would be nice to have a method like:

public async Task<Guid> CreateRunAsync(string experimentName)
{
    // Get or create the experiment, then create a run under it and return the run id
    var experimentId = await CreateExperimentAsync(experimentName);
    return await CreateRunAsync(experimentId); // assumed overload taking an experiment id
}

The method would create the experiment, then a run, and return the run id.

Describe alternatives you've considered
N/A

Add ability to associate run with git commit or comment

Is your feature request related to a problem? Please describe.
While out running (that's when my ideas come), I thought of Sammy's question about how we would know which code was used to train a specific model. Given that a training run currently happens in e.g. a GitHub Action, or locally with no GitHub commit at all, I can see that we could easily lose track of which run/artifact belonged to which code commit, i.e. which model pipeline.

Describe the solution you'd like
A simple solution for us to start with is to add two columns on the run entity, both optional.

  • GitHub commit hash
  • Comments field

This would allow us to attach the GitHub commit hash to the run, and, when running locally, a comment on e.g. what changes one is trying. Ideally we would then be able to link the web client to the repo in which the model was trained, so that you could click on a run and automatically bring up the Git commit (e.g. the PR) from which the code was trained.
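A sketch of the two optional fields on the run entity (names are assumptions):

public class Run
{
    // Optional: the commit the training code was built from
    public string GitCommitHash { get; set; }

    // Optional: free-form note, e.g. for local runs without a commit
    public string Comment { get; set; }
}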

Describe alternatives you've considered
Open for suggestions.

Add support to get best run and metrics that are zero-optimized

Is your feature request related to a problem? Please describe.
Our current IsBestRun method tries to find the best run for a given metric by finding the largest value. This works well for 99% or so of cases, but a metric like log loss should be closer to zero to be a good value.

Describe the solution you'd like
Add the ability to define what a good run looks like for a given metric.
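One possible shape, sketched with an assumed optimization-direction enum:

public enum MetricGoal
{
    Maximize, // e.g. accuracy, F1 score
    Minimize  // e.g. log loss, mean absolute error
}

// Compare a candidate value against the current best according to the metric's goal
public static bool IsBetter(double candidate, double best, MetricGoal goal)
    => goal == MetricGoal.Maximize ? candidate > best : candidate < best;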

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Additional context
Add any other context or screenshots about the feature request here.

Add support for SQLite

Registering models and metadata in the cloud is not always feasible.
To that end, we want to add a provider that stores them on-premises.

Add ability to run unit tests in CI pipeline

Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

Describe the solution you'd like
Run all unit tests in the solution.

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Additional context
Add any other context or screenshots about the feature request here.

Add support to log artifact on a file share or local path

Similar to #3, we want the ability to upload models/artifacts to a local file share.

We can either make this the default when using SQLite, or create separate extension methods so the user can decide whether to store artifacts locally (e.g. you may want to mix Azure for metadata with local storage for models).

Add support to log an artifact during run

During a run we want to upload a model to a container in Azure.

I'm envisioning the structure as a container named ModelRepository, with each model named after the unique GUID of its run.
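A minimal sketch using the Azure.Storage.Blobs client (note that Azure container names must be lowercase, hence modelrepository below; the connectionString and runId variables are assumed):

using Azure.Storage.Blobs;

// Upload the serialized model to the container, named after the run id
var container = new BlobContainerClient(connectionString, "modelrepository");
await container.CreateIfNotExistsAsync();

var blob = container.GetBlobClient($"{runId}.zip");
await blob.UploadAsync("model.zip", overwrite: true);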

Add CI/CD pipeline for NuGet packaging and deployment

Background

We need to automate the build and deployment of our NuGet packages.
Each package needs a consistent build number.

We should probably run the dotnet pack command with versioning after each CI build (we can use different versions for PR builds vs. master builds).
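For example, the pack step could look like this (the version format is only a suggestion):

dotnet pack --configuration Release /p:PackageVersion=0.1.0-preview.42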

The deployment to nuget.org should most likely live in a separate workflow file, as it should be manually triggered once we have a release.

Add workflow to publish packages to nuget.org

Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

Describe the solution you'd like
Add a GitHub workflow to publish packages from a release branch to nuget.org.

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Additional context
Add any other context or screenshots about the feature request here.

Add setup instructions on how to get started with the repo

Is your feature request related to a problem? Please describe.
To get started with the repo, we should provide instructions or scripts to install all dependencies needed to run unit/integration tests and build the project.

E.g.

  • Setup SQLite (script or instructions)
  • Setup Azure Storage Account (ARM Template)
