
Machine learning to SQL


Table of Contents

ML2SQL

  1. What is it?
  2. Getting Started
  3. Input
  4. Output
  5. Remarks
  6. Troubleshooting

What is it?

An automated machine learning CLI tool that trains a model, graphs its performance, and saves the model as SQL. It uses interpretable, so-called 'glassbox' models (from interpretml), so the trained models are explainable and interpretable. Because the model is also written out in SQL, it can be put into 'production' in a SQL environment. The tool can be used by anybody, but is aimed at people who want to run a quick analysis and/or deploy a model in a SQL system.

(demo animation: ML2SQL in action)

Philosophy:

  • For a quick analysis:
    • Automated training and model performance testing
    • Feature correlations, feature importance and model performance metrics
    • EBM gives more insight into feature importance than most other models
  • For model deployment in SQL:
    • Outputs the model as SQL code
    • Supports EBM, decision tree and linear/logistic regression
  • Explainable Boosting Machine (EBM) performs on par with other boosting methods while remaining fully explainable


Getting started

Set up
  1. Make sure you have Python >= 3.8 installed
  2. Run: pip install ml2sql
  3. Run: ml2sql init — this creates the folders input/data/, input/configuration/ and trained_models/

Quick Demo
  1. In the terminal, in the root of this folder, run: ml2sql run, then follow the on-screen instructions and select the demo data and config
  2. Check the output in the newly created folder

Quick Usage with Your Own Data
  1. Save a CSV file containing the target and all features in the input/data/ folder (more info on input data)
  2. Run: ml2sql run
  3. Select your CSV file
  4. Select Create a new config and choose the Automatic option (a config file will be created, which you can edit later) (more info on config json)
  5. Select the newly created config
  6. Choose a model (EBM is advised)
  7. Give the model a name
  8. The output will be saved in the folder trained_models/<current_date>_<your_model_name>/
  9. The .sql file in the model folder contains the model written in SQL

Testing a Trained Model on a New Dataset
  1. Make sure the new dataset has the same variables as the dataset the model was trained on (same features and target)
  2. Save the dataset in the input/data/ folder (more info on input data)
  3. In the terminal, in the root of this folder, run: ml2sql check-model
  4. Follow the on-screen instructions
  5. The output will be saved in the folder trained_models/<selected_model>/tested_datasets/<selected_dataset>/


Input

Data

The CSV file containing the data has to fulfill some basic assumptions:

  • The file is saved in the input/data folder
  • The target column has more than 1 unique value
  • For binary classification (a target with 2 unique values), these values are 0 and 1
  • The file name ends in .csv and contains no spaces
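The assumptions above are easy to check programmatically before running the tool. A minimal sketch using only the standard library (this is not part of ml2sql itself; `validate_input_csv` is a hypothetical helper):

```python
import csv
import os

def validate_input_csv(path, target_column):
    """Check a CSV file against the basic input assumptions above.

    Returns a list of problem descriptions (empty list = file looks OK).
    """
    problems = []

    name = os.path.basename(path)
    if not name.endswith(".csv"):
        problems.append("file name should end in .csv")
    if " " in name:
        problems.append("file name should not contain spaces")

    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        if target_column not in (reader.fieldnames or []):
            problems.append(f"target column '{target_column}' not found")
            return problems
        # Collect the unique (string) values of the target column
        values = {row[target_column] for row in reader}

    if len(values) < 2:
        problems.append("target column should have more than 1 unique value")
    elif len(values) == 2 and values != {"0", "1"}:
        problems.append("binary target values should be 0 and 1")

    return problems
```

Running this before `ml2sql run` catches the most common input mistakes early.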

Additional information

  • EBM can handle categorical values (these will be excluded when choosing decision tree or linear/logistic regression)
  • EBM can handle missing values

Configuration json (example)

This file tells the script which column is the target, which columns are the features, and several other parameters for pre- and post-training. You can copy and edit a config file from the existing example in input/configuration/, or select Create a new config in the second step when running ml2sql run.

Configs are saved in input/configuration/.

Configuration file content

features

List with names of the columns which should be used as features

model_params

Dictionary of parameters to pass to the model of choice (optional). Check the chosen model's documentation for the available parameters.

sql_split options:

  • false, outputs the SQL model as one SELECT statement, referencing column aliases within that same statement
  • true, outputs the SQL model as several CTEs; use this if your SQL dialect does not allow column aliases to be referenced within the same SELECT statement
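To illustrate the difference (column and alias names here are made up, not the tool's actual output):

```sql
-- sql_split = false: an alias is referenced in the same SELECT
-- (only works in dialects that allow this)
SELECT
    score_age + score_income AS total_score,
    1 / (1 + EXP(-total_score)) AS probability
FROM input_table;

-- sql_split = true: each step lives in its own CTE,
-- so aliases are only referenced in later steps
WITH scores AS (
    SELECT score_age + score_income AS total_score
    FROM input_table
)
SELECT 1 / (1 + EXP(-total_score)) AS probability
FROM scores;
```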

sql_decimals options:

  • Any positive integer, rounds the 'scores' in the SQL file to that many decimal places. Lowering it can avoid data type overflow problems, but decreases precision.

file_type options (optional):

  • png, feature importance graphs are saved as static .png files (smaller file).
  • html, feature importance graphs are saved as dynamic .html files (bigger file, opens in a browser).

pre_params

cv_type options (optional):

  • timeseriesplit, performs a 5-fold time series split (sklearn implementation)
  • any other value, performs 5-fold stratified cross-validation
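A time series split differs from ordinary cross-validation in that training data always precedes test data, so the model never sees the future. A minimal pure-Python sketch of the idea (the tool itself relies on scikit-learn's implementation; this is only for intuition):

```python
def time_series_splits(n_samples, n_splits=5):
    """Yield (train_indices, test_indices) for an expanding-window
    time series split: each fold trains on all samples that come
    before its test window."""
    test_size = n_samples // (n_splits + 1)
    first_test = n_samples - n_splits * test_size
    for test_start in range(first_test, n_samples, test_size):
        yield (list(range(test_start)),
               list(range(test_start, test_start + test_size)))
```

For 12 rows and 5 splits this yields expanding training windows [0..1], [0..3], ... with the test window always directly after.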

max_rows options:

  • Any positive integer, caps the number of rows used, in order to train faster (as simple as that)

time_sensitive_column options (optional):

  • Name of the date column to do the time series split over
    • only used when cv_type = timeseriesplit

target

Name of target column (required)
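Putting the options above together, a config might look roughly like this. All column names and parameter values here are invented for illustration, and the exact key layout may differ between versions — use Create a new config to generate a valid file:

```json
{
  "target": "churned",
  "features": ["age", "income", "signup_channel"],
  "model_params": {},
  "sql_split": true,
  "sql_decimals": 8,
  "file_type": "png",
  "pre_params": {
    "cv_type": "timeseriesplit",
    "max_rows": 100000,
    "time_sensitive_column": "signup_date"
  }
}
```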


Output (example)

The output consists of 4 parts:

  • Correlation matrix of the input features
  • Feature importance graphs (model specific)
  • Model performance graphs
  • The model itself in pickled and SQL form

Correlation matrices

Can be found in the created model's folder under /feature_info

Pearson Correlation Matrix (Numerical Features)

  • A Pearson correlation matrix for numerical features in the input data.
  • Visualized as a clustermap and saved as numeric_clustermap.png or numeric_clustermap.html.

Cramér's V Correlation Matrix (Categorical Features)

  • A Cramér's V correlation matrix for categorical features (object, category, boolean) in the input data.
  • Visualized as a clustermap and saved as categorical_clustermap.png or categorical_clustermap.html.

Feature importance

Can be found in the created model's folder under /feature_importance

For EBM and logistic/linear regression

  • An overview of the most important features
  • A separate feature importance graph per feature

For Decision tree

  • A graph based on the Gini index

Model performance

Can be found in the created model's folder under /performance

For Classification Models:

  1. Confusion Matrix
  • A confusion matrix is plotted and saved in both static (PNG) and interactive (HTML) formats for binary classification problems.
  • For multiclass classification, separate confusion matrices are plotted for each class.
  2. ROC Curve and Precision-Recall Curve
  • The tool plots the Receiver Operating Characteristic (ROC) curve and Precision-Recall curve for binary classification problems.
  • For multiclass classification, these curves are plotted for each class versus the rest.
  3. Calibration Plot
  • A calibration plot (reliability curve) is generated to assess the calibration of the predicted probabilities.
  4. Probability Distribution Plot
  • A probability distribution plot is created to visualize the distribution of predicted probabilities for the positive and negative classes (binary classification) or for each class (multiclass classification).

For Regression Models:

  1. Scatter Plot of Predicted vs. True Values
  • A scatter plot is generated to compare the predicted values against the true values.
  2. Quantile Error Plot
  • A box plot is created to visualize the prediction error across different quantiles of the true values.
  3. Regression Metrics Table
  • A table summarizing various regression performance metrics, such as:
    • Mean Absolute Error (MAE)
    • Mean Squared Error (MSE)
    • Root Mean Squared Error (RMSE)
    • R-squared
    • Adjusted R-squared
    • Mean Absolute Percentage Error (MAPE)
    • Explained Variance Score (EVS)
    • Max Error
    • Median Absolute Error (MedAE)
    • Mean Squared Log Error (MSLE)
    • Root Mean Squared Log Error (RMSLE)
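Several of these metrics are simple to compute by hand. A self-contained sketch (not ml2sql's own code) for a few of them:

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute a handful of the standard regression metrics listed above."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n          # Mean Absolute Error
    mse = sum(e * e for e in errors) / n           # Mean Squared Error
    rmse = math.sqrt(mse)                          # Root Mean Squared Error
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1 - (mse * n) / ss_tot                    # R-squared
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "R2": r2}
```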

The model

Can be found in the created model's folder under /model

  • A pickled version of the model is saved as a .sav file
  • A SQL version of the model is saved as a .sql file
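The pickled .sav file can be loaded back into Python with the standard pickle module. A sketch (the path and model folder name below are hypothetical):

```python
import pickle

# Hypothetical path to a model trained by ml2sql
MODEL_PATH = "trained_models/2024-01-01_my_model/model/ebm_classifier.sav"

def load_model(path):
    """Load a pickled model from disk."""
    with open(path, "rb") as f:
        return pickle.load(f)

# model = load_model(MODEL_PATH)
# predictions = model.predict(X_new)  # X_new: features in training order
```

Only unpickle files you trust, since pickle can execute arbitrary code on load.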

Remarks

Notes

  • Limited to 3 models (EBM, linear/logistic regression, and Decision Tree).
  • Data imbalance treatments (e.g., oversampling + model calibration) are not implemented.
  • Only accepts CSV files.
  • Interactions with more than 2 variables are not supported.

TODO list

Check docs/TODO.md for an extensive list of planned features and improvements. Feel free to open an issue in case a feature is missing or not working properly.

Troubleshooting

If you encounter an unclear error message after following the instructions above, feel free to create an Issue on the GitHub repository.

Contributors

kaspersgit, dependabot[bot]

