
ML-Impute

A python package for synthetic data generation using single and multiple imputation.

ML-Impute is a library for generating synthetic data for null-value imputation, notably with the ability to handle mixed datatypes. This package is based on the research of Audigier, Husson, and Josse and their method of iterative factor analysis for single imputation of missing data.
The goals of this package are to:
(a) provide an open-source implementation of this method in Python for the first time, and
(b) provide an efficient parallelization of the algorithm when extending it to both single and multiple imputation.

Note: I am currently a university student and may not have the time to continue to release updates and changes as fast as some other packages might. In the spirit of open-source code, please feel free to add pull requests or open a new issue if you have bug fixes or improvements. Thank you for your understanding and for your contributions.


Table of Contents

- Installation
- Usage
- Example
- License

Installation

ML-Impute is currently available on PyPI.

Unix/macOS/Windows

pip install ml-impute

Usage

Currently, ML-Impute can handle both single and multiple imputation.

To follow a demonstration of both methods, proceed to the Example Section.

The following subsections provide an overview of each method along with its usage information.

After installing the package via pip, import it and instantiate a Generator object as follows:

from mpute import generator

gen = generator.Generator()

Generator.generate(self, dataframe, encode_cols, exclude_cols, max_iter, tol, explained_var, method, n_versions, noise, engine)

| Parameter | Description |
| --- | --- |
| dataframe (required) | Pandas dataframe object. |
| encode_cols (optional, default=[]) | Categorical columns to be encoded. By default, ml-impute encodes all columns with object or category dtypes. However, many datasets contain numerical categorical data (e.g. Likert scales, classification types) that should also be encoded. |
| exclude_cols (optional, default=[]) | Columns to be excluded from encoding and/or imputation. On occasion, datasets contain unique non-ordinal data (such as unique IDs) that, if encoded, would lead to large increases in memory usage and runtime. These columns should be excluded. |
| max_iter (optional, default=1000) | The maximum number of imputation iterations before exit. |
| tol (optional, default=1e-4) | Tolerance bound for convergence. If the relative error in the Frobenius norm falls below tol before max_iter is reached, the algorithm exits. |
| explained_var (optional, default=0.95) | Percentage of the total variance kept when reconstructing the dataframe after performing Singular Value Decomposition. |
| method (optional, default="single") | Whether to use the single or multiple imputation method. Possible values: ["single", "multiple"]. |
| n_versions (optional, default=20) | If performing multiple imputation, the number of generated dataframes. If performing single imputation, n_versions=1. |
| noise (optional, default="gaussian") | If performing multiple imputation, the type of noise added to each generated dataset to create variation. Gaussian noise is centered around 0 with a standard deviation of 0.1. If performing single imputation, noise=None. |
| engine (optional, default="default") | For either single or multiple imputation, the engine through which the SVD is calculated. Possible values: ["default", "dask"]. "default" uses the JAX numpy library for efficient SVD calculation and multiprocessing, and is recommended for speed. "dask" creates a dask distributed scheduler to compute the SVD; given that this is an iterative method, it is recommended only for very large datasets. |

| Method | Return Value |
| --- | --- |
| "single" | imputed_df: a copy of the dataframe argument with synthetic data imputed for all null values. |
| "multiple" | df_dict: a dictionary containing each of the n_versions generated datasets with varied synthetic data. Keys: [0, n_versions); values: the imputed dataframes. |
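Putting several of these parameters together, a call might look like the following sketch. The column names 'rating' and 'user_id' are placeholders for illustration, not part of the API:

# Hypothetical call exercising the parameters described above.
# 'rating' and 'user_id' are placeholder column names.
imputed_df = gen.generate(
    dataframe,
    encode_cols=['rating'],      # numeric categorical column that should be encoded
    exclude_cols=['user_id'],    # unique identifier, excluded to save memory and runtime
    max_iter=500,
    tol=1e-4,
    explained_var=0.95,
    method="single"
)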

Single Imputation

Single imputation works with the following line:

imputed_df = gen.generate(dataframe)
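Since the return value is a copy of the input dataframe with all null values filled, a quick sanity check might look like this sketch:

# The returned copy should contain no remaining null values.
assert imputed_df.isna().sum().sum() == 0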

Multiple Imputation

Multiple imputation is as simple as the following:

imputed_dfs = gen.generate(dataframe, method="multiple")
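Because the return value is a dictionary keyed from 0 to n_versions - 1, the generated datasets can be iterated over directly; a minimal sketch:

# Inspect each generated dataset (keys run from 0 to n_versions - 1).
for version, df in imputed_dfs.items():
    print(version, df.shape)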

Example

For the following example, we will use the Titanic dataset, fetched from OpenML via sklearn.datasets.

Build the titanic dataset and create a Generator object as follows:

import pandas as pd
from mpute import generator
from sklearn import datasets

titanic, target = datasets.fetch_openml("titanic", version=1, as_frame=True, return_X_y=True)
titanic['survived'] = target

gen = generator.Generator()
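Before imputing, it can be useful to confirm which columns actually contain missing values; in this dataset, columns such as 'age' and 'cabin' do:

# Count the null values per column prior to imputation.
print(titanic.isna().sum())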

Single Imputation

imputed_df = gen.generate(titanic, exclude_cols=['name', 'cabin', 'ticket'])

Note: 'name', 'cabin', and 'ticket' are excluded because they mainly contain unique identifiers, which are unnecessary for imputation and, if encoded, would result in a significant increase in memory usage.
It is possible to replace the 'cabin' column with two columns such as 'deck' and 'position', since these may be determinants of survival. However, this preprocessing would have to occur beforehand; one possible approach is sketched below.
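For illustration, one way to derive 'deck' and 'position' from 'cabin' before imputation is sketched here; the parsing rules are assumptions for demonstration and not part of ml-impute:

# Rough sketch (illustrative assumptions, not part of ml-impute):
# split 'cabin' values such as "C85" into a deck letter and a cabin number.
titanic['deck'] = titanic['cabin'].str[0]                                   # e.g. 'C'
titanic['position'] = pd.to_numeric(
    titanic['cabin'].str.extract(r'(\d+)', expand=False), errors='coerce')  # e.g. 85.0
titanic = titanic.drop(columns=['cabin'])

imputed_df = gen.generate(titanic, exclude_cols=['name', 'ticket'])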


Multiple Imputation

Multiple imputation follows the same pattern:

imputed_dfs = gen.generate(titanic, method="multiple")
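Each entry in imputed_dfs is a complete imputed dataset; a common follow-up is to pool results across versions, for example by averaging a numeric column such as 'age' (a simple illustration, not an ml-impute feature):

# Pool the imputed 'age' column by averaging it across all generated versions.
pooled_age = pd.concat(
    [df['age'] for df in imputed_dfs.values()], axis=1).mean(axis=1)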

That's all there is to it. Happy imputing!


License

ML-Impute is published under the MIT License. Please see the LICENSE file for more information.
