Comments (21)

janmotl commented on July 24, 2024

I have uploaded a csv with the results.

Brief observations:

  1. OneHotEncoding is, on average, the best encoder (at least based on the testing AUC).
  2. Each of the remaining tested encoders is better than OneHotEncoding on some datasets.

Notes:

  1. Parameter tuning was not performed.
  2. Peak memory consumption was not measured.
  3. Benchmark runtime on my laptop is ~24 hours (the csv reports average runtimes per fold, not sums, plus there is also some overhead such as score calculation).

rhiever commented on July 24, 2024

Here are box plots of the results grouped just by encoder. Across the board, BinaryEncoder & OneHotEncoder seem to be the top-performing encoders, although there may not be statistically significant differences there. HashingEncoder seems to be the worst on average.

%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
import pandas as pd

results_df = pd.read_csv('https://raw.githubusercontent.com/janmotl/categorical-encoding/binary/examples/benchmarking_large/output/result_2018-08-09.csv')

plt.figure(figsize=(15, 9))
sb.boxplot(data=results_df, x='encoder', y='test_auc', notch=True)
plt.grid(True, axis='y')

[Image: encoder-boxplot, test AUC box plots grouped by encoder]

Likely worth digging further into this data to gain some better insights.
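
One quick way to dig deeper is to aggregate the csv directly. A minimal sketch, reusing the results_df loaded above:

# Mean and median test AUC per encoder, best mean first
summary = (results_df
           .groupby('encoder')['test_auc']
           .agg(['mean', 'median', 'std', 'count'])
           .sort_values('mean', ascending=False))
print(summary)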

rhiever commented on July 24, 2024

One concern with the benchmark is that no parameter tuning is performed. One finding from our recent sklearn benchmarking paper is that the sklearn defaults are almost always bad, and parameter tuning is almost always beneficial. In terms of measuring predictive performance, it is likely that parameter tuning is important here.

Another concern with the benchmark is that it seems to use the k-fold CV score as the test score. That may not be a problem here because no parameter tuning is performed, but if parameter tuning is added, then it is possible that models/preprocessors with more parameters will have more chances to achieve a high score on the dataset.

Lastly, IMO returning the training score is probably pointless. That's the score the model achieves on the training data after training on the training data, so most of the time it will be ~100%.

janmotl commented on July 24, 2024

@rhiever I am concerned about the parameter tuning as well. However, I am more concerned about the parameters of the encoders than of the classifiers (simply because of the orientation of the categorical-encoding library). My plan is to use the recommended settings of the classifiers from the referenced paper where available and to tune only the parameters of the encoders. Do you have a recommended setting for the classifiers not mentioned in Table 4?
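
For what it's worth, a minimal sketch of what tuning only the encoder inside a scikit-learn pipeline could look like (HashingEncoder and its n_components grid are purely illustrative; X and y stand for one of the benchmark datasets):

from category_encoders import HashingEncoder
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# Classifier stays at its recommended/default settings; only the encoder is tuned
pipeline = Pipeline([
    ('encoder', HashingEncoder()),
    ('classifier', RandomForestClassifier(n_estimators=100)),
])

param_grid = {'encoder__n_components': [8, 16, 32]}  # illustrative grid

search = GridSearchCV(pipeline, param_grid, scoring='roc_auc', cv=5)
# search.fit(X, y)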

Good point. Can you recommend a solution to the issue?

Comparing the training and testing scores can be used to assess and illustrate overfitting: encoders like LeaveOneOutEncoder or TargetEncoder may potentially contribute to overfitting. In the worst case, the classifier may have 100% accuracy on the training data and worse-than-random performance on the testing data. Hence, the code logs both the training and testing scores.
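
A minimal sketch of quantifying that gap from the results csv; note that 'train_auc' is an assumed column name for the logged training score, while 'encoder' and 'test_auc' are the columns used in the plots above:

import pandas as pd

results_df = pd.read_csv('https://raw.githubusercontent.com/janmotl/categorical-encoding/binary/examples/benchmarking_large/output/result_2018-08-09.csv')

# Mean gap between training and testing AUC per encoder; large gaps suggest overfitting
gap = (results_df['train_auc'] - results_df['test_auc']).groupby(results_df['encoder']).mean()
print(gap.sort_values(ascending=False))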

rhiever commented on July 24, 2024

Here's the results grouped by encoder + classifier.

%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
import pandas as pd

results_df = pd.read_csv('https://raw.githubusercontent.com/janmotl/categorical-encoding/binary/examples/benchmarking_large/output/result_2018-08-09.csv')

plt.figure(figsize=(12, 12))
for index, clf in enumerate(results_df['model'].unique()):
    plt.subplot(3, 3, index + 1)
    plt.title(clf)
    sb.boxplot(data=results_df.loc[results_df['model'] == clf], y='encoder', x='test_auc', notch=True)
    plt.grid(True, axis='x')
    plt.ylabel('')
    if index < 6:  # x-axis label only on the bottom row of subplots
        plt.xlabel('')
    if index % 3 != 0:
        plt.yticks([])
    plt.tight_layout()
    plt.xlim(0.4, 1.0)

[Image: encoder-clf-boxplot, test AUC box plots grouped by encoder within each classifier]

rhiever commented on July 24, 2024

And here's grouping the other way around.

%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
import pandas as pd

results_df = pd.read_csv('https://raw.githubusercontent.com/janmotl/categorical-encoding/binary/examples/benchmarking_large/output/result_2018-08-09.csv')

plt.figure(figsize=(12, 12))
for index, clf in enumerate(results_df['encoder'].unique()):
    plt.subplot(3, 3, index + 1)
    plt.title(clf)
    sb.boxplot(data=results_df.loc[results_df['encoder'] == clf], y='model', x='test_auc', notch=True)
    plt.grid(True, axis='x')
    plt.ylabel('')
    if index < 6:  # x-axis label only on the bottom row of subplots
        plt.xlabel('')
    if index % 3 != 0:
        plt.yticks([])
    plt.tight_layout()
    plt.xlim(0.4, 1.0)

[Image: clf-encoder-boxplot, test AUC box plots grouped by classifier within each encoder]

janmotl commented on July 24, 2024

Updated results are now in PR #110 (link).

Notable changes:

  1. Added Weight of Evidence encoder.
  2. Impact encoders (Target encoder, Leave One Out and Weight of Evidence) should now correctly apply the corrections on the training data (see the usage sketch after this list). This required a complete overhaul of the benchmarking code because scikit-learn pipelines are not compatible with transformers that accept both X and y in transform.
  3. Removed datasets that contained only numerical attributes, as they were not contributing to the benchmark and were merely increasing the runtime.
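
For illustration, the usage pattern this enforces for impact encoders (the toy data and column name are placeholders):

import pandas as pd
from category_encoders import LeaveOneOutEncoder

# Toy data purely for illustration
X_train = pd.DataFrame({'cat_col': ['a', 'a', 'b', 'b', 'c']})
y_train = pd.Series([1, 0, 1, 1, 0])
X_test = pd.DataFrame({'cat_col': ['a', 'b', 'c']})

encoder = LeaveOneOutEncoder(cols=['cat_col'])

# Training data: the target is passed so the leave-one-out correction is applied to each row
X_train_encoded = encoder.fit_transform(X_train, y_train)

# Testing data: no target is available, so plain per-category statistics are used
X_test_encoded = encoder.transform(X_test)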

janmotl commented on July 24, 2024

Yes, LOO and WOE overfit, particularly with decision trees, gradient boosting and random forests.

Unfortunately, the graphs are not directly comparable because they are based on different subsets of the datasets.

Contrast encoders are not included because of issue #91.

janmotl commented on July 24, 2024

@eddiepyang The benchmark is now in this repository under examples/benchmarking_large.

rhiever commented on July 24, 2024

This repo might be a useful resource to pull code from. We've been running sklearn benchmarks over there and published the results on sklearn classifiers in this paper. You can find the code for the preprocessor benchmark that I've been running with sklearn preprocessors here.

janmotl commented on July 24, 2024

@rhiever: PMLB is awesome! However, do you/can you provide datasets with unprocessed categorical attributes? When I looked at the repository, all categorical attributes were already encoded with one-hot or ordinal encoding.

rhiever commented on July 24, 2024

janmotl commented on July 24, 2024

I wrote a draft of the benchmark and it is at:
https://github.com/janmotl/categorical-encoding/tree/binary/examples/benchmarking_large
Edit: In the master branch under examples/benchmarking_large.

What it does: It takes 65 datasets and applies different encoders and classifiers on them. The benchmark then returns a csv file with training and testing accuracies (together with other metadata).

Some feedback?

wdm0006 commented on July 24, 2024

@janmotl this is cool. Would it be possible to add time-to-train and peak overall memory usage to the output from the benchmark?

janmotl commented on July 24, 2024

@wdm0006 I added memory consumption of the encoders. The code utilizes memory_profiler. However, I am not overly happy with the deployment of memory_profiler because it heavily impacts the runtime and, in my environment, it also breaks debug mode and parallelism.

Time-to-train of the whole pipeline is logged as fit_time. Time-to-train of the encoder alone is logged as fit_encoder_time.
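
For reference, a minimal sketch of how the peak memory of an encoder fit can be sampled with memory_profiler (the toy data are placeholders; depending on the memory_profiler version, max_usage=True returns either a float or a one-element list):

import pandas as pd
from memory_profiler import memory_usage
from category_encoders import OneHotEncoder

# Toy data purely for illustration
X = pd.DataFrame({'cat_col': ['a', 'b', 'c', 'a']})
y = pd.Series([1, 0, 1, 0])

encoder = OneHotEncoder(cols=['cat_col'])

# Sample memory while the encoder fits; max_usage=True keeps only the peak
peak = memory_usage((encoder.fit, (X, y)), interval=0.1, max_usage=True)
print('Peak memory during encoder.fit:', peak, 'MiB')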

rhiever commented on July 24, 2024

The parameters recommended in Table 4 are a fine starting point, but as we suggest in the paper, algorithm parameter tuning (even a small grid search) should always be performed for every new problem.

wrt addressing the second issue I raised, the most popular solution is to use nested k-fold CV: within each training fold, perform k-fold CV for the parameter tuning. See this example.
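
A minimal sketch of that nested scheme with scikit-learn (the encoder, classifier and grid are illustrative placeholders; X and y stand for one dataset):

from category_encoders import OrdinalEncoder
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline

pipeline = Pipeline([
    ('encoder', OrdinalEncoder()),
    ('classifier', GradientBoostingClassifier()),
])

# Inner CV: parameter tuning performed within each training fold
inner_search = GridSearchCV(
    pipeline,
    param_grid={'classifier__max_depth': [2, 3, 5]},
    scoring='roc_auc',
    cv=5,
)

# Outer CV: the tuned pipeline is scored on folds it never saw during tuning
# scores = cross_val_score(inner_search, X, y, scoring='roc_auc', cv=5)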

discdiver commented on July 24, 2024

Good stuff. Thanks for doing this.

Getting a 404 when I try to see the csv of results at https://raw.githubusercontent.com/janmotl/categorical-encoding/binary/examples/benchmarking_large/output/result_2018-08-09.csv

discdiver commented on July 24, 2024

Awesome @janmotl. Here's the latest performance chart. Interesting that WOE and LOO performed poorly.
[Screenshot: latest performance chart, 2018-09-03]

Why aren't the contrast encoders included in the analysis?

discdiver commented on July 24, 2024

Thanks @janmotl. It's interesting that Target doesn't overfit, too.

Is it worth running all of the available encoders on the same subset of datasets only?

I would argue that some encoders are only appropriate for ordinal or nominal features, so a blanket test like this probably doesn't really make theoretical sense, although it would be nice if it did.

janmotl commented on July 24, 2024

I reran the benchmark on older versions of the code, and by applying the bisection method it turned out that the following code in LOO:

def fit_transform(self, X, y=None, **fit_params):
    """
    Encoders that utilize the target must make sure that the training data are transformed with:
            transform(X, y)
    and not with:
            transform(X)
    """
    return self.fit(X, y, **fit_params).transform(X, y)

causes a significant degradation of the testing AUC (e.g., in the case of decision trees, from ~0.9 to ~0.5). Ironically enough, these lines were added to the code to activate the leave-one-out functionality (see issue #116).
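
For intuition, a minimal toy sketch of what the leave-one-out transform of the training data computes (made-up values; the real encoder additionally handles unknown and singleton categories and optional noise):

import pandas as pd

X = pd.DataFrame({'cat': ['a', 'a', 'a', 'b', 'b']})
y = pd.Series([1, 0, 1, 1, 0])

sums = y.groupby(X['cat']).transform('sum')
counts = X.groupby('cat')['cat'].transform('count')

# For each training row: the mean of y over the *other* rows of the same category,
# so a row's own target never leaks into its encoded value
loo = (sums - y) / (counts - 1)
print(loo.tolist())  # [0.5, 1.0, 0.5, 0.0, 1.0]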

eddiepyang commented on July 24, 2024

I wrote a draft of the benchmark and it is at:
https://github.com/janmotl/categorical-encoding/tree/binary/examples/benchmarking_large

What it does: It takes 65 datasets and applies different encoders and classifiers on them. The benchmark then returns a csv file with training and testing accuracies (together with other metadata).

Some feedback?

@janmotl
Nice work on the benchmarking. Do you have an updated link for the description?
