Comments (21)
I have uploaded a csv with the results.
Brief observations:
- OneHotEncoder is, on average, the best encoder (at least by testing AUC).
- Each of the remaining tested encoders beats OneHotEncoder on at least some datasets.
Notes:
- Parameter tuning was not performed.
- Peak memory consumption was not measured.
- Benchmark runtime on my laptop is ~24 hours (the csv reports average runtime per fold, not the sum, and there is additional overhead such as score calculation).
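The first observation can be checked mechanically against the csv: for each dataset, pick the encoder with the best mean test AUC and count the winners. A minimal sketch on a made-up frame; the real csv has `encoder` and `test_auc` columns (they are used in the plotting snippets below), while `dataset` is an assumed column name:

```python
import pandas as pd

# Made-up results; 'dataset' is an assumed column name, the AUC values are invented.
results = pd.DataFrame({
    'dataset': ['d1', 'd1', 'd2', 'd2', 'd3', 'd3'],
    'encoder': ['OneHotEncoder', 'BinaryEncoder'] * 3,
    'test_auc': [0.90, 0.85, 0.70, 0.75, 0.80, 0.78],
})

# Best encoder per dataset: does anything beat one-hot somewhere?
mean_auc = results.groupby(['dataset', 'encoder'])['test_auc'].mean()
winners = mean_auc.groupby('dataset').idxmax().map(lambda pair: pair[1])
print(winners.value_counts())
```

With the invented numbers above, OneHotEncoder wins two datasets and BinaryEncoder one, mirroring the "best on average, but beatable per dataset" pattern.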
from category_encoders.
Here are box plots of the results grouped just by encoder. Across the board, BinaryEncoder & OneHotEncoder seem to be the top-performing encoders, although there may not be statistically significant differences there. HashingEncoder seems to be the worst on average.
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
import pandas as pd
results_df = pd.read_csv('https://raw.githubusercontent.com/janmotl/categorical-encoding/binary/examples/benchmarking_large/output/result_2018-08-09.csv')
plt.figure(figsize=(15, 9))
sb.boxplot(data=results_df, x='encoder', y='test_auc', notch=True)
plt.grid(True, axis='y')
Likely worth digging further into this data to gain some better insights.
One concern with the benchmark is that no parameter tuning is performed. One finding from our recent sklearn benchmarking paper is that the sklearn defaults are almost always bad, and parameter tuning is almost always beneficial. In terms of measuring predictive performance, it is likely that parameter tuning is important here.
Another concern with the benchmark is that it seems to use the k-fold CV score as the test score. That may not be a problem here because parameter tuning is not performed, but if parameter tuning is added then it is possible that models/preprocessors with more parameters will have more chances to achieve a high score on the dataset.
Lastly, IMO returning the training score is probably pointless. That's the score the model achieves on the training data after training on the training data, so most of the time it will be ~100%.
@rhiever I am concerned about the parameter tuning as well. However, I am more concerned about the parameters of the encoders than of the classifiers (simply because of the orientation of categorical-encoding library). My plan is to use the recommended settings of the classifiers from the referenced paper where available and only tune the parameters of the encoders. Do you have a recommended setting for the classifiers not mentioned in Table 4?
Good point. Can you recommend a solution to the issue?
Comparison of the training and testing scores can be used for assessment/illustration of overfitting - encoders like LeaveOneOutEncoder or TargetEncoder may potentially contribute to overfitting. In the worst case, the classifier may have 100% accuracy on the training data and worse-than-random accuracy on the testing data. Hence, the code logs both training and testing scores.
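That diagnostic boils down to the per-encoder gap between the two scores. A sketch on an invented slice of the output; `train_auc` is an assumed column name mirroring the logged `test_auc`, and the numbers are made up to illustrate the "fatal case":

```python
import pandas as pd

# Invented results; 'train_auc' is an assumed column name, values are illustrative only.
results = pd.DataFrame({
    'encoder': ['LeaveOneOutEncoder', 'LeaveOneOutEncoder',
                'OneHotEncoder', 'OneHotEncoder'],
    'train_auc': [1.00, 0.99, 0.92, 0.90],
    'test_auc':  [0.55, 0.60, 0.88, 0.86],
})

# Mean train-test gap per encoder: a large gap signals overfitting.
gap = (results['train_auc'] - results['test_auc']).groupby(results['encoder']).mean()
print(gap.sort_values(ascending=False))
```

A near-perfect training score paired with a near-random testing score shows up as a gap close to 0.5, whereas a well-behaved encoder stays near zero.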
Here's the results grouped by encoder + classifier.
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
import pandas as pd
results_df = pd.read_csv('https://raw.githubusercontent.com/janmotl/categorical-encoding/binary/examples/benchmarking_large/output/result_2018-08-09.csv')
plt.figure(figsize=(12, 12))
for index, clf in enumerate(results_df['model'].unique()):
    plt.subplot(3, 3, index + 1)
    plt.title(clf)
    sb.boxplot(data=results_df.loc[results_df['model'] == clf], y='encoder', x='test_auc', notch=True)
    plt.grid(True, axis='x')
    plt.ylabel('')
    if index < 6:  # keep the x-axis label only on the bottom row
        plt.xlabel('')
    if index % 3 != 0:  # keep the y tick labels only in the left column
        plt.yticks([])
    plt.tight_layout()
    plt.xlim(0.4, 1.0)
And here's grouping the other way around.
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
import pandas as pd
results_df = pd.read_csv('https://raw.githubusercontent.com/janmotl/categorical-encoding/binary/examples/benchmarking_large/output/result_2018-08-09.csv')
plt.figure(figsize=(12, 12))
for index, clf in enumerate(results_df['encoder'].unique()):
    plt.subplot(3, 3, index + 1)
    plt.title(clf)
    sb.boxplot(data=results_df.loc[results_df['encoder'] == clf], y='model', x='test_auc', notch=True)
    plt.grid(True, axis='x')
    plt.ylabel('')
    if index < 6:  # keep the x-axis label only on the bottom row
        plt.xlabel('')
    if index % 3 != 0:  # keep the y tick labels only in the left column
        plt.yticks([])
    plt.tight_layout()
    plt.xlim(0.4, 1.0)
Updated results are now in PR #110.
Notable changes:
- Added Weight of Evidence encoder.
- Impact encoders (TargetEncoder, LeaveOneOutEncoder and WeightOfEvidence) should now correctly apply the corrections on the training data. This required a complete overhaul of the benchmarking code because scikit-learn pipelines are not compatible with transformers whose transform accepts both X and y.
- Removed datasets that contained only numerical attributes, as they were not contributing to the benchmark and merely increased runtime.
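The incompatibility with scikit-learn pipelines comes from the asymmetric transform signature: training data are transformed with `transform(X, y)`, test data with `transform(X)`. A minimal sketch of the manual flow that replaces the pipeline, using a hypothetical stand-in encoder (not the category_encoders implementation):

```python
import pandas as pd

class StandInEncoder:
    """Hypothetical stand-in with the impact-encoder style interface:
    transform(X, y) on training data, transform(X) on test data."""
    def fit(self, X, y):
        self.means_ = y.groupby(X['cat']).mean()  # per-category target mean
        self.prior_ = y.mean()                    # fallback for unseen categories
        return self

    def transform(self, X, y=None):
        enc = X['cat'].map(self.means_).fillna(self.prior_)
        # When y is available (training data), a leave-one-out style
        # correction would be applied here; omitted in this sketch.
        return enc.to_frame('cat_enc')

X_train = pd.DataFrame({'cat': ['a', 'a', 'b']})
y_train = pd.Series([1.0, 0.0, 1.0])
X_test = pd.DataFrame({'cat': ['b', 'c']})

enc = StandInEncoder().fit(X_train, y_train)
X_train_enc = enc.transform(X_train, y_train)  # training: pass y
X_test_enc = enc.transform(X_test)             # test: no y available
```

A stock `Pipeline` cannot express this, because it only ever calls `transform(X)` on fitted steps; hence the manual train/test handling in the benchmark.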
Yes, LOO and WOE overfit particularly with decision tree, gradient boosting and random forest.
Unfortunately, the graphs are not directly comparable because they are based on different subsets of the datasets.
Contrast encoders are not included because of issue #91.
@eddiepyang The benchmark is now in this repository under examples/benchmarking_large.
This repo might be a useful resource to pull code from. We've been running sklearn benchmarks over there and published the results on sklearn classifiers in this paper. You can find the code for the preprocessor benchmark that I've been running with sklearn preprocessors here.
@rhiever: PMLB is awesome! However, do you/can you provide datasets with unprocessed categorical attributes? When I looked at the repository, all categorical attributes were already encoded with one-hot or ordinal encoding.
I wrote a draft of the benchmark and it is at:
https://github.com/janmotl/categorical-encoding/tree/binary/examples/benchmarking_large
Edit: It is now in the master branch under examples/benchmarking_large.
What it does: It takes 65 datasets and applies different encoders and classifiers on them. The benchmark then returns a csv file with training and testing accuracies (together with other metadata).
Some feedback?
@janmotl this is cool, would it be possible to add time-to-train and peak overall memory usage to the output from the benchmark?
@wdm0006 I added memory consumption of the encoders. The code utilizes memory_profiler. However, I am not overly happy with the deployment of memory_profiler because it heavily impacts the runtime and, in my environment, it also breaks debug mode and parallelism.
Time-to-train of the whole pipeline is logged as fit_time. Time-to-train of the encoder alone is logged as fit_encoder_time.
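Since memory_profiler's sampling hurts runtime, debugging and parallelism, one lighter-weight alternative (not what the benchmark uses) is the stdlib tracemalloc module, which tracks peak Python-level allocations without a separate sampling process. A sketch:

```python
import tracemalloc

def peak_memory_mib(func, *args, **kwargs):
    """Run func and return (result, peak memory in MiB seen by tracemalloc).
    Only allocations routed through Python's allocator are tracked; recent
    numpy versions report their buffers, but other native code may not."""
    tracemalloc.start()
    try:
        result = func(*args, **kwargs)
        _, peak = tracemalloc.get_traced_memory()
    finally:
        tracemalloc.stop()
    return result, peak / 2**20

# Example: a 1M-element list costs roughly 8 MiB of pointer storage.
result, peak = peak_memory_mib(lambda: [0] * 1_000_000)
print(f'peak: {peak:.1f} MiB')
```

The trade-off is coverage: tracemalloc misses memory allocated outside Python's allocator, so it underreports compared to process-level tools, but it adds little overhead and works fine under debuggers and joblib workers.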
The parameters recommended in Table 4 are a fine starting point, but as we suggest in the paper, algorithm parameter tuning (even a small grid search) should always be performed for every new problem.
Regarding the second issue I raised, the most popular solution is to use nested k-fold CV: within each training fold, perform k-fold CV for the parameter tuning. See this example.
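Nested CV composes directly in sklearn: a GridSearchCV (inner loop) is passed as the estimator to cross_val_score (outer loop), so each outer test fold never influences the tuning. A minimal sketch; the dataset and the parameter grid are illustrative choices, not the benchmark's:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Inner loop: 3-fold CV over a small, illustrative parameter grid.
inner = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={'max_depth': [3, 5, None]},
    cv=3,
    scoring='roc_auc',
)

# Outer loop: each fold's test score is untouched by the tuning.
outer_scores = cross_val_score(inner, X, y, cv=5, scoring='roc_auc')
print(outer_scores.mean())
```

Note that an encoder wrapped in a Pipeline can be tuned the same way by prefixing its parameters with the step name in param_grid.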
Good stuff. Thanks for doing this.
Getting a 404 when I try to see the csv of results at https://raw.githubusercontent.com/janmotl/categorical-encoding/binary/examples/benchmarking_large/output/result_2018-08-09.csv
Awesome @janmotl. Here's the latest performance chart. Interesting that WOE and LOO performed poorly.
Why aren't the contrast encoders included in the analysis?
Thanks @janmotl. It's interesting Target doesn't overfit, too.
Is it worth running all available encoders on the same subset only?
I would argue some encoders are only appropriate to ordinal or nominal features, so a blanket test like this probably doesn't really make theoretic sense, although it would be nice if it did.
I reran the benchmark on older versions of the code, and by applying the bisection method it turned out that the following code in LOO:
def fit_transform(self, X, y=None, **fit_params):
    """
    Encoders that utilize the target must make sure that the training data are transformed with:
        transform(X, y)
    and not with:
        transform(X)
    """
    return self.fit(X, y, **fit_params).transform(X, y)
causes a significant degradation of the testing AUC (e.g., for decision trees from ~0.9 to ~0.5). Ironically enough, these lines were added to the code to activate the leave-one-out functionality (see issue #116).
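The behavioral difference behind that snippet can be shown with a toy version of the two code paths (these are hypothetical helper functions, not the library's implementation): `transform(X, y)` lets each training row exclude its own target, while `transform(X)` can only hand back plain category means.

```python
import pandas as pd

def loo_encode_train(cats: pd.Series, y: pd.Series) -> pd.Series:
    """Leave-one-out encoding of the *training* column: each row gets the
    mean target of its category excluding the row itself - what
    transform(X, y) enables and plain transform(X) cannot."""
    sums = cats.map(y.groupby(cats).sum())
    counts = cats.map(y.groupby(cats).count())
    return (sums - y) / (counts - 1)

def naive_encode(cats: pd.Series, y_train: pd.Series,
                 cats_train: pd.Series) -> pd.Series:
    """Plain category means - the only option when y is unavailable."""
    means = y_train.groupby(cats_train).mean()
    return cats.map(means).fillna(y_train.mean())

cats = pd.Series(['a', 'a', 'b', 'b'])
y = pd.Series([1.0, 0.0, 1.0, 1.0])
print(list(loo_encode_train(cats, y)))
print(list(naive_encode(cats, y, cats)))
```

The first row of category 'a' shows the difference: leave-one-out yields 0.0 (the other 'a' row's target), while the naive encoding leaks the row's own target into its 0.5.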
I wrote a draft of the benchmark and it is at:
https://github.com/janmotl/categorical-encoding/tree/binary/examples/benchmarking_large
What it does: It takes 65 datasets and applies different encoders and classifiers on them. The benchmark then returns a csv file with training and testing accuracies (together with other metadata).
Some feedback?
@janmotl
Nice work on the benchmarking, do you have an updated link for the description?