MrPowers Benchmarks

This repo performs benchmarking analysis on common datasets with popular query engines like pandas, Polars, DataFusion, and PySpark.

It draws inspiration from the h2o benchmarks but also includes different types of queries and uses some different execution methodologies (e.g. modern file formats).

The h2o benchmarks have been a great contribution to the data community, but they've made some decisions that aren't as relevant for modern workflows, see this section for more details.

Most readers of this repo are interested in the benchmarking results and don't actually want to reproduce them. For those who do, this repo makes it easy to reproduce the results, which is particularly useful if you'd like to run the benchmarks on a specific set of hardware.

This repo provides clear instructions on how to generate the datasets and descriptions of the results, so you can easily gain intuition about the actual benchmarks that are run.

Benchmarking is hard

It's difficult to build accurate benchmarks. Runtimes depend on the hardware, software versions, and data setup.

Accurate benchmarks are even harder when comparing different technologies. Certain frameworks will perform better with different file sizes and file formats. This benchmarking analysis tries to give a fair representation of the range of outcomes that are possible given the most impactful inputs.

The benchmarks presented in this repo should not be interpreted as definitive results. They're runtimes for specific data tasks, on one type of hardware, with a specific set of software versions. The code isn't necessarily optimized (we accept community contributions to restructure code).

The data community should find these benchmarks valuable, caveats aside.

Why benchmarks are important

Suppose you'd like to find the quickest way to join a 2GB CSV file with a 1GB Parquet file on your local machine.

You may not want to perform an exhaustive analysis yourself. You'll probably find it easier to look up some benchmarks and make an informed decision on the best alternative.

Trying out 10 different options and figuring out how to use several different programming languages isn't realistic. Benchmarks serve to guide users to good options for their use cases, keeping in mind their time constraints.

Benchmarks can be harmful when they're biased or improperly structured and give misleading conclusions. Benchmarks should not, intentionally or unintentionally, misguide readers towards suboptimal technology choices.

Benchmarks should also pave the way for revolutionary technologies to gain adoption. When a new query engine figures out how to process data in a faster, more reliable manner, its developers should be able to quantify the improvements for users via benchmarks. This helps drive adoption.

h2o benchmark methodology

The h2o benchmarks have certain limitations, as do all benchmarks.

This section explains some of the limitations of the h2o benchmarks, not as a criticism, but to explain the tradeoffs that were made. The h2o benchmarks are an excellent contribution to the data community, and we should be grateful to the engineers who dedicated their time and effort to make them available.

No longer updated

The h2o benchmarks are no longer being updated, so we can't see benchmarking results for new query engines and newer versions. We also can't see updates when engineers who specialize in a certain query engine make code fixes.

It's completely reasonable that the h2o benchmarks are no longer maintained. They were updated for years, and the data community benefited greatly from those updates.

Single CSV file

The h2o benchmarks are run on data that's stored in a single CSV file.

This introduces a bias toward query engines that are optimized for running on a single file. This bias is somewhat offset because the data is persisted in memory before the queries are run.

Remember that CSV is a row-based file format that does not support column projection. You can't cherry-pick certain columns and persist only them in memory when running queries.
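
As a rough illustration, here's a hedged sketch of what column projection looks like with Polars. The Parquet path is assumed (the quickstart below only downloads the CSV), and the column names follow the h2o groupby dataset (id1..id6, v1..v3):

import polars as pl

# with Parquet, only the bytes for these two columns are read from disk
subset = pl.read_parquet("tmp/N_1e7_K_1e2.parquet", columns=["id1", "v1"])

# with CSV, every byte of every row has to be parsed, even though only
# two columns are kept afterwards
subset_csv = pl.read_csv("tmp/N_1e7_K_1e2_single.csv", columns=["id1", "v1"])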

Data persisted in memory

The h2o benchmarks persist data in memory, but because they use CSV files, they need to persist all of the data at once. They can't persist only the columns that are relevant to the query. Persisting all the columns causes certain queries to error out that would run fine if only the 2 or 3 relevant columns were persisted.

Persisting data in memory also hides the performance benefits of query engines that are capable of performing parallel I/O.

Engines that support parallel I/O

Certain query engines are designed to read and write data in parallel - others are not.

Query engine users often care about the entire time it takes to run a query: how long it takes to read the data, persist it in memory, and execute the query. The h2o benchmarks only give readers information on how long it takes to actually execute the query. This is certainly valuable information, but potentially misleading for the 50 GB benchmarks. Being able to read 50 GB of data in parallel is a significant performance advantage compared to reading it from a single file serially.

Small datasets

The h2o benchmarks only test 0.5 GB, 5 GB, and 50 GB datasets. They don't test 500 GB or 5 TB datasets.

This introduces a bias toward query engines that can run on small datasets but can't work on larger datasets.

Distributed engines vs single machine engines

Distributed query engines usually make tradeoffs so they're optimized to be run on a cluster of computers. The h2o benchmarks are run on a single machine, so they're biased against distributed query engines.

Distributed query engines of course offer a massive benefit for data practitioners. They can be scaled beyond the limits of a single machine and can be used to run analyses on large datasets. Including benchmarks on larger datasets with distributed clusters is a good way to highlight the strengths of query engines that are designed to scale. It's also a good way to highlight the data volume limits of query engines designed to run on a single machine.

GPUs vs CPUs

The h2o benchmarks include query engines that are designed to run on CPUs and others that are designed to run on GPUs. There are a variety of ways to present benchmarking results from different hardware:

  • present the results side-by-side, in the same graph
  • present the results in different graphs
  • present cost adjusted results

h2o decided to present the results side-by-side, which is reasonable, but there are other ways these results could have been presented too.

Query optimization

The h2o benchmark queries are well specified, so they don't give engines that have query optimizers a chance to shine. An engine with a query optimizer will rearrange a poorly specified query and make it run better. The h2o benchmarks could have included some poorly specified queries to highlight the query optimization strengths of certain engines.
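
Here's a hedged illustration (not one of the benchmark queries) of a poorly specified query that an optimizer can fix: the filter is written after the join, but a lazy engine like Polars can push it below the join so far less data gets joined. The file and column names are made up for the example:

import polars as pl

orders = pl.scan_parquet("tmp/orders.parquet")
customers = pl.scan_parquet("tmp/customers.parquet")

poorly_specified = (
    orders
    .join(customers, on="customer_id")
    .filter(pl.col("country") == "US")  # written after the join
    .group_by("customer_id")
    .agg(pl.col("amount").sum())
)

print(poorly_specified.explain())  # the optimized plan applies the filter before the join
result = poorly_specified.collect()

An eager engine without an optimizer executes the query exactly as written and joins the full tables first.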

Lazy computations and collecting results

Certain query engines support lazy execution, so they don't actually run computations until it's absolutely necessary. Other query engines eagerly execute computations whenever a command is run.

Lazy computation engines generally split up data and execute computations in parallel. They don't collect results into a single DataFrame by default because it's usually better to keep the data split up for additional parallelism for downstream computations.

Running a query for a lazy computation engine generally involves two steps:

  • actually running the computation
  • collecting the subresults into a single DataFrame

Collecting the results into a single DataFrame arguably should not be included in the parallel engine's computation runtime. It's an extra step that's required to produce a single result here, but it's not usually necessary in a real-world use case.

It can unfortunately be hard to divide a query runtime into its different components. Most parallel compute engine query runtimes include both steps, which is probably misleading.
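
Here's a hedged sketch with Dask, which makes the two steps explicit: the groupby runs across partitions in parallel, and .compute() is the extra step that gathers the partial results into a single in-memory pandas object:

import dask.dataframe as dd

ddf = dd.read_csv("tmp/N_1e7_K_1e2_single.csv")

# lazy: this only builds a task graph, nothing has run yet
grouped = ddf.groupby("id1")["v1"].sum()

# runs the graph in parallel and collects the result into one pandas Series
result = grouped.compute()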

Quickstart

This section explains how to download data, create software environments, and run benchmarks for the different execution engines.

Downloading datasets

  • Run aws s3 cp s3://coiled-datasets/h2o-benchmark/N_1e7_K_1e2_single.csv tmp/ to download the 0.5 GB h2o groupby dataset
  • Substitute 1e8 and 1e9 to download the 5 GB and 50 GB datasets

Polars benchmarks

  • Create the mr-polars environment with conda env create -f envs/mr-polars.yml
  • Activate the environment with conda activate mr-polars
  • Run the Polars benchmarks with python benchmarks/polars_h2o_groupby_csv.py tmp/N_1e7_K_1e2_single.csv

You'll get output like this that shows the runtimes for the h2o groupby queries:

task  duration
q1    0.038983
q2    0.117003
q3    0.114049
q4    0.044766
q5    0.140829
q6    0.189151
q7    0.109352
q8    0.817299
q9    0.198762
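
For intuition about what these numbers measure, here's a hedged sketch of the first h2o groupby query (sum of v1 grouped by id1) in Polars. It assumes a recent Polars release (the method was called groupby in older versions) and isn't necessarily how the benchmark script structures its code:

import polars as pl

df = pl.read_csv("tmp/N_1e7_K_1e2_single.csv")
q1 = df.group_by("id1").agg(pl.col("v1").sum())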

DataFusion benchmarks

  • Create the mr-datafusion environment with conda env create -f envs/mr-datafusion.yml
  • Activate the environment with conda activate mr-datafusion
  • Run the DataFusion benchmarks with python benchmarks/datafusion_h2o_groupby_csv.py tmp/N_1e7_K_1e2_single.csv (a sketch of the kind of query this script runs follows this list)
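
Here's a hedged sketch of the same first groupby query (sum of v1 by id1) expressed through DataFusion's Python bindings. It assumes a recent datafusion release (older versions used ExecutionContext instead of SessionContext) and isn't necessarily how the benchmark script structures its code:

from datafusion import SessionContext

ctx = SessionContext()
ctx.register_csv("groupby_data", "tmp/N_1e7_K_1e2_single.csv")

# returns a list of Arrow record batches with sum(v1) per id1 group
batches = ctx.sql("SELECT id1, SUM(v1) AS v1 FROM groupby_data GROUP BY id1").collect()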

Dask benchmarks

  • Create the mr-dask environment with conda env create -f envs/mr-dask.yml
  • Activate the environment with conda activate mr-dask
  • Run python dask_csv_to_parquet.py tmp/N_1e7_K_1e2_single.csv tmp/N_1e7_K_1e2_parquet to break up the CSV dataset into 100 MB Parquet files (a sketch of this step follows the list)
  • Run the Dask benchmarks with python benchmarks/dask_h2o_groupby_csv.py tmp/N_1e7_K_1e2_single.csv
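
The CSV-to-Parquet step is expected to do something along these lines (a hedged sketch, not necessarily the exact contents of dask_csv_to_parquet.py): split the single CSV into roughly 100 MB partitions and write each partition out as a Parquet file:

import sys
import dask.dataframe as dd

csv_path, parquet_path = sys.argv[1], sys.argv[2]

# one partition per ~100 MB of CSV input
ddf = dd.read_csv(csv_path, blocksize="100MB")

# writes one Parquet file per partition into the output directory
ddf.to_parquet(parquet_path)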

mrpowers-benchmarks's Issues

Create h2o data files in a scalable manner

The h2o data creation R scripts work for small datasets, but aren't scalable. Here are the results on my machine:

Rscript groupby-datagen.R 1e7 1e2 0 0 # 7 seconds
Rscript groupby-datagen.R 1e8 1e1 5 1 # 3.5 minutes
Rscript groupby-datagen.R 1e9 1e1 5 1 # errors out, presumably due to a memory error

@ghuls created a sed script that should be more portable and easier to parallelize than the Rscript. I'm assuming we can parallelize the sed script to generate multiple CSV files in parallel, so this will scale. For example, have groupby-datagen.R 1e9 1e1 5 1 output fifty 1 GB files instead of a single 50 GB file.

@ghuls - can you create a PR with your sed code? Any suggestions on how to parallelize it? I can probably figure out how to parallelize it with Dask if that's the best option. Thanks!
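
In the meantime, here's a hedged sketch of one way the chunked generation could be parallelized with plain Python. generate_chunk.sh is a hypothetical stand-in for the sed-based generator discussed above (it's not part of this repo); the idea is just that each worker writes its own ~1 GB CSV file, so no coordination between chunks is needed:

import subprocess
from concurrent.futures import ProcessPoolExecutor

N_CHUNKS = 50

def generate_chunk(i):
    out = f"tmp/groupby_1e9_part_{i:02d}.csv"
    # hypothetical per-chunk generator command, not part of this repo
    subprocess.run(["bash", "generate_chunk.sh", str(i), out], check=True)
    return out

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        files = list(pool.map(generate_chunk, range(N_CHUNKS)))
    print(files)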

Rearrange the README

Move the developer focused stuff to a separate file.

Focus the README more on users interested in results, not folks who want to clone the repo and run the commands themselves.

import * is not a good practice

from dask_h2o_groupby_queries import *

from library import * is not good practice. It makes it hard to know which functions or classes come from that module.

I'd recommend choosing an alias, and doing something like this:

import dask_h2o_groupby_queries as ho_queries

or some other alias and then doing:

ho_queries.benchmark()

This way it's clear where the benchmark function comes from.
