pybm's People

Contributors: nicholasjng
pybm's Issues

Support execution of benchmark targets as modules

Currently, the runner dispatches benchmarks by calling the .py files with Python inside a subprocess:

pybm/pybm/runners/base.py

Lines 110 to 111 in 721c293

# supply the ref by default.
command = [python, benchmark, f"--benchmark_context=ref={ref}"]

This complicates working on code that is not installed into the current virtual environment (see the path setup in the sum example).

Since not every project has a setup.py (nor needs one), there should be an option to run the target as a module, i.e. dispatch via

python -m path.to.benchmark [<options>].

This should be selectable via a toggle / switch, ideally as a command line option to pybm run.

Some additional notes:

Notice the dotted syntax above, since given / discovered file paths will need to be translated into valid module paths via:

  1. Stripping the .py suffix, and
  2. Substituting the slashes with dots.

Substituting slashes in a path via direct string replacement is not portable, since Windows uses backslashes. A portable solution could be pathlib's as_posix API.
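The two translation steps above can be sketched with pathlib; the function name is illustrative:

```python
from pathlib import Path

def to_module_path(file_path: str) -> str:
    # 1. Strip the ".py" suffix via with_suffix("").
    # 2. Substitute slashes with dots; as_posix() guarantees forward
    #    slashes even on Windows, so the replacement below is portable.
    return Path(file_path).with_suffix("").as_posix().replace("/", ".")
```

For example, `to_module_path("path/to/benchmark.py")` yields `path.to.benchmark`, suitable for `python -m`.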

While this is not strictly necessary (the hacks from the sum example do work), such path workarounds are annoying to research and get right, so this is potentially a big quality-of-life improvement.

Pass command line arguments directly from runner / builder / reporter class

One of pybm's defining strengths is supposed to be its extensibility for virtual environment building, benchmark running, and reporting. Right now, at most one or two classes of each kind are implemented, so there is not much need for dynamic arguments yet (one could also say they are hardcoded for the time being).

Yet, with more components this will change, and a general mechanism is needed to expose the right additional arguments to the CLI command's ArgumentParser. The method should be mandatory to implement for all subclasses; that way, authors are nudged to think about additional functionality tailored to their own use case.

Proposal:

  • Expose an add_arguments method to the EnvBuilder, BenchmarkRunner and Reporter classes.
  • Move existing command line arguments into the respective subclasses' add_arguments methods.
  • Create an argument group in argparse inside the CLI command that captures all the specific arguments and groups them together semantically (useful when calling -h/--help).
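A rough sketch of the proposal; all class and option names besides add_arguments are illustrative assumptions, not pybm's actual API:

```python
import argparse
from abc import ABC, abstractmethod

class BenchmarkRunner(ABC):
    # hypothetical base class; subclasses must register their own options
    @abstractmethod
    def add_arguments(self, group: argparse._ArgumentGroup) -> None:
        """Register runner-specific CLI options on the given group."""

class TimeitRunner(BenchmarkRunner):
    def add_arguments(self, group):
        group.add_argument("--repetitions", type=int, default=5,
                           help="number of benchmark repetitions")

def build_parser(runner: BenchmarkRunner) -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="pybm run")
    # group the component-specific options so -h/--help shows them together
    group = parser.add_argument_group("runner-specific options")
    runner.add_arguments(group)
    return parser
```

The argument group keeps the dynamic options semantically separate from pybm's own flags in the help output.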

Add config overrides to `pybm init`

Currently, pybm init produces a barebones default config without any user input. The first time the user can actually change values is after the root environment's virtual environment has been created or linked.

However, there is good reason to let users specify overrides before that: virtual environment creation arguments, for example, should also apply when creating the root environment (and other environments, for that matter).

Clearly, adding a dedicated command line option for every config value is neither scalable nor future-proof. A possible solution could be an -O/--override switch, or a path to a file of overridden options and their values.
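Parsing a repeated, hypothetical -O/--override switch could look roughly like this; the key=value format is an assumption:

```python
def parse_overrides(pairs):
    # Each element has the form "section.option=value", e.g. as collected
    # by argparse from a repeated -O/--override flag (hypothetical).
    overrides = {}
    for pair in pairs:
        key, sep, value = pair.partition("=")
        if not sep:
            raise ValueError(f"invalid override {pair!r}, expected key=value")
        overrides[key.strip()] = value.strip()
    return overrides
```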

Add tab completion

A better CLI experience can be provided by enabling tab completion. Ideally, this should work both on Windows and macOS/Linux.

Scope: autocompletion could be useful in the following scenarios:

  • Command names + option names
  • Benchmark environment names for env management commands other than create
  • Git branch names / tags
  • Refs in pybm compare

Options:

  1. A GNU readline-based approach. This would mean additional implementation effort, especially on Windows.
  2. dvc's shtab, in library mode. Easier to integrate due to its argparse support, but it does not support Windows.
  3. ???
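Option 1 could be prototyped on macOS/Linux with the stdlib readline module; the command list below is illustrative, in practice it would be discovered from the registered CLI commands:

```python
import readline

# illustrative subset of pybm's command names
COMMANDS = ["apply", "compare", "env", "init", "report", "run"]

def complete(text: str, state: int):
    # readline calls this repeatedly with state = 0, 1, 2, ...
    # until None is returned; each call yields one candidate.
    matches = [c for c in COMMANDS if c.startswith(text)]
    return matches[state] if state < len(matches) else None

readline.set_completer(complete)
readline.parse_and_bind("tab: complete")
```

This covers command-name completion only; environment names, branches, and refs would need dynamic candidate sources.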

Add generalized IO component

Requirements:

  • Store benchmark result data.
  • Load benchmark result data.
  • Calculate and visualize results.

With the last point implemented, the reporter component that exists now would be replaced by a pybm.io module.

Upcoming IO components may include:

  • JSON (already exists) -> console
  • Database integrations (SQL/noSQL)
  • Pandas (+existing pandas integrations)

Reduce repetitions to statistics in `pybm report`

The Google Benchmark runner, and eventually also the timeit runner, save the results of each repetition into a separate JSON object. Contextual information such as the target name is always the same, and repetitions are marked by a running index (called repetition_index in Google Benchmark).

Since reporting raw repetitions is confusing, they should be reduced into descriptive statistics (mean, standard deviation, relative error, etc.). The user should also have fine-grained control over which statistics are reported.

Implementation detail steps:

  1. Implement standard reducers such as mean, standard deviation, median, etc. in a submodule (maybe pybm.runners.reducers?).
  2. Insert a reducer step into the report pipeline, most likely right after loading.
  3. Pop standard reducer values (mean, stddev, etc.) and format them specially (e.g. µ +/- stddev) for display.
  4. Add the option to specify reducers as a reporter config value.
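Step 1 could be sketched with the stdlib statistics module; the real_time key mirrors Google Benchmark's JSON output, the rest is illustrative:

```python
import statistics

# standard reducers, keyed by the name a reporter config would use
REDUCERS = {
    "mean": statistics.mean,
    "stdev": statistics.stdev,
    "median": statistics.median,
}

def reduce_repetitions(repetitions, which=("mean", "stdev")):
    # repetitions: the per-repetition JSON objects of a single target,
    # each carrying e.g. a "real_time" measurement
    times = [rep["real_time"] for rep in repetitions]
    return {name: REDUCERS[name](times) for name in which}
```

The `which` tuple corresponds to step 4: a reporter config value selecting the desired statistics.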

Enable checkout-only mode in `pybm run`

While the BenchmarkEnvironment abstraction is necessary to cover benchmarking setups in full (custom requirements, Python versions, etc.), a good number of users presumably measure code performance between two git refs with an identical setup. For them, a benchmark workflow based on git checkouts makes sense; it also avoids creating extra worktrees that may not be needed.

Proposed solution: Add a --checkout switch to pybm run that covers checkout-based benchmarking. Then, instead of creating an environment, the environment's information is changed with git checkout commands.

Details:

  • Pick up the environment's changed information with a sync function
  • Add a git checkout context manager that reverts checkouts after benchmark
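The checkout context manager from the second bullet could be sketched like this; the run parameter is injected only to keep the sketch testable without a real repository:

```python
import subprocess
from contextlib import contextmanager

@contextmanager
def git_checkout(ref: str, previous: str, run=subprocess.run):
    # check out the ref to benchmark ...
    run(["git", "checkout", ref], check=True)
    try:
        yield
    finally:
        # ... and revert the checkout even if the benchmark fails
        run(["git", "checkout", previous], check=True)
```

The sync function from the first bullet would run inside the `with` block, picking up the environment's changed information after each checkout.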

Split the `pybm env` command into multiple commands

Revert to the previous state of affairs, where each environment operation had its own top-level command. This would save some time, especially with the tab completion added in #46.

Names could be:

  1. pybm create <-> pybm env create
  2. pybm destroy <-> pybm env delete
  3. pybm (in|unin)stall <-> pybm env (in|unin)stall
  4. pybm switch <-> pybm env switch

Add global configuration file

With single-option overrides added in #20, the ability to customize a configuration at creation time was finally addressed. However, that solution does not scale well: many overridden arguments result in long, tedious CLI calls.

Under the assumption that a fair number of these overrides stem from user preferences over the defaults, there should be a way to specify a "global" configuration file containing persistent overridden values, similar to a global git config or gitignore.

The features required to allow this way of setting up a better configuration flow are:

  1. Saving the config in a predefined location (candidate: .config/pybm/config.yaml) under the user's home directory (Linux/macOS: $HOME, Windows: %USERPROFILE%).
  2. Loading the overridden configuration values and setting them on a default config.
  3. Adding a command-line switch to selectively skip applying global configuration settings on pybm init calls (maybe --skip-global?).
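A minimal sketch of points 1 and 2; the flat-merge semantics are an assumption (nested config sections would need a recursive merge):

```python
from pathlib import Path

def global_config_path() -> Path:
    # candidate location under the user's home directory;
    # Path.home() resolves $HOME on Linux/macOS and %USERPROFILE% on Windows
    return Path.home() / ".config" / "pybm" / "config.yaml"

def apply_overrides(defaults: dict, overrides: dict) -> dict:
    # set the user's persistent overridden values on top of a default config
    merged = dict(defaults)
    merged.update(overrides)
    return merged
```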

Move away from YAML to TOML

Quite a few posts in the debate consider TOML a superior format to YAML in the context of config files. This is also reinforced empirically by the fact that contemporary Python packaging relies on TOML files as manifests.

For easier configuration management, it could be useful to move to TOML. More people may end up being familiar with TOML, and it has first-class support for datetimes. This move would be sensible before tackling the global configuration ticket #21.

Stop erroring on existing pybm config

If we want to make pybm a successful GitHub Action, it needs the ability to grab overrides for config values. Since global configs are not an option on CI runners, a local config can be committed to the git repository instead. Calling pybm init in this case should result in a no-op rather than crash the whole benchmark before it has even run.

Implement `pybm apply`

The initial launch happened without pybm apply; now it is time to add it.

Command spec:
pybm apply -f <path/to/yaml> [options...]

The syntax is modeled after that of kubectl apply.

Details and necessary features:

  • YAML parsing, validation.
  • Progress printing / status information.

It could be wise to refactor the printing statements from other parts of the benchmarking pipeline before implementing this command.
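The validation step could be sketched on the parsed YAML document; the schema keys below are purely hypothetical:

```python
# hypothetical minimal schema for a pybm apply manifest
REQUIRED_KEYS = {"environments", "benchmarks"}

def validate_spec(spec: dict) -> dict:
    # run after YAML parsing (e.g. via PyYAML) to check the manifest shape
    missing = REQUIRED_KEYS - spec.keys()
    if missing:
        raise ValueError(f"apply spec is missing keys: {sorted(missing)}")
    if not isinstance(spec["environments"], list):
        raise TypeError("'environments' must be a list")
    return spec
```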

Implement Poetry-based venv building

Since Poetry is a popular way of managing Python projects, it would be nice to support it so as not to trip up folks who use Poetry for virtual environment management.

Main points of emphasis are going to be locating Python executables from created venvs (Poetry hides that away from the user) and linkage.
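Locating the interpreter could work via Poetry's own CLI; a sketch assuming a POSIX venv layout, with run injected to keep it testable without Poetry installed:

```python
import subprocess

def poetry_python(project_dir: str, run=subprocess.run) -> str:
    # `poetry env info --path` prints the root of the project's venv,
    # which Poetry otherwise hides away from the user
    result = run(["poetry", "env", "info", "--path"], cwd=project_dir,
                 capture_output=True, text=True, check=True)
    venv_root = result.stdout.strip()
    # POSIX layout; on Windows this would be Scripts\python.exe instead
    return venv_root + "/bin/python"
```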

Fix passing builder CLI options by splitting command list on `--`

Presently, passing command line options to pybm env create/delete results in errors because argparse fails to parse these unknown arguments. As such, we need a different way of passing them to the builder, e.g. by separating CLI argument blocks with a -- token, similar to git.
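The splitting itself is straightforward; a sketch:

```python
def split_args(argv: list) -> tuple:
    # everything before the first "--" goes to pybm's own parser,
    # everything after is forwarded verbatim to the builder
    if "--" in argv:
        idx = argv.index("--")
        return argv[:idx], argv[idx + 1:]
    return argv, []
```

Usage: split sys.argv[1:] first, then hand only the leading part to argparse.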

Add documentation for most parts of `pybm`

There should be extensive documentation on pybm for users attempting to customize or extend the behavior.

Where documentation is needed most (sorted by prio):

  • Reporter classes
  • Runner classes
  • Builder classes
  • Config
  • Utilities

This issue tracks documentation efforts for all of the above.

Canonicalize benchmark suite across all environments

Currently, pybm searches for the benchmarks on every branch and executes what it finds in each worktree. Depending on the commit history of the repository, different branches may not have the same set of benchmarks (or any benchmarks at all).

This issue can come up specifically when working on protected repositories; when you write an ad-hoc benchmark testing an improvement on a development branch, the main branch of the repository might not have a dedicated benchmark for this.

There are multiple ways to go about this; for instance, there could be a "single source of truth" approach, where the benchmark suite from one chosen "reference" branch is checked out into all other branches. This can work entirely via subprocess, too (git examples are readily found on Stack Overflow), but these checkouts need to be reverted after the benchmark in order not to mess up the git workspaces.

The implementation could be a command-line switch for pybm run, since this is a situational feature that should be available for each run separately. Additionally, some extra git functionality for checkouts via subprocess needs to be implemented, with extra care for the subsequent teardown (sounds like a job for a context manager).

Create a GitHub Action for easy benchmarking in GitHub CI

Main points and obstacles:

  • Detecting checkout mode vs. environment mode (whether benchmarks can be fast-forwarded) from requirement file diffs
  • Fixing pybm init error status (tracked in #45)
  • Report in the runner's stdout for easy inspection in the logs (via console)

YAML arguments:

  • benchmark resource (folder, file, glob), required
  • refs, default main/master (?) and the compare branch -> how do we get this?
  • mode: checkout vs environment, default checkout -> infer this mode from state of requirement files
  • Max allowed reduction, default 20% (?)
  • Allowed total number of performance regressions, default 5 (?) -> -1 meaning as many regressions allowed as possible
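The argument list could translate into an action manifest roughly like this; all input names and defaults are assumptions:

```yaml
# hypothetical action.yml input block
inputs:
  benchmarks:
    description: "Benchmark resource (folder, file, or glob)"
    required: true
  refs:
    description: "Refs to benchmark (base ref and compare branch)"
    default: "main"
  mode:
    description: "checkout or environment; inferred from requirement diffs if unset"
    default: "checkout"
  max-reduction:
    description: "Maximum allowed performance reduction in percent"
    default: "20"
  max-regressions:
    description: "Allowed total number of regressions (-1 for unlimited)"
    default: "5"
```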

Implement legacy checkouts to support git 2.17.0

This came up in user feedback. Currently, git restore --source is used to check out benchmark files from other branches, which has multiple advantages over git checkout <branch> -- <path/to/file>. However, it requires a minimum of git 2.23.0, higher than what is strictly required by the essential git worktree machinery.

From a support point of view, it would be nicer to expose both options and give the user the choice between them depending on the git version, maybe as a config option.

Implementation of benchmark sourcing with git checkout could roughly look like this in pseudo-code (all git commands to be run as subprocesses):

Begin on old-ref.

  1. git checkout <new-ref> -- <path/to/file> (this adds everything to the staging area, a difference from git restore)
  2. git reset HEAD -- <path/to/file>
  3. git checkout <old-ref> -- <path/to/file>
  4. git clean -df from worktree root.

Step 2) unstages the changes, step 3) reverts the checkout, and step 4) cleans up any untracked files created in the process. Errors in restoring the old checkout have to be ignored, since git may complain about files not being present on the old ref; that is, after all, why they were sourced from a different reference in the first place.
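The four steps above can be sketched as a context manager; the run parameter is injected only to keep the sketch testable without a real repository:

```python
import subprocess
from contextlib import contextmanager

@contextmanager
def legacy_checkout(new_ref, old_ref, path, run=subprocess.run):
    # 1) source the benchmark files from the other ref (also stages them)
    run(["git", "checkout", new_ref, "--", path], check=True)
    try:
        yield
    finally:
        # 2) unstage the sourced files
        run(["git", "reset", "HEAD", "--", path], check=True)
        # 3) revert to the old ref's version; check=False ignores errors,
        #    since the files may not exist on the old ref at all
        run(["git", "checkout", old_ref, "--", path], check=False)
        # 4) remove any untracked leftovers from the worktree root
        run(["git", "clean", "-df"], check=True)
```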

Add a test suite

In order to make the development workflow more robust, a test suite should be created for some of the most delicate aspects of pybm. To name a few:

  • git worktree wrapper + utils
  • EnvBuilder
  • Runner(s)
  • Reporter(s)
  • Environment Store
  • PybmConfig

These should then be incorporated into a GitHub testing action, along with a pre-commit action for linting and typechecking as specified in the pre-commit config.

UPDATE: pre-commit hooks were merged in #10.
UPDATE 2: An end-to-end test benchmarking with checkouts was added in #15 .
