
blackbench's Introduction

Hey, nice to see you!

Hello! I’m Richard. I go by ichard26 on the internet. My pronouns are he/they.

I’m a hobby Python developer and OSS contributor. Over the years, I've served on the core team for several projects:

  • pip: the Python package installer
  • mypyc: a Python-to-C transcompiler for typed code, built on top of mypy
  • black: your friendly code formatter for Python, known for its uncompromising code style
  • bandersnatch: a simple mirror tool to generate a PEP 503 compliant index

I also serve as a moderator of the Python Discord community.

From time to time, I dabble in C and front-end web development. When I’m not doing development work, I like to take photographs, go for walks… or both at the same time! 📷

Anyhow, I hope that you're having a good day :)

blackbench's People

Contributors

dependabot[bot], ichard26


blackbench's Issues

blackbench needs more code to run Black against

At the moment of writing, blackbench doesn't come with much benchmark data to run Black against. It's basically a 21.6b0 checkout of Black's source code plus one string micro target (based on Jelle's discovery of ESP causing slowdowns). I'd appreciate more code files to throw at Black.

I'm looking for two kinds of targets: "normal" and "micro". Normal targets should represent real-world code (so the benchmarking data actually reflects real-world performance). Micro targets should be small and focused on one area of Black's formatting (they mostly exist to measure performance in a specific area, like string processing).

In terms of hard rules: normal targets shouldn't be bigger than ~2000 lines (to keep the time required to run benchmarks based on the target manageable), and micro targets shouldn't be bigger than ~400 lines.
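For reference, the size rules above could be sketched as a small helper (hypothetical, not part of blackbench) that classifies a candidate file by line count:

```python
# Hypothetical helper mirroring the size rules above; the constants come
# straight from the ~400 / ~2000 line limits in this issue.
MICRO_MAX_LINES = 400
NORMAL_MAX_LINES = 2000


def classify_target(source: str) -> str:
    """Return which kind of target a candidate file could serve as."""
    lines = source.count("\n") + 1
    if lines <= MICRO_MAX_LINES:
        # Small enough for either kind; micro targets just also need to
        # exercise one specific area of Black's formatting.
        return "micro or normal"
    if lines <= NORMAL_MAX_LINES:
        return "normal"
    return "too big"
```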

Thank you in advance!

Inject metadata into the pyperf JSON data

Soo I tried implementing this, but I ended up breaking a few integration tests because the test suite is *that* brittle (or strict, but that feels too positive for my questionable test suite :P) :/ I'm going to defer this to the alpha, where hopefully I'll find the time and energy to rewrite the integration tests to be less brittle (and just generally more maintainable).

Anyway, in terms of implementation, I'd like to see the following information injected:

  • Blackbench version;
  • Black version;
  • Task name & description; and
  • Target name & description.

Add various pre-checks to defend against user error (e.g. pyperf args, Black version, black.Mode() config)

You can configure the benchmarking parameters directly through blackbench run out.json -- $pyperf-args, which is quite useful for increasing or debugging benchmark stability. Unfortunately, messing up the pyperf arguments is quite easy and causes every single benchmark to fail ... let's catch this before any benchmark runs. Same thing with the black.Mode() configuration via --format-config.

  • black.Mode(...)
  • pyperf arguments
  • Black version & selected task

Also, if possible, try to at least warn if the current Black version is incompatible with the current task. Unfortunately, development versions make this difficult, so it's not too important if this gets deferred or never gets implemented.
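One possible shape for the black.Mode() pre-check, sketched with the mode factory as a parameter so the example doesn't assume Black's exact API; in blackbench itself, mode_factory would be black.Mode and config_str the raw --format-config value:

```python
def precheck_format_config(mode_factory, config_str: str):
    """Try building a formatting mode from the user's --format-config
    string before any benchmark runs, so a typo fails fast.

    Returns (True, mode) on success, or (False, error message) if the
    string doesn't evaluate to valid keyword arguments for the factory.
    """
    try:
        # --format-config is treated as a string of keyword arguments
        # (e.g. "line_length=79") applied to the factory.
        mode = eval(f"factory({config_str})", {"factory": mode_factory})
    except Exception as exc:
        return (False, f"Invalid --format-config: {exc!r}")
    return (True, mode)
```

The same fail-fast idea applies to the pyperf arguments: hand them to pyperf's own argument parser once, up front, and bail out with the parse error instead of letting every benchmark die on it.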

Create a release process (oh and automate it!)

While I'm not ready to release blackbench to the greater world out there, I am getting close, so it would be nice to have the infrastructure in place by then. Since this project uses flit, it's probably quite simple, but I'd like this project to be easy to maintain down the road (especially if I end up disappearing and someone else takes charge).
