
Comments (3)

LebedevRI commented on September 27, 2024

(What you generally want to look at is the two Time columns.)
Unless you specifically know that you need something different,
I'd recommend taking a look at https://github.com/google/benchmark/blob/b7ad5e04972d3bd64ca6a79c931c88848b33e588/docs/tools.md
and trying the benchmark/tools/compare.py script.


LebedevRI commented on September 27, 2024

The first repetition picks the iteration count; the other repetitions just keep using that same iteration count.

> I also assumed that mean/median etc. would report the mean/median of all repetitions, not the number of repetitions.

It is important to note that within a single repetition,
we do not record the individual time each iteration took,
only the accumulated total time of all iterations in that repetition.

So the aggregate statistics (computed over repetitions) are correctly reporting that
they were calculated from the repetition count (10, in your case),
not from the total number of iterations across all repetitions.

So everything is working as intended.
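
As a minimal sketch of the setup being described (assuming the standard Google Benchmark C++ API; BM_Example and its body are placeholders, not code from this thread):

```cpp
#include <benchmark/benchmark.h>
#include <string>

// Placeholder benchmark body; the work being measured is arbitrary.
static void BM_Example(benchmark::State& state) {
  for (auto _ : state) {
    std::string s(1000, 'x');
    benchmark::DoNotOptimize(s);
  }
}

// 10 repetitions: the first repetition picks the iteration count and the
// remaining nine reuse it. Each repetition contributes one accumulated
// Time value, so the _mean / _median / _stddev rows are computed over
// those 10 values, not over every individual iteration.
BENCHMARK(BM_Example)->Repetitions(10)->ReportAggregatesOnly(true);

BENCHMARK_MAIN();
```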


thomthom commented on September 27, 2024

I realize I've missed some key fundamentals of this tool. Thank you for the insight and for taking the time to clarify how it works.

I was doing benchmarks while working on performance improvements. I'd set my benchmarks to have a minimum run time of 2 seconds, which seemed fine at the time. Only later, when I added parallelization to my code, did the results between runs become more varied. I guess that's because, as the code started using more of multiple CPUs, I began seeing more noise from the rest of the system.

That led me to try out repetitions, but all along I was focusing on the iteration count.

Do I understand correctly that the iteration count is not the best metric to compare between runs or between builds, and that instead I should focus on the median run time of the benchmark itself, using that as the baseline for measuring performance improvements?
(Seeing how the iteration count gets locked in by the first repetition, it seems like an unstable metric to compare.)
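
A possible configuration for that comparison workflow, again only a sketch with placeholder names (BM_ProcessData stands in for the parallelized code under test):

```cpp
#include <benchmark/benchmark.h>

static void BM_ProcessData(benchmark::State& state) {
  for (auto _ : state) {
    // ... parallelized work under test would go here ...
    benchmark::ClobberMemory();
  }
}

// Run each repetition for at least 2 seconds, repeat 10 times, and report
// wall-clock time, so the median Time row is the value to compare across
// builds; the iteration count is chosen by the first repetition and is
// not a stable metric on its own.
BENCHMARK(BM_ProcessData)
    ->MinTime(2.0)
    ->Repetitions(10)
    ->UseRealTime()
    ->ReportAggregatesOnly(true);

BENCHMARK_MAIN();
```

The JSON output of such a run from each build (written with --benchmark_out=<file> --benchmark_out_format=json) can then be fed to the benchmark/tools/compare.py script mentioned earlier.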

