
benchmark-memory's Introduction

benchmark-memory


benchmark-memory is a tool that helps you benchmark the memory usage of different pieces of code. It leverages the power of memory_profiler to report the total amount of memory allocated and retained by a block, as well as the number of objects and strings allocated and retained.

Installation

Add this line to your application's Gemfile:

gem "benchmark-memory"

And then execute:

$ bundle

Or install it yourself as:

$ gem install benchmark-memory

Usage

Following the examples of the built-in Benchmark and Evan Phoenix's benchmark-ips, the most common way of using benchmark-memory is through the Benchmark.memory wrapper. An example might look like this:

require "benchmark/memory"

# First method under test
def allocate_string
  "this string was dynamically allocated"
end

# Second method under test
def give_frozen_string
  "this string is frozen".freeze
end

Benchmark.memory do |x|
  x.report("dynamic allocation") { allocate_string }
  x.report("frozen string") { give_frozen_string }

  x.compare!
end

This example tests two methods that are defined inline. Note that you don't have to define them inline; you can just as easily use a method that you require before the benchmark or anything else that you can place in a block.

When you run this example, you see the difference between the two reports:

Calculating -------------------------------------
  dynamic allocation    40.000  memsize (     0.000  retained)
                         1.000  objects (     0.000  retained)
                         1.000  strings (     0.000  retained)
       frozen string     0.000  memsize (     0.000  retained)
                         0.000  objects (     0.000  retained)
                         0.000  strings (     0.000  retained)

Comparison:
       frozen string:          0 allocated
  dynamic allocation:         40 allocated - Infx more

Reading this output shows that the "dynamic allocation" example allocates one string that is not retained outside the scope of the block. The "frozen string" example, however, does not allocate anything because it reuses the frozen string that we created during the method definition.
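A related sketch (not part of the original example, but standard Ruby behavior): the frozen_string_literal magic comment freezes every string literal in a file, so you get the same zero-allocation result without an explicit .freeze:

# frozen_string_literal: true

def give_frozen_string
  "this string is frozen" # the magic comment already froze this literal
end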

Options

There are several options available when running a memory benchmark.

Suppress all output (Quiet Mode)

Benchmark.memory(:quiet => true)

Passing a :quiet flag to the Benchmark.memory method suppresses the output of the benchmark. You might find this useful if you want to run a benchmark as part of your test suite, where outputting to STDOUT would be disruptive.
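For instance, a minimal sketch of running a benchmark silently from inside a test suite might look like this (the :quiet flag only suppresses the printed output; the reports still run):

require "benchmark/memory"

Benchmark.memory(:quiet => true) do |x|
  x.report("dynamic allocation") { "this string was dynamically allocated" }
end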

Enable comparison

Benchmark.memory do |x|
  x.compare!
end

Calling #compare! on the job within the setup block of Benchmark.memory enables the output of the comparison section of the benchmark. Without it, the benchmark suppresses this section and you only get the raw numbers output during calculation.

By default, this compares the reports by the amount of allocated memory. You can configure the comparison along two axes. The first axis is the metric, which is one of: :memory, :objects, or :strings. The second is the value, which is either :allocated or :retained.

Depending on what you're trying to benchmark, different configurations make sense. For example:

Benchmark.memory do |bench|
  bench.compare! memory: :allocated
  # or, equivalently:
  # bench.compare!
end

The purpose of the default configuration is benchmarking the total amount of memory an algorithm might use. If you're trying to improve a memory-intensive task, this is the mode you want.

An alternative comparison might look like:

Benchmark.memory do |bench|
  bench.compare! memory: :retained
end

When you're looking for a memory leak, this configuration can help you because it compares your reports by the amount of memory that the garbage collector does not collect after the benchmark.
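The same two axes apply to the object and string counts. For example, assuming the axis names combine as described above, a sketch that compares reports by the number of retained objects would look like:

Benchmark.memory do |bench|
  bench.compare! objects: :retained
end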

Hold results between invocations

Benchmark.memory do |x|
  x.hold!("benchmark_results.json")
end

Often when you want to benchmark something, you compare two implementations of the same method. This is cumbersome because you have to keep two implementations side-by-side and call them in the same manner. Alternatively, you may want to compare how a method performs on two different versions of Ruby. To make both of these scenarios easier, you can enable "holding" on the benchmark.

By calling #hold! on the benchmark, you tell it to store its results in the given file so that they can be read back in between invocations of your benchmark.

For example, imagine that you have a library that exposes a method called Stats.monthly_recurring_revenue that you want to optimize for memory usage because it keeps causing your worker server to run out of memory. You make some changes to the method and commit them to an optimize-memory branch in Git.

To test the two implementations, you could then write this benchmark:

require "benchmark/memory"

require "stats" # require your library file here
data = []       # set up the data that it will call here

Benchmark.memory do |x|
  x.report("original")  { Stats.monthly_recurring_revenue(data) }
  x.report("optimized") { Stats.monthly_recurring_revenue(data) }

  x.compare!
  x.hold("bm_recurring_revenue.json")
end

Note that the method calls are the same for both tests and that we have enabled result holding in the "bm_recurring_revenue.json" file.

You could then run the following (assuming you saved your benchmark as benchmark_mrr.rb):

$ git checkout main
$ ruby benchmark_mrr.rb
$ git checkout optimize-memory
$ ruby benchmark_mrr.rb

The first invocation of ruby benchmark_mrr.rb runs the benchmark in the "original" entry using the code on your main Git branch. The second invocation runs the benchmark in the "optimized" entry using the code on your optimize-memory Git branch, then collates and compares the two results to show you the difference between them.

When enabling holding, the benchmark writes to the file passed into the #hold! method. After you run all of the entries in the benchmark, the benchmark automatically cleans up its log by deleting the file.

Supported Ruby Versions

This library aims to support and is tested against the following Ruby versions:

  • Ruby 2.5
  • Ruby 2.6
  • Ruby 2.7
  • Ruby 3.0
  • Ruby 3.1
  • Ruby 3.2
  • Ruby 3.3

If something doesn't work on one of these versions, it's a bug.

This library may inadvertently work (or seem to work) on other Ruby versions; however, we only provide support for the versions listed above.

If you would like this library to support another Ruby version or implementation, you may volunteer to be a maintainer. Being a maintainer entails making sure all tests run and pass on that implementation. When something breaks on your implementation, you will be responsible for providing patches in a timely fashion. If critical issues for a particular implementation exist at the time of a major release, we may drop support for that Ruby version.

Versioning

This library aims to adhere to Semantic Versioning 2.0.0. Report violations of this scheme as bugs. Specifically, if we release a minor or patch version that breaks backward compatibility, that version should be immediately yanked and/or a new version should be immediately released that restores compatibility. We will only introduce breaking changes to the public API with new major versions. As a result of this policy, you can (and should) specify a dependency on this gem using the Pessimistic Version Constraint with two digits of precision. For example:

spec.add_dependency "benchmark-memory", "~> 0.1"

Acknowledgments

This library wouldn't be possible without two projects and the people behind them:

  • Sam Saffron's memory_profiler does all of the measurement of the memory allocation and retention in the benchmarks.
  • I based much of the code on Evan Phoenix's benchmark-ips project, since it has a clean base from which to work and a logical organization. I also aimed for feature and DSL parity with it because I really like the way it works.

License

The gem is available as open source under the terms of the MIT License.

benchmark-memory's People

Contributors

alexwayfer, dblock, dependabot[bot], leonovk, michaelherold, nirebu, rzane


benchmark-memory's Issues

Start measuring GC pressure

Total memory usage can't show the whole picture, so we should measure GC pressure as well. At the very least, we want to know how much time is spent in garbage collection during a benchmark so we can compare that with the amount of memory allocated and retained.

I'm not sure what information is available in the built-in Ruby profiling tools. I know GC.stat shows some detailed information about the garbage collector. There's also GC::Profiler but I haven't dug in enough to figure out what's useful there.
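As a rough, standalone illustration (not tied to this gem's internals), GC::Profiler can already report how much time a piece of code spends in garbage collection:

GC::Profiler.enable
10_000.times { "allocate a string #{rand}" }  # generate some garbage
GC.start
puts GC::Profiler.total_time                  # seconds spent in GC while the profiler was enabled
GC::Profiler.disable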

Switch to GitHub Actions for CI

Travis is being really slow lately so I think it's time to move.

Things that I would like to see:

  1. Test failures block the build and run on the current Travis matrix
  2. Linting failures (i.e. Rubocop) block the build and run once

(@AlexWayfer mentioned being interested in this, but if someone else wants to do it, I'm open to that.)

Order comparison results by baseline similar to benchmark-ips

When I'm writing benchmarks, I almost always want to compare against an existing result. benchmark-ips supports this with:

Benchmark.ips do |x|
  # ...
  x.compare! order: :baseline
end

I'd love to have something similar for benchmark-memory, so that the comparison values (i.e. 1.54x) are relative to the first defined report.
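A hypothetical sketch of what the equivalent call could look like here (the order: option does not exist in benchmark-memory yet; the name simply mirrors benchmark-ips):

Benchmark.memory do |x|
  x.report("baseline")  { "original implementation" }
  x.report("candidate") { "new implementation" }

  x.compare! order: :baseline # hypothetical: compare everything against the first report
end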

Ractors benchmark memory seg fault

Hello, thanks for this gem. I am trying to set up a memory benchmark with the new Ruby Ractors and it returns a segfault; maybe it is a problem with Ruby itself or memory_profiler.

Ruby versions tested: 3.1 and 3.2.2

# frozen_string_literal: true
require 'benchmark'
require 'benchmark/memory' # gem install benchmark-memory

def factorial(n)
  n == 0 ? 1 : n * factorial(n - 1)
end

Benchmark.memory do |x|
  x.report('ractors:') do
    ractors = []
    4.times do
      ractors << Ractor.new do
        1000.times { factorial(1000) }
      end
    end
    # take response from ractor, so it will actually execute
    ractors.each(&:take)
  end
end

Calculating -------------------------------------
main.rb:104: warning: Ractor is experimental, and the behavior may change in future versions of Ruby! Also there are many implementation issues.
[BUG] Segmentation fault at 0x0000000000000000
ruby 3.2.2 (2023-03-30 revision e51014f9c0) [x86_64-linux]

-- Control frame information -----------------------------------------------

Calculate and show churn?

I toyed with the idea of calculating memory "churn", which I define as the ratio of retained memory to allocated memory. This is trivial to calculate but I'm not sure whether this is a useful metric.

I'd like some discussion around the usefulness of churn to help decide whether to include it or not.
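For discussion, a hedged sketch of computing that ratio directly with memory_profiler (assuming its report object exposes total_allocated_memsize and total_retained_memsize):

require "memory_profiler"

report = MemoryProfiler.report do
  100.times { "string #{rand}" }
end

allocated = report.total_allocated_memsize
retained  = report.total_retained_memsize

# churn = retained / allocated; 0.0 means everything allocated here was collectible
churn = allocated.zero? ? 0.0 : retained.to_f / allocated
puts format("churn: %.3f (%d / %d bytes)", churn, retained, allocated)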

Clean up output

Currently, the output looks a little wonky because we're showing object counts and string counts as floating point numbers so they are always displayed with three zeros after the decimal point. I originally thought it would be nice to keep the same output as benchmark-ips so I went with it.

After further reflection, I think we should come up with our own display. I like the collapse of large memory values into larger units so I think that should stay ... but we should truncate any .000 measurements when the allocations are small enough to be reported in bytes.

The trick here will be making the display of mixed units look nice. I'd also ideally like to keep the report at or around 49 characters, but I'm willing to relax that to 79 characters if need be.

Consider releasing

Hey, the fact that benchmark-memory (from RubyGems) still has memory_profiler ~> 0.9 in its dependencies is really cumbersome (it conflicts with other gems). The last release was in 2017; do you plan to cut a new release at some point?
