AllocCheck.jl's People

Contributors

baggepinnen, gbaraldi, masonprotter, prbzrg, topolarity


AllocCheck.jl's Issues

On Julia v1.11, non-allocating internal functions are reported as allocating

On nightly Julia, f_noalias! is reported as allocating:

function f_noalias!(x, y)
  Base.mightalias(x, y) && throw("No aliasing allowed!")
  x .= y
end
using AllocCheck
versioninfo()
check_allocs(f_noalias!, Tuple{Vector{Int}, Vector{Int}})

I initially reported this to Julia; however, vtjnash says that at least one of the two internal functions that AllocCheck blames for the allocations does not allocate:

jl_genericmemory_copyto doesn't allocate, so you should file this on AllocCheck instead

[Feature] Warning output instead of error

In the current example, allocations trigger an error. However, this may not always be desired, so I propose a configuration option to emit a warning instead.
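A warning mode could also be layered on top of the current throwing behavior. A minimal sketch, assuming `AllocCheck.AllocCheckFailure` is the exception type (it carries the `errors` field seen elsewhere in this tracker); `warn_allocs` is a hypothetical helper, not package API:

```julia
using AllocCheck

@check_allocs multiply(x, y) = x * y

# Run a checked function, but downgrade allocation failures to a warning.
function warn_allocs(f, args...)
    try
        f(args...)
    catch err
        err isa AllocCheck.AllocCheckFailure || rethrow()
        @warn "allocations detected" errors = err.errors
        nothing
    end
end

warn_allocs(multiply, rand(3, 3), rand(3, 3))  # warns instead of erroring
```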

julia> using AllocCheck

julia> @check_allocs multiply(x,y) = x * y
multiply (generic function with 1 method)

julia> multiply(1.5, 2.5) # call automatically checked for allocations
3.75

julia> multiply(rand(3,3), rand(3,3)) # result matrix requires an allocation
ERROR: @check_alloc function encountered 1 errors (1 allocations / 0 dynamic dispatches)

Error from within AllocCheck when calling @check_allocs function

I really like the idea of this package, and my first thought was to give it a whirl with my KangarooTwelve package (v1.0.0), since that has a few methods that should never allocate. However, it looks like I've run into a bug within AllocCheck? I'm using Julia 1.10-rc1 for reference.

julia> using KangarooTwelve

julia> using AllocCheck

julia> @check_allocs k12s(a, b) = KangarooTwelve.k12_singlethreaded(a, b)
k12s (generic function with 1 method)

julia> k12s(UInt8[], UInt8[])
ERROR: type DynamicDispatch has no field name
Stacktrace:
  [1] getproperty
    @ ./Base.jl:37 [inlined]
  [2] ==(self::AllocCheck.DynamicDispatch, other::AllocCheck.DynamicDispatch)
    @ AllocCheck ~/.julia/packages/AllocCheck/xTVrb/src/types.jl:38
  [3] isequal(x::AllocCheck.DynamicDispatch, y::AllocCheck.DynamicDispatch)
    @ Base ./operators.jl:133
  [4] ht_keyindex(h::Dict{Any, Nothing}, key::AllocCheck.DynamicDispatch)
    @ Base ./dict.jl:275
  [5] haskey
    @ Base ./dict.jl:569 [inlined]
  [6] in
    @ Base ./set.jl:92 [inlined]
  [7] (x::AllocCheck.DynamicDispatch, itr::Set{Any})
    @ Base ./operators.jl:1304
  [8] _unique!(f::typeof(identity), A::Vector{Any}, seen::Set{Any}, current::Int64, i::Int64)
    @ Base ./set.jl:338
  [9] _unique!(f::typeof(identity), A::Vector{Any}, seen::Set{AllocCheck.DynamicDispatch}, current::Int64, i::Int64)
    @ Base ./set.jl:346
 [10] unique!(f::typeof(identity), A::Vector{Any}; seen::Nothing)
    @ Base ./set.jl:331
 [11] unique!
    @ ./set.jl:319 [inlined]
 [12] _unique!
    @ ./set.jl:357 [inlined]
 [13] unique!
    @ ./set.jl:423 [inlined]
 [14] find_allocs!(mod::LLVM.Module, meta::@NamedTuple{entry::LLVM.Function, compiled::Dict{Any, Any}}; ignore_throw::Bool)
    @ AllocCheck ~/.julia/packages/AllocCheck/xTVrb/src/AllocCheck.jl:165
 [15] (::AllocCheck.var"#22#26"{GPUCompiler.CompilerJob{…}, Bool})(ctx::LLVM.Context)
    @ AllocCheck ~/.julia/packages/AllocCheck/xTVrb/src/compiler.jl:118
 [16] JuliaContext(f::AllocCheck.var"#22#26"{GPUCompiler.CompilerJob{…}, Bool})
    @ GPUCompiler ~/.julia/packages/GPUCompiler/2mJjc/src/driver.jl:47
 [17] compile
    @ ~/.julia/packages/AllocCheck/xTVrb/src/compiler.jl:113 [inlined]
 [18] actual_compilation(cache::Dict{…}, src::Core.MethodInstance, world::UInt64, cfg::GPUCompiler.CompilerConfig{…}, compiler::AllocCheck.var"#compile#25"{…}, linker::AllocCheck.var"#link#27"{…})
    @ GPUCompiler ~/.julia/packages/GPUCompiler/2mJjc/src/execution.jl:125
 [19] cached_compilation(cache::Dict{…}, src::Core.MethodInstance, cfg::GPUCompiler.CompilerConfig{…}, compiler::Function, linker::Function)
    @ GPUCompiler ~/.julia/packages/GPUCompiler/2mJjc/src/execution.jl:103
 [20] macro expansion
    @ ~/.julia/packages/AllocCheck/xTVrb/src/compiler.jl:145 [inlined]
 [21] macro expansion
    @ ./lock.jl:267 [inlined]
 [22] compile_callable(f::var"###k12s#233", tt::Type{Tuple{Vector{UInt8}, Vector{UInt8}}}; ignore_throw::Bool)
    @ AllocCheck ~/.julia/packages/AllocCheck/xTVrb/src/compiler.jl:106
 [23] compile_callable
    @ ~/.julia/packages/AllocCheck/xTVrb/src/compiler.jl:103 [inlined]
 [24] k12s(a::Vector{UInt8}, b::Vector{UInt8})
    @ Main ./REPL[3]:140
 [25] top-level scope
    @ REPL[4]:1
Some type information was truncated. Use `show(err)` to see complete types.

@check_allocs interacts badly with splatted args and kwargs

I ran into a quick issue in one of the first few tests of @check_allocs I did. I think the MWE is pretty self-explanatory.

julia> @check_allocs g(args...; kwargs...) = f(args...; kwargs...)
ERROR: syntax: invalid "..." on non-final argument around /home/tec/.julia/packages/AllocCheck/xTVrb/src/macro.jl:153
Stacktrace:
 [1] top-level scope
   @ REPL[2]:1

Interestingly, it seems fine if I only splat args or kwargs:

julia> @check_allocs g(; kwargs...) = f(; kwargs...)
g (generic function with 1 method)

julia> @check_allocs g(args...) = f(args...)
g (generic function with 2 methods)
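Until the macro supports combined splats, one possible workaround is to apply `@check_allocs` to an inner method with explicit arguments and keep the splatting in a thin forwarding wrapper. This is only a sketch; `f` and the names below are illustrative:

```julia
using AllocCheck

f(x, y; scale = 1) = (x + y) * scale

# Checked inner method with a concrete argument list...
@check_allocs _g(x, y; scale = 1) = f(x, y; scale)

# ...and an unchecked wrapper that does the splatting.
g(args...; kwargs...) = _g(args...; kwargs...)

g(1.0, 2.0; scale = 2)  # goes through the checked inner method
```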

ERROR: '@check_allocs' not documentable

using AllocCheck

"""
swap two values from vector v at indices i and j
"""
@check_allocs swap!(v::AbstractVector, i, j) = begin
  v[i], v[j] = v[j], v[i]
  v
end
$ julia bug.jl
ERROR: LoadError: cannot document the following expression:

#= [...]/bug.jl:6 =# @check_allocs swap!(v::AbstractVector, i, j) = begin
            #= [...]/bug.jl:6 =#
            #= [...]/bug.jl:7 =#
            (v[i], v[j]) = (v[j], v[i])
            #= /home/tb/Downloads/bug.jl:8 =#
            v
        end

'@check_allocs' not documentable. See 'Base.@__doc__' docs for details.

Stacktrace:
 [1] error(::String, ::String)
   @ Base ./error.jl:44
 [2] top-level scope
   @ ~/Downloads/bug.jl:3
in expression starting at [...]/bug.jl:3
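Until the macro forwards docstrings via `Base.@__doc__`, a possible workaround is to define the method first and attach the docstring afterwards with `@doc`:

```julia
using AllocCheck

@check_allocs swap!(v::AbstractVector, i, j) = begin
    v[i], v[j] = v[j], v[i]
    v
end

# Attach the docstring after the fact, sidestepping the macro's
# lack of @__doc__ support.
@doc """
    swap!(v, i, j)

Swap two values of vector `v` at indices `i` and `j`.
""" swap!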

Allocation types are sometimes bad on Julia 1.10

When using Julia 1.10 (beta3):

julia> check_allocs(*, (Matrix{Float64},Matrix{Float64}))[end]
Allocation of Any in /cache/build/default-amdci5-3/julialang/julia-release-1-dot-10/usr/share/julia/stdlib/v1.10/LinearAlgebra/src/matmul.jl:970
  | throw(DimensionMismatch(lazy"A has size $(size(A)), B has size $(size(B)), C has size $(size(C))"))

Stacktrace:
 [1] matmul2x2!(C::Matrix{Float64}, tA::Char, tB::Char, A::Matrix{Float64}, B::Matrix{Float64}, _add::LinearAlgebra.MulAddMul{true, true, Bool, Bool})
   @ LinearAlgebra ~/.julia/juliaup/julia-1.10.0-beta2+0.x64.linux.gnu/share/julia/stdlib/v1.10/LinearAlgebra/src/matmul.jl:970
...

versus Julia master (f919e8f16c):

julia> check_allocs(*, (Matrix{Float64},Matrix{Float64}))[end]
Allocation of DimensionMismatch in /home/topolarity/repos/julia/usr/share/julia/stdlib/v1.11/LinearAlgebra/src/matmul.jl:970
  | throw(DimensionMismatch(lazy"A has size $(size(A)), B has size $(size(B)), C has size $(size(C))"))

Stacktrace:
 [1] matmul2x2!(C::Matrix{Float64}, tA::Char, tB::Char, A::Matrix{Float64}, B::Matrix{Float64}, _add::LinearAlgebra.MulAddMul{true, true, Bool, Bool})
   @ LinearAlgebra ~/repos/julia/usr/share/julia/stdlib/v1.11/LinearAlgebra/src/matmul.jl:970
   ...

Call-site macro

Came up on discourse.

Would be nice to have a macro that is similar to @code_* and others and that can be used like

@check_allocs multiply(rand(10,10), rand(10,10))

rather than having to augment the method definition (and run the allocation check on every call). I assume this is straightforward given that there already is the check_allocs function that works on the function signature alone.
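For the one-off case, the existing `check_allocs` function already works, at the cost of spelling out argument types instead of example values; a call-site macro could presumably expand to something like this by splicing in `typeof` of each argument:

```julia
using AllocCheck

multiply(x, y) = x * y

# One-off static check against concrete argument types.
results = check_allocs(multiply, (Matrix{Float64}, Matrix{Float64}))
isempty(results)  # false: the result matrix must be allocated
```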

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

Merge allocation sites per callstack

In the current version, we show all allocations that a function, and its callees, might perform. But that can unnecessarily expose internals the user might not care about. So maybe we should have an option to merge them by call site, so that if a function call potentially does 5 allocations, we show a single allocation site at the top level.

design of `@check_allocs` for static compilation

The way @check_allocs currently works is unintuitive: it is a static analysis, but it requires you to actually call the function. Of course, we need that to pick up the actual argument types, and it works fine for running tests, but we will need something compatible with a static-compilation workflow. For example, a macro that records the method in a list somewhere; then, if you compile the code with PackageCompiler or StaticCompiler, it calls check_allocs on all specializations to report errors before generating the output. We can use this issue to hash out exactly how it should work.

Add documentation to clarify that calling an allocation-free function can still allocate

Although e.g. check_allocs(foo, (Ref{Int64},)) may report no allocations, it is still possible for a call to it to require an allocation (or indeed for every possible call to it to require an allocation, for ABI reasons). To make the problem worse, once solution (1) from #30 is implemented, there will be significant dynamic-dispatch overhead plus an allocation added to every AllocCheck.jl function call on Julia <= 1.9.

Based on discussions with @baggepinnen, this shouldn't be a showstopper since a bounded amount of one-time allocation on function entry is acceptable if users are careful about how they structure their application code.

However, it's still very counter-intuitive so it's probably worth having some targeted explanation in the docs.

Check dynamic dispatch

Maybe add a check_dynamic_dispatch function that only checks the presence of dynamic dispatch?
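One plausible shape for this is a filter over the existing `check_allocs` results. `AllocCheck.DynamicDispatch` is an internal type (it appears in stack traces elsewhere in this tracker), so this is a sketch rather than stable API:

```julia
using AllocCheck

# Keep only the dynamic-dispatch findings from a full check_allocs run.
check_dynamic_dispatch(f, types) =
    filter(r -> r isa AllocCheck.DynamicDispatch, check_allocs(f, types))
```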

Pipeline discrepancies + Inference instability under recursion

@oscardssmith brought up an important point in today's meeting: inference results can depend on the order of queries to inference, since a cached result may be used to resolve a recursive loop more precisely.

Since inference is not always monotonic, we don't even have the guarantee that something being alloc-free when run with a fresh cache will also be alloc-free when it runs after we already have more precise results cached from other function calls.

Some possible solutions:

  • Compile the function to a @ccallable with GPUCompiler and use that instead of the native Julia function (maybe wrap this in a @generated function to automate the check + compilation)
  • Annotate the function to disable the use of inference caches to resolve recursive loops (i.e. require that the function must always infer as if no cached results were available)

P.S. A closely related issue is that our pipeline-parity guarantees with GPUCompiler vs. Base Julia are pretty weak right now. Under different optimization pipeline flags, code-coverage settings, etc., it's quite easy for the two to fall out of sync, in which case we lose soundness.

Add TypedSyntax.jl support

The idea here is to add support for printing a (type-annotated) function excerpt as part of our error messages.

This may or may not make the MVP milestone.

Kwargs in anonymous function

So, I've hit another kwargs related bug with what I hope is a fairly self-explanatory MWE.

julia> using AllocCheck

julia> @check_allocs (k) -> (k * 2)
#4 (generic function with 1 method)

julia> @check_allocs (; k) -> (k * 2)
ERROR: UndefVarError: `#8` not defined
Stacktrace:
 [1] macro expansion
   @ ~/.julia/packages/AllocCheck/xTVrb/src/macro.jl:154 [inlined]
 [2] top-level scope
   @ REPL[3]:1

Repeated allocation instances

I have the following little function I'm using to experiment with

using AllocCheck

send_control(u) = sum(abs2, u) # Dummy control function
calc_control() = 1.0
get_measurement() = [1.0]

function example_loop(data, t_start, y_start, y_old, ydf, Ts, N, r)
    for i = 1:N
        t = time() - t_start
        y = get_measurement()[] - y_start # Subtract initial position for a smoother experience
        yd = (y - y_old) / Ts
        ydf = 0.9*ydf + 0.1*yd
        # r = 45sin(2π*freq*t)
        u = calc_control()
        send_control([u])
        log = [t, y, ydf, u, r(t)]
        data[:, i] .= log
        y_old = y
    end
end

## Generate some example input data
r = t->(2 + 2floor(t/4)^2)
N = 10
data = Matrix{Float64}(undef, 5, N) 
t_start = 1.0
y_start = 0.0
y_old = 0.0
ydf = 0.0
Ts = 1.0

typetuple = typeof.((data, t_start, y_start, y_old, ydf, Ts, N, r))

AllocCheck.check_allocs(example_loop, typetuple, ignore_throw=true)

On Julia v1.10 beta3, I get several repeated AllocInstances for each actual allocation. With ignore_throw = true, I get 18 alloc instances, where at least 4 different sources of allocations are repeated 3 times each (12 instances for 4 real allocations).

Using BenchmarkTools, it appears that the number of allocations that actually happen in each loop iteration is 3 (N = 10 loop iterations):

julia> @btime example_loop($data, $t_start, $y_start, $y_old, $ydf, $Ts, $N, $r)
  907.344 ns (30 allocations: 2.19 KiB)

This matches what I would expect: get_measurement(), [u], and log = [t, y, ydf, u, r(t)].

Inconsistent with `@allocations` (Bug?)

julia> function mycopy!(A, B)
           @. A = B
           return nothing
       end
mycopy! (generic function with 1 method)

julia> A = zeros(10,10);

julia> B = zeros(10,10);

julia> @allocations mycopy!(A,B);

julia> @allocations mycopy!(A,B)
0

julia> check_allocs(mycopy!, typeof.((A,B)))
1-element Vector{Any}:
 Allocation of Array in ./array.jl:409
  | copy(a::T) where {T<:Array} = ccall(:jl_array_copy, Ref{T}, (Any,), a)

Stacktrace:
  [1] copy
    @ ./array.jl:409 [inlined]
  [2] unaliascopy
    @ ./abstractarray.jl:1490 [inlined]
  [3] unalias
    @ ./abstractarray.jl:1474 [inlined]
  [4] broadcast_unalias
    @ ./broadcast.jl:977 [inlined]
  [5] preprocess
    @ ./broadcast.jl:984 [inlined]
  [6] preprocess_args
    @ ./broadcast.jl:987 [inlined]
  [7] preprocess
    @ ./broadcast.jl:983 [inlined]
  [8] copyto!
    @ ./broadcast.jl:1000 [inlined]
  [9] copyto!
    @ ./broadcast.jl:956 [inlined]
 [10] materialize!
    @ ./broadcast.jl:914 [inlined]
 [11] materialize!
    @ ./broadcast.jl:911 [inlined]
 [12] mycopy!(A::Matrix{Float64}, B::Matrix{Float64})
    @ Main ./REPL[12]:2

Note that @allocated gives 0 whereas check_allocs highlights an allocation.
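The flagged allocation is plausibly broadcast's defensive unaliasing copy: a branch taken only when the destination aliases the source, which a dynamic count with distinct `A` and `B` never exercises, but which a static analysis must still report. Aliased inputs do hit it at runtime:

```julia
function mycopy!(A, B)
    @. A = B
    return nothing
end

A = reshape(collect(1.0:9.0), 3, 3)
# The destination overlaps the source, so broadcast copies the view
# (the allocation check_allocs reported) before writing into A.
mycopy!(A, @view A[:, end:-1:1])
```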

First attempts

Hey guys, thanks for working on this package!

I am trying it out on some of my controller implementations (think the controller that swings up the pendulum you saw at JuliaCon) and I am noticing a few patterns in my code that I want your opinions on. I am opening this issue to discuss whether those patterns can be accommodated by AllocCheck, or whether I need to think about best practices for writing code that is more in line with what AllocCheck can analyze.

Optional logging

The first pattern I tend to use rather often is optional logging, for example

verbose && @info "Pos = $x"

where verbose is a keyword argument. Scattering these print statements around is useful for debugging (and also to enable Ctrl-C to terminate the control loop), but I usually disable the logging and printing when "running for real". The simple function below gave me 156 allocations when verbose = true and 0 when verbose = false (nice!)

using AllocCheck

function controller(; verbose=true)
    a = 0.0
    for i = 1:100
        a = a + i
        verbose && @info "a = $a"
        Libc.systemsleep(0.01)
    end
end

controller() # Works
check_ir(controller, ())

My question is, was I lucky with the definition

function controller(; verbose=false)
   ...
end

isempty(check_ir(controller, ()))

or this is indeed something I can rely on?

Side question, verbose here is a keyword argument, and it's the value rather than the type that is important. When I tried with verbose = false, I changed the definition of controller to default to verbose = false. Was I saved by constant propagation?

Check loop rather than function

Most of my controller implementations look something like below, where I first allocate some working memory (logvector) and then run an almost infinite loop

function run_almost_forever()
    N = a_large_number
    logvector = zeros(N)
    for i = 1:N
        y = sample_measurement()
        logvector[i] = y
        u = controller(y)
        apply_control(u)
        Libc.systemsleep(0.01)
    end
end

Here, I am only concerned with the loop not allocating, and am fine with the initial allocation of logvector = zeros(N). I could refactor this code to split out the loop body:

function run_almost_forever2()
    N = a_large_number
    logvector = zeros(N)
    for i = 1:N
        loop_body(logvector, i)
    end
end

function loop_body(logvector, i)
    y = sample_measurement()
    logvector[i] = y
    u = controller(y)
    apply_control(u)
    Libc.systemsleep(0.01)
end

and analyze loop_body using check_ir. My question is then, how can I be sure that there is no allocation coming from

    for i = 1:N
        loop_body(logvector, i)
    end

for example, due to inference failure on logvector?
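With the current API, one option is to keep the function barrier and check the loop body's concrete signature directly; an inference failure on logvector would then surface at the (unchecked) call site rather than inside the checked body. A self-contained sketch, with placeholder stubs for the real measurement/control functions:

```julia
using AllocCheck

# Hypothetical stand-ins for the real hardware functions.
sample_measurement() = 0.0
controller(y) = -y
apply_control(u) = nothing

function loop_body!(logvector, i)
    y = sample_measurement()
    logvector[i] = y
    apply_control(controller(y))
end

# Check only the loop body, with the working memory already allocated.
check_allocs(loop_body!, (Vector{Float64}, Int))
```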

A very nice interface (you don't have to accommodate this, it's just some thinking out loud) would be

function run_almost_forever()
    N = a_large_number
    logvector = zeros(N)
    @no_allocs for i = 1:N
        y = sample_measurement()
        logvector[i] = y
        u = controller(y)
        apply_control(u)
        Libc.systemsleep(0.01)
    end
end

where I use the macro annotation @no_allocs to indicate exactly which block is not allowed to allocate.

Error recovery

The last pattern touches upon what we spoke about in the meeting yesterday: exceptions. I often wrap the control loop in try/catch/finally so that I can make sure I exit gracefully (apply the emergency brake, etc.) and also GC.enable(true) if something goes wrong. The following worked nicely, with no allocations:

function treading_lightly()
    a = 0.0
    GC.enable(false)
    try
        for i = 10:-1:-1
            a += i
        end
    catch
        exit_gracefully()
    finally
        GC.enable(true)
    end
    a
end
exit_gracefully() = nothing

treading_lightly() # Works

check_ir(treading_lightly, ())

but computing sqrt(i) in the loop does generate some allocations (as expected):

function treading_lightly()
    a = 0.0
    GC.enable(false)
    try
        for i = 10:-1:-1
            a += sqrt(i)
        end
    catch
        exit_gracefully()
    finally
        GC.enable(true)
    end
    a
end
exit_gracefully() = nothing

treading_lightly() # Works

check_ir(treading_lightly, ())

Here, I'd like to indicate that I have thought about the potential problem with sqrt(-1) and that I am fine with this being a possibility. I know that the GC will not trigger during the execution of the emergency recovery exit_gracefully since I have turned the GC off, and I also know that the allocations due to this error will not accumulate since I'm about to terminate the controller and turn the GC on again.

What would be a good strategy to handle this situation?

Unrelated to allocations, but in this situation, I would also like to make sure that exit_gracefully is fully compiled before starting to execute treading_lightly, since if I ever need to call exit_gracefully I probably cannot tolerate any JIT latency. Manually calling exit_gracefully to compile it is not always an option.
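For that last point, Base's `precompile` requests compilation of a concrete signature without calling the function, which covers the JIT-latency concern for the error path (it does not guard against later invalidation, though):

```julia
exit_gracefully() = nothing

# Compile the method for the given argument types without invoking it.
precompile(exit_gracefully, ())  # returns true if compilation succeeded
```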

MethodError upon using `check_allocs` on a non-existent method

julia> using AllocCheck

julia> function alloc_in_catch(x)
          try
              Base.inferencebarrier(nothing) # Prevent catch from being elided
          catch
              return Any[] # in catch block: filtered by `ignore_throw=true`
          end
          return Int64[]
       end
alloc_in_catch (generic function with 1 method)

julia> length(check_allocs(alloc_in_catch, (); ignore_throw=false))
ERROR: MethodError: no method matching alloc_in_catch()
<snip> ...

julia> function alloc_in_catch()
          return Int64[]
       end
alloc_in_catch (generic function with 2 methods)

julia> length(check_allocs(alloc_in_catch, (); ignore_throw=false))
ERROR: MethodError: no method matching alloc_in_catch()
The applicable method may be too new: running in world age 25448, while current world is 25449.
<snip> ...

The first MethodError is expected, but the second one is a surprise.

Find a way to get source excerpts for REPL code?

julia> @check_allocs foo() = rand(Any[1,1.0]) + true
julia> try foo() catch e e.errors[end] end
Dynamic dispatch in ./REPL[2]:1
  | (source not available)
Stacktrace:
 [1] var"##foo#225"()
   @ Main ./REPL[2]:1

For dynamic dispatches especially, this source information is pretty important.

Built-in functions (e.g. `jl_type_error`) should provide better error messages

Right now a call to jl_type_error prints as:

 Allocation of Any in ./foo.jl
  | y = foo(x, z)::Int64
Stacktrace:
 [1] alloc_in_catch(x::Any)
   @ Main ./REPL[4]:3

The allocation type should probably be a TypeError.

Or maybe we want specific diagnostics for this that say "type-assertion may allocate on error" or similar.

Collection has multiple elements, expected exactly 1 element

I might have stumbled upon a little bug

julia> using AllocCheck
[ Info: Precompiling AllocCheck [9b6a8646-10ed-4001-bbdc-1d2f46dfbb1a]

julia> function controller(verbose::Bool)
                  a = 0.0
                  for i = 1:100
                      a = a + i
                      verbose && @info "a = $a"
                      Libc.systemsleep(0.01)
                  end
              end
controller (generic function with 1 method)

julia> check_allocs(controller, (Bool,))

ERROR: ArgumentError: Collection has multiple elements, must contain exactly 1 element
Stacktrace:
 [1] only
   @ AllocCheck ./iterators.jl:1527 [inlined]
 [2] rename_calls_and_throws!(f::LLVM.Function, job::GPUCompiler.CompilerJob{GPUCompiler.NativeCompilerTarget, AllocCheck.NativeParams})
   @ AllocCheck ~/.julia/dev/AllocCheck/src/AllocCheck.jl:193
 [3] (::AllocCheck.var"#15#16"{Bool, Vector{AllocCheck.AllocInstance}, GPUCompiler.CompilerJob{GPUCompiler.NativeCompilerTarget, AllocCheck.NativeParams}})(ctx::LLVM.Context)
   @ AllocCheck ~/.julia/dev/AllocCheck/src/AllocCheck.jl:258
 [4] JuliaContext(f::AllocCheck.var"#15#16"{Bool, Vector{AllocCheck.AllocInstance}, GPUCompiler.CompilerJob{GPUCompiler.NativeCompilerTarget, AllocCheck.NativeParams}})
   @ GPUCompiler ~/.julia/packages/GPUCompiler/2mJjc/src/driver.jl:47
 [5] #check_allocs#14
   @ AllocCheck ~/.julia/dev/AllocCheck/src/AllocCheck.jl:246 [inlined]
 [6] check_allocs(func::Any, types::Any)
   @ AllocCheck ~/.julia/dev/AllocCheck/src/AllocCheck.jl:243
 [7] top-level scope
   @ REPL[4]:1

Similar Package - TestNoAllocations.jl - Crossreference or Merging?

Hello everyone :)
I have recently created a very similar package called TestNoAllocations.jl, which serves the similar purpose of finding allocations.

Considering the limitations of this package, AllocCheck.jl and TestNoAllocations.jl seem to complement each other pretty well.

Because TestNoAllocations.jl only runs during tests, there are no runtime costs at all; however, the guarantees of AllocCheck.jl are much stronger.

Is there any interest in a cross-reference between the two packages in the README, or in merging them into one unified package?

Adjust to `Memory{T}` changes

Now that JuliaLang/julia#51319 has landed, we need to update AllocCheck to support the changed C API.

There's also a chance that this might have made some array-like optimizations more opaque to us. We should check some basic array allocations and see what the situation is.

`@check_allocs` bypasses native Julia monomorphization limits

@check_allocs sum_args(args...) = sum(args)
for x=1:1000
    v = collect(1:x)
    s = sum_args(v...)
    println("sum(1:$(x)) = ", s)
end

This code will compile 1000 different (non-allocating) copies of sum_args, whereas Julia would typically limit the expansion to just one or two extra arguments.

For comparison, this code without @check_allocs is many, many times faster:

@noinline sum_args(args...) = sum(args)
for x=1:1000
    v = collect(1:x)
    s = sum_args(v...)
    println("sum(1:$(x)) = ", s)
end
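As a user-side workaround (not a fix for the macro itself), passing the collection instead of splatting avoids the per-arity specialization entirely:

```julia
# One specialization for all lengths, instead of one per argument count.
sum_args(v::AbstractVector) = sum(v)

for x = 1:3
    println("sum(1:$(x)) = ", sum_args(collect(1:x)))
end
```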
