Comments (4)
Gradients taken via reverse-mode AD cost only a constant multiple of the original function evaluation, independent of the input size, but that constant can be relatively high depending on how the target function is written.
What you're running into here is that ReverseDiff can't intercept an unvectorizable loop as a single unit, so it has to record every scalar operation encountered during loop execution. When the tape is executed forwards/backwards, updating the buffers for those scalar operations is often slower than performing the scalar operations themselves, hence the slowdown you're seeing.
You can achieve better performance by vectorizing your code, since ReverseDiff can intercept vectorized operations as individual nodes in the computation graph (also note that compilation isn't really benefiting you in this case):
using ReverseDiff: GradientTape, gradient!, compile, @forward
using BenchmarkTools: @btime
@forward φ1(r) = r^2 + r^4 + exp(-r)
@forward φ2(r) = -0.1 * r^3
function F(x)
    N = length(x)
    f = φ1(x[2] - x[1])
    for i = 3:N
        f += φ1(x[i] - x[i-1]) + φ2(x[i] - x[i-2])
    end
    return f
end

function Fvec(x)
    N = length(x)
    f = φ1(x[2] - x[1])
    tmp1 = x[3:N]
    tmp2 = x[2:(N-1)]
    tmp3 = x[1:(N-2)]
    f += sum(φ1.(tmp1 - tmp2) + φ2.(tmp1 - tmp3))
    return f
end
N = 40000
f_tape = GradientTape(F, rand(N))
compiled_f_tape = compile(f_tape)
fvec_tape = GradientTape(Fvec, rand(N))
compiled_fvec_tape = compile(fvec_tape)
x = rand(N)
results = zeros(N)
println("Time for F(x):")
@btime F($x)
println("Time for gradient!(results, f_tape, x):")
@btime gradient!($results, $f_tape, $x)
println("Time for gradient!(results, compiled_f_tape, x):")
@btime gradient!($results, $compiled_f_tape, $x)
println("Time for Fvec(x):")
@btime Fvec($x)
println("Time for gradient!(results, fvec_tape, x):")
@btime gradient!($results, $fvec_tape, $x)
println("Time for gradient!(results, compiled_fvec_tape, x):")
@btime gradient!($results, $compiled_fvec_tape, $x)
Output on my machine:
Time for F(x):
1.729 ms (0 allocations: 0 bytes)
Time for gradient!(results, f_tape, x):
35.348 ms (0 allocations: 0 bytes)
Time for gradient!(results, compiled_f_tape, x):
37.608 ms (0 allocations: 0 bytes)
Time for Fvec(x):
2.040 ms (16 allocations: 2.44 MiB)
Time for gradient!(results, fvec_tape, x):
6.098 ms (0 allocations: 0 bytes)
Time for gradient!(results, compiled_fvec_tape, x):
6.123 ms (0 allocations: 0 bytes)
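Side note: the 2.44 MiB allocated by Fvec comes from the three slices and the intermediate arrays built by the unfused broadcasts. Below is a plain-Julia sketch of a variant using views and fused dot-broadcasts; the Fvec_views name is mine, and I have not verified that ReverseDiff records views or fused broadcasts as efficiently as the explicit form above, so treat this purely as an allocation-reduction idea:

```julia
# Plain-Julia sketch only: φ1/φ2 are redefined here without @forward so the
# snippet is self-contained. Whether ReverseDiff handles views and fused
# broadcasts efficiently is an assumption, not something verified here.
φ1(r) = r^2 + r^4 + exp(-r)
φ2(r) = -0.1 * r^3

function Fvec_views(x)
    N = length(x)
    f = φ1(x[2] - x[1])
    tmp1 = @view x[3:N]        # views avoid copying the slices
    tmp2 = @view x[2:(N-1)]
    tmp3 = @view x[1:(N-2)]
    # fused broadcast: single pass, no intermediate difference arrays
    return f + sum(φ1.(tmp1 .- tmp2) .+ φ2.(tmp1 .- tmp3))
end
```

In plain Julia this computes the same value as the scalar loop in F while avoiding the slice copies.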
from reversediff.jl.
Fantastic, thank you. I will start experimenting a lot more with this now.
Very nice, thanks! To be clear, is this a fundamental limitation of reverse-mode AD, or something that could be optimized in the future?
To be clear, is this a fundamental limitation of reverse-mode AD, or something that could be optimized in the future?
If an AD tool can prove (or allow users to assert) that a loop can be converted to a map/broadcast call, then it can perform such an optimization. Alternatively, it can provide limited differentiable control flow primitives to prevent the graph from blowing up.
I'm definitely planning on exploring these optimizations at some point.
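As a toy illustration of that loop-to-broadcast conversion (plain Julia, no AD involved; the function names are mine): the scalar loop below is mathematically equivalent to the single broadcast expression, which a tape-based tool could record as a handful of array-level nodes instead of one node per scalar operation.

```julia
# Scalar loop: a tape-based AD tool would record roughly N scalar ops here.
function loop_sum(x)
    s = 0.0
    for i in 2:length(x)
        s += (x[i] - x[i-1])^2
    end
    return s
end

# Equivalent broadcast form: recordable as a few array-level graph nodes.
broadcast_sum(x) = sum(abs2.(@view(x[2:end]) .- @view(x[1:end-1])))
```

Proving this equivalence automatically (or letting the user assert it) is exactly the kind of optimization described above.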