juliadiff / forwarddiff.jl
Forward Mode Automatic Differentiation for Julia
License: Other
I am a new user to ForwardDiff and not super Julia savvy. I tried installing it on a brand new Amazon cloud instance running RHEL 7. This Linux install is clean and up to date; I have installed nothing other than Julia 0.4-dev and dependencies on it. I got Julia from
https://copr.fedoraproject.org/coprs/nalimilan/julia/
Pkg.add("ForwardDiff") works fine.
When I type using ForwardDiff I get the following error:
julia> using ForwardDiff
ERROR: LoadError: syntax: invalid "import" statement
while loading /home/ec2-user/.julia/v0.4/ForwardDiff/src/ForwardDiff.jl, in expression starting on line 25
I have been using ForwardDiff.jl for a while. My use case is a little different, as I need to differentiate f(x, args...) only with respect to x. Of course, in typical usage this can be accomplished with a closure, but in some instances that is not possible. I locally extended the API to allow args... to be passed down (see, e.g., here).
I have been following the work on #27 (which is great by the way) and I was wondering whether either such extension would be possible or whether the new API will allow additional arguments.
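For illustration, here is a minimal sketch of the closure approach mentioned above, assuming the ForwardDiff.gradient API that appears later in this thread (f, a, and b are hypothetical):
using ForwardDiff
f(x, a, b) = sum(a .* x.^2) + b   # differentiate w.r.t. x only; a, b are fixed
a, b = [1.0, 2.0], 3.0
g = x -> f(x, a, b)               # the closure captures the extra arguments
ForwardDiff.gradient(g, [0.5, 1.5])  # == 2 .* a .* [0.5, 1.5]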
Hi All
I would love to contribute to the autodiff tools, but I have hit a hiccup that is making me so despondent that I am seriously considering giving up on Julia for a while until it has matured.
I have spent a large effort on developing a dual-number type (similar to my existing MATLAB implementation) that can take matrix-valued fields. This allows one to get forward-mode AD to work smoothly through all kinds of matrix operations, which will be executed efficiently in BLAS/LAPACK. This has worked really well for me in MATLAB.
I have gone to the trouble of translating most of the functionality I have in MATLAB, but now I find I have a piece of code that I simply cannot debug.
If anybody is interested, my code is here:
https://github.com/bsxfan/ad4julia/blob/master/DualNumbers.jl
There are certainly many bugs and missing features remaining in this code, but at the moment I am stuck at the cat() function, which overloads vertical and horizontal concatenation of dual-number matrices.
This function behaves really weirdly. It is supposed to return a DualNum object. It manages to construct that object and can display it via show() inside my cat() function. But when I return that value, it has disappeared!
julia> require("DualNumbers.jl")
julia> using(DualNumbers)
julia> a = dualnum(1)
standard part: 1.0
differential part: 0.0
julia> b = dualnum(2)
standard part: 2.0
differential part: 0.0
julia> c = [a b]
here in cat
ST:
1x2 Float64 Array:
1.0 2.0
DI:
1x2 Float64 Array:
0.0 0.0
D:
standard part: 1x2 Float64 Array:
1.0 2.0
differential part: 1x2 Float64 Array:
0.0 0.0
Note that the stuff that gets displayed is my debug info from cat(). Nothing is returned:
julia> typeof(c)
Nothing
I don't know how to fix this.
Assuming the source code would be relatively brief, it would be helpful to see a couple of simple examples in full: e.g. how to use it with sin(x::Real) for x in -Pi/2..Pi/2, and perhaps f(z::Complex) for z in whatever range looks good.
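For what it's worth, a minimal sketch of the first requested example, assuming the derivative API used later in this thread:
using ForwardDiff
dsin(x) = ForwardDiff.derivative(sin, x)
dsin(0.0)    # ≈ 1.0 == cos(0.0)
dsin(pi/4)   # ≈ cos(pi/4) ≈ 0.70711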
Hi Scidom,
I have the following model:
point = 2 # can be used within function
y = [0:(point - 1)]
function GPCM(para)
a = para[1]
d = para[2]
tau = para[3:4]
t = para[5]
nu = zeros(point)
for k = 1:point
nu[k] = exp(a .* (y[k] .* (t - d) - sum(tau[1:k])))
end
de = sum(nu)
p = nu ./ de
end
para = [1.,0.,0.,0.,1.]
Jac = forwarddiff_jacobian(GPCM, Float64, fadtype=:typed)
Jac(para)
But, it returns an error message:
InexactError()
in setindex! at array.jl:307
It indicates that the assignment to nu[k] is not right. The Hessian cannot be obtained either. Any suggestions about this?
Thanks.
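For reference, the usual cause of an InexactError in this pattern is that nu = zeros(point) allocates a Float64 buffer, which cannot store the dual numbers the AD pass feeds through GPCM. A sketch of the standard workaround, assuming the rest of the model is unchanged: allocate the buffer with the input's element type.
function GPCM(para)
    a, d = para[1], para[2]
    tau = para[3:4]
    t = para[5]
    nu = zeros(eltype(para), point)  # key change: was zeros(point), i.e. Float64-only
    for k = 1:point
        nu[k] = exp(a * (y[k] * (t - d) - sum(tau[1:k])))
    end
    nu ./ sum(nu)
end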
This is cool stuff! I have a few questions...
If I understand, GradientNumbers share a single epsilon (e) so the algebra of GradientNumbers is completely determined by that of the reals, linearity, plus e^2 = 0, i.e.
(a + b*e)*(c + d*e) = a*c + (a*d + b*c)*e
There is no concept of distinct epsilons e1 and e2 for GradientNumbers. Is that correct?
HessianNumbers, on the other hand have 2 distinct epsilons e1 and e2. The documentation (in the source code) states that
e1 != 0, e2 != 0
e1^2 = e2^2 = (e1*e2)^2 = 0
But what is the product e1*e2? Do you assume e1 and e2 commute?
Consider the sum e1+e2. Is this an epsilon? If e1+e2 is an epsilon and e1 and e2 commute, then (e1+e2)^2 = e1^2 + 2*e1*e2 + e2^2 = 2*e1*e2 = 0, so e1*e2 = 0, which would not be very interesting. I guess the epsilons are not closed under addition (too bad!).
Similarly, for TensorNumbers, do you assume the 3 epsilons commute? It seems so.
It would be interesting to consider non-commuting epsilons closed under addition such that
(e1+e2)^2 = e1*e2 + e2*e1 = 0 => e1*e2 = -e2*e1.
This would allow you to use AD ideas for (Kaehler) differential forms and more general abstract differential algebras.
More generally, I think it is interesting to consider differentials of epsilons, e.g.
df = ∂_i f de^i (Einstein summation)
I did some work along these lines in grad school (13+ years ago... ouch. Getting old :))
It gets fun if you let 0-forms (functions) and 1-forms (differentials) not commute, i.e.
f(x) de = de f(x+e)
This has a nice geometric interpretation. We can consider e to be an infinitesimal curve and multiplying the function on the left of de evaluates at the beginning of the curve and multiplying on the right evaluates at the end of the curve.
This is nice because
[de,f] = f'(x) de
i.e. the derivative may be expressed as a commutator which automatically satisfies the product rule.
[de,fg] = [de,f] g + f [de,g]
(but the order matters above :))
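For completeness, the product rule here is just the Leibniz identity for commutators, which holds in any associative algebra:
[de, fg] = de fg - fg de = (de f - f de) g + f (de g - g de) = [de, f] g + f [de, g]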
First of all, thank you for this package!
When I define a simple function, it seems its derivative function is not type-stable:
julia> f(x) = 2x
f (generic function with 1 method)
julia> g = derivative(f)
d (generic function with 1 method)
julia> @code_warntype g(1)
Variables:
x::Int64
##f#7365::F
##x#7366::Int64
###s31#7367::Type{Void}
##result#7368::Any
####grad#7364#7369::Tuple{Int64}
Body:
begin # /Users/ken/.julia/v0.4/ForwardDiff/src/api/derivative.jl, line 43:
##result#7368 = call(ForwardDiff.ForwardDiffResult,(f::F)($(Expr(:new,
ForwardDiff.GradientNumber{1,Int64,Tuple{Int64}}, :(x::Int64), :($(Expr(:new,
ForwardDiff.Partials{Int64,Tuple{Int64}}, :((top(tuple))(1)::Tuple{Int64})))))))::Any)::Any
return (ForwardDiff.derivative)(##result#7368)::Any
end::Any
I would imagine calling this derivative function in a tight loop will hinder performance, right? But do correct me if I'm wrong.
ForwardDiff.forwarddiff_hessian intermittently returns Hessians that look like unset memory; they are completely wrong. Since this happens intermittently and only in certain settings, I haven't been able to narrow down exactly what the problem is, but I have a couple of relatively minimal examples that reproduce the problem reliably on my computer.
I happened to discover it when using atan. I haven't been able to reproduce it for simple polynomials and haven't systematically tried other functions yet. Here's the simplest example:
using ForwardDiff
function my_fun2{T <: Number}(y::Array{T})
@assert length(y) == 1
atan(y[1])
end
my_hess2 = ForwardDiff.forwarddiff_hessian(my_fun2, Float64, fadtype=:typed, n=1)
y = [4.]
original = my_hess2(y);
bad_versions = 0
max_iters = 1000
for iter = 1:max_iters
this_hess = my_hess2(y)
if maximum(abs(this_hess - original)) > 1
println("$iter: ", this_hess)
bad_versions += 1;
end
end
bad_versions / max_iters # Often around 0.04 or so, though sometimes 0.0
I don't know if it's a red herring, but I happened to discover it when indexing into the argument using a closure. The following, more complicated example fails much more often:
# This produces the problem intermittently:
indices = Int64[3, 2, 1]
coeffs = [10., 100., 1000.]
function my_fun{T <: Number}(y::Array{T})
@assert length(y) == 3
T[ coeffs[1] * y[indices[3]], coeffs[2] * y[indices[2]] ^ 2, coeffs[2] * atan(y[indices[3]]) ]
end
function get_hess_func_vec(my_fun_arg::Function, K::Int64)
[ ForwardDiff.forwarddiff_hessian(y -> my_fun_arg(y)[k], Float64, fadtype=:typed, n=K) for k=1:K ]
end
my_hess_funcs = get_hess_func_vec(my_fun, 3);
y = [1., 2., 3.]
# Repeated evaluation of this function gives erratic results:
original = my_hess_funcs[3](y);
bad_versions = 0
max_iters = 1000
for iter = 1:max_iters
this_hess = my_hess_funcs[3](y)
if maximum(abs(this_hess - original)) > 1
println("$iter: ", this_hess)
bad_versions += 1;
end
end
bad_versions / max_iters # As low as 0.02 or so, often around 0.5 (of course, the original could be wrong, too)
Please let me know if you have any ideas or want more details.
@fredo-dedup, I opened this as a separate issue, so as to comply with what has been asked. Once you create your ReverseDiffSource package and place your autodiff code there, I will then clean up the present ForwardDiff to register it.
Consider the following function
using ForwardDiff;
srand(1)
y = randn(100);
x = randn(100,2);
g(theta) = x.*(y-x*theta);
gn(theta) = vec(sum(g(theta), 1));
gn([.1,.1])
I want to use typed_fad_gradient to calculate the derivative of gn with respect to theta. This code
h1 = ForwardDiff.typed_fad_gradient(gn, Float64);
h1([.1,.1])
provides the correct gradient
2x2 Array{Float64,2}:
-103.869 11.1779
11.1779 -119.393
as can be seen by comparing it with the analytic gradient -x'x.
Suppose now that the function to be differentiated is defined using a linear algebra operation, i.e.,
p = ones(100);
wgn(theta) = g(theta)'p;
h2 = ForwardDiff.typed_fad_gradient(wgn, Float64);
h2([.1,.1])
Although wgn([.1,.1]) .== gn([.1,.1]), the gradient returned by typed_fad_gradient is
2x2 Array{Float64,2}:
103.869 -11.1779
-11.1779 119.393
which differs from h1([.1,.1]) and from -x'x by a factor of -1.
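A workaround sketch, consistent with the conjugation issue reported further down in this thread: the non-conjugating transpose .' avoids the sign flip that ' (ctranspose) introduces on the gradient part.
wgn2(theta) = g(theta).'p   # .' skips conj on the GraDual entries
h3 = ForwardDiff.typed_fad_gradient(wgn2, Float64);
h3([.1,.1])                 # matches h1([.1,.1]) and -x'x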
So, first, this is a wicked package and I'm a complete convert to this AD thing!
I was wondering what the rationale is behind the type restrictions on functions. I can make ForwardDiff work excellently most of the time, but I'm hitting a snag in one place. I'm doing triangulation and interpolation on unstructured data points, and using the package GeometricalPredicates to test if things are in triangles. The snag is that GeometricalPredicates has a custom Point(x::Real, y::Real) type, and ForwardDiff needs that to be Point(x::Number, y::Number) to pass its custom number type through. Given that we're currently limited to differentiating real functions anyway, is there any particular reason for that restriction? Is it to make adding complex differentiation easier later on?
Hi,
I was making this example and I noticed something weird which I couldn't figure out. I was working with the J2 plasticity model (see the link below, under "Reduced von Mises equation for different stress conditions"). When I took the gradient of the function, it only gave a NaN vector.
https://en.wikipedia.org/wiki/Von_Mises_yield_criterion
Code:
>> using ForwardDiff
>> function J2(σ)
e1 = (σ[1] - σ[2])^2
e2 = (σ[2] - σ[3])^2
e3 = (σ[3] - σ[1])^2
e4 = σ[4]^2
e5 = σ[5]^2
e6 = σ[6]^2
return sqrt((e1 + e2 + e3 + 6 * (e4 + e5 + e6)) / 2.)
end
>> x= [100.0, 0.0, 0.0, 0.0, 0.0, 0.0]
>> dJ2 = ForwardDiff.gradient(J2)
>> dJ2(x)
[NaN,NaN,NaN,NaN,NaN,NaN]
I also tested the usual forward finite difference:
>> h = 1e-12
>> numerical_vals = zeros(Float64, 6)
>> for i=1:6
vals = copy(x)
vals[i] += h
numerical_vals[i] = (J2(vals)- J2(x)) / h
end
>> numerical_vals
6-element Array{Float64,1}:
0.99476
-0.49738
-0.49738
0.0
0.0
0.0
Background: I just updated to the Julia 0.5 dev version, so might that have something to do with this?
Here is a function I'd like to calculate a gradient of:
function getW33(kv :: Vector)
k = kv[1]
sqrt2l= sqrt(2.0)
t2 = k*k
t3 = acos(t2 - 1.0)
t4 = 2.0 - t2
t6 = 2*π - t3
t5 = 1.0/t4
W = (t6 * sqrt(t5) - k)*t5
return W
end
Its actual derivative at kv = [-0.5] is -2.8185256628482382, but using the function gradient I get an answer that is accurate to only 10 digits:
g = gradient(getW33)
g([-0.5])[1]
> -2.8185256629946482
Using complex step derivative I get full 16 digits of accuracy.
kv = [complex(-0.5,1E-30)]
imag(getW33(kv)) / 1E-30
> -2.8185256628482387
I was under the impression that ForwardDiff was accurate to machine precision; is that wrong?
I am on Julia 0.4, MacOSX, LLVM 3.3
thanks,
Nitin
PackageEvaluator.jl is a script that runs nightly. It attempts to load all Julia packages and run their tests (if available) on both the stable version of Julia (0.2) and the nightly build of the unstable version (0.3). The results of this script are used to generate a package listing enhanced with testing results.
Tests pass.
Tests fail, but package loads.
Tests pass. means that PackageEvaluator found the tests for your package, executed them, and they all passed.
Tests fail, but package loads. means that PackageEvaluator found the tests for your package, executed them, and they didn't pass. However, trying to load your package with using worked.
This issue was filed because your testing status became worse. No additional issues will be filed if your package remains in this state, and no issue will be filed if it improves. If you'd like to opt-out of these status-change messages, reply to this message saying you'd like to and @IainNZ will add an exception. If you'd like to discuss PackageEvaluator.jl please file an issue at the repository. For example, your package may be untestable on the test machine due to a dependency - an exception can be added.
On 0.4 release, I get this:
julia> using ForwardDiff
WARNING: Base.Uint64 is deprecated, use UInt64 instead.
likely near /home/mlubin/.julia/v0.4/ForwardDiff/src/Partials.jl:49
WARNING: Base.Uint64 is deprecated, use UInt64 instead.
likely near /home/mlubin/.julia/v0.4/ForwardDiff/src/GradientNumber.jl:58
WARNING: Base.MathConst is deprecated, use Base.Irrational instead.
likely near /home/mlubin/.julia/v0.4/ForwardDiff/src/GradientNumber.jl:137
WARNING: Base.MathConst is deprecated, use Base.Irrational instead.
likely near /home/mlubin/.julia/v0.4/ForwardDiff/src/GradientNumber.jl:137
WARNING: Base.Uint64 is deprecated, use UInt64 instead.
likely near /home/mlubin/.julia/v0.4/ForwardDiff/src/HessianNumber.jl:47
WARNING: Base.Uint64 is deprecated, use UInt64 instead.
likely near /home/mlubin/.julia/v0.4/ForwardDiff/src/TensorNumber.jl:47
If memory serves, a previous version of ForwardDiff supported functions that looked like this:
function sphere2cart!(xin::Vector, xout::Vector)
rho, theta, phi = xin
rho_sin_phi = rho * sin(phi)
xout[1] = rho_sin_phi * cos(theta)
xout[2] = rho_sin_phi * sin(theta)
xout[3] = rho * cos(phi)
end
However now (according to the docs, anyway), it seems I am only allowed to write:
function sphere2cart(xin::Vector)
rho, theta, phi = xin
rho_sin_phi = rho * sin(phi)
x = rho_sin_phi * cos(theta)
y = rho_sin_phi * sin(theta)
z = rho * cos(phi)
return [x, y, z]
end
Sadly, this second form is considerably slower, due to the allocation of the result vector. Is there any way to take the Jacobian of a vector-valued function written in the result-placement style?
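For reference, later versions of ForwardDiff added a mutating form for exactly this case; a sketch under that assumption (note the f!(y, x) argument order, which is flipped relative to the snippet above):
using ForwardDiff
function sphere2cart!(xout::Vector, xin::Vector)
    rho, theta, phi = xin
    rho_sin_phi = rho * sin(phi)
    xout[1] = rho_sin_phi * cos(theta)
    xout[2] = rho_sin_phi * sin(theta)
    xout[3] = rho * cos(phi)
    return xout
end
y = zeros(3)
J = ForwardDiff.jacobian(sphere2cart!, y, [1.0, pi/4, pi/3])  # y now holds f(x)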
Hi,
I'm working on a problem which has a system of equations. I would like to get partial derivatives of the whole system.
julia> using ForwardDiff
julia> f1(s) = (s/3)^2
f1 (generic function with 1 method)
julia> f2(s, l) = s^2 + exp(3*l/sqrt(4))
f2 (generic function with 1 method)
julia> function f(x)
return f1(x[1]) + f2(x[1], x[2])
end
f (generic function with 1 method)
julia> g = forwarddiff_gradient(f, Float64)
g (generic function with 1 method)
julia> g([0.1, 56.2])
ERROR: BoundsError: attempt to access 1-element Array{DualNumbers.Dual{Float64},
1}:
0.1+0.0du
at index [2]
in dual_fad at C:\Users\ovax03\.julia\v0.4\ForwardDiff\src\dual_fad\univariate_
range.jl:6
in g at C:\Users\ovax03\.julia\v0.4\ForwardDiff\src\dual_fad\univariate_range.j
l:22
Is there an error in my code, or should I make a macro which would differentiate all the terms?
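For comparison, a sketch of the likely fix, assuming the :typed backend with an explicit input length behaves as in the later examples in this thread:
g = forwarddiff_gradient(f, Float64, fadtype=:typed, n=2)
g([0.1, 56.2])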
The minimal example that I could find:
julia> f(x) = x/2
f (generic function with 1 method)
julia> dfdx = derivative(f)
d (generic function with 1 method)
julia> g(x) = dfdx(x[1])*f(x[2])
g (generic function with 1 method)
julia> ForwardDiff.gradient(g, [1., 2.])
# Does not return, freezes
Note that this bug doesn't appear if f is the identity function or something like 2x; also, when g takes only one argument everything is OK, but the bug appears in all other, more complex cases. I use Julia v0.4.0-rc2, ForwardDiff v0.1.0.
The test files still start with using AutoDiff even though the package name has changed.
PackageEvaluator.jl is a script that runs nightly. It attempts to load all Julia packages and run their tests (if available) on both the stable version of Julia (0.3) and the nightly build of the unstable version (0.4). The results of this script are used to generate a package listing enhanced with testing results.
Tests pass.
Tests fail, but package loads.
Tests pass. means that PackageEvaluator found the tests for your package, executed them, and they all passed.
Tests fail, but package loads. means that PackageEvaluator found the tests for your package, executed them, and they didn't pass. However, trying to load your package with using worked.
This issue was filed because your testing status became worse. No additional issues will be filed if your package remains in this state, and no issue will be filed if it improves. If you'd like to opt-out of these status-change messages, reply to this message saying you'd like to and @IainNZ will add an exception. If you'd like to discuss PackageEvaluator.jl please file an issue at the repository. For example, your package may be untestable on the test machine due to a dependency - an exception can be added.
Test log:
>>> 'Pkg.add("ForwardDiff")' log
INFO: Installing Calculus v0.1.5
INFO: Installing DualNumbers v0.1.0
INFO: Installing ForwardDiff v0.0.2
INFO: Package database updated
>>> 'using ForwardDiff' log
WARNING: deprecated syntax "{}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/symbolic.jl:108.
Use "[]" instead.
WARNING: deprecated syntax "{}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/symbolic.jl:121.
Use "[]" instead.
WARNING: deprecated syntax "{a,b, ...}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/differentiate.jl:41.
Use "Any[a,b, ...]" instead.
WARNING: deprecated syntax "{a,b, ...}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/differentiate.jl:58.
Use "Any[a,b, ...]" instead.
WARNING: deprecated syntax "{a=>b, ...}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/deparse.jl:1.
Use "Dict{Any,Any}(a=>b, ...)" instead.
Warning: New definition
*(T<:Real,GraDual{T<:Real,n}) at /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/src/typed_fad/GraDual.jl:108
is ambiguous with:
*(Bool,T<:Number) at bool.jl:50.
To fix, define
*(Bool,_<:GraDual{Bool,n})
before the new definition.
Warning: New definition
*(T<:Real,FADHessian{T<:Real,n}) at /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/src/typed_fad/FADHessian.jl:108
is ambiguous with:
*(Bool,T<:Number) at bool.jl:50.
To fix, define
*(Bool,_<:FADHessian{Bool,n})
before the new definition.
Warning: New definition
*(T<:Real,FADTensor{T<:Real,n}) at /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/src/typed_fad/FADTensor.jl:133
is ambiguous with:
*(Bool,T<:Number) at bool.jl:50.
To fix, define
*(Bool,_<:FADTensor{Bool,n})
before the new definition.
Julia Version 0.4.0-dev+1318
Commit 7a7110b (2014-10-27 03:52 UTC)
Platform Info:
System: Linux (x86_64-unknown-linux-gnu)
CPU: Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
WORD_SIZE: 64
BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Sandybridge)
LAPACK: libopenblas
LIBM: libopenlibm
LLVM: libLLVM-3.3
>>> test log
WARNING: deprecated syntax "{}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/symbolic.jl:108.
Use "[]" instead.
WARNING: deprecated syntax "{}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/symbolic.jl:121.
Use "[]" instead.
WARNING: deprecated syntax "{a,b, ...}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/differentiate.jl:41.
Use "Any[a,b, ...]" instead.
WARNING: deprecated syntax "{a,b, ...}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/differentiate.jl:58.
Use "Any[a,b, ...]" instead.
WARNING: deprecated syntax "{a=>b, ...}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/deparse.jl:1.
Use "Dict{Any,Any}(a=>b, ...)" instead.
Warning: New definition
* at /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/src/typed_fad/GraDual.jl:108
is ambiguous with:
* at bool.jl:50.
To fix, define
... truncated ...
in include at ./boot.jl:242
in include_from_node1 at loading.jl:128
in process_options at ./client.jl:293
in _start at ./client.jl:362
in _start_3B_3773 at /home/idunning/julia04/usr/bin/../lib/julia/sys.so
WARNING: (oftype{T})(::Type{T},c) is deprecated, use convert(T,c) instead.
in log10 at /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/src/typed_fad/GraDual.jl:150
in log10 at /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/src/typed_fad/FADHessian.jl:230
in f at /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/test/FADHessian.jl:179
in include at ./boot.jl:242
in include_from_node1 at ./loading.jl:128
in anonymous at no file:8
in include at ./boot.jl:242
in include_from_node1 at loading.jl:128
in process_options at ./client.jl:293
in _start at ./client.jl:362
in _start_3B_3773 at /home/idunning/julia04/usr/bin/../lib/julia/sys.so
ERROR: mismatch of non-finite elements:
hessian(output) = [NaN]
hessianf(args...) = 0.6213341259967982
in test_approx_eq at test.jl:125
in test_approx_eq at test.jl:143
in include at ./boot.jl:242
in include_from_node1 at ./loading.jl:128
in anonymous at no file:8
in include at ./boot.jl:242
in include_from_node1 at loading.jl:128
in process_options at ./client.jl:293
in _start at ./client.jl:362
in _start_3B_3773 at /home/idunning/julia04/usr/bin/../lib/julia/sys.so
while loading /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/test/FADHessian.jl, in expression starting on line 276
while loading /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/test/runtests.jl, in expression starting on line 5
INFO: Testing ForwardDiff
=============================[ ERROR: ForwardDiff ]=============================
failed process: Process(`/home/idunning/julia04/usr/bin/julia /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/test/runtests.jl`, ProcessExited(1)) [1]
================================================================================
INFO: No packages to install, update or remove
ERROR: ForwardDiff had test errors
in error at error.jl:21
in test at pkg/entry.jl:719
in anonymous at pkg/dir.jl:28
in cd at ./file.jl:20
in cd at pkg/dir.jl:28
in test at pkg.jl:68
in process_options at ./client.jl:221
in _start at ./client.jl:362
in _start_3B_3773 at /home/idunning/julia04/usr/bin/../lib/julia/sys.so
>>> end of log
PackageEvaluator.jl is a script that runs nightly. It attempts to load all Julia packages and run their tests (if available) on both the stable version of Julia (0.3) and the nightly build of the unstable version (0.4). The results of this script are used to generate a package listing enhanced with testing results.
Tests pass.
Tests fail.
This issue was filed because your testing status became worse. No additional issues will be filed if your package remains in this state, and no issue will be filed if it improves. If you'd like to opt-out of these status-change messages, reply to this message saying you'd like to and @IainNZ will add an exception. If you'd like to discuss PackageEvaluator.jl please file an issue at the repository. For example, your package may be untestable on the test machine due to a dependency - an exception can be added.
Test log:
>>> 'Pkg.add("ForwardDiff")' log
INFO: Cloning cache of ForwardDiff from git://github.com/JuliaDiff/ForwardDiff.jl.git
INFO: Installing Calculus v0.1.8
INFO: Installing DualNumbers v0.1.3
INFO: Installing ForwardDiff v0.0.2
INFO: Installing NaNMath v0.0.2
INFO: Package database updated
>>> 'Pkg.test("ForwardDiff")' log
INFO: Testing ForwardDiff
Running tests:
* dual_fad.jl
* GraDual.jl
* FADHessian.jl
ERROR: mismatch of non-finite elements:
hessian(output) = [NaN]
hessianf(args...) = -352.65949394886985
in test_approx_eq at test.jl:101
in test_approx_eq at test.jl:119
in include at ./boot.jl:245
in include_from_node1 at ./loading.jl:128
in anonymous at no file:8
in include at ./boot.jl:245
in include_from_node1 at loading.jl:128
in process_options at ./client.jl:285
in _start at ./client.jl:354
while loading /home/vagrant/.julia/v0.3/ForwardDiff/test/FADHessian.jl, in expression starting on line 254
while loading /home/vagrant/.julia/v0.3/ForwardDiff/test/runtests.jl, in expression starting on line 5
=============================[ ERROR: ForwardDiff ]=============================
failed process: Process(`/home/vagrant/julia/bin/julia /home/vagrant/.julia/v0.3/ForwardDiff/test/runtests.jl`, ProcessExited(1)) [1]
================================================================================
INFO: No packages to install, update or remove
ERROR: ForwardDiff had test errors
in error at error.jl:21
in test at pkg/entry.jl:718
in anonymous at pkg/dir.jl:28
in cd at ./file.jl:20
in cd at pkg/dir.jl:28
in test at pkg.jl:67
in process_options at ./client.jl:213
in _start at ./client.jl:354
>>> End of log
As I am trying to connect ForwardDiff with Lora, I came to realise that this can work in only one of two possible cases, namely when the log-target function has a signature such as logtarget(x::Vector). There is a second case, however, which is perhaps more common and interesting in real applications; in this latter case, the log-target takes a second argument, which may be a cell array (or dictionary, but let's say cell array for now) of auxiliary variables.
The main question is whether we can allow ForwardDiff to work with functions with the signature logtarget(x::Vector, y::Vector). In principle, ForwardDiff will carry on working the same way by performing autodiff with respect to x only, i.e. by ignoring any input argument after the first one. y will appear in the body of logtarget only to pass in additional values; as far as autodiff goes, y will be a constant.
All it takes to add this extra feature is a slight change of API allowing extra arguments that won't change the course of differentiation. Do you think we could possibly deal with this?
Is there a quick way to compute a vector of second partial derivatives? Essentially, what I want is the diagonal of the Hessian, but since I only want a diagonal, computing the entire matrix is way too costly and it feels like there should be a better way.
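One workaround sketch, assuming ForwardDiff.derivative supports nesting: compute each diagonal entry as a univariate second derivative along its coordinate, so the full matrix is never formed (hessian_diag is a hypothetical helper).
using ForwardDiff
function hessian_diag(f, x)
    n = length(x)
    d = similar(x)
    for i in 1:n
        # restrict f to the i-th coordinate, holding the others fixed at x
        fi = s -> f([j == i ? s : oftype(s, x[j]) for j in 1:n])
        d[i] = ForwardDiff.derivative(t -> ForwardDiff.derivative(fi, t), x[i])
    end
    return d
end
f(x) = x[1]^2 * x[2] + sin(x[2])
hessian_diag(f, [1.0, 2.0])   # ≈ [2 * 2.0, -sin(2.0)]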
The dual numbers logic at https://github.com/JuliaOpt/Optim.jl/blob/master/src/autodiff.jl and https://github.com/EconForge/NLsolve.jl/blob/master/src/autodiff.jl should be moved here. What sort of interface should we provide? The tricky part is the memory management and allowing users to avoid allocating new vectors on each evaluation.
PackageEvaluator.jl is a script that runs nightly. It attempts to load all Julia packages and run their tests (if available) on both the stable version of Julia (0.2) and the nightly build of the unstable version (0.3). The results of this script are used to generate a package listing enhanced with testing results.
Tests pass.
Tests fail, but package loads.
Tests pass. means that PackageEvaluator found the tests for your package, executed them, and they all passed.
Tests fail, but package loads. means that PackageEvaluator found the tests for your package, executed them, and they didn't pass. However, trying to load your package with using worked.
This issue was filed because your testing status became worse. No additional issues will be filed if your package remains in this state, and no issue will be filed if it improves. If you'd like to opt-out of these status-change messages, reply to this message saying you'd like to and @IainNZ will add an exception. If you'd like to discuss PackageEvaluator.jl please file an issue at the repository. For example, your package may be untestable on the test machine due to a dependency - an exception can be added.
This is not strictly related to this package, but I thought maybe one of you could explain this; feel free to just close otherwise. I was playing around a bit with the short implementation of dual numbers I found on @mlubin's Github page. I then wanted to benchmark a bit to see performance differences. I get results showing that calling the function with dual numbers is faster than with normal reals (??).
First, here is the short implementation
importall Base
immutable Dual{T} <: Number
re::T
ε::T
end
real(z::Dual) = z.re
dual(z::Dual) = z.ε;
(+)(x::Dual,y::Dual) = Dual(real(x)+real(y), dual(x)+dual(y))
(-)(x::Dual,y::Dual) = Dual(real(x)-real(y), dual(x)-dual(y))
(*)(x::Dual,y::Dual) = Dual(real(x)*real(y), real(x)*dual(y)+real(y)*dual(x))
(/)(x::Dual,y::Dual) = Dual(real(x)/real(y), (dual(x)*real(y)-real(x)*dual(y))/(real(y)*real(y)))
exp(x::Dual) = Dual(exp(real(x)), dual(x)*exp(real(x)))
sin(x::Dual) = Dual(sin(real(x)), dual(x)*cos(real(x)))
cos(x::Dual) = Dual(cos(real(x)), -dual(x)*sin(real(x)));
promote_rule{S<:Real,T<:Real}(::Type{Dual{S}},::Type{T}) = Dual{promote_type(T,S)};
convert{T<:Real}(::Type{Dual{T}}, x::Real) = Dual(convert(T,x), zero(T));
const ε = Dual(0.0, 1.0);
This is the trial function I used
f(x) = exp(x) / (cos(x)^3 + sin(x)^3)
And the benchmarking:
Pkg.clone("https://github.com/johnmyleswhite/Benchmarks.jl")
using Benchmarks
@benchmark f(π/4 + ε)
@benchmark f(π/4 + im)
@benchmark f(π/4)
which gives
julia> @benchmark f(π/4 + ε)
================ Benchmark Results ========================
Time per evaluation: 104.91 ns [104.65 ns, 105.17 ns]
Number of evaluations: 3726101
Time spent benchmarking: 0.52 s
julia> @benchmark f(π/4 + im)
================ Benchmark Results ========================
Time per evaluation: 315.03 ns [314.58 ns, 315.48 ns]
Number of evaluations: 1437201
Time spent benchmarking: 0.54 s
julia> @benchmark f(π/4)
================ Benchmark Results ========================
Time per evaluation: 198.68 ns [196.78 ns, 200.58 ns]
Number of evaluations: 2314101
Time spent benchmarking: 0.50 s
Am I making some obvious mistake here? It seems that the dual numbers are significantly faster than the real numbers, and that can't be possible...
I'd like to propose using source code transformation instead of operator overloading. Operator overloading is slow in general and not really optimized in Julia; the compiler doesn't seem to optimize away the temporary objects, and the GC isn't great at handling them.
On the other hand, Julia makes implementing source code transformation much easier than in other languages. The AST of a function is (mostly) user accessible:
julia> x(t) = 2t
julia> methods(x).defs.func.code
AST(:($(Expr(:lambda, {:t}, {{}, {{:t, :Any, 0}}, {}}, quote # none, line 1:
return *(2,t)
end))))
The transformed functions can be compiled with the JIT, so that AD incurs only a one-time cost. With this approach, AD in Julia will be very competitive with the state of the art.
Of course, development time is limited, and I'm not sure how much I'll be able to work on this in the short term. It is certainly reasonable to go forward with an operator overloading approach just to get something working and to have a basis for comparison.
PackageEvaluator.jl is a script that runs nightly. It attempts to load all Julia packages and run their tests (if available) on both the stable version of Julia (0.3) and the nightly build of the unstable version (0.4). The results of this script are used to generate a package listing enhanced with testing results.
Tests pass.
Tests fail, but package loads.
Tests pass. means that PackageEvaluator found the tests for your package, executed them, and they all passed.
Tests fail, but package loads. means that PackageEvaluator found the tests for your package, executed them, and they didn't pass. However, trying to load your package with using worked.
This issue was filed because your testing status became worse. No additional issues will be filed if your package remains in this state, and no issue will be filed if it improves. If you'd like to opt-out of these status-change messages, reply to this message saying you'd like to and @IainNZ will add an exception. If you'd like to discuss PackageEvaluator.jl please file an issue at the repository. For example, your package may be untestable on the test machine due to a dependency - an exception can be added.
Test log:
>>> 'Pkg.add("ForwardDiff")' log
INFO: Installing Calculus v0.1.5
INFO: Installing DualNumbers v0.1.0
INFO: Installing ForwardDiff v0.0.2
INFO: Package database updated
>>> 'using ForwardDiff' log
WARNING: deprecated syntax "{}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/symbolic.jl:108.
Use "[]" instead.
WARNING: deprecated syntax "{}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/symbolic.jl:121.
Use "[]" instead.
WARNING: deprecated syntax "{a,b, ...}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/differentiate.jl:41.
Use "Any[a,b, ...]" instead.
WARNING: deprecated syntax "{a,b, ...}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/differentiate.jl:58.
Use "Any[a,b, ...]" instead.
WARNING: deprecated syntax "{a=>b, ...}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/deparse.jl:1.
Use "Dict{Any,Any}(a=>b, ...)" instead.
Julia Version 0.4.0-dev+1119
Commit de2ba52 (2014-10-17 04:00 UTC)
Platform Info:
System: Linux (x86_64-unknown-linux-gnu)
CPU: Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
WORD_SIZE: 64
BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Sandybridge)
LAPACK: libopenblas
LIBM: libopenlibm
LLVM: libLLVM-3.3
>>> test log
WARNING: deprecated syntax "{}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/symbolic.jl:108.
Use "[]" instead.
WARNING: deprecated syntax "{}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/symbolic.jl:121.
Use "[]" instead.
WARNING: deprecated syntax "{a,b, ...}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/differentiate.jl:41.
Use "Any[a,b, ...]" instead.
WARNING: deprecated syntax "{a,b, ...}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/differentiate.jl:58.
Use "Any[a,b, ...]" instead.
WARNING: deprecated syntax "{a=>b, ...}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/deparse.jl:1.
Use "Dict{Any,Any}(a=>b, ...)" instead.
WARNING: (oftype{T})(::Type{T},c) is deprecated, use convert(T,c) instead.
in log2 at /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/src/typed_fad/GraDual.jl:149
in f at /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/test/GraDual.jl:90
in include at ./boot.jl:245
in include_from_node1 at ./loading.jl:128
... truncated ...
WARNING: (oftype{T})(::Type{T},c) is deprecated, use convert(T,c) instead.
in log10 at /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/src/typed_fad/GraDual.jl:150
in log10 at /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/src/typed_fad/FADHessian.jl:230
in f at /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/test/FADHessian.jl:179
in include at ./boot.jl:245
in include_from_node1 at ./loading.jl:128
in anonymous at no file:8
in include at ./boot.jl:245
in include_from_node1 at loading.jl:128
in process_options at ./client.jl:293
in _start at ./client.jl:362
in _start_3B_3778 at /home/idunning/julia04/usr/bin/../lib/julia/sys.so
ERROR: mismatch of non-finite elements:
hessian(output) = [NaN]
hessianf(args...) = -124.36073976692232
in test_approx_eq at test.jl:125
in test_approx_eq at test.jl:143
in include at ./boot.jl:245
in include_from_node1 at ./loading.jl:128
in anonymous at no file:8
in include at ./boot.jl:245
in include_from_node1 at loading.jl:128
in process_options at ./client.jl:293
in _start at ./client.jl:362
in _start_3B_3778 at /home/idunning/julia04/usr/bin/../lib/julia/sys.so
while loading /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/test/FADHessian.jl, in expression starting on line 265
while loading /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/test/runtests.jl, in expression starting on line 5
Running tests:
* dual_fad.jl
* GraDual.jl
* FADHessian.jl
INFO: Testing ForwardDiff
=============================[ ERROR: ForwardDiff ]=============================
failed process: Process(`/home/idunning/julia04/usr/bin/julia /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/test/runtests.jl`, ProcessExited(1)) [1]
================================================================================
INFO: No packages to install, update or remove
ERROR: ForwardDiff had test errors
in error at error.jl:21
in test at pkg/entry.jl:719
in anonymous at pkg/dir.jl:28
in cd at ./file.jl:20
in cd at pkg/dir.jl:28
in test at pkg.jl:68
in process_options at ./client.jl:221
in _start at ./client.jl:362
in _start_3B_3778 at /home/idunning/julia04/usr/bin/../lib/julia/sys.so
>>> end of log
Some background on my use case:
I need to solve a smallish nonlinear problem at a lot of points, where each point has a set of different parameters. For a specific point, I make a closure from the parameters of the point and use ForwardDiff to compute the jacobian of that closure. I then pass the closure and the jacobian to an external solver, and everything works great.
I now wanted to try a solver that only allows for in-place updates of the function values. This means I should use the output_length option to jacobian. When I do that, performance takes a large hit, from what I believe is the compilation time spent generating the function here at each new point.
Since I have a constant output_length, it should in theory be possible to cache something similar to the newf function, taking the function I want to compute the jacobian of as an extra argument instead of creating the function from scratch.
Would this be possible/sensible?
This seems like a small detail to fix up:
derivative(sin)(pi)
ERROR: MethodError: `convert` has no method matching convert(::Type{Irrational{:ฯ}}, ::Int64)
This may have arisen from a call to the constructor Irrational{:ฯ}(...),
since type constructors fall back to convert methods.
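A workaround sketch, assuming the failure comes from the Irrational input type: convert pi to Float64 before differentiating.
derivative(sin)(Float64(pi))   # ≈ cos(pi) == -1.0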
PackageEvaluator.jl is a script that runs nightly. It attempts to load all Julia packages and run their tests (if available) on both the stable version of Julia (0.2) and the nightly build of the unstable version (0.3). The results of this script are used to generate a package listing enhanced with testing results.
Tests pass.
Tests fail, but package loads.
Tests pass. means that PackageEvaluator found the tests for your package, executed them, and they all passed.
Tests fail, but package loads. means that PackageEvaluator found the tests for your package, executed them, and they didn't pass. However, trying to load your package with using worked.
This issue was filed because your testing status became worse. No additional issues will be filed if your package remains in this state, and no issue will be filed if it improves. If you'd like to opt-out of these status-change messages, reply to this message saying you'd like to and @IainNZ will add an exception. If you'd like to discuss PackageEvaluator.jl please file an issue at the repository. For example, your package may be untestable on the test machine due to a dependency - an exception can be added.
Test log:
INFO: Cloning cache of ForwardDiff from git://github.com/JuliaDiff/ForwardDiff.jl.git
INFO: Installing Calculus v0.1.4
INFO: Installing DualNumbers v0.1.0
INFO: Installing ForwardDiff v0.0.2
INFO: Package database updated
ERROR: mismatch of non-finite elements:
hessian(output) = [NaN]
hessianf(args...) = -352.65949394886985
in test_approx_eq at test.jl:101
in test_approx_eq at test.jl:119
in include at ./boot.jl:244
in include_from_node1 at ./loading.jl:128
in anonymous at no file:8
in include at ./boot.jl:244
in include_from_node1 at loading.jl:128
in process_options at ./client.jl:285
in _start at ./client.jl:354
while loading /home/idunning/pkgtest/.julia/v0.3/ForwardDiff/test/FADTensor.jl, in expression starting on line 475
while loading /home/idunning/pkgtest/.julia/v0.3/ForwardDiff/run_tests.jl, in expression starting on line 5
INFO: Package database updated
See https://groups.google.com/d/msg/julia-users/4Icx10pmQoI/SSvkgIRyCwAJ
cc @KristofferC (thanks for figuring that out in the mailing list thread)
Hi, I got my code working with the help of the last issue (#30), though I found another example of how I would like my code to work.
Like the last example:
using ForwardDiff
f1(s) = (s/3)^2
f2(x) = x[1]^2 + exp(3*x[2]/sqrt(4))
function f(x)
g = forwarddiff_gradient(f2, Float64, fadtype=:typed, n=2)
return f1(x[1]) + g(x)
end
g = forwarddiff_gradient(f, Float64, fadtype=:typed, n=2)
g([-1., 2.0])
ended up with the error message:
LoadError: MethodError: `g` has no method matching g(::Array{ForwardDiff.GraDual{Float64,2},1})
while loading In[11], in expression starting on line 9
in f at In[11]:6
in g at C:\Users\ovax03\.julia\v0.4\ForwardDiff\src\typed_fad\GraDual.jl:188
I already found a workaround, but I'm not sure it should work like this:
using ForwardDiff
f1(s) = (s/3)^2
f2(x) = x[1]^2 + exp(3*x[2]/sqrt(4))
function f(x)
xd = map(t->t.v, x)
g = forwarddiff_gradient(f2, Float64, fadtype=:typed, n=2)
return f1(x[1]) + g(xd)
end
g = forwarddiff_gradient(f, Float64, fadtype=:typed, n=2)
g([-1., 2.0])
which gave me a Hessian as expected:
2x2 Array{Float64,2}:
-0.222222 0.0
-0.222222 0.0
Note: I'm working with finite element material models, which happen to have this kind of function.
Just a heads up that something is broken in type inference when using Julia master.
Using this simple function:
function foo(x)
println(eltype(x))
b = 5.0
c = x / b
println(eltype(c))
d = x * (1/b)
println(eltype(d))
return 3 * c
end
gives on 0.4.1 the normal:
julia> ForwardDiff.jacobian(foo, rand(2))
ForwardDiff.GradientNumber{2,Float64,Tuple{Float64,Float64}}
ForwardDiff.GradientNumber{2,Float64,Tuple{Float64,Float64}}
ForwardDiff.GradientNumber{2,Float64,Tuple{Float64,Float64}}
2x2 Array{Float64,2}:
0.6 0.0
0.0 0.6
However, on 0.5 this gives:
julia> ForwardDiff.jacobian(foo, rand(2))
ForwardDiff.GradientNumber{2,Float64,Tuple{Float64,Float64}}
########
ForwardDiff.GradientNumber{N,T,C} # <------ NOTE
########
ForwardDiff.GradientNumber{2,Float64,Tuple{Float64,Float64}}
ERROR: no promotion exists for Int64 and ForwardDiff.GradientNumber{N,T,C}
[inlined code] from promotion.jl:160
in .* at arraymath.jl:118
in foo at none:9
in _calc_jacobian at /home/kristoffer/.julia/v0.5/ForwardDiff/src/api/jacobian.jl:101
in jacobian at /home/kristoffer/.julia/v0.5/ForwardDiff/src/api/jacobian.jl:84
in eval at ./boot.jl:263
It seems that the division x/b causes Julia to lose type inference for the array.
Hi,
Is it possible to calculate the Hessian of a vector-valued function?
E.g., let's say I have a function:
f(x::Vector) = x./(sqrt(x[1]^2+x[2]^2+x[3]^3))
I can easily calculate its Jacobian by:
ForwardDiff.jacobian(f,rand(3))
but I get a convert error when I try to calculate the Hessian. I assume this is because hessian expects a scalar function. Is there a trick to get around this?
The Hessian for the above example would be a 3d tensor, with each matrix component corresponding to the Hessian w.r.t. each output component of the function.
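One workaround sketch, assuming the scalar ForwardDiff.hessian API: take the Hessian of each output component separately and treat the collection as the slices of the 3d tensor.
using ForwardDiff
f(x) = x ./ sqrt(x[1]^2 + x[2]^2 + x[3]^3)   # the example above
x0 = rand(3)
H = [ForwardDiff.hessian(y -> f(y)[k], x0) for k in 1:3]   # H[k]: Hessian of output k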
As mentioned here, using isless(g::GradientNumber, x::Real) works but <(g::GradientNumber, x::Real) doesn't.
I found that <(g::GradientNumber, x::Real) calls <(x::Real, y::Real) from Julia's promotion.jl instead of calling isless(g::ForwardDiffNumber, x::Real). This promotes both arguments to GradientNumber, and that reports there is no corresponding method (although I thought it was going to call the argument_error method from here).
I'm not sure if this is a method ambiguity or an inference bug.
It would be useful for JuMP to have a method to compute jacobian-vector and jacobian-matrix products. The current approach for computing jacobians can be interpreted as a jacobian-matrix product with the identity matrix.
Here's the implementation in ReverseDiffSparse: https://github.com/mlubin/ReverseDiffSparse.jl/blob/2530b758bb341d3c51e8c1195134922193e1cfb2/src/hessian.jl#L231
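For context, forward mode already gives a jacobian-vector product in a single pass, as a directional derivative; a sketch assuming the ForwardDiff.derivative API (jvp is a hypothetical helper):
using ForwardDiff
# J(x) * v == d/dt f(x + t*v) at t = 0: one dual perturbation instead of n
jvp(f, x, v) = ForwardDiff.derivative(t -> f(x .+ t .* v), 0.0)
f(x) = [x[1]^2 + x[2], 3 * x[2]]
jvp(f, [1.0, 2.0], [1.0, 0.0])   # first Jacobian column: [2.0, 0.0]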
See discussion here.
Does anybody have any practical arguments for why we shouldn't do this?
julia> Pkg.test("ForwardDiff")
INFO: Testing ForwardDiff
Running tests:
* dual_fad.jl
Warning: New definition
*(T<:Real,ForwardDiff.GraDual{T<:Real,n}) at /Users/dhlin/.julia/v0.4/ForwardDiff/src/typed_fad/GraDual.jl:108
is ambiguous with:
*(Bool,T<:Number) at bool.jl:47.
To fix, define
*(Bool,_<:ForwardDiff.GraDual{Bool,n})
before the new definition.
Warning: New definition
*(T<:Real,ForwardDiff.FADHessian{T<:Real,n}) at /Users/dhlin/.julia/v0.4/ForwardDiff/src/typed_fad/FADHessian.jl:108
is ambiguous with:
*(Bool,T<:Number) at bool.jl:47.
To fix, define
*(Bool,_<:ForwardDiff.FADHessian{Bool,n})
before the new definition.
Warning: New definition
*(T<:Real,ForwardDiff.FADTensor{T<:Real,n}) at /Users/dhlin/.julia/v0.4/ForwardDiff/src/typed_fad/FADTensor.jl:133
is ambiguous with:
*(Bool,T<:Number) at bool.jl:47.
To fix, define
*(Bool,_<:ForwardDiff.FADTensor{Bool,n})
before the new definition.
* GraDual.jl
WARNING: oftype{T}(::Type{T},c) is deprecated, use convert(T,c) instead.
in depwarn at ./deprecated.jl:40
in oftype at deprecated.jl:29
in log2 at /Users/dhlin/.julia/v0.4/ForwardDiff/src/typed_fad/GraDual.jl:149
in f at /Users/dhlin/.julia/v0.4/ForwardDiff/test/GraDual.jl:90
in include at ./boot.jl:250
in include_from_node1 at ./loading.jl:129
in anonymous at no file:8
in include at ./boot.jl:250
in include_from_node1 at loading.jl:129
in process_options at ./client.jl:305
in _start at ./client.jl:389
WARNING: oftype{T}(::Type{T},c) is deprecated, use convert(T,c) instead.
in depwarn at ./deprecated.jl:40
in oftype at deprecated.jl:29
in log10 at /Users/dhlin/.julia/v0.4/ForwardDiff/src/typed_fad/GraDual.jl:150
in f at /Users/dhlin/.julia/v0.4/ForwardDiff/test/GraDual.jl:90
in include at ./boot.jl:250
in include_from_node1 at ./loading.jl:129
in anonymous at no file:8
in include at ./boot.jl:250
in include_from_node1 at loading.jl:129
in process_options at ./client.jl:305
in _start at ./client.jl:389
* FADHessian.jl
WARNING: oftype{T}(::Type{T},c) is deprecated, use convert(T,c) instead.
in depwarn at ./deprecated.jl:40
in oftype at deprecated.jl:29
in log2 at /Users/dhlin/.julia/v0.4/ForwardDiff/src/typed_fad/FADHessian.jl:212
in f at /Users/dhlin/.julia/v0.4/ForwardDiff/test/FADHessian.jl:179
in include at ./boot.jl:250
in include_from_node1 at ./loading.jl:129
in anonymous at no file:8
in include at ./boot.jl:250
in include_from_node1 at loading.jl:129
in process_options at ./client.jl:305
in _start at ./client.jl:389
WARNING: oftype{T}(::Type{T},c) is deprecated, use convert(T,c) instead.
in depwarn at ./deprecated.jl:40
in oftype at deprecated.jl:29
in log10 at /Users/dhlin/.julia/v0.4/ForwardDiff/src/typed_fad/FADHessian.jl:225
in f at /Users/dhlin/.julia/v0.4/ForwardDiff/test/FADHessian.jl:179
in include at ./boot.jl:250
in include_from_node1 at ./loading.jl:129
in anonymous at no file:8
in include at ./boot.jl:250
in include_from_node1 at loading.jl:129
in process_options at ./client.jl:305
in _start at ./client.jl:389
* FADTensor.jl
WARNING: int64(x::FloatingPoint) is deprecated, use round(Int64,x) instead.
in depwarn at ./deprecated.jl:40
in int64 at deprecated.jl:29
in t2h at /Users/dhlin/.julia/v0.4/ForwardDiff/src/typed_fad/FADTensor.jl:104
in ^ at /Users/dhlin/.julia/v0.4/ForwardDiff/src/typed_fad/FADTensor.jl:221
in f at /Users/dhlin/.julia/v0.4/ForwardDiff/test/FADTensor.jl:8
in include at ./boot.jl:250
in include_from_node1 at ./loading.jl:129
in anonymous at no file:8
in include at ./boot.jl:250
in include_from_node1 at loading.jl:129
in process_options at ./client.jl:305
in _start at ./client.jl:389
WARNING: oftype{T}(::Type{T},c) is deprecated, use convert(T,c) instead.
in depwarn at ./deprecated.jl:40
in oftype at deprecated.jl:29
in log2 at /Users/dhlin/.julia/v0.4/ForwardDiff/src/typed_fad/FADTensor.jl:327
in f at /Users/dhlin/.julia/v0.4/ForwardDiff/test/FADTensor.jl:347
in include at ./boot.jl:250
in include_from_node1 at ./loading.jl:129
in anonymous at no file:8
in include at ./boot.jl:250
in include_from_node1 at loading.jl:129
in process_options at ./client.jl:305
in _start at ./client.jl:389
WARNING: oftype{T}(::Type{T},c) is deprecated, use convert(T,c) instead.
in depwarn at ./deprecated.jl:40
in oftype at deprecated.jl:29
in log10 at /Users/dhlin/.julia/v0.4/ForwardDiff/src/typed_fad/FADTensor.jl:349
in f at /Users/dhlin/.julia/v0.4/ForwardDiff/test/FADTensor.jl:347
in include at ./boot.jl:250
in include_from_node1 at ./loading.jl:129
in anonymous at no file:8
in include at ./boot.jl:250
in include_from_node1 at loading.jl:129
in process_options at ./client.jl:305
in _start at ./client.jl:389
INFO: ForwardDiff tests passed
INFO: No packages to install, update or remove
Previously discussed here.
For example, if somebody calls hessian(f, x), they should be able to tell the function to return ∇f(x) and/or f(x) as well, since those values are already calculated by way of the Hessian computation.
Here are some ideas for what this might look like:
1. Return the underlying ForwardDiffNumber, then provide methods in the API to extract the data you want from it. Something like this:
julia> data = hessian(f, x, alldata=true) # returns ForwardDiffNumber
julia> gradient(data) # extract gradient from data
julia> hessian(data) # extract hessian from data
I don't really like alldata as a name for the keyword argument, but you get my drift.
2. Have the call return the extra values directly:
# returns in the same order as it's given, with Hessian first
julia> hess, val, grad = hessian(f, x, also=(:value, :gradient))
Thoughts?
ForwardDiff does not appear to support the atan2 function:
using ForwardDiff
function angle(xin)
x, y = xin
atan2(y, x)
end
angleJ = jacobian(angle)
angleJ([1.0, 2.0]) # exception!
I don't know enough about how to program for the dual number implementation used by ForwardDiff. However, I believe the derivative at all points is the same as the derivative of arctan(y/x), i.e. 1 / (1 + (y/x)^2).
CORRECTION: We should be using the partial derivatives of this function w.r.t. x and y (see below).
This issue is being filed by a script, but if you reply, I will see it.
PackageEvaluator.jl is a script that runs nightly. It attempts to load all Julia packages and run their test (if available) on both the stable version of Julia (0.2) and the nightly build of the unstable version (0.3).
The results of this script are used to generate a package listing enhanced with testing results.
The status of this package, ForwardDiff, on...
'No tests, but package loads.' can be due to there being no tests (you should write some if you can!) but can also be due to PackageEvaluator not being able to find your tests. Consider adding a test/runtests.jl
file.
'Package doesn't load.' is the worst-case scenario. Sometimes this arises because your package doesn't have BinDeps support, or needs something that can't be installed with BinDeps. If this is the case for your package, please file an issue and an exception can be made so your package will not be tested.
This automatically filed issue is a one-off message. Starting soon, issues will only be filed when the testing status of your package changes in a negative direction (gets worse). If you'd like to opt-out of these status-change messages, reply to this message.
Shouldn't GraDual{T} be comparable to T so that isless(graDual, x) == isless(graDual.v, x)
?
Otherwise it's not possible to differentiate x > 0 ? x : -x
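A sketch of the suggested methods, assuming the value lives in the v field (as used elsewhere in this thread):
Base.isless(a::GraDual, x::Real) = isless(a.v, x)
Base.isless(x::Real, a::GraDual) = isless(x, a.v)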
To avoid confusion, we should:
Currently, the chunk_size option only allows sizes that divide evenly into the length of the input vector.
As @mlubin pointed out, this isn't always the most reasonable restriction, and could leave some functions with big, sad input dimensions unsupported by the package without access to gobs of memory.
I opened this issue so that we can conclude the ongoing discussion on a possible partial rewrite of the forward AD code.
Dual is not so bad a choice. autodiff() is my favourite name for our AD functions; nevertheless, we need to address the issue of having several AD modes, such as forward, reverse, or hybrid. This could be done by using separate function names, which we would need to devise, or by having a function argument mode in autodiff(), i.e. autodiff(x, ..., mode="forward"), so as to choose the mode of differentiation. To generalize further, we may need another function argument, say method, to choose between operator overloading (oo) and source code transformation (sct), for example autodiff(x, ..., mode="forward", method="oo").
The n parameter of the ADForward type does not exhibit any improvement in performance, so I suggest we remove it.
@scidom, @mlubin, @kmsquire, @powerdistribution: Here I go, for a (probably biased) proposal of a common interface for the forward/reverse mode symbolic derivation function:
It should take expressions as input; with Base.uncompress_ast, a wrapper can still be built around it for inputs given as functions. Plus, it doesn't require a function creation on the part of the caller (useful if the function is called recursively for higher-order derivation).
A method=:forward or method=:reverse parameter indicates which algorithm to use (and is not limited to these two when new methods are implemented).
Name of the function: differentiate()? diff()? derive()?
This confused me for a while (well, in a more complex setting):
julia> derivative(x->0.0, 5.)
ERROR: MethodError: `convert` has no method matching convert(::Type{ForwardDiff.ForwardDiffResult{Float64}}, ::Float64)
This may have arisen from a call to the constructor ForwardDiff.ForwardDiffResult{Float64}(...),
since type constructors fall back to convert methods.
Closest candidates are:
call{T}(::Type{T}, ::Any)
convert{T}(::Type{T}, ::T)
ForwardDiff.ForwardDiffResult{T}(::ForwardDiff.ForwardDiffNumber{N,T<:Number,C})
...
in call at /home/mauro/.julia/v0.4/ForwardDiff/src/api/ForwardDiffResult.jl:11
in derivative at /home/mauro/.julia/v0.4/ForwardDiff/src/api/derivative.jl:24
Could the error give a hint that zero needs to be used?
Also, the documentation says that the method needs to be type-stable, which x->0.0 is. Is the requirement that the input and output types need to be equal?
@mlubin I open this issue as a placeholder - I will start organizing the tests in ForwardDiff this weekend. Would you like to take care of adding some tests for forward mode AD based on dual numbers to share the workload? :)
Am I doing something forbidden here?
# Simple working example
f(x) = x[1]^2
g = forwarddiff_gradient(f, Float64, fadtype=:typed)
g([3.0])
#6.0, OK!
# But rewrite it in this way:
f(x) = [x[1]^2]' * [1.0]
g = forwarddiff_gradient(f, Float64, fadtype=:typed)
g([3.0])
# -6.0, Wups!
Edit: Oh, this is almost exactly #24, but it should have been fixed in #25?
Edit2: I looked into it a bit more; the problem is not with transpose but with conjugation:
A = ForwardDiff.GraDual{Float64,1}[GraDual(3.0, [1.0])]
julia> A'*[1.0]
1-element Array{ForwardDiff.GraDual{Float64,1},1}:
GraDual(3.0,
[-1.0])
julia> A.'*[1.0]
1-element Array{ForwardDiff.GraDual{Float64,1},1}:
GraDual(3.0,
[1.0])
Basically, why does conj on an Array{ForwardDiff.GraDual{Float64,1},1} invert the gradient part?
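Since GraDual models a real-valued quantity, conjugation should arguably be a no-op; a sketch of the fix under that assumption:
Base.conj(g::GraDual) = g   # real dual number: conj must not touch the gradient part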