julianlsolvers / Optim.jl
Optimization functions for Julia
License: Other
PackageEvaluator.jl is a script that runs nightly. It attempts to load all Julia packages and run their tests (if available) on both the stable version of Julia (0.3) and the nightly build of the unstable version (0.4). The results of this script are used to generate a package listing enhanced with testing results.
Status change: from "Tests pass." to "Tests fail, but package loads."
"Tests pass." means that PackageEvaluator found the tests for your package, executed them, and they all passed.
"Tests fail, but package loads." means that PackageEvaluator found the tests for your package, executed them, and they didn't pass. However, trying to load your package with using worked.
This issue was filed because your testing status became worse. No additional issues will be filed if your package remains in this state, and no issue will be filed if it improves. If you'd like to opt-out of these status-change messages, reply to this message saying you'd like to and @IainNZ will add an exception. If you'd like to discuss PackageEvaluator.jl please file an issue at the repository. For example, your package may be untestable on the test machine due to a dependency - an exception can be added.
Test log:
>>> 'Pkg.add("Optim")' log
INFO: Installing Calculus v0.1.5
INFO: Installing DualNumbers v0.1.0
INFO: Installing Optim v0.4.0
INFO: Package database updated
INFO: METADATA is out-of-date — you may not have the latest version of Optim
INFO: Use `Pkg.update()` to get the latest versions of your packages
>>> 'using Optim' log
Julia Version 0.4.0-dev+712
Commit 4eb631e (2014-09-21 04:29 UTC)
Platform Info:
System: Linux (x86_64-unknown-linux-gnu)
CPU: Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
WORD_SIZE: 64
BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Sandybridge)
LAPACK: libopenblas
LIBM: libopenlibm
LLVM: libLLVM-3.3
>>> test log
Running tests:
* bfgs.jl
* gradient_descent.jl
* momentum_gradient_descent.jl
* grid_search.jl
* l_bfgs.jl
* levenberg_marquardt.jl
* newton.jl
* cg.jl
* nelder_mead.jl
* optimize.jl
* simulated_annealing.jl
* interpolating_line_search.jl
* api.jl
* golden_section.jl
* brent.jl
* type_stability.jl
ERROR: assertion failed: dphia < 0
in secant2! at /home/idunning/pkgtest/.julia/v0.4/Optim/src/linesearch/hz_linesearch.jl:423
in hz_linesearch! at /home/idunning/pkgtest/.julia/v0.4/Optim/src/linesearch/hz_linesearch.jl:333
in hz_linesearch! at /home/idunning/pkgtest/.julia/v0.4/Optim/src/linesearch/hz_linesearch.jl:188
in l_bfgs at /home/idunning/pkgtest/.julia/v0.4/Optim/src/l_bfgs.jl:165
in optimize at /home/idunning/pkgtest/.julia/v0.4/Optim/src/optimize.jl:113
in anonymous at no file:42
in include at ./boot.jl:246
in include_from_node1 at ./loading.jl:128
in anonymous at no file:31
in include at ./boot.jl:246
in include_from_node1 at loading.jl:128
in process_options at ./client.jl:285
in _start at ./client.jl:354
in _start_3B_3605 at /home/idunning/julia04/usr/bin/../lib/julia/sys.so
while loading /home/idunning/pkgtest/.julia/v0.4/Optim/test/type_stability.jl, in expression starting on line 36
while loading /home/idunning/pkgtest/.julia/v0.4/Optim/test/runtests.jl, in expression starting on line 29
INFO: Testing Optim
================================[ ERROR: Optim ]================================
failed process: Process(`/home/idunning/julia04/usr/bin/julia /home/idunning/pkgtest/.julia/v0.4/Optim/test/runtests.jl`, ProcessExited(1)) [1]
================================================================================
INFO: No packages to install, update or remove
ERROR: Optim had test errors
in error at error.jl:21
in test at pkg/entry.jl:719
in anonymous at pkg/dir.jl:28
in cd at ./file.jl:20
in cd at pkg/dir.jl:28
in test at pkg.jl:68
in process_options at ./client.jl:213
in _start at ./client.jl:354
in _start_3B_3605 at /home/idunning/julia04/usr/bin/../lib/julia/sys.so
>>> end of log
So I just presented JuliaOpt at the CAJUN meeting and the first question I got was "how does L-BFGS perform?" and I had to say "probably pretty good"(!)
Is there a good way to go about benchmarking it versus something else? Maybe even NLOpt?
The simple backtracking line search that we're using now is tolerable for use, but it makes too many calls to the objective function and often selects inferior points along the lines being searched. We should implement several algorithms and compare them -- as well as match them to the different optimization algorithms that generate lines for search.
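For reference, a minimal backtracking (Armijo) line search looks something like the sketch below; every trial point costs one objective call, which is exactly the expense complained about above. All names here are illustrative, not Optim internals:
# Shrink alpha until the sufficient-decrease condition holds.
# f: objective, x: current point, d: descent direction, gx: gradient at x.
function backtracking(f, x, d, gx; alpha = 1.0, c = 1e-4, rho = 0.5)
    fx = f(x)
    slope = dot(gx, d)                  # directional derivative; must be < 0
    while f(x + alpha * d) > fx + c * alpha * slope
        alpha *= rho                    # each shrink costs one more f call
    end
    return alpha
end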
just an upvote for this task mentioned in the README
please add a conjugate gradient solver :-)
I noticed that the Newton method is implemented in such a way that the step size (as embodied in the parameter alpha) is not reset across iterations. This affects at least the backtracking linesearch.
In practice, if at some iteration the step size as decided by the line search is very small, then at the next iteration the line search will start with the previous value of alpha, instead of starting with one. So the parameter alpha can only decrease across iterations when using the backtracking linesearch.
This can be highly inefficient for some problems, resulting in many more iterations than necessary.
Is this a deliberate choice? In my NLsolve package, I chose to always start linesearches with alpha=1, since it results in much more efficient outcomes for the problems that I tested: JuliaNLSolvers/NLsolve.jl@b147866
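A hypothetical optimizer loop illustrating the two policies (search_direction and backtracking_linesearch are placeholder helpers, not Optim internals):
for iter in 1:maxiter
    d = search_direction(x)             # placeholder: Newton or gradient step
    alpha = 1.0                         # proposed: reset before every line search;
                                        # the current code instead reuses the last alpha
    alpha = backtracking_linesearch(f, x, d, alpha)   # placeholder helper
    x += alpha * d
end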
Hi all
Really enjoying getting started on Julia. Huge thank you for all that you're doing.
Am trying to get Optim running and am getting the following error message for a particular problem:
assertion failed: dphia < 0
in secant2! at hz_linesearch.jl:423
in push! at array.jl:458
Any idea what would cause this?
Here is the setup:
function linRegCost(θ, X, y, m, λ)
    cost = (1/(2*m) * (X*θ - y)' * (X*θ - y)
            + λ/(2*m) * sum(θ[2:end].^2)
           )[1]
end
f(θ) = linRegCost(θ, X, y, m, λ)
function linRegGradient!(θ, grad, X, y, m, λ)
    grad = (1/m * X' * (X*θ - y)
            + λ/m * [0, θ[2:end]]
           )
end
g!(θ, grad) = linRegGradient!(θ, grad, X, y, m, λ)
initial_θ = zeros(size(X, 2))
res = optimize(f, g!, initial_θ)
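Incidentally, an observation on the snippet above (not a confirmed diagnosis of the assertion failure): linRegGradient! rebinds its grad argument to a new array instead of filling it in place, so the optimizer's gradient buffer is never updated. An in-place version would assign into the array:
# Hedged sketch: write the gradient into grad in place (names as above).
function linRegGradient!(θ, grad, X, y, m, λ)
    grad[:] = 1/m * X' * (X*θ - y) + λ/m * [0, θ[2:end]]
end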
I am seeing the following error intermittently
ERROR: assertion failed: :((dphi0<0))
This comes from https://github.com/johnmyleswhite/Optim.jl/blob/master/src/cgdescent.jl#L155
The trouble is that this issue is intermittent. Sometimes it goes away after calling the function with the same arguments again. Sometimes I need to restart Julia to get it to work. When it does work, I get the correct results.
I am trying to do a simple logistic regression using cgdescent on a largish dataset. My question for the moment is really to ask for some hints about where to investigate, so that I can create a replicable dataset.
$ ./julia -e 'versioninfo()'
Julia Version 0.3.0-prerelease+1692
Commit 736251d* (2014-02-23 06:21 UTC)
Platform Info:
System: Linux (i686-redhat-linux)
CPU: Genuine Intel(R) CPU T2250 @ 1.73GHz
WORD_SIZE: 32
BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY)
LAPACK: libopenblas
LIBM: libopenlibm
$ ./julia ~/.julia/Optim/test/runtests.jl
Warning: New definition
sum(DenseArray{T<:Number,N},Union((Int32...,),Array{Int32,1},Int32)) at /home/rick/.julia/NumericExtensions/src/reducedim.jl:241
is ambiguous with:
sum(BitArray{N},Any) at bitarray.jl:1570.
To fix, define
sum(BitArray{N},Union((Int32...,),Array{Int32,1},Int32))
before the new definition.
Running tests:
* bfgs.jl
* curve_fit.jl
* gradient_descent.jl
* momentum_gradient_descent.jl
* grid_search.jl
* l_bfgs.jl
* levenberg_marquardt.jl
* newton.jl
* cg.jl
* nelder_mead.jl
* optimize.jl
* simulated_annealing.jl
* interpolating_line_search.jl
* api.jl
$
I have an issue with the optimize function. Consider an example:
function estimate(Y,X)
f(theta) = sum([Y[i]-dot(X[i,:],theta) for i = 1:length(Y)])
result = optimize(f,
[1e-2, 1e-2],
method = :nelder_mead,
iterations = 10,
ftol = 1e-5)
result.minimum
end
If X is an integer array, then it works fine:
Y = [1.5, 2]
X = [[1, 2] [3, 4]]
estimate(Y,X)
2-element Array{Float64,1}:
31.7193
173.417
However, if the second argument is float, then it causes trouble:
Z = [[1.0, 2] [3, 4]]
estimate(Y,Z)
Argument dimensions are not map-compatible.
So far, I have noticed such behavior only when using the optimize function. Am I doing anything wrong?
I have recently implemented the Levenberg-Marquardt algorithm and a curve_fit wrapper that is useful for fitting non-linear models (see https://github.com/blakejohnson/julia-tools/blob/master/NonlinearFit.jl). Would you have any interest in adding it to the Optim module?
I guess you could integrate it into optimize by looking at the length of the return value of the cost function.
I just tried the following code from the help doc:
using Optim
f(x) = 2x^2+3x+1
optimize(f, -2.0, 1.0)
and got the following error:
no method optimize(Function,Float64,Float64)
passing instead:
optimize(f, [-2.0, 1.0])
yields
ERROR: no method *(Array{Float64,1},Array{Float64,1})
in power_by_squaring at intfuncs.jl:87
in f at none:1
in nelder_mead at /Users/VGupta/.julia/v0.2/Optim/src/nelder_mead.jl:135
in optimize at /Users/VGupta/.julia/v0.2/Optim/src/optimize.jl:402
I'm using Julia v.0.2 if that's of any use, and downloaded Optim earlier today.
Lately I've found myself constructing some pretty complicated closures to work around the current Optim interface.
As such, I'd like to propose a revised protocol for functions passed to Optim. Instead of working with a function f(x::Array) -> Float64, I'd like to work with functions of the form f(x::Array, extra::Any), where extra can be used to provide any additional information needed to evaluate f given x. This would make several things simpler:
What would people think of making this kind of breaking change? It would make Optim much more general, but might also make the interface less intuitive for newcomers, since you'd want to do something like f(x, nothing) when your function doesn't depend on additional information.
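For concreteness, a least-squares objective under the proposed protocol might look like this (a sketch of the idea, not an agreed-upon API; sumsq is a made-up name):
# extra carries the data that today must be captured in a closure.
function sumsq(x::Array, extra::Any)
    X, y = extra                        # unpack whatever the caller provided
    r = X * x - y
    return dot(r, r)
end

val = sumsq(x0, (X, y))                 # the optimizer would forward extra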
Hi All,
This may be by design, but it doesn't seem that Newton's method is supported when one simply provides the function f and asks Julia to autodiff the derivatives. Was this intentional? If so, why?
See line 375 of optimize.jl
VG
PackageEvaluator.jl is a script that runs nightly. It attempts to load all Julia packages and run their tests (if available) on both the stable version of Julia (0.2) and the nightly build of the unstable version (0.3). The results of this script are used to generate a package listing enhanced with testing results.
Status change: from "Tests pass." to "Tests fail, but package loads."
"Tests pass." means that PackageEvaluator found the tests for your package, executed them, and they all passed.
"Tests fail, but package loads." means that PackageEvaluator found the tests for your package, executed them, and they didn't pass. However, trying to load your package with using worked.
This issue was filed because your testing status became worse. No additional issues will be filed if your package remains in this state, and no issue will be filed if it improves. If you'd like to opt-out of these status-change messages, reply to this message saying you'd like to and @IainNZ will add an exception. If you'd like to discuss PackageEvaluator.jl please file an issue at the repository. For example, your package may be untestable on the test machine due to a dependency - an exception can be added.
Test log:
INFO: Installing Calculus v0.1.4
INFO: Installing DualNumbers v0.1.0
INFO: Installing Optim v0.3.1
INFO: Installing Options v0.2.2
INFO: Package database updated
INFO: METADATA is out-of-date — you may not have the latest version of Optim
INFO: Use `Pkg.update()` to get the latest versions of your packages
ERROR: assertion failed: dphia < 0
in secant2! at /home/idunning/pkgtest/.julia/v0.3/Optim/src/linesearch/hz_linesearch.jl:438
in hz_linesearch! at /home/idunning/pkgtest/.julia/v0.3/Optim/src/linesearch/hz_linesearch.jl:355
in hz_linesearch! at /home/idunning/pkgtest/.julia/v0.3/Optim/src/linesearch/hz_linesearch.jl:201
in l_bfgs at /home/idunning/pkgtest/.julia/v0.3/Optim/src/l_bfgs.jl:165
in optimize at /home/idunning/pkgtest/.julia/v0.3/Optim/src/optimize.jl:113
in anonymous at no file:42
in include at ./boot.jl:245
in include_from_node1 at ./loading.jl:128
in anonymous at no file:29
in include at ./boot.jl:245
in include_from_node1 at loading.jl:128
in process_options at ./client.jl:285
in _start at ./client.jl:354
while loading /home/idunning/pkgtest/.julia/v0.3/Optim/test/type_stability.jl, in expression starting on line 36
while loading /home/idunning/pkgtest/.julia/v0.3/Optim/test/runtests.jl, in expression starting on line 27
INFO: Package database updated
In the README, it mentions OptionsMod to set options, but it doesn't seem to be supported any more.
julia> using OptionsMod
ERROR: OptionsMod not found
julia> Pkg.add("OptionsMod")
ERROR: unknown package OptionsMod
I'm on:
julia> versioninfo()
Julia Version 0.3.0-rc1+60
Commit a327b47* (2014-07-17 19:50 UTC)
Platform Info:
System: Darwin (x86_64-apple-darwin13.3.0)
The REQUIRE file still has Options and Distributions. Does this package really still depend on them?
I noticed the following:
julia> tic(); using Optim; toc()
elapsed time: 10.270654383 seconds
10.270654383
but if I comment out using Distributions in Optim.jl, I get
julia> tic(); using Optim; toc()
elapsed time: 2.164259956 seconds
2.164259956
For comparison:
julia> tic(); using NLopt; toc()
elapsed time: 3.528717153 seconds
3.528717153
With the arrival of package precompilation, this is nowhere near the key issue it once was, but I suspect the vast majority of users are not precompiling many packages.
Distributions is used only by curve_fit, which is a bit outside of the mainstream anyway. Any thoughts on whether this should be moved to a separate package?
PackageEvaluator.jl is a script that runs nightly. It attempts to load all Julia packages and run their tests (if available) on both the stable version of Julia (0.2) and the nightly build of the unstable version (0.3). The results of this script are used to generate a package listing enhanced with testing results.
Status change: from "Tests pass." to "Tests fail, but package loads."
"Tests pass." means that PackageEvaluator found the tests for your package, executed them, and they all passed.
"Tests fail, but package loads." means that PackageEvaluator found the tests for your package, executed them, and they didn't pass. However, trying to load your package with using worked.
This issue was filed because your testing status became worse. No additional issues will be filed if your package remains in this state, and no issue will be filed if it improves. If you'd like to opt-out of these status-change messages, reply to this message saying you'd like to and @IainNZ will add an exception. If you'd like to discuss PackageEvaluator.jl please file an issue at the repository. For example, your package may be untestable on the test machine due to a dependency - an exception can be added.
Test log:
INFO: Installing ArrayViews v0.4.4
INFO: Installing Calculus v0.1.3
INFO: Installing Distributions v0.4.7
INFO: Installing DualNumbers v0.1.0
INFO: Installing Optim v0.2.0
INFO: Installing Options v0.2.2
INFO: Installing PDMats v0.2.0
INFO: Installing StatsBase v0.4.0
INFO: Package database updated
ERROR: sum_impl not defined
in _sum! at reducedim.jl:145
in sum at reducedim.jl:166
in sum at reducedim.jl:174
in levenberg_marquardt at /home/idunning/pkgtest/.julia/v0.3/Optim/src/levenberg_marquardt.jl:57
in curve_fit at /home/idunning/pkgtest/.julia/v0.3/Optim/src/curve_fit.jl:15
in include at boot.jl:244
in anonymous at no file:26
in include at boot.jl:244
in include_from_node1 at loading.jl:128
while loading /home/idunning/pkgtest/.julia/v0.3/Optim/test/curve_fit.jl, in expression starting on line 9
while loading /home/idunning/pkgtest/.julia/v0.3/Optim/test/runtests.jl, in expression starting on line 24
INFO: Package database updated
PackageEvaluator.jl is a script that runs nightly. It attempts to load all Julia packages and run their tests (if available) on both the stable version of Julia (0.2) and the nightly build of the unstable version (0.3). The results of this script are used to generate a package listing enhanced with testing results.
Status change: from "Tests pass." to "Package doesn't load."
"Tests pass." means that PackageEvaluator found the tests for your package, executed them, and they all passed.
"Package doesn't load." means that PackageEvaluator did not find tests for your package. Additionally, trying to load your package with using failed.
This issue was filed because your testing status became worse. No additional issues will be filed if your package remains in this state, and no issue will be filed if it improves. If you'd like to opt-out of these status-change messages, reply to this message saying you'd like to and @IainNZ will add an exception. If you'd like to discuss PackageEvaluator.jl please file an issue at the repository. For example, your package may be untestable on the test machine due to a dependency - an exception can be added.
Test log:
INFO: Installing ArrayViews v0.4.6
INFO: Installing Calculus v0.1.4
INFO: Installing Distributions v0.5.2
INFO: Installing DualNumbers v0.1.0
INFO: Installing Optim v0.3.0
INFO: Installing Options v0.2.2
INFO: Installing PDMats v0.2.1
INFO: Installing StatsBase v0.5.3
INFO: Package database updated
ERROR: NumericExtensions not found
in require at loading.jl:47
in include at ./boot.jl:245
in include_from_node1 at ./loading.jl:128
in reload_path at loading.jl:152
in _require at loading.jl:67
in require at loading.jl:54
in include at ./boot.jl:245
in include_from_node1 at ./loading.jl:128
in reload_path at loading.jl:152
in _require at loading.jl:67
in require at loading.jl:54
in include at ./boot.jl:245
in include_from_node1 at ./loading.jl:128
in reload_path at loading.jl:152
in _require at loading.jl:67
in require at loading.jl:51
in include at ./boot.jl:245
in include_from_node1 at loading.jl:128
in process_options at ./client.jl:285
in _start at ./client.jl:354
while loading /home/idunning/pkgtest/.julia/v0.3/PDMats/src/PDMats.jl, in expression starting on line 9
while loading /home/idunning/pkgtest/.julia/v0.3/Distributions/src/Distributions.jl, in expression starting on line 4
while loading /home/idunning/pkgtest/.julia/v0.3/Optim/src/Optim.jl, in expression starting on line 5
while loading /home/idunning/pkgtest/.julia/v0.3/Optim/testusing.jl, in expression starting on line 1
INFO: Package database updated
Hi,
We've formed the @JuliaOpt organization to be the home of optimization-related packages in Julia. We're hoping to publicly announce it in the coming days. Are you interested in transferring Optim to the organization?
Optim seems to easily meet our QA guidelines (http://juliaopt.org/). The only small change I'd ask is to move the run_tests.jl script to test/runtests.jl for consistency with the other packages.
Thanks!
I just installed the Optim package, and when I try to run the code in the example I keep getting the error listed above:
julia> using Optim
julia> function rosenbrock(x::Vector)
return (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
end
function rosenbrock_gradient!(x::Vector, storage::Vector)
storage[1] = -2.0 * (1.0 - x[1]) - 400.0 * (x[2] - x[1]^2) * x[1]
storage[2] = 200.0 * (x[2] - x[1]^2)
end
function rosenbrock_hessian!(x::Vector, storage::Matrix)
storage[1, 1] = 2.0 - 400.0 * x[2] + 1200.0 * x[1]^2
storage[1, 2] = -400.0 * x[1]
storage[2, 1] = -400.0 * x[1]
storage[2, 2] = 200.0
end
rosenbrock_hessian!(Array{T,1},Array{T,2}) at /Applications/JuliaStudio.app/Contents/Resources/juliaengine/main.jl:3
julia> f = rosenbrock
g! = rosenbrock_gradient!
h! = rosenbrock_hessian!
rosenbrock_hessian!(Array{T,1},Array{T,2}) at /Applications/JuliaStudio.app/Contents/Resources/juliaengine/main.jl:3
julia> optimize(f, [0.0, 0.0])
no method mean(Array{Float64,2},Int64)
I can't even seem to find any use of the mean function in optimize.jl or nelder_mead.jl. How can I get around this?
I didn't see any examples in the documentation that showed how you can optimize an objective function (with corresponding gradient and hessian) that depends on some variables you are not trying to optimize on.
What is the appropriate way to handle such a situation?
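The usual pattern (a sketch, not from the official docs) is to capture the fixed data in closures so that Optim only ever sees functions of the optimization variable; all names here are illustrative:
# Fix data (X, y) and a penalty λ inside closures.
function make_problem(X, y, λ)
    f(θ) = sum(abs2(X*θ - y)) / 2 + λ * dot(θ, θ) / 2
    function g!(θ, storage)
        storage[:] = X' * (X*θ - y) + λ * θ   # gradient written in place
    end
    return f, g!
end

f, g! = make_problem(X, y, 0.1)
res = optimize(f, g!, zeros(size(X, 2)), method = :l_bfgs)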
aviks ~/.julia/Optim $ julia -e "print(versioninfo())"
Julia Version 0.2.0
Commit e912b26fa4 2013-04-12 19:33:38*
Platform Info:
OS_NAME: Darwin
WORD_SIZE: 64
BLAS: libopenblas (USE64BITINT NO_AFFINITY)
LAPACK: libopenblas
LIBM: libopenlibm
aviks ~/.julia/Optim $ julia/julia run_tests.jl
Running tests:
* test/curve_fit.jl
ERROR: no method convert(Type{Array{Float64,2}},Array{Any,2})
in levenberg_marquardt at /Users/aviks/.julia/Optim/src/levenberg_marquardt.jl:57
in curve_fit at /Users/aviks/.julia/Optim/src/curve_fit.jl:15
in include_from_node1 at loading.jl:88
in anonymous at no file:23
in include_from_node1 at loading.jl:88
in process_options at client.jl:253
in _start at client.jl:332
at /Users/aviks/.julia/Optim/test/curve_fit.jl:9
at /Users/aviks/.julia/Optim/run_tests.jl:24
aviks ~/.julia/Optim $ git rev-parse HEAD
fea408b5afbb7fb9733828d0f815e7f433d8c25d
julia> using Optim
ERROR: Cholesky not defined
in include_from_node1 at loading.jl:92
in reload_path at loading.jl:112
in require at loading.jl:48
in include_from_node1 at loading.jl:92
in reload_path at loading.jl:112
in require at loading.jl:48
at /Users/gusl/.julia/Distributions/src/Distributions.jl:1094
at /Users/gusl/.julia/Optim/src/Optim.jl:5
It would be beneficial if we could pass optional arguments to the cost function; the C version of NLopt supports this, as do the scipy optimization routines.
An optimization procedure may terminate for a variety of reasons:
Currently, the optimization functions return OptimizationResults, which use a field converged to indicate the exit condition. But this may not provide accurate information if the procedure was terminated for special reasons (e.g. numerical problems).
Using a more informative exitflag (instead of only a boolean variable) also addresses problems such as the one you encountered at line 209 of l_bfgs.jl. In such cases, you can simply terminate the procedure and use a proper exitflag to tell the caller what happened.
Here is a possible list of exit flags: http://www.mathworks.com/help/optim/ug/fmincon.html
However, I think using symbols instead of integers might make it more user-friendly.
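For example, an exit condition reported as a symbol might be consumed like this (illustrative flag names, and results.exitflag is a hypothetical field, not an implemented API):
# Possible symbolic flags: :converged_x, :converged_f, :converged_grad,
# :max_iterations, :numerical_error (e.g. NaN/Inf during a line search).
if results.exitflag == :numerical_error
    warn("optimization stopped due to numerical problems")
end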
Optim.optimize(x->x[1]^2+x[2]^2,[1.,1.])
Results of Optimization Algorithm
* Algorithm: Nelder-Mead
* Starting Point: [1.0,1.0]
* Minimum: [0.16666666666666666,0.16666666666666666]
* Value of Function at Minimum: 0.055556
* Iterations: 4
* Convergence: true
* |x - x'| < NaN: false
* |f(x) - f(x')| / |f(x)| < 1.0e-08: true
* |g(x)| < NaN: false
* Exceeded Maximum Number of Iterations: false
* Objective Function Calls: 9
* Gradient Call: 0
And worse:
Optim.optimize(x->x[1]^2+x[2]^2,[1.,1.],grtol=1e-14)
Results of Optimization Algorithm
* Algorithm: Nelder-Mead
* Starting Point: [1.0,1.0]
* Minimum: [0.16666666666666666,0.16666666666666666]
* Value of Function at Minimum: 0.055556
* Iterations: 4
* Convergence: true
* |x - x'| < NaN: false
* |f(x) - f(x')| / |f(x)| < 1.0e-08: true
* |g(x)| < NaN: false
* Exceeded Maximum Number of Iterations: false
* Objective Function Calls: 9
* Gradient Call: 0
I finally think I have figured out the source of the assertion errors I've encountered using Optim with the default hz_linesearch: hz_linesearch does not check that the gradients of the line search objective function at a candidate point are non-NaN when finding the initial "bracket".
As a reminder, the error I have encountered several times before looks like this:
ERROR: assertion failed: lsr.slope[ib] < 0
in bisect! at /Users/tcovert/.julia/v0.3/Optim/src/linesearch/hz_linesearch.jl:577
That is: bisect is being called on a range where the slope of the function at the right end of the range is not negative, even though bisect was promised that it would be. Bisect correctly freaks out here and stops the party.
Bisect thinks it's supposed to get a range like this because the logic of the initial bracketing step goes like this:
However, if dphic is NaN, we are going to get to step 2 (since NaN is not >= 0), and we could end up bisecting.
I think this is an easy fix. In the "c-shrinking" stage that happens before the initial bracketing (lines 214-223 of hz_linesearch.jl), the testing "while" condition should also test for the non-NaN-ness of dphic.
Does this make sense?
Having phic be finite while dphic is NaN is somewhat of a corner case, but it seems to happen to me all the time. Recall that dphic = d/dc f(x + c*s) = dot(grad(f(x + c*s)), s). If the gradient in one dimension has a positive infinite slope, while the gradient in another dimension has a negative infinite slope, dphic = NaN.
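Concretely, the fix I have in mind would make the c-shrinking loop also reject a NaN slope (a sketch with placeholder names, not the actual hz_linesearch.jl code):
# Keep shrinking the trial step c while either the value phic or the
# slope dphic of the line objective is unusable. eval_phi_dphi is a
# placeholder for whatever computes the value/slope pair.
while (!isfinite(phic) || isnan(dphic)) && iter < itermax
    c /= 2                              # shrink the trial step
    phic, dphic = eval_phi_dphi(c)
    iter += 1
end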
PackageEvaluator.jl is a script that runs nightly. It attempts to load all Julia packages and run their tests (if available) on both the stable version of Julia (0.3) and the nightly build of the unstable version (0.4). The results of this script are used to generate a package listing enhanced with testing results.
Status change: from "Tests pass." to "Tests fail, but package loads."
"Tests pass." means that PackageEvaluator found the tests for your package, executed them, and they all passed.
"Tests fail, but package loads." means that PackageEvaluator found the tests for your package, executed them, and they didn't pass. However, trying to load your package with using worked.
This issue was filed because your testing status became worse. No additional issues will be filed if your package remains in this state, and no issue will be filed if it improves. If you'd like to opt-out of these status-change messages, reply to this message saying you'd like to and @IainNZ will add an exception. If you'd like to discuss PackageEvaluator.jl please file an issue at the repository. For example, your package may be untestable on the test machine due to a dependency - an exception can be added.
Test log:
>>> 'Pkg.add("Optim")' log
INFO: Installing Calculus v0.1.5
INFO: Installing DualNumbers v0.1.0
INFO: Installing Optim v0.4.0
INFO: Package database updated
>>> 'using Optim' log
Julia Version 0.4.0-dev+949
Commit 7dd704b (2014-10-05 02:28 UTC)
Platform Info:
System: Linux (x86_64-unknown-linux-gnu)
CPU: Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
WORD_SIZE: 64
BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Sandybridge)
LAPACK: libopenblas
LIBM: libopenlibm
LLVM: libLLVM-3.3
>>> test log
ERROR: assertion failed: dphia < 0
in secant2! at /home/idunning/pkgtest/.julia/v0.4/Optim/src/linesearch/hz_linesearch.jl:423
in hz_linesearch! at /home/idunning/pkgtest/.julia/v0.4/Optim/src/linesearch/hz_linesearch.jl:333
in hz_linesearch! at /home/idunning/pkgtest/.julia/v0.4/Optim/src/linesearch/hz_linesearch.jl:188
in l_bfgs at /home/idunning/pkgtest/.julia/v0.4/Optim/src/l_bfgs.jl:165
in optimize at /home/idunning/pkgtest/.julia/v0.4/Optim/src/optimize.jl:113
in anonymous at no file:42
in include at ./boot.jl:245
in include_from_node1 at ./loading.jl:128
in anonymous at no file:31
in include at ./boot.jl:245
in include_from_node1 at loading.jl:128
in process_options at ./client.jl:293
in _start at ./client.jl:362
in _start_3B_3761 at /home/idunning/julia04/usr/bin/../lib/julia/sys.so
while loading /home/idunning/pkgtest/.julia/v0.4/Optim/test/type_stability.jl, in expression starting on line 36
while loading /home/idunning/pkgtest/.julia/v0.4/Optim/test/runtests.jl, in expression starting on line 29
Running tests:
* bfgs.jl
* gradient_descent.jl
* momentum_gradient_descent.jl
* grid_search.jl
* l_bfgs.jl
* levenberg_marquardt.jl
* newton.jl
* cg.jl
* nelder_mead.jl
* optimize.jl
* simulated_annealing.jl
* interpolating_line_search.jl
* api.jl
* golden_section.jl
* brent.jl
* type_stability.jl
INFO: Testing Optim
================================[ ERROR: Optim ]================================
failed process: Process(`/home/idunning/julia04/usr/bin/julia /home/idunning/pkgtest/.julia/v0.4/Optim/test/runtests.jl`, ProcessExited(1)) [1]
================================================================================
INFO: No packages to install, update or remove
ERROR: Optim had test errors
in error at error.jl:21
in test at pkg/entry.jl:719
in anonymous at pkg/dir.jl:28
in cd at ./file.jl:20
in cd at pkg/dir.jl:28
in test at pkg.jl:68
in process_options at ./client.jl:221
in _start at ./client.jl:362
in _start_3B_3761 at /home/idunning/julia04/usr/bin/../lib/julia/sys.so
>>> end of log
Just a suggestion: in optimization theory, 'minimum' is the minimal value of a function, while the point that attains the minimum is called the 'minimizer'. So for code like res = optimize(objective, 0.0, 1.0), you might save a lot of future confusion by renaming res.minimum to res.minimizer.
PackageEvaluator.jl is a script that runs nightly. It attempts to load all Julia packages and run their tests (if available) on both the stable version of Julia (0.2) and the nightly build of the unstable version (0.3). The results of this script are used to generate a package listing enhanced with testing results.
Status change: from "Tests pass." to "Tests fail, but package loads."
"Tests pass." means that PackageEvaluator found the tests for your package, executed them, and they all passed.
"Tests fail, but package loads." means that PackageEvaluator found the tests for your package, executed them, and they didn't pass. However, trying to load your package with using worked.
This issue was filed because your testing status became worse. No additional issues will be filed if your package remains in this state, and no issue will be filed if it improves. If you'd like to opt-out of these status-change messages, reply to this message saying you'd like to and @IainNZ will add an exception. If you'd like to discuss PackageEvaluator.jl please file an issue at the repository. For example, your package may be untestable on the test machine due to a dependency - an exception can be added.
Test log:
INFO: Installing Calculus v0.1.3
INFO: Installing Distributions v0.3.0
INFO: Installing DualNumbers v0.1.0
INFO: Installing NumericExtensions v0.3.6
INFO: Installing Optim v0.3.0
INFO: Installing Options v0.2.2
INFO: Installing StatsBase v0.3.8
INFO: REQUIRE updated.
Warning: could not import Base.foldl into NumericExtensions
Warning: could not import Base.foldr into NumericExtensions
Warning: could not import Base.sum! into NumericExtensions
Warning: could not import Base.maximum! into NumericExtensions
Warning: could not import Base.minimum! into NumericExtensions
Warning: could not import Base.foldl into NumericExtensions
Warning: could not import Base.foldr into NumericExtensions
Warning: could not import Base.sum! into NumericExtensions
Warning: could not import Base.maximum! into NumericExtensions
Warning: could not import Base.minimum! into NumericExtensions
ERROR: assertion failed: :((dphia<0))
in secant2! at /home/idunning/pkgtest/.julia/v0.2/Optim/src/linesearch/hz_linesearch.jl:438
in hz_linesearch! at /home/idunning/pkgtest/.julia/v0.2/Optim/src/linesearch/hz_linesearch.jl:355
in hz_linesearch! at /home/idunning/pkgtest/.julia/v0.2/Optim/src/linesearch/hz_linesearch.jl:201
in l_bfgs at /home/idunning/pkgtest/.julia/v0.2/Optim/src/l_bfgs.jl:165
in optimize at /home/idunning/pkgtest/.julia/v0.2/Optim/src/optimize.jl:107
in anonymous at no file:42
in include at boot.jl:238
at /home/idunning/pkgtest/.julia/v0.2/Optim/test/type_stability.jl:44
at /home/idunning/pkgtest/.julia/v0.2/Optim/test/runtests.jl:31
INFO: REQUIRE updated.
PackageEvaluator.jl is a script that runs nightly. It attempts to load all Julia packages and run their tests (if available) on both the stable version of Julia (0.3) and the nightly build of the unstable version (0.4). The results of this script are used to generate a package listing enhanced with testing results.
Status change: from "Tests pass." to "Tests fail, but package loads."
"Tests pass." means that PackageEvaluator found the tests for your package, executed them, and they all passed.
"Tests fail, but package loads." means that PackageEvaluator found the tests for your package, executed them, and they didn't pass. However, trying to load your package with using worked.
This issue was filed because your testing status became worse. No additional issues will be filed if your package remains in this state, and no issue will be filed if it improves. If you'd like to opt-out of these status-change messages, reply to this message saying you'd like to and @IainNZ will add an exception. If you'd like to discuss PackageEvaluator.jl please file an issue at the repository. For example, your package may be untestable on the test machine due to a dependency - an exception can be added.
Test log:
>>> 'Pkg.add("Optim")' log
INFO: Installing Calculus v0.1.5
INFO: Installing DualNumbers v0.1.0
INFO: Installing Optim v0.4.0
INFO: Package database updated
>>> 'using Optim' log
Julia Version 0.4.0-dev+789
Commit 98c32a3 (2014-09-26 06:09 UTC)
Platform Info:
System: Linux (x86_64-unknown-linux-gnu)
CPU: Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
WORD_SIZE: 64
BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Sandybridge)
LAPACK: libopenblas
LIBM: libopenlibm
LLVM: libLLVM-3.3
>>> test log
ERROR: assertion failed: dphia < 0
in secant2! at /home/idunning/pkgtest/.julia/v0.4/Optim/src/linesearch/hz_linesearch.jl:423
in hz_linesearch! at /home/idunning/pkgtest/.julia/v0.4/Optim/src/linesearch/hz_linesearch.jl:333
in hz_linesearch! at /home/idunning/pkgtest/.julia/v0.4/Optim/src/linesearch/hz_linesearch.jl:188
in l_bfgs at /home/idunning/pkgtest/.julia/v0.4/Optim/src/l_bfgs.jl:165
in optimize at /home/idunning/pkgtest/.julia/v0.4/Optim/src/optimize.jl:113
in anonymous at no file:42
in include at ./boot.jl:245
in include_from_node1 at ./loading.jl:128
in anonymous at no file:31
in include at ./boot.jl:245
in include_from_node1 at loading.jl:128
in process_options at ./client.jl:285
in _start at ./client.jl:354
in _start_3B_3605 at /home/idunning/julia04/usr/bin/../lib/julia/sys.so
while loading /home/idunning/pkgtest/.julia/v0.4/Optim/test/type_stability.jl, in expression starting on line 36
while loading /home/idunning/pkgtest/.julia/v0.4/Optim/test/runtests.jl, in expression starting on line 29
Running tests:
* bfgs.jl
* gradient_descent.jl
* momentum_gradient_descent.jl
* grid_search.jl
* l_bfgs.jl
* levenberg_marquardt.jl
* newton.jl
* cg.jl
* nelder_mead.jl
* optimize.jl
* simulated_annealing.jl
* interpolating_line_search.jl
* api.jl
* golden_section.jl
* brent.jl
* type_stability.jl
INFO: Testing Optim
================================[ ERROR: Optim ]================================
failed process: Process(`/home/idunning/julia04/usr/bin/julia /home/idunning/pkgtest/.julia/v0.4/Optim/test/runtests.jl`, ProcessExited(1)) [1]
================================================================================
INFO: No packages to install, update or remove
ERROR: Optim had test errors
in error at error.jl:21
in test at pkg/entry.jl:719
in anonymous at pkg/dir.jl:28
in cd at ./file.jl:20
in cd at pkg/dir.jl:28
in test at pkg.jl:68
in process_options at ./client.jl:213
in _start at ./client.jl:354
in _start_3B_3605 at /home/idunning/julia04/usr/bin/../lib/julia/sys.so
>>> end of log
I have just run optimize with the Nelder-Mead method on a simple function with one maximum, and it yielded a wrong result. Code and output, with a comparison to LBFGS, are below:
--- code ---
using Optim
function testOneMode(x::Vector)
return (x[1]^2 - 1.)^2
end
optimize(testOneMode, [0.5], method = :nelder_mead)
optimize(testOneMode, [0.5], method = :l_bfgs)
--- . ---
--- output of NM optimize ---
Results of Optimization Algorithm
--- output of LBFGS optimise ---
Am I missing something (like NM being coded to return the max rather than the min)?
Using:
Version 0.2.1 (2014-02-11 06:30 UTC)
Official http://julialang.org/ release
x86_64-apple-darwin12.5.0
julia> Pkg.status()
Required packages:
General curve fitting methods, such as the LM method in this library (and maybe a simple linear one?), seem a reasonable functionality to have in julia/base. Is it worth trying to do this, and if so, would that functionality be utilized in this library or require duplication?
The trick is that the current implementations here in Optim.jl are entangled and would probably need to be separated and simplified for julia/base inclusion (honestly, I'm just guessing here, but that seems a reasonable assertion). For example, levenberg_marquardt() returns a MultivariateOptimizationResults which curve_fit() then simplifies. Also, curve_fit() calculates the jacobian using finite differencing in the Calculus package. These make perfect sense in this library, but maybe less so for julia/base.
The code looks pretty simple, so it wouldn't be a problem to just disentangle and submit a general pull request to julia. However, blatant duplication seems like a poor solution.
Is there a way to have an objective function f(x, a, b), where x is the variable I want to minimize over, and a, b are parameters that are set somewhere else in the program? Asked differently: can we have optimize(f::Function, vargs...)? One pattern is sketched below.
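One pattern that works today (a usage sketch, not a dedicated API): capture the parameters in an anonymous function at the call site:
# a and b come from the enclosing scope; Optim sees a function of x alone
res = optimize(x -> f(x, a, b), x0, method = :nelder_mead)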
$ julia run_tests.jl
Running tests:
* test/bfgs.jl
WARNING: A_mul_B(A,B,C) is deprecated, use A_mul_B!(A,B,C) instead.
in depwarn at deprecated.jl:29
in A_mul_B at deprecated.jl:19
in bfgs at /home/mlubin/.julia/Optim/src/bfgs.jl:102
in include at boot.jl:238
WARNING: A_mul_B(A,B,C) is deprecated, use A_mul_B!(A,B,C) instead.
in depwarn at deprecated.jl:29
in A_mul_B at deprecated.jl:19
in bfgs at /home/mlubin/.julia/Optim/src/bfgs.jl:151
in include at boot.jl:238
It would be useful to allow users to specify the initial Hessian matrix (instead of always forcing it to be the identity matrix).
A mailing list thread: https://groups.google.com/forum/#!topic/julia-users/J5NgwNRTf6k
It shouldn't be difficult to implement. However, caution has to be exercised, as some algorithms don't rely on a Hessian to work, and some use the inverse Hessian (instead of the Hessian itself).
I was adapting my little MinFinder algorithm to the new fminbox API and I noticed that the remaining cg function has 3 more arguments than the generic optimize currently allows. These are eta, P and precondprep.
Changing the optimizer=cg keyword argument in fminbox to another method fails with:
unrecognized keyword argument "eta"
How about adding these 3 arguments to the optimize function definition and using optimize with method=optimizer in fminbox, instead of the current optimizer? @timholy, I believe you wrote this code, and if you want I can help with writing this in a PR if you agree. You probably also know better how this will play with your pending PR #50.
Slightly related: I saw that optimize.jl is pretty verbose, and this change will add even a few more lines to it. Maybe this calls for a rewrite? I can assist here if someone has a good idea for it.
I am very happy with the current status of the package.
I think it would be great to include a section/paragraph that documents the use of fg! for people who would like to write functions that compute both the objective value and the gradient.
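For instance, a combined objective/gradient for a simple quadratic might look like this (a sketch following the fg!(x, storage) convention discussed in the API proposal below, not copied from the docs):
function fg!(x::Vector, storage::Vector)
    storage[1] = 2.0 * x[1]             # gradient written in place
    storage[2] = 2.0 * x[2]
    return x[1]^2 + x[2]^2              # objective value returned
end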
Hi, I'm running Julia 0.2.0 on Windows (64 bit), running the Rosenbrock example from the docs.
for the command:
optimize(f, [0.0, 0.0])
or
optimize(f, [0.0, 0.0], method = :nelder_mead)
error message was:
ERROR: no method OptimizationResults(ASCIIString,Array{Float64,1},Array{Float64,1},Float64,Int32,Bool,Bool,OptimizationTrace,Int32,Int32,Array{Float64,1})
same result for the Curve Fit Demo
model(xpts, p) = p[1]*exp(-xpts.*p[2])
xpts = linspace(0,10,20)
data = model(xpts, [1.0 2.0]) + 0.01*randn(length(xpts))
beta, r, J = curve_fit(model, xpts, data, [0.5, 0.5])
errors = estimate_errors(beta, r, J)
Any ideas?
thanks
Mark
Is it somehow possible to inform about the progress while running? Something like current estimates, steps done, etc.?
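One existing hook is per-iteration tracing; a minimal use, assuming the show_trace keyword discussed in another issue below:
# Print iteration state (iteration count, current x, f(x), ...) while running.
res = optimize(f, g!, x0, method = :l_bfgs, show_trace = true)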
There are three implementations of numerical derivatives at present. I think we should try to narrow them down. My implementation seems to be more precise, but it is also more expensive than Tim's.
I couldn't find any mention of it in the Julia mailing lists, but apparently the {} notation should no longer be used for Any[]. With Julia 0.4 I get many warnings of the following kind:
WARNING: deprecated syntax "{a,b, ...}" use "Any[a,b, ...]" instead.
This is a little bit annoying since it unnecessarily litters the terminal.
At the current stage, if I set show_trace to true, it shows something like:
State of Optimization Algorithm
* Iteration: 1
* State: [-13.621713566688001,30.43215474581049]
* f(State): 555.8335811657051
* Additional Information:
* g(x): [-13.621694401562563,30.43215539941171]
* Maximum component of g(x): 30.43215539941171
* ~inv(H): 2x2 Float64 Array:
0.84352 0.349592
0.349592 0.21898
State of Optimization Algorithm
* Iteration: 2
* State: [-6.10358894953822,13.635616414811501]
* f(State): 111.62167838244122
* Additional Information:
* g(x): [-6.081289579727934,13.630405311476677]
* Maximum component of g(x): 13.630405311476677
* ~inv(H): 2x2 Float64 Array:
0.999137 0.000938874
0.000938874 1.00011
State of Optimization Algorithm
* Iteration: 3
* State: [-5.39757188816153,12.048951804078238]
* f(State): 87.22397174773185
* Additional Information:
* g(x): [-5.348905122581963,12.033421449620741]
* Maximum component of g(x): 12.033421449620741
* ~inv(H): 2x2 Float64 Array:
0.98971 0.0117916
0.0117916 0.998946
....
Preferably, I would like to see one line per iteration, showing the iteration number, objective value, norm of the gradient, etc. (just like in MATLAB).
The current output is overly verbose, especially when the solution is high-dimensional. In most cases, I wouldn't want to see additional information such as the entire gradient or the inverse Hessian.
Hi all
I have a few years of experience with implementing (I like to write my own code) and using various large-scale optimization algorithms in MATLAB. I have worked with non-linear conjugate gradient, and various versions of quasi and truncated Newton solvers, using both line-search and trust-region variants.
In my experience, the line-search component of an optimizer is very important, but one very often sees this neglected. The line-search balances two conflicting requirements:
In the BFGS/LBFGS algorithms, the line search is of particular importance, because if the line search fails to meet the Wolfe conditions, the inverse Hessian approximation can end up being non-positive definite, which will send the optimizer going uphill instead of downhill. A nice line search should satisfy these conditions (sketched below).
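For reference, the two (weak) Wolfe conditions for a trial step alpha along a descent direction d from x, with constants 0 < c1 < c2 < 1 (a sketch; f and g are the objective and its gradient):
function wolfe_ok(f, g, x, d, alpha; c1 = 1e-4, c2 = 0.9)
    slope0 = dot(g(x), d)               # directional derivative at x; must be < 0
    sufficient_decrease = f(x + alpha*d) <= f(x) + c1 * alpha * slope0
    curvature = dot(g(x + alpha*d), d) >= c2 * slope0
    return sufficient_decrease && curvature
end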
If anyone feels like translating from MATLAB, a version of my favourite line search is available here:
http://www.cs.umd.edu/users/oleary/software/
You will have to translate both cvsrch.m and cstep.m.
I have been using this implementation in my own optimizers for a while now and I find the extra complication was well worth it. This line search also helps the optimizer deal more efficiently with non-convex regions of the objective function.
As a further recommendation for this line-search algorithm, it is used in Poblano:
https://software.sandia.gov/trac/poblano/
If someone else does not do this translation, I will probably end up doing it myself, but I can't promise when ...
Currently, the function cgdescent still relies on the old-style options (from Options). We should change it to use standard keyword arguments instead, and then remove the dependency on Options.
cc: @timholy
I'm not sure where the best place to discuss Optim is, so I'll post this here.
In my field, computational chemistry, it is common to enforce a maximum step size constraint at each optimization step, when working with atomic systems. This is very important when the gradient is large because a large step can be taken without this constraint. A large step will often lead away from the current basin of attraction to a different one. Finding the minimum associated with the current geometry is often why the optimization is being performed. Also, if the initial force is large enough, all of the atoms might be placed so far apart that the energy (function value) goes to zero. This certainly is a minimum, but it is a trivial one.
I would like to know if anyone has any thoughts on adding a maximum step size to Optim. The way it is typically implemented is to calculate the length of the proposed step and, if it is larger than the maximum, scale it down to the maximum (see the sketch below).
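The clamping rule in code (a minimal sketch, not Optim API; clamp_step is a hypothetical helper):
# Scale a proposed step so its length never exceeds max_len.
function clamp_step(step::Vector, max_len::Real)
    len = norm(step)                    # Euclidean length of the step
    return len > max_len ? step * (max_len / len) : step
end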
One reason that a maximum step size makes sense in atomic systems is that the second derivatives change very rapidly with changes in bond lengths. This means that curvature information learned at the current position should only be used locally. For example, a typical maximum step size might be between 0.1 and 0.3 Angstroms (several percent of an equilibrium bond length).
I am currently writing a paper that benchmarks various optimization algorithms and implementations on atomic systems, and I would like to include Optim; however, as it currently stands, Optim immediately pushes all of the atoms extremely far apart in the first step, due to the large initial gradient.
The MultivariateOptimizationResults constructor will fail if the function being optimized doesn't return a Float64. It would seem best if the Float64 params could be made Real.
Location found @ https://github.com/JuliaOpt/Optim.jl/blob/master/src/types.jl#L28
We currently have a mixed API for optimization: the functions I wrote work very differently from the functions that Tim wrote. To unify things, I'd like to propose a new API that I hope we can all standardize on. The proposal is quite long, but I think it touches on all of the issues we need to confront. I'm opening it as an issue because I expect we'll want to debate the design for a while before implementing anything.
To simplify the discussion, let's introduce some notation.
My API worked exclusively with pure functions, which I'll refer to as:
- f denotes a function from R^n to R. The result is returned as a Real of some sort.
- g denotes the gradient of f, which makes g a function from R^n to R^n. The result is returned as a Vector{T} for some Real type T.
- h denotes the Hessian of f, which makes h a function from R^n to R^(n x n). The result is returned as a Matrix{T} for some Real type T.

Tim's API employed mutating functions, which I'll refer to as:

- f is the same as above.
- g! denotes the gradient of f, but one which mutates an input argument so that it is called as g!(storage, x). I'd like to transition this over to g!(x, storage). As will be seen, I'd also like to remove the nothing arguments being used in the current implementation. Because the function is impure, nothing is returned.
- h! denotes the Hessian of f, but one which mutates an input argument so that it is called as h!(storage, x). Again, I'd like to transition this over to h!(x, storage). Because the function is impure, nothing is returned.
- fg! denotes a coupled pair of function and gradient that get evaluated simultaneously for efficiency. This coupled pair is called as fg!(x, storage) and returns the value of f evaluated at x after mutating storage.

One could also consider functions like gh! and fgh!, but I'm not currently aware of a proposed use for those things.
Using this notation, my proposed new API is the following:

- Stop supporting the pure functions g and h and enforce the use of mutating functions g! and h!. This may confuse some users, but I think the gains are worth the pain.
- Automatically generate g! and h! using finite differencing when they are not provided. I've already added the ability to do finite-differencing by mutating an array to the Calculus package in preparation for this.
- Couple f, g! and fg! into a single unit that permits multiple dispatch. These differentiable functions will become the core backend construct of the Optim package. End-users will not need to provide them, because we will generate these values for users automatically. But users who want to exploit forms like fg! can use these types, which will prevent the automatic creation of wrappers.

Specifically, I propose creating the following types and methods:
immutable OnceDifferentiableFunction
    f::Function
    g!::Function
end

immutable CoupledOnceDifferentiableFunction
    f::Function
    g!::Function
    fg!::Function
end

immutable TwiceDifferentiableFunction
    f::Function
    g!::Function
    h!::Function
end
Using these functions, we could create methods like the following:
For pure function calls:
callf(Function, x)
callf(OnceDifferentiableFunction, x)
callf(CoupledOnceDifferentiableFunction, x)
callf(TwiceDifferentiableFunction, x)
For mutating gradient function calls:
callg!(OnceDifferentiableFunction, x, storage)
callg!(CoupledOnceDifferentiableFunction, x, storage)
callg!(TwiceDifferentiableFunction, x, storage)
For mutating Hessian function calls:
callh!(TwiceDifferentiableFunction, x, storage)
For simultaneous function and mutating gradient function calls:
callfg!(OnceDifferentiableFunction, x, storage)
callfg!(CoupledOnceDifferentiableFunction, x, storage)
callfg!(TwiceDifferentiableFunction, x, storage)
Using these types and functions, we should be able to express all of the computations we're doing now while doing much less memory allocation. Also, the use of multiple dispatch should make it easier to do redirection by automatically creating gradients when needed.
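As a sanity check on the design, a fallback for callfg! on a plain OnceDifferentiableFunction could be a two-call composition (a sketch of the proposal above, not actual Optim source):
function callfg!(d::OnceDifferentiableFunction, x, storage)
    d.g!(x, storage)                    # gradient written into storage in place
    return d.f(x)                       # objective value returned, matching fg! semantics
end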