
blackboxoptim.jl's Introduction

BlackBoxOptim.jl


BlackBoxOptim is a global optimization package for Julia (http://julialang.org/). It supports both multi- and single-objective optimization problems and is focused on (meta-)heuristic/stochastic algorithms (DE, NES etc) that do NOT require the function being optimized to be differentiable. This is in contrast to more traditional, deterministic algorithms that are often based on gradients/differentiability. It also supports parallel evaluation to speed up optimization for functions that are slow to evaluate.

We note that there are a few alternative Julia packages with similar goals that can be explored (if you know of other libraries to list here, please post an issue or submit a PR):

  • PRIMA is a Julia interface to a modern re-implementation of Powell's optimization algorithms (BOBYQA, COBYLA etc)

Installation

using Pkg; Pkg.add("BlackBoxOptim")

or the latest master directly from GitHub:

using Pkg; Pkg.add(PackageSpec(url="https://github.com/robertfeldt/BlackBoxOptim.jl"))

from the Julia REPL.

Usage

To show how the BlackBoxOptim package can be used, let's implement the Rosenbrock function, a classic problem in numerical optimization. We'll assume that you have already installed BlackBoxOptim as described above.

First, we'll load BlackBoxOptim and define the Rosenbrock function (in 2 dimensions):

using BlackBoxOptim

function rosenbrock2d(x)
  return (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
end

We can now call the bboptimize() function, specifying the function to be optimized (here: rosenbrock2d()) and the range of values allowed for each of the dimensions of the input:

res = bboptimize(rosenbrock2d; SearchRange = (-5.0, 5.0), NumDimensions = 2)

We get back an optimization result object that we can query to, for example, get the best fitness and solution candidate:

best_fitness(res) < 0.001        # Fitness is typically very close to zero here...
length(best_candidate(res)) == 2 # We get back a Float64 vector of dimension 2

BlackBoxOptim will default to using an adaptive differential evolution optimizer in this case and use it to try to locate a solution where both elements are Floats in the range -5.0 to 5.0. If you want a different range of allowed values for the second dimension of the solution, you can supply one range per dimension. In that case you do not need to specify the number of dimensions, since that is implicit from the number of ranges supplied:

bboptimize(rosenbrock2d; SearchRange = [(-5.0, 5.0), (-2.0, 2.0)])

If you want to use a different optimizer, it can be specified with the Method keyword. For example, to use the standard differential evolution optimizer DE/rand/1/bin:

bboptimize(rosenbrock2d; SearchRange = (-5.0, 5.0), NumDimensions = 2, Method = :de_rand_1_bin)

You can give a starting (initial candidate) point for the search when calling bboptimize (this currently requires the master branch, so ] add BlackBoxOptim#master). Beware that very little checking is done on it, so be sure to provide a candidate of the right length that lies inside the search space:

x0 = [1.0, 1.0] # starting point (aka initial candidate)
res = bboptimize(rosenbrock2d, x0; SearchRange = (-5.0, 5.0), NumDimensions = 2, MaxTime = 0.1)
isapprox(best_fitness(res), 0.0)

Note that the rosenbrock2d() function is quite easy to optimize. Even a random search will come close if we give it more time:

bboptimize(rosenbrock2d; SearchRange = (-5.0, 5.0), NumDimensions = 2, Method = :random_search, MaxTime = 10.0)

But if we optimize the same Rosenbrock function in, say, 30 dimensions, it becomes very hard for a random searcher, while sNES or DE can find good solutions if we give them some time. We can compare optimizers using the compare_optimizers() function:

function rosenbrock(x)
  return( sum( 100*( x[2:end] .- x[1:end-1].^2 ).^2 .+ ( x[1:end-1] .- 1 ).^2 ) )
end

res = compare_optimizers(rosenbrock; SearchRange = (-5.0, 5.0), NumDimensions = 30, MaxTime = 3.0);

You can find more examples of using BlackBoxOptim in the examples directory.

Providing initial solution(s)

One or multiple initial solutions can be provided as the 2nd argument to the bboptimize function, for example:

myproblem(x) = (x[1] - 3.14)^2 + (x[2] - 7.2)^4
optimum = [3.14, 7.2]
good_guess = [3.0, 7.2]
res1 = bboptimize(myproblem, good_guess; NumDimensions = 2, SearchRange = (-10.0, 10.0));
@assert isapprox(best_fitness(res1), myproblem(optimum); atol = 1e-30)

two_good_guesses = [good_guess, [3.1, 7.3]]
res2 = bboptimize(myproblem, two_good_guesses; NumDimensions = 2, SearchRange = (-10.0, 10.0));
@assert isapprox(best_fitness(res2), myproblem(optimum); atol = 1e-30)

Note that such solutions are not evaluated when added, but rather when they are compared to other candidates during the search. Thus, depending on the number of optimization steps and the population size (for population-based optimizers), it might take a number of steps before your provided solution is considered. If you want a more immediate effect, investigate whether a non-population-based optimizer is more appropriate and gives better results.
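As a rough sketch of that last suggestion (the method choice here is only an illustration, not a recommendation from the package authors), the same good guess can be passed to a non-population-based optimizer so it is used from the very first steps:

# Hypothetical illustration reusing myproblem/good_guess from above:
res3 = bboptimize(myproblem, good_guess;
    NumDimensions = 2, SearchRange = (-10.0, 10.0),
    Method = :generating_set_search, MaxTime = 1.0);
best_candidate(res3)  # should stay close to the optimum [3.14, 7.2]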

Multi-objective optimization

Multi-objective evaluation is supported by the BorgMOEA algorithm. Your fitness function should return a tuple of the objective values, you should indicate that the fitness scheme is (typically) Pareto fitness, and you should specify the number of objectives. Otherwise the usage is similar; here we optimize the Schaffer1 function:

fitness_schaffer1(x) = (sum(abs2, x), sum(abs2, x .- 2.0))
res = bboptimize(fitness_schaffer1; Method=:borg_moea,
            FitnessScheme=ParetoFitnessScheme{2}(is_minimizing=true),
            SearchRange=(-10.0, 10.0), NumDimensions=3, ϵ=0.05,
            MaxSteps=50000, TraceInterval=1.0, TraceMode=:verbose);

pareto_frontier(res) would give a vector of all Pareto-optimal solutions and corresponding fitness values. If we simply want to get one individual with the best aggregated fitness:

bs = best_candidate(res)
bf = best_fitness(res)

By default, the aggregated fitness is the sum of the individual objective values, but this could be changed when declaring the fitness scheme, e.g. the weighted sum with weights (0.3, 0.7):

weightedfitness(f) = f[1]*0.3 + f[2]*0.7

...
    FitnessScheme=ParetoFitnessScheme{2}(is_minimizing=true, aggregator=weightedfitness)
...
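For completeness, a full call might look like the following sketch (the other parameters simply mirror the Schaffer1 example above; the call itself is an illustration, not taken from the original docs):

res_w = bboptimize(fitness_schaffer1; Method=:borg_moea,
            FitnessScheme=ParetoFitnessScheme{2}(is_minimizing=true, aggregator=weightedfitness),
            SearchRange=(-10.0, 10.0), NumDimensions=3, ϵ=0.05,
            MaxSteps=50000, TraceMode=:silent);
best_fitness(res_w)  # the aggregated (0.3/0.7-weighted) fitness of the best candidate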

Of course, once the Pareto set (pareto_frontier(res)) is found, one can apply different criteria to filter the solutions. For example, to find the solution with the minimal first objective:

pf = pareto_frontier(res)
best_obj1, idx_obj1 = findmin(map(elm -> fitness(elm)[1], pf))
bo1_solution = params(pf[idx_obj1]) # get the solution candidate itself...

or to use different weighted sums:

weightedfitness(f, w) = f[1]*w + f[2]*(1.0-w)
weight = 0.4 # Weight on first objective, so second objective will have weight 1-0.4=0.6
best_wfitness, idx = findmin(map(elm -> weightedfitness(fitness(elm), weight), pf))
bsw = params(pf[idx])

Configurable Options

The section above described the basic API for the BlackBoxOptim package. There is a large number of different optimization algorithms that you can select with the Method keyword (adaptive_de_rand_1_bin, adaptive_de_rand_1_bin_radiuslimited, separable_nes, xnes, de_rand_1_bin, de_rand_2_bin, de_rand_1_bin_radiuslimited, de_rand_2_bin_radiuslimited, random_search, generating_set_search, probabilistic_descent, borg_moea).

In addition to the Method parameter, there are many other parameters you can change. Some key ones are:

  • MaxTime: For how long can the optimization run? Defaults to false, which means that the number of iterations, rather than elapsed time, is the budget.
  • MaxFuncEvals: How many evaluations of the function being optimized are allowed.
  • TraceMode: How optimization progress should be displayed (:silent, :compact, :verbose). Defaults to :compact, which outputs the current number of fitness evaluations and the best value every TraceInterval seconds.
  • PopulationSize: How large the initial population is for population-based optimizers. Defaults to 50.
  • TargetFitness: Allows specifying the value of the best fitness for a given problem. The algorithm stops as soon as the distance between the current best_fitness() and TargetFitness is less than FitnessTolerance.

This list is not complete; please refer to the examples and tests directories for additional parameters and examples.
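As a combined illustration of these keywords (the specific values are arbitrary, chosen only to show them in context):

res = bboptimize(rosenbrock2d;
    SearchRange = (-5.0, 5.0), NumDimensions = 2,
    Method = :adaptive_de_rand_1_bin_radiuslimited,
    PopulationSize = 25, MaxFuncEvals = 20_000,
    TraceMode = :compact, TraceInterval = 0.5)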

State of the Library

Existing Optimizers

  • Natural Evolution Strategies:
    • Separable NES: separable_nes
    • Exponential NES: xnes
    • Distance-weighted Exponential NES: dxnes
  • Differential Evolution optimizers, six different:
    • Adaptive DE/rand/1/bin: adaptive_de_rand_1_bin
    • Adaptive DE/rand/1/bin with radius limited sampling: adaptive_de_rand_1_bin_radiuslimited
    • DE/rand/1/bin: de_rand_1_bin
    • DE/rand/1/bin with radius limited sampling (a type of trivial geography): de_rand_1_bin_radiuslimited
    • DE/rand/2/bin: de_rand_2_bin
    • DE/rand/2/bin with radius limited sampling (a type of trivial geography): de_rand_2_bin_radiuslimited
  • Direct search:
    • Generating set search:
      • Compass/coordinate search: generating_set_search
      • Direct search through probabilistic descent: probabilistic_descent
  • Resampling Memetic Searchers:
    • Resampling Memetic Search (RS): resampling_memetic_search
    • Resampling Inheritance Memetic Search (RIS): resampling_inheritance_memetic_search
  • Stochastic Approximation:
    • Simultaneous Perturbation Stochastic Approximation (SPSA): simultaneous_perturbation_stochastic_approximation
  • RandomSearch (to compare to): random_search

For multi-objective optimization only the BorgMOEA (borg_moea) is supported but it is a good one. :)

Multithreaded and Parallel Function Evaluation

NB! There are problems with multi-threaded evaluation on Julia 1.6 and later. We will be investigating this and hope to fix it in a future release. For now the related tests have been deactivated. Sorry for the inconvenience.

For some (slow) functions being optimized, and if you have a multi-core CPU, you can gain performance by using multithreaded or parallel evaluation. This typically requires an optimization algorithm that evaluates many candidate points in one batch, for example the NES family (xnes, dxnes etc). See the file

examples/rosenbrock_parallel.jl

for one example of this. On Julia 1.3 and later it is typically better to use multithreading; see the file

examples/multithreaded_optimization.jl

for some examples.
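As a rough, hedged sketch of the multi-process (parallel) variant, using the Workers keyword that also appears in the cluster-related issue further below; consult examples/rosenbrock_parallel.jl for the canonical setup:

using Distributed
addprocs(4)                      # start worker processes first
@everywhere using BlackBoxOptim

# The function being optimized must be defined on all workers:
@everywhere slow_rosenbrock(x) = (sleep(0.01); sum(100*(x[2:end] .- x[1:end-1].^2).^2 .+ (x[1:end-1] .- 1).^2))

# Batch-evaluating methods (the NES family) benefit the most:
res = bboptimize(slow_rosenbrock; Method = :dxnes, SearchRange = (-5.0, 5.0),
    NumDimensions = 30, MaxTime = 10.0, Workers = workers())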

Guide to selecting an optimizer

In our experiments the radius-limited DEs perform better than the classic de_rand_1_bin DE in almost all cases, and combining them with adaptive setting of the weights makes them even better. So for now adaptive_de_rand_1_bin_radiuslimited() is our recommended "go-to" optimizer. However, the difference between the top-performing DEs is slight.

Some NES variants (separable NES or dxnes) can sometimes beat the DE optimizers in the tests we have done. But xnes and dxnes are often very slow, and while the separable NES isn't, it is less robust. So we don't recommend them as a default, robust choice.

We maintain a list of optimizers ranked by performance when tested on a large set of problems. From the list we can see that the adaptive differential evolution optimizers (adaptive_de_rand_1_bin and/or adaptive_de_rand_1_bin_radiuslimited) are typically on top when it comes to mean rank. The generating_set_search often gives the best results (its MedianLogTimesWorseFitness is typically in the range 0.3-0.6, meaning its median fitness value is 10^0.3 ≈ 2.0 to 10^0.6 ≈ 4.0 times worse than the best fitness found on a problem) and is faster (typically ranked first on run time), but it is not as robust as the DE optimizers and is thus ranked lower on mean rank (per problem).
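In other words (a small arithmetic illustration):

times_worse(logratio) = 10 ^ logratio  # MedianLogTimesWorseFitness is a log10 fitness ratio
times_worse(0.3)   # ≈ 2.0
times_worse(0.6)   # ≈ 4.0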

Overall we recommend one of the DE optimizers as a generally good choice, since their runtime is often good and they are robust and work well both for simpler, separable problems and for more complex ones. They also tend to scale better to high-dimensional settings. Of course, optimizer performance varies depending on the problem and its dimensionality, so YMMV.

Acknowledgements

We acknowledge the support from the Swedish Scientific Council ("Vetenskapsrådet") in the projects 2015-04913 and 2020-05272.


blackboxoptim.jl's Issues

MethodError: no method matching start(::BlackBoxOptim.OptimizationResults)

Trying to run all the examples, high_dimensional_rosenbrock.jl and some others that were not modified last week throw the following error:

fitness_for_opt(problemname, 2, 10, 100, :adaptive_de_rand_1_bin)

Rosenbrock, n = 2, optimizer = adaptive_de_rand_1_bin
Starting optimization with optimizer BlackBoxOptim.DiffEvoOpt{BlackBoxOptim.FitPopulation{Float64},BlackBoxOptim.RadiusLimitedSelector,BlackBoxOptim.AdaptiveDiffEvoRandBin{3},BlackBoxOptim.RandomBound{BlackBoxOptim.RangePerDimSearchSpace}}
0.00 secs, 0 evals, 0 steps

Optimization stopped after 101 steps and 0.0009999275207519531 seconds
Termination reason: Max number of steps (100) reached
Steps per second = 101007.32093466858
Function evals per second = 130009.42298521698
Improvements/step = 0.36
Total function evaluations = 130

Best candidate found: [-2.78048,7.65913]

Fitness: 14.809475789

MethodError: no method matching start(::BlackBoxOptim.OptimizationResults)
Closest candidates are:
start(::SimpleVector) at essentials.jl:170
start(::Base.MethodList) at reflection.jl:258
start(::IntSet) at intset.jl:184
...

in fitness_for_opt(::String, ::Int64, ::Int64, ::Int64, ::Symbol) at .\In[2]:8

And the next line:

for (dims, steps) in [(128, 1e6), (256, 2e6), (512, 4e6)]
println("Num dims = ", dims, ", Num steps = ", steps)
fitness = fitness_for_opt(problemname, dims, 50, steps, :adaptive_de_rand_1_bin)
end

Num dims = 128, Num steps = 1.0e6

Rosenbrock, n = 128, optimizer = adaptive_de_rand_1_bin
Starting optimization with optimizer BlackBoxOptim.DiffEvoOpt{BlackBoxOptim.FitPopulation{Float64},BlackBoxOptim.RadiusLimitedSelector,BlackBoxOptim.AdaptiveDiffEvoRandBin{3},BlackBoxOptim.RandomBound{BlackBoxOptim.RangePerDimSearchSpace}}
0.00 secs, 0 evals, 0 steps
0.50 secs, 64308 evals, 64202 steps, improv/step: 0.174 (last = 0.1741), fitness=2442.363745254
1.00 secs, 125149 evals, 125044 steps, improv/step: 0.176 (last = 0.1788), fitness=412.429653553
1.50 secs, 185556 evals, 185457 steps, improv/step: 0.181 (last = 0.1906), fitness=318.203482602
2.00 secs, 250841 evals, 250747 steps, improv/step: 0.182 (last = 0.1840), fitness=313.772299580
2.51 secs, 322866 evals, 322775 steps, improv/step: 0.183 (last = 0.1872), fitness=311.988327655
3.01 secs, 387349 evals, 387261 steps, improv/step: 0.183 (last = 0.1840), fitness=310.106016270
3.51 secs, 456980 evals, 456895 steps, improv/step: 0.185 (last = 0.1943), fitness=286.289713893
4.01 secs, 529895 evals, 529812 steps, improv/step: 0.184 (last = 0.1818), fitness=239.909922582
4.51 secs, 604067 evals, 603990 steps, improv/step: 0.185 (last = 0.1872), fitness=172.293105809
5.01 secs, 669166 evals, 669092 steps, improv/step: 0.186 (last = 0.1925), fitness=159.858206148
5.51 secs, 738414 evals, 738345 steps, improv/step: 0.185 (last = 0.1851), fitness=104.056642604
6.01 secs, 811251 evals, 811185 steps, improv/step: 0.185 (last = 0.1846), fitness=102.906458438
6.51 secs, 875571 evals, 875507 steps, improv/step: 0.185 (last = 0.1865), fitness=101.486934244
7.02 secs, 945691 evals, 945630 steps, improv/step: 0.186 (last = 0.1913), fitness=100.566451284

Optimization stopped after 1000001 steps and 7.42300009727478 seconds
Termination reason: Max number of steps (1000000) reached
Steps per second = 134716.55488286095
Function evals per second = 134724.50315165074
Improvements/step = 0.186081
Total function evaluations = 1000060

Best candidate found: [0.999778,0.999393,0.999552,1.0012,1.00109,1.00006,1.00102,1.00077,0.999943,1.00044,1.00014,0.999701,0.999263,1.00038,1.00085,1.00081,1.00108,0.999933,0.999358,1.0001,1.00049,1.00019,1.00014,1.00014,1.00052,1.00036,0.999297,0.999253,1.00031,1.00052,1.00007,0.999845,1.00015,0.999746,1.00027,1.00116,0.999125,0.99844,1.00011,0.999508,0.999093,0.999971,0.9983,0.998624,0.999006,0.999172,1.00071,1.00038,0.999396,0.999204,0.999713,0.99944,0.99863,0.998816,0.999312,0.999532,1.00013,0.99914,0.998135,0.996522,0.996164,0.992173,0.981551,0.962452,0.933084,0.871296,0.765644,0.597394,0.368063,0.142401,0.0278034,0.0111991,0.00811688,0.0122003,0.00899323,0.00968402,0.0101368,0.00984478,0.0115218,0.00968908,0.00847069,0.0106596,0.0107605,0.0106498,0.00781984,0.0106147,0.00428776,-0.316451,0.620049,0.800388,0.898521,0.948649,0.973744,0.987115,0.99399,0.996537,0.998642,1.00033,0.998881,0.999249,0.998546,0.997564,0.9964,0.991797,0.98334,0.967661,0.941945,0.891133,0.804099,0.653335,0.434981,0.199469,0.0473854,0.0125062,0.00962304,0.0102804,0.0100706,0.010631,0.00931846,0.0109869,0.00989174,0.00902658,0.0102114,0.0101546,0.00885919,0.0105129,0.010692,0.000286411]

Fitness: 99.828907853

MethodError: no method matching start(::BlackBoxOptim.OptimizationResults)
Closest candidates are:
start(::SimpleVector) at essentials.jl:170
start(::Base.MethodList) at reflection.jl:258
start(::IntSet) at intset.jl:184
...

in fitness_for_opt(::String, ::Int64, ::Int64, ::Float64, ::Symbol) at .\In[2]:8
in macro expansion; at .\In[7]:3 [inlined]
in anonymous at .<missing>:?

InexactError() when using BorgMOEA

I'm getting an InexactError() when I try to use the BorgMOEA method to fit multiple objectives at once. Attached is a Jupyter Notebook with a toy ODE system that I'm trying to fit to some experimental data where I have multiple initial conditions (which I set up with a Monte Carlo Problem).

BorgMOEA-Test.zip

It seems if I have a smaller population size I either won't get the error or the optimization will run a few times before throwing the error.

Is this a known bug, or user error?

Output statement:

Starting optimization with optimizer BlackBoxOptim.BorgMOEA{BlackBoxOptim.EpsBoxDominanceFitnessScheme{3,Float64,true,Base.#sum},BlackBoxOptim.ProblemEvaluator{Tuple{Float64,Float64,Float64},BlackBoxOptim.IndexedTupleFitness{3,Float64},BlackBoxOptim.EpsBoxArchive{3,Float64,BlackBoxOptim.EpsBoxDominanceFitnessScheme{3,Float64,true,Base.#sum}},BlackBoxOptim.FunctionBasedProblem{BlackBoxOptim.ParetoFitnessScheme{3,Float64,true,Base.#sum},BlackBoxOptim.RangePerDimSearchSpace,Void}},BlackBoxOptim.FitPopulation{BlackBoxOptim.IndexedTupleFitness{3,Float64}},BlackBoxOptim.FixedGeneticOperatorsMixture,BlackBoxOptim.RandomBound{BlackBoxOptim.RangePerDimSearchSpace}}
0.00 secs, 0 evals, 0 steps
pop.size=1000 arch.size=0 n.restarts=0
InexactError()

Stacktrace:
[1] trunc(::Type{Int64}, ::Float64) at ./float.jl:672
[2] ϵ_index at /Users/mcfefa/.julia/v0.6/BlackBoxOptim/src/ntuple_fitness.jl:128 [inlined]
[3] macro expansion at /Users/mcfefa/.julia/v0.6/BlackBoxOptim/src/ntuple_fitness.jl:147 [inlined]
[4] ϵ_index(::Tuple{Float64,Float64,Float64}, ::Array{Float64,1}, ::Type{Val{true}}) at /Users/mcfefa/.julia/v0.6/BlackBoxOptim/src/ntuple_fitness.jl:146
[5] BlackBoxOptim.IndexedTupleFitness(::Tuple{Float64,Float64,Float64}, ::Float64, ::Array{Float64,1}, ::Type{Val{true}}) at /Users/mcfefa/.julia/v0.6/BlackBoxOptim/src/ntuple_fitness.jl:167
[6] convert(::Type{BlackBoxOptim.IndexedTupleFitness{3,Float64}}, ::Tuple{Float64,Float64,Float64}, ::BlackBoxOptim.EpsBoxDominanceFitnessScheme{3,Float64,true,Base.#sum}) at /Users/mcfefa/.julia/v0.6/BlackBoxOptim/src/ntuple_fitness.jl:270
[7] fitness(::Array{Float64,1}, ::BlackBoxOptim.ProblemEvaluator{Tuple{Float64,Float64,Float64},BlackBoxOptim.IndexedTupleFitness{3,Float64},BlackBoxOptim.EpsBoxArchive{3,Float64,BlackBoxOptim.EpsBoxDominanceFitnessScheme{3,Float64,true,Base.#sum}},BlackBoxOptim.FunctionBasedProblem{BlackBoxOptim.ParetoFitnessScheme{3,Float64,true,Base.#sum},BlackBoxOptim.RangePerDimSearchSpace,Void}}, ::Int64) at /Users/mcfefa/.julia/v0.6/BlackBoxOptim/src/evaluator.jl:56
[8] update_fitness!(::BlackBoxOptim.ProblemEvaluator{Tuple{Float64,Float64,Float64},BlackBoxOptim.IndexedTupleFitness{3,Float64},BlackBoxOptim.EpsBoxArchive{3,Float64,BlackBoxOptim.EpsBoxDominanceFitnessScheme{3,Float64,true,Base.#sum}},BlackBoxOptim.FunctionBasedProblem{BlackBoxOptim.ParetoFitnessScheme{3,Float64,true,Base.#sum},BlackBoxOptim.RangePerDimSearchSpace,Void}}, ::BlackBoxOptim.Candidate{BlackBoxOptim.IndexedTupleFitness{3,Float64}}) at /Users/mcfefa/.julia/v0.6/BlackBoxOptim/src/evaluator.jl:88
[9] update_population_fitness!(::BlackBoxOptim.BorgMOEA{BlackBoxOptim.EpsBoxDominanceFitnessScheme{3,Float64,true,Base.#sum},BlackBoxOptim.ProblemEvaluator{Tuple{Float64,Float64,Float64},BlackBoxOptim.IndexedTupleFitness{3,Float64},BlackBoxOptim.EpsBoxArchive{3,Float64,BlackBoxOptim.EpsBoxDominanceFitnessScheme{3,Float64,true,Base.#sum}},BlackBoxOptim.FunctionBasedProblem{BlackBoxOptim.ParetoFitnessScheme{3,Float64,true,Base.#sum},BlackBoxOptim.RangePerDimSearchSpace,Void}},BlackBoxOptim.FitPopulation{BlackBoxOptim.IndexedTupleFitness{3,Float64}},BlackBoxOptim.FixedGeneticOperatorsMixture,BlackBoxOptim.RandomBound{BlackBoxOptim.RangePerDimSearchSpace}}) at /Users/mcfefa/.julia/v0.6/BlackBoxOptim/src/borg_moea.jl:215
[10] step!(::BlackBoxOptim.BorgMOEA{BlackBoxOptim.EpsBoxDominanceFitnessScheme{3,Float64,true,Base.#sum},BlackBoxOptim.ProblemEvaluator{Tuple{Float64,Float64,Float64},BlackBoxOptim.IndexedTupleFitness{3,Float64},BlackBoxOptim.EpsBoxArchive{3,Float64,BlackBoxOptim.EpsBoxDominanceFitnessScheme{3,Float64,true,Base.#sum}},BlackBoxOptim.FunctionBasedProblem{BlackBoxOptim.ParetoFitnessScheme{3,Float64,true,Base.#sum},BlackBoxOptim.RangePerDimSearchSpace,Void}},BlackBoxOptim.FitPopulation{BlackBoxOptim.IndexedTupleFitness{3,Float64}},BlackBoxOptim.FixedGeneticOperatorsMixture,BlackBoxOptim.RandomBound{BlackBoxOptim.RangePerDimSearchSpace}}) at /Users/mcfefa/.julia/v0.6/BlackBoxOptim/src/borg_moea.jl:100
[11] step! at /Users/mcfefa/.julia/v0.6/BlackBoxOptim/src/opt_controller.jl:246 [inlined]
[12] run!(::BlackBoxOptim.OptRunController{BlackBoxOptim.BorgMOEA{BlackBoxOptim.EpsBoxDominanceFitnessScheme{3,Float64,true,Base.#sum},BlackBoxOptim.ProblemEvaluator{Tuple{Float64,Float64,Float64},BlackBoxOptim.IndexedTupleFitness{3,Float64},BlackBoxOptim.EpsBoxArchive{3,Float64,BlackBoxOptim.EpsBoxDominanceFitnessScheme{3,Float64,true,Base.#sum}},BlackBoxOptim.FunctionBasedProblem{BlackBoxOptim.ParetoFitnessScheme{3,Float64,true,Base.#sum},BlackBoxOptim.RangePerDimSearchSpace,Void}},BlackBoxOptim.FitPopulation{BlackBoxOptim.IndexedTupleFitness{3,Float64}},BlackBoxOptim.FixedGeneticOperatorsMixture,BlackBoxOptim.RandomBound{BlackBoxOptim.RangePerDimSearchSpace}},BlackBoxOptim.ProblemEvaluator{Tuple{Float64,Float64,Float64},BlackBoxOptim.IndexedTupleFitness{3,Float64},BlackBoxOptim.EpsBoxArchive{3,Float64,BlackBoxOptim.EpsBoxDominanceFitnessScheme{3,Float64,true,Base.#sum}},BlackBoxOptim.FunctionBasedProblem{BlackBoxOptim.ParetoFitnessScheme{3,Float64,true,Base.#sum},BlackBoxOptim.RangePerDimSearchSpace,Void}}}) at /Users/mcfefa/.julia/v0.6/BlackBoxOptim/src/opt_controller.jl:282
[13] run!(::BlackBoxOptim.OptController{BlackBoxOptim.BorgMOEA{BlackBoxOptim.EpsBoxDominanceFitnessScheme{3,Float64,true,Base.#sum},BlackBoxOptim.ProblemEvaluator{Tuple{Float64,Float64,Float64},BlackBoxOptim.IndexedTupleFitness{3,Float64},BlackBoxOptim.EpsBoxArchive{3,Float64,BlackBoxOptim.EpsBoxDominanceFitnessScheme{3,Float64,true,Base.#sum}},BlackBoxOptim.FunctionBasedProblem{BlackBoxOptim.ParetoFitnessScheme{3,Float64,true,Base.#sum},BlackBoxOptim.RangePerDimSearchSpace,Void}},BlackBoxOptim.FitPopulation{BlackBoxOptim.IndexedTupleFitness{3,Float64}},BlackBoxOptim.FixedGeneticOperatorsMixture,BlackBoxOptim.RandomBound{BlackBoxOptim.RangePerDimSearchSpace}},BlackBoxOptim.FunctionBasedProblem{BlackBoxOptim.ParetoFitnessScheme{3,Float64,true,Base.#sum},BlackBoxOptim.RangePerDimSearchSpace,Void}}) at /Users/mcfefa/.julia/v0.6/BlackBoxOptim/src/opt_controller.jl:416
[14] #bboptimize#86(::Array{Any,1}, ::Function, ::Function, ::Dict{Symbol,Any}) at /Users/mcfefa/.julia/v0.6/BlackBoxOptim/src/bboptimize.jl:70
[15] (::BlackBoxOptim.#kw##bboptimize)(::Array{Any,1}, ::BlackBoxOptim.#bboptimize, ::Function, ::Dict{Symbol,Any}) at ./:0 (repeats 2 times)
[16] include_string(::String, ::String) at ./loading.jl:522

Parallel execution not working

I'm trying to follow the example Rosenbrock_parallel.jl
Specifically, the multi-threaded part with bboptimize and multiple workers gives me errors:

"Error: LoadError: On worker 2:
KeyError: key BlackBoxOptim [...] not found"

I'm running Julia 1.0

Suggestions? I'm really interested to test the capabilities of multi CPU on this package

Restarting

Hi,

Thank you for this package.

I am very much interested in re-starting an optimization. I slightly modified your example here: https://github.com/robertfeldt/BlackBoxOptim.jl/blob/master/examples/restarting_optimization.jl

I noticed something strange: it is as if results from the previous steps were not used, and calling bboptimize(opt) re-started the optimization from scratch.

Here is the code I used:

using BlackBoxOptim
using Plots

# Source: https://github.com/robertfeldt/BlackBoxOptim.jl/blob/master/examples/restarting_optimization.jl
# Modifications:
# * run the minimization by batch of 10 iterations, 100 times
# * run the minimization in a single shot, 1000 iterations
# Expected result:
# The two strategies should give the same results
rosenbrock2d(x) = abs2(1.0 - x[1]) + 100.0 * abs2(x[2] - x[1]^2)


opt1 = bbsetup(rosenbrock2d; SearchRange = (-5.0, 5.0), NumDimensions = 2,
    MaxSteps = 10, TraceMode = :silent);

# Run the minimization "10 by 10"
listRes = [bboptimize(opt1) for i in 1:100]

# Now let's compare with running 1000 iterations in a single shot
opt2 = bbsetup(rosenbrock2d; SearchRange = (-5.0, 5.0), NumDimensions = 2,
    MaxSteps = 1000, TraceMode = :silent);

resOneShot = bboptimize(opt2)
bestCandidateOneShot = best_candidate(resOneShot)
bestFitnessOneShot = best_fitness(resOneShot)

# Now compare fitness progress:
progressFitness = map(best_fitness, listRes)
println("Fitness progress: ", progressFitness)

progressBestCandidates = map(best_candidate, listRes)

# And results should approach the Rosenbrock optimum at [1.0, 1.0]:
println("Best solution progress: ", progressBestCandidates)

println("Difference fitness restarting vs one-shot = $(progressFitness[100] - bestFitnessOneShot)")
println("Best value restarting = $(progressBestCandidates[100])")
println("Best value one-shot = $(bestCandidateOneShot)")

# Plot fitness
plot(collect(1:1:100), progressFitness[1:100])

Output:

julia> println("Best value restarting = $(progressBestCandidates[100])")
Best value restarting = [1.22267, 1.55253]

julia> println("Best value one-shot = $(bestCandidateOneShot)")
Best value one-shot = [1.07469, 1.1445]


Am I doing something wrong when I re-start a minimization?

Many thanks.

Multi-objective optimization

Currently I'm considering whether multicriterion optimization (MO) can improve the convergence for my problems (Bayesian probability-like functions, i.e. prior probability is one criterion, likelihood is another).

Would it make sense to extend BlackBoxOptim to multicriterion problems? If yes, what would be the best strategy of implementing MO? What are the most efficient evolutionary methods for MO? Should MO be constrained to 2-criteria case (as then the Pareto frontier is easier to manipulate)?

MethodErrors with NumDimensions = 1

BlackBoxOptim doesn't seem to handle one-dimensional problems properly; as an MWE:

using BlackBoxOptim
bboptimize(x -> x^2; NumDimensions = 1)

Running this on Julia 1.0.1 (with [a134a8b2] BlackBoxOptim v0.4.0 edit: or on master) gives me

julia> using BlackBoxOptim

julia> bboptimize(x -> x^2; NumDimensions = 1)
ERROR: MethodError: no method matching ^(::Array{Float64,1}, ::Int64)
Closest candidates are:
  ^(::Float16, ::Integer) at math.jl:795
  ^(::Missing, ::Integer) at missing.jl:120
  ^(::Missing, ::Number) at missing.jl:93
  ...
Stacktrace:
 [1] macro expansion at .\none:0 [inlined]
 [2] literal_pow at .\none:0 [inlined]
 [3] (::getfield(Main, Symbol("##7#8")))(::Array{Float64,1}) at .\REPL[12]:1
 [4] fitness(::Array{Float64,1}, ::FunctionBasedProblem{ScalarFitnessScheme{true},RangePerDimSearchSpace,Nothing}) at C:\Users\Eric-d2\.julia\packages\BlackBoxOptim\RgNEa\src\problem.jl:61
 [5] setup_problem(::Function, ::DictChain{Symbol,Any}) at C:\Users\Eric-d2\.julia\packages\BlackBoxOptim\RgNEa\src\bboptimize.jl:36
 [6] #bbsetup#73(::Base.Iterators.Pairs{Symbol,Int64,Tuple{Symbol},NamedTuple{(:NumDimensions,),Tuple{Int64}}}, ::Function, ::Function, ::Dict{Symbol,Any}) at C:\Users\Eric-d2\.julia\packages\BlackBoxOptim\RgNEa\src\bboptimize.jl:86
 [7] #bbsetup at .\none:0 [inlined]
 [8] #bboptimize#72(::Base.Iterators.Pairs{Symbol,Int64,Tuple{Symbol},NamedTuple{(:NumDimensions,),Tuple{Int64}}}, ::Function, ::Function, ::Dict{Symbol,Any}) at C:\Users\Eric-d2\.julia\packages\BlackBoxOptim\RgNEa\src\bboptimize.jl:69
 [9] #bboptimize at .\none:0 [inlined] (repeats 2 times)
 [10] top-level scope at none:0

For comparison,

bboptimize(x -> x[1]^2; NumDimensions = 2)

runs fine. I'm guessing 1D problems aren't a focus of the package; I came across this error when trying to minimize over six variables and then maximize over one variable by two calls to bboptimize (maybe I should be doing this another way though).
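A possible workaround (a sketch, not an official fix): index into the one-element candidate vector, since the optimizer always passes a Vector{Float64} to the fitness function:

using BlackBoxOptim
# x is a 1-element Vector{Float64}, so use x[1] instead of x:
bboptimize(x -> x[1]^2; SearchRange = (-5.0, 5.0), NumDimensions = 1)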

Non-population optimizer test fails intermittently

I've been running PackageEvaluator a bunch over the past few weeks and this package has been intermittently failing. Something about the tolerances or random inputs, or an actual bug?

Top-level interface
  > run a simple optimization
    > using bboptimize() with mostly defaults
    > using bbsetup()/bboptimize() with mostly defaults
    > using non-population optimizer
      Failure :: (line:-1) :: using non-population optimizer :: fact was false
        Expression: best_fitness(res) --> less_than(1.0)
          Expected: 3.5080192708609457 < 1.0
    > using population optimizer
  > continue running an optimization after it finished
  > continue running an optimization after serializing to disc
Out of 46 total facts:
  Verified: 45
  Failed:   1

version number/tag

Hi,
the recent BlackBoxOptim package doesn't seem to have any version numbering/tagging, preventing pinning the package to a stable version.
Would it be possible to tag the package when API changes, or is it already tagged and I am doing something wrong?

best,
steve
BlackBoxOptim 0.0.0- master
julia> versioninfo()
Julia Version 0.4.0-dev+5860
Commit 7fa43ed (2015-07-08 20:57 UTC)
Platform Info:
System: Linux (x86_64-linux-gnu)
CPU: Intel(R) Core(TM) i7-3667U CPU @ 2.00GHz
WORD_SIZE: 64
BLAS: libmkl_rt
LAPACK: libmkl_rt
LIBM: libimf
LLVM: libLLVM-3.3

Restart optimization

Thanks for this nice and useful package!

I was trying to continue an optimization problem (with saving to disk) but failed to understand how to do that. Following one of the examples I tried the following, with the surprising result of a non-monotonic fitness. Is this a bug or did I misunderstand something?

julia> using BlackBoxOptim

julia> function rosenbrock2d(x)
         return (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
       end
rosenbrock2d (generic function with 1 method)

julia> opt = bbsetup(rosenbrock2d; SearchRange = (-5.0, 5.0), NumDimensions = 2,
              MaxSteps = 10, TraceMode = :silent);

julia> res = [bboptimize(opt) for _ in 1:10];

julia> println("Fitness progress: ", map(best_fitness, res))
Fitness progress: [2.60211, 350.808, 33.158, 7.12917, 13.7697, 57.5527, 5.39775, 21.7929, 2.10093, 2.10093]

julia> versioninfo()
Julia Version 0.6.0
Commit 903644385b* (2017-06-19 13:05 UTC)
Platform Info:
  OS: Linux (x86_64-pc-linux-gnu)
  CPU: Intel(R) Core(TM) i7-5600U CPU @ 2.60GHz
  WORD_SIZE: 64
  BLAS: libblas
  LAPACK: liblapack
  LIBM: libm
  LLVM: libLLVM-4.0.0 (ORCJIT, broadwell)

julia> Pkg.status("BlackBoxOptim")
 - BlackBoxOptim                 0.3.0

how to cite

This package helped me a lot with an MCMC problem (finding an initial point for starting the tuning, with a heavily multimodal likelihood), and I would like to mention it, or papers it is related to, in the paper the results ended up in. A reference, or ideally a BibTeX entry, would be helpful.

(sorry if this is mentioned somewhere, could not find it)

fix deprecated inner constructor syntax, require 0.6?

On both the latest tagged release and master, I get warnings like

WARNING: deprecated syntax "inner constructor DictChain(...) around /home/tamas/.julia/v0.6/BlackBoxOptim/src/parameters.jl:8".                                                                                      
Use "DictChain{K,V}(...) where {K,V}" instead.                                                                                                                                                                       

While #69 fixed a lot of things, it would be great to fix these too. My understanding is that this would imply julia 0.6 in REQUIRE, but that may be reasonable with v0.7 around the corner. Also, FemtoCleaner could perhaps fix these automatically?

Feature request: intermittently calling external function.

Hi,

I often find myself monitoring the fitness value during an optimisation, wondering if I have finally found something good enough. The problem is that it can be difficult to judge success by this number. One thing which I would have liked to do is to intermittently use the best parameter set to update some plot or other visualisation.

Currently, the opt can call a printout function every once in a while. But if we could add a feature to intermittently call a user-defined function with the parameters as arguments, then one could monitor progress in whatever way one sees fit.
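Until such a feature exists, a workaround sketch in plain Julia (no extra package options assumed) is to wrap the fitness function so a user-supplied monitor runs whenever a new best candidate is found:

function with_monitor(f, monitor)
    best = Ref(Inf)
    return function (x)
        fx = f(x)
        if fx < best[]
            best[] = fx
            monitor(copy(x), fx)   # e.g. update a plot or print a summary
        end
        return fx
    end
end

# rosenbrock2d as defined earlier in this README:
wrapped = with_monitor(rosenbrock2d, (x, fx) -> println("new best ", fx, " at ", x))
bboptimize(wrapped; SearchRange = (-5.0, 5.0), NumDimensions = 2, MaxTime = 2.0)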

Feature Request

I'm hoping you'll entertain a few feature requests for future versions:

  1. Support for parallel evaluation of objective functions. In my work a typical objective function evaluation requires several minutes, so speeding up the optimization by parallelization is imperative. The code I'm porting over from Matlab uses Hansen's cmaes.m as the optimization engine. When instructed to run in parallel mode, cmaes simply calls my objective function with a list of candidate points, rather than just a single point of the search space. My code then evaluates the objective functions for the elements in this list in parallel.
  2. An additional stopping criterion: The existence of a file in the working directory having a particular name, with particular contents on the first line would cause immediate termination of the optimization run. For example, the file name might be stop.dat and the special contents on the first line could be stop now. This would allow a user to gracefully exit a long optimization run. Your package could change the contents of the first line so that a subsequent invocation of the optimizer would run normally without the user having to remember to edit the file.

Thanks,
--Peter

New release for 0.7?

Hey can you tag a new release for 0.7? From the PR history it looks like the repo has been updated to be 0.7 compatible? I am working on updating a package that has BlackBoxOptim.jl as a dependency and it would make my job a bit easier 🙂

Thanks!

CMA-ES progress

Have you made much progress on your CMA-ES implementation?
https://github.com/Staross/JuliaCMAES/ is a bit short on features and the author has a PhD to write, so I'm not sure if he'll find time to finish it.

I'm trying to decide whether to hack a c interface/write one myself - if you have anything going it would probably be better quality though.

README example errors

Running this example from the README

function rosenbrock(x)
  return( sum( 100*( x[2:end] - x[1:end-1].^2 ).^2 + ( x[1:end-1] - 1 ).^2 ) )
end

res = compare_optimizers(rosenbrock; SearchRange = (-5.0, 5.0), NumDimensions = 30, MaxTime = 3.0);

Terminates with KeyError: key: :TraceMode not found.

Example: regression via optimization

This example produces an error in the syntax of bboptimize:

using BlackBoxOptim

ols_bestfit, ols_error = bboptimize((betas) -> ols_regression_objective(betas', x1, y1),(-10.0, 10.0); dimensions = 4, iterations = 2e4)

MethodError: no method matching bboptimize(::##7#8, ::Tuple{Float64,Float64}; dimensions=4, iterations=20000.0)
Closest candidates are:
bboptimize(::Any) at C:\Users\Denis.julia\v0.5\BlackBoxOptim\src\bboptimize.jl:73 got unsupported keyword arguments "dimensions", "iterations"
bboptimize(::Any, ::Associative{Symbol,Any}; kwargs...) at C:\Users\Denis.julia\v0.5\BlackBoxOptim\src\bboptimize.jl:73

More generally, would you have an example for the parameter estimation of a Lorenz system with BBO implementing DE?
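For reference, a sketch of the same call using the current keyword-based API documented in the README above (ols_regression_objective, x1, and y1 come from the original example and are assumed to be defined):

res = bboptimize(betas -> ols_regression_objective(betas', x1, y1);
    SearchRange = (-10.0, 10.0), NumDimensions = 4, MaxFuncEvals = 20_000)
ols_bestfit = best_candidate(res)
ols_error   = best_fitness(res)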

BlackBoxOptim seems to break Julia help functionality

The help functionality stops working when loading blackboxoptim alone in a new julia v0.6 session:

julia> using BlackBoxOptim

help?> println
search:ERROR: ArgumentError: abstract type is not a valid key for type Symbol
Stacktrace:
 [1] setindex!(::Dict{Symbol,Void}, ::Void, ::String) at .\dict.jl:414
 [2] push! at .\set.jl:35 [inlined]
 [3] unique_from(::Array{Any,1}, ::Array{Symbol,1}, ::Set{Symbol}, ::Int64) at .\set.jl:157
 [4] unique(::Array{Any,1}) at .\set.jl:140
 [5] accessible(::Module) at .\docs\utils.jl:362
 [6] completions at .\docs\utils.jl:367 [inlined]
 [7] repl_search(::Base.TTY, ::String) at .\docs\utils.jl:124

help?> median
search:ERROR: ArgumentError: abstract type is not a valid key for type Symbol
Stacktrace:
 [1] setindex!(::Dict{Symbol,Void}, ::Void, ::String) at .\dict.jl:414
 [2] push! at .\set.jl:35 [inlined]
 [3] unique_from(::Array{Any,1}, ::Array{Symbol,1}, ::Set{Symbol}, ::Int64) at .\set.jl:157
 [4] unique(::Array{Any,1}) at .\set.jl:140
 [5] accessible(::Module) at .\docs\utils.jl:362
 [6] completions at .\docs\utils.jl:367 [inlined]
 [7] repl_search(::Base.TTY, ::String) at .\docs\utils.jl:124

Very helpful package otherwise :)

Efficient Pareto frontier

At the moment EpsBoxArchive type implementing Pareto frontier stores it as a simple Vector{}-based container with O(M) linear search.
Once the number of points (especially for N-objective problems) gets considerable (~5000), the overhead of maintaining the frontier (checking if candidate solution is dominated by the solutions from the frontier/removing frontier points dominated by the new candidate) becomes too high. Moreover, since the frontier is not parallelized, it becomes a real bottleneck for parallel BBO optimization.

For 2-objective problems we can solve the issue by sorting the solutions using the 1st component.
For N>2 objective problems we need something more advanced.
Without doing much research on the state-of-the-art containers for Pareto frontiers, the obvious candidate is k-d tree (maybe it's even possible to use (N-1)-dimensional k-d trees as the frontier should be (N-1) manifold). NearestNeighbours.jl implements k-d trees, but these are the static ones; also I'm not sure whether it implements the range searches we need.

Easier way to use ftol

I'm using BlackBoxOptim to solve a set of non linear equalities. I'd like to ensure that BlackBoxOptim runs until the solution of the system is found with a certain tolerance. Would it be possible to do that using the exported API? For now, I think I need to define a bbsolve function

function bbsolve(func, parameters::BlackBoxOptim.Parameters = BlackBoxOptim.EMPTY_PARAMS; kwargs...)
    parameters = chain(convert(ParamsDict, parameters), convert(ParamsDict, kwargs))
    params = chain(BlackBoxOptim.DefaultParameters, parameters)
    ss = BlackBoxOptim.check_and_create_search_space(parameters)
    problem = FunctionBasedProblem(func, "<unknown>", params[:FitnessScheme], ss, 0.0)
    optimizer_func = chain(BlackBoxOptim.SingleObjectiveMethods, BlackBoxOptim.MultiObjectiveMethods)[params[:Method]]
    optimizer = optimizer_func(problem, params)
    ctrl = BlackBoxOptim.OptController(optimizer, problem, params)
    BlackBoxOptim.run!(ctrl)
end

The supplied fitness function does NOT return the expected fitness type

Hi!
Here is my problem. I have an algorithm that uses hyper parameters tau0 and tau1. I'd like to find which of these combinations of parameters makes my algorithm the most efficient. Here is the function I created:

function optim_tau(T)
  solver(nlp, τ₀ = T[1], τ₁ = T[2])
  return nlp.counters.neval_obj + nlp.counters.neval_grad + nlp.counters.neval_hprod + nlp.counters.neval_hprod
end

nlp is an optimization problem, and the return value is the number of calls to the objective function, the gradient, etc. So I want to minimize that.

But if I do: res = bboptimize(optim_tau, SearchRange = [(0.0, 0.5), (0.5, 1.0)])
I get the error: ERROR: ArgumentError: The supplied fitness function does NOT return the expected fitness type when called with a potential solution (when called with [0.0906341, 0.529315] it returned 162) so we cannot optimize it! I am not fully sure what this error means...

Is it possible to use the package to do what I want?

Thanks!
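The error most likely means the fitness function returns an Int while the default scalar fitness scheme expects a Float64. A minimal, self-contained sketch of the problem and a fix (illustrative functions only, not the user's solver):

int_fitness(x)   = round(Int, 100 * sum(abs2, x))  # returns an Int, which triggers the error above
float_fitness(x) = Float64(int_fitness(x))         # returning a Float64 is accepted

res = bboptimize(float_fitness; SearchRange = [(0.0, 0.5), (0.5, 1.0)])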

MathProgBase Interface

All of the JuliaOpt solvers (except surprisingly Optim.jl, but that should be remedied soon) have a MathProgBase interface. This makes it easy to swap between solvers in a code. However, currently one has to write a separate code just to use BlackBoxOptim.jl. Having a MathProgBase interface would fix this issue, and also make these methods accessible via JuMP.

Provide initial solution

Is there a way I could start the optimisation search from an existing solution?
I read the documentation but I can't seem to find the relevant information.
Many thanks.
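This is now supported by passing the solution(s) as the second positional argument, as described in the "Providing initial solution(s)" section of the README above; a minimal sketch:

good_guess = [1.0, 1.0]
res = bboptimize(rosenbrock2d, good_guess;
    SearchRange = (-5.0, 5.0), NumDimensions = 2, MaxTime = 0.5)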

Optimizing over arrays as parameters

I want to use BlackBoxOptim to optimize a treatment schedule in an ODE model. In my ODE model, I have two different treatment schedules to optimize across, and I can input an array as each of those two parameters. I can't input two arrays in a tuple to describe the bounds of the system; I get an error. When I try other ways to optimize over the treatment schedule, I just get one giant schedule back that doesn't distinguish between the treatments. How can I use BBOptim to optimize across my system?

Related project

Hi,

I just came across your project, which addresses a very similar problem to the one we have been trying to solve. We are also interested in derivative-free optimizers and we looked at parallel implementation from the start.

The project is here: MOpt.jl . We had previously coded a similar library in R.

We also had the desire of adding DE algorithms. I might have a look at your implementation and see if we can reuse some of it.

cheers,

@floswald @tlamadon

No length method for DictChain

Trying to use JLD to save the results directly gives this error.

I'm making a pull request where I just define the length of a DictChain as the length of its vector. I'm assuming it was only omitted because it wasn't needed, but is there anything I'm missing?

PSO

Is there a particle swarm optimization?

dxnes chkfinite bug for README compare_optimizers example

Today on Julia 0.7 when running the

res = compare_optimizers(rosenbrock; SearchRange = (-5.0, 5.0), NumDimensions = 30, MaxTime = 3.0);

example I got:

Starting optimization with optimizer BlackBoxOptim.DXNESOpt{Float64,RandomBound{RangePerDimSearchSpace}}
0.00 secs, 0 evals, 0 steps
σ=1.0 η[x]=1.0 η[σ]=0.0 η[B]=0.0 |tr(ln_B)|=1.3322676295501878e-14 |path|=NaN speed=NaN
ERROR: ArgumentError: matrix contains Infs or NaNs
Stacktrace:
 [1] chkfinite at /Users/osx/buildbot/slave/package_osx64/build/usr/share/julia/stdlib/v0.7/LinearAlgebra/src/lapack.jl:92 [inlined]
 [2] gebal!(::Char, ::Array{Float64,2}) at /Users/osx/buildbot/slave/package_osx64/build/usr/share/julia/stdlib/v0.7/LinearAlgebra/src/lapack.jl:204
 [3] exp!(::Array{Float64,2}) at /Users/osx/buildbot/slave/package_osx64/build/usr/share/julia/stdlib/v0.7/LinearAlgebra/src/dense.jl:499
 [4] exp at /Users/osx/buildbot/slave/package_osx64/build/usr/share/julia/stdlib/v0.7/LinearAlgebra/src/dense.jl:489 [inlined]
 [5] update_candidates!(::BlackBoxOptim.DXNESOpt{Float64,RandomBound{RangePerDimSearchSpace}}, ::Array{Float64,2}) at /Users/feldt/.julia/dev/BlackBoxOptim/src/natural_evolution_strategies.jl:151
 [6] ask(::BlackBoxOptim.DXNESOpt{Float64,RandomBound{RangePerDimSearchSpace}}) at /Users/feldt/.julia/dev/BlackBoxOptim/src/dx_nes.jl:101
 [7] step! at /Users/feldt/.julia/dev/BlackBoxOptim/src/opt_controller.jl:242 [inlined]
 [8] run!(::BlackBoxOptim.OptRunController{BlackBoxOptim.DXNESOpt{Float64,RandomBound{RangePerDimSearchSpace}},BlackBoxOptim.ProblemEvaluator{Float64,Float64,TopListArchive{Float64,ScalarFitnessScheme{true}},FunctionBasedProblem{ScalarFitnessScheme{true},RangePerDimSearchSpace,Nothing}}}) at /Users/feldt/.julia/dev/BlackBoxOptim/src/opt_controller.jl:285
 [9] run!(::BlackBoxOptim.OptController{BlackBoxOptim.DXNESOpt{Float64,RandomBound{RangePerDimSearchSpace}},FunctionBasedProblem{ScalarFitnessScheme{true},RangePerDimSearchSpace,Nothing}}) at /Users/feldt/.julia/dev/BlackBoxOptim/src/opt_controller.jl:424
 [10] #bboptimize#72(::Base.Iterators.Pairs{Symbol,Symbol,Tuple{Symbol},NamedTuple{(:Method,),Tuple{Symbol}}}, ::Function, ::Function, ::DictChain{Symbol,Any}) at /Users/feldt/.julia/dev/BlackBoxOptim/src/bboptimize.jl:70
 [11] #bboptimize at ./none:0 [inlined]
 [12] #compare_optimizers#74(::Array{Symbol,1}, ::Base.Iterators.Pairs{Symbol,Any,Tuple{Symbol,Symbol,Symbol},NamedTuple{(:SearchRange, :NumDimensions, :MaxTime),Tuple{Tuple{Float64,Float64},Int64,Float64}}}, ::Function, ::Function, ::Dict{Symbol,Any}) at /Users/feldt/.julia/dev/BlackBoxOptim/src/compare_optimizers.jl:9
 [13] (::getfield(BlackBoxOptim, Symbol("#kw##compare_optimizers")))(::NamedTuple{(:SearchRange, :NumDimensions, :MaxTime),Tuple{Tuple{Float64,Float64},Int64,Float64}}, ::typeof(compare_optimizers), ::Function) at ./none:0
 [14] top-level scope at none:0

Until we look into this within dxnes maybe we should make sure compare_optimizers catches exceptions so that it can still continue running? Ideas @alyst ?

Tests fail on v0.5

 Pkg.add("BlackBoxOptim"); Pkg.checkout("BlackBoxOptim"); Pkg.test("BlackBoxOptim")

gives

ERROR: LoadError: FactCheck finished with 2 non-successful tests.
 in exitstatus() at /Users/ranjan/.julia/v0.5/FactCheck/src/FactCheck.jl:568
 in include_from_node1(::String) at ./loading.jl:426
 in process_options(::Base.JLOptions) at ./client.jl:262
 in _start() at ./client.jl:318
while loading /Users/ranjan/.julia/v0.5/BlackBoxOptim/test/runtests.jl, in expression starting on line 44

Getting ready for Julia 0.6?

We are using BlackBoxOptim to optimize our cost function in DiffEqParamEstim.jl, and we are trying to update our code to be compatible with the latest Julia version, 0.6+.
Though the master supports Julia 0.6+, it throws lots of warnings when I import the library for the first time.

examples are outdated

I want to use restarting_optimization.jl from the examples directory, but the examples are not compatible with the current version of this package.

CMA-ES

Are there any plans to include the popular cma-es solver?
I am particularly interested in the option with noisy objectives...

Example in documentation does not work

Hello everyone!

I am trying to get to grips with the package. Unfortunately, I am already having trouble at the very beginning. I am running Julia 0.5.0 on my machine. When I try the first example in the documentation, that is:

using BlackBoxOptim

function rosenbrock2d(x)
  return (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
end

bboptimize(rosenbrock2d; SearchRange = (-5.0, 5.0), NumDimensions = 2)

I get the following error in Julia:

ERROR: BoundsError: attempt to access 0-element Array{BlackBoxOptim.TopListIndividual{Float64},1} at index [1]
 in show_report(::BlackBoxOptim.OptRunController{BlackBoxOptim.DiffEvoOpt{BlackBoxOptim.FitPopulation{Float64},BlackBoxOptim.RadiusLimitedSelector,BlackBoxOptim.AdaptiveDiffEvoRandBin{3},BlackBoxOptim.RandomBound{BlackBoxOptim.RangePerDimSearchSpace}},BlackBoxOptim.ProblemEvaluator{Float64,Float64,BlackBoxOptim.TopListArchive{Float64,BlackBoxOptim.ScalarFitnessScheme{true}},BlackBoxOptim.FunctionBasedProblem{BlackBoxOptim.ScalarFitnessScheme{true},BlackBoxOptim.RangePerDimSearchSpace,Void}}}, ::Bool) at /lhome/lgiannins/.julia/v0.5/BlackBoxOptim/src/opt_controller.jl:312
 in show_report(::BlackBoxOptim.OptRunController{BlackBoxOptim.DiffEvoOpt{BlackBoxOptim.FitPopulation{Float64},BlackBoxOptim.RadiusLimitedSelector,BlackBoxOptim.AdaptiveDiffEvoRandBin{3},BlackBoxOptim.RandomBound{BlackBoxOptim.RangePerDimSearchSpace}},BlackBoxOptim.ProblemEvaluator{Float64,Float64,BlackBoxOptim.TopListArchive{Float64,BlackBoxOptim.ScalarFitnessScheme{true}},BlackBoxOptim.FunctionBasedProblem{BlackBoxOptim.ScalarFitnessScheme{true},BlackBoxOptim.RangePerDimSearchSpace,Void}}}) at /lhome/lgiannins/.julia/v0.5/BlackBoxOptim/src/opt_controller.jl:300
 in run!(::BlackBoxOptim.OptController{BlackBoxOptim.DiffEvoOpt{BlackBoxOptim.FitPopulation{Float64},BlackBoxOptim.RadiusLimitedSelector,BlackBoxOptim.AdaptiveDiffEvoRandBin{3},BlackBoxOptim.RandomBound{BlackBoxOptim.RangePerDimSearchSpace}},BlackBoxOptim.FunctionBasedProblem{BlackBoxOptim.ScalarFitnessScheme{true},BlackBoxOptim.RangePerDimSearchSpace,Void}}) at /lhome/lgiannins/.julia/v0.5/BlackBoxOptim/src/opt_controller.jl:412
 in #bboptimize#75(::Array{Any,1}, ::Function, ::Function, ::Dict{Symbol,Any}) at /lhome/lgiannins/.julia/v0.5/BlackBoxOptim/src/bboptimize.jl:74
 in (::BlackBoxOptim.#kw##bboptimize)(::Array{Any,1}, ::BlackBoxOptim.#bboptimize, ::Function, ::Dict{Symbol,Any}) at ./<missing>:0 (repeats 2 times)

I also tried optimising my own functions but I get the same error message.

Any ideas?

Thanks in advance!

Compatibility for v1.0

How much work might it be to make this compatible with Julia v1.0? I'm happy to help if needed.

JuliaOpt

Hi Robert,

I know we talked about this on the julia-users mailing list a while ago, but didn't take any action on it: add BlackBoxOptim to JuliaOpt.

The package looks like its in pretty great shape, including tests, documentation, and examples.

Moving it into JuliaOpt isn't too complicated:

  • first, I give you admin rights to JuliaOpt temporarily
  • next, you'll move the package into JuliaOpt
  • I'll drop your admin rights, and create a "group" for BlackBoxOptim that'll initially be just you, but you can add others.
  • We can then tag 0.0.1 in METADATA, and I'll update the juliaopt.org website.

I think it'd be cool if your package added TravisCI testing, but I can do that after the move. What do you think?

(@mlubin am I missing anything?)

JuMP interface?

Hi, nice work on this package. I see in the README that you're interested in making an interface to JuMP. I'm not too familiar with global black-box optimization, but JuMP is focused (so far) on constrained optimization for problems given by closed-form expressions of well-defined classes like LP, QP, SOCP, MIP, etc. That's not to say that we're not interested in making it more general, just that these are the problems we're most familiar with and have had time to implement.

What would algebraic modeling for black-box problems look like? It seems, naively, that a modeling language may not be so useful if the only input to the algorithm is a black-box julia function and box constraints.

CC: @IainNZ

Problems installing blackboxoptim in Windows: URGENT SUGGESTIONS NEEDED

Hi everyone

I have tried installing blackboxoptim on Windows but I am having some trouble

I followed the instructions as stated but this is the message I get

INFO: Cloning BlackBoxOptim from https://github.com/robertfeldt/BlackBoxOptim.jl
INFO: Computing changes...
INFO: No packages to install, update or remove

When I write using BlackBoxOptim and give it my function, I get the message bboptimize not defined.

What should be my next steps?

Running BlackBoxOptim on a Cluster

Hello,

I am working with BlackBoxOptim on a slurm cluster, using ClusterManager to manage the additional processes. Unfortunately BlackBoxOptim does not run the distance function on the additional nodes.

In this situation, the distance function is only run on the processes on the same node as the master. Do you know if there's any way to fix this? A small sample code is below, containing everything but the distance function. Thank you!


using ClusterManagers
nnodes = 3
ncores = 16
np = nnodes * ncores
@time addprocs(SlurmManager(np), t="00:30:00")

@everywhere using BlackBoxOptim
opt2 = bbsetup(distance_fcn; Method=:dxnes, SearchRange = collect(values(EST_variables)),
              NumDimensions = 2, MaxFuncEvals = 3000, Workers = workers())

Make BlackBoxOptim compatible with Julia 0.5 and above

Hello,
We want to provide support of BlackBoxOptim for global optimization in DiffEqParamEstim.jl, but it is throwing lots of warnings in Julia 0.5 and also an error for the 30-dimensional Rosenbrock function (ERROR: MethodError: no method matching calculate_evol_path_params(::Int32, ::Array{Float64,1})). Is there any possibility of making it compatible with Julia v0.5 and above?

cc - @ChrisRackauckas

Differential Evolution optimizers take a long time to "finalize"

I have encountered the following problem: if I optimize my problem with a variation of a differential evolution algorithm, after it has finished (MaxTime = 60 sec has run out) it takes a long time (~20 min?) until the command is actually finished. There is no more output (the final solution was already printed) but Julia is still busy and using the CPU.

  • I am using a more or less fresh Julia 0.7 installation in Juno on Windows 7 with BlackBoxOptim on master.
  • I encountered this with de_rand_1_bin and adaptive_de_rand_1_bin_radiuslimited.
  • The generated solutions are fine.
  • Any of the other algorithms that I tested (*-nes, generating_set_search, probabilistic_descent, resampling_memetic_search, SPSA) don't show this behavior.
  • I wasn't able to reproduce this problem on a different machine using the standard rosenbrock function.

Unfortunately the problem occurred on my work computer where I (currently) don't have access to my github account to provide more details right off the bat. If it helps I can try to provide the (lengthy) code that produced this problem. Hopefully the description is enough to give a hint, but I will try to assist to the best of my abilities although I'm not sure how far I can help in terms of debugging.
