
opytimizer's Introduction

Opytimizer: A Nature-Inspired Python Optimizer


Welcome to Opytimizer.

Did you ever reach a bottleneck in your computational experiments? Are you tired of selecting suitable parameters for a chosen technique? If yes, Opytimizer is the real deal! This package provides an easy-to-use implementation of meta-heuristic optimization algorithms. From agents to search spaces, from internal functions to external communication, we will foster all research related to optimizing stuff.

Use Opytimizer if you need a library or wish to:

  • Create your optimization algorithm;
  • Design or use pre-loaded optimization tasks;
  • Mix-and-match different strategies to solve your problem;
  • Because it is fun to optimize things.

Read the docs at opytimizer.readthedocs.io.

Opytimizer is compatible with: Python 3.6+.


Package guidelines

  1. The very first information you need is in the very next section.
  2. Installing is also easy; if you wish to read the code and dig into it, follow along.
  3. Note that there might be some additional steps in order to use our solutions.
  4. If there is a problem, please do not hesitate to call us.
  5. Finally, we focus on minimization. Keep that in mind when designing your problem.

Citation

If you use Opytimizer to fulfill any of your needs, please cite us:

@misc{rosa2019opytimizer,
    title={Opytimizer: A Nature-Inspired Python Optimizer},
    author={Gustavo H. de Rosa and Douglas Rodrigues and João P. Papa},
    year={2019},
    eprint={1912.13002},
    archivePrefix={arXiv},
    primaryClass={cs.NE}
}

Getting started: 60 seconds with Opytimizer

First of all, we have examples. Yes, they are commented. Just browse to examples/, choose your subpackage, and follow the example. We have high-level examples for most tasks we could think of, as well as amazing integrations (Learnergy, NALP, OPFython, PyTorch, Scikit-Learn, TensorFlow).

Alternatively, if you wish to learn even more, please take a minute:

Opytimizer is based on the following structure, and you should pay attention to its tree:

- opytimizer
    - core
        - agent
        - block
        - cell
        - function
        - node
        - optimizer
        - space
    - functions
        - constrained
        - multi_objective
    - math
        - distribution
        - general
        - hyper
        - random
    - optimizers
        - boolean
        - evolutionary
        - misc
        - population
        - science
        - social
        - swarm
    - spaces
        - boolean
        - graph
        - grid
        - hyper_complex
        - pareto
        - search
        - tree
    - utils
        - callback
        - constant
        - exception
        - history
        - logging
    - visualization
        - convergence
        - surface

Core

Core is the core. Essentially, it is the parent of everything. Here you will find the parent classes that define the basis of our structure, providing the variables and methods that help to construct the other modules.

Functions

Instead of using raw and straightforward functions, why not try this module? Compose high-level abstract functions or even new function-based ideas to solve your problems. Note that, for now, we only support constrained and multi-objective function strategies.

Math

Just because we are computing stuff does not mean that we do not need math. Math is the mathematical package, containing low-level math implementations. From random numbers to distribution generation, you can find what you need in this module.

Optimizers

This is why we are called Opytimizer. This is the heart of heuristics, where you can find a large number of meta-heuristics, optimization techniques, anything that can be called an optimizer. Please take a look at the available optimizers.

Spaces

One can see the space as the place where agents update their positions and evaluate a fitness function. However, the newest approaches may consider different types of spaces. With that in mind, we are glad to support diverse space implementations.

Utils

This is a utility package. Common things shared across the application should be implemented here. It is better to implement once and use it as you wish than to re-implement the same thing repeatedly.

Visualization

Everyone needs images and plots to help visualize what is happening, correct? This package provides every visual-related method for you: check a specific variable's convergence, your fitness function's convergence, plot benchmark function surfaces, and much more!


Installation

We believe that everything has to be easy. Not tricky or daunting, Opytimizer will be the go-to package that you will need, from the first installation to your daily implementation needs. Just run the following under your preferred Python environment (raw, conda, virtualenv, whatever):

pip install opytimizer

Alternatively, if you prefer to install the bleeding-edge version, please clone this repository and use:

pip install -e .

Environment configuration

Note that, sometimes, additional setup is needed. If so, the sections below will give you all the details.

Ubuntu

No specific additional commands are needed.

Windows

No specific additional commands are needed.

MacOS

No specific additional commands are needed.


How-To-Use: Minimal Example

Take a look at a quick working example of Opytimizer. Note that we are not passing many extra arguments nor additional information to the procedure. For more complex examples, please check our examples/ folder.

import numpy as np

from opytimizer import Opytimizer
from opytimizer.core import Function
from opytimizer.optimizers.swarm import PSO
from opytimizer.spaces import SearchSpace

# Objective function to be minimized
def sphere(x):
    return np.sum(x ** 2)

# Number of agents, number of decision variables and their bounds
n_agents = 20
n_variables = 2
lower_bound = [-10, -10]
upper_bound = [10, 10]

# Creates the space, optimizer and objective function wrapper
space = SearchSpace(n_agents, n_variables, lower_bound, upper_bound)
optimizer = PSO()
function = Function(sphere)

# Bundles everything together and runs the optimization for 1000 iterations
opt = Opytimizer(space, optimizer, function)
opt.start(n_iterations=1000)
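
Once the task finishes, you will probably want to inspect the result. Below is a minimal follow-up sketch, assuming the space keeps a best_agent with position and fit attributes (these attribute names are an assumption, not something shown in the example above):

# Hedged sketch: reading the result after opt.start()
# (best_agent, .position and .fit are assumed attribute names)
best = opt.space.best_agent
print('Best position:', best.position)
print('Best fitness:', best.fit)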

Support

We know that we do our best, but it is inevitable to acknowledge that we make mistakes. If you ever need to report a bug or a problem, or just want to talk to us, please do so! We will be available at our best in this repository.


opytimizer's People

Contributors

douglasrodrigues, gugarosa, lzfelix, mpariente


opytimizer's Issues

[BUG]

I just cannot find the optimization process anywhere except in the log. I want to know which API to use to inspect the optimization process.

positions

I could export 'positions' in the old version, but I cannot export 'positions' in the new version. Could you help me?

[REG] How to get a detailed print out during optimization?

Greetings,

My function takes time to evaluate, and I want to closely monitor what happens during the optimization process. When using a grid search space and printing the fitness from my function, I could see the movement along the grid. But when using the normal search space, my processor utilization is 100% and it takes hours with no print out.
So, how do I get more details? Is there something like a degree of verbosity?

Thanks in advance.

[NEW] Constrained optimization

Hi, thanks for the work.

Any plans to add functionality to define constraints for the optimization? For instance, inequalities or any arbitrary non-linear constraints on the inputs and/or on the outputs?

[REG] How to suppress DEBUG log messages in opytimizer.core.space

Hello,
After initializing SearchSpace, there is a debug message that is printed to stdout. How can we turn it off/on?
Following is the message.
opytimizer.core.space — DEBUG — Agents: 25 |....

I believe it's printed because of line #223 in the file opytimizer/core/space.py.

For large dimensions, it prints all the lower and upper bounds, which we may not always require.

Thanks.
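
A possible workaround until this is configurable: since the message carries the logger name opytimizer.core.space, raising that logger's level through Python's standard logging module should hide DEBUG output. This is only a sketch under the assumption that the library relies on the standard logging hierarchy:

import logging

# Hide DEBUG messages from the space module (logger name taken from the
# message above); use logging.WARNING to hide INFO messages as well
logging.getLogger('opytimizer.core.space').setLevel(logging.INFO)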

[BUG] AttributeError: 'History' object has no attribute 'show'

Describe the bug

It looks like there is no show() method for the returned opytimizer history.

To Reproduce
Steps to reproduce the behavior:

  1. Follow the steps from the wiki Tutorial: Your first optimization
    1. Run the optimizer with o.start():
      o = Opytimizer(space=s, optimizer=p, function=f)
      history = o.start()
    2. Show the history:
      history.show()

Expected behavior
Not sure what I expected, just curious :)

Screenshots

2020-01-02 13:26:28,270 - opytimizer.optimizers.fa — INFO — Iteration 1000/1000
2020-01-02 13:26:28,278 - opytimizer.optimizers.fa — INFO — Fitness: 4.077875713322641e-14
2020-01-02 13:26:28,279 - opytimizer.optimizers.fa — INFO — Position: [[-1.42791381e-07]
 [-1.42791381e-07]]
2020-01-02 13:26:28,279 - opytimizer.opytimizer — INFO — Optimization task ended.
2020-01-02 13:26:28,279 - opytimizer.opytimizer — INFO — It took 7.672951936721802 seconds.
>>> history.show()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'History' object has no attribute 'show'
>>> history
<opytimizer.utils.history.History object at 0x113b75be0>
>>> history.show()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'History' object has no attribute 'show'

Desktop (please complete the following information):

  • OS: macOS Mojave 10.14.6
  • Virtual Environment: conda base
  • Python Version:
(base) justinmai$ python3
Python 3.7.3 (default, Mar 27 2019, 16:54:48) 
[Clang 4.0.1 (tags/RELEASE_401/final)] :: Anaconda, Inc. on darwin


[NEW] Different number of step for each variable

Is your feature request related to a problem? Please describe.
It's not

Describe the solution you'd like
I'd like a different step size for each variable.

Additional context
Sometimes variables are less sensitive than others. Some are integers. Is there any way to use a different step for each one?

[REG] Strange testing pattern

Hi, while browsing through the tests I found the following construction several times. Can you please explain the rationale behind it?

try:
    new_optimizer.hyperparams = 1  # (1)
except:
    new_optimizer.hyperparams = {
        'w': 1.5
    }

If you're executing (1) hoping for it to fail, then that should be asserted as well.

Thanks,
Felix
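
For reference, a sketch of the asserted alternative suggested above, written with pytest; the exact exception type raised by the hyperparams setter is an assumption (hence the broad pytest.raises), and new_optimizer is assumed to be provided by a fixture:

import pytest

def test_hyperparams_rejects_invalid_value(new_optimizer):
    # The setter is expected to reject a non-dict value...
    with pytest.raises(Exception):
        new_optimizer.hyperparams = 1

    # ...and to accept a proper dictionary of hyperparameters
    new_optimizer.hyperparams = {'w': 1.5}
    assert new_optimizer.hyperparams['w'] == 1.5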

[REG] How to plot a convergence diagram?

Hello,
I was looking for a convergence diagram and found an example of using the convergence function in opytimizer/examples/visualization/convergence_plotting.py. However, it uses a few constant values for the agent positions. Is there a convergence example that shows how to use this function with an actual optimization problem, such as after carrying out PSO?

Thanks,
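
A library-agnostic workaround in the meantime: record the best fitness seen so far from inside the objective function and plot it with matplotlib once the run finishes. This is only a sketch and does not use Opytimizer's own convergence utilities:

import matplotlib.pyplot as plt
import numpy as np

best_so_far = []

def tracked_sphere(x):
    # Same sphere function as in the minimal example, but it also records
    # the running best fitness at every evaluation
    fit = np.sum(x ** 2)
    best_so_far.append(min(fit, best_so_far[-1]) if best_so_far else fit)
    return fit

# ... run Opytimizer with Function(tracked_sphere), as in the minimal example ...

plt.plot(best_so_far)
plt.xlabel('Function evaluation')
plt.ylabel('Best fitness so far')
plt.show()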

[NEW] Define objective function for regression problem

Hi there,

I attempted to define an objective function (using a CatBoost model on my data) to solve a minimization problem in a regression task, however I failed to create the new objective function.
So, does your package offer a solution for regression, and can we define such an objective function in this case?

My desired objective function is something like this, to minimize the MSE:

from catboost import CatBoostRegressor as cbr
from sklearn.metrics import mean_squared_error

cbr_model = cbr()

def objective_function(cbr_model, X_train3, y_train3, X_test3, y_test3):
    cbr_model.fit(X_train3, y_train3)
    mse = mean_squared_error(y_test3, cbr_model.predict(X_test3))
    return mse

Many thanks,
Thang
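
For what it is worth, a common way to adapt such a model-based objective to the single-argument signature used in the minimal example (f(x), where x is the agent's position) is to capture the data in a closure and map x to the hyperparameters being tuned. The sketch below assumes x arrives as an array of shape (n_variables, 1), as the position logs elsewhere on this page suggest; the chosen CatBoost hyperparameters are purely illustrative:

from catboost import CatBoostRegressor
from sklearn.metrics import mean_squared_error

def make_objective(X_train, y_train, X_test, y_test):
    def objective(x):
        # Map the agent's position to hyperparameters (illustrative choice)
        depth = int(round(float(x[0][0])))
        learning_rate = float(x[1][0])
        model = CatBoostRegressor(depth=depth, learning_rate=learning_rate, verbose=0)
        model.fit(X_train, y_train)
        # Opytimizer minimizes, so the MSE can be returned directly
        return mean_squared_error(y_test, model.predict(X_test))
    return objective

# function = Function(make_objective(X_train3, y_train3, X_test3, y_test3))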

[NEW] Using population data for population-based algorithms?

Hello,
First of all, thank you for sharing such a fantastic repo that can be used as an off-the-shelf collection of meta-heuristic optimization algorithms!

I have a question regarding how to use my own population data with the optimizers in Opytimizer. Rather than using a SearchSpace with predetermined upper/lower bounds, is there any way I can use my own population samples to start the optimization from?

Thank you, hope you have a wonderful day!
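
One possible workaround, sketched under the assumption that the space exposes an agents list whose items carry a position array of shape (n_variables, 1); this is not an official API and the attribute names are assumptions:

import numpy as np

# Hypothetical pre-computed population: 20 samples with 2 variables each
my_population = np.random.uniform(-10, 10, size=(20, 2))

# Overwrite the randomly initialized positions before calling opt.start()
for agent, sample in zip(space.agents, my_population):
    agent.position = sample.reshape(-1, 1)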

[NEW]

Dear author,
Hello, I want to use chaotic mapping to improve the initial population of an intelligent evolutionary algorithm, but I cannot get started. Do you have any suggestions?
I look forward to your suggestions.

GA roulette selection method [REG]

Pre-checkings

  • [x] Check that you are up-to-date with the master branch of Opytimizer. You can update with:
    pip install git+git://github.com/gugarosa/opytimizer.git --upgrade --no-deps

  • [x] Check that you have read all of our README.

Description

First, thanks for the great framework you are providing; it's exactly what I was looking for! I just started using the genetic algorithm when I stumbled across the roulette selection method. To my understanding, the method _roulette_selection implemented in opytimizer.optimizers.evolutionary.ga.py is designed for a maximization problem, with:

# Calculates the total fitness
total_fitness = np.sum(fitness)

# Calculates the probability of each fitness
probs = [fit / total_fitness for fit in fitness]

# Performs the selection process
selected = d.generate_choice_distribution(n_agents, probs, n_individuals)

If I use my minimization problem with this optimizer, a lower fitness value would correspond to a lower probability of being selected as a parent, correct? So the method should be:


total_fitness = np.sum(1-np.array(fitness))

probs = [(1-fit) / total_fitness for fit in fitness]

Tell me if I've overlooked something.
Kind regards,
Martin
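
For reference, a common minimization-friendly variant (not the library's implementation, and slightly different from the 1 - fit proposal above, which can yield negative probabilities when fitness values exceed 1) shifts the fitness so that lower values receive higher selection probability. A sketch:

import numpy as np

def minimization_probs(fitness, eps=1e-10):
    fitness = np.asarray(fitness, dtype=float)
    # Shift so the worst (largest) fitness gets ~0 probability and the best gets the most
    shifted = fitness.max() - fitness + eps
    return shifted / shifted.sum()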

[REG] How to get best agent values?

Hello,
I am following the tutorial.
After running opt.start(), how do we get the optimized parameters? The opt object has a history with best_agent attributes; is there a way to directly get the best parameters?

Also, by default, does it perform minimization or maximization?

[REG] What is the difference between the grid space and the discrete space example?

Greetings,

I have a mixed search space problem of five dimensions, 4 discrete and one continuous. How do I implement a discrete search space with these dimensions, where the increment won't be the same for each? I found that the grid space offers me the flexibility I need, yet I noticed you used a different implementation in the discrete space example, so which one should I use?

My search space:

step = [1, 1, 1, 1, 0.1]
lower_bound = [16, 3, 2, 0, 0]
upper_bound = [400, 20, 20, 1, 0.33]

Thanks in advance.
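
For context, a sketch of how the grid space from the README tree might be instantiated with the per-variable steps above; the GridSpace class name, import path and argument order are assumptions and should be checked against the examples folder:

from opytimizer.spaces import GridSpace  # assumed import path and class name

step = [1, 1, 1, 1, 0.1]
lower_bound = [16, 3, 2, 0, 0]
upper_bound = [400, 20, 20, 1, 0.33]

# Assumed signature: number of variables, per-variable step and bounds
space = GridSpace(5, step, lower_bound, upper_bound)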

[NEW] Dump optimization progress

Is your feature request related to a problem? Please describe.
If the server running the job fails, we lose the time already spent running.

Describe the solution you'd like
Dump the optimization object from time to time.

Describe alternatives you've considered
Maybe dump the agents?
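
A stop-gap sketch using only the standard library: periodically pickle whatever state matters (for instance, the agents), so a crashed run can at least be inspected or re-seeded; space.agents is an assumed attribute name:

import pickle

# Call this from whatever loop or callback drives the experiment
with open('opytimizer_checkpoint.pkl', 'wb') as f:
    pickle.dump(space.agents, f)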

[BUG] Gaussian multiplicative noise may lead to unexpected behaviours on GA

Describe the bug

Hello, I noticed that mutation in the GA algorithm is performed by multiplying a given agent coordinate by a value sampled from a Gaussian distribution with mean 0 and standard deviation 1. Under this configuration, it is very likely that the sampled coefficient will be zero or very close to it, and since x * y -> 0 when y ~ 0, one of the agent's coordinates will be replaced by (nearly) zero, which is also the optimal point for several benchmark functions.

Consequently, after multiple mutations, an agent may have several of its coordinates set to zero, and the remainder of the algorithm will just fine-tune the position found by the Gaussian noise towards the function's optimum. Namely, such a mutation strategy ends up pushing all agents towards the origin of the space, which happens to be the function's optimal point.

This seems to be an advantage only because the "gravity point" created by the Gaussian multiplicative noise coincides with the function's optimum. However, if the function's optimum is translated by a given constant c, GA fails to find it, since the optimal point also changes from 0 to c.

Let's consider the following configurations to illustrate the aforementioned arguments:

Setting 1

- n_agents = 20
- n_iterations = 50
- target_function: s(x) = np.power(x, 2).sum(), with x* = 0 and s(x*) = 0;
  - lower_bound = -10
  - upper_bound = 10

Setting 2

- n_agents = 20
- n_iterations = 50
- target_function: s'(x) = np.power(x - 2, 2).sum(), with x* = 2 and s'(x*) = 0
  - lower_bound = -8
  - upper_bound = 12

Notice that the second setting changes the lower and upper bound values to keep the same search amplitude as in the first configuration. Moreover, let's consider two variations of the GA algorithm: one using multiplicative Gaussian noise for mutation (as in the current implementation) and another (say, GA') that performs mutation by replacing a given agent coordinate with a random number uniformly sampled from the interval [lower_bound, upper_bound]. By combining both algorithms with each setting and running the optimization 40 times for each scenario, the following results are obtained:

Scenario Algorithm Function mean_fitness ± stddev_fitness
1 GA s(x) 0.0000 ± 0.0000
2 GA s'(x) 9.2481 ± 2.7630
3 GA' (modified) s(x) 5.9511 ± 2.6698
4 GA' (modified) s'(x) 5.1678 ± 1.7503

Results show that, by using Gaussian multiplicative noise, GA finds the exact optimal point for s(x) in all cases (the standard deviation is zero in Scenario 1). However, when the optimal point is translated, the algorithm fails to find it (Scenario 2). Furthermore, by replacing the mutation strategy with one that has no preference for any point in the space, the results obtained on the original sphere function (Scenario 3) and on the translated function (Scenario 4) are very close.

Comparing the results of the 40 runs between Scenarios 3 and 4 through the Wilcoxon signed-rank test yields a p-value > 0.41, meaning both sets of results are very likely to come from the same distribution. Conversely, comparing Scenarios 1 and 2 with the same methodology yields a p-value < 3e-8, meaning the difference in results is statistically significant.

Last but not least, notice that values sampled from a Gaussian distribution with mean=0 and stddev=1 are negative half of the time, meaning that optimizing a function that has a positive lower bound will push agent coordinates towards this value half of the time.
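
A quick back-of-the-envelope check of the shrinking effect described above (just a sketch, not part of the original report): multiplying coordinates by N(0, 1) noise scales their average magnitude by E[|N(0, 1)|] = sqrt(2/pi) ≈ 0.8, so repeated multiplicative mutations drift coordinates towards zero.

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-10, 10, size=1_000_000)
mutated = x * rng.normal(0.0, 1.0, size=x.shape)

# The mean absolute value drops by roughly 20% after a single multiplicative mutation
print(np.mean(np.abs(x)), np.mean(np.abs(mutated)))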

Desktop (please complete the following information):

  • OS: Linux
  • Virtual Environment: conda
  • Python Version 3.7
  • Lib version: latest

[REG] Optimization time problem

Hello, author, why does it take 5 hours to optimize SVR's hyperparameters with PSO and AOA? Am I doing something wrong?
