Comments (2)
Excellent explanation, thank you!
For the record, in case anyone has the same or a similar question, I ran the following simple code:
```python
import math

import optuna

def objective(trial):
    # Two objectives to minimize, so NSGA-II is used.
    x = trial.suggest_int("x", 2, 150)
    y = trial.suggest_int("y", 2, 100)
    score1 = x**2 - y**2
    score2 = math.sin(y)
    return score1, score2

sampler = optuna.samplers.NSGAIISampler(population_size=50)
study = optuna.create_study(
    study_name="studyTest",
    directions=["minimize", "minimize"],
    sampler=sampler,
)
study.optimize(objective, n_trials=100)

# Export all trials for inspection.
df = study.trials_dataframe()
df.to_csv("test.csv", sep=";", index=False)
```
In `test.csv` you can find the information that @toshihikoyanase put in the table above, and it is possible to see the relationship between the number of generations and the number of trials that he explained. Thank you for the extra bonus, namely the note about "two optimization processes".
from optuna-examples.
Thank you for your question.
By default `population_size=50`, but if I then set `n_trials=10`, the algorithm behaves like a random search.
You're right. If all trials finish successfully, the relationship between trials and generations is as follows:

| Trials | Generation | Sampler |
|---|---|---|
| [0, 49] | 0 | `NSGAIISampler` calls `RandomSampler` internally. |
| [50, 99] | 1 | `NSGAIISampler` |
| [100, 149] | 2 | `NSGAIISampler` |
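Since each generation holds exactly `population_size` trials, the mapping in the table above can be sketched as plain integer division (assuming `population_size=50` and that every trial finishes successfully; `generation_of` is just an illustrative helper, not an Optuna API):

```python
# Map a trial number to its NSGA-II generation, assuming population_size=50
# and no failed trials. Generation 0 is sampled randomly.
POPULATION_SIZE = 50

def generation_of(trial_number: int) -> int:
    return trial_number // POPULATION_SIZE

print(generation_of(0), generation_of(49))    # both in generation 0
print(generation_of(50), generation_of(149))  # generations 1 and 2
```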
Where is this number specified?
Please use the `n_trials` argument of `Study.optimize`. If you want G generations with P individuals in the population, set `n_trials = P * G`. Optuna chose this design so that users can parallelize the optimization: if you launch two optimization processes, set `n_trials = P * G / 2` in each.
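The arithmetic above can be sketched as a small helper (the function name and signature are illustrative, not part of Optuna):

```python
# Compute n_trials per process for G generations of P individuals,
# optionally split evenly across several parallel optimization processes.
def trials_per_process(population: int, generations: int, n_processes: int = 1) -> int:
    total = population * generations  # n_trials = P * G in a single process
    return total // n_processes       # each process runs its share

print(trials_per_process(50, 3))      # 150 trials in one process
print(trials_per_process(50, 3, 2))   # 75 trials in each of two processes
```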