Comments (7)
Hello Andreas, thanks for your interest.
In the context of TrialsOfExperimentResults, a setup maps to an experiment whose trial learning curves are shown in one of the subplots. In the example, we have the setups ASHA and HYPERTUNE-INDEP, which are two different HPO methods. We get two subplots and can compare how they differ.
You can also plot these results for a single experiment (one method, one seed). In the example above, to plot results for ASHA only, just use SETUPS_TO_COMPARE = ("ASHA",).
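To illustrate the mapping (a plain-Python sketch with made-up experiment records, not the Syne Tune API): each entry in SETUPS_TO_COMPARE selects the experiments whose setup name matches, and each selected setup becomes one subplot.

```python
# Sketch only: hypothetical experiment records, illustrating how the
# tuple of setup names determines which experiments land in which subplot.
SETUPS_TO_COMPARE = ("ASHA", "HYPERTUNE-INDEP")

# One record per experiment: (setup name, seed)
experiments = [
    ("ASHA", 0), ("ASHA", 1),
    ("HYPERTUNE-INDEP", 0), ("HYPERTUNE-INDEP", 1),
    ("MOBSTER", 0),  # not listed in SETUPS_TO_COMPARE, so it is ignored
]

# One subplot per setup: collect the seeds shown in each
subplots = {
    setup: [seed for name, seed in experiments if name == setup]
    for setup in SETUPS_TO_COMPARE
}
print(subplots)  # {'ASHA': [0, 1], 'HYPERTUNE-INDEP': [0, 1]}
```

With SETUPS_TO_COMPARE = ("ASHA",), the same grouping yields a single subplot.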
from syne-tune.
We just merged a tool to do these plots directly from ExperimentResult. If you can pull from mainline, you should be able to do:
...
tuner.run()
from syne_tune.experiments import load_experiment
exp = load_experiment(tuner.name)
exp.plot_trials_over_time()
If you can't pull from mainline and want to use the latest version from pip, you can use the snippet I gave above.
Hope those help, feel free to reopen if not :-)
Can you attach the launcher script you were using?
Thank you for the quick reply, Matthias! My launcher script is something like
import logging
from syne_tune import Tuner, StoppingCriterion
from syne_tune.backend import LocalBackend
from syne_tune.config_space import randint, loguniform, uniform, lograndint, choice
from syne_tune.optimizer.baselines import ASHA, MOBSTER
root = logging.getLogger()
root.setLevel(logging.DEBUG)
# hyperparameter search space to consider
config_space = {
    'learning-rate': loguniform(1e-5, 1e-1),
    'n-layers': randint(1, 10),
    'hidden-size': lograndint(4, 2048),
    'dropout-rate': uniform(0, 1),
    'weight_decay': loguniform(1e-7, 1e-1),
    'onehot': choice([True, False]),
    'epochs': 10000,
}
tuner = Tuner(
    trial_backend=LocalBackend(entry_point='train_parity.py'),
    scheduler=MOBSTER(
        config_space,
        metric='accuracy',
        resource_attr='epoch',
        max_resource_attr="epochs",
        search_options={'debug_log': False},
        mode='max',
    ),
    results_update_interval=5,
    stop_criterion=StoppingCriterion(max_wallclock_time=30 * 60),
    n_workers=1,  # how many trials are evaluated in parallel
    tuner_name="parity-test",
)
tuner.run()
I think I'm confused about where the names for the setups come from. Are they the names of the scheduler classes?
Any suggestions?
Hi Andreas,
Sorry for the delay, we just realized that Matthias was out of the office; let me try to help you with this.
That code assumes that experiments have been scheduled with python benchmarking/examples/benchmark_hypertune/launch_remote.py --experiment_tag docs-1 --random_seed 2965402734 --num_seeds 15. The names "ASHA" and "HYPERTUNE-INDEP" come from this file https://syne-tune.readthedocs.io/en/latest/benchmarking/benchmark_hypertune.html (which defines the experiments to run).
From your example, it seems you just want to plot the trials after an experiment. You can use the following code, for instance (I will add a method to ExperimentResults to make this easier):
from syne_tune.experiments import load_experiment
import matplotlib.pyplot as plt
from syne_tune.constants import ST_TUNER_TIME
# Replace with your experiment name here
# In your case, it will start with "parity-test" and be suffixed with a time-stamp for unicity
expname = "funky-earthworm-ASHA-0-nas201-cifar10-2023-10-10-12-38-03-865"
exp = load_experiment(expname)
fig, ax = plt.subplots(1, 1, figsize=(12, 4))
df = exp.results
metric = exp.metric_names()[0]
for trial_id in sorted(df.trial_id.unique()):
    df_trial = df[df.trial_id == trial_id]
    df_trial.plot(x=ST_TUNER_TIME, y=metric, marker=".", ax=ax, legend=None, alpha=0.5)
df_stop = df[df['st_decision'] == "STOP"]
plt.scatter(df_stop[ST_TUNER_TIME], df_stop[metric], marker="x", color="red")
plt.xlabel("Wallclock time (s)")
plt.ylabel(metric)
plt.title("Trial value over time")
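Since the experiment name is suffixed with a timestamp, you have to look up the actual directory name on disk before calling load_experiment. A small stdlib sketch (it assumes results are stored under ~/syne-tune, the default local results root; adjust root if your setup differs):

```python
from pathlib import Path


def latest_experiment_name(tuner_name: str, root: Path = Path.home() / "syne-tune") -> str:
    """Return the name of the most recently modified experiment directory
    under `root` whose name starts with `tuner_name` (e.g. "parity-test")."""
    candidates = [p for p in root.glob(f"{tuner_name}*") if p.is_dir()]
    if not candidates:
        raise FileNotFoundError(f"no experiment under {root} matching {tuner_name}*")
    # Newest directory by modification time
    return max(candidates, key=lambda p: p.stat().st_mtime).name
```

You could then do, for instance, exp = load_experiment(latest_experiment_name("parity-test")).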
Perfect, thank you!