A lightweight framework for benchmarking HPO algorithms
```python
from hposuite import create_study

study = create_study(
    name="hposuite_demo",
    output_dir="./hposuite-output",
    optimizers=[...],   # e.g., "RandomSearch"
    benchmarks=[...],   # e.g., "ackley"
    num_seeds=5,
    budget=100,         # number of iterations
)
study.optimize()
```
Tip
- See below for an example of running multiple Optimizers on multiple Benchmarks.
- Check this example notebook for more demo examples.
- This notebook contains usage examples for Optimizer and Benchmark combinations.
- This notebook demonstrates some of the features of hposuite's Study.
- This notebook shows how to plot results for comparison.
- Check out hpoglue for the core HPO API for interfacing an Optimizer and a Benchmark.
```shell
python -m venv hposuite_env
source hposuite_env/bin/activate
pip install hposuite  # currently not functional
```
Tip
- For usage in a notebook: `pip install hposuite["notebook"]`
- To install hposuite with all available optimizers and benchmarks: `pip install hposuite["all"]`
- To install hposuite with all available optimizers only: `pip install hposuite["optimizers"]`
- To install hposuite with all available benchmarks only: `pip install hposuite["benchmarks"]`
Note
- We recommend `pip install hposuite["all"]` to install all available benchmarks and optimizers.
```shell
git clone https://github.com/automl/hposuite.git
cd hposuite
pip install -e .  # -e for editable install
```
```python
from hposuite.benchmarks import BENCHMARKS
from hposuite.optimizers import OPTIMIZERS
from hposuite import create_study

study = create_study(
    name="smachb_dehb_mfh3good_pd1",
    output_dir="./hposuite-output",
    optimizers=[
        OPTIMIZERS["SMAC_Hyperband"],
        OPTIMIZERS["DEHB_Optimizer"],
    ],
    benchmarks=[
        BENCHMARKS["mfh3_good"],
        BENCHMARKS["pd1-imagenet-resnet-512"],
    ],
    num_seeds=5,
    budget=100,
)
study.optimize()
```
```python
from hposuite.optimizers import OPTIMIZERS
from hposuite.benchmarks import BENCHMARKS

print(OPTIMIZERS.keys())
print(BENCHMARKS.keys())
```
hposuite saves Studies by default to `./hposuite-output/` (relative to the current working directory). Results are saved as parquet files in the Run subdirectories within the main Study directory. The Study directory and the individual Run directory paths are logged when running `Study.optimize()`.
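Since each Run's results are plain parquet files, they can be inspected with pandas. The sketch below is illustrative only: the column names (`seed`, `budget_used`, `objective`) and the file path in the comment are hypothetical, not hposuite's actual schema, so inspect a real results file first.

```python
import pandas as pd

# Hypothetical rows mimicking a run's results; the real schema
# written by hposuite may use different column names.
rows = [
    {"seed": 0, "budget_used": 1, "objective": 0.9},
    {"seed": 0, "budget_used": 2, "objective": 0.4},
    {"seed": 1, "budget_used": 1, "objective": 0.7},
    {"seed": 1, "budget_used": 2, "objective": 0.5},
]
df = pd.DataFrame(rows)

# In practice you would load a run's parquet file instead, e.g.:
# df = pd.read_parquet("hposuite-output/<study>/<run>/<results file>.parquet")

# Best objective value found per seed
best_per_seed = df.groupby("seed")["objective"].min()
print(best_per_seed.to_dict())  # {0: 0.4, 1: 0.5}
```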
```shell
python -m hposuite.plotting.utils \
    --study_dir <study directory name> \
    --output_dir <abspath of dir where study dir is stored> \
    --save_dir <path relative to study_dir to store the plots>
```

- `--save_dir` is set by default to `study_dir/plots`.
- `--output_dir` by default is `../hposuite-output`.
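Comparison plots of this kind typically show the incumbent trace: the best objective value seen so far at each iteration. A minimal sketch of computing such a trace from one run's objective values (illustrative numbers and plain Python, not hposuite's internal plotting code):

```python
from itertools import accumulate

# Objective values observed at each iteration of one run
# (illustrative numbers, minimization).
trace = [0.9, 0.6, 0.7, 0.3, 0.5]

# Incumbent = best value seen so far at each iteration; this is
# what optimizer-comparison plots usually put on the y-axis.
incumbent = list(accumulate(trace, min))
print(incumbent)  # [0.9, 0.6, 0.6, 0.3, 0.3]
```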
For a more detailed overview, check here
| Package | Optimizer | Optimizer Name in hposuite | Blackbox | Multi-Fidelity (MF) | Multi-Objective (MO) | MO-MF | Priors |
|---|---|---|---|---|---|---|---|
| - | RandomSearch | "RandomSearch" | ✓ | | | | |
| - | RandomSearch with priors | "RandomSearchWithPriors" | ✓ | | | | ✓ |
| SMAC | Black Box Facade | "SMAC_BO" | ✓ | | | | |
| SMAC | Hyperband | "SMAC_Hyperband" | | ✓ | | | |
| DEHB | DEHB | "DEHB" | | ✓ | | | |
| HEBO | HEBO | "HEBO" | ✓ | | | | |
| Nevergrad | all | default: "NGOpt". Others: see here | ✓ | | ✓ | | |
| Optuna | TPE | "Optuna" (TPE is automatically selected for single-objective problems) | ✓ | | | | |
| Optuna | NSGA2 | "Optuna" (NSGA2 is automatically selected for multi-objective problems) | | | ✓ | | |
| Scikit-Optimize | all | "Scikit_Optimize" | ✓ | | | | |
For a more detailed overview, check here
| Package | Benchmark | Type | Multi-Fidelity | Multi-Objective | Reference |
|---|---|---|---|---|---|
| - | Ackley | Functional | | | Ackley Function |
| - | Branin | Functional | | | Branin Function |
| mf-prior-bench | MF-Hartmann | Synthetic | ✓ | | MF-Hartmann Benchmark |
| mf-prior-bench | PD1 | Surrogate | ✓ | ✓ | HyperBO - PD1 Benchmark |
| mf-prior-bench | LCBench-Tabular | Tabular | ✓ | ✓ | LCBench-Tabular |
| Pymoo | Single-Objective | Synthetic | | | Pymoo Single-Objective Problems |
| Pymoo | Multi-Objective (unconstrained) | Synthetic | | ✓ | Pymoo Multi-Objective Problems |
| Pymoo | Many-Objective | Synthetic | | ✓ | Pymoo Many-Objective Problems |
| IOH | BBOB | Synthetic | | | BBOB |
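The functional benchmarks such as Ackley are closed-form test functions. For reference, here is the textbook Ackley definition in plain Python; this is the standard mathematical formula, not hposuite's own implementation:

```python
import math

def ackley(x, a=20.0, b=0.2, c=2 * math.pi):
    """Standard Ackley function (minimization; global optimum of 0 at the origin)."""
    d = len(x)
    sum_sq = sum(xi * xi for xi in x)
    sum_cos = sum(math.cos(c * xi) for xi in x)
    return (-a * math.exp(-b * math.sqrt(sum_sq / d))
            - math.exp(sum_cos / d) + a + math.e)

print(round(ackley([0.0, 0.0]), 6))  # 0.0
```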