hposuite

A lightweight framework for benchmarking HPO algorithms

Minimal Example to run hposuite

from hposuite import create_study

study = create_study(
    name="hposuite_demo",
    output_dir="./hposuite-output",
    optimizers=[...],   # e.g. ["RandomSearch"]
    benchmarks=[...],   # e.g. ["ackley"]
    num_seeds=5,
    budget=100,         # Number of iterations
)

study.optimize()


Installation

Create a virtual environment using venv

python -m venv hposuite_env
source hposuite_env/bin/activate

Installing from PyPI

pip install hposuite  # currently not functional

Tip

  • pip install "hposuite[notebook]" - for usage in a notebook
  • pip install "hposuite[all]" - to install hposuite with all available optimizers and benchmarks
  • pip install "hposuite[optimizers]" - to install hposuite with all available optimizers only
  • pip install "hposuite[benchmarks]" - to install hposuite with all available benchmarks only

Note

  • We recommend pip install "hposuite[all]" to install all available benchmarks and optimizers.

Installation from source

git clone https://github.com/automl/hposuite.git
cd hposuite

pip install -e . # -e for editable install

Simple example to run multiple Optimizers on multiple benchmarks

from hposuite.benchmarks import BENCHMARKS
from hposuite.optimizers import OPTIMIZERS

from hposuite import create_study

study = create_study(
    name="smachb_dehb_mfh3good_pd1",
    output_dir="./hposuite-output",
    optimizers=[
        OPTIMIZERS["SMAC_Hyperband"],
        OPTIMIZERS["DEHB_Optimizer"]
    ],
    benchmarks=[
        BENCHMARKS["mfh3_good"],
        BENCHMARKS["pd1-imagenet-resnet-512"]
    ],
    num_seeds=5,
    budget=100,
)

study.optimize()

View all available Optimizers and Benchmarks

from hposuite.optimizers import OPTIMIZERS
from hposuite.benchmarks import BENCHMARKS
print(OPTIMIZERS.keys())
print(BENCHMARKS.keys())

Results

hposuite saves Studies by default to ./hposuite-output/ (relative to the current working directory). Results are saved as parquet files in the Run subdirectories within the main Study directory.
The Study directory and the individual Run directory paths are logged when Study.optimize() is run.
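Since the results are plain parquet files, a finished Study can also be inspected without hposuite itself. A minimal sketch that collects the per-Run result files from a Study directory (the helper name and the exact layout assumptions are ours, not part of the hposuite API):

```python
from pathlib import Path


def collect_run_results(study_dir: str) -> dict[str, list[Path]]:
    """Map each Run subdirectory of a Study directory to its parquet files.

    Assumes the layout described above: the Study directory contains one
    subdirectory per Run, each holding the Run's results as .parquet files.
    """
    results: dict[str, list[Path]] = {}
    for run_dir in sorted(Path(study_dir).iterdir()):
        if run_dir.is_dir():
            results[run_dir.name] = sorted(run_dir.glob("*.parquet"))
    return results
```

Each collected file can then be loaded with, e.g., pandas.read_parquet for further analysis.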

Plotting

python -m hposuite.plotting.utils \
    --study_dir <study directory name> \
    --output_dir <abspath of dir where the study dir is stored> \
    --save_dir <path relative to study_dir to store the plots>

--save_dir defaults to study_dir/plots; --output_dir defaults to ../hposuite-output.

Overview of available Optimizers

For a more detailed overview, check here

| Package | Optimizer | Optimizer Name in hposuite |
|---|---|---|
| - | RandomSearch | "RandomSearch" |
| - | RandomSearch with priors | "RandomSearchWithPriors" |
| SMAC | Black Box Facade | "SMAC_BO" |
| SMAC | Hyperband | "SMAC_Hyperband" |
| DEHB | DEHB | "DEHB" |
| HEBO | HEBO | "HEBO" |
| Nevergrad | all | default: "NGOpt"; for others, see here |
| Optuna | TPE | "Optuna" (TPE is automatically selected for single-objective problems) |
| Optuna | NSGA2 | "Optuna" (NSGA2 is automatically selected for multi-objective problems) |
| Scikit-Optimize | all | "Scikit_Optimize" |

Overview of available Benchmarks

For a more detailed overview, check here

| Package | Benchmark | Type | Reference |
|---|---|---|---|
| - | Ackley | Functional | Ackley Function |
| - | Branin | Functional | Branin Function |
| mf-prior-bench | MF-Hartmann | Synthetic | MF-Hartmann Benchmark |
| mf-prior-bench | PD1 | Surrogate | HyperBO - PD1 Benchmark |
| mf-prior-bench | LCBench-Tabular | Tabular | LCBench-Tabular |
| Pymoo | Single-Objective | Synthetic | Pymoo Single-Objective Problems |
| Pymoo | Multi-Objective (unconstrained) | Synthetic | Pymoo Multi-Objective Problems |
| Pymoo | Many-Objective | Synthetic | Pymoo Many-Objective Problems |
| IOH | BBOB | Synthetic | BBOB |