Releases: meta-pytorch/botorch

Compatibility release

[0.8.1] - Jan 5, 2023

Highlights

  • This release includes changes for compatibility with the newest versions of linear_operator and gpytorch.
  • Several improvement-based acquisition functions now have "Log" counterparts, which are
    numerically better behaved in regions where the probability of improvement is low. For
    example, LogExpectedImprovement (#1565) should behave better than ExpectedImprovement.
    The new acquisition functions are (see the sketch after this list):
    • LogExpectedImprovement (#1565).
    • LogNoisyExpectedImprovement (#1577).
    • LogProbabilityOfImprovement (#1594).
    • LogConstrainedExpectedImprovement (#1594).
  • Bug fix: Stop ModelListGP.posterior from quietly ignoring Log, Power, and Bilog outcome transforms (#1563).
  • Turn off fast_computations setting in linear_operator by default (#1547).
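
A minimal sketch of the new Log variants, assuming the botorch.acquisition.analytic import path from #1565; the toy data, model, and candidate points are illustrative:

```python
import torch
from botorch.acquisition.analytic import ExpectedImprovement, LogExpectedImprovement
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from gpytorch.mlls import ExactMarginalLogLikelihood

# Toy maximization problem on the unit square.
train_X = torch.rand(10, 2, dtype=torch.float64)
train_Y = -(train_X - 0.5).pow(2).sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

best_f = train_Y.max()
ei = ExpectedImprovement(model=model, best_f=best_f)
log_ei = LogExpectedImprovement(model=model, best_f=best_f)

X = torch.rand(5, 1, 2, dtype=torch.float64)  # 5 candidate points, q=1
print(ei(X))      # can underflow to exactly 0 far from the incumbent
print(log_ei(X))  # log(EI) stays finite and preserves gradient signal
```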

Compatibility

  • Require linear_operator == 0.3.0 (#1538).
  • Require pyro-ppl >= 1.8.4 (#1606).
  • Require gpytorch == 1.9.1 (#1612).

New Features

  • Add eta to get_acquisition_function (#1541).
  • Support 0d-features in FixedFeatureAcquisitionFunction (#1546).
  • Add a timeout option to optimization functions (#1562, #1598); see the sketch after this list.
  • Add MultiModelAcquisitionFunction, an abstract base class for acquisition functions that require multiple types of models (#1584).
  • Add cache_root option for qNEI in get_acquisition_function (#1608).
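
A minimal sketch of the new timeout option, assuming it is exposed as the timeout_sec argument of optimize_acqf (per #1562/#1598); the unfitted model and the settings are illustrative:

```python
import torch
from botorch.acquisition.analytic import LogExpectedImprovement
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf

train_X = torch.rand(10, 2, dtype=torch.float64)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)  # hyperparameter fitting skipped for brevity
acqf = LogExpectedImprovement(model=model, best_f=train_Y.max())

bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.float64)
candidate, value = optimize_acqf(
    acq_function=acqf,
    bounds=bounds,
    q=1,
    num_restarts=8,
    raw_samples=64,
    timeout_sec=10.0,  # assumed name: stop the optimization after ~10 seconds
)
```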

Other changes

  • Docstring corrections (#1551, #1557, #1573).
  • Removal of _fit_multioutput_independent and allclose_mll (#1570).
  • Better numerical behavior for fully Bayesian models (#1576).
  • More verbose Scipy minimize failure messages (#1579).
  • Lower-bound the noise in SaasPyroModel to avoid Cholesky errors (#1586).

Bug fixes

  • Error rather than failing silently for NaN values in box decomposition (#1554).
  • Make get_bounds_as_ndarray device-safe (#1567).

Posterior, MCSampler & Closure Refactors, Entropy Search Acquisition Functions

Dec 7, 2022

Highlights

This release includes some backwards-incompatible changes.

  • Refactor Posterior and MCSampler modules to better support non-Gaussian distributions in BoTorch (#1486).
    • Introduced a TorchPosterior object that wraps a PyTorch Distribution object and makes it compatible with the rest of the Posterior API.
    • PosteriorList no longer accepts Gaussian base samples. It should be used with a ListSampler that includes the appropriate sampler for each posterior.
    • The MC acquisition functions no longer construct a Sobol sampler by default. Instead, they rely on a get_sampler helper, which dispatches an appropriate sampler based on the posterior provided (see the sketch after this list).
    • The resample and collapse_batch_dims arguments to MCSamplers have been removed. The ForkedRNGSampler and StochasticSampler can be used to get the same functionality.
    • Refer to the PR for additional changes. We will update the website documentation to reflect these changes in a future release.
  • #1191 refactors much of botorch.optim to operate based on closures that abstract away how losses (and gradients) are computed. By default, these closures are created using multiply-dispatched factory functions (such as get_loss_closure), which may be customized by registering methods with an associated dispatcher (e.g. GetLossClosure). Future releases will contain tutorials that explore these features in greater detail.
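
A minimal sketch of the new dispatch-based sampling flow, assuming the botorch.sampling.get_sampler module from #1486; the model and test points are illustrative:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.sampling.get_sampler import get_sampler

train_X = torch.rand(8, 2, dtype=torch.float64)
train_Y = train_X.prod(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)

posterior = model.posterior(torch.rand(4, 2, dtype=torch.float64))
# Dispatches an appropriate sampler (Sobol QMC for Gaussian posteriors)
# instead of the old hard-coded default.
sampler = get_sampler(posterior, sample_shape=torch.Size([128]))
samples = sampler(posterior)  # sample_shape x n x m, here 128 x 4 x 1
```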

New Features

  • Add mixed optimization for list optimization (#1342).
  • Add entropy search acquisition functions (#1458).
  • Add utilities for straight-through gradient estimators for discretization functions (#1515).
  • Add support for categoricals in Round input transform and use STEs (#1516).
  • Add closure-based optimizers (#1191).

Other Changes

  • Do not count hitting maxiter as optimization failure & update default maxiter (#1478).
  • BoxDecomposition cleanup (#1490).
  • Deprecate torch.triangular_solve in favor of torch.linalg.solve_triangular (#1494).
  • Various docstring improvements (#1496, #1499, #1504).
  • Remove __getitem__ method from LinearTruncatedFidelityKernel (#1501).
  • Handle Cholesky errors when fitting a fully Bayesian model (#1507).
  • Make eta configurable in apply_constraints (#1526).
  • Support SAAS ensemble models in RFFs (#1530).
  • Deprecate botorch.optim.numpy_converter (#1191).
  • Deprecate fit_gpytorch_scipy and fit_gpytorch_torch (#1191).

Bug Fixes

  • Enforce use of float64 in NdarrayOptimizationClosure (#1508).
  • Replace deprecated np.bool with equivalent bool (#1524).
  • Fix RFF bug when using FixedNoiseGP models (#1528).

Bug fix release

Nov 10, 2022

Highlights

  • #1454 fixes a critical bug that affected multi-output BatchedMultiOutputGPyTorchModels that were using a Normalize or InputStandardize input transform and trained using fit_gpytorch_model/mll with sequential=True (which was the default until 0.7.3). The input transform buffers would be reset after model training, leading to the model being trained on normalized input data but evaluated on raw inputs. This bug had been affecting model fits since the 0.6.5 release.
  • #1479 changes the inheritance structure of Models in a backwards-incompatible way. If your code relies on isinstance checks with BoTorch Models, especially SingleTaskGP, you should revisit these checks to make sure they still work as expected; see the sketch below.
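
For instance, a capability-style check via the new FantasizeMixin (#1462, #1479) is more robust than pinning a concrete class; a minimal sketch with illustrative data:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.models.model import FantasizeMixin

model = SingleTaskGP(
    torch.rand(8, 2, dtype=torch.float64),
    torch.rand(8, 1, dtype=torch.float64),
)
# Test for the capability you need rather than for a class whose place
# in the hierarchy may have moved.
print(isinstance(model, FantasizeMixin))  # True: the model supports fantasize()
```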

Compatibility

  • Require linear_operator == 0.2.0 (#1491).

New Features

  • Introduce bvn, MVNXPB, TruncatedMultivariateNormal, and UnifiedSkewNormal classes / methods (#1394, #1408).
  • Introduce AffineInputTransform (#1461).
  • Introduce a subset_transform decorator to consolidate subsetting of inputs in input transforms (#1468).

Other Changes

  • Add a warning when using float dtype (#1193).
  • Let Pyre know that AcquisitionFunction.model is a Model (#1216).
  • Remove custom BlockDiagLazyTensor logic when using Standardize (#1414).
  • Expose _aug_batch_shape in SaasFullyBayesianSingleTaskGP (#1448).
  • Adjust PairwiseGP ScaleKernel prior (#1460).
  • Pull out fantasize method into a FantasizeMixin class, so it isn't so widely inherited (#1462, #1479).
  • Don't use the Pyro JIT by default, since it was causing a memory leak (#1474).
  • Use get_default_partitioning_alpha for NEHVI input constructor (#1481).

Bug Fixes

  • Fix batch_shape property of ModelListGPyTorchModel (#1441).
  • Tutorial fixes (#1446, #1475).
  • Bug-fix for Proximal acquisition function wrapper for negative base acquisition functions (#1447).
  • Handle RuntimeError due to constraint violation while sampling from priors (#1451).
  • Fix bug in model list with output indices (#1453).
  • Fix input transform bug when sequentially training a BatchedMultiOutputGPyTorchModel (#1454).
  • Fix a bug in _fit_multioutput_independent that failed mll comparison (#1455).
  • Fix box decomposition behavior with empty or None Y (#1489).

Improve model fitting functionality

Sep 27, 2022

New Features

  • A full refactor of model fitting methods (#1134); see the sketch after this list.
    • This introduces a new fit_gpytorch_mll method that multiple-dispatches on the model type. Users may register custom fitting routines for different combinations of MLLs, Likelihoods, and Models.
    • Unlike previous fitting helpers, fit_gpytorch_mll does not pass kwargs through to the optimizer; instead, it accepts an optional optimizer_kwargs argument.
    • When a model fitting attempt fails, botorch.fit methods restore modules to their original states.
    • fit_gpytorch_mll throws a ModelFittingError when all model fitting attempts fail.
    • Upon returning from fit_gpytorch_mll, mll.training will be True if fitting failed and False otherwise.
  • Allow custom bounds to be passed in to SyntheticTestFunction (#1415).
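
A minimal sketch of the new entry point, assuming the default scipy-based optimizer accepts an options dict via optimizer_kwargs; the data and model are illustrative:

```python
import torch
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from gpytorch.mlls import ExactMarginalLogLikelihood

train_X = torch.rand(20, 3, dtype=torch.float64)
train_Y = torch.sin(train_X).sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)
mll = ExactMarginalLogLikelihood(model.likelihood, model)

# Optimizer settings go through optimizer_kwargs rather than loose kwargs.
fit_gpytorch_mll(mll, optimizer_kwargs={"options": {"maxiter": 100}})
assert not mll.training  # mll.training is False if fitting succeeded
```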

Deprecations

  • Deprecate the weights argument of risk measures in favor of a preprocessing_function (#1400).
  • Deprecate fit_gpytorch_model; it will be superseded by fit_gpytorch_mll.

Other Changes

  • Support risk measures in MOO input constructors (#1401).

Bug Fixes

  • Fix fully Bayesian state dict loading when there are more than 10 models (#1405).
  • Fix batch_shape property of SaasFullyBayesianSingleTaskGP (#1413).
  • Fix model_list_to_batched ignoring the covar_module of the input models (#1419).

Compatibility Release

Sep 13, 2022

Compatibility

  • Pin GPyTorch == 1.9.0 (#1397).
  • Pin linear_operator == 0.1.1 (#1397).

New Features

  • Implement SaasFullyBayesianMultiTaskGP and related utilities (#1181, #1203).

Other Changes

  • Support loading a state dict for SaasFullyBayesianSingleTaskGP (#1120).
  • Update load_state_dict for ModelList to support fully Bayesian models (#1395).
  • Add is_one_to_many attribute to input transforms (#1396).

Bug Fixes

  • Fix PairwiseGP on GPU (#1388).

Compatibility Release

Sep 7, 2022

Compatibility

  • Require Python >= 3.8 (via #1347).
  • Add support for Python 3.10 (via #1379).
  • Require PyTorch >= 1.11 (via #1363).
  • Require GPyTorch >= 1.9.0 (#1347).
    • GPyTorch 1.9.0 is a major refactor that factors out the lazy tensor functionality into a new LinearOperator library, which required a number of adjustments to BoTorch (#1363, #1377).
  • Require pyro >= 1.8.2 (#1379).

New Features

  • Add ability to generate the features appended in the AppendFeatures input transform via a generic callable (#1354).
  • Add new synthetic test functions for sensitivity analysis (#1355, #1361).

Other Changes

  • Use time.monotonic() instead of time.time() to measure duration (#1353).
  • Allow passing Y_samples directly in MARS.set_baseline_Y (#1364).

Bug Fixes

  • Patch state_dict loading for PairwiseGP (#1359).
  • Fix batch_shape handling in Normalize and InputStandardize transforms (#1360).

Maintenance release

[0.6.6] - Aug 12, 2022

Compatibility

  • Require GPyTorch >= 1.8.1 (#1347).

New Features

  • Support batched models in RandomFourierFeatures (#1336).
  • Add a skip_expand option to AppendFeatures (#1344).

Other Changes

  • Allow qProbabilityOfImprovement to use batch-shaped best_f (#1324).
  • Make optimize_acqf re-attempt failed optimization runs and improve the handling of
    optimization errors in optimize_acqf and gen_candidates_scipy (#1325).
  • Reduce memory overhead in MARS.set_baseline_Y (#1346).

Bug Fixes

  • Fix bug where outcome_transform was ignored for ModelListGP.fantasize (#1338).
  • Fix bug causing get_polytope_samples to sample incorrectly when variables
    live in multiple dimensions (#1341).

Robust Multi-Objective BO, Multi-Objective Multi-Fidelity BO, Scalable Constrained BO, Improvements to Ax Integration

Jul 15, 2022

Compatibility

  • Require PyTorch >=1.10 (#1293).
  • Require GPyTorch >=1.7 (#1293).

New Features

  • Add MOMF (Multi-Objective Multi-Fidelity) acquisition function (#1153).
  • Support PairwiseLogitLikelihood and modularize PairwiseGP (#1193).
  • Add in transformed weighting flag to Proximal Acquisition function (#1194).
  • Add FeasibilityWeightedMCMultiOutputObjective (#1202).
  • Add outcome_transform to FixedNoiseMultiTaskGP (#1255).
  • Support Scalable Constrained Bayesian Optimization (#1257).
  • Support SaasFullyBayesianSingleTaskGP in prune_inferior_points (#1260).
  • Implement MARS as a risk measure (#1303).
  • Add MARS tutorial (#1305).

Other Changes

  • Add Bilog outcome transform (#1189).
  • Make get_infeasible_cost return a cost value for each outcome (#1191).
  • Modify risk measures to accept List[float] for weights (#1197).
  • Support SaasFullyBayesianSingleTaskGP in prune_inferior_points_multi_objective (#1204).
  • BotorchContainers and BotorchDatasets: Large refactor of the original TrainingData API to allow for more diverse types of datasets (#1205, #1221).
  • Proximal biasing support for multi-output SingleTaskGP models (#1212).
  • Improve error handling in optimize_acqf_discrete with a check that choices is non-empty (#1228).
  • Handle X_pending properly in FixedFeatureAcquisitionFunction (#1233, #1234).
  • PE and PLBO support in Ax (#1240, #1241).
  • Remove model.train call from get_X_baseline for better caching (#1289).
  • Support inf values in bounds argument of optimize_acqf (#1302).

Bug Fixes

  • Update get_gp_samples to support input / outcome transforms (#1201).
  • Fix cached Cholesky sampling in qNEHVI when using Standardize outcome transform (#1215).
  • Make task_feature a required input in MultiTaskGP.construct_inputs (#1246).
  • Fix CUDA tests (#1253).
  • Fix FixedSingleSampleModel dtype/device conversion (#1254).
  • Prevent inappropriate transforms by putting input transforms into train mode before converting models (#1283).
  • Fix sample_points_around_best when using 20 dimensional inputs or prob_perturb (#1290).
  • Skip bound validation in optimize_acqf if inequality constraints are specified (#1297).
  • Properly handle RFFs when used with a ModelList with individual transforms (#1299).
  • Update PosteriorList to support deterministic-only models and fix event_shape (#1300).

Documentation

  • Add a note about observation noise in the posterior in fit_model_with_torch_optimizer notebook (#1196).
  • Fix custom botorch model in Ax tutorial to support new interface (#1213).
  • Update MOO docs (#1242).
  • Add SMOKE_TEST option to MOMF tutorial (#1243).
  • Fix ModelListGP.condition_on_observations/fantasize bug (#1250).
  • Replace space with underscore for proper doc generation (#1256).
  • Update PBO tutorial to use EUBO (#1262).

Maintenance Release

Apr 21, 2022

New Features

  • Implement ExpectationPosteriorTransform (#903).
  • Add PairwiseMCPosteriorVariance, a cheap active learning acquisition function (#1125).
  • Support computing quantiles in the fully Bayesian posterior, add FullyBayesianPosteriorList (#1161).
  • Add expectation risk measures (#1173).
  • Implement Multi-Fidelity GIBBON (Lower Bound MES) acquisition function (#1185).

Other Changes

  • Add an error message for one shot acquisition functions in optimize_acqf_discrete (#939).
  • Validate the shape of the bounds argument in optimize_acqf (#1142).
  • Minor tweaks to SAASBO (#1143, #1183).
  • Minor updates to tutorials (24f7fda, #1144, #1148, #1159, #1172, #1180).
  • Make it easier to specify a custom PyroModel (#1149).
  • Allow passing in a mean_module to SingleTaskGP/FixedNoiseGP (#1160).
  • Add a note about acquisitions using gradients to base class (#1168).
  • Remove deprecated box_decomposition module (#1175).

Bug Fixes

  • Bug-fixes for ProximalAcquisitionFunction (#1122).
  • Fix missing warnings on failed optimization in fit_gpytorch_scipy (#1170).
  • Ignore data related buffers in PairwiseGP.load_state_dict (#1171).
  • Make fit_gpytorch_model properly honor the debug flag (#1178).
  • Fix missing posterior_transform in gen_one_shot_kg_initial_conditions (#1187).

Bayesian Optimization with Preference Exploration, SAASBO for High-Dimensional Bayesian Optimization

Mar 28, 2022

New Features

  • Implement SAASBO - SaasFullyBayesianSingleTaskGP model for sample-efficient high-dimensional Bayesian optimization (#1123); see the sketch after this list.
  • Add SAASBO tutorial (#1127).
  • Add LearnedObjective (#1131), AnalyticExpectedUtilityOfBestOption acquisition function (#1135), and a few auxiliary classes to support Bayesian optimization with preference exploration (BOPE).
  • Add BOPE tutorial (#1138).
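
A minimal sketch of fitting the new model, assuming the fit_fully_bayesian_model_nuts helper added alongside it; the toy problem and the small MCMC budget are illustrative:

```python
import torch
from botorch.fit import fit_fully_bayesian_model_nuts
from botorch.models.fully_bayesian import SaasFullyBayesianSingleTaskGP

train_X = torch.rand(12, 30, dtype=torch.float64)   # 30d inputs, few points
train_Y = train_X[:, :2].sum(dim=-1, keepdim=True)  # only 2 dims matter
model = SaasFullyBayesianSingleTaskGP(train_X, train_Y)

# The SAAS prior shrinks most length scales, so the fully Bayesian GP stays
# sample-efficient in high dimensions. Small MCMC settings keep this fast.
fit_fully_bayesian_model_nuts(model, warmup_steps=64, num_samples=64, thinning=8)
posterior = model.posterior(torch.rand(5, 30, dtype=torch.float64))
```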

Other Changes

  • Use qKG.evaluate in optimize_acqf_mixed (#1133).
  • Add construct_inputs to SAASBO (#1136).

Bug Fixes

  • Fix "Constraint Active Search" tutorial (#1124).
  • Update "Discrete Multi-Fidelity BO" tutorial (#1134).