Releases: meta-pytorch/botorch
Compatibility release
[0.8.1] - Jan 5, 2023
Highlights
- This release includes changes for compatibility with the newest versions of `linear_operator` and `gpytorch`.
- Several acquisition functions now have "Log" counterparts, which provide better numerical behavior for improvement-based acquisition functions in regions where the probability of improvement is low. For example, `LogExpectedImprovement` (#1565) should behave better than `ExpectedImprovement`.
- Bug fix: Stop `ModelListGP.posterior` from quietly ignoring `Log`, `Power`, and `Bilog` outcome transforms (#1563).
- Turn off the `fast_computations` setting in `linear_operator` by default (#1547).
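To see why the "Log" counterparts help, here is a minimal, library-free sketch of the underlying numerical issue; the `log_ndtr_*` helper names are illustrative stand-ins, not BoTorch API. When the probability of improvement is tiny, computing it directly underflows to zero, while a log-space formulation stays finite and differentiable:

```python
import math

def log_ndtr_naive(z: float) -> float:
    """log(Phi(z)) computed the obvious way; Phi underflows to 0 for z << 0."""
    p = 0.5 * math.erfc(-z / math.sqrt(2.0))
    return math.log(p) if p > 0.0 else float("-inf")

def log_ndtr_tail(z: float) -> float:
    """Leading-order asymptotic of log(Phi(z)) for z << 0; stays finite."""
    return -0.5 * z * z - math.log(-z) - 0.5 * math.log(2.0 * math.pi)

# At z = -40 the naive form has lost all information (-inf), while the
# tail expansion still yields a finite value (about -804.6) that an
# optimizer can follow uphill.
print(log_ndtr_naive(-40.0))  # -inf
print(log_ndtr_tail(-40.0))
```

This is the same failure mode that makes `ExpectedImprovement` flat (zero gradient) far from the incumbent, and which the log-space variants are designed to avoid.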
Compatibility
- Require linear_operator == 0.3.0 (#1538).
- Require pyro-ppl >= 1.8.4 (#1606).
- Require gpytorch == 1.9.1 (#1612).
New Features
- Add `eta` to `get_acquisition_function` (#1541).
- Support 0d-features in `FixedFeatureAcquisitionFunction` (#1546).
- Add timeout ability to optimization functions (#1562, #1598).
- Add `MultiModelAcquisitionFunction`, an abstract base class for acquisition functions that require multiple types of models (#1584).
- Add `cache_root` option for qNEI in `get_acquisition_function` (#1608).
Other Changes
- Docstring corrections (#1551, #1557, #1573).
- Removal of `_fit_multioutput_independent` and `allclose_mll` (#1570).
- Better numerical behavior for fully Bayesian models (#1576).
- More verbose Scipy `minimize` failure messages (#1579).
- Lower-bound noise in `SaasPyroModel` to avoid Cholesky errors (#1586).
Posterior, MCSampler & Closure Refactors, Entropy Search Acquisition Functions
Highlights
This release includes some backwards incompatible changes.

- Refactor the `Posterior` and `MCSampler` modules to better support non-Gaussian distributions in BoTorch (#1486).
  - Introduced a `TorchPosterior` object that wraps a PyTorch `Distribution` object and makes it compatible with the rest of the `Posterior` API.
  - `PosteriorList` no longer accepts Gaussian base samples. It should be used with a `ListSampler` that includes the appropriate sampler for each posterior.
  - The MC acquisition functions no longer construct a Sobol sampler by default. Instead, they rely on a `get_sampler` helper, which dispatches an appropriate sampler based on the posterior provided.
  - The `resample` and `collapse_batch_dims` arguments to `MCSampler`s have been removed. The `ForkedRNGSampler` and `StochasticSampler` can be used to get the same functionality.
  - Refer to the PR for additional changes. We will update the website documentation to reflect these changes in a future release.
- #1191 refactors much of `botorch.optim` to operate based on closures that abstract away how losses (and gradients) are computed. By default, these closures are created using multiply-dispatched factory functions (such as `get_loss_closure`), which may be customized by registering methods with an associated dispatcher (e.g. `GetLossClosure`). Future releases will contain tutorials that explore these features in greater detail.
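The dispatch-by-type pattern behind helpers like `get_sampler` can be sketched in a few lines of plain Python. All class and function names below are illustrative stand-ins, not BoTorch API: a registry maps a posterior type to a sampler class, and lookup walks the MRO so subclasses inherit their parent's sampler.

```python
# Registry mapping posterior type -> sampler class (hypothetical names).
_SAMPLER_REGISTRY = {}

def register_sampler(posterior_cls):
    """Decorator that registers a sampler class for a posterior type."""
    def decorator(sampler_cls):
        _SAMPLER_REGISTRY[posterior_cls] = sampler_cls
        return sampler_cls
    return decorator

def get_sampler_for(posterior):
    """Dispatch on the posterior's type, falling back along the MRO."""
    for cls in type(posterior).__mro__:
        if cls in _SAMPLER_REGISTRY:
            return _SAMPLER_REGISTRY[cls]()
    raise TypeError(f"No sampler registered for {type(posterior).__name__}")

class GaussianPosterior: ...
class TransformedPosterior(GaussianPosterior): ...

@register_sampler(GaussianPosterior)
class SobolSampler: ...

# A subclass without its own registration falls back to its parent's sampler.
print(type(get_sampler_for(TransformedPosterior())).__name__)  # SobolSampler
```

Registering new pairings (as the release notes describe for `GetLossClosure`-style dispatchers) is then just another `@register_sampler(...)` application, with no changes to call sites.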
New Features
- Add mixed optimization for list optimization (#1342).
- Add entropy search acquisition functions (#1458).
- Add utilities for straight-through gradient estimators for discretization functions (#1515).
- Add support for categoricals in `Round` input transform and use STEs (#1516).
- Add closure-based optimizers (#1191).
Other Changes
- Do not count hitting maxiter as optimization failure & update default maxiter (#1478).
- `BoxDecomposition` cleanup (#1490).
- Deprecate `torch.triangular_solve` in favor of `torch.linalg.solve_triangular` (#1494).
- Various docstring improvements (#1496, #1499, #1504).
- Remove `__getitem__` method from `LinearTruncatedFidelityKernel` (#1501).
- Handle Cholesky errors when fitting a fully Bayesian model (#1507).
- Make eta configurable in `apply_constraints` (#1526).
- Support SAAS ensemble models in RFFs (#1530).
- Deprecate `botorch.optim.numpy_converter` (#1191).
- Deprecate `fit_gpytorch_scipy` and `fit_gpytorch_torch` (#1191).
Bug fix release
Highlights
- #1454 fixes a critical bug that affected multi-output `BatchedMultiOutputGPyTorchModel`s that were using a `Normalize` or `InputStandardize` input transform and trained using `fit_gpytorch_model/mll` with `sequential=True` (which was the default until 0.7.3). The input transform buffers would be reset after model training, leading to the model being trained on normalized input data but evaluated on raw inputs. This bug had been affecting model fits since the 0.6.5 release.
- #1479 changes the inheritance structure of `Model`s in a backwards-incompatible way. If your code relies on `isinstance` checks with BoTorch `Model`s, especially `SingleTaskGP`, you should revisit these checks to make sure they still work as expected.
Compatibility
- Require linear_operator == 0.2.0 (#1491).
New Features
- Introduce `bvn`, `MVNXPB`, `TruncatedMultivariateNormal`, and `UnifiedSkewNormal` classes / methods (#1394, #1408).
- Introduce `AffineInputTransform` (#1461).
- Introduce a `subset_transform` decorator to consolidate subsetting of inputs in input transforms (#1468).
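The idea behind a `subset_transform`-style decorator can be illustrated with a small, self-contained sketch; the names and semantics here are hypothetical, not the actual BoTorch implementation. The decorator applies a transform only to a chosen subset of input columns and leaves the rest untouched, so individual transforms don't each have to re-implement the index bookkeeping:

```python
def subset_columns(indices):
    """Decorator: apply a scalar transform only to the given column indices."""
    def decorator(transform):
        def wrapper(row):
            out = list(row)  # copy so untouched columns pass through unchanged
            for i in indices:
                out[i] = transform(row[i])
            return out
        return wrapper
    return decorator

@subset_columns(indices=[0, 2])
def square(x):
    return x * x

print(square([2.0, 3.0, 4.0]))  # [4.0, 3.0, 16.0]
```

The real decorator operates on batched tensors rather than Python lists, but the consolidation benefit is the same: the subset logic lives in one place.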
Other Changes
- Add a warning when using float dtype (#1193).
- Let Pyre know that `AcquisitionFunction.model` is a `Model` (#1216).
- Remove custom `BlockDiagLazyTensor` logic when using `Standardize` (#1414).
- Expose `_aug_batch_shape` in `SaasFullyBayesianSingleTaskGP` (#1448).
- Adjust `PairwiseGP` `ScaleKernel` prior (#1460).
- Pull out `fantasize` method into a `FantasizeMixin` class, so it isn't so widely inherited (#1462, #1479).
- Don't use Pyro JIT by default, since it was causing a memory leak (#1474).
- Use `get_default_partitioning_alpha` for NEHVI input constructor (#1481).
Bug Fixes
- Fix `batch_shape` property of `ModelListGPyTorchModel` (#1441).
- Tutorial fixes (#1446, #1475).
- Bug-fix for `Proximal` acquisition function wrapper for negative base acquisition functions (#1447).
- Handle `RuntimeError` due to constraint violation while sampling from priors (#1451).
- Fix bug in model list with output indices (#1453).
- Fix input transform bug when sequentially training a `BatchedMultiOutputGPyTorchModel` (#1454).
- Fix a bug in `_fit_multioutput_independent` that failed mll comparison (#1455).
- Fix box decomposition behavior with empty or None `Y` (#1489).
Improve model fitting functionality
New Features
- A full refactor of model fitting methods (#1134).
  - This introduces a new `fit_gpytorch_mll` method that multiple-dispatches on the model type. Users may register custom fitting routines for different combinations of MLLs, Likelihoods, and Models.
  - Unlike previous fitting helpers, `fit_gpytorch_mll` does not pass `kwargs` to `optimizer` and instead introduces an optional `optimizer_kwargs` argument.
  - When a model fitting attempt fails, `botorch.fit` methods restore modules to their original states.
  - `fit_gpytorch_mll` throws a `ModelFittingError` when all model fitting attempts fail.
  - Upon returning from `fit_gpytorch_mll`, `mll.training` will be `True` if fitting failed and `False` otherwise.
- Allow custom bounds to be passed in to `SyntheticTestFunction` (#1415).
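The "restore modules to their original states on failure" behavior of the fitting refactor can be sketched with a plain dict standing in for a module's `state_dict`; all names below are hypothetical illustrations, not the BoTorch implementation:

```python
import copy

class ModelFittingError(RuntimeError):
    """Raised when all fitting attempts fail (mirrors the error described above)."""

def fit_with_rollback(state, fit_attempt):
    """Run one fitting attempt; on failure, roll `state` back to its snapshot."""
    snapshot = copy.deepcopy(state)
    try:
        fit_attempt(state)
        return state
    except Exception as err:
        state.clear()
        state.update(snapshot)  # restore the pre-fit parameters
        raise ModelFittingError("all model fitting attempts failed") from err

params = {"lengthscale": 1.0}

def bad_fit(s):
    s["lengthscale"] = float("nan")  # partial, corrupted update...
    raise ValueError("optimizer diverged")  # ...then the attempt fails

try:
    fit_with_rollback(params, bad_fit)
except ModelFittingError:
    pass
print(params)  # {'lengthscale': 1.0} — original state restored
```

The snapshot-then-restore structure is what guarantees a failed attempt cannot leave a model half-updated.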
Deprecations
- Deprecate weights argument of risk measures in favor of a `preprocessing_function` (#1400).
- Deprecate `fit_gpytorch_model`; to be superseded by `fit_gpytorch_mll`.
Other Changes
- Support risk measures in MOO input constructors (#1401).
Compatibility Release
Compatibility
- Require python >= 3.8 (via #1347).
- Support for python 3.10 (via #1379).
- Require PyTorch >= 1.11 (via #1363).
- Require GPyTorch >= 1.9.0 (#1347).
- Require pyro >= 1.8.2 (#1379).
New Features
- Add ability to generate the features appended in the `AppendFeatures` input transform via a generic callable (#1354).
- Add new synthetic test functions for sensitivity analysis (#1355, #1361).
Other Changes
- Use `time.monotonic()` instead of `time.time()` to measure duration (#1353).
- Allow passing `Y_samples` directly in `MARS.set_baseline_Y` (#1364).
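The rationale for the clock change is that `time.time()` follows the wall clock, which can jump backwards (NTP corrections, manual adjustments), so durations computed from it can come out negative; `time.monotonic()` is guaranteed never to go backwards. A minimal sketch of the recommended pattern:

```python
import time

# Measure a duration with the monotonic clock: immune to wall-clock jumps.
start = time.monotonic()
time.sleep(0.01)  # stand-in for the work being timed
elapsed = time.monotonic() - start

# This invariant holds even if the system clock were adjusted mid-measurement;
# the equivalent time.time() difference carries no such guarantee.
assert elapsed >= 0.0
print(f"elapsed: {elapsed:.3f}s")
```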
Maintenance release
[0.6.6] - Aug 12, 2022
Compatibility
- Require GPyTorch >= 1.8.1 (#1347).
New Features
- Support batched models in `RandomFourierFeatures` (#1336).
- Add a `skip_expand` option to `AppendFeatures` (#1344).
Other Changes
- Allow `qProbabilityOfImprovement` to use batch-shaped `best_f` (#1324).
- Make `optimize_acqf` re-attempt failed optimization runs and handle optimization errors in `optimize_acqf` and `gen_candidates_scipy` better (#1325).
- Reduce memory overhead in `MARS.set_baseline_Y` (#1346).
Bug Fixes
- Fix bug where `outcome_transform` was ignored for `ModelListGP.fantasize` (#1338).
- Fix bug causing `get_polytope_samples` to sample incorrectly when variables live in multiple dimensions (#1341).
Robust Multi-Objective BO, Multi-Objective Multi-Fidelity BO, Scalable Constrained BO, Improvements to Ax Integration
New Features
- Add MOMF (Multi-Objective Multi-Fidelity) acquisition function (#1153).
- Support `PairwiseLogitLikelihood` and modularize `PairwiseGP` (#1193).
- Add a transformed weighting flag to the `Proximal` acquisition function (#1194).
- Add `FeasibilityWeightedMCMultiOutputObjective` (#1202).
- Add `outcome_transform` to `FixedNoiseMultiTaskGP` (#1255).
- Support Scalable Constrained Bayesian Optimization (#1257).
- Support `SaasFullyBayesianSingleTaskGP` in `prune_inferior_points` (#1260).
- Implement MARS as a risk measure (#1303).
- Add MARS tutorial (#1305).
Other Changes
- Add `Bilog` outcome transform (#1189).
- Make `get_infeasible_cost` return a cost value for each outcome (#1191).
- Modify risk measures to accept `List[float]` for weights (#1197).
- Support `SaasFullyBayesianSingleTaskGP` in `prune_inferior_points_multi_objective` (#1204).
- `BotorchContainer`s and `BotorchDataset`s: Large refactor of the original `TrainingData` API to allow for more diverse types of datasets (#1205, #1221).
- Proximal biasing support for multi-output `SingleTaskGP` models (#1212).
- Improve error handling in `optimize_acqf_discrete` with a check that `choices` is non-empty (#1228).
- Handle `X_pending` properly in `FixedFeatureAcquisition` (#1233, #1234).
- PE and PLBO support in Ax (#1240, #1241).
- Remove `model.train` call from `get_X_baseline` for better caching (#1289).
- Support `inf` values in `bounds` argument of `optimize_acqf` (#1302).
Bug Fixes
- Update `get_gp_samples` to support input / outcome transforms (#1201).
- Fix cached Cholesky sampling in `qNEHVI` when using `Standardize` outcome transform (#1215).
- Make `task_feature` a required input in `MultiTaskGP.construct_inputs` (#1246).
- Fix CUDA tests (#1253).
- Fix `FixedSingleSampleModel` dtype/device conversion (#1254).
- Prevent inappropriate transforms by putting input transforms into train mode before converting models (#1283).
- Fix `sample_points_around_best` when using 20-dimensional inputs or `prob_perturb` (#1290).
- Skip bound validation in `optimize_acqf` if inequality constraints are specified (#1297).
- Properly handle RFFs when used with a `ModelList` with individual transforms (#1299).
- Update `PosteriorList` to support deterministic-only models and fix `event_shape` (#1300).
Documentation
- Add a note about observation noise in the posterior in the `fit_model_with_torch_optimizer` notebook (#1196).
- Fix custom BoTorch model in Ax tutorial to support new interface (#1213).
- Update MOO docs (#1242).
- Add SMOKE_TEST option to MOMF tutorial (#1243).
- Fix `ModelListGP.condition_on_observations`/`fantasize` bug (#1250).
- Replace space with underscore for proper doc generation (#1256).
- Update PBO tutorial to use EUBO (#1262).
Maintenance Release
New Features
- Implement `ExpectationPosteriorTransform` (#903).
- Add `PairwiseMCPosteriorVariance`, a cheap active learning acquisition function (#1125).
- Support computing quantiles in the fully Bayesian posterior, add `FullyBayesianPosteriorList` (#1161).
- Add expectation risk measures (#1173).
- Implement Multi-Fidelity GIBBON (Lower Bound MES) acquisition function (#1185).
Other Changes
- Add an error message for one-shot acquisition functions in `optimize_acqf_discrete` (#939).
- Validate the shape of the `bounds` argument in `optimize_acqf` (#1142).
- Minor tweaks to SAASBO (#1143, #1183).
- Minor updates to tutorials (24f7fda, #1144, #1148, #1159, #1172, #1180).
- Make it easier to specify a custom `PyroModel` (#1149).
- Allow passing in a `mean_module` to `SingleTaskGP`/`FixedNoiseGP` (#1160).
- Add a note about acquisitions using gradients to base class (#1168).
- Remove deprecated `box_decomposition` module (#1175).
Bug Fixes
- Bug-fixes for `ProximalAcquisitionFunction` (#1122).
- Fix missing warnings on failed optimization in `fit_gpytorch_scipy` (#1170).
- Ignore data-related buffers in `PairwiseGP.load_state_dict` (#1171).
- Make `fit_gpytorch_model` properly honor the `debug` flag (#1178).
- Fix missing `posterior_transform` in `gen_one_shot_kg_initial_conditions` (#1187).
Bayesian Optimization with Preference Exploration, SAASBO for High-Dimensional Bayesian Optimization
New Features
- Implement SAASBO - `SaasFullyBayesianSingleTaskGP` model for sample-efficient high-dimensional Bayesian optimization (#1123).
- Add SAASBO tutorial (#1127).
- Add `LearnedObjective` (#1131), `AnalyticExpectedUtilityOfBestOption` acquisition function (#1135), and a few auxiliary classes to support Bayesian optimization with preference exploration (BOPE).
- Add BOPE tutorial (#1138).