Releases · meta-pytorch/botorch

Bugfix Release
Bug fixes
- There was a mysterious issue with the 0.2.3 wheel on PyPI, where part of the botorch/optim/utils.py file was not included, which resulted in an ImportError for many central components of the code. Interestingly, the source dist (built with the same command) did not have this issue.
- Preserve order in ChainedOutcomeTransform (#440).
New Features
- Utilities for estimating the feasible volume under outcome constraints (#437).
Pairwise GP for Preference Learning, Sampling Strategies
Introduces a new Pairwise GP model for Preference Learning with pair-wise preferential feedback, as well as a Sampling Strategies abstraction for generating candidates from a discrete candidate set.
New Features
- Add PairwiseGP for preference learning with pair-wise comparison data (#388).
- Add SamplingStrategy abstraction for sampling-based generation strategies, including MaxPosteriorSampling (i.e. Thompson Sampling) and BoltzmannSampling (#218, #407). See the sketch below.
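A minimal sketch combining both features on hypothetical toy data (hyperparameter fitting of the PairwiseGP via its Laplace-approximation MLL is omitted for brevity):

```python
import torch
from botorch.generation.sampling import MaxPosteriorSampling
from botorch.models.pairwise_gp import PairwiseGP

# Six items in 2d; each row of comparisons is (winner index, loser index).
datapoints = torch.rand(6, 2)
comparisons = torch.tensor([[0, 1], [2, 3], [4, 5]])
model = PairwiseGP(datapoints, comparisons)

# Thompson Sampling over a discrete candidate set: draw a joint posterior
# sample and keep the maximizer(s), here without replacement.
X_cand = torch.rand(50, 2)
strategy = MaxPosteriorSampling(model=model, replacement=False)
next_items = strategy(X_cand, num_samples=2)  # 2 x 2 tensor of candidates
```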
Deprecations
- The existing botorch.gen module is moved to botorch.generation.gen, and imports from botorch.gen will raise a warning (an error in the next release) (#218). See the import example below.
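Concretely, call sites should switch to the new module path, e.g. for gen_candidates_scipy:

```python
# Deprecated (emits a warning now, an error in the next release):
from botorch.gen import gen_candidates_scipy

# New location:
from botorch.generation.gen import gen_candidates_scipy
```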
Bug fixes
- Fix & update a number of tutorials (#394, #398, #393, #399, #403).
- Fix CUDA tests (#404).
- Fix Sobol maxdim limitation in prune_baseline (#419).
Other changes
- Better stopping criteria for stochastic optimization (#392).
- Improve numerical stability of LinearTruncatedFidelityKernel (#409).
- Allow batched best_f in qExpectedImprovement and qProbabilityOfImprovement (#411).
- Introduce new logger framework (#412).
- Faster indexing in some situations (#414).
- More generic BaseTestProblem (9e604fe).
Require Python 3.7 and new features
Requires Python 3.7 and adds new features for active learning and multi-fidelity optimization, along with a number of bug fixes.
New Features
- Add qNegIntegratedPosteriorVariance for Bayesian active learning (#377). See the sketch below.
- Add FixedNoiseMultiFidelityGP, analogous to SingleTaskMultiFidelityGP (#386).
- Support scalarize_posterior for m>1 and q>1 posteriors (#374).
- Support subset_output method on multi-fidelity models (#372).
- Add utilities for sampling from simplex and hypersphere (#369).
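A minimal active-learning sketch with qNegIntegratedPosteriorVariance; the toy data, MC integration points, and optimization settings are hypothetical placeholders:

```python
import torch
from botorch.acquisition.active_learning import qNegIntegratedPosteriorVariance
from botorch.fit import fit_gpytorch_model
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

train_X = torch.rand(10, 2)
train_Y = torch.sin(train_X.sum(dim=-1, keepdim=True))
model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_model(ExactMarginalLogLikelihood(model.likelihood, model))

# mc_points: the points over which the posterior variance is integrated.
mc_points = torch.rand(256, 2)
qnipv = qNegIntegratedPosteriorVariance(model=model, mc_points=mc_points)

# Pick the point whose observation most reduces integrated posterior variance.
bounds = torch.stack([torch.zeros(2), torch.ones(2)])
candidate, _ = optimize_acqf(
    qnipv, bounds=bounds, q=1, num_restarts=5, raw_samples=64,
)
```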
Bug fixes
- Fix TestLoader local test discovery (#376).
- Fix batch-list conversion of SingleTaskMultiFidelityGP (#370).
- Validate tensor args before checking input scaling for more informative error messages (#368).
- Fix flaky qNoisyExpectedImprovement test (#362).
- Fix test function in closed-loop tutorial (#360).
- Fix num_output attribute in BoTorch/Ax tutorial (#355).
Compatibility Release
Minor bug fix release.
New Features
- Add a static method for getting batch shapes for batched MO models (#346).
Bug fixes
- Revamp qKG constructor to avoid issue with missing objective (#351).
- Make sure MVES can support sampled costs like KG (#352).
Other changes
- Allow custom module-to-array handling in fit_gpytorch_scipy (#341).
Max-value entropy search, multi-fidelity (cost-aware) optimization
This release adds the popular Max-value Entropy Search (MES) acquisition function, as well as support for multi-fidelity Bayesian optimization via both the Knowledge Gradient (KG) and MES.
New Features
- Add cost-aware KnowledgeGradient (qMultiFidelityKnowledgeGradient) for multi-fidelity optimization (#292).
- Add qMaxValueEntropy and qMultiFidelityMaxValueEntropy max-value entropy search acquisition functions (#298). See the sketch below.
- Add subset_output functionality to (most) models (#324).
- Add outcome transforms and input transforms (#321).
- Add outcome_transform kwarg to model constructors for automatic outcome transformation and un-transformation (#327).
- Add cost-aware utilities for cost-sensitive acquisition functions (#289).
- Add DeterministicModel and DeterministicPosterior abstractions (#288).
- Add AffineFidelityCostModel (f838eac).
- Add project_to_target_fidelity and expand_trace_observations utilities for use in multi-fidelity optimization (1ca12ac).
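A minimal sketch of the new MES acquisition function, combined with the new outcome_transform kwarg; the toy data and candidate set are hypothetical placeholders:

```python
import torch
from botorch.acquisition.max_value_entropy_search import qMaxValueEntropy
from botorch.fit import fit_gpytorch_model
from botorch.models import SingleTaskGP
from botorch.models.transforms.outcome import Standardize
from gpytorch.mlls import ExactMarginalLogLikelihood

train_X = torch.rand(10, 2)
train_Y = torch.sin(train_X.sum(dim=-1, keepdim=True))

# outcome_transform standardizes Y for fitting and automatically
# un-transforms posterior predictions (#321, #327).
model = SingleTaskGP(train_X, train_Y, outcome_transform=Standardize(m=1))
fit_gpytorch_model(ExactMarginalLogLikelihood(model.likelihood, model))

# MES uses a discrete candidate set to approximate the max-value distribution.
candidate_set = torch.rand(500, 2)
qMES = qMaxValueEntropy(model, candidate_set)
value = qMES(torch.rand(1, 1, 2))  # evaluate at a single q=1 point
```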
Performance Improvements
- New prune_baseline option for pruning X_baseline in qNoisyExpectedImprovement (#287). See the sketch below.
- Do not use approximate MLL computation for deterministic fitting (#314).
- Avoid re-evaluating the acquisition function in gen_candidates_torch (#319).
- Use CPU where possible in gen_batch_initial_conditions to avoid memory issues on the GPU (#323).
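For illustration, a short sketch of enabling baseline pruning (the fitted model and train_X are assumed from context):

```python
from botorch.acquisition.monte_carlo import qNoisyExpectedImprovement

# prune_baseline=True drops X_baseline points that are unlikely to be the
# incumbent best, which can substantially reduce the cost of evaluating qNEI.
qNEI = qNoisyExpectedImprovement(
    model=model,          # a fitted BoTorch model (assumed from context)
    X_baseline=train_X,   # previously evaluated points
    prune_baseline=True,
)
```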
Bug fixes
- Properly register NoiseModelAddedLossTerm in HeteroskedasticSingleTaskGP (671c93a).
- Fix batch mode for MultiTaskGPyTorchModel (#316).
- Honor propagate_grads argument in fantasize of FixedNoiseGP (#303).
- Properly handle diag arg in LinearTruncatedFidelityKernel (#320).
Other changes
- Consolidate and simplify multi-fidelity models (#308).
- New license header style (#309).
- Validate shape of best_f in qExpectedImprovement (#299).
- Support specifying observation noise explicitly for all models (#256).
- Add num_outputs property to the Model API (#330).
- Validate output shape of models upon instantiating acquisition functions (#331).
Knowledge Gradient (one-shot), various maintenance
Breaking Changes
- Require explicit output dimensions in BoTorch models (#238)
- Make joint_optimize / sequential_optimize return acquisition function values (#149) [note deprecation notice below]
- standardize now works on the second to last dimension (#263)
- Refactor synthetic test functions (#273)
New Features
- Add qKnowledgeGradient acquisition function (#272, #276) (see the sketch below)
- Add input scaling check to standard models (#267)
- Add cyclic_optimize, convergence criterion class (#269)
- Add settings.debug context manager (#242)
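A minimal sketch of the one-shot qKnowledgeGradient, assuming a fitted two-dimensional model (construction omitted) and illustrative optimization settings; num_fantasies is kept small purely for demonstration:

```python
import torch
from botorch.acquisition import qKnowledgeGradient
from botorch.optim import optimize_acqf

# One-shot KG: fantasy solutions are optimized jointly with the q candidates,
# so no nested inner optimization loop is required.
qKG = qKnowledgeGradient(model, num_fantasies=16)

bounds = torch.stack([torch.zeros(2), torch.ones(2)])
candidates, acq_value = optimize_acqf(
    qKG, bounds=bounds, q=2, num_restarts=10, raw_samples=128,
)
```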
Deprecations
- Consolidate sequential_optimize and joint_optimize into optimize_acqf (#150) (see the migration sketch below)
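Roughly, existing call sites migrate to the single entry point as follows (a sketch; the sequential flag is assumed to select the greedy sequential behavior formerly provided by sequential_optimize):

```python
from botorch.optim import optimize_acqf

# Before: joint_optimize(acq, bounds, q=3, ...) or sequential_optimize(...)
# After: one entry point; sequential=True recovers greedy sequential behavior.
candidates, acq_values = optimize_acqf(
    acq_function=acq,  # an instantiated acquisition function (from context)
    bounds=bounds, q=3, num_restarts=10, raw_samples=128,
    sequential=False,
)
```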
Bug fixes
- Properly pass noise levels to GPs using a FixedNoiseGaussianLikelihood (#241) [requires gpytorch > 0.3.5]
- Fix q-batch dimension issue in ConstrainedExpectedImprovement (6c06718)
- Fix parameter constraint issues on GPU (#260)
Minor changes
- Add decorator for concatenating pending points (#240)
- Draw independent sample from prior for each hyperparameter (#244)
- Allow dim > 1111 for gen_batch_initial_conditions (#249)
- Allow optimize_acqf to use q>1 for AnalyticAcquisitionFunction (#257)
- Allow excluding parameters in fit functions (#259)
- Track the final iteration objective value in fit_gpytorch_scipy (#258)
- Error out on unexpected dims in parameter constraint generation (#270)
- Compute acquisition values in gen_ functions w/o grad (#274)
Compatibility and Maintenance release
Compatibility
- Updates to support breaking changes in PyTorch to boolean masks and tensor comparisons (#224).
- Require PyTorch >=1.2 (#225).
- Require GPyTorch >=0.3.5 (itself a compatibility release).
New Features
- Add FixedFeatureAcquisitionFunction wrapper that simplifies optimizing acquisition functions over a subset of input features (#219). See the sketch below.
- Add ScalarizedObjective for scalarizing posteriors (#210).
- Change default optimization behavior to use L-BFGS-B for box constraints (#207).
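A minimal sketch of fixing one input feature while optimizing over the rest; the base acquisition function acq is assumed to be defined over a 3-dimensional input:

```python
import torch
from botorch.acquisition.fixed_feature import FixedFeatureAcquisitionFunction

# Fix feature 2 of a d=3 problem to the value 0.5; the wrapped acquisition
# function then only takes the remaining 2 features as input.
ff_acq = FixedFeatureAcquisitionFunction(
    acq_function=acq, d=3, columns=[2], values=torch.tensor([0.5]),
)
value = ff_acq(torch.rand(1, 1, 2))  # evaluate on the 2 free features
```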
Bug fixes
- Add validation to candidate generation (#213), making sure constraints are strictly satisfied (rather than just up to numerical accuracy of the optimizer).
Minor changes
- Introduce AcquisitionObjective base class (#220).
- Add propagate_grads context manager, replacing the propagate_grads kwarg in model.posterior() calls (#221). See the sketch below.
- Add batch_initial_conditions argument to joint_optimize() for warm-starting the optimization (ec3365a).
- Add return_best_only argument to joint_optimize() (#216). Useful for implementing advanced warm-starting procedures.
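For instance, a sketch of the new context manager replacing the old kwarg (the fitted model is assumed from context):

```python
import torch
from botorch import settings

X = torch.rand(5, 2, requires_grad=True)

# Previously: model.posterior(X, propagate_grads=True)
with settings.propagate_grads(True):
    posterior = model.posterior(X)  # gradients flow back to X
```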
Maintenance release
Bug fixes
- Avoid PyTorch bug resulting in bad gradients on GPU by requiring GPyTorch >= 0.3.4
- Fixes to resampling behavior in MCSamplers (#204)
API updates, more robust model fitting
Breaking changes
- Rename botorch.qmc to botorch.sampling, move MC samplers from acquisition.sampler to botorch.sampling.samplers (#172)
New Features
- Add condition_on_observations and fantasize to the Model level API (#173) (see the sketch below)
- Support pending observations generically for all MCAcquisitionFunctions (#176)
- Add fidelity kernel for training iterations/training data points (#178)
- Support for optimization constraints across q-batches (to support things like sample budget constraints) (2a95a6c)
- Add ModelList <-> Batched Model converter (#187)
- New test functions
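A minimal sketch of the new Model-level API (the fitted model is assumed from context; shapes are illustrative):

```python
import torch
from botorch.sampling.samplers import SobolQMCNormalSampler

# Condition the model on new observations without refitting hyperparameters.
new_X = torch.rand(3, 2)
new_Y = torch.sin(new_X.sum(dim=-1, keepdim=True))
conditioned = model.condition_on_observations(new_X, new_Y)

# Construct a fantasy model by sampling outcomes at hypothetical points.
sampler = SobolQMCNormalSampler(num_samples=8)
fantasy_model = model.fantasize(torch.rand(2, 2), sampler=sampler)
```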
Improved functionality:
- More robust model fitting
- Introduce optional batch limit in joint_optimize to increase scalability of parallel optimization (baab578)
- Change constructor of ModelListGP to comply with GPyTorch's IndependentModelList constructor (a6cf739)
- Use torch.random to set default seed for samplers (rather than random) to make sampling reproducible when setting torch.manual_seed (ae507ad)
Performance Improvements
- Use einsum in LinearMCObjective (22ca295)
- Change default Sobol sample size for MCAcquisitionFunctions to be base-2 for better MC integration performance (5d8e818)
- Add ability to fit models in SumMarginalLogLikelihood sequentially (and make that the default setting) (#183)
- Do not construct the full covariance matrix when computing posterior of single-output BatchedMultiOutputGPyTorchModel (#185)
Bug fixes
- Properly handle observation_noise kwarg for BatchedMultiOutputGPyTorchModels (#182)
- Fix an issue where f_best was always max for NoisyExpectedImprovement (410de58)
- Fix bug and numerical issues in initialize_q_batch (844dcd1)
- Fix numerical issues with inv_transform for qMC sampling (#162)
Other
- Bump GPyTorch minimum requirement to 0.3.3