
Commit 7eada74

saitcakmak authored and facebook-github-bot committed
Changelog for 0.7.3 (#1493)
Summary: Pull Request resolved: #1493
Reviewed By: esantorella
Differential Revision: D41192689
Pulled By: saitcakmak
fbshipit-source-id: 2296e98d97e9faf110f64bbba9845fbbd6c93e51
1 parent f76979d commit 7eada74

File tree

1 file changed: +35 -0 lines changed


CHANGELOG.md

Lines changed: 35 additions & 0 deletions
@@ -2,6 +2,41 @@

The release log for BoTorch.

## [0.7.3] - Nov 10, 2022
#### Highlights
* #1454 fixes a critical bug that affected multi-output `BatchedMultiOutputGPyTorchModel`s that were using a `Normalize` or `InputStandardize` input transform and trained using `fit_gpytorch_model/mll` with `sequential=True` (which was the default until 0.7.3). The input transform buffers would be reset after model training, leading to the model being trained on normalized input data but evaluated on raw inputs. This bug had been affecting model fits since the 0.6.5 release (see the first sketch below for the affected setup).
* #1479 changes the inheritance structure of `Model`s in a backwards-incompatible way. If your code relies on `isinstance` checks with BoTorch `Model`s, especially `SingleTaskGP`, you should revisit these checks to make sure they still work as expected (see the second sketch below).
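The setup affected by #1454 is roughly the following. This is a minimal sketch with assumed data shapes and training data, not code taken from the release notes.

```python
# Minimal sketch (assumed example) of the pattern affected by #1454: a
# multi-output SingleTaskGP (a BatchedMultiOutputGPyTorchModel) with a
# Normalize input transform, fit with the pre-0.7.3 default of sequential
# fitting.
import torch
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from botorch.models.transforms.input import Normalize
from gpytorch.mlls import ExactMarginalLogLikelihood

train_X = torch.rand(20, 3, dtype=torch.double)
train_Y = torch.randn(20, 2, dtype=torch.double)  # two outputs -> batched multi-output model

model = SingleTaskGP(train_X, train_Y, input_transform=Normalize(d=3))
mll = ExactMarginalLogLikelihood(model.likelihood, model)
fit_gpytorch_mll(mll)

# Before 0.7.3, the sequential fitting path could reset the Normalize buffers
# after training, so the model was trained on normalized inputs but evaluated
# on raw ones. With 0.7.3 the transform state survives fitting.
posterior = model.posterior(torch.rand(5, 3, dtype=torch.double))
```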
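Relatedly, for #1479, any `isinstance`-based dispatch on BoTorch model classes is worth re-running under 0.7.3. A minimal, illustrative sketch of the kind of check to re-verify (the specific assertions here are examples, not a list of what changed):

```python
# Illustrative isinstance checks of the kind worth re-verifying after #1479.
import torch
from botorch.models import SingleTaskGP
from botorch.models.gpytorch import BatchedMultiOutputGPyTorchModel
from botorch.models.model import Model

model = SingleTaskGP(
    torch.rand(10, 2, dtype=torch.double),
    torch.randn(10, 1, dtype=torch.double),
)

# Downstream code that branches on checks like these (or on more specific
# base classes) should be re-run under 0.7.3 to confirm it still matches the
# intended models after the inheritance restructuring.
assert isinstance(model, Model)
assert isinstance(model, BatchedMultiOutputGPyTorchModel)
```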
#### Compatibility
* Require linear_operator == 0.2.0 (#1491).
#### New Features
* Introduce `bvn`, `MVNXPB`, `TruncatedMultivariateNormal`, and `UnifiedSkewNormal` classes / methods (#1394, #1408).
* Introduce `AffineInputTransform` (#1461).
* Introduce a `subset_transform` decorator to consolidate subsetting of inputs in input transforms (#1468).
#### Other Changes
* Add a warning when using float dtype (#1193).
* Let Pyre know that `AcquisitionFunction.model` is a `Model` (#1216).
* Remove custom `BlockDiagLazyTensor` logic when using `Standardize` (#1414).
* Expose `_aug_batch_shape` in `SaasFullyBayesianSingleTaskGP` (#1448).
* Adjust `PairwiseGP` `ScaleKernel` prior (#1460).
* Pull out `fantasize` method into a `FantasizeMixin` class, so it isn't so widely inherited (#1462, #1479).
* Don't use Pyro JIT by default, since it was causing a memory leak (#1474).
* Use `get_default_partitioning_alpha` for NEHVI input constructor (#1481).
#### Bug Fixes
* Fix `batch_shape` property of `ModelListGPyTorchModel` (#1441).
* Tutorial fixes (#1446, #1475).
* Bug-fix for Proximal acquisition function wrapper for negative base acquisition functions (#1447).
* Handle `RuntimeError` due to constraint violation while sampling from priors (#1451).
* Fix bug in model list with output indices (#1453).
* Fix input transform bug when sequentially training a `BatchedMultiOutputGPyTorchModel` (#1454).
* Fix a bug in `_fit_multioutput_independent` that failed mll comparison (#1455).
* Fix box decomposition behavior with empty or None `Y` (#1489).
## [0.7.2] - Sep 27, 2022

#### New Features
