Releases: IntelLabs/bayesian-torch
Bayesian-Torch 0.5.0
This release adds support for quantization of all the Bayesian convolutional layers listed below, in addition to the previously supported Conv2dReparameterization and Conv2dFlipout:
- Conv1dReparameterization
- Conv3dReparameterization
- ConvTranspose1dReparameterization
- ConvTranspose2dReparameterization
- ConvTranspose3dReparameterization
- Conv1dFlipout
- Conv3dFlipout
- ConvTranspose1dFlipout
- ConvTranspose2dFlipout
- ConvTranspose3dFlipout
This release also includes fixes for the following issues:
- Issue #27
- Issue #21
- Issue #24
- Issue #34
What's Changed
- Add quant prepare functions 342ca39
- Fix bug in post-training quantization evaluation due to Jit trace f5c7126
- Add quantization example for ImageNet/ResNet-50 3e74914
- Correct the order of the groups and dilation parameters in the ConvTranspose layers 97ba16a
Full Changelog: v0.4.0...v0.5.0
Bayesian-Torch 0.4.0
This release introduces a quantization framework for Bayesian neural networks in Bayesian-Torch. It supports post-training quantization of Bayesian deep neural network models, enabling optimized and efficient INT8 inference on Intel platforms.
What's Changed
- Add support for Bayesian neural network quantization | PR: #23
- Include an example of performing post-training quantization of Bayesian neural network models | commit c3e9a0f
- Add support for output padding in flipout layers | PR: #20
Contributors
- @junliang-lin made their first contribution in #20
- @ranganathkrishnan
Full Changelog: v0.3.0...v0.4.0
Bayesian-Torch 0.3.0
Support arbitrary kernel sizes in the Bayesian convolutional layers.
v0.2.1
Includes the new dnn_to_bnn feature:
An API to convert a deterministic deep neural network (DNN) model of any architecture to a Bayesian deep neural network (BNN) model, simplifying model definition through drop-in replacement of Convolutional, Linear, and LSTM layers with the corresponding Bayesian layers. This enables seamless conversion of existing large-model topologies to Bayesian deep neural network models for uncertainty-aware applications.
v0.2.0-pre
Includes the new dnn_to_bnn feature (pre-release): the same API described under v0.2.1 above, converting a deterministic DNN model of any architecture to a Bayesian DNN model via drop-in replacement of Convolutional, Linear, and LSTM layers.
Full Changelog: v0.1...v0.2.0
v0.1
Merge pull request #9 from piEsposito/main: let the models return the prediction only, saving the KL divergence as an attribute.