Releases: uxlfoundation/oneDNN

v0.12

29 Dec 23:04

Performance optimizations

  • Improved performance of fp32 direct and Winograd convolution on Intel(R) Xeon(R) processors with Intel(R) Advanced Vector Extensions 512 (Intel(R) AVX512) support
  • Improved performance of int8 direct convolution on Intel Xeon processors with Intel AVX512 instruction set
  • Improved batch normalization performance on Intel Xeon processors with Intel AVX512 instruction set
  • Optimized dilated convolution backward propagation
  • Improved initialization time of GEMM-based convolution implementations

New functionality

  • Support for int8 inference. The following primitives support the int8 data type:
    • reorders (including quantization and dequantization)
    • convolution
    • pooling
    • eltwise
    • sum
    • concat
  • Layer fusion support with the new post-ops API. Primitives that support fusion:
    • forward convolution with eltwise for inference and training
    • convolution with sum for inference
    • batch normalization with eltwise for training
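The int8 reorders listed above quantize fp32 data into 8-bit integers and back. The following is an illustrative sketch of symmetric int8 quantization, not library code; the function names and the single-scale scheme are assumptions for the example.

```python
# Illustrative sketch (not the library API): symmetric int8 quantization of
# the kind performed by int8 reorders. A float tensor is scaled, rounded,
# and clamped to the signed 8-bit range; dequantization reverses it.

def quantize_s8(values, scale):
    """Map float values to int8 using a single quantization scale."""
    return [max(-128, min(127, round(v * scale))) for v in values]

def dequantize_s8(values, scale):
    """Recover approximate float values from int8 data."""
    return [v / scale for v in values]

data = [0.5, -1.25, 3.0]
scale = 127.0 / 3.0            # chosen so the max magnitude maps near 127
q = quantize_s8(data, scale)   # integers in [-128, 127]
d = dequantize_s8(q, scale)    # close to the original, within 1/scale
```

The quantization scale controls the trade-off between range and precision: each dequantized value differs from the original by at most half a quantization step.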

API deprecations and breaking changes

  • ReLU primitive is deprecated. The functionality is now part of the eltwise primitive
  • Merged convolution/ReLU primitive is deprecated. The functionality is available through the new post-ops API
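The idea behind post-op fusion, which replaces the merged convolution/ReLU primitive, is that the fused primitive applies its post-op to each output element as it is produced rather than in a second pass over memory. A conceptual sketch in plain Python (helper names are hypothetical, not the library API):

```python
# Conceptual sketch of post-op fusion (helper names are hypothetical).
# A fused convolution applies its post-op, e.g. ReLU, element-wise in the
# same pass that produces the output.

def conv1d(signal, kernel):
    """Plain valid-mode 1-D convolution (correlation form) as the base op."""
    n = len(signal) - len(kernel) + 1
    return [sum(signal[i + j] * kernel[j] for j in range(len(kernel)))
            for i in range(n)]

def conv1d_fused(signal, kernel, post_op):
    """Same convolution, with the post-op applied in a single pass."""
    n = len(signal) - len(kernel) + 1
    return [post_op(sum(signal[i + j] * kernel[j] for j in range(len(kernel))))
            for i in range(n)]

relu = lambda x: max(x, 0.0)
signal, kernel = [1.0, -2.0, 3.0, -4.0, 5.0], [1.0, 0.5]
fused = conv1d_fused(signal, kernel, relu)
separate = [relu(x) for x in conv1d(signal, kernel)]
# The fused result matches conv followed by a separate eltwise pass.
```

In the real library the fused version saves a full read-modify-write of the output tensor, which is why fusion helps memory-bound inference workloads.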

Thanks to the contributors

This release contains contributions from many Intel(R) Performance Libraries developers as well as @kruus, Yong Wu, Daoxin Pan, and Zhiming Wang. We would also like to thank everyone who asked questions and reported issues.

* Other names and brands may be claimed as the property of others.

v0.11

30 Oct 15:49

Performance optimizations

  • Improved convolution performance on future Intel(R) Xeon Phi(TM) processors with support for the AVX512_4FMAPS and AVX512_4VNNIW instruction groups
  • Improved convolution performance on Intel(R) Xeon processors with Intel(R) AVX512 instruction set support
  • Improved performance of GEMM-based convolutions for small minibatches
  • Improved performance of the Winograd convolution algorithm on Intel Xeon Phi processors

New functionality

  • Added backpropagation support for dilated convolution.
  • The eltwise primitive is extended with support for square, abs, square root, linear, bounded ReLU, soft ReLU, and logistic.
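The new eltwise algorithms can be sketched as simple scalar functions. The formulas below are the commonly used definitions for these activations; `alpha` and `beta` stand in for the primitive's scalar parameters, and the dispatch-by-name helper is an illustration, not the library interface.

```python
import math

# Sketch of the new eltwise algorithms (standard formulas; alpha and beta
# model the primitive's scalar parameters).

def eltwise(alg, x, alpha=1.0, beta=0.0):
    if alg == "square":
        return x * x
    if alg == "abs":
        return abs(x)
    if alg == "sqrt":
        return math.sqrt(x)
    if alg == "linear":                     # alpha * x + beta
        return alpha * x + beta
    if alg == "bounded_relu":               # clip to [0, alpha]
        return min(max(x, 0.0), alpha)
    if alg == "soft_relu":                  # log(1 + exp(x))
        return math.log1p(math.exp(x))
    if alg == "logistic":                   # 1 / (1 + exp(-x))
        return 1.0 / (1.0 + math.exp(-x))
    raise ValueError(alg)
```

Bounded ReLU is a clipped variant of ReLU (useful for quantized inference, since its output range is known in advance), while soft ReLU and logistic are smooth alternatives.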

Usability improvements

  • Added macOS* support.

Breaking changes to the API

  • All real-valued parameters of op descriptors now have float data type (previously double). This change breaks C API backward compatibility for the sum primitive; refer to 0bbb22e for details. The C++ API maintains backward compatibility.

Thanks to the contributors

This release contains contributions from many Intel(R) Performance Libraries developers as well as Yu Yang @reyoung, Vladimir Mironov @vamironov, Nishant Patel @nbpatel, Leona Cook @indie, Jayaram Bobba @jbobba, Elena Gvozdeva. We would also like to thank everyone who asked questions and reported issues.

v0.10

11 Aug 20:18

Performance optimizations

  • Improved performance on processors with Intel(R) AVX512 instruction set support
  • Added optimizations for future Intel(R) Xeon Phi(TM) processors with support for the AVX512_4FMAPS and AVX512_4VNNIW instruction groups

New functionality

  • Added support of Winograd convolution algorithm. The implementation has initial optimizations for Intel Xeon Phi processors with Intel AVX512 instruction set support.
  • Introduced the elementwise primitive with three types of activations: ReLU (rectified linear unit), ELU (parametric exponential linear unit) and TANH (hyperbolic tangent non-linearity).
  • Added dilation support to forward convolution. The implementation is optimized for processors with Intel(R) SSE 4.2 and Intel(R) AVX instruction sets support.
  • Feature preview: Added int16 support in convolution, ReLU, pooling and inner product for training. Added optimized s16s16s32 convolution flavor for future Intel Xeon Phi processors.
  • Feature preview: Added optimized pooling with int8 support.
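Dilated convolution, mentioned above, samples the input with gaps between kernel taps, enlarging the receptive field without adding weights. A minimal 1-D sketch, using the convention that a dilation of 0 means a regular (dense) convolution:

```python
# Sketch of 1-D dilated convolution (valid padding). With dilation d, each
# kernel tap steps 1 + d input elements, so the kernel covers a wider span
# of the input without extra weights. dilation=0 is a regular convolution.

def dilated_conv1d(signal, kernel, dilation=0):
    step = 1 + dilation
    span = (len(kernel) - 1) * step + 1          # effective kernel width
    n = len(signal) - span + 1
    return [sum(signal[i + j * step] * kernel[j] for j in range(len(kernel)))
            for i in range(n)]

x = [1.0, 2.0, 3.0, 4.0, 5.0]
k = [1.0, 1.0]
plain = dilated_conv1d(x, k)                 # taps at offsets 0 and 1
dilated = dilated_conv1d(x, k, dilation=1)   # taps at offsets 0 and 2
```

Note that dilation shrinks the valid output: a 2-tap kernel with dilation 1 behaves like a 3-wide kernel with a hole in the middle.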

Usability improvements

  • Added Windows* support.
  • Added benchdnn test suite for comprehensive functional and performance testing of convolutions. The suite supports int8, int16 and fp32 data types.
  • Primitive implementation information can be queried using impl_info_str.

Deprecated functionality

  • ReLU primitive is deprecated and will be removed in future releases. Activation functions, including ReLU, are implemented in the elementwise primitive.

Thanks to the contributors

This release contains contributions from many Intel(R) Performance Libraries developers as well as Guenther Schmuelling @guschmue, Yong Wu, Dmitriy Gorokhov, Menon Jaikrishnan, Erik @kruus, Zhong Z Cao @4pao, Gleb Gladilov and @tensor-tang. We would also like to thank everyone who asked questions and reported issues.

v0.9

19 May 22:59

Performance optimizations

  • Improved performance on processors with Intel(R) AVX2 instruction set support
  • Improved performance on processors with Intel(R) AVX512 instruction set support
  • Added optimizations for Intel(R) Xeon processors with Intel AVX512 instruction set support
  • Added inference optimizations for Intel(R) Atom processors with Intel(R) SSE4.2 support
  • Added JIT implementation of SGEMM for Intel(R) Xeon Phi(TM) processors.

New functionality

  • Average pooling supports 'exclude padding' mode
  • LRN supports arbitrary local size
  • Feature preview: Added int8 support in convolution, ReLU, pooling and inner product. Added optimized u8s8u8 convolution flavor for Intel Xeon processors with Intel AVX512 instruction set support.
  • Feature preview: Added int16 support in convolution, ReLU, pooling and inner product. Added optimized s16s16s32 convolution flavor for future Intel Xeon Phi processors.
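The 'exclude padding' averaging mode changes only the divisor used for windows that overlap zero padding. A 1-D sketch under simplifying assumptions (stride equal to the window size; function name is illustrative, not the library API):

```python
# Sketch of average pooling with and without the 'exclude padding' mode:
# when a window overlaps zero padding, the divisor counts only elements
# that fall inside the source tensor, not the padded zeros.

def avg_pool1d(src, window, pad, exclude_padding):
    padded = [0.0] * pad + src + [0.0] * pad
    out = []
    for i in range(0, len(padded) - window + 1, window):
        vals = padded[i:i + window]
        # Count window positions that map to real (unpadded) elements.
        valid = sum(1 for j in range(i, i + window)
                    if pad <= j < pad + len(src))
        divisor = valid if exclude_padding else window
        out.append(sum(vals) / divisor)
    return out

src = [2.0, 4.0, 6.0]
include = avg_pool1d(src, window=2, pad=1, exclude_padding=False)
exclude = avg_pool1d(src, window=2, pad=1, exclude_padding=True)
```

Only border windows differ between the two modes; interior windows see no padding and produce identical averages.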

Usability improvements

  • Improved the build system to enable integration into other projects.
  • The Intel(R) OpenMP runtime is used when the library is built with the binary dependency
  • Added a feature-based dispatcher to support a wide range of Intel(R) processors and compatible processors

Thanks to the contributors

This release contains contributions from many Intel(R) Performance Libraries developers as well as Ismo Puustinen @ipuustin, Dmitry Gorokhov, Vladimir Dudnik @vladimir-dudnik, @pruthviIntel, and Chris Olivier @cjolivier01. We would also like to thank everyone who asked questions and reported issues.

v0.7

25 Apr 15:01

Pre-release

Changes:

  • Improved performance on processors with Intel(R) AVX2 instruction set support
  • Improved performance on processors with Intel(R) AVX512 instruction set support
  • Extended backward propagation optimizations for Intel(R) AVX2 and Intel AVX512 instruction sets
  • Added an SGEMM-based reference convolution implementation, significantly improving performance for cases not covered by JIT convolution
  • Added a JIT version of the SGEMM function for the Intel(R) AVX2 instruction set. This change makes it possible to build an optimized Intel(R) MKL-DNN without the binary component.
  • Added backward propagation examples

v0.5

07 Feb 21:49

Pre-release

Changes:

  • Added runtime CPUID dispatching mechanism
  • Added initial Intel(R) AVX512 optimizations
  • Improved performance on processors with Intel(R) AVX2 instruction set support
  • Added initial backward propagation optimizations
  • Extended batch normalization primitive API with scale/shift and mean/variance parameters
  • Updated XByak to version 5.40

v0.3

18 Nov 05:30

Pre-release

Changes:

  • Added sum primitive
  • Added backward propagation reference implementation

v0.2

10 Oct 09:49

Pre-release

Changes:

  • Added batch normalization
  • Added split and concat
  • Added local response normalization within the channel
  • Added average pooling

v0.1

29 Aug 04:57

Pre-release

This release is a technical preview with functionality limited to the forward path of the AlexNet and VGG topologies.