
Releases: oneapi-src/oneDNN

v1.0-pc

09 Mar 00:19
Pre-release

This is a preview candidate for MKL-DNN v1.0.

The preview candidate implements the changes announced in the v1.0 RFC. Please provide feedback and report bugs in GitHub issues.

v0.18

02 Mar 00:07

Performance optimizations

  • Improved RNN functionality performance.
  • Improved performance of GEMM-based convolutions.
  • Improved performance of backpropagation for strided convolutions on processors with Intel® AVX2 support.
  • Improved performance of the gemm_s8u8s32 and gemm_s8s8s32 functions on processors with Intel® AVX512 and Intel® AVX512-DL Boost instruction sets.
  • Improved inner product performance on processors with Intel AVX512 and Intel AVX512-DL Boost instruction sets.
  • Improved performance of int8 convolutions and deconvolutions on processors with Intel AVX512 and Intel AVX512-DL Boost instruction sets.

New functionality

  • Convolutions now support arbitrary elementwise operations as post-ops.
  • Introduced support of signed int8 data for the inner product primitive.
  • Introduced int8 LSTM cell support.
  • Introduced automatic dispatching between the direct and Winograd convolution algorithms.
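Post-ops attach an elementwise operation to a primitive so it is applied to the output in place, rather than in a separate pass over memory. A minimal sketch of the idea in plain C++ (not the mkldnn API; `conv1d_with_postop` and the hard-coded 1D convolution are ours, for illustration only):

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Illustrative only: a 1D "convolution" whose accumulator is passed through an
// arbitrary elementwise post-op as each output element is written, so no
// intermediate buffer ever holds the pre-activation values.
std::vector<float> conv1d_with_postop(const std::vector<float>& src,
                                      const std::vector<float>& weights,
                                      std::function<float(float)> postop) {
    const std::size_t out_len = src.size() - weights.size() + 1;
    std::vector<float> dst(out_len);
    for (std::size_t i = 0; i < out_len; ++i) {
        float acc = 0.f;
        for (std::size_t k = 0; k < weights.size(); ++k)
            acc += src[i + k] * weights[k];
        dst[i] = postop(acc);  // elementwise op fused into the output write
    }
    return dst;
}
```

Fusing the elementwise step this way is what saves a full read/write of the destination tensor compared with running a separate eltwise primitive afterwards.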

API deprecations and breaking changes

  • Previously deprecated APIs were removed:
    • relu function
    • convolution_relu function
    • double precision scales support in sum
    • negative_slope parameter in eltwise
    • omit_stats flag in batch normalization

Usability improvements

  • Added library version information to verbose output and to headers.
  • Added information about detected instruction set to verbose output.
  • Introduced mkldnn_version function.
  • Added APIs to override behaviors controlled via environment variables, including verbose mode and JIT dump.

Thanks to the contributors

This release contains contributions from many Intel Performance Libraries developers as well as Ruslan Baratov @ruslo, Konstantin Basargin @basargin, Jacek Czaja @jczaja, Eugene Zhulenev @ezhulenev, Haitao Feng @fenghaitao, Yinghai Liu @yinghai, Masahiro Sakai @msakai, and Alexander Grund @Flamefire. We would also like to thank everyone who asked questions and reported issues.

v0.17.4

12 Feb 21:55

This is a patch release containing the following changes to Intel MKL-DNN v0.17.3:

  • Fix bug in build system for old versions of CMake (61f953e)

v0.18-rc

08 Feb 03:01
Pre-release

This is a release candidate package for MKL-DNN v0.18. Please provide feedback and report bugs in GitHub issues.

v0.17.3

01 Feb 00:58

This is a patch release containing the following changes to MKL-DNN v0.17.2:

  • Fix integer overflow in GEMM (059b5fd)
  • Update Xbyak* to 5.751 (4f809d0)

v0.17.2

20 Dec 02:47

This is a patch release containing the following changes to MKL-DNN v0.17.1:

  • Fix data race during initialization in the GEMM-based convolution (763513e)
  • Fix number of dimensions of a tensor in the backward deconvolution primitive descriptor (5a0a50c)
  • Fix Valgrind* complaints (ed4b08c)

v0.17.1

29 Nov 00:32

This is a patch release containing the following change to MKL-DNN v0.17:

  • Tentatively turn on reference direct copy reorder for GNU* Compiler Collection (567dfb5)

v0.17

19 Nov 20:09

Performance optimizations

  • Improved int8 convolutions performance on processors with Intel® AVX512-DL Boost instruction set support.
  • Improved performance of fp32 convolutions with the number of input and output channels not divisible by the SIMD width for processors with Intel® AVX2 instruction set support.
  • Improved performance of Recurrent Neural Networks (RNNs) functionality.
  • Improved performance of int8 deconvolution.
  • Added optimizations for fp32 inference and training for processors with Intel® AVX instruction set support.
  • Added optimizations for convolutions and auxiliary primitives with 3D spatial data for processors with Intel® AVX2 instruction set support.
  • Improved int8 Winograd convolution performance for real-time inference use cases.

New functionality

  • Introduced int8 data-type support for inner-product primitive.
  • Introduced support for int8 convolutions with signed input and signed weights.
  • Introduced 1D spatial data support in convolution and auxiliary primitives. This functionality is optimized for processors with Intel® AVX512 instruction set support.
  • Introduced the Shuffle primitive.
  • Introduced a general-purpose matrix-matrix multiplication function for int8 data (gemm_s8u8s32 and gemm_s8s8s32).
  • Feature preview: Threading Building Blocks (TBB) support.
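The int8 GEMM functions compute, in int32 arithmetic, a matrix product with offsets applied to the inputs and the output: roughly C := alpha * (A + ao) * (B + bo) + beta * C + co, where A is signed int8 and, for gemm_s8u8s32, B is unsigned int8. A self-contained reference sketch of that arithmetic (the function name `gemm_s8u8s32_ref`, row-major layout with no transposes, and a single scalar `co` are our simplifications; the real API also takes transpose flags and a C-offset mode):

```cpp
#include <cstdint>
#include <vector>

// Reference semantics sketch for int8 GEMM with input/output offsets:
//   C[m][n] = alpha * sum_k (A[m][k] + ao) * (B[k][n] + bo)
//             + beta * C[m][n] + co
// Row-major, no transposes; accumulation is done in int32.
void gemm_s8u8s32_ref(int M, int N, int K, float alpha,
                      const std::vector<int8_t>& A, int8_t ao,
                      const std::vector<uint8_t>& B, uint8_t bo,
                      float beta, std::vector<int32_t>& C, int32_t co) {
    for (int m = 0; m < M; ++m)
        for (int n = 0; n < N; ++n) {
            int32_t acc = 0;
            for (int k = 0; k < K; ++k)
                acc += (int32_t(A[m * K + k]) + ao)
                     * (int32_t(B[k * N + n]) + bo);
            C[m * N + n] = int32_t(alpha * acc + beta * C[m * N + n]) + co;
        }
}
```

The input offsets exist so that zero-point-shifted quantized tensors can be multiplied directly, without first widening them to int32 in memory.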

API deprecations and breaking changes

  • The order of the gates for LSTM cells was changed to input, forget, candidate, output. Code that relies on the previous gate order, including previously saved weights, may produce incorrect results until it is updated.
  • Creating a backward RNN primitive in C++ without a hint (the corresponding forward primitive descriptor) is deprecated.
  • Int8 Winograd convolution behavior with respect to scales is aligned with the direct convolution algorithm.
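Because the gate order changed, LSTM weights and biases saved under an older version must have their gate blocks permuted before being reused. A generic sketch in plain C++ (not the mkldnn API; the layout of four equal contiguous gate blocks, and any concrete permutation you pass in, are assumptions for illustration, not the actual mapping between library versions):

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Reorder a weight buffer laid out as 4 contiguous, equally sized gate blocks.
// perm[g] gives, for each destination slot g in the new gate order
// (input, forget, candidate, output), the source gate index in the old layout.
std::vector<float> reorder_gates(const std::vector<float>& w,
                                 const std::array<int, 4>& perm) {
    const std::size_t block = w.size() / 4;
    std::vector<float> out(w.size());
    for (int g = 0; g < 4; ++g)
        for (std::size_t i = 0; i < block; ++i)
            out[g * block + i] = w[std::size_t(perm[g]) * block + i];
    return out;
}
```

The same permutation must be applied consistently to every gate-blocked tensor (input weights, recurrent weights, and biases).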

Usability improvements

  • Primitives now accept tensors with a zero-sized dimension and perform no work in that case.
  • Added support for clang sanitizers.
  • Build system extended with the following capabilities:
    • Allow building with static Intel MKL by passing -DMKLDNN_USE_MKL=FULL:STATIC to cmake
    • Allow specifying which Intel MKL flavor to use by passing -DMKLDNN_USE_MKL={DEF,NONE,ML,FULL} to cmake
    • Allow using the compiler's OpenMP runtime by passing -DMKLDNN_THREADING=OMP:COMP to cmake
    • Allow building a static library by passing -DMKLDNN_LIBRARY_TYPE=STATIC to cmake

Thanks to the contributors

This release contains contributions from many Intel Performance Libraries developers as well as Dmitry Baksheev @dbakshee, Yuta Okamoto @okapies, and Eduardo Gonzalez @wmeddie. We would also like to thank everyone who asked questions and reported issues.

*Other names and brands may be claimed as the property of others.

v0.17-rc

02 Nov 04:33
Pre-release

This is a release candidate package for MKL-DNN v0.17. It is made available for testing by the community. Please provide feedback and report bugs in GitHub issues.

v0.16

15 Aug 00:28

Performance optimizations

  • Improved performance of int8 convolutions with the number of input and output channels not divisible by the SIMD width on Intel(R) Xeon processors with Intel(R) AVX512 instruction set support.
  • Winograd convolutions optimized for fp32 real time inference on Intel(R) Xeon processors with Intel(R) AVX512 instruction set support.
  • Optimized weights update of dilated convolutions for fp32 data type on Intel(R) Xeon processors with Intel(R) AVX512 instruction set support.
  • Improved performance of reorder primitive for int8 data type.

New functionality

  • Added dilation support for deconvolution (transposed convolution) primitive.
  • Introduced deconvolution (transposed convolution) primitive for int8 data type.

API deprecations and breaking changes

  • The default behavior of gemm-based convolutions was changed. They now use internally allocated thread-local scratchpad memory for im2col and col2im operations, weights reduction, and accumulation. This may cause correctness issues when multiple gemm-based convolutions are created in one thread and executed concurrently in different threads. To support concurrent execution, the MKL-DNN library must be built with the -DMKLDNN_ENABLE_CONCURRENT_EXEC=TRUE CMake flag.
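The correctness caveat follows from how thread-local scratchpads behave: the buffer is bound to the thread that touches it, not to the primitive. A minimal sketch of the pattern in plain C++ (not mkldnn internals; `thread_scratchpad` is our illustrative helper):

```cpp
#include <cstddef>
#include <vector>

// Sketch: each thread lazily grows and reuses its own scratchpad buffer
// instead of allocating per call. Repeated calls on one thread share one
// buffer, which is why two primitives created on the same thread but executed
// concurrently on other threads could clash without the library's opt-in flag.
std::vector<float>& thread_scratchpad(std::size_t size) {
    thread_local std::vector<float> buf;
    if (buf.size() < size) buf.resize(size);
    return buf;
}
```

The trade-off is standard: thread-local reuse avoids repeated allocation on the hot path, at the cost of tying buffer ownership to threads rather than to primitive objects.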

Thanks to the contributors

This release contains contributions from many Intel(R) Performance Libraries developers as well as Yasser Zamani @yasserzamani and Loo Rong Jie @rongjiecomputer. We would also like to thank everyone who asked questions and reported issues.

*Other names and brands may be claimed as the property of others.