Releases: uxlfoundation/oneDNN

v1.6.5

27 Oct 19:21

This is a patch release containing the following changes to v1.6.4:

  • Fixed issue with memory descriptor size computations (fc836a3)
  • Reduced required scratchpad size for RNNs (c7e165a)
  • Improved performance of fp16 convolution with bias on GPUs (943760e)
  • Fixed segmentation fault for convolution weight gradient on systems with Intel AVX512 support (85e92b3)

v2.0-beta10

29 Oct 00:02

Pre-release

This is a preview release for oneDNN v2.0. The release is based on oneDNN v1.7.

Binary distribution of this software is available as Intel(R) oneAPI Deep Neural Network Library in Intel(R) oneAPI.

Performance optimizations

  • Intel Processor Graphics and Xe architecture-based Graphics:
    • Improved performance of convolutions and matmul primitives.
    • Improved performance of int8 convolutions for NHWC activations format.
  • Intel Architecture processors:
    • Improved performance of primitives for NHWC activations format.
    • Improved fp32 GEMM performance for small N.
    • Improved performance of int8 primitives for processors with Intel SSE4.1 instruction set support.
  • AArch64-based processors
    • Added support for Arm Performance Libraries (ArmPL). ArmPL provides an optimized GEMM implementation for AArch64.
    • Added support for [Arm Compute Library (ArmCL)](https://github.com/arm-software/ComputeLibrary). ArmCL provides an optimized convolution implementation for AArch64.

New Functionality

  • Added support for IBM Z (s390x) and IBM POWER (powerpc64) architectures.
  • Introduced RNN GRU for GPU.
  • Introduced int8 RNN GRU for CPU.
  • Introduced asymmetric quantization support for convolutions, matmul, and inner product
  • Introduced dilated pooling support.
  • Extended matmul primitive to support multiple dimensions in batch and broadcast on CPU.
  • (preview) Introduced binary post-op for (de)-convolution, pooling, eltwise, binary, inner product, and matmul.
  • (preview) Extended the number of supported post-ops for primitives to 20.
  • (preview) Introduced reduction primitive for CPU. Together with post-ops, this functionality makes it possible to implement normalization.

Thanks to the contributors

This release contains contributions from the project core team as well as Ben Fitch, Brian Shi, David Edelsohn @edelsohn, Diana Bite @diaena, Moaz Reyad @moazreyad, Nathan John Sircombe @nSircombe, Niels Dekker @N-Dekker, Peter Caday @petercad, Pinzhen Xu @pinzhenx, pkubaj @pkubaj, Tsao Zhong @CaoZhongZ. We would also like to thank everyone who asked questions and reported issues.

Known Issues and Limitations

  • f32 convolutions may hang sporadically on Intel Processor Graphics Gen11. No workaround available.
  • Pooling, batch normalization, and binary primitives may segfault when executed on Xe architecture-based graphics. No workaround available.
  • oneDNN functionality may corrupt memory and lead to application crash on GPU with Level Zero runtime in USM mode on all GPU platforms. As a workaround use SYCL buffers or OpenCL runtime:
    export SYCL_BE=PI_OPENCL
  • Matmul function may hang on GPU with Level Zero runtime on Windows. As a workaround use OpenCL runtime:
    export SYCL_BE=PI_OPENCL
  • Convolution may hang on GPU for shapes with 3 input channels. No workaround available.
  • Non-Intel GPUs are not supported. The library API allows creating a DNNL engine by index (the order of devices is determined by the SYCL runtime), and there is no check that a GPU device is non-Intel. For more control, users can create a DNNL engine by passing a SYCL device and context explicitly.
  • GPU kernels that run longer than a certain time (which depends on OS and system settings) may cause the application to appear to hang. Driver or system settings can be configured to disable this timeout and avoid hangs of DPC++ or OpenCL programs, including oneDNN examples:
    o On Linux* (See more details at OpenCL™ Driver for Intel® HD, Iris™, and Iris™ Pro Graphics for Linux):
    $ sudo bash -c 'echo N > /sys/module/i915/parameters/enable_hangcheck'
    o On Windows* (See more details at Timeout Detection and Recovery (TDR) Registry Keys):
    Increase TdrDelay and TdrDdiDelay values in registry
  • See DPC++ limitations that impact the library as well.

v1.6.4

01 Oct 15:12

This is a patch release containing the following changes to v1.6.3:

  • Fixed performance regression in dnnl_sgemm with N=1 (379a216, f35e991)
  • Extended matmul to support multiple dimensions and broadcast (0728f26)
  • Fixed performance regression for convolution weight gradient implementation for Intel AVX2 (9ab050b, 6cd0c35)
  • Fixed unknown primitive kind assertion on GPU (c95a01c)
  • Fixed build issue on Windows for the case when oneDNN is built as submodule (2fceddf)
  • Fixed issues with NaN results produced by dnnl_sgemm in some scenarios (5ce95ef)
  • Improved performance for convolution backpropagation with 1x1 filter and NHWC activations on systems with Intel AVX2 support (74bfc74)
  • Fixed correctness issue for convolution with 3D spatial (bf6ee84)
  • Fixed potential segmentation fault when destroying RNN primitive (0d9839b)
  • Fixed performance regression for fp32 convolutions Intel AVX512 implementation (668e282)

v1.7-rc

29 Sep 18:56

Pre-release

This is a release candidate for oneDNN v1.7. Please provide feedback and report bugs in GitHub issues.

v1.6.3

11 Sep 16:47

This is a patch release containing the following changes to v1.6.2:

v2.0-beta09

16 Sep 23:37

Pre-release

This is a preview release for oneDNN v2.0. This is a patch release based on v2.0-beta08.

Binary distribution of this software is available as Intel(R) oneAPI Deep Neural Network Library in Intel(R) oneAPI.

Known Issues and Limitations

  • int8 LSTM cell may produce incorrect results when dimensions exceed 16.
  • oneDNN functions executed on GPU with the Level Zero driver in a Remote Desktop Connection session on Windows may produce incorrect results or hang the application. As a workaround, switch the Intel oneAPI DPC++ Runtime to the OpenCL backend by setting the environment variable SYCL_BE=PI_OPENCL.
  • Average pooling backpropagation may produce incorrect results for 1D spatial on Intel® Processor Graphics Gen9.
  • Optimized primitives can crash or fail for huge spatial sizes on CPU.
  • f32 convolutions may fail sporadically on Intel® Processor Graphics Gen11 due to a known issue in Intel Graphics Compiler.
  • Non-Intel GPUs are not supported. The library API allows creating a DNNL engine by index (the order of devices is determined by the SYCL runtime), and there is no check that a GPU device is non-Intel. For more control, users can create a DNNL engine by passing a SYCL device and context explicitly.
  • GPU kernels that run longer than a certain time (which depends on OS and system settings) may cause the application to appear to hang. Driver or system settings can be configured to disable this timeout and avoid hangs of DPC++ or OpenCL programs, including oneDNN examples:
    o On Linux* (See more details at OpenCL™ Driver for Intel® HD, Iris™, and Iris™ Pro Graphics for Linux):
    $ sudo bash -c 'echo N > /sys/module/i915/parameters/enable_hangcheck'
    o On Windows* (See more details at Timeout Detection and Recovery (TDR) Registry Keys):
    Increase TdrDelay and TdrDdiDelay values in registry
  • See DPC++ limitations that impact the library as well.

v1.6.2

04 Sep 15:51

This is a patch release containing the following changes to v1.6.1:

  • Implemented workaround for running examples using cmake on macOS (089a877)
  • Implemented workaround for internal compiler error when building oneDNN with Microsoft Visual Studio 2019 (c6f9b7a)
  • Fixed segfault for grouped convolutions (77e5d57)
  • Fixed segfault for convolutions with 1x1 filter on Intel AVX2 systems (09c18e6)
  • Fixed segfault for convolutions with 1x1 filter on Intel AVX-512 system (2c4ad38)
  • Fixed issue with zero padding in bfloat16 convolutions with NHWC activations (4c05c18)

v1.6.1

08 Aug 01:45

This is a patch release containing the following changes to v1.6:

  • Fixed performance regression for convolutions with 1x1 filter on Intel AVX2 (8186817)
  • Fixed invalid memory access issue for bfloat16 1D grouped convolutions (9ebda65)
  • Fixed RuntimeError: label is redefined for convolutions with large filter size on Intel AVX512 (f974b50)
  • Suppressed MSBuild warning MSB8065 (f91e641)
  • Restricted support for shared virtual memory (SVM) to OpenCL 2.0 and later (fa6bbf4)

v2.0-beta08

31 Jul 22:37

Pre-release

This is a preview release for oneDNN v2.0. The release is based on oneDNN v1.6.

Binary distribution of this software is available as Intel(R) oneAPI Deep Neural Network Library in Intel(R) oneAPI.

Performance Optimizations

Intel Architecture processors

  • Introduced initial int8 optimizations for the future Intel Xeon Scalable processor (code name Sapphire Rapids). The functionality is disabled by default and should be enabled via the CPU dispatcher control.
  • Improved matmul and inner product performance with bfloat16 data type.
  • Improved performance of tanh algorithm for eltwise primitive and LSTM cells.
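
The CPU dispatcher control mentioned above can be exercised at run time through the DNNL_MAX_CPU_ISA environment variable; a minimal sketch (the value ALL lifts the ISA cap so preview instruction-set support can be dispatched, and is only one of the documented values):

```shell
# Minimal sketch of the CPU dispatcher runtime control.
# DNNL_MAX_CPU_ISA caps the instruction sets oneDNN may dispatch to;
# "ALL" removes the cap so preview ISA optimizations can be picked up.
export DNNL_MAX_CPU_ISA=ALL
echo "DNNL_MAX_CPU_ISA=$DNNL_MAX_CPU_ISA"
```

Lower caps (for example AVX2) are useful for reproducing performance on older hardware from a newer machine.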

Intel Processor Graphics and Xe architecture-based Graphics

  • Improved performance of Convolution, RNN, Inner Product and Matmul functionality for all supported GPUs.
  • Improved performance of int8 convolutions with activations in NHWC format for Xe architecture-based Graphics (code named DG1 and Tiger Lake).

New Functionality

  • Introduced support for processors based on IBM POWER architecture.
  • Introduced Linear-Before-Reset GRU for GPU.
  • Extended eltwise primitive with support for round operation.

Usability

  • Reduced primitive creation time by enabling the OpenCL pre-compiled headers feature in recent versions of the OpenCL driver.
  • Reduced the entitlement required on macOS with hardened runtime to allow-jit.
  • Extended documentation on runtime and build-time controls for JIT profiler support, primitive cache, CPU dispatcher controls, and verbose mode.

Validation

  • Introduced a validation mode for out-of-memory situations.

Known Issues and Limitations

  • RNN functionality does not work with the Level Zero GPU runtime. As a workaround, use the OpenCL GPU runtime by setting SYCL_BE=PI_OPENCL before running a DPC++ program.
  • Optimized primitives can crash or fail for huge spatial sizes on CPU.
  • f32 convolutions may fail sporadically on Intel® Processor Graphics Gen11 due to a known issue in Intel Graphics Compiler.
  • Non-Intel GPUs are not supported. The library API allows creating a DNNL engine by index (the order of devices is determined by the SYCL runtime), and there is no check that a GPU device is non-Intel. For more control, users can create a DNNL engine by passing a SYCL device and context explicitly.
  • GPU kernels that run longer than a certain time (which depends on OS and system settings) may cause the application to appear to hang. Driver or system settings can be configured to disable this timeout and avoid hangs of DPC++ or OpenCL programs, including oneDNN examples:
    o On Linux* (See more details at OpenCL™ Driver for Intel® HD, Iris™, and Iris™ Pro Graphics for Linux):
    $ sudo bash -c 'echo N > /sys/module/i915/parameters/enable_hangcheck'
    o On Windows* (See more details at Timeout Detection and Recovery (TDR) Registry Keys):
    Increase TdrDelay and TdrDdiDelay values in registry
  • See DPC++ limitations that impact the library as well.

v1.6

31 Jul 22:42

Performance optimizations

Intel Architecture processors

  • Introduced initial int8 optimizations for future Intel Xeon Scalable processor (code name Sapphire Rapids). The functionality is disabled by default and should be enabled via CPU dispatcher control.
  • Improved matmul and inner product performance with bfloat16 data type.
  • Improved performance of tanh algorithm for eltwise primitive and LSTM cells.

Intel Processor Graphics and Xe architecture-based Graphics

  • Improved performance of Convolution, RNN, Inner Product and Matmul functionality for all supported GPUs.
  • Improved performance of int8 convolutions with activations in NHWC format for Xe architecture-based Graphics (code named DG1 and Tiger Lake).

AArch64-based processors

  • Added support for the Arm Performance Libraries (ArmPL) to improve performance of functionality relying on GEMM (matmul, inner product, convolutions).

New Functionality

  • Introduced support for processors based on IBM POWER architecture.
  • Introduced Linear-Before-Reset GRU for GPU.
  • Extended eltwise primitive with support for round operation.

Usability

  • Reduced primitive creation time by enabling the OpenCL pre-compiled headers feature in recent versions of the OpenCL driver.
  • Reduced the entitlement required on macOS with hardened runtime to allow-jit.
  • Extended documentation on runtime and build-time controls for JIT profiler support, primitive cache, CPU dispatcher controls, and verbose mode.
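
The verbose mode referenced above is one of the runtime controls; a minimal sketch of enabling it (DNNL_VERBOSE is the documented environment variable):

```shell
# Minimal sketch: enable oneDNN verbose mode for subsequent runs.
# With DNNL_VERBOSE=1 the library prints primitive creation and execution
# details (implementation, memory formats, timing) to stdout.
export DNNL_VERBOSE=1
echo "DNNL_VERBOSE=$DNNL_VERBOSE"
```

This is typically the first diagnostic step when investigating which implementation a primitive dispatched to and how long it ran.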

Validation

  • Introduced a validation mode for out-of-memory situations.

Thanks to the contributors

This release contains contributions from the project core team as well as Alberto Gonzalez Palomo @AlbertoGP, Arthur Mitrano @aaraujom, Ilia Taraban @itaraban, Nathan John Sircombe @nSircombe, Peter Caday @petercad, Tsao Zhong @CaoZhongZ. We would also like to thank everyone who asked questions and reported issues.