v1.1

@anita-intel released this 03 Oct 17:03 · 52 commits to rls-v1.1 since this release

Performance optimizations

  • Improved performance of functionality built with TBB threading, achieving performance comparable to OpenMP threading.
  • Improved int8 and fp32 GEMM performance on systems with Intel AVX-512 and Intel VNNI support.
  • Improved softmax performance for NHWC and corresponding blocked layouts.
  • Improved RNN cell performance and reduced the dependency of RNN performance on the compiler's vectorization capabilities.
  • Improved reorder performance for some shapes.

New functionality

  • Introduced support for layer normalization and binary elementwise primitives (CPU engine).
  • Introduced swish (CPU and GPU engines) and gelu (GPU engine) activation support in the elementwise primitive (see the sketch after this list).
  • Introduced bfloat16 data type support in RNN cells (CPU engine).
  • Introduced initial int8 and bfloat16 data type support for GPU functionality.
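
A minimal sketch of using one of the new eltwise algorithms through the v1.1 C++ API (the tensor shape and the swish alpha below are arbitrary illustration values, not library defaults):

    #include "dnnl.hpp"

    int main() {
        using namespace dnnl;

        engine eng(engine::kind::cpu, 0); // swish is supported on the CPU engine
        stream s(eng);

        // An arbitrary 4D activation tensor in NCHW layout.
        memory::desc md({1, 16, 8, 8}, memory::data_type::f32,
                memory::format_tag::nchw);
        memory data(md, eng);

        // Swish computes x * sigmoid(alpha * x); beta is unused here.
        eltwise_forward::desc ed(prop_kind::forward_inference,
                algorithm::eltwise_swish, md, /* alpha = */ 1.f, 0.f);
        eltwise_forward::primitive_desc pd(ed, eng);

        // Execute in place: src and dst bind to the same memory object.
        eltwise_forward(pd).execute(s,
                {{DNNL_ARG_SRC, data}, {DNNL_ARG_DST, data}});
        s.wait();
        return 0;
    }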

Usability improvements

  • TBB threading support is promoted to production quality.
  • Introduced support for the memory format any for backpropagation of memory-bound primitives. This mechanism allows matching the gradient memory format with the source and destination memory formats from the forward pass (see the sketch after this list).
  • Changed the default compiler flags to target the Intel SSE4.1 instruction set to make builds portable.
  • (experimental) Introduced a caching mechanism that reduces primitive creation time when the same primitive is created repeatedly. The functionality is disabled by default and has to be enabled at compile time.
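
A minimal sketch of the format-propagation mechanism, assuming the v1.1 C++ API (max pooling is used only as an example of a memory-bound primitive; the shapes are arbitrary):

    #include "dnnl.hpp"

    int main() {
        using namespace dnnl;
        engine eng(engine::kind::cpu, 0);

        const memory::dims src_dims = {32, 16, 14, 14}; // NCHW
        const memory::dims dst_dims = {32, 16, 7, 7};
        const memory::dims kernel = {2, 2}, strides = {2, 2}, pad = {0, 0};
        const auto f32 = memory::data_type::f32;

        // Forward pass: a concrete source layout; destination left to the library.
        memory::desc src_md(src_dims, f32, memory::format_tag::nchw);
        memory::desc dst_md(dst_dims, f32, memory::format_tag::any);
        pooling_forward::desc fwd_d(prop_kind::forward_training,
                algorithm::pooling_max, src_md, dst_md, strides, kernel, pad, pad);
        pooling_forward::primitive_desc fwd_pd(fwd_d, eng);

        // Backward pass: gradients declared with format any, so the library can
        // match them with the formats chosen for the forward pass.
        memory::desc diff_src_md(src_dims, f32, memory::format_tag::any);
        memory::desc diff_dst_md(dst_dims, f32, memory::format_tag::any);
        pooling_backward::desc bwd_d(algorithm::pooling_max, diff_src_md,
                diff_dst_md, strides, kernel, pad, pad);
        pooling_backward::primitive_desc bwd_pd(bwd_d, eng, fwd_pd);

        // Query the gradient format the library actually picked.
        memory::desc chosen_diff_src = bwd_pd.diff_src_desc();
        (void)chosen_diff_src;
        return 0;
    }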

Validation improvements

  • Extended benchdnn to cover all supported primitives.
  • Introduced a robust validation method for RNN cells in benchdnn. The approach replaces activations with a linear function to make error accumulation more predictable and to decrease the number of false positives (a toy sketch of the idea follows this list).
  • Extended convolution test coverage.
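
A toy illustration of why linear activations make the error analyzable (this is not benchdnn code; the scalar recurrence, weights, and threshold are invented for illustration): with the identity in place of tanh, the cell output is a linear function of its inputs, so a simple a-priori bound on the float-vs-double discrepancy can serve as the pass/fail threshold.

    #include <cassert>
    #include <cfloat>
    #include <cmath>
    #include <cstdio>

    int main() {
        const int T = 64;                 // number of time steps
        const float w = 0.99f, u = 0.5f;  // recurrent and input weights

        float hf = 0.f;   // single-precision computation under test
        double hd = 0.0;  // double-precision reference
        for (int t = 0; t < T; ++t) {
            float x = 1.f / float(t + 1);      // deterministic input
            hf = w * hf + u * x;               // identity activation
            hd = double(w) * hd + double(u) * x;
        }

        // Each step performs a handful of roundings, so the accumulated
        // relative error is bounded by a small multiple of T * FLT_EPSILON.
        double rel_err = std::fabs(double(hf) - hd) / std::fabs(hd);
        double threshold = 4.0 * T * FLT_EPSILON;
        printf("relative error %g vs threshold %g\n", rel_err, threshold);
        assert(rel_err <= threshold);
        return 0;
    }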

Thanks to the contributors

This release contains contributions from many Intel Performance Libraries developers as well as Ilia Taraban, Jacek Czaja @jczaja, William Tambellini @WilliamTambellini, Tomasz Kalina, Mateusz Guziak, Daniel Haidachuk, Konstantin Basargin @basargin, Aaron Johnson @aaronjohnson, and Jeremy Wong @jrmwng. We would also like to thank everyone who asked questions and reported issues.