
`v0.0.30` - build for PyTorch 2.7.0

@lw released this 28 Apr 14:51

Pre-built binary wheels are available for PyTorch 2.7.0. Following PyTorch, we build wheels for CUDA 11.8, 12.6, and 12.8 only (we no longer build for CUDA 12.4).
xFormers now requires PyTorch >= 2.7.
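
A quick sanity check to confirm the installed wheel matches these requirements; this is a minimal sketch, assuming the package imports cleanly in your environment:

```python
import torch
import xformers

# The CUDA version shown here is the one the PyTorch wheel was built
# against; it should be one of 11.8, 12.6, or 12.8 for this release.
print("torch:", torch.__version__, "| CUDA:", torch.version.cuda)
print("xformers:", xformers.__version__)  # expected: 0.0.30
```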

Added

  • [fMHA] Added support for local attention on the Flash3 backend (H100); see the sketch after this list
  • [fMHA] Added a new paged gappy attention bias
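
A minimal sketch of local (sliding-window) attention through the fMHA API. Note that `LocalAttentionFromBottomRightMask` itself predates this release; what is new is Flash3-backend support for it, and whether dispatch actually lands on Flash3 depends on your GPU and installed wheels. The new paged gappy bias lives alongside the other biases in `xformers.ops.fmha.attn_bias`; its exact class name is not given in these notes, so it is not shown here.

```python
import torch
from xformers.ops import memory_efficient_attention
from xformers.ops.fmha.attn_bias import LocalAttentionFromBottomRightMask

B, M, H, K = 2, 1024, 8, 128  # batch, sequence length, heads, head dim
q = torch.randn(B, M, H, K, device="cuda", dtype=torch.bfloat16)
k = torch.randn(B, M, H, K, device="cuda", dtype=torch.bfloat16)
v = torch.randn(B, M, H, K, device="cuda", dtype=torch.bfloat16)

# Each query attends only to a local window: 256 positions to the left,
# none to the right. xFormers picks the backend (e.g. Flash3 on H100)
# automatically.
bias = LocalAttentionFromBottomRightMask(window_left=256, window_right=0)
out = memory_efficient_attention(q, k, v, attn_bias=bias)
print(out.shape)  # (B, M, H, K)
```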

Improved

  • [fMHA] The FlashAttention3 backend now supports more head dimensions (needed for MLA) and ships with a FLOPs formula, making it compatible with PyTorch's partitioner-based automatic activation checkpointing
  • The fused operators for sequence parallelism were migrated to PyTorch's SymmetricMemory
  • The profiler now prefixes trace filenames with the process rank during distributed training (illustrated after this list)
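
For illustration only, here is one way such rank-prefixed filenames can be constructed; the exact naming scheme the xFormers profiler uses may differ, and `trace_dir`/`trace_filename` are hypothetical names introduced for this sketch:

```python
import os
import torch.distributed as dist

def trace_filename(trace_dir: str, name: str) -> str:
    # Fall back to rank 0 when not running under torch.distributed,
    # so the same code works in single-process runs.
    rank = dist.get_rank() if dist.is_initialized() else 0
    return os.path.join(trace_dir, f"rank{rank}_{name}")

# Each process in a distributed job writes to a distinct file,
# so traces from different ranks no longer collide.
print(trace_filename("/tmp/traces", "step_100.json"))
```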

Removed

  • Removed documentation for legacy unmaintained components