@sryap (Contributor) commented Jan 16, 2026

Summary:
Update kernels to benchmark in `blackwell_attentions` (illustrative sketches of each change follow the list):

- Use `flash_attn_func` instead of `flash_attn_qkvpacked_func` for
  `flash_v2`
- Bypass `xformers` and use `mslk.attention.cutlass_blackwell_fmha`
  for `cutlass_blackwell`
- Update `preproc` for `cutedsl_blackwell`

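For context, a minimal sketch of the `flash_v2` change, assuming the standard flash-attn API (q/k/v each shaped `(batch, seqlen, nheads, headdim)`); the benchmark's actual wiring differs:

```python
import torch
from flash_attn import flash_attn_func, flash_attn_qkvpacked_func

batch, seqlen, nheads, headdim = 4, 4096, 16, 128

# Before: one packed tensor of shape (batch, seqlen, 3, nheads, headdim).
qkv = torch.randn(batch, seqlen, 3, nheads, headdim,
                  dtype=torch.bfloat16, device="cuda")
out_packed = flash_attn_qkvpacked_func(qkv, causal=True)

# After: separate q, k, v tensors, each (batch, seqlen, nheads, headdim),
# matching how the other kernels in the benchmark take their inputs.
q, k, v = qkv.unbind(dim=2)
out = flash_attn_func(q, k, v, causal=True)

torch.testing.assert_close(out, out_packed)
```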
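The `cutlass_blackwell` change removes the `xformers` dispatch layer from the measured path. A sketch of the shape of the change, assuming `cutlass_blackwell_fmha` takes unpacked q/k/v like `flash_attn_func` does (the signature here is a placeholder, not the verified `mslk` API):

```python
# Assumption: the call signature below is a placeholder; only the module
# path mslk.attention.cutlass_blackwell_fmha comes from this PR's summary.
from mslk.attention import cutlass_blackwell_fmha

def attn_cutlass_blackwell(q, k, v, causal=True):
    # Previously this path went through xformers' dispatcher; calling the
    # Blackwell FMHA kernel directly keeps that overhead out of the timing.
    return cutlass_blackwell_fmha(q, k, v, causal=causal)
```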
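And a hypothetical `preproc` body for `cutedsl_blackwell`, only to illustrate the hook's role: put inputs into the layout/dtype the kernel expects before timing starts, so conversion cost is not attributed to the kernel:

```python
import torch

def preproc_cutedsl_blackwell(q, k, v):
    # Hypothetical adapter; the real preproc for cutedsl_blackwell may do
    # different conversions. Runs once, outside the timed region.
    def to_bf16(t):
        return t.to(torch.bfloat16).contiguous()
    return to_bf16(q), to_bf16(k), to_bf16(v)
```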
Differential Revision: D90896364

meta-codesync bot commented Jan 16, 2026

@sryap has exported this pull request. If you are a Meta employee, you can view the originating Diff in D90896364.

Summary:

Update kernels to benchmark in `blackwell_attentions`
- Use `flash_attn_func` instead of `flash_attn_qkvpacked_func` for
  `flash_v2`
- Bypass `xformers` and use `mslk.attention.cutlass_blackwell_fmha`
  for `cutlass_blackwell`
- Update `preproc` for `cutedsl_blackwell`

Reviewed By: henrylhtsang

Differential Revision: D90896364
facebook-github-bot pushed a commit that referenced this pull request Jan 16, 2026