
CUDA: broadcasting for FlashAttention mask #14500


Conversation

JohannesGaessler
Collaborator

This PR adds support for broadcasting the FlashAttention mask in CUDA. From what I can tell, only dimension 2 needs to be broadcast, not dimension 3. Until now the CUDA kernels implicitly assumed that the mask is contiguous, so they would have produced incorrect results for a non-contiguous mask; this is now fixed. The __launch_bounds__ were also configured incorrectly for the tile kernels, which would have caused a performance regression on Pascal if only the broadcasting had been added; this is fixed as well.
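For illustration, a minimal sketch of broadcast-aware mask addressing (hypothetical helper and parameter names, not the actual ggml-cuda code): the mask is addressed through its byte strides instead of being assumed contiguous, and the dimension-2 index wraps with a modulo so a mask of size 1 in that dimension is reused for every sequence.

```cuda
// Hypothetical illustration of the broadcast, not the actual ggml CUDA kernel:
// the mask is read via byte strides (non-contiguous layouts work) and the
// dimension-2 index wraps so ne2 == 1 means "broadcast across all sequences".
#include <cstdint>
#include <cuda_fp16.h>

__device__ const half * mask_row(
        const char * mask, // base pointer to the mask data
        int64_t      row,  // query row index (dimension 1)
        int64_t      seq,  // sequence index  (dimension 2)
        int64_t      nb1,  // byte stride between mask rows
        int64_t      nb2,  // byte stride between mask sequences
        int64_t      ne2)  // mask size in dimension 2 (1 => broadcast)
{
    return (const half *) (mask + (seq % ne2)*nb2 + row*nb1);
}
```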

It may make sense to use a struct for the kernel arguments. Since FlashAttention is performance-critical, I think we would in that case need to assert that all of the compilers can still optimize the code correctly.
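As a rough sketch of what that could look like (hypothetical struct and field names, not an actual interface): the arguments would be grouped into a plain struct passed to the kernel by value, and the thing to verify is that all supported compilers still keep the members in registers rather than spilling them to local memory.

```cuda
// Hypothetical sketch of grouping FlashAttention kernel arguments in a struct;
// field names are illustrative, not the actual ggml-cuda interface.
#include <cstdint>

struct fattn_args {
    const char * Q;      // query tensor data
    const char * K;      // key tensor data
    const char * V;      // value tensor data
    const char * mask;   // attention mask, may be null
    float      * dst;    // output buffer
    int64_t      ne01;   // number of query rows
    int64_t      nb01;   // byte stride between query rows
    float        scale;  // softmax scaling factor
};

__global__ void flash_attn_ext_sketch(const fattn_args args) {
    // Real kernel body omitted; each block would process a tile of queries,
    // reading pointers and strides from the struct instead of from many
    // individual scalar parameters. The concern is whether the compiler
    // still promotes these members to registers.
}
```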

@github-actions github-actions bot added the Nvidia GPU (Issues specific to Nvidia GPUs) and ggml (changes relating to the ggml tensor library for machine learning) labels on Jul 2, 2025
@JohannesGaessler JohannesGaessler merged commit ddb6e96 into ggml-org:gg/ggml-batch-soft-max-ops Jul 2, 2025
42 of 47 checks passed
gabe-l-hart added a commit to gabe-l-hart/llama.cpp that referenced this pull request Jul 2, 2025
* origin/master:
llama : initial Mamba-2 support (ggml-org#9126)
sync : ggml
ggml : add version function to get lib version (ggml/1286)
Set RPATH to "@loader_path" / "$ORIGIN" to ensure executables and dynamic libraries search for dependencies in their origin directory. (ggml-org#14309)
CUDA: add softmax broadcast (ggml-org#14475)
CUDA: broadcasting for FlashAttention mask (ggml-org#14500)
vulkan: support softmax/FA batch and broadcast (ggml-org#14449)
ggml : support bcast ggml_soft_max_ext, ggml_flash_attn_ext (ggml-org#14435)
opencl : fix possible buffer overflow in dump_tensor (ggml-org#14490)
simple-chat : fix context-exceeded condition (ggml-org#14494)
opencl : skip empty nodes on cgraph compute (ggml-org#14491)
opencl : update upscale to support align corners (ggml-org#14488)
ci : add OpenCL to labeler workflow (ggml-org#14496)
github : add OpenCL backend to issue templates (ggml-org#14492)
ggml : Callback before abort (ggml-org#14481)
ci : disable fast-math for Metal GHA CI (ggml-org#14478)
Minh141120 pushed a commit to menloresearch/llama.cpp that referenced this pull request Jul 5, 2025
qnixsynapse pushed a commit to menloresearch/llama.cpp that referenced this pull request Jul 6, 2025
Labels
ggml (changes relating to the ggml tensor library for machine learning), Nvidia GPU (Issues specific to Nvidia GPUs)
2 participants