
Conversation

@Valentine233
Collaborator

Description:

  • Fuse the transpose and packing for Key (a generic sketch of the fused pass follows this list).
  • Use the fast exp implementation (a sketch of this style of approximation also follows).
  • Tune the block size of flash attention.
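
Neither optimization's code appears in this thread. As a reference point, here is a minimal standalone C++ sketch of fusing K's transpose with GEMM-style packing: a two-pass pipeline would first materialize K^T in a scratch buffer and then repack it into panels for the micro-kernel, while the fused version reads K exactly once and writes the packed panels directly. The function name, layouts, and panel scheme are illustrative assumptions, not the PR's actual code.

```cpp
#include <algorithm>
#include <cstddef>

// Hypothetical sketch: fuse K's transpose with GEMM-style panel packing.
// The transpose happens implicitly via the index swap, so K is read once
// and no intermediate K^T buffer is materialized.
void pack_k_transposed(const float* K,    // [tokens, head_dim], row-major
                       float* packed,     // block_n-wide panels of K^T
                       std::size_t tokens, std::size_t head_dim,
                       std::size_t block_n) {
    std::size_t out = 0;
    for (std::size_t n0 = 0; n0 < tokens; n0 += block_n) {
        const std::size_t nb = std::min(block_n, tokens - n0);
        for (std::size_t d = 0; d < head_dim; ++d) {         // rows of K^T
            for (std::size_t n = 0; n < nb; ++n) {           // token block
                packed[out++] = K[(n0 + n) * head_dim + d];  // fused transpose
            }
        }
    }
}
```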

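The PR likewise doesn't spell out its fast exp, so below is a minimal scalar C++ sketch of the common Schraudolph-style technique: split exp(x) into 2^i · 2^f, approximate 2^f on [0, 1) with a short polynomial, and apply the integer part by adding i to the IEEE-754 exponent bits. The function name, clamp range, and polynomial degree are assumptions; a production kernel would presumably use a vectorized, higher-accuracy variant.

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <cstring>

// Hypothetical fast exp sketch: exp(x) = 2^(x * log2(e)) = 2^i * 2^f,
// with i = floor(t) and f = t - i in [0, 1).
static inline float fast_exp(float x) {
    x = std::fmax(-87.0f, std::fmin(88.0f, x));  // keep exponent field in range
    float t = x * 1.442695041f;                  // x * log2(e)
    float fi = std::floor(t);
    float f = t - fi;
    int i = static_cast<int>(fi);
    // 2^f ~= 1 + f*ln2 + (f*ln2)^2/2 + (f*ln2)^3/6, in Horner form.
    float p = 1.0f + f * (0.69314718f + f * (0.24022651f + f * 0.05550411f));
    std::uint32_t bits;
    std::memcpy(&bits, &p, sizeof bits);
    bits += static_cast<std::uint32_t>(i) << 23; // multiply p by 2^i
    float r;
    std::memcpy(&r, &bits, sizeof r);
    return r;
}

int main() {
    for (float x : {-3.0f, -0.5f, 0.0f, 1.0f, 5.0f}) {
        std::printf("x=%6.2f  fast_exp=%12.6f  std::exp=%12.6f\n",
                    x, fast_exp(x), std::exp(x));
    }
    return 0;
}
```

The degree-3 polynomial trades a fraction of a percent of relative error for a much cheaper evaluation than a full-precision exp, which is the usual bargain inside a softmax inner loop.
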
Validation:

  • For the ViT model on DMR, we see a 50% kernel-level improvement and a 10% end-to-end (E2E) speedup.
  • Accuracy is unchanged from the baseline.

@pytorch-bot

pytorch-bot bot commented Dec 25, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3541

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

⏳ No Failures, 2 Pending

As of commit 5c37eab with merge base 8d47813:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@Valentine233 Valentine233 marked this pull request as draft December 25, 2025 05:47
@meta-cla meta-cla bot added the CLA Signed label Dec 25, 2025
@Valentine233 Valentine233 added the topic: not user facing label Dec 25, 2025
@Valentine233
Collaborator Author

@mingfeima, please help review the PR.

@Valentine233 Valentine233 marked this pull request as ready for review December 25, 2025 08:43
@drisspg
Contributor

drisspg commented Dec 25, 2025

Cc @howardzhang-cv

@Valentine233
Collaborator Author

@drisspg @howardzhang-cv @jerryzh168 Please review this PR, thanks!
