Conversation


@danielvegamyhre (Contributor) commented on Jan 8, 2026

Stacked PRs:

- [mxfp8 moe training] add bench script for scale conversion to blocked format kernels for groups along K

The speedups look crazy, but the only way I found to represent this particular layout conversion in plain torch requires both (1) a device-to-host (d2h) sync and (2) brute-force iteration over all 128x4 scale factor tiles in each group.
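For illustration, here is a minimal sketch of what that plain-torch baseline could look like (an assumption-laden sketch, not the PR's actual reference implementation; every function and parameter name below is hypothetical):

```python
import torch

TILE_ROWS, TILE_COLS = 128, 4  # scale-factor tile shape in the blocked format

def to_blocked_per_group_along_k_reference(
    scales: torch.Tensor,             # (M, total_K_blocks) e8m0 scales stored as uint8
    group_end_offsets: torch.Tensor,  # (num_groups,) cumulative group ends along K, on GPU
) -> torch.Tensor:
    """Rearrange each K-group's scales into a contiguous blocked layout,
    one 128x4 tile at a time (hypothetical reference, for illustration)."""
    out = []
    # (1) The d2h sync: group boundaries live on the GPU, but the Python-level
    # slicing below needs them as host integers.
    ends = group_end_offsets.tolist()
    start = 0
    for end in ends:
        group = scales[:, start:end]
        # Pad each group up to multiples of the tile shape.
        pad_rows = (-group.shape[0]) % TILE_ROWS
        pad_cols = (-group.shape[1]) % TILE_COLS
        g = torch.nn.functional.pad(group, (0, pad_cols, 0, pad_rows))
        # (2) The brute force: a Python loop over every 128x4 tile in the group.
        for r in range(0, g.shape[0], TILE_ROWS):
            for c in range(0, g.shape[1], TILE_COLS):
                out.append(g[r : r + TILE_ROWS, c : c + TILE_COLS].reshape(-1))
        start = end
    return torch.cat(out)
```

A Triton kernel can instead read the group offsets on-device and parallelize across tiles, avoiding both the sync and the per-tile Python loop, which is consistent with the torch path's flat ~0.11 GB/s in the table below.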

| input shape  | num groups | torch time (µs) | triton time (µs) | triton speedup | torch mem BW (GB/s) | triton mem BW (GB/s) |
|--------------|-----------:|----------------:|-----------------:|---------------:|--------------------:|---------------------:|
| (7168, 4096) | 4          | 524148          | 121.86           | 4301.37x       | 0.11                | 482.82               |
| (7168, 4096) | 8          | 525698          | 93.18            | 5641.51x       | 0.11                | 632.62               |
| (2048, 4096) | 4          | 149795          | 105.44           | 1420.66x       | 0.11                | 159.43               |
| (2048, 4096) | 8          | 151980          | 39.78            | 3820.89x       | 0.11                | 423.44               |
| (7168, 2048) | 4          | 261501          | 64.48            | 4055.54x       | 0.11                | 457.12               |
| (7168, 2048) | 8          | 264929          | 58.40            | 4536.45x       | 0.11                | 506.67               |
| (2048, 2048) | 4          | 75658.3         | 58.37            | 1296.23x       | 0.11                | 144.28               |
| (2048, 2048) | 8          | 75856.2         | 25.60            | 2963.13x       | 0.11                | 330.24               |
| (7168, 1024) | 4          | 132826          | 35.81            | 3709.41x       | 0.11                | 413.17               |
| (7168, 1024) | 8          | 133271          | 52.22            | 2551.91x       | 0.11                | 285.49               |
| (2048, 1024) | 4          | 37816.7         | 31.74            | 1191.30x       | 0.11                | 133.16               |
| (2048, 1024) | 8          | 38501.2         | 21.50            | 1790.42x       | 0.11                | 198.10               |
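As a rough sanity check on the bandwidth columns (assuming, since the script's methodology isn't shown here, that input shape is the scale tensor's shape, each e8m0 scale is one byte, and bandwidth counts one read plus one write per element):

```python
# Back-of-the-envelope check for the (7168, 4096), 8-group triton row:
numel = 7168 * 4096                  # one-byte e8m0 scale factors
bytes_moved = 2 * numel              # read each scale once, write it once
gbps = bytes_moved / 93.18e-6 / 1e9  # triton time = 93.18 µs
print(round(gbps, 1))                # ~630.2 vs. the reported 632.62
```

The small gap is plausibly the extra padding written by the blocked layout, and the same arithmetic puts the torch rows at ~0.11 GB/s, since half-second runtimes dwarf tens of MB of traffic.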

Commit: … format kernels for groups along K

stack-info: PR: #3604, branch: danielvegamyhre/stack/111
@pytorch-bot (bot) commented on Jan 8, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3604

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit de2beea with merge base 3955b6c:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@danielvegamyhre force-pushed the danielvegamyhre/stack/111 branch from 30a4538 to de2beea on January 8, 2026 at 22:28
@danielvegamyhre force-pushed the danielvegamyhre/stack/110 branch from b15065b to da38274 on January 8, 2026 at 22:28
@meta-cla (bot) added the CLA Signed label on Jan 8, 2026
@danielvegamyhre added the mx, topic: not user facing, and moe labels on Jan 8, 2026
@danielvegamyhre requested a review from vkuzo on January 8, 2026 at 22:30