Invoke AMD specific kernel reorder_batched_ad_indices_kernel_vec #4412


Open · wants to merge 1 commit into main

Conversation

ghq24int (Contributor)

Summary:
For the benchmark in the codebase, the larger the product of length and num-ads, the better the performance.

Two optimizations (a hedged sketch follows after this list):

  1. Vector loading in a warp.
  2. The product of batch-size and table-size determines the number of thread blocks (https://www.internalfb.com/code/fbsource/[cecfed562b79afad0eb9c44259141f50352da342]/fbcode/deeplearning/fbgemm/fbgemm_gpu/src/sparse_ops/sparse_reorder_batched_ad.cu?lines=361). In MRS models, we expect more thread blocks in our use cases. As such, we shrink the block size to launch more thread blocks, improving compute utilization.
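
Below is a minimal CUDA sketch of the two ideas, for illustration only. The kernel name `reorder_segments_vec`, the launcher `launch_reorder`, the flat segment-offset layout, and the 64-thread block size are all assumptions made for this sketch, not the actual implementation in sparse_reorder_batched_ad.cu.

```cuda
// Hedged sketch, not the FBGEMM kernel: illustrates (1) warp-level
// vectorized copies and (2) a smaller block size to enlarge the grid.
#include <cuda_runtime.h>
#include <cstdint>

__global__ void reorder_segments_vec(
    const int32_t* __restrict__ src,
    int32_t* __restrict__ dst,
    const int64_t* __restrict__ src_offsets,  // per-segment source start
    const int64_t* __restrict__ dst_offsets,  // per-segment dest start
    const int32_t* __restrict__ lengths,      // per-segment element count
    int num_segments) {
  constexpr int kVecElems = 4;  // int4 = 4 x int32_t = one 16-byte load
  const int warps_per_block = blockDim.x / warpSize;
  const int warp_id = blockIdx.x * warps_per_block + threadIdx.x / warpSize;
  const int lane = threadIdx.x % warpSize;
  if (warp_id >= num_segments) return;

  const int32_t* s = src + src_offsets[warp_id];
  int32_t* d = dst + dst_offsets[warp_id];
  const int len = lengths[warp_id];

  // (1) Vector body: each lane moves 4 elements per iteration. Assumes
  // 16-byte-aligned segment starts; real code must handle misalignment.
  const int vec_len = (len / kVecElems) * kVecElems;
  for (int i = lane * kVecElems; i < vec_len; i += warpSize * kVecElems) {
    *reinterpret_cast<int4*>(d + i) = *reinterpret_cast<const int4*>(s + i);
  }
  // Scalar tail for lengths not divisible by the vector width.
  for (int i = vec_len + lane; i < len; i += warpSize) {
    d[i] = s[i];
  }
}

// (2) A smaller block yields a larger grid for the same
// batch_size x table_size worth of segments, which can raise occupancy
// when the segment count is modest. 64 threads is an assumed choice.
void launch_reorder(const int32_t* src, int32_t* dst,
                    const int64_t* src_offsets, const int64_t* dst_offsets,
                    const int32_t* lengths, int num_segments,
                    cudaStream_t stream) {
  constexpr int kBlockSize = 64;  // smaller than a typical 256
  constexpr int kWarpSize = 32;   // 64 on AMD wavefronts under HIP
  const int warps_per_block = kBlockSize / kWarpSize;
  const int grid = (num_segments + warps_per_block - 1) / warps_per_block;
  reorder_segments_vec<<<grid, kBlockSize, 0, stream>>>(
      src, dst, src_offsets, dst_offsets, lengths, num_segments);
}
```

On ROCm the same source builds through HIP, where a wavefront is 64 lanes wide, so `warpSize` in the device code evaluates to 64 and the host-side `kWarpSize` constant would change accordingly.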

Performance results and local test benchmarks: D77066925

Differential Revision: D77459476


netlify bot commented Jun 27, 2025

Deploy Preview for pytorch-fbgemm-docs ready!

| Name | Link |
| --- | --- |
| 🔨 Latest commit | 0f1e843 |
| 🔍 Latest deploy log | https://app.netlify.com/projects/pytorch-fbgemm-docs/deploys/68615d1cbebf7d000717d57a |
| 😎 Deploy Preview | https://deploy-preview-4412--pytorch-fbgemm-docs.netlify.app |

@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D77459476


ghq24int added a commit to ghq24int/FBGEMM that referenced this pull request Jun 29, 2025
…orch#4412)

Summary:

X-link: facebookresearch/FBGEMM#1483

For the benchmark in the codebase, the larger the product of length and num-ads, the better the performance.

Two optimizations:
1. Vector loading in a warp.
2. The product of batch-size and table-size determines the number of thread blocks (https://www.internalfb.com/code/fbsource/[cecfed562b79afad0eb9c44259141f50352da342]/fbcode/deeplearning/fbgemm/fbgemm_gpu/src/sparse_ops/sparse_reorder_batched_ad.cu?lines=361). In MRS models, we expect more thread blocks in our use cases. As such, we shrink the block size to launch more thread blocks, improving compute utilization.

Performance results and local test benchmarks: D77066925

Differential Revision: D77459476


