Skip ROCm test for MPZCH kernel #4271


Open · wants to merge 2 commits into main

Conversation

lizhouyu (Contributor) commented Jun 5, 2025

Differential Revision: D76045051

lizhouyu added 2 commits June 5, 2025 07:59
Summary:
Pull Request resolved: pytorch#4214

X-link: facebookresearch/FBGEMM#1290

Open-source the FBGEMM CUDA kernel for the MPZCH (multi-probe zero collision hash) feature.

### Major changes
- Create a folder named `faster_hash` under the `fbgemm/fbgemm_gpu/src` folder.
- Copy the following files from `fbsource/fbcode/caffe2/torch/fb/retrieval` into the new folder:
  - faster_hash.cpp
  - faster_hash.cu
  - common_utils.cuh
- Revise `faster_hash.cpp`:
  - Change `namespace fb` to `namespace fbgemm_gpu`.
  - Comment out `using namespace torch::fb::turborec;`.
  - Change `TORCH_LIBRARY_IMPL(fb, ...)` to `TORCH_LIBRARY_IMPL(fbgemm, ...)` (a minimal sketch of this namespace move appears after this list).
  - Fix the call sites broken by the namespace change.
- Revise `faster_hash.cu`:
  - Change `namespace fb` to `namespace fbgemm_gpu`.
  - Change `TORCH_LIBRARY_IMPL(fb, ...)` to `TORCH_LIBRARY_IMPL(fbgemm, ...)`.
  - Fix the call sites broken by the namespace change.
- Revise the `common_utils.cuh` file
  - Change `namespace fb` to `namespace fbgemm_gpu`.
- Add a BUCK file to compile the C++ and CUDA libraries.
- Copy the `faster_hash_test.py` file to the `fbgemm/fbgemm_gpu/test` folder.
- Add a `python_unittest` section for `faster_hash_test` to the BUCK file under the `test` folder.
- In the `faster_hash_test.py` file:
  - Load the `faster_hash` libraries with the `torch.ops.load_library` API.
  - Replace all `torch.ops.fb` calls with `torch.ops.fbgemm`.
  - Following the other test files, add the `opensource` and GPU-availability checks (see the test-setup sketch after this list).
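A minimal Python analogue of the namespace move referenced above, using `torch.library` with a toy op (`toy_hash` and its schema are illustrative, not part of this PR; the real change happens in C++ via `TORCH_LIBRARY_IMPL`):

```python
import torch
from torch.library import Library

# The PR changes TORCH_LIBRARY_IMPL(fb, ...) to TORCH_LIBRARY_IMPL(fbgemm, ...)
# in C++; this sketch mirrors the move in Python with a made-up op.
lib = Library("fbgemm", "FRAGMENT")  # was: Library("fb", "FRAGMENT")
lib.define("toy_hash(Tensor x) -> Tensor")

def toy_hash(x: torch.Tensor) -> torch.Tensor:
    return x + 1  # placeholder body standing in for the real kernel

lib.impl("toy_hash", toy_hash, "CPU")

# After registration the op resolves under the fbgemm namespace:
print(torch.ops.fbgemm.toy_hash(torch.arange(4)))  # tensor([1, 2, 3, 4])
```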
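And a sketch of the test-file setup described in the last bullet, with placeholder names and paths (the shared helpers the real fbgemm_gpu tests import may differ):

```python
import unittest

import torch

# OSS builds must load the compiled extension before its ops appear under
# torch.ops.fbgemm; the .so path below is a placeholder, not the real target.
torch.ops.load_library("fbgemm_gpu/faster_hash_ops.so")

# GPU-availability gate in the style of the other fbgemm_gpu test files.
gpu_unavailable = (not torch.cuda.is_available(), "CUDA is unavailable")

class FasterHashTest(unittest.TestCase):
    @unittest.skipIf(*gpu_unavailable)
    def test_ops_registered(self) -> None:
        # After the rename, the ops should resolve under the fbgemm namespace.
        self.assertTrue(hasattr(torch.ops.fbgemm, "zero_collision_hash"))
        self.assertTrue(hasattr(torch.ops.fbgemm, "create_zch_buffer"))

if __name__ == "__main__":
    unittest.main()
```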

### Questions
- After refactoring, the API calls `torch.ops.create_zch_buffer`, `torch.ops.zero_collision_hash`, `torch.ops.fbgemm.zero_collision_hash`, and `torch.ops.fbgemm.create_zch_buffer` are all valid, while the two calls without the `fbgemm` prefix may incur parameter mismatches. How can this be resolved so that the API calls without `fbgemm` are disabled? (See the diagnostic sketch below.)
- How can the refactored library be integrated into fbgemm so the test can call something like `from fbgemm_gpu import create_zch_buffer, zero_collision_hash`?
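One way to investigate the first question is to list every schema registered for these ops and check which namespaces they live in; any registration outside `fbgemm` would explain why the unqualified calls still resolve. A diagnostic sketch (it relies on a private torch API and is not part of the PR):

```python
import torch

# Enumerate all registered op schemas and print the ones that mention the
# MPZCH ops; entries outside the fbgemm namespace would be the candidates
# to remove or rename.
for schema in torch._C._jit_get_all_schemas():
    if "zero_collision_hash" in schema.name or "create_zch_buffer" in schema.name:
        print(schema)
```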

Differential Revision: D75505020

Reviewed By: ionuthristodorescu
Differential Revision: D76045051

netlify bot commented Jun 5, 2025

Deploy Preview for pytorch-fbgemm-docs ready!

| Name | Link |
|------|------|
| 🔨 Latest commit | 143eb52 |
| 🔍 Latest deploy log | https://app.netlify.com/projects/pytorch-fbgemm-docs/deploys/6841b219ba9e6300081c8e39 |
| 😎 Deploy Preview | https://deploy-preview-4271--pytorch-fbgemm-docs.netlify.app |

facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D76045051
