
Test outside of namespace reference #4248


Open · wants to merge 2 commits into main

Conversation

aporialiao (Member) commented:

Summary:
X-link: https://github.com/facebookresearch/FBGEMM/pull/1326

Rollback Plan:

Differential Revision: D75887842

Summary:
Open-source the FBGEMM CUDA kernels for the MPZCH feature.

### Major changes
- Create a folder named `faster_hash` under the `fbgemm/fbgemm_gpu/src` folder.
- Copy the following files to the created folder from `fbsource/fbcode/caffe2/torch/fb/retrieval`
  - faster_hash.cpp
  - faster_hash.cu
  - common_utils.cuh
- Revise `faster_hash.cpp`:
  - Change `namespace fb` to `namespace fbgemm_gpu`.
  - Comment out `using namespace torch::fb::turborec;`.
  - Change `TORCH_LIBRARY_IMPL(fb, ...)` to `TORCH_LIBRARY_IMPL(fbgemm, ...)`.
  - Fix namespace resolution issues caused by the rename.
- Revise `faster_hash.cu`:
  - Change `namespace fb` to `namespace fbgemm_gpu`.
  - Change `TORCH_LIBRARY_IMPL(fb, ...)` to `TORCH_LIBRARY_IMPL(fbgemm, ...)`.
  - Fix namespace resolution issues caused by the rename.
- Revise `common_utils.cuh`:
  - Change `namespace fb` to `namespace fbgemm_gpu`.
- Add a BUCK file to build the C++ and CUDA libraries.
- Copy the `faster_hash_test.py` file to the `fbgemm/fbgemm_gpu/test` folder.
- Add a `python_unittest` target for `faster_hash_test` to the BUCK file under the `test` folder.
- In the `faster_hash_test.py` file:
  - Load the `faster_hash` libraries with the `torch.ops.load_library` API.
  - Replace all `torch.ops.fb` references with `torch.ops.fbgemm`.
  - Following other test files, add `opensource` and GPU-availability checks.
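The gating step described above can be sketched as follows. This is a minimal, hypothetical sketch: `gpu_available` is an illustrative stand-in, not FBGEMM's real helper (the real tests would consult `torch.cuda.is_available()` and FBGEMM's open-source detection):

```python
# Minimal sketch of the "opensource" / GPU-availability gating pattern
# used by FBGEMM test files. Helper and test names here are illustrative
# stand-ins, not FBGEMM's real utilities.
import unittest


def gpu_available() -> bool:
    # Stand-in for torch.cuda.is_available(); stubbed to False so the
    # sketch runs without torch installed.
    return False


class FasterHashTest(unittest.TestCase):
    @unittest.skipIf(not gpu_available(), "Skip when no GPU is available")
    def test_zero_collision_hash_gpu(self) -> None:
        # The real test would call torch.ops.fbgemm.zero_collision_hash
        # on CUDA tensors here.
        pass

    def test_zero_collision_hash_cpu(self) -> None:
        # Placeholder CPU-side check; the real test exercises the op's
        # CPU path via torch.ops.fbgemm.*.
        self.assertTrue(True)
```

With this pattern, the GPU test is skipped automatically on CPU-only CI runners instead of failing.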

### Questions
- After the refactoring, the API calls `torch.ops.create_zch_buffer`, `torch.ops.zero_collision_hash`, `torch.ops.fbgemm.zero_collision_hash`, and `torch.ops.fbgemm.create_zch_buffer` are all valid, but the un-namespaced forms (`torch.ops.create_zch_buffer` and `torch.ops.zero_collision_hash`) may hit parameter mismatches. How can we resolve this and disable the calls that omit the `fbgemm` namespace?
- How do we integrate the refactored library into FBGEMM so that tests can write something like `from fbgemm_gpu import create_zch_buffer, zero_collision_hash`?
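On the first question: ops registered through `TORCH_LIBRARY` / `TORCH_LIBRARY_IMPL` are normally resolved under `torch.ops.<namespace>`. The toy registry below is illustrative only (not PyTorch's real dispatcher, and `zero_collision_hash` here is a trivial stand-in); it models the intended end state, where only the namespaced form resolves:

```python
# Toy model of namespaced op lookup (illustrative only; NOT PyTorch's
# real torch.ops dispatcher). Ops resolve under the "fbgemm" namespace
# and nowhere else, which is the behavior asked for above.
class OpNamespace:
    def __init__(self, name: str):
        self._name = name
        self._ops: dict = {}

    def register(self, op_name, fn):
        self._ops[op_name] = fn

    def __getattr__(self, op_name):
        try:
            return self._ops[op_name]
        except KeyError:
            raise AttributeError(f"{self._name}::{op_name} is not registered")


class OpRegistry:
    """Top-level op lookups fail; only namespaced lookups succeed."""

    def __init__(self):
        self.fbgemm = OpNamespace("fbgemm")


ops = OpRegistry()
# Hypothetical stand-in for the real CUDA op, for illustration only:
ops.fbgemm.register("zero_collision_hash", lambda key, size: hash(key) % size)

bucket = ops.fbgemm.zero_collision_hash("id_123", 1000)  # resolves
assert 0 <= bucket < 1000
# ops.zero_collision_hash would raise AttributeError: no top-level alias.
```

Under this model, dropping any top-level registration (so only the `fbgemm`-qualified registration remains) is what disables the un-namespaced calls.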

Differential Revision: D75505020
netlify bot commented Jun 3, 2025:

Deploy Preview for pytorch-fbgemm-docs ready!

🔨 Latest commit: 5034b0e
🔍 Latest deploy log: https://app.netlify.com/projects/pytorch-fbgemm-docs/deploys/683f3f4af9f76300082bad1f
😎 Deploy Preview: https://deploy-preview-4248--pytorch-fbgemm-docs.netlify.app
@facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D75887842
