Conversation
✅ Deploy Preview for pytorch-fbgemm-docs ready!
This pull request was exported from Phabricator. Differential Revision: D75505020
Force-pushed 684b319 to a29cd2a
Summary:
Pull Request resolved: pytorch#4214
X-link: facebookresearch/FBGEMM#1290

Open-source the FBGEMM CUDA kernel for the MPZCH feature.

### Major changes

- Create a folder named `faster_hash` under the `fbgemm/fbgemmgpu/src` folder.
- Copy the following files to the created folder from `fbsource/fbcode/caffe2/torch/fb/retrieval`:
  - faster_hash.cpp
  - faster_hash.cu
  - common_utils.cuh
- Revise `faster_hash.cpp`:
  - Change `namespace fb` to `namespace fbgemm_gpu`.
  - Comment out `using namespace torch::fb::turborec;`.
  - Change `TORCH_LIBRARY_IMPL(fb, ...)` to `TORCH_LIBRARY_IMPL(fbgemm, ...)`.
  - Fix namespace resolution issues caused by the namespace change.
- Revise `faster_hash.cu`:
  - Change `namespace fb` to `namespace fbgemm_gpu`.
  - Change `TORCH_LIBRARY_IMPL(fb, ...)` to `TORCH_LIBRARY_IMPL(fbgemm, ...)`.
  - Fix namespace resolution issues caused by the namespace change.
- Revise the `common_utils.cuh` file:
  - Change `namespace fb` to `namespace fbgemm_gpu`.
- Add a BUCK file to compile the C++ and CUDA libraries.
- Copy the `faster_hash_test.py` file to the `fbgemm/fbgemm_gpu/test` folder.
- Add a `python_unittest` section for `faster_hash_test` in the BUCK file under the `test` folder.
- In the `faster_hash_test.py` file:
  - Load the `faster_hash` libraries with the `torch.ops.load_library` API.
  - Replace all `torch.ops.fb` calls with `torch.ops.fbgemm`.
  - Follow the other test files to add `opensource` and GPU-availability checks.

### Questions

- After refactoring, the API calls `torch.ops.create_zch_buffer`, `torch.ops.zero_collision_hash`, `torch.ops.fbgemm.zero_collision_hash`, and `torch.ops.fbgemm.create_zch_buffer` are all valid, but `torch.ops.create_zch_buffer` and `torch.ops.zero_collision_hash` may incur parameter mismatches. How can this be resolved so that calls without the `fbgemm` prefix are disabled?
- How can the refactored library be integrated into fbgemm so the test can call something like `from fbgemm_gpu import create_zch_buffer, zero_collision_hash`?
Differential Revision: D75505020
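The test-file changes described in the summary (library loading plus `opensource` and GPU-availability guards) can be sketched roughly as follows. This is an illustrative sketch only: the shared-library path, test-class name, and op name checked are hypothetical stand-ins, not taken from the actual PR.

```python
import unittest

# Guard on torch availability so this file imports cleanly even in an
# environment where torch (or the internal build) is absent.
try:
    import torch
    TORCH_AVAILABLE = True
except ImportError:
    TORCH_AVAILABLE = False

# GPU-availability check, mirroring the pattern used by other test files.
GPU_AVAILABLE = TORCH_AVAILABLE and torch.cuda.is_available()


@unittest.skipIf(not GPU_AVAILABLE, "requires torch with CUDA available")
class FasterHashTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Hypothetical .so path; the real artifact name comes from the BUCK target.
        torch.ops.load_library("libfaster_hash.so")

    def test_ops_registered_under_fbgemm(self):
        # After the namespace rename, ops should resolve under torch.ops.fbgemm.
        self.assertTrue(hasattr(torch.ops.fbgemm, "zero_collision_hash"))
```

When torch or CUDA is unavailable the whole class is skipped rather than erroring at import time, which is what lets the same test file run in both the internal and open-source environments.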
Force-pushed a29cd2a to 2c3846e
Force-pushed 2c3846e to 2b54f80
Force-pushed 2b54f80 to d94acaa
Force-pushed d94acaa to 3d36968
Force-pushed 3d36968 to a273543
Force-pushed a273543 to d203230
Force-pushed d203230 to 4490476
Force-pushed 4490476 to e35f85d
Summary:
Pull Request resolved: pytorch#4214
X-link: facebookresearch/FBGEMM#1290

Open-source the FBGEMM CUDA kernel for the MPZCH feature.

### Major changes

- Create a folder named `faster_hash` under the `fbgemm/fbgemmgpu/src` folder.
- Copy the following files to the created folder from `fbsource/fbcode/caffe2/torch/fb/retrieval`:
  - faster_hash.cpp
  - faster_hash.cu
  - common_utils.cuh
- Revise `faster_hash.cpp`:
  - Change `namespace fb` to `namespace fbgemm_gpu`.
  - Comment out `using namespace torch::fb::turborec;`.
  - Change `TORCH_LIBRARY_IMPL(fb, ...)` to `TORCH_LIBRARY_IMPL(fbgemm, ...)`.
  - Fix namespace resolution issues caused by the namespace change.
- Revise `faster_hash.cu`:
  - Change `namespace fb` to `namespace fbgemm_gpu`.
  - Change `TORCH_LIBRARY_IMPL(fb, ...)` to `TORCH_LIBRARY_IMPL(fbgemm, ...)`.
  - Fix namespace resolution issues caused by the namespace change.
- Revise the `common_utils.cuh` file:
  - Change `namespace fb` to `namespace fbgemm_gpu`.
- Add a BUCK file to compile the C++ and CUDA libraries.
- Copy the `faster_hash_test.py` file to the `fbgemm/fbgemm_gpu/test` folder.
- Add a `python_unittest` section for `faster_hash_test` in the BUCK file under the `test` folder.
- In the `faster_hash_test.py` file:
  - Load the `faster_hash` libraries with the `torch.ops.load_library` API.
  - Replace all `torch.ops.fb` calls with `torch.ops.fbgemm`.
  - Follow the other test files to add `opensource` and GPU-availability checks.
- Open-source the `murmur_hash3` function:
  - Write wrappers for the `murmur_hash3` function in the `faster_hash.cu` and `faster_hash.cpp` files, register the wrapper functions, and expose them to external calls.
  - Add a test for the `murmur_hash3` function to validate that the hashed values on CPU and GPU are identical for the same input value.

Reviewed By: ionuthristodorescu, spcyppt

Differential Revision: D75505020
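For context on the `murmur_hash3` function being open-sourced above, here is a minimal pure-Python sketch of the MurmurHash3 algorithm (x86 32-bit variant). The actual FBGEMM wrappers are C++/CUDA, and which MurmurHash3 variant the kernel uses is an assumption here; this sketch only illustrates why the CPU/GPU consistency test makes sense: the hash is fully deterministic given the input bytes and seed, so both backends must produce identical values.

```python
def murmur3_32(data: bytes, seed: int = 0) -> int:
    """MurmurHash3 x86_32: deterministic 32-bit hash of `data` under `seed`."""
    c1, c2 = 0xCC9E2D51, 0x1B873593
    h = seed & 0xFFFFFFFF
    n = len(data)

    # Body: process 4-byte little-endian blocks.
    for i in range(0, n - n % 4, 4):
        k = int.from_bytes(data[i:i + 4], "little")
        k = (k * c1) & 0xFFFFFFFF
        k = ((k << 15) | (k >> 17)) & 0xFFFFFFFF  # rotl32(k, 15)
        k = (k * c2) & 0xFFFFFFFF
        h ^= k
        h = ((h << 13) | (h >> 19)) & 0xFFFFFFFF  # rotl32(h, 13)
        h = (h * 5 + 0xE6546B64) & 0xFFFFFFFF

    # Tail: fold in the remaining 1-3 bytes, if any.
    tail = data[n - n % 4:]
    k = 0
    if len(tail) >= 3:
        k ^= tail[2] << 16
    if len(tail) >= 2:
        k ^= tail[1] << 8
    if len(tail) >= 1:
        k ^= tail[0]
        k = (k * c1) & 0xFFFFFFFF
        k = ((k << 15) | (k >> 17)) & 0xFFFFFFFF
        k = (k * c2) & 0xFFFFFFFF
        h ^= k

    # Finalization: mix the length in and avalanche the bits.
    h ^= n
    h ^= h >> 16
    h = (h * 0x85EBCA6B) & 0xFFFFFFFF
    h ^= h >> 13
    h = (h * 0xC2B2AE35) & 0xFFFFFFFF
    h ^= h >> 16
    return h
```

Against the standard test vectors, `murmur3_32(b"", 0)` is `0` and `murmur3_32(b"", 1)` is `0x514E28B7`; a CPU/GPU consistency test in the same spirit would simply compare the kernel's output on both backends for identical inputs.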
This pull request has been merged in cf90aac.