remove dependency on cugraph-ops #99
@@ -18,7 +18,6 @@ files:
- depends_on_cugraph
- depends_on_cudf
- depends_on_dask_cudf
- depends_on_pylibcugraphops
- depends_on_cupy
- depends_on_pytorch
- depends_on_dgl
@@ -43,7 +42,6 @@ files:
- cuda_version
- docs
- py_version
- depends_on_pylibcugraphops
test_cpp:
output: none
includes:
@@ -114,7 +112,6 @@ files:
table: project
includes:
- depends_on_cugraph
- depends_on_pylibcugraphops
- python_run_cugraph_dgl
py_test_cugraph_dgl:
output: pyproject
@@ -140,7 +137,6 @@ files:
table: project
includes:
- depends_on_cugraph
- depends_on_pylibcugraphops
- depends_on_pyg
- python_run_cugraph_pyg
py_test_cugraph_pyg:
@@ -164,7 +160,6 @@ files:
includes:
- checks
- depends_on_cugraph
- depends_on_pylibcugraphops
- depends_on_dgl
- depends_on_pytorch
- cugraph_dgl_dev
@@ -178,7 +173,6 @@ files:
- checks
- depends_on_cugraph
- depends_on_pyg
- depends_on_pylibcugraphops
- depends_on_pytorch
- cugraph_pyg_dev
- test_python_common
@@ -404,7 +398,6 @@ dependencies:
common:
- output_types: [conda]
packages:
- pytorch>=2.3
- torchdata
- pydantic
specific:
@@ -429,18 +422,16 @@ dependencies:
- *tensordict
- {matrix: null, packages: [*pytorch_pip, *tensordict]}
- output_types: [conda]
# PyTorch will stop publishing conda packages after 2.5.
# Consider switching to conda-forge::pytorch-gpu.
# Note that the CUDA version may differ from the official PyTorch wheels.
matrices:
- matrix: {cuda: "12.1"}
packages:
- pytorch-cuda=12.1
- matrix: {cuda: "12.4"}
- matrix: {cuda: "12.*"}
packages:
- pytorch-cuda=12.4
- matrix: {cuda: "11.8"}
- pytorch-gpu>=2.3=*cuda120*
This is already using conda-forge, I think?

For compatibility reasons we may want to stick to older builds of pytorch-gpu (built with CUDA 12.0).

Yes, this PR switches to the pytorch-gpu package.
Oh, I had not noticed that the most recent build (_306) is only against 12.6. I agree with keeping 12.0 for better backward compatibility. However, the CUDA 11 build seems missing. Do we have details on their build matrix?

CUDA 11 builds were dropped recently. You may need an older version for CUDA 11 compatibility. I also saw this while working on rapidsai/cudf#17475.

For completeness, the latest CUDA 12.0 build was also 2.5.1 build 303.

Got it, thanks. It shouldn't be a dealbreaker unless another test component ends up requiring a newer version of torch on CUDA 11 down the line.
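For context (my addition, not part of the thread): one quick, hedged way to inspect the build matrix the comments refer to is to list the pytorch-gpu builds conda-forge publishes; the CUDA variant is encoded in each build string.

```sh
# List conda-forge pytorch-gpu builds; the build string shows the CUDA variant
# (e.g. cuda118*, cuda120*, cuda126*). Illustrative check, not from the PR.
conda search -c conda-forge "pytorch-gpu>=2.3"
```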
Note: many CUDA packages, including RAPIDS, are explicitly designed not to have a hard requirement on a CUDA driver being present, so they can still be installed on machines without a GPU. It looks like if we just use an unpinned pytorch-gpu spec, the solver picks a suitable build based on whether it detects a driver:

- CUDA 11 driver present: (solver output elided)
- CUDA 12 driver present: (solver output elided)
- No CUDA driver present: (solver output elided)
This should be sufficient. Let's try using just a plain pytorch-gpu requirement, without the CUDA build-string pins.

There are two benefits here, if my proposal above works.
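As an aside (my addition, not part of the thread): conda exposes the detected driver to the solver as the `__cuda` virtual package, which is what makes this driver-based build selection possible. A minimal way to check what the solver sees on a given machine:

```sh
# Show the virtual packages conda detects on this machine.
# On a host with a CUDA 12 driver this typically includes a __cuda entry.
conda info | grep -A5 "virtual packages"
```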
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. I agree, let's try with That opens up a risk that there may be situations where the solver chooses a CPU-only version because of some conflict, but hopefully
I looked into this today... we shouldn't have needed to specify CUDA versions in build strings for pytorch-gpu (two supporting screenshots elided). So here in dependencies.yaml we should be able to drop them.

@jameslamb These simplifications to drop build string info are only possible now with conda-forge, iirc. I believe more complexity was required when we used the pytorch channel, and we probably just carried that over when switching to conda-forge.
- matrix: {cuda: "11.*"} | ||
packages: | ||
- pytorch-cuda=11.8 | ||
# pytorch only supports certain CUDA versions... skip | ||
# adding pytorch-cuda pinning if any other CUDA version is requested | ||
- pytorch-gpu>=2.3=*cuda118* | ||
- matrix: | ||
packages: | ||
|
||
|
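To make the proposal from the review thread concrete, here is a hedged sketch (my wording, not the PR's; the environment name is made up) of what relying on the solver instead of a build-string pin could look like from the command line:

```sh
# Hypothetical dry run: with no *cuda120*/*cuda118* build-string pin, the
# solver is free to pick whichever pytorch-gpu build matches the machine's
# detected CUDA driver (or a CPU-capable build if no driver is present).
conda create --dry-run -n gnn-test -c conda-forge "pytorch-gpu>=2.3"
```

If a specific CUDA line still needs to be enforced, adding a `cuda-version` constraint (e.g. `cuda-version=12.0`) is one conda-forge-style alternative to pinning build strings.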
@@ -615,31 +606,6 @@ dependencies:
- pylibcugraph-cu11==25.2.*,>=0.0.0a0
- {matrix: null, packages: [*pylibcugraph_unsuffixed]}
depends_on_pylibcugraphops:
common:
- output_types: conda
packages:
- &pylibcugraphops_unsuffixed pylibcugraphops==25.2.*,>=0.0.0a0
- output_types: requirements
packages:
# pip recognizes the index as a global option for the requirements.txt file
- --extra-index-url=https://pypi.nvidia.com
- --extra-index-url=https://pypi.anaconda.org/rapidsai-wheels-nightly/simple
specific:
- output_types: [requirements, pyproject]
matrices:
- matrix:
cuda: "12.*"
cuda_suffixed: "true"
packages:
- pylibcugraphops-cu12==25.2.*,>=0.0.0a0
- matrix:
cuda: "11.*"
cuda_suffixed: "true"
packages:
- pylibcugraphops-cu11==25.2.*,>=0.0.0a0
- {matrix: null, packages: [*pylibcugraphops_unsuffixed]}
depends_on_cupy:
common:
- output_types: conda