[ci] [CUDA] Switch to GitHub runner for GPU CI #6958
Conversation
I'm seeing quite a few intermittent issues with the GitHub GPU runner not being able to access the GPU, either from the start or partway through the job, while other times it runs through fine. I've raised an issue with GitHub support to look at it. For reference, those two might be related:
Oh wow, thank you!!
It would be AWESOME to be able to use GitHub-hosted runners for the CUDA jobs instead. It should be fine that those only have T4s... with GPU CI here, we're really just testing that the CUDA version of the library can at least be built and that the tests pass... we haven't had the resources to try to test coverage across different GPU architectures, for example.
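To be concrete about what "the tests pass" means here, below is a minimal sketch (illustrative only, not the actual test suite, and assuming the CUDA build of the lightgbm Python package is installed and a GPU is visible to the job) of the kind of smoke test these jobs care about:

```python
# Minimal sketch: train a tiny model on the GPU to confirm the CUDA build works.
# Assumes the CUDA-enabled lightgbm Python package is installed and a GPU is visible.
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(42)
X = rng.standard_normal((1_000, 20))
y = rng.integers(0, 2, size=1_000)

params = {
    "objective": "binary",
    "device": "cuda",  # raises a fatal error if no CUDA-capable device is detected
    "verbose": -1,
}
dtrain = lgb.Dataset(X, label=y)
booster = lgb.train(params, dtrain, num_boost_round=10)
print(booster.num_trees())
```

A T4 is plenty for that kind of check; we're not exercising architecture-specific code paths.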
I'm seeing quite a few intermittent issues with the GitHub GPU runner in not being able to access the GPU either from the beginning or sometime during the job
Yeah, I looked at that failed CUDA 11 job on the most recent run and I see tons of these:
[LightGBM] [Fatal] [CUDA] no CUDA-capable device is detected /tmp/pip-req-build-c2yfcsg8/src/io/cuda/cuda_column_data.cpp 18
I don't see any obvious root causes in the logs. I think you're right to suspect that it's a problem with the runner itself.
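If the lost-GPU failures keep showing up, one option (hypothetical, not part of this PR) would be a quick visibility check at the start of the job, so a runner that has lost its GPU fails with an obvious message rather than deep inside the test run. A sketch, assuming nvidia-smi is on the runner's PATH:

```python
# Fail the job early and loudly if the runner cannot see a CUDA-capable device.
import subprocess
import sys

try:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,driver_version", "--format=csv,noheader"],
        check=True,
        capture_output=True,
        text=True,
        timeout=60,
    )
except (FileNotFoundError, subprocess.CalledProcessError, subprocess.TimeoutExpired) as exc:
    sys.exit(f"No usable GPU visible to this runner: {exc}")

print(f"GPU(s) visible to the runner: {out.stdout.strip()}")
```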
I've raised an issue with GitHub support to look at it.
Was this a private support issue? If not, could you link it so I could subscribe?
Yes, it's a private "premium" support issue. Typically those lead to faster outcomes, but it might still take a while to diagnose; it doesn't look like a super clear issue to me.
Have you been able to narrow it down to a subset of the jobs? For example, if it's only the CUDA 11.8 job, we could consider:
Even if we had to drop CUDA 11 CI, I think it'd be worth it in exchange for removing the manual runner maintenance by Microsoft.
So far the 12.2.2 source build has always succeeded, while the 12.8.0 wheel failed once and succeeded once. The CUDA 11.8.0 pip build has always failed. I'll run a few more attempts, but I'm pretty sure it's noise, as you sometimes see that.
On the most recent run (build link), only the CUDA 11.8 job failed... and that was with the one test failure from #6703, not the kinds of issues described above like losing connection to the GPU. I'm going to try a few more re-runs and will keep updating this comment.
@letmaik it looks to me like this is working!!!
Saw that over a few re-runs: #6958 (comment)
And I think it's totally fine to run LightGBM's tests on T4s.
I think this should be merged, it's a really nice improvement for the long-term health of the project.
The other CI failures are unrelated issues that have accumulated over the last week:
- failing docs build: #6978
- tests incompatible w/ pandas 3.0: #6980
- Azure DevOps pool auth issues: #6949 (comment)
@jameslamb Let's stress-test this a little more to make sure it's really working reliably. Maybe do 5 more run attempts? Unfortunately, I haven't heard back from GitHub Enterprise Support on the original issue yet.
Ok sure! I can do that. I think I'll do the next round with new empty commits instead of clicking "re-run all jobs", just in case that affects anything.
🚀
After 5 more runs, this still looks to be working well! I've updated #6958 (comment). I think we can and should merge this; what do you think, @letmaik? Either way, please do let us know if you ever get a response on your GitHub support ticket.
@jameslamb Alright, let's do it. And even if there's still the occasional failure in the future, it's easy to re-run a job.
Alright great, thanks!
@jameslamb I got a response from GitHub support: they said they couldn't reproduce the issue. They mentioned that they released a new NVIDIA image last week, version 20250730.36.1, which updates the GPU driver, but I checked all the runs we had and they all used the older image 20250716.20.1, so it's not related to that. If you observe any issues with lost GPUs, please ping me and I can engage with support again.
Ok will do, thanks again for all your help! |
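If lost-GPU failures do come back, it would help a future support ticket to know exactly which image and driver each job ran on. A hypothetical logging step for that (assuming the GitHub-hosted image exposes the ImageVersion environment variable, as the runner images typically do, and that nvidia-smi is on PATH):

```python
# Record the runner image and NVIDIA driver versions so failures can be
# correlated with image rollouts like 20250716.20.1 -> 20250730.36.1.
import os
import subprocess

image_version = os.environ.get("ImageVersion", "<not set>")
result = subprocess.run(
    ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
    capture_output=True,
    text=True,
)
driver_version = result.stdout.strip() or "<no GPU visible>"
print(f"runner image version: {image_version}")
print(f"NVIDIA driver version: {driver_version}")
```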
This PR switches to a GitHub hosted runner for the GPU CI. If all works ok, this will avoid any dependence on Microsoft managing the internal runners.