Update Docker images to latest Ubuntu version #610


Open
vrdn-23 wants to merge 2 commits into main

Conversation

@vrdn-23 commented May 22, 2025

What does this PR do?

The Docker base images haven't been updated in a while, so I was wondering if we could port them over to newer base images and the current Ubuntu LTS version. Let me know if there are any concerns!
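
For reference, the change is essentially a base-image bump in the Dockerfiles, along these lines (the tags here are illustrative, not necessarily the exact ones TEI uses):

```dockerfile
# Before (illustrative): CUDA 12.2 toolchain on Ubuntu 22.04
# FROM nvidia/cuda:12.2.0-devel-ubuntu22.04 AS builder

# After (illustrative): same image family, newer CUDA on an Ubuntu 24.04 LTS base
FROM nvidia/cuda:12.8.0-devel-ubuntu24.04 AS builder
```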

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

cc @Narsil @alvarobartt

@Narsil (Collaborator) commented Jun 2, 2025

What's the rationale here?

Upgrading deps is indeed nice, but CUDA 12.8 is rather new (Jan 2025), so it would make TEI fail to run on older deployments/nodes. Unless it unlocks something, I don't think we should upgrade at the moment.

Ubuntu 24 should be OK.

@vrdn-23 (Author) commented Jun 2, 2025

I was hoping to get us upgraded to the latest CUDA 12.x version, since CUDA is mostly backward compatible within minor releases.
If I understand the link correctly, since the current TEI image is on 12.2, most nodes/deployments will already have the minimum required driver version to run 12.8.

Let me know if I misunderstood something here @Narsil

@Narsil (Collaborator) commented Jun 3, 2025

I think this doesn't hold for the NVIDIA container toolkit: NVIDIA/nvidia-container-toolkit#940
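
For context, the CUDA base images pin a host-driver requirement via an environment variable that the container toolkit enforces when the container starts, which is roughly how a newer base image can refuse to run on older nodes even when CUDA's minor-version compatibility would otherwise apply. A sketch, with an illustrative value:

```dockerfile
# Set by NVIDIA's CUDA base images (the real value lists many driver/brand
# clauses; "cuda>=12.8" is an illustrative simplification). The NVIDIA
# container toolkit checks this against the host driver at startup and
# refuses to run the container if the constraint isn't satisfied.
ENV NVIDIA_REQUIRE_CUDA="cuda>=12.8"
```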

I haven't personally seen this arise in a while, since we try hard to keep up with newer versions of everything, but the CUDA version of the node has caused issues in the past in clusters I manage.

Is there any particular reason for wanting to upgrade? (The stance here is: if it's not broken, there's no need to fix it, and we can take advantage of a later minor release to do such potentially breaking version upgrades.)

@vrdn-23 (Author) commented Jun 3, 2025

@Narsil Thanks for pointing out that issue. Forward compatibility is not something I had considered.

Is there any particular reason for wanting to upgrade?

I think the rationale is just to ensure that we don't fall too far behind on dependency upgrades. CUDA 12.2 was released in June 2023, and the driver version shipped with 12.2 is not really compatible with some of the newer GPUs coming out (see the AWS EC2 instance/NVIDIA-driver compatibility matrix).

I'm fine with reverting the PR to just the Ubuntu update, and we can maybe update the CUDA version in a later major TEI release (1.8.0 or 2.0?), but the current change is still, technically, only a minor version update of CUDA itself. So I'm a little ambivalent/curious about how this would fit into the TEI release lifecycle.

@Narsil (Collaborator) commented Jun 3, 2025

1.8 is fine for that kind of upgrade.

If you just update Ubuntu, I will definitely merge as-is; otherwise we can leave it as-is and I'll merge when 1.8 hits (there are no plans just yet; a release usually happens when something significant lands, not necessarily a breaking change).

Again, I think updating regularly is welcome in general, but having been bitten in the past, and seeing no obvious reason right now, I tend to delay such upgrades rather than include them by default.

Thanks a lot for the PR regardless.

@vrdn-23 (Author) commented Jun 3, 2025

@Narsil Thanks for the update! I'll revert the CUDA changes then, so this can make the 1.7.1 release.

@vrdn-23 (Author) commented Jun 3, 2025

Oops, looks like I was too late! Either way, I can keep track of this and raise another PR when the time is right to update to the latest CUDA version. Thanks for the feedback and the discussion!

@vrdn-23 changed the title from "Update Docker images to latest CUDA version and Ubuntu version" to "Update Docker images to latest Ubuntu version" on Jun 4, 2025
@polarathene commented:
For such changes regarding CUDA, it's probably better to measure any actual gain from bumping the minimum version. Likewise for the concern about losing support: it's better to have an actual case of broken compatibility in hand.

If someone does have one, I'd appreciate that, but my understanding of cudarc usage is the following:

  • dynamic-loading (default) attempts to find libcuda.so or equivalent at runtime. A useful choice when you don't want to force CUDA as a dependency just to launch your program (perhaps it supports CPU or ROCm instead, for example). If the lookup fails, a panic is triggered (or a less helpful failure if you've built with panic = "abort").
  • dynamic-linking emits generic links to the libs. If there is no version pinning there (I don't think there is with how cudarc links), then prior to initializing your program the system linker will try to resolve these libs, which, unlike with dynamic-loading, you can actually list via ldd / patchelf --print-needed (see the sketch after this list). If a dep is missing, you'll get a failure message on the terminal before any handover to your app (helpful if you've built with panic = "abort").
  • static-linking embeds the .a static libs, bloating the binary considerably, but still requires a dynamic link to libcuda.so. That dynamic link is tied to the CUDA driver version, so there could be compatibility concerns there.
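
As a rough sketch of how these modes are selected and how the dynamic-linking case can be inspected (the cargo feature wiring below is an assumption about a consumer forwarding cudarc's feature flags, not TEI's actual build invocation):

```dockerfile
# Hypothetical build step: choose cudarc's linking mode via cargo features
# (assumes the binary crate forwards cudarc's dynamic-linking feature).
RUN cargo build --release --no-default-features --features dynamic-linking

# With dynamic-linking the CUDA libs become DT_NEEDED entries, so they can be
# listed without running the program; dynamic-loading leaves no such trace.
RUN ldd target/release/text-embeddings-router | grep -iE 'cuda|cublas' || true
```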

That said, there is no actual build of those common CUDA libs regardless of the choice. static-linking doesn't really benefit from LTO (you can have cudarc opt into the other CUDA features but only have a small program that uses the driver feature, and still end up with an approx. 700 MB binary). Everything is already pre-compiled, and I don't think there's any build-time optimization involved with cudarc; it's just providing an API? (Unless it has some conditionals to prefer newer calls when the target CUDA version is high enough?)

I had assumed, then, that the only real compatibility concern was needing an API call that wouldn't build because it isn't available in a lower CUDA version, which is a more obvious reason for a version bump... or, as this project already does in its Docker image builds, nvprune stripping archs from the NVIDIA-supplied libs to minimize size... but that's completely unrelated to cudarc: if your project requires features only available from sm_80 / compute_80 onwards, that's the actual minimum target, and the minimum CUDA version is tied to that?
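
For reference, that nvprune step looks roughly like this (the paths and arch list are assumptions, not TEI's actual values):

```dockerfile
# Illustrative: drop device code for every arch except the targeted ones,
# which shrinks NVIDIA's fat static libs considerably.
RUN nvprune --generate-code code=sm_80 --generate-code code=sm_90 \
        /usr/local/cuda/lib64/libcublas_static.a \
        -o /usr/local/cuda/lib64/libcublas_static.a
```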

As such, bumping the CUDA version for the build, be that in cudarc or in the Ubuntu image here, shouldn't make any notable difference in support, provided the linked CUDA libs cover the expected archs. I haven't checked, but I assume NVIDIA EOLs arch support within their images, so sm_75 could be missing, for example.


Have I understood that all correctly?

There probably isn't much benefit to static-linking if you're distributing within a container that can provide the libs for dynamic linking. It should only benefit distribution outside the container, in other environments.

In both cases libcuda.so is provided by the host system at runtime (the one in the image isn't used except as a stub for linking).
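
A minimal sketch of what that looks like at build time (paths assume NVIDIA's standard toolkit layout):

```dockerfile
# Build-time only: let the linker resolve -lcuda against the stub shipped in
# the toolkit image; it satisfies the link but contains no real driver code.
ENV LIBRARY_PATH=/usr/local/cuda/lib64/stubs:$LIBRARY_PATH

# At runtime the NVIDIA container toolkit mounts the host's real libcuda.so
# (the driver library) into the container, and that is what gets used.
```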
