gabeweisz (Contributor) commented Jan 14, 2026

Description

When running under Slurm, jax.distributed.initialize only attaches to one GPU per process unless local_device_ids is passed.
This PR parses the standard CUDA_VISIBLE_DEVICES environment variable to set local_device_ids,
falling back to the default behavior if CUDA_VISIBLE_DEVICES is unset or cannot be parsed.

We have been using this internally at AMD for some time.

This change will not affect TPUs in any way.

FIXES: #865 (Cannot see multiple GPUs when using Slurm)
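
As a rough illustration of the approach described above, here is a minimal sketch. The helper name and its placement are illustrative only and do not reflect the actual diff in this PR:

```python
import os
import jax


def _local_device_ids_from_cuda_visible_devices():
    """Parse CUDA_VISIBLE_DEVICES into a list of local device ids, or None."""
    visible = os.environ.get("CUDA_VISIBLE_DEVICES")
    if visible is None:
        return None  # variable unset: keep JAX's default behavior
    try:
        # CUDA_VISIBLE_DEVICES is a comma-separated list of ordinals, e.g. "0,1,2,3".
        return [int(d) for d in visible.split(",")]
    except ValueError:
        # Entries may be GPU UUIDs or otherwise unparseable: fall back to the default.
        return None


# Passing local_device_ids=None preserves the previous (single-GPU) behavior;
# under Slurm the coordinator address, process count, and process id are auto-detected.
jax.distributed.initialize(
    local_device_ids=_local_device_ids_from_cuda_visible_devices()
)
```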

Tests

Run a job under Slurm on a machine with multiple GPUs, both with and without CUDA_VISIBLE_DEVICES set.
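
Not part of this PR, but one quick way to check the result on each Slurm process is to print the device counts after initialization (assuming Slurm's automatic cluster detection supplies the coordinator details):

```python
import jax

# Under Slurm, coordinator address, process count, and process id are auto-detected.
jax.distributed.initialize()

print(f"process {jax.process_index()}: "
      f"{jax.local_device_count()} local / {jax.device_count()} global devices")
```

With this change and CUDA_VISIBLE_DEVICES listing several GPUs, each process should report more than one local device.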

Checklist

Before submitting this PR, please make sure (put X in square brackets):

  • I have performed a self-review of my code. For an optional AI review, add the gemini-review label.
  • I have necessary comments in my code, particularly in hard-to-understand areas.
  • I have run end-to-end tests and provided workload links above if applicable.
  • I have made or will make corresponding changes to the doc if needed, including adding new documentation pages to the relevant Table of Contents (toctree directive) as explained in our documentation.

