| 16 Oct 2025 |
Ari Lotter | lmk if I can help - 2.9.0 (nightly) is in active use in the above project | 00:46:36 |
connor (burnt/out) (UTC-8) | Looks like it should be very doable to package — it’s a redist, so it shouldn’t be too bad, and we can re-use all the helpers we’ve got for that. Will take a closer look tomorrow | 04:53:50 |
Niclas Overby Ⓝ | Is there something like rust-overlay for CUDA, so you can specify exactly which CUDA version to use? | 11:39:49 |
connor (burnt/out) (UTC-8) | You can currently specify which CUDA version to use, as long as it’s a version supported in-tree, by using an overlay or the cudaPackages.pkgs pattern (see the Nixpkgs manual) | 14:08:11 |
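For illustration, a minimal sketch of the overlay approach mentioned above, assuming a nixpkgs pin that ships cudaPackages_12_4 in-tree (the version attribute is only an example; substitute whichever in-tree set your pin actually provides):

```nix
# Sketch: pin the default cudaPackages scope to one specific in-tree release.
let
  pkgs = import <nixpkgs> {
    config = {
      allowUnfree = true;  # CUDA redistributables are unfree
      cudaSupport = true;
    };
    overlays = [
      (final: prev: {
        # Every cudaSupport-enabled package in this pkgs instance now builds
        # against the 12.4 set instead of whatever the default alias points to.
        cudaPackages = final.cudaPackages_12_4;
      })
    ];
  };
in
pkgs.cudaPackages.cuda_nvcc  # or a cudaSupport-enabled consumer such as pkgs.python3Packages.torch
```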
connor (burnt/out) (UTC-8) | Arbitrary versions aren’t doable with the current state of things because of the amount of patching required for each package (which of course varies by version) | 14:10:31 |
connor (burnt/out) (UTC-8) | Plus, some of these binaries aren’t stand-alone — NVCC, for example, requires a host compiler.
So if we wanted to support arbitrary CUDA versions, we’d need to somehow know ahead of time which host compilers and versions are supported by all NVCC releases (we have a list we maintain in-tree, but it’s updated manually by reading release notes).
And then we’d need to use the appropriate version of the host compiler… but what if it’s not packaged in Nixpkgs? CUDA 11 releases used GCC 9, 10, and 11, and those aren’t maintained in-tree any more. | 14:14:59 |
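To make the host-compiler coupling concrete, a minimal Nix sketch, assuming the backendStdenv attribute that the in-tree cudaPackages sets expose (a stdenv whose C++ compiler is one the bundled nvcc is known to accept); the saxpy package itself is hypothetical:

```nix
# Sketch: an out-of-tree CUDA version would need an equivalent of backendStdenv,
# i.e. a stdenv whose GCC major version is on that nvcc release's supported list.
{ cudaPackages }:
cudaPackages.backendStdenv.mkDerivation {
  pname = "saxpy-example";  # hypothetical package, for illustration only
  version = "0.1";
  src = ./.;                # assumed to contain saxpy.cu
  nativeBuildInputs = [ cudaPackages.cuda_nvcc ];
  buildInputs = [ cudaPackages.cuda_cudart ];
  buildPhase = ''
    # nvcc compiles device code itself but hands host-side C++ to the stdenv's
    # g++; a GCC major version outside nvcc's supported range fails here with an
    # "unsupported GNU version" style error.
    nvcc -o saxpy saxpy.cu
  '';
  installPhase = ''
    install -Dm755 saxpy $out/bin/saxpy
  '';
}
```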
connor (burnt/out) (UTC-8) | I’ve been working on the ability to extend the CUDA package set and make new ones for out-of-tree users, but it’s generally non-trivial and requires a great deal of familiarity | 14:17:53 |
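Roughly what extending the set looks like from the outside today, as a hedged sketch: it assumes cudaPackages is a makeScope-style fixed-point set (so it exposes overrideScope and an in-scope callPackage), and ./libnvshmem.nix is a hypothetical expression standing in for whatever you want to add or override.

```nix
# Sketch of an overlay that extends the CUDA package set out of tree.
final: prev: {
  cudaPackages = prev.cudaPackages.overrideScope (cudaFinal: cudaPrev: {
    # Add (or override) a member of the scope; anything else in the scope that
    # refers to it through the fixed point picks up the change automatically.
    libnvshmem = cudaFinal.callPackage ./libnvshmem.nix { };
  });
}
```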
| 17 Oct 2025 |
connor (burnt/out) (UTC-8) | The CUDA 13 PR now has libnvshmem, built from source (that was not fun)
It does not have nvshmem4py since that’s gonna be irritating to build and requires stuff we don’t have packaged yet | 01:16:43 |