NixOS CUDA | 288 Members | 58 Servers
| CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda |
| Sender | Message | Time |
|---|---|---|
| 15 Oct 2025 | ||
| I could probably add it to the CUDA 13 PR; nvshmem is one of the dependencies of libcublasmp I didn’t try to package | 20:53:02 | |
| 16 Oct 2025 | ||
| lmk if i can help - 2.9.0 (nightly) is in active usage in the above project | 00:46:36 | |
| Looks like it should be very doable to package — it’s a redist so shouldn’t be too bad and can re-use all the helpers we’ve got for that. Will take a closer look tomorrow | 04:53:50 | |
| Is there something like rust-overlay for CUDA, so you can specify exactly which CUDA version to use? | 11:39:49 | |
| You can already specify which CUDA version to use, as long as it's a version supported in-tree, by using an overlay or the cudaPackages.pkgs pattern (see the Nixpkgs manual) | 14:08:11 | |
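For illustration, a minimal sketch of the overlay approach; it assumes the nixpkgs revision in use actually ships a `cudaPackages_12_4` attribute (substitute whichever in-tree version set you need):

```nix
# Sketch only: pin the default cudaPackages to an in-tree release.
# cudaPackages_12_4 is an assumption; use any version set that exists
# in your nixpkgs revision.
import <nixpkgs> {
  config = {
    allowUnfree = true;   # CUDA redistributables are unfree
    cudaSupport = true;
  };
  overlays = [
    (final: prev: {
      cudaPackages = final.cudaPackages_12_4;
    })
  ];
}
```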
| Arbitrary versions aren’t something doable with the current state of things because of the amount of patching required for each package (which varies by version of course) | 14:10:31 | |
| Plus, some of these binaries aren’t stand-alone — NVCC for example requires a host compiler. So if we wanted to support arbitrary CUDA versions, we’d need to somehow know ahead of time which host compilers and versions are supported by all NVCC releases (we have a list we maintain in tree but it’s updated manually by reading release notes). And then we’d need to use the appropriate version of the host compiler… but what if it’s not packaged in Nixpkgs? CUDA 11 releases used GCC 9, 10, and 11 and those aren’t maintained in-tree any more. | 14:14:59 | |
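As a hedged illustration of how that host-compiler coupling shows up in practice: in-tree CUDA derivations build with cudaPackages.backendStdenv, a stdenv pinned to a GCC that the selected NVCC supports, so a downstream package typically looks something like the sketch below (package names are hypothetical):

```nix
# Sketch only: build with the host compiler matching the chosen NVCC.
# backendStdenv, cuda_nvcc and cuda_cudart are the attribute names used
# in current nixpkgs; older revisions may differ.
{ cudaPackages }:

cudaPackages.backendStdenv.mkDerivation {
  pname = "my-cuda-app";   # hypothetical package
  version = "0.1.0";
  src = ./.;
  nativeBuildInputs = [ cudaPackages.cuda_nvcc ];
  buildInputs = [ cudaPackages.cuda_cudart ];
}
```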
| I’ve been working on the ability to extend the CUDA package set and make new ones for out of tree users, but it’s generally non-trivial and requires a great deal of familiarity | 14:17:53 | |
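For out-of-tree extension as it stands today, the usual (if fiddly) entry point is that cudaPackages is a fixed-point scope, so it can be layered onto with overrideScope; a minimal sketch with a hypothetical package file:

```nix
# Sketch only: add an out-of-tree package to the CUDA scope.
# Assumes cudaPackages is in scope; my-extra-lib.nix is hypothetical.
# overrideScope is the current name (older nixpkgs spelled it overrideScope').
cudaPackages.overrideScope (final: prev: {
  my-extra-lib = final.callPackage ./my-extra-lib.nix { };
})
```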
| 17 Oct 2025 | ||
| The CUDA 13 PR now has libnvshmem, built from source (that was not fun). It does not have nvshmem4py since that’s gonna be irritating to build and requires stuff we don’t have packaged yet | 01:16:43 | |
| Lost about an hour of my life to figuring out that while they “support CMake” they don’t really support CMake. They do a full configure and build of a CMake project during a different project’s build and don’t thread arguments through properly, so there’s a fun note like this: https://github.com/NixOS/nixpkgs/blob/a32200680f4a5511fbc9456ff0fa689a0af12dac/pkgs/development/cuda-modules/packages/libnvshmem.nix#L104 | 01:19:55 | |
| As part of this I also packaged gdrcopy (but not the driver bit, since I’m not sure what the best way to handle that is) | 01:21:18 | |
| Gaétan Lepage do you know if the PyTorch bump wants the nvshmem library or the Python bindings? | 01:40:29 | |
| Thanks a lot for your work connor (he/him) (UTC-7)!!! The PyTorch bump needs the library, not the python bindings. | 08:02:56 | |
| 18 Oct 2025 | ||
| Various things which need to be fixed outside of the CUDA 13 PR:
There are others but I can’t remember them :/ | 06:38:17 | |
| 19 Oct 2025 | ||
| https://github.com/peylnog/ContinuousSR is incredible | 18:48:03 | |
| And if anyone wanted to run the demo, I've packaged it with a flake: https://github.com/ConnorBaker/ContinuousSR
You still need to download the model from their Google Drive (https://github.com/ConnorBaker/ContinuousSR?tab=readme-ov-file#pretrained-model) | 19:07:16 | |
| 20 Oct 2025 | ||
| This looks super cool! Reminds me of the new 100x digital zoom feature of the Pixel phones. So nice to see something open for that :) | 06:51:27 | |
| That uses something closer to multi-frame super-resolution (which is specifically what I’m interested in — I’d like to rewrite ContinuousSR to support that use case). Here’s a reproducer for an earlier version of the software Google used around the Pixel 3 era: https://github.com/Jamy-L/Handheld-Multi-Frame-Super-Resolution | 14:24:13 | |
| This was also a great way to find out that on master the wrong version of cuDNN is selected when building for Jetson (we get the x86 binary) — that’s mostly why the flake is using my PR with CUDA 13 and the packaging refactor | 14:29:02 | |
| 21 Oct 2025 | ||
| Started work on CUDA-legacy for everyone who needs support for older versions of CUDA https://github.com/nixos-cuda/cuda-legacy/pull/1 | 00:33:26 | |
| There might be interesting stuff in the second commit if you’re unfamiliar with flake-parts’ partitions functionality | 06:22:13 | |
| SomeoneSerge (back on matrix) what if 👉👈 you merged my CUDA 13 PR 🥺 | 06:23:32 | |
| In reply to @connorbaker:matrix.org: 'most there | 22:00:59 | |
| 22 Oct 2025 | ||
| Are there any good resources for getting CUDA projects, built with CUDA packages from Nixpkgs, running with libcuda.so provided by a non-NixOS host? Can it be done with LD_LIBRARY_PATH or LD_PRELOAD? | 08:08:09 | |
| nixGL has worked for me in the past | 08:56:54 | |
| Technically that finds a kernel-compatible libcuda.so in Nixpkgs | 08:57:24 | |
| In reply to @niclas:overby.me: If you create a /run/opengl-driver/lib folder (it might be called something slightly different) and symlink all the CUDA kernel-mode drivers in there, it should work out of the box | 14:40:53 | |
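One hedged note on why the symlink approach tends to work out of the box: CUDA consumers in nixpkgs generally get /run/opengl-driver/lib added to their runpath by the autoAddDriverRunpath setup hook, so the loader finds a host-provided libcuda.so placed there without needing LD_LIBRARY_PATH or LD_PRELOAD. A sketch of a downstream derivation relying on that hook, with hypothetical names:

```nix
# Sketch only: autoAddDriverRunpath appends /run/opengl-driver/lib to the
# runpath of the produced binaries, so a libcuda.so symlinked into that
# directory on the host is picked up at run time.
{ stdenv, autoAddDriverRunpath, cudaPackages }:

stdenv.mkDerivation {
  pname = "my-cuda-tool";   # hypothetical package
  version = "0.1.0";
  src = ./.;
  nativeBuildInputs = [ cudaPackages.cuda_nvcc autoAddDriverRunpath ];
  buildInputs = [ cudaPackages.cuda_cudart ];
}
```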