| NixOS CUDA | 274 Members | 55 Servers |
| CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda |
| Sender | Message | Time |
|---|---|---|
| 31 Oct 2025 | | |
| Gaétan Lepage | SomeoneSerge (back on matrix), are you okay with merging: I’d like there to be consensus as a team for those reverts to go through. Serge, I know you’re in favor of the `config.cudaSupport` one, but I’d like to issue the statement/decision as a team. | 19:40:25 |
| | Correct | 19:46:10 |
| | We don’t have anywhere near the capacity (hardware or labor) to do that on a regular cadence, but that would be nice | 19:47:00 |
| | what kind of hardware is needed for reasonably-fast-ish compile cycles? | 19:59:36 |
| | That depends entirely on what you’re building. My suggestion is to compile for exactly the CUDA capabilities you need — the CUDA compiler and linker are incredibly slow, so it helps a lot. | 20:01:29 |
| | yeah makes sense - was seeing if I could volunteer a personal machine to help make the dev cycle possible 😓 | 20:02:07 |
| | From experience, adding compute 12 capability doubled my PyTorch build time, so def keep an eye on it | 20:02:37 |
| | We have very recently acquired new hardware. That is still far from the perfect infra, but it's definitely good progress. | 20:02:54 |
| | I broke the record yesterday building `python3Packages.torch` with `cudaSupport` enabled -> 41 min on 96 cores. | 20:03:48 |
| | Do not try to replicate on your laptop 🫠 | 20:04:03 |
| connor (burnt/out) (UTC-7) | ACK for both. | 20:04:17 |
| | I’ve OOMed a machine with over 1 TB of RAM building Nix CUDA packages 😎 | 20:04:38 |
| | In reply to @glepage:matrix.org: omg. I wanna try. | 20:04:40 |
| | I have only 128 GB of RAM on my builder, so I grew swap to a (sometimes necessary) 500 GB size. | 20:05:45 |
| | ptxas can be very expensive memory-wise... | 20:06:15 |
| connor (burnt/out) (UTC-7) | I found out the issue with However, before the CUDA 13 PR, So, whose fault is it? A) It is wrong that | 21:21:05 |
| | In reply to @apyh:matrix.org: ripped it on your branch in 23m, including the magma compile - only compute 8.9 tho | 21:38:03 |
| | Oh, I was implying "all caps enabled" | 21:39:18 |
| | lemme try it :3 | 21:43:58 |
| | `stdenv` can be `cudaPackages.backendStdenv` if the version of GCC is supported by NVCC. It’s only different when we need to use an older version of GCC. NVCC shouldn’t leak the GCC wrapper, since it should be largely build-time only. Any ideas why it’s propagating like that? | 21:46:06 |
| | Thanks for the follow-up. In | 21:50:01 |
| | https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/cuda-modules/packages/cuda_nvcc.nix#L24-L25 | 21:50:06 |
| | do you have a command / nixpkgs config for me to work with or nah? | 21:50:14 |
| | just wanna bench against yours :p | 21:50:22 |
| | Actually, the leakage is not transitive. Basically, I can build So, if I understand correctly, having | 22:13:01 |
| | So, either: A) we get rid of | 22:15:26 |
| | Actually, I was able to build `python3Packages.torch` and `firefox` with an empty `propagatedBuildInputs` in `cuda_nvcc`. Why do we need it exactly? | 23:25:46 |
| | If it’s in `propagatedBuildInputs` it should still slide out of the dependencies far enough down. It likely worked because the current `stdenv` is supported by the version of NVCC in the default CUDA package set. `stdenv.cc` needs to be in NVCC’s `propagatedBuildInputs` because NVCC needs it available when it is in `nativeBuildInputs`. | 23:28:42 |
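
The explanation above of why `stdenv.cc` sits in NVCC’s `propagatedBuildInputs` could be sketched roughly like this — a paraphrase of the idea, not the actual contents of `cuda_nvcc.nix`:

```nix
# Rough sketch, NOT the real cuda_nvcc.nix: nvcc drives a host C++
# compiler at build time, so the derivation propagates a GCC known to
# be compatible with this NVCC release. Consumers that put cuda_nvcc
# in nativeBuildInputs then automatically get that compiler too.
{
  propagatedBuildInputs = [
    backendStdenv.cc  # GCC wrapper pinned for NVCC compatibility
  ];
}
```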
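The advice earlier in the thread about compiling for exactly the CUDA capabilities you need corresponds to the `cudaCapabilities` option in the nixpkgs config; a minimal sketch, where the capability value is only an example, not a recommendation:

```nix
# Sketch: instantiate nixpkgs with CUDA enabled for a single
# compute capability instead of the full default set.
# "8.9" (Ada, e.g. RTX 40xx) is an illustrative value.
import <nixpkgs> {
  config = {
    allowUnfree = true;            # CUDA packages are unfree
    cudaSupport = true;            # build packages with CUDA enabled
    cudaCapabilities = [ "8.9" ];  # restrict codegen to one architecture
  };
}
```

Trimming the capability list shortens both the nvcc compile and the device-link step, which is where the PyTorch build-time numbers quoted above come from.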
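The swap and OOM anecdotes above translate into builder settings; a hedged NixOS sketch — all sizes and counts here are illustrative, not the values the participants used:

```nix
# Sketch of NixOS builder settings for memory-hungry CUDA builds.
{
  # Large swap file as a fallback when ptxas or nvcc spikes in memory
  # use; size is in megabytes.
  swapDevices = [ { device = "/swapfile"; size = 500 * 1024; } ];

  # Limit parallelism so concurrent compile jobs don't exhaust RAM.
  nix.settings = {
    max-jobs = 4;  # concurrent derivations built at once
    cores = 24;    # parallelism within a single build
  };
}
```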