| NixOS CUDA | 274 Members | |
| CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda | 55 Servers |
| Sender | Message | Time |
|---|---|---|
| 31 Oct 2025 | ||
| gonna slow down and take a step back for a bit | 03:18:20 | |
| You said upfront "The addition of CUDA 13 does not mean packages will suddenly work with CUDA 13. Expect breakages." I know I'm just a random bloke from GitHub and fairly new, but I've had really bad burnout in the past. I'd suggest still doing a little bit of triaging and technical support here and there for the CUDA Team in strict time blocks, so you can at least see the fruits of your labour (given to the world, for free) as the breakages all get sorted out in the coming weeks through our collective efforts. From my perspective, I'm just excited about the prospect of using CUDA 13 with Nixpkgs. I've basically used I reckon for the next big CUDA update, do something like how the | 10:26:44 | |
| I mean I’ll still be around, just not doing as much. I’ll still be in the team weeklies, etc. | 14:35:31 | |
| CUDA 13 isn’t the default because the stuff we have in tree is too old or doesn’t support it; the expect breakages was in reference to trying to use CUDA 13 as the default. | 14:36:20 | |
| Haskell stuff goes into staging (at least partly) because of the sheer number of packages, to allow Hydra to churn through them. None of our stuff is built upstream, so there’s not really a point. | 14:37:36 | |
| I think also a fair amount of stuff upstream doesn’t even build with cuda 13 yet either | 14:37:54 | |
| Yeah NVIDIA does not care outside of projects they dedicate engineering hours to supporting, and changing the default version of OpenCV or other large projects to a commit from master adding support would be dead on arrival, and trying to special case it just for when CUDA is configured would be difficult. | 14:39:51 | |
| 14:56:01 | ||
| This is quite a convincing argument to revert the 99 commits: https://github.com/NixOS/nixpkgs/pull/437723#issuecomment-3472997390 Maybe there could be a | 15:50:50 | |
| (All a bit above my current pay grade given my limited Nixpkgs experience, lol.) I just really want to express my gratitude to the CUDA Team. | 15:52:21 | |
| My understanding (which may be incorrect) is that CUDA 13 is opt-in, so it will only break if you try to use it instead of the default? | 16:03:44 | |
| Gaétan Lepage SomeoneSerge (back on matrix) are you okay with merging: I'd like there to be consensus as a team for those reverts to go through. Serge, I know you're in favor of the config.cudaSupport one, but I'd like to issue the statement/decision as a team. | 19:40:25 | |
| Correct | 19:46:10 | |
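The opt-in flow confirmed above could look roughly like this in a nixpkgs configuration. This is only a sketch: the `cudaPackages_13` attribute name is an assumption, extrapolated from the existing `cudaPackages_12`-style naming, not something confirmed in this log.

```nix
# Sketch: opting a whole nixpkgs evaluation into CUDA 13 via an overlay.
# cudaPackages_13 is assumed to follow the cudaPackages_12 naming pattern.
import <nixpkgs> {
  config = {
    allowUnfree = true;
    cudaSupport = true;
  };
  overlays = [
    (final: prev: {
      # Point the default CUDA package set at the newer release.
      cudaPackages = final.cudaPackages_13;
    })
  ];
}
```

Packages that do not yet build against CUDA 13 would then fail, which matches the "expect breakages" caveat discussed earlier.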
| We don’t have anywhere near the capacity (hardware or labor) to do that on a regular cadence, but that would be nice | 19:47:00 | |
| what kind of hardware is needed for reasonably-fast-ish compile cycles? | 19:59:36 | |
| That depends entirely on what you’re building. My suggestion is to compile for exactly the CUDA capabilities you need — the CUDA compiler and linker are incredibly slow, so it helps a lot. | 20:01:29 | |
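The advice above (build only for the capabilities you actually need) can be expressed through nixpkgs config. A minimal sketch, assuming the `config.cudaCapabilities` option described in the nixpkgs CUDA manual section linked in this room's topic:

```nix
# Sketch: restrict CUDA builds to a single real architecture.
# cudaCapabilities is assumed from the nixpkgs CUDA documentation;
# "8.9" is the Ada-generation capability mentioned later in the log.
import <nixpkgs> {
  config = {
    allowUnfree = true;
    cudaSupport = true;
    cudaCapabilities = [ "8.9" ]; # only what your GPU actually needs
  };
}
```

Each extra entry in that list multiplies ptxas work, which is why adding one capability can roughly double a PyTorch build, as noted below.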
| yeah makes sense - was seeing if i could volunteer a personal machine to help make the dev cycle possible 😓 | 20:02:07 | |
| From experience adding compute 12 capability doubled my PyTorch build time so def keep an eye on it | 20:02:37 | |
| We have very recently acquired new hardware. That is still far from the perfect infra, but it's definitely good progress. | 20:02:54 | |
| I broke the record yesterday building python3Packages.torch with cudaSupport enabled -> 41 min on 96 cores. | 20:03:48 | |
| Do not try to replicate on your laptop 🫠 | 20:04:03 | |
| connor (burnt/out) (UTC-7) ACK for both. | 20:04:17 | |
| I’ve oomed a machine with over 1tb of ram building nix cuda packages 😎 | 20:04:38 | |
| In reply to @glepage:matrix.org: omg, I wanna try. | 20:04:40 | |
| I have only 128 GB of RAM on my builder. So I set up a (sometimes necessary) 500 GB swap. | 20:05:45 | |
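A large swap like the one described above can be declared in a NixOS configuration. A hypothetical sketch using the standard `swapDevices` option (whose `size` is in MiB); the `/var/swapfile` path is an illustrative choice, not taken from this log:

```nix
# Sketch: a big swap file to absorb ptxas/link memory peaks during
# CUDA builds. Path is hypothetical; size is in MiB.
{
  swapDevices = [
    {
      device = "/var/swapfile";
      size = 500 * 1024; # ~500 GB, as described above
    }
  ];
}
```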
| ptxas can be very expensive memory-wise... | 20:06:15 | |
| connor (burnt/out) (UTC-7) I found out the issue with However, before the CUDA 13 PR, So, whose fault is it? A) It is wrong that | 21:21:43 | |
| In reply to @apyh:matrix.org: ripped it on your branch in 23m, including the magma compile - only compute 8.9 tho | 21:38:03 | |