NixOS CUDA | 288 Members | 58 Servers |
| CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda |
| Sender | Message | Time |
|---|---|---|
| 12 Nov 2025 | | |
| yeah, givin up - jax dies every time :( | 21:34:25 | |
| haha, well done for trying | 22:19:37 | |
| I'm so sorry Connor. Take your time. Focus on what matters the most for you right now (i.e. not CUDA). I'll look at those two PRs. Please message me if I can do anything else. | 23:04:38 | |
| yeaaah, this is somethin' real broken with the torch 2.9.0 update :/ gonna see if i can figure it out, but it just doesn't seem that nvrtc is in the run path. looking for a way to repro it without my whole binary 😭 | 23:17:18 |
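A minimal sketch of how one might check the "nvrtc is not in the run path" symptom outside the full binary. This is an assumed diagnostic, not the actual repro from the thread; the soname `libnvrtc.so.12` is an example:

```python
import ctypes

def loadable(sonames):
    """Map each soname to whether the dynamic loader can resolve it."""
    result = {}
    for name in sonames:
        try:
            ctypes.CDLL(name)
            result[name] = True
        except OSError:
            result[name] = False
    return result

# If libnvrtc is absent from the loader's search path, torch's CUDA
# runtime-compilation pieces fail at import/run time the same way.
print(loadable(["libnvrtc.so.12", "libnonexistent-check-xyz.so"]))
```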
| torch 2.9.1 is out. And triton 3.5.1. And both staging-next and staging-nixos were merged into master a few hours ago. CPUs will have to work hard for the next few days... | 23:29:21 |
| 13 Nov 2025 | | |
| They come not single spies... really sorry to hear this, Connor. Take care | 14:07:48 | |
| oh woof, but torch-bin is 2.9.1 and torch is still 2.9.0 | 15:10:16 | |
| aha, repro'd :D | 16:15:41 | |
| https://github.com/NixOS/nixpkgs/issues/461334 issue opened :) | 18:54:11 | |
| 14 Nov 2025 | | |
| https://hydra.nixos-cuda.org/build/14219 magma runs into the output limit | 04:50:01 | |
| and https://hydra.nixos-cuda.org/jobset/nixos-cuda/cuda-packages-v2#tabs-jobs has no torch package 🤔 | 04:50:51 | |
| I increased it from 4GB (what nix-community has I think) to 8GB. And it seems to still be broken... | 08:53:41 | |
| This is very weird. It ends up being built anyway as a dependency. I'll try to investigate... | 08:55:38 | |
| Ok, I figured it out. torch and torchWithoutRocm have the same outPaths. So torch is getting filtered out in favor of torchWithoutRocm. | 09:25:13 |
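The filtering described in that message can be sketched as a dedup pass keyed on outPath. This is a hypothetical illustration of the effect, not Hydra's actual code; the store paths shown are made up:

```python
def dedup_by_outpath(jobs):
    """Keep only the first attribute seen for each outPath (illustrative)."""
    kept = {}
    for attr, outpath in jobs:
        kept.setdefault(outpath, attr)  # later attrs with the same outPath are dropped
    return kept

# torch and torchWithoutRocm evaluating to the same outPath means one vanishes
# from the jobset listing, even though the derivation still gets built:
jobs = [
    ("torchWithoutRocm", "/nix/store/xxxx-python3.13-torch-2.9.0"),
    ("torch",            "/nix/store/xxxx-python3.13-torch-2.9.0"),
]
print(dedup_by_outpath(jobs))
```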
realized this isn't a 2.9 regression, it's a -bin vs source problem :/ | 18:37:14 | |
| bin works fine T_T | 18:37:19 | |
| updated the ticket :) | 18:37:29 | |
| I updated torch-bin to 2.9.1 yesterday. The PR for the source-based build is https://github.com/NixOS/nixpkgs/pull/461241 | 21:55:22 | |
| i see your commit message says torch 2.8->2.9, but it's actually 2.9->2.9.1 :) | 21:56:32 | |
| Good catch, now fixed. | 22:06:15 | |
| 15 Nov 2025 | | |
| SomeoneSerge (back on matrix) would you have a minute to take a look at the triton/torch bump? https://github.com/NixOS/nixpkgs/pull/461241 | 14:23:53 | |
| Built with and without CUDA. No obvious regressions. | 14:24:19 | |
| 17 Nov 2025 | | |
| How would you go about conditionally setting cudaCapabilities? I have this: [image]. It's the aarch64-linux part specifically that I'm a bit stuck on. I have some cloud servers with NVIDIA GPUs in them that run aarch64-linux, but I also have some Jetson devices that are also considered aarch64-linux. And if I understand the whole thing correctly, I can't just set the cudaCapabilities based on the platform, since both are aarch64-linux. Probably something stupid I'm just overlooking, sorry for bothering. 😅 | 17:35:32 |
| There's aarch64-linux and there's aarch64-linux. It's an artifact of us not including cuda/rocm stuff in | 19:43:44 |
| So it's not really about "setting cudaCapabilities conditionally"; it's about instantiating nixpkgs for different platforms. With flakes you'd have to suffix the attribute names for one of the aarch64-linux platforms, or move things to legacyPackages. But, of course, you could also simply not maintain a list of already-evaluated and not-really-overridable "recipes", i.e. drop the flake :) | 19:47:42 |
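A minimal sketch of "instantiating nixpkgs for different platforms", assuming nixpkgs' `config.cudaCapabilities` setting; the attribute names and capability values here are illustrative, not from the thread:

```nix
let
  # Two distinct aarch64-linux package sets, differing only in CUDA capabilities.
  mkPkgs = cudaCapabilities: import nixpkgs {
    system = "aarch64-linux";
    config = {
      allowUnfree = true;
      cudaSupport = true;
      inherit cudaCapabilities;
    };
  };
in {
  pkgsCloud  = mkPkgs [ "9.0" ];  # e.g. data-center GPUs (example value)
  pkgsJetson = mkPkgs [ "8.7" ];  # e.g. Jetson Orin (example value)
}
```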
| Think I caught a touch of a cold, sorry | 19:48:43 | |
| In reply to @bjth:matrix.org ("Aarch based nvidia data center gpus 👀"): yeah, if you get the correct map of the cuda capabilities it should work fine | 19:58:02 |
| * Aarch based nvidia data center gpus 👀, yeah if you get the correct map of the cuda capabilities it should work fine. Edit: misread, isJetsonBuild sounds funky so not sure | 20:01:07 |