| 8 Feb 2026 |
| @niten:fudo.im left the room. | 23:07:13 |
| 9 Feb 2026 |
Benjamin Isbarn | I'm not using any overlay for that purpose right now. Good point regarding the global override, will do that ;). So cudaCapabilities would affect packages like cudart, cublas, etc., I guess, i.e. what features they will consider available and thus use? Then in theory this should yield better performance for the aforementioned libraries? | 07:03:05 |
connor (burnt/out) (UTC-8) | Read https://nixos.org/manual/nixpkgs/stable/#cuda -- Jetson isn't built by default, and pre-Thor devices use different binaries, so you need to make sure cudaCapabilities is set correctly; you'll get faster builds, smaller closures, and (possibly) better performance if you specify the exact capability | 07:36:40 |
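A minimal sketch of the global override being discussed, assuming a plain nixpkgs import; the capability value "8.7" (Jetson Orin) is only an example and should be replaced with whatever the target device actually reports:

    import <nixpkgs> {
      config = {
        allowUnfree = true;             # CUDA packages are unfree
        cudaSupport = true;             # build CUDA variants where available
        cudaCapabilities = [ "8.7" ];   # pin the exact device capability
      };
    }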
| SolitudeAlma joined the room. | 07:49:25 |
Gaétan Lepage | vLLM is now 0.15.1 (latest version) | 11:11:17 |
cameronraysmith | SomeoneSerge (back on matrix): let me know if the updates to https://github.com/NixOS/nixpkgs/pull/488199 captured what you suggested. No rush: thanks! | 21:03:44 |
Gaétan Lepage | connor (burnt/out) (UTC-8) SomeoneSerge (back on matrix)
This PR should fix the last failing gpuCheck instance, i.e. python3Packages.triton.gpuCheck: https://github.com/NixOS/nixpkgs/pull/488887
I discovered one of our beloved dlopen instances in triton; we hadn't known about it until now. This PR fixes it too. | 23:39:42 |
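As a hedged illustration of the usual fix for such dlopen instances (the file path and soname below are placeholders, not the actual triton patch): rewrite the bare soname into an absolute store path during postPatch, so the library no longer depends on the dynamic loader's search path at run time.

    postPatch = ''
      # Hypothetical example: point a runtime dlopen("libcudart.so.13") at the
      # cuda_cudart store path instead of relying on the loader search path.
      substituteInPlace python/some_module.py \
        --replace-fail '"libcudart.so.13"' \
          '"${lib.getLib cudaPackages.cuda_cudart}/lib/libcudart.so.13"'
    '';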
| 10 Feb 2026 |
connor (burnt/out) (UTC-8) | Don't think I linked it here, maybe interesting for people with heavy eval jobs: https://github.com/ConnorBaker/nix-optimization | 01:41:36 |
| 11 Feb 2026 |
connor (burnt/out) (UTC-8) | Gaétan Lepage: there's a merge conflict and I need to rebase, but IIRC something like https://github.com/NixOS/nixpkgs/pull/485208 is necessary to make CUDA 13 the default. I still need to do the same for the PyCuda PR I have: https://github.com/NixOS/nixpkgs/pull/465047. Apologies that it's taking me so long. | 18:56:11 |
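Until that lands, a hedged sketch of opting into CUDA 13 locally, assuming the cudaPackages_13 attribute already exists in the nixpkgs being used, is a one-line overlay:

    final: prev: {
      # Make every consumer of cudaPackages resolve to the CUDA 13 package set.
      cudaPackages = final.cudaPackages_13;
    }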
| 12 Feb 2026 |
Gaétan Lepage | Ok thanks! I should get a notification once you've rebased. | 07:52:34 |
| 13 Feb 2026 |
| hoplopf joined the room. | 10:21:48 |
| 14 Feb 2026 |
| Marmar joined the room. | 08:59:46 |
| 15 Feb 2026 |
| matthewcroughan changed their display name from matthewcroughan @fosdem to matthewcroughan. | 18:05:52 |
Lun | torch's magma backend is being deprecated: https://github.com/pytorch/pytorch/pull/172823/changes. Do we want to drop it as an input to torch soon? Gaétan Lepage | 23:01:56 |
| 16 Feb 2026 |
| Hatim joined the room. | 01:01:14 |
Gaétan Lepage | Well, we will follow what upstream says. Is this warning shipped as part of 2.10.0? | 21:45:18 |
| kslad joined the room. | 23:40:04 |
| 17 Feb 2026 |
Lun | too late for 2.10 | 00:02:10 |
Gaétan Lepage | Ok, I guess we'll just wait for the next release then? | 20:09:10 |
Gaétan Lepage | * Ok, I guess we'll just wait for the next release then. | 20:09:12 |
| 20 Feb 2026 |
| youthlic changed their profile picture. | 08:15:56 |
| weasel joined the room. | 15:53:51 |
| 21 Feb 2026 |
Kevin Mittman (UTC-8) | Is there a hard cap on input tarball size? Considering splitting a particularly large one into multiple components. | 01:54:03 |
connor (burnt/out) (UTC-8) | Not that I’m aware of. But such a change would require some rework on the Nixpkgs side to recombine sources or know which components to pick (which isn’t necessarily bad, just a thing that would need to happen). | 17:20:26 |
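A hedged sketch of what the recombining side could look like in Nixpkgs, assuming the hypothetical split produces two ordinary tarballs (URLs and hashes are placeholders):

    stdenv.mkDerivation {
      pname = "example-split-source";
      version = "1.0";
      # Fetch each component separately; the stock unpackPhase extracts
      # every entry of srcs into the build directory.
      srcs = [
        (fetchurl { url = "https://example.com/part1.tar.xz"; hash = lib.fakeHash; })
        (fetchurl { url = "https://example.com/part2.tar.xz"; hash = lib.fakeHash; })
      ];
      sourceRoot = ".";
    }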
| 22 Feb 2026 |
| Haze joined the room. | 02:54:28 |
| 24 Feb 2026 |
Bot_wxt1221 | Why is nvidia-open beta still broken? | 01:30:50 |
Bot_wxt1221 | Does no one care about it? | 01:30:58 |
Kevin Mittman (UTC-8) | Broken in what way? Which version is it, 590.48.01? | 03:13:32 |
connor (burnt/out) (UTC-8) | The drivers are maintained by a different team -- we are the NixOS CUDA team | 04:19:31 |