| 9 Mar 2026 |
Kevin Mittman (jetlagged/UTC+8) | In reply to @connorbaker:matrix.org https://github.com/NixOS/nixpkgs/blob/3bb5f20c47dcfcab9acb3be810f42ca1261b49e2/pkgs/development/cuda-modules/packages/cuda_nvcc.nix#L167 Double ughh this just came up in another context too | 01:16:42 |
kaya 𖤐 | I'm currently in the process of upstreaming the NixOS module for tabbyAPI https://github.com/NixOS/nixpkgs/pull/498281 Does anyone know how I would go about setting the default package? In my NixOS config I use the module like this right now; I always override the package:
services.tabbyapi = {
  enable = true;
  package = pkgs.pkgsCuda.tabbyapi;
};
I feel like it would be bad for the tabbyapi module's default package to be essentially broken; it needs CUDA enabled to work at all. How do other modules handle this? Do they set the default package to a CUDA-enabled variant somehow, or do they expect the user to enable CUDA themselves?
| 16:44:52 |
kaya 𖤐 | I tested adding the PR as a patch to flash-attn; it indeed no longer OOMs, which is nice, but it also doesn't build, seems to get stuck indefinitely while building | 16:46:56 |
connor (he/him) | Yes, the user should enable CUDA. Generally, going through variants (like pkgsCuda) shouldn't be permissible in-tree. You can add an assertion to the module to ensure CUDA support is configured. | 19:19:14 |
kaya 𖤐 | Hm okay, thank you. I guess an assertion with a specific message is better than nothing | 19:21:29 |
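[A minimal sketch of the kind of assertion being discussed, assuming the `services.tabbyapi` option names from the PR; the exact way to detect CUDA support (`pkgs.config.cudaSupport`) is an assumption, not confirmed in the thread:]

```nix
# Hypothetical excerpt from the tabbyapi NixOS module: rather than
# defaulting to a CUDA-enabled package variant, assert that the user's
# nixpkgs configuration has CUDA support enabled.
{ config, lib, pkgs, ... }:
{
  config = lib.mkIf config.services.tabbyapi.enable {
    assertions = [
      {
        # pkgs.config is the evaluated nixpkgs config; cudaSupport
        # defaults to false when unset.
        assertion = pkgs.config.cudaSupport or false;
        message = ''
          services.tabbyapi requires a CUDA-enabled package set.
          Set nixpkgs.config.cudaSupport = true (and allowUnfree = true)
          so the default tabbyapi package is built with CUDA support.
        '';
      }
    ];
  };
}
```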
| 10 Mar 2026 |
connor (he/him) | 13.2 is out https://developer.download.nvidia.com/compute/cuda/redist/ | 03:35:22 |
connor (he/him) | danielrf Orin is supported by 13.2/JP7: https://developer.nvidia.com/blog/cuda-13-2-introduces-enhanced-cuda-tile-support-and-new-python-features/#embedded_devices | 06:10:08 |
Gaétan Lepage | I got you connor (burnt/out) (UTC-8)
https://github.com/NixOS/nixpkgs/pull/498523 | 11:52:20 |
Gaétan Lepage | We do have libcublasmp. Is this doc outdated? https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/cuda-modules/README.md#distinguished-packages | 12:31:03 |
connor (he/him) | Ah yep, it's outdated, I packaged nvshmem: https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/cuda-modules/packages/libnvshmem.nix | 15:28:13 |
Gaétan Lepage | connor (burnt/out) (UTC-8) if I want to bump libcublasmp (to 0.7.x) for example, how do I know which cudaPackages_X set should be affected? | 17:36:23 |
connor (he/him) | A very deep reading of the changelog, package contents changes, and thorough rebuilds and runtime verification for consumers | 17:38:14 |
connor (he/him) | Yet another reason we need test suites for downstream packages which exercise those libraries. Relying on NVIDIA's samples (if they're even available) isn't sufficient, because we care about whether consumers break | 17:39:46 |
connor (he/him) | All of the assertions I added to the packages were the result of a ton of reading and gleaning meaning through changelogs and actual package contents changes | 17:40:16 |
Gaétan Lepage | Sounds like a ton of fun :') | 17:47:23 |
| Cameron Barker joined the room. | 18:18:26 |
| 11 Mar 2026 |
Kevin Mittman (jetlagged/UTC+8) | Redacted or Malformed Event | 01:54:11 |
Gaétan Lepage | connor (burnt/out) (UTC-8) would you agree with a 12.8 -> 12.9 global bump before messing around with 13.0? | 11:05:21 |
| Theuni changed their display name from Theuni to Christian Theune. | 14:13:00 |
connor (he/him) | Sure! I remember some weird breakages a while back when I had wanted to bump immediately after 12.9 became available, but hopefully they’re all resolved by now :) | 16:08:54 |
Gaétan Lepage | https://github.com/NixOS/nixpkgs/pull/498861 | 16:43:46 |
Gaétan Lepage | connor (burnt/out) (UTC-8)
About https://github.com/NixOS/nixpkgs/pull/498681, I plan to build torch and vllm. If this works fine, I will merge it.
With the CUDA PRs on the way, I won't have the capacity to exhaustively test all of them.
No objection on your side? | 23:37:24 |
Gaétan Lepage | (same reasoning for https://github.com/NixOS/nixpkgs/pull/498678#issuecomment-4035473707). | 23:39:46 |
connor (he/him) | Sounds good! I’ll leave a comment on them | 23:51:04 |
Gaétan Lepage | I'm testing the CUDA bump more thoroughly though.
~1.3k rebuilds left (out of 1.8k) | 23:53:19 |
| 12 Mar 2026 |
| Theuni changed their display name from Christian Theune to Theuni. | 07:18:55 |
Bryan Honof | It looks like torch's supportedTorchCudaCapabilities was out-of-sync with upstream. https://github.com/NixOS/nixpkgs/pull/499216
How would I use nixpkgs-review to test these changes?
| 10:53:19 |
Gaétan Lepage | Thanks for the PR!
Well, you don't want to rebuild all torch consumers for this. What you can do is the following:
nixpkgs-review pr 499216 --extra-nixpkgs-config "{ allowUnfree = true; cudaSupport = true; }" -p python3Packages.torch -p python3Packages.vllm -p python3Packages.torchvision
| 12:39:42 |