| 9 Mar 2026 |
kaya 𖤐 | I'm currently in the process of upstreaming the NixOS module for tabbyapi https://github.com/NixOS/nixpkgs/pull/498281 Does anyone know how I would go about setting the default package? In my NixOS config I use the module like this right now; I always override the package:
services.tabbyapi = {
  enable = true;
  package = pkgs.pkgsCuda.tabbyapi;
};
I feel like it might be bad for the module's default package to be effectively broken: it needs CUDA to be enabled in order to work. How do other modules handle this? Do they somehow set the default package to a CUDA-enabled variant, or do they expect the user to enable CUDA themselves? | 16:44:52 |
| 16:44:52 |
kaya 𖤐 | I tested adding the PR as a patch to flash-attn. It indeed no longer OOMs, which is nice, but it also doesn't build; it seems to get stuck indefinitely while building | 16:46:56 |
connor (burnt/out) (UTC-8) | Yes, the user should enable CUDA. Generally, going through variants (like pkgsCuda) shouldn't be permissible in-tree. You can add an assertion to the module to require that CUDA support is configured. | 19:19:14 |
kaya 𖤐 | Hm okay, thank you. I guess an assertion with a specific message is better than nothing | 19:21:29 |
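For context, a minimal sketch of what such a module assertion could look like. This is illustrative, not the actual tabbyapi module from the PR: the option path `services.tabbyapi.enable` matches the snippet above, and `pkgs.config.cudaSupport` is the nixpkgs-wide CUDA flag, but the exact check and message wording are assumptions:

```nix
# Hypothetical sketch of a CUDA assertion inside a NixOS module.
# pkgs.config.cudaSupport is the nixpkgs-wide CUDA flag; everything else
# here is illustrative wiring, not the real module from the PR.
{ config, lib, pkgs, ... }:
{
  config = lib.mkIf config.services.tabbyapi.enable {
    assertions = [
      {
        assertion = pkgs.config.cudaSupport or false;
        message = ''
          services.tabbyapi needs a CUDA-enabled package to work.
          Either set nixpkgs.config.cudaSupport = true, or override
          services.tabbyapi.package with a CUDA-enabled build.
        '';
      }
    ];
  };
}
```

With this in place, evaluation fails with a targeted message instead of the service silently shipping a broken default package.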
| 10 Mar 2026 |
connor (burnt/out) (UTC-8) | 13.2 is out https://developer.download.nvidia.com/compute/cuda/redist/ | 03:35:22 |
connor (burnt/out) (UTC-8) | danielrf Orin is supported by 13.2/JP7: https://developer.nvidia.com/blog/cuda-13-2-introduces-enhanced-cuda-tile-support-and-new-python-features/#embedded_devices | 06:10:08 |
Gaétan Lepage | I got you connor (burnt/out) (UTC-8)
https://github.com/NixOS/nixpkgs/pull/498523 | 11:52:20 |
Gaétan Lepage | We do have libcublasmp. Is this doc outdated? https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/cuda-modules/README.md#distinguished-packages | 12:31:03 |
connor (burnt/out) (UTC-8) | Ah yep, it's outdated; I packaged nvshmem: https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/cuda-modules/packages/libnvshmem.nix | 15:28:13 |
Gaétan Lepage | connor (burnt/out) (UTC-8) if I want to bump libcublasmp (to 0.7.x) for example, how do I know which cudaPackages_X should be affected? | 17:36:23 |
connor (burnt/out) (UTC-8) | A very deep reading of the changelog, package contents changes, and thorough rebuilds and runtime verification for consumers | 17:38:14 |
connor (burnt/out) (UTC-8) | Yet another reason we need test suites for downstream packages which exercise those libraries: relying on NVIDIA's samples (if they're even available) isn't sufficient because we care about whether consumers break | 17:39:46 |
connor (burnt/out) (UTC-8) | All of the assertions I added to the packages were the result of a ton of reading and gleaning meaning through changelogs and actual package contents changes | 17:40:16 |
Gaétan Lepage | Sounds like a ton of fun :') | 17:47:23 |
| Cameron Barker joined the room. | 18:18:26 |
| 11 Mar 2026 |
Kevin Mittman (jetlagged/UTC+8) | Redacted or Malformed Event | 01:54:11 |
Gaétan Lepage | connor (burnt/out) (UTC-8) would you agree with a 12.8 -> 12.9 global bump before messing around with 13.0? | 11:05:21 |
| Christian Theune changed their display name from Theuni to Christian Theune. | 14:13:00 |
connor (burnt/out) (UTC-8) | Sure! I remember some weird breakages a while back when I had wanted to bump immediately after 12.9 became available, but hopefully they’re all resolved by now :) | 16:08:54 |
Gaétan Lepage | https://github.com/NixOS/nixpkgs/pull/498861 | 16:43:46 |
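For readers following the bump discussion, a hedged sketch of how a consumer can switch which CUDA package set they build against while testing. The `cudaPackages_12_9` attribute follows the existing `cudaPackages_X_Y` naming convention in nixpkgs, but the overlay wiring here is illustrative, not how the PR itself implements the default bump:

```nix
# Illustrative consumer-side sketch: pin a specific CUDA package set while
# testing a version bump. cudaPackages_12_9 follows the nixpkgs naming
# convention; the overlay is an assumption about one way to wire it up.
{
  nixpkgs.config = {
    allowUnfree = true;
    cudaSupport = true;
  };
  # Repoint the default set so packages taking cudaPackages as an argument
  # (torch, vllm, ...) pick up the newer toolkit:
  nixpkgs.overlays = [
    (final: prev: {
      cudaPackages = final.cudaPackages_12_9;
    })
  ];
}
```

This is also roughly what "global bump" means in practice: the default `cudaPackages` alias moves, and every consumer of it rebuilds, hence the ~1.8k rebuild counts mentioned below.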
Gaétan Lepage | connor (burnt/out) (UTC-8)
About https://github.com/NixOS/nixpkgs/pull/498681, I plan to build torch and vllm. If this works fine, I will merge it.
With the CUDA PRs on the way, I won't have the capacity to exhaustively test all of them.
No objection on your side? | 23:37:24 |
Gaétan Lepage | (same reasoning for https://github.com/NixOS/nixpkgs/pull/498678#issuecomment-4035473707). | 23:39:46 |
Gaétan Lepage | * (same reasoning for https://github.com/NixOS/nixpkgs/pull/498678). | 23:39:52 |
connor (burnt/out) (UTC-8) | Sounds good! I’ll leave a comment on them | 23:51:04 |
Gaétan Lepage | I'm testing the CUDA bump more thoroughly though.
~1.3k rebuilds left (out of 1.8k) | 23:53:19 |
Gaétan Lepage | * connor (burnt/out) (UTC-8)
About https://github.com/NixOS/nixpkgs/pull/498681, I plan to build torch and vllm. If this works fine, I will merge it.
With all the CUDA PRs in the queue, I won't have the capacity to exhaustively test all of them.
No objection on your side? | 23:54:04 |
| 4 Aug 2022 |
| Winter (she/her) joined the room. | 03:26:42 |
Winter (she/her) | (hi, just came here to read + respond to this.) | 03:28:52 |
tpw_rules | hey. i had previously sympathized with samuela and, like i said before, had some of the same frustrations. i just edited my github comment to add "[CUDA] packages are universally complicated, fragile to package, and critical to daily operations. Nix being able to manage them is unbelievably helpful to those of us who work with them regularly, even if support is downgraded to only having an expectation of function on stable branches." | 03:29:14 |
Winter (she/her) | In reply to @tpw_rules:matrix.org "i'm mildly peeved about a recent merging of something i maintain where i'm pretty sure the merger does not own the expensive hardware required to properly test the package. i don't think it broke anything but i was given precisely 45 minutes to see the notification before somebody merged it"
ugh, 45 minutes? that's... not great. not to air dirty laundry, but did you do what samuela did in the wandb PR and at least say that that wasn't a great thing to do? (not sure how else to word that, you get what i mean) | 03:30:23 |