!eWOErHSaiddIbsUNsJ:nixos.org

NixOS CUDA

301 Members
CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda



Sender | Message | Time
7 Mar 2026
@connorbaker:matrix.orgconnor (burnt/out) (UTC-8)Ugh I thought I imagined this ughhhhhhhhhhhhh06:52:02
@connorbaker:matrix.orgconnor (burnt/out) (UTC-8)Patch NVIDIA’s stuff?06:54:24
@connorbaker:matrix.orgconnor (burnt/out) (UTC-8)Wait no this feels too familiar06:54:34
@connorbaker:matrix.orgconnor (burnt/out) (UTC-8)https://github.com/NixOS/nixpkgs/blob/3bb5f20c47dcfcab9acb3be810f42ca1261b49e2/pkgs/development/cuda-modules/packages/cuda_nvcc.nix#L16706:55:00
@glepage:matrix.orgGaétan LepageYep, I'm very proud of this. I will not take any additional questions.10:47:37
@glepage:matrix.orgGaétan Lepage

This PR was harder to finish than I expected. It's now ready and fixes a bunch of cudaSupport package builds.
https://github.com/NixOS/nixpkgs/pull/495151

(waiting for reviews)

10:49:39
@skainswo:matrix.orgSamuel Ainsworthooh, thanks! so far i've been trying to use clang as the host compiler since that's what xla says they support and iirc i got errors with gcc in the cpu-only build. so maybe i'm getting errors bc of mixing in clang? is clang as host compiler a supported combo?12:59:03
@glepage:matrix.orgGaétan LepageGCC is definitely the default in nixpkgs for linux. I'd try to stick to that as much as possible.13:06:32
@skainswo:matrix.orgSamuel AinsworthOk Roger that13:08:35
@skainswo:matrix.orgSamuel Ainsworthok iiuc xla uses a "cuda_clang" such that clang compiles cuda code directly, not nvcc18:04:35
@skainswo:matrix.orgSamuel Ainsworththis whole thing is a bit of a mess afaict. there are some xla files that segfault nvcc18:05:38
@connorbaker:matrix.orgconnor (burnt/out) (UTC-8)I tried but was unable to get clang working as the host compiler for NVCC, but using clang’s CUDA backend is a whole other thing 🫩18:11:17
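For context, the two clang approaches distinguished above look roughly like this on the command line (paths and the GPU arch are illustrative, not taken from the discussion):

    # clang as the host compiler for NVCC: nvcc still compiles device code,
    # but host-side code is routed through clang++ via -ccbin.
    nvcc -ccbin clang++ -o saxpy saxpy.cu

    # clang's own CUDA backend (what XLA's "cuda_clang" config uses):
    # clang compiles both host and device code itself, bypassing nvcc.
    clang++ --cuda-path=/usr/local/cuda --cuda-gpu-arch=sm_80 \
      -o saxpy saxpy.cu -L/usr/local/cuda/lib64 -lcudart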
8 Mar 2026
@glepage:matrix.orgGaétan Lepage

[FYI: vllm]

vllm 0.16.0 update merged.
https://github.com/NixOS/nixpkgs/pull/490175

Changelog: https://github.com/vllm-project/vllm/releases/tag/v0.16.0

10:20:58
@ss:someonex.netSomeoneSerge (matrix works sometimes) changed their display name from SomeoneSerge (back on matrix) to SomeoneSerge (matrix works sometimes).23:33:35
9 Mar 2026
@justbrowsing:matrix.orgKevin Mittman (jetlagged/UTC+8)
In reply to @connorbaker:matrix.org
https://github.com/NixOS/nixpkgs/blob/3bb5f20c47dcfcab9acb3be810f42ca1261b49e2/pkgs/development/cuda-modules/packages/cuda_nvcc.nix#L167
Double ughh this just came up in another context too
01:16:42
@kaya:catnip.eekaya 𖤐

I'm currently in the process of upstreaming the NixOS module for tabbyapi https://github.com/NixOS/nixpkgs/pull/498281
Does anyone know how I would go about setting the default package? In my NixOS config I use the module like this right now; I always override the package:

    services.tabbyapi = {
      enable = true;
      package = pkgs.pkgsCuda.tabbyapi;
    };

I feel like it might be bad for the tabbyapi module's default package to be essentially broken: it needs CUDA enabled to work. How do other modules handle this? Do they set the default package to a CUDA-enabled variant somehow, or do they expect the user to enable CUDA themselves?

16:44:52
@kaya:catnip.eekaya 𖤐I tested adding the PR as a patch to flash-attn; it indeed no longer OOMs, which is nice, but it also doesn't build; it seems to get stuck indefinitely during the build.16:46:56
@connorbaker:matrix.orgconnor (burnt/out) (UTC-8)Yes, the user should enable CUDA. Generally, going through variants (like pkgsCuda) shouldn't be permissible in-tree. You can add an assertion to the module to require that CUDA support is configured.19:19:14
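A minimal sketch of such a module assertion, assuming the package exposes a cudaSupport flag (attribute names are illustrative; the real module may need to probe this differently):

    config = lib.mkIf cfg.enable {
      assertions = [
        {
          assertion = cfg.package.cudaSupport or false;
          message = ''
            services.tabbyapi requires a CUDA-enabled package; set
            services.tabbyapi.package to a build with cudaSupport = true.
          '';
        }
      ];
    };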
@kaya:catnip.eekaya 𖤐Hm okay, thank you. I guess assertion with a specific message is better than nothing19:21:29
10 Mar 2026
@connorbaker:matrix.orgconnor (burnt/out) (UTC-8)13.2 is out 🫩 https://developer.download.nvidia.com/compute/cuda/redist/03:35:22
@connorbaker:matrix.orgconnor (burnt/out) (UTC-8) danielrf Orin is supported by 13.2/JP7: https://developer.nvidia.com/blog/cuda-13-2-introduces-enhanced-cuda-tile-support-and-new-python-features/#embedded_devices 06:10:08
@glepage:matrix.orgGaétan Lepage I got you connor (burnt/out) (UTC-8)
https://github.com/NixOS/nixpkgs/pull/498523
11:52:20
@glepage:matrix.orgGaétan Lepage We do have libcublasmp. Is this doc outdated? https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/cuda-modules/README.md#distinguished-packages 12:31:03
@connorbaker:matrix.orgconnor (burnt/out) (UTC-8)Ah yep, it's outdated; I packaged nvshmem: https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/cuda-modules/packages/libnvshmem.nix15:28:13
@glepage:matrix.orgGaétan Lepage connor (burnt/out) (UTC-8) if I want to bump libcublasmp (to 0.7.x) for example, how do I know which cudaPackage_X should be affected? 17:36:23
@connorbaker:matrix.orgconnor (burnt/out) (UTC-8)A very deep reading of the changelog, package contents changes, and thorough rebuilds and runtime verification for consumers17:38:14
@connorbaker:matrix.orgconnor (burnt/out) (UTC-8)Yet another reason we need test suites for downstream packages which exercise those libraries — relying on NVIDIA’s samples (if they’re even available) isn’t sufficient because we care about whether consumers break17:39:46
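One existing nixpkgs mechanism for this is passthru.tests, which lets a library declare downstream consumers to rebuild and check on a bump; a hedged sketch (the consumers named here are examples, not a list from the discussion):

    # In the CUDA library's package expression:
    passthru.tests = {
      # Known consumers; CI and reviewers can build these on a version bump.
      inherit (python3Packages) torch jaxlib;
    };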
@connorbaker:matrix.orgconnor (burnt/out) (UTC-8)All of the assertions I added to the packages were the result of a ton of reading and gleaning meaning through changelogs and actual package contents changes17:40:16
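In cuda-modules, that kind of version knowledge typically ends up encoded as brokenConditions-style guards; a hypothetical example (the version bound here is invented purely for illustration):

    # Hypothetical guard in the style of cuda-modules fixup expressions:
    brokenConditions = {
      "libcublasmp 0.7.x requires a newer CUDA toolkit" =
        cudaOlder "12.6";
    };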
@glepage:matrix.orgGaétan LepageSounds like a ton of fun :')17:47:23
@cameron-matrix:matrix.orgCameron Barker joined the room.18:18:26
