
NixOS CUDA

CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda



18 Nov 2025
@connorbaker:matrix.org connor (burnt/out) (UTC-8)
Would appreciate if someone could review https://github.com/NixOS/nixpkgs/pull/462761
06:36:03
@ss:someonex.net SomeoneSerge (back on matrix)
Gaétan Lepage: not quite a morning slot, but wdyt about 21:15 Paris for the weekly?
14:13:14
@connorbaker:matrix.org connor (burnt/out) (UTC-8)
I should be able to attend too
16:00:11
@glepage:matrix.org Gaétan Lepage
Way better for me.
16:14:49
19 Nov 2025
@eymeric:onyx.ovh Eymeric joined the room. 12:59:28
@jfly:matrix.org Jeremy Fleischman (jfly) joined the room. 18:13:28
@jfly:matrix.org Jeremy Fleischman (jfly)

i'm confused about the compatibility story between whatever libcuda.so file i have in /run/opengl-driver and my nvidia kernel module. i've read through <nixos/modules/hardware/video/nvidia.nix> and i see that hardware.graphics.extraPackages basically gets set to pkgs.linuxKernel.packages.linux_6_12.nvidiaPackages.stable.out (or whatever kernel i have selected)

how much drift (if any) is allowed here?

18:18:44
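
For context, a minimal host-side sketch of the wiring being described, assuming the standard NixOS nvidia module and the stable driver branch (the attribute names below are the usual ones, not taken from jfly's actual config):

  { config, ... }:
  {
    # The nvidia module loads the kernel module and adds the matching userspace
    # driver (which provides libcuda.so) to hardware.graphics.extraPackages,
    # which is what ends up in /run/opengl-driver/lib.
    services.xserver.videoDrivers = [ "nvidia" ];
    hardware.graphics.enable = true;

    # Pin the driver to the branch built against the selected kernel, so the
    # user-mode libcuda.so and the kernel module come from the same package.
    hardware.nvidia.package = config.boot.kernelPackages.nvidiaPackages.stable;
  }
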
@jfly:matrix.org Jeremy Fleischman (jfly)
to avoid an XY problem: what i'm actually doing is experimenting with defining systemd nixos containers that run cuda software internally, and i'm not sure how to get the right libcuda.so's in those containers so they play nicely with the host's kernel
18:21:46
@jfly:matrix.org Jeremy Fleischman (jfly)
if the answer is "just keep them perfectly in sync with the host kernel's version", that's OK. just trying to flesh out my mental model
18:22:27
@connorbaker:matrix.org connor (burnt/out) (UTC-8)
libcuda.so is provided by the NVIDIA CUDA driver, which for our purposes is generally part of the NVIDIA driver for your GPU.
Do the systemd NixOS containers provide their own copy of NVIDIA's driver? If not, they wouldn't have libcuda.so available.
The CDI stuff providing GPU access in containers provides /run/opengl-driver/lib (among other things): https://github.com/NixOS/nixpkgs/blob/6c634f7efae329841baeed19cdb6a8c2fc801ba1/nixos/modules/services/hardware/nvidia-container-toolkit/default.nix#L234-L237
General information about forward-backward compat is in NVIDIA's docs here: https://docs.nvidia.com/deploy/cuda-compatibility/#
18:31:45
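
For reference, a minimal sketch of turning that CDI path on (hedged: this targets OCI runtimes such as podman or docker, not declarative nixos-containers):

  {
    # Generates a CDI spec that exposes the host driver, including
    # /run/opengl-driver/lib, to OCI containers.
    hardware.nvidia-container-toolkit.enable = true;
    virtualisation.podman.enable = true;
  }

A container would then request the GPU with something like podman run --device nvidia.com/gpu=all ...
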
@sporeray:matrix.org Robbie Buxton
In reply to @jfly:matrix.org
to avoid an XY problem: what i'm actually doing is experimenting with defining systemd nixos containers that run cuda software internally, and i'm not sure how to get the right libcuda.so's in those containers so they play nicely with the host's kernel
If you run the host system's CUDA kernel driver ahead of the user-mode driver, it's normally fine provided it's not a major version change (i.e. 13 vs 12)
18:35:26
@jfly:matrix.org Jeremy Fleischman (jfly)

Do the systemd NixOS containers provide their own copy of NVIDIA's driver? If not, they wouldn't have libcuda.so available.

afaik, they do not automatically do anything (please correct me if i'm wrong). i'm making them get their own libcuda.so by explicitly configuring them with hardware.graphics.enable = true; and hardware.graphics.extraPackages.

mounting the cuda runtime from the host makes sense, though! thanks for the link to this nvidia-container-toolkit

18:39:03
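
A rough sketch of the "container gets its own libcuda.so" approach just described (the container name is made up): reusing the host's driver package inside the container keeps the user-mode library and the host's kernel module in lockstep.

  { config, ... }:
  {
    containers.cuda-worker = {
      autoStart = true;
      config = { ... }: {
        # Give the container its own /run/opengl-driver, built from the same
        # driver package the host kernel module comes from.
        hardware.graphics.enable = true;
        hardware.graphics.extraPackages = [
          config.boot.kernelPackages.nvidiaPackages.stable.out
        ];
      };
      # NOTE: the /dev/nvidia* device nodes still have to be exposed to the
      # container (e.g. allowedDevices plus bindMounts); omitted for brevity.
    };
  }
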
@lt1379:matrix.org Lun
What's the current best practice / future plans for impure GPU tests? Is the discussion in https://github.com/NixOS/nixpkgs/issues/225912 up to date? cc SomeoneSerge (back on matrix)
18:43:23
@ss:someonex.net SomeoneSerge (back on matrix)

Do the systemd NixOS containers provide their own copy of NVIDIA's driver? If not, they wouldn't have libcuda.so available.

They don't (unless forced). Libcuda and its closure are mounted from the host.

20:10:33
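
The analogous "mount it from the host" shape for a declarative nixos-container looks roughly like this (again, the container name is illustrative; the CDI module linked above does the equivalent for OCI containers):

  {
    containers.cuda-worker = {
      bindMounts."/run/opengl-driver" = {
        # libcuda.so and its closure come straight from the host, so they always
        # match whatever nvidia kernel module the host has loaded.
        hostPath = "/run/opengl-driver";
        isReadOnly = true;
      };
      # As above, the /dev/nvidia* device nodes still need to be passed through.
    };
  }
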
@ss:someonex.net SomeoneSerge (back on matrix)
The issue is maybe growing stale, but I'd say there haven't been any fundamental updates.
One bit it doesn't mention is that we rewrote most of the tests in terms of a single primitive, cudaPackages.writeGpuTestPython (can be overridden for e.g. rocm; could be moved outside cuda-modules).
It's now also clear that the VM tests can be done; we'd just have to use a separate marker to signal that a builder exposes an NVIDIA device with a vfio driver.
If we replace the sandboxing mechanism (e.g. with microvms) it'll get trickier... but again, a low-bandwidth baseline with vfio is definitely achievable.
And there's still the issue of describing constraints, like listing the architectures or memory quotas: perhaps we need a pluggable mechanism for assessing which builders are compatible with the derivation?
20:37:12
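
For anyone who hasn't seen that primitive: usage looks roughly like the sketch below. The exact argument names are an assumption here, so check cuda-modules in nixpkgs for the real interface.

  # Hedged sketch: `libraries` and its callback form are assumed, not authoritative.
  cudaPackages.writeGpuTestPython { libraries = ps: [ ps.torch ]; } ''
    # Only runs on a builder that actually exposes a GPU; device and driver
    # access come from the test harness, not from this script.
    import torch
    assert torch.cuda.is_available(), "no CUDA device visible to the test"
    print(torch.cuda.get_device_name(0))
  ''
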


