
NixOS CUDA

296 Members
CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda



29 Jan 2026
@glepage:matrix.orgGaétan Lepage *

Hello everyone!

I want to share some news regarding the progress yorik.sar, connor (burnt/out) (UTC-8), SomeoneSerge (back on matrix) and I made recently. Some of it was already communicated, but it doesn't hurt to summarize everything.

  • We have been building and caching all cudaSupport-sensitive nixpkgs packages on both stable and unstable nixpkgs channels on our Hydra instance for a few weeks. You can check the cuda-packages-unstable and cuda-packages-stable jobsets.
  • All gpuCheck instances across nixpkgs are now automatically and exhaustively collected and built in Hydra as well. See the cuda-gpu-checks-unstable and cuda-gpu-checks-stable jobsets. For a reminder, gpuChecks are derivations that run some package tests that need access to a physical GPU.
  • Regarding package maintenance and updates, recent times have been hectic, as nixpkgs received several breaking changes since the beginning of 2026 (recursing into python314Packages, the update to GCC 15, and more staging-next treats). Besides, here are notable ones:
  • This is still an idea, but my plan is to create two CUDA-specific Nix channels: nixos-unstable-cuda and nixos-25.11-cuda, where we could ensure that a curated set of package builds and tests are successful (release blockers). I would be glad to hear your feedback on this idea.
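
For illustration, here is a minimal sketch of what a gpuCheck-style derivation can look like. This is an assumption-laden example, not code from nixpkgs: the use of cudaPackages.saxpy as the smoke test and the "cuda" value in requiredSystemFeatures reflect how such checks are commonly tagged, but details vary per package.

```nix
# Hypothetical sketch: a derivation whose build runs a test that needs a
# physical GPU. Hydra only schedules it on builders advertising the
# "cuda" system feature, which is how the gpu-checks jobsets work.
{ runCommand, cudaPackages }:

runCommand "saxpy-gpu-check"
  {
    requiredSystemFeatures = [ "cuda" ];        # only build where a GPU is present
    nativeBuildInputs = [ cudaPackages.saxpy ]; # small CUDA smoke-test binary
  }
  ''
    saxpy        # fails unless a working GPU and driver are available
    touch $out
  ''
```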

We're trying our best to move things forward as fast as possible. Unfortunately, time and compute resources are limited, so it's never fast enough 😅

Have a nice day!

09:27:55
@hexa:lossy.networkhexanixos-25.11 should be fairly cheap on rebuilds14:12:03
@hexa:lossy.networkhexa* the opencv 4.13.0 update is required to update openvino too14:12:44
@snakyeyes:matrix.orgGilles Poncelet joined the room.22:07:17
30 Jan 2026
@connorbaker:matrix.orgconnor (he/him)Can someone review/merge https://github.com/NixOS/nixpkgs/pull/485211?02:20:04
@connorbaker:matrix.orgconnor (he/him)Also coming up: https://github.com/NixOS/nixpkgs/pull/48520803:10:33
@matthewcroughan:defenestrate.itmatthewcroughan @fosdem changed their display name from matthewcroughan to matthewcroughan @fosdem.13:50:24
31 Jan 2026
@bjth:matrix.orgBryan HonofHey hey, live from FOSDEM here. Is there an easy way to generate those manifest JSON files? Or is that a fully manual process?16:26:02
@bjth:matrix.orgBryan HonofNevermind, should've read the README. :)16:26:59
@connorbaker:matrix.orgconnor (he/him)Oh god is it up to date16:32:45
@connorbaker:matrix.orgconnor (he/him)Those manifests should come directly from NVIDIA (but they need a new line added to comply with the Nixpkgs formatter)16:33:13
1 Feb 2026
@sigmasquadron:matrix.orgFernando Rodrigues changed their display name from SigmaSquadron to Fernando Rodrigues.10:43:22
@glepage:matrix.orgGaétan Lepage OpenCV 4.13.0 bump has just been merged! 22:56:18
3 Feb 2026
@justbrowsing:matrix.orgKevin Mittman (UTC-8) Hi Bryan Honof, I can help answer questions about the JSON manifests. connor (burnt/out) (UTC-8): You could have mentioned that, happy to add a newline at the end 21:41:47
@connorbaker:matrix.orgconnor (he/him) Gaétan Lepage: PR for fixes related to CUDA 13 breakages: https://github.com/NixOS/nixpkgs/pull/485208 22:06:42
4 Feb 2026
@shadowrz:nixos.devYorusaka Miyabi joined the room.01:48:31
@benesim:benesim.orgBenjamin Isbarn joined the room.09:10:02
@benesim:benesim.orgBenjamin Isbarn Hi, I'm trying to run an application that uses OpenCV with CUDA support, built with Nix, on an NVIDIA® Jetson™ Orin™ Nano 8GB. It essentially fails with the following message: Internal Error: OpenCV(4.11.0) /build/source/modules/dnn/src/cuda4dnn/init.hpp:55: error: (-217:Gpu API call) CUDA driver version is insufficient for CUDA runtime version in function 'getDevice'\n (code: GpuApiCallError, -217). I did the old /run/opengl-driver/lib trick, which worked flawlessly on another device (a PC running a 3050), but it doesn't seem to work on the Jetson (strace shows that the libcuda I symlinked into /run/opengl-driver/lib does get loaded). I tried to use the same CUDA version that's on the Jetson (i.e. cat /usr/local/cuda/version.json gave me "version" : "11.4.19", so I went with cuda_11_4 in Nix). Any help would be highly appreciated :) 11:39:09
@connorbaker:matrix.orgconnor (he/him) So your Orin is running JetPack 5, correct?
Where did you find/get cuda_11_4? I'm not aware of that. How did you build OpenCV, from which commit, how did you configure Nixpkgs, etc.
20:13:16
@glepage:matrix.orgGaétan Lepage

RE: effort to migrate towards cuda 13 treewide

magma fails to build with cuda 13. I opened https://github.com/NixOS/nixpkgs/pull/487064 to fix it.

22:37:31
5 Feb 2026
@connorbaker:matrix.orgconnor (he/him)Merging; also made a backport for it since it's worth having there as well01:31:25
@benesim:benesim.orgBenjamin Isbarn

Yes it's currently on JetPack 5.1.3. I'm using 11cb3517b3af6af300dd6c055aeda73c9bf52c48 from nixpkgs (still 25.05 ;)). As for opencv:

          opencv = pkgs.opencv.override {
            enableCudnn = true;
            cudaPackages = pkgs.cudaPackages_11_4;
          };

and I'm using this for the nixpkgs config:

    {
      config = {
        cudaSupport = true;
        allowUnfree = true;
        allowBroken = true;
      };
      overlays = [
        (import rust-overlay)
      ];
    }
07:31:18
@benesim:benesim.orgBenjamin Isbarn *

OK, I made some progress: it turns out libcuda has a couple more dependencies on the Jetson.

    ldd /run/opengl-driver/lib/libcuda.so
        linux-vdso.so.1 (0x0000ffffa1fb3000)
        libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6 (0x0000ffffa0796000)
        libnvrm_gpu.so => /usr/lib/aarch64-linux-gnu/tegra/libnvrm_gpu.so (0x0000ffffa0729000)
        libnvrm_mem.so => /usr/lib/aarch64-linux-gnu/tegra/libnvrm_mem.so (0x0000ffffa0711000)
        libm.so.6 => /lib/aarch64-linux-gnu/libm.so.6 (0x0000ffffa0666000)
        libdl.so.2 => /lib/aarch64-linux-gnu/libdl.so.2 (0x0000ffffa0652000)
        librt.so.1 => /lib/aarch64-linux-gnu/librt.so.1 (0x0000ffffa063a000)
        libpthread.so.0 => /lib/aarch64-linux-gnu/libpthread.so.0 (0x0000ffffa0609000)
        libnvrm_sync.so => /usr/lib/aarch64-linux-gnu/tegra/libnvrm_sync.so (0x0000ffffa05f2000)
        libnvrm_host1x.so => /usr/lib/aarch64-linux-gnu/tegra/libnvrm_host1x.so (0x0000ffffa05d1000)
        /lib/ld-linux-aarch64.so.1 (0x0000ffffa1f83000)
        libnvos.so => /usr/lib/aarch64-linux-gnu/tegra/libnvos.so (0x0000ffffa05b1000)
        libnvsocsys.so => /usr/lib/aarch64-linux-gnu/tegra/libnvsocsys.so (0x0000ffffa059d000)
        libstdc++.so.6 => /lib/aarch64-linux-gnu/libstdc++.so.6 (0x0000ffffa03b8000)
        libnvsciipc.so => /usr/lib/aarch64-linux-gnu/tegra/libnvsciipc.so (0x0000ffffa0393000)
        libnvrm_chip.so => /usr/lib/aarch64-linux-gnu/tegra/libnvrm_chip.so (0x0000ffffa037f000)
        libgcc_s.so.1 => /lib/aarch64-linux-gnu/libgcc_s.so.1 (0x0000ffffa035b000)

I set LD_LIBRARY_PATH so that the linker is able to load those, which worked for a small sample C program I wrote. Need to try this workaround for the "big app" now ;)

10:24:24
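
The workaround described above can be sketched as a small launcher snippet. This is a sketch under stated assumptions: the Tegra driver directory is taken from the ldd output, and nothing else here comes from the thread.

```shell
# Sketch of the LD_LIBRARY_PATH workaround (assumption: the Tegra driver
# libraries live in /usr/lib/aarch64-linux-gnu/tegra, as the ldd output shows).
TEGRA_LIBS=/usr/lib/aarch64-linux-gnu/tegra

# Prepend the Tegra directory, keeping any pre-existing search path after it.
export LD_LIBRARY_PATH="$TEGRA_LIBS${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"

echo "$LD_LIBRARY_PATH"   # the Tegra directory should come first
```

On NixOS one would typically bake this into a per-application wrapper rather than export it globally, since a global LD_LIBRARY_PATH can leak host libraries into unrelated Nix-built programs.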
@benesim:benesim.orgBenjamin IsbarnOk that was the issue, it works now ;) 11:08:10


