| 5 Feb 2026 |
connor (burnt/out) (UTC-8) | Merging; also made a backport for it since it's worth having there as well | 01:31:25 |
Benjamin Isbarn | Yes it's currently on JetPack 5.1.3. I'm using 11cb3517b3af6af300dd6c055aeda73c9bf52c48 from nixpkgs (still 25.05 ;)). As for opencv:
opencv = pkgs.opencv.override {
  enableCudnn = true;
  cudaPackages = pkgs.cudaPackages_11_4;
};
and I'm using this for the nixpkgs config:
{
  config = {
    cudaSupport = true;
    allowUnfree = true;
    allowBroken = true;
  };
  overlays = [
    (import rust-overlay)
  ];
}
| 07:31:18 |
Benjamin Isbarn | Ok, I did make some progress. Turns out libcuda has a couple more dependencies on the Jetson:
ldd /run/opengl-driver/lib/libcuda.so
    linux-vdso.so.1 (0x0000ffffa1fb3000)
    libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6 (0x0000ffffa0796000)
    libnvrm_gpu.so => /usr/lib/aarch64-linux-gnu/tegra/libnvrm_gpu.so (0x0000ffffa0729000)
    libnvrm_mem.so => /usr/lib/aarch64-linux-gnu/tegra/libnvrm_mem.so (0x0000ffffa0711000)
    libm.so.6 => /lib/aarch64-linux-gnu/libm.so.6 (0x0000ffffa0666000)
    libdl.so.2 => /lib/aarch64-linux-gnu/libdl.so.2 (0x0000ffffa0652000)
    librt.so.1 => /lib/aarch64-linux-gnu/librt.so.1 (0x0000ffffa063a000)
    libpthread.so.0 => /lib/aarch64-linux-gnu/libpthread.so.0 (0x0000ffffa0609000)
    libnvrm_sync.so => /usr/lib/aarch64-linux-gnu/tegra/libnvrm_sync.so (0x0000ffffa05f2000)
    libnvrm_host1x.so => /usr/lib/aarch64-linux-gnu/tegra/libnvrm_host1x.so (0x0000ffffa05d1000)
    /lib/ld-linux-aarch64.so.1 (0x0000ffffa1f83000)
    libnvos.so => /usr/lib/aarch64-linux-gnu/tegra/libnvos.so (0x0000ffffa05b1000)
    libnvsocsys.so => /usr/lib/aarch64-linux-gnu/tegra/libnvsocsys.so (0x0000ffffa059d000)
    libstdc++.so.6 => /lib/aarch64-linux-gnu/libstdc++.so.6 (0x0000ffffa03b8000)
    libnvsciipc.so => /usr/lib/aarch64-linux-gnu/tegra/libnvsciipc.so (0x0000ffffa0393000)
    libnvrm_chip.so => /usr/lib/aarch64-linux-gnu/tegra/libnvrm_chip.so (0x0000ffffa037f000)
    libgcc_s.so.1 => /lib/aarch64-linux-gnu/libgcc_s.so.1 (0x0000ffffa035b000)
I set LD_LIBRARY_PATH so that the dynamic linker is able to load those, which worked for a small sample C program I wrote. Need to try this workaround for the "big app" now ;)
| 10:23:26 |
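A minimal sketch of that kind of workaround, assuming the stock JetPack location /usr/lib/aarch64-linux-gnu/tegra for the Tegra libraries (not Benjamin's exact setup):

pkgs.mkShell {
  # Sketch: let the dynamic linker find both libcuda.so and the NVIDIA-private
  # Tegra libraries it depends on when running CUDA programs from this shell.
  shellHook = ''
    export LD_LIBRARY_PATH=/run/opengl-driver/lib:/usr/lib/aarch64-linux-gnu/tegra''${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
  '';
}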
Benjamin Isbarn | Ok that was the issue, it works now ;) | 11:08:10 |
connor (burnt/out) (UTC-8) | Are you using the JetPack NixOS or cuda-legacy overlays?
Make sure you're changing the default CUDA package set globally through an overlay; it's not enough to change it for a single package, because that doesn't change it for its dependencies etc.
Jetson compute capabilities are never built by default, so make sure you're setting cudaCapabilities per the docs | 16:33:58 |
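A minimal sketch of what that might look like, assuming a nixpkgs import in the style Benjamin already uses (the capability value depends on the module: "7.2" for Xavier, "8.7" for Orin):

import <nixpkgs> {
  config = {
    allowUnfree = true;
    cudaSupport = true;
    # Jetson capabilities are opt-in; pick the one matching your device.
    cudaCapabilities = [ "7.2" ];
  };
  overlays = [
    # Change the default CUDA package set globally so dependencies pick it up too.
    (final: prev: {
      cudaPackages = final.cudaPackages_11_4;
    })
  ];
}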
| 6 Feb 2026 |
Yorusaka Miyabi | https://matrix.to/#/!hQKBiJjjGQdiJBvMgK:matrix.org/$4uVDwhUB5SqzFlJ5lGGTc-419hRN2vzXUoJ0_rJhv-A?via=matrix.org&via=t2bot.io&via=beeper.com
We won't be adding cache.nixos-cuda.org to the trusted caches on garnix, sorry! Our users trust us to build software for them that they run on their machines and servers, so we have to be very conservative here. (Of course we don't have any reason to believe that nixos-cuda.org is not trustworthy. But yeah, we just have to be very conservative.)
What the Garnix team said when asked about adding the NixOS CUDA cache to Garnix
| 02:16:22 |
SomeoneSerge (back on matrix) | Makes sense, why cross-contaminate the caches any further | 02:17:45 |
| Yinfeng joined the room. | 02:22:45 |
connor (burnt/out) (UTC-8) | Gaétan Lepage: I'd like to merge https://github.com/NixOS/nixpkgs/pull/484031, but I'm still concerned about whether the patch needs to be guarded by something. It shouldn't, since the compiler NVCC uses should always use the GLIBC from the default stdenv, which should be the newest GLIBC, right? | 16:35:17 |
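One way to sanity-check that assumption (a sketch only; cudaPackages.backendStdenv is assumed to be the stdenv behind NVCC's host compiler, and attribute paths may differ across nixpkgs revisions):

let
  pkgs = import <nixpkgs> { config = { allowUnfree = true; cudaSupport = true; }; };
in {
  # Both are expected to report the same, newest glibc.
  defaultGlibc  = pkgs.stdenv.cc.libc.version;
  nvccHostGlibc = pkgs.cudaPackages.backendStdenv.cc.libc.version;
}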
SomeoneSerge (back on matrix) | Mind waiting until tomorrow, I'd like to take a look? | 17:20:39 |
Gaétan Lepage | Sure SomeoneSerge (back on matrix), feel free to double check. | 22:35:32 |
Gaétan Lepage | Indeed, I don't think so. | 22:35:43 |
| 8 Feb 2026 |
hexa | where can I find libnvidia-ml.so.1? | 03:09:43 |
hexa | * where can I find libnvidia-ml.so.1 used by py3nvml? | 03:09:50 |
hexa | nvm … /nix/store/9g9zb0r0hk63fm1xq8582bgjd8d69k0k-nvidia-x11-580.119.02-6.12.68/lib/libnvidia-ml.so.1 | 03:10:49 |
Robbie Buxton | In reply to @hexa:lossy.network: where can I find libnvidia-ml.so.1? This is part of the NVIDIA driver, so if you aren't on NixOS you need to get it from wherever the driver is installed on the host | 03:37:00 |
Robbie Buxton | But looks like you found it! | 03:37:12 |
hexa | it is below the driverLink path | 03:38:20 |
Robbie Buxton | Yeah on nixos iirc it’s symlinked into /run/opengl-driver/lib if I’m not mistaken | 03:39:39 |
hexa | correct | 03:40:14 |
hexa | addDriverRunpath.driverLink is the relevant attribute | 03:40:24 |
| kaya 𖤐 changed their profile picture. | 22:50:15 |
Gaétan Lepage | After some testing, our current torch version (2.9.0) does build against cuda 13.0, but not cuda 13.1:
/nix/store/42f8i6v4gfkvdimy9aczwqik3scl6dpw-cuda13.1-cuda_cccl-13.1.115/include/cub/device/dispatch/dispatch_radix_sort.cuh(1425): error: no operator "+=" matches these operands
operand types are: at::native::<unnamed>::offset_t += const int64_t
end_offsets_current_it += num_current_segments;
Context: https://github.com/NixOS/nixpkgs/pull/486717 | 23:01:20 |
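A possible stopgap in the meantime (a sketch only; the cudaPackages_13_0 attribute name is an assumption about the current nixpkgs layout) is to pin the default CUDA package set back to 13.0 globally, in the same overlay style as above:

(final: prev: {
  # Keep torch and everything downstream on CUDA 13.0 until a torch release
  # builds against 13.1.
  cudaPackages = final.cudaPackages_13_0;
})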
Gaétan Lepage | I'll try to ship torch 2.10.0 ASAP, hoping that it is compatible with cuda 13.1 (which will unfortunately probably not be the case). | 23:02:57 |
| @niten:fudo.im left the room. | 23:07:13 |
| 9 Feb 2026 |
Benjamin Isbarn | I'm not using any overlay for that purpose right now. Good point regarding the global override, will do that ;). So cudaCapabilities would affect packages like cudart, cublas, etc., I guess? I.e. which features they consider available and thus use? Then in theory this should yield better performance for the aforementioned libraries? | 07:03:05 |
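To see concretely what the setting changes, a sketch (cudaPackages.cudaFlags and its gencode attribute are assumed here; names may differ across nixpkgs revisions): cudaCapabilities mainly controls which GPU architectures the CUDA libraries and kernels are built for.

let
  pkgs = import <nixpkgs> {
    config = { allowUnfree = true; cudaSupport = true; cudaCapabilities = [ "7.2" ]; };
  };
in
  # Expected to yield something like [ "-gencode=arch=compute_72,code=sm_72" ].
  pkgs.cudaPackages.cudaFlags.gencode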