
NixOS CUDA

281 Members · 57 Servers

CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda



2 Dec 2025
[23:21:23] connor (burnt/out) (UTC-8): CC Gaétan Lepage SomeoneSerge (back on matrix)
3 Dec 2025
[00:09:44] connor (burnt/out) (UTC-8):
ugh
it's through cudnn-frontend
because https://github.com/NVIDIA/cudnn-frontend/blob/0258951d4d512f4714eb1574496f4d57669b1b93/CMakeLists.txt#L43 means the generated CMake target carries NVCC's include directory
and since I made NVCC a single-output derivation, the output containing include and the output containing bin are one and the same
so ONNX Runtime's disallowedRequisites = lib.optionals cudaSupport [ (lib.getBin cuda_nvcc) ]; is tripped by the CMake config from cudnn-frontend
so I guess I'm trying to split up NVCC again
fml
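[For context, a minimal sketch of the check being tripped here: disallowedRequisites makes a build fail when any listed store path ends up in the output's closure. Only the disallowedRequisites line is quoted from the message above; the surrounding derivation skeleton is hypothetical.]

    # Hypothetical derivation skeleton; only the disallowedRequisites
    # line is taken from the message above.
    { lib, stdenv, cudaSupport ? false, cuda_nvcc ? null, ... }:

    stdenv.mkDerivation {
      pname = "onnxruntime";
      # ... src, build inputs, etc. elided ...

      # Fail the build if the bin output of cuda_nvcc (the compiler
      # itself) ends up in this package's closure, e.g. dragged in by a
      # generated CMake config file that records NVCC's include path.
      disallowedRequisites = lib.optionals cudaSupport [ (lib.getBin cuda_nvcc) ];
    }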
[09:51:06] Gaétan Lepage: FYI (for stable channel users), https://hydra.nixos-cuda.org is now building nixos-25.11-small too :)
[09:52:00] hexa (UTC+1): after renaming you need to toggle the jobsets for them to work again
[09:52:10] hexa (UTC+1): otherwise they just
[09:52:15] hexa (UTC+1):

    evaluation failed with exit code 255

[09:52:24] hexa (UTC+1): as can be seen here https://hydra.nixos-cuda.org/jobset/nixos-cuda/cuda-packages-unstable#tabs-errors
[09:52:38] hexa (UTC+1): and here https://hydra.nixos-cuda.org/jobset/nixos-cuda/cuda-packages-legacy#tabs-errors
[09:52:56] hexa (UTC+1): https://github.com/NixOS/hydra/issues/1288
[09:54:14] Gaétan Lepage: Thanks for the tip, hexa (UTC+1)!
[15:58:10] hexa (UTC+1): Gaétan Lepage: you probably also want a lower priority (higher value) than cache.nixos.org, which has prio 30
[17:36:09] Gaétan Lepage: What are you referring to? Something in the nix-cuda infra?
[18:22:39] hexa (UTC+1): https://cache.nixos-cuda.org/nix-cache-info
[18:22:52] hexa (UTC+1): https://cache.nixos.org/nix-cache-info
[18:23:00] hexa (UTC+1): harmonia is greedy
[18:23:28] hexa (UTC+1): this makes everyone who wants to substitute prefer nixos-cuda over c.n.o
[19:23:21] Gaétan Lepage: Ok, makes sense! Done.
[20:50:50] Gaétan Lepage: connor (burnt/out) (UTC-8): the ONNX bump should be good to go.
[21:17:23] teto: I wonder what priority to set the cache.nixos-cuda.org cache to? It's the default on the webpage, but shouldn't a typical user fetch from cache.nixos.org first? In terms of cost etc., is there any preference?
[21:18:07] hexa (UTC+1): the priority is set on the server side, no?
[21:18:28] hexa (UTC+1): and yes, you'll always want to prefer c.n.o
[21:20:09] Gaétan Lepage: You can also append ?priority=3 to the substituters in /etc/nix/nix.conf
[21:20:19] hexa (UTC+1): ah ok
[22:22:24] teto: ha yeah, the server shows 50 as default https://cache.nixos-cuda.org/ so I have nothing to do, nice :)
[22:22:39] teto: (I was indeed thinking of ?priority)
[22:24:16] hexa (UTC+1): it does now 😛
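[For readers following along: substituter priority is numeric and lower values win. A binary cache advertises its default via /nix-cache-info, and the ?priority query parameter overrides that on the client side. A sketch of the nix.conf approach Gaétan mentions; the value 60 is an arbitrary illustration, not a number from the discussion.]

    # /etc/nix/nix.conf (sketch). Lower priority value = preferred.
    # Without ?priority, Nix uses the value the cache advertises in its
    # /nix-cache-info; the query parameter overrides it client-side.
    substituters = https://cache.nixos.org https://cache.nixos-cuda.org?priority=60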
[23:55:01] corroding556 joined the room.
4 Dec 2025
[01:57:08] corroding556:

Hi all! Very much appreciate the work that's been put into CUDA support in nixpkgs/NixOS.
I recently updated my system configuration to a newer version of nixpkgs and had to pin cudaCapabilities to 6.1 now that CUDA 13.0 has dropped support for Pascal, and started getting some confusing build failures as a result.
I spent several hours looking into how the CUDA packaging ecosystem works, only to realize that using --trace-verbose gave the answer straight up >.<

It seems that nixpkgs updating to cuDNN 9.13 means that other packages pulling in cudaPackages_12_{6,8,9} no longer support compute capabilities < 7.5, even though CUDA itself supports compute capabilities >= 5.0 up until the jump to 13.0.

I noticed 9.13 is not the only version in nixpkgs, though. What is the strategy around how many legacy versions of CUDA packages to maintain in nixpkgs?
Does it make sense to add cuDNN 9.11 as a pinned version to bridge the gap, since 9.12 dropped support for compute capabilities < 7.5?
If that's not appropriate, 8.9.7 is the most recent version available in nixpkgs that still supports my hardware; how would I force my config to use that, and how reasonable is that?

Sorry for all the questions, appreciate any advice 😅
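[One hedged sketch of the last option (forcing cuDNN 8.9.7): cudaPackages is a package scope, so a scope-wide pin can be attempted with overrideScope in an overlay. Whether current nixpkgs still exposes a cudnn_8_9 attribute, and whether dependent packages actually build against it, are assumptions here, not something confirmed in the thread.]

    # NixOS configuration sketch; untested assumptions marked below.
    {
      # Pin capabilities for a Pascal (sm_61) card.
      nixpkgs.config = {
        cudaSupport = true;
        cudaCapabilities = [ "6.1" ];
      };
      nixpkgs.overlays = [
        (final: prev: {
          cudaPackages = prev.cudaPackages.overrideScope (cudaFinal: cudaPrev: {
            # Assumption: the scope still exposes a cudnn_8_9 attribute.
            cudnn = cudaFinal.cudnn_8_9;
          });
        })
      ];
    }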


