
NixOS CUDA

CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda



18 Dec 2025
[16:23:37] SomeoneSerge (back on matrix): Had to keep the kernelPackages pinned most of the time too
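(Not from the chat, but for readers unfamiliar with the pinning being referred to, a minimal NixOS sketch; the kernel attribute is only an example, pick whatever your driver version actually supports.)

```nix
{ config, pkgs, ... }:
{
  # Example only: pin the kernel so the out-of-tree NVIDIA module keeps building.
  boot.kernelPackages = pkgs.linuxPackages_6_6;
  # And pin the driver branch alongside it.
  hardware.nvidia.package = config.boot.kernelPackages.nvidiaPackages.production;
}
```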
[16:24:14] SomeoneSerge (back on matrix): I'm afraid even NixOS cannot fix all of NVIDIA's problems
[16:25:23] SomeoneSerge (back on matrix): Best if you have integrated graphics to keep your display on...
[16:39:04] Kevin Mittman (EOY sleep): I use swaywm with --unsupported-gpu (though not on nixos)
[16:51:42] Gaétan Lepage: Same
19 Dec 2025
[16:11:35] adrian-gierakowski: sorry if this has been discussed before, but looking at https://hydra.nix-community.org/jobset/nixpkgs/cuda-stable#tabs-evaluations the last job I'm seeing was a week ago. I thought this jobset was following nixos-unstable-small (although now that I've checked, it's nixos-25.05-small). Has something happened that stopped it from running?
[16:11:36] adrian-gierakowski: (attachment: cuda-job.png)
[16:19:44] Gaétan Lepage: Hi!
nix-community does not build anything cuda-related anymore.
We are now testing and building packages on our own infra:

We build packages for both the unstable channel and the latest stable nixpkgs channel.
[17:00:29] adrian-gierakowski: awesome, thanks!
[19:04:15] Alexandros Liarokapis: Does anyone know of a good solution for making nix-built OCI containers usable on systems with the NVIDIA Container Toolkit installed? Making /run/opengl-driver etc. work with the mounted paths and all that, I mean.
[19:05:36] apyh: oh yeah, i run everything thru nix-gl-host
[19:05:46] apyh: we're using it in prod, seems to work great
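(For reference, a rough sketch of the CDI route on a NixOS host, assuming the hardware.nvidia-container-toolkit module and a CDI-aware runtime such as podman; nix-gl-host, mentioned above, is an alternative that patches binaries on the host at run time instead.)

```nix
{ ... }:
{
  # Assumption: a recent NixOS with the nvidia-container-toolkit module.
  hardware.nvidia-container-toolkit.enable = true;  # generates CDI specs for the GPUs
  virtualisation.podman.enable = true;

  # A CDI-aware runtime can then expose the host driver inside a Nix-built OCI image:
  #   podman run --device nvidia.com/gpu=all <image> nvidia-smi
  # The driver libraries are mounted by the runtime, so the image itself stays driver-free.
}
```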
[19:08:17] adrian-gierakowski: Does anyone have any tips on automatically bumping nixpkgs in a repo that depends on the CUDA cache? The idea is to only bump to the latest commit from nixos-unstable-small once it's been built by the cuda jobset.
[20:06:26] Gaétan Lepage: It's just an idea for now, but long-term, I would like us to have nixos-unstable-cuda and nixos-25.11-cuda channels that have their specific channel blockers.
People will be able to follow these channels knowing that they will have reliable cache hits.
[20:09:18] adrian-gierakowski: That would be great!
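(Until such channels exist, one hedged sketch of the kind of bump asked about above: query the jobset's newest evaluation on Hydra and pin nixpkgs to that revision. The jobset name, the JSON endpoint, and the field names are assumptions based on Hydra's standard API and should be verified against hydra.nix-community.org.)

```sh
#!/usr/bin/env bash
# Sketch only: pin nixpkgs to the revision of the newest evaluation of a CUDA jobset.
# Assumes Hydra's JSON API, jq, and a flake-based repo; "nixpkgs/cuda" is an example jobset name.
set -euo pipefail

evals_url="https://hydra.nix-community.org/jobset/nixpkgs/cuda/evals"
rev=$(curl -fsSL -H "Accept: application/json" "$evals_url" \
  | jq -r '.evals[0].jobsetevalinputs.nixpkgs.revision')

echo "Pinning nixpkgs to $rev"
nix flake lock --override-input nixpkgs "github:NixOS/nixpkgs/$rev"
```

Note that this only picks the revision of the newest evaluation; it does not check that every build in that evaluation actually succeeded, which would need an extra query per build.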
20 Dec 2025
[19:24:19] le-chat: I'm just copying/symlinking something like libcuda* lib*nvidia* libnv* from the directory where the toolkit put them.
Once upon a time I was required (for trtexec) to run patchelf --add-rpath /run/opengl-driver/lib on these copies, but that was on an older NixOS; I don't know if it's necessary now.
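(A minimal sketch of that workaround; the source directory and glob patterns are assumptions, and as noted above the patchelf step may no longer be needed on current NixOS.)

```sh
# Sketch: vendor the driver userspace libraries and point their rpath
# at the NixOS driver directory.
set -euo pipefail
src=/usr/lib/x86_64-linux-gnu   # assumption: wherever the toolkit installed them
dest=./vendored-driver-libs
mkdir -p "$dest"
cp -v "$src"/libcuda* "$src"/lib*nvidia* "$src"/libnv* "$dest"/
for f in "$dest"/*.so*; do
  # patchelf >= 0.14 has --add-rpath; possibly unnecessary on current NixOS.
  patchelf --add-rpath /run/opengl-driver/lib "$f"
done
```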
21 Dec 2025
[01:23:14] SomeoneSerge (back on matrix): s/idea for now/idea for the past 4+ years that keeps running into penny counting and compliance excuses/ Ftfy
[09:01:01] adrian-gierakowski (in reply to SomeoneSerge): Yeah, @glepage:matrix.org: why not now?
[09:32:42] Gaétan Lepage: Well, because we unfortunately are quite busy with a lot of maintenance work. It is hard to find some time to work on those more long-term projects.
[12:00:04] adrian-gierakowski (in reply to Gaétan Lepage): I'd be happy to help if there is anything I could do to speed this up.
[16:05:31] rpcruz: Hey guys, does anyone else have a setup with an A100 (or some such) that requires nvidia-fabricmanager? Could you maybe share the relevant .nix configuration bits? (If using a relatively modern NixOS, 25.05 or 25.11.) hardware.nvidia.datacenter.enable = true produces a broken nv-fabricmanager with undefined symbols for me. I managed to make it work by packaging nvidia-fabricmanager myself, but it is a bit ugly and, as a novice, I am not sure everything is well done. If anyone has a configuration with nvidia-fabricmanager that they could share, that would be great...!
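(For readers following along, the module route referred to above is roughly the sketch below. It is not a known-good configuration, and the dc_535 attribute is only an example of a datacenter driver branch.)

```nix
{ config, ... }:
{
  # rpcruz reports this produces an nv-fabricmanager with undefined symbols,
  # so treat it as a starting point, not a working setup.
  hardware.nvidia.datacenter.enable = true;
  # Assumption: the datacenter module is paired with a datacenter driver branch.
  hardware.nvidia.package = config.boot.kernelPackages.nvidiaPackages.dc_535;
}
```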
[19:01:11] connor (burnt/out) (UTC-8): A lot of the functionality gated behind datacenter-grade GPUs or multi-GPU setups is out of reach of the maintainers at the moment, as we've only recently been able to get a Hydra set up to build packages and run a few GPU checks. Part of the quick iteration time I've had in the past is because I own a 4090 and so can benchmark and test quickly.
But for bigger stuff, the only approach I've had any luck with is using Lambda Labs to rent multi-GPU instances fairly cheaply and try Nix-built binaries on them. That doesn't test using NixOS as the host system, though, or any number of other features unique to the hardware (or even specific code paths).
If you have such hardware or have access to it, please don't hesitate to open PRs. Access to hardware (among other things, like time and burnout) is a big blocker for us supporting more stuff. We can always coach or provide feedback on packaging! And we can certainly use such an opportunity to update (or write) contributing documents.
