NixOS CUDA | 288 Members | 58 Servers
CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda

| Message | Time |
|---|---|
| 18 Dec 2025 | |
| I use swaywm with --unsupported-gpu (though not on NixOS) | 16:39:04 |
| Same | 16:51:42 |
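For NixOS users wanting the same workaround, here is a minimal sketch of passing that flag through the stock sway module (not from the messages above, which concern non-NixOS systems):

```nix
# A minimal sketch, assuming the standard NixOS sway module; this passes
# --unsupported-gpu to sway at launch, which is what the messages above
# describe doing by hand elsewhere.
{
  programs.sway = {
    enable = true;
    extraOptions = [ "--unsupported-gpu" ];
  };
}
```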
| 19 Dec 2025 | |
| sorry if this has been discussed before, but looking at https://hydra.nix-community.org/jobset/nixpkgs/cuda-stable#tabs-evaluations the last job I'm seeing was a week ago. I thought this jobset was following nixos-unstable-small (although now that I've checked, it's nixos-25.05-small). Has something happened that stopped it from running? | 16:11:35 |
| (attachment: cuda-job.png) | 16:11:36 |
| Hi! We build packages for both the unstable channel and the latest stable nixpkgs channel. | 16:19:44 |
| awesome, thanks! | 17:00:29 |
| Does anyone know of a good solution for making Nix-built OCI containers usable on systems with the NVIDIA container toolkit installed? I mean making /run/opengl-driver etc. work with the mounted paths and all that. | 19:04:15 |
| oh yeah, i run everything thru nix-gl-host | 19:05:36 |
| we're using it in prod, seems to work great | 19:05:46 |
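For context, a minimal sketch of what the nix-gl-host approach can look like for an OCI image. It assumes nix-gl-host is available as pkgs.nix-gl-host (its wrapper binary is nixglhost; adjust if you consume the numtide/nix-gl-host flake directly) and a hypothetical pkgs.my-cuda-app:

```nix
# A minimal sketch, not the poster's exact setup.
{ pkgs }:
pkgs.dockerTools.buildLayeredImage {
  name = "my-cuda-app";
  tag = "latest";
  # pkgs.nix-gl-host provides the `nixglhost` wrapper; my-cuda-app is hypothetical.
  contents = [ pkgs.nix-gl-host pkgs.my-cuda-app ];
  # At startup, nixglhost discovers the host-mounted NVIDIA driver libraries
  # and re-execs the wrapped program against shims for them.
  config.Entrypoint = [ "/bin/nixglhost" "/bin/my-cuda-app" ];
}
```

Run the resulting image with the toolkit active (e.g. `docker run --gpus all ...`) so the host driver libraries are actually mounted where nixglhost can find them.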
| Does anyone have some tips on automatically bumping nixpkgs in a repo that depends on the CUDA cache? The idea is to only bump to the latest commit from nixos-unstable-small once it's been built by the cuda jobset. | 19:08:17 |
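One possible shape for this, sketched under the assumption that the nix-community Hydra exposes the standard Hydra JSON API and that the jobset's nixpkgs input is literally named `nixpkgs` (both worth verifying); the jobset name below is the stable one linked earlier in this log:

```nix
# A sketch of one way to do this, not an established workflow.
{ pkgs }:
pkgs.writeShellScriptBin "bump-nixpkgs-to-cuda-built-rev" ''
  set -euo pipefail
  # Newest evaluation of the CUDA jobset (swap in the unstable jobset name
  # if that is what you follow). NB: latest-eval is merely the newest
  # evaluation; checking that all of its builds finished successfully is an
  # extra step not shown here.
  rev=$(${pkgs.curl}/bin/curl -sfL -H 'Accept: application/json' \
      'https://hydra.nix-community.org/jobset/nixpkgs/cuda-stable/latest-eval' \
    | ${pkgs.jq}/bin/jq -r '.jobsetevalinputs.nixpkgs.revision')
  # Pin the repo's flake input to the revision Hydra actually evaluated.
  nix flake lock --override-input nixpkgs "github:NixOS/nixpkgs/$rev"
''
```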
| It's just an idea for now, but long-term, I would like us to have nixos-unstable-cuda and nixos-25.11-cuda channels that have their specific channel blockers. People will be able to follow these channels knowing that they will have reliable cache hits. | 20:06:26 |
| That would be great! | 20:09:18 |
| 20 Dec 2025 | |
| I'm just copying/symlinking something like libcuda* lib*nvidia* libnv* from the directory where the toolkit put them. Once upon a time I was required (for trtexec) to run patchelf --add-rpath /run/opengl-driver/lib on these copies, but that was on an older NixOS; I don't know if it's necessary now. | 19:24:19 |
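A minimal sketch of that copy/symlink step as a container-start helper. The host library directory is an assumption (it depends on where the toolkit mounts the driver files), and this has to run at container start because those mounts do not exist at image build time:

```nix
# A minimal sketch of the approach described above, under assumed paths.
{ pkgs }:
pkgs.writeShellScriptBin "link-host-nvidia-libs" ''
  # Assumed mount point for the toolkit-injected driver libraries; adjust.
  hostlibs=/usr/lib/x86_64-linux-gnu
  mkdir -p /run/opengl-driver/lib
  for lib in "$hostlibs"/libcuda* "$hostlibs"/lib*nvidia* "$hostlibs"/libnv*; do
    [ -e "$lib" ] && ln -sf "$lib" /run/opengl-driver/lib/ || true
  done
  # On an older NixOS some binaries (e.g. trtexec) additionally needed:
  #   patchelf --add-rpath /run/opengl-driver/lib <binary>
''
```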
| 21 Dec 2025 | |
| s/idea for now/idea for the past 4+ years that keeps running into penny counting and compliance excuses/ Ftfy | 01:23:14 |
| (in reply to @ss:someonex.net) Yeah, @glepage:matrix.org: why not now? | 09:01:01 |
| Well, because we unfortunately are quite busy with a lot of maintenance work. It is hard to find time to work on those more long-term projects. | 09:32:42 |
| (in reply to @glepage:matrix.org) I'd be happy to help if there is anything I could do to speed this up. | 12:00:04 |
| Hey guys, does anyone else have a setup with an A100 (or some such) that requires nvidia-fabricmanager? Could you maybe share the relevant .nix configuration bits? (If using a relatively modern NixOS - 25.05 or 25.11.) hardware.nvidia.datacenter.enable = true produces a broken nv-fabricmanager with undefined symbols for me. I managed to make it work by packaging nvidia-fabricmanager myself, but it is a bit ugly, and as a novice I am not sure everything is well done. If anyone has a configuration with nvidia-fabricmanager they could share with me, that would be great...! | 16:05:31 |
| A lot of the functionality gated behind datacenter-grade GPUs or multi-GPU setups is out of reach of the maintainers at the moment, as we've only recently been able to get a Hydra set up to build packages and run a few GPU checks. Part of the quick iteration time I've had in the past is because I own a 4090 and so can benchmark and test quickly. But for bigger stuff, the only approach I've had any luck with is using Lambda Labs to rent multi-GPU instances fairly cheaply and trying Nix-built binaries on them. But that doesn't test using NixOS as the host system or any number of other features unique to the hardware (or even specific code paths). If you have such hardware or have access to it, please don't hesitate to open PRs. Access to hardware (among other things, like time and burnout) is a big blocker for us supporting more stuff. We can always coach or provide feedback on packaging! And we can certainly use such an opportunity to update (or create) contributing documents. | 19:01:11 |
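For reference, a minimal sketch of the module options under discussion. This is not a verified-working configuration (whether it yields a functional nv-fabricmanager is exactly the open question above), and the dc_535 pin is an assumption to match a datacenter driver generation:

```nix
# A minimal sketch of the options in question; not verified on A100 hardware.
{ config, ... }:
{
  # Enables the fabric manager and related datacenter bits.
  hardware.nvidia.datacenter.enable = true;
  # Assumed pin to a datacenter driver branch; match your driver generation.
  hardware.nvidia.package = config.boot.kernelPackages.nvidiaPackages.dc_535;
}
```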
| 4 Aug 2022 | |
| (hi, just came here to read + respond to this.) | 03:28:52 |
| hey. i had previously sympathized with samuela and, like i said before, had some of the same frustrations. i just edited my github comment to add "[CUDA] packages are universally complicated, fragile to package, and critical to daily operations. Nix being able to manage them is unbelievably helpful to those of us who work with them regularly, even if support is downgraded to only having an expectation of function on stable branches." | 03:29:14 |
| (in reply to @tpw_rules:matrix.org) ugh, 45 minutes? that's... not great. not to air dirty laundry, but did you do what samuela did in the wandb PR and at least say that that wasn't a great thing to do? (not sure how else to word that, you get what i mean) | 03:30:23 |