NixOS CUDA
| CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda |
| Sender | Message | Time |
|---|---|---|
| 15 Dec 2024 | ||
| We seem unable to compile torch, so okay, I override python to use torch-bin, but then I still have to set allowBroken | 21:02:48 |
| 21:02:52 | |
| and then if I do, this happens | 21:02:56 | |
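The error output referenced above is not preserved in the export. For context, here is a minimal sketch of the workaround being described: override the interpreter's package set to use the prebuilt torch-bin and set allowBroken. The overlay shape and the final withPackages call are assumptions, not the author's actual flake.

```nix
# Sketch: use the prebuilt torch-bin wheel instead of compiling torch,
# and allow broken packages in case a dependency is still marked broken.
let
  pkgs = import <nixpkgs> {
    config = {
      allowUnfree = true;   # the CUDA bits pulled in here are unfree
      allowBroken = true;   # the allowBroken workaround mentioned above
      cudaSupport = true;
    };
    overlays = [
      (final: prev: {
        python3 = prev.python3.override {
          packageOverrides = pyFinal: pyPrev: {
            torch = pyPrev.torch-bin;  # prebuilt wheel instead of building torch
          };
        };
        python3Packages = final.python3.pkgs;  # keep the top-level attr in sync
      })
    ];
  };
in
pkgs.python3.withPackages (ps: [ ps.torch ])
```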
| SomeoneSerge (utc+3): connor (he/him) (UTC-7): I spent some time revamping my personal flake today/yesterday and now have a better understanding of the new hostPlatform/buildPlatform stuff. Alongside lib.platforms, I do think it's the right long-term interface for cudaSupport, gencodes, and also for configuring fabrics. The entire extent of my involvement with nixpkgs so far has been small contributions to specific packages. I want to write up a small doc proposing this and discussing the migration path, but I also wanted to collaborate with you guys. I don't even know where to post this: do I just open an issue on GitHub, or does it need to go on the Discourse? | 21:18:06 |
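For reference, these are the knobs that exist in current nixpkgs and that a hostPlatform/buildPlatform-based design would presumably subsume; no platform-based equivalent exists yet, so only today's config interface is shown.

```nix
# Current interface: GPU settings are plain nixpkgs config options,
# independent of the hostPlatform/buildPlatform description.
import <nixpkgs> {
  system = "x86_64-linux";
  config = {
    allowUnfree = true;
    cudaSupport = true;
    cudaCapabilities = [ "8.6" "9.0" ];  # the "gencodes" mentioned above
  };
}
```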
| 16 Dec 2024 | ||
| I'd recommend testing the waters and getting a sense of the prior art around extending those; perhaps the folks in the Nixpkgs Stdenv room would be good to reach out to? https://matrix.to/#/#stdenv:nixos.org After that (once you've gathered a list of people interested in or knowledgeable about such work), I think drafting an RFC would be the next step, to fully lay out the plan for design and implementation. If it's a relatively small change, maybe an RFC is unnecessary and a PR would be fine! | 06:55:30 |
| That's the reason I've lately been ignoring flakes in personal projects: just stick to impure eval and autocalling. For me it's usually going in the direction of something like the sketch below | 14:45:04 |
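The snippet that followed is not in the export. Below is a minimal sketch of the "impure eval and autocalling" style being described, assuming a hypothetical ./my-app.nix; it is an illustration of the idea, not the author's code.

```nix
# default.nix
# <nixpkgs> is resolved from NIX_PATH (impure eval); callPackage fills in
# each package's arguments automatically ("autocalling").
{ pkgs ? import <nixpkgs> {
    config = {
      allowUnfree = true;
      cudaSupport = true;  # flip per machine, no flake plumbing needed
    };
  }
}:
{
  my-app = pkgs.python3Packages.callPackage ./my-app.nix { };  # hypothetical package file
}
```

Built with `nix-build -A my-app`, which is impure by default and therefore picks up whatever NIX_PATH points at.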
| I actually ended up with a good pattern using flake.parts | 14:45:28 | |
| 14:46:22 | |
| Could be deduplicated using a function, but this is what it looks like unfolded | 14:46:38 | |
| Then the other flake modules get these rocmPkgs and nvidiaPkgs arguments passed to them (sketched below) | 14:47:01 |
| 14:47:04 | |
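The actual snippets were not preserved in the export. Here is a hedged reconstruction of the flake.parts pattern described: the rocmPkgs/nvidiaPkgs names come from the messages, everything else (inputs, systems, config values) is assumed.

```nix
# flake.nix (sketch)
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
  inputs.flake-parts.url = "github:hercules-ci/flake-parts";

  outputs = inputs@{ flake-parts, nixpkgs, ... }:
    flake-parts.lib.mkFlake { inherit inputs; } {
      systems = [ "x86_64-linux" ];

      perSystem = { system, ... }: {
        # One nixpkgs instantiation per GPU vendor, exposed to every other
        # perSystem module as extra module arguments.
        _module.args.nvidiaPkgs = import nixpkgs {
          inherit system;
          config = { allowUnfree = true; cudaSupport = true; };
        };
        _module.args.rocmPkgs = import nixpkgs {
          inherit system;
          config = { allowUnfree = true; rocmSupport = true; };
        };
      };
    };
}
```

Other perSystem modules can then simply declare `{ nvidiaPkgs, rocmPkgs, ... }` in their argument set, which is the "passed to them" part; duplicating the two `import nixpkgs` calls is what the "could be deduplicated using a function" remark refers to.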
| I think this should be doable without major backwards incompatible changes? | 14:47:07 | |
| the issue is that rocmSupport is fully broken in Nixpkgs, so this doesn't work, but it should | 14:47:25 | |
| * the issue is that rocmSupport is fully broken in Nixpkgs, so this doesn't work, but it should in future | 14:47:29 | |
| That's more or less what the llama-cpp flake did, but didn't you say… | 14:48:09 |
| I'm happy as long as I don't have to do weird things to achieve it | 14:48:58 | |
| and for me, this is not weird | 14:49:02 | |
| previously what my flake was doing was far weirder | 14:49:07 | |
| https://github.com/nixified-ai/flake/blob/master/projects/invokeai/default.nix#L66-L96 | 14:49:32 | |
| previously it was defining functions that were able to create variants of packages without setting rocmSupport or cudaSupport | 14:49:51 |
| Just terrible | 14:50:00 | |
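The linked invokeai code is not reproduced here. Purely as an illustration of the shape being criticised, here is a hypothetical variant-making helper that rewrites a package's dependencies in place instead of instantiating nixpkgs with cudaSupport/rocmSupport; the helper name and structure are assumptions.

```nix
# Hypothetical sketch, not the code behind the link above.
{ python3Packages }:

let
  # Build a package variant by swapping its direct torch dependency.
  makeVariant = torch: pkg:
    pkg.overridePythonAttrs (old: {
      propagatedBuildInputs =
        map (dep: if (dep.pname or "") == "torch" then torch else dep)
          (old.propagatedBuildInputs or [ ]);
    });
in
{
  withCuda = makeVariant python3Packages.torchWithCuda;
  withRocm = makeVariant python3Packages.torchWithRocm;
}
```

Rewriting build inputs like this is fragile (transitive dependencies still see the default torch), which is roughly why the per-vendor nixpkgs instantiation sketched earlier is the cleaner pattern.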
| Besides, the modules the flake will export won't interact with the comfyui-nvidia or comfyui-amd attrs; those are just for people who want to try it with nix run | 14:50:42 |
| On a system using the nixosModules, the overlay will be applied, and that path completely ignores the packages attr of the flake | 14:51:05 |
| the packages attr of the flake is really just there for people wanting to use things in a non-NixOS context | 14:53:53 |
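A sketch of that split, continuing the flake.parts reconstruction above: the comfyui-nvidia and comfyui-amd names come from the messages, while ./comfyui.nix and the module layout are assumptions.

```nix
# flake-module.nix (sketch)
{ inputs, ... }:
{
  perSystem = { nvidiaPkgs, rocmPkgs, ... }: {
    # Only here so non-NixOS users can `nix run .#comfyui-nvidia`.
    packages = {
      comfyui-nvidia = nvidiaPkgs.callPackage ./comfyui.nix { };  # assumed path
      comfyui-amd    = rocmPkgs.callPackage ./comfyui.nix { };
    };
  };

  flake.overlays.default = final: prev: {
    comfyui = final.callPackage ./comfyui.nix { };
  };

  flake.nixosModules.default = {
    # On NixOS only the overlay is applied; cudaSupport/rocmSupport then
    # follow the host's own nixpkgs config and `packages` is never consulted.
    nixpkgs.overlays = [ inputs.self.overlays.default ];
  };
}
```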
| They're just completely separate. I guess there are mappings between subsets of the frameworks, as evidenced by ZLUDA, hipify, and https://docs.scale-lang.com. I suppose one could say that ZLUDA is a sort of a runtime proxy, although the multi-versioning bit is still missing. | 14:55:59 | |
| Interestingly in the case of comfyui, I didn't need to add any rocm specific stuff | 15:33:35 | |
| * Interestingly in the case of comfyui, I didn't need to add any rocm or cuda specific stuff | 15:33:37 | |