NixOS CUDA (282 members)
CUDA packages maintenance and support in nixpkgs
https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda
| Message | Time |
|---|---|
| 14 Dec 2024 | |
| Are the ecosystems really varying enough to need these two cases? Or can we commonise them? | 21:52:44 |
| 15 Dec 2024 | |
| I don’t believe there’s a way we can commonise them, at least currently, but my focus has been more on tools (ONNX, PyTorch, etc.) than things that use them (actual models, inference capabilities, etc.), so that’s definitely shaped my opinion. As I understand it, from an implementation perspective, you’d need some sort of mapping from functionality (BLAS, FFT, DNN) to the actual library (or libraries?) which implement that functionality. But the different ecosystems offer different functionalities and might not have corresponding libraries. | 05:54:52 |
| In reply to @matthewcroughan:defenestrate.it That depends entirely on the project and the nix expression for it. From what I understand of what you desire, the closest analogue I can think of would be Apple’s “universal binary”, which supports both x86-64 and aarch64… but I suspect you’d also want the equivalent of function multiversioning, where each function is compiled into multiple variants using different architecture features (think SSE, NEON, AVX256, AVX512, etc.) so that at runtime the function matching the host’s most advanced capability is used. This would correspond to CUDA compute capabilities. NVCC can produce binaries with device code for multiple capabilities, but it does increase the compile time and binary size significantly — enough so that linking can fail due to size limitations! That’s part of the reason Magma in Nixpkgs produces a static library when building with CUDA. | 06:04:37 |
| Is it fair to say that rocm is not supported very well right now in nixpkgs? | 21:01:43 |
| yes, rather | 21:02:38 |
| We seem unable to compile torch, so okay, I override python to use torch-bin, but then I still have to allowBroken | 21:02:48 |
| *(image or attachment not captured)* | 21:02:52 |
| and then if I do, this happens | 21:02:56 |
| SomeoneSerge (utc+3): connor (he/him) (UTC-7): I spent some time revamping my personal flake today/yesterday and now have a better understanding of the new hostPlatform/buildPlatform stuff, alongside lib.platforms. I do think it's the right long-term interface for cudaSupport, gencodes, and also configuring fabrics. My contributions to nixpkgs so far have just been small changes to specific packages. I want to write up a small doc proposing this and discussing the migration path, but I also wanted to collaborate w/ you guys. I don't even know where to post this: do I just open an issue on gh, or does it need to go on the discourse? | 21:18:06 |
| 16 Dec 2024 | |
| I'd recommend testing the waters first and getting a sense of prior art around extending those; perhaps the folks in the Nixpkgs Stdenv room would be good to reach out to: https://matrix.to/#/#stdenv:nixos.org After that (once you've got a list of people interested in or knowledgeable about such work), I think drafting an RFC would be the next step, to fully lay out the plan for design and implementation. If it's a relatively small change, maybe an RFC is unnecessary and a PR would be fine! | 06:55:30 |
| That's the reason I've lately been ignoring flakes in personal projects: just stick to impure eval and autocalling. For me it's usually going in the direction of *(code snippet not captured)* | 14:45:04 |
| I actually ended up with a good pattern using flake.parts | 14:45:28 |
| *(code snippet not captured)* | 14:46:22 |
| Could be deduplicated using a function, but this is what it looks like unfolded | 14:46:38 |
| Then other flake-modules have these rocmPkgs and nvidiaPkgs arguments passed to them | 14:47:01 |
| *(code snippet not captured)* | 14:47:04 |
| I think this should be doable without major backwards-incompatible changes? | 14:47:07 |
| the issue is that rocmSupport is fully broken in Nixpkgs, so this doesn't work, but it should in future | 14:47:29 |
| That's more or less what the llama-cpp flake did, but didn't you say *(quoted message not captured)* | 14:48:09 |
| I'm happy as long as I don't have to do weird things to achieve it | 14:48:58 |
| and for me, this is not weird | 14:49:02 |
| previously what my flake was doing was far weirder | 14:49:07 |
| https://github.com/nixified-ai/flake/blob/master/projects/invokeai/default.nix#L66-L96 | 14:49:32 |
| previously it was defining functions that were able to create variants of packages without setting rocmSupport or cudaSupport | 14:49:51 |
| Just terrible | 14:50:00 |
| Besides, the modules the flake will export won't interact with the comfyui-nvidia or comfyui-amd attrs; this is just for people who want to try it with nix run | 14:50:42 |
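
Earlier in the log, the workaround for the broken source build of torch was to override Python to use torch-bin and then allow broken packages. The exact expression used was not shared, so here is a minimal, hypothetical sketch of what that override can look like, using the `packageOverrides` mechanism from the Nixpkgs Python docs:

```nix
# Hypothetical sketch, not the exact override from the conversation:
# swap the source-built torch for the prebuilt torch-bin wheel
# inside a Python package set.
{ pkgs }:
pkgs.python3.override {
  packageOverrides = pyFinal: pyPrev: {
    # Every Python package in this set that depends on `torch`
    # now gets the binary distribution instead of the broken
    # source build.
    torch = pyPrev.torch-bin;
  };
}
```

Since some of the packages involved are marked broken, evaluation still needs `NIXPKGS_ALLOW_BROKEN=1` (or `config.allowBroken = true;`), which matches the allowBroken step mentioned above.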
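
The flake.parts pattern discussed above (two nixpkgs instantiations, handed to other flake-modules as `rocmPkgs` and `nvidiaPkgs` arguments) can be sketched roughly as follows. The actual snippets were not captured in this log, so everything besides the `rocmPkgs`/`nvidiaPkgs` names and the `cudaSupport`/`rocmSupport`/`cudaCapabilities` config options (which are documented in the nixpkgs CUDA section linked in the room topic) is an assumption:

```nix
# flake-module.nix — hypothetical reconstruction, not the code shared above.
{ inputs, ... }:
{
  perSystem = { system, ... }: {
    # flake.parts lets you inject extra arguments into every
    # perSystem module via _module.args.
    _module.args = {
      # One nixpkgs instantiation per GPU ecosystem; every package
      # in each set is built with the matching global flag.
      nvidiaPkgs = import inputs.nixpkgs {
        inherit system;
        config = {
          allowUnfree = true;
          cudaSupport = true;
          # Optional: pin gencodes to keep compile time and binary
          # size down (see the NVCC discussion earlier in the log).
          cudaCapabilities = [ "8.6" "8.9" ];
        };
      };
      rocmPkgs = import inputs.nixpkgs {
        inherit system;
        config = {
          allowUnfree = true;
          rocmSupport = true;
        };
      };
    };
  };
}
```

Other perSystem flake-modules can then accept `nvidiaPkgs` or `rocmPkgs` as arguments and define, say, a `comfyui-nvidia` package from `nvidiaPkgs`, without each package expression needing to know about the variant mechanism.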