| NixOS CUDA | 290 Members | 57 Servers |
| CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda |
| Sender | Message | Time |
|---|---|---|
| 25 Dec 2024 | ||
| I imagine haskell packages are auto-generated? If so, there must be some customization examples for other native/ffi libraries that might use dlopen at runtime or pkg-config at build time, e.g. wrappers for opengl or vulkan | 00:07:08 | |
| If accelerate is working or supported, check that out. Not sure it’s still supported given it relies on LLVM 12 (or earlier) for LLVM-HS. Outside of that, not sure what people use for GPU stuff with Haskell | 00:14:03 | |
| * Hello, are there any packages in the haskellPackages set that set up cudaSupport that I can use as an example to add CUDA support for hasktorch? The author submitted a PR to bump the version here and has marked it as unbroken, so I figured it would be good to get first-class CUDA support implemented here as well (see the plumbing sketch after the log). https://github.com/NixOS/nixpkgs/pull/367998/ | 03:28:32 | |
| He ended up doing this in the | 11:39:48 | |
| I'll follow up on github | 17:19:13 | |
| 26 Dec 2024 | ||
| Does anyone have any configuration files for deep learning on nixos? I want to use cuda to train pytorch models on nixos, but I can't install cuda and cudnn correctly. I tried some but failed. Can anyone share the configuration files with me? I use a 4090 graphics card. | 10:43:05 | |
| Have you set cudaSupport = true in your nixpkgs config? | 10:43:54 | |
| This enables CUDA support for all packages that support it in nixpkgs (see the config sketch after the log) | 10:44:08 | |
| 15:50:27 | ||
| Please have a look at https://github.com/NixOS/nixpkgs/pull/368366 . I have no idea what I am doing. | 15:50:46 | |
| Oh my god, builtins.sort requires strict total orderings? | 16:28:17 | |
| No, it requires a strict weak ordering, but >= does not provide it. a >= b can't act as lessThan. b < a can, or !(b >= a) can as well (see the small example after the log). | 16:31:28 | |
| Read your blog post. You got a talent for discovering this stuff before anyone else | 18:57:22 | |
| If it makes you feel a bit better | 23:14:25 | |
| 27 Dec 2024 | ||
| Trofi would you ping Valentin on the issue? Feels like it’d be good to have this requirement stated in the docs | 01:08:02 | |
| 28 Dec 2024 | ||
| ugh, thinking about software is making me sad. Samuel Ainsworth did you ever find some sort of serenity with CUDA and Nixpkgs? | 00:43:26 | |
| I'm having thoughts about https://github.com/connorbaker/cuda-packages. In particular, does it make sense to include CUDA stuff in Nixpkgs proper when we can't take advantage of anything but eval checks? Would nix-community be a better home? Just having a growing sense of dread about updating and trying to maintain fast-moving libraries in an environment where stuff can (or does) break constantly and there's no notification of such breakage (except maybe by the community Hydra instance?). There's also the understanding that in Nixpkgs, everything has to work together simultaneously. As an example, I'd hate to try to upgrade OpenCV (or PyTorch) so it works with newer versions of CUDA, only to find out it causes some gnarly Darwin/ROCm/non-CUDA issue. I'm thinking out-of-tree designs would afford us the ability to break stuff, though that comes with a number of drawbacks (duplicating Nix expressions for packages and having slight variations, merging in upstream changes, etc.). Maybe this is just fatigue talking; I think a number of these complaints were raised in a Discourse post Samuel made a few years ago. | 00:54:19 | |
| I mean, I certainly want to upstream the library functions and additional setup hooks/logging functionality I wrote because they're (in my opinion) widely useful. Just... the CUDA stuff. | 00:55:26 | |
| Is anyone at chaos congress? | 16:16:44 | |
| Good idea! Done as https://github.com/NixOS/nix/issues/12106#issuecomment-2564375843 | 16:36:40 | |
| 18:41:55 | ||
| 29 Dec 2024 | ||
| 16:13:20 | ||
| Just tried to build PyTorch and I completely forgot it vendors its dependencies, was stunned to see it building ONNX | 21:49:20 | |
| I wish... matthewcroughan (DECT: 56490) maybe? | 21:50:20 | |
| Yeah | 21:50:35 | |
| I remember I had tried to work on using system-provided dependencies (I guess more than a year ago now) and gave up because it would have required a bunch of CMake rewriting. And every time upstream changed something, BOOM! Another merge conflict or more rewriting required. But I suppose it’s that way with lots of projects (see the overlay sketch after the log). | 21:55:11 | |
| Serge, how do you stay upbeat about packaging stuff? | 21:55:58 | |
| Yes, which is why this really is about working with the upstream and getting the changes through on their side, not on the nixpkgs side | 21:56:38 | |
| I clearly don't... | 21:57:31 | |
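Plumbing sketch referenced in the 25 Dec hasktorch question. Only `config.cudaSupport` and the `cudaPackages` set are standard nixpkgs pieces; the `hasktorch` attribute path, the chosen CUDA inputs, and the `-fcuda` Cabal flag are assumptions for illustration, not the actual hasktorch interface.

```nix
# Sketch of the conventional cudaSupport plumbing in a nixpkgs expression.
# Standard: config.cudaSupport and cudaPackages.
# Hypothetical: the hasktorch attribute, the exact CUDA inputs it needs,
# and the "-fcuda" Cabal flag.
{ lib
, config
, cudaSupport ? config.cudaSupport
, cudaPackages ? { }
, haskellPackages
}:

haskellPackages.hasktorch.overrideAttrs (old: {
  buildInputs = (old.buildInputs or [ ])
    ++ lib.optionals cudaSupport [
      cudaPackages.cuda_cudart
      cudaPackages.libcublas
      cudaPackages.cudnn
    ];
  # Hypothetical switch; the real one depends on hasktorch's .cabal flags.
  configureFlags = (old.configureFlags or [ ])
    ++ lib.optional cudaSupport "-fcuda";
})
```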
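Config sketch for the 26 Dec cudaSupport question, assuming a NixOS module; the PyTorch line is only an illustration of a package that honors the flag. Outside NixOS, the same two attributes can go into `~/.config/nixpkgs/config.nix` or the `config` argument of an `import <nixpkgs>` call.

```nix
# configuration.nix sketch: build CUDA-enabled variants of packages that
# support it (PyTorch, OpenCV, ...). Expect long local builds unless a
# binary cache provides these variants.
{ config, pkgs, ... }:

{
  nixpkgs.config = {
    cudaSupport = true;   # flip every cudaSupport-aware package to CUDA
    allowUnfree = true;   # the CUDA toolkit and cuDNN are unfree
  };

  # Illustrative only: a Python environment with a CUDA-enabled PyTorch.
  environment.systemPackages = [
    (pkgs.python3.withPackages (ps: [ ps.torch ]))
  ];
}
```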
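Small example for the 26 Dec builtins.sort exchange: the comparator must be a strict ordering such as `<` or `>`; a non-strict `>=` violates the contract.

```nix
# builtins.sort expects a strict weak ordering ("comes before"), not >=.
let
  xs = [ 3 1 2 1 ];
in {
  ascending  = builtins.sort (a: b: a < b) xs;   # [ 1 1 2 3 ]
  descending = builtins.sort (a: b: a > b) xs;   # [ 3 2 1 1 ]
  # builtins.sort (a: b: a >= b) xs passes a comparator that is not
  # irreflexive (a >= a is true), so the resulting order is not guaranteed.
}
```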
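Overlay sketch for the 29 Dec vendoring discussion, showing the kind of downstream nudge available while the real fix lives upstream. `USE_SYSTEM_NCCL` is a build switch PyTorch has recognized in the past; whether a given PyTorch version still honors it, or any other `USE_SYSTEM_*` variable, is an assumption to verify against that version's build files.

```nix
# Overlay sketch: ask the PyTorch build to use a system NCCL instead of a
# vendored copy. Assumes the version being built still reads USE_SYSTEM_NCCL.
final: prev: {
  python3 = prev.python3.override {
    packageOverrides = pyFinal: pyPrev: {
      torch = pyPrev.torch.overrideAttrs (old: {
        USE_SYSTEM_NCCL = "1";
        buildInputs = (old.buildInputs or [ ])
          ++ [ final.cudaPackages.nccl ];
      });
    };
  };
}
```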