
NixOS CUDA

290 Members
CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda



25 Dec 2024
@ss:someonex.net SomeoneSerge (back on matrix): I imagine Haskell packages are auto-generated? If so, there must be some customization examples for other native/FFI libraries that might use dlopen at runtime or pkg-config at build time, e.g. wrappers for OpenGL or Vulkan. 00:07:08
@connorbaker:matrix.org connor (he/him): If accelerate is working or supported, check that out. Not sure it's still supported given it relies on LLVM 12 (or earlier) for LLVM-HS.
Outside of that, not sure what people use for GPU stuff with Haskell.
00:14:03
@collinarnett:matrix.org Collin Arnett: Hello, are there any packages in the haskellPackages set that set up cudaSupport that I can use as an example to add CUDA support for hasktorch? The author submitted a PR to bump the version here and has marked it as unbroken, so I figured it would be good to get first-class CUDA support implemented here as well. https://github.com/NixOS/nixpkgs/pull/367998 03:28:32
@collinarnett:matrix.org Collin Arnett:

He ended up doing this in the hackage-packages.nix file:


  "libtorch-ffi" = callPackage
    ({ mkDerivation, async, base, bytestring, c10, containers, hspec
     , inline-c, inline-c-cpp, lib, libtorch-ffi-helper
     , optparse-applicative, safe-exceptions, sysinfo, template-haskell
     , text, torch, torch_cpu, torch_cuda ? null } :
     mkDerivation {
       pname = "libtorch-ffi";
       version = "2.0.1.1";
       sha256 = "0m6gg0z6dc67rxijqycyza197365xf1p71s74a8p4pkc2m2yl6p3";
       libraryHaskellDepends = [
         async base bytestring containers inline-c inline-c-cpp
         libtorch-ffi-helper optparse-applicative safe-exceptions sysinfo
         template-haskell text
       ];
       librarySystemDepends = [ c10 torch torch_cpu ];
       testHaskellDepends = [ base hspec safe-exceptions ];
       homepage = "https://github.com/hasktorch/hasktorch#readme";
       description = "Haskell bindings for PyTorch";
       license = lib.licenses.bsd3;
       configureFlags = [
         "--extra-include-dirs=${lib.getDev pkgs.libtorch-bin}/include/torch/csrc/api/include"
        ] ++ lib.optionals pkgs.config.cudaSupport [ "-fcuda" ];
     }) ({
       c10 = pkgs.libtorch-bin;
       torch_cpu = pkgs.libtorch-bin;
       torch = pkgs.libtorch-bin;
     } // lib.optionalAttrs (pkgs.config.cudaSupport) {
       torch_cuda = pkgs.libtorch-bin;
     });
11:39:48
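A lighter-weight alternative to editing the generated hackage-packages.nix would be an overlay using haskell.lib.overrideCabal. This is only a sketch, not a tested expression: it assumes, as the snippet above does, that libtorch-ffi exposes a cabal flag named cuda and that pkgs.libtorch-bin provides the torch/c10 libraries.

```nix
# Hypothetical overlay sketch; the flag and attribute names are assumptions
# carried over from the hackage-packages.nix snippet above.
final: prev: {
  haskellPackages = prev.haskellPackages.override {
    overrides = hfinal: hprev: {
      libtorch-ffi = prev.haskell.lib.overrideCabal hprev.libtorch-ffi (old: {
        # Point the FFI bindings at the prebuilt libtorch libraries.
        librarySystemDepends = [ prev.libtorch-bin ];
        configureFlags = (old.configureFlags or [ ]) ++ [
          "--extra-include-dirs=${prev.lib.getDev prev.libtorch-bin}/include/torch/csrc/api/include"
        ] ++ prev.lib.optionals prev.config.cudaSupport [ "-fcuda" ];
      });
    };
  };
}
```

The advantage over patching hackage-packages.nix is that the override survives regeneration of the auto-generated file.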
@ss:someonex.net SomeoneSerge (back on matrix): I'll follow up on GitHub 17:19:13
26 Dec 2024
@jiashuaixu:matrix.org Jesse: Does anyone have any configuration files for deep learning on NixOS? I want to use CUDA to train PyTorch models on NixOS, but I can't install CUDA and cuDNN correctly. I tried some configurations but failed. Can anyone share their configuration files with me? I use a 4090 graphics card. 10:43:05
@glepage:matrix.org Gaétan Lepage: Have you set cudaSupport = true in your nixpkgs config? 10:43:54
@glepage:matrix.org Gaétan Lepage: This enables CUDA support for all packages in nixpkgs that support it. 10:44:08
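For reference, a minimal sketch of such a configuration. Only cudaSupport and allowUnfree come from the advice above; the driver options are standard NixOS options and the whole fragment is an untested illustration, not a known-good setup.

```nix
# Hypothetical configuration.nix fragment; a sketch, not a tested setup.
{ config, pkgs, ... }:
{
  nixpkgs.config = {
    # The NVIDIA driver and the CUDA toolkit are unfree.
    allowUnfree = true;
    # Build CUDA-enabled variants of every package that supports it.
    cudaSupport = true;
  };

  # Proprietary NVIDIA driver, e.g. for an RTX 4090.
  services.xserver.videoDrivers = [ "nvidia" ];
  hardware.nvidia.open = false;
}
```

Note that flipping cudaSupport on rebuilds large parts of the closure from source unless a binary cache with CUDA builds (such as the community one) is configured.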
@trofi:matrix.org@trofi:matrix.org joined the room.15:50:27
@trofi:matrix.org @trofi:matrix.org: Please have a look at https://github.com/NixOS/nixpkgs/pull/368366. I have no idea what I am doing. 15:50:46
@connorbaker:matrix.org connor (he/him): Oh my god, builtins.sort requires strict total orderings? 16:28:17
@trofi:matrix.org @trofi:matrix.org: No, it requires a strict weak ordering, but >= does not provide one. a >= b can't act as lessThan; b < a can, or !(b >= a) can as well. 16:31:28
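To illustrate the point with a hypothetical snippet (not from the thread):

```nix
# builtins.sort takes a "lessThan" comparator, which must be a strict
# weak ordering: in particular, lessThan a a must be false.
{
  ascending  = builtins.sort (a: b: a < b) [ 3 1 2 ]; # [ 1 2 3 ]
  descending = builtins.sort (a: b: b < a) [ 3 1 2 ]; # [ 3 2 1 ]
  # (a: b: a >= b) is NOT strict: it returns true for equal elements,
  # so passing it as the comparator yields unspecified results.
}
```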
@ss:someonex.net SomeoneSerge (back on matrix): Read your blog post. You've got a talent for discovering this stuff before anyone else. 18:57:22
@trofi:matrix.org @trofi:matrix.org:

If it makes you feel a bit better, CUDA is not alone in getting sort a bit wrong:

  • https://github.com/NixOS/nixpkgs/pull/368429
  • https://github.com/NixOS/nixpkgs/pull/368433
23:14:25
27 Dec 2024
@connorbaker:matrix.org connor (he/him): Trofi, would you ping Valentin on the issue? Feels like it'd be good to have this requirement stated in the docs. 01:08:02
28 Dec 2024
@connorbaker:matrix.org connor (he/him): ugh, thinking about software is making me sad.
Samuel Ainsworth, did you ever find some sort of serenity with CUDA and Nixpkgs?
00:43:26
@connorbaker:matrix.org connor (he/him): I'm having thoughts about https://github.com/connorbaker/cuda-packages. In particular, does it make sense to include CUDA stuff in Nixpkgs proper when we can't take advantage of anything but eval checks? Would nix-community be a better home? I just have a growing sense of dread about updating and trying to maintain fast-moving libraries in an environment where stuff can (or does) break constantly and there's no notification of such breakage (except maybe by the community Hydra instance?). There's also the expectation that in Nixpkgs, everything works together simultaneously. As an example, I'd hate to try to upgrade OpenCV (or PyTorch) so it works with newer versions of CUDA, only to find out it causes some gnarly Darwin/ROCm/non-CUDA issue. I'm thinking out-of-tree designs would afford us the ability to break stuff, though that comes with a number of drawbacks (duplicating Nix expressions for packages and having slight variations, merging in upstream changes, etc.). Maybe this is just fatigue talking; I think a number of these complaints were raised in a Discourse post Samuel made a few years ago. 00:54:19
@connorbaker:matrix.org connor (he/him): I mean, I certainly want to upstream the library functions and additional setup hooks/logging functionality I wrote, because they're (in my opinion) widely useful. Just... the CUDA stuff. 00:55:26
@lromor:matrix.org lromor: Is anyone at Chaos Congress? 16:16:44
@trofi:matrix.org @trofi:matrix.org: Good idea! Done as https://github.com/NixOS/nix/issues/12106#issuecomment-2564375843 16:36:40
@matthewcroughan:defenestrate.itmatthewcroughan changed their display name from matthewcroughan to matthewcroughan (DECT: 56490).18:41:55
29 Dec 2024
@lromor:matrix.orglromor set a profile picture.16:13:20
@connorbaker:matrix.org connor (he/him): Just tried to build PyTorch, and I completely forgot it vendors its dependencies; I was stunned to see it building ONNX. 21:49:20
@ss:someonex.net SomeoneSerge (back on matrix): I wish... matthewcroughan (DECT: 56490) maybe? 21:50:20
@ss:someonex.net SomeoneSerge (back on matrix): Yeah 21:50:35
@connorbaker:matrix.org connor (he/him): I remember I had tried to work on using system-provided dependencies (I guess more than a year ago now) and gave up because it would have required a bunch of CMake rewriting.
And every time upstream changed something, BOOM! Another merge conflict or more rewriting required.
But I suppose it’s that way with lots of projects.
21:55:11
@connorbaker:matrix.org connor (he/him): Serge, how do you stay upbeat about packaging stuff? 21:55:58
@ss:someonex.net SomeoneSerge (back on matrix): Yes, which is why this really is about working with the upstream and getting the changes through on their side, not on the nixpkgs side. 21:56:38
@ss:someonex.net SomeoneSerge (back on matrix): I clearly don't... 21:57:31
