
NixOS CUDA

290 Members
CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda



24 Dec 2024
@ss:someonex.netSomeoneSerge (back on matrix) I just keep subscribing to issues but idk when I'll have the energy to reply like... to any of the github pings 13:56:05
@glepage:matrix.orgGaétan Lepage
In reply to @hexa:lossy.network
is anyone looking into the triton-llvm test issues on staging-next?
Can we at least mark it as broken, because every attempt at building it is currently useless and wasting CPU time?
14:01:16
@hexa:lossy.networkhexais this the royal we?14:02:08
@glepage:matrix.orgGaétan LepageI meant, would you be OK with that?14:02:33
@glepage:matrix.orgGaétan LepageI can make the PR, but I prefer to ask before14:02:45
@hexa:lossy.networkhexaI don't maintain that package 😄 14:04:34
@hexa:lossy.networkhexaideally someone can disable that test instead?14:04:50
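
A minimal overlay sketch of the two options being weighed here (marking triton-llvm broken versus skipping its tests); whether doCheck actually covers the failing test is an assumption:

  # Hypothetical overlay; only one of the two options would be applied.
  final: prev: {
    triton-llvm = prev.triton-llvm.overrideAttrs (old: {
      # Option 1: stop wasting builder time by declaring the package broken.
      # meta = old.meta // { broken = true; };

      # Option 2: keep the package but skip its (currently failing) test phase.
      doCheck = false;
    });
  }
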
@matthewcroughan:defenestrate.itmatthewcroughan Amen, you are not being supported well enough financially to solve tough issues; most people in this community are not. 14:38:25
@matthewcroughan:defenestrate.itmatthewcroughanThe polish is a matter of resources14:38:58
@matthewcroughan:defenestrate.itmatthewcroughanI did some postFixups, etc., and worked around it14:39:15
@matthewcroughan:defenestrate.itmatthewcroughanBut yes, I also landed on the same PRs as you, and I'm not sure why it is still an issue despite those PRs having been merged14:39:32
@collinarnett:matrix.orgCollin Arnett joined the room.21:10:15
@collinarnett:matrix.orgCollin ArnettHello, are there any packages in the haskellPackages set that set up cudaSupport that I can use as an example to add CUDA support for hasktorch? The author submitted a PR to bump the version here and has marked it as broken, so I figured it would be good to get first-class CUDA support implemented here as well. https://github.com/NixOS/nixpkgs/pull/367998/21:16:42
25 Dec 2024
@ss:someonex.netSomeoneSerge (back on matrix)Oooooh that's a really great question, not least because, afaik, the haskell package set is its own thing with slightly different override patterns than elsewhere?00:04:12
@ss:someonex.netSomeoneSerge (back on matrix)I imagine haskell packages are auto-generated? If so, there must be some customization examples for other native/ffi libraries that might use dlopen at runtime or pkg-config at build time, e.g. wrappers for opengl or vulkan00:07:08
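
The usual pattern for that kind of customization is an override on the haskellPackages set; a minimal sketch, assuming a hypothetical package somePackage with a cuda cabal flag (the flag name and the native dependency are placeholders):

  { pkgs }:
  pkgs.haskellPackages.override {
    overrides = hfinal: hprev: {
      somePackage = pkgs.haskell.lib.compose.overrideCabal (drv: {
        # Enable a cabal flag and add a native library, analogous to
        # what a CUDA-aware build of a binding would need.
        configureFlags = (drv.configureFlags or [ ]) ++ [ "-fcuda" ];
        librarySystemDepends =
          (drv.librarySystemDepends or [ ]) ++ [ pkgs.cudaPackages.cuda_cudart ];
      }) hprev.somePackage;
    };
  }
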
@connorbaker:matrix.orgconnor (he/him) If accelerate is working or supported, check that out. Not sure it’s still supported given it relies on LLVM 12 (or earlier) for LLVM-HS.
Outside of that, not sure what people use for GPU stuff with Haskell
00:14:03
@collinarnett:matrix.orgCollin Arnett* Hello, are there any packages in the haskellPackages set that set up cudaSupport that I can use as an example to add CUDA support for hasktorch? The author submitted a PR to bump the version here and has marked it as unbroken, so I figured it would be good to get first-class CUDA support implemented here as well. https://github.com/NixOS/nixpkgs/pull/367998/03:28:32
@collinarnett:matrix.orgCollin Arnett

He ended up doing this in the hackage-packages.nix file:


  "libtorch-ffi" = callPackage
    ({ mkDerivation, async, base, bytestring, c10, containers, hspec
     , inline-c, inline-c-cpp, lib, libtorch-ffi-helper
     , optparse-applicative, safe-exceptions, sysinfo, template-haskell
     , text, torch, torch_cpu, torch_cuda ? null } :
     mkDerivation {
       pname = "libtorch-ffi";
       version = "2.0.1.1";
       sha256 = "0m6gg0z6dc67rxijqycyza197365xf1p71s74a8p4pkc2m2yl6p3";
       libraryHaskellDepends = [
         async base bytestring containers inline-c inline-c-cpp
         libtorch-ffi-helper optparse-applicative safe-exceptions sysinfo
         template-haskell text
       ];
       librarySystemDepends = [ c10 torch torch_cpu ];
       testHaskellDepends = [ base hspec safe-exceptions ];
       homepage = "https://github.com/hasktorch/hasktorch#readme";
       description = "Haskell bindings for PyTorch";
       license = lib.licenses.bsd3;
       configureFlags = [
         "--extra-include-dirs=${lib.getDev pkgs.libtorch-bin}/include/torch/csrc/api/include"
       ]  ++ lib.optionals pkgs.config.cudaSupport [ "-f cuda" ];
     }) ({
       c10 = pkgs.libtorch-bin;
       torch_cpu = pkgs.libtorch-bin;
       torch = pkgs.libtorch-bin;
     } // lib.optionalAttrs (pkgs.config.cudaSupport) {
       torch_cuda = pkgs.libtorch-bin;
     });
11:39:48
@ss:someonex.netSomeoneSerge (back on matrix) I'll follow up on github 17:19:13
26 Dec 2024
@jiashuaixu:matrix.orgJesseDoes anyone have any configuration files for deep learning on NixOS? I want to use CUDA to train PyTorch models on NixOS, but I can't install CUDA and cuDNN correctly. I tried some but failed. Can anyone share the configuration files with me? I use a 4090 graphics card.10:43:05
@glepage:matrix.orgGaétan Lepage Have you set cudaSupport = true in your nixpkgs config? 10:43:54
@glepage:matrix.orgGaétan LepageThis enables CUDA support for all packages in nixpkgs that support it10:44:08
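
A minimal configuration.nix sketch of what that looks like in practice; the NVIDIA driver options are assumptions and their exact names vary between NixOS releases:

  {
    nixpkgs.config = {
      allowUnfree = true;  # CUDA, cuDNN and the NVIDIA driver are unfree
      cudaSupport = true;  # rebuild packages such as python3Packages.torch with CUDA
    };
    # Load the proprietary driver so the RTX 4090 is usable at runtime.
    services.xserver.videoDrivers = [ "nvidia" ];
  }
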
@trofi:matrix.org@trofi:matrix.org joined the room.15:50:27
@trofi:matrix.org@trofi:matrix.orgPlease have a look at https://github.com/NixOS/nixpkgs/pull/368366. I have no idea what I am doing.15:50:46
@connorbaker:matrix.orgconnor (he/him) Oh my god builtins.sort requires strict total orderings? 16:28:17
@trofi:matrix.org@trofi:matrix.org No, it requires a strict weak ordering, but >= does not provide it. a >= b can't act as lessThan; b < a can, or !(b >= a) can as well. 16:31:28
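
A small illustration of that point; builtins.sort expects a strict "comes before" predicate, which >= (being reflexive) cannot be:

  let xs = [ 3 1 2 ]; in {
    ascending  = builtins.sort (a: b: a < b) xs;  # [ 1 2 3 ]
    descending = builtins.sort (a: b: a > b) xs;  # [ 3 2 1 ]
    # (a: b: a >= b) is not a strict weak ordering, so passing it to
    # builtins.sort yields an unspecified result.
  }
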
@ss:someonex.netSomeoneSerge (back on matrix) Read your blog post. You've got a talent for discovering this stuff before anyone else 18:57:22
@trofi:matrix.org@trofi:matrix.org

If it makes you feel a bit better, CUDA is not alone in getting sort a bit wrong:

  • https://github.com/NixOS/nixpkgs/pull/368429
  • https://github.com/NixOS/nixpkgs/pull/368433
23:14:25


