| 24 Dec 2024 |
SomeoneSerge (back on matrix) | I just keep subscribing to issues but idk when I'll have the energy to reply like... to any of the github pings | 13:56:05 |
Gaétan Lepage | In reply to @hexa:lossy.network is anyone looking into the triton-llvm test issues on staging-next? Can we at least mark it as broken? Every attempt at building it is currently useless and wastes CPU time. | 14:01:16 |
hexa | is this the royal we? | 14:02:08 |
Gaétan Lepage | I meant, would you be OK with that? | 14:02:33 |
Gaétan Lepage | I can make the PR, but I'd prefer to ask first | 14:02:45 |
hexa | I don't maintain that package 😄 | 14:04:34 |
hexa | ideally someone can disable that test instead? | 14:04:50 |
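Disabling a single failing test instead of marking the whole package broken could look roughly like the overlay sketch below. Everything here is an assumption for illustration: the filter flag presumes a googletest-style runner and a check phase that honours checkFlags, and "FailingSuite" is a placeholder, not a real triton-llvm test name.

```nix
# hypothetical overlay sketch — attribute names and the test filter are assumptions
final: prev: {
  triton-llvm = prev.triton-llvm.overrideAttrs (old: {
    # skip only the known-failing tests; the actual mechanism depends on
    # how triton-llvm drives its test suite
    checkFlags = (old.checkFlags or [ ]) ++ [ "--gtest_filter=-FailingSuite.*" ];
  });
}
```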
matthewcroughan | Amen, you are not being well enough supported financially to solve tough issues, most people in this community are not. | 14:38:25 |
matthewcroughan | The polish is a matter of resources | 14:38:58 |
matthewcroughan | I did some postfixups, etc and worked around it | 14:39:15 |
matthewcroughan | But yes, I also landed on the same PRs as you, and I'm not sure why it is still an issue despite those PRs having been merged | 14:39:32 |
| Collin Arnett joined the room. | 21:10:15 |
Collin Arnett | Hello, are there any packages in the haskellPackages set that set up cudaSupport that I could use as an example for adding CUDA support to hasktorch? The author submitted a PR to bump the version here and has marked it as unbroken, so I figured it would be good to get first-class CUDA support implemented here as well.
https://github.com/NixOS/nixpkgs/pull/367998/ | 21:16:42 |
| 25 Dec 2024 |
SomeoneSerge (back on matrix) | Oooooh that's a really great question, not least because, afaik, the haskell package set is its own thing with slightly different override patterns than elsewhere? | 00:04:12 |
SomeoneSerge (back on matrix) | I imagine haskell packages are auto-generated? If so, there must be some customization examples for other native/ffi libraries that might use dlopen at runtime or pkg-config at build time, e.g. wrappers for opengl or vulkan | 00:07:08 |
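For reference, the haskell set's override pattern mentioned above can be sketched as an overlay; the cabal flag name "cuda" is an assumption here, not confirmed from hasktorch's cabal file.

```nix
# sketch of the haskellPackages override pattern, assuming a "cuda" cabal flag
final: prev: {
  haskellPackages = prev.haskellPackages.override {
    overrides = hfinal: hprev: {
      # haskell.lib.enableCabalFlag rebuilds the package with the flag set
      hasktorch = prev.haskell.lib.enableCabalFlag hprev.hasktorch "cuda";
    };
  };
}
```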
connor (he/him) | If accelerate is working or supported, check that out. Not sure it’s still supported given it relies on LLVM 12 (or earlier) for LLVM-HS.
Outside of that, not sure what people use for GPU stuff with Haskell | 00:14:03 |
Collin Arnett | He ended up doing this in the hackage-packages.nix file:
"libtorch-ffi" = callPackage
({ mkDerivation, async, base, bytestring, c10, containers, hspec
, inline-c, inline-c-cpp, lib, libtorch-ffi-helper
, optparse-applicative, safe-exceptions, sysinfo, template-haskell
, text, torch, torch_cpu, torch_cuda ? null } :
mkDerivation {
pname = "libtorch-ffi";
version = "2.0.1.1";
sha256 = "0m6gg0z6dc67rxijqycyza197365xf1p71s74a8p4pkc2m2yl6p3";
libraryHaskellDepends = [
async base bytestring containers inline-c inline-c-cpp
libtorch-ffi-helper optparse-applicative safe-exceptions sysinfo
template-haskell text
];
librarySystemDepends = [ c10 torch torch_cpu ];
testHaskellDepends = [ base hspec safe-exceptions ];
homepage = "https://github.com/hasktorch/hasktorch#readme";
description = "Haskell bindings for PyTorch";
license = lib.licenses.bsd3;
configureFlags = [
"--extra-include-dirs=${lib.getDev pkgs.libtorch-bin}/include/torch/csrc/api/include"
] ++ lib.optionals pkgs.config.cudaSupport [ "-f cuda" ];
}) ({
c10 = pkgs.libtorch-bin;
torch_cpu = pkgs.libtorch-bin;
torch = pkgs.libtorch-bin;
} // lib.optionalAttrs (pkgs.config.cudaSupport) {
torch_cuda = pkgs.libtorch-bin;
});
| 11:39:48 |
SomeoneSerge (back on matrix) | I'll follow up on github | 17:19:13 |
| 26 Dec 2024 |
Jesse | Does anyone have any configuration files for deep learning on NixOS? I want to use CUDA to train PyTorch models on NixOS, but I can't get CUDA and cuDNN installed correctly; the configurations I tried failed. Can anyone share their configuration files with me? I use a 4090 graphics card. | 10:43:05 |
Gaétan Lepage | Have you set cudaSupport = true in your nixpkgs config? | 10:43:54 |
Gaétan Lepage | This enables cuda support for all packages that support it in nixpkgs | 10:44:08 |
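In a NixOS configuration that looks roughly like the fragment below; allowUnfree is included because the CUDA and cuDNN packages are unfree. This is a sketch, and the NVIDIA driver for the 4090 still has to be enabled separately.

```nix
# configuration.nix fragment (sketch)
{
  nixpkgs.config = {
    allowUnfree = true;  # CUDA and cuDNN are unfree packages
    cudaSupport = true;  # rebuild packages such as python3Packages.torch with CUDA
  };
}
```

Note that with cudaSupport enabled, affected packages are rebuilt from source unless a CUDA-enabled binary cache is configured.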
| @trofi:matrix.org joined the room. | 15:50:27 |
@trofi:matrix.org | Please have a look at https://github.com/NixOS/nixpkgs/pull/368366 . I have no idea what I am doing. | 15:50:46 |
connor (he/him) | Oh my god builtins.sort requires strict total orderings? | 16:28:17 |
@trofi:matrix.org | No, it requires strict weak ordering, but >= does not provide it. a >= b can't act as lessThan. b < a can, or !(b >= a) can as well. | 16:31:28 |
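The point in Nix terms, as a small expression one could check with `nix-instantiate --eval`:

```nix
let
  xs = [ 3 1 2 ];
in {
  ascending  = builtins.sort (a: b: a < b) xs;  # strict "less than": a valid
                                                # strict weak ordering -> [ 1 2 3 ]
  descending = builtins.sort (a: b: b < a) xs;  # strict "greater than": also
                                                # valid -> [ 3 2 1 ]
  # broken  = builtins.sort (a: b: a >= b) xs;  # >= is reflexive (a >= a is true),
                                                # so it is not strict and the sort's
                                                # behaviour is unspecified
}
```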
SomeoneSerge (back on matrix) | Read your blog post. You've got a talent for discovering this stuff before anyone else | 18:57:22 |
@trofi:matrix.org | If it makes you feel a bit better cuda is not alone in getting sort a bit wrong:
- https://github.com/NixOS/nixpkgs/pull/368429
- https://github.com/NixOS/nixpkgs/pull/368433
| 23:14:25 |