| 13 Nov 2025 |
SomeoneSerge (matrix works sometimes) | They come not single spies... really sorry to hear this, Connor. Take care | 14:07:48 |
Ari Lotter | oh woof, but torch-bin is 2.9.1 and torch is still 2.9.0 | 15:10:16 |
Ari Lotter | aha, repro'd :D | 16:15:41 |
Ari Lotter | https://github.com/NixOS/nixpkgs/issues/461334
issue opened :) | 18:54:11 |
| 14 Nov 2025 |
hexa (UTC+1) | https://hydra.nixos-cuda.org/build/14219 magma runs into the output limit | 04:50:01 |
hexa (UTC+1) | and https://hydra.nixos-cuda.org/jobset/nixos-cuda/cuda-packages-v2#tabs-jobs has no torch package | 04:50:51 |
GaƩtan Lepage | I increased it from 4GB (what nix-community has I think) to 8GB. And it seems to still be broken... | 08:53:41 |
GaƩtan Lepage | This is very weird. It ends up being built anyway as a dependency. I'll try to investigate... | 08:55:38 |
GaƩtan Lepage | Ok, I figured it out. torch and torchWithoutRocm have the same outPaths. So torch is getting filtered out in favor of torchWithoutRocm. | 09:25:13 |
Ari Lotter | realized this isn't a 2.9 regression, it's a -bin vs source problem :/ | 18:37:14 |
Ari Lotter | bin works fine T_T | 18:37:19 |
Ari Lotter | updated the ticket :) | 18:37:29 |
GaƩtan Lepage | I updated torch-bin to 2.9.1 yesterday. The PR for the source-based build is https://github.com/NixOS/nixpkgs/pull/461241 | 21:55:22 |
apyh | i see your commit message says torch 2.8->2.9, but it's actually 2.9->2.9.1 :) | 21:56:32 |
GaƩtan Lepage | Good catch, now fixed. | 22:06:15 |
| 15 Nov 2025 |
| cafkafk joined the room. | 12:47:57 |
GaƩtan Lepage | SomeoneSerge (back on matrix) would you have a minute to take a look at the triton/torch bump?
https://github.com/NixOS/nixpkgs/pull/461241 | 14:23:53 |
GaƩtan Lepage | Built with and without CUDA. No obvious regressions. | 14:24:19 |
| 17 Nov 2025 |
Bryan Honof | How would you go about conditionally setting cudaCapabilities when instantiating nixpkgs? I.e.
Imagine I have this:
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.05";
  };
  outputs = { self, nixpkgs }: {
    packages.x86_64-linux.default = let
      pkgs = import nixpkgs {
        system = "x86_64-linux";
        overlays = [ ];
        config = {
          allowUnfree = true;
          cudaSupport = true;
          cudaCapabilities = [ "..." "..." ];
        };
      };
    in
      pkgs.hello;
    packages.aarch64-linux.default = let
      pkgs = import nixpkgs {
        system = "aarch64-linux";
        overlays = [ ];
        config = {
          allowUnfree = true;
          cudaSupport = true;
          cudaCapabilities = if isJetson then [ "..." "..." ] else [ "..." "..." ];
        };
      };
    in
      pkgs.hello;
  };
}
It's the aarch64-linux part specifically that I'm a bit stuck on. I have some cloud servers with NVIDIA GPUs in them that run aarch64-linux, but I also have some Jetson devices that are likewise considered aarch64-linux.
And if I understand the whole thing correctly, I can't just set the cudaCapabilities list to include both the non-Jetson and Jetson capabilities, right? Or at least, if I did, isJetsonBuild would just always eval to true even if the build was meant for the cloud server.
Probably something stupid I'm just overlooking, sorry for bothering.
| 17:35:32 |
SomeoneSerge (matrix works sometimes) |
It's the aarch64-linux part specifically that I'm a bit stuck
There's aarch64-linux and there's aarch64-linux. It's an artifact of us not including cuda/rocm stuff in hostPlatform (yet). The isJetsonBuild should only evaluate to true if your cudaCapabilities are jetson capabilities
| 19:43:44 |
SomeoneSerge (matrix works sometimes) | So it's not really about "setting cudaCapabilities conditionally", it's about instantiating nixpkgs for different platforms. For flakes you'd have to suffix the attributes of one of the aarch64-linux platforms, or move stuff to legacyPackages. But, of course, you could also simply not maintain the list of already-evaluated and not-really-overridable "recipes", i.e. drop the flake :) | 19:47:42 |
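[Editor's note: the attribute-suffixing approach described above could look roughly like the sketch below. The capability strings ("8.7" for Jetson Orin, "9.0" for Hopper-class cloud GPUs) and the default-jetson attribute name are illustrative assumptions; substitute the values for your actual hardware.]

```nix
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.05";

  outputs = { self, nixpkgs }: let
    # Helper: instantiate nixpkgs once per (system, capabilities) pair.
    pkgsFor = system: cudaCapabilities: import nixpkgs {
      inherit system;
      config = {
        allowUnfree = true;
        cudaSupport = true;
        inherit cudaCapabilities;
      };
    };
  in {
    # Two distinct instantiations for the "same" aarch64-linux:
    # one targeting cloud GPU servers, one targeting Jetson devices.
    packages.aarch64-linux.default =
      (pkgsFor "aarch64-linux" [ "9.0" ]).hello;
    packages.aarch64-linux.default-jetson =
      (pkgsFor "aarch64-linux" [ "8.7" ]).hello;
  };
}
```

The point is that each call to `pkgsFor` yields a separate nixpkgs evaluation, so isJetsonBuild is derived independently for each from its own cudaCapabilities list.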
SomeoneSerge (matrix works sometimes) | Think I caught a touch of a cold, sorry | 19:48:43 |