| 10 Sep 2025 |
matthewcroughan | * Actually, with the HSA override to 11.0.0 it worked, but I get a different kind of error
```
0%|          | 0/1 [00:00<?, ?it/s]
:0:rocdevice.cpp :3020: 78074348282d us: Callback: Queue 0x7f831c600000 aborting with error : HSA_STATUS_ERROR_INVALID_ISA: The instruction set architecture is invalid. code: 0x100f
Aborted (core dumped) command nix "$@"
```
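The "HSA override" mentioned above is the ROCm runtime's HSA_OVERRIDE_GFX_VERSION environment variable. A minimal sketch of one way to pin it in a Nix dev shell follows; only the variable and its value come from the conversation, the rest is illustrative:
```nix
# Minimal sketch (not from the conversation): a dev shell that sets the ROCm
# HSA_OVERRIDE_GFX_VERSION override discussed above. The value 11.0.0 makes the
# runtime report a gfx1100 (RDNA3) ISA; an HSA_STATUS_ERROR_INVALID_ISA abort like
# the one quoted usually means the loaded kernels were built for a different ISA.
{ pkgs ? import <nixpkgs> { } }:

pkgs.mkShell {
  # Attributes on mkShell are exported as environment variables in the shell.
  HSA_OVERRIDE_GFX_VERSION = "11.0.0";
}
```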
| 19:59:14 |
Robbie Buxton | In reply to @layus:matrix.org Is this team involved in the Flox/NVIDIA partnership? (See https://flox.dev/cuda/) I guess so, since the NixOS Foundation also is, but there is no mention of this team or its amazing work. Ron mentioned them here: https://discourse.nixos.org/t/nix-flox-nvidia-opening-up-cuda-redistribution-on-nix/69189/7 | 20:01:06 |
matthewcroughan | Is there a rocm room? | 20:54:56 |
Lun | https://matrix.to/#/#ROCm:nixos.org | 21:40:38 |
| connor (burnt/out) (UTC-8) changed their display name from connor (he/him) (UTC+2) to connor (he/him) (UTC-7). | 22:20:37 |
Gaétan Lepage | Well done guys for allowing this to happen (connor (he/him) (UTC-7) SomeoneSerge (back on matrix) stick...)
👏 | 23:07:17 |
Gaétan Lepage | * Well done guys for allowing this to happen (connor (he/him) (UTC-7) SomeoneSerge (back on matrix) stick Samuel Ainsworth...)
👏 | 23:22:06 |
SomeoneSerge (back on matrix) | The negotiations with NVIDIA have been run by Flox (although in parallel with many other companies' simultaneous inquiries). Ron kept us, the Foundation, and the SC in the loop, and offered both legal help and workforce. The current idea, roughly, is that the CUDA team gets access to the relevant repo and infra, and works closely with Flox to secure the position and a comms channel to NVIDIA. | 23:26:05 |
hexa (UTC+1) | What were the blockers for setting this up within the NixOS Foundation? | 23:54:32 |
| 11 Sep 2025 |
Tristan Ross | From what I recall, it was something to do with having a legal entity in the US. If the foundation was registered in the US, then it would've been fine. This has been going on since at least Planet Nix, glad to see it finally pull through. | 00:03:41 |
| @ihar.hrachyshka:matrix.org joined the room. | 00:09:51 |
connor (burnt/out) (UTC-8) | Will try to take a look later | 00:10:55 |
connor (burnt/out) (UTC-8) | You’d probably need to override writeGpuTestPython to use the Python package set you specify. It’s inside cudaPackages so it has no way of knowing what scope you’re using it in. | 00:12:26 |
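A rough sketch of that suggestion, assuming writeGpuTestPython is a callPackage-style attribute whose Python package set input can be swapped with .override. The argument name python3Packages and the call shape below are assumptions made for illustration, not the verified interface; check the definition under cudaPackages in Nixpkgs before relying on it:
```nix
# Hypothetical sketch: make cudaPackages.writeGpuTestPython draw from the Python
# package set you actually build against, instead of the one it closed over.
let
  pkgs = import <nixpkgs> {
    config.allowUnfree = true;
    config.cudaSupport = true;
  };

  # The package set used elsewhere in your project (illustrative choice).
  myPythonPackages = pkgs.python312Packages;

  # Assumed input name; writeGpuTestPython lives inside cudaPackages, so by
  # default it cannot know which Python scope the caller is using.
  writeGpuTest = pkgs.cudaPackages.writeGpuTestPython.override {
    python3Packages = myPythonPackages;
  };
in
# Assumed call shape: libraries selected from the (now overridden) package set.
writeGpuTest { libraries = ps: [ ps.torch ]; } ''
  import torch
  assert torch.cuda.is_available()
''
```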
connor (burnt/out) (UTC-8) | They’re still mostly okay; I’ve been exhausted recently, so I haven’t been awake too early (I was up at 3am this morning, but that’s something else).
If we can do something closer to 8, that might be easier; Kevin Mittman, how are you with morning meetings? It’d be good to catch up and discuss what’s been done so far with the database project Serge has been working on. | 00:14:16 |
connor (burnt/out) (UTC-8) | NVIDIA’s EULA effectively prohibits running CUDA binaries they release on non-NVIDIA hardware (see 1.2.8: https://docs.nvidia.com/cuda/eula/index.html#limitations) | 00:20:07 |
Kevin Mittman (UTC-7) | https://developer.nvidia.com/blog/developers-can-now-get-cuda-directly-from-their-favorite-third-party-platforms/ | 01:17:14 |
SomeoneSerge (back on matrix) | The fact remains: it was amd who shut it down, not nvidia? | 01:26:18 |
SomeoneSerge (back on matrix) | nvidia playing the "please submit this on paper by post, and attach proof of your residence, such as electricity bills delivered to your address" game (being the bureaucrat and coming up with arbitrary terms as they go) | 01:30:19 |
SomeoneSerge (back on matrix) | ah yeah, true | 01:33:21 |
Gaétan Lepage | When is the next one? I'll try to join | 10:16:38 |
le-chat | I've updated the gist with the latest version. It seems to compile and run a pipeline with tensor_filter framework=pytorch accelerator=true:gpu ... ! fakesink, but I haven't had time to really verify it. | 10:45:16 |
| 12 Sep 2025 |
connor (burnt/out) (UTC-8) | Ugh | 14:02:04 |
connor (burnt/out) (UTC-8) | https://github.com/NixOS/nixpkgs/issues/442378 | 14:02:06 |
SomeoneSerge (back on matrix) | Ah nice | 14:18:25 |
SomeoneSerge (back on matrix) | Let's start adding special branches for nix semvers and for lix | 14:18:38 |
connor (burnt/out) (UTC-8) | https://github.com/NixOS/nixpkgs/pull/442389 | 14:25:03 |
SomeoneSerge (back on matrix) | Offtopic but does the original Nix commit not change old nixlang expressions' drvPaths? | 14:40:50 |
SomeoneSerge (back on matrix) | Ahhh I see, the ATerm repr is still the same | 14:45:15 |
connor (burnt/out) (UTC-8) | Okay, what should nix-community/cuda-legacy look like, and how should it be structured? As an example: supporting CUDA 11. NCCL has already cut its last release supporting CUDA 11, so we need a package expression for that in the repo. Then there’s PyTorch: if it has already cut its last release supporting CUDA 11, we need an expression for that as well. In the case of packages with many dependencies, like PyTorch, I’m not sure how long we’d be able to rely on upstream to provide dependencies, even if we vendor the package expression in tree, because eventually they’ll get bumped to something too new for the version of the package we’re locked to.
Is it viable for cuda-legacy to just re-expose a copy of Nixpkgs fixed to some point in time? I would think not (at least naively) without a way to deduplicate the Nixpkgs instances involved (e.g., providing cuda-legacy as an overlay and having it draw the relevant packages from a fixed version of Nixpkgs while somehow reusing most of the underlying Nixpkgs instance). | 19:54:54 |
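A minimal sketch of the overlay idea from that last paragraph, assuming a Nixpkgs release branch that still carries CUDA 11 era packages (nixos-24.05 is used purely as an illustration) and a made-up attribute layout; the real structure of cuda-legacy is exactly the open question above:
```nix
# Hypothetical sketch: cuda-legacy as an overlay that re-exposes a handful of
# legacy expressions from one pinned Nixpkgs, while everything else keeps coming
# from the caller's Nixpkgs. Branch and attribute names here are illustrative.
final: prev:
let
  # Pin a single older Nixpkgs and share it across all legacy attributes, so only
  # one extra Nixpkgs is instantiated no matter how many packages are re-exposed.
  # Assumes the pinned branch still carries cudaPackages_11.
  pinned = import (builtins.fetchTarball
    "https://github.com/NixOS/nixpkgs/archive/nixos-24.05.tar.gz") {
      inherit (prev) system;
      config = { allowUnfree = true; cudaSupport = true; };
    };
in
{
  cudaLegacyPackages = {
    # Example: the last NCCL release that still supports CUDA 11, taken from the
    # pin. Deep dependency trees (e.g. PyTorch) would still close over the pinned
    # Nixpkgs, which is the deduplication concern raised above.
    nccl_cuda11 = pinned.cudaPackages_11.nccl;
  };
}
```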
| 13 Sep 2025 |
| ysndr joined the room. | 00:58:03 |