| 10 Mar 2026 |
connor (burnt/out) (UTC-8) | 13.2 is out https://developer.download.nvidia.com/compute/cuda/redist/ | 03:35:22 |
connor (burnt/out) (UTC-8) | danielrf Orin is supported by 13.2/JP7: https://developer.nvidia.com/blog/cuda-13-2-introduces-enhanced-cuda-tile-support-and-new-python-features/#embedded_devices | 06:10:08 |
GaƩtan Lepage | I got you connor (burnt/out) (UTC-8)
https://github.com/NixOS/nixpkgs/pull/498523 | 11:52:20 |
GaƩtan Lepage | We do have libcublasmp. Is this doc outdated? https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/cuda-modules/README.md#distinguished-packages | 12:31:03 |
connor (burnt/out) (UTC-8) | Ah yep it's outdated, I packaged nvshmem: https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/cuda-modules/packages/libnvshmem.nix | 15:28:13 |
GaƩtan Lepage | connor (burnt/out) (UTC-8) if I want to bump libcublasmp (to 0.7.x) for example, how do I know which cudaPackage_X should be affected? | 17:36:23 |
connor (burnt/out) (UTC-8) | A very deep reading of the changelog, package contents changes, and thorough rebuilds and runtime verification for consumers | 17:38:14 |
connor (burnt/out) (UTC-8) | Yet another reason we need test suites for downstream packages which exercise those libraries; relying on NVIDIA's samples (if they're even available) isn't sufficient because we care about whether consumers break | 17:39:46 |
connor (burnt/out) (UTC-8) | All of the assertions I added to the packages were the result of a ton of reading and gleaning meaning through changelogs and actual package contents changes | 17:40:16 |
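A rough illustration of the kind of version assertion being described; this is a hypothetical sketch (the names `libcublasmp` and the version bound are illustrative, not the actual nixpkgs code), using the real `lib.assertMsg` and `lib.versionAtLeast` helpers from nixpkgs:

```nix
# Hypothetical sketch: guard a package against an incompatible
# CUDA release, in the style of the assertions described above.
assert lib.assertMsg
  (lib.versionAtLeast cudaPackages.cudaMajorMinorVersion "12.3")
  "libcublasmp 0.7.x requires CUDA >= 12.3 (illustrative bound)";
```

An assertion like this turns a subtle runtime incompatibility into a loud evaluation-time failure, which is why it takes careful changelog reading to get the bounds right.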
GaƩtan Lepage | Sounds like a ton of fun :') | 17:47:23 |
| Cameron Barker joined the room. | 18:18:26 |
| 11 Mar 2026 |
Kevin Mittman (jetlagged/UTC+8) | Redacted or Malformed Event | 01:54:11 |
GaƩtan Lepage | connor (burnt/out) (UTC-8) would you agree with a 12.8 -> 12.9 global bump before messing around with 13.0? | 11:05:21 |
| Theuni changed their display name from Theuni to Christian Theune. | 14:13:00 |
connor (burnt/out) (UTC-8) | Sure! I remember some weird breakages a while back when I wanted to bump immediately after 12.9 became available, but hopefully they're all resolved by now :) | 16:08:54 |
GaƩtan Lepage | https://github.com/NixOS/nixpkgs/pull/498861 | 16:43:46 |
GaƩtan Lepage | connor (burnt/out) (UTC-8)
About https://github.com/NixOS/nixpkgs/pull/498681, I plan to build torch and vllm. If this works fine, I will merge it.
With the CUDA PRs on the way, I won't have the capacity to exhaustively test all of them.
No objection on your side? | 23:37:24 |
GaƩtan Lepage | (same reasoning for https://github.com/NixOS/nixpkgs/pull/498678#issuecomment-4035473707). | 23:39:46 |
GaƩtan Lepage | * (same reasoning for https://github.com/NixOS/nixpkgs/pull/498678). | 23:39:52 |
connor (burnt/out) (UTC-8) | Sounds good! Iāll leave a comment on them | 23:51:04 |
GaƩtan Lepage | I'm testing the CUDA bump more thoroughly though.
~1.3k rebuilds left (out of 1.8k) | 23:53:19 |
GaƩtan Lepage | * connor (burnt/out) (UTC-8)
About https://github.com/NixOS/nixpkgs/pull/498681, I plan to build torch and vllm. If this works fine, I will merge it.
With all the CUDA PRs in the queue, I won't have the capacity to exhaustively test all of them.
No objection on your side? | 23:54:04 |
| 12 Mar 2026 |
| Theuni changed their display name from Christian Theune to Theuni. | 07:18:55 |
Bryan Honof | It looks like torch's supportedTorchCudaCapabilities was out-of-sync with upstream. https://github.com/NixOS/nixpkgs/pull/499216
How would I use nixpkgs-review to test these changes? | 10:53:19 |
GaƩtan Lepage | Thanks for the PR!
Well, you don't want to rebuild all torch consumers for this. What you can do is the following:
nixpkgs-review --extra-nixpkgs-config "{ allowUnfree = true; cudaSupport = true; }" -p python3Packages.torch -p python3Packages.vllm -p python3Packages.torchvision
| 12:39:42 |
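For reference, a hedged sketch of how that command could target the PR from the message above specifically, assuming the standard `nixpkgs-review pr` subcommand (the PR number 499216 comes from the earlier message; everything else is as Gaétan wrote it):

```shell
# Sketch: review only the packages affected by PR 499216,
# with unfree + CUDA enabled in the nixpkgs config.
nixpkgs-review pr 499216 \
  --extra-nixpkgs-config "{ allowUnfree = true; cudaSupport = true; }" \
  -p python3Packages.torch \
  -p python3Packages.vllm \
  -p python3Packages.torchvision
```

The explicit `-p` flags keep the rebuild set small instead of rebuilding every torch consumer.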
GaƩtan Lepage | I'll try to have a look at it before next week | 12:39:58 |
GaƩtan Lepage | connor (burnt/out) (UTC-8) actually, neither the current nor the new gpu-burn version work:
❯ ./result/bin/gpu_burn
Run length not specified in the command line. Using compare file: /nix/store/9c2avfi2bxc2aydfl2sdgkp8iamhj8as-gpu-burn-0-unstable-2024-04-09/share/compare.ptx
Burning for 10 seconds.
GPU 0: NVIDIA GeForce RTX 3060 (UUID: GPU-7d08a1e6-4634-499f-d58a-91bf77137f69)
Initialized device 0 with 11911 MB of memory (11788 MB available, using 10609 MB of it), using FLOATS
Results are 268435456 bytes each, thus performing 39 iterations
Couldn't init a GPU test: Error in load module (gpu_burn-drv.cpp:239): a PTX JIT compilation failed
0.0% proc'd: -1 (0 Gflop/s) errors: 0 (DIED!) temps: 36 C
(tested on 2 GPUs) | 19:56:18 |
GaƩtan Lepage | Nevermind, all good.
You need to carefully set cudaCapabilities for it to run fine on a given GPU: https://github.com/NixOS/nixpkgs/pull/499323#issuecomment-4049769046 | 20:25:50 |
apyh | In reply to @glepage:matrix.org Nevermind, all good.
You need to carefully set cudaCapabilities for it to run fine on a given GPU: https://github.com/NixOS/nixpkgs/pull/499323#issuecomment-4049769046 should it have an isBroken if cudaCapabilities has more than one item, then? | 20:42:25 |
GaƩtan Lepage | Not really. It selects the highest (technically, the last) capability from your config.cudaCapabilities.
So there's no fundamental reason why a list with additional caps lower than your GPU's could not work. | 22:14:15 |
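A minimal sketch of the kind of pinned configuration being discussed, assuming an RTX 3060 (compute capability 8.6, per NVIDIA's published capability table); the single-element list ensures the last (and only) capability matches the GPU:

```nix
# Sketch: instantiate nixpkgs with cudaCapabilities pinned to the
# RTX 3060's capability so gpu-burn's compiled PTX matches the device.
import <nixpkgs> {
  config = {
    allowUnfree = true;
    cudaSupport = true;
    cudaCapabilities = [ "8.6" ];
  };
}
```

With a mismatched capability list, the driver falls back to PTX JIT compilation, which is what produced the "PTX JIT compilation failed" error in the gpu_burn log above.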