!eWOErHSaiddIbsUNsJ:nixos.org

NixOS CUDA

302 Members
CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda



Sender | Message | Time
11 Mar 2026
@glepage:matrix.orgGaétan Lepage (same reasoning for https://github.com/NixOS/nixpkgs/pull/498678#issuecomment-4035473707). 23:39:46
@glepage:matrix.orgGaétan Lepage * (same reasoning for https://github.com/NixOS/nixpkgs/pull/498678). 23:39:52
@connorbaker:matrix.orgconnor (burnt/out) (UTC-8)Sounds good! I’ll leave a comment on them 23:51:04
@glepage:matrix.orgGaétan Lepage I'm testing the CUDA bump more thoroughly though.
~1.3k rebuilds left (out of 1.8k)
23:53:19
@glepage:matrix.orgGaétan Lepage *

connor (burnt/out) (UTC-8)
About https://github.com/NixOS/nixpkgs/pull/498681, I plan to build torch and vllm. If this works fine, I will merge it.
With all the CUDA PRs in the queue, I won't have the capacity to exhaustively test all of them.

No objection on your side?

23:54:04
12 Mar 2026
@ctheune:matrix.flyingcircus.ioTheuni changed their display name from Christian Theune to Theuni. 07:18:55
@bjth:matrix.orgBryan Honof

It looks like torch's supportedTorchCudaCapabilities was out-of-sync with upstream. https://github.com/NixOS/nixpkgs/pull/499216

How would I use nixpkgs-review to test these changes?

10:53:19
@glepage:matrix.orgGaétan Lepage

Thanks for the PR!

Well, you don't want to rebuild all torch consumers for this. What you can do is the following:

nixpkgs-review --extra-nixpkgs-config "{ allowUnfree = true; cudaSupport = true; }" -p python3Packages.torch -p python3Packages.vllm -p python3Packages.torchvision
12:39:42
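
A full invocation might look like the following sketch. The `pr` subcommand and the flag placement are assumptions about the nixpkgs-review CLI; the PR number 499216 is the one Bryan linked above.

```shell
# Hedged sketch: build only the relevant torch packages for PR 499216
# with unfree and CUDA support enabled, instead of rebuilding all
# torch consumers.
nixpkgs-review pr 499216 \
  --extra-nixpkgs-config '{ allowUnfree = true; cudaSupport = true; }' \
  -p python3Packages.torch \
  -p python3Packages.torchvision \
  -p python3Packages.vllm
```

Limiting the build to a few named packages with `-p` is what keeps the review tractable here, since a change to torch itself would otherwise trigger a rebuild of every downstream consumer.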
@glepage:matrix.orgGaétan LepageI'll try to have a look at it before next week 12:39:58
@glepage:matrix.orgGaétan Lepage

connor (burnt/out) (UTC-8) actually, neither the current nor the new gpu-burn version works:

❮ ./result/bin/gpu_burn
Run length not specified in the command line. Using compare file: /nix/store/9c2avfi2bxc2aydfl2sdgkp8iamhj8as-gpu-burn-0-unstable-2024-04-09/share/compare.ptx
Burning for 10 seconds.
GPU 0: NVIDIA GeForce RTX 3060 (UUID: GPU-7d08a1e6-4634-499f-d58a-91bf77137f69)
Initialized device 0 with 11911 MB of memory (11788 MB available, using 10609 MB of it), using FLOATS
Results are 268435456 bytes each, thus performing 39 iterations
Couldn't init a GPU test: Error in load module (gpu_burn-drv.cpp:239): a PTX JIT compilation failed
0.0%  proc'd: -1 (0 Gflop/s)   errors: 0  (DIED!)  temps: 36 C 

(tested on 2 GPUs)

19:56:18
@glepage:matrix.orgGaétan Lepage Nevermind, all good.
You need to carefully set cudaCapabilities for it to run fine on a given GPU: https://github.com/NixOS/nixpkgs/pull/499323#issuecomment-4049769046
20:25:50
@apyh:matrix.orgapyh
In reply to @glepage:matrix.org
Nevermind, all good.
You need to carefully set cudaCapabilities for it to run fine on a given GPU: https://github.com/NixOS/nixpkgs/pull/499323#issuecomment-4049769046
should it have an isBroken if cudaCapabilities has more than one item, then?
20:42:25
@glepage:matrix.orgGaétan Lepage Not really. It selects the highest (technically, the last) capability from your config.cudaCapabilities.
So there's no fundamental reason why a list with additional capabilities, lower than that of your GPU, could not work.
22:14:15
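
The advice above can be illustrated with a configuration sketch. The value "8.6" is an assumption: it is the compute capability commonly listed for the RTX 3060 seen in the gpu_burn log; check NVIDIA's compute-capability table for your own device.

```nix
# Hedged sketch of a nixpkgs configuration targeting a specific GPU.
{
  nixpkgs.config = {
    allowUnfree = true;
    cudaSupport = true;
    # Include the capability of the GPU you will actually run on, so the
    # compiled code matches the device and the PTX JIT failure shown in
    # the gpu_burn log is avoided. Extra lower capabilities are fine too,
    # since the highest (last) entry is the one selected.
    cudaCapabilities = [ "8.6" ];
  };
}
```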
13 Mar 2026
@bjth:matrix.orgBryan HonofThanks! That seems to have built successfully. :) 09:26:19
16 Mar 2026
@justbrowsing:matrix.orgKevin Mittman (jetlagged/UTC-7) changed their display name from Kevin Mittman (jetlagged/UTC+8) to Kevin Mittman (jetlagged/UTC-7). 00:57:26
17 Mar 2026
@connorbaker:matrix.orgconnor (burnt/out) (UTC-8)Does https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#cuda-compiler (improved C++ standards conformance bullet) mean we could drop our patches to NVCC for 13.2? 22:38:09
@kirillrdy:matrix.orgkirillrdy joined the room.22:48:24
@kirillrdy:matrix.orgkirillrdy connor (burnt/out) (UTC-8): git blame pkgs/development/cuda-modules/packages/tests/onnx-tensorrt/short.nix shows your name, is there onnx-tensorrt support in nixpkgs ? i think not 23:03:22
@connorbaker:matrix.orgconnor (burnt/out) (UTC-8)No, I packaged a bunch of stuff in https://github.com/connorbaker/cuda-packages but haven't had the chance to upstream it because they have massive dependency chains and I also need to support every release since CUDA 11.4 🫠 23:47:30
@kirillrdy:matrix.orgkirillrdyah awesome ! thanks for the link23:48:25
@kirillrdy:matrix.orgkirillrdyi guess i could have searched myself23:48:42
18 Mar 2026
@justbrowsing:matrix.orgKevin Mittman (jetlagged/UTC-7) kirillrdy: do you happen to know how is it related to libnvonnxparser ? 03:55:22
@connorbaker:matrix.orgconnor (burnt/out) (UTC-8)iirc it provides that lib when built; see https://github.com/onnx/onnx-tensorrt/blob/10.15-GA/NvOnnxParser.h. I think previously when I packaged it (with the intention of upstreaming it to Nixpkgs, never got around to it) I preferred to use libnvonnxparser from onnx-tensorrt instead of the one provided by the tensorrt binary archives 06:00:28
@connorbaker:matrix.orgconnor (burnt/out) (UTC-8)ugh there's still a typo on the TensorRT page where ONNX is spelled "ONYX" https://developer.nvidia.com/tensorrt#:~:text=for%20production%20applications.-,ONYX,-%3A 06:01:30
@connorbaker:matrix.orgconnor (burnt/out) (UTC-8) Gaétan Lepage: various patches for CUDA 13 stuff https://github.com/ConnorBaker/blender-temp/blob/7132a52cd929b764e83dc97fb0596b8caae58c95/overlay.nix 06:43:24
@glepage:matrix.orgGaétan LepageThanks for sharing :) 11:32:30
@ereslibre:ereslibre.socialereslibre removed their profile picture.15:52:02
@ereslibre:ereslibre.socialereslibrehello! was wondering, can we get github.com/NixOS/nixpkgs/pull/489218 merged? Thank you :) 15:52:49
@glepage:matrix.orgGaétan Lepage connor (burnt/out) (UTC-8), can you stamp https://github.com/NixOS/nixpkgs/pull/501079 whenever you have a minute please? 17:55:27
@ereslibre:ereslibre.socialereslibre thanks connor (burnt/out) (UTC-8) ! 17:56:59
