
NixOS CUDA

308 Members
CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda



Sender | Message | Time
30 Mar 2026
@glepage:matrix.orgGaétan Lepage Cross posting: https://matrix.to/#/%21CTCrFzsBPYmDLmrja4%3A0upti.me/%2422QX0_K687Lf14cWQH2WGaxKX-rBZo0CAmm17KHnxWQ?via=nixos.org&via=matrix.org&via=tchncs.de 07:50:04
@connorbaker:matrix.orgconnor (he/him)The proposal serge linked is the closest thing I’m aware of to any attempt to standardize14:27:46
31 Mar 2026
@connorbaker:matrix.orgconnor (he/him)Is there anything we need to get in for https://github.com/NixOS/nixpkgs/issues/504935? I know I should package cutile and the triton cutile backend stuff :/14:29:14
1 Apr 2026
@ccicnce113424:matrix.orgccicnce113424
In reply to @ccicnce113424:matrix.org
https://github.com/NixOS/nixpkgs/pull/498612
ping
04:03:52
@glepage:matrix.orgGaétan LepageWho's reviewing driver-related PRs usually? I'm not familiar at all with this part of the code base :/15:36:46
@ss:someonex.netSomeoneSerge (matrix works sometimes)Yet to check the link, but generally Kiskae is the driver guru15:53:44
@matrixpenguin:matrix.orgpenguin joined the room.21:57:36
2 Apr 2026
@connorbaker:matrix.orgconnor (he/him)you ever see something that you just know is going to cause you immense pain in 3-6mo?18:41:10
@connorbaker:matrix.orgconnor (he/him)

https://github.com/NixOS/nixpkgs/pull/505958

  • I need to support onnx-tensorrt
  • It's an absolute pain in the ass to package and make work over all the CUDA versions I have to support (cough cuda-legacy cough)
  • It's not packaged in-tree (I packaged a (then new!) copy here https://github.com/ConnorBaker/cuda-packages/blob/7604ebdb8e9484c633710b408e50816e95ebfac9/pkgs/by-name/on/onnx-tensorrt/package.nix)
  • I don't understand why ONNX_ML being set to 1 broke onnx-tensorrt or even know if it's still a problem
18:43:19
@ccicnce113424:matrix.orgccicnce113424
In reply to @ccicnce113424:matrix.org
ping
NVIDIA transitioned to zstd compression starting with version 530.30.02 three years ago, yet the build dependencies were never updated to reflect this. Consequently, driver extraction has been relying on the bsdtar fallback for three years, an oversight that went completely unnoticed until now.
19:28:44
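The distinction above is easy to check mechanically: zstd and xz payloads are identified by fixed magic bytes at the start of the stream. A minimal sketch (the function name and the idea of sniffing the embedded archive are illustrative, not the actual nixpkgs code):

```python
# zstd frames start with magic bytes 0x28 0xB5 0x2F 0xFD (little-endian 0xFD2FB528);
# xz streams start with 0xFD '7zXZ' 0x00. A driver payload compressed with zstd
# fails an xz-only check, which is how extraction can silently fall through to a
# bsdtar fallback that happens to understand both formats.
ZSTD_MAGIC = b"\x28\xb5\x2f\xfd"
XZ_MAGIC = b"\xfd7zXZ\x00"

def payload_compression(blob: bytes) -> str:
    """Best-effort guess of the compression of an embedded archive blob."""
    if blob.startswith(ZSTD_MAGIC):
        return "zstd"
    if blob.startswith(XZ_MAGIC):
        return "xz"
    return "unknown"

print(payload_compression(ZSTD_MAGIC + b"\x00" * 8))  # zstd
```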
3 Apr 2026
@ss:someonex.netSomeoneSerge (matrix works sometimes)Yes, daily.01:43:30
@julm:matrix.orgjulm joined the room.02:36:25
@connorbaker:matrix.orgconnor (he/him)Okay I started packaging cuda_tileiras and cuda-tile14:56:35
@connorbaker:matrix.orgconnor (he/him) Does... does tileiras use dlopen with relative paths to find libnvvm.so 😱 15:08:03
@connorbaker:matrix.orgconnor (he/him)Also, can someone explain https://developer.download.nvidia.com/compute/cuda/redist/cuda_compat_orin/ to me? If I had to hazard a guess, it would be that it would allow using CUDA 13.1/13.2 on a Jetson based on JetPack 7 (since support for Orin is added in JetPack 7.2, but that hasn't been released yet as far as I can tell, though it is talked about in the CUDA 13.2 blog post: https://developer.nvidia.com/blog/cuda-13-2-introduces-enhanced-cuda-tile-support-and-new-python-features/#embedded_devices). Is that correct? Or does it allow forward compat across major versions and can be used with JetPack 6, similar to how cuda_compat for Orins on JetPack 5 allows support for up to CUDA 12.2 despite shipping with CUDA 11.4?15:19:07
@connorbaker:matrix.orgconnor (he/him)

Unrelated, but I'm getting errors with the nix-required-mounts hook, maybe I'm just on a bad commit:

$ /nix/store/kgcbq3ablba98myqn8j4sq7yla6nzs3m-nix-required-mounts-0.0.1/bin/nix-required-mounts /nix/store/5i5w8byychlxjbrjnvfl4rwbi9wqr66d-nix-shell-env.drv
Traceback (most recent call last):
  File "/nix/store/kgcbq3ablba98myqn8j4sq7yla6nzs3m-nix-required-mounts-0.0.1/bin/.nix-required-mounts-wrapped", line 9, in <module>
    sys.exit(entrypoint())
             ~~~~~~~~~~^^
  File "/nix/store/kgcbq3ablba98myqn8j4sq7yla6nzs3m-nix-required-mounts-0.0.1/lib/python3.13/site-packages/nix_required_mounts.py", line 142, in entrypoint
    [canon_drv_path] = parsed_drv.keys()
    ^^^^^^^^^^^^^^^^
ValueError: too many values to unpack (expected 1)

Using commit https://github.com/NixOS/nixpkgs/commits/8110df5ad7abf5d4c0f6fb0f8f978390e77f9685 of Nixpkgs. I remember something about JSON derivation format change but I can't remember for the life of me if we already fixed that. Previous commit was https://github.com/NixOS/nixpkgs/commits/a6531044f6d0bef691ea18d4d4ce44d0daa6e816.

15:49:56
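The `ValueError` in the traceback comes from single-element destructuring of the parsed derivation JSON: `[canon_drv_path] = parsed_drv.keys()` only succeeds when there is exactly one top-level key, so any format change that adds keys at that level breaks it. A minimal reproduction (the key names and the shape of the "new" format are hypothetical; only the unpacking behavior is being demonstrated):

```python
# Single-element unpacking raises ValueError as soon as the parsed JSON has
# more than one top-level key, matching the traceback from nix-required-mounts.
old_format = {"/nix/store/aaa-foo.drv": {"outputs": {}}}
new_format = {"/nix/store/aaa-foo.drv": {}, "/nix/store/bbb-bar.drv": {}}

[canon_drv_path] = old_format.keys()  # fine: exactly one top-level key

try:
    [canon_drv_path] = new_format.keys()  # raises: two top-level keys
except ValueError as err:
    print(err)  # too many values to unpack (expected 1)
```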
@glepage:matrix.orgGaétan Lepage

Interesting, I'm facing another error:

error:
       … while setting up the build environment

       error: getting attributes of path '/nix/store/02wkfkv635277dyq176lw5dcqpxlpsl0-kmod-31/sbin/bin': No such file or directory
15:56:06
@justbrowsing:matrix.orgKevin Mittman (UTC-7)
In reply to @connorbaker:matrix.org
Also, can someone explain https://developer.download.nvidia.com/compute/cuda/redist/cuda_compat_orin/ to me? If I had to hazard a guess, it would be that it would allow using CUDA 13.1/13.2 on a Jetson based on JetPack 7 (since support for Orin is added in JetPack 7.2, but that hasn't been released yet as far as I can tell, though it is talked about in the CUDA 13.2 blog post: https://developer.nvidia.com/blog/cuda-13-2-introduces-enhanced-cuda-tile-support-and-new-python-features/#embedded_devices). Is that correct? Or does it allow forward compat across major versions and can be used with JetPack 6, similar to how cuda_compat for Orins on JetPack 5 allows support for up to CUDA 12.2 despite shipping with CUDA 11.4.
I don't deal with JetPack, but the general idea is that it allows using a newer CUDA Toolkit than the driver in the BSP would otherwise support; that mapping is a bit fuzzy though.
17:46:54
@justbrowsing:matrix.orgKevin Mittman (UTC-7)
In reply to @connorbaker:matrix.org
Also, can someone explain https://developer.download.nvidia.com/compute/cuda/redist/cuda_compat_orin/ to me? If I had to hazard a guess, it would be that it would allow using CUDA 13.1/13.2 on a Jetson based on JetPack 7 (since support for Orin is added in JetPack 7.2, but that hasn't been released yet as far as I can tell, though it is talked about in the CUDA 13.2 blog post: https://developer.nvidia.com/blog/cuda-13-2-introduces-enhanced-cuda-tile-support-and-new-python-features/#embedded_devices). Is that correct? Or does it allow forward compat across major versions and can be used with JetPack 6, similar to how cuda_compat for Orins on JetPack 5 allows support for up to CUDA 12.2 despite shipping with CUDA 11.4.
This one is specific to Jetson Orin, the other cuda_compat provides forward compatibility for x86_64 server, arm64 server, Jetson Thor, etc
17:48:59
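For readers unfamiliar with the mechanism Kevin describes: the cuda_compat packages ship a newer user-mode driver (libcuda.so.1 and friends) that is put ahead of the BSP's older copy on the dynamic loader's search path. A minimal sketch, assuming the conventional /usr/local/cuda/compat install location from NVIDIA's forward-compatibility packaging (the exact path on a Jetson BSP may differ):

```shell
# Hypothetical path: cuda-compat packages conventionally install here.
compat_dir=/usr/local/cuda/compat

# Prepend so the loader resolves libcuda.so.1 to the compat copy first,
# falling back to the system driver libraries for everything else.
export LD_LIBRARY_PATH="$compat_dir${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```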
@justbrowsing:matrix.orgKevin Mittman (UTC-7) changed their display name from Kevin Mittman (jetlagged/UTC-7) to Kevin Mittman (UTC-7).17:50:31
@neobrain:matrix.orgneobrain joined the room.18:31:27
@glepage:matrix.orgGaétan Lepage *

Neat project!
https://www.youtube.com/watch?v=AvK_gi_snJE
I find it really cool that someone wrote a convenient flake to make the DGX Spark work OOTB.

Some quotes from the conclusion:

  • "Nixpkgs CUDA aarch64 ecosystem is quite healthy"
    Thanks to nix-community's 80-core Ampere system
  • "Flakes make a good user interface"
    This one is for you SomeoneSerge (matrix works sometimes)
  • "Lack of cached aarch64 CUDA builds"
    👀💸
  • "Uneven freshness of packages"
    Until we unlock core and maintainer cloning, that's going to be tough to change (especially regarding how fast the whole space is moving).
  • "Community lacks GPU hardware"
    Yes. And CPU too 😅
23:20:11
4 Apr 2026
@neobrain:matrix.orgneobrainIt's working really nicely on mine too, but the lack of cache definitely hurts 😭07:07:44
@connorbaker:matrix.orgconnor (he/him)Oh yikes a 1.0 release from the Helion team: https://github.com/pytorch/helion/releases/tag/v1.0.017:34:57


