| 10 Apr 2025 |
connor (he/him) | As an update, I’ve been added to the nix-community org but haven’t had a chance to push a copy of the to-be-removed CUDA components and GCC expressions | 16:26:37 |
connor (he/him) | The complication there is mostly figuring out the important paths in the tree, so I know what to keep when doing a filter (to retain commit history) | 16:27:19 |
connor (he/him) | SomeoneSerge (UTC+U[-12,12]): as a quick fix for nix-gl-host on Jetson devices, for the problem I was seeing where cuda_compat wouldn’t be set in LD_LIBRARY_PATH and the drivers would take priority: would it be enough to scan the runpath of the binary passed to nixglhost, look for cuda_compat, and conditionally prepend it to LD_LIBRARY_PATH if it is present?
Kind of gross, but given that usage of cuda_compat is per-application and not a single vendor-supplied directory on the host, I’m not sure how else to handle it.
Ugh, I imagine one should also check the version of the host driver to see whether using cuda_compat, when it is present in the runpath, would actually break things (i.e., the driver is newer than what cuda_compat provides for, so we need to use the driver’s backward compatibility instead of cuda_compat’s forward compatibility). | 16:32:12 |
SomeoneSerge (back on matrix) | Hmmm why'd you need the scanning? | 16:32:51 |
SomeoneSerge (back on matrix) |
the driver is newer than what cuda_compat provides for
H'm, interesting
| 16:33:24 |
connor (he/him) | The cuda_compat used in the runpath of the binary provided to nix-gl-host depends on the version of CUDA used
OH
Maybe I’m thinking about this wrong | 16:35:10 |
connor (he/him) | x86 has a cuda_compat library too from what I remember, it’s just not available as a redist
So maybe we shouldn’t package the one for Jetsons
And instead, nixglhost should use the one on the host system if it is available | 16:36:17 |
connor (he/him) | Although that won’t help us on NixOS systems — cuda_compat is usually provided as a Debian package with newer releases of CUDA, so it would just fail to run on NixOS systems if the driver isn’t new enough | 16:37:40 |
SomeoneSerge (back on matrix) |
So maybe we shouldn’t package the one for Jetsons
No, I think whenever it's available we'd rather do the pure linking, because that's what we do for other libraries. This is in general a tradeoff; it would have been great if we had tools for quickly relinking stuff, or for building against reproducible content-addressed stubs with a separate linking phase, but that's not where we are
| 16:39:50 |
connor (he/him) | Ugh
So on all platforms, we should only use cuda_compat if the host driver is old and we need forward compat
I guess the question is where cuda_compat should come from, if the decision to use it or not requires knowing what version the host driver is | 16:40:10 |
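The decision just described (use cuda_compat only when the host driver is too old for the application's CUDA version, and only when cuda_compat itself covers that version) might look like this; all version numbers below are illustrative assumptions, not authoritative values:

```python
# Sketch of the cuda_compat decision logic. Use the compat library only when
# the host driver is older than what the application's CUDA version requires,
# AND cuda_compat is new enough to actually cover that CUDA version.
# Version thresholds are illustrative assumptions.

def should_use_cuda_compat(host_driver: tuple[int, int],
                           driver_required_by_cuda: tuple[int, int],
                           compat_driver_equivalent: tuple[int, int]) -> bool:
    if host_driver >= driver_required_by_cuda:
        # Driver is new enough: rely on the driver's backward compatibility.
        return False
    # Driver too old: cuda_compat helps only if it provides what CUDA needs.
    return compat_driver_equivalent >= driver_required_by_cuda


# Hypothetical numbers: an app built against a CUDA wanting driver >= 535.54
# on a 470-series host driver would need forward compat via cuda_compat.
print(should_use_cuda_compat((470, 57), (535, 54), (535, 104)))  # → True
# A 550-series driver is newer than required, so skip cuda_compat entirely.
print(should_use_cuda_compat((550, 40), (535, 54), (535, 104)))  # → False
```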
SomeoneSerge (back on matrix) | This is not different from the GL/vulkan situation | 16:41:29 |
connor (he/him) | (“Where it should come from” meaning either Nixpkgs via the runpath, or the host OS, which is a non-starter on NixOS systems since we don’t package it; we could, but then for people to add it to their environment they’d need to rebuild, ugh) | 16:41:35 |
connor (he/him) | Oh? What’s that situation? | 16:42:43 |
SomeoneSerge (back on matrix) | The situation is we'd like to develop and link a dynamic shim (libglvnd-like) that can select the right thing at runtime (per the logic you wrote down) | 16:44:07 |
SomeoneSerge (back on matrix) | Nixpkgs breaks GL/Vulkan on NixOS when mixing revisions because we don't have this shim | 16:45:01 |