
NixOS CUDA

288 Members
CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda



6 Feb 2025
@ruroruro:matrix.orgruro *

This might be a stupid question, but when the nixpkgs manual says

All new projects should use the CUDA redistributables available in cudaPackages in place of cudaPackages.cudatoolkit

does it mean that individual derivations from cudaPackages.* should be manually added to buildInputs/nativeBuildInputs. For example, would this mean that I should just manually add cuda_nvcc to nativeBuildInputs?

What if the upstream package expects a single CUDA_PATH path containing all the cuda dependencies? I think, I saw some people using buildEnv to collect all of the required binaries/libraries under a single path, but I am not sure if this is the most elegant way to do this (also, it's not immediately clear, how would a single CUDA_PATH interact with cross compilation).

16:54:58
@ss:someonex.netSomeoneSerge (back on matrix) Honestly, I've no idea what license, if any, applies to torch-bin 17:24:28
@ss:someonex.netSomeoneSerge (back on matrix) Yes, or we could just agree that testing for insecure dependencies is out of scope for hydra 17:26:09
@ss:someonex.netSomeoneSerge (back on matrix) I expect static and devrt to be in .dev's propagatedBuildInputs 17:27:54
@ss:someonex.netSomeoneSerge (back on matrix)

What if the upstream package expects a single CUDA_PATH path containing all the cuda dependencies? I think, I saw some people using buildEnv to collect all of the required

If upstream is co-operative, they need to be contacted and offered a proper solution like FindCUDAToolkit.cmake without any CUDA_PATHs or merged-layout assumptions

17:29:22
@zopieux:matrix.zopi.euzopieux thanks for looking. Sadly, I have now compiled the package myself so it's cached and this doesn't say anything useful. I suppose I can try next time I update. What do you expect out of emptying builders? 17:29:50
@ss:someonex.netSomeoneSerge (back on matrix)

does it mean that individual derivations from cudaPackages.* should be manually added to buildInputs/nativeBuildInputs. For example, would this mean that I should just manually add cuda_nvcc to nativeBuildInputs?

That's the idea. We could consider automation more along the lines of propagatedBuildInputs, but we hope to avoid symlinks, because it's hard to prune the references to static libraries after the build

17:31:10
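A minimal sketch of the per-package approach described above, using individual CUDA redistributables instead of the monolithic cudaPackages.cudatoolkit (the package name and input list are illustrative, not from the discussion):

```nix
# Sketch: a derivation pulling in individual CUDA redistributables.
# `my-app` and the exact component list are hypothetical examples.
{ stdenv, cmake, cudaPackages }:

stdenv.mkDerivation {
  pname = "my-app";
  version = "0.1.0";
  src = ./.;

  nativeBuildInputs = [
    cmake
    cudaPackages.cuda_nvcc   # the compiler belongs in nativeBuildInputs
  ];

  buildInputs = [
    cudaPackages.cuda_cudart # CUDA runtime headers and libraries
    cudaPackages.libcublas   # list only the libraries you actually link
  ];
}
```

Listing components individually keeps the closure small, which is the motivation given above for avoiding a merged symlink tree.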
@ss:someonex.netSomeoneSerge (back on matrix) Maybe gc it? 17:31:58
@ss:someonex.netSomeoneSerge (back on matrix)

4s
Run # Get the latest eval.yml workflow run for the PR's target commit
Comparing against "https://github.com/NixOS/nixpkgs/actions/runs/13155928895"
Workflow was not successful (conclusion: failure), cannot make comparison

Has anyone encountered this? I've no idea what this workflow is even for

17:41:42
@ss:someonex.netSomeoneSerge (back on matrix) changed their display name from SomeoneSerge (Gand St. Pieters) to SomeoneSerge (UTC+U[-12,12]).17:51:07
@ruroruro:matrix.orgruro The upstream in question is NVIDIA/cuda-samples. They are currently using "plain" Makefiles. I think that it's unlikely that we could get them to switch (and I don't really want to try to implement this myself). What would be "the most nixpkgs way" to create a merged CUDA_PATH in this case? 17:54:37
@ruroruro:matrix.orgruro Apart from just using cudatoolkit that is. 17:55:17
@ss:someonex.netSomeoneSerge (back on matrix)It would be what you said, buildEnv/symlinkJoin (which is what cudaPackages.cudatoolkit currently is)17:55:50
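Following the suggestion above, a merged CUDA_PATH for a Makefile-based build could be sketched with symlinkJoin roughly like this (the component list and makeFlags usage are illustrative assumptions, not a confirmed recipe):

```nix
# Sketch: merge the required CUDA components into a single tree
# for an upstream build system that expects one CUDA_PATH.
{ symlinkJoin, cudaPackages }:

symlinkJoin {
  name = "cuda-merged";
  paths = with cudaPackages; [
    cuda_nvcc    # compiler
    cuda_cudart  # runtime
    libcublas    # add further components as the Makefiles require
  ];
}
```

The resulting store path could then be passed to the build, e.g. `makeFlags = [ "CUDA_PATH=${cudaEnv}" ];` where `cudaEnv` refers to the symlinkJoin above. Note the caveat raised earlier in the thread: a merged tree makes it harder to prune references to static libraries, and its interaction with cross compilation is unclear.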
@ruroruro:matrix.orgruro

Hmmm. I just noticed that according to this page the latest supported GCC version for CUDA 12.4 is GCC 13.2, but currently

cudaPackages_12_4.backendStdenv.cc.version == "13.3.0"

is this expected?

18:57:21
@ruroruro:matrix.orgruroNvm, I am blind, it says that newer minor versions are also supported.19:02:35
7 Feb 2025
@connorbaker:matrix.orgconnor (burnt/out) (UTC-8) Ugh FINALLY have a test to catch different versions of the package set leaking into each other: https://github.com/ConnorBaker/cuda-packages/commit/6c9cb3a17962427e9772849a3b7ca08899897aae
Got tired of seeing multiple versions of CUDA dependencies in the closure of members of the package set
02:04:37
@ss:someonex.netSomeoneSerge (back on matrix) Let's do Thursday February 13th 2-3PM UTC? 14:41:38
@stick:matrix.orgstickno idea - seems like an intermittent issue?15:07:19
@stick:matrix.orgstickother than that, are you ok with merging the PR? I would love vllm to appear in the cache15:07:44
@stick:matrix.orgstick* other than that, are you ok with merging the PR? I would love vllm to appear in the nix-community cache15:07:50
@stick:matrix.orgstickand i just merged an update from 0.7.1 -> 0.7.2 to master15:08:02
@stick:matrix.orgsticki rebased the PR to check whether the CI fails again on the same test15:09:55
@stick:matrix.orgstick* i rebased the PR https://github.com/NixOS/nixpkgs/pull/379575 to check whether the CI fails again on the same test15:10:13
@stick:matrix.orgstickupdate: no it did not - i guess there was an error in master, not in my branch15:11:22
@ss:someonex.netSomeoneSerge (back on matrix) Yes ofc. I was about to press the button but then this weird action failed even after I restarted it manually 15:23:31
@stick:matrix.orgstickis that the only thing needed to get vllm into nix-community cache?15:23:58
@ss:someonex.netSomeoneSerge (back on matrix)Looks like it's happy after the rebase?15:24:01
@stick:matrix.orgstickyes, it is15:24:09


