| NixOS CUDA | 288 Members | 58 Servers |
| CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda |
| Sender | Message | Time |
|---|---|---|
| 28 Feb 2025 | ||
| SomeoneSerge (UTC+U[-12,12]): in opencv, why cxxdev and not dev as the output name? | 05:57:42 |
| and what does that mean for buildInputs? | 05:58:37 | |
| oh, opencv has no dev output | 05:59:19 | |
| and openvino has been using the wrong output all along | 05:59:33 | |
| 🤦♂️ | 05:59:36 | |
| couldn't the effectiveStdenv pattern be reduced to | 06:19:53 | |
| In reply to @hexa:lossy.network: Recursion aside, the default value is going to be discarded because there is a stdenv in the parent scope :( | 08:44:51 |
| In reply to @hexa:lossy.network: It contains the propagated build inputs referenced by the cmake module. It's not in dev, because withPackages sources dev... | 08:46:15 |
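For readers skimming this later: a minimal sketch of what that means for a consumer of opencv, based on the openvino mix-up described above. The `uses-opencv` name and the placeholder fields are made up for illustration and are not taken from nixpkgs.

```nix
# Because opencv has no dev output, the CMake config and the propagated
# build inputs referenced by OpenCV's cmake module live in the cxxdev
# output, so a downstream derivation has to reference it explicitly.
{ stdenv, cmake, opencv }:

stdenv.mkDerivation {
  pname = "uses-opencv";   # hypothetical consumer, e.g. openvino
  version = "0.1.0";
  src = null;              # placeholder
  nativeBuildInputs = [ cmake ];
  # Listing plain `opencv` (the default output) here was the bug noted
  # above for openvino; cxxdev is the output that carries what CMake needs.
  buildInputs = [ opencv.cxxdev ];
}
```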
| hm, changing to that yields the same derivations as before, but maybe that is because nothing currently relies on the cuda backendStdenv 🤔 | 17:35:50 | |
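For context, the effectiveStdenv pattern under discussion usually looks roughly like the sketch below (assuming the usual cudaSupport/cudaPackages arguments; `example` is a placeholder and the exact nixpkgs code may differ). The "reduced" default-argument version does not help because callPackage fills `stdenv` from the surrounding package set whenever that attribute exists, so a default like `stdenv ? cudaPackages.backendStdenv` is never consulted -- which is what the "default value is going to be discarded" reply refers to.

```nix
# Sketch of the effectiveStdenv pattern, not a verbatim nixpkgs file.
{
  stdenv,
  config,
  cudaSupport ? config.cudaSupport,
  cudaPackages,
}:

let
  # Use the CUDA backend stdenv (which pins a host compiler compatible
  # with nvcc) only when CUDA support is requested.
  effectiveStdenv = if cudaSupport then cudaPackages.backendStdenv else stdenv;
in
effectiveStdenv.mkDerivation {
  pname = "example";   # placeholder
  version = "0.1.0";
  src = null;          # placeholder
}
```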
| 2 Mar 2025 | ||
| In case anybody is interested: I got nsight compute working: https://github.com/Snektron/nixos-config/blob/main/packages/nsight-compute.nix. Last time I checked there were still issues running the one in nixpkgs, and I don't think the related PR has changed in the meantime. | 11:46:23 |
| 3 Mar 2025 | ||
| Hello, I'm having trouble getting CUDA working on NixOS. I made a post on Discourse but I thought I'd ask here since it's specific to CUDA: https://discourse.nixos.org/t/ollama-cuda-driver-library-init-failure-3/61068/2 In short, I installed ollama, but ollama reports a CUDA driver library init failure (error 3).
I guess error 3 corresponds to the… I think my issue is that I have CUDA 12.4 installed, when my GPU supports CUDA 12.8? Although I'm not certain what the CUDA version reported by… | 12:50:21 |
| Also I hope it's ok to cross-post like this. Apologies if that seems pushy. | 12:51:36 | |
| I also thought the error might be because I'm using the GPU to run Wayland, but I assume that just like CPUs, GPUs can run multiple workloads in parallel? Or does CUDA need to have exclusive access to the GPU? (I know these are very naive questions, I just never dealt with GPUs before) | 12:54:37 | |
| connor (he/him) (UTC-8): I just noticed that pkgs/development/libraries/science/math/tensorrt/extension.nix is a thing. At first glance, this code seems dead to me (or at least I wasn't able to find a place where it is called from)? It seems that nowadays all of the TensorRT-related code lives in pkgs/development/cuda-modules. The last commit (excluding automated reformatting) that touched pkgs/development/libraries/science/math/tensorrt seems to be 8e800cedaf24f5ad9717463b809b0beef7677000 authored by you in 2023. That commit also removed pkgs/development/libraries/science/math/tensorrt/generic.nix. So I am guessing that you forgot to also delete the extension.nix? | 13:43:50 |
| ruro: yes, seems likely :l | 16:39:14 | |
| little_dude: it's fine to cross-post! Sorry it's not working; my only suggestion would be to try running it with whatever flags Ollama needs to enable debugging and/or more verbose logging. The difference between the CUDA driver version and the CUDA library version is fine -- it just means the driver can run CUDA libraries up to and including 12.8. The GPU definitely supports multiple workloads, so that shouldn't be a problem either. I'm strapped for time, so I probably won't be able to help debug or troubleshoot, but I think some other people in here use ollama, so they might be able to chime in. | 16:46:44 |
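Not an authoritative fix, but for anyone else hitting the same error: below is roughly the set of NixOS options involved in running ollama with CUDA. Option names are taken from recent nixpkgs (hardware.graphics.enable was hardware.opengl.enable on older releases), you will also need to allow the unfree NVIDIA/CUDA packages, and the exact options a given system needs may differ.

```nix
# Sketch of a NixOS module enabling the NVIDIA driver and a
# CUDA-accelerated ollama service.
{ ... }:

{
  # Proprietary NVIDIA driver (provides libcuda.so for CUDA applications).
  services.xserver.videoDrivers = [ "nvidia" ];
  hardware.nvidia.modesetting.enable = true;
  hardware.graphics.enable = true;   # hardware.opengl.enable on older releases

  services.ollama = {
    enable = true;
    acceleration = "cuda";   # ask the module for the CUDA-enabled build
  };
}
```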
| 4 Mar 2025 | ||
| I have prepared a cudaPackages_12 update from 12.4 to 12.8 here: https://github.com/NixOS/nixpkgs/pull/386983 -- can you have a look? I also included a nixpkgs-review result (229 marked as broken / 219 failed to build / 2455 packages built), but I am having a hard time figuring out which build failures are new and which were happening even before. Can you advise on the best way to proceed? Please comment on GitHub, I am not always following the discussion here. | 10:48:13 |