| NixOS CUDA | 289 Members | 57 Servers |
| CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda |
| Sender | Message | Time |
|---|---|---|
| 3 Mar 2025 | ||
| little_dude: it's fine to cross-post! Sorry it's not working; my only suggestion would be to try running it with whatever flags Ollama needs to enable debugging and/or. The version difference between the CUDA driver version and the CUDA library version is fine -- it just means you can run CUDA libraries built for versions up to and including 12.8. The GPU definitely supports multiple workloads, so that shouldn't be a problem either. I'm strapped for time, so I probably won't be able to help debug or troubleshoot, but I think some other people in here use Ollama, so they might be able to chime in. | 16:46:44 |
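For readers hitting the same Ollama-on-NixOS problem, the usual wiring looks roughly like the sketch below. The option names are the standard NixOS module options, but treat this as an illustration rather than the poster's exact configuration.

```nix
# Rough sketch of Ollama with CUDA acceleration on NixOS.
# Assumes the stock services.ollama module; adjust to your channel.
{ config, pkgs, ... }:
{
  nixpkgs.config.allowUnfree = true;   # CUDA libraries are unfree
  services.ollama = {
    enable = true;
    acceleration = "cuda";             # selects the CUDA-enabled ollama build
  };
}
```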
| 4 Mar 2025 | ||
| I have prepared a cudaPackages_12 update from 12.4 to 12.8 here: https://github.com/NixOS/nixpkgs/pull/386983 -- can you have a look? I also included a nixpkgs-review result (229 marked as broken / 219 failed to build / 2455 packages built), but I am having a hard time figuring out which build failures are new and which were happening even before. Can you advise on the best way to proceed? Please comment on GitHub, as I am not always following the discussion here | 10:48:13 |
| an ideal thing for me would be if someone indicated the list of packages that really need to have the build fixed before the merge happens and I would (try to) work on fixing these | 10:53:23 | |
| * an ideal thing for me would be if someone indicated the list of packages that really need to have the build fixed before the merge happens and I will (try to) work on fixing these | 10:53:33 | |
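As an aside for anyone consuming the update: if the default CUDA package set moves to 12.8 and you need to stay on (or test against) a specific release, an overlay along these lines is the usual escape hatch. This is a sketch only; it assumes the versioned attributes (cudaPackages_12_4, cudaPackages_12_8) exist in your nixpkgs revision and follow the existing naming pattern.

```nix
# Overlay sketch: pin the default CUDA package set to a specific release.
# Attribute names are assumed to follow the existing cudaPackages_12_x pattern.
final: prev: {
  cudaPackages = final.cudaPackages_12_4;   # or cudaPackages_12_8 to test the update
}
```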
| In addition to Connor's suggestions, can you check what the output is when you run cudaPackages.saxpy? | 11:26:55 |
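cudaPackages.saxpy is the small GPU smoke test shipped in nixpkgs; something like the shell below is one way to run it (a sketch, assuming an unfree-enabled nixpkgs and a working NVIDIA driver on the host). Enter the shell with `nix-shell` and run `saxpy`; if the driver or library setup is broken it fails with a CUDA error, which is the diagnostic being asked for here.

```nix
# shell.nix sketch for the CUDA smoke test: `nix-shell`, then run `saxpy`.
with import <nixpkgs> { config.allowUnfree = true; };
mkShell { packages = [ cudaPackages.saxpy ]; }
```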
| Maybe the merge of this PR should happen shortly after the merge of the ROCm update in #367695, so we don't do massive rebuilds twice? | 12:12:00 |
| * Maybe the merge of this PR should happen shortly after the merge of the ROCm update in #367695, so we don't do massive rebuilds twice? https://github.com/NixOS/nixpkgs/pull/367695 | 12:12:17 |
| 7 Mar 2025 | ||
| Hey all, first of all thank you for your work, last time I tried to use any cuda-related programs and services I had to give up because this joint effort had not been set up. I am just wondering if I am doing something wrong when trying to set up llama-cpp and open-webui on my NixOS machine. I've set up the nix-community cache (and ollama with CUDA support installs fine in any case), but either enabling nixpkgs.config.cudaSupport or overwriting e.g. llama-cpp's package with `services.llama-cpp.package = pkgs.overwrite { config.cudaSupport = true; config.rocmSupport = false; } | 13:12:26 | |
| * Hey all, first of all thank you for your work; last time I tried to use any CUDA-related programs and services I had to give up because this joint effort had not been set up. I am just wondering if I am doing something wrong when trying to set up llama-cpp and open-webui on my NixOS machine. I've set up the nix-community cache (and ollama with CUDA support installs fine in any case), but neither enabling nixpkgs.config.cudaSupport nor overwriting e.g. llama-cpp's package with `services.llama-cpp.package = pkgs.overwrite { config.cudaSupport = true; config.rocmSupport = false; }` just downloads and installs the appropriate packages; both lead to extremely long build times. Are these packages (llama-cpp and open-webui, of which I think onnxruntime takes the longest) just not built in the community cache? | 13:13:54 |
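For context, `pkgs.overwrite` is not a real nixpkgs function; the two common patterns for getting a CUDA-enabled llama-cpp are sketched below. Only the global nixpkgs.config.cudaSupport route matches what the nix-community cache builds against, so the per-package override may still compile locally. Attribute and option names are the usual ones, but verify them against your nixpkgs revision.

```nix
# Sketch: two ways to get a CUDA-enabled llama-cpp for the NixOS service.
{ pkgs, ... }:
{
  services.llama-cpp.enable = true;

  # Option 1: per-package override (assumes llama-cpp exposes a cudaSupport flag);
  # this may produce a different derivation than the cache, so expect a local build.
  services.llama-cpp.package = pkgs.llama-cpp.override { cudaSupport = true; };

  # Option 2: flip CUDA on for all of nixpkgs; this is what the nix-community
  # Hydra builds against, so it is the cache-friendly route.
  # nixpkgs.config.cudaSupport = true;
}
```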
| Let's see | 13:15:53 | |
| https://hydra.nix-community.org/job/nixpkgs/cuda/llama-cpp.x86_64-linux | 13:15:55 | |
| https://hydra.nix-community.org/job/nixpkgs/cuda/onnxruntime.x86_64-linux | 13:16:26 | |
| open-webui apparently wasn't added to the release-cuda.nix file yet: https://hydra.nix-community.org/job/nixpkgs/cuda/open-webui.x86_64-linux | 13:17:10 |
| As for onnxruntime and llama-cpp, let's compare the hashes in your llama-cpp and the one reported by hydra | 13:18:20 | |
| I am on x86_64, nixos-unstable with flakes, with an RTX 3060 Ti and the following substituters: | 13:19:29 |
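The substituter list itself did not survive the export; for reference, a typical nix-community setup looks roughly like this. It is a sketch only, and the trusted key is deliberately left as a placeholder -- copy the real value from the nix-community documentation.

```nix
# Sketch of a substituter configuration for the nix-community cache;
# the trusted key below is a placeholder, not a real value.
nix.settings = {
  substituters = [
    "https://cache.nixos.org"
    "https://nix-community.cachix.org"
  ];
  trusted-public-keys = [
    "nix-community.cachix.org-1:<key from the nix-community docs>"
  ];
};
```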
| Thank you for your quick answer | 13:20:01 | |
| I think onnxruntime is a dependency of open-webui and not llama-cpp; open-webui itself probably (?) does not need CUDA support | 13:20:45 |
| services.llama-cpp.package has the value "«derivation /nix/store/dhqdwqp6akr6h6f1k3rz190m3syrv6iy-llama-cpp-4731.drv»" | 13:23:06 |
| Let's try `nix path-info --override-input nixpkgs github:NixOS/nixpkgs/1d2fe0135f360c970aee1d57a53f816f3c9bddae --derivation .#nixosConfigurations.$(hostname).config.services.llama-cpp.package` to make it comparable with https://hydra.nix-community.org/build/3552955#tabs-buildinputs | 13:24:11 |
| I'd maybe not focus on these concerns; the expert hours are arguably more expensive than the rebuild costs | 13:25:27 |
| (still pending =) | 13:25:51 | |
| Wait a minute, I am slightly confused, as llama-cpp seems to actually have CUDA support now that I rebuilt a couple of minutes ago. It just does not use my GPU when running inference, even though it reports it as visible and usable. Maybe a configuration mistake on my side (although I am using the default NixOS service). I'll look into open-webui and onnxruntime now... | 13:27:31 |
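One common reason a CUDA-enabled llama.cpp leaves the GPU idle is that no layers are offloaded by default; llama-server needs an explicit --n-gpu-layers flag. Whether the NixOS module exposes this via extraFlags depends on the module version, so take the sketch below as an assumption to verify rather than the fix for this particular setup.

```nix
# Sketch: ask llama-server to offload model layers to the GPU. Assumes the
# NixOS module has an extraFlags option (check services.llama-cpp in your nixpkgs).
services.llama-cpp.extraFlags = [ "--n-gpu-layers" "99" ];
```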
| Yes, onnxruntime does recompile, as well as python3.12-torch-2.5.1. I'm checking the hashes now... | 13:33:36 |
| I am definitely building onnxruntime myself even though I get: | |
| Which is the same hash as the hydra build store path | 13:48:48 |
| I get the same hash for pytorch locally and in hydra as well! | 13:56:11 | |
| And if you `nix build --override-input nixpkgs github:NixOS/nixpkgs/9f41a78ead0fbe2197cd4c09b5628060456cd6e3 .#nixosConfigurations.$(hostname).pkgs.onnxruntime`? | 13:59:42 |
| Then I'm building nccl and cudnn-frontend for some reason? | 14:15:15 | |