NixOS CUDA | 290 Members | 57 Servers
CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda
| Sender | Message | Time |
|---|---|---|
| 7 Mar 2025 | ||
| Hey all, first of all thank you for your work; last time I tried to use any CUDA-related programs and services I had to give up because this joint effort had not been set up yet. I am wondering whether I am doing something wrong when trying to set up llama-cpp and open-webui on my NixOS machine. I've set up the nix-community cache (and ollama with CUDA support installs fine in any case), but neither enabling nixpkgs.config.cudaSupport nor overriding e.g. llama-cpp's package with `services.llama-cpp.package = pkgs.llama-cpp.override { cudaSupport = true; rocmSupport = false; }` just downloads and installs the appropriate packages; both lead to extremely long build times. Are these packages (llama-cpp and open-webui, of which I think onnxruntime takes the longest) just not built in the community cache? | 13:13:54 |
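The message above is reaching for the two standard knobs nixpkgs exposes. A minimal sketch of both, assuming the `cudaSupport`/`rocmSupport` override arguments that llama-cpp currently accepts; option names should be checked against the nixpkgs manual linked in the room topic:

```nix
{ pkgs, ... }:
{
  # Global switch: every package that honours config.cudaSupport is rebuilt
  # with CUDA (a mass rebuild unless a CUDA binary cache is configured).
  nixpkgs.config.cudaSupport = true;

  # Per-package alternative: override only llama-cpp's build flags.
  services.llama-cpp.package = pkgs.llama-cpp.override {
    cudaSupport = true;
    rocmSupport = false;
  };
}
```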
| Let's see | 13:15:53 | |
| https://hydra.nix-community.org/job/nixpkgs/cuda/llama-cpp.x86_64-linux | 13:15:55 | |
| https://hydra.nix-community.org/job/nixpkgs/cuda/onnxruntime.x86_64-linux | 13:16:26 | |
| open-webui apparently wasn't added to the release-cuda.nix file yet: https://hydra.nix-community.org/job/nixpkgs/cuda/open-webui.x86_64-linux | 13:17:10 |
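Whether a given store path is in the cache can also be checked without hydra, via the cache's `.narinfo` endpoint. A sketch, assuming the flake attribute layout used later in this thread:

```sh
# Evaluate the output path locally, then ask the cache whether it has it.
path=$(nix eval --raw .#nixosConfigurations.$(hostname).pkgs.open-webui.outPath)
hash=$(basename "$path" | cut -c1-32)   # the 32-char store hash prefix
curl -fsI "https://nix-community.cachix.org/$hash.narinfo" >/dev/null \
  && echo "cached" || echo "not cached"
```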
| As for onnxruntime and llama-cpp, let's compare the hash of your llama-cpp derivation with the one reported by hydra | 13:18:20 |
| I am on x86_64, nixos-unstable with flakes, an RTX 3060 Ti, and the following substituters: … | 13:19:29 |
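For reference, wiring up the nix-community cache on NixOS looks roughly like this; the public keys should be verified against cache.nixos.org and the nix-community documentation rather than copied from here:

```nix
{
  nix.settings = {
    substituters = [
      "https://cache.nixos.org"
      "https://nix-community.cachix.org"
    ];
    trusted-public-keys = [
      "cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY="
      "nix-community.cachix.org-1:mB9FSh9qf2dCimDSUo8Zy7bkq5CX+/rkCWyvRCYg3Fs="
    ];
  };
}
```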
| Thank you for your quick answer | 13:20:01 | |
| I think onnxruntime is a dependency of open-webui and not llama-cpp; open-webui itself probably (?) does not need CUDA support | 13:20:45 |
| `services.llama-cpp.package` has the value "«derivation /nix/store/dhqdwqp6akr6h6f1k3rz190m3syrv6iy-llama-cpp-4731.drv»" | 13:23:06 |
| Let's try `nix path-info --override-input nixpkgs github:NixOS/nixpkgs/1d2fe0135f360c970aee1d57a53f816f3c9bddae --derivation .#nixosConfigurations.$(hostname).config.services.llama-cpp.package` to make it comparable with https://hydra.nix-community.org/build/3552955#tabs-buildinputs | 13:24:11 |
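The idea behind the command: two evaluations that produce the same `.drv` path are substitutable by the same cache entry. A complementary check is to ask the cache directly for the evaluated output path (a sketch using the same flake attribute path):

```sh
# Does nix-community's cache hold the output this configuration evaluates to?
nix path-info --store https://nix-community.cachix.org \
  "$(nix eval --raw .#nixosConfigurations.$(hostname).config.services.llama-cpp.package.outPath)"
```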
| I'd maybe not focus on these concerns; the expert hours are arguably more expensive than the rebuild costs | 13:25:27 |
| (still pending =) | 13:25:51 | |
| Wait a minute, I am slightly confused, as llama-cpp seems to actually have CUDA support now that I rebuilt a couple of minutes ago. It just does not use my GPU when running inference, even though it reports it as visible and usable. Maybe a configuration mistake on my side (although I am using the default NixOS service). I'll look into open-webui and onnxruntime now... | 13:27:31 |
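One possible (unconfirmed) explanation for a CUDA-enabled build that still runs inference on the CPU: llama-cpp only offloads layers to the GPU when asked. With the NixOS module this would look roughly as follows; `extraFlags` and `-ngl` are assumptions to check against the module options and `llama-server --help`:

```nix
{ pkgs, ... }:
{
  services.llama-cpp = {
    enable = true;
    package = pkgs.llama-cpp.override { cudaSupport = true; rocmSupport = false; };
    # Offload (up to) 99 layers to the GPU; without an -ngl flag the default
    # is typically CPU-only inference even in a CUDA-enabled build.
    extraFlags = [ "-ngl" "99" ];
  };
}
```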
| Yes, onnxruntime does recompile, as well as python3.12-torch-2.5.1. I'm checking the hashes now... | 13:33:36 |
| I am definitely building onnxruntime myself, even though the derivation path I get locally has the same hash as the hydra build store path | 13:48:48 |
| I get the same hash for pytorch locally and in hydra as well! | 13:56:11 | |
| And if you run `nix build --override-input nixpkgs github:NixOS/nixpkgs/9f41a78ead0fbe2197cd4c09b5628060456cd6e3 .#nixosConfigurations.$(hostname).pkgs.onnxruntime`? | 13:59:42 |
| Then I'm building nccl and cudnn-frontend for some reason? | 14:15:15 | |
| Well this certainly shouldn't be happening if the hashes indeed match | 14:21:40 | |
| Which hydra eval did you refer to, can you link it? | 14:22:06 |
| Sorry, I am back now. It seems that my setup had complicated things: I was trying stuff on a laptop while the actual setup was on another host (with the actual GPU), but I did use the correct hostname for the workstation, which should lead to the same build (I mean, that is the whole point of Nix?). Both systems are x86_64. However, I was also trying both globally enabled cudaSupport and package-overridden cudaSupport, which might have led to me making a mistake, I don't know. All I can say is that the override-input build above now just downloads onnxruntime from the cache, which is the expected behaviour. I'm going to check without the overridden input and pytorch again, and then the whole system. | 15:27:49 |
| Building without the overridden nixpkgs input forces a rebuild (I used just `nix build .#nixosConfigurations.$(hostname).pkgs.onnxruntime`) | 15:29:00 |
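This is consistent with the flake's locked nixpkgs revision differing from the one hydra evaluated: different rev, different `.drv` hashes, hence a local rebuild. The locked rev can be read out and compared with the build's "Inputs" tab (a sketch, assuming `jq` is available):

```sh
# Which nixpkgs revision is this flake actually pinned to?
nix flake metadata --json | jq -r '.locks.nodes.nixpkgs.locked.rev'
# Compare with the rev on e.g. https://hydra.nix-community.org/build/3534138#tabs-buildinputs
```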
| For earlier, I meant the derivation store path of https://hydra.nix-community.org/build/3297277#tabs-details | 15:39:13 | |
| Ok, python3.12-torch-2.5.1 is fetched from the community cache again iff I override the nixpkgs input to the same rev as in https://hydra.nix-community.org/build/3534138#tabs-buildinputs | 15:43:38 |
| As in `nix build --override-input nixpkgs github:NixOS/nixpkgs/e9b0ff70ddc61c42548501b0fafb86bb49cca858 .#nixosConfigurations.$(hostname).pkgs.python3Packages.pytorch` | 15:43:55 |
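To get the same cache hits without passing `--override-input` on every build, the flake input could be pinned to a revision hydra has built; a sketch, using the rev from the message above (any rev from a recent cuda jobset eval would do):

```nix
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/e9b0ff70ddc61c42548501b0fafb86bb49cca858";
}
```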