| 7 Mar 2025 |
mdietrich | Hey all, first of all thank you for your work; the last time I tried to use any CUDA-related programs and services I had to give up because this joint effort had not been set up yet. I am just wondering if I am doing something wrong when trying to set up llama-cpp and open-webui on my NixOS machine. I've set up the nix-community cache (and ollama with CUDA support installs fine in any case), but neither enabling nixpkgs.config.cudaSupport nor overriding e.g. llama-cpp's package with `services.llama-cpp.package = pkgs.llama-cpp.override { cudaSupport = true; rocmSupport = false; }` just downloads and installs the appropriate packages; both lead to extremely long build times. Are these packages (llama-cpp and open-webui, of which I think onnxruntime takes the longest) just not built in the community cache? | 13:13:54 |
SomeoneSerge (back on matrix) | Let's see | 13:15:53 |
SomeoneSerge (back on matrix) | https://hydra.nix-community.org/job/nixpkgs/cuda/llama-cpp.x86_64-linux | 13:15:55 |
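For reference, a minimal sketch of the per-package override mdietrich describes, assuming the current `services.llama-cpp` NixOS module and the nix-community Cachix endpoint (the trusted key below is the published nix-community one; verify it against the project's documentation):

```nix
{ pkgs, ... }:
{
  # Pull prebuilt CUDA artifacts from the nix-community cache instead of
  # compiling locally. Substitution only kicks in if the derivation hash
  # matches exactly what the community Hydra built.
  nix.settings = {
    substituters = [ "https://nix-community.cachix.org" ];
    trusted-public-keys = [
      "nix-community.cachix.org-1:mB9FSh9qf2dCimDSUo8Zy7bkq5CX+/rkCWyvRCYg3Fs="
    ];
  };

  services.llama-cpp = {
    enable = true;
    # Per-package override: only llama-cpp gets CUDA, without flipping
    # nixpkgs.config.cudaSupport globally (which would rebuild far more).
    package = pkgs.llama-cpp.override {
      cudaSupport = true;
      rocmSupport = false;
    };
  };
}
```

Note that such an override is only served from the cache when it reproduces the exact derivation the community Hydra built; any difference in override arguments or nixpkgs revision changes the store hash and forces a local build, which is consistent with the long build times described above.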