NixOS CUDA | 297 Members | 59 Servers
CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda
| Sender | Message | Time |
|---|---|---|
| 2 Dec 2024 | ||
| * TR system: | 14:46:30 | |
| this is the latest ucode for amd on nixpkgs master | 14:52:39 | |
| so 0x19 is family 25 | 14:53:59 | |
| and for the model you probably have to bitwise-OR your model with 0xa0 if it is > 17 | 14:54:27 | |
| Ok, so anyway the issues are not a problem of the python package then | 15:11:51 | |
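The family/model arithmetic above can be sketched in shell. This is a hedged illustration of AMD's documented extended family/model encoding from CPUID leaf 1 EAX; the example EAX value is illustrative (a typical family-0x19 part), not taken from the conversation:

```shell
# Hedged sketch: decoding AMD family/model from a CPUID leaf-1 EAX value.
eax=$((0x00A20F10))                     # hypothetical example reading
base_family=$(( (eax >> 8)  & 0xF ))
ext_family=$((  (eax >> 20) & 0xFF ))
base_model=$((  (eax >> 4)  & 0xF ))
ext_model=$((   (eax >> 16) & 0xF ))
# A base family of 0xF means "add the extended family": 0xF + 0xA = 0x19 (25).
family=$(( base_family == 15 ? base_family + ext_family : base_family ))
# The extended model nibble is OR-ed in as the high bits of the model.
model=$(( (ext_model << 4) | base_model ))
printf 'family 0x%x (%d), model 0x%x\n' "$family" "$family" "$model"
```

For this example the result is family 0x19 (25), model 0x21, which matches the "0x19 is family 25" reading above.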
| 4 Dec 2024 | ||
| Anyone planning to attend PlanetNix https://www.socallinuxexpo.org/scale/22x/events/planet-nix ? Looks like CFP is still open | 02:21:01 | |
| I probably will; I'm also planning to submit two talks | 04:38:33 | |
| 6 Dec 2024 | ||
| Does anyone have a NixOS system they recommend using to test eval performance? | 05:49:07 | |
| Ideally something which takes on the order of 30s or so to eval | 05:51:25 | |
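For the eval-performance question, a few ways to get a ballpark measurement, assuming a flake-based setup (the attribute names below are placeholders, not from the conversation):

```shell
# Evaluate a full NixOS system without building it; a large config
# typically lands in the tens of seconds:
time nix eval --raw \
  '.#nixosConfigurations.myhost.config.system.build.toplevel.drvPath'

# Or evaluate a large chunk of nixpkgs as a smoke test:
time nix-env -f '<nixpkgs>' -qa --json > /dev/null

# NIX_SHOW_STATS=1 prints evaluator statistics (thunks, allocations, GC)
# for deeper analysis than wall-clock time alone:
NIX_SHOW_STATS=1 nix-instantiate '<nixpkgs>' -A hello > /dev/null
```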
| 7 Dec 2024 | ||
| In reply to @connorbaker:matrix.org: Working on cppnix? | 01:40:03 | |
| Kind of? More like preliminary work for a talk I plan to give at Planet Nix | 06:52:28 | |
| Looking forward to watching the recording :) | 12:28:11 | |
| 8 Dec 2024 | ||
| Hi, I'm attempting to upstream my Nix derivation for exllamav2 from https://github.com/BatteredBunny/nix-ai-stuff/blob/main/pkgs/exllamav2.nix. For some reason it's complaining about CUDA_HOME being missing even though I'm specifying it, which I'm kind of confused about. I thought maybe I would replace torch with torchWithCuda, but then I get some mysterious error which I don't get in the flake. Anyone had issues with anything similar? Current attempt for anyone curious: https://gist.github.com/BatteredBunny/2212ac469f07244d954bf556f128cb07 | 16:42:14 | |
| Pretty sure the biggest difference between the flake and my upstreaming attempt is that in the flake I have allowUnfree and cudaSupport set to true, but those options should carry over (I assume), as I also have them enabled in my NixOS config and I'm building with --impure | 16:50:40 | |
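A hedged note on the carryover theory: with flakes, `--impure` lets nixpkgs read `NIXPKGS_ALLOW_UNFREE` from the environment, but it does not import the `nixpkgs.config` from your system configuration, so `cudaSupport` set there would not carry over. The attribute names below are placeholders:

```shell
# allowUnfree can be passed via env var together with --impure:
NIXPKGS_ALLOW_UNFREE=1 nix build --impure .#exllamav2

# cudaSupport has no env-var equivalent; it has to be set where nixpkgs
# is imported, e.g. (Nix, shown as a comment since this is a shell sketch):
#   import nixpkgs { config = { allowUnfree = true; cudaSupport = true; }; }

# To see what the failing build actually observed:
nix log .#exllamav2 | grep -i cuda
```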
| 9 Dec 2024 | ||
| It might just want some more components than just nvcc and cudart. Also, off the top of my head, not sure which outputs are propagated into the symlinkJoin | 00:11:43 | |
| Could you publish the full logs? | 00:11:56 | |
| is there a relationship between cuda and the open nvidia kmod? | 03:40:17 | |
| because my cuda things stopped working some time after migrating to 24.11 | 03:40:27 | |
| though nvidia-smi is working | 03:40:47 | |
| but ollama and wyoming-faster-whisper can't init cuda | 03:41:08 | |
| will try to drop hardening next, as usual 😄 | 03:42:19 | |
| ok, DevicePolicy related 🙂 | 03:46:59 | |
| ok, apparently not | 15:44:08 | |
| It seems like nvidia_uvm doesn't get loaded at boot anymore | 15:44:31 | |
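If `nvidia_uvm` really is the culprit, a quick way to confirm and work around it; the NixOS option in the comment is the usual persistent fix, offered here as a suggestion rather than a confirmed diagnosis:

```shell
# CUDA user-space (ollama, wyoming-faster-whisper, ...) needs nvidia_uvm,
# while nvidia-smi only needs the base nvidia module, which explains
# nvidia-smi working while CUDA init fails.
lsmod | grep -q '^nvidia_uvm' && echo loaded || echo missing

# One-off workaround:
sudo modprobe nvidia_uvm

# Persistent fix in the NixOS configuration (Nix, as a comment):
#   boot.kernelModules = [ "nvidia_uvm" ];
```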