
NixOS CUDA

CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda



7 Dec 2024
@ss:someonex.net SomeoneSerge (back on matrix)
In reply to @connorbaker:matrix.org: "Does anyone have a NixOS system they recommend using to test eval performance?"
Working on cppnix?
[01:40:03]
@connorbaker:matrix.org connor (he/him): Kind of? More like preliminary work for a talk I plan to give at Nix Planet [06:52:28]
@ss:someonex.net SomeoneSerge (back on matrix): Looking forward to watching the recording :) [12:28:11]
8 Dec 2024
@kaya:catnip.ee kaya 𖤐: Hi, I'm attempting to upstream my Nix derivation for exllamav2 from https://github.com/BatteredBunny/nix-ai-stuff/blob/main/pkgs/exllamav2.nix. For some reason it's complaining about CUDA_HOME being missing even though I'm specifying it, which I'm kind of confused about. I thought maybe I would replace torch with torchWithCuda, but then I get some mysterious error which I don't get in the flake. Has anyone had issues with anything similar? Current attempt for anyone curious: https://gist.github.com/BatteredBunny/2212ac469f07244d954bf556f128cb07 [16:42:14]
@kaya:catnip.ee kaya 𖤐: Pretty sure the biggest difference between the flake and my upstreaming attempt is that in the flake I have allowUnfree and cudaSupport set to true, but those options should carry over (I assume), as I also have them enabled in my NixOS config and I'm building with --impure [16:44:00]
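
For context: allowUnfree and cudaSupport are nixpkgs config options, and they apply to a particular nixpkgs evaluation, so values set in a NixOS configuration do not automatically carry over to a flake's own import of nixpkgs. A minimal sketch of setting them explicitly on the import (the option names are real; the surrounding wiring is illustrative):

    # Importing nixpkgs with CUDA enabled; the same options can also be set
    # system-wide via nixpkgs.config in a NixOS configuration.
    import nixpkgs {
      system = "x86_64-linux";
      config = {
        allowUnfree = true;   # the CUDA toolkit and libraries are unfree
        cudaSupport = true;   # build packages such as torch against CUDA
      };
    }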
9 Dec 2024
@ss:someonex.net SomeoneSerge (back on matrix)
"For some reason it's complaining about CUDA_HOME being missing even though I'm specifying it, which I'm kind of confused about"
It might just want some more components than just nvcc and cudart. Also, off the top of my head, I'm not sure which outputs are propagated into the symlinkJoin.
[00:11:43]
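
A rough sketch of the kind of thing being described: joining several cudaPackages components (not just nvcc and cudart) and pointing CUDA_HOME at the result. The component list, and how exllamav2's build actually consumes CUDA_HOME, are assumptions rather than a confirmed fix:

    { symlinkJoin, cudaPackages }:

    symlinkJoin {
      name = "cuda-home";
      # Which components (and which of their outputs) are really needed is a
      # guess; nvcc + cudart alone is often not enough for extension builds.
      paths = with cudaPackages; [
        cuda_nvcc
        cuda_cudart
        cuda_cccl      # CUB/Thrust headers many CUDA extensions expect
        libcublas
        libcusparse
      ];
    }

The package derivation would then set CUDA_HOME to that store path (for example via env.CUDA_HOME = "${cuda-home}") before the extension build runs.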
@ss:someonex.net SomeoneSerge (back on matrix): Could you publish the full logs? [00:11:56]
@hexa:lossy.network hexa: is there a relationship between CUDA and the open NVIDIA kmod? [03:40:17]
@hexa:lossy.network hexa: because my CUDA things stopped working some time after migrating to 24.11 [03:40:27]
@hexa:lossy.network hexa: though nvidia-smi is working [03:40:47]
@hexa:lossy.network hexa: but ollama and wyoming-faster-whisper can't init CUDA [03:41:08]
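
One concrete knob that ties the two together is hardware.nvidia.open, which selects between the open and proprietary NVIDIA kernel modules; whether it has anything to do with this breakage is not established in the thread. A minimal sketch for pinning the choice explicitly while bisecting an upgrade (both options below are real NixOS/nixpkgs settings):

    # NixOS configuration sketch: pin the kernel-module flavour explicitly while
    # bisecting; treating the open kmod as the culprit is only a hypothesis.
    {
      hardware.nvidia.open = false;        # true selects the open kernel modules
      nixpkgs.config.cudaSupport = true;
    }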
@hexa:lossy.network hexa: will try to drop hardening next, as usual 😄 [03:42:19]
@hexa:lossy.network hexa: ok, DevicePolicy related 🙂 [03:46:59]
@hexa:lossy.network hexa: ok, apparently not [15:44:08]
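
DevicePolicy and DeviceAllow are standard systemd sandboxing directives, and a quick way to rule them out is to force device filtering off for the affected unit and retest. A sketch, assuming an ollama systemd unit and that its NixOS module applies device hardening at all (as noted above, hexa ultimately concluded DevicePolicy was not the cause here):

    { lib, ... }:
    {
      # Temporarily disable device filtering on the suspect unit to rule out
      # systemd hardening; the unit name (ollama) is an assumption.
      systemd.services.ollama.serviceConfig = {
        PrivateDevices = lib.mkForce false;
        DevicePolicy = lib.mkForce "auto";  # with no DeviceAllow, "auto" permits all devices
        DeviceAllow = lib.mkForce [ ];
      };
    }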


