!eWOErHSaiddIbsUNsJ:nixos.org

NixOS CUDA

288 Members
CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda



2 Dec 2024
[14:52:39] hexa (UTC+1): this is the latest ucode for amd on nixpkgs master
[14:53:59] hexa (UTC+1): so 0x19 is family 25
[14:54:27] hexa (UTC+1): and for the model you probably have to binary-or your model with 0xa0 if it is > 17
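hexa's arithmetic (family 0x19 = 25, OR-ing the model with 0xa0) falls out of how CPUID function 1 packs its fields into EAX; a minimal Python sketch of the standard decode, where the example signature 0x00A20F12 is hypothetical:

```python
def decode_cpuid_eax(eax):
    """Decode CPUID function 1 EAX into (family, model, stepping).

    Layout: stepping [3:0], base model [7:4], base family [11:8],
    extended model [19:16], extended family [27:20]. When the base
    family is 0xF, family = base + extended, and the full model is
    (ext_model << 4) | base_model -- which, for parts whose extended
    model nibble is 0xA, is exactly an OR with 0xa0.
    """
    stepping = eax & 0xF
    base_model = (eax >> 4) & 0xF
    base_family = (eax >> 8) & 0xF
    ext_model = (eax >> 16) & 0xF
    ext_family = (eax >> 20) & 0xFF
    if base_family == 0xF:
        family = base_family + ext_family
        model = (ext_model << 4) | base_model
    else:
        family = base_family
        model = base_model
    return family, model, stepping

# Hypothetical Zen 3 signature: family 0xF + 0xA = 0x19 (25), model 0x21
print(decode_cpuid_eax(0x00A20F12))  # (25, 33, 2)
```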
[15:11:51] Gaétan Lepage: Ok, so anyway the issues are not a problem of the python package then
4 Dec 2024
[02:21:01] Kevin Mittman (UTC-8): Anyone planning to attend PlanetNix https://www.socallinuxexpo.org/scale/22x/events/planet-nix ? Looks like the CFP is still open
[04:38:33] connor (burnt/out) (UTC-8): I probably will; I'm also planning to submit two talks
6 Dec 2024
[00:01:17] vannagamma joined the room.
[05:49:07] connor (burnt/out) (UTC-8): Does anyone have a NixOS system they recommend using to test eval performance?
[05:51:25] connor (burnt/out) (UTC-8): Ideally something which takes on the order of 30s or so to eval
[21:17:32] kaya 𖀐 changed their profile picture.
7 Dec 2024
[01:40:03] SomeoneSerge (back on matrix), in reply to connor (burnt/out):
> Does anyone have a NixOS system they recommend using to test eval performance?
Working on cppnix?
[06:52:28] connor (burnt/out) (UTC-8): Kind of? More like preliminary work for a talk I plan to give at Nix Planet
[12:28:11] SomeoneSerge (back on matrix): Looking forward to watching the recording :)
8 Dec 2024
[16:42:14] kaya 𖀐: Hi, I'm attempting to upstream my Nix derivation for exllamav2 from https://github.com/BatteredBunny/nix-ai-stuff/blob/main/pkgs/exllamav2.nix. For some reason it's complaining about CUDA_HOME being missing even though I'm specifying it, which I'm kind of confused about. I thought maybe I would replace torch with torchWithCuda, but then I get some mysterious error which I don't get in the flake. Anyone had issues with anything similar? Current attempt for anyone curious: https://gist.github.com/BatteredBunny/2212ac469f07244d954bf556f128cb07
[16:50:40] kaya 𖀐 (edited): Pretty sure the biggest difference between the flake and my upstreaming attempt is that in the flake I have allowUnfree and cudaSupport as true, but those options should carry over (I assume) as I also have those enabled in my NixOS config and I'm building with --impure
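Whether those flags carry over depends on how nixpkgs is imported: allowUnfree and cudaSupport are read from the `config` argument of the nixpkgs import (or, impurely, from ~/.config/nixpkgs/config.nix), not from the NixOS system configuration. A minimal sketch of passing them explicitly when evaluating a checkout; whether --impure picks up the user config file depends on the evaluation mode:

```nix
# Sketch: hand the flags to the nixpkgs import itself instead of
# relying on the system configuration to carry over.
import <nixpkgs> {
  config = {
    allowUnfree = true;
    cudaSupport = true;
  };
}
```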
9 Dec 2024
[00:11:43] SomeoneSerge (back on matrix), in reply to kaya 𖀐:
> For some reason its complaining about CUDA_HOME being missing even though im specifying it which im kind of confused
It might just want more components than just nvcc and cudart. Also, off the top of my head, I'm not sure which outputs are propagated into the symlinkJoin.
[00:11:56] SomeoneSerge (back on matrix): Could you publish the full logs?
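The "more components" and "which outputs" points can be sketched in Nix. The attribute names below exist in cudaPackages, but the exact set needed (and which non-default outputs must be listed) is an assumption to be checked against the failing build:

```nix
# Sketch: a CUDA_HOME tree with more than nvcc + cudart.
# symlinkJoin merges exactly the store paths you list, so any
# non-default outputs a build needs (lib/include/static; output
# names vary per package) have to be added to `paths` themselves.
cudaHome = pkgs.symlinkJoin {
  name = "cuda-home";
  paths = with pkgs.cudaPackages; [
    cuda_nvcc
    cuda_cudart
    cuda_cccl    # headers some CMake/setup.py builds expect under CUDA_HOME
    libcublas
  ];
};
# then e.g.: env.CUDA_HOME = "${cudaHome}";
```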
[03:40:17] hexa (UTC+1): is there a relationship between cuda and the open nvidia kmod?
[03:40:27] hexa (UTC+1): because my cuda things stopped working some time after migrating to 24.11
[03:40:47] hexa (UTC+1): though nvidia-smi is working
[03:41:08] hexa (UTC+1): but ollama and wyoming-faster-whisper can't init cuda
[03:42:19] hexa (UTC+1): will try to drop hardening next, as usual 😄
[03:46:59] hexa (UTC+1): ok, DevicePolicy related 🙂
[15:44:08] hexa (UTC+1): ok, apparently not
[15:44:31] hexa (UTC+1) (edited): it seems like nvidia_uvm doesn't get loaded at boot anymore
[15:52:57] SomeoneSerge (back on matrix): https://github.com/NixOS/nixpkgs/issues/334180
[16:03:03] hexa (UTC+1):
fbdcdde Kiskae             2024-05-22 13:46 +0200 308│             # Don't add `nvidia-uvm` to `kernelModules`, because we want
fbdcdde Kiskae             2024-05-22 13:46 +0200 309│             # `nvidia-uvm` be loaded only after `udev` rules for `nvidia` kernel
fbdcdde Kiskae             2024-05-22 13:46 +0200 310│             # module are applied.
fbdcdde Kiskae             2024-05-22 13:46 +0200 311│             #
fbdcdde Kiskae             2024-05-22 13:46 +0200 312│             # Instead, we use `softdep` to lazily load `nvidia-uvm` kernel module
fbdcdde Kiskae             2024-05-22 13:46 +0200 313│             # after `nvidia` kernel module is loaded and `udev` rules are applied.
fbdcdde Kiskae             2024-05-22 13:46 +0200 314│             extraModprobeConfig = ''
fbdcdde Kiskae             2024-05-22 13:46 +0200 315│               softdep nvidia post: nvidia-uvm
fbdcdde Kiskae             2024-05-22 13:46 +0200 316│             '';
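For a machine where that softdep never fires, one blunt local workaround (running counter to the intent of the upstream comment, so only as a stopgap while the linked issue is open) is to force-load the module from NixOS configuration; boot.kernelModules is a standard option:

```nix
# Stopgap sketch: explicitly load nvidia_uvm at boot, bypassing the
# softdep ordering nixpkgs relies on. May race the nvidia udev rules.
boot.kernelModules = [ "nvidia_uvm" ];
```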


