
NixOS CUDA

211 Members
CUDA packages maintenance and support in nixpkgs | https://github.com/orgs/NixOS/projects/27/ | https://nixos.org/manual/nixpkgs/unstable/#cuda



29 Jul 2024
07:47:57  @kaya:catnip.ee (kaya) changed their profile picture.
14:47:13  @ironbound:hackerspace.pl removed their display name P_Big.
14:47:26  @ironbound:hackerspace.pl removed their profile picture.
30 Jul 2024
05:57:45  srhb: I'm waving around a big hammer here. Does anyone want to save cuda-modules/aliases.nix? 😁 https://github.com/NixOS/nixpkgs/pull/331017
07:06:42  SomeoneSerge (UTC+3), in reply to srhb ("I'm waving around a big hammer here. Does anyone want to save cuda-modules/aliases.nix? 😁 https://github.com/NixOS/nixpkgs/pull/331017"): Yes, we want to save the aliases. I think the solution should be to ensure these attributes are kept lazy, not to remove them.
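
A minimal, self-contained sketch of the "kept lazy" idea under discussion; the names below are illustrative, not the actual contents of cuda-modules/aliases.nix:

    # The deprecation warning is attached to the alias's value, so merely
    # listing attribute names never forces it; only code that actually
    # evaluates the alias sees the warning.
    let
      warn = msg: value: builtins.trace "warning: ${msg}" value;

      cudaPackages = { cudatoolkit = "cudatoolkit-12"; };

      aliases = {
        cudatoolkit_legacy =
          warn "cudatoolkit_legacy is an alias of cudaPackages.cudatoolkit"
            cudaPackages.cudatoolkit;
      };
    in
    {
      # Listing the names stays silent: the values are never forced.
      names = builtins.attrNames aliases;
      # Forcing the value prints the trace and returns "cudatoolkit-12".
      forced = aliases.cudatoolkit_legacy;
    }

Evaluating only the "names" attribute stays silent, while evaluating "forced" prints the warning; that difference is what keeping the attributes lazy buys.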
07:06:57  srhb: Sad. Alright, I'll draft them :)
07:07:11  srhb: The aliases I nuked are still OK to go by now, right?
07:07:36  SomeoneSerge (UTC+3): I need to fetch my laptop :)
07:07:54  srhb: And laziness won't save that torch check, right? (Equality has no choice but to be strict.)
07:09:22  srhb: Though I suppose I could exempt those exact attributes in the torch check. Lots of spooky action at a distance, though.
07:10:30  SomeoneSerge (UTC+3): Ooh, that, the package-set comparison. I forgot it was there.
07:11:48  srhb: I understand why it's there, but I think it should go.
07:12:14  SomeoneSerge (UTC+3): Yeah, the check is really quite a heuristic, actually.
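
The strictness concern can be shown in miniature: == must force both sides, so a check that compares package sets (like the torch check being discussed) evaluates aliased attributes and fires their warnings even if nothing used them directly. A sketch, reusing the hypothetical warn helper from the previous example:

    let
      warn = msg: value: builtins.trace "warning: ${msg}" value;
      cudaPackages = { cudatoolkit = "cudatoolkit-12"; };
      aliasedToolkit = warn "deprecated alias forced" cudaPackages.cudatoolkit;
    in
      # The comparison forces aliasedToolkit, printing the warning even though
      # nothing "used" the alias; laziness cannot help here.
      aliasedToolkit == cudaPackages.cudatoolkit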
07:12:48  SomeoneSerge (UTC+3), in reply to srhb ("The aliases I nuked are still OK to go by now, right?"): Yes, agreed.
07:13:59  srhb: So my preferred course of action would be to a) nuke the old aliases, b) keep the alias infrastructure, and ideally c) remove that torch check, because any aliasing will just reintroduce this problem across all tooling that touches torch, producing warnings that may be completely irrelevant, as they are in this case.
07:19:31  SomeoneSerge (UTC+3): Commented on GitHub.
07:20:20  SomeoneSerge (UTC+3): And thanks a lot for picking up the shovel...
07:26:53  srhb: No problem, thanks for the response :D
18:22:57  Philip Taron (UTC-8): SomeoneSerge (UTC+3): I'm taking a look at your llama.cpp PR. The TODO makes me think that it's actually in draft. Is that the case?
18:23:47  SomeoneSerge (UTC+3): I mean, it's more of a sanity check; I tested this with a bunch of packages in nixpkgs and the closures generally got smaller.
18:24:13  Philip Taron (UTC-8): I generally check closure size with nix path-info. Do you do that, or something else?
18:25:46  Philip Taron (UTC-8): On another topic, I see a lot of build spam when building llama.cpp about "nvcc warning : incompatible redefinition for option 'compiler-bindir', the last value of this option was used." I'd like to remove that. Do you have a pointer to get started there?
18:35:31  Philip Taron (UTC-8), replying to his own question ("I generally check closure size with nix path-info. Do you do that, or something else?"): Using nix path-info results in identical closure sizes.
20:40:53  SomeoneSerge (UTC+3), in reply to Philip Taron's question about the nvcc 'compiler-bindir' warning: Yeah, it's somewhere in the setupCudaHook; I believe connor (he/him) (UTC-5) had actually located the source at some point?
31 Jul 2024
07:57:55  SomeoneSerge (UTC+3): connor (he/him) (UTC-5), you might want to know that https://github.com/NixOS/nixpkgs/pull/318614 exists.
15:21:27  connor (he/him) (UTC-7): Oh hell yeah
19:06:12  Philip Taron (UTC-8): That's a baller PR.
1 Aug 2024
10:45:59  ˈt͡sɛːzaɐ̯, in reply to phirsch ("@SomeoneSerge (UTC+3) @ˈt͡sɛːzaɐ̯ No dice... While ollama (without '-cuda') somehow manages to get GPU serial and VRAM allocation info, it doesn't use the GPU when actually running a model (outputs 'Not compiled with GPU offload support'). And unfortunately, using 'nix run --impure' as above from within a nix shell with 'nvcc' from nixpkgs still fails because it's using nvcc from /usr/local/..."): Weird. I mean, you could build the thing in a container or VM that's actually NixOS, and then pull it to your store from there. But this should really work. I wonder how you're running your nix. As a user? I guess the sandbox is relaxed?
12:51:57  SomeoneSerge (UTC+3), in reply to the same message from phirsch: You do need to build with CUDA support in order to use CUDA.
14:32:46  yorickvp: I'm trying to link something to torch, but it complains:

          > ImportError: /nix/store/kzx58d5pbb78gnv9s4d62f4r46x9waw9-gcc-12.3.0-lib/lib/libstdc++.so.6: version `GLIBCXX_3.4.32' not found (required by /nix/store/q7hlip3anbg4gd4wqa1lwy0jksk25pck-python3.10-torch-2.3.1-lib/lib/libc10.so)

          Why does it use gcc-12.3.0-lib?!
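
The log cuts off before an answer, but the error itself is a libstdc++ version skew: libc10.so from torch needs GLIBCXX_3.4.32, while the libstdc++ that actually gets loaded comes from gcc 12.3.0 and is too old. One hedged sketch of a possible remedy, assuming the consumer should simply be built with a newer stdenv (myPackage is a placeholder, and whether gcc13Stdenv matches the gcc torch was built with depends on the nixpkgs revision in use):

    { pkgs ? import <nixpkgs> { } }:
    # Build the consumer against a newer gcc so the libstdc++ on its runpath
    # provides GLIBCXX_3.4.32.
    pkgs.myPackage.override {
      stdenv = pkgs.gcc13Stdenv;
    }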


