30 Jul 2024 |
SomeoneSerge (utc+3) | In reply to @srhb:matrix.org
I'm waving around a big hammer here. Does anyone want to save cuda-modules/aliases.nix? 😁
https://github.com/NixOS/nixpkgs/pull/331017 Yes we want to save aliases. I think the solution should be to ensure these attributes are kept lazy, not to remove them | 07:06:42 |
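For readers outside the thread: the "kept lazy" idea relies on Nix attribute sets not evaluating a value until it is demanded, so a deprecation warning attached to an alias fires only when someone actually uses it. A minimal sketch of that pattern (the names below are illustrative, not the actual contents of cuda-modules/aliases.nix):

```nix
# Illustrative sketch of lazy aliases. `lib.mapAttrs` builds the alias
# attrset without forcing any value; `lib.warn` emits its message only
# when the aliased attribute is evaluated.
{ lib, cudaPackages }:
lib.mapAttrs
  (oldName: newName:
    lib.warn "cudaPackages.${oldName} is an alias of cudaPackages.${newName}"
      cudaPackages.${newName})
  {
    # old name -> new name (hypothetical entry, for illustration only)
    cudatoolkit_11 = "cudatoolkit";
  }
```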
srhb | Sad. Alright, I'll draft them :) | 07:06:57 |
srhb | The aliases I nuked are still OK to go by now, right? | 07:07:11 |
SomeoneSerge (utc+3) | I need to fetch my laptop:) | 07:07:36 |
srhb | And laziness won't save that torch check, right? (equality has no choice but to be strict) | 07:07:54
srhb | Though I suppose I could exempt those exact attributes in the torch check. Lots of spooky action at a distance though. | 07:09:22 |
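The strictness problem srhb is pointing at can be shown in plain Nix: `?` tests for an attribute name without forcing its value, while `==` must compare every value in both sets, which forces each one. A small illustrative snippet:

```nix
# `?` checks for a name without evaluating its value, so a throwing
# (or warning) alias stays harmless; `==` compares values and would
# force the throw.
let s = { ok = 1; alias = throw "alias was forced"; };
in {
  hasAlias = s ? alias;                   # true; value never evaluated
  # equal = s == { ok = 1; alias = 2; }; # evaluating this line throws
}
```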
SomeoneSerge (utc+3) | Ooh, that, the package set comparison. I forgot it was there | 07:10:30 |
srhb | I understand why it's there, but I think it should go. | 07:11:48 |
SomeoneSerge (utc+3) | Yeah, the check is quite a heuristic actually | 07:12:14
SomeoneSerge (utc+3) | In reply to @srhb:matrix.org The aliases I nuked are still OK to go by now, right? Yes agreed | 07:12:48 |
srhb | So my preferred choice of action would be to a) nuke the old aliases, b) keep the alias infrastructure, and ideally c) remove that torch check, because any aliasing will just reintroduce this problem across all tooling that touches torch, producing warnings that may be completely irrelevant as they are in this case. | 07:13:59 |
SomeoneSerge (utc+3) | Commented on github | 07:19:31 |
SomeoneSerge (utc+3) | And thanks a lot for picking up the shovel... | 07:20:20 |
srhb | No problem, thanks for the response :D | 07:26:53 |
Philip Taron (UTC-8) | SomeoneSerge (UTC+3): I'm taking a look at your llama.cpp PR. The TODO makes me think that it's in draft actually. Is that the case? | 18:22:57 |
SomeoneSerge (utc+3) | I mean it's more of a sanity check, I tested this with a bunch of packages in nixpkgs and generally the closures got smaller | 18:23:47 |
Philip Taron (UTC-8) | I generally check closure size with nix path-info . Do you do that, or something else? | 18:24:13 |
Philip Taron (UTC-8) | On another topic, I see a lot of build spam when building llama.cpp about "nvcc warning : incompatible redefinition for option 'compiler-bindir', the last value of this option was used."
I'd like to remove that. Is there a pointer you have to get started there? | 18:25:46 |
Philip Taron (UTC-8) | In reply to @philiptaron:matrix.org I generally check closure size with nix path-info . Do you do that, or something else? Using nix path-info results in identical closure sizes. | 18:35:31 |
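For anyone following along, a typical closure-size comparison with the nix CLI looks like this (the flake attribute name is illustrative):

```shell
# -S adds each path's closure size, -h prints it human-readable.
nix path-info -Sh .#llama-cpp

# To compare two builds, diff their closures directly:
nix store diff-closures ./result-before ./result-after
```

`nix store diff-closures` also reports per-package version and size deltas, which is often more informative than a single total.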
SomeoneSerge (utc+3) | In reply to @philiptaron:matrix.org On another topic, I see a lot of build spam when building llama.cpp about "nvcc warning : incompatible redefinition for option 'compiler-bindir', the last value of this option was used."
I'd like to remove that. Is there a pointer you have to get started there? Yeah it's somewhere in setupCudaHook; I believe connor (he/him) (UTC-5) had actually located the source at some point? | 20:40:53
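A hedged guess at the mechanism behind that warning: nvcc accepts the host-compiler flag (`--compiler-bindir`, alias `-ccbin`) only once and warns when it sees it twice, e.g. once prepended by the hook and once passed by the build system. The warning is easy to reproduce by hand (paths below are purely illustrative):

```shell
# Passing the host-compiler flag twice reproduces the build spam;
# nvcc keeps the last value and warns about the redefinition.
nvcc -ccbin /path/to/g++ --compiler-bindir /path/to/other-g++ -c kernel.cu
```

So a fix would mean finding which of the two places (the hook's prepended flags or the project's own CMake logic) should stop setting it.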
31 Jul 2024 |
SomeoneSerge (utc+3) | connor (he/him) (UTC-5) you might want to know that https://github.com/NixOS/nixpkgs/pull/318614 exists | 07:57:55 |
connor (he/him) (UTC-7) | Oh hell yeah | 15:21:27 |
Philip Taron (UTC-8) | That's a baller PR. | 19:06:12 |
1 Aug 2024 |
ˈt͡sɛːzaɐ̯ | In reply to @phirsch:matrix.org
@SomeoneSerge (UTC+3) @ˈt͡sɛːzaɐ̯ No dice... While ollama (without '-cuda') somehow manages to get GPU serial and VRAM allocation info, it doesn't use the GPU when actually running a model (outputs 'Not compiled with GPU offload support'). And unfortunately, using 'nix run --impure' as above from within a nix shell with 'nvcc' from nixpkgs still fails because it's using nvcc from /usr/local/...
Weird. I mean, you could build the thing in a container or vm that's actually nixos, and then pull it to your store from there. But this should really work. I wonder how you're running your nix. As user? I guess the sandbox is relaxed? | 10:45:59 |
SomeoneSerge (utc+3) | In reply to @phirsch:matrix.org
@SomeoneSerge (UTC+3) @ˈt͡sɛːzaɐ̯ No dice... While ollama (without '-cuda') somehow manages to get GPU serial and VRAM allocation info, it doesn't use the GPU when actually running a model (outputs 'Not compiled with GPU offload support'). And unfortunately, using 'nix run --impure' as above from within a nix shell with 'nvcc' from nixpkgs still fails because it's using nvcc from /usr/local/...
You do need to build with cuda support in order to use cuda | 12:51:57 |
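As a sketch of what "build with cuda support" means here (the `acceleration` argument matches the nixpkgs ollama expression around the time of this thread, but verify it against your revision):

```nix
let
  pkgs = import <nixpkgs> {
    config.allowUnfree = true;
    config.cudaSupport = true;  # global CUDA toggle for packages that honor it
  };
in
  # Per-package override; the argument name may differ across revisions.
  pkgs.ollama.override { acceleration = "cuda"; }
```

Building on a non-NixOS host additionally needs a working driver shim at runtime (e.g. nixGL or NixOS's nix-ld/opengl wiring), which is the other half of the problem discussed above.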
yorickvp | I'm trying to link something to torch, but it complains
┃ > ImportError: /nix/store/kzx58d5pbb78gnv9s4d62f4r46x9waw9-gcc-12.3.0-lib/lib/libstdc++.so.6: version `GLIBCXX_3.4.32' not found (required by /nix/store/q7hlip3anbg4gd4wqa1lwy0jksk25pck-python3.10-torch-2.3.1-lib/lib/libc10.so)
why does it use gcc-12.3.0-lib?! | 14:32:46
yorickvp | all I can find is the line
-- Looking for a CUDA host compiler - /nix/store/vk12rv84vs98bv3wi4jgbpi59lrs3ymj-gcc-wrapper-12.3.0/bin/c++
in the build logs | 14:34:20 |
yorickvp | okay, that would be because setup-cuda-hook sets that. but it does have -L/nix/store/bn7pnigb0f8874m6riiw6dngsmdyic1g-gcc-13.3.0-lib/lib -L/nix/store/kzx58d5pbb78gnv9s4d62f4r46x9waw9-gcc-12.3.0-lib/lib | 14:44:23 |
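A way to confirm that diagnosis from the logs above (store paths copied from the error message; the key fact is that the `GLIBCXX_3.4.32` symbol version first ships with the libstdc++ of GCC 13, so a GCC 12 libstdc++ cannot satisfy it):

```shell
# List the newest GLIBCXX symbol versions this libstdc++ exports;
# for gcc-12.3.0-lib the list stops before GLIBCXX_3.4.32.
strings /nix/store/kzx58d5pbb78gnv9s4d62f4r46x9waw9-gcc-12.3.0-lib/lib/libstdc++.so.6 \
  | grep '^GLIBCXX_' | sort -V | tail -n3

# See which libstdc++ the dynamic loader actually resolves for the
# failing library (first -L/rpath entry wins).
ldd /nix/store/q7hlip3anbg4gd4wqa1lwy0jksk25pck-python3.10-torch-2.3.1-lib/lib/libc10.so \
  | grep stdc++
```

If the gcc-12 path wins the lookup order while torch was built against gcc 13's libstdc++, exactly this ImportError results.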
SomeoneSerge (utc+3) | Are you using multiple nixpkgs revisions? | 14:46:52 |
SomeoneSerge (utc+3) | Ah, no, I guess the second one is propagated by something else | 14:47:07 |