| 9 Jun 2024 |
connor (he/him) | It works without compile, just curious if this is a problem with the PR | 20:50:43 |
connor (he/him) | I'll try again with master to make sure it's not a regression | 20:51:08 |
connor (he/him) | Cool, fails on master too | 20:51:57 |
Gaétan Lepage | Nice ^^ | 20:53:19 |
Gaétan Lepage | Is it OK for me to merge now? | 20:53:44 |
Gaétan Lepage | Oh, I've just seen your message | 20:53:58 |
| 10 Jun 2024 |
| NixOS Moderation Bot unbanned @jonringer:matrix.org. | 00:17:14 |
Gaétan Lepage | [attached image: clipboard.png] | 06:44:40 |
Gaétan Lepage | Haha botorch has probably taken ~11h but it succeeded X) | 06:44:56 |
| shekhinah set their display name to yaldebaoth. | 11:02:59 |
| shekhinah changed their display name from yaldebaoth to yaldabaoth. | 11:03:43 |
connor (he/him) | Gaétan Lepage: did you mention there was a PR or something merged to disable the checkPhase or test suite for botorch, or did I misunderstand? | 14:01:56 |
connor (he/him) | On another note, has anyone built elpa (https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/libraries/elpa/default.nix) successfully with CUDA support? I let it run for like 20h and it was still building. Seems to compile four object files at a time? | 14:04:03 |
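If the slowdown is limited make-level parallelism rather than raw compile time, one hedged thing to try (assuming elpa's build system honors the standard nixpkgs knobs and this is not already set) is forcing parallel building via an overlay:

```nix
# Hedged sketch: assumes elpa's Makefile respects `make -j$NIX_BUILD_CORES`.
final: prev: {
  elpa = prev.elpa.overrideAttrs (old: {
    # Let stdenv pass -j$NIX_BUILD_CORES to make instead of a serial build.
    enableParallelBuilding = true;
  });
}
```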
Gaétan Lepage | In reply to @connorbaker:matrix.org Gaétan Lepage: did you mention there was a PR or something merged to disable the checkPhase or test suite for botorch, or did I misunderstand? No, I have not done anything. I was actually able to build it just fine from master earlier today. | 14:29:01 |
hexa | Gaétan Lepage: have you considered pulling this patch for tensorflow-bin? https://github.com/tensorflow/tensorflow/issues/58073#issuecomment-2097055553 | 20:58:34 |
| 11 Jun 2024 |
teto | when using localai 2.15 from unstable and even after a reboot I get ggml_cuda_init: failed to initialize CUDA: CUDA driver is a stub library. It's a bit random, but if anyone has a tip, I'll take it. nvidia-smi output looks fine | 00:25:38 |
Gaétan Lepage | In reply to @hexa:lossy.network Gaétan Lepage: have you considered pulling this patch for tensorflow-bin? https://github.com/tensorflow/tensorflow/issues/58073#issuecomment-2097055553 This looks like it could work!
However, how do you apply a patch to a wheel-type python derivation? | 06:38:47 |
Gaétan Lepage | What phase of the buildPythonPackage script should I hook it into? | 06:39:02 |
Gaétan Lepage | I tried patches = [ but it does not work | 06:39:15 |
Gaétan Lepage | I am packaging this: https://github.com/EricLBuehler/mistral.rs?tab=readme-ov-file#installation-and-build
You can see that it supports several variations for building (CUDA, Metal, MKL...)
-> What should be the approach? Adding cudaSupport? metalSupport? mklSupport? | 07:01:41 |
| kaya 𖤐 changed their profile picture. | 08:03:48 |
hexa | In reply to @glepage:matrix.org This looks like it could work!
However, how do you apply a patch to a wheel-type python derivation? likely in postInstall 😕 | 11:58:18 |
hexa | curses | 11:59:30 |
Gaétan Lepage | Ok, but can I use fetchpatch though? | 12:02:43 |
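For a wheel-based buildPythonPackage, `patches = [ ... ]` never fires because a wheel skips the usual unpack and patch phases, so the patch has to be applied to the installed files instead, as hexa suggests. A hedged sketch of combining fetchpatch with postInstall; the url, hash, and target directory are illustrative placeholders, not the real patch from the linked issue:

```nix
# Hedged sketch: url/hash and the patched subdirectory are placeholders.
tensorflow-bin.overridePythonAttrs (old: {
  postInstall = (old.postInstall or "") + ''
    # fetchpatch output can still be applied by hand once the wheel
    # contents have been copied into $out.
    patch -p1 -d $out/${python.sitePackages}/tensorflow \
      < ${fetchpatch {
          url = "...";   # the patch from the upstream issue
          hash = "...";
        }}
  '';
})
```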
SomeoneSerge (matrix works sometimes) | connor (he/him) (UTC-5) IIRC you brought up setting legacy (FindCUDA&c) variables from the setup hooks. I think we should set them, and we should put that logic behind a guard (e.g. findCudaCmakeSupport=true), just as we should guard the current logic (e.g. findCudatoolkitCmakeSupport=true). We should disable the legacy by default. We should only set cmake flags when the cmake hook is actually used or when cmake flags are explicitly requested. | 13:19:22 |
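The guards described above could look roughly like the following; `findCudaCmakeSupport` and `findCudatoolkitCmakeSupport` are the proposed names from the message, not existing nixpkgs options, so this is only a sketch of the shape:

```nix
# Sketch of the proposed guards; both flag names are hypothetical.
{ lib
, config
, findCudaCmakeSupport ? true          # modern FindCUDAToolkit variables
, findCudatoolkitCmakeSupport ? false  # legacy FindCUDA, off by default
, cudaPackages
}:
{
  # Only emitted when the cmake hook is in use (or flags are requested).
  cmakeFlags =
    lib.optionals findCudaCmakeSupport [
      (lib.cmakeFeature "CUDAToolkit_ROOT" "${cudaPackages.cuda_nvcc}")
    ]
    ++ lib.optionals findCudatoolkitCmakeSupport [
      (lib.cmakeFeature "CUDA_TOOLKIT_ROOT_DIR" "${cudaPackages.cudatoolkit}")
    ];
}
```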
SomeoneSerge (matrix works sometimes) | In reply to @keiichi:matrix.org when using localai 2.15 from unstable and even after a reboot I get ggml_cuda_init: failed to initialize CUDA: CUDA driver is a stub library. It's a bit random but if anyone has a tip, I take it. nvidia-smi output looks fine LD_DEBUG=libs | 13:19:44 |
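The LD_DEBUG tip traces the dynamic loader's library search, which shows whether a stub libcuda.so from the sandbox is being resolved ahead of the real driver library. A sketch; `local-ai` stands in for whatever binary the service actually runs:

```shell
# Print every library lookup the loader performs (goes to stderr),
# then keep only the lines mentioning libcuda to see which file wins.
LD_DEBUG=libs local-ai 2>&1 | grep -i 'libcuda'
```

On glibc systems any dynamically linked binary works for a quick smoke test, e.g. `LD_DEBUG=libs /bin/true`.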
SomeoneSerge (matrix works sometimes) | In reply to @glepage:matrix.org
I am packaging this: https://github.com/EricLBuehler/mistral.rs?tab=readme-ov-file#installation-and-build
You can see that it supports several variations for building (CUDA, Metal, MKL...)
-> What should be the approach? Adding cudaSupport? metalSupport? mklSupport? Does it allow enabling multiple features at once? | 13:20:13 |
Gaétan Lepage | In reply to @ss:someonex.net Does it allow enabling multiple features at once? No, but I think that I will copy the implementation from ollama | 13:20:38 |
Gaétan Lepage | It looks very clean to me | 13:20:44 |
Gaétan Lepage | https://github.com/NixOS/nixpkgs/blob/master/pkgs/by-name/ol/ollama/package.nix#L65-L82 | 13:21:07 |
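The ollama pattern linked above boils down to a single `acceleration` argument selecting one backend, with `config.cudaSupport` as a fallback. A hedged sketch of how a mistral-rs package could adopt it; the feature names and exact fallback logic are illustrative, not copied verbatim:

```nix
# Sketch modeled on ollama's package.nix; names are illustrative.
{ lib
, config
, acceleration ? null  # one of null, "cuda", "metal", "mkl"
}:
let
  # Fall back to the global nixpkgs cudaSupport flag when unset.
  cudaRequested  = acceleration == "cuda"
    || (acceleration == null && config.cudaSupport or false);
  metalRequested = acceleration == "metal";
  mklRequested   = acceleration == "mkl";
in
{
  # Cargo features are mutually exclusive here, matching upstream's build docs.
  buildFeatures =
    lib.optionals cudaRequested [ "cuda" ]
    ++ lib.optionals metalRequested [ "metal" ]
    ++ lib.optionals mklRequested [ "mkl" ];
}
```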