Sender | Message | Time |
---|---|---|
31 Dec 2023 | ||
In reply to @benoitdr:matrix.org: Yeah, pro tip: if you're looking for AI packages, use GitHub code search for `language:Nix PACKAGE_NAME buildPythonPackage` and there's a good chance someone has already packaged it. That saves me so much time :D | 17:04:35 | |
2 Jan 2024 | ||
In reply to @trexd:matrix.org: Great, thanks! | 08:35:38 | |
3 Jan 2024 | ||
Still a question: I have successfully packaged ctransformers based on the information above, but it cannot see my GPU. If I were installing it with pip, I would use the cuda extra (`pip install ctransformers[cuda]`). Is there a way to pass the same option to buildPythonPackage? | 13:01:55 | |
In reply to @benoitdr:matrix.org: (partially) answering to myself: looking at setup.py, I can see that the cuda option corresponds to 2 extra packages:
| 14:18:44 | |
(casual reminder that these are poetry2nix-generated wrappers for the pypi wheels, expect them to break) | 14:33:54 | |
In reply to @benoitdr:matrix.org: Pass the respective flags to cmake | 14:34:10 | |
* (casual reminder that these are poetry2nix-generated wrappers for the pypi wheels, expect them to break) EDIT: ah, this is from setup.py, nvm | 14:34:33 | |
Need more info ... indeed these 2 pkgs from setup.py are nvidia proprietary and only distributed as wheels on pypi. So what is the way out (if any)? I have already tried to include python310Packages.pycuda, cudaPackages.libcublas, cudaPackages.cuda_cudart, cudaPackages.cudatoolkit, and also passed allowUnfree = true; cudaSupport = true; to the nixpkgs config, without success. I'm feeling a bit stuck here ... | 14:43:55 | |
You need to pass this flag during the ctransformers build: https://github.com/marella/ctransformers/blob/ed02cf4b9322435972ff3566fd4832806338ca3d/CMakeLists.txt#L6 | 14:45:55 | |
You can remove this one | 14:46:22 | |
I don't see pycuda in ctransformers' dependencies? | 14:47:01 | |
In reply to @ss:someonex.net: Their setup.py introduces an ad hoc environment variable for that: https://github.com/marella/ctransformers/blob/ed02cf4b9322435972ff3566fd4832806338ca3d/setup.py#L10C41-L10C42 | 14:50:11 | |
It would've been better if they weren't wrapping/hiding cmake from the user but at least there's a variable | 14:50:37 | |
So you can set something like env.CT_CUBLAS = "ON" | 14:51:22 | |
Yep, `env.CT_CUBLAS = "ON";` was the trick. Still, I need cudatoolkit in propagatedBuildInputs, nothing more. Thanks a lot! | 15:58:26 | |
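For reference, a minimal sketch of what the resulting derivation could look like, based on what the thread establishes (the version, hash, and overall attribute layout are illustrative assumptions, not the actual expression used here):

```nix
# Hypothetical sketch, not the derivation from this thread.
# version and hash are placeholders.
{ lib, python3Packages, cmake, cudaPackages, fetchFromGitHub }:

python3Packages.buildPythonPackage rec {
  pname = "ctransformers";
  version = "0.0.0"; # placeholder

  src = fetchFromGitHub {
    owner = "marella";
    repo = "ctransformers";
    rev = "v${version}";
    hash = lib.fakeHash; # placeholder
  };

  # setup.py reads this ad hoc variable and turns it into -DCT_CUBLAS=ON
  env.CT_CUBLAS = "ON";

  # cuBLAS comes from the toolkit; later in the thread it is suggested
  # that buildInputs (not propagatedBuildInputs) should be enough
  buildInputs = [ cudaPackages.cudatoolkit ];

  # setup.py drives cmake itself, so keep the setup hook out of the way
  nativeBuildInputs = [ cmake ];
  dontUseCmakeConfigure = true;
}
```

Building this requires an unfree-enabled nixpkgs (`allowUnfree = true`), since the CUDA toolkit is proprietary.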
4 Jan 2024 | ||
Also there's no need to propagate it | 04:48:08 | |
Strange ... It doesn't work without it on my side | 19:56:48 | |
In reply to @benoitdr:matrix.org: Could you elaborate? What are the errors? | 17:31:24 | |
6 Jan 2024 | ||
Here is my shell.nix file:
Without cudatoolkit, at compile time, ctransformers complains that cublas is not found, and at runtime, ctransformers cannot use the GPU. | 10:29:05 | |
BTW, the same thing happens if I set cudaSupport to true like this:
cudatoolkit is still needed | 10:43:54 | |
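The shell.nix body itself did not survive the export; a hypothetical reconstruction consistent with the discussion (the `./ctransformers.nix` path and the exact package layout are assumptions for illustration) might look like:

```nix
# Hypothetical reconstruction of the shell.nix discussed above;
# ./ctransformers.nix is an assumed path to a ctransformers derivation.
{ pkgs ? import <nixpkgs> {
    config = {
      allowUnfree = true;
      cudaSupport = true; # per the thread, this alone is not sufficient
    };
  }
}:

let
  ctransformers = pkgs.python3Packages.callPackage ./ctransformers.nix { };
in
pkgs.mkShell {
  packages = [
    (pkgs.python3.withPackages (ps: [ ctransformers ]))
    # still listed explicitly so cublas is found at build and run time
    pkgs.cudaPackages.cudatoolkit
  ];
}
```

As the thread concludes, `cudaSupport = true` by itself does not pull cuBLAS into scope here; the toolkit still has to appear explicitly.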