| 6 Nov 2023 |
matthewcroughan | Although, again, Nix won't be the only tool that has problems dealing with the lack of specification in Python. | 18:23:28 |
pareto-optimal-dev | This looks interesting for the cargo problem, but then I'll have to learn the flake-parts stuff it seems :D
https://github.com/yusdacra/nix-cargo-integration | 18:23:50 |
matthewcroughan | It is just information. Is there enough information in the repo to reproduce something? | 18:23:56 |
matthewcroughan | If there is not enough information in the source code, then even a human will have issues discerning how to reproduce what is inside of it. | 18:24:11 |
matthewcroughan | It's like OCR: it's error-prone because the source lacks accuracy | 18:24:40 |
matthewcroughan | I have an issue now with llama-index :( | 18:30:20 |
matthewcroughan | llama-index = super.llama-index.overridePythonAttrs (old: {
  propagatedBuildInputs = (old.propagatedBuildInputs or [ ]) ++ [ self.tiktoken ];
  nativeBuildInputs = (old.nativeBuildInputs or [ ]) ++ [ super.poetry ];
});
| 18:30:22 |
matthewcroughan | Traceback (most recent call last):
File "/nix/store/3g04wving6mlz1vxmnaqxy1wfsyh8g50-python3.11-llama-index-0.8.47/lib/python3.11/site-packages/llama_index/utils.py", line 49, in tokenizer
import tiktoken
ModuleNotFoundError: No module named 'tiktoken'
| 18:31:00 |
pareto-optimal-dev | It has poetry.lock at least :) | 18:31:13 |
matthewcroughan | https://github.com/imartinez/privateGPT | 18:31:54 |
pareto-optimal-dev | is it because it wants version 0.3.3? | 18:31:54 |
matthewcroughan | I'm building this | 18:31:57 |
pareto-optimal-dev | I packaged a similar one, khoj: https://khoj.dev/
I want to compare them though. | 18:32:38 |
pareto-optimal-dev | Oh privateGPT has a RAG/memory similar to memgpt and khoj doesn't IIRC. | 18:33:34 |
matthewcroughan | (replying to "is it because it wants version 0.3.3?") Maybe, but it should have killed the build if it didn't have the version | 18:34:10 |
matthewcroughan | llama-index builds fine, it just doesn't run fine | 18:34:17 |
pareto-optimal-dev | so... the python packages directory it's being run with doesn't contain tiktoken for some reason? | 18:34:50 |
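The missing-module symptom can be checked directly from the offending environment. A small, generic diagnostic (the module names used here are just examples, not taken from the actual build):

```python
# Generic diagnostic for a ModuleNotFoundError: ask the import machinery
# whether a module is visible at all and, if so, where it would load from.
import importlib.util

def locate(name):
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# In the broken environment you would call locate("tiktoken");
# None means the module is simply absent from sys.path.
print(locate("json"))  # a stdlib module, so this prints a real file path
```
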
matthewcroughan | yeah, as a result of mkPoetryApplication | 18:35:11 |
pareto-optimal-dev | I saw a bug about that in poetry2nix | 18:35:39 |
matthewcroughan | it actually results in a lib dir, maybe I've called the wrong function | 18:35:48 |
matthewcroughan | result/lib/python3.11/site-packages/private_gpt
components constants.py di.py __init__.py __main__.py main.py open_ai paths.py __pycache__ server settings ui utils
| 18:36:00 |
pareto-optimal-dev | https://github.com/nix-community/poetry2nix/issues/1226 | 18:36:55 |
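For context, a minimal sketch of how an override like the llama-index one above is typically threaded through mkPoetryApplication; the `projectDir` and the overall shape are illustrative assumptions, not the actual flake from this conversation:

```nix
# Hedged sketch of standard poetry2nix usage with a per-package override.
{ poetry2nix }:
poetry2nix.mkPoetryApplication {
  projectDir = ./.;
  overrides = poetry2nix.defaultPoetryOverrides.extend (self: super: {
    llama-index = super.llama-index.overridePythonAttrs (old: {
      # tiktoken is imported at runtime but apparently missing from the
      # resolved dependency set, so it is added to the propagated inputs
      # by hand, mirroring the override pasted earlier in the chat.
      propagatedBuildInputs = (old.propagatedBuildInputs or [ ]) ++ [ self.tiktoken ];
      nativeBuildInputs = (old.nativeBuildInputs or [ ]) ++ [ super.poetry ];
    });
  });
}
```

If the override is applied at build time but the module is still missing at runtime, that matches the linked poetry2nix issue, where propagated inputs are dropped from the final application environment.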
matthewcroughan | I'll put this on github, I wanted to give them the flake anyway | 18:40:21 |
pareto-optimal-dev | Also, someone has built llama-index with Nix using buildPythonApplication; not sure if it'll help or not:
https://github.com/jpetrucciani/nix/blob/9c9bf04afc46fc440cbf0e87089d330d9339ef86/mods/python/ai/prompts.nix#L167
Sometimes I spot runtime fixes in other people's packaging that I hadn't thought of and that turn out to relate to my issue. | 18:40:16 |
matthewcroughan | https://github.com/MatthewCroughan/privateGPT | 18:43:55 |
matthewcroughan | here's the flake | 18:44:05 |
matthewcroughan | it is seemingly just one runtime issue away from running perfectly | 18:44:19 |
matthewcroughan | just that llama-index can't see tiktoken | 18:45:02 |
matthewcroughan | nix develop github:matthewcroughan/privateGPT#foo | 18:45:29 |