Poetry2nix (!rWxyQqNqMUDLECdsIf:blad.is)
320 Members, 61 Servers
https://github.com/nix-community/poetry2nix



6 Nov 2023
matthewcroughan (18:31:00):
Traceback (most recent call last):
  File "/nix/store/3g04wving6mlz1vxmnaqxy1wfsyh8g50-python3.11-llama-index-0.8.47/lib/python3.11/site-packages/llama_index/utils.py", line 49, in tokenizer
    import tiktoken
ModuleNotFoundError: No module named 'tiktoken'
pareto-optimal-dev (18:31:13): It has poetry.lock at least :)
matthewcroughan (18:31:54): https://github.com/imartinez/privateGPT
pareto-optimal-dev (18:31:54): Is it because it wants version 0.3.3?
matthewcroughan (18:31:57): I'm building this.
pareto-optimal-dev (18:32:38): I packaged a similar one, khoj: https://khoj.dev/ I want to compare them, though.
pareto-optimal-dev (18:33:34): Oh, privateGPT has RAG/memory similar to memgpt, and khoj doesn't, IIRC.
matthewcroughan (18:34:10):
In reply to pareto-optimal-dev ("is it because it wants version 0.3.3?"):
Maybe, but it should have killed the build if it didn't have the version.
matthewcroughan (18:34:17): llama-index builds fine, it just doesn't run fine.
pareto-optimal-dev (18:34:50): So... the Python packages directory it's being run with doesn't contain tiktoken for some reason?
matthewcroughan (18:35:11): Yeah, as a result of mkPoetryApplication.
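For context, the kind of build being discussed looks roughly like this; a minimal sketch of a mkPoetryApplication call, not the actual privateGPT flake. Attribute names such as `privategpt` and the single hard-coded system are illustrative assumptions:

```nix
# Hedged sketch: packaging a Poetry project with poetry2nix.
# Names below (privategpt, x86_64-linux) are illustrative, not taken
# from the flake under discussion.
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    poetry2nix.url = "github:nix-community/poetry2nix";
  };
  outputs = { self, nixpkgs, poetry2nix }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
      p2n = poetry2nix.lib.mkPoetry2Nix { inherit pkgs; };
    in {
      packages.x86_64-linux.privategpt = p2n.mkPoetryApplication {
        projectDir = ./.;        # expects pyproject.toml + poetry.lock here
        python = pkgs.python311;
      };
    };
}
```

mkPoetryApplication installs the project itself (hence the `lib/python3.11/site-packages` output seen below) with its locked dependencies resolved from poetry.lock.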
pareto-optimal-dev (18:35:39): I saw a bug about that in poetry2nix.
matthewcroughan (18:35:48): It actually results in a lib dir, maybe I've called the wrong function.
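The "wrong function" alternative would be mkPoetryEnv, which builds a Python environment containing the locked dependencies rather than an installed application. A sketch, with the dev-shell wiring assumed rather than taken from the flake in question:

```nix
# Hedged sketch: an environment (not an application) from the same lock
# file, suitable for a dev shell. The attribute path is an assumption.
devShells.x86_64-linux.default = pkgs.mkShell {
  packages = [
    (p2n.mkPoetryEnv {
      projectDir = ./.;
      python = pkgs.python311;
    })
  ];
};
```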
matthewcroughan (18:36:00):
result/lib/python3.11/site-packages/private_gpt
components  constants.py  di.py  __init__.py  __main__.py  main.py  open_ai  paths.py  __pycache__  server  settings  ui  utils
pareto-optimal-dev (18:36:55): https://github.com/nix-community/poetry2nix/issues/1226
matthewcroughan (18:40:21): I'll put this on GitHub, I wanted to give them the flake anyway.
pareto-optimal-dev (18:40:16): Also, someone has built llama-index with Nix using buildPythonApplication; not sure if it'll help or not: https://github.com/jpetrucciani/nix/blob/9c9bf04afc46fc440cbf0e87089d330d9339ef86/mods/python/ai/prompts.nix#L167 Sometimes I see runtime things they do that I didn't think of, related to my issue, though.
matthewcroughan (18:43:55): https://github.com/MatthewCroughan/privateGPT
matthewcroughan (18:44:05): Here's the flake.
matthewcroughan (18:44:19): It is seemingly just one runtime issue away from running perfectly.
matthewcroughan (18:45:02): Just that llama-index can't see tiktoken.
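One plausible fix for this symptom (an assumption, not verified against this flake): llama-index imports tiktoken lazily, so the dependency may simply not be propagated into its Nix closure. An override along these lines would force it in; `p2n` and the override-set wiring are assumed from the sketch above:

```nix
# Hedged sketch: add tiktoken to llama-index's runtime dependencies.
# Whether this matches the actual missing-dependency cause is an assumption.
overrides = p2n.defaultPoetryOverrides.extend (final: prev: {
  llama-index = prev.llama-index.overridePythonAttrs (old: {
    propagatedBuildInputs = (old.propagatedBuildInputs or [ ]) ++ [
      final.tiktoken
    ];
  });
});
```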
matthewcroughan (18:45:29): nix develop github:matthewcroughan/privateGPT#foo
matthewcroughan (18:45:42): then python -m private_gpt
matthewcroughan (18:45:57):

full trace:

❯ python -m private_gpt
Traceback (most recent call last):
  File "/nix/store/3g04wving6mlz1vxmnaqxy1wfsyh8g50-python3.11-llama-index-0.8.47/lib/python3.11/site-packages/llama_index/utils.py", line 49, in tokenizer
    import tiktoken
ModuleNotFoundError: No module named 'tiktoken'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/matthew/git/privateGPT/private_gpt/__main__.py", line 5, in <module>
    from private_gpt.main import app
  File "/home/matthew/git/privateGPT/private_gpt/main.py", line 4, in <module>
    import llama_index
  File "/nix/store/3g04wving6mlz1vxmnaqxy1wfsyh8g50-python3.11-llama-index-0.8.47/lib/python3.11/site-packages/llama_index/__init__.py", line 21, in <module>
    from llama_index.indices.common.struct_store.base import SQLDocumentContextBuilder
  File "/nix/store/3g04wving6mlz1vxmnaqxy1wfsyh8g50-python3.11-llama-index-0.8.47/lib/python3.11/site-packages/llama_index/indices/__init__.py", line 4, in <module>
    from llama_index.indices.document_summary.base import DocumentSummaryIndex
  File "/nix/store/3g04wving6mlz1vxmnaqxy1wfsyh8g50-python3.11-llama-index-0.8.47/lib/python3.11/site-packages/llama_index/indices/document_summary/__init__.py", line 4, in <module>
    from llama_index.indices.document_summary.base import (
  File "/nix/store/3g04wving6mlz1vxmnaqxy1wfsyh8g50-python3.11-llama-index-0.8.47/lib/python3.11/site-packages/llama_index/indices/document_summary/base.py", line 14, in <module>
    from llama_index.indices.base import BaseIndex
  File "/nix/store/3g04wving6mlz1vxmnaqxy1wfsyh8g50-python3.11-llama-index-0.8.47/lib/python3.11/site-packages/llama_index/indices/base.py", line 6, in <module>
    from llama_index.chat_engine.types import BaseChatEngine, ChatMode
  File "/nix/store/3g04wving6mlz1vxmnaqxy1wfsyh8g50-python3.11-llama-index-0.8.47/lib/python3.11/site-packages/llama_index/chat_engine/__init__.py", line 1, in <module>
    from llama_index.chat_engine.condense_question import CondenseQuestionChatEngine
  File "/nix/store/3g04wving6mlz1vxmnaqxy1wfsyh8g50-python3.11-llama-index-0.8.47/lib/python3.11/site-packages/llama_index/chat_engine/condense_question.py", line 6, in <module>
    from llama_index.chat_engine.types import (
  File "/nix/store/3g04wving6mlz1vxmnaqxy1wfsyh8g50-python3.11-llama-index-0.8.47/lib/python3.11/site-packages/llama_index/chat_engine/types.py", line 11, in <module>
    from llama_index.memory import BaseMemory
  File "/nix/store/3g04wving6mlz1vxmnaqxy1wfsyh8g50-python3.11-llama-index-0.8.47/lib/python3.11/site-packages/llama_index/memory/__init__.py", line 1, in <module>
    from llama_index.memory.chat_memory_buffer import ChatMemoryBuffer
  File "/nix/store/3g04wving6mlz1vxmnaqxy1wfsyh8g50-python3.11-llama-index-0.8.47/lib/python3.11/site-packages/llama_index/memory/chat_memory_buffer.py", line 12, in <module>
    class ChatMemoryBuffer(BaseMemory):
  File "/nix/store/3g04wving6mlz1vxmnaqxy1wfsyh8g50-python3.11-llama-index-0.8.47/lib/python3.11/site-packages/llama_index/memory/chat_memory_buffer.py", line 18, in ChatMemoryBuffer
    default_factory=cast(Callable[[], Any], GlobalsHelper().tokenizer),
                                            ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/nix/store/3g04wving6mlz1vxmnaqxy1wfsyh8g50-python3.11-llama-index-0.8.47/lib/python3.11/site-packages/llama_index/utils.py", line 51, in tokenizer
    raise ImportError(tiktoken_import_err)
ImportError: `tiktoken` package not found, please run `pip install tiktoken`
matthewcroughan (18:46:45): Here's how it does the import in utils.py:
matthewcroughan (18:46:48):
    def tokenizer(self) -> Callable[[str], List]:
        """Get tokenizer."""
        if self._tokenizer is None:
            tiktoken_import_err = (
                "`tiktoken` package not found, please run `pip install tiktoken`"
            )
            try:
                import tiktoken
            except ImportError:
                raise ImportError(tiktoken_import_err)
            enc = tiktoken.get_encoding("gpt2")
            self._tokenizer = cast(Callable[[str], List], enc.encode)
            self._tokenizer = partial(self._tokenizer, allowed_special="all")
        return self._tokenizer  # type: ignore

matthewcroughan (18:46:50): Seems reasonable.
pareto-optimal-dev (18:58:21): Someone is adding something to pythonImportsCheck here; not sure if related, but interesting: https://sourcegraph.com/github.com/ibis-project/ibis/-/blob/nix/ibis.nix?L54
K900 (19:00:02): pythonImportsCheck just checks that the modules can be imported after the install.
K900 (19:00:12): It doesn't actually affect what is installed.
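To illustrate K900's point with a sketch (the override wiring is assumed): pythonImportsCheck makes the build fail when an import is broken, but it adds nothing to the closure, so it would have caught this problem at build time rather than fixed it:

```nix
# Hedged sketch: fail llama-index's build if these modules cannot be
# imported from the installed output. Purely a check; it does not add
# tiktoken (or anything else) to the closure.
llama-index = prev.llama-index.overridePythonAttrs (old: {
  pythonImportsCheck = [ "llama_index" "tiktoken" ];
});
```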


