11 Jun 2021 |
@hexa:lossy.network | ideally :) | 21:45:00 |
Mic92 (Old) | rhasspy is quite a beast. | 21:45:13 |
Mic92 (Old) | It would take a while to get all packaged | 21:45:19 |
@hexa:lossy.network | my rhasspy setup is broken somewhere after intent recognition | 21:45:19 |
Mic92 (Old) | One needs to know what package version to use from every component | 21:45:40 |
@hexa:lossy.network | [DEBUG:2021-06-11 23:45:36,581] rhasspyspeakers_cli_hermes: ['aplay', '-q', '-t', 'wav', '-D', 'default']
[ERROR:2021-06-11 23:45:36,581] rhasspyspeakers_cli_hermes: maybe_change_volume
Traceback (most recent call last):
  File "/nix/store/478ddrlklp0411rj7abzq531rqif1y0f-python3.8-rhasspy-speakers-cli-hermes-0.3.0/lib/python3.8/site-packages/rhasspyspeakers_cli_hermes/__init__.py", line 189, in maybe_change_volume
    info_data = wavchunk.get_chunk(wav_in_io)
  File "/nix/store/ck372vxw6xiylxmmg9zal15lqj7jc47l-python3.8-wavchunk-1.0.1/lib/python3.8/site-packages/wavchunk/__init__.py", line 170, in get_chunk
    chunk_size = read_size(wav_file)
  File "/nix/store/ck372vxw6xiylxmmg9zal15lqj7jc47l-python3.8-wavchunk-1.0.1/lib/python3.8/site-packages/wavchunk/__init__.py", line 41, in read_size
    return struct.unpack_from("<L", file.read(4))[0]
struct.error: unpack_from requires a buffer of at least 4 bytes for unpacking 4 bytes at offset 0 (actual buffer size is 0)
[DEBUG:2021-06-11 23:45:37,744] rhasspyspeakers_cli_hermes: -> AudioPlayFinished(id='08079b0f-fae7-4c02-b16e-5a4585035538', session_id='08079b0f-fae7-4c02-b16e-5a4585035538')
| 21:46:11 |
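For context on the traceback above: wavchunk's read_size unpacks four little-endian bytes from the stream, so once the WAV data is exhausted (or carries no trailing chunk after the audio data) the zero-byte read produces exactly this struct.error. A minimal sketch of the failure mode, plus a hypothetical guarded variant (read_size_guarded is an illustration, not part of wavchunk's actual API):

import io
import struct

def read_size(wav_file):
    # wavchunk-style read: assumes exactly 4 little-endian bytes are available
    return struct.unpack_from("<L", wav_file.read(4))[0]

def read_size_guarded(wav_file):
    # Hypothetical defensive variant: return None on a short or empty read
    # instead of raising struct.error.
    data = wav_file.read(4)
    if len(data) < 4:
        return None
    return struct.unpack_from("<L", data)[0]

# An exhausted stream reproduces the error from the log above.
try:
    read_size(io.BytesIO(b""))
except struct.error as exc:
    print("struct.error:", exc)

print(read_size_guarded(io.BytesIO(b"")))                   # None
print(read_size_guarded(io.BytesIO(b"\x10\x00\x00\x00")))   # 16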
@hexa:lossy.network | yeah, some packagesets are that way | 21:46:32 |
@hexa:lossy.network | add pkgs/homeautomation/rhasspy/{,README.md} and rhasspyPackages | 21:46:59 |
Mic92 (Old) | I had to take educated guesses in many places | 21:47:28 |
Mic92 (Old) | Looks like deepspeech is now becoming more mainstream in rhasspy. Maybe it is now better than kaldi? | 21:48:09 |
@hexa:lossy.network | packaging it from source is still painful | 21:48:25 |
@hexa:lossy.network | needs some bazel build for a native library | 21:48:41 |
@hexa:lossy.network | https://github.com/coqui-ai/STT/tree/main/native_client/ctcdecode | 21:49:14 |
Mic92 (Old) | Looks like https://github.com/rhasspy/rhasspy-wake-raven/commits/c3c6d0633473223873b808829eecf4f4624c9e06 is now faster thanks to a Cython rewrite | 21:49:37 |
Mic92 (Old) | In reply to @hexa:lossy.network https://github.com/coqui-ai/STT/tree/main/native_client/ctcdecode Does this really build tensorflow from source? | 21:52:07 |
@hexa:lossy.network | Mic92: I think it needs the tensorflow submodule checked out and requires its WORKSPACE file | 21:54:08 |
@hexa:lossy.network | so … maybe? | 21:54:11 |
@hexa:lossy.network | In theory we could fetch the wheel I guess: https://pypi.org/project/coqui-stt-ctcdecoder/#files | 21:54:38 |
Mic92 (Old) | Do they need patchelf? | 21:59:55 |
@hexa:lossy.network | idk | 22:01:58 |
@hexa:lossy.network | also there is the TTS fork from Michael Hansen (rhasspy) | 22:02:21 |
12 Jun 2021 |
Fabian Affolter | hexa: yeah, the tracking in the project board doesn't work well at the moment | 11:57:14 |
@hexa:lossy.network | In reply to @hexa:lossy.network Fabian Affolter: https://github.com/NixOS/nixpkgs/pull/126326#issuecomment-857620450 what's going on here? And this? | 12:23:32 |
@hexa:lossy.network | Ah I see, he does not grok nixpkgs | 12:24:19 |
lukegb (he/him) | "Slightly assholic at first sight 🤒" yup, can confirm | 12:34:43 |
@hexa:lossy.network | indeed | 13:53:17 |
@hexa:lossy.network | I quoted that yesterday in a private conversation | 13:53:26 |
@hexa:lossy.network | hoped for "Actually a nice guy that just likes to get stuff done" | 13:53:40 |
@hexa:lossy.network | so, looking at the packages he maintains … that also includes wled | 13:54:03 |