3 Oct 2024 |
lilith | Heyo! I've run into an issue and couldn't get it to work for the last couple hours, maybe someone here has an idea where to look?
I'm configuring (declarative NixOS-native, as far as I can) an Anthropic voice assistant, and got it working to the point I can chat with it, but it only has access to the HassTurnOn and HassTurnOff intents, nothing else. All I find when searching for the issue are guides on creating custom intents, but all I'm really interested in is other built-ins like HassGetWeather or even just HassGetState, and I'm running out of ideas at this point. Any ideas? Thanks! ^-^ | 20:52:08 |
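For context, a declarative setup along the lines lilith describes might look roughly like this. This is a sketch only: `anthropic` and `conversation` are real Home Assistant integrations, but the exact option layout is an assumption, and exposing entities to Assist (which the built-in intents act on) is normally done in the UI rather than in Nix.

```nix
# Hedged sketch of a NixOS-native Home Assistant + Anthropic setup.
# Intents like HassGetState only operate on entities that have been
# exposed to Assist (Settings -> Voice assistants -> Expose).
services.home-assistant = {
  enable = true;
  extraComponents = [
    "anthropic"     # cloud conversation agent; API key is entered via the UI
    "conversation"  # Assist pipeline support
  ];
  config.default_config = { };
};
```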
hexa | try the ollama integration instead? | 23:48:29 |
lilith | I will eventually, but the server I run home assistant on at the moment isn't powerful enough to run local LLMs, and using another server would require me to configure secure networking first, and I want to take it one step at a time. | 23:57:42 |
lilith | Although I might just do that, at this point it might be easier | 23:58:11 |
4 Oct 2024 |
hexa | oh, I forgot that antrophic is a cloud offering | 00:04:10 |
lilith | no problem! do you know if the ollama integration has a way to do token authentication? | 00:05:12 |
hexa | I don't believe so | 00:08:54 |
lilith | ah, oh well. guess I'll set up that wireguard tunnel that I've been procrastinating 🫠 | 00:09:59 |
lilith | okay, I did some quick and dirty testing, and the issue persists with Ollama also. Is there perhaps some configuration that I've overlooked to enable built-in Intents beyond HassTurnOn and HassTurnOff? | 00:31:20 |
hexa | [image attachment] | 00:32:25 |
lilith | Strange, I must be doing something wrong | 00:32:57 |
lilith | would you mind sharing your config? there must be something stupid I'm missing | 00:33:18 |
hexa | [image attachment] | 00:33:44 |
hexa | the model is llama3.2:3b-instruct-q8_0 | 00:34:55 |
hexa | generally the llama 3.x models work pretty well | 00:35:06 |
hexa | it is still a bit dumb sometimes | 00:36:09 |
hexa | but works much better with freeform requests | 00:36:24 |
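The Ollama side of this setup could be served declaratively with the NixOS `services.ollama` module; a sketch (the listen address, and the assumption that `loadModels` pre-pulls the model at startup, are mine):

```nix
# Sketch: serve the model hexa mentions over the network so the
# Home Assistant host can reach it.
services.ollama = {
  enable = true;
  host = "0.0.0.0";  # listen beyond localhost; pair with firewall/VPN rules
  loadModels = [ "llama3.2:3b-instruct-q8_0" ];
};
```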
lilith | I'm using llama3.1 8B, but I don't think the model is the issue | 00:46:25 |
lilith | [image attachment] | 00:46:29 |
lilith | I'm giving up for today and going to sleep, thanks for trying to help ^-^ | 00:47:28 |
hexa | {"type":"function","function":{"name":"HassTurnOn"}}; {"type":"function","function":{"name":"HassTurnOff"}}; {"type":"function","function":{"name":"HassVacuumStart"}}; {"type":"function","function":{"name":"HassVacuumReturnToBase"}} | 09:30:46 |
hexa | is the response to that for me | 09:30:52 |
hexa | which is also incomplete, given what I showed you | 09:30:58 |
hexa | and when listing them it also calls them | 09:33:59 |
hexa | ouch | 09:34:05 |
| rendakuenthusiast⚡️ left the room. | 09:51:23 |
5 Oct 2024 |
| ˈt͡sɛːzaɐ̯ joined the room. | 04:12:26 |
CRTified | Short sanity check: I'm running a config with lovelaceConfig = null. Can I still use customLovelaceModules in a sane way (i.e., without manually configuring every resource)? | 17:16:27 |
Sandro 🐧 | you should | 17:47:37 |
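The setup CRTified asks about might look like this sketch; `mini-graph-card` is just an example module, and whether resources are auto-registered while dashboards stay UI-managed is exactly the open question above.

```nix
# Sketch: UI-managed dashboards plus declaratively installed card modules.
services.home-assistant = {
  enable = true;
  lovelaceConfig = null;  # keep dashboards managed from the UI
  customLovelaceModules = with pkgs.home-assistant-custom-lovelace-modules; [
    mini-graph-card  # example; any module from this package set
  ];
};
```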